🗓 Day 2 / 100 – #100DaysOfLeetCode

📌 Problem 1636: Sort Array by Increasing Frequency

The task was to sort an array so that elements with lower frequency appear first, and if two elements have the same frequency, the larger number comes first.

🧠 My Approach:
- Counted element frequencies using a hash map.
- Sorted the elements by ascending frequency, then by descending value.
- Reconstructed the array in sorted frequency order.

⏱ Time Complexity: O(n log n)
💾 Space Complexity: O(n)

💡 Key Learning: This problem reinforced how powerful custom sorting logic can be in Python, especially when handling multiple sort priorities using tuple-based keys in sorting functions.

Each day is helping me refine how I think about data organization, sorting, and frequency analysis — small steps that build strong foundations.

#100DaysOfLeetCode #LeetCodeChallenge #Python #ProblemSolving #Algorithms #DataStructures #DSA #Sorting #CodingJourney #CodingChallenge #SoftwareEngineering #CompetitiveProgramming #CodeEveryday #LearningInPublic #DeveloperJourney #TechStudent #CareerGrowth #CodingCommunity #KeepLearning
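The tuple-key sort described above can be sketched in a few lines (my own illustration of the approach, not the author's posted solution):

```python
from collections import Counter

def frequency_sort(nums):
    # Count how often each value appears
    freq = Counter(nums)
    # Sort by ascending frequency; break ties by descending value
    return sorted(nums, key=lambda x: (freq[x], -x))

# frequency_sort([1, 1, 2, 2, 2, 3]) -> [3, 1, 1, 2, 2, 2]
```

Because Python compares tuples element by element, `(freq[x], -x)` handles both sort priorities in one key.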
uppala manish’s Post
More Relevant Posts
Iris Flower Classification using Machine Learning

Excited to share my latest hands-on project where I trained and tested a Random Forest Classifier on the Iris dataset using Python and scikit-learn!

🔹 The first notebook focuses on quick model training and testing
🔹 The second notebook calculates and verifies accuracy

This project highlights the end-to-end ML workflow — from data preprocessing to model evaluation.

💻 View the complete code and notebooks on my GitHub Repository here: https://lnkd.in/gtyUV7-Z

#MachineLearning #Python #DataScience #ArtificialIntelligence #MLProjects #IrisDataset #ScikitLearn #RandomForest #OpenSource #GitHubProjects
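For anyone curious, the train-then-verify-accuracy workflow can be compressed into a few lines (a generic scikit-learn sketch, not the repository's notebooks; the split ratio and random seed here are my own assumptions):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load the Iris dataset and hold out 20% for testing
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a Random Forest and verify its accuracy on the held-out set
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
```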
How do we simplify complex data without losing key information? That’s the power of Principal Component Analysis (PCA), a foundational technique in data science for dimensionality reduction and pattern discovery.

I built PCA from scratch in Python to show exactly how it works, step by step, with visuals and image compression examples.

👉 Explore the full tutorial on Kaggle: https://lnkd.in/drs9tbFu

If you find it useful, don’t forget to upvote, comment your thoughts, and share your feedback!

#DataScience #MachineLearning #PCA #Python #Kaggle #DimensionalityReduction
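The core of a from-scratch PCA is surprisingly short. Here is a sketch of the standard eigendecomposition route (my own illustration, not the tutorial's code):

```python
import numpy as np

def pca(X, n_components):
    # Center the data so each feature has zero mean
    X_centered = X - X.mean(axis=0)
    # Covariance matrix of the features
    cov = np.cov(X_centered, rowvar=False)
    # eigh is for symmetric matrices; eigenvalues come back in ascending order
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Reorder components by descending explained variance
    order = np.argsort(eigvals)[::-1]
    components = eigvecs[:, order[:n_components]]
    # Project the centered data onto the top principal components
    return X_centered @ components
```

The eigenvectors of the covariance matrix are the directions of maximal variance; keeping only the top few reduces dimensionality while preserving as much structure as possible.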
Day 3 of #100DaysOfLeetCode

Problem: 167. Two Sum II – Input Array Is Sorted
Category: Arrays / Two Pointers

Today’s challenge was all about finding two numbers in a sorted array that add up to a given target. Since the array is already sorted, using two pointers gives an elegant O(n) solution — no need for extra space!

🧠 Key Learnings:
- Initialized pointers at both ends (l = 0, r = n-1).
- If the sum is smaller than the target → move the left pointer rightward.
- If the sum is greater → move the right pointer leftward.
- Found the exact indices in linear time using smart pointer movement.

💡 Time Complexity: O(n)
💡 Space Complexity: O(1)

🎯 Takeaway: When the array is sorted, two pointers can replace complex hash-based logic, simplifying both time and space usage.

Staying consistent and learning one problem at a time! 💪

#LeetCode #100DaysOfCode #ProblemSolving #CodingJourney #Arrays #TwoPointers #Python #AIEngineer #Consistency
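The pointer movement described above translates almost directly into code (my own sketch of the technique, not the author's submission):

```python
def two_sum_sorted(numbers, target):
    # One pointer at each end of the sorted array
    l, r = 0, len(numbers) - 1
    while l < r:
        s = numbers[l] + numbers[r]
        if s == target:
            return [l + 1, r + 1]  # LeetCode 167 expects 1-based indices
        if s < target:
            l += 1  # need a bigger sum: move the left pointer rightward
        else:
            r -= 1  # need a smaller sum: move the right pointer leftward
    return []
```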
#Week3 | Mastering Search Algorithms: From Linear to Binary Search

This week, I dove deep into the fundamentals of search algorithms, exploring how to efficiently find data in different scenarios. Here’s a quick rundown of what I covered:
- Implemented Linear Search for unsorted data.
- Mastered both iterative and recursive Binary Search for sorted data.
- Tackled advanced challenges like finding the first occurrence of a value in a sorted array with duplicates and searching in a rotated sorted array.

Tech Stack: Python, Jupyter Notebook

My key takeaway is the incredible efficiency gain from using the right tool for the job. The O(log n) complexity of binary search is a testament to the power of smart algorithms.

Next up: I’m jumping into the world of NumPy!

For a detailed look at the code, check out the GitHub repo: https://lnkd.in/g_vHg-nH

#AIJourney #MachineLearning #Python #DataStructures #Algorithms #LearningInPublic #12WeeksAIReset #RohitReboot #ProgressPost
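As a taste of one of those advanced challenges, finding the first occurrence of a value in a sorted array with duplicates only needs a small twist on standard binary search: keep looking left after a match (my own sketch, not the repo's code):

```python
def first_occurrence(arr, target):
    # Standard binary search, but continue searching left after a match
    lo, hi, ans = 0, len(arr) - 1, -1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            ans = mid
            hi = mid - 1  # an earlier match may still exist to the left
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return ans  # -1 if the target is absent
```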
🧹 Data Preprocessing & Handling Missing Values

In this practical, I explored how to clean and prepare raw datasets using Pandas — focusing on detecting, handling, and imputing missing values to enhance overall data quality and reliability.

📘 Guided by: Ashish Sawant
💻 GitHub: 👉 https://lnkd.in/dFff8cPb

#DataScience #MachineLearning #Pandas #Python #DataPreprocessing #MissingValues #DataCleaning #PracticalLearning
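A tiny example of the detect-then-impute pattern with Pandas (a generic sketch with made-up data, not the practical's notebook):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age": [25.0, np.nan, 31.0, 40.0],
    "city": ["Pune", "Mumbai", None, "Delhi"],
})

# Detect: count missing values per column
missing = df.isna().sum()

# Impute: fill a numeric column with its mean, a categorical one with its mode
df["age"] = df["age"].fillna(df["age"].mean())
df["city"] = df["city"].fillna(df["city"].mode()[0])
```

Mean imputation keeps the column average unchanged, while mode imputation is a common default for categorical columns; the right strategy always depends on the dataset.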
Entering week four of the Digital Skola Data Science Bootcamp with more advanced Python concepts. This week’s focus is on looping techniques (while, for, and nested loops), conditional statements and nested conditions, functional programming and pure functions, creating custom functions with proper scoping, string manipulation operations, and NumPy for numerical computing.

NumPy has been the highlight: learning to perform efficient mathematical operations on multidimensional arrays through reshape, flatten, transpose, advanced indexing, and broadcasting. These are essential tools for effective data preparation and analysis. Understanding array manipulation fundamentally changes how I approach data processing tasks.

Detailed progress can be found in the attached slides.

#DigitalSkola #LearningProgressReview #DataScience #Python #NumPy #DataAnalytics #BootcampJourney
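The NumPy operations mentioned (reshape, flatten, transpose, broadcasting) fit in a few lines; these minimal examples are my own, not the bootcamp's slides:

```python
import numpy as np

a = np.arange(12)        # [0 1 ... 11]
m = a.reshape(3, 4)      # view the same data as a 3x4 matrix
t = m.T                  # transpose to 4x3
flat = m.flatten()       # copy back to 1-D

# Broadcasting: the (4,) row vector is applied to every row of the (3, 4) matrix
row = np.array([10, 20, 30, 40])
shifted = m + row
```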
🗓 Day 13 / 100 – #100DaysOfLeetCode

📌 Problem 1513: Number of Substrings With Only 1s

The task was to count how many substrings in a binary string consist only of consecutive 1s.

🧠 My Approach:
- Identified continuous blocks of '1' in the string.
- For each block of length k, calculated the number of valid substrings using the formula k × (k + 1) / 2.
- Summed these counts across all segments of consecutive 1s.

This avoids checking all substrings individually and keeps the solution efficient and clean.

💡 Key Learning: This problem reinforces the value of recognizing sequences and using mathematical formulas to simplify substring counting. It’s a reminder that many problems can be solved much faster when we look for structure instead of brute force.

One more problem, one more pattern learned 🚀

#100DaysOfLeetCode #LeetCodeChallenge #Python #ProblemSolving #Strings #Algorithms #MathInCoding #LogicBuilding #DataStructures #DSA #CompetitiveProgramming #CodingJourney #SoftwareEngineering #LearningInPublic #DeveloperJourney #TechStudent #CareerGrowth #CodeEveryday #CodingCommunity #KeepLearning
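The block-counting idea even folds into a single pass: adding the running block length at each '1' sums to k × (k + 1) / 2 per block. A sketch (my illustration; the actual LeetCode 1513 answer is additionally taken modulo 10^9 + 7):

```python
def count_one_substrings(s):
    total = run = 0
    for ch in s:
        # Length of the current block of consecutive '1's (reset on '0')
        run = run + 1 if ch == "1" else 0
        # Adding the running length at each step sums to k*(k+1)//2 per block
        total += run
    return total  # LeetCode 1513 returns this modulo 10**9 + 7
```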
Today I explored NumPy, one of the most powerful Python libraries for numerical and scientific computing.

Here’s what I practiced:
- Creating arrays with np.array()
- Using functions like zeros(), ones(), arange(), eye(), and linspace()
- Checking dimensions with .ndim
- Understanding array shapes using .shape

I’m really enjoying how NumPy makes working with data so much easier and faster.

#Python #NumPy #DataScience #LearningJourney #PythonForDataScience
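A quick cheat sheet of those constructors and attributes (my own examples):

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])  # array from a nested list
z = np.zeros((2, 3))                  # 2x3 array of 0.0
o = np.ones(4)                        # four 1.0s
r = np.arange(0, 10, 2)               # [0 2 4 6 8]
e = np.eye(3)                         # 3x3 identity matrix
l = np.linspace(0, 1, 5)              # 5 evenly spaced points from 0 to 1

print(a.ndim, a.shape)                # number of dimensions and shape
```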
✨ Excited to share my latest Python practical on Logistic Regression!

In this practical, I explored how Logistic Regression helps in predicting categorical outcomes and understanding relationships between variables. It was interesting to see how data patterns can be classified efficiently using this model.

This exercise enhanced my understanding of supervised learning and how it can be applied to real-world problems like binary classification.

📁 Here’s the Google Drive link: https://lnkd.in/gxfhQ8cB
🔗 GitHub account: https://lnkd.in/gcCiRDfS

#Python #MachineLearning #LogisticRegression #DataScience #LearningJourney
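A generic binary-classification example with scikit-learn's LogisticRegression (a sketch on a public dataset; the practical's own data and settings may well differ):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Binary classification: malignant vs. benign tumours
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)  # extra iterations so the solver converges
model.fit(X_train, y_train)
score = model.score(X_test, y_test)        # mean accuracy on the test set
```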
Data Analysis 📊

Visualizing the classic mpg dataset in Python! Great exercise using Seaborn’s multi-panel plots (FacetGrid / PairPlot) to explore relationships between horsepower, weight, cylinders, and miles per gallon.

#DataVisualization #Python #Seaborn #DataScience