🚀 Strengthening My Core DSA Skills – Hands-on Practice in Python

Today, I focused on building strong fundamentals by implementing some important Data Structures & Algorithms concepts from scratch (without using built-in shortcuts).

🔹 Quick Sort (In-Place Implementation)
Implemented Quick Sort using the partition logic and recursion. Worked deeply on understanding:
- Pivot selection
- Partitioning mechanism
- Role of low, high, and the pivot index
Time Complexity: O(n log n) average, O(n²) worst case
This helped me clearly understand how divide-and-conquer works internally.

🔹 Palindrome Check (Logic-Based Approach)
Built a string palindrome checker without using slicing shortcuts. Focused on:
- String traversal
- Reversing logic manually
- Comparing the original and reversed strings
Improved clarity on string manipulation fundamentals.

🔹 Array Rotation (Right Rotation by K Steps)
Solved array rotation using the reversal algorithm approach. Key takeaways:
- Handling edge cases (k > n)
- Using modulo for optimization
- In-place reversal for O(1) space complexity

💡 Key Learning: Understanding the logic behind algorithms is more important than just writing working code. Debugging the partition logic in Quick Sort gave me deeper insight into how memory and indexes actually work.

Practicing these core problems is strengthening my problem-solving foundation step by step.

#DataStructures #Algorithms #Python #CodingPractice #DSA #ProblemSolving #LearningJourney 🚀
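A minimal sketch of the reversal-based right rotation described in the post, assuming the standard three-reversal technique (helper names are my own):

```python
def reverse(arr, lo, hi):
    """Reverse arr[lo..hi] in place."""
    while lo < hi:
        arr[lo], arr[hi] = arr[hi], arr[lo]
        lo += 1
        hi -= 1

def rotate_right(arr, k):
    """Right-rotate arr by k steps in place, O(1) extra space."""
    n = len(arr)
    if n == 0:
        return arr
    k %= n                   # modulo handles the k > n edge case
    reverse(arr, 0, n - 1)   # 1. reverse the whole array
    reverse(arr, 0, k - 1)   # 2. reverse the first k elements
    reverse(arr, k, n - 1)   # 3. reverse the remainder
    return arr

print(rotate_right([1, 2, 3, 4, 5], 2))  # [4, 5, 1, 2, 3]
```

The same in-place swap pattern used in `reverse` is also the core move inside a Quick Sort partition.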
Thinking through decision logic before writing anything.

At some point, every program has to make a decision. Whether to continue or stop. Whether a condition is met or not. Whether one path should run instead of another. This is what decision logic exists for.

Decision logic works by evaluating conditions exactly as they are written. Nothing is inferred, and nothing is implied. If a condition is incomplete or unclear, the behavior that follows will reflect that directly. This is why decision logic is less about syntax and more about precision.

Each condition is a statement about what must be true. Each branch defines what should happen when that statement holds and what happens when it doesn't.

When logic is well-defined, code behaves predictably. When it isn't, code may still run, but the outcome becomes unreliable. In data work especially, decision logic shapes how data is filtered, categorized, and acted upon. Small assumptions at this level can quietly affect results downstream.

Decision logic doesn't make code clever. It makes behavior explicit.

Day 25 / 30
#30DaysOfDataScience #Indepthsofdatascience #Python #DecisionLogic #LearningInPublic
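A small hypothetical example of the point above: each branch states exactly what must be true, including an explicit branch for missing data, so nothing is left to be inferred (the category names are mine):

```python
def categorize(age):
    """Classify an age value; every condition is stated explicitly."""
    if age is None:
        return "unknown"   # missing data gets its own branch, not an accident
    if age < 18:
        return "minor"
    if age < 65:
        return "adult"
    return "senior"

ages = [12, 30, None, 70]
print([categorize(a) for a in ages])  # ['minor', 'adult', 'unknown', 'senior']
```

Without the `None` branch, a missing value would raise an error or, worse, silently fall into a wrong category downstream.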
From Error to Execution: My Data Analysis Journey

Today I ran a simple Python notebook on Google Colab. It asked for a name. It asked for an age. It returned the correct output.

Simple? Yes. But this is what most people don't see:
Behind this clean result is learning. Behind that smooth output is debugging. Behind that confidence is repetition.

In data analysis, clarity matters. If your logic is correct, your output is correct. If your structure is clean, your results are reliable. If your foundation is strong, your insights are powerful.

What looks basic is actually fundamental. I'm strengthening my core:
• Python fundamentals
• Logical thinking
• Attention to detail
• Structured problem-solving

Advanced analytics is built on simple concepts done correctly. Growth is not always loud. Sometimes it's a working notebook and a result that prints exactly as expected.

One step closer to mastery. 📈

#DataAnalysis #Python #LearningInPublic #AnalyticsJourney #WomenInTech #BuildingInPublic #TechGrowth
Python in Data Science #009

I've lost count of how many times I saw "feature importance" in a slide deck and nodded along. Sometimes I realize it is telling a comforting story, not the true one. The model works, but the explanation is quietly misleading.

I always default to permutation importance for explanations and treat impurity-based importance as a rough heuristic.

Tree models (RF/GB/XGB) often expose impurity-based importance (the built-in "gain"/"gini" style). It's fast, but it's biased toward continuous/high-cardinality features, and it can inflate variables that simply offer more split opportunities.

Permutation importance asks a more practical question: "If I shuffle this feature, how much does my metric drop?"

That trade-off matters: permutation is slower and can get messy with highly correlated features (importance gets shared or diluted), but it's much closer to "what the model actually uses" on the data distribution you care about.

Also important: compute it on a validation set, not the training set, or you'll explain overfitting.

#datascience #machinelearning #python
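A self-contained sketch of the shuffle-and-measure idea, assuming a toy setup: the "model" here is a stand-in function with known coefficients, not a fitted estimator (in practice you would pass a trained model to `sklearn.inspection.permutation_importance`):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends on x0 strongly, x1 weakly, x2 not at all.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

def predict(X):
    """Stand-in model: the true coefficients (hypothetical, for illustration)."""
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

baseline = r2(y, predict(X))

def permutation_importance(X, y, n_repeats=10):
    """Mean drop in R² when each column is shuffled in turn."""
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])   # break the feature–target link
            drops[j] += baseline - r2(y, predict(Xp))
    return drops / n_repeats

imp = permutation_importance(X, y)
# x0 dominates, x1 is small, x2 is exactly zero (the model never uses it)
```

Note how the irrelevant feature scores exactly zero: shuffling it changes nothing the model looks at, which is precisely the "what the model actually uses" property described above.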
𝗖𝗼𝗻𝘁𝗶𝗻𝘂𝗶𝗻𝗴 𝗠𝘆 𝗗𝗮𝘁𝗮 𝗦𝗰𝗶𝗲𝗻𝗰𝗲 𝗝𝗼𝘂𝗿𝗻𝗲𝘆 📊

Last week, I deepened my understanding of Python data structures and how they interact with one another.

🔹 Indexing & Slicing
I practiced both positive and negative indexing, as well as slicing techniques, to access and manipulate data more effectively. Casting between types helped me see how Python transforms data behind the scenes.

🔹 Lists
Explored key operations such as len(), append(), remove(), sort(), and pop(). These reinforced how lists can be dynamically managed.

🔹 Tuples
Learned about immutability: tuples cannot be directly modified. To perform operations, I converted them into lists or sets. I also practiced slicing and combining tuples.

🔹 Sets
Focused on intersection, difference, and clear operations, while appreciating how sets automatically eliminate duplicates.

🔹 Dictionaries
Worked on creating and updating key–value pairs, using zip() and dict() to combine data from multiple structures. Practiced adding and modifying entries with methods such as update() to organize data efficiently.

🔹 Integration Exercise
Concluded with a project that brought everything together: creating lists, sets, and tuples, then converting and combining them into dictionaries. This exercise highlighted how different structures complement each other in Python.

Overall, this experience strengthened my foundation in Python and improved my confidence in organizing and manipulating data for real-world applications.

#Python #DataScience #DataStructures #LearningJourney
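The structures above can be combined in a few lines; the specific names and values here are hypothetical, chosen only to show each conversion:

```python
# List: ordered, allows duplicates
names = ["alice", "bob", "alice"]

# Set: duplicates eliminated automatically
unique = set(names)                 # {'alice', 'bob'}

# Tuple: immutable — convert to a list to modify
point = (3, 4)
as_list = list(point)
as_list.append(5)                   # [3, 4, 5]

# zip() + dict(): pair two sequences into key–value pairs
keys = ["name", "age"]
values = ["alice", 30]
record = dict(zip(keys, values))    # {'name': 'alice', 'age': 30}
record.update({"city": "Lagos"})    # add/modify entries with update()
```

This mirrors the integration exercise: each structure plays to its strength, and conversions bridge between them.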
Long time no nerding around on a weekend… but curiosity kicked in again 😄

Today's rabbit hole: How much faster can a simple #RAG-style #retrieval #pipeline get if we compile #Python with #Codon?

So I built a small benchmark and compared:
🐍 CPython 3.12
⚡ Codon (AOT-compiled Python)

Across a simple retrieval setup using:
• TF-IDF and BM25
• Linear scan vs inverted index
• Corpora of 10K → 200K words
• 100 → 1000 queries

Codon compilation took ~2.8 seconds once, then I ran identical workloads on both runtimes. And honestly… the results were pretty fun.

⚡ Overall runtime speedups
Small dataset → 1.4× faster
Medium dataset → 3.17× faster
Large dataset → 3.59× faster

But the real nerd excitement showed up in query performance.

⚡⚡ For the largest dataset (200K words, 1000 queries):
🚀 TF-IDF (linear scan) → 2.7× faster
🚀 BM25 (linear scan) → 4.2× faster
🚀 TF-IDF (inverted index) → 6.85× faster
🚀 BM25 (inverted index) → 11.47× faster

So the pattern became very clear:
🧠 Algorithmic structure matters more than the runtime

Just switching linear scan → inverted index in Python alone already gives around a 5–6× speedup for TF-IDF queries. Then compiling with Codon basically multiplies that gain.

Memory usage did go up a bit with Codon on larger datasets (~1.3×), but query latency dropped significantly.

For anyone playing with RAG pipelines, search systems, or classic IR methods, the takeaway is pretty satisfying:
• Data structures give the first big win
• Compilation can amplify it
• Query-heavy workloads benefit the most

Next weekend's curiosity might involve:
1. Hybrid dense + sparse retrieval
2. Larger corpora
3. Parallel queries

Because once you start benchmarking… it's hard to stop 🤓
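To make the linear scan vs inverted index distinction concrete, here is a minimal sketch on a toy corpus (the corpus and function names are mine, not the benchmark's actual code):

```python
from collections import defaultdict

docs = ["the cat sat", "the dog barked", "cat and dog"]  # toy corpus

def linear_scan(term):
    """Check every document for the term — O(number of docs) per query."""
    return [i for i, d in enumerate(docs) if term in d.split()]

# Inverted index: term -> posting list of doc ids, built once up front.
index = defaultdict(list)
for i, d in enumerate(docs):
    for term in set(d.split()):
        index[term].append(i)

def indexed(term):
    """Queries become a single dict lookup over only the matching docs."""
    return sorted(index.get(term, []))

print(linear_scan("cat"), indexed("cat"))  # [0, 2] [0, 2]
```

Both return identical postings; the index simply never touches non-matching documents, which is why query-heavy workloads gain the most, with or without compilation.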
🚀 Day-77 of #100DaysOfCode
📊 NumPy Practice – Finding the Smallest Element

Today I worked on finding the minimum value in an array using NumPy.

🔹 Concepts Practiced
✔ Array operations
✔ Using np.min()
✔ Basic data analysis

🔹 Key Learning
Finding minimum values is a simple yet important operation used in data analysis, optimization problems, and real-world datasets.

Small steps every day → Big progress 🚀

#Python #NumPy #DataScience #CodingPractice #100DaysOfCode #PythonDeveloper
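The operation in question is a one-liner; `np.argmin` (which returns the position rather than the value) is a natural companion:

```python
import numpy as np

arr = np.array([42, 7, 19, 3, 88])
smallest = np.min(arr)     # the minimum value
where = np.argmin(arr)     # the index of the minimum
print(smallest, where)     # 3 3
```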
Day 4: NumPy — Beyond the Python Loop 🏎️

When you're handling millions of data points — whether in Finance, AI, or Data Engineering — Python loops become a bottleneck. Enter NumPy. In production, we don't just use NumPy for math; we use it for memory efficiency.

1. Why Arrays > Lists?
A Python list is an array of pointers to objects (heavy). A NumPy array is a contiguous block of memory (light).
❌ The Problem: Python lists are flexible but slow because the CPU has to "look up" the type of every single element.
✅ The Pro Standard: Use a NumPy ndarray for homogeneous data.
The "Why": It's not just faster; it's vectorized. Operations happen at the C level, bypassing Python's per-element interpreter overhead.

2. The Death of the "For Loop"
In a tutorial, you might loop through a list to multiply every number by 2. In professional engineering, that's a "code smell."
❌ Slow: [x * 2 for x in my_list]
✅ Fast: my_array * 2
The "Why": This is vectorization — here the scalar is broadcast across the whole array. NumPy applies the operation across the entire memory block at once. It's cleaner to read and 10x–100x faster.

3. Slicing: Views vs. Copies
This is a "senior" detail that saves production systems from crashing due to memory errors.
🚩 The Trap: When you slice a NumPy array (sub_arr = arr[:5]), you aren't creating a new list. You are creating a view.
🛡️ The Consequence: If you change sub_arr, the original arr changes too!
✅ The Fix: If you need a totally separate array, use .copy().

4. Essential Operations for the Sprint
Forget manual math. Use the built-in aggregations that are optimized for performance:
📈 Aggregates: np.mean(), np.std(), np.sum() — these handle multi-dimensional data along a specific axis with one line of code.
🔍 Filtering: Use boolean indexing. arr[arr > 10] is significantly faster and more readable than an if statement inside a loop.

#Python #NumPy #DataEngineering #SoftwareEngineering #Performance #CleanCode #ProgrammingTips #TechCommunity
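The view-vs-copy trap and the loop-free style can both be shown in a few lines:

```python
import numpy as np

arr = np.arange(10)

view = arr[:5]            # a VIEW: shares memory with arr
view[0] = 99              # ...so arr[0] is now 99 too

safe = arr[5:].copy()     # an independent COPY
safe[0] = -1              # arr[5] is unchanged

doubled = arr * 2         # vectorized: no Python loop
big = arr[arr > 10]       # boolean indexing instead of if-in-a-loop
print(arr[0], arr[5], list(big))  # 99 5 [99]
```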
Day 6 was the most hands-on day yet. I stopped looking at Python as a collection of rules and started using it as a high-powered filter for data.

Here is how Day 6 changed my perspective on algorithms and strings:

🔹 The Accumulator Pattern: I learned how to make a loop remember things. Whether it's counting occurrences, summing values, or finding the average, it's all about maintaining state while the loop churns through data.

🔹 The Search Party: I built logic to find the largest and smallest values in a set. Finding the smallest is tricky: you have to be careful with how you initialize your variable, or your starting "zero" might accidentally become your answer.

🔹 Strings Are Collections: I used to think of a word as just "text." Now I see it as a sequence. I've learned to slice strings to grab exactly what I need, strip away the "noise" (whitespace), and parse specific data out of a messy block of text.

🔹 The "in" Operator: Python's readability shines here. Using if 'search_term' in text: feels like writing English, but it's actually a powerful logical tool for filtering information instantly.

Next up: File Handling. I'm moving from typing data manually into the console to letting Python read and analyze entire documents for me. 📂

#Python #DataAnalysis #CodingJourney #BuildInPublic #SoftwareLogic #Algorithms #StringManipulation
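The patterns above can be sketched together; the sample values and the parsed line are made up for illustration:

```python
# Accumulator pattern: the loop maintains running state.
values = [4, 9, 1, 7]
total = 0
smallest = None          # initialize to None, NOT 0 — 0 could wrongly "win"
for v in values:
    total += v
    if smallest is None or v < smallest:
        smallest = v
average = total / len(values)

# Strings as sequences: strip the noise, slice out what you need.
line = "  name: Ada Lovelace  "
clean = line.strip()                           # drop surrounding whitespace
value = clean[clean.find(":") + 1:].strip()    # parse the part after the colon
found = "Ada" in clean                         # the readable 'in' operator
print(total, smallest, average, value, found)
```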
📘 Data Science Journey | Day 25
🔥 Day 50 of my #100daysofcodechallenge

Today I learned about the Requests library in web scraping. Here's what I covered today:

📌 Introduction to the Requests Library
▫ A Python library used to send HTTP requests easily
▫ Helps fetch webpage content programmatically

📌 Making HTTP Requests
▫ Using requests.get() to retrieve webpage data

📌 Working with the Response Object
▫ Accessing data using:
▪ .text → HTML content
▪ .status_code → request status (200 = success)

📌 Downloading Multiple Pages
▫ Using loops to scrape multiple pages
▫ Automating data collection from websites

👉 See you tomorrow for Day 51.

#DataScience #Python #WebScraping #RequestsLibrary #DataCollection #LearningJourney #Consistency #CodeWithHarry #100daysofcode
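A sketch of the multi-page pattern described above. The base URL and page-count query parameter are hypothetical (real sites paginate in different ways), and the network call is kept in its own function so the URL builder works offline:

```python
def page_urls(base, n_pages):
    """Build URLs like base?page=1 ... base?page=n (a common pagination style)."""
    return [f"{base}?page={i}" for i in range(1, n_pages + 1)]

def fetch_all(urls):
    """Download each page; keep only successful responses."""
    import requests
    pages = []
    for url in urls:
        resp = requests.get(url, timeout=10)
        if resp.status_code == 200:     # 200 = success
            pages.append(resp.text)     # .text holds the HTML content
    return pages

urls = page_urls("https://example.com/articles", 3)
# pages = fetch_all(urls)  # uncomment to actually download
```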
🚀 Day 58 – NumPy Library & Core Functions

Today, I focused on strengthening my understanding of the NumPy library, a powerful tool for numerical computing in Python. Here's what I explored 👇

🔹 Creating Arrays
Learned different ways to create NumPy arrays using lists, ranges, and built-in functions like zeros, ones, and random values.

🔹 Indexing & Slicing
Practiced accessing specific elements, rows, and columns, making data manipulation much easier and more efficient.

🔹 Reshaping & Resizing
Understood how to change the shape and size of arrays without affecting the data — very useful for handling structured datasets.

🔹 Stacking & Splitting
Explored how to combine multiple arrays and split them into smaller parts for better data organization.

🔹 Broadcasting
Discovered how NumPy performs operations on arrays of different shapes seamlessly — a game changer for writing clean and efficient code!

💡 Key Takeaway: NumPy simplifies complex calculations and helps in handling large datasets efficiently, making it an essential skill for Data Science and Machine Learning.

📈 Consistency is key — one step closer to mastering data-driven development!

#Day58 #Python #NumPy #DataScience #MachineLearning #Developers
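Each of the topics above fits in one line of NumPy; a compact tour:

```python
import numpy as np

a = np.zeros((2, 3))                  # creation: all zeros
b = np.ones((2, 3))                   # creation: all ones
r = np.arange(6).reshape(2, 3)        # reshape: same data, new shape

stacked = np.vstack([a, b])           # stacking: shape becomes (4, 3)
top, bottom = np.vsplit(stacked, 2)   # splitting back into two (2, 3) arrays

row = np.array([10, 20, 30])
broadcast = r + row                   # broadcasting: (2,3) + (3,) applies row-wise
print(broadcast)
# [[10 21 32]
#  [13 24 35]]
```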