🚀 String Compression Problem Solved in 3 Different Ways

Problem Statement: Given an array of characters, compress it by replacing runs of repeated characters with the character followed by its count.

🔗 GitHub Code 👉 https://lnkd.in/gap5HkW2

Example:
Input: ["a","a","b","b","c","c","c"]
Output: ["a","2","b","2","c","3"]

I implemented 3 approaches:

1️⃣ Brute Force Approach
Create a separate result array
Count repeated characters
Append character and count
Time Complexity: O(n)
Space Complexity: O(n)

2️⃣ Better Approach
Use two pointers
Cleaner logic with left and right pointers
Still uses extra space
Time Complexity: O(n)
Space Complexity: O(n)

3️⃣ Optimized Approach
Use in-place modification
Maintain a write pointer
Most memory-efficient solution
Time Complexity: O(n)
Space Complexity: O(1)

The optimized approach is best because it avoids extra memory and works directly on the input array.

Key Learning: Always try to solve a problem in multiple ways:
First for correctness
Then for readability
Finally for optimization

This is a great interview problem because it tests:
✔️ Arrays
✔️ Two Pointers
✔️ String Manipulation
✔️ Time and Space Complexity

Data Structures and Algorithms in Python are all about improving problem-solving skills step by step.

#Python #DSA #CodingInterview #ProblemSolving #Algorithms #100DaysOfCode #SoftwareEngineering #Programming #LeetCode #Coding
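The optimized in-place approach described above could be sketched like this; a minimal version for illustration, not the author's linked GitHub code:

```python
def compress(chars):
    """Compress chars in place: runs of repeated characters become char + count."""
    write = 0  # next position to write into
    read = 0
    n = len(chars)
    while read < n:
        ch = chars[read]
        run_start = read
        while read < n and chars[read] == ch:
            read += 1
        chars[write] = ch
        write += 1
        run_length = read - run_start
        if run_length > 1:
            for digit in str(run_length):  # runs of 10+ need multiple digits
                chars[write] = digit
                write += 1
    del chars[write:]  # trim the leftover tail
    return chars
```

Running it on the example above, compress(["a","a","b","b","c","c","c"]) yields ["a","2","b","2","c","3"]. The write pointer never overtakes the read pointer, which is what makes the O(1)-space rewrite safe.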
Sateesh Sonkamble’s Post
More Relevant Posts
🚀 Day 10 of DSA Practice: Merge Two Sorted Arrays

Today’s problem is a classic and super important for interviews 👇

🧩 Problem
Given two sorted arrays, merge them into a single sorted array.

🔍 Example
Input:
Array 1: [1, 3, 5]
Array 2: [2, 4, 6]
Output: [1, 2, 3, 4, 5, 6]

💡 Approach 1: Two Pointers (Optimal)
Since both arrays are already sorted, we can use a two-pointer technique.
👉 Compare elements from both arrays and pick the smaller one each time.
✅ Time Complexity: O(n + m)
✅ Space Complexity: O(n + m)

def merge_sorted_arrays(arr1, arr2):
    i, j = 0, 0
    merged = []
    while i < len(arr1) and j < len(arr2):
        if arr1[i] < arr2[j]:
            merged.append(arr1[i])
            i += 1
        else:
            merged.append(arr2[j])
            j += 1
    # Add remaining elements
    merged.extend(arr1[i:])
    merged.extend(arr2[j:])
    return merged

💡 Approach 2: Using Built-in Sort (Not Optimal)
Merge both arrays and then sort.
❌ Time Complexity: O((n+m) log(n+m))

def merge_sorted_arrays(arr1, arr2):
    return sorted(arr1 + arr2)

⚡ Key Takeaways
✔️ Two-pointer approach is efficient and interview-friendly
✔️ Works because arrays are already sorted
✔️ Foundation for advanced topics like merge sort

#Python #CodingJourney #30DaysOfCode #LearnToCode #Programming #Developers #ProblemSolving #PythonBasics
🚀 Solved Another Sliding Window Problem on LeetCode!

Today’s problem: Maximum Number of Vowels in a Substring of Given Length (LeetCode #1456)

💡 Problem Summary:
Given a string s and an integer k, find the maximum number of vowels in any substring of length k.

❌ Brute Force Approach:
Generate all substrings of size k
Count vowels in each substring
Time Complexity: O(n × k) → not efficient

✅ Optimized Approach: Sliding Window
Instead of recalculating everything:
Count vowels in the first window
Slide the window forward:
  Add the next character
  Remove the previous character
Track the maximum count

👉 Core Idea: count = count + new_char - old_char

💻 Clean Code:

def maxVowels(s, k):
    vowels = set("aeiou")
    count = 0
    for i in range(k):
        if s[i] in vowels:
            count += 1
    max_count = count
    for i in range(k, len(s)):
        if s[i] in vowels:
            count += 1
        if s[i - k] in vowels:
            count -= 1
        max_count = max(max_count, count)
    return max_count

⚡ Complexity:
Time: O(n)
Space: O(1)

🧠 Key Takeaway:
Sliding Window is not just a technique; it’s a mindset. You reuse previous computation instead of recalculating everything.

🔥 This pattern applies to:
Strings & Arrays
Substring / Subarray problems
Real-world streaming data

#DSA #LeetCode #Coding #SlidingWindow #InterviewPreparation #Python #ProblemSolving
📌 Day 5 tasks - Build your job market intelligence tool

Today, more than summarizing, you are forcing the AI to categorize every role in your dataset as High Demand, Medium Demand, or Low Demand. By the end, you'll have an automated job market scoring system built entirely in Python.

df['demand_level'] = df['description'].apply(classify_demand)

📌 Your Day 5 Tasks:
1️⃣ Design a classification prompt that returns consistent labels
2️⃣ Apply it across your full dataset
3️⃣ Analyse the demand distribution
4️⃣ Export your final intelligence report CSV

👉 Access Day 5 Module: https://lnkd.in/e5RYmsHK
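For a sense of the shape of classify_demand, here is a toy keyword heuristic; the body below is entirely hypothetical (the Day 5 module has you drive classification with an AI prompt instead), but it shows the contract the function must satisfy: one consistent label per description.

```python
def classify_demand(description):
    """Hypothetical keyword stand-in for the prompt-based classifier."""
    text = description.lower()
    high_signals = ("urgent", "immediate start", "multiple openings")
    low_signals = ("niche", "occasional", "rarely hiring")
    if any(kw in text for kw in high_signals):
        return "High Demand"
    if any(kw in text for kw in low_signals):
        return "Low Demand"
    return "Medium Demand"

# Applied exactly as in the snippet above:
# df['demand_level'] = df['description'].apply(classify_demand)
```

Whatever the implementation, the key design constraint from Task 1 holds: the function must return one of exactly three consistent labels, or the downstream distribution analysis breaks.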
My Day 13 of 90 Days Growth Challenge
AMDOR ANALYTICS

Today we’ll look at a Python concept called casting, or type conversion. One beauty of Python is the ability to convert from one datatype to another: we can take a string variable and convert it to a list, or control the raw input data keyed in by the user.

For instance, I can have an age field to be filled by my user, who might key in 34.4, a float (decimal) value. Python can convert that float to 34, an integer, but I, the programmer, must code that conversion in my backend. I can also use casting to control the datatype that a user should enter in a particular field.

In type conversion, we can convert from datatype A to datatype B. For example, we can convert:
a. Integer to Float
b. Float to Integer
c. String to Integer
d. String to Float
e. Integer to String
f. Float to String
g. List to Set
h. String to List
i. Integer List to String

See y’all tomorrow

#Techjourney #90daysgrowthchallenge #consistency #growth #aiengineering #Amdoranalytics
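A minimal sketch of the conversions listed above, including the age example. One detail worth knowing: int("34.4") raises ValueError, so a decimal string must pass through float() first.

```python
# a. Integer to Float
assert float(34) == 34.0
# b. Float to Integer (truncates toward zero)
assert int(34.4) == 34
# c. String to Integer
assert int("42") == 42
# d. String to Float
assert float("34.4") == 34.4
# e. Integer to String
assert str(7) == "7"
# f. Float to String
assert str(3.5) == "3.5"
# g. List to Set (duplicates removed)
assert set([1, 2, 2, 3]) == {1, 2, 3}
# h. String to List (of characters)
assert list("abc") == ["a", "b", "c"]
# i. Integer List to String
assert ", ".join(str(n) for n in [1, 2, 3]) == "1, 2, 3"

# The age example: the user keys in "34.4"; the backend stores 34
age = int(float("34.4"))
assert age == 34
```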
Have you ever heard of "Duck Typing"? 🦆

"I don't care who you are; I only care what you can do." 🦆 That is the soul of Duck Typing.

In many programming languages, you have to show an "ID card" (a specific Class or Type) to get the job done. If the system expects a Duck, it will reject a Goose, even if the Goose behaves exactly the same.

Python plays by different rules. It follows a simple philosophy: "If it walks like a duck and quacks like a duck, then for all intents and purposes, it’s a duck."

The Big Picture:
Old school: "Are you officially a 'Payment' object?"
Duck Typing: "Do you have a 'process()' method? Great, let's go."

Why this matters:
Flexibility: You can swap components in and out without rewriting your whole logic.
Speed: You spend less time defining strict hierarchies and more time building features.
Testing: It’s why mocking and fakes are so seamless in Python.

The Trade-off: You trade the safety of strict rules for the freedom of behavior. If your "duck" forgets how to quack halfway through, you’ll hit an error (an AttributeError, at runtime rather than at compile time).

#Python #DataScience #Insights #TechStrategy #Coding #CleanCode
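A tiny sketch of the "payment" picture above; CreditCard, GiftVoucher, and checkout are made-up names for illustration:

```python
class CreditCard:
    def process(self, amount):
        return f"charged {amount} to card"

class GiftVoucher:
    def process(self, amount):
        return f"redeemed {amount} from voucher"

def checkout(payment, amount):
    # No isinstance() check, no shared base class: anything
    # that has a .process() method is a valid "payment duck".
    return payment.process(amount)

print(checkout(CreditCard(), 50))   # charged 50 to card
print(checkout(GiftVoucher(), 20))  # redeemed 20 from voucher
```

Neither class inherits from the other, yet checkout() accepts both, and a test fake with its own process() method would work just as well.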
We are excited to introduce OptiRefine, a static Python optimizer designed to eliminate O(n²) algorithmic patterns directly at the source level through CST transformation.

The core concept is straightforward: rather than profiling code at runtime or relying on developers to manually identify inefficiencies, we parse the source code into a Concrete Syntax Tree (CST). We then pattern-match against known anti-patterns and rewrite them to O(n) equivalents in a single pass.

Here are some benchmarks at n = 10,000:
• .count() inside a loop → Counter(): 1,240× faster
• `in list` membership check → set(): 910× faster
• String += in a loop → ''.join(): 440× faster
• Nested loop pair search → set + single pass: 780× faster

The average speedup is 652×, achieved without a runtime agent, code annotations, or configuration.

Engineering details include:
• Built on libcst (lossless CST, ensuring formatting survives the rewrite)
• Automatic and conditional import injection (Counter only added if the rewrite occurs)
• Scoped sub-transformers, SubscriptReplacer and InCheckReplacer, handle inner rewrites without altering global state

OptiRefine is particularly targeted at ML pipelines, data preprocessing, and backend Python, where these patterns can significantly impact performance at scale.

#Python #MLOps #PerformanceEngineering #OptiRefine
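For a sense of what these rewrites look like, here is the first anti-pattern and its O(n) equivalent, written by hand for illustration (not actual OptiRefine output):

```python
from collections import Counter

def frequencies_slow(items):
    # Anti-pattern: .count() inside a comprehension rescans the whole
    # list for every element, giving O(n^2) total work.
    return {x: items.count(x) for x in items}

def frequencies_fast(items):
    # Rewrite: a single Counter() pass is O(n). This pattern is also why
    # the transformer injects `from collections import Counter`
    # conditionally, only when such a rewrite actually fires.
    return dict(Counter(items))

sample = ["a", "b", "a", "c", "a"]
assert frequencies_slow(sample) == frequencies_fast(sample) == {"a": 3, "b": 1, "c": 1}
```

Both functions return identical results; only the asymptotic cost changes, which is what makes this kind of rewrite safe to apply mechanically.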
Data collection series · Post 07
Imputation strategies: beyond filling with the mean

Filling missing values with the mean is fast. It's also quietly wrong in most cases. Here are 4 better strategies, and exactly when to use each.

Mean imputation is the default. Everyone learns it first. It's one line of code. It ships fast.

But it has a serious flaw: it collapses variance. Replace 500 missing values with the mean, and your distribution gets an artificial spike right in the middle. Your correlations weaken. Your model learns a distorted world.

There are better options. Here's the practical guide.

#Python #DataScience #DataQuality #DataCleaning #Analytics #DataAnalyst #DataAnalytics #DataEngineering #ImputationStrategies
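The variance collapse is easy to demonstrate with the standard library (toy numbers, purely for illustration):

```python
import statistics

data = [10, 12, 14, None, None, None, 16, 18]
observed = [x for x in data if x is not None]

mean = statistics.mean(observed)  # 14
imputed = [mean if x is None else x for x in data]

# The imputed values sit exactly on the mean, adding zero spread,
# so the standard deviation shrinks relative to the observed data.
assert statistics.pstdev(imputed) < statistics.pstdev(observed)
```

Scale that from 3 missing values to 500 and the artificial spike at the mean dominates the distribution.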
Stop Jumping to Tools

Most transitioning engineers ask: “Should I use SQL or Python?”

Wrong question. Interviews don’t test tools first. They test thinking.

Weak signal:
• Tool-first thinking
• Implementation focus

Strong signal:
1️⃣ What’s the problem?
2️⃣ What’s the metric?
3️⃣ What’s the approach?
Then tools.

Because tools change. Thinking doesn’t.

If your first instinct is a tool… you’re skipping the important part.

#MachineLearning #DataScience #dataanalytics #SoftwareEngineering #dataanalyst #datascience #InterviewPrep #NextInterviewAI
Getting the "plumbing" right before the ML takes over.

I’m currently building a House Price Valuation System, and if there’s one thing my CS background has taught me, it’s that a model is only as good as the data pipeline behind it.

This screenshot is from the Data Preprocessing phase. I’m using Python (Pandas/NumPy) to handle the messy reality of raw data, things like categorical imputation and logical defaults, so the data is actually structured and ready for testing in the ML models.

Whether it’s an ML project or a business dashboard, I’ve found that the real engineering happens in the "boring" parts: the cleaning, the logic, and the automated pipelines. Once the technical foundation is solid, the rest usually falls into place.

#CSEngineer #Python #MachineLearning #SystemArchitecture #BuildingInPublic
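As a rough illustration of what "categorical imputation with logical defaults" can mean in housing data (field names and values below are hypothetical, not from the actual project):

```python
# A missing categorical value often encodes "feature absent" rather than
# "value unknown": a house with no recorded basement likely has no basement.
houses = [
    {"garage": "attached", "basement": None},
    {"garage": None,       "basement": "finished"},
]

LOGICAL_DEFAULTS = {"garage": "no garage", "basement": "no basement"}

def impute_categoricals(rows, defaults):
    """Fill missing categorical fields with domain-motivated defaults."""
    for row in rows:
        for field, default in defaults.items():
            if row[field] is None:
                row[field] = default
    return rows

impute_categoricals(houses, LOGICAL_DEFAULTS)
assert houses[0]["basement"] == "no basement"
assert houses[1]["garage"] == "no garage"
```

The same idea maps directly onto a Pandas DataFrame via fillna() with a per-column default dict.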
What can raw sensor data really tell us? 🤔

In this project, No. 7 with KAITECH #programming_for_engineers_R02, I transformed a small dataset into clear engineering insights using NumPy, Pandas, and Matplotlib.

Instead of just reading numbers, I:
* Analyzed sensor performance under different conditions
* Detected high stress and temperature patterns
* Transformed timestamps into meaningful time-based trends
* Visualized relationships between stress and displacement 📊

The goal wasn’t just coding… it was understanding the data and extracting value from it. This is a simple example, but it illustrates how data analysis informs real-world engineering decisions.

🎥 Watch the video to see the full workflow step by step.

#DataAnalytics #Python #Engineering #NumPy #Pandas #Matplotlib
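The detection step, for example, reduces to a threshold scan. The sketch below uses made-up readings and limits, not the project's dataset:

```python
# Toy sensor log: (timestamp_s, stress_MPa, temperature_C)
readings = [
    (0, 120.0, 35.2),
    (1, 245.5, 61.0),
    (2, 180.3, 48.7),
    (3, 260.1, 64.3),
]

STRESS_LIMIT = 200.0   # hypothetical allowable stress
TEMP_LIMIT = 60.0      # hypothetical temperature ceiling

# Flag timestamps where stress and temperature are both above their limits
alerts = [t for t, stress, temp in readings
          if stress > STRESS_LIMIT and temp > TEMP_LIMIT]
assert alerts == [1, 3]
```

With NumPy the same scan becomes a vectorized boolean mask, which is what makes it practical on full-size sensor logs.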