🚀 DSA Learning Update – Reverse Words in a String

Today I solved "Reverse Words in a String" (LeetCode #151) using a clean and practical approach.

🔍 Problem Insight
The goal is not just reversing a string, but:
- Removing extra spaces
- Keeping only single spaces between words
- Reversing the order of words

💡 My Approach
1. Trim the string to remove leading/trailing spaces
2. Split the string using " "
3. Traverse from right → left
4. Skip empty strings (caused by multiple spaces)
5. Build the result using StringBuilder

📌 Core Logic

    String str = s.trim();
    String[] ar = str.split(" ");
    StringBuilder ans = new StringBuilder();
    for (int i = ar.length - 1; i >= 0; i--) {
        if (ar[i].equals("")) continue;
        if (ans.length() > 0) {
            ans.append(" ");
        }
        ans.append(ar[i]);
    }
    return ans.toString();

📌 Example
Input: "  hello world  "
Output: "world hello"

🧠 Key Learnings
✔ Always handle edge cases (especially spaces in strings)
✔ split(" ") can create empty strings → they must be filtered out
✔ Traversing in reverse simplifies the logic
✔ Clean formatting matters as much as correctness

✨ Takeaway
Even simple string problems can test attention to detail. Learning how to handle edge cases properly is a big step toward writing robust code.

#DSA #LeetCode #Java #ProblemSolving #CodingJourney #LearnInPublic #SoftwareEngineering
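As a side note (not from the original post), the same logic maps to a very short Python sketch, because `str.split()` with no argument already trims the string and drops the empty pieces that Java's `split(" ")` produces:

```python
def reverse_words(s):
    """Trim, collapse internal whitespace, and reverse the word order."""
    # split() with no separator splits on runs of whitespace and
    # discards empty strings, so no manual filtering is needed.
    return " ".join(reversed(s.split()))

print(reverse_words("  hello world  "))  # "world hello"
```

This is the same trim → split → reverse-build pipeline, with the edge-case handling folded into the standard library.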
Reverse Words in a String LeetCode 151 Java Solution
This is actually a massive result that people are sleeping on. We can start over from zero and, with the right language design, build a programming language that the models can write perfectly without a training corpus.
A model just wrote perfect code in a programming language it has never seen, and I've got some updated benchmark results for #Vera, https://veralang.dev/.

Last week I posted early results from a single model on VeraBench. This week it's 6 models, 3 providers, and the same 50 problems. The headline is that Moonshot Kimi K2.5 writes Vera correctly 100% of the time. It only managed to generate working TypeScript 91% of the time, and Python 86% of the time.

No model has seen Vera before. There is no example code on GitHub, no Stack Overflow answers, no tutorials. Every token of Vera code is generated from a single specification document in the prompt. Python and TypeScript have billions of lines of training data. Vera has none.

Across the flagship tier (Claude Opus 4, GPT-4.1, Kimi K2.5), Vera averages 93% run_correct. Python averages 93%. Parity, with zero training data. In the Sonnet tier, Kimi K2 Turbo also writes better Vera than TypeScript, generating clean code 83% of the time for Vera versus 79% for TypeScript. That's the same gap I reported last week with Claude Sonnet, though Sonnet's numbers flipped in the re-run, which tells you something about single-pass evaluation and model non-determinism.

The thesis behind Vera seems to be proving out. Language design matters more than training data volume. Mandatory contracts, typed slots, explicit effects. Give a model enough structure and it doesn't need training data. It just needs a specification.

There are obvious caveats, and it's still early days. These are single-run results, with just 50 problems and all that model non-determinism, so Kimi's 100% may not hold on every run; Pass@k evaluation is next. But so far at least, language design is doing a lot of heavy lifting.

Language: https://lnkd.in/em5Kw75m
Benchmark: https://lnkd.in/eue8RafE

#AI #LLM #Programming #Benchmark #Vera #TypeScript #Python
More from our own Alasdair Allan, who has been asking whether language design matters more than training data for LLM code generation. The early evidence says yes. His #Vera (https://veralang.dev) is a programming language designed for LLMs to write, not humans: no variable names, mandatory contracts, explicit effects. The thesis is that if you give a model enough structure, it doesn't need training data; it just needs a specification. #AI #LLM #Vera #TypeScript #Python
Most coding problems look different… until you realize they are just graphs in disguise.

That's where Graphs come in. A graph represents relationships using nodes (vertices) and edges (connections). You'll find them everywhere, from Google Maps finding routes to social media suggesting connections.

Understanding Graphs
Graphs can take different forms depending on the problem:
• Directed and Undirected (based on direction)
• Weighted and Unweighted (based on cost or distance)
• Cyclic and Acyclic (a directed acyclic graph is a DAG)
These variations help model real-world scenarios more accurately.

How Graphs Are Stored
There are two main ways:
• Adjacency Matrix → fast edge lookup but uses O(V²) space
• Adjacency List → space efficient and used in most systems

Core Traversal Techniques
🔹 BFS (Breadth-First Search): moves level by level and finds shortest paths in unweighted graphs
🔹 DFS (Depth-First Search): goes deep into nodes; useful for cycle detection and backtracking

Important Algorithms
Once the basics are clear, these are must-know:
• Dijkstra → shortest path (non-negative weights)
• Bellman-Ford → shortest path, handles negative weights
• Kruskal & Prim → minimum spanning tree
• Topological Sort → ordering in a DAG

How to Approach Graphs
Start with the basics, master BFS and DFS, then move on to problems like cycle detection and shortest paths. With practice, the patterns become clear. Graphs are not just a topic; they help you think in terms of connections and dependencies.

💬 Comment what kind of notes or topic you want next, and I will cover it in detail 👇

#DSA #Algorithms #Java #Coding #InterviewPrep
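To make the BFS point above concrete, here is a minimal Python sketch (the adjacency-list graph is a made-up example, not from the post) that finds a shortest path in an unweighted graph:

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Shortest path (fewest edges) in an unweighted graph
    given as an adjacency list (dict of node -> list of neighbors)."""
    queue = deque([[start]])  # queue of partial paths
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # goal unreachable

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": ["E"],
}
print(bfs_shortest_path(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Because BFS explores level by level, the first time it reaches the goal it has found a path with the fewest edges, which is exactly why it gives shortest paths in unweighted graphs.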
Day 35/90

Why is database-level aggregation better than Python loops? Speed and scalability. When building a dashboard or a "stats" view, it is tempting to fetch all records and count them in a loop. But as the data grows, that approach kills performance. Moving the heavy lifting to the database using annotate and Count keeps the backend fast, even with thousands of nested records. It's the difference between a system that scales and one that crawls.

Backend (Gnowee EdTech Project):
• Complex Aggregations: Implemented the /courses/with-stats/ endpoint using a single high-performance query with annotate() and Count(distinct=True) to consolidate student, teacher, and material stats in one execution.
• Nested Logic Completion: Finalized the Course module by implementing nested endpoints for Assignments and Exams, including custom HH:MM:SS duration calculations in the serializer.
• Query Optimization: Eliminated N+1 issues across all listing actions by using select_related for single relationships and prefetch_related for nested sets.
• Data Integrity: Synchronized the Assignment model related_name to ensure accurate reverse-lookup counts and fixed invalid joins in the exam logic.

I'm curious how others handle validation for things like duplicate assignments. Do you usually prefer keeping that logic in the serializer so the feedback stays user-friendly?

#python #django #backend #drf #90daysofcode #sql #refactoring #coding
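The loop-vs-aggregation trade-off can be shown with plain sqlite3 (a hedged sketch: the schema and data here are illustrative, not the project's actual models; Django's annotate(Count(..., distinct=True)) compiles to a similar GROUP BY):

```python
import sqlite3

# In-memory database with a toy course/enrollment layout.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE course (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE enrollment (course_id INTEGER, student TEXT);
    INSERT INTO course VALUES (1, 'Python'), (2, 'SQL');
    INSERT INTO enrollment VALUES
        (1, 'asha'), (1, 'ben'), (1, 'asha'), (2, 'cara');
""")

# Slow pattern: pull every row across the wire, count in Python.
rows = conn.execute("SELECT course_id, student FROM enrollment").fetchall()
seen = {}
for course_id, student in rows:
    seen.setdefault(course_id, set()).add(student)
counts_in_python = {cid: len(students) for cid, students in seen.items()}

# Fast pattern: one query, the database does the aggregation.
counts_in_sql = dict(conn.execute("""
    SELECT course_id, COUNT(DISTINCT student)
    FROM enrollment GROUP BY course_id
""").fetchall())

print(counts_in_python)  # {1: 2, 2: 1}
print(counts_in_sql)     # {1: 2, 2: 1}
```

Both give the same answer, but the second transfers two rows instead of the whole table, and the gap widens as the data grows.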
🏗️ Beyond "It Works": Writing Maintainable Python

One of the biggest shifts in my coding journey has been moving away from long, "flat" scripts and toward modular programming. Instead of writing one giant block of code, I've started breaking my logic into small, reusable functions.

Why the change?
• Separation of Concerns: My math logic (calculating totals) is now separate from my formatting logic (adding currency symbols).
• Readability: The main script now reads like a story. Instead of staring at complex f-strings, I just see format_currency().
• Future-Proofing: If I need to change the currency to euros or update a tax calculation, I only change it in one place, not everywhere in the script.

The Refactored Logic (Python):

    import pandas as pd

    def calculate_total(quantity, price):
        """Calculate the total for a single item."""
        return quantity * price

    def format_currency(amount):
        """Format a number as currency for reports."""
        return f"${amount:,.2f}"

    # --- Main Workflow ---
    df = pd.read_csv('data/sales.csv', skipinitialspace=True)

    # Using our functions to clean and transform the data
    # (calculate_total also works on whole columns, since it is
    # just multiplication)
    df['total'] = calculate_total(df['quantity'], df['price'])
    df['display_total'] = df['total'].apply(format_currency)
    print(df[['product', 'display_total']])

It's a small structural change that makes a massive difference as projects scale. High-quality code isn't just about the output; it's about how easy it is for the next person (or "future me") to read and maintain.

#Python #CleanCode #ProgrammingTips #DataScience #LearningToCode #SoftwareEngineering #Automation
🚨 "Python is slow."

If you've ever said this… there's a 90% chance you don't understand the GIL. And that misunderstanding is costing you performance. Big time.

Let's break your assumption: you spin up 10 threads. You expect 🚀 10x speed. Reality? 👉 Your CPU is still doing ONE task at a time. Welcome to the truth of Python (specifically CPython, the standard interpreter).

🧠 The villain (or hero?): GIL — Global Interpreter Lock. It ensures:
👉 Only ONE thread executes Python bytecode at a time
👉 Even on a multi-core machine

So yes…
❌ Threads don't give true parallelism for CPU-heavy work
❌ More threads ≠ more speed
❌ Sometimes performance actually DROPS

💥 Brutal example: you write multithreading for data processing, image transformations, heavy calculations… and then: "Why is this still slow?" 😐 Because you solved the wrong problem with the wrong tool.

🧵 Where threads ACTUALLY shine: when your program is mostly waiting:
✅ API calls
✅ Database queries
✅ File I/O
👉 While one thread waits, another runs. That's where multithreading wins.

⚙️ Want REAL power? Use multiprocessing:
✔ Separate processes
✔ Separate memory
✔ Separate Python interpreters
✔ NO GIL bottleneck
👉 Finally… TRUE parallel execution across CPU cores

⚡ Shift your mindset: multithreading ≠ speed booster, multiprocessing ≠ overkill. They are tools. Use them correctly.

🔥 The rule elite developers follow:
👉 I/O-bound → multithreading
👉 CPU-bound → multiprocessing

💣 Hard truth: most developers don't have a performance problem… they have a mental-model problem.

💬 Be honest: did you ever assume threads = parallelism in Python?

#Python #GIL #Performance #Multithreading #Multiprocessing #BackendDevelopment #Developers
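A small self-contained demo of the I/O-bound case described above (the sleep stands in for an API call or database query; timings are approximate and machine-dependent):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io_call(i):
    """Stand-in for an API call or DB query: the thread mostly waits."""
    time.sleep(0.2)
    return i * i

# Sequential: the waits add up (~0.8s for 4 calls).
start = time.perf_counter()
sequential = [fake_io_call(i) for i in range(4)]
t_seq = time.perf_counter() - start

# Threaded: while one thread sleeps, the others run, so the GIL
# is not the bottleneck here (~0.2s for the same 4 calls).
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    threaded = list(pool.map(fake_io_call, range(4)))
t_thr = time.perf_counter() - start

print(sequential == threaded)  # True: same results
print(t_thr < t_seq)           # True: waits overlap
```

For CPU-bound work, swapping ThreadPoolExecutor for ProcessPoolExecutor is the analogous move, since each process gets its own interpreter and its own GIL.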
🚀 DSA Journey — Day 17: Mastering Binary Search (LeetCode 704)

Today I worked on one of the most fundamental and powerful algorithms in DSA — Binary Search.

🔍 Problem Understanding
Given a sorted array, we need to efficiently find the index of a target element. If it exists, return its index; otherwise, return -1.

💡 Brute Force Approach
- Traverse the array linearly
- Compare each element with the target
- Time Complexity: O(n)
⚠️ Not optimal for large datasets

⚡ Optimized Approach — Binary Search
Since the array is sorted, we can eliminate half of the search space in every step.

👉 Steps:
1. Initialize start = 0, end = n - 1
2. Find the middle: mid = (start + end) / 2
3. Compare:
   - If nums[mid] == target → return mid
   - If target < nums[mid] → move left (end = mid - 1)
   - If target > nums[mid] → move right (start = mid + 1)
4. Repeat until found or the search space is empty

🧠 Example Walkthrough
Array: [-1, 0, 3, 5, 9, 12], Target: 9
mid = 2 → value = 3 → move right
mid = 4 → value = 9 → ✅ Found

⏱️ Complexity Analysis
Time Complexity: O(log n)
Space Complexity: O(1)

🎯 Key Learning
Binary Search is not just a problem — it's a pattern. Understanding it deeply will help in searching problems, optimization problems, and many advanced DSA questions.

🙏 Gratitude
Grateful for the consistency and learning mindset every day 🙌
📈 Consistency is the real game changer. One problem a day = big results.

#DSA #BinarySearch #LeetCode #CodingJourney #Java #ProblemSolving #Consistency #Learning #TechJourney #100DaysOfCode
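The steps above, sketched in Python (the post works in Java; this is an equivalent rendering of the same loop):

```python
def binary_search(nums, target):
    """Return the index of target in sorted list nums, or -1."""
    start, end = 0, len(nums) - 1
    while start <= end:
        # In fixed-width-integer languages, prefer
        # start + (end - start) // 2 to avoid overflow.
        mid = (start + end) // 2
        if nums[mid] == target:
            return mid
        elif target < nums[mid]:
            end = mid - 1    # discard the right half
        else:
            start = mid + 1  # discard the left half
    return -1

print(binary_search([-1, 0, 3, 5, 9, 12], 9))  # 4
print(binary_search([-1, 0, 3, 5, 9, 12], 2))  # -1
```

Each iteration halves the search space, which is where the O(log n) bound comes from.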
Fine-Tuning Gemma 4 E2B Reduces Python Code Errors to Near Zero

📌 By leveraging LoRA fine-tuning and strategic prompt engineering, a developer has transformed the Gemma 4 E2B model into a highly specialized Python assistant with near-zero error rates. This demonstrates how combining parameter-efficient training with retrieval-confidence gating can turn a general-purpose model into a precision tool for deterministic code generation, and the findings offer a roadmap for building ultra-reliable, on-device AI specialists.

🔗 Read more: https://lnkd.in/dC2tmBcH

#Gemma4e2b #Lora #Python #Googledeepmind #Codegeneration
Data in Motion: Mastered JSON, API Consumption, and Server-Side Rendering! 🐍

Today was all about bridging the gap between raw data and a polished user interface. I took a deep dive into how modern applications exchange information and how we present that information dynamically to users. Here's a breakdown of today's learning milestones:

🌐 APIs & the requests Library
Learned how to act as a "client" by fetching data from external web services. Mastered the requests library in Python to send HTTP requests and handle the responses programmatically.

📦 JSON (JavaScript Object Notation)
Explored the universal language of data. I now understand how to parse JSON responses and convert Python dictionaries into JSON for transmission. This is a critical skill for building any modern FastAPI or mobile-integrated backend.

🎨 Templates & Server-Side Rendering (SSR)
Learned the concept of Separation of Concerns: keeping my logic in Python and my presentation in HTML, and understanding how the server "injects" data into a template before sending it to the browser. This makes building dynamic, data-driven websites possible.

I'm starting to see how these pieces fit together for my India Mock Test Platform. I can now fetch external questions via APIs, process them as JSON, and render them beautifully for students!

#Python #WebDevelopment #APIs #JSON #ServerSideRendering #FastAPI #BackendDevelopment #CodingJourney #SoftwareEngineering #ContinuousLearning #TechCommunity
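The JSON round trip described above, in a minimal sketch (the payload is hypothetical example data, so no network call is needed):

```python
import json

# A sample API-style response body (hypothetical quiz question).
payload = '{"question": "2 + 2 = ?", "options": [3, 4, 5], "answer": 4}'

# Parse JSON text into Python objects...
data = json.loads(payload)
print(data["answer"])  # 4

# ...and serialize a Python dict back to JSON for transmission.
out = json.dumps({"score": 1, "correct": True})
print(out)  # {"score": 1, "correct": true}
```

With the requests library, response.json() performs the same parsing step on a live HTTP response body.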
Beginning this journey to learn in public and stay consistent.

Problem: Contains Duplicate (LeetCode 217)

Problem Statement: Given an integer array, return true if any value appears at least twice, otherwise return false.

Approach 1: Brute Force (Nested Loops)
- Compare every element with the others
- Time Complexity: O(n²) because of the nested loops
- Simple but inefficient for large inputs

    class Solution {
        public boolean containsDuplicate(int[] nums) {
            for (int i = 0; i < nums.length - 1; i++) {
                for (int j = i + 1; j < nums.length; j++) {
                    if (nums[i] == nums[j]) return true;
                }
            }
            return false;
        }
    }

Approach 2: Optimized (Sorting)
- Sort the array, then check adjacent elements
- Time Complexity: O(n log n)

    import java.util.Arrays;

    class Solution {
        public boolean containsDuplicate(int[] nums) {
            Arrays.sort(nums);
            for (int i = 1; i < nums.length; i++) {
                if (nums[i] == nums[i - 1]) return true;
            }
            return false;
        }
    }

Key Learning: Start with brute force, then optimize. How would you optimize this further?

One problem. Every day. No shortcuts, just consistency.

#LeetCode #Java #DSA #CodingChallenge #SoftwareEngineering #Arrays #Sorting
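One common answer to "optimize this further" is a hash set, which brings the average time down to O(n) at the cost of O(n) extra space (sketched here in Python; the Java equivalent uses a HashSet the same way):

```python
def contains_duplicate(nums):
    """O(n) average time, O(n) space: a set remembers what we've seen."""
    seen = set()
    for x in nums:
        if x in seen:
            return True   # early exit on the first repeat
        seen.add(x)
    return False

# Equivalent one-liner: duplicates exist iff deduplication shrinks the list.
# len(set(nums)) < len(nums)

print(contains_duplicate([1, 2, 3, 1]))  # True
print(contains_duplicate([1, 2, 3, 4]))  # False
```

Unlike the sorting approach, this leaves the input untouched and can stop as soon as the first duplicate appears.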