Day 17 of My 45-Day Python & DSA Journey
Topic: Searching Algorithms – Linear Search & Binary Search

Today, I explored one of the most important problem-solving areas in DSA — searching algorithms, which help find elements efficiently in a list or array.

🔹 What I Learned:

Linear Search
The simplest search method — check every element one by one until the target is found.

def linear_search(arr, key):
    for i in range(len(arr)):
        if arr[i] == key:
            return i
    return -1

nums = [10, 20, 30, 40, 50]
print(linear_search(nums, 30))  # Output: 2

Time Complexity: O(n)
Works on unsorted data.

Binary Search
A much faster method — repeatedly halve the search space.

def binary_search(arr, key):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == key:
            return mid
        elif arr[mid] < key:
            low = mid + 1
        else:
            high = mid - 1
    return -1

nums = [10, 20, 30, 40, 50]
print(binary_search(nums, 40))  # Output: 3

Time Complexity: O(log n)
Works only on sorted data.

Reflection: Today’s lesson made me realize the importance of choosing the right approach. Linear Search is simple, but Binary Search cuts the work from n comparisons to about log n, which is a huge saving on large sorted datasets — a great example of algorithmic efficiency.

Key Takeaway: “Efficiency is not just about speed; it’s about the smart use of logic.”

🔜 Next: I’ll move on to Sorting Algorithms — Bubble Sort, Selection Sort, and Insertion Sort, the building blocks for understanding algorithm design.

#Python #DSA #SearchingAlgorithms #BinarySearch #LinearSearch #CodingJourney #LearningInPublic #CodeEveryday
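For readers who want a shortcut: Python's standard library already ships a tested binary search in the bisect module. A minimal sketch (my addition, not part of the lesson above):

```python
from bisect import bisect_left

def binary_search_bisect(arr, key):
    # bisect_left returns the leftmost insertion point for key;
    # the key is present only if that position actually holds it
    i = bisect_left(arr, key)
    if i < len(arr) and arr[i] == key:
        return i
    return -1

nums = [10, 20, 30, 40, 50]
print(binary_search_bisect(nums, 40))  # Output: 3
print(binary_search_bisect(nums, 35))  # Output: -1
```

Like the hand-rolled version, this requires sorted input; unlike it, the halving loop is maintained and tested in CPython itself.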
More Relevant Posts
-
Ever wonder how 'resilient' your code really is? Built this quick Python scorer: Blends structure (goals/steps/tests) with resonance (soft vs harsh lang) for a coherence metric—ρ=exp(-λD-κIu) guards the edge of chaos. Demo on sample snippets: density 0.65, coherence 0.94 (solid pass!). Open to thoughts or collabs on agent tools. #Python #AIAgents #CodeResilience

from math import exp
from typing import List, Dict
import re
import statistics as stats

# Code Resilience Scorer
SOFT = {"we", "let", "can", "now", "together", "clear", "safe", "steady", "choice"}
HARSH = {"must", "always", "never", "fail", "worthless", "stupid", "idiot"}

def resonance_score(text: str) -> float:
    words = re.findall(r"\w+", text.lower())
    if not words:
        return 0.5
    s = sum(w in SOFT for w in words)
    h = sum(w in HARSH for w in words)
    base = 0.6 + 0.05 * s - 0.08 * h
    return max(0.0, min(1.0, base))

def structure_ok(text: str) -> bool:
    g = re.search(r"\b(goal|objective|spec|requirement)\b", text, re.I)
    s = re.search(r"\b(step|procedure|algorithm|pipeline|pseudo)\b", text, re.I)
    t = re.search(r"\b(test|assert|verify|benchmark|pass|fail)\b", text, re.I)
    return sum(bool(x) for x in (g, s, t)) >= 2

def coherence_summary(snippets: List[str], lam: float = 3.7, kappa: float = 0.18) -> Dict:
    non_empty = [s for s in snippets if s and s.strip()]
    if not non_empty:
        density = 0.0
        disp = 0.0
    else:
        struct = [1.0 if structure_ok(s) else 0.3 for s in non_empty]
        density = sum(struct) / len(non_empty)
        res = [resonance_score(s) for s in non_empty]
        disp = stats.pvariance(res) if len(res) > 1 else 0.0
    iu = disp + (1.0 - density)
    coh = exp(-(lam * disp + kappa * iu))
    return {"density": round(density, 4), "dispersion": round(disp, 6), "coherence": round(coh, 6)}

# Demo
demo_snippets = [
    "Goal: Build a resilient agent. Step: Add coherence check. Test: Rho > 0.5",
    "Explain core logic clearly. Include a simple benchmark for stability.",
]
result = coherence_summary(demo_snippets)
print(result)
# Output: {'density': 0.65, 'dispersion': 0.0, 'coherence': 0.938943}
-
My first Jupyter Notebook For Python Variables! ⚡

Variables are simple yet powerful. Since I’m diving deeper into Python for AI & ML, here’s what I practiced today 👇

🔹 Purpose:
→ Variables store and manage data in your programs.
→ Python’s dynamic typing makes it flexible and beginner-friendly — perfect for AI, ML, and data science.

🔹 Syntax Simplicity:
Python is readable and beginner-friendly:

name = "Sidraa"
age = 20
is_learning = True

JavaScript is more structured but similar in logic:

let name = "Sidraa";
let age = 20;
let isLearning = true;

🔹 Use Cases:
Python variables → store user input, model parameters, temporary calculations, flags for program flow.

🔹 Reassigning & Type Casting:
Python allows easy updates and conversions:

score = 10
score = 15  # updated value
num_str = "100"
num_int = int(num_str)  # converts string to integer

Quick Question: How do you usually organize and name your Python variables? Let me know in the comments!

---------------------------
☺️ Here is my Python Variables Exercise (Beginner to Intermediate) GitHub Repo for you:
Python Variables: https://lnkd.in/e9rjz-_D
-------------------------
⚡ Follow my learning journey:
📎 GitHub: https://lnkd.in/ehu8wX85
💬 Feedback: I’d love your thoughts and tips!
🤝 Collab: If you’re also exploring Python, DM me! Let’s grow together!
--------------------------
#python #variables #machinelearning #artificialintelligence #deeplearning #codingjourney #AI #ML #PythonBasics
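The reassigning and casting ideas above can be shown live with type() (a small sketch of mine; the variable names are illustrative):

```python
score = 10
print(type(score).__name__)  # int

# Dynamic typing: rebinding the same name to a different type is fine
score = "ten"
print(type(score).__name__)  # str

# Explicit type casting converts between types
num_str = "100"
num_int = int(num_str)
print(num_int)  # 100
```

A type belongs to the value, not the name, which is why the same variable can hold an int one moment and a str the next.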
-
Python for AI: The Reference I Wish I Had

When I started learning AI, I spent hours searching for Python syntax. Switching between tabs. Forgetting basic operations. So I built this reference.

9 Python concepts that appear in every AI project I build:
✓ Variables & Data Types - How to store your data
✓ Lists & Operations - Managing datasets
✓ Loops - Processing multiple items
✓ Conditionals - Making decisions in code
✓ Functions - Building reusable components
✓ Dictionaries - Storing configuration
✓ String Methods - Text preprocessing
✓ File Operations - Loading data
✓ Essential Imports - The libraries that matter

Each slide shows working code. No theory. Just what actually runs.

If you are new to Python, W3Schools.com is a great platform to start with.

Which concept do you struggle with the most?

Get the editable carousel file here to build your own: https://lnkd.in/g38g4R5x
Download the PDF from my telegram channel: t.me/factzandcodeofficial

Follow Arijit Ghosh for daily shares

#Python #AI #MachineLearning #DataScience #Coding
-
💻 Handwritten Digit Recognizer using KNN

🚀 Excited to share my latest ML project — a Handwritten Digit Recognizer built using the K-Nearest Neighbors (KNN) algorithm!

🧠 Tech Stack:
Python 🐍
scikit-learn
OpenCV
Streamlit (for the web app interface)

🎯 About the Project:
This app takes your handwritten digit as input (drawn on canvas) and predicts the correct digit using a KNN classifier trained on the Digits dataset from scikit-learn.

🔗 Try it here: 👉 https://lnkd.in/gXpC8RWM
GitHub repo: https://lnkd.in/gMc3z3GN

A small step in exploring Machine Learning and Model Deployment! ✨

#MachineLearning #KNN #Streamlit #AI #DataScience #Python #MLProjects
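For anyone curious how such a classifier is trained, here is a minimal sketch using scikit-learn's bundled Digits dataset (the n_neighbors value is my illustrative choice, not necessarily what the app uses):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load 8x8 grayscale digit images as flat 64-feature vectors
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=42
)

# KNN: classify each test image by majority vote of its k nearest training neighbors
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
print(f"Test accuracy: {knn.score(X_test, y_test):.3f}")
```

A real canvas app adds a preprocessing step (e.g. with OpenCV): the drawing must be resized to 8x8 and rescaled to the dataset's 0-16 intensity range before prediction.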
-
🚀 Day 6 of 7: Learning Machine Learning from O’Reilly’s Introduction to Machine Learning with Python

Today’s concept: Method Chaining 🔗

If you’ve ever seen lines like this in Python 👇

df.dropna().groupby('category').mean().reset_index()

and wondered “What’s going on here?”, that’s method chaining in action! 🚀

📌 What it means:
Method chaining is when you call multiple methods sequentially on the same object — without creating temporary variables each time. Each method returns an object, allowing the next method to be called directly.

⚙️ Without chaining (same logic):

cleaned = df.dropna()
grouped = cleaned.groupby('category').mean()
result = grouped.reset_index()

Both work — but chaining feels smoother and more elegant 💫

💡 Reference from the book: Introduction to Machine Learning with Python (pg. 68)

A common application of method chaining in scikit-learn is to fit and predict in one line:

logreg = LogisticRegression()
y_pred = logreg.fit(X_train, y_train).predict(X_test)

Finally, you can even do model instantiation, fitting, and predicting in one line:

y_pred = LogisticRegression().fit(X_train, y_train).predict(X_test)

👉 Note: This very short variant is not ideal, though. A lot is happening in a single line, which might make the code hard to read. Additionally, the fitted logistic regression model isn’t stored in any variable, so we can’t inspect it or use it to predict on any other data.

#MachineLearning #DataScience #Python #scikitLearn #OReilly #LearningJourney #AI
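Method chaining is not pandas-specific: it works with any object whose methods return another object. A tiny self-contained string sketch (example of my own):

```python
raw = "  Hello, Method Chaining!  "

# Each string method returns a new string, so the next call chains directly
cleaned = raw.strip().lower().replace(",", "").replace("!", "")
print(cleaned)  # hello method chaining
```

The same rule explains the scikit-learn one-liner: fit() returns the fitted estimator itself, which is why predict() can be called right after it.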
-
Day 13: Python Statements (My TechRise Cohort 2.0 Journal)

In Python, a statement is any instruction the interpreter can execute, from printing a value to running an entire machine learning algorithm.

Here’s a quick breakdown 👇🏾👇🏾

✍🏾 Types of Python Statements:

Print statement: displays output.
print("Hello, Python!")

Assignment statement: stores values.
x = 10

Conditional statement: controls flow.
if x > 0:
    print("Positive")

Looping statement: repeats actions.
for i in range(5):
    print(i)

Other powerful statements include:
try – for error handling
with – for managing files and resources
while – for continuous looping

🛣️ In AI & Machine Learning 🛣️
Python statements are the backbone of AI/ML models:
Assignment statements store model weights and parameters.
Loops train models over datasets.
Conditionals control learning logic and evaluation.
With/try statements manage data files and handle exceptions gracefully.
Every AI workflow, from data preprocessing to prediction, is powered by simple Python statements.

Try this: 👇🏾

total = 0
for i in range(2, 22, 2):
    total += i
print(total)

(Using total instead of sum here avoids shadowing Python’s built-in sum function.)

What do you notice in the output? 👀 Drop your thoughts in the comments!

#Python #AI #MachineLearning #TechRiseCohort2 #CodingJourney #PythonStatements
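The try and with statements listed above without code deserve a quick sketch too (the filename demo.txt is made up for illustration):

```python
# try/except: handle an error instead of crashing
try:
    result = 10 / 0
except ZeroDivisionError:
    result = float("inf")
print(result)  # inf

# with: open a resource and close it automatically, even if an error occurs
with open("demo.txt", "w") as f:
    f.write("Hello, statements!")

with open("demo.txt") as f:
    print(f.read())  # Hello, statements!
```

Without the with block, you would need an explicit f.close() in a finally clause to guarantee the file is released.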
-
📊 Visualizing How AI Learns — With Python 🧠🐍

The image above shows two 3D surfaces plotted in Python: mathematical landscapes defined by f(x, y) = x² + xy² and f(x, y) = 2x + y².

These aren’t just cool visuals 👀 They represent the loss surfaces that every AI model must navigate to learn.

🔍 Why this matters for AI
⛰️ Peaks = bad solutions
🌄 Valleys = good solutions
📉 Gradients guide models downhill toward better performance
🧭 The curvature shows how hard it is for algorithms like gradient descent to find the best parameters

🐍 Why Python?
Using SymPy, NumPy, and Matplotlib, we can literally see how models improve by following the slope of these surfaces.

💡 The takeaway
These 3D plots aren’t just math, they’re the terrain AI walks through as it learns, improves, and optimizes itself.

#AI #Python #MachineLearning #DeepLearning #DataScience #Visualization #STEM #Innovation
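To make "gradients guide models downhill" concrete, here is a minimal gradient descent sketch of mine on the simple bowl f(x, y) = x² + y² (chosen for clarity; the plotted surfaces above have trickier geometry):

```python
# Gradient descent on f(x, y) = x^2 + y^2, whose minimum sits at (0, 0)
def grad(x, y):
    return 2 * x, 2 * y  # partial derivatives df/dx, df/dy

x, y = 3.0, -4.0  # start somewhere on the surface
lr = 0.1          # learning rate: the size of each downhill step

for _ in range(100):
    gx, gy = grad(x, y)
    x, y = x - lr * gx, y - lr * gy

print(f"({x:.6f}, {y:.6f})")  # both coordinates approach 0
```

Each step shrinks the coordinates by a factor of (1 - 2*lr), so the point slides down the bowl toward the minimum, which is exactly what an optimizer does on a real loss surface.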
-
How does Machine Learning work using Python?

1. Create a model
2. Fit it
3. Train on the data
4. Test it
5. Check accuracy

Using Python + scikit-learn with a basic train/test split and a classification model (Logistic Regression example).

Machine Learning Workflow

1. Import Required Libraries

from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
import pandas as pd

2. Load or Create Your Dataset

# Example dummy dataset
data = {
    "feature1": [1, 2, 3, 4, 5, 6, 7, 8],
    "feature2": [5, 4, 3, 2, 1, 6, 7, 8],
    "label": [0, 0, 0, 1, 1, 1, 1, 1]
}
df = pd.DataFrame(data)

3. Split into Features and Labels

X = df[["feature1", "feature2"]]
y = df["label"]

4. Train–Test Split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

5. Create the Model

model = LogisticRegression()

6. Fit (Train) the Model

model.fit(X_train, y_train)

7. Predict on Test Data

y_pred = model.predict(X_test)

8. Check Accuracy

accuracy = accuracy_score(y_test, y_pred)
print("Model Accuracy:", accuracy)

Output Example

You may see something like:

Model Accuracy: 1.0

(With 8 rows and test_size=0.2, the test set holds only two samples, so the possible scores are 0.0, 0.5, and 1.0 — use a larger dataset for a meaningful accuracy estimate.)

#ml
-
🧮 The Curious Case of Why NumPy Is So Fast

Went through NumPy today and felt like sharing this.

Why do developers always prefer NumPy over regular Python lists? At first glance, both can store numbers, right? But here’s where it gets interesting.

Imagine you’re solving a math problem with a regular Python list. Python goes through each number one by one, like a slow cashier scanning items individually. But NumPy? It’s like having an entire team of cashiers scanning everything all at once.

That’s because NumPy is written in C, a low-level language that talks directly to your computer’s hardware. It stores numbers in contiguous memory blocks and performs vectorized operations, meaning it can handle whole arrays in a single shot — no slow loops, no waiting.

The result: massive speed-ups and efficient memory usage, especially in AI, data science, and scientific research.

So the next time someone tells you Python is slow, maybe remind them — it’s only slow until NumPy shows up.

Thank you, Maxim Nizhar bhaiya, for covering so much in such a short time. I truly appreciate the effort and clarity you brought to the session.
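The cashier analogy can be checked in a few lines (a sketch of mine; exact timings vary by machine, but the speed-up comes from NumPy's vectorized C loop over contiguous memory):

```python
import time

import numpy as np

n = 1_000_000
py_list = list(range(n))
np_arr = np.arange(n)

# Pure Python: the interpreter touches each element one by one
t0 = time.perf_counter()
py_squares = [v * v for v in py_list]
py_time = time.perf_counter() - t0

# NumPy: one vectorized operation over the whole array at once
t0 = time.perf_counter()
np_squares = np_arr * np_arr
np_time = time.perf_counter() - t0

print(f"list comprehension: {py_time:.4f}s, numpy: {np_time:.4f}s")
```

On typical hardware the NumPy line is one to two orders of magnitude faster, and both produce identical results.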