Python type checkers can catch more than type mismatches. To understand your code's types, a type checker also has to understand control flow, scoping, and class hierarchies, which means it can catch a surprisingly wide range of bugs. In this Pyrefly blog, Rebecca Chen walks through five surprising bug patterns; none is a straightforward type error, yet Pyrefly catches them all ✨ https://lnkd.in/eyJpQ7uY #python #pyrefly
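One illustration of the kind of non-type-error bug a flow-aware checker can flag (my own example, not necessarily one of the five from the blog): a variable that is only assigned on some branches. Catching it requires control-flow analysis, not type matching.

```python
def describe(value: int) -> str:
    if value > 0:
        label = "positive"
    elif value < 0:
        label = "negative"
    # When value == 0, `label` is never assigned, so the return below
    # raises UnboundLocalError at runtime. A flow-aware checker can
    # report `label` as possibly unbound here, even though every
    # individual type in the function is correct.
    return label

print(describe(5))  # fine until the untested branch fires
```

The types all line up; only following the branches reveals the bug.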
Pyrefly Catches Surprising Bug Patterns in Python Code
More Relevant Posts
SLOW DRIFT MATCH! "RIEMANN" VERSUS "THE PYTHON" 4 of 4. Totals from both competitors after 4 rounds of "Riemann & the Monty Python": 100 punches thrown / 56 landed by Riemann through 4 rounds, and 150 punches thrown / 111 punches landed!!! HOWEVER… multiple points were deducted from the Monty Python for the unnecessary 256 bytes sent Riemann's way.

Beyond the Alert: Using Riemannian Logic to Catch Data Drift. The first 2 rounds were "fences" (tiered thresholds); the 3rd and 4th rounds were all about the "flow." 🌊

In high-velocity data, a single breach is often just noise. The real danger is "Slow Drift": a persistent, subtle climb that indicates a systemic failure. To catch this, I've integrated a "Phase 2" logic that takes inspiration from Riemannian geometry.

The Logic: Instead of viewing data as a flat list, we treat it as a curved space. By calculating the Riemannian distance between our current "rolling average" and our historical baseline, we can detect when the "geometry" of our performance is starting to warp.

Why it works:
- Riemann sums for trends: We use discrete data "slices" to approximate a continuous trend line, catching shifts before they ever hit the red zone.
- Predictive awareness: We aren't just asking "is it broken?" We are measuring the "drift" relative to our "Critical Line," the 1.0001 threshold.

Data isn't static; it's a landscape. If you aren't measuring the curves, you're missing half the story. 📈🚀

Result: Riemann was the clear winner after all the point deductions. Don't tell Monty! Bye byee!!! 4 of 4
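Metaphor aside, the mechanism the post describes (a rolling average compared against a historical baseline and the 1.0001 critical line) can be sketched in a few lines. This is my own simplification; the plain ratio test below stands in for whatever Riemannian distance the author actually computes, which isn't shown in the post.

```python
from collections import deque

CRITICAL_LINE = 1.0001  # the post's "Critical Line" threshold


def make_drift_detector(baseline: float, window: int = 20):
    """Track a rolling average and flag slow drift past the critical line."""
    recent = deque(maxlen=window)

    def observe(value: float) -> bool:
        recent.append(value)
        rolling = sum(recent) / len(recent)
        # "Drift" here is the ratio of the rolling average to the baseline.
        # A persistent ratio above CRITICAL_LINE signals slow drift, even
        # when no single sample breaches a hard alert fence.
        return rolling / baseline > CRITICAL_LINE

    return observe


detect = make_drift_detector(baseline=100.0, window=5)
# Each sample is barely above baseline, noise-level individually,
# but the rolling average crosses the line on the third observation:
alarms = [detect(v) for v in [100.005, 100.01, 100.02, 100.03, 100.04]]
```

A tiered-fence check would ignore every one of those samples; the rolling comparison is what surfaces the trend.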
Level Up Your Python API Design: Mastering / and *

Have you ever looked at a Python function signature and wondered what those / and * symbols actually do? While many developers stick to standard arguments, modern Python (3.8+) provides surgical precision over how functions receive data. Understanding this is key to building robust, self-documenting APIs.

Check out this "Ultimate Signature" example:

def foo(pos1, pos2, /, pos_or_kwd1, pos_or_kwd2='default',
        *args, kwd_only1, kwd_only2='default', **kwargs):
    print(
        f"pos1={pos1}",
        f"pos2={pos2}",
        f"kwd_only1={kwd_only1}",
        # ... and so on
    )

The Breakdown:
- Positional-Only (/): Everything to the left of the slash must be passed by position. You cannot call foo(pos1=1). This is perfect for performance and keeping your API flexible for future parameter renaming.
- Positional-or-Keyword: The "classic" Python parameters that can be passed either way.
- The Collector (*args): Grabs any extra positional arguments and packs them into a tuple.
- Keyword-Only: Everything after *args (or a standalone *) must be named explicitly. This prevents "magic number" bugs and makes the intent of the caller crystal clear.
- The Dictionary (**kwargs): Catches any remaining keyword arguments.

Why should you care? Good code isn't just about making it work; it's about making it hard to use incorrectly. By using these boundaries, you create a strict contract. You force clarity where it's needed (Keyword-Only) and allow flexibility where it's not (Positional-Only).

Are you using these constraints in your daily development, or do you prefer keeping signatures simple? Let's discuss below! 👇

#Python #SoftwareEngineering #CleanCode #Backend #ProgrammingTips #Python3 #CodingLife
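The boundaries become concrete once you try to violate them. A minimal sketch (the connect() function here is my own illustration, not from the post):

```python
def connect(host, port, /, timeout=10.0, *, retries=3):
    """host/port: positional-only; timeout: either; retries: keyword-only."""
    return f"{host}:{port} timeout={timeout} retries={retries}"


print(connect("db.local", 5432))                        # fine
print(connect("db.local", 5432, timeout=5, retries=1))  # fine

try:
    connect(host="db.local", port=5432)  # positional-only passed by name
except TypeError as e:
    print("rejected:", e)

try:
    connect("db.local", 5432, 5.0, 1)    # retries must be named
except TypeError as e:
    print("rejected:", e)
```

Both rejected calls fail at the call site with a TypeError, which is exactly the "hard to use incorrectly" contract the post describes.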
Async IO in Python is single-threaded. No mutexes, no surprise context switches, and no race conditions between awaits. Coming from multi-threaded code, this felt like cheating. With threads, anything can interrupt anything. You lock shared state, hope you got it right, and debug it six months later when you didn't. With async, control only transfers at await. That's it. Your data is safe between those points by definition, not by luck. The payoff was immediate: I refactored a GitHub API client to fetch a user profile and a repo list at the same time using asyncio.gather(). Two concurrent HTTP calls. No threads, no locks. The mental-model shift took longer than the code change. If you've been avoiding async because threads burned you before, it's not the same thing.
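The pattern described, two awaitables launched together with no locks, looks roughly like this (sleeps stand in for the GitHub HTTP calls, which aren't shown in the post):

```python
import asyncio


async def fetch_profile(user: str) -> dict:
    await asyncio.sleep(0.1)  # stand-in for an HTTP request
    return {"login": user}


async def fetch_repos(user: str) -> list:
    await asyncio.sleep(0.1)  # overlaps with the wait above
    return [f"{user}/demo"]


async def main(user: str):
    # Both coroutines are in flight at once. Control only switches at
    # await, so nothing between awaits needs a lock.
    profile, repos = await asyncio.gather(
        fetch_profile(user), fetch_repos(user)
    )
    return profile, repos


profile, repos = asyncio.run(main("octocat"))
```

Total wall time is about one 0.1 s wait, not two, because the sleeps run concurrently on a single thread.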
𝗣𝘆𝘁𝗵𝗼𝗻 𝗠𝗲𝘁𝗮𝗰𝗹𝗮𝘀𝘀𝗲𝘀 𝗘𝘅𝗽𝗹𝗮𝗶𝗻𝗲𝗱 Everything in Python is an object. Classes are objects too: they are instances of a metaclass. type is the default metaclass. It builds your classes. Call type() to create a class without the class keyword. Python follows four steps to build a class:
- Find the metaclass.
- Set up the namespace.
- Run the class body.
- Create the class object.
Use __new__ to change the class before it exists. Use __init__ to record the class after it exists. Most developers do not need metaclasses. Use __init_subclass__ instead. It handles registration and interface checks. It is simpler to read. Avoid metaclasses in app code. They are hard to debug. Use them for frameworks. Otherwise, use decorators. The best metaclass is the one you do not write. Source: https://lnkd.in/gMfJU9Nx
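Two of the points above in code: calling type directly to build a class, and __init_subclass__ handling registration without any metaclass.

```python
# 1. type is the default metaclass: classes can be built by calling it.
#    type(name, bases, namespace) is what the class keyword does for you.
Point = type("Point", (), {"x": 0, "y": 0})
assert isinstance(Point, type)


# 2. __init_subclass__ covers the common metaclass use case: registration.
class Plugin:
    registry = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        Plugin.registry[cls.__name__] = cls  # runs once per subclass


class CsvPlugin(Plugin): ...
class JsonPlugin(Plugin): ...

print(sorted(Plugin.registry))
```

No metaclass written, yet every subclass registers itself the moment it is defined.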
🧠 Python Concept: strip(), lstrip(), rstrip(). Clean your strings like a pro 😎

❌ Problem:
text = "   Hello Python   "
print(text)
👉 Output: "   Hello Python   " 😵💫 (extra spaces)

❌ Traditional way:
text = "   Hello Python   "
text = text.replace(" ", "")
print(text)
👉 Output: "HelloPython" (removes ALL spaces, including the one inside) ❌

✅ Pythonic way:
text = "   Hello Python   "
print(text.strip())   # both sides
print(text.lstrip())  # left only
print(text.rstrip())  # right only

🧒 Simple explanation: think of it like cleaning dust 🧹
➡️ strip() → clean both sides
➡️ lstrip() → clean left
➡️ rstrip() → clean right

💡 Why this matters:
✔ Clean user input
✔ Avoid bugs in comparisons
✔ Very useful in real-world apps
✔ Cleaner string handling

⚡ Bonus example:
text = "---Python---"
print(text.strip("-"))
👉 Output: "Python"

🐍 Clean data, clean code 🐍 Small functions, big impact
#Python #PythonTips #CleanCode #LearnPython #Programming #DeveloperLife #100DaysOfCode
🐍 The most misunderstood line in Python is this: for item in [1, 2, 3]: Most developers think the for loop just "goes through the list". What it actually does: calls iter([1,2,3]) to get an iterator, then calls next() on it repeatedly until StopIteration is raised. That's the entire protocol. Once you understand that, generators click immediately. A generator function with yield IS an iterator — Python implements iter and next automatically. And the magic of yield is that the function pauses at each yield and resumes from there on the next call. Full guide: iterator protocol from scratch, generator functions vs expressions, yield from for delegation, lazy 5-stage file processing pipeline, context managers (enter/exit), @contextmanager, suppress, ExitStack, and send()/throw() for two-way generator communication. A generator expression uses 200 bytes. An equivalent list uses 8MB. For the same data. 📎 Free PDF. Zero pip installs — pure Python standard library. #Python #Generators #Iterators #ContextManagers #PythonProgramming #SoftwareEngineering #CleanCode #BackendDev #Programming
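The desugaring described above, written out explicitly:

```python
items = [1, 2, 3]

# What `for item in items:` actually does under the hood:
it = iter(items)          # calls items.__iter__()
collected = []
while True:
    try:
        item = next(it)   # calls it.__next__()
    except StopIteration:  # the loop's normal exit signal, not an error
        break
    collected.append(item)


# A generator function IS an iterator: __iter__/__next__ come for free.
def count_up(n):
    for i in range(1, n + 1):
        yield i  # pauses here; resumes from this line on the next next()


gen = count_up(2)
assert iter(gen) is gen   # a generator is its own iterator
print(collected)
```

Same protocol on both sides: the while/try loop and the for loop are interchangeable, and the generator plugs into either.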
spent the last few days reading the original HNSW paper and rebuilding it from scratch in Python. HNSW is the algorithm running under every vector database you've heard of. Pinecone, Qdrant, Weaviate. most people just pip install and move on. I wanted to know what's actually in there. the idea is a layered graph. bottom layer has every node, densely connected. higher layers are sparser, fewer nodes, longer jumps. when you search you enter at the top, take big jumps to get in the right neighborhood, then drop down layer by layer to zero in. total hops maybe 15. brute force would check everything. the thing that genuinely got me was that layer assignment is random, not learned. turns out that's on purpose. any deterministic scheme creates hotspot nodes that everybody routes through and they become bottlenecks. randomness spreads the load. simple idea, non obvious consequence. numbers from my implementation at N=5000, 128 dim vectors: ef=10 gives 32.4% recall at 335 QPS, 13.6x faster than brute force ef=50 gives 76.3% recall at 114 QPS ef=100 gives 92% recall at 69 QPS ef=200 gives 99.1% recall at 40 QPS one number controls the whole speed accuracy tradeoff. no reindexing, no rebuild, just change ef at query time. also looked at M, the max neighbours parameter. M=8 gets you 52% recall, M=32 gets you 94.4% but builds a shallower graph because each node is already so well connected you need less hierarchy. that tradeoff took me a while to actually feel. this is a teaching implementation, pure Python, every line commented. production systems rewrite the inner loop in Rust with SIMD which is why they are orders of magnitude faster. the whole point here was just to understand what the algorithm is actually doing. code is up at https://lnkd.in/gjYr-C8P if you want to follow along with the paper side by side. storage engine next.
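The random layer assignment mentioned above has a precise form in the HNSW paper: each new node's top layer is l = floor(-ln(U) * mL), with U uniform on (0, 1) and mL conventionally set to 1/ln(M). A sketch (variable names are mine, not the repo's):

```python
import math
import random


def assign_layer(m_l: float = 1 / math.log(32)) -> int:
    """Random top layer for a new node, per the HNSW paper.

    Most nodes land on layer 0; each higher layer holds an
    exponentially smaller fraction, giving the sparse upper levels
    used for long jumps. Because no node is deterministically
    'special', no node becomes a routing hotspot.
    """
    return int(-math.log(random.random()) * m_l)


random.seed(0)
layers = [assign_layer() for _ in range(10_000)]
share_layer0 = layers.count(0) / len(layers)
print(f"fraction on layer 0: {share_layer0:.3f}")
```

With mL = 1/ln(32), roughly 31 out of 32 nodes stay on layer 0, matching the "densely connected bottom, sparse top" picture in the post.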
I just shipped a new feature to pydepgate: partial decode. When the scanner finds a high-entropy encoded blob, it now attempts to decode it and show you what's inside, without executing any of it.

Here's what that looks like against the litellm 1.82.8 wheel. The 34,460-character string in proxy/proxy_server[.]py that I flagged in my last post? pydepgate now decodes it automatically. One layer of base64, 34,460 characters down to 25,844 bytes. Final form: Python source code.

What's in that Python source? The first thing the decoder sees is import subprocess. Then import tempfile. Then import os. Then a PEM public key block. That's a complete second-stage payload, encoded and sitting inside a production Python package used by thousands of developers. The outer package does its advertised job. The encoded blob waits.

pydepgate didn't execute it. It didn't import it. It decoded the bytes statically, identified the content type, extracted the indicators, and showed you the hex. The tool's job is to tell you what's there before anything runs, and now it can tell you more precisely what "there" is.

--peek to enable decoding. --peek-chain to follow multi-layer encoding if the first decode produces another encoded blob. Still zero dependencies. Still stdlib only.

pydepgate 0.2.0 is on PyPI now. By the way, there are over 700 unit tests; this project is covered extensively.
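The general technique (flag a high-entropy string, then decode it statically and scan for indicators) can be sketched in stdlib-only Python. To be clear, this is not pydepgate's code, just the idea; the entropy threshold is an illustrative cutoff:

```python
import base64
import math
from collections import Counter


def shannon_entropy(s: str) -> float:
    """Bits per character; base64-encoded payloads score high."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())


def peek(blob: str, threshold: float = 3.0) -> list:
    """Statically decode a suspicious blob and report import indicators.

    Nothing is executed or imported; the bytes are only inspected.
    """
    if shannon_entropy(blob) < threshold:
        return []  # plain, repetitive text: not worth decoding
    try:
        decoded = base64.b64decode(blob, validate=True).decode("utf-8")
    except Exception:
        return []  # not valid base64 / not text under this one scheme
    return [line for line in decoded.splitlines()
            if line.startswith(("import ", "from "))]


payload = "import subprocess\nimport tempfile\nimport os\n"
indicators = peek(base64.b64encode(payload.encode()).decode())
print(indicators)
```

A chained version would simply feed the decoded output back through the same check, which is presumably the shape of --peek-chain.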
🚀 Day 19/60 – Iterators (Understand How Python Loops Work Internally ⚡)

Yesterday you learned Decorators. Today, let's go deeper into how Python actually loops 👇

🧠 What is an Iterator? An iterator is an object that lets you loop through data one item at a time.
👉 Implements __iter__() and __next__()
👉 Used behind every for loop

🔄 How a for loop works internally:
numbers = [1, 2, 3]
iterator = iter(numbers)
print(next(iterator))  # 1
print(next(iterator))  # 2
print(next(iterator))  # 3
👉 StopIteration is raised at the end

⚡ Custom Iterator:
class CountUp:
    def __init__(self, max):
        self.max = max
        self.current = 1

    def __iter__(self):
        return self

    def __next__(self):
        if self.current > self.max:
            raise StopIteration
        val = self.current
        self.current += 1
        return val

for num in CountUp(5):
    print(num)

🔍 Iterator vs Iterable:
👉 Iterable → Object you can loop over (list, tuple, string)
👉 Iterator → Object that actually produces values

🔥 Why Iterators Matter:
✅ Memory efficient
✅ Lazy evaluation
✅ Core of generators

❌ Common Mistake: confusing iterable with iterator ❌
👉 Not every iterable is an iterator

🔥 Pro Tip:
👉 Use iter() to convert iterable → iterator
👉 Use next() to manually fetch values

🔥 Challenge for today:
👉 Create a custom iterator that returns numbers from 1 to 3
👉 Use next() manually
Comment "DONE" when finished ✅

Follow Adeel Sajjad to stay consistent for 60 days 🚀
#Python #PythonProgramming #LearnPython #Coding #Programming #Developer
In a world where the machine writes most of the code, Python lacks solid type enforcement and Rust is overly strict with its complex lifetimes, while Go strikes the right balance by catching critical issues without hindering development velocity. The article argues in favor of Go over Python and Rust for AI-generated code, citing Go's efficiency, simplicity, clear error handling, and easy deployment. https://lnkd.in/eEgmd4qW --- Like what you see? Subscribe 👉 https://faun.dev/join