I've shared requirements.txt files generated with pip freeze and watched them fail on every machine that wasn't mine. So I built envcore, because waiting for the Python ecosystem to fix a 15-year-old problem seemed optimistic.

It hooks directly into Python's import system and records what actually loads while your code runs. Not what's installed on your machine. Not what a static scanner thinks might be imported. What. Actually. Runs.

envcore trace train.py → env_manifest.json → envcore restore

A clean, pinned, minimal manifest. The exact environment rebuilt anywhere. No 200-package soup, no missing runtime imports, no "works on my machine" as if that's a valid thing to say to another human.

It also resolves import aliases correctly — PIL to Pillow, cv2 to opencv-python, sklearn to scikit-learn — because the gap between what you type and what you install has existed forever and apparently needed one person to care.

pip freeze has been lying to you for 15 years. Everyone accepted it. I got tired of it.

30 seconds to try: pip install envcore

If it's useful, a GitHub star helps a new project get noticed.
https://lnkd.in/dz3MFTbD

#Python #OpenSource #DevTools
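The import-hook idea the post describes can be sketched in a few lines of stdlib-only Python. This is a hypothetical illustration, not envcore's actual code: a finder on sys.meta_path sees every import before the normal machinery does, so it can record the top-level module names that really load, then map them through the alias table the post mentions.

```python
import importlib.abc
import sys

# Hypothetical sketch, not envcore's implementation: a sys.meta_path hook
# that records which top-level modules really get imported at runtime.
class ImportRecorder(importlib.abc.MetaPathFinder):
    def __init__(self):
        self.seen = set()

    def find_spec(self, fullname, path, target=None):
        self.seen.add(fullname.split(".")[0])  # record top-level name
        return None  # defer to the normal finders so imports still work

# Import-name -> install-name aliases the post mentions.
ALIASES = {"PIL": "Pillow", "cv2": "opencv-python", "sklearn": "scikit-learn"}

recorder = ImportRecorder()
sys.meta_path.insert(0, recorder)

import colorsys  # noqa: F401  (recorded by the hook)
import wave      # noqa: F401

sys.meta_path.remove(recorder)
print(sorted(ALIASES.get(m, m) for m in recorder.seen))
```

Returning None from find_spec is what makes this purely observational: the recorder never resolves anything itself, it just watches the names go by.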
Jan Bremec’s Post
More Relevant Posts
🚀 Mg 0.6.0 is out. This release brings Mg up to date with SysML v2 (0.57.0, 2026-02), aligning with the latest Pilot Implementation and pushing the ecosystem forward. We’ve also upgraded to Python 3.13.12, ensuring better compatibility with modern tooling. Most notably, Mg now supports MgS expressions—unlocking more expressive, flexible modeling workflows. More details here: https://lnkd.in/e4rjvi8i We’re continuing to expand integration across DOORS Next, Mg, and MgX. Stay tuned. #SysMLv2 #MBSE #SystemsEngineering #Modeling #Python
Path vs. Query Parameters — Know the difference!

One of the most common questions when building APIs is: "Should this go in the URL path or as a query string?" In FastAPI, the distinction is clean and easy to implement.

📍 Path Parameters: used to identify a specific resource. Example: /users/{user_id}. Use these when the data is mandatory to find the object.

🔍 Query Parameters: used for filtering, sorting, or pagination. Example: /users?active=true&sort=desc. Use these for optional parameters that modify the results.

FastAPI is smart enough to distinguish them just by how you define your function arguments: if a parameter appears in the path, it's a path parameter; if not, it's a query parameter. Simple as that! 🚀

#Python #FastAPI #WebDevelopment #Backend #RESTAPI #CodingTips #30DaysOfFastAPI
🚀 Top FastAPI Packages You Should Know in 2026 (Part 2)

FastAPI already gives you speed. The real difference comes from the tools you add on top. These are a few packages I've been exploring recently while building APIs 👇

💡 Small observation: most performance gains don't come from FastAPI itself. They come from how you structure, cache, and protect your APIs.

If you're interested in MCP + FastAPI use cases, I recently explored it here: https://lnkd.in/dbMFET_A

Which one are you using in your projects right now?

#FastAPI #Python #BackendDevelopment #APIs #WebDevelopment #OpenSource #DeveloperTools
I replaced Pydantic in one of my FastAPI projects this week. A two-line change. Validation is now 523x faster.

The library is 𝗱𝗵𝗶, a drop-in Pydantic replacement powered by Zig + SIMD.

Migration:
→ pip install dhi
→ from dhi import BaseModel, Field

Same API. Same model_dump. Your existing models just work.

The numbers:
→ 24.1M validations/sec in Python
→ 523x faster than Pydantic
→ 31x faster than msgspec
→ Also ships a TypeScript build, 20x faster than Zod 4

For the last 5 years, Rust has quietly taken over the core of Python: Pydantic v2, Polars, ruff, uv, tokenizers are all Rust underneath. The other day I also shared TurboAPI, a drop-in replacement for FastAPI with its core built in Zig, dramatically faster than FastAPI itself. dhi might be the start of the next wave: Zig.
Just shipped a new feature in my VS Code extension, CallFlow Tracer: automatic trace summarization.

Performance traces usually give raw data, not answers. This feature turns complex call graphs into clear, actionable insights with one click:

- Identifies the slowest functions
- Shows exact time impact (percentages)
- Highlights bottleneck modules
- Suggests next optimization steps
- Provides complete trace statistics

No more manually analyzing hundreds of nodes or guessing where the issue is. You get a clean summary of what to fix and why.

Available now on the VS Code Marketplace — search "CallFlow Tracer".

What's one performance insight you wish tools gave you automatically?

#Python #VSCode #DeveloperTools #PerformanceOptimization #OpenSource
It is still at the level of speculation, but the results of my small experiment are more than fascinating. Implementing S-OS (Self-Organized Substrate Logic-05) in Python has really forced me into some deep reflection.
Spent 5 days chasing ghosts: DLL hell and ABI mismatches. I followed the agentic debugger down the wrong path as it hallucinated at the wrong layer, misreading WinError 1114 as a load-path issue rather than a missing export. The actual fix was two lines: I had used TORCH_LIBRARY when I needed PYBIND11_MODULE.

The architecture gap:
- TORCH_LIBRARY registers ops into the PyTorch C++ dispatcher (accessed via torch.ops). It fires static C++ constructors at DLL load time but does not create a PyInit_* function, so Python can't "see" the DLL as a module.
- PYBIND11_MODULE generates the standard Python C extension entry point, the PyInit_{name} function Python needs in order to import the module.

The error was literal: "dynamic module does not define module export function." No PyInit_* existed, because a TORCH_LIBRARY-only DLL isn't meant to be imported directly.

{just correcting the record}

#CPP #PyTorch #SystemsProgramming #MachineLearning #barebones #3D
called the same API endpoint 5 times in a row.

without cache: 2.51s
with lru_cache: 0.50s

5x faster. two lines of code.

@functools.lru_cache(maxsize=128)
def fetch_user(user_id):
    ...

the cache info tells the real story: hits=4, misses=1. first call hits the actual API; the next 4 are served instantly from memory.

this is how production systems handle repeated expensive calls — user profiles, config lookups, ML model loads, anything that doesn't change every second.

lru_cache ships with Python. no libraries. just import functools. two lines between slow and fast.

#Python #Backend #DataEngineering #Performance
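A self-contained version of the experiment above, with a sleep standing in for the network call (the real post timed a live endpoint, so the function body here is a stand-in):

```python
import functools
import time

@functools.lru_cache(maxsize=128)
def fetch_user(user_id):
    # Stand-in for the slow API call in the post: sleep instead of
    # hitting a real endpoint.
    time.sleep(0.01)
    return {"user_id": user_id}

for _ in range(5):
    fetch_user(1)

print(fetch_user.cache_info())  # hits=4, misses=1: only the first call ran
```

cache_info() also reports maxsize and currsize, which is handy for spotting a cache that is silently evicting entries because maxsize is too small.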
.0158s vs .0005s for the cached version. So searching Bing: "does python lru cache return previous objects" → "Yes — Python's built-in functools.lru_cache returns the exact same object instance that was previously computed and cached, not a copy."

The overhead is in the object being recreated on each call, and Python objects are known to be slow to create. For raw performance there are better options, like writing the API in C++ with Pistache or Crow. Timing 4 million unique users requesting their user info 3 times each would be more informative than 5 repeated calls.

Given that the returned data is a user object whose changing value is a score and whose username is constant, the code muddies two use cases together and needs refactoring. The username only needs to be sent once, then again only when it is updated. The score is better sent via a socket or websocket if it changes in real time and requires server input to calculate, or not sent at all if it can be calculated client-side. If it must be broadcast to other client peers and their responses relayed back, a message queue is needed; if the peers' responses don't matter, the main server can handle the broadcasting.

Database queries that can't simply be answered by querying the database directly are not conducive to caching, and caching adds little for results that change infrequently or are only needed once or a few times. With fewer than 4 million users, giving each user their own database on a single server can be easier than writing APIs if the data is just database table views (and if the service is paid, reducing the risk of hacking by users, plus database caching can be reused across multiple client applications).
Turbovec is now available on PyPI 🐍 and Crates.io 📦

Turbovec is a vector index built on Google's TurboQuant algorithm, written in Rust with Python bindings. It offers identical or better speed, compression, and recall compared to Faiss while also being data-oblivious. Because adding new vectors doesn't require re-indexing, Turbovec is dramatically simpler to operate in production.

→ pip install turbovec
→ cargo add turbovec

Check out the open-source repo: https://lnkd.in/e5M4dVRk

#RAG #LLM #OpenSource #Gemma4
I was once asked to help a less experienced developer improve his app, which was actually somewhat important at a national level. I checked the requirements.txt; it was multiple screens long. Among other completely unnecessary things, it included a library for the Chinese calendar. It was a wake-up call: I never used pip freeze again 😅