I am happy to share 🥳🥳🥳

🚀 Just shipped my first open-source Python library: pyctxlog

Ever tried tracing a single request across 40 log lines and given up? That's the problem I built this for.

pyctxlog is a tiny decorator that auto-tags every log line inside a function with per-call context (request id, job name, tenant, whatever you want) using contextvars, so it works correctly across threads and async tasks.

```python
@log_context(fields={"job": "ingest_orders"})
def run_ingest(batch_id):
    log.info(f"processing {batch_id}")
    # every line inside is auto-tagged with job + batch_id
```

✅ Sync + async auto-detected
✅ Works with Django, FastAPI, Celery, or plain functions
✅ Zero framework assumptions: truly generic
✅ Python 3.9+, MIT licensed

pip install pyctxlog

🔗 https://lnkd.in/dx9HpvXt
🐙 https://lnkd.in/df4rAkR4

Feedback very welcome. This is v0.1.0 and I'd love to hear what you'd want in v0.2.

#Python #OpenSource #Logging #Observability #SoftwareEngineering
Introducing pyctxlog: Auto-Tagging Log Lines with Context
🚀 Built a simple Python script to clean up my messy Downloads folder!

We all download files daily, and things get cluttered fast. So I wrote a quick automation script in Python to organize files into folders like Images, Documents, Archives, etc.

💡 Here's the code:

```python
from pathlib import Path
import shutil

# Folder to organize
source = Path("C:/Users/YourName/Downloads")

# File type mapping
folders = {
    ".jpg": "Images",
    ".png": "Images",
    ".pdf": "Documents",
    ".zip": "Archives",
    ".exe": "Installers",
}

for file in source.iterdir():
    if file.is_file():
        folder_name = folders.get(file.suffix.lower())
        if folder_name:
            destination = source / folder_name
            destination.mkdir(exist_ok=True)
            shutil.move(str(file), destination / file.name)
```

⚡ What it does:
* Scans your Downloads folder
* Detects file types by extension
* Creates destination folders automatically
* Moves files to the right place

Sometimes small automations like this save a lot of time and keep your system organized.

#Python #Automation #Coding #Developers #Productivity #Backend
Technical post: I've been posting some graphs here, talking about functions and "equivalence". This all started with porting an MLOps framework from Python 3.10 to 3.12, and all the "dependency hell" that entails. Naturally the question arose: what are the boundaries between one project and another, in terms of which functions call which?

That led me down the rabbit hole (not too deep) of what happens when I run something like `python -m <module> <somescript>`. Specifically: what is a "no-op" module, and what kind of operations can we inject, thanks to Python being an interpreted language?

A few years ago I worked on something along similar lines called TracePath, which provided a decorator to capture who called whom, how long each call took, and so on. So I merged the two ideas (avoid decorating every function; have an "inspector" module) and ran it on a simple pandas DataFrame creation. The resulting function invocation graph is the image attached to this post. When I ran it across the whole workflow (create, load, and transform data), the graph had ~9000 connections. The nice part is that I can specify which modules (e.g. only pandas, or pandas and numpy) should be added to the graph.

What do you think is the next logical thing to do with something like this? What kind of graphs would well-structured software produce? How about badly written software?

#graphs #swe #dependencyhell #python
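The post doesn't share its implementation, but the core trick (building a call graph without decorating every function) can be sketched with the standard `sys.setprofile` hook. The function `collect_call_graph` and its `include` module filter below are my illustrative names, not the author's tool:

```python
import sys

def collect_call_graph(fn, *args, include=("__main__",)):
    """Sketch: run fn(*args) and record caller -> callee name edges for
    functions defined in the modules listed in `include`."""
    edges = set()

    def profiler(frame, event, arg):
        if event == "call":  # a Python function frame was entered
            callee = frame.f_code
            caller = frame.f_back.f_code if frame.f_back else None
            # Keep only edges whose callee lives in a module we care about
            if caller and frame.f_globals.get("__name__", "") in include:
                edges.add((caller.co_name, callee.co_name))

    sys.setprofile(profiler)
    try:
        fn(*args)
    finally:
        sys.setprofile(None)  # always detach the hook
    return edges
```

A real inspector would record modules, timings, and counts rather than bare names, but the same hook gives you the ~9000-edge graph the post describes without touching the traced code.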
🚀 Day 5/30 of My LeetCode Journey (Python + SQL)

Staying consistent and pushing forward every single day! 💻🔥

🔹 **SQL Problem of the Day**
👉 *Customers Who Never Order*
Given two tables, `Customers` and `Orders`, write a query to find all customers who never placed an order.
💡 *Key Concept:* LEFT JOIN plus filtering on NULL values to identify missing relationships.

🔹 **Python Problem of the Day**
👉 *Search Insert Position*
Given a sorted array of distinct integers and a target value, return the index if the target is found. If not, return the index where it would be inserted.
💡 *Key Concept:* Binary search for O(log n) lookup.

Understanding patterns like joins and binary search is making problem-solving faster and cleaner 📈

Day 5 done ✅ Let's keep going!

#LeetCode #30DaysChallenge #Python #SQL #CodingJourney #Consistency #BinarySearch #ProblemSolving
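For the Python problem, the binary-search approach described is a standard pattern; a minimal sketch (equivalent to the stdlib's `bisect.bisect_left`):

```python
def search_insert(nums, target):
    """Return the index of target in sorted nums, or where it would be inserted."""
    lo, hi = 0, len(nums)
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] < target:
            lo = mid + 1   # target is in the right half
        else:
            hi = mid       # target is at mid or in the left half
    return lo
```

For example, `search_insert([1, 3, 5, 6], 5)` gives `2` and `search_insert([1, 3, 5, 6], 2)` gives `1`.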
Python 3.14 just dropped something I didn't know I needed: t-strings.

For years I've been using f-strings for everything. They're clean, they're fast, and I love them. But there's always been one nagging problem: you can't intercept what goes inside them. The moment you write `f"Hello {user_input}"`, that string is already built. No hooks. No validation. No custom logic. Just a finished string.

t-strings change that completely. Instead of immediately resolving to a string, `t"Hello {user_input}"` gives you back a Template object. You get both the static parts and the interpolated values, separately, before anything is joined together. That means you can sanitize SQL inputs, escape HTML, validate API payloads, or run any custom logic on the values before they ever become a string.

The syntax feels identical to f-strings. The power underneath is completely different.

I've already started thinking about how this simplifies backend work, especially anywhere user input touches a query or a template. The safety implications alone are massive.

This is one of those features that looks small in the changelog and then quietly becomes the way you write Python.

Have you tried t-strings yet? What's your first use case?

#Python #Python3.14 #BackendDevelopment #SoftwareEngineering #WebDevelopment
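As a rough pre-3.14 simulation of the "intercept before joining" idea (the real feature hands you a Template object from `string.templatelib`; the tuples and the `render_escaped` helper here are my illustrative stand-ins):

```python
import html

def render_escaped(strings, values):
    """Interleave static parts with HTML-escaped values, the way a
    t-string processor could, since it sees parts and values separately."""
    out = [strings[0]]
    for s, v in zip(strings[1:], values):
        out.append(html.escape(str(v)))  # sanitize each value before it joins
        out.append(s)
    return "".join(out)

# For t"Hello {user_input}!", the static parts would be ("Hello ", "!")
# and the values (user_input,); with f-strings this separation is lost.
```

The point is exactly what the post describes: because the static text and the interpolated values arrive separately, escaping or validation runs before any string exists.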
🚀 Day 49

Today I explored Python's `HTMLParser` and learned how to extract meaningful information from HTML snippets.

🔍 Key takeaways:
• How to handle single-line and multi-line comments using `handle_comment()`
• How to process text inside HTML tags using `handle_data()`
• The importance of ignoring noise like whitespace-only data (`'\n'`)
• How parsers read content sequentially, top to bottom

💡 What I built: a Python program that reads HTML input and prints:
✔️ Single-line comments
✔️ Multi-line comments
✔️ Data content

This task improved my understanding of how web data is structured and how parsers interpret it: a small step toward mastering web scraping and data processing!

Consistency > Perfection. See you on Day 50 💻🔥

#Python #CodingJourney #LearningEveryday #HTMLParser #DeveloperLife
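The post doesn't include the code, but a minimal sketch of this kind of parser with the stdlib's `html.parser.HTMLParser` (the class and attribute names are my own) might look like:

```python
from html.parser import HTMLParser

class CommentAndDataParser(HTMLParser):
    """Collect comments (classified single- vs multi-line) and non-empty text."""
    def __init__(self):
        super().__init__()
        self.comments = []
        self.data = []

    def handle_comment(self, comment):
        # A multi-line comment arrives as one string containing '\n'
        kind = "multi-line" if "\n" in comment else "single-line"
        self.comments.append((kind, comment.strip()))

    def handle_data(self, data):
        text = data.strip()
        if text:  # ignore whitespace-only data such as bare '\n'
            self.data.append(text)
```

Feeding it `"<div><!-- hello -->text</div>"` records one single-line comment and one data chunk, in the top-to-bottom order the parser encounters them.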
🚀 Day 3/30 of My LeetCode Journey (Python + SQL)

Showing up daily and building consistency, one problem at a time! 💻🔥

🔹 **Python Problems of the Day**
👉 *1. Move Zeroes*
Given an integer array, move all 0's to the end while maintaining the relative order of the non-zero elements. Do it in place, without making a copy of the array.
💡 *Key Concept:* Two-pointer technique for efficient in-place rearrangement.

👉 *2. Remove Element*
Given an array and a value, remove all occurrences of that value in place and return the number of remaining elements.
💡 *Key Concept:* In-place filtering using a pointer-overwrite approach.

🔹 **SQL Problem of the Day**
👉 *Find Duplicate Emails*
Given a `Person` table with an email column, write a query to report all duplicate emails.
💡 *Key Concept:* GROUP BY with HAVING COUNT(*) > 1.

Small steps daily = big progress over time 📈 Staying consistent and enjoying the process!

#LeetCode #30DaysChallenge #Python #SQL #CodingJourney #Consistency #ProblemSolving #LearnInPublic
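As a sketch of the two-pointer technique mentioned for *Move Zeroes* (a standard solution, not necessarily the exact code from the post):

```python
def move_zeroes(nums):
    """Move all zeros to the end in place, preserving non-zero order."""
    write = 0  # next slot where a non-zero value belongs
    for read in range(len(nums)):
        if nums[read] != 0:
            # Swap the non-zero forward; any zero at `write` moves right
            nums[write], nums[read] = nums[read], nums[write]
            write += 1
```

For example, `[0, 1, 0, 3, 12]` becomes `[1, 3, 12, 0, 0]` in a single O(n) pass with O(1) extra space.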
Copying projects with "node_modules" feels like it takes an eternity. Now imagine having multiple subfolders, each with Node.js and Python projects.

The problem: huge, unnecessary folders ("node_modules", "__pycache__") slow everything down.

What I used to do: manually go into each subfolder → delete "node_modules" and cache → then copy. (Not scalable. Just repetitive work.)

The smarter way: automate it with Robocopy:

```
robocopy "C:\source" "D:\dest" /E /MT:16 /XD node_modules __pycache__
```

● Works across all subdirectories (/E)
● Skips the excluded directories entirely (/XD)
● Cuts transfer time drastically, helped by multithreaded copying (/MT:16)
While working with databases in FastAPI, one small feature saved me a lot of time and effort: using dictionaries to handle data efficiently.

Instead of manually writing out each column while creating a new entry in the database, you can simply use:

```python
new_post = Post(**post.dict())
```

Here, `post` is the request body (a Pydantic model), and `.dict()` converts all of its fields into a dictionary. (In Pydantic v2 the equivalent method is `.model_dump()`.) That dictionary is then unpacked directly into the database model's constructor as keyword arguments.

Why is this useful?
• No need to manually map each field
• Cleaner and more readable code
• Reduces the chance of missing or mistyping a field
• Speeds up development

This approach becomes especially powerful when your table (like "post") has many columns. Rather than repeating yourself, you let Python handle it.

Small optimizations like this make backend development more efficient and enjoyable 🚀

#FastAPI #Python #BackendDevelopment #APIs #LearningJourney
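The unpacking itself is plain Python and easy to try without FastAPI; the `Post` class and field names below are illustrative stand-ins, not a real ORM model:

```python
# Framework-free sketch of the ** unpacking pattern described above.
class Post:
    def __init__(self, title, body, published=False):
        self.title = title
        self.body = body
        self.published = published

payload = {"title": "Hello", "body": "First post", "published": True}

# Each dict key becomes a keyword argument:
# Post(title="Hello", body="First post", published=True)
new_post = Post(**payload)
```

The same mechanism is what makes `Post(**post.dict())` work: any mapping whose keys match the constructor's parameter names can be unpacked straight into it.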
I built the fastest Python logging framework: 446K ops/sec. 2.7x faster than stdlib. 20% faster than Microsoft's picologging, which is written in C.

It's a one-line migration:

```python
# import logging
from logxide import logging
```

Same `getLogger()`. Same format strings. Flask, Django, and FastAPI all work. Sentry and OTLP are built in. Zero config.

Wrote up the production guide with copy-paste examples. ⬇️ See comment

#Python #Rust #OpenSource
🧠 Python Concept: strip(), lstrip(), rstrip()

Clean your strings like a pro 😎

❌ Problem

```python
text = "   Hello Python   "
print(text)  # "   Hello Python   " 😵‍💫 (extra spaces)
```

❌ Traditional Way

```python
text = "   Hello Python   "
text = text.replace(" ", "")
print(text)  # "HelloPython" - removes ALL spaces, even the one inside ❌
```

✅ Pythonic Way

```python
text = "   Hello Python   "
print(text.strip())   # both sides
print(text.lstrip())  # left only
print(text.rstrip())  # right only
```

🧒 Simple Explanation: think of it like cleaning dust 🧹
➡️ strip() → clean both sides
➡️ lstrip() → clean the left
➡️ rstrip() → clean the right

💡 Why This Matters
✔ Clean user input
✔ Avoid bugs in comparisons
✔ Very useful in real-world apps
✔ Cleaner string handling

⚡ Bonus Example

```python
text = "---Python---"
print(text.strip("-"))  # "Python"
```

🐍 Clean data, clean code 🐍 Small functions, big impact

#Python #PythonTips #CleanCode #LearnPython #Programming #DeveloperLife #100DaysOfCode