I just built and published an open-source Python package, chrono-temporal. It adds time-travel queries to any database entity: query what your data looked like at any point in history, track full change histories, and diff any two points in time. pip install chrono-temporal https://lnkd.in/ebm-cfRX
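Time-travel queries like these are typically built on versioned rows: every change appends a new version with a timestamp, and "as of time T" becomes an ordinary SQL query. A minimal sketch of that pattern with stdlib sqlite3 (illustrative only; the table layout and `as_of` function are my own, not chrono-temporal's API):

```python
import sqlite3

# Versioned-rows pattern: keep every historical version of an entity,
# each stamped with when it became current (not chrono-temporal's real schema).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE user_versions (
        user_id    INTEGER,
        name       TEXT,
        valid_from TEXT  -- ISO-8601 timestamp when this version became current
    )
""")
conn.execute("INSERT INTO user_versions VALUES (1, 'Alice',  '2024-01-01T00:00:00')")
conn.execute("INSERT INTO user_versions VALUES (1, 'Alicia', '2024-06-01T00:00:00')")

def as_of(conn, user_id, timestamp):
    """Return the version of the entity that was current at `timestamp`."""
    row = conn.execute(
        """SELECT name FROM user_versions
           WHERE user_id = ? AND valid_from <= ?
           ORDER BY valid_from DESC LIMIT 1""",
        (user_id, timestamp),
    ).fetchone()
    return row[0] if row else None

print(as_of(conn, 1, "2024-03-15T00:00:00"))  # Alice (before the rename)
```

Diffing two points in time then reduces to running `as_of` at both timestamps and comparing the results.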
Chrono-Temporal: Time-Travel Queries for Databases
-
We are encountering challenges validating STAC JSON schemas in Python. Because concurrency is costly in Python, organizations that ingest and validate thousands of STAC items daily need more efficient tooling, solutions that reduce both cost and energy consumption for the geospatial community. To tackle this, we are developing a STAC validator in Golang. For more information, please visit gostac-validator, hosted at StacLabs. https://lnkd.in/g2X3wC-P
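At this scale, a cheap structural pre-check before full JSON Schema validation can shed obviously broken items early. A rough Python sketch of that idea (the field list is my own reading of the STAC Item spec, and this is not the gostac-validator code; a real validator runs the complete JSON Schema for the spec version):

```python
import json

# Minimal structural sanity check for a STAC Item (illustrative only; a full
# validator checks the complete JSON Schema for the declared stac_version).
REQUIRED_ITEM_FIELDS = {
    "type", "stac_version", "id", "geometry", "properties", "links", "assets",
}

def quick_check_stac_item(raw: str) -> list[str]:
    """Return a list of problems found; an empty list means the basic shape is OK."""
    item = json.loads(raw)
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_ITEM_FIELDS - item.keys())]
    if item.get("type") != "Feature":
        problems.append("type must be 'Feature'")
    return problems
```

Items that pass this gate would then go on to full schema validation.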
-
🏗️ Building the Backbone: From Python Classes to Persistent Data
"The best way to learn is to build, break, and fix." 🛠️
Over the last few days, I’ve been architecting the backend for my FastAPI TodoApp. It’s been a journey of connecting the dots:
• Defining the Schema: using SQLAlchemy to turn Python classes into database tables.
• Establishing the Link: setting up the engine and SessionLocal to bridge the gap between my app and SQLite.
• Overcoming Hurdles: navigating Windows environment variables and mastering the SQLite3 CLI to verify data integrity.
The foundation is now officially set. By calling models.Base.metadata.create_all(engine), my app automatically generates its own database environment on startup.
Next stop: developing the CRUD API endpoints to bring this data to life! 🚀
#Python #FastAPI #SQLAlchemy #BackendDevelopment #CleanCode #SoftwareEngineering
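The class-to-table step described above can be sketched end to end. This is a minimal, self-contained version using an in-memory SQLite engine; the `Todo` model and its columns are my own illustration, not the TodoApp's actual schema:

```python
from sqlalchemy import create_engine, inspect, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Todo(Base):
    """A plain Python class that SQLAlchemy maps to a 'todos' table."""
    __tablename__ = "todos"
    id = Column(Integer, primary_key=True)
    title = Column(String, nullable=False)

# In the real app this would point at a file, e.g. sqlite:///./todos.db
engine = create_engine("sqlite:///:memory:")
SessionLocal = sessionmaker(bind=engine)

# Generates the table(s) on startup, like models.Base.metadata.create_all(engine)
Base.metadata.create_all(engine)
print(inspect(engine).get_table_names())  # ['todos']
```

The same `inspect()` call is a handy in-process alternative to dropping into the SQLite3 CLI just to confirm a table exists.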
-
I built and published a Python package to track LLM API costs in real time. You never really know how much you're spending until it's too late. Token-based pricing is not intuitive, and during development, costs can silently add up. So I built Cost Tracker, a CLI tool that lets you track API costs without changing your code.
💡 Key features:
• Zero-code integration (just run your script)
• Automatic provider detection (Gemini & OpenAI-style APIs)
• Real-time token and cost tracking
• Dynamic pricing (auto-updated)
📦 It’s live on PyPI: pip install costtracker
🛠 Usage: costtracker run your_script.py
🔗 GitHub repo: https://lnkd.in/gHG4nDDH
Would love your feedback or suggestions :)
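Once token counts are captured per call, the cost arithmetic itself is simple. A hedged sketch of that core calculation (the model name and prices below are invented for illustration; they are not costtracker's real pricing tables):

```python
# Core arithmetic behind an LLM cost tracker: providers price input and
# output tokens separately, usually quoted per million tokens.
PRICING_PER_1M = {  # USD per 1M tokens: (input_price, output_price) -- made-up numbers
    "example-model": (0.50, 1.50),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one call, given token counts and per-1M-token prices."""
    in_price, out_price = PRICING_PER_1M[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

print(estimate_cost("example-model", 10_000, 2_000))  # 0.008
```

Summing these per-call estimates over a whole script run is what makes the silent cost creep visible.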
-
There is nothing quite like the transition from taking notes to writing the actual code. 💻 I spent today getting into the weeds of my current data project, figuring out exactly how the pieces will connect. Here is what I tackled:
🔹 Explored web scraping techniques using Python and BeautifulSoup to extract website data.
🔹 Began building out the storage side by setting up a MySQL database on my Linux machine.
It's highly rewarding to see the data pipeline start to take shape. What are you all building or learning this week? See you next week! Stay tuned for more updates.
#Python #BeautifulSoup #MySQL #DataProjects #TechJourney
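The extraction side can be sketched without any installs. The post uses BeautifulSoup; this example shows the same idea (pull links out of HTML) with only the standard library's `html.parser`, so it runs anywhere:

```python
from html.parser import HTMLParser

# Stdlib stand-in for the BeautifulSoup extraction step: walk start tags
# and collect href attributes from anchor elements.
class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

parser = LinkExtractor()
parser.feed('<p><a href="/about">About</a> <a href="https://example.com">Ext</a></p>')
print(parser.links)  # ['/about', 'https://example.com']
```

In the full pipeline, rows like these would then be inserted into the MySQL side of the project.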
-
The Power of Python's Stdlib
We built a multi-agent server with Python's standard library. Our server handles 200+ API endpoints, WebSocket connections, and 16 background daemons.
You need a few key things when building a multi-agent coordination server:
- An HTTP API for external clients
- A WebSocket server for real-time agent push
- Background threads for health monitoring and auto-restart
- Atomic file operations for state persistence
- No external dependencies that could break in production
We chose Python's standard library over FastAPI. Here's why: FastAPI would give us automatic OpenAPI docs and async request handling, but it would also give us extra dependencies and complexity.
We used:
- http.server.HTTPServer for a simple HTTP handler
- the websockets library for the WebSocket server (our one exception to "stdlib only")
- threading for background tasks
- tempfile + os.replace for atomic writes
- json for state persistence
We gave up automatic API documentation and request validation. But we gained:
- Zero framework dependency risk
- Full control over the execution model
- Simple deployment
- Predictable threading behavior
Use the standard library when:
- You need full control over threading and execution
- Your server has mixed concerns
- Deployment simplicity matters
- You want zero framework lock-in
Use FastAPI when:
- You need automatic API docs
- Your team expects framework conventions
- You are building a pure REST API
Source: https://lnkd.in/gCM8jKUD
Optional learning community: https://lnkd.in/gAgpURzm
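The tempfile + os.replace trick for state persistence deserves a concrete sketch, since it is the piece that keeps readers from ever seeing a half-written file. This is a minimal version of the pattern, not the project's actual code:

```python
import json
import os
import tempfile

def atomic_write_json(path: str, state: dict) -> None:
    """Write state atomically: dump to a temp file in the SAME directory,
    fsync it, then os.replace() it over the target. rename() within one
    filesystem is atomic on POSIX, so readers see old state or new state,
    never a partial file."""
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
            f.flush()
            os.fsync(f.fileno())  # push bytes to disk before the swap
        os.replace(tmp_path, path)  # the atomic swap
    except BaseException:
        os.unlink(tmp_path)
        raise

# Demo: write state to a scratch directory and read it back.
state_dir = tempfile.mkdtemp()
state_path = os.path.join(state_dir, "state.json")
atomic_write_json(state_path, {"agents": 3, "healthy": True})
print(open(state_path).read())
```

The same-directory requirement matters: if the temp file lived on a different filesystem (e.g. /tmp on tmpfs), os.replace would fall back to a non-atomic copy-and-delete.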
-
Machine Learning Data Visualization using bqplot #machinelearning #datascience #datavisualization #bqplot bqplot is an interactive 2D plotting library for the Jupyter notebook in which every attribute of the plot is an interactive widget. bqplot can be linked with other Jupyter widgets to create rich visualizations from just a few lines of Python code. Because bqplot is built on top of the notebook's widgets framework, it leverages that infrastructure to communicate between Python and JavaScript. The visualizations are based on D3.js and SVG, enabling fast interactions and beautiful animations without writing a single line of JavaScript. In this talk, attendees will learn how to build interactive charts, dashboards, and rich GUI applications using bqplot and ipywidgets. https://lnkd.in/gG4bzjeb
-
Day 59 of #90DaysOfCode
Today I built a dynamic blog web application using Flask that fetches content from an external API and renders it using HTML templates. The application displays a list of blog posts on the homepage and allows users to view individual posts through dynamic routing.
How the application works:
• Fetches blog data from an external API
• Converts JSON data into structured Python objects
• Renders posts dynamically using Jinja templates
• Uses dynamic routes to display individual blog posts
• Serves styled frontend pages using Flask
Key concepts explored:
• API integration using Requests
• Data modeling using Python classes
• Template rendering with Jinja2
• Dynamic routing using Flask
• Building multi-page web applications
This project helped me understand how real-world web applications handle data flow between the backend and frontend.
GitHub repository: https://lnkd.in/g8cjp9ek
#Python #Flask #WebDevelopment #BackendDevelopment #SoftwareEngineering #90DaysOfCode
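The dynamic-routing idea at the heart of this can be shown in a few lines. A hedged sketch (in-memory posts stand in for the external API, the `Post` class and routes are my own illustration, and `return` stands in for `render_template`):

```python
from flask import Flask

app = Flask(__name__)

class Post:
    """JSON fetched from the API would be converted into objects like this."""
    def __init__(self, post_id, title, body):
        self.id, self.title, self.body = post_id, title, body

# Stand-in for data fetched from the external API with requests.
POSTS = {1: Post(1, "Hello Flask", "First post"),
         2: Post(2, "Dynamic routes", "Second post")}

@app.route("/post/<int:post_id>")  # dynamic route: Flask converts the segment to int
def show_post(post_id):
    post = POSTS.get(post_id)
    if post is None:
        return "Not found", 404
    # The real app would do: render_template("post.html", post=post)
    return f"{post.title}: {post.body}"

# Flask's built-in test client lets us exercise routes without a server.
client = app.test_client()
print(client.get("/post/2").get_data(as_text=True))
```

The `<int:post_id>` converter is what makes `/post/1` and `/post/2` hit the same view function with different data.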
-
🛠️ #PythonJourney | Day 144 — Deep Dive: Fixing & Structuring the URL Shortener
After starting the URL Shortener project yesterday, today I went deep into code review and debugging.
Key work done:
✅ Analyzed main.py line by line
✅ Fixed 8 critical errors:
• Missing imports (datetime, UUID, Request)
• Type-hint typos (Optinal → Optional)
• Union syntax compatibility (| → Optional[])
• Undefined dependencies (get_db, get_current_user)
✅ Created database.py with:
• PostgreSQL connection management
• SQLAlchemy SessionLocal factory
• Connection pooling configured
This is exactly what real backend development looks like: not just writing code, but understanding what works and what doesn't. Debugging and fixing issues teaches way more than following tutorials.
The project structure is now solid:
- app/main.py (API endpoints)
- app/database.py (DB config)
- docker-compose.yml (local services)
Next: create models.py and write tests.
#Python #FastAPI #PostgreSQL #Debugging #BackendDevelopment #CodeReview #SoftwareEngineering
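A database.py along the lines described usually boils down to three pieces: an engine, a SessionLocal factory, and a get_db dependency. A minimal sketch (SQLite here so it runs anywhere; the project uses PostgreSQL, where you would also pass pool settings like `pool_size` to `create_engine`):

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Engine: one per application. With PostgreSQL the URL would look like
# postgresql://user:password@localhost/shortener (connection details are
# placeholders, not the project's real config).
engine = create_engine("sqlite:///:memory:")

# SessionLocal: a factory that produces new Session objects on demand.
SessionLocal = sessionmaker(bind=engine, autoflush=False)

def get_db():
    """FastAPI-style dependency: yield a session, always close it afterwards."""
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()

# Outside FastAPI, the generator can be driven manually:
gen = get_db()
db = next(gen)
print(type(db).__name__)  # Session
gen.close()  # triggers the finally block, closing the session
```

In the FastAPI endpoints, `db: Session = Depends(get_db)` would hand each request its own session and guarantee cleanup.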
-
Yesterday I spent some time on a project that touches both my homelab and day-to-day business: the wrath of bash scripts in CI. Every codebase has them. They grow. They break in ways that give you exit code 1 and nothing else. There are good libraries that already address this, so I don't expect anyone to get hyper excited. `sh` lets you call commands like Python functions, great for quick scripts. Plumbum gives you shell combinators with piping, redirection, SSH, even a CLI toolkit: a full Swiss army knife (shout out). Both have been around for years. I ended up building another one anyway, because the core problem still bites in slightly different ways. So instead of focusing on "calling commands from Python": Why does every script re-invent checking whether tools exist and whether versions match? Why is there no visibility into the data flowing through a pipe when the tool at the other end never implemented progress reporting? Why is there no record of what I ran, despite ever more sophisticated attacks? What if I could hand over a single binary instead of a script that starts with 40 lines of `which` and `if` statements? The package is called cmdchain: zero dependencies, same kernel pipes as bash. PyPI package: https://lnkd.in/drTveQZJ
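The "40 lines of `which` and `if`" problem condenses to a few lines of stdlib Python. A sketch of the tool-existence check that scripts keep re-inventing (illustrative only; this is not cmdchain's API):

```python
import shutil
import subprocess

def require_tool(name: str, version_flag: str = "--version") -> str:
    """Fail loudly if `name` is not on PATH; otherwise return its version banner."""
    path = shutil.which(name)  # stdlib replacement for shelling out to `which`
    if path is None:
        raise RuntimeError(f"required tool not found on PATH: {name}")
    out = subprocess.run([path, version_flag], capture_output=True, text=True)
    # Some tools print version info to stderr, so fall back to it.
    return out.stdout.strip() or out.stderr.strip()

# Guarded demo, since git may or may not be installed where this runs:
if shutil.which("git"):
    print(require_tool("git"))
```

Matching the returned banner against a required version range is the obvious next step, which is where a library earns its keep over copy-pasted snippets.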
-
"ImportError: cannot import name 'FastMCP'" I stared at this for way too long. The library was installed. The version was correct. The import path was right. pip install mcp → success from mcp.server import FastMCP → ImportError Here's what happened: My project had a folder called mcp/. Python found it first. That's it. Python's import system checks the local directory before installed packages. My mcp/ folder — which held config files — was silently hijacking every import call. The fix was one line: sys.path.remove(os.path.dirname(__file__)) The debugging? Over an hour of reinstalling packages, checking versions, and questioning my sanity. Name your folders carefully. Python's import system doesn't care about your intentions — only your directory structure. If your installed package suddenly "can't be found," check if you accidentally created a folder with the same name. Ever had a naming collision silently break your project? #Python #Debugging #DeveloperLife #SoftwareEngineering
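The resolution order behind this can be demonstrated end to end with importlib. A hedged sketch: "mypkg" and both directories below are invented for the demo, with one directory playing site-packages and the other playing the project folder that shadows it:

```python
import importlib.util
import os
import sys
import tempfile

site = tempfile.mkdtemp()   # stands in for site-packages (the installed copy)
local = tempfile.mkdtemp()  # stands in for your project directory

# Create an importable package "mypkg" in BOTH locations.
for root in (site, local):
    os.mkdir(os.path.join(root, "mypkg"))
    with open(os.path.join(root, "mypkg", "__init__.py"), "w") as f:
        f.write("")

sys.path.insert(0, site)
sys.path.insert(0, local)   # the script's own directory is searched FIRST

spec = importlib.util.find_spec("mypkg")
print(spec.origin.startswith(local))  # True: the local folder hijacks the import

sys.path.remove(local)      # the post's one-line fix, generalized
importlib.invalidate_caches()
spec = importlib.util.find_spec("mypkg")
print(spec.origin.startswith(site))   # True: now the "installed" copy wins
```

`importlib.util.find_spec(name).origin` is also the fastest way to diagnose this in the wild: it tells you which file Python would actually load, before you start reinstalling packages.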