Been spending my weekends on a Python side project and finally have something worth sharing: CareTrack API, a REST API for managing patient and provider records, built from scratch. It uses FastAPI, PostgreSQL, and JWT authentication, with a full CI/CD pipeline on GitHub Actions and a live deployment on Render.

Tech stack: Python · FastAPI · PostgreSQL · SQLAlchemy · Alembic · JWT · GitHub Actions

Live API docs: https://lnkd.in/e9beFYgM
GitHub: https://lnkd.in/e53nQi9S

Feedback always welcome.

#Python #SideProject #BackendDevelopment #API
CareTrack API: Python REST API for Patient and Provider Records
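The JWT piece of a stack like this is worth seeing without the framework. Below is a minimal stdlib-only sketch of how an HS256 token is signed and verified; it is not code from the CareTrack repo (which presumably uses a library like PyJWT), and the secret and claim names are illustrative.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"change-me"  # hypothetical signing key; never hard-code in real code


def _b64(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_token(sub: str, ttl: int = 3600) -> str:
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({"sub": sub, "exp": int(time.time()) + ttl}).encode())
    sig = hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64(sig)}"


def verify_token(token: str) -> dict:
    header, payload, sig = token.split(".")
    expected = hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64(expected), sig):
        raise ValueError("bad signature")
    # restore base64 padding before decoding
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims
```

In a FastAPI app the verify step would typically live in a dependency that reads the `Authorization: Bearer` header and rejects the request with a 401 before the route handler runs.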
I learned Go by building an event REST API: auth with JWT, bcrypt passwords, event CRUD, and event registration. As a Python developer, the steepest parts were not the HTTP ideas but the workflow: explicit error handling on every call, structs instead of dicts, and seeing SQL without an ORM. Gin felt close to Flask or FastAPI routing, but the compiler and a single binary changed how I think about shipping code. Still early, but the contrast with Python is already useful. https://lnkd.in/dGiDwMpJ
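The "explicit error handling on every call" contrast can be sketched even in Python terms: Go functions return a value and an error, and every call site must check the error before using the value. A hypothetical helper in that style, for illustration only:

```python
# Go-style (value, err) returns sketched in Python: no exceptions escape,
# and every caller is forced to look at err before trusting the value.
def parse_capacity(raw):
    try:
        n = int(raw)
    except ValueError:
        return 0, f"invalid capacity: {raw!r}"
    if n <= 0:
        return 0, "capacity must be positive"
    return n, None


capacity, err = parse_capacity("150")
if err is not None:
    raise SystemExit(err)  # the Go idiom: handle the error right here
```

Idiomatic Python would of course just raise; the point is only to show the workflow difference the post describes.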
LoraDB is now public. https://loradb.com/ It is a fast in-memory graph database written in Rust, with a Cypher-shaped query engine, an HTTP API, and bindings for Node.js, WebAssembly, and Python. It is built for developers who need relationship queries close to their application without adopting a large graph database stack on day one. This release is the beginning of the public journey: source-available core, developer-first adoption, and a path toward a hosted platform for teams that want managed operations later.
Graph databases are overkill for most product work. So I built LoraDB: an embedded, in-memory graph database that runs in the browser at million-node scale, with near-full Cypher support.

It’s really, really fast:
~7.4M nodes/sec scans
~3.8M edges/sec traversals
~900K nodes/sec writes

Bindings: Node.js, WASM, Python, Go, Ruby

Benchmarks: https://lnkd.in/e_qmfqQQ

Curious who else felt this gap.
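For readers who have not used a graph store: the "relationship query close to the application" idea boils down to traversing an adjacency structure instead of joining tables. A tiny breadth-first sketch (this is plain Python, not LoraDB's API; node names and the two-hop query are invented for illustration):

```python
from collections import deque

# Tiny adjacency-list graph: who follows whom.
edges = {"alice": ["bob"], "bob": ["carol"], "carol": ["dave"]}


def within_hops(start: str, max_hops: int) -> set[str]:
    """Return every node reachable from `start` in at most `max_hops` edges."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # do not expand past the hop limit
        for nbr in edges.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return seen - {start}
```

A Cypher-shaped engine expresses the same query declaratively (roughly `MATCH (a)-[*1..2]->(b)`), with the traversal done for you in optimized Rust.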
🧪 You can just automate things! `scripts/mark_pr_files_viewed.sh` automates marking all unviewed files in a GitHub pull request as 'Viewed' using the GraphQL API. The script supports batching and multiple authentication methods, includes a dependency check for Python 3, avoids a redundant and inefficient GraphQL call in the file-fetching loop, and optimises string escaping by using Bash's built-in parameter expansion instead of calling sed in a loop. 🌟 Grab the script for free here: https://lnkd.in/gXFj3JsG #bash #GraphQL #api #script
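The core of the script is GitHub's `markFileAsViewed` GraphQL mutation, sent once per unviewed file. A Python sketch of the request payload (the mutation name is GitHub's; the exact batching and auth handling in the script itself may differ):

```python
import json

MUTATION = (
    "mutation($prId: ID!, $path: String!) {"
    "  markFileAsViewed(input: {pullRequestId: $prId, path: $path}) {"
    "    pullRequest { id }"
    "  }"
    "}"
)


def mark_viewed_payload(pr_node_id: str, path: str) -> str:
    """Build the JSON body to POST to https://api.github.com/graphql."""
    return json.dumps(
        {"query": MUTATION, "variables": {"prId": pr_node_id, "path": path}}
    )
```

The request is then POSTed with an `Authorization: bearer <token>` header; the PR's GraphQL node ID (not its number) is what goes in `pullRequestId`.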
I’ve been spending my recent free time building an event-driven backtesting engine for options from scratch. Backtesting complex option strategies requires processing massive amounts of market data, calculating Greeks, and tracking portfolio metrics simultaneously. To handle this without latency bottlenecks, I architected the entire core engine in C++.

So far I have focused on making it flexible: modular commission and slippage models, and the ability to write custom strategies instead of editing the core engine itself. I decoupled most of the core components, so the entire C++ backend compiles as a standalone library. I am also integrating a Python bridge using pybind11, exposing that compiled library directly to Python. The goal is for the engine to do all the computation in the background, so anyone can write, test, and plug in custom strategies dynamically using simple Python scripts, without ever needing to modify the core engine files.

Getting the C++ event loop to work well with Python scripting is proving a little complicated right now! I'll push a final README and some sample strategies once the bindings are fully stabilized.

You can check out the code here: https://lnkd.in/gRSgd4gs

#quantfinance #cpp #python #algorithmictrading #options #pybind11 #derivatives
Apache Airflow 3.2 is here, bringing partitioned Dag runs and asset events, async Python support for @task and PythonOperator, and UI theming. This quick-notes guide comes with code examples for every new feature to use as patterns in your own Dags. Download the guide to learn how to:
↔️ Pass timestamps between Dags scheduled based on assets without custom workarounds
⚡ Cut task runtime by running concurrent async API calls in a single @task
🎨 Flag critical production deployments by adjusting the colors in the Airflow UI using an Airflow configuration variable
Link below.
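The runtime win from async tasks is the classic asyncio fan-out: N slow calls overlap instead of running back to back. A framework-free sketch of the pattern (plain asyncio, not Airflow code; `async_fetch` stands in for a real API client):

```python
import asyncio

async def async_fetch(i: int) -> int:
    # Simulated I/O-bound API call
    await asyncio.sleep(0.05)
    return i * 2


async def fetch_all(n: int) -> list[int]:
    # All n calls run concurrently; total wall time ~ one call, not n calls.
    return list(await asyncio.gather(*(async_fetch(i) for i in range(n))))


results = asyncio.run(fetch_all(5))
```

Inside an async `@task`, Airflow awaits the coroutine for you; the gather-inside-one-task shape is the same.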
Django can absolutely scale. But not if you treat it like a tutorial project. Here are 6 patterns we use in production at Horizon Dev that handle real load:
→ Fat models, thin views
→ select_related and prefetch_related everywhere
→ Custom managers for complex queries
→ Celery for anything over 500ms
→ Database routers for read replicas
→ Cached querysets with smart invalidation
Swipe through for details on each ↓
#Django #Python #BackendEngineering
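"Cached querysets with smart invalidation" is the least obvious item on that list. One common scheme is version-keyed caching: each table gets a version counter, cache keys include it, and a write bumps the counter so every stale entry misses without scanning keys. A framework-free sketch of that idea (not Horizon Dev's actual implementation; names are illustrative):

```python
# Version-keyed cache: invalidation is O(1) per write, because bumping the
# table version orphans every cached read for that table at once.
_versions: dict[str, int] = {}
_cache: dict[tuple, list] = {}


def cached_query(table: str, sql: str, run_query):
    """Return cached rows for (table, sql); call run_query() on a miss."""
    key = (table, _versions.get(table, 0), sql)
    if key not in _cache:
        _cache[key] = run_query()
    return _cache[key]


def invalidate(table: str) -> None:
    # Called from a post-save signal or model save(): one counter bump
    # invalidates all cached queries touching this table.
    _versions[table] = _versions.get(table, 0) + 1
```

In Django this pattern typically lives in a custom manager method, with the cache in Redis rather than a process-local dict.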
🚀 Just Published My First Python Library on PyPI! Excited to share that I’ve built and published "common-fun": a modular Python utility library designed to simplify everyday development tasks.

📦 Install: pip install common-fun
🖥️ Try CLI: common-fun help
🔗 GitHub: https://lnkd.in/gjWRyhpq

🔧 What it includes:
• Number utilities (prime, gcd, factorial, etc.)
• String processing (palindrome, slugify, etc.)
• Array helpers (flatten, chunk, rotate)
• Validators (email, URL, password)
• File utilities
• Performance decorators (timer, retry, caching)
• 🔥 CLI support for direct terminal usage

💡 Why I built this: While working on multiple projects, I realized I was repeatedly writing similar utility functions. So I decided to consolidate everything into a clean, reusable, and structured library.

⚙️ Key highlights:
• Fully modular architecture
• Optimized implementations
• CLI tool for quick access
• PyPI-ready packaging
• Clean documentation

This project helped me understand:
✔️ Library design
✔️ Packaging & publishing
✔️ CLI development
✔️ Clean code practices

Would love your feedback and suggestions!

#Python #OpenSource #Developer #Programming #PyPI #SoftwareDevelopment
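To show what a "performance decorator" in such a library might look like, here is a minimal retry decorator sketch. The name and signature are illustrative guesses, not common-fun's actual API:

```python
import functools
import time

def retry(times: int = 3, delay: float = 0.0):
    """Re-invoke the wrapped function up to `times` times on any exception,
    sleeping `delay` seconds between attempts; re-raise the last error."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(times):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:
                    last_exc = exc
                    time.sleep(delay)
            raise last_exc
        return wrapper
    return decorator
```

Used as `@retry(times=3, delay=0.5)` on any flaky call such as a network request.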
Built my first Python API using FastAPI! Coming from a MERN background, I decided to explore Python backend development, and it’s been an eye-opening experience.

What I built:
- A simple REST API with GET & POST endpoints
- Request validation using Pydantic models
- Auto-generated API docs (Swagger UI)

Key learnings:
- How FastAPI handles routing (similar to Express but cleaner)
- Request body validation without extra libraries
- The importance of virtual environments (and debugging them the hard way)
- Running production-ready APIs using Uvicorn

One thing that really stood out: FastAPI feels like TypeScript + Express, but with built-in validation and performance advantages.

Example: created a POST /user endpoint that validates incoming data using a schema and returns structured responses.

GitHub repo: https://lnkd.in/gF4FFR2u

Would love feedback from the community.

#Python #FastAPI #BackendDevelopment #LearningInPublic #100DaysOfCode
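The schema-validation idea behind that POST /user endpoint can be shown without FastAPI or Pydantic: declare the shape once, and reject bad input before any handler logic runs. A stdlib-only sketch (field names and error messages are illustrative, not from the repo):

```python
from dataclasses import dataclass

@dataclass
class UserIn:
    """Declared request shape; validation runs on construction,
    like a (much simpler) Pydantic model."""
    name: str
    age: int

    def __post_init__(self):
        if not isinstance(self.name, str) or not self.name.strip():
            raise ValueError("name must be a non-empty string")
        if not isinstance(self.age, int) or not 0 < self.age < 150:
            raise ValueError("age out of range")


def create_user(payload: dict) -> dict:
    user = UserIn(**payload)  # raises before any business logic on bad input
    return {"status": "created", "user": user.name}
```

In FastAPI the same declaration doubles as the source for the auto-generated Swagger docs, which is the part that has no clean Express equivalent.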
I was debugging a Django service last week and hit a classic problem: memory growing silently across requests, no obvious culprit. The usual suspects (tracemalloc, memory_profiler, objgraph) are great tools, but I wanted something I could drop on any function in 30 seconds and get a readable answer from. Honestly, I also wanted to understand what's happening at the GC and tracemalloc abstraction layer in Python, and the best way I know to understand something is to build on top of it. So I built MemGuard over a weekend.

What it does: drop @memguard() on any function, and after every call you get:
- Net memory retained (the actual leak signal)
- Peak vs. net ratio: catches memory churn even when net looks clean
- Per-type gc object count delta: tells you what is accumulating, not just how much
- Cross-call trend detection: if net grows every call, it flags it
- Allocation hotspots via tracemalloc: exact file and line

Zero dependencies. Pure stdlib: gc, tracemalloc, threading.

@memguard()
def process_batch(records):
    ...

That's it. It also works as a context manager if you want to profile a block rather than a function.

Biggest thing I learned building this: Python's gc and tracemalloc expose far more than most people use day to day. The object-reference graph alone tells a story that byte counts miss entirely.

Repo: https://lnkd.in/gdjkHvfb

Would love feedback from anyone who's dealt with Python memory issues in production.

#Python #Django #SoftwareEngineering #OpenSource #BackendDevelopment #MemoryManagement
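For anyone curious what the tracemalloc layer gives you here, this is a minimal sketch of the decorator idea: measure net and peak allocation per call with the stdlib alone. It is deliberately much smaller than MemGuard (no gc type deltas, trends, or hotspots), and the names are illustrative:

```python
import functools
import tracemalloc

def memreport(fn):
    """Record net and peak bytes allocated during each call of fn."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        tracemalloc.start()
        before, _ = tracemalloc.get_traced_memory()
        result = fn(*args, **kwargs)
        after, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        # net: bytes still held when the call returns (the leak signal);
        # peak: high-water mark during the call (catches churn).
        wrapper.last_report = {"net": after - before, "peak": peak}
        return result
    return wrapper


@memreport
def build(n: int) -> list[int]:
    return list(range(n))


build(10_000)
```

After the call, `build.last_report` holds the numbers; the full project layers the gc object-count deltas and cross-call trend tracking on top of exactly this mechanism.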