🚀 Project Update #10: NinjaFastAPI – Teams CRUD + What’s Next 👥🔥

Hey everyone 👋

It’s been a bit quiet again 😅 — balancing studies, work, and learning new tech takes time… Recently I’ve been diving into C++, and now I’m starting a Java project at university 🧠💻 So yeah — progress is a bit slower, but still moving forward step by step 🚀

---

🆕 What’s new?

This time I focused on something bigger:

👥 Full Teams CRUD (TDD style 🥷)
- creating teams
- assigning sensei
- managing members
- business validations (no random ninja chaos 😄)

👉 Tests first → then implementation → clean code

---

🧠 What’s coming next?

I want to push this project further into a more “real-world API”:

📸 User avatars (file upload)
- multipart/form-data
- saving files
- serving via URL

💬 Async chat between ninjas (websockets incoming 👀)
🧪 More integration tests (DB + async + relationships)

---

⚔️ Tech stack: FastAPI • SQLAlchemy • Alembic • Docker • Pytest • PostgreSQL • GitHub Actions • Ruff • Black

---

🤔 What would YOU add?

I’m thinking about:
- Ninja rankings 🏆
- Missions system 🎯
- PvP fights ⚔️
- Forbidden jutsu endpoint 😈

But maybe you have better ideas? 👀 👇 Drop them in the comments — I’d love to hear your thoughts! Link to repo in comments!

---

Step by step… even the Hokage wasn’t built in a day 🥷🚀

#FastAPI #Python #BackendDevelopment #WebDevelopment #APIDevelopment #SoftwareEngineering #TDD #Docker #PostgreSQL #Programming #LearningInPublic #OpenToWork #JuniorDeveloper #DevJourney #Coding
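As a rough illustration of the "tests first → then implementation" flow for the business validations mentioned above, here is a minimal framework-free sketch. All names (`Team`, `assign_sensei`, `add_member`) are hypothetical and not the repo's actual code:

```python
class Team:
    """Hypothetical team model showing the kind of business
    validations described in the post (illustrative only)."""

    def __init__(self, name: str):
        if not name.strip():
            raise ValueError("team needs a name")
        self.name = name
        self.sensei = None
        self.members = []

    def assign_sensei(self, ninja: str) -> None:
        # validation: only one sensei per team
        if self.sensei is not None:
            raise ValueError("team already has a sensei")
        self.sensei = ninja

    def add_member(self, ninja: str) -> None:
        # validation: no duplicate members (no random ninja chaos)
        if ninja in self.members:
            raise ValueError(f"{ninja} is already on the team")
        self.members.append(ninja)


# TDD style: this is the kind of test you'd write first
def test_no_duplicate_members():
    team = Team("Team 7")
    team.add_member("Naruto")
    try:
        team.add_member("Naruto")
    except ValueError:
        return  # duplicate rejected — test passes
    raise AssertionError("duplicate member should have been rejected")
```

In the real project the validations would live behind FastAPI endpoints and hit the database, but the rule-then-test shape stays the same.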
Kamil Kalicki’s Post
More Relevant Posts
🚀 Documentation #3: Problem Solving, Adaptability & Choosing the Right Tools

One key lesson I’m reinforcing through this project: every project should be about gaining skills and solving problems.

While building my Expense Tracker, I didn’t just write SQL; I developed my architecture and decision-making skills. I faced multiple API compatibility challenges and went through several iterations:
- Started with Flask
- Moved to FastAPI
- Even explored Django (widely recommended for Python APIs)

But here’s the reality 👇
💡 The “best” tool isn’t universal; it depends on your project’s size, requirements, and goals.

After re-engineering the project multiple times, I made a key decision:
➡️ Switched the backend from Python to Node.js
➡️ Used JavaScript to handle the SQL database more efficiently for this use case, and even added AES-256 encryption

This process wasn’t easy; it required restructuring the project multiple times, but it significantly strengthened my problem-solving mindset and system-design thinking.

Now, I’m entering the final phase:
🐳 Docker containerization

Alongside this, I’m learning Docker and applying it directly to the project, turning theory into practice while also revising for my exams in a more engaging way.

💡 Key takeaway: Adaptability + hands-on building = real growth.

#SoftwareEngineering #SQL #WebDevelopment #ProblemSolving #LearningByDoing #Docker #ComputerScience #Documentation
🌲 "The Forest Den" – Where Code Grows Wild

I've spent the last few months learning full-stack development—building apps in C#, Python, and TypeScript. This weekend I added the other half: infrastructure and deployment. Built a complete DevOps home lab running 6 containerized applications with real-time monitoring.

🏗️ Infrastructure:
• Ubuntu Server (converted ThinkCentre)
• Docker container orchestration
• SSH + firewall security
• Nginx reverse proxy

📊 6 Containerized Applications (all healthy ✅):
• CloudControl – Monitoring dashboard (Next.js, TypeScript)
• PulseMonitor – Real-time health checks (FastAPI, Python)
• SecureAuth-Lite – JWT auth service (C# .NET, SQLite)
• LifeOS – Personal productivity OS with REST API, JWT auth, persistent storage
• FinanceHub – Finance dashboard (C# ASP.NET Core)
• StockTracker – Portfolio tracker (Python Flask)

📈 Results:
• 6 apps running in production
• 100% uptime
• Sub-100ms response times
• 4 languages: TypeScript, Python, C#, JavaScript
• Multiple patterns: REST APIs, reverse proxy, microservices

The approach: AI-assisted learning + 20 years of infrastructure experience + ADHD hyperfocus. I used AI (Claude) to learn Docker, understand architecture patterns, and debug deployment issues. But I validated everything, tested constantly, and built to production standards. When traditional learning methods don't work for you, AI becomes the tool that lets you learn by building.

In interviews, I won't just show code—I'll show live, running infrastructure anyone can interact with. From full-stack developer → full-stack + DevOps in one weekend. Domain coming soon. The forest is just getting started. 🌲

#DevOps #Docker #HomeLab #FullStack #AIAssistedLearning #CareerTransition #Infrastructure
🚀 **Day 29/30 – 30 Days of Python Project Challenge**

Consistency builds skill. Skill builds confidence. 🚀 As part of my 30-day challenge, I’m focused on solving real-world problems while strengthening core development concepts.

🧠 Today’s Project: **Website Status Checker**
I built a Python-based tool that monitors whether websites are **UP or DOWN** using HTTP requests, helping identify server issues quickly and efficiently.

✨ Why this project matters:
In today’s digital world, uptime is critical. This project demonstrates how Python can be used to build simple monitoring tools that simulate real-world systems used in DevOps and backend operations.

⚙️ Key Features:
🌐 Multi-Website Monitoring: check multiple URLs in one run
📊 Status Code Insights: displays HTTP responses (200, 404, 500, etc.)
🎨 Colored Output: uses Colorama for clear and readable terminal results
⚠️ Error Handling: detects unreachable or invalid websites gracefully
⚡ Fast Execution: lightweight and efficient with minimal setup

💡 Concepts Applied:
- HTTP requests with Python (requests library)
- Exception handling for robust error management
- Working with APIs and status codes
- Clean and readable terminal UI with color formatting
- Basic automation and monitoring concepts

🔗 GitHub: https://lnkd.in/dcDpkarZ

📌 Takeaway: Even simple scripts can solve real problems. Building tools that monitor systems is a powerful step toward understanding real-world software and infrastructure. On to Day 30. 🔥

#Python #BuildInPublic #DeveloperJourney #30DaysOfCode #Automation #DevOps #Backend #SoftwareDevelopment #Coding #Learning #OpenSource #Projects
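The core of a checker like this fits in a few lines. The post uses the `requests` library and Colorama; as a rough sketch of the same idea using only the standard library (function names are my own, not from the linked repo):

```python
from urllib import error, request


def check_site(url: str, timeout: float = 5.0) -> str:
    """Return a human-readable UP/DOWN status for one URL."""
    try:
        with request.urlopen(url, timeout=timeout) as resp:
            return f"UP ({resp.status})"          # e.g. 200
    except error.HTTPError as e:
        return f"DOWN ({e.code})"                 # server answered 4xx/5xx
    except (error.URLError, ValueError):
        return "UNREACHABLE"                      # bad host or malformed URL


def check_all(urls):
    """Multi-website monitoring: check every URL in one run."""
    return {url: check_site(url) for url in urls}
```

A real version would add colored output and retries, but the exception-handling structure above is where most of the robustness comes from.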
Yesterday, I joined the Open Source Friday live stream featuring Serena from GitHub. It was an insightful deep dive into collaborative development and real-world problem-solving. Watching experts navigate a codebase in real time offers a perspective you can't get from documentation alone. It highlights the critical thinking, the debugging logic, and the architectural "why" behind every decision.

My main takeaways from the session:
- Community-driven growth: open source is about collective intelligence and making technology inclusive.
- The learning curve: continuous exploration remains a constant, even for seasoned developers.
- The reality of building: exploring the Serena project (https://lnkd.in/d9EKMbCn) was a great reminder that every polished feature begins with iterative, sometimes complex steps.

For those looking to go beyond watching and start using it, here is how to get Serena MCP running on your machine:

1. Installation on Windows
The simplest way to run Serena MCP is using uvx, which lets you run Python packages on the fly without a complex local setup: ensure Python is installed, then use uvx in your terminal to execute the package directly, as per the documentation. This keeps your environment clean while giving you full access to Serena's capabilities.

2. Setup in VS Code (using GitHub Copilot)
Once the server is ready, you can integrate it into your workflow in seconds:
- Add a configuration file: create a .vscode folder in your project and add an mcp.json file.
- Register the server: use the "Add Server" button or the command palette to link your MCP server. You can run it via a local command or even a Docker image.
- Interact: once connected, your tools will appear directly in the Copilot chat, allowing you to query the Serena codebase or perform tasks through the MCP protocol seamlessly.

A big thank you to Serena and the GitHub team for promoting the "Build in Public" mindset.

🚀 Are you currently contributing to any open-source initiatives or exploring MCP tools in your IDE? I'd love to hear about your experience in the comments.

#OpenSource #GitHub #SoftwareEngineering #MCP #VSCode #SerenaProject #WebDevelopment #LearningInPublic #TechCommunity
Most developers don’t write bad code on purpose.

Bad code usually starts as “just for now.” A quick fix. A shortcut to meet a deadline. Something you plan to clean up later. But “later” rarely comes.

Over time, those small decisions compound:
- simple changes become risky
- bugs take longer to trace
- onboarding gets harder
- performance issues appear unexpectedly

Bad code doesn’t just fail — it resists change. It hides intent, uses inconsistent naming, and tightly couples logic so everything depends on everything else. You spend more time understanding it than improving it.

Good code is different. It’s clear, intentional, and built for change:
- you can read it and understand it quickly
- names explain purpose
- components are loosely coupled
- edge cases are handled deliberately

Good code reduces mental overhead. It makes change easier.

Some principles I follow:

Do:
- write for the next developer
- keep functions small and focused
- choose clarity over cleverness
- refactor when patterns emerge

Don’t:
- over-engineer
- mix responsibilities
- ignore edge cases
- rely on memory

Good code isn’t about speed. It’s about how easily it can evolve. I focus on building backend systems with Python, Django, and DRF that scale in maintainability, not just traffic.

What’s one coding habit you had to unlearn?

#BackendEngineering #CleanCode #Django #SoftwareArchitecture #TechGrowth
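“Clarity over cleverness” is easiest to see side by side. A small made-up example (both functions do the same thing; the names and comments are illustrative, not from any real codebase):

```python
# Clever but opaque: intent is hidden in a one-liner
def f(d):
    return {k: v for k, v in d.items() if v and k[0] != "_"}


# Clear and intentional: the name explains purpose,
# and each edge case is handled deliberately
def public_nonempty_fields(record: dict) -> dict:
    """Keep fields that are set and not private (no leading underscore)."""
    result = {}
    for name, value in record.items():
        if name.startswith("_"):
            continue  # private/internal field
        if not value:
            continue  # unset or empty value
        result[name] = value
    return result
```

Same behavior, same length in characters that matter, but the second version costs the next developer far less to read and change.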
LLM coding agents need a thinking medium.

Right now they write code blind. They produce a function, run the test suite, read the failure, iterate. Every execution is a fire-and-forget shell command. No persistent state. No way to build up context incrementally. No scratchpad.

A REPL changes this. Instead of "write then verify," an agent can "verify then write." Eval an expression to check an assumption before committing it to a file. Import a module and inspect a return type. Test an edge case. All in a persistent session where state accumulates across calls, the same way a developer thinks at a REPL. Bash gives agents hands. A REPL gives them a place to think.

I built replsh: an open-source REPL tool designed for LLM agents. It manages named sessions across Python, Node.js, and Clojure, speaking each backend's native wire protocol. All output is structured JSON. Sessions persist across invocations. Timeouts return partial output instead of errors, so even an interrupted eval teaches the agent something.

Every backend is zero-dependency. Python uses a stdlib-only bridge script that replsh auto-deploys and runs with your project's interpreter: venv, Poetry, conda, whatever. No Jupyter, no pip install. Node.js uses the built-in REPL module. Clojure uses nREPL, which ships with every Clojure toolchain. replsh plugs into whatever you already have.

It also ships with an agent skill document: a structured reference that teaches LLM agents when and how to use replsh and the REPL. Drop the skill into your agent's context and it knows how to install replsh, launch sessions, eval code, stream output, and manage background tasks.

Tools like E2B and Open Interpreter are in this space but target cloud sandboxes or agent UIs. MCP REPL servers like repl-mcp exist but screen-scrape terminal prompts. replsh is protocol-native and local-first: it runs in your project's environment with your dependencies.

GitHub: https://lnkd.in/d4MbGWj5

#llm #ai #python #developer-tools #open-source
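The "state accumulates across calls" idea can be sketched in a few lines with Python's stdlib `code` module. This illustrates the concept only, not replsh's actual implementation or protocol:

```python
import code
import contextlib
import io


class ReplSession:
    """Minimal persistent session: names defined in one eval
    are visible to every later eval, like a developer's REPL."""

    def __init__(self):
        self.namespace = {}
        self.interp = code.InteractiveInterpreter(self.namespace)

    def eval(self, source: str) -> str:
        """Run one snippet and capture whatever it prints."""
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            self.interp.runsource(source)
        return buf.getvalue()


session = ReplSession()
session.eval("x = 21")                 # first call: define state
output = session.eval("print(x * 2)")  # later call: earlier state is still there
```

A real agent-facing tool adds session naming, timeouts, structured JSON output, and cross-language backends on top of exactly this persistence property.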
Nobody posts about the boring part of learning to code. So here's me doing exactly that.

I'm still learning Python. Right now I'm working through comparisons and conditional logic — if/else statements, evaluating expressions, controlling the flow of a program. It's not flashy. There's no project to show off yet. It's just me, Boot.dev, and a lot of repetition.

A few weeks ago I shared that I built my first Python project from scratch — a tip calculator and bill splitter. That felt like a milestone. But what I didn't talk about was the stretch between milestones. The part where you're grinding through concepts, not building anything visible, and wondering if you're actually making progress.

Here's what I've realized about this phase:

→ Fundamentals aren't exciting, but they're everything. Every complex system I've worked with in my homelab — Terraform configs, monitoring pipelines, anomaly detection models — is built on basic logic like the stuff I'm learning right now. Skipping this step is how you end up copying code you don't understand.

→ Progress doesn't always look like progress. Some days I fly through exercises. Other days I stare at a simple if/else block longer than I'd like to admit. But each concept is clicking a little faster than the last, and that's the signal that matters.

→ Consistency beats intensity. I'm not doing 4-hour coding marathons. I'm showing up regularly, working through the material, and trusting the process. The people who actually learn to code aren't the ones who sprint — they're the ones who don't stop.

I'm not where I want to be yet. But I'm past where I started, and that's enough to keep going.

Anyone else in the middle of learning something where the progress feels slow? How do you stay motivated through the fundamentals?

#Python #LearnInPublic #BootDev #DevOps #ProfessionalDevelopment #ContinuousLearning
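For readers at the same stage: the comparisons and if/elif/else flow described above is exactly what powers something like a tip calculator. A small sketch in that spirit (my own example, not the author's project):

```python
def split_bill(total: float, people: int, service: str) -> float:
    """Per-person share, with the tip rate chosen by conditional logic."""
    if people < 1:
        raise ValueError("need at least one person")

    # comparisons control the flow: each branch picks a tip rate
    if service == "great":
        rate = 0.20
    elif service == "ok":
        rate = 0.10
    else:
        rate = 0.0

    return round(total * (1 + rate) / people, 2)
```

Every branch here is an expression being evaluated to True or False — the "boring" fundamentals carrying the whole program.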
Ever wondered how your Temporal workflows behave across SDK versions? 🤔

The new GitHub repository centralizes feature snippets, harnesses, and version-specific runners for Temporal SDKs, letting you test behavior, generate history, and verify compatibility without reinventing the wheel. What sets this apart is the built-in compatibility layer: you can run the same test against multiple SDK releases, automatically generate history for the earliest compatible version, and even opt out of history checks when needed. It turns a fragmented testing landscape into a single, reproducible suite. 🚀

Actionable Takeaways:
- Run features across Go, Java, TypeScript, Python, and Ruby in one command
- Generate history for the earliest compatible SDK version
- Skip history checks with `--no-history-check` when speed matters
- Use prepared directories to isolate test runs
- Tag Docker images per SDK version for reproducible environments

When our code stands the test of time, we build not just software, but trust—knowing that today’s experiments won’t break tomorrow’s production. What compatibility challenge are you facing as your stack evolves, and how could a unified test harness change the game?

#Temporal #DevOps #AI #Leadership #Innovation

Reference: https://lnkd.in/gqTG4Fh7

🔄 Share 👍 React 🌐 Visit www.aravind-r.com #AravindRaghunathan
I wanted to share a side project I've been working on recently: an AI-powered software development system called #AgenticSDLC. https://lnkd.in/dixK2Dtr

The idea was to build something end-to-end from scratch, covering backend, frontend, infrastructure, and #MLOps concepts while learning how modern AI-assisted development workflows are structured. This project gave me a good high-level understanding of how an end-to-end application of this sort can be built.

The system lets you describe a development task in plain English, and a LangGraph-based agent autonomously plans, writes, reviews, and (for coding agents) lints the code. Executions are tracked in PostgreSQL, run asynchronously via Celery workers, and streamed live to a Next.js dashboard.

Note: The project is still in development and I'm adding features one after another. Right now there is only a Python coding agent; more agents will come in the future. (I focused on building the app infrastructure first, and the infra is ready.)

Tech stack:
- Agent framework: LangGraph, LangChain
- Backend: FastAPI, PostgreSQL, SQLAlchemy, Celery + Redis
- Frontend: Next.js 14, TypeScript, Tailwind CSS [Note: the FE of this app was completely vibe coded ;)]
- Infrastructure: #DockerCompose, Alembic migrations

I used Claude as an AI pair programmer throughout the build to accelerate development — it helped with architecture decisions, code generation, and debugging. This allowed me to cover a lot of ground quickly while still understanding every layer of the system.

Key things I learned through this project:
- End-to-end full-stack development (actually more backend development)
- Async task queue architecture (#Celery + #Redis)
- LLM agent workflows with #LangGraph
- Docker Compose for multi-service orchestration
- Database migrations and async #ORM patterns

The code is available here if you'd like to take a look (the #README file should help you set it up on your system): https://lnkd.in/dixK2Dtr

The codebase has very detailed comments, along with documentation that explains the concepts and implementation.
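The "executions tracked in a database, run asynchronously by workers" pattern can be sketched without any broker. Everything here is illustrative (an in-memory dict stands in for the PostgreSQL table, a plain function stands in for the Celery task; none of these names are from the repo):

```python
import enum
import uuid


class Status(enum.Enum):
    PENDING = "pending"
    RUNNING = "running"
    DONE = "done"
    FAILED = "failed"


# Stand-in for the executions table in PostgreSQL
EXECUTIONS = {}


def submit(task_description: str) -> str:
    """API side: record the execution, then hand work to the queue."""
    execution_id = str(uuid.uuid4())
    EXECUTIONS[execution_id] = Status.PENDING
    # In the real system: celery_task.delay(execution_id, task_description)
    return execution_id


def run_agent(execution_id: str) -> None:
    """Worker side: what the Celery task does, minus the broker."""
    EXECUTIONS[execution_id] = Status.RUNNING
    try:
        # ... plan / write / review / lint steps would run here ...
        EXECUTIONS[execution_id] = Status.DONE
    except Exception:
        EXECUTIONS[execution_id] = Status.FAILED


eid = submit("add a python coding agent")
run_agent(eid)
```

The dashboard then just polls (or streams) the status column by execution id, which is why the API can return immediately while the agent works.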
🚀 pyresilience — All resilience patterns in 1 decorator, no dependencies

💡 What is resilience? Your app keeps working even when dependencies fail, slow down, or overload. No crashes. No hanging. Just smart recovery.

⚠️ Pain point: Python teams often stitch together:
• Retries (tenacity)
• Circuit breakers (pybreaker)
• Timeouts (asyncio, signal)
• Rate limiting (limits, slowapi)
• Fallbacks (custom code)
👉 These don’t coordinate → messy, inconsistent failure handling

📊 Existing tools:
• tenacity (retries, ~263.6M downloads/month)
• pybreaker (circuit breaker, ~9.6M downloads/month)
👉 Great individually, but not unified

⚡ pyresilience benchmark:
🚀 pyresilience → 0.64 μs (🔥 ~10.4x faster)
🐢 tenacity → 6.64 μs

🛠️ What pyresilience does — one decorator with:
✅ Retry
✅ Circuit Breaker
✅ Timeout
✅ Fallback
✅ Bulkhead
✅ Rate Limiter
✅ Cache

➡️ Works together, not glued
➡️ Zero dependencies
➡️ Sync + async
➡️ High performance

Frameworks: 🌐 FastAPI • Flask • Django
👨💻 For all Python developers

🔗 GitHub: https://lnkd.in/d-SRygNQ
🔗 PyPI: https://lnkd.in/dRg2H4D5
🔗 Docs: https://lnkd.in/dxZ4xYkw

💬 How are you handling resilience in Python today?

#Python #BackendDevelopment #SoftwareEngineering #Microservices #SystemDesign #FastAPI #Django #Flask #OpenSource #DevOps #Cloud #Resilience
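To make the "patterns that coordinate" idea concrete, here is a toy decorator combining just retry and fallback. This is a generic sketch of the pattern, not pyresilience's actual API — see the docs link above for the real interface:

```python
import functools
import time


def resilient(retries=3, delay=0.0, fallback=None):
    """Toy combined decorator: retry first, then fall back.
    Illustrates why coordinated patterns beat glued-together libraries."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_error = None
            for _ in range(retries):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:
                    last_error = exc
                    time.sleep(delay)  # back off between attempts
            # retries exhausted: the fallback knows that, because
            # both patterns live in one wrapper
            if fallback is not None:
                return fallback(*args, **kwargs)
            raise last_error
        return wrapper
    return decorate


calls = {"n": 0}


@resilient(retries=3, fallback=lambda: "cached value")
def flaky():
    calls["n"] += 1
    raise ConnectionError("service down")
```

When retry and fallback live in separate libraries, the fallback layer can't easily know the retry budget is spent; composing them in one decorator is what makes the failure handling consistent.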
https://github.com/Kali2114/NinjaFastAPI