Shifted focus this week from authentication to the core feature of FlowBoard: projects. The goal was to start moving from auth infrastructure into actual project management logic.

Week's Progress:
- Added database models for projects and related entities using SQLAlchemy. Implemented project ownership so each project is tied to the authenticated user.
- Connected the dashboard to the backend API to fetch real project data instead of placeholder content.
- Built the frontend flow for creating new projects from the dashboard, with validation.

Currently debugging an issue that is preventing projects from being created from the frontend; it looks like it may be related to authentication or request handling. It is interesting how, once authentication is in place, every new feature has to pass properly through that security layer.

#BuildInPublic #Python #React #FastAPI #SoftwareEngineering
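The ownership piece described above can be sketched roughly like this (a minimal illustration; the `User` and `Project` model names and fields are assumptions, not FlowBoard's actual schema):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String, unique=True, nullable=False)
    projects = relationship("Project", back_populates="owner")

class Project(Base):
    __tablename__ = "projects"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    # Ownership: every project row points at the user who created it.
    owner_id = Column(Integer, ForeignKey("users.id"), nullable=False)
    owner = relationship("User", back_populates="projects")

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    alice = User(email="alice@example.com")
    session.add(alice)
    session.add(Project(name="FlowBoard v1", owner=alice))
    session.commit()
    # Queries can then be scoped to the authenticated user:
    mine = session.query(Project).filter_by(owner_id=alice.id).all()
    names = [p.name for p in mine]
    print(names)
```

Scoping every query by `owner_id` like this is also what lets the API layer enforce that each user only ever sees their own projects.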
Implementing Project Management Logic in FlowBoard
More Relevant Posts
API Documentation in Django REST Framework — Simplified with drf‑spectacular

Building APIs is easy. Maintaining them at scale? That’s where things get tricky. As teams grow and endpoints multiply, keeping a clear API contract becomes essential. That’s why I explored drf‑spectacular, a powerful tool that turns your DRF code into a clean, OpenAPI‑compliant schema — ready for Swagger and Redoc.

In my latest Medium article, I break down:
- How to set up drf‑spectacular in minutes
- Why schema generation matters for scaling and collaboration
- Integrating JWT authentication for secure testing
- Hiding internal endpoints and documenting complex responses
- Best practices for production‑ready API docs

Think of it as reverse‑engineering your API into documentation.

👉 Read the full article here: https://lnkd.in/dbuTaNym

#Django #DRF #API #Documentation #OpenAPI #Swagger #Redoc #Python #BackendDevelopment
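For readers who want a head start before the article: a minimal drf-spectacular setup looks roughly like this (project title and URL names are placeholders; see the official docs for the full options):

```python
# settings.py (fragment) -- wire drf-spectacular into DRF
INSTALLED_APPS = [
    # ...
    "rest_framework",
    "drf_spectacular",
]

REST_FRAMEWORK = {
    # Tell DRF to generate its OpenAPI schema via drf-spectacular.
    "DEFAULT_SCHEMA_CLASS": "drf_spectacular.openapi.AutoSchema",
}

SPECTACULAR_SETTINGS = {
    "TITLE": "My API",       # placeholder
    "VERSION": "1.0.0",
}

# urls.py (fragment) -- expose the schema plus Swagger and Redoc UIs
from django.urls import path
from drf_spectacular.views import (
    SpectacularAPIView,
    SpectacularRedocView,
    SpectacularSwaggerView,
)

urlpatterns = [
    path("api/schema/", SpectacularAPIView.as_view(), name="schema"),
    path("api/docs/", SpectacularSwaggerView.as_view(url_name="schema")),
    path("api/redoc/", SpectacularRedocView.as_view(url_name="schema")),
]
```

With just this, `/api/docs/` serves an interactive Swagger UI generated straight from your serializers and views.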
~700 downloads in just 1 week of launch. Crazy.

I built and launched agentkube-mini, a tiny agent orchestration engine, just to show how multi-agent systems actually work under the hood.

Most frameworks abstract everything away. Which is great… until something breaks. Then you realize: you don’t understand the system you built.

So I built something different. 𝗮𝗴𝗲𝗻𝘁𝗸𝘂𝗯𝗲-𝗺𝗶𝗻𝗶 is:
• a task DAG-based orchestration engine
• dependency-aware parallel execution
• event-driven observability
• shared memory across agents

All in ~300 lines of Python. Zero dependencies.

What it’s for:
• understanding agent orchestration deeply
• building simple, reliable pipelines
• debugging multi-agent workflows
• layering on top of existing systems (LangGraph, etc.)

What it’s NOT:
• not a full agent framework
• not for complex tool loops or persistence

The goal wasn’t to build the most powerful system. It was to build the clearest one. Because once you understand:
👉 agents = nodes
👉 dependencies = edges
👉 scheduler = execution

You understand the core of every multi-agent runtime.

Appreciate everyone who tried it, shared feedback, and pushed it forward. More coming soon 🚀

#pypi #orchestration #agentkubemini #opensource
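The core idea (agents = nodes, dependencies = edges, scheduler = execution) fits in a few lines of plain Python. This is a sketch of the pattern, not agentkube-mini's actual code:

```python
from collections import defaultdict, deque

def run_dag(tasks, deps):
    """Run tasks in dependency order.

    tasks: {name: callable(shared_memory)}
    deps:  {name: [names it depends on]}
    Tasks in the same "ready" wave have no mutual dependencies and could
    run in parallel; here each wave is executed sequentially for clarity.
    """
    indegree = {name: len(deps.get(name, [])) for name in tasks}
    dependents = defaultdict(list)
    for name, parents in deps.items():
        for p in parents:
            dependents[p].append(name)

    ready = deque(n for n, d in indegree.items() if d == 0)
    shared = {}   # shared memory visible to every agent
    order = []    # event log: a minimal observability hook
    while ready:
        wave = list(ready)
        ready.clear()
        for name in wave:
            tasks[name](shared)
            order.append(name)
            for child in dependents[name]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    ready.append(child)
    if len(order) != len(tasks):
        raise ValueError("cycle detected in task graph")
    return order, shared

order, shared = run_dag(
    tasks={
        "fetch":  lambda m: m.update(data=[3, 1, 2]),
        "sort":   lambda m: m.update(sorted=sorted(m["data"])),
        "report": lambda m: m.update(report=f"min={m['sorted'][0]}"),
    },
    deps={"sort": ["fetch"], "report": ["sort"]},
)
print(order)
print(shared["report"])
```

The scheduler is just a topological sort driven by in-degree counts; everything a real framework adds (retries, persistence, tool loops) layers on top of this loop.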
🚀 Day 80 – Error Handling, Logging & System Monitoring

Continuing my journey in the 90 Days of Python Full Stack, today I focused on making the system more reliable by implementing error handling, logging, and monitoring. Even a well-built system can face unexpected issues. The goal today was to handle errors gracefully and track system behavior for better debugging and maintenance.

🔹 Work completed today
• Implemented proper error handling for APIs and backend logic
• Added structured logging (info, warning, and error levels)
• Tracked system events and failures
• Improved the debugging process with meaningful error messages
• Ensured stable and predictable application behavior

🔹 System Workflow
User sends request
⬇
Backend processes request
⬇
If an error occurs → handled gracefully
⬇
Error/log recorded in the system
⬇
Response sent without crashing the system

🔹 Why this step is important
Reliability is key for any production-ready system. This implementation:
✔ Prevents system crashes
✔ Makes debugging easier and faster
✔ Helps track issues in real time
✔ Improves overall system stability

📌 Day 80 completed — implemented error handling, logging, and monitoring.

#90DaysOfPython #PythonFullStack #ErrorHandling #Logging #SystemMonitoring #BackendDevelopment #LearningInPublic #DeveloperJourney
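A minimal sketch of the graceful-handling pattern described in the workflow above (the request logic and error types here are invented for illustration):

```python
import logging

# Leveled logging: timestamp, level, and logger name on every record.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("api")

def handle_request(payload):
    """Process a request; on failure, log the error and return a clean
    error response instead of letting the process crash."""
    try:
        result = 100 / payload["divisor"]   # stand-in for business logic
        log.info("request ok: divisor=%s", payload["divisor"])
        return {"status": 200, "result": result}
    except (KeyError, ZeroDivisionError) as exc:
        # Meaningful message recorded; the caller gets a safe response.
        log.error("request failed: %r", exc)
        return {"status": 400, "error": type(exc).__name__}

print(handle_request({"divisor": 4}))
print(handle_request({"divisor": 0}))
```

The key property is that both paths return a response: the error is recorded for debugging, but the system keeps serving.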
I put together a lightweight agent orchestration system for Claude Code called the Claude Agents Plugin. It breaks complex development tasks into tracked, parallel sub-agents using hierarchical markdown file trees.

The idea is to let you describe a task naturally—like "Build a user auth system with login, signup, and JWT tokens." Claude then automatically scans your codebase, maps out the work, spawns parallel agents, and tracks everything in markdown files.

A few architectural details:
- Context-aware: It reads your existing project before touching anything, clarifies what it will modify versus create, and never overwrites existing code.
- Dependency management: It builds hierarchical task trees to handle parent-child relationships and detects circular dependencies.
- Zero dependencies: It’s a single file relying purely on the Python standard library (Python 3.9+).

It is MIT licensed. If you are building with Claude Code and want to test out structured agent orchestration, the repository is linked below.

https://lnkd.in/gG3fdrmZ

#ClaudeCode #Python #OpenSource #AgenticAI #DeveloperTools
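Circular-dependency detection in a task tree is classically a depth-first search that tracks which nodes are still "in progress." A stdlib-only sketch of the technique (an illustration, not the plugin's actual code):

```python
def find_cycle(tree):
    """Detect a circular dependency in a parent -> children task map.
    Returns one cycle as a list of task names, or None if the tree is safe."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / in progress / finished
    color = {node: WHITE for node in tree}
    stack = []                     # current DFS path

    def visit(node):
        color[node] = GRAY
        stack.append(node)
        for child in tree.get(node, []):
            if color.get(child, WHITE) == GRAY:
                # Back edge to an in-progress node: that's the cycle.
                return stack[stack.index(child):] + [child]
            if color.get(child, WHITE) == WHITE:
                color.setdefault(child, WHITE)
                found = visit(child)
                if found:
                    return found
        color[node] = BLACK
        stack.pop()
        return None

    for node in list(tree):
        if color[node] == WHITE:
            found = visit(node)
            if found:
                return found
    return None

# A healthy parent-child task tree, then a broken one.
print(find_cycle({"auth": ["login", "jwt"], "login": [], "jwt": []}))
print(find_cycle({"a": ["b"], "b": ["c"], "c": ["a"]}))
```

Detecting the cycle before spawning agents is what keeps a scheduler from deadlocking on tasks that each wait for the other.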
🚀 𝗗𝗮𝘆 𝟮𝟲/𝟯𝟬 – 𝟯𝟬 𝗗𝗮𝘆𝘀 𝗼𝗳 𝗣𝘆𝘁𝗵𝗼𝗻 𝗣𝗿𝗼𝗷𝗲𝗰𝘁 𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲

Consistency builds skill. Skill builds confidence. 🚀 As part of my 30-day challenge, I’m focused on solving real-world problems while strengthening core development concepts.

🧠 𝗧𝗼𝗱𝗮𝘆’𝘀 𝗣𝗿𝗼𝗷𝗲𝗰𝘁: 𝗠𝗼𝘃𝗶𝗲 𝗥𝗲𝗰𝗼𝗺𝗺𝗲𝗻𝗱𝗮𝘁𝗶𝗼𝗻 𝗦𝘆𝘀𝘁𝗲𝗺
I built a Python-based CLI application that recommends movies based on user mood (genre), powered by real-time API data.

✨ 𝗪𝗵𝘆 𝘁𝗵𝗶𝘀 𝗽𝗿𝗼𝗷𝗲𝗰𝘁 𝗺𝗮𝘁𝘁𝗲𝗿𝘀:
Instead of static or hardcoded data, this project interacts with a live API — making it dynamic, scalable, and closer to real-world applications.

⚙️ 𝗞𝗲𝘆 𝗙𝗲𝗮𝘁𝘂𝗿𝗲𝘀:
• Genre-based movie recommendations 🎬
• Intelligent filtering (rating ≥ 7) ⭐
• Randomized suggestions for variety 🎲
• Robust retry mechanism for API reliability 🔁
• Clean and efficient CLI experience 💻

💡 𝗖𝗼𝗻𝗰𝗲𝗽𝘁𝘀 𝗔𝗽𝗽𝗹𝗶𝗲𝗱:
• API integration using `requests`
• Handling JSON responses effectively
• Implementing retry strategies for fault tolerance
• Writing clean, modular Python code
• Exception handling for real-world scenarios

🔗 𝗚𝗶𝘁𝗛𝘂𝗯: https://lnkd.in/dkNbKieJ

📌 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆: Building small, consistent projects like this helps bridge the gap between theory and practical development. The goal isn’t just to code — it’s to build solutions that reflect real engineering practices.

On to Day 27. 🔥

#Python #BuildInPublic #DeveloperJourney #30DaysOfCode #APIs #SoftwareDevelopment #Coding #Learning #OpenSource #Projects
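The retry mechanism is the most transferable piece of a project like this. A sketch of the idea against a simulated flaky API (the helper name, backoff values, and sample data are illustrative, not the project's actual code):

```python
import time

def fetch_with_retry(fetch, retries=3, backoff=0.1):
    """Call `fetch()` and retry on failure with exponential backoff.
    Re-raises the last error if every attempt fails."""
    for attempt in range(retries):
        try:
            return fetch()
        except ConnectionError:
            if attempt == retries - 1:
                raise
            time.sleep(backoff * (2 ** attempt))   # 0.1s, 0.2s, ...

# Simulate a flaky API that fails twice, then succeeds.
calls = {"n": 0}
def flaky_api():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("temporary outage")
    return [{"title": "Inception", "vote_average": 8.8}]

movies = fetch_with_retry(flaky_api)
good = [m for m in movies if m["vote_average"] >= 7]   # rating filter
print(calls["n"], good[0]["title"])
```

In the real project `fetch` would wrap a `requests.get(...)` call; the retry loop itself does not care where the data comes from, which is what makes it reusable.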
What if the question isn't which tool is best, but which process requirements you haven't mapped yet?

Every tool comparison post on LinkedIn ranks platforms by features. Number of integrations. Pricing tiers. UI screenshots. None of that tells you which one fits your actual workflow.

Four questions that matter more than any feature list:

1. Does your data need to stay on-premise? If yes, the field narrows to n8n self-hosted or custom Python. Zapier and Make are cloud-only. For regulated industries, this single question eliminates half the options.

2. How many exception paths does the process have? Under 3: Zapier handles it. Between 3 and 10: Make or n8n. Above 10: you need n8n's flexibility or custom code.

3. Who maintains it after deployment? If the ops team maintains it without engineering support, visual tools win. If an engineering team owns it with code review and CI/CD, Python or n8n with Git integration.

4. Does the workflow need version control? If deployments need rollback capability and audit trails, cloud-only tools with no Git backing create risk.

Map the process. Answer the four questions. The tool becomes obvious.

In your last tool evaluation, did anyone map the process requirements before the vendor demos started?

#WorkflowAutomation #ProcessAutomation #n8n #OperationsManagement
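The four questions above can even be written down as a rough decision rule. This is just one encoding of the post's heuristic, not a vendor matrix:

```python
def recommend_tool(on_premise, exception_paths, ops_maintained, needs_git):
    """Map the four process-requirement questions to a tool family.
    Thresholds follow the heuristic above; tune them for your context."""
    if on_premise:
        # Cloud-only tools are eliminated immediately.
        return "n8n (self-hosted) or custom Python"
    if needs_git and not ops_maintained:
        # Engineering-owned, needs rollback and audit trails.
        return "custom Python or n8n with Git integration"
    if exception_paths < 3:
        return "Zapier"
    if exception_paths <= 10:
        return "Make or n8n"
    return "n8n or custom code"

print(recommend_tool(False, 2, True, False))   # simple, ops-owned, cloud OK
print(recommend_tool(True, 5, False, True))    # regulated, on-premise
```

The point is not the function itself but the order of the checks: data residency eliminates options before any feature comparison even starts.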
Can you make your first PI Web API request in under 10 minutes? Challenge accepted. 🚀

Reading documentation is slow. Writing code is fast. That’s why we built the PISharp "Start Here" Guide—to get you straight into the action without the usual configuration headaches.

In this 10-minute guide, you will:
✅ Verify Connectivity: Instantly confirm if your server is reachable (No more 404 mysteries).
✅ The Path Strategy: Learn the most reliable way to find PI Points (Better than basic search).
✅ Master WebIDs: Understand the 'Primary Key' that powers every single API interaction.
✅ Read Live Data: Fetch snapshot and historical values like a pro.

This tutorial is for developers who want to stop "experimenting" and start building production-ready integrations. 🛠️

👉 Take the Challenge here: https://lnkd.in/dUVZdB4g

#PISharp #PISystem #PIWebAPI #Python #CodingChallenge #IndustrialIoT #DataEngineering
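As a taste of the path-then-WebID flow the guide covers: looking up a PI Point by path and then reading by WebID might look roughly like this (server name, tag path, and auth are placeholders; the live requests are left commented out since they need a real server):

```python
from urllib.parse import quote

BASE = "https://my-pi-server/piwebapi"   # placeholder server URL

def point_lookup_url(path):
    """Build the 'find point by path' URL; the backslash path
    (e.g. \\\\SERVER\\TagName) must be URL-encoded."""
    return f"{BASE}/points?path={quote(path, safe='')}"

def snapshot_url(web_id):
    """Every subsequent call is keyed by the point's WebID."""
    return f"{BASE}/streams/{web_id}/value"

url = point_lookup_url(r"\\PISRV01\Sinusoid")
print(url)

# Typical flow once you have a reachable server and credentials:
#   resp = requests.get(url, auth=auth).json()
#   web_id = resp["WebId"]
#   snapshot = requests.get(snapshot_url(web_id), auth=auth).json()
#   print(snapshot["Value"])
```

Forgetting to URL-encode the backslashes in the point path is one of the classic sources of those "404 mysteries."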
*Debugging Lesson from a Real-World Integration*

While working on a Python-based bot integration, I ran into an interesting issue that initially looked like a simple dependency error — but turned out to be a deeper compatibility problem.

*The Error:*
ModuleNotFoundError: No module named 'pkg_resources'
AttributeError: module 'pkgutil' has no attribute 'ImpImporter'

At first glance, it seemed like a missing package. But even after installing dependencies, the issue persisted.

*Root Cause:*
The environment was running on Python 3.12, where certain legacy components like pkgutil.ImpImporter have been removed. However, some widely used libraries (and their dependencies) still rely on these older components — leading to unexpected runtime failures.

*Solution:*
Instead of patching individual packages, the clean and stable solution was:
- Align the Python version to 3.10
- Use compatible versions of the dependencies

💡 Key Takeaways:
• Not every issue is a "missing install" problem
• Ecosystem compatibility matters more than the latest versions
• Stable environments > cutting-edge versions in production

Sometimes the real bug is not in your code — but in the version mismatch between your tools.

#Python #Debugging #SoftwareEngineering #BackendDevelopment #LearningJourney
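One cheap guard against this class of failure is to fail fast with a clear message when the interpreter is newer than what the dependency stack was tested against. A sketch (the 3.10 pin mirrors the fix described above; adjust it for your own stack):

```python
import sys

def check_runtime(version=None, max_supported=(3, 10)):
    """Return an explanatory error message when the interpreter is newer
    than the tested maximum, else None."""
    version = version or sys.version_info
    if tuple(version[:2]) > max_supported:
        return (
            f"Python {version[0]}.{version[1]} detected; this project is "
            f"pinned to <= {max_supported[0]}.{max_supported[1]} because "
            f"some dependencies rely on APIs removed in newer releases "
            f"(e.g. pkgutil.ImpImporter, gone in 3.12)."
        )
    return None

print(check_runtime((3, 12, 0)))   # explains the mismatch up front
print(check_runtime((3, 10, 4)))   # None: supported version

# In an entry point, bail out before any import can explode:
#   msg = check_runtime()
#   if msg:
#       sys.exit(msg)
```

A one-line version check at startup turns a cryptic AttributeError deep inside a dependency into an actionable message at launch.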
From a simple log parser to simulating real SRE scenarios.

I extended my Log Analyzer project to make it more aligned with real-world production systems and incident handling.

🔧 What’s new:
• Regex-based log parsing to extract timestamp, log level, and message
• Top N error analysis using Python’s Counter
• Error spike detection based on a time window (simulating incident conditions)

📊 Example insight:
The tool can now detect abnormal error spikes within a short duration — something SREs rely on during production incidents.

💡 What I learned:
Log analysis isn’t just about counting errors — it’s about identifying patterns, trends, and anomalies over time.

🔗 Project: https://lnkd.in/dEZyK7qH

Next step: exploring real-time log monitoring and alerting integrations. Would love your feedback!

#SRE #DevOps #Python #Observability #SiteReliabilityEngineering #LearningInPublic #GitHub
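The three features listed above can be sketched with nothing but the standard library (the log format, window, and threshold here are invented for illustration, not the project's actual values):

```python
import re
from collections import Counter
from datetime import datetime, timedelta

# Regex-based parsing: timestamp, level, and message per line.
LINE_RE = re.compile(r"^(?P<ts>\S+ \S+) (?P<level>[A-Z]+) (?P<msg>.*)$")

def parse(lines):
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            yield (datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S"),
                   m["level"], m["msg"])

logs = [
    "2024-05-01 10:00:01 INFO service started",
    "2024-05-01 10:00:05 ERROR db timeout",
    "2024-05-01 10:00:06 ERROR db timeout",
    "2024-05-01 10:00:07 ERROR cache miss storm",
    "2024-05-01 10:05:00 ERROR db timeout",
]
entries = list(parse(logs))

# Top-N error analysis with Counter.
errors = [msg for _, lvl, msg in entries if lvl == "ERROR"]
top = Counter(errors).most_common(1)

# Spike detection: `threshold` errors inside a sliding time window.
def spike(entries, window=timedelta(seconds=10), threshold=3):
    times = [ts for ts, lvl, _ in entries if lvl == "ERROR"]
    return any(times[i + threshold - 1] - times[i] <= window
               for i in range(len(times) - threshold + 1))

print(top)
print(spike(entries))
```

The spike check works because the timestamps are already sorted: if the i-th and (i+threshold-1)-th errors fall within the window, everything between them does too.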
Coding agents generate code like there is no tomorrow. Soon enough, they struggle under the weight of what they created.

AI writes a new helper instead of reusing an existing one. Old functions stay around because tests still call them, even though production does not. The codebase grows, but the agent's ability to reason about it does not.

On bigger projects, especially ones that have been heavily vibe-coded, this turns into chaos. The problem is not just messy code. It is slower reviews, weaker trust in the codebase, and agents that get less reliable as the surface area grows.

We have put a lot of energy into making code generation faster. I think the next thing to get right is safe code removal. There is a reason senior engineers get excited about deleting code.

It is a bit like never throwing away clothes you no longer wear. It seems fine at first. Then one day, you have five versions of everything, and finding what you actually need means digging through closets you forgot existed.

I built a Claude Code skill to help with this. It gives Claude a methodology for dead code removal: classify what you are looking at, verify the cases static tools miss, and avoid drifting into refactor territory while you are in there. It is tuned for Python and TypeScript, but should be easy to adapt. Clone it, fork it, open a PR if you improve it.

https://lnkd.in/ds5AcC5U

#CodingAgents #CodeQuality