🧪 #PythonJourney | Day 149 — Testing API Endpoints & Validating Backend

Today was about validating that everything works end-to-end. After days of building, it was time to test the actual API with real requests.

Key accomplishments:

✅ All 7 API endpoints are functional:
• POST /api/v1/urls (create shortened URL)
• GET /api/v1/urls (list user's URLs)
• GET /api/v1/urls/{id} (get URL details)
• GET /api/v1/urls/{id}/analytics (get analytics)
• DELETE /api/v1/urls/{id} (soft delete)
• GET /{short_code} (redirect & track)
• GET /health (health check)

✅ Database integration fully operational:
• User authentication via API key works
• URL creation with validation
• Click tracking with proper foreign keys
• Analytics aggregation ready

✅ Docker environment stable:
• PostgreSQL 15 storing data correctly
• Redis 7 ready for caching
• FastAPI container running smoothly
• All services healthy

✅ Tested with curl:
• Health check endpoint responds
• API authentication working
• Request/response validation functioning
• Error handling in place

✅ Code committed to GitHub:
• Clean commits with meaningful messages
• Full project history tracked
• Ready for collaboration

What I learned today:
→ End-to-end testing reveals integration issues early
→ API key authentication is simple but effective
→ Docker Compose makes local development seamless
→ curl is a powerful tool for API testing
→ Validating one endpoint at a time saves debugging time

The backend is now production-ready in terms of basic functionality. Next: comprehensive testing with pytest, then deployment.

Current status:
- Backend: ✅ Functional
- Database: ✅ Operational
- API Endpoints: ✅ All working
- Docker: ✅ Stable
- Tests: ⏳ Next step
- Deployment: ⏳ After tests

#Python #FastAPI #API #Testing #Backend #Docker #PostgreSQL #SoftwareDevelopment #DevOps
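A minimal sketch of the kind of response-shape check run against each endpoint. The field names (`short_code`, `original_url`, `created_at`) are assumptions for illustration, not the project's actual schema:

```python
# Sketch: validate the JSON shape of a hypothetical POST /api/v1/urls response.
# Field names are assumed for illustration; the real schema may differ.

REQUIRED_FIELDS = {
    "id": str,
    "short_code": str,
    "original_url": str,
    "created_at": str,
}

def validate_url_response(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means the response looks valid."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            problems.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            problems.append(f"wrong type for {field}: {type(payload[field]).__name__}")
    return problems

sample = {
    "id": "1",
    "short_code": "abc123",
    "original_url": "https://example.com",
    "created_at": "2024-01-01T00:00:00Z",
}
print(validate_url_response(sample))      # → []
print(validate_url_response({"id": 1}))   # reports the wrong type and three missing fields
```

Checking one endpoint's response shape at a time, as the post suggests, turns a vague "it doesn't work" into a specific list of mismatches.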
Marcos Vinicius Thibes Kemer’s Post
🚀 #PythonJourney | Day 151 — BREAKTHROUGH: API Fully Functional & First Successful Request

Today marks a major milestone: **the URL Shortener API is LIVE and responding correctly!** After 8 days of building and debugging, I finally got the first successful POST request working. This breakthrough proves that all the pieces fit together.

Key accomplishments:

✅ Fixed critical database type mismatch:
• PostgreSQL was storing user_id as VARCHAR
• SQLAlchemy was trying to query with UUID
• Solution: dropped volumes, rebuilt schema from scratch

✅ Fixed Pydantic response validation:
• Model had clicks_total, database had total_clicks
• Docker image was caching old code
• Solution: forced rebuild of container image

✅ First successful API call:
• POST /api/v1/urls now returns proper JSON
• Short code generated automatically
• URL stored in database correctly
• Full response validation passing

✅ Production-ready API endpoints confirmed:
• Authentication working (API key validation)
• Request validation (Pydantic models)
• Database operations (CRUD)
• Error handling (proper HTTP status codes)
• Response serialization (JSON output)

✅ Lessons learned about debugging:
• Always check the actual container logs
• Volume management is critical in Docker
• Type consistency across layers matters
• Docker caching can hide recent changes
• Patience and persistence beat quick fixes

What happened today:
→ Identified the root cause through careful log analysis
→ Understood the full request/response cycle
→ Learned when to reset vs. when to patch
→ Experienced the joy of a working API!

The API now successfully:
- Validates user authentication
- Creates shortened URLs with unique codes
- Stores data in PostgreSQL
- Returns properly formatted JSON responses
- Handles errors gracefully

This is what backend development is about: building reliable systems piece by piece, debugging methodically, and celebrating when it finally works.
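The VARCHAR-vs-UUID mismatch described above can be reproduced in plain Python (a sketch of the failure mode, not the project's actual code): a string and a `uuid.UUID` object never compare equal, so a filter comparing a UUID against text-stored IDs silently matches nothing.

```python
import uuid

# As stored by the old schema: user_id kept as VARCHAR (a plain string).
stored_user_id = "12345678-1234-5678-1234-567812345678"

# As queried by SQLAlchemy when the column is typed as UUID.
query_user_id = uuid.UUID(stored_user_id)

# A string never equals a UUID object, so the lookup finds no rows:
print(stored_user_id == query_user_id)              # → False

# Normalizing both sides to one type restores the match:
print(uuid.UUID(stored_user_id) == query_user_id)   # → True
```

This is why rebuilding the schema with a consistent column type fixed it: no amount of correct application code survives a type mismatch at the storage layer.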
Status update:
- ✅ Backend: FUNCTIONAL
- ✅ Database: WORKING
- ✅ API Endpoints: RESPONDING
- ✅ Authentication: VERIFIED
- ⏳ Full test suite: Next
- ⏳ Deployment: Next week

#Python #FastAPI #Backend #API #PostgreSQL #Docker #Debugging #SoftwareDevelopment #Victory #CodingJourney
# New Learning and Self-Improvement

Modern backend systems often don’t fail because of scale alone — they struggle under complexity. In a recent architecture redesign, the focus was on simplifying how dynamic, large-scale form data is handled while improving performance, maintainability, and developer experience.

The shift (as shown in the diagram):
🔹 Moved from a rigid column-based schema → flexible JSONB-based storage
🔹 Replaced heavy raw SQL usage with clean Python ORM-driven data access
🔹 Introduced structured payload handling with clear state management (status-driven flow)

⚙️ Backend Architecture Improvements
✔️ Adopted a modular design using Django applications for better separation of concerns
✔️ Implemented class-based views for cleaner, reusable API logic
✔️ Structured API routing using Django Ninja Router for better organization and scalability
✔️ Reduced the number of APIs by consolidating responses into optimized, meaningful endpoints
✔️ Designed APIs in collaboration with the frontend to ensure smooth data flow and minimal overhead

📦 Data Handling Strategy
Instead of creating hundreds of columns for dynamic forms:
→ Stored complete form responses as JSON objects
→ Managed 300–500+ fields without schema changes
→ Simplified debugging with structured payload visibility
→ Enabled faster iteration without impacting production stability

Processing Flow:
User Input → API Validation → Store JSON (status = 0) → Async Processing (Celery + Redis) → Update status = 1 → Dashboard Reflection

🚀 Outcome
✔️ Reduced system complexity significantly
✔️ Improved API performance and response clarity
✔️ Eliminated production risks caused by excessive raw queries
✔️ Created a scalable foundation for handling dynamic data
✔️ Delivered a smoother integration experience for frontend systems

Security is handled using JWT-based authentication with a proper token flow. The system continues to evolve with ongoing improvements in validation, background processing, and performance tuning.
#BackendEngineering #Django #Python #SystemDesign #PostgreSQL #APIs #Celery #Redis #JWT #SoftwareArchitecture
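The status-driven JSON storage idea above can be sketched in a few lines. This is an illustrative in-memory stand-in, not the production Django models; the class name `FormResponse` and the status codes 0/1 follow the flow described in the post:

```python
import json

# Status values from the processing flow: store as 0, flip to 1 after async work.
PENDING, PROCESSED = 0, 1

class FormResponse:
    """In-memory stand-in for a row with a JSONB column plus a status flag."""

    def __init__(self, payload: dict):
        # Arbitrary dynamic fields are serialized as one JSON document,
        # so adding a 301st field requires no schema migration.
        self.payload_json = json.dumps(payload)
        self.status = PENDING

    def process(self):
        # Stand-in for the async Celery task that marks the row done.
        self.status = PROCESSED

resp = FormResponse({"field_1": "yes", "field_2": 42, "nested": {"a": [1, 2]}})
resp.process()
print(resp.status)                               # → 1
print(json.loads(resp.payload_json)["field_2"])  # → 42
```

The trade-off this illustrates: JSONB gives schema flexibility for hundreds of dynamic fields, at the cost of moving field-level validation from the database into the application layer.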
Introducing mcp-assert: deterministic testing for MCP servers

Most MCP tools return structured data: file contents, query results, code locations. The correct output is knowable in advance. You don't need an LLM to grade it: you need assert.Equal.

mcp-assert is a single binary that connects to any MCP server (Go, TypeScript, Python, Rust, Java), calls your tools, and asserts the results. Define assertions in YAML, run them in CI. No SDK, no LLM, no API costs.

Zero to full coverage in one command:

mcp-assert init evals --server "my-server"

Connects to your server, discovers every tool, generates assertions, captures baselines. Edit the YAMLs to taste, then run them forever.

What it covers:
▫️ 15 deterministic assertion types (contains, json_path, regex, file_unchanged, net_delta, etc.)
▫️ Trajectory assertions: validate that agents call tools in the correct order, with safety gates and absence checks. No server needed.
▫️ Bidirectional MCP: test client capabilities (roots, sampling, elicitation), not just server tools
▫️ Reliability metrics (pass@k / pass^k), regression detection, snapshot testing
▫️ Docker isolation for write-tests
▫️ Same YAML, different servers: test that your Go and Python implementations produce identical results

One-line CI:

- uses: blackwell-systems/mcp-assert-action@v1
  with:
    suite: evals/

We've tested it against 18 server suites across 3 languages with 174 assertions and found real bugs in real servers along the way.

Install however you want:

npx @blackwell-systems/mcp-assert
pip install mcp-assert
brew install blackwell-systems/tap/mcp-assert

Open source, MIT licensed. GitHub: https://lnkd.in/geE_Fhck | docs: https://lnkd.in/gw69j42G

If you're building MCP servers, I'd love to hear what you think.

#MCP #ModelContextProtocol #OpenSource #AIAgents #Testing #DevTools
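To illustrate the core idea — deterministic checks need no LLM, just a dispatch over assertion types — here is a toy re-implementation in Python. This is not mcp-assert's actual engine (which is a Go binary driven by YAML); the assertion names mirror three of the types the post lists:

```python
import re

# Toy versions of a few mcp-assert-style assertion types.
# Not the real tool -- just the underlying idea: deterministic checks, no LLM.
def check(assertion: dict, output: str) -> bool:
    kind = assertion["type"]
    if kind == "contains":
        return assertion["value"] in output
    if kind == "regex":
        return re.search(assertion["pattern"], output) is not None
    if kind == "equals":
        return output == assertion["value"]
    raise ValueError(f"unknown assertion type: {kind}")

# A suite is just data, which is why YAML is a natural home for it.
suite = [
    {"type": "contains", "value": "def main"},
    {"type": "regex", "pattern": r"line \d+"},
]
tool_output = "def main() found at line 12"
print(all(check(a, tool_output) for a in suite))  # → True
```

Because every check is a pure function of (assertion, output), the same suite gives identical verdicts on any server implementation — the property the post leans on for cross-language testing.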
I built an AI agent skill that turns any codebase into an interactive architecture map — in one command.

/arcmap → one self-contained HTML file → open in any browser, no server needed.

What it generates:
• Architecture card grid grouped by tier
• Infrastructure diagram (Postgres, RabbitMQ, Prometheus, K8s…)
• Force-layout dependency graph with typed edges (HTTP, gRPC, AMQP…)
• Data flow cards with protocol labels
• Live full-text search across files, classes, and method signatures

Works with C# / .NET, TypeScript, Python, Flutter, Go, Rust, Java, and more.

Install it in seconds. For example, on Claude Code:

npx degit illicitus79/arcmap-skill ~/.claude/skills/arcmap

Then just type /arcmap in GitHub Copilot (VS Code), Claude Code, Cursor, or any supported agent.

100% offline — no fetch(), no dependencies, no server. The whole map is a single HTML file you can share, commit, or email.

Repo: https://lnkd.in/gut7DGxP

#AITools #DeveloperTools #GitHubCopilot #DevEx #OpenSource
⭐️ Stop Relying on Suggestions: A Hard Technical Defense Against Code Rot ⭐️

Many teams are currently trying to govern AI-generated code by adding "steering" files (like AGENTS.md) or PR templates. The problem? Markdown instructions are just suggestions: they are subjective and easily ignored by AI agents. To protect architectural integrity, you need deterministic enforcement, not subjective guessing.

What began as a prototype to solve the "AI slop" crisis in the Apache ecosystem is now a finished, universal framework published to Maven Central for everyone.

What the AIV Integrity Gate does:
- Density Gate: uses entropy-based checks to flag low-signal boilerplate and scaffolding.
- Design Gate: enforces your specific architectural constraints (forbidden or required patterns) via YAML rules.
- Dependency Gate: validates imports against your actual build configuration to stop hidden supply-chain risks.
- Invariant Gate: provides hooks for property-based testing and critical edge-case validation.
- 100% Local: runs entirely in your local environment with zero external API calls, ensuring your IP stays secure.

Feedback & Support:
I'm looking for feedback from the community: how does this solve your PR quality issues? If there are features you'd like to see, please add them as a GitHub Issue. If you find this project valuable, adding a Star ⭐️ to the repo is a huge motivation for me to keep building.

- Maven Coordinates: io.github.vaquarkhan:aiv-gate (https://lnkd.in/gHHtqXYy)
- GitHub Repository: https://lnkd.in/gkV_S5he

Note: this project and its contents are entirely my own. They are developed independently and do not represent the views or interests of any employer.

#OpenSource #SoftwareArchitecture #Java #DevOps #AISlop #CodeIntegrity #Engineering #BigData #DistributedSystems
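The "entropy-based check" behind a density gate can be approximated with Shannon entropy over tokens. This is a sketch of the general technique (in Python, for brevity; the real AIV gate is a Java/Maven tool and its exact heuristics are not described in the post): repetitive boilerplate has a flat, low-entropy token distribution, while substantive code or prose does not.

```python
import math
from collections import Counter

def token_entropy(text: str) -> float:
    """Shannon entropy (bits) of the token distribution.
    Low values suggest repetitive, low-signal boilerplate."""
    tokens = text.split()
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

boilerplate = "TODO TODO TODO TODO TODO TODO TODO TODO"
real_text = "parse the config then validate each field before writing output"
print(token_entropy(boilerplate) < token_entropy(real_text))  # → True
```

A gate built on this would flag a diff whose entropy falls below a tuned threshold — a deterministic check, which is the post's central contrast with "suggestion" files.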
🚨 My Django app crashed in Docker… and the issue was NOT where I expected.

It started with a simple change:
👉 Switching from a volume-mounted .env to env_file in Docker Compose

Sounds harmless, right? 💥 It broke everything.

🔍 What actually happened:
- Removed volume: /home/app/.env:/app/.env
- Switched to env_file
- CI/CD created .env
But…
❌ SECRET_KEY missing in CI/CD
❌ Django failed to start
❌ Celery couldn’t connect properly
❌ RabbitMQ setup looked fine but still failing

🧠 The real mistake:
👉 I changed the source of environment variables
👉 But didn’t migrate ALL the variables

🔥 Key lessons:
✅ Volume mount ≠ environment injection
✅ env_file ≠ mounted file
✅ Docker uses service names (rabbitmq), not localhost
✅ Config source change = migrate everything
✅ Logs > assumptions

🚀 Fix:
✔ Reused the volume-mounted .env
✔ Removed env_file
✔ Added CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672//
✔ Restarted all containers

🎯 Result:
✅ Django running
✅ Celery working
✅ RabbitMQ connected
✅ Emails sending successfully

💡 Biggest takeaway:
👉 DevOps is NOT about setup
👉 It’s about understanding how systems behave under change

I documented the full debugging journey here 👇 (real issue → root cause → fix → best practices)
🔗 https://lnkd.in/g9259BEG

#DevOps #Docker #Django #RabbitMQ #Celery #CI_CD #Debugging #AWS
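One generic way to catch this class of failure earlier (a sketch, not this post's actual project code) is to fail fast at startup when required variables are absent, instead of letting Django or Celery die later with a less obvious error. The variable list here is illustrative:

```python
import os

# Illustrative list -- adjust to whatever your settings actually require.
REQUIRED_VARS = ("SECRET_KEY", "CELERY_BROKER_URL")

def check_env(required=REQUIRED_VARS) -> None:
    """Raise at startup if any required variable is unset or empty."""
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"missing environment variables: {', '.join(missing)}")

# Note: inside Docker Compose the broker host is the *service name*,
# not localhost -- e.g. amqp://guest:guest@rabbitmq:5672//
os.environ.setdefault("SECRET_KEY", "dev-only")
os.environ.setdefault("CELERY_BROKER_URL", "amqp://guest:guest@rabbitmq:5672//")
check_env()  # passes here; would raise RuntimeError if either were unset
print("env ok")
```

Run from a Django app's settings module (or an entrypoint script), this turns "Celery couldn't connect" into "missing environment variables: CELERY_BROKER_URL" on the first log line.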
I've spent years debugging the same silent failure mode in production notification systems: welcome emails rendering "Hi !", configs missing variables, push notifications with broken merge fields.

So I built Templane. It's an open-source protocol that brings typed contracts to templates. You drop a small .schema.yaml next to your existing template; the binding refuses to render when the data doesn't match, and reports every missing field, wrong type, or invalid value at once, with "did you mean?" suggestions for typos. The same schema produces the same errors in every language.

Available now on the public registries:
- templane-java 0.4.0 — Maven Central, FreeMarker binding
- templane-python 0.2.0 — PyPI, Jinja2 binding
- templane-ts 0.2.0 — npm, Handlebars binding plus the xt CLI
- templane-go v0.2.0 — Go module proxy, text/template integration
- templane-conform 0.1.0 — npm, cross-language conformance runner

What makes the cross-language guarantee credible: every release is verified against a shared 40-test conformance suite — covering schema parsing, type checking, intermediate representation, and rendering — run through all five implementations. If any diverges on a single test, the build fails. It's the same mechanism gRPC and OpenAPI use to keep cross-language SDKs honest.

Roadmap: more engine bindings (Mustache, Liquid, ERB, Pug, Twig), a TypeScript breaking-change detector, and IDE integration.

I'd genuinely value your review. If you've worked with templates in production, try migrating one and let me know what you find.

🔗 https://lnkd.in/gxd8su2U

Apache 2.0. Built in the open.

#OpenSource #SoftwareEngineering #DeveloperExperience
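The core idea — refuse to render when data doesn't match a declared schema, report all problems at once, and suggest fixes for typos — can be sketched in a few lines. This is illustrative only; Templane's real schema format and error reporting are richer:

```python
import difflib

def validate(schema: dict, data: dict) -> list[str]:
    """Collect *all* problems at once instead of failing on the first."""
    errors = []
    for field, expected_type in schema.items():
        if field not in data:
            # Offer a "did you mean?" hint for likely typos in the data.
            hint = difflib.get_close_matches(field, data.keys(), n=1)
            suggestion = f" (did you mean '{hint[0]}'?)" if hint else ""
            errors.append(f"missing '{field}'{suggestion}")
        elif not isinstance(data[field], expected_type):
            errors.append(f"'{field}' should be {expected_type.__name__}")
    return errors

schema = {"first_name": str, "age": int}
data = {"frist_name": "Ada", "age": "36"}
for err in validate(schema, data):
    print(err)
```

A binding would call something like this before rendering and raise if the list is non-empty — which is exactly what turns a silent "Hi !" into a loud, actionable error.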
Still spending time on Spring Boot code reviews?
• Missed edge cases
• Security gaps slipping through
• Endless PR cycles slowing delivery

There’s a better way. Introducing SpringInsight - an open-source autonomous AI intelligence suite built for Java & Spring Boot teams.

⚡ Run 18 specialized AI agents on your repo
⚡ Get severity-ranked issues + exact fixes
⚡ All in under 60 seconds

🔍 AI Code Review (18 agents) — catch what humans miss:
• OWASP vulnerabilities
• N+1 query issues
• Concurrency bugs
• Architecture flaws

Upgrade Advisor → migrate to Spring Boot 3.5 / 4.0 effortlessly
Reverse Engineering → understand legacy code instantly

🔮 Semantic Code Search
Ask your codebase questions like: 👉 “How does authentication work?”
Get precise, contextual answers grounded in your actual code.
✔ Runs fully local (RAG + vector DB)
✔ Your code never leaves your machine

🤖 SpringTeam (Autonomous Dev Agents)
Describe a feature → AI delivers:
• Code implementation
• JUnit 5 tests
• Documentation
• Ready-to-review PR

⚙️ Built for Real Dev Workflows
• GitHub PR integration (scan only changed files)
• VS Code extension + MCP support + Web UI

💡 MIT licensed. Self-hosted. Zero lock-in.

👉 Get started: https://lnkd.in/dJD_Cmm8
https://lnkd.in/dw_5u9Eb

⭐ If this saves you time, consider starring the repo!

#SpringBoot #Java #AI #DeveloperTools #OpenSource #DevOps #CodeReview
Voice support and human-in-the-loop: these features are tremendous! HITL is mandatory in an agentic infrastructure; in many cases, a fully autonomous agent simply cannot be deployed. Voice support, on the other hand... I love letting my agent do my work just by talking with it. Call me lazy 😉
What's dropped recently in kagent 👇

🎙️ Voice Support
Voice input/output support was added across all agents.

🧑‍💻 Human-in-the-Loop
Two distinct HITL modes were shipped back-to-back:
1. Confirmation mode: agents can pause and ask the user to confirm before taking an action.
2. User input mode: agents can pause mid-task and prompt the user for freeform input.

🧠 Long-Term Memory Store
A memory store was added, enabling agents to retain and access long-term memory across sessions.

📝 Built-in Prompts, Prompt Templates & Context Management
Configurable context management was added for agents, alongside support for built-in prompts and prompt templates, making it easier to reuse and standardize instructions.

⚙️ Go/Python Runtime Selection + Go Module Refactoring
A runtime field was added to the Declarative Agent CRD, allowing users to choose between Go and Python runtimes.

📦 Git-Based Skill Fetching
Agents can now pull skills from Git repositories, with shared auth support and a lightweight init image.

📡 Distributed Tracing + A2A Trace Propagation
The controller was instrumented with distributed tracing, including propagation of traces across agent-to-agent (A2A) calls.

🗄️ Postgres Support + Security Hardening
A `--postgres-database-url-file` flag was added for file-based DB credential injection. A Postgres variant was added to the e2e CI test matrix via a matrix strategy.

🔍 Dynamic Provider/Model Discovery in UI
The UI now dynamically discovers available providers and models, rather than requiring a hardcoded list.

🔐 API Key Passthrough
API keys can now be passed through directly in `ModelConfig`, and a `--token` flag was added to `kagent invoke` for the same purpose from the CLI.

🌐 Global Default Service Account for Agent Deployments
A global default serviceAccountName can now be set for all agent deployments, reducing per-agent boilerplate.

#agenticai #KubeCon #CloudNativeCon
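The confirmation-mode pattern described above is generic enough to sketch in a few lines. This is not kagent's actual API — just the underlying idea: the agent surfaces a pending action and executes it only after explicit approval.

```python
# Generic human-in-the-loop "confirmation mode" pattern (not kagent's API).
class Agent:
    def __init__(self):
        self.pending = None
        self.log = []  # actions actually carried out

    def propose(self, action: str) -> str:
        # Pause: surface the action to the human instead of running it.
        self.pending = action
        return f"Confirm action? {action}"

    def confirm(self, approved: bool) -> None:
        # Only an explicit approval lets the pending action run.
        if approved and self.pending is not None:
            self.log.append(self.pending)
        self.pending = None

agent = Agent()
print(agent.propose("delete deployment 'web'"))  # → Confirm action? delete deployment 'web'
agent.confirm(approved=True)
print(agent.log)
```

User-input mode is the same shape with a freeform answer instead of a yes/no, which is why the two modes could ship back-to-back.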