Antonio Cascella's Post

Excited to share that I've contributed to the latest release of AWS Lambda Powertools for Python (v3.26.0) https://lnkd.in/dG3C_R9P 🚀

This release introduces useful enhancements, including a new Lambda Metadata utility and improvements across logging, batch processing, and more. I'm particularly happy to have contributed to the logging space, helping improve how external loggers can be integrated with Powertools through buffering support, making observability in Lambda even more robust and flexible.

Big thanks to the maintainers and contributors for the collaboration 🙌 If you're building serverless applications on AWS, Powertools is definitely worth checking out.

#AWS #Serverless #Python #OpenSource #CloudComputing #AWSLambda
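For a taste of the buffering feature, here's a rough sketch of buffered logging in Powertools; treat the exact names (LoggerBufferConfig, max_bytes, flush_on_error_log) as indicative of the v3 API rather than definitive:

```python
from aws_lambda_powertools import Logger
from aws_lambda_powertools.logging.buffer import LoggerBufferConfig

# Keep low-severity logs in an in-memory buffer and flush them only when an
# error is logged: quiet CloudWatch on the happy path, full detail on failure.
logger = Logger(
    service="payments",  # example service name
    buffer_config=LoggerBufferConfig(max_bytes=20480, flush_on_error_log=True),
)

def process(event):
    # Stand-in for real business logic.
    return event

def handler(event, context):
    logger.debug("buffered until something goes wrong")
    try:
        return process(event)
    except Exception:
        logger.exception("an error log flushes the buffered debug lines too")
        raise
```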
More Relevant Posts
When I first started my career, I encountered one of the core architectures used across software: message queuing (and its close cousin, pub/sub). At the time, it was complicated and difficult for me to understand. As I started working with it in AWS, I realized how powerful this method of decoupling can be at a systems level. I break down what messages and message queues are in the latest Python Snacks article here: https://lnkd.in/gBxw5JcZ
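A minimal sketch of the pattern with boto3 and SQS; the queue URL and payload are placeholders:

```python
import boto3

# Producer and consumer share only the queue, never each other's APIs --
# that is the decoupling: either side can scale, fail, or deploy independently.
sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # placeholder

# Producer: drop a message on the queue and move on.
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody='{"order_id": 42, "status": "created"}')

# Consumer: long-poll, process, and delete only on success.
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    print("processing:", msg["Body"])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```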
Just implemented a Celery-based background processing layer for a Django real-time messaging system. This layer handles critical async operations that shouldn't block user requests:

• Syncing unread message counters from Redis → PostgreSQL
• Ensuring message state durability beyond the in-memory cache
• Retry-safe background execution with Celery task retries
• A foundation for offline message delivery (push notifications)
• Cleanup hooks for stale real-time states like "typing…" indicators

The key design idea: Redis handles speed, Celery ensures reliability, and the database remains the source of truth. This separation is what makes real-time systems scalable without losing consistency under load.

#Django #Celery #BackendEngineering #SystemDesign #Python #SoftwareEngineering #Scalability
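A minimal sketch of what one of these tasks can look like; the Redis key scheme and the UserInbox model are hypothetical stand-ins:

```python
from celery import shared_task
import redis

r = redis.Redis()  # same Redis instance the chat layer writes counters to

@shared_task(bind=True, max_retries=3, default_retry_delay=5)
def sync_unread_count(self, user_id: int):
    """Flush a user's unread counter from Redis into PostgreSQL."""
    try:
        count = int(r.get(f"unread:{user_id}") or 0)  # hypothetical key scheme
        from myapp.models import UserInbox  # hypothetical Django model

        # The database row, not the cache, remains the source of truth.
        UserInbox.objects.filter(user_id=user_id).update(unread_count=count)
    except Exception as exc:
        # Retry-safe: Celery re-queues with a delay instead of losing the update.
        raise self.retry(exc=exc)
```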
I've been working through how MCP and A2A fit together in practice, specifically how to keep agent context clean as you scale across multiple service domains.

Wrote up what I learned: how progressive reveal lets an MCP server guide agent reasoning through layers of capability discovery, and why pairing that with A2A object handoffs solves the context churn problem that shows up when agents navigate deep tool hierarchies.

There's a working Python MCP server you can clone and run, plus research citations on why context management matters more than most people think.

#MCP #A2A #AIAgents #SoftwareArchitecture #Python
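To make "progressive reveal" concrete, here's an illustrative skeleton using the FastMCP class from the Python MCP SDK; the domains and tool names are invented for the example:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("progressive-reveal-demo")

# Hypothetical capability layers: expose a cheap discovery tool first, so the
# agent only pulls detailed tool listings for the domain it actually needs.
CAPABILITIES = {
    "billing": ["get_invoice", "refund_payment"],
    "support": ["open_ticket", "escalate_ticket"],
}

@mcp.tool()
def list_domains() -> list[str]:
    """List the service domains this server covers. Call this first."""
    return list(CAPABILITIES)

@mcp.tool()
def list_tools_for_domain(domain: str) -> list[str]:
    """Reveal the tools within one domain, keeping unrelated tools out of context."""
    return CAPABILITIES.get(domain, [])

if __name__ == "__main__":
    mcp.run()
```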
How many unused Lambda functions are running in your AWS account right now? I built a simple Python script that identified 53 wasteful functions within minutes, highlighting gaps in visibility and cost optimization in serverless setups. Take a look: https://lnkd.in/d9evji-B
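The approach fits in a few lines; a sketch of the idea with boto3, flagging functions with zero invocations over the last 30 days:

```python
import boto3
from datetime import datetime, timedelta, timezone

lambda_client = boto3.client("lambda")
cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
start = end - timedelta(days=30)

# Walk every function and sum its daily Invocations datapoints.
for page in lambda_client.get_paginator("list_functions").paginate():
    for fn in page["Functions"]:
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/Lambda",
            MetricName="Invocations",
            Dimensions=[{"Name": "FunctionName", "Value": fn["FunctionName"]}],
            StartTime=start,
            EndTime=end,
            Period=86400,  # one datapoint per day
            Statistics=["Sum"],
        )
        if sum(dp["Sum"] for dp in stats["Datapoints"]) == 0:
            print("possibly unused:", fn["FunctionName"])
```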
The MCP Toolbox for Databases is an open-source server that simplifies complex developer workflows by handling authentication, connection pooling, and observability out of the box. https://lnkd.in/gTFJR5b4

- Bridges the gap between artificial intelligence agents and enterprise data systems.
- Connects AI models and IDEs to databases such as PostgreSQL, BigQuery, and MongoDB using the Model Context Protocol.
- Offers prebuilt tools for immediate data exploration, alongside a custom framework for building secure, specialized queries.
- Provides SDKs for Go, Python, and JavaScript, allowing for seamless integration into popular AI orchestration frameworks.
Everyone reaches for Python when building MCP servers. I am building mine in C#. Here is the decision I had to make.

Most MCP servers are simple. A tool or two, some API calls, done. Python is fine for that, and the ecosystem is hard to beat.

But my current project is different. I am building an MCP server that processes large documents for RAG and vector search. High-volume file ingestion, chunking strategies, embedding pipelines. The kind of work where memory management and throughput actually matter. That is where Python starts to feel soft.

C# handles memory pressure better at scale. The type system catches integration bugs before runtime. And for anyone already working in the Azure and Microsoft ecosystem, the tooling fits naturally.

Here is how I think about the decision:

If you are prototyping or your MCP server is lightweight, use Python. The SDK support is mature, the AI ecosystem lives there, and you will move faster.

If your MCP server is doing heavy lifting (processing large files, running ingestion pipelines, sitting inside enterprise infrastructure), C# is worth the setup cost.

MCP servers tend to be focused, single-purpose implementations. That simplicity actually removes the main argument for Python. You are not saving weeks of development time. You are just picking the right tool for the workload.

I will share what I find as this project develops.

#MCP #CSharp #Python #RAG #AIEngineering #DeveloperTools #AI
Are messy Python dependencies and 'it works on my machine' debugging slowing down your data projects? Environment inconsistencies can derail progress and frustrate your team. It's a persistent problem, but you can finally conquer it! 😤

Discover how Docker creates consistent, reproducible environments. Package your Python code, its exact version, and all system libraries into a single, portable unit. Build, share, and deploy your data solutions identically across any machine or cloud, eliminating headaches. ✨

Our beginner's guide walks you through containerizing everything: from data cleaning scripts and FastAPI-powered ML models to multi-service pipelines with Docker Compose and scheduled cron tasks. Say goodbye to environment debugging and accelerate your development lifecycle. Ready for seamless consistency? 🚀

**Comment "DockerData" to get the full article**

Learn more about building consistent Python & Data Project environments with Docker: https://lnkd.in/gQQmtBnF

Ready to see where your business stands in the rapidly evolving world of AI? Take our quick evaluation to benchmark your AI readiness and unlock your potential! https://lnkd.in/g_dbMPqx

#Docker #Python #DataEngineering #DevOps #Containerization #SaizenAcuity
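The core of the idea boils down to a few lines; a minimal sketch, assuming a hypothetical requirements.txt and entry script:

```dockerfile
# Pin the exact Python version so every machine runs the same interpreter.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so Docker caches this layer between code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the project code and set the default command.
COPY . .
CMD ["python", "clean_data.py"]
```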
We just published a step-by-step guide to building your first MCP server in Python. But the most important thing we learned building production MCP servers for clients isn't in any official doc:

**Your tool description is worth more than your code.**

Here's why: when an LLM decides whether to call your tool, it reads the docstring. Not the function name. Not the parameter types. The description.

We had a client whose MCP tool for querying their CRM was called 300 times in a session, causing API rate limits. The tool description was: "Get customer data."

We changed it to: "Retrieve full customer profile by customer ID. Use only when the user asks about a specific customer. Do not call multiple times for the same customer in one conversation."

Calls dropped by 80%. Your tool descriptions are prompt engineering. Treat them that way.

→ Full tutorial with 6 steps + production deployment: https://lnkd.in/dK7skARH
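In code, the fix is nothing more than a better docstring; an illustrative sketch using the Python SDK's FastMCP, with the CRM lookup as a stand-in:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm-demo")

@mcp.tool()
def get_customer_profile(customer_id: str) -> dict:
    """Retrieve the full customer profile by customer ID.

    Use only when the user asks about a specific customer.
    Do not call multiple times for the same customer in one conversation.
    """
    # Hypothetical lookup; a real server would query the CRM API here.
    return {"customer_id": customer_id, "status": "active"}

if __name__ == "__main__":
    mcp.run()
```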
I spent a weekend asking one question: why does it take analysts 2+ hours to answer "why did revenue drop?"

The answer: the data exists. The pipeline exists. But nobody automated the "so what."

So I built MetricPulse, a root cause analysis engine that detects metric anomalies and explains them in plain English in under 30 seconds.

Stack: AWS S3 → Redshift → dbt → Python → Django → SNS alerts

The part that surprised me most? The hardest problem wasn't the ML. It was making the output readable to someone who doesn't live in the data.

Live demo in the comments.
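To illustrate the "plain English" part, here's a toy stand-in (not MetricPulse's actual logic) that flags a metric move beyond three standard deviations and phrases it for a non-technical reader:

```python
import statistics

def explain_anomaly(metric: str, history: list[float], today: float) -> str | None:
    """Return a readable explanation if today's value is a >3-sigma outlier."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0 or abs(today - mean) < 3 * stdev:
        return None  # within normal variation
    direction = "dropped" if today < mean else "spiked"
    pct = abs(today - mean) / mean * 100
    return f"{metric} {direction} {pct:.0f}% versus its 30-day average."

# Prints: Revenue dropped 40% versus its 30-day average.
print(explain_anomaly("Revenue", [100.0] * 29 + [102.0], 60.0))
```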
⚡ Kafka Series – Day 2

Partitioning is NOT just about scaling. It is one of the most critical design decisions in Kafka, because partitioning directly controls:

👉 Ordering guarantees
👉 Parallelism
👉 Load distribution
👉 Data locality

💡 In Python producers (confluent-kafka / kafka-python), every message can have a key. That key decides:

👉 Which partition the message goes to
👉 Which consumer processes it
👉 Whether ordering is preserved

🔥 Real-world mistake I've seen: using random keys (or no key at all). The result:

❌ Messages scattered randomly
❌ Ordering broken
❌ Debugging becomes impossible

💡 What I do instead: I choose keys based on business boundaries. Examples:

user_id → for user events
order_id → for transactions
account_id → for financial flows

This ensures:

✔️ All related events go to the same partition
✔️ Ordering is preserved
✔️ Easier debugging

⚠️ But there's a catch: a bad key choice creates hot partitions. For example, if one user generates 80% of the traffic, one partition gets overloaded.

🔥 Rule I follow: "Partition key = consistency boundary, not convenience"

💬 Question: what key strategy are you using: random, round-robin, or business-driven?

#Kafka #SystemDesign #Scalability #Python #DataEngineering
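Here's what business-driven keying looks like with confluent-kafka; the broker address and topic name are placeholders:

```python
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder broker

def publish_user_event(user_id: str, payload: str) -> None:
    # Key by business boundary (user_id) so all of this user's events hash
    # to the same partition and keep their relative order.
    producer.produce("user-events", key=user_id, value=payload)

publish_user_event("user-123", '{"event": "login"}')
publish_user_event("user-123", '{"event": "checkout"}')  # same partition, ordered
producer.flush()
```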