For years, the AWS Lambda Handler Cookbook was missing one thing I kept putting off: real, production-grade CRUD across multiple functions with a single, unified Swagger. v9.6.0 finally fixes that, thanks to the alpha OpenAPI feature in Powertools for AWS Lambda's event handler.

What's new in v9.6.0:
🔧 Create, get, and delete order APIs as micro Lambda functions over DynamoDB
📄 Unified OpenAPI schema generated across all endpoints
🔍 Automated detection of breaking API changes in CI
📑 Swagger published to GitHub Pages and always in sync with the code

What you get overall in the cookbook template:
🏗️ Production-ready serverless project in Python with CDK infrastructure
🧪 Five testing strategies: unit, integration, infrastructure, security, and E2E
⚙️ CI/CD with GitHub Actions across dev, staging, and production environments
📊 CloudWatch dashboards and alarms with SNS notifications out of the box
🔒 WAF protection, input validation with Pydantic, and idempotent API design
🏷️ Feature flags and dynamic configuration via AppConfig
📈 Business KPI metrics and distributed tracing with Powertools for AWS Lambda

Thanks to Leandro Cavalcante Damascena for developing the Powertools OpenAPI feature that enabled the unified schema. I hope you merge it soon :)

🔗 https://lnkd.in/dZe74TCc

#AWSLambda #Serverless #AWS #OpenAPI #PowertoolsForAWS #PlatformEngineering
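For a flavor of what one of those micro functions looks like, here is a minimal sketch of a single endpoint using the Powertools event handler with validation enabled. The route, model fields, and response shape are illustrative placeholders, not the cookbook's actual code:

from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from pydantic import BaseModel

app = APIGatewayRestResolver(enable_validation=True)

# Hypothetical request model; the real cookbook schemas differ
class CreateOrderRequest(BaseModel):
    customer_name: str
    item_count: int

@app.post("/api/orders")
def create_order(order: CreateOrderRequest) -> dict:
    # A real handler would persist the order to DynamoDB here
    return {"name": order.customer_name, "items": order.item_count}

def lambda_handler(event, context):
    # Powertools routes the API Gateway event to the matching function
    return app.resolve(event, context)

Each endpoint lives in its own small function like this, and the unified schema stitches their OpenAPI definitions into one Swagger page.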
Storing AWS keys in GitHub Secrets is like hiding a spare house key under the doormat. Everyone knows to look there. The final part of my streaming pipeline series is live — and it's about doing this properly.

🔗 Zero Stored Credentials: CI/CD for a Streaming Data Pipeline with GitHub Actions and OIDC

Instead of static keys, GitHub and AWS exchange short-lived cryptographic tokens at deploy time via OIDC. Nothing to store. Nothing to rotate. Nothing to accidentally commit.

The pipeline also covers:
→ A multi-stage workflow: lint → Terraform plan → 4 parallel Docker builds → apply with a manual approval gate
→ Tagging images with git commit SHAs for full deployment traceability
→ Why the plan artifact gets uploaded between plan and apply (it's subtle, but it matters)

This wraps up the four-part series on the nasdaq-equity-kafka-flink-streaming-pipeline — Kafka, Flink, Lambda orchestration, and now CI/CD, all on AWS for a few dollars a month.

🔗 GitHub repo: https://lnkd.in/gctWrZ7f
🔗 Medium post: https://lnkd.in/gPWetyj5

#DataEngineering #GitHubActions #OIDC #Terraform #AWS #CICD #StreamProcessing
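If you have not set this up before, the heart of the workflow is small. A minimal sketch, assuming a pre-created IAM role that trusts GitHub's OIDC provider (the role ARN and region below are placeholders, not values from the pipeline):

permissions:
  id-token: write   # lets the job request an OIDC token from GitHub
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-deploy   # placeholder ARN
          aws-region: us-east-1
      # Short-lived credentials now exist only for the duration of this job

The trust policy on the AWS side can additionally pin the token to a specific repo and branch, which is what makes the "nothing to rotate" claim hold.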
Just uploaded the full project to GitHub 🔗

If you're learning Azure Functions or serverless architecture, the repo has everything:
• Clean commit history that shows the actual development process (not just the final result)
• Detailed README breaking down what I built and what I learned
• Proper project structure, with .gitignore protecting secrets, requirements.txt documenting all dependencies, and function_app.py with the blob trigger implementation

The goal wasn't just to build it, it was to build it the right way. Professional workflow, clean code, honest documentation.

Check it out and let me know if you have questions about the project or serverless architecture.

GitHub: https://lnkd.in/eziE_HVb

#GitHub #AzureFunctions #Python #OpenSource #CloudComputing
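For anyone curious what a blob trigger looks like in the v2 Python programming model, here is a minimal sketch; the container path and connection setting name are illustrative, not necessarily what the repo uses:

import logging
import azure.functions as func

app = func.FunctionApp()

# Fires whenever a new blob lands in the watched container
@app.blob_trigger(arg_name="blob", path="uploads/{name}",
                  connection="AzureWebJobsStorage")
def process_upload(blob: func.InputStream):
    logging.info("Processing %s (%s bytes)", blob.name, blob.length)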
If you have been running Docker containers one by one, there is a much better way. Docker Compose lets you define your entire application stack in a single YAML file and spin it all up with one command. 🐳

We just published a complete beginner's guide on xCloud covering what Docker Compose is, how the docker-compose.yml file works, a real Node.js and MySQL example, and a full commands cheat sheet to keep handy.

Whether you are new to Docker or building out a local dev environment, this guide gives you everything you need to get started confidently.

https://lnkd.in/gSu-sE8f

#DockerCompose #Docker #DevOps
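As a taste of what the guide walks through, here is a minimal two-service sketch; the service names, ports, and credentials are placeholders:

# docker-compose.yml
services:
  app:
    build: .                 # Node.js app built from the local Dockerfile
    ports:
      - "3000:3000"
    environment:
      DB_HOST: db            # the service name doubles as the hostname
    depends_on:
      - db
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder only; use secrets in real setups
    volumes:
      - db_data:/var/lib/mysql       # persist data across restarts
volumes:
  db_data:

One docker compose up brings up both containers on a shared network.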
Day 59: The .tfvars File — Organizing Your Environments 📂

Today we tackle the "Management" side of variables. We know how to declare variables, but in a real-world company, you don't just have one set of values. You have different requirements for Dev, Staging, and Production.

Using a .tfvars file is like having a Configuration Dashboard for your infrastructure. It allows you to keep your logic (.tf files) separate from your settings (.tfvars files).

1. Why Separate the Values?
In a professional setting, your Terraform code should be a "Generic Template."
The Logic (variables.tf & main.tf): defines what to build (e.g., "an EC2 instance with a security group").
The Values (terraform.tfvars): defines how to build it for a specific environment (e.g., "use t3.large for Prod").
Benefit: You can hand your code to a teammate, and they only need to look at the .tfvars file to understand the configuration, without getting lost in the complex resource blocks.

2. The "Automatic" File: terraform.tfvars
Terraform looks for a specific filename by default. If you name your file exactly terraform.tfvars, Terraform will automatically load those values when you run plan or apply.
Example terraform.tfvars content:
instance_type = "t2.medium"
vpn_ip = "1.2.3.4/32"
env_name = "development"

3. Managing Multiple Environments (The -var-file Flag)
In a real job, you might have dev.tfvars, stage.tfvars, and prod.tfvars in the same folder. Terraform won't load these automatically (to prevent you from accidentally deploying to Prod!). To use these specific files, you must point Terraform to them on the command line:
# To deploy to Development
terraform plan -var-file="dev.tfvars"
# To deploy to Production
terraform apply -var-file="prod.tfvars"

4. Best Practice: Keep Secrets Out of .tfvars
While .tfvars files are great for organization, never put your AWS secret keys or database passwords in them if you plan to push your code to GitHub.
The solution: add *.tfvars to your .gitignore file, or use a secrets manager to inject those specific values at runtime.

#Terraform #DevOps #IaC #TfVars #EnvironmentManagement #CloudArchitecture #SoftwareEngineering #Automation #BestPractices #MultiCloud
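To make the pairing concrete, here is a minimal sketch of how the declaration and the values line up; the variable name is chosen to match the example above:

# variables.tf — the logic side declares what can vary
variable "instance_type" {
  type        = string
  description = "EC2 size for this environment"
}

# dev.tfvars — the settings side supplies the value per environment
instance_type = "t2.medium"

# prod.tfvars
instance_type = "t3.large"

Same main.tf, two different instance sizes, selected with the -var-file flag shown above.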
Pod crashed at 3 AM. With raw YAML, I spent 40 minutes finding which ConfigMap was wrong. With Helm, I found it in 30 seconds.

I deployed the same microservices platform twice — AWS EKS (raw manifests) vs Azure AKS (Helm chart). The debugging experience was night and day.

❌ The Debugging Nightmare with Raw YAML:
• MongoDB OOMKilled → Which file has the memory limits?
• Frontend permission denied → Is it the Deployment or ConfigMap?
• Service selector mismatch → Check 3 files to verify label consistency
• "Which version is running?" → No single source of truth
• Rollback = Git checkout + manual kubectl apply + hope you remember what changed

✅ The Helm Debugging Experience:
• helm get values myapp → See entire config in one output
• helm history myapp → See every deployment with timestamps
• helm rollback myapp 3 → Instant recovery to known-good state
• helm template . --debug → Catch errors before deployment
• Labels auto-generated from templates → Impossible to mismatch

📊 Real Incident Response Times:
Scenario: MongoDB memory limit too low (OOMKilled)

Raw YAML approach (EKS):
1. Check pod status (2 min)
2. Find mongodb.yaml in the folder structure (5 min)
3. Locate the resources section (3 min)
4. Edit, apply, verify (8 min)
Total: ~18 minutes

Helm approach (AKS):
1. Check pod status (2 min)
2. helm get values myapp → see mongodb.resources.limits.memory: 200Mi (30 seconds)
3. Edit values.yaml, helm upgrade myapp . (2 min)
Total: ~5 minutes

🔧 How Helm Saved Me During Real Production Issues:

Issue 1: Frontend CrashLoopBackOff (nginx port 80 permission denied)
Helm approach — trace the problem:
→ helm get values myapp | grep -A 5 frontend
→ Found: containerPort: 80 (requires root)
→ Fixed: values.yaml → containerPort: 8080
→ Deploy: helm upgrade myapp .
→ Time: 3 minutes

Issue 2: Label selector mismatch (Services not finding Pods)
Raw YAML: labels defined in 3 places — Deployment.spec.selector.matchLabels, Deployment.spec.template.metadata.labels, Service.spec.selector. Easy to mismatch, hard to debug.
Helm: labels come from _helpers.tpl via {{ include "myapp.selectorLabels" . }}. Impossible to mismatch, single source of truth.

Issue 3: Rollback after a bad config
Raw YAML: git revert + kubectl apply (manual, error-prone)
Helm: helm rollback myapp 2 (atomic, tested)

💡 Key insight: The biggest reliability win wasn't during deployment — it was during incident response at 2 AM. When you're sleep-deprived and production is down, helm history + helm rollback is the difference between a 5-minute recovery and a 45-minute panic.

Real example from last week: I accidentally set MongoDB memory to 512M instead of 512Mi. The pod crashed. With Helm revision history, I saw exactly what changed between revision 3 (working) and revision 4 (broken). Rollback took one command.

#Kubernetes #SRE #Helm #DevOps #IncidentResponse #ProductionReady #AzureAKS
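For readers who have not seen the _helpers.tpl trick, here is a minimal sketch of the single-source-of-truth labels pattern; the chart name "myapp" is a placeholder:

{{/* templates/_helpers.tpl — selector labels defined exactly once */}}
{{- define "myapp.selectorLabels" -}}
app.kubernetes.io/name: {{ .Chart.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

# templates/service.yaml (excerpt) — the Deployment does the same
spec:
  selector:
    {{- include "myapp.selectorLabels" . | nindent 4 }}

Because every selector is rendered from one template definition, a label mismatch between Service and Pods can't creep in through hand-editing a single file.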
💥 Just completed an end-to-end CI/CD pipeline integrating Jenkins with AWS CodeBuild and CodeDeploy!

Building on my previous work with AWS Copilot and ECS, I wanted to deepen my understanding of pipeline orchestration — this time with Jenkins as the central coordinator. The goal: automate the journey from a git push to a running Flask application on EC2.

The architecture:
GitHub → Jenkins (Poll SCM) → AWS CodeBuild → S3 → AWS CodeDeploy → EC2
Jenkins orchestrates, while AWS services handle the heavy lifting.

What I built:
✅ Jenkins server on Amazon Linux 2023 with AWS CodeBuild, CodeDeploy, File Operations, and HTTP Request plugins
✅ Four IAM roles with least-privilege scoping
✅ CodeBuild project that pulls from GitHub, runs unit tests, and outputs artifacts to S3
✅ CodeDeploy in-place deployment across two tagged EC2 app servers
✅ Jenkins freestyle project with SCM polling and a CodeDeploy post-build action

The real learning came from troubleshooting:
🔧 Java version mismatch — current Jenkins LTS requires Java 21, but the user data script installed Java 17. Diagnosed via journalctl, patched the systemd override.
🔧 Plugin UI drift — the AWS CodeBuild Jenkins plugin now requires selecting "Use Project source" via radio button, not the legacy dropdown.
🔧 Python version incompatibility — sample scripts called python3.7 and bare python, neither of which exists on AL2023. Patched with sed and pushed a fix.
🔧 CodeDeploy state corruption — failed deployments cache scripts in /opt/codedeploy-agent/deployment-root/, causing the agent to run OLD ApplicationStop scripts before downloading new bundles. Resolved by clearing the archive and restarting the agent.
🔧 File collision protection — CodeDeploy refuses to overwrite existing files. Cleaning /web/* on both app servers got past it.

Key takeaways:
🔵 CodeDeploy lifecycle event logs are the fastest path to diagnosing failures — drill three clicks deep, the error is always there.
🔵 Tutorials age faster than the underlying tools. Java versions, plugin UIs, and distro defaults all change. The fundamentals stay the same.
🔵 Jenkins's flexibility is a double-edged sword — managing plugin compatibility is ongoing work, but the trade-off is portability across providers.
🔵 The CodeDeploy agent's caching behavior is a real gotcha: one failed deployment can block all future ones until the cache is cleared.

Code on GitHub: https://lnkd.in/g54d7BYD

Big thanks and shoutout to the AWS docs and Jenkins community for the troubleshooting breadcrumbs!

#AWS #DevOps #CICD #Jenkins #CodeBuild #CodeDeploy #CloudComputing #InfrastructureAsCode #Automation
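For anyone hitting the same Java mismatch, the fix is roughly this shape; the package and path names below are typical for Amazon Linux 2023 rather than copied from my exact setup:

# Install a Jenkins-compatible JDK
sudo dnf install -y java-21-amazon-corretto

# Open a systemd override for the Jenkins unit...
sudo systemctl edit jenkins
# ...and point it at the new JDK:
#   [Service]
#   Environment="JAVA_HOME=/usr/lib/jvm/java-21-amazon-corretto"

sudo systemctl daemon-reload
sudo systemctl restart jenkins
journalctl -u jenkins -f   # confirm it comes up clean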
🚨 “𝗪𝗵𝘆 𝗱𝗼𝗲𝘀 𝗺𝘆 𝗧𝗲𝗿𝗿𝗮𝗳𝗼𝗿𝗺 𝗰𝗼𝗱𝗲 𝗯𝗿𝗲𝗮𝗸 𝗲𝘃𝗲𝗿𝘆 𝘁𝗶𝗺𝗲 𝗜 𝗱𝗲𝗽𝗹𝗼𝘆 𝘁𝗼 𝗮 𝗻𝗲𝘄 𝗲𝗻𝘃𝗶𝗿𝗼𝗻𝗺𝗲𝗻𝘁?”
Because you’re not using variables the right way. Let’s fix that — using a real Azure example (beginner-friendly, no jargon).

🌍 𝗥𝗲𝗮𝗹 𝗦𝗰𝗲𝗻𝗮𝗿𝗶𝗼 (𝗔𝘇𝘂𝗿𝗲 + 𝗧𝗲𝗿𝗿𝗮𝗳𝗼𝗿𝗺)
You want to create an Azure Resource Group. Basic Terraform code looks like this:
resource "azurerm_resource_group" "rg" {
  name     = "my-app-dev-rg"
  location = "East US"
}
Looks fine… until:
❌ You need a different name for prod
❌ You need a different location
❌ You deploy multiple environments
Now you’re stuck editing code again and again 😓

🔥 𝗘𝗻𝘁𝗲𝗿 𝗩𝗮𝗿𝗶𝗮𝗯𝗹𝗲𝘀 (𝗬𝗼𝘂𝗿 𝗥𝗲𝗮𝗹 𝗣𝗼𝘄𝗲𝗿)
Terraform variables follow a simple flow:
👉 𝟭. 𝗗𝗲𝗰𝗹𝗮𝗿𝗲 → 𝟮. 𝗨𝘀𝗲 → 𝟯. 𝗔𝘀𝘀𝗶𝗴𝗻
Let’s break this down step-by-step.

🧠 𝟭. 𝗗𝗘𝗖𝗟𝗔𝗥𝗘 (𝗧𝗲𝗹𝗹 𝗧𝗲𝗿𝗿𝗮𝗳𝗼𝗿𝗺 𝘄𝗵𝗮𝘁 𝗰𝗮𝗻 𝗰𝗵𝗮𝗻𝗴𝗲)
variable "rg_name" {
  description = "Name of the Resource Group"
  type        = string
}
variable "location" {
  description = "Azure Region"
  type        = string
}
You’re basically saying: 👉 “Hey Terraform, these values will come later.”

⚙️ 𝟮. 𝗨𝗦𝗘 (𝗣𝗹𝘂𝗴 𝘃𝗮𝗿𝗶𝗮𝗯𝗹𝗲𝘀 𝗶𝗻𝘁𝗼 𝘆𝗼𝘂𝗿 𝗰𝗼𝗱𝗲)
resource "azurerm_resource_group" "rg" {
  name     = var.rg_name
  location = var.location
}
Now your code is dynamic instead of hardcoded 🔄

🎯 𝟯. 𝗔𝗦𝗦𝗜𝗚𝗡 (𝗚𝗶𝘃𝗲 𝗮𝗰𝘁𝘂𝗮𝗹 𝘃𝗮𝗹𝘂𝗲𝘀)
This is where most beginners get confused 👇 There are 3 powerful ways to assign values:

✅ 𝗠𝗲𝘁𝗵𝗼𝗱 𝟭: 𝗖𝗟𝗜 𝗜𝗻𝗽𝘂𝘁 (𝗠𝗮𝗻𝘂𝗮𝗹 𝗘𝗻𝘁𝗿𝘆)
terraform apply -var="rg_name=my-app-dev-rg" -var="location=East US"
👉 Best when:
• You want quick testing
• You want user input at runtime
💡 Think of it like filling a form before execution

✅ 𝗠𝗲𝘁𝗵𝗼𝗱 𝟮: 𝗗𝗲𝗳𝗮𝘂𝗹𝘁 𝗩𝗮𝗹𝘂𝗲𝘀 (𝗔𝘂𝘁𝗼 𝗔𝘀𝘀𝗶𝗴𝗻𝗺𝗲𝗻𝘁)
variable "location" {
  description = "Azure Region"
  type        = string
  default     = "East US"
}
👉 If no value is passed, Terraform uses this automatically
💡 Best for:
• Common values
• Reducing repetition

✅ 𝗠𝗲𝘁𝗵𝗼𝗱 𝟯: 𝘁𝗲𝗿𝗿𝗮𝗳𝗼𝗿𝗺.𝘁𝗳𝘃𝗮𝗿𝘀 𝗙𝗶𝗹𝗲 (𝗣𝗿𝗼𝗳𝗲𝘀𝘀𝗶𝗼𝗻𝗮𝗹 𝗪𝗮𝘆)
Create a file:
rg_name = "my-app-prod-rg"
location = "Central India"
Run:
terraform apply
👉 Terraform automatically picks values from this file
💡 Best for:
• Managing multiple environments
• Clean & scalable setups

🧩 Real DevOps Flow
👉 dev.tfvars
👉 staging.tfvars
👉 prod.tfvars
Same code. Different configs.
terraform apply -var-file=dev.tfvars
terraform apply -var-file=prod.tfvars
No duplication. No chaos. Just clean infrastructure 🚀

🔥 𝗙𝗶𝗻𝗮𝗹 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆
𝗩𝗮𝗿𝗶𝗮𝗯𝗹𝗲𝘀 𝗶𝗻 𝗧𝗲𝗿𝗿𝗮𝗳𝗼𝗿𝗺:
👉 𝗗𝗲𝗰𝗹𝗮𝗿𝗲 → 𝗗𝗲𝗳𝗶𝗻𝗲 𝘄𝗵𝗮𝘁 𝗰𝗮𝗻 𝗰𝗵𝗮𝗻𝗴𝗲
👉 𝗨𝘀𝗲 → 𝗠𝗮𝗸𝗲 𝘆𝗼𝘂𝗿 𝗰𝗼𝗱𝗲 𝗳𝗹𝗲𝘅𝗶𝗯𝗹𝗲
👉 𝗔𝘀𝘀𝗶𝗴𝗻 → 𝗖𝗼𝗻𝘁𝗿𝗼𝗹 𝗯𝗲𝗵𝗮𝘃𝗶𝗼𝗿 𝗽𝗲𝗿 𝗲𝗻𝘃𝗶𝗿𝗼𝗻𝗺𝗲𝗻𝘁
GitHub has 150M+ developers. 420M+ repositories. And zero cross-repo intelligence. 🫠

🔔 Use Bytebell MCP to cut AI code copilot and agent costs by 90%.

Their Copilot breaks beyond 3000+ files. Their own community has been begging for org-wide search for years. They still haven't shipped it. They won't. They are too busy fighting Cursor on IDE, GitLab on CI/CD, and burning cash on Copilot adoption. Don't wait for them. 🫥

Meanwhile your engineering team is drowning. 🫨
74% of teams are afraid to touch shared code because they don't know what depends on it.
57% report production breaks from dependency chains nobody could see.
Developers spend 58% of their working time just understanding existing code, not writing new code.
You update an API contract in one repo and 23 downstream services break in production. You don't find out until the pager goes off at 3am. 🪦

This is not a tooling gap. This is structural blindness. Every AI coding tool today sees one repo at a time. Copilot, Cursor, Claude Code, Codex. All of them. 🫣

One solution could be to run AI copilots across all your repositories simultaneously and let them read everything. 🧱 But a 200K-token window fills up in minutes when you're reading across 50 repos. The model triggers auto-compaction.
► File paths gone.
► Error messages gone.
► Debugging state gone.
After 3 to 4 compaction cycles the agent is generating code based on fragments of fragments.
► Claude Opus drops from 92% accuracy at 256K tokens to 78% at 1M.
► GPT drops from 80% to 37%.
You either get stuck in a compaction death spiral 🌀 or you get degraded accuracy that silently ships bugs into production. Brute-force reading across repos is not a solution. It is a more expensive version of the same problem. 🫗

The most obvious answer to "someone will build this eventually" is the same answer for every infrastructure layer in history.
► MongoDB is open source. MongoDB Atlas is a $1.7B business.
► Redis is open source. Redis Cloud prints money.
► Kubernetes is open source. Every cloud provider charges you to manage it.
Code is not rocket science. Running it at scale for enterprises is. 🏗️

Cross-repo intelligence is the same kind of problem. Everyone knows it needs to exist. Nobody has built the managed infrastructure layer for it. Until now. 🛸

bytebell.ai

#AI #DevTools #CodeIntelligence #ContextEngine #Engineering
💥 I made a small mistake in Kubernetes… and it broke everything.

While deploying MongoDB + Mongo Express on Kubernetes, I hit a scary error:
👉 CreateContainerConfigError

At first, I thought something was wrong with my containers, YAML, or even Minikube setup. But the real issue? ❌ Just ONE mismatched name.
I referenced: mongo-secret
But actually created: mongodb-secret
That's it. One tiny inconsistency = entire app fails.

👉 What I learned from this:
• Kubernetes is VERY strict about naming
• Secret & ConfigMap references must match their names EXACTLY
• kubectl describe pod is your best debugging weapon

👉 After fixing it, everything worked perfectly:
✔ MongoDB connected
✔ Mongo Express UI running
✔ Services communicating via internal DNS

If you're learning Kubernetes, don't just watch tutorials — 👉 build real projects and break things. That's where real learning happens.

📖 Full step-by-step guide: https://lnkd.in/gVCzAhHs
💻 Full source code: https://lnkd.in/g3vgsvQh

#Kubernetes #DevOps #MongoDB #CloudNative #Backend #LearningInPublic #100DaysOfCode
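If you want to see exactly where the names have to line up, here is a minimal sketch; the key names and base64 value are illustrative, not from my manifests:

# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret            # this exact string...
type: Opaque
data:
  mongo-root-username: dXNlcg==   # base64("user")

# deployment.yaml (excerpt)
env:
  - name: MONGO_INITDB_ROOT_USERNAME
    valueFrom:
      secretKeyRef:
        name: mongodb-secret      # ...must match here, or you get CreateContainerConfigError
        key: mongo-root-username

kubectl describe pod surfaces the failing reference in the Events section, which is how the mismatch shows up.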
Every developer is talking about Cursor and GitHub Copilot. Nobody is talking about the one that might beat them both. AWS Kiro — here's everything you need to know 👇

𝗪𝗵𝗮𝘁 𝗶𝘀 𝗞𝗶𝗿𝗼?
An AI-powered IDE built by AWS. Built on Code OSS, so you keep all your VS Code settings and plugins. Free to use during preview. Powered by Claude under the hood.

𝗪𝗵𝗮𝘁 𝗺𝗮𝗸𝗲𝘀 𝗶𝘁 𝗱𝗶𝗳𝗳𝗲𝗿𝗲𝗻𝘁 𝗳𝗿𝗼𝗺 𝗖𝘂𝗿𝘀𝗼𝗿?
Cursor and Copilot take your prompt → generate code immediately.
Kiro takes your prompt → generates a spec first → then generates code.
This is called spec-driven development. And it changes everything.

𝗛𝗼𝘄 𝗶𝘁 𝘄𝗼𝗿𝗸𝘀 — 𝟯 𝘀𝘁𝗲𝗽𝘀:
𝗦𝘁𝗲𝗽 𝟭 — 𝗦𝗽𝗲𝗰𝘀
Type: "Add a review system for products"
Kiro generates → user stories, acceptance criteria, edge cases. You review and approve before a single line of code is written.
𝗦𝘁𝗲𝗽 𝟮 — 𝗗𝗲𝘀𝗶𝗴𝗻
Kiro analyses your codebase → generates data flow diagrams, database schemas, API endpoints, TypeScript interfaces. You know exactly what will be built before it's built.
𝗦𝘁𝗲𝗽 𝟯 — 𝗧𝗮𝘀𝗸𝘀
Kiro generates tasks and sub-tasks, sequences them by dependencies, and links each task to requirements. Unit tests, integration tests, loading states — all included automatically.

𝗞𝗲𝘆 𝗳𝗲𝗮𝘁𝘂𝗿𝗲𝘀:
→ Hooks — automations that trigger in the background: auto git commits, auto documentation updates, auto code quality checks
→ MCP support — connects to databases, APIs, AWS docs, any external tool
→ Steering rules — guide AI behaviour across your entire project
→ SageMaker integration — connect directly to AWS infrastructure from your IDE

𝗖𝘂𝗿𝘀𝗼𝗿 𝘃𝘀 𝗞𝗶𝗿𝗼 𝗰𝗼𝗺𝗽𝗮𝗿𝗶𝘀𝗼𝗻:
→ Cursor wins for fast iteration and quick fixes
→ Kiro wins for complex features that need upfront design
→ Kiro is the better choice if you build on AWS

𝗙𝘂𝗻 𝗳𝗮𝗰𝘁: During early access, an engineer used Kiro to build an AWS integration, and Kiro's agent code triggered a cascade that caused a real AWS service disruption. The internet called it "vibe too hard, brought down AWS." 😂

𝗧𝗵𝗲 𝘀𝗵𝗶𝗳𝘁 𝗳𝗿𝗼𝗺 𝘃𝗶𝗯𝗲 𝗰𝗼𝗱𝗶𝗻𝗴 𝘁𝗼 𝗽𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻-𝗿𝗲𝗮𝗱𝘆 𝗰𝗼𝗱𝗲 𝗶𝘀 𝘁𝗵𝗲 𝗿𝗲𝗮𝗹 𝗽𝗿𝗼𝗯𝗹𝗲𝗺 𝗔𝗜 𝘁𝗼𝗼𝗹𝘀 𝗵𝗮𝘃𝗲𝗻'𝘁 𝘀𝗼𝗹𝘃𝗲𝗱 𝘆𝗲𝘁. 𝗞𝗶𝗿𝗼 𝗶𝘀 𝘁𝗵𝗲 𝗳𝗶𝗿𝘀𝘁 𝘀𝗲𝗿𝗶𝗼𝘂𝘀 𝗮𝘁𝘁𝗲𝗺𝗽𝘁.

Have you tried Kiro yet? Cursor or Kiro — what's your pick? 👇

#AWS #Kiro #AITools #BackendEngineering #Java #Developer #LearningInPublic #SoftwareEngineering #CloudComputing

Image credits: Electromech Cloudtech Pvt. Ltd.