Day 17 / 30 – AWS Learning Journey ☁️

✅ What I learned today:
- What AWS Lambda is and how serverless architecture works
- Key components of serverless computing
- Differences between EC2 and Lambda
- When to use server‑based vs serverless architectures
- Trade‑offs in cost, scalability, and management

🔍 Key takeaway: Choosing between EC2 and Lambda depends on the use case. Serverless reduces operational overhead, while EC2 offers more control; architecture decisions should be workload‑driven.

📘 Resources:
• AWS Zero To Hero Course For DevOps Engineers – YouTube
• Official AWS Lambda Documentation

🎯 Next up: Hands‑on cost optimization using AWS Lambda

#30DaysOfAWS #AWS #Lambda #Serverless #CloudArchitecture
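To make the "no servers to manage" point concrete, here is a minimal Python Lambda handler sketch. The event shape and field names are illustrative assumptions, not from the post; this is the whole deployable unit Lambda needs, with no instance to provision:

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler: build a greeting from the incoming event.

    Lambda invokes this function per event; there is no server to patch,
    scale, or pay for while it is idle (the core EC2 vs Lambda trade-off).
    """
    name = event.get("name", "world")  # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deployed behind API Gateway or a test event, the same function scales from zero to thousands of concurrent invocations without any capacity planning.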
Shubham Malik’s Post
I recently explored AWS CloudFormation and got a clear picture of how it automates infrastructure deployment through Infrastructure as Code (IaC). CloudFormation lets me define AWS resources such as VPCs, EC2 instances, and S3 buckets in template files: rather than clicking through the AWS Console, I can describe my environment as code and have AWS create it automatically.

I also learned how CloudFormation templates relate to CloudFormation stacks, which create, update, and delete resources in a controlled manner, minimizing configuration drift between environments. By integrating CloudFormation with CodeCommit and CodePipeline, I saw how a CI/CD pipeline can automatically update stacks whenever I push changes to Git, making deployments faster, more consistent, and easier to monitor.

This exercise helped me understand that Infrastructure as Code brings version control, repeatability, and recoverability to cloud environments, which has fundamentally changed how I configure and operate AWS resources.

#AWS #CloudFormation #DevOps #IaC #CICD
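The template-as-code idea above can be sketched in Python. This hypothetical minimal template declares a single S3 bucket (the resource, bucket, and stack names are assumptions); the actual boto3 call is shown as a comment since it requires AWS credentials:

```python
import json

# Hypothetical minimal CloudFormation template: one S3 bucket, described
# as code instead of clicks in the console.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DemoBucket": {  # logical resource ID (example name)
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "my-demo-bucket-example"},
        }
    },
}

template_body = json.dumps(template)

# With credentials configured, boto3 would turn this template into a stack:
# import boto3
# cfn = boto3.client("cloudformation")
# cfn.create_stack(StackName="demo-stack", TemplateBody=template_body)
```

Because the stack owns the bucket's lifecycle, updating or deleting the stack updates or deletes the bucket in a controlled way, which is what keeps environments from drifting apart.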
Completed Day 28 of my AWS learning journey, where I explored AWS Lambda (serverless computing) and Route 53 (DNS service). This session helped me understand how automation and domain routing work in cloud environments. Through hands-on practice, I automated tasks using Lambda and configured different routing policies.

What I worked on:
🔹 Created an EC2 instance
🔹 Created a Lambda function with a Python runtime
🔹 Configured an IAM role for Lambda
🔹 Wrote code to stop the EC2 instance automatically
🔹 Tested the Lambda function using test events
🔹 Created a Route 53 Hosted Zone
🔹 Configured different routing policies
🔹 Tested DNS-based traffic routing

Quick understanding:
🔹 Lambda → Runs code without managing servers (serverless)
🔹 IAM Role → Grants permissions to Lambda
🔹 Route 53 → Resolves domain names to IP addresses
🔹 Routing Policies → Control how traffic is distributed

This session helped me understand automation and intelligent traffic routing in AWS.

#AWS #Lambda #Route53 #Serverless #CloudAutomation #DNS #HandsOnLearning
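The "stop EC2 automatically" step above can be sketched in Python. This is a hypothetical version, not the post's actual code; the EC2 client is passed in as a parameter so the stopping logic can be exercised without AWS credentials (the Lambda's IAM role would need ec2:StopInstances):

```python
def stop_instances(instance_ids, ec2_client):
    """Stop the given EC2 instances and report which ones began stopping.

    ec2_client is injected (normally boto3.client("ec2")) so the function
    is testable offline with a stub client.
    """
    if not instance_ids:
        return {"stopped": []}
    resp = ec2_client.stop_instances(InstanceIds=instance_ids)
    stopped = [i["InstanceId"] for i in resp["StoppingInstances"]]
    return {"stopped": stopped}

def lambda_handler(event, context):
    # In the real deployment the client comes from the Lambda runtime;
    # "instance_ids" is an assumed event field name.
    import boto3
    ec2 = boto3.client("ec2")
    return stop_instances(event.get("instance_ids", []), ec2)
```

Wired to an EventBridge schedule, this pattern is a common way to shut down dev instances after hours.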
⚡ AWS Lambda – From Basics to Real Understanding!

A few days ago, I shared my first hands-on experience with AWS Lambda. Today, I’ve taken it a step further by completing AWS Lambda Getting Started from AWS Training & Certification.

💡 What’s different this time?
👉 Before: I knew how to create and trigger a Lambda function
👉 Now: I understand how to design serverless workflows

🧠 What I strengthened:
✔ Event-driven architecture (S3 → Lambda → downstream systems)
✔ Role of IAM in secure execution
✔ Function lifecycle & execution flow
✔ Debugging using CloudWatch logs
✔ Real-world use cases of serverless

⚡ Big realization: Serverless is not just a feature; it’s a mindset shift from infrastructure to logic.

🎯 My focus now: Building end-to-end pipelines using Lambda + S3 + streaming services

#AWS #AmazonWebServices #Lambda #Serverless #CloudComputing #DataEngineering #LinkedInLearning #AWSCertification #CareerGrowth #DataWithHemanth
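The S3 → Lambda → downstream pattern above can be sketched as a handler that unpacks the standard S3 notification event; the downstream step is left as a placeholder, and the event here is only the subset of fields this sketch needs:

```python
def lambda_handler(event, context):
    """Extract (bucket, key) pairs from an S3 notification event.

    S3 delivers events in a "Records" list; each record carries the bucket
    name and object key that triggered the invocation.
    """
    objects = [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
    ]
    # Downstream processing (e.g. enqueueing to SQS or writing to a
    # warehouse) would go here; this sketch just reports what arrived.
    return {"processed": objects}
```

This is the whole "glue" a pipeline stage often needs: S3 fires the event, the function routes it, and IAM decides what the function may touch.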
Day 7 / 30 – AWS Learning Journey ☁️

✅ What I learned today:
- Designed and created a production‑level AWS VPC project
- Applied concepts learned earlier:
  • Subnets
  • Route Tables
  • Internet Gateway & NAT Gateway
  • Security Groups and NACLs
- Understood how different AWS networking components work together in a real setup

🔍 Key takeaway: Hands‑on projects bring clarity. Building a VPC from scratch helped connect theoretical concepts with real‑world cloud architecture.

📘 Resources:
• AWS Zero To Hero Course For DevOps Engineers – YouTube
• Official AWS Documentation

🎯 Next up: Week‑1 revision & interview question practice

#30DaysOfAWS #AWS #VPC #CloudArchitecture #HandsOnLearning
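The subnet-carving behind a VPC build can be illustrated with Python's standard ipaddress module; the 10.0.0.0/16 CIDR and the four /24 subnets are assumed example values, not details from the post:

```python
import ipaddress

# Carving a /16 VPC CIDR into /24 subnets: the same arithmetic you do when
# laying out public and private subnets across availability zones.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc_cidr.subnets(new_prefix=24))[:4]  # first four /24 blocks

for net in subnets:
    print(net)  # 10.0.0.0/24 through 10.0.3.0/24
```

Each /24 gives 251 usable addresses in AWS (AWS reserves 5 per subnet), which is a handy sanity check before committing a layout to route tables.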
🚀 New Video Uploaded: AWS VPC Peering Lab | Step-by-Step Configuration Between Two VPCs

In this hands-on AWS lab, I demonstrate how to create and configure VPC Peering between two custom VPCs and establish secure communication between EC2 instances across different VPCs.

✅ Created two custom VPCs
✅ Configured public and private subnets
✅ Launched EC2 instances in separate VPCs
✅ Created the VPC Peering Connection (Requester & Accepter)
✅ Updated route tables for connectivity
✅ Configured Security Groups and NACLs
✅ Verified connectivity using ping between EC2 instances

This practical lab is useful for AWS beginners, DevOps engineers, cloud engineers, and students preparing for interviews and real-world projects.

🎥 Watch the full video here: https://lnkd.in/dfFy2Jew

Your support through likes, shares, and subscriptions means a lot!

#AWS #VPCPeering #EC2 #DevOps #CloudComputing #AWSLab #Networking #AmazonWebServices #DevOpsEngineer #CloudEngineer #HandsOnLab #Infrastructure #Learning
Connect 3 VPCs in AWS | VPC Peering Configuration Hands-On
https://www.youtube.com/
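One precondition the peering lab relies on: the two VPCs must have non-overlapping CIDR blocks, or the peering connection cannot route between them. A small Python check of that rule, with example CIDRs assumed for illustration:

```python
import ipaddress

def cidrs_overlap(cidr_a, cidr_b):
    """Return True if two CIDR blocks overlap.

    VPC peering requires non-overlapping CIDRs, so this is the check to run
    before creating the Requester/Accepter connection.
    """
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return a.overlaps(b)

print(cidrs_overlap("10.0.0.0/16", "10.1.0.0/16"))  # False, so peering is possible
```

After the peering connection is accepted, each VPC's route tables still need a route pointing the other VPC's CIDR at the peering connection, which is the "Updated route tables" step in the lab.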
I worked on creating an Amazon Machine Image (AMI) from an existing EC2 instance using Terraform on KodeKloud.

🚀 Task Overview
The objective was to generate an AMI from an existing EC2 instance and ensure the image reaches the available state.

🛠️ What I Did
✅ Defined an EC2 instance in Terraform with the required configuration
✅ Added the aws_ami_from_instance resource to create an AMI from the running instance
✅ Leveraged Terraform’s dependency handling to reference the instance dynamically
✅ Executed terraform apply to provision the AMI
✅ Verified the AMI state using the AWS CLI

💡 Key Learning
This task reinforced how Terraform enables infrastructure automation beyond provisioning, extending into image creation and reusability. AMI creation is a crucial concept in:
- Immutable infrastructure
- Auto Scaling setups
- Backup and disaster recovery strategies

Successfully created and verified an AMI in the available state, ready for reuse in launching identical EC2 instances.

#DevOps #Terraform #AWS #CloudComputing #InfrastructureAsCode #CloudSecurity #LearningJourney
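The "verified the AMI state" step can equally be scripted in Python. This is an illustrative helper, not the task's actual code; the EC2 client is passed in so the check can be exercised without AWS access (with credentials it would be boto3.client("ec2")):

```python
def ami_is_available(image_id, ec2_client):
    """Return True once the AMI has reached the 'available' state.

    Mirrors `aws ec2 describe-images --image-ids <id>` from the CLI; an AMI
    stays 'pending' while snapshots are being created.
    """
    resp = ec2_client.describe_images(ImageIds=[image_id])
    images = resp.get("Images", [])
    return bool(images) and images[0]["State"] == "available"
```

A polling loop around this check is a simple way to gate a pipeline stage on the image being ready to launch from.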
Completed Day 27 of my AWS learning journey, where I explored Elastic Beanstalk, a Platform as a Service (PaaS) that simplifies application deployment. This session helped me understand how AWS automatically manages infrastructure such as EC2, load balancers, and Auto Scaling. Through hands-on practice, I deployed an application without manually configuring servers.

What I worked on:
🔹 Created an Elastic Beanstalk application
🔹 Selected the platform (Python) and a sample application
🔹 Created the Service Role & EC2 Instance Role
🔹 Configured the VPC, availability zones, and instance settings
🔹 Deployed the application environment
🔹 Verified that EC2 instances were created automatically
🔹 Accessed the application using the generated domain URL

Quick understanding:
🔹 Elastic Beanstalk → Deploy applications without managing infrastructure
🔹 Service Role → Allows Beanstalk to manage AWS resources
🔹 EC2 Instance Role → Grants permissions to instances
🔹 Environment → Runs and manages the application setup

This session showed how AWS enables fast and easy application deployment with minimal effort.

#AWS #ElasticBeanstalk #PaaS #CloudDeployment #EC2 #HandsOnLearning
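For context on what Beanstalk's Python platform actually runs: by default it looks for a WSGI callable named application. A minimal sketch of such an app follows (the response body text is an arbitrary example); this is everything a "sample application" deployment needs in its source bundle:

```python
# Elastic Beanstalk's Python platform defaults to serving the WSGI callable
# named `application` (conventionally in application.py).
def application(environ, start_response):
    """Bare-bones WSGI app: answer every request with a plain-text greeting."""
    body = b"Hello from Elastic Beanstalk!"
    start_response(
        "200 OK",
        [("Content-Type", "text/plain"), ("Content-Length", str(len(body)))],
    )
    return [body]
```

Beanstalk then provisions the EC2 instances, load balancer, and scaling around this callable, which is exactly the infrastructure the session above saw appear automatically.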
🔹 New Video Alert! 🔹

👉 Master advanced Amazon S3 concepts essential for AWS exams & real-world projects!
🔗 https://lnkd.in/gVyE2pAW

Explore presigned URLs, versioning, secure sharing, recovery, and more with practical demos! Level up your AWS skills today.

Like, share, and comment below 👇

#AWS #AmazonS3 #AWSCertification #CloudComputing #DevOps #LearnAWS #S3Tips #AWSDeveloper #CloudSkills
Day 17 of 30: AWS CloudWatch Alarms – Proactive Monitoring Made Simple

What I learned:
✓ Creating CloudWatch alarms for EC2 CPU utilization
✓ Setting up SNS notifications for alarm triggers
✓ Defining alarm thresholds and evaluation periods

CloudWatch alarms let you watch a single CloudWatch metric or the result of a math expression based on CloudWatch metrics. The alarm performs one or more actions based on the value of the metric relative to a threshold over a number of time periods.

Key benefits:
🔔 Get notified before issues become critical
📊 Track resource performance automatically
💡 Make data-driven decisions for scaling

Continuing my 30-day journey into AWS Core Services!

#AWS #CloudWatch #CloudMonitoring #DevOps #CloudEngineering #InfrastructureMonitoring
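The alarm described above boils down to a handful of parameters. Here is a sketch of how they might be assembled for boto3's put_metric_alarm; the alarm name, 80% threshold, and three evaluation periods are assumed example values, and the actual API call is commented out since it needs credentials:

```python
def cpu_alarm_params(instance_id, threshold=80.0, periods=3):
    """Build put_metric_alarm parameters for an EC2 CPUUtilization alarm.

    The alarm fires when average CPU stays above `threshold` percent for
    `periods` consecutive 5-minute windows.
    """
    return {
        "AlarmName": f"high-cpu-{instance_id}",   # example naming scheme
        "Namespace": "AWS/EC2",
        "MetricName": "CPUUtilization",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Average",
        "Period": 300,                 # seconds per evaluation window
        "EvaluationPeriods": periods,  # consecutive windows that must breach
        "Threshold": threshold,        # percent CPU
        "ComparisonOperator": "GreaterThanThreshold",
    }

# With credentials, and an SNS topic added via AlarmActions for notifications:
# import boto3
# boto3.client("cloudwatch").put_metric_alarm(**cpu_alarm_params("i-0abc123"))
```

Multiplying Period by EvaluationPeriods gives the detection delay (15 minutes with these defaults), which is the main knob for "notified before issues become critical".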
My 30-Day AWS Terraform Challenge – Day 3/30. Inspired by Piyush Sachdeva.

Today I worked on creating an S3 bucket using Terraform and focused on understanding how authentication and resource provisioning come together in AWS. Instead of just writing code, I spent time understanding how Terraform communicates with AWS and how credentials play a foundational role in every deployment.

Key learnings:
• Terraform uses AWS credentials to interact with AWS APIs
• Authentication can be configured using the AWS CLI, environment variables, or profiles
• Amazon S3 is an object storage service used for storing files like backups, logs, and application data
• S3 bucket names must be globally unique and follow strict naming conventions
• The Terraform workflow is simple and powerful: init → plan → apply → destroy
• Tagging resources helps with organization and cost tracking

What I built:
• A simple S3 bucket using Terraform
• Configured the AWS provider with a region
• Applied the infrastructure changes and verified them in AWS

This day reinforced a key idea: cloud infrastructure is not just about creating resources; it’s about understanding how systems securely connect and operate together. Building step by step.

Full blog here: https://lnkd.in/gYgNXrzr
GitHub repo: https://lnkd.in/gmsCBxZi

#30DaysOfAwsTerraform #Terraform #DevOps #AWS #CloudEngineering
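The "strict naming conventions" point can be made concrete with a rough validator in Python. This is an illustrative sketch covering only a subset of the real S3 rules (it skips, for example, the ban on IP-address-shaped names and certain reserved prefixes):

```python
import re

def is_valid_bucket_name(name):
    """Rough check of common S3 bucket naming rules:
    3-63 characters; lowercase letters, digits, hyphens, and dots;
    must start and end with a letter or digit.
    """
    if not 3 <= len(name) <= 63:
        return False
    return re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name) is not None
```

Running names through a check like this before `terraform apply` saves a failed run, since bucket names are also globally unique and can't be claimed if taken.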