My 30-Day AWS Terraform Challenge – Day 3/30. Inspired by Piyush Sachdeva.

Today I created an S3 bucket using Terraform and focused on understanding how authentication and resource provisioning come together in AWS. Instead of just writing code, I spent time understanding how Terraform communicates with AWS and how credentials underpin every deployment.

Key learnings:
• Terraform uses AWS credentials to interact with AWS APIs
• Authentication can be configured via the AWS CLI, environment variables, or named profiles
• Amazon S3 is an object storage service for files such as backups, logs, and application data
• S3 bucket names must be globally unique and follow strict naming conventions
• The Terraform workflow is simple and powerful: init → plan → apply → destroy
• Tagging resources helps with organization and cost tracking

What I built:
• A simple S3 bucket using Terraform
• An AWS provider configuration with a region
• Applied the infrastructure changes and verified them in the AWS console

This day reinforced a key idea: cloud infrastructure is not just about creating resources; it's about understanding how systems securely connect and operate together. Building step by step.

Full blog here: https://lnkd.in/gYgNXrzr
GitHub repo: https://lnkd.in/gmsCBxZi

#30DaysOfAwsTerraform #Terraform #DevOps #AWS #CloudEngineering
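A minimal sketch of the kind of configuration described above, assuming placeholder names (the region, bucket name, and tags are illustrative; since S3 bucket names are globally unique, this exact name may already be taken):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Region for all resources; credentials come from the AWS CLI,
# environment variables, or a named profile -- never hardcoded here.
provider "aws" {
  region = "us-east-1"
}

# Bucket name is a placeholder and must be globally unique.
resource "aws_s3_bucket" "demo" {
  bucket = "my-terraform-demo-bucket-12345"

  tags = {
    Environment = "dev"
    ManagedBy   = "terraform"
  }
}
```

From there the workflow is exactly the four steps above: `terraform init`, `terraform plan` to preview, `terraform apply` to create, and `terraform destroy` to clean up.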
My 30-Day AWS Terraform Challenge – Day 2/30. Inspired by Piyush Sachdeva.

Today I moved from concepts to actually deploying infrastructure with Terraform. This session focused on Terraform providers and versioning, but the real learning came from connecting everything end to end.

Key learnings:
• Terraform providers act as a bridge between Terraform and AWS
• Terraform core and providers are versioned separately
• Version constraints help maintain stability and avoid breaking changes
• terraform plan provides a safe preview before making changes

What I built:
• An AWS VPC
• An S3 bucket
• A random ID for unique resource naming

Also worked on:
• Setting up AWS CLI authentication
• Configuring IAM user access
• Validating connectivity with aws sts get-caller-identity

I initially ran into an authentication issue, but fixing it taught me how critical credential setup is before working with Terraform. Seeing Terraform successfully create real AWS resources was a great moment; it reinforces how Infrastructure as Code brings consistency and confidence to cloud environments.

Full blog here: https://lnkd.in/gdH9BAaR
GitHub repo: https://lnkd.in/g4TiBatN

#30DaysOfAwsTerraform #Terraform #DevOps #AWS #CloudEngineering
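A hedged sketch of the provider pinning and random naming described above (the version constraints and bucket prefix are illustrative assumptions, not taken from the repo):

```hcl
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"   # allow any 5.x, block breaking 6.x changes
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.0"
    }
  }
}

# 4 random bytes rendered as 8 hex characters in random_id.suffix.hex
resource "random_id" "suffix" {
  byte_length = 4
}

# The suffix makes the globally unique bucket name collision-resistant.
resource "aws_s3_bucket" "demo" {
  bucket = "day2-demo-${random_id.suffix.hex}"
}
```

The `~>` ("pessimistic") constraint is what gives the stability the post mentions: minor upgrades flow in, major breaking releases do not.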
How many DynamoDB lock tables are sitting in your AWS account right now?

OpenTofu quietly kept shipping features I dreamed of in the old days — this is part 3 of my v1.10.0 series. 👇

Until now, bootstrapping state for every IaC project on AWS was a chore:
1. Create the S3 bucket for state
2. Create a DynamoDB table… just to hold a single boolean lock
3. Write an IAM policy that grants access to both

I still can't believe we needed an entire second AWS service just to hold one lock row. Now, one line in the backend configuration replaces all of that. That's the whole migration.

Stay tuned for more OpenTofu content. 📦🔥

#OpenTofu #Terraform #InfrastructureAsCode #IaC #AWS #DevOps #PlatformEngineering #S3 #CloudNative #SRE
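For context, a sketch of the old three-step bootstrap the post enumerates (bucket, table, and policy names are placeholders):

```hcl
# 1. S3 bucket for the state file
resource "aws_s3_bucket" "state" {
  bucket = "my-tf-state-bucket"   # placeholder, must be globally unique
}

# 2. An entire DynamoDB table just to hold one lock row
resource "aws_dynamodb_table" "lock" {
  name         = "my-tf-lock-table"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

# 3. IAM policy granting access to both
resource "aws_iam_policy" "state_access" {
  name = "tf-state-access"
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = ["s3:GetObject", "s3:PutObject", "s3:ListBucket"]
        Resource = [aws_s3_bucket.state.arn, "${aws_s3_bucket.state.arn}/*"]
      },
      {
        Effect   = "Allow"
        Action   = ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:DeleteItem"]
        Resource = aws_dynamodb_table.lock.arn
      },
    ]
  })
}
```

With native S3 locking, step 2 and the DynamoDB half of step 3 disappear entirely; the backend just opts in with `use_lockfile = true`.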
I used to wonder what separates a "cloud engineer" from someone who just passed the AWS exam. The answer? They've actually provisioned something that would break in production.

So I'm building it: a full Kubernetes cluster on AWS. Multi-AZ, private subnets, spot instances, bash-automated lifecycle. No tutorials. No sandbox. Just real architecture decisions and their consequences.

The goal is a system that is:
• Highly available across multiple Availability Zones
• Secure, with workloads running in private subnets
• Cost-optimized using spot instances
• Fully automated from provisioning to teardown

Using this architecture as my blueprint, here's what I'll be working on:
🔹 Moving from the AWS free tier to a production-ready setup
🔹 Designing and provisioning a custom VPC across 3 AZs
🔹 Deploying a kOps Kubernetes cluster
🔹 Containerizing applications with Docker and deploying via Kubernetes manifests
🔹 Automating the infrastructure lifecycle with Bash scripts

This is the kind of setup behind real-world cloud systems, not just demos. I'll be posting the whole thing as I go, including the parts that break. First update coming soon.

#DevOps #AWS #Kubernetes #CloudEngineering #BuildInPublic
I’ve been learning AWS core services and wanted to put together a simple breakdown with real-world examples.

In this video, I covered the key areas that are actually used in most projects:
• Compute (EC2, Lambda)
• Storage (S3, EBS)
• Database (RDS, DynamoDB)
• Networking (VPC, CloudFront)
• Security (IAM)

Instead of just theory, I tried to explain how these services are used in real-world scenarios like websites, applications, and company infrastructure. This helped me understand how everything connects in cloud environments, especially from a job perspective in Cloud and DevOps roles.

If you’re starting with AWS or trying to get job-ready, this might be useful. Here’s the video: https://lnkd.in/gpBuBvEk

Open to feedback and suggestions.

#AWS #CloudComputing #DevOps #AWSCertification #CloudCareers #Learning
Introduction to AWS Core Service Areas
There are many ways to use AWS serverless: click through the AWS console, write Terraform configurations... or take the easy route with Lime Boost.
🚀 Terraform Challenge – Day 2: Providers & Authentication

Yesterday I created infrastructure using Terraform. Today I learned something even more important: 👉 how Terraform connects to AWS securely.

In real projects, we don't use just one account. There are typically several:
• Dev
• Staging
• Production

So authentication becomes critical 🔐

💡 What I practiced today:
• AWS provider configuration
• Using AWS named profiles (dev, prod)
• Running Terraform against specific accounts

⚠️ Issues I faced:
❌ "No valid credential sources found" → ✅ fixed by configuring an AWS CLI profile
❌ "Access Denied" → ✅ fixed the IAM permissions

Big learning: 👉 never hardcode credentials in Terraform.

Tomorrow: Terraform state — the most important concept for teams.

Read the full blog: https://lnkd.in/gFKsagnp

#Terraform #DevOps #AWS #Cloud #InfrastructureAsCode #LearningInPublic #Kubernetes #Automation #Jenkins #CiCD #Docker #Ansible #Automate
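A minimal sketch of the profile-based setup described above, assuming profiles named `dev` and `prod` (created via `aws configure --profile <name>`) and a placeholder bucket:

```hcl
# Named profiles keep credentials out of the configuration entirely;
# they live in ~/.aws/credentials and ~/.aws/config.
provider "aws" {
  region  = "us-east-1"
  profile = "dev"          # default provider targets the dev account
}

# A second, aliased provider for the production account.
provider "aws" {
  alias   = "prod"
  region  = "us-east-1"
  profile = "prod"
}

# Resources opt into the prod account explicitly via the alias.
resource "aws_s3_bucket" "prod_logs" {
  provider = aws.prod
  bucket   = "example-prod-logs-bucket"   # placeholder, must be globally unique
}
```

If neither the profile nor any environment variables resolve, Terraform fails with exactly the "No valid credential sources found" error mentioned above.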
Still learning Docker and wondering how to store your container images in AWS? 🐳☁️ I wrote a beginner-friendly article on AWS ECR (Elastic Container Registry) and how it helps you store, manage, push, and pull Docker images in AWS. 👉 Read the full article here: https://lnkd.in/gmAgpT4j #enlear #AWS #AWSECR #Docker #AWSCloud #DevOps #CloudComputing
🚀 Day 62 of #90DaysOfDevOps | TerraWeek Day 2

Instead of creating isolated resources, I built a complete AWS infrastructure using Terraform:
• VPC
• Subnet
• Internet Gateway
• Route Table
• Security Group
• EC2 Instance

But the real learning? Dependencies. I finally understood:
• How Terraform decides what to create first
• Why the order of resources in code doesn't matter
• How everything is connected behind the scenes

For example, the subnet automatically waits for the VPC, and the EC2 instance waits for the subnet and security group. That's the magic of Terraform's dependency graph.

Also explored:
• Implicit vs. explicit dependencies
• depends_on usage
• Lifecycle rules (create_before_destroy 🔥)

Biggest takeaway: infrastructure is not a pile of separate resources — it's a connected system.

GitHub repo: https://lnkd.in/guvpSGp7

What was confusing for you when learning Terraform dependencies? 🤔 Let's discuss.

#90DaysOfDevOps #TerraWeek #DevOpsKaJosh #TrainWithShubham
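A sketch of the dependency mechanics described above (the CIDR blocks and AMI ID are illustrative placeholders, not values from the repo):

```hcl
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Implicit dependency: referencing aws_vpc.main.id tells Terraform the
# subnet must be created after the VPC, no matter where it sits in the file.
resource "aws_subnet" "app" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_security_group" "web" {
  vpc_id = aws_vpc.main.id
}

resource "aws_instance" "web" {
  ami                    = "ami-0123456789abcdef0"   # placeholder AMI ID
  instance_type          = "t3.micro"
  subnet_id              = aws_subnet.app.id         # implicit edge to the subnet
  vpc_security_group_ids = [aws_security_group.web.id]

  # Explicit dependency: only needed when Terraform cannot infer the edge
  # from references, e.g. a gateway the instance needs but never mentions.
  # depends_on = [aws_internet_gateway.gw]

  # On replacement, create the new instance before destroying the old one.
  lifecycle {
    create_before_destroy = true
  }
}
```

Terraform builds its dependency graph from these references, which is exactly why source-code order is irrelevant.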
🚀 What is AWS Lambda? (In simple terms)

Imagine running your code without worrying about servers at all 🤯 That's exactly what AWS Lambda does.

👉 It is a serverless compute service where:
• You don't manage servers
• You don't worry about scaling
• You only pay for what you use

💡 How it works: an event triggers your function → Lambda runs your code → returns the result.

📌 Real-world example: a user uploads an image to S3 → Lambda automatically resizes it → stores the optimized image.

🔥 Why developers love it:
✔ No infrastructure management
✔ Auto scaling
✔ Cost efficiency
✔ Easy integration with other AWS services

👉 Focus on your logic; AWS handles the rest. 💭 Once you understand Lambda, you'll never look at backends the same way again.

#AWS #Lambda #Serverless #CloudComputing #DevOps #TechSimplified #LearningInPublic
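Staying with this feed's Terraform theme, a hedged sketch of wiring the S3-upload trigger described above. The function name, handler, and bucket are placeholders; the resize logic itself would live in the zipped handler code, and the referenced `aws_iam_role.lambda_exec` and `aws_s3_bucket.uploads` are assumed to be defined elsewhere:

```hcl
resource "aws_lambda_function" "resize" {
  function_name = "image-resizer"       # placeholder
  runtime       = "python3.12"
  handler       = "app.handler"         # placeholder module.function
  filename      = "lambda.zip"          # zipped handler code, built separately
  role          = aws_iam_role.lambda_exec.arn
}

# Allow the S3 service to invoke this specific function.
resource "aws_lambda_permission" "allow_s3" {
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.resize.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.uploads.arn
}

# Fire the function on every new object upload.
resource "aws_s3_bucket_notification" "on_upload" {
  bucket = aws_s3_bucket.uploads.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.resize.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [aws_lambda_permission.allow_s3]
}
```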
Say goodbye to DynamoDB... 👋 Terraform just became even simpler! 🚀

Amazon S3 now supports native state locking, so there's no more need to pair a DynamoDB table with your S3 state bucket.

💡 How to implement it? Simple. Just add one line to your backend:
use_lockfile = true

✅ Why is state locking essential? 🧐 Without it, multiple people or pipelines can run Terraform simultaneously, which can lead to:
• Overwriting each other's changes 💥
• Corrupting the state file 🛠️
• Failed or inconsistent deployments ❌

Why is this a game changer?
• No more unnecessary DynamoDB table 📉
• Simpler bootstrap setup ✨
• Lower costs 💰
• Cleaner CI/CD pipelines 🧼

#Terraform #AWS #DevOps #CloudComputing #InfrastructureAsCode
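In context, a minimal backend block (the bucket name and key are placeholders; native S3 locking via `use_lockfile` requires a recent Terraform release, 1.10 or newer):

```hcl
terraform {
  backend "s3" {
    bucket       = "my-tf-state-bucket"          # placeholder, must already exist
    key          = "envs/prod/terraform.tfstate"
    region       = "us-east-1"
    use_lockfile = true                          # native S3 locking, no DynamoDB
  }
}
```

The lock is held as a sidecar `.tflock` object next to the state file in the same bucket, which is why the DynamoDB table and its IAM permissions can be dropped entirely.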