DevOps hands-on practice with KodeKloud — completed a real-world task ✨ ✨
Task: Install iptables with persistent rules and block incoming traffic on port 6000 for everyone except the load balancer host
➡️ Install iptables: "sudo yum install iptables iptables-services -y"
➡️ Enable and start the iptables service: "sudo systemctl enable iptables" "sudo systemctl start iptables"
➡️ Rules will be deleted after a reboot, so save the iptables rules to make them persistent: "sudo /usr/libexec/iptables/iptables.init save"
➡️ Check the existing rules and add the required rules at the right positions: "sudo iptables -L INPUT -n --line-numbers"
➡️ If a rule rejecting traffic from everyone already exists, add the allow and block rules for port 6000 at a higher priority than the reject rule. If the reject rule is at position 7, the allow and block rules should be inserted before it, i.e. at positions 5 and 6.
➡️ Add the rule at position 5 to allow traffic only from the load balancer server: "sudo iptables -I INPUT 5 -p tcp --dport 6000 -s load-balancer-host-name -j ACCEPT"
➡️ Add the rule at position 6 to block everyone else: "sudo iptables -I INPUT 6 -p tcp --dport 6000 -j DROP"
➡️ Save the added rules: "sudo iptables-save | sudo tee /etc/sysconfig/iptables"
#DevOps #Linux #DevOpsEngineer #Learning #KodeKloud
Install iptables with persistent rules on Linux
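The ordering logic in the task (ACCEPT inserted above the DROP, and both above any blanket REJECT rule) can be sketched as a dry run. This only prints the commands instead of applying them, since applying requires root; `load-balancer-host-name` and the rule positions are placeholders from the task, so adjust them for your environment:

```shell
# Dry-run sketch: print the iptables commands in the order they must be
# inserted. LB_HOST is a placeholder, not a real hostname.
LB_HOST="load-balancer-host-name"
PORT=6000
ACCEPT_POS=5   # the ACCEPT must sit above the DROP...
DROP_POS=6     # ...and both must sit above any blanket REJECT rule

rules=$(
  echo "iptables -I INPUT $ACCEPT_POS -p tcp --dport $PORT -s $LB_HOST -j ACCEPT"
  echo "iptables -I INPUT $DROP_POS -p tcp --dport $PORT -j DROP"
)
printf '%s\n' "$rules"
```

Because iptables matches top-down and stops at the first hit, swapping the two positions would drop the load balancer's traffic too.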
🚀 Day 12: Linux Internals for DevOps Engineers (Advanced)
👉 Disk Issues in Production (Not as simple as you think)
Most people think: ❌ Disk full → delete files → done
But real production issues are more complex. Today I explored how engineers actually debug disk-related failures.
📌 What I learned:
🔹 `df -h` shows disk usage
🔹 `du -sh` helps trace large directories
🔹 Sometimes deleted files still occupy space (hidden usage)
🔹 Log rotation is critical to prevent repeated failures
💡 Real Scenario: Disk shows 100% usage… but you can’t find large files. Why?
👉 Because deleted files are still held by running processes.
Solution:
✔ Use `lsof | grep deleted`
✔ Restart the process
This is something most beginners don’t know.
🧠 Question for you: Have you ever faced a situation where the disk was full but you couldn’t find the files causing it? 👇 Would love to know your experience!
🎯 Learning Goal: To debug storage issues deeply and prevent recurring failures.
📅 Day 13 Tomorrow: Networking Basics (IP, Ports, DNS)
Let’s keep going deeper 🚀
#DevOps #Linux #SRE #Storage #CloudComputing #SoftwareEngineering #TechLearning #LearningInPublic #ITCareers #EngineeringMindset #CareerGrowth #ProductionIssues
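The "deleted but still held open" situation from the scenario above can be reproduced in a few lines without lsof, using only /proc. This minimal sketch has the current shell hold a deleted file open on file descriptor 3:

```shell
# Reproduce "disk full but no large files": a process keeps a deleted
# file open, so its blocks are not freed until the descriptor closes.
tmp=$(mktemp)
exec 3>"$tmp"              # this shell now holds the file open on fd 3
echo "old log data" >&3
rm -f "$tmp"               # the name is gone, but the data is not

# On Linux, /proc shows the open-but-deleted file (what lsof reports):
deleted_target=$(readlink "/proc/$$/fd/3")
echo "$deleted_target"     # path ends with "(deleted)"

exec 3>&-                  # closing the fd finally releases the space
```

This is why restarting (or signalling) the process that holds the descriptor is what actually frees the space.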
🔐 Day 4 of #100DaysOfDevOps — Linux file permissions
Today's task: a backup script existed on a production server but no one could run it. One missing permission bit was the culprit.
Here's what I learned: Every Linux file has a 10-character permission string like -rw-r--r--
It's split into 3 groups:
→ Owner (the user who created it)
→ Group (a team sharing access)
→ Others (everyone else)
Each group gets 3 bits: r (read=4), w (write=2), x (execute=1)
The fix was one command: chmod a+x /tmp/xfusioncorp.sh or chmod 755 /tmp/xfusioncorp.sh
a = all users | +x = add execute permission
Before: -rw-r--r-- (no one can run it)
After: -rwxr-xr-x (everyone can run it)
Why does this matter in DevOps?
→ Automation scripts fail silently when permissions are wrong
→ CI/CD pipelines break if deploy scripts aren't executable
→ Every cloud server you ever manage will need this
The numeric equivalent: chmod 755 — owner gets rwx (7), group gets r-x (5), others get r-x (5)
"r = read, w = write, x = execute. Three bits. Three groups. That's all of Linux permissions."
#DevOps #Linux #BashScripting #chmod #CloudEngineering KodeKloud
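The before/after above can be verified on any Linux box. A quick sketch using a throwaway file (a stand-in, not the /tmp/xfusioncorp.sh from the task):

```shell
f=$(mktemp)
printf '#!/bin/sh\necho hello\n' > "$f"

before=$(stat -c '%A' "$f")   # mktemp creates files with no execute bits
echo "before: $before"

chmod 755 "$f"                # owner rwx (7), group r-x (5), others r-x (5)
after=$(stat -c '%A' "$f")
echo "after: $after"          # -rwxr-xr-x

out=$("$f")                   # now the script actually runs
echo "$out"
rm -f "$f"
```

`stat -c '%A'` prints the same permission string that `ls -l` shows, which makes it handy in scripts that need to assert on permissions.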
🚀 Day 6: Linux Internals for DevOps Engineers
👉 Logs & Debugging (How Engineers Actually Find Issues)
When a system breaks… most beginners panic or restart the service. But real engineers do something different.
👉 They check logs.
Every system, every application, every request leaves a trace. And those traces are called *logs*.
📌 What I learned today:
🔹 Logs are records of system and application events
🔹 Most logs are stored in `/var/log/`
🔹 Commands like `tail -f` and `journalctl` help in real-time debugging
🔹 Logs contain the actual reason behind failures
💡 Real Scenario: Your application suddenly goes down in production. What would you do?
❌ Restart the server
✅ Check logs first
Because logs might show:
* “Port already in use”
* “Permission denied”
* “Out of memory”
🧠 Question for you: Which command would you use to monitor logs in real time? 👇 Drop your answer!
🎯 Learning Goal: To debug issues based on root cause, not guesswork.
📅 Day 7 Tomorrow: Linux Permissions & Security (Real-world access control)
Let’s keep growing 🚀
#DevOps #Linux #SRE #Debugging #SystemDesign #CloudComputing #SoftwareEngineering #TechLearning #LearningInPublic #ITCareers #EngineeringMindset #CareerGrowth
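The "check logs first" flow can be sketched with a throwaway log file. The error strings are the ones quoted above; the real-time commands are left as comments because they block the terminal:

```shell
log=$(mktemp)
printf 'INFO  service starting\nERROR Port already in use\nINFO  retrying\n' > "$log"

# Real-time monitoring (blocks until Ctrl-C):
#   tail -f "$log"
#   journalctl -u myservice -f     # for a systemd unit named myservice

# Root-cause hunting: pull the actual failure out of the noise.
last_error=$(grep 'ERROR' "$log" | tail -n 1)
echo "$last_error"
rm -f "$log"
```

Combining `grep` with `tail` like this answers "what was the most recent failure?" in one line, which is usually the first question in an incident.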
Has it ever happened to you that you tried to enter a #container and it failed?
OCI runtime exec failed: exec failed: unable to start container process: exec: "sh": executable file not found in $PATH
Most people stop here. 👉 “No shell = no access”
But that’s not actually true. Modern containers are often distroless / minimal, which means:
👉 no /bin/sh
👉 no /bin/bash
👉 no easy way to debug
💡 But here’s the thing: you can still enter the container. The manual way looks like this:
1. Find the container PID:
docker inspect -f '{{.State.Pid}}' <container>
# or
crictl inspect <container-id> | grep pid
2. Inject a static binary (e.g. busybox):
sudo apt install busybox-static
sudo cp /bin/busybox /proc/<PID>/root/tmp/busybox
sudo chmod +x /proc/<PID>/root/tmp/busybox
3. Enter with nsenter:
sudo nsenter -t <PID> -m -u -i -n -p --root=/proc/<PID>/root /tmp/busybox sh
Works… but yeah 😅 not fun.
🚀 So I built ctenter to automate all of this:
sudo ctenter list
sudo ctenter --pid <PID>
Simple as that. It uses a lightweight agent, ctenterd, but you can use any custom binary that provides a shell (like busybox):
sudo ctenter --pid <PID> --agent-path /path/to/busybox --exec sh --interactive
✨ Features:
🔍 Cross-runtime discovery: Docker, containerd, CRI-O
🐚 Shell access without a shell
⚡ One-shot command execution
🧩 Custom agent support, bring your own binary
🪶 Lightweight injection via /proc/<pid>/root
🔐 No container modification required
🧠 Namespace-aware execution using nsenter
👉 Try it out: https://lnkd.in/er3dvrj3
#containers #docker #kubernetes #linux #devops
One of the most powerful aspects of Linux is how efficiently it handles **search operations and file permissions**. In real-world DevOps and server administration, these two concepts are used almost every day.
🔍 **Search Commands in Linux**
Finding files, logs, or configurations quickly is crucial while working on servers. Some of the most commonly used commands are:
`find` → searches files and directories by name, type, size, or modification time
Example: `find /home -name "*.log"`
`grep` → searches for a specific word, pattern, or text inside files
Example: `grep "error" application.log`
`locate` → quickly finds file paths from the system database
Example: `locate nginx.conf`
These commands make troubleshooting and log analysis much faster in production environments.
🔐 **Permission Commands in Linux**
Linux permissions decide **who can read, write, or execute a file**. Each permission has a numeric value:
`r = 4` → read permission
`w = 2` → write permission
`x = 1` → execute permission
These values are added together to form permission codes. For example:
`7 = 4 + 2 + 1` → `rwx`
`6 = 4 + 2` → `rw-`
`5 = 4 + 1` → `r-x`
`4 = 4` → `r--`
So when we use `chmod 755 file.sh`, it means:
Owner → `7` = `rwx`
Group → `5` = `r-x`
Others → `5` = `r-x`
Understanding permissions is essential for security, script execution, and access control in Linux-based environments. Linux is not just about commands — it’s about control, security, and efficiency.
#Linux #DevOps #CloudComputing #AWS #SystemAdministration #ServerManagement #Automation #SoftwareEngineering #Infrastructure #LinuxCommands #CareerGrowth #Technology #DevopsWithMultiCloud #flm #frontlinesmedia
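The search commands above compose well in practice. A small sketch that builds a throwaway directory and chains `find` (locate by name) with `grep -l` (filter by content); the filenames are made up for the example:

```shell
dir=$(mktemp -d)
echo "2024-01-01 ERROR disk full" > "$dir/app.log"
echo "nothing to see here"        > "$dir/notes.txt"

logs=$(find "$dir" -name '*.log')      # find files by name pattern
echo "$logs"

hits=$(grep -l 'ERROR' "$dir"/*.log)   # grep -l prints matching filenames
echo "$hits"

rm -rf "$dir"
```

(`locate` is omitted here because it depends on a prebuilt database, usually refreshed with `updatedb`, so it won't see files you just created.)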
🚨 The first time I opened a Linux server… I saw folders like `/bin`, `/etc`, `/var`, `/home` and thought: “What is all this? And why is everything starting with `/`?” 😅
That confusion led me to learn something very important for DevOps.
💡 𝘋𝘢𝘺 9 𝘰𝘧 𝘮𝘺 𝘋𝘦𝘷𝘖𝘱𝘴 𝘑𝘰𝘶𝘳𝘯𝘦𝘺
Today I learned about the 𝗟𝗶𝗻𝘂𝘅 𝗙𝗶𝗹𝗲 𝗦𝘆𝘀𝘁𝗲𝗺 — the structure that organizes everything inside a Linux system.
📖 Think of it like a 𝗺𝗮𝗽 𝗼𝗳 𝘁𝗵𝗲 𝗲𝗻𝘁𝗶𝗿𝗲 𝗼𝗽𝗲𝗿𝗮𝘁𝗶𝗻𝗴 𝘀𝘆𝘀𝘁𝗲𝗺. In Linux, everything starts from a single root directory:
👉 `/` (root)
From there, the system branches into different directories, each with a specific purpose.
Here are some important ones I learned today:
📁 /𝗯𝗶𝗻 — Contains essential command binaries like `ls`, `cp`, `mv`.
📁 /𝗲𝘁𝗰 — Stores system configuration files.
📁 /𝗵𝗼𝗺𝗲 — Personal directories for users.
📁 /𝘃𝗮𝗿 — Contains logs, cache, and variable data.
📁 /𝘂𝘀𝗿 — Stores system programs and utilities.
📁 /𝘁𝗺𝗽 — Temporary files used by applications.
🚀 𝘞𝘩𝘺 𝘵𝘩𝘪𝘴 𝘮𝘢𝘵𝘵𝘦𝘳𝘴 𝘪𝘯 𝘋𝘦𝘷𝘖𝘱𝘴? Because when you manage servers, deploy applications, or troubleshoot issues… you constantly interact with these directories. Knowing 𝘄𝗵𝗲𝗿𝗲 𝘁𝗵𝗶𝗻𝗴𝘀 𝗹𝗶𝘃𝗲 𝗶𝗻 𝗟𝗶𝗻𝘂𝘅 saves a lot of time when debugging systems.
🔥 𝘔𝘺 𝘣𝘪𝘨𝘨𝘦𝘴𝘵 𝘳𝘦𝘢𝘭𝘪𝘻𝘢𝘵𝘪𝘰𝘯 𝘵𝘰𝘥𝘢𝘺: Linux is not random. Every directory has a 𝘀𝗽𝗲𝗰𝗶𝗳𝗶𝗰 𝗿𝗼𝗹𝗲 𝗶𝗻 𝗸𝗲𝗲𝗽𝗶𝗻𝗴 𝘁𝗵𝗲 𝘀𝘆𝘀𝘁𝗲𝗺 𝗼𝗿𝗴𝗮𝗻𝗶𝘇𝗲𝗱 𝗮𝗻𝗱 𝘀𝘁𝗮𝗯𝗹𝗲.
📌 𝘚𝘮𝘢𝘭𝘭 𝘬𝘯𝘰𝘸𝘭𝘦𝘥𝘨𝘦 𝘭𝘪𝘬𝘦 𝘵𝘩𝘪𝘴 𝘣𝘦𝘤𝘰𝘮𝘦𝘴 𝘱𝘰𝘸𝘦𝘳𝘧𝘶𝘭 𝘸𝘩𝘦𝘯 𝘸𝘰𝘳𝘬𝘪𝘯𝘨 𝘸𝘪𝘵𝘩 𝘴𝘦𝘳𝘷𝘦𝘳𝘴 𝘢𝘯𝘥 𝘤𝘭𝘰𝘶𝘥 𝘪𝘯𝘧𝘳𝘢𝘴𝘵𝘳𝘶𝘤𝘵𝘶𝘳𝘦. And that’s exactly what DevOps engineers deal with daily.
💬 𝘋𝘦𝘷𝘖𝘱𝘴 𝘦𝘯𝘨𝘪𝘯𝘦𝘦𝘳𝘴 𝘩𝘦𝘳𝘦 — Which Linux directory confused you the most when you started? 😅
Learning step by step 🚀
#DevOps #Linux #LinuxFileSystem #LearningInPublic #DevOpsInsiders #TechJourney
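The directory map above is easy to explore from any shell. A minimal sketch that reports which of the standard top-level directories exist on the current machine (these six are defined by the Filesystem Hierarchy Standard):

```shell
# Quick map check: which standard top-level directories exist here?
fhs_report=$(
  for d in /bin /etc /home /var /usr /tmp; do
    if [ -d "$d" ]; then echo "$d: present"; else echo "$d: missing"; fi
  done
)
echo "$fhs_report"
```

On a minimal container image some of these (e.g. /home) may be absent, which is itself a useful thing to notice when debugging.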
🐧 Stop Googling, Start Typing: The Ultimate Linux Cheat Sheet! 🚀
Ever feel like the Linux terminal is a secret club you don’t have the password for? It’s time to change that. 💻
Whether you’re a Developer, SysAdmin, Student, or DevOps Engineer, mastering the command line isn't just a flex—it’s a massive productivity multiplier. To help you navigate the shell like a pro, I’ve compiled the "Essentials" into one easy-to-save guide.
📌 What’s Inside the Toolkit:
📂 File & Directory Management: Navigate, create, and organize with ease (ls, cd, mkdir, rm).
👤 User & Permissions: Take control of security (chmod, chown, sudo).
⚙️ Process Handling: Monitor and manage system resources like a boss (top, ps, kill).
🌐 Networking Basics: Troubleshoot and connect instantly (ip addr, ping, netstat).
Mastering Linux isn't about memorizing 1,000 commands—it’s about knowing the right 20 that get 80% of the work done. 🌟
📥 [DOWNLOAD/SAVE THIS POST] 📥 Don't let this get lost in your feed. Hit the Save button and keep this reference handy for your next terminal session!
Which Linux command saved your life today? Let's talk in the comments! 👇
#Linux #CheatSheet #SysAdmin #DevOps #Programming #TechSkills #OpenSource #DeveloperJourney #LinuxCommands #Networking #DevopsEngineer #AWS #Linuxpdf
🚀 Day 14/100 — DevOps Challenge
Today’s task felt like real production troubleshooting.
🔍 Issue: A monitoring system reported that the Apache service was down on one of the application servers in a multi-tier architecture.
🛠️ What I did:
Checked Apache status across all app servers
Identified the faulty host where the service failed to start
Investigated logs and found a port conflict error
Discovered another service (sendmail) was already using port 5004
Stopped and disabled the conflicting service
Reconfigured and successfully started Apache on the required port (5004)
Verified that Apache is running on all app servers
💡 Key Takeaways:
Always check logs — they tell you the real problem
Port conflicts are a common cause of service failure
Don’t just restart services blindly — understand why they fail
Troubleshooting is a critical DevOps skill, not just configuration
📐 Real-world insight: In production, issues are rarely “install and run.” Most of the work is diagnosing failures and resolving conflicts under pressure.
#DevOps #Linux #Apache #Troubleshooting #100DaysOfDevOps #KodeKloud
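The debugging flow above can be sketched as a script. The systemd and socket-inspection steps need root on the affected host, so they are left as comments; the runnable part shows extracting the configured port from an httpd.conf-style Listen directive (the file here is a stand-in, not the server's real config):

```shell
# On the affected host (root required), the diagnosis looks like:
#   systemctl status httpd                  # 1. is Apache actually down?
#   journalctl -u httpd --no-pager | tail   # 2. why did it fail to start?
#   ss -tlnp | grep ':5004'                 # 3. which process owns the port?
#   systemctl stop sendmail && systemctl disable sendmail
#   systemctl start httpd && systemctl enable httpd

# Runnable stand-in: read the configured port from a Listen directive.
conf=$(mktemp)
echo "Listen 5004" > "$conf"
port=$(awk '/^Listen/ {print $2}' "$conf")
echo "Apache is configured for port $port"
rm -f "$conf"
```

Comparing the configured port against the `ss -tlnp` output is what turns "the service won't start" into "sendmail is squatting on port 5004".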
🚀 My First Step into DevOps – Nginx Deployment on Linux
Taking my project beyond just development, I successfully deployed my personal portfolio (HTML, CSS, JS) on a Linux server using Nginx, experiencing how applications actually run in real environments.
🛠️ What I implemented & learned:
• Installed and configured Nginx
• Set up custom server blocks (virtual hosts)
• Deployed the project on a custom port (81)
• Managed website files inside /var/www/
• Linked configurations with sites-enabled
• Resolved permission issues using Linux commands
💡 Why this matters: This hands-on practice helped me understand how web servers handle requests, how deployments work in real-world scenarios, and how important configuration & permissions are in hosting applications. It wasn’t just about running a project — it was about learning how applications actually live and perform on servers.
Big thanks to my mentor Nabeel Hassan for the continuous guidance and support 🙌
📈 This is just the beginning, aiming next for CI/CD, automation, and cloud platforms like AWS & Docker.
#DevOps #DevOpsJourney #Linux #Ubuntu #Nginx #WebServer #Deployment #CloudComputing #AWS #Docker #CI_CD #Automation #Infrastructure #ServerManagement #SystemAdmin #WSL #LearningByDoing #FutureEngineer #TechJourney #PortfolioProject #Backend #EngineeringLife #CloudJourney #ITSkills #DevOpsEngineer
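A server block like the one described above might look like this. The port (81) comes from the post, but the root path and filename are assumptions, so treat it as a sketch rather than the author's exact config; normally it would live under /etc/nginx/sites-available/ with a symlink in sites-enabled/:

```shell
# Generate a minimal Nginx server block for a static site on port 81.
# /var/www/portfolio is an assumed path, not taken from the post.
conf=$(mktemp)
cat > "$conf" <<'EOF'
server {
    listen 81;
    server_name _;
    root /var/www/portfolio;
    index index.html;
}
EOF
listen_line=$(grep 'listen' "$conf")
echo "$listen_line"
rm -f "$conf"
```

After installing the file and the symlink, `nginx -t` validates the configuration and `systemctl reload nginx` applies it without dropping connections.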
🚀 Day 20 of 30 – Debugging in Terraform (TF_LOG)
When I first started learning Terraform, one big question I had was: 👉 How do you actually see what Terraform is doing under the hood?
Turns out — Terraform has a powerful built-in logging system you can enable with a single environment variable.
🔹 What is TF_LOG?
TF_LOG is an environment variable that controls Terraform’s logging verbosity.
✔ Helps debug failed plans
✔ Understand provider behavior
✔ Identify state-related issues
✔ No code changes required
🔹 Log Levels (highest → lowest)
TRACE ← most detailed
DEBUG
INFO
WARN
ERROR ← least detailed
🔹 Enable Logging (Linux / Mac)
export TF_LOG=INFO
terraform plan
👉 Logs will be printed directly in your terminal
🔹 Store Logs to a File
export TF_LOG=INFO
export TF_LOG_PATH=terraform.txt
terraform plan
👉 Logs will be saved in terraform.txt — useful for debugging & sharing with teams
🔹 Practical Example
resource "local_file" "foo" {
  content  = "foo!"
  filename = "${path.module}/foo.txt"
}
Run:
export TF_LOG=DEBUG
terraform apply
👉 You can see:
• How path.module is resolved
• File creation steps
• Internal Terraform execution flow
🎯 Key Takeaway: When Terraform behaves unexpectedly — don’t guess, don’t assume. Check the logs first; TF_LOG is your best friend.
📅 Tomorrow: Terraform format
#30DaysOfTerraform #Terraform #DevOps #CloudEngineering #AWS