🔐 Day 7 of #100DaysOfDevOps — Passwordless SSH authentication

Today's task: set up key-based SSH access from the jump host to all 3 app servers so automation scripts can run without any password prompts. This is how real DevOps pipelines work.

Here's the concept: SSH key authentication uses a key PAIR:
🔑 Private key → stays ONLY on your machine (never shared)
🔓 Public key → copied to every server you want to access

When you SSH in, the server challenges you to prove your identity using the private key. No password. No human input. Fully automated.

The 3 commands that make it happen:
1. ssh-keygen -t rsa → Generates your key pair on the jump host
2. ssh-copy-id tony@stapp01 → Copies your public key to the server (password used once — for the last time)
3. ssh tony@stapp01 → Connects instantly. Zero password prompt. ✅

Repeated for all 3 app servers (full sketch below).

Why this matters in DevOps:
→ CI/CD pipelines deploy code via SSH — they can't type passwords
→ Cron jobs that SSH between servers need this
→ Ansible, Terraform, and most automation tools rely on key auth
→ It's more secure than passwords — far harder to brute-force

"ssh-copy-id uses a password once so you never need a password again."

#DevOps #Linux #SSH #Automation #Security #CloudEngineering #KodeKloud
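A minimal sketch of the full flow, assuming the hostnames and the tony account from the post; real environments (including this lab) may use a different user per server:

# Generate a key pair on the jump host; accept the defaults and leave
# the passphrase empty so logins are fully non-interactive.
ssh-keygen -t rsa -b 4096

# Copy the public key to each app server. The account password is
# entered once per server, then never again.
for host in stapp01 stapp02 stapp03; do
    ssh-copy-id tony@"$host"
done

# Verify: each command should print the hostname with no password prompt.
for host in stapp01 stapp02 stapp03; do
    ssh tony@"$host" hostname
done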
10 days into the 100 Days of DevOps Challenge by KodeKloud — here's my progress so far! 🗓️

I've been documenting everything on GitHub and now sharing it here on LinkedIn too. Here's a quick look at what I've tackled:

✅ Linux User Management — creating service accounts with non-interactive shells
✅ Temporary Accounts with Expiry — setting time-bound access for developers
✅ Disabling Root SSH Access — hardening servers against brute-force attacks
✅ Script Execution Permissions — managing file permissions for automation
✅ SELinux Configuration — installing and managing security policies
✅ Cron Job Automation — scheduling tasks across multiple servers
✅ SSH Key Authentication — enabling passwordless access for automated scripts
✅ Ansible Setup — installing and configuring a configuration management tool
✅ MariaDB Troubleshooting — diagnosing and fixing a production database outage
✅ Bash Backup Script — automating website backups with secure remote transfer

Biggest lesson so far? The basics aren't boring — they're the foundation everything else is built on. Security, automation, and reliability all trace back to getting these fundamentals right. 🏗️

What surprised me the most? How many of these tasks overlap in real-world DevOps — SSH keys show up in backup scripts, permissions show up everywhere, and nothing works without proper user management.

Still 90 days to go. One task at a time. 💪

#DevOps #Linux #KodeKloud #100DaysOfDevOps #LearningInPublic #Ansible #Automation #WeeklyRecap
🔐 In DevOps, many deployment issues are not caused by broken code. They are caused by permissions.

I recently reviewed a practical training guide on Linux file permissions and user management, and it reinforces a lesson every engineer learns sooner or later: if you do not understand permissions, you do not fully control your environment.

What makes this resource especially useful is how clearly it connects fundamentals to real operational problems (concrete command examples below):

✅ why chmod +x fixes a script that fails with Permission denied
✅ why SSH refuses private keys that are too open and requires tighter permissions like 400 or 600
✅ why .env files and secrets should never be left broadly readable
✅ why web servers return 403 Forbidden when ownership and directory permissions are wrong
✅ why understanding users, groups, and sudo is essential for managing Docker, services, deployments, and production access safely

This matters because Linux permissions are not just a beginner topic. They directly affect:
* deployment reliability
* service availability
* access security
* secret protection
* day-to-day troubleshooting speed

A small mistake in permissions can mean:
* a script does not execute
* a key is rejected
* a site goes down
* sensitive data becomes readable by the wrong users

🎯 My takeaway: strong DevOps execution is not only about automation. It is also about mastering the Unix fundamentals that quietly control everything underneath. Because in production, permissions are not a detail. They are part of the system's security and stability model.

#DevOps #Linux #LinuxPermissions #SystemAdministration #CloudEngineering #Infrastructure #Automation #CyberSecurity #SSH #Docker #Nginx #UserManagement #chmod #chown #Sudo #SecretsManagement #PlatformEngineering #CloudOps #ITOperations #LinuxBasics
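A few concrete examples of the fixes described above; the file and directory names are placeholders, and www-data is an assumed web-server user (it varies by distro):

# Make a script executable (the usual fix for "Permission denied" on ./deploy.sh).
chmod +x deploy.sh

# SSH rejects private keys readable by group or others; restrict to the owner.
chmod 600 ~/.ssh/id_rsa     # owner read/write
chmod 400 ~/.ssh/id_rsa     # owner read-only, stricter still

# Keep secrets out of everyone else's reach.
chmod 600 .env

# A common cause of 403 Forbidden: the web server user cannot read or
# traverse the document root.
sudo chown -R www-data:www-data /var/www/html   # assumed web-server user
sudo chmod 755 /var/www/html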
🚀 Day 10 of #100DaysOfDevOps Challenge | KodeKloud

Today's task focused on backup automation using Bash scripting, a fundamental responsibility in production support and site reliability workflows.

💾 Day 10: Automating Website Backups with Bash Script

The production support team needed a reliable way to back up a static website hosted on an application server. I developed a Bash script to automate the backup process and securely transfer the archive to a centralized storage server—ensuring data safety and operational continuity.

Here's what I accomplished (sketch below):
✅ Created a Bash script named news_backup.sh under the /scripts directory
✅ Generated a zip archive of the website directory /var/www/html/news
✅ Stored the backup archive temporarily in the /backup/ directory
✅ Copied the archive to a remote storage server for persistent backup
✅ Configured password-less authentication to enable automated file transfer
✅ Ensured the script runs without using sudo, following best practices

This exercise strengthened my understanding of backup strategies, Bash scripting, and secure file transfer, all of which are essential for maintaining data integrity and disaster recovery readiness in real-world DevOps environments.

Consistently building hands-on skills in automation and reliability engineering. On to Day 11! 🚀

#DevOps #Bash #Automation #Backup #Linux #SRE #100DaysOfDevOps #LearningInPublic #CloudComputing
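A minimal sketch of what such a script might look like, using the paths from the post; the archive name and the storage server user/host are assumptions, not the task's actual values:

#!/bin/bash
# /scripts/news_backup.sh: archive the news site and copy it to storage.

SRC="/var/www/html/news"
ARCHIVE="/backup/news_backup.zip"            # archive name is an assumption
DEST="storageuser@storage-server:/backup/"   # user/host are assumptions

# Create the zip archive locally (-r recurses into the site directory).
zip -r "$ARCHIVE" "$SRC"

# Copy it to the storage server. This relies on the password-less SSH
# already configured, so no prompt appears and no sudo is needed.
scp "$ARCHIVE" "$DEST"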
🚀 100 Days of DevOps – Days 9–12 Journey

The past few days pushed me out of my comfort zone — and honestly, that's where the real learning happened.

🔧 What I worked on:
-> Troubleshooting database issues in MariaDB (service failures due to permission problems)
-> Fixing application downtime by debugging logs and system services
-> Writing automation scripts for backups and enabling password-less SSH
-> Setting up and configuring Apache HTTP Server and Apache Tomcat
-> Debugging connectivity issues across servers in a multi-tier environment

⚠️ Challenges I faced (and learned from):
-> Services failing silently due to wrong permissions
-> Ports being blocked even when services were running (this one was tricky!)
-> Confusion between different servers and ports during debugging
-> "No route to host" errors that turned out to be network-level restrictions
-> Fixing one issue… only to discover another layer underneath 😅

💡 Key lessons:
-> Always debug layer by layer: Service → Port → Firewall → Network (see the command sketch below)
-> Logs (systemctl status, journalctl) are your best friend
-> Small configuration mistakes can cause big failures
-> Real DevOps work is not just writing code — it's thinking, testing, and troubleshooting

#DevOps #Linux #SystemAdministration #100DaysOfDevOps #LearningByDoing #CloudJourney
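A hedged sketch of that layer-by-layer flow as commands; httpd, port 8080, and stapp01 are stand-ins for whatever service you are chasing:

# Layer 1 (Service): is it running, and what do the logs say?
systemctl status httpd
journalctl -u httpd -n 50

# Layer 2 (Port): is the service listening where you expect?
ss -tulnp | grep 8080

# Layer 3 (Firewall): is the port allowed locally?
sudo firewall-cmd --list-all    # firewalld systems; use ufw/iptables elsewhere

# Layer 4 (Network): can you reach it from another host?
curl -v http://stapp01:8080/    # "No route to host" here points beyond the service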
Tcpdump: Seeing the Truth at Network Level

In DevOps, when something breaks, logs and dashboards don't always give the full picture. This is where tcpdump becomes useful. It lets you see actual network packets moving in and out of your server. No assumptions, just raw data.

For example, when users report timeouts or 5xx errors, you might check CPU, memory, and logs first. If everything looks normal, tcpdump helps you go deeper. You can capture traffic on a specific port and see if requests are reaching your server, if responses are going out, or if packets are getting retransmitted.

A simple command like capturing traffic on port 8080 can already tell a lot (see the example below). You may notice repeated SYN requests with no response, which means connections are not getting established. Or you might see retransmissions, which usually indicates packet loss somewhere in the network path.

In real environments, traffic is high, so you do not capture everything blindly. You filter based on port, host, or protocol. You capture a small sample or write it to a file and analyze it later using tools like Wireshark. The goal is not to read every packet, but to identify patterns.

Tcpdump is not a daily tool for every issue, but when debugging network problems, it becomes one of the most powerful tools. It helps you confirm whether the issue is in your application, the network, or somewhere in between.

➕ Follow Sai P. for more insights on DevOps & Cloud
♻ Repost to help others learn and grow in DevOps
📩 Save this post for future reference

#Tcpdump #K8s #DevOps #Linux #Networking
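A hedged example of the kind of capture described above; eth0 and the host address are placeholders:

# Sample traffic on port 8080: -nn skips name lookups, -c caps the
# capture at 200 packets so you never capture blindly.
sudo tcpdump -i eth0 -nn -c 200 'tcp port 8080'

# Write a sample to a file for later analysis in Wireshark.
sudo tcpdump -i eth0 -c 1000 -w sample.pcap 'tcp port 8080'

# Narrow by host as well to cut noise on busy servers.
sudo tcpdump -i eth0 -nn 'tcp port 8080 and host 10.0.0.12'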
Puppet Continuous Delivery 5.15.0 is now available 🎉

This release focuses on stability, security hardening, and practical usability improvements for teams running Puppet automation pipelines at scale. If you're already on CD 5.x, this is a straightforward upgrade.

What's new in 5.15.0:
- More flexible webhook configuration with the new external_webhook_url option (useful for proxy-based deployments)
- Idle session timeouts in the CD console for improved security
- Impact Analysis improvements, including skip_empty_catalogs support in Pipelines as Code
- Clearer commit status reporting for GitLab when native pipelines are present
- Better data visibility in the fact picker for pe_patch package updates
- Multiple security and authorization fixes, including tightened access controls and CSRF protections
- Amazon Linux 2023 support for Docker-based installs
- Postgres base image updated to postgres:17-trixie
- Dependency updates addressing a broad set of reported CVEs

This release continues the work of reducing pipeline friction, tightening security posture, and keeping CD upgrades low risk across minor versions.

Full details and CVE listings are available in the official release notes: https://lnkd.in/e6r786nR

If you're standardizing on Puppet CD 5.x, staying current helps ensure fixes land before they turn into operational issues.

#Puppet #ContinuousDelivery #DevOps #InfrastructureAsCode
Day 23 of my Kubernetes (Official) 30 Days Series

Kubernetes is powerful. But by default, it's not secure enough for production. If you don't configure security properly, you're exposing your cluster across multiple layers: Cluster, Network, Workload, Code, Runtime.

In today's carousel, I break down Kubernetes security end-to-end:
🔹 The 5 security layers and full attack surface
🔹 Pod Security Standards: privileged, baseline, restricted
🔹 A hardened securityContext YAML (sketched below): runAsNonRoot, readOnlyRootFilesystem, drop all Linux capabilities, seccomp profiles
🔹 Why each field matters — mapped to real attack vectors
🔹 How Admission Controllers enforce policies (OPA Gatekeeper)
🔹 Built-in controllers you should always enable
🔹 Runtime security tools: Falco → detect suspicious behavior; Trivy → scan images; Cosign → sign and verify images

One important mindset: 👉 Security is layered, not a single setting
Another rule: 👉 Default settings are not production-ready

You must explicitly harden workloads, enforce policies, and monitor runtime behavior.

Day 24 tomorrow: Multi-tenancy & Cost Optimisation — spot nodes, VPA, and right-sizing workloads.

Follow along if you're learning Kubernetes from fundamentals → secure production systems.

#Kubernetes #K8s #DevOps #CloudNative #KubernetesSecurity #PlatformEngineering #Containers #CloudComputing #LearnDevOps
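A minimal sketch of the hardened securityContext described above, using standard Kubernetes Pod spec fields; the names and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                 # placeholder name
spec:
  securityContext:
    runAsNonRoot: true               # refuse to start if the image would run as root
    seccompProfile:
      type: RuntimeDefault           # apply the runtime's default seccomp filter
  containers:
    - name: app
      image: registry.example.com/app:1.0    # placeholder image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true         # container cannot write its own filesystem
        capabilities:
          drop: ["ALL"]                      # drop every Linux capability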
Disk Space Full Issue — How DevOps Engineers Fix It (Step by Step)

One of the most common real-world production issues is the message: "No space left on device."

Here's how to debug and fix it:

Step 1: Check disk usage
df -h
This shows which partition is full.

Step 2: Find which directory is consuming space
du -sh /* 2>/dev/null
This helps identify large folders.

Step 3: Drill down further
du -sh /var/*
du -sh /home/*
This narrows down to the exact location.

Step 4: Find large files
find / -type f -size +100M
This lists files larger than 100MB.

Step 5: Common root cause
Logs often fill up:
/var/log/syslog
/var/log/messages

Step 6: Fix the issue
- Clean old rotated logs: rm -rf /var/log/*.gz
- Clear temp files: rm -rf /tmp/*
- Truncate large logs without deleting them: truncate -s 0 /var/log/syslog

Step 7: Prevent future issues
- Set up monitoring (Prometheus/Grafana)
- Enable log rotation
- Set alerts at 70–80% usage (a minimal alert sketch below)

Lesson: Issues don't crash systems suddenly; they grow silently — monitoring is key. Real DevOps is not just tools — it's how you handle problems.

#DevOps #Linux #SRE #Cloud #Monitoring #RealWorld
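A hedged sketch of the kind of alert Step 7 suggests, runnable from cron; the 80% threshold and the / mount point are assumptions:

#!/bin/bash
# Warn when root filesystem usage crosses a threshold (assumed 80%).
THRESHOLD=80

# df -P prints one parseable line per filesystem; strip the trailing '%'.
USAGE=$(df -P / | awk 'NR==2 {sub("%", "", $5); print $5}')

if [ "$USAGE" -ge "$THRESHOLD" ]; then
    # Replace this echo with mail, a Slack webhook, or your alerting tool.
    echo "WARNING: / is at ${USAGE}% (threshold ${THRESHOLD}%)" >&2
fi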
You're setting up a production server. You need a way for your team to deploy code, but you can't give everyone root access and you shouldn't use your personal account for services. This is the moment every DevOps engineer needs a solid permission strategy.

Here's exactly what you do (full command sketch below):

Step 1: Create a dedicated deploy user. Use useradd -m to give it a home directory, but never use it for manual logins.

Step 2: Set up a webteam group. Add the deploy user and your account using usermod -aG. Warning: forget the -a and you'll strip your user of all other groups.

Step 3: Set ownership & permissions. Assign the app folder to deploy:webteam and set it to 775. Now the group can collaborate safely.

Step 4: The session refresh. If you get "Permission denied" after adding the group, run newgrp webteam. This updates your permissions instantly without logging out.

Step 5: Lock the shell. Change the deploy user's shell to /usr/sbin/nologin. It can own files and run services, but nobody can SSH in directly.

This is what a secure, professional server environment actually looks like. What's your go-to move for locking down service accounts? 🔗

#Linux #DevOps #Security
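A minimal sketch of the five steps as commands, assuming the app folder lives at /var/www/app; swap in your own paths and login name:

# Step 1: dedicated deploy user with a home directory.
sudo useradd -m deploy

# Step 2: shared group; -aG appends (omit -a and you overwrite the group list).
sudo groupadd webteam
sudo usermod -aG webteam deploy
sudo usermod -aG webteam "$USER"

# Step 3: group ownership plus group-writable permissions on the app folder.
sudo chown -R deploy:webteam /var/www/app
sudo chmod -R 775 /var/www/app

# Step 4: start a shell with the new group active, no logout required.
newgrp webteam

# Step 5: block interactive logins for the service account.
sudo usermod -s /usr/sbin/nologin deploy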
Most DevOps issues are NOT complex. They're just… blocked ports.

Every day we work with: CI/CD pipelines, Kubernetes clusters, cloud infrastructure, monitoring tools. But when something breaks, we often overthink it.

👉 Pipeline failed?
👉 API not responding?
👉 Dashboard not loading?

Before diving deep… check the port.

💡 Real-world truth: I've seen issues like:
- Jenkins not triggering → Port 8080 blocked
- Kubernetes API unreachable → Port 6443 issue
- Database connection timeout → Port 3306 / 5432 closed
- Grafana dashboard down → Port 3000 not exposed

⚠️ Most of the time, it's not the system — it's the network.

🧠 Why ports matter in DevOps:
✔ Debug faster (minutes instead of hours)
✔ Fix connectivity issues quickly
✔ Configure firewalls & security groups properly
✔ Avoid unnecessary downtime in production

🚀 Simple habit that makes you better: before debugging code or infrastructure, ask yourself: "Is the required port open and reachable?"

🔥 Pro Tip: run this before anything (example below):
netstat -tulnp
ss -tulnp
telnet <host> <port>

💬 Let's discuss: which port do you troubleshoot the most in your daily work?

#DevOps #Networking #Linux #CloudComputing #Kubernetes #Docker #AWSServices #CICD #SRE #TechCareers #ITSupport #DevOpsEngineer
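A quick example of that habit in practice, probing an assumed Jenkins port; nc is a handy alternative where telnet isn't installed:

# Is anything listening on 8080 on this box?
ss -tulnp | grep ':8080'

# Can a client actually reach it? -z probes without sending data, -v reports.
nc -zv jenkins-host 8080      # hostname is a placeholder

# If the service is listening but the probe times out, suspect the
# firewall or cloud security group rather than the application.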