💻 Coding Questions Asked in DevOps/SRE Interviews (Linux, Shell, Python, Terraform, Ansible)

Many DevOps interviews now include hands-on coding or scripting rounds to test real problem-solving skills. Here are some commonly asked coding questions across different areas 👇

---

🐧 Linux / Shell Scripting
🔹 Write a script to find the top 5 CPU-consuming processes
🔹 Script to monitor disk usage and send an alert if it exceeds 80%
🔹 Find and delete files older than 7 days
🔹 Count the number of lines, words, and characters in a file
🔹 Script to check if a service is running, and restart it if not

💡 Example (head -6 keeps the header row plus the top 5 processes):

ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%cpu | head -6

---

🧾 Shell Scripting (Logic-Based)
🔹 Reverse a string using shell
🔹 Check if a number is prime
🔹 Print the Fibonacci series
🔹 Find duplicate lines in a file

---

🐍 Python (Very Common Now)
🔹 Script to read a log file and find ERROR lines
🔹 Parse a JSON file and extract specific fields
🔹 Call an API and print the response
🔹 Write a script to monitor a website (uptime check)

💡 Example:

with open("app.log") as f:
    for line in f:
        if "ERROR" in line:
            print(line.strip())

Worked sketches for the uptime check and JSON parsing follow at the end of this post.

---

🌍 Terraform (Logic + Understanding)
🔹 Write a Terraform config to:
  - Create an EC2 instance
  - Attach a security group
🔹 Use count or for_each to create multiple resources
🔹 Write a module and reuse it
🔹 Handle dependencies using "depends_on"

---

⚙️ Ansible (Automation Tasks)
🔹 Write a playbook to:
  - Install nginx
  - Start the service
🔹 Use variables for different environments
🔹 Create a role for reusable configuration
🔹 Use handlers to restart a service only when its config changes

---

🧠 General Coding / Logic Questions
🔹 Reverse a string (any language)
🔹 Find the second largest number in an array
🔹 Remove duplicates from a list
🔹 Count the frequency of characters
🔹 Check if a string is a palindrome

---

🚨 Real-World DevOps Tasks
🔹 Write scripts to:
  - Automate backups
  - Check server health
  - Deploy an application
🔹 Parse logs and generate a summary
🔹 Automate cleanup of unused files/resources

---

💡 Pro Tip
Interviewers are not expecting perfect code ❌
They look for:
👉 Logic
👉 Approach
👉 Clean explanation

---

💬 Which type of coding questions have you faced in interviews?

#DevOps #SRE #Python #ShellScripting #Linux #Terraform #Ansible #CodingInterview #TechInterviews
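💡 A minimal sketch for the website uptime check from the Python list above. The URL, timeout, and status-code rule are assumptions for illustration; it uses the third-party requests library:

import requests

URL = "https://example.com"  # placeholder; replace with the site to monitor

def check_uptime(url, timeout=5):
    # Treat any 2xx/3xx response as "up"; connection errors count as "down"
    try:
        response = requests.get(url, timeout=timeout)
        return response.status_code < 400
    except requests.RequestException:
        return False

print(f"{URL} is", "UP" if check_uptime(URL) else "DOWN")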
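💡 And one way the JSON-parsing question could be sketched; the file name and field names here are made up for illustration:

import json

with open("config.json") as f:  # hypothetical input file
    data = json.load(f)

# .get() tolerates missing keys instead of raising KeyError
region = data.get("region", "unknown")
instances = data.get("instances", [])
print(f"Region: {region}, instances: {len(instances)}")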
More Relevant Posts
🚀 Python for DevOps – Log Monitoring with Timestamps & Alerts (Mini Project)

Built a hands-on Python script to analyze logs, generate alerts, and track system health — a small step toward real-world DevOps automation.

📂 Problem: Manually scanning logs is inefficient and error-prone. Needed a way to automatically filter and track critical issues.

💻 Solution (Python Script):

from datetime import datetime

ERROR_COUNT = 0
WARNING_COUNT = 0
INFO_COUNT = 0

with open("app.log") as f, open("alerts.log", "a") as alert_file:
    for line in f:
        timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        if "ERROR" in line:
            ERROR_COUNT += 1
            alert_file.write(f"{timestamp} - {line.strip()}\n")
        elif "WARNING" in line:
            WARNING_COUNT += 1
            alert_file.write(f"{timestamp} - {line.strip()}\n")
        elif "INFO" in line:
            INFO_COUNT += 1

print("============ LOG SUMMARY ============")
print("ERROR:", ERROR_COUNT)
print("WARNING:", WARNING_COUNT)
print("INFO:", INFO_COUNT)

Output:

ubuntu@satheesha:~/python$ python3 log-mon_alert-time.py
============ LOG SUMMARY ============
ERROR: 1
WARNING: 1
INFO: 2
ubuntu@satheesha:~/python$ cat alerts.log
2026-04-21 11:37:1776771454 - INFO - INFO: Service startes
2026-04-21 11:37:1776771454 - WARNING - WARNING: High CPU
2026-04-21 11:37:1776771454 - INFO - INFO: Service startes
2026-04-21 11:37:1776771454 - ERROR - ERROR: Disk full
2026-04-21 11:45:59 - INFO - INFO: Service startes
2026-04-21 11:45:59 - WARNING - WARNING: High CPU
2026-04-21 11:45:59 - INFO - INFO: Service startes
2026-04-21 11:45:59 - ERROR - ERROR: Disk full

(Note: alerts.log is opened in append mode, so entries written by earlier iterations of the script, including ones with a different line format and a malformed timestamp, are still present above the fresh entries.)

🔍 What this script does:
- Reads application logs (app.log)
- Filters critical log levels (ERROR / WARNING / INFO)
- Appends important alerts into alerts.log
- Adds timestamps for better traceability
- Generates summary metrics for quick insights

📊 Why this matters:
- Faster troubleshooting in production
- Clear visibility into system health
- Reduces manual effort in log analysis

🔥 Key Learning: Python is a powerful tool in DevOps — not just for scripting, but for automation, monitoring, and observability.

📈 Next Steps:
- Add alerting (Email / Slack integration)
- Convert logs to a structured format (JSON for the ELK stack)
- Build real-time log monitoring (tail -f style) — see the sketch below

#Python #DevOps #Automation #Logging #Monitoring #Cloud #Scripting #Learning #100DaysOfCode
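💡 A possible starting point for the "tail -f style" next step mentioned above; a minimal sketch, assuming app.log keeps growing and a 0.5-second poll interval:

import time

def follow(path):
    # Yield lines as they are appended to the file, tail -f style
    with open(path) as f:
        f.seek(0, 2)  # 2 = os.SEEK_END: start from the current end of file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)  # no new data yet; poll again shortly
                continue
            yield line

for line in follow("app.log"):
    if "ERROR" in line:
        print("ALERT:", line.strip())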
🚀 Python Basics for DevOps Engineers (Practical Examples)

Python is a powerful tool for automation in DevOps. Here’s a quick guide to essential data types with real-world use cases 👇

🔹 1. String (str)
Used for text (server names, logs, messages)

server = "web-server-1"
print(server)

💡 DevOps Example:

log = "ERROR: Disk full"
if "ERROR" in log:
    print("Issue found")

🔹 2. Integer (int)
Used for numbers (CPU, memory, ports)

cpu = 75
print(cpu)

💡 DevOps Example:

cpu = 85
if cpu > 80:
    print("Alert: High CPU")

🔹 3. Boolean (True / False)
Used for status (running/stopped, success/failure)

is_running = True
if is_running:
    print("Service is running")

💡 DevOps Example:

deployment_success = False
if not deployment_success:
    print("Rollback required")

🔹 4. List (list)
Used to store multiple values (servers, services)

servers = ["web1", "web2", "web3"]
print(servers[0])

💡 DevOps Example:

services = ["nginx", "docker", "jenkins"]
for service in services:
    print(service)

🔹 5. Combine All (Real Example)

servers = ["web1", "web2"]
cpu_usage = 85
status = True

if cpu_usage > 80:
    print("Alert: scale up needed")
if status:
    for s in servers:
        print(f"{s} CPU: {cpu_usage}")

🔹 6. Quick Practice

services = ["web1", "web2"]
status = True
cpu_usage = 85  # fixed variable name

if status:
    for s in services:
        print(f"Server {s} CPU {cpu_usage}")
if cpu_usage > 80:
    print(f"Alert: CPU {cpu_usage}")

Output (interactive session; the cup_usage typo from the original transcript is corrected to cpu_usage):

>>> services = ["web1", "web2"]
>>> status = True
>>> cpu_usage = 85
>>>
>>> if status:
...     for s in services:
...         print(f"Server {s} CPU {cpu_usage}")
...
Server web1 CPU 85
Server web2 CPU 85
>>> if cpu_usage > 80:
...     print(f"Alert: CPU {cpu_usage}")
...
Alert: CPU 85

💡 Key Takeaway: Mastering these basics helps automate monitoring, alerts, and system management in real DevOps environments. A combined sketch follows below.

#DevOps #Python #Automation #Scripting #Learning #AWS #Kubernetes
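💡 To show how these types work together, here is a small sketch that folds the examples above into one function; the server names, readings, and 80% threshold are made-up values:

def check_server(name, cpu, is_running, threshold=80):
    # str + int + bool combined into a single health verdict
    if not is_running:
        return f"{name}: service DOWN, restart required"
    if cpu > threshold:
        return f"{name}: HIGH CPU ({cpu}%)"
    return f"{name}: OK ({cpu}%)"

servers = ["web1", "web2", "web3"]  # list of server names
readings = [85, 40, 72]             # hypothetical CPU readings
for name, cpu in zip(servers, readings):
    print(check_server(name, cpu, is_running=True))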
Production-Style CI/CD Pipeline for Python Application using GitHub Actions and Kubernetes (Minikube)

Act as a Senior DevOps Engineer, Kubernetes Expert, and Trainer. Create a complete real-world DevOps hands-on project demonstrating an end-to-end CI/CD pipeline using GitHub Actions for a Python Flask application, containerized with Docker and deployed to Kubernetes using Minikube (local cluster). The tutorial must be practical, beginner-friendly, and step-by-step, with clear explanations and real code examples.

Include the following sections:

Project Architecture
Explain the overall DevOps workflow. Show a simple architecture diagram like:
Developer → GitHub → GitHub Actions → Build & Test → Docker Image → Push to Docker Hub → Deploy to Kubernetes → Run on Minikube

Create Python Flask Application
Build a simple API with endpoints (a minimal sketch follows at the end of this post):
/ → returns "Hello DevOps"
/health → returns health status
Provide full code for app.py and requirements.txt.

Project Directory Structure

python-devops-project/
├── app.py
├── requirements.txt
├── Dockerfile
├── tests/
│   └── test_app.py
├── k8s/
│   ├── deployment.yaml
│   └── service.yaml
├── helm/
│   └── python-app-chart/
└── .github/
    └── workflows/
        └── ci-cd.yml

Git and GitHub Setup
Provide commands to initialize git, commit code, and push to a GitHub repository.

Docker Containerization
Create a production-ready Dockerfile and explain each Docker instruction. Show commands:
docker build -t python-devops-app .
docker run -p 5000:5000 python-devops-app

Minikube Setup
Explain installation and usage of Docker, kubectl, and Minikube. Commands:
minikube start
kubectl get nodes

Kubernetes Deployment
Provide complete Kubernetes manifests (deployment.yaml, service.yaml). Show commands:
kubectl apply -f k8s/
kubectl get pods
kubectl get services

Access Application
minikube service python-service

Helm Chart Deployment
Create a basic Helm chart for the application. Explain how Helm simplifies Kubernetes deployments.

GitHub Actions CI/CD Pipeline
Create .github/workflows/ci-cd.yml including stages: checkout repository, setup Python, install dependencies, run unit tests using pytest, build Docker image, log in to Docker Hub, push Docker image, update Kubernetes deployment.

Secrets Management
Explain how to store the Docker Hub username and password using GitHub Secrets.

End-to-End Pipeline Flow
Show the complete CI/CD flow:
Developer Push Code → GitHub Repository → GitHub Actions Triggered → Install Dependencies → Run Tests → Build Docker Image → Push Image to Docker Hub → Update Kubernetes Deployment → Application Running on Minikube

Ensure the tutorial is hands-on, practical, and easy for beginners learning DevOps and Kubernetes.

https://lnkd.in/gWCksXaN
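💡 To make the outline concrete, a minimal app.py along the lines the prompt describes; the JSON shape of the /health payload is an assumption, since the prompt only says it returns health status:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello DevOps"

@app.route("/health")
def health():
    # Simple liveness payload; real checks (DB, disk, etc.) can be added later
    return jsonify(status="healthy"), 200

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the app is reachable from inside a container
    app.run(host="0.0.0.0", port=5000)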
🔧 Lab Title: 5 - Jenkins Basics Demo - Freestyle Job

🚀 Project Steps PDF (Your Easy-to-Follow Guide): https://lnkd.in/g4xg4uBg
🔗 GitLab Repo Code: https://lnkd.in/g5Xt4HQz
🔗 DevSecOps Portfolio: https://lnkd.in/g6AP-FNQ
💼 DevOps Portfolio: https://lnkd.in/gT-YQE5U
🔗 Kubernetes Portfolio: https://lnkd.in/gUqZrdYh
🔗 GitLab CI/CD Portfolio: https://lnkd.in/g2jhKsts

Summary:
Today, I worked on 5 - Jenkins Basics Demo - Freestyle Job, where I created and configured Jenkins freestyle and Maven jobs to verify tool installations, integrate GitLab repositories, and automate Java application builds. I explored concepts such as Jenkins job configuration, NodeJS plugin setup, Git SCM integration, and the Maven build lifecycle, and applied them to automate build verification and run unit tests through CI/CD pipelines. This lab also involved setting up Jenkins with plugins and shell scripts to automate testing and packaging, focusing on creating a reliable and efficient continuous integration environment. 🔧🧪

Tools Used:
- Jenkins: Created freestyle and Maven jobs for CI/CD automation.
- NodeJS Plugin: Installed and configured for Node.js integration in Jenkins.
- Git: Managed source code and linked GitLab repositories to Jenkins jobs.
- Maven: Automated Java unit testing and packaging inside Jenkins.

Skills Gained:
- Jenkins Job Configuration: Built and managed freestyle and Maven jobs with build steps and plugins.
- CI/CD Pipeline Setup: Linked SCM branches, ran build scripts, automated tests and packaging.
- Tool Integration: Connected Jenkins with NodeJS, Git, and Maven for seamless automation.

Challenges Faced:
- NodeJS Plugin Setup: Added and removed build steps for NodeJS to keep the job clean.
- Branch-Specific Builds: Configured Jenkins to pull the exact GitLab branch using proper specifiers.

Why It Matters:
This lab provides hands-on experience in automating software builds and tests using Jenkins and related tools. It shows how integration with source control and build automation enhances software delivery efficiency and reliability. Mastering Jenkins freestyle and Maven jobs helps me streamline CI/CD workflows, reduce manual steps, and improve software quality, all critical skills for any modern DevOps or Cloud infrastructure role. ⚙️💡

📌 #DevOps #Jenkins #FreestyleJob #CI_CD #Automation #TechLearning #DevOpsJourney

🚀 Stay tuned! The next project, 6 - Docker in Jenkins, is coming soon. 🔥
🚀 Python for DevOps – Triggering Jenkins Jobs via API

Explored how to automate CI/CD by triggering Jenkins jobs using Python and APIs — a key real-world DevOps capability.

📂 Use Case: Instead of manually clicking “Build Now” in Jenkins, we can trigger jobs programmatically using APIs.

💻 Python Script:

import requests

jenkins_url = "http://your-jenkins-url/job/your-job-name/build"
username = "your-username"
api_token = "your-api-token"

response = requests.post(jenkins_url, auth=(username, api_token))

if response.status_code == 201:
    print("✅ Jenkins job triggered successfully")
else:
    print("❌ Failed to trigger job:", response.status_code)

🔐 Handling CSRF (Crumb Token):

Note: the crumb issuer lives at the Jenkins root URL, not under the job path, and the resulting headers must be passed along with the trigger POST (a combined runnable version follows below):

jenkins_base = "http://your-jenkins-url"
crumb_url = f"{jenkins_base}/crumbIssuer/api/json"
crumb_data = requests.get(crumb_url, auth=(username, api_token)).json()
headers = {
    crumb_data['crumbRequestField']: crumb_data['crumb']
}

⚙️ Trigger Job with Parameters:

params = {
    "ENV": "prod",
    "VERSION": "1.0"
}
response = requests.post(
    "http://your-jenkins-url/job/your-job-name/buildWithParameters",
    params=params,
    headers=headers,
    auth=(username, api_token)
)

🔍 What this enables:
- Automate CI/CD pipelines
- Trigger builds from scripts or monitoring tools
- Integrate Jenkins with other systems
- Reduce manual intervention

🔥 Why this matters in DevOps: Automation is the backbone of DevOps. Using APIs, we can connect tools and build end-to-end automated workflows.

💡 Key Learning: Jenkins + APIs + Python = a powerful combination for pipeline automation and integration.

📈 Next Steps:
- Trigger Jenkins from a log monitoring script
- Send build status to Slack
- Integrate with cloud deployments (AWS)

#DevOps #Jenkins #Python #Automation #CICD #API #Cloud #Scripting #Learning #100DaysOfCode
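💡 Putting the fragments above together into one runnable sketch; the base URL, job name, credentials, and parameters are all placeholders:

import requests

JENKINS_BASE = "http://your-jenkins-url"    # placeholder base URL
JOB = "your-job-name"                       # placeholder job name
AUTH = ("your-username", "your-api-token")  # placeholder credentials

# 1. Fetch the CSRF crumb from the Jenkins root
crumb = requests.get(f"{JENKINS_BASE}/crumbIssuer/api/json", auth=AUTH).json()
headers = {crumb["crumbRequestField"]: crumb["crumb"]}

# 2. Trigger the parameterized build with the crumb header attached
response = requests.post(
    f"{JENKINS_BASE}/job/{JOB}/buildWithParameters",
    params={"ENV": "prod", "VERSION": "1.0"},
    headers=headers,
    auth=AUTH,
)

# Jenkins answers 201 Created when the build is queued
print("✅ Triggered" if response.status_code == 201 else f"❌ Failed: {response.status_code}")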
🚀 DevOps Learning Journey – Day 2 | Docker Hands-On Flow & Image Creation

Continuing my Docker learning journey, today I worked on the practical workflow of building and running containers using a real Maven web application. Sharing my notes for anyone learning containerization step-by-step.

📦 Docker Flow – Basic Commands I Practiced

Checking Docker images and containers:
docker images
docker ps
docker container ls

Running containers (the Tomcat container is named "tomcat" here so the later exec/rm commands match):
docker run -d -p 80:80 --name nginxcontainer nginx:latest
docker run -d -p 8080:8080 --name tomcat tomcat:latest

Checking open ports:
netstat -tunlap

Installing netstat if missing:
sudo apt install net-tools

Inspecting container contents:
docker exec tomcat ls /usr/local/tomcat/webapps
docker exec tomcat java -version

Removing and recreating containers:
docker rm -f tomcat
docker run -d -p 8080:8080 --name tomcat tomcat:6.0.43-jre8

📦 Understanding Docker Objects
- Containers → Running instances of images
- Images → Templates used to create containers
- Volumes → Persistent storage
- Networks → Communication between containers
- Registries → Store container images
- Dockerfile → Defines image creation steps
- Docker Compose → Multi-container application management

📁 What is the Docker Build Context?
The build context includes the Dockerfile and all files required during the image build process. It determines which files Docker can access while building the image.

⚙️ Practical Session – Building a Docker Image from a Maven Project

Step 1: Verify Docker installation
docker --version

Step 2: Clone the sample application
git clone https://lnkd.in/gJb7XZXQ
cd maven-web-app-project-kk-funda

Step 3: Install dependencies
sudo apt install maven -y
java --version
mvn -version

Step 4: Build the artifact
mvn clean package

Step 5: Create the Dockerfile
FROM tomcat:9.0.100
COPY target/maven-web-application.war /usr/local/tomcat/webapps/maven-web-application.war

Step 6: Build the Docker image
docker build -t kkeducation12345/java-web-app .

Step 7: Verify images
docker images

Step 8: Push the image to Docker Hub
docker login
docker push kkeducation12345/java-web-app

Step 9: Run the container
docker run -d -p 8080:8080 --name javawebapp kkeducation12345/java-web-app

Step 10: Verify the container
docker ps -a

Access the container shell:
docker exec -it javawebapp /bin/bash

📌 Key Learning Outcomes Today
✔ Built a WAR file using Maven
✔ Created a Dockerfile
✔ Built a custom Docker image
✔ Pushed the image to Docker Hub
✔ Deployed a containerized web application using Tomcat

Next steps in my learning journey:
➡ Docker Volumes
➡ Docker Networks
➡ Docker Compose

Currently strengthening hands-on skills in Docker, Kubernetes, AWS, Terraform and CI/CD pipeline automation as part of my DevOps roadmap. A scripted version of the build-and-run steps follows below.

#Docker #DevOps #AWS #Kubernetes #Tomcat #Maven #CI_CD #CloudComputing #LearningJourney
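💡 As a follow-up, the same build-and-run steps can be scripted instead of typed by hand. A sketch using the Docker SDK for Python (pip install docker), reusing the image and container names from the steps above; treat it as illustrative, not part of the original lab:

import docker

client = docker.from_env()  # connect to the local Docker daemon

# Build the image from the Dockerfile in the current directory (Step 6)
image, logs = client.images.build(path=".", tag="kkeducation12345/java-web-app")

# Run the container, mapping container port 8080 to host port 8080 (Step 9)
container = client.containers.run(
    "kkeducation12345/java-web-app",
    name="javawebapp",
    ports={"8080/tcp": 8080},
    detach=True,
)
print(container.name, container.status)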
🔧 Lab Title: 6 - Docker in Jenkins 🐳

🚀 Project Steps PDF (Your Easy-to-Follow Guide): https://lnkd.in/gBSpd7uc
🔗 GitLab Repo Code: https://lnkd.in/gm7-QuWM
🔗 DevSecOps Portfolio: https://lnkd.in/g6AP-FNQ
💼 DevOps Portfolio: https://lnkd.in/gT-YQE5U
🔗 Kubernetes Portfolio: https://lnkd.in/gUqZrdYh
🔗 GitLab CI/CD Portfolio: https://lnkd.in/g2jhKsts

Summary:
Today, I worked on 6 - Docker in Jenkins, where I integrated Docker with Jenkins to automate building and pushing Docker images to Docker Hub and Nexus repositories. I explored key concepts such as Docker containerization, Jenkins pipeline automation, Docker Hub and Nexus registry management, and secure Docker credential handling. This lab involved setting up Docker inside the Jenkins container, building Docker images from Java JAR files, and automating the push of these images to public and private registries, ensuring a consistent and portable CI/CD pipeline. 🚀🐳

Tools Used:
- Jenkins: Orchestrated CI/CD pipelines and managed Docker container builds inside Jenkins jobs.
- Docker: Built, managed, and pushed container images from Java applications.
- Docker Hub: Hosted and shared Docker images publicly.
- Nexus Repository: Served as a private Docker registry for secure internal image storage.

Skills Gained:
- Docker & Jenkins Integration: Enabled Jenkins to access the Docker daemon for container management.
- Docker Image Creation: Wrote Dockerfiles and built images for Java apps for containerized deployment.
- Docker Registry Management: Learned to push images securely to Docker Hub and Nexus, handling credentials properly.

Challenges Faced:
- Permissions on the Docker Socket: Adjusted socket permissions (chmod 666) inside the Jenkins container to enable Docker commands.
- Insecure Registry Configuration: Configured the Docker daemon to trust Nexus as an insecure registry for image pushes.

Why It Matters:
This lab is critical for modern DevOps workflows, as it teaches how to containerize applications and automate their build and deployment through Jenkins. By integrating Docker and Jenkins, I ensure consistent, repeatable builds and seamless deployment of containerized applications across environments. These skills enable scalable, reliable CI/CD pipelines essential for cloud-native and microservices architectures. 🧱⚙️

📌 #DevOps #Jenkins #Docker #CI_CD #Containerization #Automation #TechLearning #DevOpsJourney

🚀 Stay tuned! The next project, 8 - Intro to Pipeline Job, is coming soon. 🔥
🚀 Python for DevOps – Log Level Automation Project

Today I built a practical DevOps script using Python to analyze logs and separate them based on log levels.

📂 Problem: Manually checking logs is time-consuming. Needed a way to automatically filter and organize logs.

💻 Solution (Python Script):

with open("app.log") as f, \
     open("error.log", "w") as err, \
     open("warning.log", "w") as warn, \
     open("info.log", "w") as info:
    for line in f:
        if "ERROR" in line:
            err.write(line)
        elif "WARNING" in line:
            warn.write(line)
        elif "INFO" in line:
            info.write(line)

Output:

ubuntu@satheesha:~/python$ python3 multiple-log_level.py
ubuntu@satheesha:~/python$ ls -ltr error.log warning.log info.log
-rw-r--r-- 1 ubuntu ubuntu 18 Apr 21 07:45 warning.log
-rw-r--r-- 1 ubuntu ubuntu 44 Apr 21 07:45 info.log
-rw-r--r-- 1 ubuntu ubuntu 17 Apr 21 07:45 error.log
ubuntu@satheesha:~/python$ cat app.log
INFO: Service startes
WARNING: High CPU
INFO: Service startes
ERROR: Disk full
ubuntu@satheesha:~/python$ cat error.log
ERROR: Disk full
ubuntu@satheesha:~/python$ cat warning.log
WARNING: High CPU
ubuntu@satheesha:~/python$ cat info.log
INFO: Service startes
INFO: Service startes

🔍 What this script does:
- Reads app.log
- Filters logs into: error.log, warning.log, info.log

📊 Outcome:
- Faster troubleshooting
- Organized logs for better monitoring
- Reduced manual effort

🔥 Real DevOps Use Cases:
- Production log monitoring
- CI/CD pipeline validation
- Incident detection and alerting

💡 Key Learning: Python is a powerful tool for automation in DevOps, especially for handling logs and system data.

📈 Next Step: Enhancing this script to:
- Count log levels (see the sketch below)
- Trigger alerts (email/Slack)
- Monitor logs in real-time (tail -f style)

#Python #DevOps #Automation #Scripting #Cloud #Learning #100DaysOfCode
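💡 A possible sketch for the "count log levels" enhancement listed above, reusing app.log and the same three levels:

from collections import Counter

LEVELS = ("ERROR", "WARNING", "INFO")
counts = Counter()

with open("app.log") as f:
    for line in f:
        for level in LEVELS:
            if level in line:
                counts[level] += 1
                break  # assume one level per line

print("============ LOG SUMMARY ============")
for level in LEVELS:
    print(f"{level}: {counts[level]}")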
Ansible_9 Automation Execution Environment

An Automation Execution Environment (EE) is a portable, consistent runtime for running Ansible automation. Think of it as a pre-built container image (like Docker/Podman) that already has everything your automation needs.

🔹 Simple Idea
Instead of installing Python, Ansible, collections, and dependencies on every control node, you package everything into one environment and run it anywhere.

Ansible execution environments were introduced in Ansible Automation Platform 2 to provide a defined, consistent, and portable environment for executing automation jobs. Execution environments are basically Linux container images that help execute Ansible playbooks. The container images contain the necessary components to execute Ansible automation jobs: Python, Ansible (ansible-core), Ansible Runner, required Python libraries, and dependencies.

When you install Ansible Automation Platform, the installer deploys the following container images, whether you're in a connected or a disconnected installation:
- ee-29-rhel8 contains Ansible 2.9, for use with older Ansible playbooks.
- ee-minimal-rhel8 is the minimal container image with ansible-core and basic collections.
- ee-supported-rhel8 is the container image with ansible-core and automation content collections supported by Red Hat.

Ansible Automation Platform's default container images let you start doing automation without any additional configuration. You can follow the standard container image build process for building execution environment images, but Ansible Automation Platform also includes a command-line utility called ansible-builder to build container images for custom execution environments.

The ansible-builder tool can be installed from the upstream Python repository or the Red Hat RPM repository:

$ pip3 install ansible-builder
$ sudo dnf install ansible-builder

ansible-builder builds container images from the definition file execution-environment.yml. A typical execution-environment.yml contains the base container image (EE_BASE_IMAGE), ansible.cfg, and other dependency file details:

---
version: 1
build_arg_defaults:
  EE_BASE_IMAGE: 'https://lnkd.in/gu-Z_8zx'
ansible_config: 'ansible.cfg'
dependencies:
  galaxy: requirements.yml
  python: requirements.txt
additional_build_steps:
  append:
    - RUN microdnf install which

Once you've prepared the execution-environment.yml, run the ansible-builder build command to create a build context that includes the Containerfile:

$ ansible-builder build --tag my_custom_ee
Running command:
  podman build -f context/Containerfile -t my_custom_ee context
...
/home/ralagarasan/ansible-aap-demo/context

There are two options for building and using custom execution environments with Ansible Automation Platform: building and transferring the container image, or creating a custom environment.
🐧 Shell Scripting Journey - Understanding Variables in Bash!

If you're diving into DevOps, scripting is not optional - it's essential! 🚀

There are two major scripting languages you'll encounter:
- 🐍 Python Scripting
- 💻 Bash Scripting

But before jumping into Bash, you need a solid understanding of Linux fundamentals. Once you're comfortable there, Bash scripting starts to make a lot more sense, and that's exactly where I am right now!

-------------------------------------------------------------------------

📦 What Are Variables in Bash?

Think of a variable as a labeled box where you store data temporarily. Instead of repeating the same value across your script, you define it once and reuse it everywhere!

✅ Basic variable usage:

name="david"
echo "My name is $name"

x=100
echo $x
echo $(($x - 10))  # Arithmetic with variables

⚠️ Common mistakes to avoid:
- name = "david" ❌ —> No spaces around the = sign!
- echo name ❌ —> Always use $ to access a variable's value!

-------------------------------------------------------------------------

📥 Taking User Input & Passing Arguments

🔹 Interactive input using read:

echo -n "Which service do you want to check? "
read service
sudo service $service status

🔹 Command line arguments - pass values directly when running the script:

./sample.sh pwd free

Inside the script:

command1=$1
command2=$2
echo "Command-1 : $command1"
echo "Command-2 : $command2"

💡 Used heavily in CI/CD pipelines (like Jenkins) to pass environments like Dev/Prod at runtime!

-------------------------------------------------------------------------

🔒 Variable Types - Know the Difference!

- 📌 Local Variables —> Live only inside the script. Gone when the script ends.
- 🌍 Environment Variables —> Accessible outside the script too. Use export to create them. Gone when the terminal closes.
- 📁 Shell Variables (in ~/.bashrc) —> Persist even after restart!

🚨 Security Reminder: NEVER hardcode passwords or API keys inside your scripts! Use environment variables or dedicated secret managers like:
- HashiCorp Vault
- AWS Secrets Manager

This is industry best practice - and it matters! 🔐 (A Python version of the same idea follows below.)

If you're on the same journey, let's connect and grow together! 🤝

#Bash #ShellScripting #DevOps #Linux #Automation #CloudComputing #LearnInPublic #DevOpsJourney #variables #BashScripting
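💡 The same no-hardcoded-secrets rule carries over to Python, the other scripting language mentioned at the top of this post. A minimal sketch, assuming a variable named API_TOKEN was exported beforehand (the name is just an example):

import os

# Read the secret from the environment instead of hardcoding it
api_token = os.environ.get("API_TOKEN")
if api_token is None:
    raise SystemExit("API_TOKEN is not set; export it before running")

print("Token loaded, length:", len(api_token))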