Day 11 – File Ownership Challenge 🔐

Today's #90DaysOfDevOps challenge focused on mastering file and directory ownership in Linux using `chown` and `chgrp`. At first it seemed like a simple exercise in changing owners and groups, but when I thought about my day-to-day DevOps work, I realized how often this comes up.

💡 Real-life examples where I use this daily:
• Making sure application logs are owned by the right service account so CI/CD pipelines don't fail mid-run.
• Setting correct ownership for shared team directories so developers can collaborate without hitting "Permission denied" errors.
• Managing container volumes, where ownership decides whether the app inside the container can read/write data.
• Ensuring deployment artifacts in build pipelines are accessible to the right users/groups.
• Keeping production servers secure by restricting sensitive files to specific owners and groups.

✅ What I practiced today:
• Changing ownership with `chown` and `chgrp`
• Recursive ownership changes across directories (`-R`)
• Setting up realistic scenarios with multiple users and groups
• Verifying everything with `ls -l`

⚙️ Impact: Getting ownership right means smoother deployments, fewer late-night permission errors, and more reliable collaboration across environments. It's one of those foundational skills that quietly powers everything from Docker volumes to Kubernetes pods.

Day 11 reminded me that DevOps isn't just about flashy tools – it's about mastering the basics that keep systems secure and workflows efficient.

#Day11 #90DaysOfDevOps #DevOpsKaJosh #TrainWithShubham #Linux #FileOwnership #DevOps
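The drill above can be sketched in a few shell commands. This is an illustrative run on a scratch directory (`/tmp/own-demo` is a made-up path); since changing ownership to *another* user requires root, the sketch uses the current user and primary group so it works from any account:

```shell
# Illustrative ownership drill on a scratch directory (path is made up).
# Changing ownership to another user needs root, so we use our own
# user and primary group so the commands succeed from any account.
mkdir -p /tmp/own-demo/logs
touch /tmp/own-demo/logs/app.log

me=$(id -un)       # current user name
grp=$(id -gn)      # current primary group

chown "$me" /tmp/own-demo/logs/app.log    # change the owner
chgrp "$grp" /tmp/own-demo/logs/app.log   # change the group
chown -R "$me:$grp" /tmp/own-demo         # recursive owner:group in one pass

ls -l /tmp/own-demo/logs/app.log          # verify owner and group
```

In real scenarios you would substitute a service account, e.g. `chown -R appuser:appgroup /var/log/myapp`, run with root privileges.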
Mastering File Ownership in Linux with chown and chgrp
**Day 38 of my DevOps Journey** 💻

Managing container images — pulled and re-tagged a Docker image for better version control 🐳

**Task:** Pull Docker Image and Re-Tag

**What I learned today:**
• How Docker images are pulled from registries like Docker Hub
• Purpose of image tagging in container workflows
• Difference between image name and tag
• How multiple tags can point to the same image
• Importance of tagging for environment-specific usage

**What I built / practiced:**
• Connected to Application Server 2 (`stapp02`)
• Verified Docker service status
• Pulled the `busybox:musl` image
• Created a new tag `busybox:blog` using `docker tag`
• Verified both tags using `docker images`

**Challenges:**
• Understanding how tagging works internally
• Differentiating between image versions and tags
• Ensuring correct syntax for tagging

**Fix / Learning:**
• Learned that tags are just references to the same image
• Understood how tagging helps in version control and environment separation
• Gained clarity on managing images locally
• Realized the importance of naming conventions in real projects

**Key takeaway:** Tagging isn't just naming — it's how you organize, version, and manage your container images effectively. This felt like handling real-world image versioning in DevOps 🚀

How do you usually manage your Docker image tags — simple naming or structured versioning (v1, latest, prod)?

#Day38 #DevOps #Docker #Containerization #Linux #Automation #CloudComputing #AWS #DevOpsJourney #LearningInPublic #100DaysOfDevOps
Recently I worked on a CI/CD pipeline project to understand how automated deployments actually work in practice. This was my first time practically implementing a CI/CD workflow, and it helped me understand how different tools integrate. Instead of only learning the tools individually, I tried connecting them in a small working setup.

🔹 Server-Side Configuration
• Used Ansible to configure the target Linux server
• Installed Docker and Docker Compose using automation
• Prepared the server environment for running containers
• Ensured the server was ready for automated deployments

🔹 CI/CD Pipeline
• Code pushed to GitHub
• GitHub webhook triggers the Jenkins pipeline
• Jenkins pulls the latest code from the repository
• Pipeline deploys the application using Docker Compose
• Application runs inside an Nginx container

🔹 What I Learned
• How a CI/CD pipeline works end-to-end
• How GitHub webhooks trigger Jenkins pipelines
• Using Ansible for server configuration and automation
• Deploying containers using Docker Compose
• Connecting multiple DevOps tools in a simple workflow

Tech Stack: GitHub | Jenkins | Ansible | Docker | Docker Compose | Nginx | Linux

This was a small but useful learning project that helped me understand how these tools work together in a real workflow.

#DevOps #CICD #Jenkins #Docker #Nginx #LearningJourney #Infrastructure #SRE #Platform
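The server-side configuration step could look roughly like the playbook below. This is a minimal sketch, not the project's actual playbook: the `app_servers` host group and the `docker.io` package name (Debian/Ubuntu) are assumptions.

```yaml
---
# Hypothetical sketch: host group and package name are assumptions.
- name: Prepare target server for container deployments
  hosts: app_servers
  become: true
  tasks:
    - name: Install Docker
      ansible.builtin.package:
        name: docker.io
        state: present

    - name: Ensure the Docker service is running and enabled
      ansible.builtin.service:
        name: docker
        state: started
        enabled: true
```

Run it with something like `ansible-playbook -i inventory.ini prepare-server.yml` (file names illustrative).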
🚀 Day 7 of My 14 Days Docker Journey | Real DevOps Project: Log Monitoring System 🔥

After learning Docker fundamentals (images, Dockerfile, volumes, networking), I built my first real DevOps-style project 💪

💡 Project: Log Monitoring System (Docker)
In real-world systems, applications generate logs continuously. So I built a mini system where:
👉 One container generates logs
👉 Another container monitors logs in real time

🧩 Architecture
App Container → Volume → Viewer Container
✔ Shared storage using Docker volumes
✔ Real-time log streaming using `tail -f`
✔ Multi-container communication

🛠️ What I Used
✔ Dockerfile (custom images)
✔ Docker volumes (data persistence)
✔ Docker networking (container communication)
✔ Linux scripting

🔥 Key Learning
💥 Containers are temporary, but data can persist using volumes
💥 Real-world systems separate log generation from log monitoring

⚡ Challenges I Faced
❌ Container execution error (`exec ./app.sh`)
❌ File format issues (Linux vs. Windows line endings)
✔ Debugged using `docker logs`, `docker exec`, and container inspection
👉 This was a huge learning moment 🔥

🎯 Outcome
✔ Built a working multi-container system
✔ Logs generated and streamed in real time
✔ Stronger understanding of Docker internals

GitHub Repo Link: https://lnkd.in/gXp7sPR6

💬 If you're learning DevOps, let's connect and grow together!

#Docker #DevOps #LearningInPublic #CloudComputing #AWS #Linux #Containers #BuildInPublic #TechJourney
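The App Container → Volume → Viewer Container layout can be sketched as a Compose file. This is an illustrative sketch, not the repo's actual setup: the `busybox` image, the `/logs` path, and the `logdata` volume name are assumptions.

```yaml
# Hypothetical docker-compose sketch of the two-container layout:
# one writer, one reader, sharing a named volume.
services:
  app:
    image: busybox
    # Writer: appends a timestamp to the shared log every 2 seconds.
    command: sh -c 'while true; do date >> /logs/app.log; sleep 2; done'
    volumes:
      - logdata:/logs
  viewer:
    image: busybox
    # Reader: touch first so tail does not fail before the writer starts.
    command: sh -c 'touch /logs/app.log && tail -f /logs/app.log'
    volumes:
      - logdata:/logs
volumes:
  logdata:
```

`docker compose up` then shows the viewer streaming lines written by the app, demonstrating that the volume (not the containers) holds the data.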
Ever wondered why your CI/CD pipeline feels fast at first… but starts struggling as projects grow?

Let me tell you a simple story. When I first started working with Jenkins, everything was running on a single server. Small builds, fewer users — life was easy. But as the team grew, builds increased and pipelines got heavier… and suddenly everything slowed down. Delays, queue issues, frustration.

That's when I understood the real power of the Jenkins Master-Agent architecture. Think of it like this:

Master node = Brain
Handles the UI, job scheduling, and configuration, and controls everything.

Agents = Muscles
They do the actual heavy lifting — running builds, tests, and deployments.

API layer = Communication bridge
Connects Jenkins with tools like Git, SonarQube, Slack, etc.

Instead of one server doing all the work, Jenkins distributes tasks across multiple agents. Now imagine running builds on Linux, Windows, and Docker… all at the same time. That's not just automation — that's scalability.

Lesson I learned: if you want to grow in DevOps, don't just learn tools… understand how they scale. That's where real engineering begins.

#DevOps #Jenkins #CI_CD #Automation #Scalability #SystemDesign #CloudComputing
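The master/agent split shows up directly in a declarative pipeline: the controller schedules, labeled agents execute. A minimal sketch, assuming hypothetical `linux` and `docker` agent labels and `make` targets:

```groovy
// Hypothetical Jenkinsfile sketch: the controller schedules the stages,
// while each stage runs on whichever agent matches its label.
pipeline {
    agent none                          // the controller itself runs nothing
    stages {
        stage('Build') {
            agent { label 'linux' }     // heavy lifting on a Linux agent
            steps { sh 'make build' }
        }
        stage('Test') {
            agent { label 'docker' }    // tests on a Docker-capable agent
            steps { sh 'make test' }
        }
    }
}
```

With enough agents attached, both labels can serve many such pipelines in parallel, which is the scalability the post describes.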
🔁 Revisiting the Foundations — Connecting the dots again

As I continue documenting my DevOps journey, I decided to pause and revisit the basics — not to relearn, but to rebuild them with better clarity. Over time, I've worked with tools like VirtualBox, WSL, Docker, and PowerShell… but what stood out this time is how much clearer everything feels when you connect them as a system instead of treating them as separate tools.

---

💡 Starting with virtualization
Virtual machines are powered by hypervisors, and understanding their types made a big difference:
- Type 1 (bare metal) → runs directly on hardware
- Type 2 → runs on top of an OS
Tools like VirtualBox and VMware fall under Type 2, while Hyper-V is Type 1 — and once it is enabled, even Windows itself runs on top of it.

---

💡 Re-looking at WSL
I always used WSL assuming it's just "Linux on Windows", but revisiting it clarified the layers:
- WSL 1 → translates Linux system calls into Windows calls
- WSL 2 → runs an actual Linux kernel in a lightweight VM (via Hyper-V)
That shift from translation to real virtualization is quite powerful.

---

💡 Docker in the right context
It's easy to group Docker with VMs, but they solve different problems:
- VMs → full operating systems
- Docker → application-level isolation sharing the host OS kernel
Understanding this distinction makes Docker's efficiency much more intuitive.

---

💡 Shells vs. terminal (something we often overlook)
- CMD → basic command execution
- PowerShell → object-based and more powerful
Both are shells, which ties back to a simple distinction:
- Terminal = interface
- Shell = command interpreter

---

🔑 What stood out to me this time: these aren't isolated tools — they're layers working together.
Hardware → Hypervisor → OS → Shell → Containers / Applications
Revisiting these fundamentals with this perspective made everything feel more structured and less fragmented.

---

I'll be consistently sharing my DevOps journey as I continue strengthening these foundations and exploring deeper concepts 🚀

#DevOps #RevisitingBasics #Docker #WSL #Virtualization
🚀 Docker Deep Dive: From OS-Level Virtualization to Real Execution

In modern production environments, speed and consistency are everything. That's exactly where OS-level virtualization (containers) stands out.

🐳 OS-Level Virtualization (Manual vs. Automated)
Earlier: 👉 manual setups — install dependencies, configure environments, fix conflicts
Now with Docker: 👉 automated builds — the same environment, every time
Result:
✅ Zero "works on my machine" issues
✅ Faster deployments
✅ Predictable infra behavior

📦 Dockerfile = Blueprint of Your Application
A well-written Dockerfile defines everything your application needs to run.

🔧 Core instructions:
FROM → base image
RUN → execute commands during the build
CMD → default command (or default arguments) when the container starts
ENTRYPOINT → fixed executable; CMD then supplies its default arguments, and arguments passed to `docker run` replace CMD, not ENTRYPOINT

📁 File handling:
COPY → local files → container
ADD → like COPY, but can also fetch URLs and unpack local archives

⚙️ Environment & config:
WORKDIR → set the working directory
ENV → environment variables (inside the container)
ARG → variables passed during the build
LABEL → metadata for images
EXPOSE → document the application port

💻 Build & run (versioned deployments):

```
docker build -t srushti:v1 .
docker run -it --name cont1 srushti:v1
docker build -t srushti:v2 .
docker run -it --name cont2 srushti:v2
docker build -t srushti:v3 .
docker run -it --name cont3 srushti:v3
```

👉 Versioning images = controlled deployments + easy rollback

🔥 Bulk cleanup commands (real ops usage):

```
docker kill $(docker ps -q)          # only running containers can be killed
docker rm $(docker ps -qa)           # remove all containers
docker rmi -f $(docker images -qa)   # remove all images
```

👉 Useful for clearing unused resources in dev/test environments

💡 In real-world DevOps, Docker is not just about running containers. It's about:
👉 Standardization
👉 Automation
👉 Reliability at scale

💬 How are you managing image versioning and cleanup in your environment?

#Docker #DevOps #SRE #Cloud #Automation #Linux #Containerization
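The instructions listed above can be pulled together into one small Dockerfile. A minimal sketch only: the base image, port, and file names are illustrative, not from a real project.

```dockerfile
# Illustrative Dockerfile combining the core instructions above
# (base image, port, and file names are assumptions).
FROM python:3.12-slim
LABEL maintainer="devops-team"

ARG APP_VERSION=v1          # build-time variable
ENV APP_ENV=production      # runtime environment variable

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

EXPOSE 8000                 # documents the port the app listens on

ENTRYPOINT ["python"]       # fixed executable
CMD ["app.py"]              # default argument; overridable at docker run
```

Built as `docker build -t myapp:v1 .`, running `docker run myapp:v1 other.py` would replace only the CMD part, illustrating the ENTRYPOINT/CMD relationship described above.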
Scaling your infrastructure automation? It's time to master Ansible Collections. 🚀

If you're moving beyond writing simple, standalone playbooks and looking to build enterprise-grade automation, Ansible Collections are the modern standard. They represent a massive leap forward in how we distribute, maintain, and scale infrastructure as code. Instead of dealing with scattered roles and custom modules, Collections let you bundle roles, custom modules, plugins, and documentation into a single, easily portable package.

Here's why they are essential for modern DevOps workflows:

📦 Standardized structure: Collections enforce a clean hierarchy (roles/, modules/, plugins/, playbooks/), making your code base predictable and easier for teams to collaborate on.

🌐 Namespace organization: Say goodbye to naming conflicts. By grouping content under specific namespaces (like community.general or vendor-specific namespaces), managing dependencies across large environments becomes vastly simpler.

⌨️ Essential commands to know:
- `ansible-galaxy collection install <name>` → pull down what you need
- `ansible-galaxy collection list` → audit your current environment
- `ansible-galaxy collection init <namespace.name>` → scaffold your own custom collection

💡 Pro tips for production environments:
1️⃣ Always use FQCNs (Fully Qualified Collection Names): instead of just calling `user`, use `ansible.builtin.user` in your tasks. It prevents module-resolution conflicts and makes playbooks unambiguous.
2️⃣ Pin your versions: never rely on the latest release in production. Pin your collection versions in requirements.yml to ensure predictable, repeatable playbook runs.

Are you currently migrating legacy roles into Collections, or building them from scratch? Let's discuss in the comments! 👇

#Ansible #DevOps #Automation #InfrastructureAsCode #Linux #RedHat #SystemAdministration #CloudComputing #TechTips
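Version pinning (pro tip 2) looks like this in a `requirements.yml`. A sketch only: the version numbers are examples, not recommendations.

```yaml
# requirements.yml: pin collection versions for repeatable runs
# (version numbers here are illustrative examples).
collections:
  - name: community.general
    version: 8.6.0
  - name: ansible.posix
    version: 1.5.4
```

Install everything in one step with `ansible-galaxy collection install -r requirements.yml`, and commit the file so every environment resolves identical versions.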
🚨 A single wrong permission can break an entire production system.

Sounds dramatic… but it happens more often than you think. While learning Linux for DevOps, I discovered that many real-world issues are not caused by bugs in code — they're caused by incorrect file permissions.

💡 Day 11 of my DevOps journey

Today I explored two very important Linux commands:
🔐 `chmod`
👤 `chown`
These commands control who can access, modify, or execute files on a system. And in DevOps, this matters a lot.

---

📖 Imagine this scenario: an application is deployed successfully. But when the service starts…
❌ It cannot read a configuration file
❌ It cannot access a log directory
❌ It fails to start
The problem? Not the code. Not the server.
👉 Just wrong permissions.

---

🔹 `chmod` — Change file permissions
This command controls what users can do with a file. Example:

```
chmod 755 script.sh
```

This means:
✔ Owner can read, write, and execute
✔ Group and others can read and execute
DevOps engineers use this when:
• Making deployment scripts executable
• Securing configuration files
• Controlling access to directories

---

🔹 `chown` — Change file ownership
Sometimes the issue is not permissions… it's ownership. Example:

```
chown ubuntu:ubuntu app.log
```

This assigns the file to a specific user and group. Very common when working with:
✔ Docker containers
✔ Application logs
✔ Server directories

---

🔥 One thing I learned today: security in Linux is not only about firewalls or authentication. It also starts with correct file permissions.

---

📌 My biggest takeaway: in DevOps, small details like permissions can decide whether an application runs smoothly… or crashes immediately. That's why mastering Linux fundamentals is so important.

---

💬 Quick question for DevOps engineers here: which permission mode do you use the most — `chmod 755`, `chmod 777`, or something else? Let's discuss 👇

---

#DevOps #Linux #LinuxPermissions #CloudComputing #DevOpsEngineer #SRE #TechLearning #OpenSource #ITCar
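The `chmod 755` example can be verified end to end. A small sketch using a throwaway script (`/tmp/script.sh` is an illustrative path):

```shell
# Recreate the chmod 755 example on a throwaway script (path is illustrative).
printf '#!/bin/sh\necho deployed\n' > /tmp/script.sh

chmod 755 /tmp/script.sh          # owner: rwx, group: r-x, others: r-x
stat -c '%a %n' /tmp/script.sh    # shows the octal mode and the file name

/tmp/script.sh                    # now executable: prints "deployed"
```

The `chown ubuntu:ubuntu app.log` example needs root (only root may give files away to another user), which is exactly why ownership problems often surface in containers that run as a non-root user.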
Working on the same codebase with multiple people can quickly become messy. Features are half-done. Bug fixes are in progress. No one knows what's safe to release. That's exactly why Git branches exist.

👉 They give every developer their own space to work — without affecting the main codebase.

This blog breaks down branching in a simple, practical way so you can understand how teams actually use it in real projects. Here's what the blog/attached PDF covers:

1) The real problem branches solve in team environments
2) What a branch actually is (not just a command, but a concept)
3) Why the master/main branch should always stay stable
4) Clean and readable branch naming conventions
5) Creating branches from the UI and the CLI
6) Why local and remote branches behave differently
7) How to sync branches properly (`git pull`)
8) Switching and working across branches
9) Creating and pushing a new branch step by step
10) Understanding upstream (`git push -u origin branch-name`)
11) Different branching strategies used in teams
12) How branching connects with CI/CD pipelines
13) Essential commands you'll use daily

One key idea: branches are what make parallel development possible without breaking things. Once this concept clicks, working in teams becomes much more structured and predictable.

You can read the complete blog using the link below, or review the attached document — both contain the same information:
https://lnkd.in/gwPWWH5s

💡 Quick takeaway: if you know how to use branches properly, you can build features, fix bugs, and collaborate without interfering with others.

What should I write about next? Feel free to comment below and I'll try to create a post on your suggestion within a day. I can cover topics like Git, Ansible, Jenkins, Groovy, Terraform, AWS, networking, Linux, DevOps practices, cloud architecture, CI/CD pipelines, Infrastructure as Code, or anything related.

If you find the content useful, please share it with your network and drop a like 👍; it really helps these posts reach more Linux, DevOps, and Cloud folks. Your likes and shares are what keep me motivated to keep writing consistently. Thanks in advance for your ideas and support!

#Git #VersionControl #DevOps #Linux #SoftwareDevelopment #CI_CD #LearningJourney #TechCareers
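The create-and-switch workflow from the list above can be tried in a throwaway repository. A minimal sketch; the branch name `feature/login-page` and the `/tmp/branch-demo` path are illustrative:

```shell
# Throwaway repo to try branch creation (path and branch name are illustrative).
rm -rf /tmp/branch-demo
mkdir -p /tmp/branch-demo && cd /tmp/branch-demo
git init -q
git config user.email "you@example.com"   # identity needed to commit
git config user.name  "Demo User"
git commit -q --allow-empty -m "initial commit"

git switch -c feature/login-page   # create and switch to a new branch
git branch                         # lists the default branch and feature/login-page

# The first push of a new branch sets its upstream (needs a remote; for reference):
# git push -u origin feature/login-page
```

On older Git versions without `git switch` (pre-2.23), `git checkout -b feature/login-page` does the same thing.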
🚀 End-to-End CI/CD Pipeline using Jenkins, Docker, and DockerHub

Excited to share a hands-on project where I built a complete CI/CD pipeline to automate the build, test, and deployment process of a containerized application.

🔹 What I Did
Designed and implemented an end-to-end CI/CD pipeline using Jenkins to automate code integration, Docker for containerization, and DockerHub for image storage and distribution.

🔹 Tools & Technologies Used
• Jenkins (automation server)
• Docker (containerization)
• DockerHub (image repository)
• GitHub (source code management)
• Linux / EC2 (execution environment)

🔹 Key Highlights
✔️ Automated build and deployment using Jenkins pipelines
✔️ Integrated GitHub with Jenkins for continuous integration
✔️ Built and pushed Docker images to DockerHub
✔️ Reduced manual deployment effort and improved efficiency
✔️ Implemented a continuous delivery workflow

🔹 Workflow
Code Push → Jenkins Build → Docker Image Creation → Push to DockerHub → Deployment 🚀

This project strengthened my practical knowledge of DevOps practices, automation, and container-based deployment pipelines.

#DevOps #CICD #Jenkins #Docker #DockerHub #Automation #CloudComputing #GitHub #Learning #SoftwareEngineering
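The workflow above maps onto a declarative Jenkinsfile roughly like this. A sketch only: the repository URL, image name, and the `dockerhub` credentials ID are placeholders, not the project's actual values.

```groovy
// Hypothetical Jenkinsfile sketch of the Code Push → Build → Push workflow.
// Repo URL, image name, and credentials ID are placeholders.
pipeline {
    agent any
    environment {
        IMAGE = "myuser/myapp:${env.BUILD_NUMBER}"  // tag each build uniquely
    }
    stages {
        stage('Checkout') {
            steps { git url: 'https://github.com/myuser/myapp.git', branch: 'main' }
        }
        stage('Build image') {
            steps { sh 'docker build -t $IMAGE .' }
        }
        stage('Push to DockerHub') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'dockerhub',
                        usernameVariable: 'USER', passwordVariable: 'PASS')]) {
                    sh 'echo $PASS | docker login -u $USER --password-stdin'
                    sh 'docker push $IMAGE'
                }
            }
        }
    }
}
```

Tagging with `BUILD_NUMBER` rather than `latest` is one simple way to keep every deployment traceable back to a specific Jenkins build.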