Day 05 of 30

Not every day is about big deployments. Some days are about keeping everything clean, organized, and moving smoothly. Here is what I worked on:

🌐 Domain Update — Updated domain names and base-URL environment variables across the dev and production environments to reflect the new URLs correctly

📂 CI File Segregation — Split the CI configuration for the main app and the other deployments into separate files within a single repository. This makes each deployment independent, easier to manage, and less risky to touch

🐳 Docker Image Cleanup — Verified GitLab registry cleanup and updated the CI files to build and push Docker images with branch-specific tags only — no more unnecessary images piling up in the registry

🚀 New Service Deployed — Set up a complete GitLab CI configuration for a new API service and deployed it successfully to the dev environment

🔀 Code Reviews — Reviewed and merged MRs to keep the team unblocked

The CI file segregation was the most useful change today. When one big CI file handles everything, a small mistake can break all deployments. Splitting it gives you better control and reduces risk. Clean pipelines and clean registries are just as important as building new things.

#DevOps #GitLabCI #Docker #CICD #DockerRegistry #Linux #DevOpsEngineer
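A branch-specific tagging setup like the one described might look like this in `.gitlab-ci.yml`. This is a hedged sketch, not the author's actual config: the stage name and Docker image versions are assumptions, while the `$CI_*` variables are GitLab's predefined CI variables.

```yaml
build_image:
  stage: build
  image: docker:27
  services:
    - docker:27-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # Tag with the branch slug only, so the registry holds one image per
    # branch instead of one per commit piling up.
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
```

Because `$CI_COMMIT_REF_SLUG` is stable per branch, each push overwrites that branch's tag rather than creating a new one.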
DevOps Update: Domain, CI, Docker, and Code Review
More Relevant Posts
The DevOps Tools Engineer 2.0 exam from Linux Professional Institute (LPI) dedicates the entire objective 701.3 to Source Code Management, with #Git front & center! Dive into this new episode of the #DevOps series, by Fabian Thorns and Uirá Ribeiro, to learn why #Git matters: https://lpi.org/dl72 #SCM #DevOps #Git #VersionControl #opensource #FOSS #SoftwareDevelopment
Recently I worked on a CI/CD pipeline project to understand how automated deployments actually work in practice. This was my first time practically implementing a CI/CD workflow, and it helped me understand how different tools integrate. Instead of only learning the tools individually, I tried connecting them in a small working setup.

🔹 Server-Side Configuration
• Used Ansible to configure the target Linux server
• Installed Docker and Docker Compose using automation
• Prepared the server environment for running containers
• Ensured the server was ready for automated deployments

🔹 CI/CD Pipeline
• Code pushed to GitHub
• GitHub webhook triggers Jenkins pipeline
• Jenkins pulls the latest code from the repository
• Pipeline deploys the application using Docker Compose
• Application runs inside an Nginx container

🔹 What I Learned
• How a CI/CD pipeline works end-to-end
• How GitHub webhooks trigger Jenkins pipelines
• Using Ansible for server configuration and automation
• Deploying containers using Docker Compose
• Connecting multiple DevOps tools in a simple workflow

Tech Stack: GitHub | Jenkins | Ansible | Docker | Docker Compose | Nginx | Linux

This was a small but useful learning project that helped me understand how these tools work together in a real workflow.

#DevOps #CICD #Jenkins #Docker #Nginx #LearningJourney #Infrastructure #SRE #Platform
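The server-side configuration step could be sketched as an Ansible play like the one below. This is a hedged illustration, not the author's playbook: the host group and the Debian/Ubuntu-style package names are assumptions.

```yaml
- name: Prepare deployment server
  hosts: deploy_servers
  become: true
  tasks:
    - name: Install Docker engine and the Compose plugin
      ansible.builtin.apt:
        name:
          - docker.io
          - docker-compose-v2
        state: present
        update_cache: true

    - name: Ensure Docker is running and starts on boot
      ansible.builtin.service:
        name: docker
        state: started
        enabled: true
```

Run with something like `ansible-playbook -i inventory.ini prepare.yml`; after this, the server is ready for Jenkins to deploy containers onto it.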
How many times have you pushed a workflow file only to have CI fail because of a typo, a missing job reference, or a circular dependency? PipeChecker catches those problems before they reach your repo.

🔍 What it checks:
✅ Circular job dependencies (via Tarjan's SCC algorithm)
✅ Missing needs / depends_on references
✅ Hardcoded secrets & undeclared env vars
✅ Unpinned GitHub Actions & Docker :latest tags
✅ Empty or malformed pipelines

🛠 Built with Rust for speed, with support for GitHub Actions, GitLab CI, and CircleCI.
📦 Available on crates.io, npm, and as standalone binaries (Linux, macOS, Windows).
🔗 GitHub: https://lnkd.in/gXu_-a7e

Would love feedback from the DevOps / platform engineering community. What's the worst CI/CD config mistake you've shipped to prod? 😅

#Rust #DevOps #CICD #GitHubActions #PlatformEngineering #CLI #OpenSource
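The circular-dependency check above can be illustrated with Tarjan's strongly connected components algorithm. This is a Python sketch of the general technique, not PipeChecker's actual Rust implementation, and the job names and `needs` edges are hypothetical.

```python
def tarjan_scc(graph):
    """Return the strongly connected components of {node: [deps]} as lists."""
    index, lowlink, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:          # v is the root of an SCC
            scc = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

# A job that transitively needs itself shows up as an SCC of size > 1
# (or as a self-loop), which a checker can flag as a circular dependency.
jobs = {"build": [], "test": ["build"],
        "deploy": ["test", "deploy_gate"],
        "deploy_gate": ["deploy"]}          # deploy <-> deploy_gate is a cycle
cycles = [c for c in tarjan_scc(jobs)
          if len(c) > 1 or c[0] in jobs.get(c[0], [])]
```

Here `cycles` contains the `deploy`/`deploy_gate` pair, while `build` and `test` pass clean.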
📌 Continuing my DevOps Journey — this time with Git & GitHub!

Git is the backbone of every modern development workflow. No Git = no DevOps.

Here's what I covered:
✅ How Git works — Working Directory → Staging → Local Repo → GitHub
✅ Core commands — init, add, commit, push, pull, log, status
✅ Branching & Merging — and handling conflicts
✅ Stash, Reset & Revert — undoing changes the right way
✅ Tags, Clone & .gitignore
✅ Pull Requests — reviewing before merging

Every DevOps pipeline starts with a git push. Now I actually understand what happens after that. 🚀

#DevOps #docker #linux #Git #GitHub #VersionControl #CloudEngineering #cheatsheet
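The Working Directory → Staging → Local Repo flow can be walked through in a throwaway repository (a minimal sketch, assuming git is installed; the file name, identity, and commit message are arbitrary):

```shell
# Create a throwaway repository so nothing here touches a real project.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"  # hypothetical identity for the demo
git config user.name "Demo"

echo "hello" > app.txt   # 1. the change exists only in the working directory
git add app.txt          # 2. 'add' moves it into the staging area
git commit -qm "first"   # 3. 'commit' records it in the local repository
git log --oneline        # shows the commit; 'git push' would send it to GitHub
```

`git status` between each step makes the transitions visible: untracked, then staged, then clean.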
Going from git push to a live production container - all on my own hardware. 🛠️

I recently shared a project I built for media processing. While the app itself was fun, deploying updates to my self-hosted environment was painful: manually build and push a new Docker image, SSH into the server, pull the new image, spin up the new container. I wanted to streamline this process with a professional-grade CI/CD pipeline; that was the real playground.

The Pipeline:
1. Version Control: Code pushed to GitHub.
2. CI: GitHub Actions triggers an Ubuntu VM to build from my Dockerfile.
3. Registry: The fresh image is pushed to my public Docker Hub repository.
4. The "Handshake": GitHub Actions hits a custom webhook server I wrote, running on my home lab.
5. CD: My server pulls the latest image and restarts the container automatically.

Was a full CI/CD pipeline necessary for a tool used by exactly two people? Probably not. But building it taught me more about the "plumbing" of DevOps - handling GitHub Actions, securing my secrets, webhooks, and Docker registries - core concepts I can apply to future development, all while having fun building something simple. And most importantly, it removed friction from my own workflow with one-click deployment.

The biggest hurdle? It wasn't the webhook server or the Docker orchestration. It was spending longer than I care to admit figuring out why my test downloads were "corrupted" on my Linux machine, only to realize I just needed to install VLC to support the MP4/MKV codecs. Sometimes the "bug" is just your media player! 😂

It's not just about the final tool; it's about having a playground to break things and fix them.

#DevOps #Docker #GithubActions #SelfHosted #HomeLab #BackendEngineering
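The "handshake" step can be sketched as follows: verify the webhook's HMAC signature (GitHub sends it in the X-Hub-Signature-256 header) before pulling and restarting anything. This is a hedged sketch, not the author's server; the secret value and the compose commands are assumptions.

```python
import hashlib
import hmac
import subprocess

WEBHOOK_SECRET = b"replace-me"  # hypothetical shared secret, also set in GitHub's webhook UI

def verify_signature(payload: bytes, signature_header: str) -> bool:
    """Compare GitHub's 'sha256=<hex>' header against our own HMAC of the body."""
    expected = "sha256=" + hmac.new(WEBHOOK_SECRET, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, signature_header)

def redeploy() -> None:
    """Pull the newest image and restart the stack (assumes a compose file on the host)."""
    subprocess.run(["docker", "compose", "pull"], check=True)
    subprocess.run(["docker", "compose", "up", "-d"], check=True)

# In a real HTTP handler, each POST from GitHub Actions would be handled as:
#   if verify_signature(request_body, headers["X-Hub-Signature-256"]):
#       redeploy()
#   else:
#       return 403
```

Rejecting unsigned requests matters here because the webhook endpoint is reachable from the public internet.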
🚀 Jenkins + GitHub Integration (CI/CD Basics)

Documented a complete setup for integrating Jenkins with GitHub, covering:
✔ Git pull using a pipeline
✔ Webhook-based auto trigger
✔ Secure git push from Jenkins

Also explored the difference between credentialsId and withCredentials — a small concept, but very important in real pipelines.

📊 Sharing my step-by-step PPT for reference 👇

#DevOps #Jenkins #GitHub #CICD #LearningInPublic
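The credentialsId vs. withCredentials distinction can be shown side by side in a Jenkinsfile. This is a hedged sketch (the repo URL and credential IDs are hypothetical): credentialsId is a parameter on a step that already knows how to consume a credential, while withCredentials explicitly binds a stored secret to environment variables for arbitrary shell commands.

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps {
                // credentialsId: the git step itself resolves and uses the credential.
                git url: 'https://github.com/example/app.git',
                    branch: 'main',
                    credentialsId: 'github-token'
            }
        }
        stage('Push') {
            steps {
                // withCredentials: bind the secret to env vars, scoped to this block,
                // so plain shell commands can authenticate without hardcoding secrets.
                withCredentials([usernamePassword(credentialsId: 'github-token',
                                                  usernameVariable: 'GIT_USER',
                                                  passwordVariable: 'GIT_PASS')]) {
                    sh 'git push https://$GIT_USER:$GIT_PASS@github.com/example/app.git HEAD:main'
                }
            }
        }
    }
}
```

Jenkins masks the bound variables in the console log, which is why withCredentials is preferred over injecting secrets by hand.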
Stop typing cd ../../../ to get to your repo.

The terminal is where we live, but most DevOps engineers accept the default, painful experience. The micro-friction of navigation and typos kills your flow state. Here are 2 CLI tools that stopped me from wasting time:

1. 𝘁𝗵𝗲𝗳𝘂𝗰𝗸
You typed git psuh or forgot sudo. Don't retype the whole line. Just type fuck. It grabs the previous command, fixes the error, and runs it.
👉 'alias fix='fuck'' (HR-safe version 🙂)

2. 𝘇𝗼𝘅𝗶𝗱𝗲
Stop memorizing paths like cd ~/projects/client/backend. Zoxide learns your habits. Just type z backend. It takes you there instantly using fuzzy matching.

I've broken down 𝟯 𝗺𝗼𝗿𝗲 𝘁𝗼𝗼𝗹𝘀 (including a cat replacement that actually has syntax highlighting and a top alternative that visualizes Docker stats) in this week's DecodeOps. Subscribe to read the full list and upgrade your terminal quality of life here:
👉 https://lnkd.in/gT8QjSR7

P.S. After subscribing, please check your email and download the IaC & CI/CD from Day Zero Checklist as a welcome gift.

#DevOps #Linux #Productivity #SRE #CLI
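Both tools hook into the shell with one line each. A minimal sketch for ~/.bashrc, assuming both are already installed (they also support zsh and fish):

```shell
# thefuck: exposes the corrector under the HR-safe alias 'fix'
eval "$(thefuck --alias fix)"

# zoxide: enables 'z <keyword>' jumps based on your directory history
eval "$(zoxide init bash)"
```

After a shell restart, `fix` reruns the last mistyped command corrected, and `z backend` jumps to the most-visited directory matching "backend".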
If a change isn't in Git, it didn't happen. That's not something I'm adopting for this build; it's something I stopped tolerating.

In it-re-dc01, everything lives in one place: Ansible roles. Terraform modules. Docker Compose stacks. Helm charts. Docs. Runbooks. ADRs. One repo: it-re-dc01-infra.

GitHub Actions runs on a self-hosted runner on re01-mgmt-01. Every push triggers:
• ansible-lint
• docker compose validation
• terraform validate

Merges to main trigger applies. Terraform state is not local; it's stored and secured via Vault-backed workflows. Secrets don't live in the repo. They're issued dynamically.

Git defines the desired state. Vault secures access to it. GitHub Actions enforces the path to change. No manual changes. No exceptions.

I've seen what happens without this discipline. At one point, we ran Jenkins where the pipeline configuration lived inside Jenkins itself. When Jenkins went down, the pipelines went with it. We rebuilt them from memory and screenshots, and it took two weeks. That wasn't a tooling issue; it was a process failure. The tool doesn't matter; the discipline does.

No change, no matter how small or urgent, happens outside the pipeline. Because if your system can't be rebuilt from Git, you don't control it.

#GitOps #PlatformEngineering #CICD
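The push-triggered validation described here could look roughly like the workflow below. This is a hedged sketch, not the repo's actual file: the directory layout and job name are assumptions; the runner label and the three checks come from the post.

```yaml
name: validate
on: [push]

jobs:
  lint-and-validate:
    runs-on: [self-hosted]
    steps:
      - uses: actions/checkout@v4

      - name: Lint Ansible roles
        run: ansible-lint ansible/

      - name: Validate Docker Compose stacks
        run: docker compose -f docker/compose.yaml config --quiet

      - name: Validate Terraform modules
        run: |
          terraform -chdir=terraform init -backend=false
          terraform -chdir=terraform validate
```

A separate workflow gated on merges to main would then run the actual applies against the Vault-backed state.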
Sometimes the most frustrating CI/CD pipeline failures come down to the simplest directory contexts.

I was recently migrating a massive batch of microservices from Jenkins to GitHub Actions. Everything looked perfect in the YAML, but the deployment step kept failing or timing out. The issue? The Docker container was building successfully, but it was missing critical environment variables.

In our legacy Jenkins setup, the bash script explicitly navigated into the service directory (cd my-service) before copying the .env.prod file and running the Docker build. When translating that to GitHub Actions, that subtle path logic got lost. The runner was executing commands in the root directory, silently failing to copy the .env file into the build context.

The fix was incredibly simple but easy to miss: explicitly setting the 𝘄𝗼𝗿𝗸𝗶𝗻𝗴-𝗱𝗶𝗿𝗲𝗰𝘁𝗼𝗿𝘆 𝗸𝗲𝘆 𝗮𝘁 𝘁𝗵𝗲 𝘀𝘁𝗲𝗽 𝗹𝗲𝘃𝗲𝗹 so that the cp command and the docker build command ran in the exact same context as the Dockerfile.

What I learned: when migrating legacy pipelines to a modern CI tool, don't just copy-paste the shell commands - verify the execution context of every single step.

What is the weirdest bug you have run into while migrating CI/CD platforms?

#CICD #DevOpsEngineering #GitHubActions #Jenkins #PlatformEngineering #TechCareers
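The fix described can be sketched as a single workflow step. The paths and step name are hypothetical; working-directory is the real GitHub Actions step-level key that pins the execution context:

```yaml
- name: Build service image
  working-directory: services/my-service
  run: |
    # Both commands now run inside the service directory,
    # so the .env file lands in the same build context as the Dockerfile.
    cp .env.prod .env
    docker build -t my-service:latest .
```

Without the key, `run` steps default to the repository root, which is exactly how the cp silently missed the build context.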