Resetting Git Repository Commit History for Clean Release State

Day 30 of #100DaysOfDevOps 🚀

Today’s task focused on cleaning up a Git repository by resetting its commit history to a stable point. The repository had a few extra test commits that were no longer needed, and the goal was to move the branch back so that only two commits remained: the initial commit and the "add data.txt file" commit.

What I worked on
I navigated to the repository on the storage server and reviewed the commit history to identify the correct commit to roll back to. Once I confirmed the target commit, I reset the branch so that HEAD pointed back to "add data.txt file", removing the unnecessary test commits from the history. After verifying that only the required commits remained, I pushed the updated branch to the remote repository.

Key takeaway
Not every situation calls for a revert; sometimes a clean reset is the right approach, especially when working with test repositories or preparing for a clean release state. Understanding the difference between rewriting history and preserving it is a critical Git skill for DevOps engineers.

This task reinforced:
• How Git branch pointers and HEAD work
• When to use reset vs revert
• How to responsibly update shared repositories
• The importance of maintaining a clean and reliable commit history

Thirty days in, and the Git concepts are getting deeper and more practical.

#DevOps #Git #VersionControl #Linux #CI_CD #100DaysOfDevOps
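The workflow above can be sketched end to end. This is a minimal illustration in a throwaway repository, not the actual task environment: the commit messages come from the post, while the `commit` helper and the temp-dir path are mine, and the forced push is left commented because there is no remote here.

```shell
set -e
cd "$(mktemp -d)"
git init -q
# Helper so each demo commit carries an identity (values are placeholders)
commit() { git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "$1"; }
commit "initial commit"
commit "add data.txt file"
commit "temporary test commit 1"
commit "temporary test commit 2"

# Identify the target commit, then move the branch pointer (and HEAD) back to it
target=$(git log --oneline | awk '/add data.txt file/ {print $1}')
git reset --hard "$target"

git log --oneline   # only the two required commits remain
# On a shared remote, this history rewrite must be published with a forced push:
# git push --force origin master
```

After the reset, `git log --oneline` shows exactly two entries, which is the verification step described in the post.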
More Relevant Posts
-
🔧 DevOps Practice: Cleaning Up Git Test Branches

Today, day 5/5, I completed a hands-on Git task in the KodeKloud Stratos Datacenter environment focused on maintaining a clean repository by removing unused test branches.

📌 Task Objective
The Nautilus development team had created several temporary branches during testing in the repository located at /usr/src/kodekloudrepos/blog. My task was to delete the branch xfusioncorp_blog from the Git repository on the Storage Server.

🛠 Steps Taken
1️⃣ Connected to the Storage Server using SSH
2️⃣ Navigated to the repository directory
3️⃣ Verified existing branches to confirm the target branch
4️⃣ Switched to a different branch (master), because Git does not allow deleting the active branch
5️⃣ Deleted the test branch safely using Git

Key Commands Used:
ssh natasha@ststor01
cd /usr/src/kodekloudrepos/blog
git branch
git checkout master
git branch -d xfusioncorp_blog

✅ Result
The branch was successfully removed, keeping the repository organized and reducing clutter from temporary development branches.

💡 Key Learning
Git will not allow deletion of the branch you are currently on, so switching to another branch first is essential. Consistent repository cleanup is an important practice in DevOps workflows to maintain an efficient development environment.

#DevOps #Git #KodeKloud #Linux #CloudEngineering #ContinuousLearning
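The key learning is easy to reproduce: Git refuses to delete the branch you are standing on. A self-contained sketch in a throwaway repo (the branch name here is a stand-in, not the task's real branch; assumes a reasonably recent Git with `git switch` and `git branch --show-current`):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "base"
default=$(git branch --show-current)
git switch -q -c xfusioncorp_demo      # hypothetical stand-in branch name

# Deleting the branch we are currently on fails:
if git branch -d xfusioncorp_demo 2>/dev/null; then
    echo "unexpected: Git deleted the checked-out branch"
fi

git switch -q "$default"               # step off the branch first...
git branch -d xfusioncorp_demo        # ...then -d succeeds (branch is fully merged)
```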
-
Day 30 - Git Hard Reset #100DaysOfDevOps 🧑‍💻

Today’s task focused on rewriting Git history in a controlled environment, a skill that directly translates to real-world production support.

The Nautilus team had a test repository where multiple temporary commits were pushed. The requirement was clear: clean up the repository so that only two commits remain in history, with both the branch pointer and HEAD aligned to one of the commits.

To achieve this, I navigated to the repository, identified the correct commit using "git log --oneline", and executed "git reset --hard <commit-hash>" to move the branch pointer and HEAD back to the required state while cleaning the working tree. Since this rewrites commit history, I followed up with "git push --force" to update the remote repository.

This reflects real production scenarios where accidental commits, sensitive data exposure, or test changes must be surgically removed while keeping repository integrity intact. Understanding when and how to rewrite history, and the risks involved, is critical in DevOps and release engineering workflows.

Full documentation and command breakdown available here: https://lnkd.in/g-HZxwzw

Another solid step forward. Excited to tackle tomorrow’s challenge and keep building production-ready Git expertise. 🚀

#DevOps #Git #VersionControl #CloudEngineering #SRE #Linux #ContinuousLearning
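"Cleaning the working tree" is the part of `git reset --hard` that surprises people, so it is worth demonstrating concretely. A disposable-repo sketch (file names and the `c` helper are illustrative, not from the task):

```shell
set -e
cd "$(mktemp -d)"
git init -q
c() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
c commit -q --allow-empty -m "base commit"
echo "scratch notes" > notes.txt       # untracked file: a reset leaves it alone
echo "v1" > tracked.txt
git add tracked.txt
c commit -q -m "temporary test commit"
echo "dirty edit" >> tracked.txt       # tracked modification: a hard reset discards it

git reset --hard HEAD~1                # branch pointer + HEAD move back one commit
# tracked.txt is now gone (it only existed in the removed commit);
# notes.txt survives because Git never tracked it.
# A subsequent "git push --force" would publish the rewritten history.
```

This is exactly why `--hard` demands care in production: dirty edits to tracked files vanish without a prompt.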
-
🚀 DevOps Journey Update

Spent some time revisiting and brushing up on Git & GitHub fundamentals: tools that quietly power almost every modern development and DevOps workflow.

Refreshed concepts like:
🔹 Version Control – tracking changes without the “who broke this?” mystery
🔹 Branches – experimenting safely without disturbing main
🔹 Rollback & Revert – the DevOps safety net when things go sideways
🔹 SSH Authentication – secure GitHub access without the password dance
🔹 Tags & Semantic Versioning – keeping releases organized (v1.0.0, v1.1.0, etc.)

Also went through the usual Git muscle-memory workout:
git clone • git add • git commit • git push • git pull • git branch • git merge • git revert

It’s interesting how something developers use daily becomes even more important from a DevOps perspective, where traceability, collaboration, and reliable releases really matter.

Next in my DevOps journey: Linux Servers 🐧 Time to spend more quality time with the terminal.

#DevOpsJourney #LearningInPublic #Git #GitHub #DevOps #Linux
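The tags and semantic-versioning point can be shown in a few commands. A sketch in a throwaway repo, assuming the common convention of annotated `vMAJOR.MINOR.PATCH` tags (messages and the `c` helper are placeholders):

```shell
set -e
cd "$(mktemp -d)"
git init -q
c() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
c commit -q --allow-empty -m "first stable release"
c tag -a v1.0.0 -m "initial release"
c commit -q --allow-empty -m "add a backwards-compatible feature"
c tag -a v1.1.0 -m "minor release: new feature, no breaking changes"
git tag --list   # lists v1.0.0 and v1.1.0
```

Annotated tags (`-a`) carry a tagger, date, and message, which is why they are preferred over lightweight tags for marking releases.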
-
🚀 Week 3 of My DevOps Journey Completed – Shell Scripting

This week was all about mastering Linux shell scripting and building real-world automation tools used in system administration. Instead of just learning commands, I built practical automation scripts 👇

💻 What I Implemented:
🔹 User Account Creation (-c | --create) → automated user creation with password validation
🔹 User Deletion Script → clean and safe account removal
🔹 Password Reset Automation (-r | --reset) → secure password reset for existing users
🔹 List System Users (-l | --list) → display usernames with their UIDs
🔹 Help Menu (-h | --help) → CLI-style usage documentation like real Linux tools
🔹 Backup Automation Script → timestamped backups with rotation (keeps only the last 3 backups automatically)

This week strengthened my understanding of how automation eliminates repetitive tasks, which is at the core of DevOps 🚀

Grateful for the guidance from Shubham Londhe 🙌 On to Week 4 🔥

📌 GitHub Repository: https://lnkd.in/gdmZBt_u

#DevOps #Linux #ShellScripting #Automation #CloudComputing #SystemAdministration #LearningInPublic
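The option-handling pattern behind a tool like this is a simple `case` dispatch. A sketch only, not the repository's actual script: the destructive actions are stubbed with `echo` (the real versions would call `useradd`, `passwd`, etc., and need root), while the user listing works as shown on a typical Linux box.

```shell
#!/bin/bash
# user_mgmt.sh - sketch of the CLI-style option handling described above

list_users() {
    # UID >= 1000 is the conventional range for regular accounts on Linux;
    # "nobody" is excluded because its UID (often 65534) falls in range too.
    awk -F: '$3 >= 1000 && $1 != "nobody" {print $1, $3}' /etc/passwd
}

usage() {
    cat <<'EOF'
Usage: user_mgmt.sh [OPTION]
  -c, --create   create a user account
  -r, --reset    reset a user's password
  -l, --list     list regular users with their UIDs
  -h, --help     show this help
EOF
}

case "${1:-}" in
    -c|--create) echo "would run: useradd <name>" ;;   # stub: real script needs root
    -r|--reset)  echo "would run: passwd <name>" ;;    # stub: real script needs root
    -l|--list)   list_users ;;
    -h|--help|*) usage ;;
esac
```

Pairing a short flag with a long flag in each `case` arm (`-l|--list`) is what gives the script the feel of a real Linux CLI tool.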
-
🚀 Day 22 of #90DaysOfDevOps

Today, I shifted my focus to the backbone of modern DevOps: version control with Git. Understanding how to track changes and manage code history is where the real automation journey begins.

What I practiced:
✅ Git Setup & Config: configuring my identity with user.name and user.email
✅ Repository Initialization: creating a local repo from scratch using git init
✅ The Git Workflow: mastering the flow between Working Directory, Staging Area, and Local Repository
✅ Commit History: using git log --oneline to maintain a clean, readable audit trail of my project
✅ Deep Dive: exploring the hidden .git/ folder to understand how Git stores metadata and branches

Key takeaway: The Staging Area is a game-changer. It acts as a "buffer zone" that gives us full control over what goes into our history, keeping our production code clean and professional.

#90DaysOfDevOps #DevOpsKaJosh #TrainWithShubham #Git #VersionControl #Linux #OpenSource #LearningInPublic
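The Day 22 flow fits in a dozen lines. A runnable sketch in a throwaway directory (the identity values are placeholders; set them globally with `git config --global` on a real machine):

```shell
set -e
cd "$(mktemp -d)"
git init -q                               # creates the hidden .git/ folder
git config user.name  "Your Name"         # placeholder identity values
git config user.email "you@example.com"

echo "hello" > app.txt                    # change exists in the working directory
git add app.txt                           # working directory -> staging area
git commit -q -m "first commit"           # staging area -> local repository

git log --oneline                         # clean, one-line audit trail
ls .git/                                  # metadata: HEAD, config, objects/, refs/, ...
```

The three-stage hop (`edit` → `add` → `commit`) is exactly the buffer-zone behavior the takeaway describes: nothing enters history until it is explicitly staged and committed.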
-
🚀 Day 23 of #90DaysOfDevOps — Git Branching & GitHub Workflow

Today I worked on one of the most important concepts in Git: branching 🌿 This is where Git actually becomes powerful in real-world DevOps and development workflows.

🔧 What I practiced today:
✔ Creating and switching branches (git branch, git switch, git checkout)
✔ Making isolated commits on feature branches
✔ Verifying that changes don’t affect the main branch
✔ Deleting unused branches
✔ Pushing multiple branches to GitHub
✔ Understanding origin vs upstream
✔ Learning git fetch vs git pull
✔ Practicing the clone vs fork workflow

💡 Key Learning:
Branching allows teams to work on features, fixes, and experiments independently without breaking production code; this is the foundation of CI/CD pipelines.

🧪 Hands-on Work:
• Created feature branches (feature-1, feature-2)
• Made commits on different branches
• Tested branch isolation
• Synced changes between local and remote
• Practiced the full GitHub workflow (push, pull, fork, clone)

📂 GitHub Repository: 👉 https://lnkd.in/g9i-KJx3

Consistency is the real game changer. Learning in public 🚀

#DevOps #Git #GitHub #Linux #Automation #LearningInPublic #90DaysOfDevOps #DevOpsEngineer #TrainWithShubham
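Branch isolation can be verified in a disposable repo. A sketch using the post's `feature-1` branch name (the file name and `c` helper are mine; assumes a Git recent enough for `git switch`):

```shell
set -e
cd "$(mktemp -d)"
git init -q
c() { git -c user.email=demo@example.com -c user.name=demo "$@"; }
c commit -q --allow-empty -m "base"
main=$(git branch --show-current)

git switch -q -c feature-1            # isolated line of development
echo "feature work" > f1.txt
git add f1.txt
c commit -q -m "work on feature-1"

git switch -q "$main"                 # back on the base branch, f1.txt is absent
# -d refuses to delete branches with unmerged commits; -D forces it (use with care)
git branch -D feature-1
```

The moment you switch back, Git removes `f1.txt` from the working tree because it only exists in `feature-1`'s history: that is the isolation CI/CD pipelines rely on.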
-
My GitLab CI pipeline failed before the build even started. No code ran. No containers built. Just an error during “Fetching changes”.

At first I thought it was a Git issue. But after debugging the runner server, I found something unexpected:
👉 A single file owned by root inside the GitLab Runner workspace.

That small permission mismatch stopped the entire pipeline. The error looked like this:

error: insufficient permission for adding an object to repository database .git/objects
fatal: failed to write object

After investigating the runner workspace, the fix was simple:

sudo chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/builds

The pipeline started working again immediately.

What this taught me:
CI/CD pipelines don't always fail because of code. Sometimes the real problem is the environment running the pipeline. Even one wrong file permission can break the entire workflow.

I wrote a detailed blog explaining:
• why this happens
• how to troubleshoot it
• how to prevent it in production

Read it here 👇
🔗 https://lnkd.in/ga6tuT_z

💬 Have you faced strange CI/CD issues like this?

#DevOps #GitLab #CICD #CloudEngineering #Linux #Infrastructure
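A quick way to triage this class of failure is to list files in the workspace that are not owned by the expected user. A sketch demoed on a temp directory with the current user (on the real runner, the path was /home/gitlab-runner/builds and the expected user gitlab-runner):

```shell
workspace=$(mktemp -d)
touch "$workspace/repo-object"
expected_user=$(id -un)

# Any line printed here is a file that would break Git's object writes:
find "$workspace" ! -user "$expected_user"

# Fix pattern from the incident (needs root on the real runner):
# chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/builds
```

Running the `find` check before chowning tells you exactly which files were the problem, which is useful evidence for figuring out *how* root-owned files got there in the first place.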
-
🚀 DevOps Practice Update

Today I practiced deploying a 2-tier application using Docker Compose. Here’s what I did, step by step:

🔹 Built and ran the application containers using docker-compose up -d
🔹 Verified that the services were running correctly
🔹 Stopped and removed all containers
🔹 Re-ran the same command to recreate the entire environment

💥 Result: the whole application stack started again instantly and worked perfectly.

This small practice shows the power of containerization and infrastructure reproducibility. With Docker Compose, you can define an entire multi-service architecture in one configuration file and bring it up or down with a single command.

It's exciting to see how DevOps tools make application deployment faster, cleaner, and more reliable. I’m continuing my journey learning Docker, containers, and DevOps practices step by step. More experiments coming soon!

#DevOps #Docker #DockerCompose #Containerization #CloudComputing #Linux #LearningInPublic #TechJourney #BackendDevelopment bongoDev
-
📌 Day 9 of My #30DaysOfDevOps Journey 🚀

Today, I continued building out my DevOps skills by working on process monitoring and repository organization.

🔹 I developed a process monitoring script. It captures and tracks CPU and memory usage for processes in real time, an important step toward understanding system performance and reliability.

🔹 I also restructured my GitHub repository to organize each day’s work into separate folders. This keeps the project clean, scalable, and easier to navigate, a practice that mirrors real-world engineering standards.

You can check out the project here 👉 https://lnkd.in/dN2Jvtt2

Consistent practice is helping me understand how DevOps engineers combine scripting, system insights, and version control to build dependable workflows.

On to Day 10! 🚀

#DevOps #Linux #ShellScripting #ProcessMonitoring #GitHub #RepositoryManagement #LearningInPublic #CloudEngineering #30DaysOfDevOps #LearningWithTSAcademy #30DaysOfTech
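The core of such a capture script is a timestamped `ps` snapshot appended to a log. A sketch, not the repository's actual script: the log path and the top-5 cutoff are arbitrary choices of mine, and `--sort` is GNU procps syntax (Linux).

```shell
logfile=$(mktemp)   # placeholder; a real script would use a fixed log path
{
    echo "=== $(date '+%Y-%m-%d %H:%M:%S') ==="
    # pid, command, %cpu, %mem for the top 5 CPU consumers (plus ps header)
    ps -eo pid,comm,%cpu,%mem --sort=-%cpu | head -n 6
} >> "$logfile"
# Run this from cron or a sleep loop to build a time series of snapshots.
cat "$logfile"
```

Appending rather than overwriting is what turns a one-off snapshot into trend data you can graph or grep later.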
-
My first Docker build failed for a very silly reason

I thought I was finally doing something “real” in DevOps. I wrote my first Dockerfile, ran docker build -t myapp ., and expected magic. Instead, Docker threw an error because I had named the file dockerfile instead of Dockerfile.

It felt small, but I was stuck longer than I want to admit. My first thought was that maybe Docker itself was broken. Then I checked the command again, checked the folder, and finally noticed the file naming issue.

The fix was simple: rename the file to Dockerfile. Then the build worked.

That day I learned an important DevOps lesson: small naming mistakes can break automation. Tools are strict. They do not guess what we meant. Now, whenever something fails, I stop assuming the problem is “complex” and first verify the basics.

Have you ever lost time because of a tiny typo like this?

#DevOps #Docker #Containers #CICD #CloudComputing #GitHubActions #Linux #Automation #PlatformEngineering #LearningInPublic