Image Build & Promotion Pipeline with GitHub Actions

In real projects, you do not just build an image and push it to a registry. The image needs to pass through several quality and security gates first. We have a hands-on guide that walks you through a complete, production-grade Docker image build and promotion pipeline using GitHub Actions.

Here is what is covered in the blog:

- Automating a Java application build and Dockerization with GitHub Actions
- Managing container registry credentials securely with GitHub Secrets
- Promoting images through dev, stage, and prod registries
- Signing the final container image using Cosign
- Build caching, image tagging strategy, registry architecture, and more

Detailed blog: https://lnkd.in/gWWWEMTV

Bookmark it, try the pipeline in your own repo, and let us know what you would add or change.

#devops #practicaldevops #docker
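If you want a feel for the moving parts before reading the blog, here is a minimal sketch of a build-push-sign job. This is not the blog's exact pipeline: the registry, image name, secret names, and the choice of keyless Cosign signing are placeholders chosen for illustration.

```yaml
# Minimal sketch: build a Docker image, push it to a private registry,
# and sign it with Cosign. Registry/image/secret names are assumptions.
name: build-and-sign
on:
  push:
    branches: [main]

permissions:
  contents: read
  id-token: write   # required for keyless Cosign signing via OIDC

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: registry.example.com           # placeholder registry
          username: ${{ secrets.REGISTRY_USER }}    # stored as GitHub Secrets
          password: ${{ secrets.REGISTRY_TOKEN }}
      - id: build
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: registry.example.com/myapp:${{ github.sha }}  # tag by commit SHA
          cache-from: type=gha        # reuse layers cached by earlier runs
          cache-to: type=gha,mode=max
      - uses: sigstore/cosign-installer@v3
      - name: Sign the pushed image by digest
        run: cosign sign --yes registry.example.com/myapp@${{ steps.build.outputs.digest }}
```

Promotion between dev, stage, and prod registries would then typically re-tag and re-push (or copy) the already-signed digest rather than rebuilding; the blog covers that flow in detail.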
CI/CD pipelines play a critical role in today's cloud-native software development cycle. They are the backbone of how developers build, test, and deploy code. But CI/CD security is often overlooked, usually out of a lack of awareness rather than intent. Attackers don't care whether misconfigurations were introduced purposely or not. They simply exploit them.

KONTINUERLIG, a GitHub Actions challenge from Hack.lu 2025 that I participated in, is a perfect example of how that plays out. It chained three distinct attack primitives to extract a secret from a private repository, each one exploiting a misconfiguration pattern you'd find in a real production pipeline.

Here's how the chain works:

🔗 Stage 1 - Heredoc Injection via pull_request_target
A workflow used pull_request_target + untrusted checkout (the classic "pwn request" pattern). By crafting filenames that terminate a bash heredoc prematurely, I injected LD_PRELOAD into the GitHub Actions environment, then leveraged artifact poisoning and Python module shadowing to achieve code execution with pull-requests: write permissions.

🔗 Stage 2 - Docker Build Context Escape
A second workflow ran docker build ./docker/ with contents: write permissions. A single symlink (ln -s . docker) redirected the build context to the repository root, exposing .git/ inside the container. From there, the embedded GITHUB_TOKEN was used to push arbitrary commits directly to the main branch.

🔗 Stage 3 - Secret Exfiltration via Problem Matchers
GitHub Actions redacts secrets in logs, but Problem Matchers execute before the redaction mechanism. By committing a matcher.json to main and using ::add-matcher:: as the commit message (echoed by the workflow), I registered a regex pattern that captured the flag before masking occurred.

None of these primitives are exotic. Misuse of pull_request_target, overly permissive GITHUB_TOKEN scopes, Docker build context assumptions, and trust in secret redaction as a last line of defense all show up in production pipelines.

Full writeup on my blog (link in the comments section) 👇

#AppSec #hacklu #CICD #GithubActions #OffensiveSecurity #PenetrationTesting #SecDevOps #CTF
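For readers unfamiliar with the Stage 1 anti-pattern, here is a minimal sketch of what a vulnerable workflow of this kind typically looks like. It is illustrative only, not the challenge's actual workflow, and the script path is made up.

```yaml
# Illustrative "pwn request" anti-pattern: pull_request_target runs in the
# context of the base repository (with its secrets and a write-capable
# GITHUB_TOKEN), yet this job checks out and processes untrusted PR content.
name: label-pr
on: pull_request_target

permissions:
  pull-requests: write

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      # Checking out the PR head hands attacker-controlled files (including
      # attacker-chosen filenames) to every subsequent step.
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      # Any shell step that interpolates untrusted filenames or PR metadata
      # (heredocs, globs, eval) can then be hijacked for code execution.
      - run: ./scripts/lint-changed-files.sh   # hypothetical script
```

The safe variants are to use plain pull_request, or to keep pull_request_target but never check out or execute untrusted PR content within it.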
Docker & Microservices Series 🚀 | Part 3

In my last post, I asked: What actually happens after writing a Dockerfile? Here's what I understood 👇

Docker uses the Dockerfile to create something called an Image.

👉 Docker Image
An Image is a packaged version of your application. It contains:
• Your application (app.jar)
• Required runtime (like Java)
• Dependencies
• Instructions to run

So instead of setting up everything again and again, you just use this ready package.

Flow becomes: Dockerfile → Image

Real-world example 👇
Think of a Dockerfile like a recipe 🍳 and a Docker Image like the prepared food, packed and ready 📦. You don't need to cook again; you just use the ready package anywhere.

That's why Docker is powerful:
• Build once
• Run anywhere

Still exploring Docker, but this step made the whole flow much clearer.

Next: Where does this Image actually run? (this part is interesting 👀)

Have you worked with Docker Images before?

#Docker #Microservices #BackendEngineering #Java #LearningInPublic
Docker in Real Projects – Part 2: Images & Containers

❌ Problem
Deployment was inconsistent and difficult to scale.

🔻 Without Docker
- Setup required on every server
- Errors due to missing dependencies

✅ With Docker
- Docker Image → application blueprint
- Container → running instance

💡 Types of Docker Images (Simple View)
1️⃣ Base Image
- Minimal OS (like Ubuntu, Alpine)
- Starting point
2️⃣ Official Image
- Ready-to-use (Java, MySQL, Node)
- Maintained by Docker/community
3️⃣ Custom Image
- Your application + dependencies
- Built using a Dockerfile

💡 Image Layer Concept
Each step in a Dockerfile creates a layer → reused for faster builds

👉 Example Flow
Base Image → Add dependencies → Add code → Final Image → Run Container

📌 Result
Fast deployment + easy scaling.

#Docker #Containers #DevOps #BackendDevelo
I am glad to share a recent project I co-developed from scratch: j-kube-watch. It is a custom Kubernetes Operator designed to streamline cluster monitoring and eliminate alert fatigue.

When monitoring Kubernetes environments, repetitive warnings like a CrashLoopBackOff or a failing probe can easily flood notification channels. To solve this, we built an operator that actively watches Pod lifecycle events and routes them intelligently.

Key technical aspects of the project include:
● Built with Java 21 and the Fabric8 client, utilizing Virtual Threads for lightweight, concurrent event processing.
● An intelligent deduplication engine using Caffeine cache to suppress alert storms, sending grouped summaries instead of redundant notifications.
● Fully native configuration using Custom Resource Definitions (CRDs) for routing alerts to external channels like email.
● Packaged completely with Helm to handle deployments, RBAC rules, and network policies.

This project was fully co-developed from start to finish in collaboration with Adham Ayad. You can find the full source code, architecture flow, and Helm charts on GitHub here: https://lnkd.in/dYJzZPZ5

Feedback and code reviews are always welcome.

#Kubernetes #DevOps #Java #CloudNative #PlatformEngineering #Helm #Automation #ITI
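To make the idea of CRD-native alert routing concrete, here is a purely hypothetical custom resource sketch. The API group, kind, and every field name are assumptions for illustration only; the project's real CRD schema lives in the repository linked above.

```yaml
# Hypothetical example of what CRD-driven routing configuration can look
# like: a resource that tells an operator which Pod events to watch, how
# long to suppress duplicates, and where to send grouped summaries.
apiVersion: jkubewatch.example.com/v1alpha1   # group/version assumed
kind: AlertRoute                              # kind name assumed
metadata:
  name: payments-team-email
  namespace: payments
spec:
  match:
    reasons: ["CrashLoopBackOff", "ProbeFailure"]   # event types to watch
  dedup:
    window: 10m          # suppress repeats of the same alert for 10 minutes
    groupSummary: true   # send one grouped summary instead of N messages
  channel:
    type: email
    to: ["payments-oncall@example.com"]
```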
It's impressive when tooling can recognize the complexity of a project and suggest relevant solutions, such as a GitLab CI runner setup. Imagine going from simply building an app to managing a full CI/CD pipeline: deployment, dependencies like Python and Postgres, and even the choice of web server. That goes beyond basic development and into maintainability and automated processes. The natural next step is integrating security testing (SAST/DAST) into the pipeline to catch issues proactively.

Check out more insights at https://lnkd.in/gjXm7xzG

#CICD #DevOps #SoftwareDevelopment #CloudComputing #Automation
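As a rough idea of what that security-testing step can look like, here is a minimal sketch that adds GitLab's managed SAST template to an existing pipeline. The stage layout and the build job are placeholders for whatever the project already defines, and DAST would need extra configuration (such as a deployed target URL) beyond what is shown.

```yaml
# Minimal sketch: pull in GitLab's managed SAST jobs alongside an existing
# build stage. The SAST template attaches its jobs to the `test` stage.
include:
  - template: Security/SAST.gitlab-ci.yml

stages:
  - build
  - test

build-app:
  stage: build
  script:
    - echo "build the app here"   # placeholder for the real build commands
```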
I recently worked on containerizing a Spring Boot application using Docker, and it was a great hands-on experience in understanding how deployment works in real-world scenarios. The application I built is a simple REST API that returns the message “Hello Namaste” when accessed through a browser.

I started by developing the Spring Boot application with a basic controller that handles HTTP requests and returns the response. Once the application was ready, I packaged it into a JAR file using Maven with the command mvn clean package. This generated the required JAR file inside the target folder.

Next, I created a Dockerfile in the root directory of my project. In the Dockerfile, I used an OpenJDK base image, added the generated JAR file into the container, and specified the entry point to run the application. This step defined how my application should run inside a container.

After that, I built a Docker image using the command docker build -t rest-demo . (the trailing dot is the build context). This created an image of my application along with all necessary dependencies. Then I ran the container using docker run -p 8081:8081 rest-demo, which allowed me to access the application on my local machine.

Finally, when I opened http://localhost:8081/, I got the output “Hello Namaste”, confirming that my Spring Boot application was running inside a Docker container.

Through this process, I learned how Docker helps make applications portable, consistent, and easy to deploy across different environments.

#Docker #SpringBoot #Java #Maven #DevOps #Containerization #Microservices #BackendDevelopment #CloudComputing #SoftwareDevelopment #Programming #Tech #Learning #OpenJDK #RESTAPI #WebDevelopment #BuildAndDeploy #DeveloperLife
You're probably deploying manually. Here's how to stop.

GitHub Actions gives you free CI/CD directly in your GitHub repo, with no external services needed. Here's a complete workflow that runs on every push to main:

```yaml
name: Deploy
on:
  push:
    branches: [main]

jobs:
  test-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '20'
      - run: npm install
      - run: npm test
      - name: Deploy to server
        run: |
          ssh user@yourserver 'cd /app && git pull && npm install && pm2 restart app'
```

Every push to main:
1. Checks out the code
2. Installs dependencies
3. Runs tests
4. Deploys to your server, only if tests pass

Free for public repos. 2,000 minutes/month free for private repos.

Stop deploying manually. Set this up once. Never think about it again.

Link in bio: starter workflow files for Node, Python, and Docker deployments.

#GitHubActions #CICD #DevOps #Automation #TechFinSpecial
"It works on my machine" is the enemy of a peaceful production deployment. 😅

This week, I took a major step in getting our Sentinal fraud detection system production-ready by entirely revamping our continuous integration pipeline. The biggest win? Integrating Testcontainers into our automated GitHub Actions workflow.

Instead of relying on flaky, shared development databases or mocking away critical infrastructure, our integration tests now spin up ephemeral, production-like environments in Docker containers during the CI build. We're now validating our Spring Boot backend against real database instances and real message brokers on every single commit.

Moving from mocks to infrastructure-independent integration testing has been an absolute game-changer for our deployment confidence. If you haven't looked into Testcontainers for your Java/Spring Boot pipelines yet, I highly recommend giving it a try.

GitHub link: https://lnkd.in/eggWaxcT

#Testcontainers #CICD #SpringBoot #Java #SoftwareEngineering #DevOps #GitHubActions
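A minimal sketch of what running Testcontainers-based tests in GitHub Actions can look like (not the Sentinal project's actual workflow; the Java version and Maven goal are assumptions). It relies on the Docker daemon that GitHub-hosted Ubuntu runners ship with, so Testcontainers can start throwaway database and broker containers during the test phase.

```yaml
# Sketch: run Testcontainers-backed integration tests on a hosted runner.
name: integration-tests
on: [push, pull_request]

jobs:
  verify:
    runs-on: ubuntu-latest   # Docker is preinstalled on hosted Ubuntu runners
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '21'   # Java version assumed
          cache: maven
      - name: Run integration tests (Testcontainers starts containers on demand)
        run: mvn -B verify
```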
Faced an interesting issue while working with Microservices + Eureka; curious if others have seen this.

I recently built a system with 3 services using a full microservices architecture and registered them with Eureka for service discovery. Everything looked good in code, but when I ran them locally:
❌ Only 1–2 services would start
❌ Others failed randomly or didn't register properly

But here's the surprising part: when I containerized all services (including Eureka) using Docker:
✅ All 3 services ran smoothly
✅ Properly registered with Eureka
✅ Service discovery worked perfectly

Same code. Same setup. Different result. So what changed?

My guesses:
- Port conflicts / environment issues on the local machine
- Startup timing issues (services trying to register before Eureka is ready)
- Dependency or config mismatch locally

But Docker seems to have stabilized everything. Has anyone else faced this? I would love to understand the exact reason behind this behavior, especially why containerization made it work flawlessly.

#Microservices #SpringBoot #Eureka #Docker #BackendDevelopment #Java
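One way the "startup timing" guess can be made concrete: if the containers were started with Docker Compose (an assumption; the post does not say how they were run), a healthcheck on Eureka plus depends_on conditions makes the client services wait until the registry is actually ready. A minimal sketch, with service names, images, and ports invented for illustration:

```yaml
# Sketch of ordering service startup on a healthy Eureka instance.
# Assumes curl is available inside the Eureka image for the healthcheck.
services:
  eureka:
    image: myorg/eureka-server:latest
    ports: ["8761:8761"]
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8761/actuator/health"]
      interval: 5s
      timeout: 3s
      retries: 20
  order-service:
    image: myorg/order-service:latest
    depends_on:
      eureka:
        condition: service_healthy   # start only once Eureka reports healthy
```

That said, Eureka clients also retry registration on their own, so startup ordering may not be the whole story; comparing the local JVM environment (ports, hostnames, config) against the container environment would help narrow it down.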