Sometimes the most frustrating CI/CD pipeline failures come down to the simplest directory contexts.

I was recently migrating a large batch of microservices from Jenkins to GitHub Actions. Everything looked perfect in the YAML, but the deployment step kept failing or timing out. The issue? The Docker image was building successfully, but it was missing critical environment variables.

In our legacy Jenkins setup, the bash script explicitly navigated into the service directory (cd my-service) before copying the .env.prod file and running the Docker build. When translating that to GitHub Actions, that subtle path logic got lost. The runner was executing commands in the repository root, silently failing to copy the .env file into the build context.

The fix was incredibly simple but easy to miss: explicitly setting the working-directory key at the step level, so that the cp command and the docker build command ran in the exact same context as the Dockerfile.

What I learned: when migrating legacy pipelines to a modern CI tool, don't just copy-paste the shell commands. Verify the execution context of every single step.

What is the weirdest bug you have run into while migrating CI/CD platforms?

#CICD #DevOpsEngineering #GitHubActions #Jenkins #PlatformEngineering #TechCareers
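A minimal sketch of the fix, assuming the service folder is my-service and the env file is .env.prod (the exact names and Dockerfile layout here are illustrative, not the actual pipeline):

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build Docker image
        # Run cp and docker build from the service directory,
        # matching the old Jenkins "cd my-service" behavior.
        working-directory: my-service
        run: |
          cp .env.prod .env
          docker build -t my-service:latest .
```

Setting working-directory on the step (rather than prefixing every command with cd) keeps the context explicit and applies to the whole run block.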
Great lesson: pipeline migrations often fail because of small execution-context issues rather than complex logic 🚀
Great catch — classic build-context drift during Jenkins → GitHub Actions migrations. We’ve seen this bite when .env isn’t part of the Docker build context or gets excluded via .dockerignore. We now enforce context + env validation pre-build and prefer --env-file/secrets over copying files. Curious how you’re handling this at scale?