Maksym Voitko’s Post

8 PRs with 8,768 lines deleted: migrating JupyterHub to GitOps

I have just completed the migration of our production JupyterHub from manual Helm scripts to an ArgoCD-managed GitOps approach. What started as "just add an ArgoCD Application" became an 8-PR refactoring journey.

The starting point:
* A 3,200-line Python config embedded inside YAML.
* 40+ copy-pasted profile blocks.
* Deploying meant sourcing secrets and running a shell script.
* Onboarding a new project was a surgical operation.
* One wrong Ctrl-C could leave Helm stuck in pending-upgrade.

The approach: refactor first, migrate second.
1. Extract the Python config from the YAML into standalone files
2. Deduplicate with builder functions – 3,200 lines down to 640
3. Migrate QA to ArgoCD, validate, and delete the old files
4. Extract a shared module, then migrate prod

Each phase was independently deployable. QA went first as the proving ground.

Surprises along the way:
* Kubernetes strategic merge patches can't transition an env var from value: to valueFrom: – the patch merges both fields into an invalid spec.
* Server-side apply seemed like the elegant fix, but it broke on resources with stale managedFields left by years of client-side applies.
* The actual fix took 10 seconds: delete the Deployment and let ArgoCD recreate it.

Result: a net deletion of ~3,000 LOC across 8 PRs. Secrets moved from git-tracked files to a secrets manager with automatic sync. Deployments went from "run a script" to git push.

Large infrastructure migrations succeed through a boring chain of incremental refactoring, not heroic big-bang changes.

#devops #gitops #ArgoCD #kubernetes #k8s #jupyterhub #platformengineering
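The builder-function deduplication in step 2 can be sketched roughly like this. The post doesn't show its actual helpers, so `make_profile` and the specific fields are hypothetical, modeled on KubeSpawner's profile_list format:

```python
# Hypothetical sketch: collapsing copy-pasted JupyterHub profile blocks
# into one builder function. Field names follow KubeSpawner's profile_list
# convention, but the helper itself is illustrative, not from the post.

def make_profile(name, image, cpu, mem, gpu=0):
    """Build one spawner profile dict from a few parameters."""
    profile = {
        "display_name": name,
        "kubespawner_override": {
            "image": image,
            "cpu_limit": cpu,
            "mem_limit": mem,
        },
    }
    if gpu:
        profile["kubespawner_override"]["extra_resource_limits"] = {
            "nvidia.com/gpu": str(gpu)
        }
    return profile

# One line per project instead of a repeated multi-line block.
PROFILES = [
    make_profile("Small CPU", "jupyter/base-notebook:latest", 2, "4G"),
    make_profile("Large GPU", "jupyter/tensorflow-notebook:latest", 8, "32G", gpu=1),
]
```

With this shape, onboarding a new project becomes adding one `make_profile(...)` call rather than pasting and editing a whole block.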
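The value:/valueFrom: surprise follows from how strategic merge works: keys the patch doesn't mention survive from the live object, so patching in valueFrom never removes the existing value, and an env var with both is invalid. A toy merge below illustrates the semantics; it is a simplified model, not the real apiserver logic:

```python
def strategic_merge(live: dict, patch: dict) -> dict:
    """Toy strategic-merge: patch keys win, unmentioned live keys survive.

    Simplified illustration of why a patch cannot *remove* `value`
    just by adding `valueFrom` -- the old key is never mentioned,
    so it is kept.
    """
    merged = dict(live)
    for key, val in patch.items():
        if isinstance(val, dict) and isinstance(merged.get(key), dict):
            merged[key] = strategic_merge(merged[key], val)
        else:
            merged[key] = val
    return merged

# Live env var uses a literal value; the patch switches to a secret ref.
live_env = {"name": "DB_HOST", "value": "localhost"}
patch_env = {
    "name": "DB_HOST",
    "valueFrom": {"secretKeyRef": {"name": "db", "key": "host"}},
}

merged = strategic_merge(live_env, patch_env)
# Both `value` and `valueFrom` end up present -- an invalid EnvVar spec.
# Deleting the Deployment and recreating it sidesteps the merge entirely.
```

This is why the "10-second fix" works: a fresh create has no live object to merge against.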

