Debugging AI, One Edge Case at a Time
Over the past few days, I've pushed and merged a few PRs across different repos. Nothing flashy, but the kind of work that actually matters when you run AI systems in production.
A few examples:
What I enjoy about this kind of work is that it sits right in the messy middle:
between GPUs, drivers, APIs, and actual production usage.
And honestly, that’s where most of the problems are.
Not the model.
Not the theory.
But everything around it.
That’s also where I tend to focus:
making AI systems stable, reproducible, and usable outside of demos.
I’m based in Switzerland and currently open to senior roles in
AI infrastructure, platform engineering, or anywhere things need to actually work at scale.
If that’s what you’re building, happy to chat.
#AI #MLOps #Infrastructure #NVIDIA #LLM