From Testing Software to Supervising Intelligence: Why Quality Matters More Than Ever
A few days ago, I delivered the keynote at the Colombo Quality Camp on "From Testing Software to Supervising Intelligence: The New Role of Software Quality Engineers." I wanted to share some of those points here as well.
The conversation that followed revealed something fundamental: quality isn't about job titles—it's about a critical function that's becoming more vital as AI reshapes how we build software.
Why Quality Is More Critical, Not Less
Here's the insight that's easy to miss: AI doesn't eliminate the need for quality—it changes what quality means and amplifies its importance.
Traditional Software Development: humans write the code, and quality validates that code against requirements.
AI-Augmented Development: AI generates code, tests, and configurations, and quality must validate both those outputs and the intelligence producing them.
This is why quality is more critical: We're no longer just validating code—we're validating the intelligence that creates the code.
The Mental Model Shift: From Tool to Apprentice
Here's the reframe that changes everything:
Stop thinking of AI as a tool that needs testing. Start thinking of AI as an apprentice engineer that needs supervision.
When you hire a junior engineer, you don't just test their code. You review their work, check their understanding of the requirements, give feedback, and grant more autonomy as they earn trust.
AI needs the same approach.
The Three Pillars of Quality in the AI Age
Quality in AI-augmented development rests on three foundations:
1. TRUST
Can we believe what the AI tells us?
AI can hallucinate: generate plausible but incorrect information. It might create a test suite that looks comprehensive but misses critical scenarios. The quality function validates that AI outputs are accurate, complete, and grounded in the actual requirements.
2. RELIABILITY
Does the AI perform consistently?
AI is probabilistic: the same input doesn't always yield the same output. The quality function ensures that results are consistent, reproducible, and stable enough to trust in a production pipeline.
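One simple reliability probe is to call the model repeatedly with the same input and measure how often the outputs agree. A minimal sketch, assuming a generic `generate(prompt)` callable standing in for whatever model API you actually use:

```python
from collections import Counter

def consistency_check(generate, prompt, runs=5):
    """Call a generation function repeatedly with the same prompt
    and report how consistent the outputs are.

    `generate` is a stand-in for any model call (hypothetical here);
    it takes a prompt string and returns an output string.
    """
    outputs = [generate(prompt) for _ in range(runs)]
    counts = Counter(outputs)
    modal_output, freq = counts.most_common(1)[0]
    return {
        "distinct_outputs": len(counts),          # how many different answers appeared
        "agreement_rate": freq / runs,            # share of runs matching the modal answer
        "modal_output": modal_output,
    }

# Usage with a deterministic stub: agreement should be 1.0.
report = consistency_check(lambda p: p.upper(), "generate a login test", runs=5)
print(report["distinct_outputs"], report["agreement_rate"])
```

A low agreement rate doesn't automatically mean the output is wrong, but it tells you this is a spot where human review can't be skipped.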
3. RISK
What's the blast radius if AI is wrong?
Not all AI outputs carry the same risk: a wrong generated unit test wastes some review time; wrong generated payment logic loses money; a wrong generated infrastructure change can take down production.
The quality function identifies high-risk scenarios and ensures appropriate oversight. It defines boundaries: where should AI assist versus where is it too risky?
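As a sketch of what "appropriate oversight" could look like in practice, here is a toy risk-tier function. The attributes and tiers are my own illustrative assumptions, not a standard; adapt them to your organization's risk model.

```python
def review_level(touches_money: bool, customer_facing: bool, reversible: bool) -> str:
    """Map an AI-generated change to a review tier by blast radius.

    The three attributes are a deliberately small, hypothetical risk model:
    real frameworks would consider data sensitivity, compliance scope, etc.
    """
    if touches_money or not reversible:
        # Highest blast radius: money or irreversible changes.
        return "human review by a senior engineer before merge"
    if customer_facing:
        return "human review plus automated checks"
    # Low blast radius: let automation carry most of the load.
    return "automated checks; human spot-checks by sampling"

# Usage: a reversible, customer-facing UI tweak generated by AI.
print(review_level(touches_money=False, customer_facing=True, reversible=True))
```

The point is not these particular rules but that the mapping is explicit, so every AI-generated change gets a deliberate, repeatable level of scrutiny.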
What This Means for Your Organization
If you're a leader thinking about quality in your organization, here are the critical questions:
1. Are you testing tools or supervising intelligence?
If your quality approach is "run the AI-generated tests and see if they pass," you're testing tools. You need to be validating whether the AI understood the requirement, whether its coverage matches the real risks, and what assumptions it made silently.
2. Do you have a quality function or just quality activities?
Quality activities: Developers run tests, code reviews happen, CI/CD catches regressions.
Quality function: Someone owns the question "How do we ensure AI-human collaboration produces trustworthy software?" across the entire organization.
3. What's your AI supervision framework?
When AI generates code, tests, or configurations, what's your systematic approach to validation? If the answer is "we review it like any other code," that's not enough. AI-generated code requires different validation because it can be confidently wrong, it can look plausible on the surface while missing intent, and there is no author you can ask why a decision was made.
Three Principles to Remember
Principle 1: Tools change, principles remain (a line I heard somewhere that stuck with me)
AI tools evolve constantly. The specific models, frameworks, and platforms will be different next year. But the principle—ensure software is trustworthy—never changes.
Principle 2: Test the tester, validate the validator
In the AI age, quality becomes meta. We're not just testing software. We're validating the AI systems that create software. We're ensuring the intelligence building our world is trustworthy.
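One concrete way to "test the tester" is a mutation-style check: deliberately break the code and confirm that the (possibly AI-generated) tests notice. A minimal sketch, with a toy function and a toy suite standing in for real ones:

```python
def price_with_tax(amount, rate=0.1):
    """The real implementation under test."""
    return amount * (1 + rate)

def broken_price_with_tax(amount, rate=0.1):
    """A deliberate mutant: silently drops the tax."""
    return amount

def run_suite(fn):
    """A stand-in for an AI-generated test suite; True if every check passes."""
    checks = [
        abs(fn(100) - 110) < 1e-9,  # 10% tax on 100 should give 110
        abs(fn(0) - 0) < 1e-9,      # zero amount stays zero
    ]
    return all(checks)

# The suite should pass on the real code and FAIL on the mutant.
# If it passes on both, the tests aren't actually testing anything.
assert run_suite(price_with_tax) is True
assert run_suite(broken_price_with_tax) is False
```

This flips the usual question: instead of "do the tests pass?", you ask "would these tests catch a real defect?" That is exactly what validating the validator means.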
Principle 3: Supervise intelligence, don't just test software
This is the fundamental shift. Traditional quality was about finding bugs in static code. Modern quality is about supervising dynamic, learning systems to ensure they produce reliable outcomes.
The Opportunity Ahead
Quality engineering isn't disappearing in the AI age. It's becoming the most critical function in software development.
Why? Because as AI writes more code, makes more decisions, and handles more complexity, the stakes get higher. We need people who can supervise that intelligence, judge its risks, and validate its outputs.
Organizations that understand this are transforming their quality functions—not eliminating them. They're evolving from quality assurance teams to AI quality supervisors. From testers to teachers. From gatekeepers to guides.
Where Do We Go From Here?
If you're responsible for quality in your organization—whether that's your title or not—here's what I encourage you to do:
1. Start with the apprentice mindset
When AI generates something, ask: "Did it understand the requirement?" not "Did the output work?"
2. Build supervision frameworks
Don't validate ad-hoc. Create systematic approaches for reviewing AI-generated work. Make it repeatable and teachable.
3. Assess risk deliberately
Not all AI usage carries the same risk. Apply different levels of scrutiny based on blast radius. High-risk scenarios need more oversight.
4. Invest in skills, not just tools
Prompt engineering, AI interaction, validation frameworks: these are the new quality engineering skills. But they're useless without judgment, domain knowledge, and systems thinking.
5. Educate your organization
Quality isn't just the quality team's job anymore. Developers, product managers, executives: everyone needs to understand AI supervision, not just AI usage.
Final Thought
Software is eating the world. AI is eating software. Someone needs to ensure both are trustworthy.
That's quality. That's the function. That's why it matters more than ever.
What's your organization doing to ensure quality in the age of AI? I'd love to hear your thoughts in the comments.
#QualityEngineering #AI #SoftwareQuality #AISupervision #SoftwareDevelopment #TechLeadership #ArtificialIntelligence #TechTrends2026