March Issue

1. Quest Global trusts us for NYC AI Bias Audit 🖥️

We’re excited to announce that Quest Global, a global leader in engineering research and development (ER&D) services, has chosen code4thought to conduct an independent Bias Testing Audit for an Automated Employment Decision Tool (AEDT) under New York City’s Bias Law (Local Law 144).

With a presence in 20 countries, 85 delivery centers, and a team of 20,000+ engineers, Quest Global tackles critical challenges across industries like Aerospace, Automotive, Energy, Hi-tech, Healthcare, Rail, and Semiconductors.

Why This Matters

As AI-driven hiring systems become more common, ensuring compliance is essential for building fair and trustworthy systems. Our collaboration with Quest Global focuses on:

  • Assessing performance and accuracy for reliable hiring decisions.

  • Conducting bias testing to identify and address discrimination.

  • Providing practical recommendations for mitigation.

While this project aligns with NYC regulations, it also supports global AI governance trends emphasizing transparency, accountability, and fairness. At code4thought, we see compliance as an opportunity for strategic improvement, not just a requirement. Stay tuned for more updates!

2. Monthly Insider 🔴 Avoiding Common Pitfalls in AI Performance Testing

AI models often fail silently—until they don’t. Performance issues can go unnoticed until they cause real-world failures, leading to missed opportunities, compliance risks, or reputational damage. While AI performance testing aims to safeguard systems before they go live, many organizations still fall into recurring pitfalls that compromise the reliability and trustworthiness of their models.

To help you navigate these challenges, we’ve put together a detailed blog post outlining the most common pitfalls in AI performance testing, along with a practical solution for each. Drawing on our experience from hundreds of performance tests on AI systems, we share best practices to help you ensure your models perform as expected.

👉 Read the full blog post here

3. Ensuring AI Fairness with iQ4AI

In the EU and UK, principles like transparency, accountability, and fairness are becoming vital components of AI compliance. At the same time, AI fairness is more than just a compliance checkbox—it’s fundamental to building unbiased, ethical, and legally sound decision-making systems. That’s why we developed iQ4AI: a comprehensive platform to evaluate, monitor, and optimize fairness in your AI models, aligned with regulatory standards like ISO 29119-11 and NYC Bias Law.

How iQ4AI Helps You Ensure Fairness:

✅ Measure & Detect Bias: Assess fairness using metrics like disparate impact, group benefit, and equal opportunity.

✅ Analyze Fairness vs. Performance: Use our interactive dashboard to explore trade-offs, balancing equity with accuracy.

✅ Ensure Compliance: Maintain inclusivity and diversity while meeting evolving regulations.

✅ Monitor Fairness Over Time: Keep track of fairness across your ML lifecycle with recurrent assessments.

✅ Detect Intersectional Bias: Identify hidden biases affecting multiple demographics.
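To make the first two checks above concrete, here is a minimal sketch of how disparate impact and the equal-opportunity difference can be computed from hiring outcomes. This is an illustration only, not iQ4AI’s implementation; the group labels, toy data, and the four-fifths threshold mentioned in the comments are our own assumptions.

```python
def selection_rate(selected, group, g):
    """Fraction of candidates in group g who were selected (1 = selected)."""
    members = [s for s, grp in zip(selected, group) if grp == g]
    return sum(members) / len(members)

def disparate_impact(selected, group, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. Values below 0.8 commonly flag adverse impact under the
    'four-fifths rule' used in US employment guidance."""
    return (selection_rate(selected, group, protected)
            / selection_rate(selected, group, reference))

def equal_opportunity_diff(selected, qualified, group, protected, reference):
    """Gap in true-positive rate (selection rate among qualified
    candidates) between the protected and reference groups; 0 means
    qualified candidates are selected at equal rates."""
    def tpr(g):
        hits = [s for s, q, grp in zip(selected, qualified, group)
                if grp == g and q]
        return sum(hits) / len(hits)
    return tpr(protected) - tpr(reference)

# Toy example: 1 = selected / qualified, group labels "A" and "B"
selected  = [1, 0, 1, 1, 0, 1, 0, 0]
qualified = [1, 1, 1, 1, 1, 1, 0, 1]
group     = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(disparate_impact(selected, group, "B", "A"))
print(equal_opportunity_diff(selected, qualified, group, "B", "A"))
```

In practice a platform like iQ4AI evaluates these metrics across many protected attributes (and their intersections) and tracks them over time, rather than on a single snapshot like this.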

With iQ4AI, you can confidently build fair, transparent, and compliant AI models.

Get your free demo today and see how iQ4AI can empower your team to make better, fairer decisions.

4. DeepSeek and the Speed of Innovation: Where Does Reliability Fit In?

In a recent Verge article, Microsoft CEO Satya Nadella highlighted the rapid innovation coming from open-source AI projects like DeepSeek, noting that “some of the best work” is now happening outside major AI labs. This observation underscores a powerful shift in momentum — one where openness and speed are becoming key competitive factors in generative AI.

While the momentum around open-source LLMs like DeepSeek is indeed impressive, this pace of development also raises an important, if often under-discussed, question: how do we ensure these systems are not just powerful — but reliable, fair, and safe?

As new models emerge and deployment accelerates, the demand for rigorous, ongoing quality testing becomes critical. Whether in enterprise, public sector, or productized AI, performance alone is no longer enough. Reliability, explainability, and governance must evolve just as fast.

This growing gap is a call to action — to integrate testing and oversight into the development cycle, not just as a safeguard, but as an enabler of long-term trust and value.

Read the original article "Satya Nadella: DeepSeek is the new bar for Microsoft’s AI success" here
