Overcoming the Problems With MLOps Maturity Frameworks

For an increasing number of organisations, machine learning is moving from a niche area of innovation to an embedded, and hence critical, capability. At Hypergolic, we regularly hear from department heads trying to better understand the opportunities available to improve this capability - often looking to free up resources, tackle more projects or improve the SLA performance of existing products. Google’s MLOps maturity framework, “MLOps: Continuous delivery and automation pipelines in machine learning”, is a well-known and popular starting point. Yet this and other product-oriented frameworks are the wrong tools for almost all organisations.

Product Maturity Frameworks Don’t Improve Productivity

By product-centric framework, I mean a framework that describes the attributes of a single machine learning product. Several such frameworks and questionnaires exist. They usually describe levels of automation in the management of an existing product, for example:

  • Are product KPIs and SLAs established and monitored? Is there some automated alert when a model breaches these?
  • Can a model be retrained automatically on new data?
  • Can a newly retrained model be deployed to production automatically?
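To make the first of these capabilities concrete, an automated KPI/SLA breach check might look like the minimal sketch below. The metric names, thresholds and hard-coded inputs are all hypothetical; in a real pipeline the metrics would come from a monitoring service and the alerts would go to a paging or messaging system.

```python
# Minimal sketch of an automated KPI/SLA breach check for a deployed model.
# All metric names and thresholds are hypothetical illustrations.

KPI_THRESHOLDS = {
    "accuracy": 0.90,       # minimum acceptable model accuracy
    "p95_latency_ms": 250,  # maximum acceptable 95th-percentile latency
}

def check_kpis(metrics: dict) -> list:
    """Return a human-readable alert for each breached KPI."""
    alerts = []
    if metrics.get("accuracy", 1.0) < KPI_THRESHOLDS["accuracy"]:
        alerts.append(
            f"accuracy {metrics['accuracy']:.2f} "
            f"below target {KPI_THRESHOLDS['accuracy']}"
        )
    if metrics.get("p95_latency_ms", 0) > KPI_THRESHOLDS["p95_latency_ms"]:
        alerts.append(
            f"p95 latency {metrics['p95_latency_ms']}ms "
            f"above limit {KPI_THRESHOLDS['p95_latency_ms']}ms"
        )
    return alerts

# Hard-coded example metrics, standing in for a monitoring-service query.
for alert in check_kpis({"accuracy": 0.87, "p95_latency_ms": 300}):
    print("ALERT:", alert)
```

The point of the framework question is not this code itself but whether such checks run automatically on a schedule and route their alerts to someone accountable.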

Many teams start with these frameworks when a machine learning product becomes embedded in an organisation’s day-to-day operations. I think it’s not too much of a stretch to say that this kind of thinking has led to the explosion of MLOps tools over the last two years. But whilst an automated monitoring stack might make you more comfortable about operational risk, team output will become ever more constrained, and the backlog of new ideas will continue to grow. All product-centric maturity frameworks are doomed to reach this dead end.

Product maturity assessments yield little benefit because they ask the wrong question. The purpose of a machine learning team, like any technology team, is not to build and maintain one or more products. Its purpose is to use AI to solve organisational problems and help achieve strategically important goals. This is not merely a semantic point. Taking a product view leaves the question of “which products?” unanswered and completely ignores the development of new products. A product assessment cannot tell you whether you are effectively solving strategically important problems, or how you can continue to solve them better in the future.

Viewing Maturity of Capability Through an Organisational Lens

An organisational framework, distinct from a product framework, must answer at least three questions:

  1. Is there a clear AI strategy for the team, and is it well aligned with the organisational strategy and the team’s capabilities?
  2. Are projects clearly defined in terms of that strategy, broken down well to support continuous feedback and selected carefully to maximise strategic impact?
  3. Are project processes articulated and updated, techniques and skills defined, and appropriate technologies introduced to support the efficient execution of defined projects and ongoing support of released products?

This holistic approach to understanding an organisation’s AI capability can help transform teams of all sizes. AI teams in 2023 have two defining characteristics:

  1. They have often grown out from niche areas of innovation teams with loose connections to the core operations of their organisations.
  2. Product and project processes are less standardised in AI than in traditional software engineering.

As a result, most teams can find significant early gains by refining the processes that connect strategy to execution. Often the most impactful single change that can be made quickly is adjusting how project work is defined: a well-defined project supports a more effective allocation of time, freeing up resources to take on more impactful work.

Using Organisational Maturity for Continuous Improvement

An organisational maturity framework can be applied well beyond quick wins and incremental process improvements. Each of the three areas articulated above can be further divided by the “People, Process, Technology” framework. And whilst processes can be introduced and updated relatively quickly, identifying skill gaps, training and recruiting are long-term projects, as are identifying technology needs and introducing suitable technology.

It’s essential to plan the sequencing of your response to an organisational maturity assessment carefully. It can be tempting, with the wealth of technology products and coaching opportunities available, to plan training and new technology investment early. Investing in these areas can create significant advantages, particularly in project execution. But if your team is still weak in strategy and alignment, these investments are often misplaced and fail to yield real advantage.

The final level of almost all organisational maturity assessments is the establishment of processes for continuous monitoring and improvement. Teams that embed a culture of continuous improvement don’t just outperform in achieving strategically important results; they also become better at estimating new projects, which makes investment decisions more predictable, and their members are happier, with lower turnover and more opportunities for individual development.

Conclusion

If your New Year’s resolution for 2023 is to benchmark your team and identify areas for investment, you’ll be hard-pressed to find a better starting point than an organisational maturity assessment. I strongly recommend taking a step back for two weeks to assess objectively where you are before investing in areas that won’t bring you the advantage you’re looking for.

More articles by Christopher Kelly
