New Release: Performance Reports for Classification Models in Production

A new report in the performance tabs family: now, you can use Evidently to summarize and explore the quality of classification models. It works for probabilistic and non-probabilistic classifiers, binary and multi-class alike.

What is it?

Like our Regression Performance Report, this new report helps you understand model performance in production. You can run it whenever you have ground truth labels available to evaluate the classification quality.


The report answers the following questions:

  • How well does the model perform in production?
  • Has it changed since model training, or since a particular past period?
  • Where does the model assign the wrong labels? Can specific features explain the misclassification?

It has a few features I especially like. 

For example, this Class Separation plot shows not only model accuracy but also the quality of the model calibration.

Class Separation
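To make the calibration idea concrete, here is a minimal, purely illustrative sketch (not Evidently's implementation; the function name is hypothetical): bin the predicted probabilities and compare the mean predicted probability in each bin with the observed fraction of positives. For a well-calibrated model the two are close in every bin.

```python
def calibration_bins(y_true, y_prob, n_bins=5):
    """For each probability bin, compare the mean predicted probability
    with the observed fraction of positives."""
    bins = [[] for _ in range(n_bins)]
    for label, prob in zip(y_true, y_prob):
        # Map the probability to a bin; prob == 1.0 falls into the last bin.
        idx = min(int(prob * n_bins), n_bins - 1)
        bins[idx].append((label, prob))
    rows = []
    for members in bins:
        if not members:
            continue
        labels, probs = zip(*members)
        rows.append((sum(probs) / len(probs), sum(labels) / len(labels)))
    return rows  # [(mean predicted prob, observed positive rate), ...]
```

A large gap between the two numbers in any bin signals miscalibration even when accuracy looks fine.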

A Precision-Recall table helps explore the model's decision boundaries at different probability thresholds.

Precision-Recall Table
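The threshold trade-off behind such a table can be sketched in a few lines of plain Python (illustrative only, not Evidently's API): sweep a set of decision thresholds and compute precision and recall for each.

```python
def precision_recall_at(y_true, y_prob, thresholds):
    """Precision and recall of a probabilistic binary classifier
    at each decision threshold in `thresholds`."""
    rows = []
    for t in thresholds:
        pred = [1 if p >= t else 0 for p in y_prob]
        tp = sum(1 for y, p in zip(y_true, pred) if y == 1 and p == 1)
        fp = sum(1 for y, p in zip(y_true, pred) if y == 0 and p == 1)
        fn = sum(1 for y, p in zip(y_true, pred) if y == 1 and p == 0)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        rows.append((t, precision, recall))
    return rows  # [(threshold, precision, recall), ...]
```

Scanning the rows makes it easy to pick the threshold whose precision/recall balance fits your use case.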

And the Classification Quality table shows whether specific feature values can explain the model errors.

Classification Quality
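The underlying check can be sketched as grouping the error rate by a feature's values to see whether misclassification concentrates in specific segments. Again, this is a simplified illustration, not Evidently's code.

```python
from collections import defaultdict

def error_rate_by_feature(feature_values, y_true, y_pred):
    """Fraction of misclassified examples for each value of one feature."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for value, actual, predicted in zip(feature_values, y_true, y_pred):
        totals[value] += 1
        if actual != predicted:
            errors[value] += 1
    return {v: errors[v] / totals[v] for v in totals}
```

A segment with a much higher error rate than the rest points to where the model needs attention, or more training data.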

All instantly generated from your model logs!

Read more on how to use the tool in our release blog post here: https://evidentlyai.com/blog/evidently-018-classification-model-performance 

Or, pip install evidently to give it a try: https://github.com/evidentlyai/evidently 



