Prediction Explanation in Machine Learning - The AI & Analytics Engine


Prediction explanations are important for ensuring ML model transparency, as they reveal how feature values impact individual predictions. In this blog, we'll discuss what prediction explanation is, why it's useful for understanding predictions, and how you can use it in the AI & Analytics Engine.

What is prediction explanation in machine learning?

When a machine learning model generates a prediction, the value of each input variable, also known as a feature, affects the resulting output.

Prediction explanation provides insight into how the specific value of each input feature impacts the resulting prediction.

Prediction explanation shouldn't be confused with feature importance, which describes how strongly each feature, taken as a whole, generally influences the model's predictions.
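To make the distinction concrete, here is a minimal sketch using a linear model, where both quantities have a simple closed form. The dataset and model here are illustrative assumptions, using Python with scikit-learn:

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True)
model = LinearRegression().fit(X, y)

# Feature importance: one score per feature, describing the model as a whole.
# For a linear model, |coefficient| * feature spread is a common summary.
global_importance = np.abs(model.coef_) * X.std(axis=0)

# Prediction explanation: one signed contribution per feature, for one row.
# For a linear model, coef * (value - mean) is how far this feature pushes
# this particular prediction away from the average prediction.
row = X[0]
local_contributions = model.coef_ * (row - X.mean(axis=0))

# The local contributions, plus a baseline, reconstruct the prediction.
baseline = model.predict(X.mean(axis=0).reshape(1, -1))[0]
assert np.isclose(baseline + local_contributions.sum(),
                  model.predict(row.reshape(1, -1))[0])
```

For non-linear models there is no closed form like this, which is where model-agnostic methods such as SHAP, discussed below, come in.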


Why is prediction explanation useful?

Prediction explanation gives better visibility into how ML models arrive at their predictions, a quality known as "model interpretability" that is important when building ML models.

Model transparency and trust

Being able to break down how each feature's value affects an individual prediction greatly increases model transparency and trust.

Prediction explanation provides transparency into the inner workings of the model that can easily be communicated to stakeholders, supporting accountability, fairness, and trust.

Regulatory compliance

In domains where sensitive decisions are influenced by an ML model's predictions, a high degree of model interpretability may be a hard requirement for maintaining regulatory compliance.

Prediction explanations provide a granular account of the prediction for any particular case.


The prediction explanation feature in the Engine

For any regression, binary classification, or multi-class classification model, the prediction explanation tool sits within the AI & Analytics Engine's Insights tab.

Simply click Generate and select any row of data displayed in the table. The panel on the right can be displayed in text or chart view; it shows the values of the five most impactful features and how much each increased or decreased the prediction.

[Image: Prediction explanation in the AI & Analytics Engine]

The impact of feature values is generated using a method called Kernel SHAP (SHapley Additive exPlanations), which takes into account both a feature's contribution in isolation and its contribution when it interacts with the other features in the sample.
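The Engine handles this computation for you, but the same method is available in the open-source shap package if you want to experiment outside the Engine. Below is a minimal sketch; the model, dataset, and sample sizes are illustrative assumptions:

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer()
model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

# Kernel SHAP is model-agnostic: it needs only a prediction function and a
# background sample that stands in for "absent" features when the method
# evaluates feature coalitions.
background = shap.sample(data.data, 100, random_state=0)
explainer = shap.KernelExplainer(
    lambda X: model.predict_proba(X)[:, 1],  # probability of the positive class
    background,
)

# Explain one prediction: one signed SHAP value per feature for this row.
shap_values = explainer.shap_values(data.data[0], nsamples=200)

# The five most impactful features for this row, as in the Engine's view.
top5 = sorted(zip(data.feature_names, shap_values),
              key=lambda pair: abs(pair[1]), reverse=True)[:5]
for name, value in top5:
    print(f"{name}: {value:+.4f}")
```

Because the method is additive, summing the SHAP values together with the explainer's expected value recovers the model's output for that row, which is what makes each contribution directly interpretable as an increase or decrease of the prediction.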
