Giannis Tolios’ Post

𝗖𝗿𝗲𝗮𝘁𝗲 𝗘𝘅𝗽𝗹𝗮𝗶𝗻𝗮𝗯𝗹𝗲 𝗠𝗮𝗰𝗵𝗶𝗻𝗲 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝗠𝗼𝗱𝗲𝗹𝘀! 🖥️ Many machine learning models lack explainability, making it difficult to understand their predictions. This is a significant obstacle in various cases, including regulated industries where black-box models are unacceptable. SHAP is a Python library built on Shapley additive explanations, a game-theoretic approach to explaining the output of machine learning models. The library generates plots that visualize the effect of each variable, making it a highly useful tool! Check the links below for more information, and make sure to follow me for regular data science content.

𝗦𝗵𝗮𝗽 𝗹𝗶𝗯𝗿𝗮𝗿𝘆 𝘄𝗲𝗯𝘀𝗶𝘁𝗲: https://lnkd.in/dE2cxKN8
𝗖𝗵𝗲𝗰𝗸 𝗺𝘆 𝗗𝗮𝘁𝗮 𝗣𝗼𝗿𝘁𝗳𝗼𝗹𝗶𝗼: https://lnkd.in/dA8XSw4Q

#datascience #python #machinelearning #deeplearning
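To illustrate the game-theoretic idea behind the library (this is a brute-force sketch of exact Shapley values, not the shap library's own optimized implementation; the toy model, input, and zero baseline are made up for the example):

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for prediction f(x).
    Features absent from a coalition are replaced by their
    baseline value (a common simplification)."""
    n = len(x)

    def v(coalition):
        # Model output with only the coalition's features "present".
        z = [x[i] if i in coalition else baseline[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight for a coalition of this size.
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += w * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi

# Toy model with an interaction term between features 1 and 2.
f = lambda z: 2 * z[0] + z[1] * z[2]
x, base = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
phi = shapley_values(f, x, base)
# Additivity: the attributions sum to f(x) - f(baseline).
print(phi, sum(phi), f(x) - f(base))
```

The interaction term's contribution is split evenly between the two features involved, and the attributions always sum to the gap between the prediction and the baseline output, which is what makes the explanations additive.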


Thanks Giannis... you always suggest cutting-edge knowledge.

Pablo Neirz

Data Scientist Sr. | Neurociencia y Machine Learning

3mo

SHAP values stand out as the only additive feature attribution method that simultaneously satisfies three key desirable properties:

- Local accuracy: the explanation sums exactly to the model's prediction
- Missingness: absent features get zero attribution
- Consistency: if a feature's marginal contribution increases (or stays the same) in the model, its importance cannot decrease

This unique theoretical guarantee is why SHAP has become the gold standard in model interpretability. However, it is important to remember that SHAP explains the model, not the phenomenon; just as correlation does not imply causation, the explainability of the model does not imply causation.
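The consistency property can be checked by hand in the two-feature case, where a feature's Shapley value is just the average of its marginal contributions over the two coalition orders (the two toy models and inputs below are made up for the example):

```python
# phi1: Shapley value of feature 1 in a two-feature model,
# averaged over both orders in which features can be added.
def phi1(f, x, base):
    return 0.5 * ((f(x[0], base[1]) - f(base[0], base[1]))
                + (f(x[0], x[1]) - f(base[0], x[1])))

f_old = lambda a, b: a + b        # baseline model
f_new = lambda a, b: 2 * a + b    # feature 1 contributes more everywhere
x, base = (1.0, 1.0), (0.0, 0.0)
print(phi1(f_old, x, base), phi1(f_new, x, base))  # 1.0 then 2.0
```

Since feature 1's marginal contribution grows in every coalition when moving from `f_old` to `f_new`, its attributed importance grows too, exactly as consistency demands.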
