Add SHAP to identify most important features
SHAP (SHapley Additive exPlanations) can explain the output of any machine learning model and rank features by their contribution to a prediction.
@frenner suggested adding it to umami.
Repository: https://github.com/slundberg/shap
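To illustrate what the library computes, here is a minimal pure-Python sketch of the exact Shapley-value calculation that SHAP approximates (the real library uses much faster, model-specific explainers such as `TreeExplainer`). The toy linear model, weights, and baseline below are made-up illustration values, not anything from umami:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Brute-force Shapley values for one input x.

    Features absent from a coalition are replaced by their
    baseline (background) value; each feature's value is its
    weighted average marginal contribution over all coalitions.
    """
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi += weight * (f(with_i) - f(without_i))
        phis.append(phi)
    return phis

# Toy linear model (hypothetical weights). For a linear model with
# independent features, feature i's Shapley value reduces to
# w[i] * (x[i] - baseline[i]).
w = [2.0, -1.0, 0.5]
f = lambda v: sum(wi * vi for wi, vi in zip(w, v))
x = [1.0, 3.0, -2.0]
baseline = [0.0, 1.0, 0.0]
phi = shapley_values(f, x, baseline)
# phi == [2.0, -2.0, -1.0]; the values sum to f(x) - f(baseline)
```

The key property this demonstrates is additivity: the per-feature attributions always sum to the difference between the model's prediction and the baseline prediction, which is what makes SHAP useful for ranking the most important features.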