July 2nd, 2024

Explainability is not a game

Explainability in machine learning matters for trustworthy AI decisions. SHAP scores can fail to provide rigorous explanations and can lead to errors. Reliable explanations are especially important in critical domains like medical diagnosis.

The article discusses the importance of explainability in machine learning models, particularly where decisions impact people, and emphasizes the need for rigorous explanations to build trust and to debug complex AI systems. While eXplainable AI (XAI) aims to provide such explanations, its most widely adopted approach, SHAP scores (Shapley values applied to feature attribution), offers no guarantees of rigor and raises concerns about computational complexity and reliability. Recent research has highlighted practical issues with SHAP scores, showing that they can give misleading information about feature importance, and the article presents examples where SHAP scores prioritize features inaccurately, potentially leading to errors in decision-making. Formal approaches, such as abductive explanations, offer a logic-based way to compute explanations but come with their own limitations. The discussion underscores the critical need for trustworthy and accurate explanations in AI systems, especially in high-risk domains like medical diagnosis.
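
To make the mechanics concrete, here is a minimal brute-force sketch of exact SHAP scores for a toy three-feature Boolean classifier; the model, the instance, and the uniform feature distribution are illustrative assumptions, not taken from the article.

    from itertools import combinations
    from math import factorial

    FEATURES = [0, 1, 2]

    def model(x):
        # Toy classifier: predicts 1 iff (x0 AND x1) OR x2. Purely illustrative.
        return int((x[0] and x[1]) or x[2])

    def coalition_value(S, x):
        # Expected model output with the features in S fixed to x and the rest
        # drawn uniformly from {0, 1} (a common "interventional" formulation).
        free = [i for i in FEATURES if i not in S]
        total = 0
        for bits in range(2 ** len(free)):
            z = list(x)
            for k, i in enumerate(free):
                z[i] = (bits >> k) & 1
            total += model(z)
        return total / (2 ** len(free))

    def shap_scores(x):
        n = len(FEATURES)
        scores = []
        for i in FEATURES:
            others = [j for j in FEATURES if j != i]
            phi = 0.0
            for r in range(len(others) + 1):
                for S in combinations(others, r):
                    # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                    weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                    phi += weight * (coalition_value(set(S) | {i}, x) - coalition_value(set(S), x))
            scores.append(phi)
        return scores

    print(shap_scores((1, 1, 0)))  # one attribution per feature for this instance

The concern raised in the article is that even exact scores computed this way can rank features in a way that conflicts with formal notions of relevance, such as those given by abductive explanations.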

4 comments
By @tomxor - 4 months
> ML models is most often inscrutable, with the consequence that decisions taken by ML models cannot be fathomed by human decision makers

Yup, James Mickens planted this seed 6 years ago. Well worth the watch even if you already agree; it's brilliant, cathartic humour (the subject isn't really "security").

https://www.youtube.com/watch?v=ajGX7odA87k

By @brigadier132 - 4 months
ML models are just statistical function approximators, right? So isn't asking for "explainability" from ML models equivalent to asking "why is fire hot?"? If an ML model makes a prediction, the reason is that it was trained on data that made that prediction more likely than the alternatives.
By @dwheeler - 4 months
Explainability is a wonderful property. However, as far as I can tell, the most accurate models do not support explainability (by accuracy I mean "most likely to give the correct answer"). There are reasons those less explainable models are more popular.

If that's true, we are confronted with a classic trade-off: what is more important? Explainability or other properties like accuracy?

I suspect that in some high-consequence areas explainability will be the most important property (and possibly mandated by law), even though it comes with lower accuracy. For the rest, accuracy will be more important.

Sadly, I think there is good reason to believe that this trade-off will continue. I hope I'm wrong about that.
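
As a rough illustration of that trade-off (my own sketch, not the commenter's; the dataset and hyperparameters are arbitrary), one can compare a small, human-readable decision tree against a boosted ensemble in scikit-learn:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)

    # A depth-3 tree can be printed and audited by hand; the ensemble cannot.
    interpretable = DecisionTreeClassifier(max_depth=3, random_state=0)
    black_box = GradientBoostingClassifier(random_state=0)

    print("shallow tree :", cross_val_score(interpretable, X, y, cv=5).mean())
    print("boosted trees:", cross_val_score(black_box, X, y, cv=5).mean())

If the ensemble comes out ahead, that is the trade-off described above: the model that is easier to explain is not the one most likely to give the correct answer.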

By @mvanveen - 4 months
I am a co-author of a patent for model explainability for credit risk underwriting applications using Shapley values.

In fairness I haven't given this article a thorough read but my initial impression is that I'm finding myself frustrated by the FUD this article is attempting to spread. As my boss would often remark to remind us all: model explainability is an under-constrained optimization problem. By definition there isn't a unique explanation decomposition unless you further constrain the problem.

Therefore, I personally find that hand-wringing about there not being 100% agreement among different explanations for a model inference, while definitely thought-provoking and worth considering, should at least account for this reality. For some reason, a lot of folks in the ML community seem to have come to the opinion that because the problem is under-constrained, explanations shouldn't be calculated or have no utility.
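
A tiny numeric example of that under-constrainedness (mine, not the commenter's): for a model with an interaction term, two additive attributions can both sum exactly to the change in the model's output, i.e. both satisfy a completeness-style constraint, while crediting the features differently.

    def f(x1, x2):
        # The interaction term makes credit assignment inherently ambiguous.
        return 2 * x1 + 3 * x2 + 4 * x1 * x2

    x, baseline = (1.0, 1.0), (0.0, 0.0)
    delta = f(*x) - f(*baseline)  # 9.0

    attribution_a = {"x1": 2 + 4, "x2": 3}  # interaction credited to x1
    attribution_b = {"x1": 2, "x2": 3 + 4}  # interaction credited to x2

    assert sum(attribution_a.values()) == delta
    assert sum(attribution_b.values()) == delta
    print(attribution_a, attribution_b)  # both "complete", different importance ranking

Under a baseline-style value function, Shapley values would split the interaction evenly between the two features; that is one principled way to add constraints, but not the only one.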

Would you prefer to be able to examine which features are driving a model to deny a disproportionate number of folks of a particular race or ethnicity, or not, all things being equal? My point is that even if there are limitations to explainability, there are a lot of very real, critical scenarios where applying SHAP has actual, real-world utility.

Furthermore, it's not clear that LIME or other explainability methods provide better or more robust explanations than Shapley values. As someone who has looked at this pretty extensively in credit underwriting, I'd personally feel most comfortable computing SHAP values while acknowledging some of the limitations and risks this article calls out.

Axioms such as completeness are also pretty reasonable and I think there is a fair amount of real world utility to explainability algorithms that derive from such an axiomatic basis.
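
For what it's worth, the completeness (efficiency / local-accuracy) axiom is also easy to check numerically. Below is a hedged sketch assuming the open-source shap package and scikit-learn, with a synthetic dataset standing in for a credit portfolio; it is not the commenter's patented pipeline.

    import numpy as np
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)      # exact Shapley values for tree ensembles
    phi = explainer.shap_values(X[:50])        # per-feature attributions (log-odds units)
    raw = model.decision_function(X[:50])      # model output in the same units

    # Completeness: attributions plus the base value should recover the raw output.
    print(np.allclose(phi.sum(axis=1) + explainer.expected_value, raw, atol=1e-6))

A check like this doesn't answer the ranking concerns the article raises, but it does verify that the axiomatic bookkeeping holds for your own model.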