General pitfalls of model-agnostic interpretation methods for machine learning models

Abstract

An increasing number of model-agnostic interpretation techniques for machine learning (ML) models, such as partial dependence plots (PDP), permutation feature importance (PFI), and Shapley values, provide insightful model interpretations but can lead to wrong conclusions if applied incorrectly. We highlight many general pitfalls of ML model interpretation, such as using interpretation techniques in the wrong context, interpreting models that do not generalize well, ignoring feature dependencies, interactions, uncertainty estimates, and issues in high-dimensional settings, or making unjustified causal interpretations, and we illustrate each pitfall with examples. We focus on pitfalls for global methods that describe the average model behavior, but many also apply to local methods that explain individual predictions. Our paper addresses ML practitioners by raising awareness of pitfalls and identifying solutions for correct model interpretation, and it also addresses ML researchers by discussing open issues for further research.
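
To make the abstract's terminology concrete, here is a minimal sketch (not taken from the chapter) of two of the methods it names, PDP and PFI, using scikit-learn. The synthetic Friedman #1 dataset, the random forest model, and the choice of feature 0 are illustrative assumptions, not the chapter's own setup.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic regression task (illustrative assumption).
X, y = make_friedman1(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# PFI on held-out data: a feature's importance is the drop in score
# when its values are randomly permuted. Note a pitfall the abstract
# warns about: with dependent features, permutation creates unrealistic
# data points, which can distort the importance estimates.
pfi = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print(pfi.importances_mean)

# PDP: the average model prediction as a function of feature 0,
# marginalizing over the remaining features.
PartialDependenceDisplay.from_estimator(model, X_test, features=[0])
plt.show()
```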

Authors
  • Molnar, Christoph
  • König, Gunnar
  • Herbinger, Julia
  • Freiesleben, Timo
  • Dandl, Susanne
  • Scholbeck, Christian A.
  • Casalicchio, Giuseppe
  • Grosse-Wentrup, Moritz
  • Bischl, Bernd
Shortfacts
Category
Book Section/Chapter
Divisions
Neuroinformatics
Subjects
Artificial Intelligence
Title of Book
xxAI - Beyond Explainable AI
Page Range
pp. 39-68
Date
17 April 2022
Official URL
https://doi.org/10.1007/978-3-031-04083-2_4