Explainable AI: The New 42?

Abstract

Explainable AI is not a new field. Since at least the early exploitation of C.S. Peirce's abductive reasoning in the expert systems of the 1980s, reasoning architectures have supported an explanation function for complex AI systems, including applications in medical diagnosis, complex multi-component design, and reasoning about the real world. Explainability is thus at least as old as early AI, and a natural consequence of how AI systems are designed. While early expert systems consisted of handcrafted knowledge bases that enabled reasoning over narrow, well-defined domains (e.g., INTERNIST, MYCIN), such systems had no learning capabilities and only primitive uncertainty handling. The evolution of formal reasoning architectures to incorporate principled probabilistic reasoning, however, helped address the capture and use of uncertain knowledge.

Authors
  • Goebel, Randy
  • Chander, Ajay
  • Holzinger, Katharina
  • Lecue, Freddy
  • Akata, Zeynep
  • Stumpf, Simone
  • Kieseberg, Peter
  • Holzinger, Andreas
Shortfacts
Category
Paper in Conference Proceedings or in Workshop Proceedings (Paper)
Event Title
Machine Learning and Knowledge Extraction
Divisions
Security and Privacy
Subjects
Computer Security
Applied Informatics
Event Location
Hamburg, Germany
Event Type
Conference
Event Dates
27-30 Aug 2018
Publisher
Springer International Publishing
Page Range
pp. 295-303
Date
2018