Explainable AI - Methods, Applications & Recent Developments
Dr. Wojciech Samek discusses methods, applications and recent developments of Explainable AI, in particular demonstrating the effectiveness of explanation techniques such as Layer-wise Relevance Propagation (LRP) when applied to various data types (images, text, audio, video, EEG/fMRI signals) and neural architectures (ConvNets, LSTMs). LRP provides information about individual predictions, e.g., heatmaps visualizing which pixels have been most relevant for the model to arrive at its decision. The talk will finish with a discussion of challenges and open questions in the field of explainable AI.
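The core idea behind LRP is to propagate the model's output backward through the network, redistributing "relevance" onto inputs in proportion to their contributions, so that total relevance is (approximately) conserved layer by layer. A minimal sketch of the epsilon-variant for a tiny two-layer ReLU network, with invented weights and input purely for illustration:

```python
import numpy as np

# Toy two-layer ReLU network; shapes, weights, and input are invented.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

x = rng.normal(size=4)
a1 = np.maximum(0.0, x @ W1 + b1)   # hidden ReLU activations
y = (a1 @ W2 + b2)[0]               # scalar prediction to be explained

def lrp_linear(a, W, b, R_out, eps=1e-9):
    """LRP-epsilon for one linear layer: redistribute output relevance
    R_out onto the inputs in proportion to each contribution a_i * w_ij."""
    z = a @ W + b                                        # pre-activations
    s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilized ratio
    return a * (W @ s)                                   # input relevances

R_hidden = lrp_linear(a1, W2, b2, np.array([y]))  # relevance of hidden units
R_input  = lrp_linear(x,  W1, b1, R_hidden)       # relevance of each input
# With zero biases, relevance is conserved: R_input.sum() ≈ y
```

Applied to an image classifier, the same backward pass assigns a relevance score to every pixel, which is what the heatmaps mentioned above visualize.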
Explainable AI for Science and Medicine
Understanding why a machine learning model makes a certain prediction can be as crucial as the prediction’s accuracy in many applications. Here I will present a unified approach to explain the output of any machine learning model. It connects game theory with local explanations, uniting many previous methods. I will then focus specifically on tree-based models, such as random forests and gradient boosted trees, where we have developed the first polynomial time algorithm to exactly compute classic attribution values from game theory. Based on these methods we have created a new set of tools for understanding both global model structure and individual model predictions. These methods were motivated by specific problems we faced in medical machine learning, and they significantly improve decision support for doctors during anesthesia. However, these explainable machine learning methods are not specific to medicine, and are now used by researchers across many domains. The associated open source software (http://github.com/slundberg/shap) supports many modern machine learning frameworks and is widely used in industry (including at Microsoft).
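The classic attribution values from game theory referred to here are Shapley values: a feature's attribution is its marginal contribution to the prediction, averaged over all orderings of the other features. The talk's polynomial-time algorithm is specific to trees; as a generic illustration, here is a brute-force computation on a toy 3-feature model, where "absent" features are replaced by a fixed baseline (a common simplification; the model, baseline, and input are invented):

```python
from itertools import combinations
from math import factorial

def model(x):                     # a toy nonlinear model
    return x[0] + 2.0 * x[1] * x[2]

baseline = [0.0, 0.0, 0.0]        # stand-in values for "absent" features
x = [1.0, 2.0, 3.0]               # the instance being explained
n = len(x)

def value(S):
    """Model output with only the features in S taken from x."""
    z = [x[i] if i in S else baseline[i] for i in range(n)]
    return model(z)

def shapley(i):
    """Exact Shapley value of feature i by enumerating all subsets."""
    total = 0.0
    others = [j for j in range(n) if j != i]
    for k in range(n):
        for S in combinations(others, k):
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            total += w * (value(set(S) | {i}) - value(set(S)))
    return total

phi = [shapley(i) for i in range(n)]
# Efficiency axiom: the attributions sum to model(x) - model(baseline)
```

This enumeration is exponential in the number of features, which is exactly why an exact polynomial-time algorithm for tree ensembles, as implemented in the shap package, matters in practice.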