Explainable AI in medical technology and automotive applications

Machine learning and questions of »how« and »why«

AI in medicine and the automotive sector
© science photo/nd3000 - stock.adobe.com

When it comes to the subject of machine learning and deep neural networks (DNNs), there are usually more questions than answers, as data analysis by ML-based systems remains an enigmatic process for developers and users alike. Nevertheless, it is vital that the systems provide transparency and interpretability – particularly with regard to safety issues in the automotive sector, such as driver drowsiness detection, or in medicine, with automated screening of tissue samples.

The use of automated processes in critical strategic decision-making depends on the explainability of the underlying data analysis, which also underpins the general acceptance of these processes.

How can the decisions of AI-based systems be explained to users?

The aim of the research focus »Explainable AI for medicine and the automotive sector« is to create new methods of explainable machine learning, combined with prediction and prescription, in the stated application areas. This includes methods intended not only to ensure transparency in how DNNs are trained and operate, but also to make the content of forecast models easier to validate.
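To make the idea of transparency concrete, one widely used family of explainability methods attributes a model's prediction back to its input features via gradients ("saliency"). The sketch below is illustrative only: the logistic-regression model, its weights, and the drowsiness-related feature names are assumptions for the example, not the project's actual system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def saliency(w, b, x):
    """Gradient of the predicted probability w.r.t. each input feature.

    For logistic regression p = sigmoid(w.x + b), the gradient is
    p * (1 - p) * w; entries with large magnitude mark the features
    that most influence this particular prediction.
    """
    p = sigmoid(np.dot(w, x) + b)
    return p * (1.0 - p) * w

# Toy "driver drowsiness" classifier over hypothetical features
features = ["blink_rate", "heart_rate_var", "steering_entropy"]
w = np.array([1.8, -0.9, 1.2])   # assumed trained weights
b = -0.5
x = np.array([0.7, 0.2, 0.9])    # one driver's current measurements

grads = saliency(w, b, x)
for name, g in sorted(zip(features, grads), key=lambda t: -abs(t[1])):
    print(f"{name:18s} attribution {g:+.3f}")
```

For deep networks the same per-prediction gradient is computed by backpropagation to the input layer; more robust variants (integrated gradients, layer-wise relevance propagation) follow the same attribution idea.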

»Explainable AI« for in-the-wild applications

Another focus of research is the generalizability of ML systems – that is, the development of adaptive methods that can deal with individual variations in patients’ physiological parameters (during mobile use in sports, for example) and operate under varying conditions (»in the wild«).
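One minimal form such adaptivity can take is per-subject normalization: each user's physiological signal is expressed relative to their own running baseline, so a downstream model sees comparable values across individuals. The sketch below uses Welford's streaming mean/variance algorithm; the heart-rate signal and the two example users are illustrative assumptions.

```python
import math

class SubjectNormalizer:
    """Maintains a per-subject running mean/std and emits z-scores."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations (Welford)

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def zscore(self, x):
        if self.n < 2:
            return 0.0  # not enough data for a personal baseline yet
        std = math.sqrt(self.m2 / (self.n - 1))
        return 0.0 if std == 0 else (x - self.mean) / std

# Two users with different resting heart rates yield comparable z-scores
athlete, commuter = SubjectNormalizer(), SubjectNormalizer()
for hr in [52, 55, 50, 53]:
    athlete.update(hr)
for hr in [78, 82, 75, 80]:
    commuter.update(hr)

print(athlete.zscore(70))   # far above the athlete's own baseline
print(commuter.zscore(70))  # below the commuter's own baseline
```

The same raw reading of 70 bpm is anomalous for one user and unremarkable for the other, which is exactly the kind of individual variation an »in the wild« system must absorb before classification.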