Semantics

Creating transparency in AI decisions

Semantics
© Fraunhofer IIS

Data is the raw material for all machine learning and artificial intelligence applications. Useful and meaningful insights can only be extracted from this data if the knowledge associated with or contained in it, its »semantics«, is captured in a suitable way during or after the data is created, described in a form that both humans and machines can understand, and linked to the actual data.

With reference to these requirements, the competence pillar »Semantics« deals with two key areas:

  • Acquisition of knowledge: The first key area focuses on how »model knowledge« in various application areas (such as driver assistance, self-localization, digital pathology, or segmentation of XXL tomography data) can be captured and described together with the measurement data used for this purpose (e.g., vital signs and emotions of vehicle occupants, localization parameters, microscopy data of histological tissue, XXL tomography data).
  • Linking knowledge and data: The second key area addresses the challenge of connecting the captured information, or semantics, with the associated measurement data in such a way that both can be made available and usable for various applications through methods from data analysis, machine learning, and artificial intelligence.

In the context of knowledge acquisition, a survey in the form of structured interviews with the experts of the application projects was started in order to extract, capture, and document the form in which the semantics (i.e. the »knowledge«) regarding the related questions and the different data sources (images, image volumes, videos, multimodal time series, etc.) are available, captured, and managed in the individual application projects. The goal of this survey is, on the one hand, to establish a common understanding of the term »semantics« and, on the other hand, to find synergies in how semantics are captured and used.

Based on the feedback collected in this way, a first clustering of the different methods for knowledge capture was performed. These approaches can currently be divided into the following groups.

Iconic annotation

In iconic annotation, regions in 2D and 3D image data are drawn in and marked (»labeled«). In the field of »Digital Pathology«, for example, these labeled regions consist of different tissue areas with certain anatomical or pathological properties such as »tumor«, »connective tissue« or »inflamed tissue«, whereas in the segmentation of XXL-CT data the labeled regions describe, for example, »screws«, »sheets« or »rivets«. Similar approaches are also used for capturing information in video streams (e.g., of soccer matches), where the 2D positions of the ball and the players are manually marked over time, along with important events (foul, goal, out of play).
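A minimal sketch of how such iconic annotations might be represented and queried; the class names, labels, and coordinates below are purely illustrative, not the data model actually used in the application projects:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    label: str    # e.g. "tumor", "connective tissue", "rivet"
    polygon: list # [(x, y), ...] vertices of the labeled region

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def label_at(x, y, annotations, default="background"):
    """Return the label of the first annotated region containing (x, y)."""
    for ann in annotations:
        if point_in_polygon(x, y, ann.polygon):
            return ann.label
    return default

regions = [Annotation("tumor", [(0, 0), (10, 0), (10, 10), (0, 10)])]
print(label_at(5, 5, regions))   # "tumor"
print(label_at(20, 5, regions))  # "background"
```

Such a lookup turns manually drawn regions into per-pixel training labels for a segmentation model.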


Simulation

For applications in the fields of »Autonomous Driving« or »Automatic AI-based Analysis of Games« (efficient search and representation of tracking data, e.g. in football, basketball, or ice hockey), commercially available driving and game simulators are used, among other approaches, in addition to real data, which is often hard to obtain. Here, the information (»semantics«) to be predicted by the data analysis is provided automatically by the simulator, thus forming a »measurable ground truth«.

Reference systems

For self-localization, indoor tracking, and navigation applications using low-cost smartphones, high-quality sensors such as precise optical tracking systems or robots are used as reference systems.

Semantic networks and rule-based systems

Expert knowledge about a domain (e.g., the composition of assemblies in automobiles or airplanes) is defined and stored in the form of suitable machine-readable rules and formal relation graphs, which can then be interpreted by a machine.
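As a hypothetical illustration of this idea, assembly knowledge can be stored as (subject, relation, object) triples, with a rule that makes the »part of« relation transitive; the components and the helper `part_of` are invented for this sketch:

```python
# Expert knowledge as machine-readable triples (illustrative example).
facts = {
    ("screw", "part_of", "bracket"),
    ("bracket", "part_of", "door"),
    ("door", "part_of", "car"),
    ("rivet", "part_of", "wing"),
}

def part_of(component, assembly, facts):
    """Rule: part_of is transitive -> infer indirect containment."""
    if (component, "part_of", assembly) in facts:
        return True
    return any(
        part_of(mid, assembly, facts)
        for (c, rel, mid) in facts
        if c == component and rel == "part_of"
    )

print(part_of("screw", "car", facts))   # True, inferred via bracket and door
print(part_of("rivet", "car", facts))   # False
```

A machine can thus answer questions that were never stated explicitly, which is the practical benefit of rule-based knowledge representation.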

The goal of this processing and compilation is a recommendation catalog for capturing the different semantics of different data types; work can then turn to the second key area, making the knowledge usable for various applications.

Our focus areas within AI research

Our work at the ADA Lovelace Center is aimed at developing the following methods and procedures in nine domains of artificial intelligence from an applied perspective.

Automated Learning

Automated Learning covers a broad area, starting with the automation of feature detection and selection for given datasets as well as model search and optimization, continuing with their automated evaluation, and ending with the adaptive adjustment of models through training data and system feedback.
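The core loop of automated model search can be sketched in a few lines: fit several candidate models, score each on a held-out validation split, and keep the best. The toy models and data below are assumptions made for this sketch, not the methods actually used at the Center:

```python
# Minimal sketch of automated model selection (illustrative toy models).
def mean_model(train):
    m = sum(y for _, y in train) / len(train)
    return lambda x: m

def linear_model(train):
    # Least-squares fit of y = a*x + b via the normal equations.
    n = len(train)
    sx = sum(x for x, _ in train); sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train); sxy = sum(x * y for x, y in train)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def select_model(candidates, train, valid):
    def mse(model):
        return sum((model(x) - y) ** 2 for x, y in valid) / len(valid)
    fitted = {name: fit(train) for name, fit in candidates.items()}
    return min(fitted.items(), key=lambda kv: mse(kv[1]))

data = [(x, 2 * x + 1) for x in range(10)]
name, model = select_model(
    {"mean": mean_model, "linear": linear_model}, data[:7], data[7:]
)
print(name)  # "linear" wins on this linear dataset
```

Real AutoML systems replace the toy candidates with full pipelines and search the space of hyperparameters as well, but the evaluate-and-select loop is the same.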



Sequence-based Learning

Sequence-based Learning concerns itself with the temporal and causal relationships found in data in applications such as language processing, event processing, biosequence analysis, or multimedia files. Observed events are used to determine the system’s current status, and to predict future conditions. This is possible both in cases where only the sequence in which the events occurred is known, and when they are labelled with exact time stamps.
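For the case where only the order of events is known, the simplest sequence model is a first-order Markov predictor that counts which event most often follows the current one. The event names below are invented for illustration:

```python
from collections import Counter, defaultdict

def train_transitions(sequence):
    """Count observed event-to-event transitions."""
    counts = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, current):
    """Predict the most frequent successor of the current event."""
    if current not in counts:
        return None
    return counts[current].most_common(1)[0][0]

events = ["pass", "shot", "goal", "kickoff", "pass", "shot", "save",
          "pass", "shot", "goal", "kickoff"]
model = train_transitions(events)
print(predict_next(model, "shot"))  # "goal" (observed twice vs. "save" once)
```

Models with exact time stamps additionally condition on elapsed time, but the principle of predicting future conditions from observed events is the same.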

Learning from Experience

Learning from Experience refers to methods whereby a system is able to optimize itself by interacting with its environment and evaluating the feedback it receives, or dynamically adjusting to changing environmental conditions. Examples include automatic generation of models for evaluation and optimization of business processes, transport flows, or control systems for robots in industrial production.
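The classic instance of learning from environment feedback is reinforcement learning. Below is a minimal tabular Q-learning sketch on an invented toy environment (a corridor of five states with a reward at the right end), not a model of the industrial applications named above:

```python
import random

# Toy corridor: states 0..4, reaching state 4 yields reward 1.
random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left / right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy: explore sometimes, otherwise act greedily
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        target = r + gamma * max(Q[(s2, a2)] for a2 in ACTIONS) \
            if s2 != GOAL else r
        Q[(s, a)] += alpha * (target - Q[(s, a)])  # feedback update
        s = s2

# The greedy policy learned from interaction walks right toward the goal.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
print(policy)
```

The same update rule, scaled up with function approximation, underlies self-optimizing controllers for robots and process optimization.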


Data-centric AI (DCAI) offers a new perspective on AI modeling that shifts the focus from model building to the curation of high-quality annotated training datasets, because in many AI projects, that is where the leverage for model performance lies. DCAI offers methods such as model-based annotation error detection, design of consistent multi-rater annotation systems for efficient data annotation, use of weak and semi-supervised learning methods to exploit unannotated data, and human-in-the-loop approaches to improve models and data.
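One of the named DCAI building blocks, consolidating a multi-rater annotation system, can be sketched as majority voting plus flagging of low-agreement samples as candidate annotation errors. The sample IDs, labels, and the `min_agreement` threshold are assumptions for this illustration:

```python
from collections import Counter

def consolidate(ratings, min_agreement=2/3):
    """Majority-vote labels; flag low-agreement samples for review."""
    consolidated, review_queue = {}, []
    for sample_id, labels in ratings.items():
        label, votes = Counter(labels).most_common(1)[0]
        consolidated[sample_id] = label
        if votes / len(labels) < min_agreement:
            review_queue.append(sample_id)  # likely error or ambiguity
    return consolidated, review_queue

ratings = {
    "img_01": ["tumor", "tumor", "tumor"],
    "img_02": ["tumor", "connective", "inflamed"],  # raters disagree
}
labels, review = consolidate(ratings)
print(labels["img_01"])  # "tumor"
print(review)            # ["img_02"]
```

Routing the flagged samples back to annotators is a simple human-in-the-loop cycle for improving the dataset rather than the model.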


To ensure safe and appropriate adoption of artificial intelligence in fields such as medical decision-making and quality control in manufacturing, it is crucial that the machine learning model is comprehensible to its users. An essential factor in building transparency and trust is to understand the rationale behind the model's decision making and its predictions. The ADA Lovelace Center is conducting research on methods to create comprehensible and trustworthy AI systems in the competence pillar of Trustworthy AI, contributing to human-centered AI for users in business, academia, and society.


Process-aware Learning is the link between process mining, the data-based analysis and modeling of processes, and machine learning. The focus is on predicting process flows, process metrics, and process anomalies. This is made possible by extracting process knowledge from event logs and transferring it into explainable prediction models. In this way, influencing factors can be identified and predictive process improvement options can be defined.
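The first step of extracting process knowledge from event logs is typically a directly-follows analysis: grouping events by case and counting which activity follows which. The log below is an invented example:

```python
from collections import Counter

# Illustrative event log: (case id, activity), ordered within each case.
event_log = [
    ("case1", "receive order"), ("case1", "check stock"), ("case1", "ship"),
    ("case2", "receive order"), ("case2", "check stock"),
    ("case2", "reorder"), ("case2", "ship"),
]

def directly_follows(log):
    """Count directly-follows pairs per process instance (case)."""
    traces = {}
    for case, activity in log:
        traces.setdefault(case, []).append(activity)
    pairs = Counter()
    for trace in traces.values():
        pairs.update(zip(trace, trace[1:]))
    return pairs

df = directly_follows(event_log)
print(df[("receive order", "check stock")])  # 2 (occurs in both cases)
```

These counts form the raw material for process models and for features fed into prediction models, e.g. for forecasting delays or anomalous paths.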

Mathematical optimization plays a crucial role in model-based decision support, providing planning solutions in areas as diverse as logistics, energy systems, mobility, finance, and building infrastructure, to name but a few examples. The Center is expanding its already extensive expertise in a number of promising areas, in particular real-time planning and control.

Tiny Machine Learning (TinyML) brings AI even to microcontrollers. It enables low-latency inference on edge devices that typically consume only a few milliwatts of power. To achieve this, Fraunhofer IIS is conducting research on multi-objective optimization for efficient design space exploration and on advanced compression techniques. Our research also explores hierarchical and informed machine learning, efficient model architectures, and genetic AI pipeline composition, enabling the intelligent products of our partners.
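As one concrete example of the compression techniques mentioned, magnitude pruning zeroes out the smallest weights so a model can be stored sparsely on a microcontroller. This is a generic sketch of the technique, not the specific method researched at Fraunhofer IIS:

```python
def prune_by_magnitude(weights, sparsity=0.5):
    """Zero (at least) the fraction `sparsity` of weights with the
    smallest magnitude; ties at the threshold are also pruned."""
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k - 1] if k else float("-inf")
    return [0.0 if abs(w) <= threshold else w for w in weights]

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02]
pruned = prune_by_magnitude(weights, sparsity=0.5)
print(pruned)  # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

After pruning, only the non-zero weights and their indices need to be stored, trading a small accuracy loss for a large memory saving.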


Hardware-aware Machine Learning (HW-aware ML) focuses on algorithms, methods and tools to design, train and deploy HW-specific ML models. This includes a wide range of techniques to increase energy efficiency and robustness against HW faults, e.g. robust training for quantized DNN models using Quantization- and Fault-aware Training, and optimized mapping and deployment to specialized (e.g. neuromorphic) hardware. At Fraunhofer IIS, we complement this with extensive research in the field of Spiking Neural Network training, optimization, and deployment.
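The quantization step at the heart of Quantization-aware Training can be illustrated with simple symmetric uniform quantization to int8, as used when deploying to integer-only hardware. This is a generic textbook sketch under that assumption, not the training procedure itself:

```python
def quantize_int8(weights):
    """Symmetric uniform quantization of floats to int8."""
    # scale maps the largest magnitude to 127; `or 1.0` guards all-zero input
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.0, 0.4]
q, scale = quantize_int8(w)
print(q)  # [50, -127, 0, 40]
recovered = dequantize(q, scale)  # close to the originals, small rounding error
```

Quantization-aware Training simulates exactly this round trip during training so the model learns weights that remain accurate after the precision loss.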

You may also be interested in

What the ADA Lovelace Center offers you


The ADA Lovelace Center for Analytics, Data and Applications, together with its cooperation partners, offers continuing education programs on concepts, methods, and concrete applications in the field of data analytics and AI.

Seminars with the following focus topics are offered: