Automated Learning

AutoML – Optimal machine learning pipelines with minimal effort

Automated Learning
© Fraunhofer IIS

Machine Learning (ML) and Artificial Intelligence (AI) have been highly popular for several years - both in research and in commercial applications. Groundbreaking successes in a wide range of areas such as computer vision, speech recognition and autonomous driving are having a major impact on our society today.

A major obstacle to the practical application of ML is the considerable effort required to identify the best ML pipeline - the sequence of sub-tasks that together form a self-contained workflow for solving the entire ML task - and to configure it optimally for the respective application. Typically, ML experts first assess a wide variety of methods for suitability based on the available data and the intended task, select the most promising one for the problem at hand, and then design and set up the ML process. Often, a number of different methods and configurations have to be tested before a decision can be made based on the results.
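Conceptually, such a pipeline can be thought of as an ordered sequence of configurable processing steps. The following minimal sketch illustrates the idea with invented step names and a toy "model" - it is not a specific framework's API:

```python
# Minimal sketch of an ML pipeline as an ordered list of processing steps.
# Step names and the toy "model" are illustrative only.

def scale(xs):
    # Feature preprocessing: rescale values to [0, 1].
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]

def threshold_model(xs, cutoff=0.5):
    # Toy "model": classify each value by a threshold.
    return [1 if x >= cutoff else 0 for x in xs]

def run_pipeline(data, steps):
    # Apply each step in order; the output of one step feeds the next.
    for step in steps:
        data = step(data)
    return data

pipeline = [scale, threshold_model]
print(run_pipeline([2.0, 8.0, 5.0, 9.0], pipeline))  # [0, 1, 0, 1]
```

Finding the best pipeline then means choosing both the steps themselves and their configuration - which is exactly the search AutoML automates.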

AutoML attempts to automate exactly this process - i.e., to find the optimal ML pipeline without manual effort. Current AutoML research aims at automating the entire ML process, with the focus on Feature Engineering, Model Selection and Hyperparameter Optimization, as well as continuous adaptation of the ML model.

AutoML can therefore be used to automate the steps above, and it contributes to a democratization of machine learning, since the expert knowledge normally essential for selecting a suitable model is no longer strictly required.

This competence is an integral part of the "DEAL: Data Efficient Automated Learning" project group

AutoML is one of the core competencies of the project group at the Munich site, which was established as part of the ADA Lovelace Center and has already completed a number of industrial projects dealing specifically with AutoML. Often, this involves bridging the gap between the rather abstract research field of "AutoML" and an application in an industrial context that must ultimately generate added value - the classic case of "reality does not match research". For challenges involving unknown classification costs, multimodal data, imbalanced datasets, or AutoML for sensor data, new approaches with AutoML systems tailored specifically to the application can help.

The first step toward successful AutoML is the selection of a suitable search space, i.e., the decision which methods, models, etc. are eligible to be tested. This search space is then screened for an optimal ML pipeline using a suitable optimization method. Model-based optimization (MBO) methods, most prominently Bayesian Optimization, are often a good choice and have been used successfully in past projects, e.g. in the design of an AutoML system for quality assurance in industrial manufacturing. Evolutionary Algorithms are a popular alternative that handles hierarchical and complex search spaces well and, in certain cases, scales better than MBO.
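As a minimal illustration of the idea, the following sketch screens a small, invented search space with plain random search - a common baseline; a real system would plug in Bayesian Optimization or an Evolutionary Algorithm and score configurations by actually training and validating models:

```python
import random

# Minimal sketch: random search over a small AutoML search space.
# The search space and scoring function are illustrative stand-ins;
# a real system would train and cross-validate actual models.

SEARCH_SPACE = {
    "model": ["linear", "tree", "knn"],
    "preprocessing": ["none", "standardize"],
    "regularization": (0.001, 1.0),  # continuous range
}

def sample_config(rng):
    # Draw one pipeline configuration from the search space.
    return {
        "model": rng.choice(SEARCH_SPACE["model"]),
        "preprocessing": rng.choice(SEARCH_SPACE["preprocessing"]),
        "regularization": rng.uniform(*SEARCH_SPACE["regularization"]),
    }

def score(config):
    # Stand-in for the cross-validated performance of the configured pipeline.
    base = {"linear": 0.70, "tree": 0.80, "knn": 0.75}[config["model"]]
    bonus = 0.05 if config["preprocessing"] == "standardize" else 0.0
    penalty = abs(config["regularization"] - 0.1)  # pretend 0.1 is ideal
    return base + bonus - 0.05 * penalty

def random_search(n_trials=200, seed=0):
    rng = random.Random(seed)
    candidates = (sample_config(rng) for _ in range(n_trials))
    return max(candidates, key=score)

best = random_search()
print(best["model"], best["preprocessing"])  # tree standardize
```

MBO improves on this baseline by fitting a surrogate model to the evaluations seen so far and proposing the next configuration where the surrogate expects the largest improvement, rather than sampling blindly.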

Furthermore, AutoML harmonizes perfectly with the other competence pillars of the project group: Explainable Learning and Few-Labels Learning. Few-Labels Learning methods, for example, are also configurable and must be selected differently depending on the application; this selection can be automated with the support of suitable AutoML methods. Explainability, however, is often a weak point of AutoML systems if the optimal model is chosen solely on the basis of performance: the results are then often black-box models that deliver very good results but are no longer interpretable. Multi-objective optimization in the context of AutoML can combine two different metrics for evaluating a model - such as performance and interpretability - in one approach. Another approach is meta-modeling, where a black-box AutoML system is made explainable by a meta-model.
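The core of such a multi-objective approach can be sketched with a simple Pareto-front filter; the candidate models and their scores below are invented for illustration:

```python
# Minimal sketch of multi-objective model selection: keep only models that are
# Pareto-optimal with respect to performance (higher is better) and
# interpretability (higher is better). Candidate names and scores are invented.

candidates = {
    "linear_model":  {"performance": 0.78, "interpretability": 0.95},
    "decision_tree": {"performance": 0.82, "interpretability": 0.80},
    "random_forest": {"performance": 0.88, "interpretability": 0.40},
    "deep_network":  {"performance": 0.90, "interpretability": 0.10},
    "shallow_net":   {"performance": 0.85, "interpretability": 0.15},
}

def dominates(a, b):
    # a dominates b if it is at least as good in both objectives
    # and strictly better in at least one.
    return (a["performance"] >= b["performance"]
            and a["interpretability"] >= b["interpretability"]
            and (a["performance"] > b["performance"]
                 or a["interpretability"] > b["interpretability"]))

def pareto_front(models):
    return sorted(
        name for name, m in models.items()
        if not any(dominates(other, m)
                   for other in models.values() if other is not m)
    )

print(pareto_front(candidates))
# ['decision_tree', 'deep_network', 'linear_model', 'random_forest']
```

Here "shallow_net" is dropped because "random_forest" beats it in both objectives; the remaining models each represent a different performance/interpretability trade-off, and the final choice stays with the user.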

AutoML in Predictive Maintenance: ALONE - Self-learning Adaptive Logistic Networks

AutoML can be used in settings where very similar tasks with slightly different circumstances occur repeatedly. One example is predictive maintenance, or machine health monitoring. Machine Learning can predict the probability of failure of an (expensive) machine or the remaining time until failure, allowing optimized and predictable maintenance at minimal cost. Certain ML models and preprocessing methods are very promising for these kinds of ML tasks, but in practice it is not feasible to manually select an optimal ML model for each kind of machine and environment. AutoML can help here and generate an optimal ML pipeline from all relevant methods and models for each specific application.

More information about the application "Self-optimization in adaptive logistics networks"

AutoML and Meta-Learning: AI Frameworks for Autonomous Systems

The application "AI Framework for Autonomous Systems" focuses on reinforcement learning methods, whose performance often depends strongly on certain hyperparameters. At the same time, reinforcement learning algorithms are computationally very expensive in many cases. Efficient Hyperparameter Tuning for reinforcement learning was therefore investigated in A05; furthermore, research was done on meta-learning for this setting. Meta-learning tries to reuse the information learned from previous tasks for AutoML (or, in this case, hyperparameter tuning) on new tasks. Depending on the application, meta-learning can be relevant for finding an optimal pipeline ("warmstarting") or even help to apply existing fixed architectures to new tasks ("transfer learning"). Meta-learning can also be a promising approach for recurring, similar tasks - such as in the Self-learning Adaptive Logistic Networks application.
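The warmstarting idea can be sketched in a few lines: seed the search on a new task with the best configurations found on previous, similar tasks. The tasks and scoring below are illustrative stand-ins, not the project's actual setup:

```python
import random

# Minimal sketch of "warmstarting": seed the search on a new task with the best
# configurations from previous, similar tasks, then continue with random
# exploration. Evaluation is an invented stand-in for expensive training.

def evaluate(config, task_optimum):
    # Score peaks when the hyperparameter matches the (unknown) task optimum.
    return -abs(config - task_optimum)

def search(task_optimum, warmstart_configs=(), n_random=3, seed=1):
    rng = random.Random(seed)
    configs = list(warmstart_configs) + [rng.uniform(0.0, 1.0)
                                         for _ in range(n_random)]
    return max(configs, key=lambda c: evaluate(c, task_optimum))

# Best configurations from three previous, similar tasks.
previous_best = [0.42, 0.45, 0.40]

# On a new, similar task (optimum 0.43), warmstarting finds a near-optimal
# configuration even though only a few expensive evaluations are affordable.
cold = search(task_optimum=0.43)
warm = search(task_optimum=0.43, warmstart_configs=previous_best)
print(f"cold-start error: {abs(cold - 0.43):.3f}")
print(f"warm-start error: {abs(warm - 0.43):.3f}")
```

Since the warmstarted run evaluates a superset of the cold run's configurations, its result can never be worse - the benefit grows the more similar the new task is to the previous ones.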

AutoML as part of almost every application

In the ADA Lovelace Center, questions from the applications "Intelligent Power Electronics" and "Monitoring and fault diagnosis of industrial wireless systems", which also belong to the research field of Automated Learning, were addressed for the first time. As a result, research was conducted there on the automatic stability assessment of DC networks as well as radio networks using ML methods.

Hyperparameter optimization, feature engineering, model selection etc. are a part of almost every application of Machine Learning. The competences of the pillar AutoML are therefore also used in many other projects beyond the ADA Lovelace Center - for example in the project "Demand Forecast as a Service (dFASSI)".

AutoML for the generation of AI models with minimal energy demand (AutoML ASIC)

Integrating energy demand prediction into multi-criteria AutoML methods allows AI processing chains for embedded hardware to be generated automatically with minimal energy demand. As in many industrial ML applications, two conflicting objectives are relevant for users: models with small energy requirements are often less complex and therefore show weaker performance. A multi-criteria AutoML system therefore offers developers several (Pareto-optimal) solution candidates, so that an optimal trade-off between prediction accuracy (performance) and later energy demand, adapted to their own hardware configuration, can be chosen. In addition to evolutionary algorithms and Bayesian optimization, methods from the field of reinforcement learning (e.g. augmented random search) are also used here.
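As an illustration of the last point, the sketch below implements an ARS-style update on a toy objective; in a real application, the objective would be the reward of a trained model or policy rather than this invented function:

```python
import random

# Minimal sketch of augmented random search (ARS-style) on a toy objective.
# The objective is an invented stand-in; real ARS would evaluate rollouts
# of a policy or a full AI processing chain.

def reward(theta):
    # Toy objective with its maximum at theta = [0.3, -0.2].
    target = [0.3, -0.2]
    return -sum((t - g) ** 2 for t, g in zip(theta, target))

def ars_step(theta, rng, step_size=0.1, noise=0.05, n_directions=8):
    # Probe random directions symmetrically (theta + delta, theta - delta)
    # and move along the reward-weighted average direction.
    update = [0.0] * len(theta)
    for _ in range(n_directions):
        delta = [rng.gauss(0.0, 1.0) for _ in theta]
        r_plus = reward([t + noise * d for t, d in zip(theta, delta)])
        r_minus = reward([t - noise * d for t, d in zip(theta, delta)])
        for i, d in enumerate(delta):
            update[i] += (r_plus - r_minus) * d
    return [t + step_size * u / n_directions
            for t, u in zip(theta, update)]

rng = random.Random(42)
theta = [0.0, 0.0]
for _ in range(200):
    theta = ars_step(theta, rng)
print(theta)  # should be close to [0.3, -0.2]
```

The appeal for AutoML settings is that ARS only needs function evaluations, not gradients, so the same loop works whether the "reward" is prediction accuracy, measured energy demand, or a weighted combination of both.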

Learn more about TinyML


»ADA wants to know« Podcast

In our new podcast series, "ADA wants to know," the people responsible for the competence pillars are in conversation with ADA and provide insight into their research priorities, challenges and methods. In this episode, listen to ADA with Automated Learning expert Florian Karl.

Our focus areas within AI research

Our work at the ADA Lovelace Center is aimed at developing the following methods and procedures in nine domains of artificial intelligence from an applied perspective.

Sequence-based Learning
© Fraunhofer IIS

Sequence-based Learning concerns itself with the temporal and causal relationships found in data in applications such as language processing, event processing, biosequence analysis, or multimedia files. Observed events are used to determine the system’s current status, and to predict future conditions. This is possible both in cases where only the sequence in which the events occurred is known, and when they are labelled with exact time stamps.

Learning from Experience
© Fraunhofer IIS

Learning from Experience refers to methods whereby a system is able to optimize itself by interacting with its environment and evaluating the feedback it receives, or dynamically adjusting to changing environmental conditions. Examples include automatic generation of models for evaluation and optimization of business processes, transport flows, or control systems for robots in industrial production.

© Fraunhofer IIS

Data-centric AI (DCAI) offers a new perspective on AI modeling that shifts the focus from model building to the curation of high-quality annotated training datasets, because in many AI projects, that is where the leverage for model performance lies. DCAI offers methods such as model-based annotation error detection, design of consistent multi-rater annotation systems for efficient data annotation, use of weak and semi-supervised learning methods to exploit unannotated data, and human-in-the-loop approaches to improve models and data.

© Fraunhofer IIS

To ensure safe and appropriate adoption of artificial intelligence in fields such as medical decision-making and quality control in manufacturing, it is crucial that the machine learning model is comprehensible to its users. An essential factor in building transparency and trust is to understand the rationale behind the model's decision making and its predictions. The ADA Lovelace Center is conducting research on methods to create comprehensible and trustworthy AI systems in the competence pillar of Trustworthy AI, contributing to human-centered AI for users in business, academia, and society.

© Fraunhofer IIS

Process-aware Learning is the link between process mining, the data-based analysis and modeling of processes, and machine learning. The focus is on predicting process flows, process metrics, and process anomalies. This is made possible by extracting process knowledge from event logs and transferring it into explainable prediction models. In this way, influencing factors can be identified and predictive process improvement options can be defined.

Mathematical optimization plays a crucial role in model-based decision support, providing planning solutions in areas as diverse as logistics, energy systems, mobility, finance, and building infrastructure, to name but a few examples. The Center is expanding its already extensive expertise in a number of promising areas, in particular real-time planning and control.

© Fraunhofer IIS

The task of semantics is to describe data and data structures in a formally defined, standardized, consistent and unambiguous manner. For the purposes of Industry 4.0, numerous entities (such as sensors, products, machines, or transport systems) must be able to interpret the properties, capabilities or conditions of other entities in the value chain.

Tiny Machine Learning (TinyML) brings AI even to microcontrollers. It enables low-latency inference on edge devices that typically consume only a few milliwatts of power. To achieve this, Fraunhofer IIS is conducting research on multi-objective optimization for efficient design space exploration and on advanced compression techniques. Furthermore, hierarchical and informed machine learning, efficient model architectures, and genetic AI pipeline composition are explored in our research. In this way, we enable our partners' intelligent products.

© Fraunhofer IIS

Hardware-aware Machine Learning (HW-aware ML) focuses on algorithms, methods and tools to design, train and deploy HW-specific ML models. This includes a wide range of techniques to increase energy efficiency and robustness against HW faults, e.g. robust training for quantized DNN models using Quantization- and Fault-aware Training, and optimized mapping and deployment to specialized (e.g. neuromorphic) hardware. At Fraunhofer IIS, we complement this with extensive research in the field of Spiking Neural Network training, optimization, and deployment.
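To make the quantization aspect concrete, the sketch below shows plain uniform 8-bit weight quantization - the kind of transformation quantization-aware training must make a model robust to. The scheme and values are illustrative:

```python
# Minimal sketch of symmetric uniform weight quantization. The scheme and
# weight values are illustrative; deployment toolchains add per-channel
# scales, zero points, and calibration on real data.

def quantize(weights, n_bits=8):
    # Map floating-point weights to signed integers on a uniform grid.
    q_max = 2 ** (n_bits - 1) - 1          # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / q_max
    return [round(w / scale) for w in weights], scale

def dequantize(q_weights, scale):
    # Recover approximate floating-point weights.
    return [q * scale for q in q_weights]

weights = [0.52, -0.13, 0.91, -0.77]
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_error = max(abs(w - r) for w, r in zip(weights, restored))
print(q)          # integer codes within [-127, 127]
print(max_error)  # rounding error is bounded by scale / 2
```

Quantization-aware training simulates exactly this round-trip during training, so the model learns weights that lose little accuracy when the integer grid is imposed at deployment time.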

Other topics of interest




Automated Machine Learning (AutoML) is currently enjoying a lot of attention, as it promises to automate the development and configuration of AI processes. Together with our customer from the industrial manufacturing sector, we have therefore investigated which specific adaptations are useful when deploying AutoML systems in practical enterprise applications.

What the ADA Lovelace Center offers you


The ADA Lovelace Center for Analytics, Data and Applications offers - together with its cooperation partners - continuing education programs around concepts, methods and concrete applications in the topic area of data analytics and AI.

Seminars with the following focus topics are offered:

More information

Learn more about Automated Machine Learning (AutoML) for industrial applications.


Get the book AutoML.


Join the AutoML Online Training to gain competencies regarding the development of Machine Learning applications.

ADA Lovelace Center Blog

The blog offers the opportunity to stay informed - about the project itself, about news from the applications, and about the further development of the methods and the network. For example, in the "ADA wants to know" podcast, our experts talk about the challenges and applications in their respective competence pillars.