Resilient autonomous systems

At Fraunhofer ESK, resilience describes the ability of an autonomous system to continue functioning reliably despite expected or unexpected changes.

Artificial intelligence (AI) validation

Because AI processes are susceptible to sporadic errors, Fraunhofer ESK develops methods for validating artificial intelligence technologies used in applications such as autonomous driving and Industry 4.0.

Safe autonomous driving

In autonomous driving environments, driving decisions influenced by artificial intelligence (AI) technology must be reliable in order to rule out accidents and other dangers. With this in mind, Fraunhofer ESK conducts research into cost-effective fail-operational approaches.

Safe autonomous systems – resilient systems

While new technologies such as artificial intelligence (AI) or 5G open up completely new opportunities for users and companies, they also bring risks and dangers. Autonomous systems rely on AI in fields such as autonomous driving, production and manufacturing, logistics and robotics.

However, AI-based autonomous systems are susceptible to sporadic errors, which so far limits their use to prototypes. Even slight deviations in perception, caused by conditions such as inclement weather (autonomous driving) or faulty hardware (Industry 4.0), can have serious consequences. This makes it all the more important to ensure that safety-critical autonomous systems built on new underlying technologies function reliably at all times. For this reason, Fraunhofer ESK is working on methods for validating AI technologies and autonomous systems, ensuring that applications can be executed safely and that human lives are not put at risk.

To achieve this, Fraunhofer ESK develops approaches that enable the creation of resilient systems. For us, resilience describes the ability of an autonomous system to continue functioning reliably despite expected or unexpected changes. The objective is to be able to switch at any time to a reliable function path in unsafe situations, or when errors occur, without having to shut down the system completely. Inspired by nature, we orient our activities toward artificially replicating the cognitive characteristics that such systems must fulfill:

Dependability awareness

Autonomous systems must be able to recognize their own condition and state, as well as their environment, and must also be in a position to evaluate their own reliability. This especially applies when decisions are made by machine learning processes such as artificial neural networks, because these decisions cannot be traced. For this reason, Fraunhofer ESK is working on monitoring approaches that give systems the capability to determine early enough where problems are occurring: whether in the interaction behavior of complex connected systems, in quality of service and dependability, or in AI-based perception. We are also working on methods for adequately validating perception monitoring, such as the intelligent cross-validation of existing internal and external sensors.
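As a rough illustration of such cross-validation, the following sketch compares two redundant distance estimates (e.g. one camera-based, one lidar-based) and raises an alarm after repeated disagreement. The class names, tolerance and fault threshold are illustrative assumptions, not Fraunhofer ESK's actual method.

```python
# Hypothetical sketch: cross-validating two redundant sensor estimates.
# Thresholds and names are illustrative assumptions only.

def cross_validate(camera_dist: float, lidar_dist: float,
                   rel_tolerance: float = 0.15) -> bool:
    """Return True if the two estimates agree within a relative tolerance."""
    reference = max(abs(camera_dist), abs(lidar_dist), 1e-6)
    return abs(camera_dist - lidar_dist) / reference <= rel_tolerance

class PerceptionMonitor:
    """Counts consecutive disagreements and flags a dependability problem early."""

    def __init__(self, max_consecutive_faults: int = 3):
        self.max_consecutive_faults = max_consecutive_faults
        self.fault_count = 0

    def update(self, camera_dist: float, lidar_dist: float) -> str:
        if cross_validate(camera_dist, lidar_dist):
            self.fault_count = 0
            return "ok"
        self.fault_count += 1
        if self.fault_count >= self.max_consecutive_faults:
            return "alarm"  # trigger a switch to a safe fallback path
        return "suspect"
```

Requiring several consecutive disagreements before alarming avoids overreacting to a single noisy reading while still detecting a persistent perception fault early.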

Dependable self-adaptiveness

The ability to recognize or predict critical situations is the first important step in avoiding accidents. For autonomous systems, however, it is even more important that they be able to adapt to their environment and their actual condition or state. Because autonomous systems must continue to function safely rather than simply shut down, researchers at Fraunhofer ESK are working on, among other things, cost-effective fail-operational concepts.
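The core of a fail-operational concept can be sketched as a controller that routes execution from a primary function path to a hot-standby fallback when a fault is reported, so the system keeps operating instead of shutting down. All names below are hypothetical illustrations, not an actual ESK architecture.

```python
# Hypothetical sketch of fail-operational switchover between function paths.

class FunctionPath:
    """A function path with a health flag set by external monitoring."""

    def __init__(self, name, compute, healthy=True):
        self.name = name
        self.compute = compute  # the actual function of this path
        self.healthy = healthy

class FailOperationalController:
    """Routes each control step to the primary path, or to the fallback
    as soon as the primary is reported unhealthy."""

    def __init__(self, primary: FunctionPath, fallback: FunctionPath):
        self.primary = primary
        self.fallback = fallback

    def step(self, sensor_input):
        path = self.primary if self.primary.healthy else self.fallback
        return path.name, path.compute(sensor_input)
```

The switchover happens per step, so a fault detected by a monitor (such as the perception monitor above) takes effect at the very next control cycle without a restart.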

The system relies on dynamic processes to adapt itself at runtime to the current situation and dangers in order to ensure safe, reliable behavior tailored to the specific situation, even during an outage. If full functionality cannot be maintained through adaptation, the system reduces its functional scope and quality step by step via graceful degradation, providing the highest possible level of performance without compromising functional safety. By connecting and incorporating edge, fog or cloud systems, such systems can also be enhanced and expanded with external functions, which we describe as a graceful upgrade with adaptive end-to-end architectures.
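The step-by-step degradation (and its counterpart, the graceful upgrade) can be sketched as movement along an ordered ladder of capability levels. The level names here are illustrative assumptions for an automotive example, not ESK's actual scheme.

```python
# Hypothetical sketch of graceful degradation and graceful upgrade:
# the system steps through ordered capability levels instead of shutting down.
# Level names are illustrative assumptions only.

DEGRADATION_LEVELS = [
    "full_autonomy",      # all sensors and compute resources available
    "reduced_speed",      # degraded perception -> lower speed limit
    "lane_keeping_only",  # minimal assistance functions remain active
    "safe_stop",          # bring the vehicle to a controlled halt
]

class GracefulDegrader:
    def __init__(self):
        self.level = 0  # index into DEGRADATION_LEVELS

    def degrade(self) -> str:
        """Step down one level; never step past the safest fallback."""
        self.level = min(self.level + 1, len(DEGRADATION_LEVELS) - 1)
        return DEGRADATION_LEVELS[self.level]

    def upgrade(self) -> str:
        """Graceful upgrade: restore one level when resources return,
        e.g. when an external edge or cloud function becomes available."""
        self.level = max(self.level - 1, 0)
        return DEGRADATION_LEVELS[self.level]
```

Bounding the ladder at both ends ensures the system neither "degrades" into an undefined state nor "upgrades" beyond its certified full functionality.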

With all of these processes, special emphasis is placed on ensuring that the process in question yields verifiable proof of safety that complies with standards such as ISO 26262 or IEC 61508.


Publications

2018 – Weiß, Gereon; Schleiß, Philipp; Schneider, Daniel; Trapp, Mario: Towards integrating undependable self-adaptive systems in safety-critical environments
2018 – Manderscheid, Martin; Weiß, Gereon; Knorr, Rudi: Verification of network end-to-end latencies for adaptive Ethernet-based cyber-physical systems
2017 – Schleiß, Philipp; Drabek, Christian; Weiß, Gereon; Bauer, Bernhard: Generic management of availability in fail-operational automotive systems
2017 – Weiß, Gereon; Schleiß, Philipp; Drabek, Christian; Ruiz, Alejandra; Radermacher, Ansgar: Safe adaptation for reliable and energy-efficient E/E architectures
2016 – Weiß, Gereon; Schleiß, Philipp; Drabek, Christian: Towards flexible and dependable E/E-architectures for future vehicles
2015 – Penha, Dulcineia; Weiß, Gereon; Stante, Alexander: Pattern-based approach for designing fail-operational safety-critical embedded systems
2013 – Weiß, Gereon; Grigoleit, Florian; Struss, Peter: Context modeling for dynamic configuration of automotive functions
2013 – Zeller, Marc; Prehofer, Christian; Krefft, Daniel; Weiß, Gereon: Towards runtime adaptation in AUTOSAR