While new technologies such as artificial intelligence (AI) or 5G open up completely new opportunities for users and companies, they also bring risks and dangers. Autonomous systems rely on AI in domains such as autonomous driving, production and manufacturing, logistics and robotics.
However, AI-based autonomous systems are susceptible to sporadic errors, which has so far largely limited their use to prototypes. Even slight deviations in perception, caused by conditions such as inclement weather (autonomous driving) or faulty hardware (Industry 4.0), can have serious consequences. This makes it all the more important to ensure that safety-critical autonomous systems built on these new technologies function reliably at all times. For this reason, Fraunhofer ESK is working on methods for validating AI technologies and autonomous systems, ensuring that applications can be executed safely and that human lives are not put at risk.
To achieve this, Fraunhofer ESK develops approaches that enable the creation of resilient systems. For us, resilience describes an autonomous system that continues to function reliably despite expected or unexpected changes. The objective is to be able to switch at any time to a reliable function path in unsafe situations, or when errors occur, without having to completely shut down the system. Inspired by nature, we orient our activities toward the artificial replication of cognitive capabilities that these systems must exhibit:
Autonomous systems must be able to recognize their own condition and state, as well as their environment, and also be in a position to evaluate their own reliability. This especially applies when decisions are made through machine learning processes such as artificial neural networks, because such decisions are difficult to trace. For this reason, Fraunhofer ESK is working on monitoring approaches that give systems the capability to determine early enough where problems are occurring, whether in the interaction behavior of complex connected systems, in quality of service and dependability, or in perception by means of AI. We are also working on methods for adequately validating perception monitoring, such as through the intelligent cross-validation of existing internal and external sensors.
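The idea of cross-validating sensors can be illustrated with a minimal sketch. The sensor names, tolerance value, and consensus rule (a median vote) below are illustrative assumptions, not Fraunhofer ESK's actual method: redundant sensors observing the same quantity are compared against a consensus, and any sensor that deviates strongly is flagged as unreliable.

```python
from statistics import median

def cross_validate(readings, tolerance):
    """Flag sensors whose reading deviates from the consensus (median)
    by more than `tolerance`. Returns (consensus, suspect sensor names)."""
    consensus = median(readings.values())
    suspects = [name for name, value in readings.items()
                if abs(value - consensus) > tolerance]
    return consensus, suspects

# Hypothetical example: three sensors estimating distance to the same
# obstacle (in metres). The camera's perception deviates strongly, e.g.
# because of inclement weather, so it should not be trusted on its own.
readings = {"lidar": 12.1, "radar": 11.9, "camera": 4.3}
consensus, suspects = cross_validate(readings, tolerance=1.0)
# suspects -> ["camera"]
```

In a real system the consensus function, tolerances, and the set of internal and external sensors would be far more sophisticated; the point is only that self-assessment of perception reliability can be built from redundancy plus a plausibility check.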
Dependable self-adaptiveness
The ability to recognize or predict critical situations is the first important step in avoiding accidents. In autonomous systems, however, it is even more important that the systems be able to adapt to their environment and their actual condition or state. Because autonomous systems must continue to function safely rather than simply shut down, researchers at Fraunhofer ESK are working on, among other things, cost-effective fail-operational concepts.
The system relies on dynamic processes to adapt itself at runtime to the current situation and dangers in order to ensure safe and reliable behavior tailored to the specific situation, even during a partial outage. If full functionality cannot be restored through adaptation, the system reduces its functional scope and functional quality step by step via graceful degradation, providing the highest possible level of performance without compromising functional safety. By connecting and incorporating edge, fog or cloud systems, such systems can also be enhanced and expanded with external functions, which we describe as a graceful upgrade with adaptive end-to-end architectures.
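The step-by-step reduction of functional scope can be sketched as a degradation ladder. The level names and sensor requirements below are hypothetical placeholders (the source does not specify them): at runtime the system selects the highest functional level whose requirements are still met, falling back to a minimal safe mode instead of shutting down entirely.

```python
# Hypothetical degradation ladder: each level trades functional scope and
# quality for lower requirements, ordered from full function downward.
LEVELS = [
    ("full_autonomy",  {"camera", "lidar", "radar"}),
    ("reduced_speed",  {"lidar", "radar"}),
    ("lane_keep_only", {"radar"}),
    ("safe_stop",      set()),  # minimal fallback, always available
]

def select_level(healthy_sensors):
    """Graceful degradation: pick the highest level whose sensor
    requirements are met, rather than shutting the system down."""
    for name, required in LEVELS:
        if required <= healthy_sensors:
            return name
    return "safe_stop"

# Camera fails -> degrade one step; lidar also fails -> degrade further.
print(select_level({"lidar", "radar"}))  # reduced_speed
print(select_level({"radar"}))           # lane_keep_only
```

A graceful upgrade would work in the opposite direction along the same ladder, e.g. re-enabling a higher level when an external edge or cloud function temporarily substitutes for a failed local one.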
With all of these processes, special emphasis is placed on ensuring that the process in question yields verifiable proof of safety that complies with standards such as ISO 26262 or IEC 61508.