With our approaches to validating artificial intelligence (AI) in the area of perception, you can reliably verify and validate your black-box or grey-box detection processes, such as artificial neural networks.
Use the methods and algorithms developed by Fraunhofer ESK to evaluate and, if needed, correct artificial intelligence decisions, protecting both human life and your reputation.
Possible application scenario: autonomous driving
One potential field of application for the Fraunhofer ESK methods is autonomous driving. Without extensive opportunities to validate its functionality, using artificial intelligence (AI) in a vehicle represents a risk, because the behavior of the system cannot be precisely predicted. Fraunhofer ESK's validation services for AI technology are thus designed to help ensure safe and reliable autonomous driving.
The key challenges of using artificial intelligence technology in autonomous driving platforms are:
- AI = black box.
This means that the decision-making process cannot be comprehended by humans. In addition, explainable AI approaches of sufficient quality that could make AI decisions understandable are not yet foreseeable. This is why artificial intelligence technology cannot be validated using conventional, established methods.
- AI ≠ deterministic.
This means that minimal changes to the input data can have a significant impact on the results. It is thus not possible to formally prove that an artificial intelligence process will make a reliable decision in every conceivable situation.
- AI based on sensor data.
If sensor data changes only slightly, for example due to dirt and grime, unfavorable weather conditions during autonomous driving, or unknown situations, this can have a major impact on the ability of the AI to perceive its environment and thus to make decisions.
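The sensitivity described above can be sketched with a toy example. The snippet below uses an invented linear score as a stand-in for a perception network's decision function (the weights, input, and perturbation size are all illustrative assumptions, not Fraunhofer ESK's method): a per-element input change of only 0.05 is enough to flip the detector's decision.

```python
import numpy as np

# Toy stand-in for a perception network: a linear score w·x compared
# against a threshold. All values here are invented for illustration.
w = np.linspace(-1.0, 1.0, 1000)   # hypothetical detector weights
x = 0.002 * np.sign(w)             # "clean" sensor input: small positive score

def detect(inp, weights=w, threshold=0.0):
    """Return True if the toy detector fires on the given input."""
    return float(weights @ inp) > threshold

# A small worst-case perturbation (analogous to dirt on a sensor or an
# adversarial change): nudge each input element slightly in the
# direction that lowers the score.
eps = 0.05
x_perturbed = x - eps * np.sign(w)

print(detect(x))            # True  — clean input is detected
print(detect(x_perturbed))  # False — a tiny change flips the decision
```

Real perception networks are far larger and nonlinear, but they exhibit the same qualitative effect, which is why exhaustive testing alone cannot guarantee reliable behavior.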