ARAS - Assured Reasoning for Autonomous Cyber-Physical Systems

Status: active

Start date: 2024-11-15

End date: 2025-12-14

The project “Assured Reasoning for AI-enabled Autonomous Cyber-Physical Systems (ARAS)” focuses on making AI-powered systems more reliable. As these systems are increasingly used in critical areas, ensuring they behave as expected is essential to avoid catastrophic failures.

AI components, especially deep learning models, can make undesirable (wrong) decisions, which makes it hard to fully trust them in real-world applications. Current efforts to improve the robustness of AI systems, such as uncertainty estimation, robust optimization, and adversarial training, have made progress but fall short of guaranteeing reliable and predictable behavior in practice. In this project we aim to provide a solution based on Neurosymbolic AI (NSAI), which combines the strengths of data-driven learning with structured, rule-based symbolic reasoning. This synergy allows autonomous systems to process data efficiently and make decisions more reliably. A key part of the approach is the Neurosymbolic Digital Twin, a lightweight reference model of the system that is used at run time to continuously monitor its behavior. The twin helps the system make better decisions by accounting for different types of uncertainty, taking context into account, and being continuously updated with new data.
By focusing on runtime assurance, the project aims to create AI systems that can operate reliably under uncertainty. Although the focus is on autonomous CPS, the approach can be applied to other domains as well.
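
To make the runtime-assurance idea concrete, the sketch below shows one way a monitor could combine a symbolic safety rule with a lightweight reference model of the system dynamics. It is a minimal illustration only: the class names, state fields, thresholds, and toy dynamics are assumptions for the example and are not taken from the ARAS project.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SystemState:
    """Observed state of the cyber-physical system (illustrative fields)."""
    speed: float              # m/s
    obstacle_distance: float  # m


@dataclass
class Rule:
    """A symbolic rule: a named predicate over a system state."""
    name: str
    holds: Callable[[SystemState], bool]


class NeurosymbolicMonitor:
    """Run-time monitor that checks a neural component's proposed action
    against symbolic rules, using a lightweight reference model to look
    one step ahead (a stand-in for a digital-twin-style model)."""

    def __init__(self, rules: List[Rule]):
        self.rules = rules

    def predict_next(self, state: SystemState, action: str) -> SystemState:
        # Placeholder dynamics; a real reference model would be richer and
        # continuously updated from new data.
        if action == "accelerate":
            return SystemState(state.speed + 1.0,
                               state.obstacle_distance - state.speed)
        return SystemState(max(0.0, state.speed - 1.0),
                           state.obstacle_distance - state.speed)

    def check(self, state: SystemState, proposed_action: str) -> List[str]:
        """Return names of rules the proposed action would violate.
        An empty list means the neural decision is accepted; otherwise a
        fallback (e.g. a safe-stop maneuver) should be triggered."""
        next_state = self.predict_next(state, proposed_action)
        return [r.name for r in self.rules if not r.holds(next_state)]


# Example: require a minimum clearance to obstacles.
rules = [Rule("min_obstacle_distance", lambda s: s.obstacle_distance > 5.0)]
monitor = NeurosymbolicMonitor(rules)
violations = monitor.check(SystemState(speed=10.0, obstacle_distance=12.0),
                           "accelerate")
print(violations)  # ['min_obstacle_distance'] -> override the neural decision
```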

Kristina Lundqvist, Professor, Chairman of the MDU recruitment committee

Email: kristina.lundqvist@mdu.se
Room: U1-066B
Phone: 021-101428