ENHANCING EXPLAINABILITY, ROBUSTNESS, AND AUTONOMY: A COMPREHENSIVE APPROACH IN TRUSTWORTHY AI
Publication Type:
Conference/Workshop Paper
Venue:
IEEE Symposium on Explainable, Responsible, and Trustworthy CI
Abstract
Recent advancements in AI, especially generative AI (gAI), are accelerating industrial digitalization, with the market projected to grow significantly by 2030. However, challenges such as the black-box nature of AI decisions, biased data, and AI-generated hallucinations continue to hinder industrial trust. AI also requires better adaptability to dynamic environments and stronger accountability mechanisms. To address these challenges, this paper proposes an adaptive gAI-based multi-agent framework that enables collaboration between human actors and multiple AI agents, i.e., ExplainAgent, AuditAgent, RobustAgent, and AutoAgent, each tailored to mirror and provide specialised support for a particular aspect of trustworthy AI. Each agent is clearly defined and specialised through the customisation of three modules: 1) Communication and Cooperation, 2) Ensure Trust, and 3) Execute and Evaluate Decisions. The framework focuses on improving explainability, fairness, and robustness while fostering human-AI collaboration, with the aim of advancing trustworthy AI methods, tools, and best practices leveraging AI and related technologies.
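The agent roles and three modules named in the abstract could be organised roughly as follows. This is a minimal illustrative sketch only: the class layout, method names (`communicate`, `ensure_trust`, `execute`), and the `run_pipeline` orchestrator are assumptions for illustration and are not taken from the paper.

```python
# Hypothetical sketch of the four specialised agents from the abstract.
# Each agent customises three modules: 1) Communication and Cooperation,
# 2) Ensure Trust, 3) Execute and Evaluate Decisions.

class TrustAgent:
    """Base agent; subclasses specialise it for one aspect of trustworthy AI."""
    name = "TrustAgent"

    def communicate(self, message: str) -> str:
        # Module 1: Communication and Cooperation (placeholder exchange).
        return f"{self.name} received: {message}"

    def ensure_trust(self, decision: str) -> dict:
        # Module 2: Ensure Trust (placeholder verdict; a real agent would
        # run explainability, audit, or robustness checks here).
        return {"agent": self.name, "decision": decision, "trusted": True}

    def execute(self, decision: str) -> str:
        # Module 3: Execute and Evaluate Decisions.
        return f"{self.name} executed: {decision}"


class ExplainAgent(TrustAgent):
    name = "ExplainAgent"   # explainability support

class AuditAgent(TrustAgent):
    name = "AuditAgent"     # accountability / fairness auditing

class RobustAgent(TrustAgent):
    name = "RobustAgent"    # robustness in dynamic environments

class AutoAgent(TrustAgent):
    name = "AutoAgent"      # autonomy / adaptation


def run_pipeline(decision: str) -> list[dict]:
    # Assumed orchestrator: a human actor would sit in this loop in the
    # framework; here we simply fan the decision out to all four agents
    # and collect their trust verdicts.
    agents = [ExplainAgent(), AuditAgent(), RobustAgent(), AutoAgent()]
    return [a.ensure_trust(a.communicate(decision)) for a in agents]
```

A usage call such as `run_pipeline("approve maintenance action")` would return one verdict per agent; in the paper's framework each verdict would instead reflect that agent's specialised analysis.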
Bibtex
@inproceedings{Ahmed7108,
author = {Mobyen Uddin Ahmed and Shahina Begum and Shaibal Barua and Abu Naser Masud and Gianluca Di Flumeri and Nicol{\`o} Navarin},
title = {ENHANCING EXPLAINABILITY, ROBUSTNESS, AND AUTONOMY: A COMPREHENSIVE APPROACH IN TRUSTWORTHY AI},
month = {May},
year = {2025},
booktitle = {IEEE Symposium on Explainable, Responsible, and Trustworthy CI},
url = {http://www.es.mdu.se/publications/7108-}
}