Artificial Intelligence (AI) has recently improved by leaps and bounds and is now present in virtually every application domain. This is also the case in air transportation, where decision making increasingly relies on AI and, in particular, on Machine Learning (ML). While these algorithms are meant to help users in their daily tasks, they still face acceptability issues. Users are doubtful about, or even opposed to, the proposed decisions, since decisions produced by AI are most often opaque, non-intuitive, and not understandable to a human. Unlike a natural discussion between two people, machines typically provide information without any opportunity to justify it. In other words, today's automated systems based on AI or ML provide no information beyond the data-processing result itself to support its explanation, which makes them insufficiently transparent. Moreover, when AI is applied in a high-risk context such as Air Traffic Management (ATM), each individual decision generated by the AI model must be trusted by the human operators, and understanding the model's behaviour and explaining its results is a necessary condition for that trust. To address these limitations, the ARTIMATION project investigates the applicability of methods from the field of Explainable Artificial Intelligence (XAI). In the project, we will investigate specific features that make AI models transparent and post hoc interpretable (i.e., supporting decision understanding) for users in the domain of ATM systems.
Examining Decision-Making in Air Traffic Control: Enhancing Transparency and Decision Support Through Machine Learning, Explanation, and Visualization: A Case Study (Mar 2024). Christophe Hurter, Augustin Degas, Arnaud Guibert, Maelan Poyer, Nicolas Durand, Alexandre Veyrie, Ana Ferreira, Stefano Bonelli, Mobyen Uddin Ahmed, Waleed Reafee Sbu Jmoona, Shaibal Barua, Shahina Begum, Giulia Cartocci, Gianluca Di Flumeri, Gianluca Borghini, Fabio Babiloni, Pietro Aricò. 16th International Conference on Agents and Artificial Intelligence (ICAART24).

iXGB: Improving the Interpretability of XGBoost using Decision Rules and Counterfactuals (Mar 2024). Mir Riyanul Islam, Mobyen Uddin Ahmed, Shahina Begum. 16th International Conference on Agents and Artificial Intelligence (ICAART24).

Explaining the Unexplainable: Role of XAI for Flight Take-Off Time Delay Prediction (Jun 2023). Waleed Reafee Sbu Jmoona, Mobyen Uddin Ahmed, Mir Riyanul Islam, Shaibal Barua, Shahina Begum, Ana Ferreira, Nicola Cavagnetto. 19th International Conference on Artificial Intelligence Applications and Innovations (AIAI2023).

Usage of more Transparent and Explainable Conflict Resolution Algorithm: Air Traffic Controller Feedback (Oct 2022). Christophe Hurter, Augustin Degas, Arnaud Guibert, Nicolas Durand, Mir Riyanul Islam, Shaibal Barua, Mobyen Uddin Ahmed, Shahina Begum, Stefano Bonelli, Giulia Cartocci, Gianluca Di Flumeri, Gianluca Borghini, Pietro Aricò, Fabio Babiloni. Conference of the European Association for Aviation Psychology (EAAP2022).

When a CBR in Hand is Better than Twins in the Bush (Sep 2022). Mobyen Uddin Ahmed, Shaibal Barua, Shahina Begum, Mir Riyanul Islam, Rosina O. Weber. Fourth Workshop on XCBR: Case-Based Reasoning for the Explanation of Intelligent Systems (XCBR-ICCBR2022).

Transparent Artificial Intelligence and Automation to Air Traffic Management Systems: Conflict Detection and Resolution (Aug 2022). Christophe Hurter, Augustin Degas, Mir Riyanul Islam, Shaibal Barua, Hamidur Rahman, Minesh Poudel, Daniele Ruscio, Mobyen Uddin Ahmed, Shahina Begum, Md Aquif Rahman, Stefano Bonelli, Giulia Cartocci, Gianluca Di Flumeri, Gianluca Borghini, Pietro Aricò, Fabio Babiloni. International Conference on Cognitive Aircraft Systems (ICCAS2022).