You are required to read and agree to the following before accessing a full-text version of an article in the IDE article repository.

The full-text document you are about to access is subject to national and international copyright laws. In most cases (but not necessarily all), the consequence is that personal use is allowed, provided that the copyright owner is duly acknowledged and respected. All other use typically requires explicit permission (often in writing) from the copyright owner.

For the reports in this repository, we specifically note that:

  • the use of articles under IEEE copyright is governed by the IEEE copyright policy (available at http://www.ieee.org/web/publications/rights/copyrightpolicy.html)
  • the use of articles under ACM copyright is governed by the ACM copyright policy (available at http://www.acm.org/pubs/copyright_policy/)
  • technical reports and other articles issued by Mälardalen University are free for personal use. For other use, the explicit consent of the authors is required
  • in other cases, please contact the copyright owner for detailed information

By accepting, I agree to acknowledge and respect the rights of the copyright owner of the document I am about to access.

If you are in doubt, feel free to contact webmaster@ide.mdh.se

Towards Sustainable AI Development: A Focus on Transparency and Explainability

Authors:

Shahina Begum, Mobyen Uddin Ahmed, Mosarrat Farhana

Publication Type:

Conference/Workshop Paper

Venue:

2nd International Conference on Creativity, Technology, and Sustainability


Abstract

Artificial Intelligence has revolutionised industries, enhancing productivity and efficiency across different sectors. However, its deployment in life-critical domains, such as road and air traffic safety, demands trustworthiness in the sustainable development of AI technology. The challenge is to develop AI solutions with long-term sustainability in mind, ensuring they remain relevant and reliable for users. This paper highlights the role of eXplainable AI (XAI) and transparency in ensuring trustworthiness and fostering sustainability in AI solutions. XAI techniques, including interpretable models and post-hoc explanations, are particularly vital in domains where AI decisions have significant implications, such as predicting machinery failures in manufacturing. Despite its potential, achieving explainability remains challenging due to trade-offs between model complexity and interpretability, the absence of universal standards, and diverse societal expectations. The paper outlines key steps to ensure explainability and transparency in reliable AI development, supported by relevant use cases.

Bibtex

@inproceedings{Begum7155,
author = {Shahina Begum and Mobyen Uddin Ahmed and Mosarrat Farhana},
title = {Towards Sustainable AI Development: A Focus on Transparency and Explainability},
month = {July},
year = {2025},
booktitle = {2nd International Conference on Creativity, Technology, and Sustainability},
url = {http://www.es.mdu.se/publications/7155-}
}