You are required to read and agree to the below before accessing a full-text version of an article in the IDE article repository.

The full-text document you are about to access is subject to national and international copyright laws. In most cases (but not necessarily all), personal use is allowed, provided that the copyright owner is duly acknowledged and respected. All other use typically requires explicit permission (often in writing) from the copyright owner.

For the reports in this repository we specifically note that

  • the use of articles under IEEE copyright is governed by the IEEE copyright policy (available at http://www.ieee.org/web/publications/rights/copyrightpolicy.html)
  • the use of articles under ACM copyright is governed by the ACM copyright policy (available at http://www.acm.org/pubs/copyright_policy/)
  • technical reports and other articles issued by Mälardalen University are free for personal use. For other use, the explicit consent of the authors is required
  • in other cases, please contact the copyright owner for detailed information

By accepting I agree to acknowledge and respect the rights of the copyright owner of the document I am about to access.

If you are in doubt, feel free to contact webmaster@ide.mdh.se

An End-to-End Explainable Fault Prediction Pipeline for Embedded Test Systems

Publication Type:

Conference/Workshop Paper

Venue:

28th International Conference on Computer and Information Technology


Abstract

This work presents an explainable AI framework for fault prediction in Embedded Test Systems, combining classification and trend prediction. Random Forest (RF) and Gradient Boosting (GB) are used as classification baselines, and the pipeline is extended with sliding-window sequence modelling using an LSTM. On the data set, RF achieved 99.84% accuracy and 1.00 ROC-AUC, while Gradient Boosting achieved 98.59% accuracy and 0.96 ROC-AUC. The LSTM forecasts the next-timestep measurement and supports control-chart monitoring, yielding low errors for passed instances (MAE ≈ 0.007; RMSE ≈ 0.008) and higher errors for failed ones (MAE ≈ 1.158; RMSE ≈ 1.454), effectively flagging unstable behaviour. To enhance interpretability, SHAP and LIME explanations are computed and deployed in a Django-based web application for data upload, prediction, and visualisation. Additionally, a lightweight Large Language Model (LLM) generates natural-language rationales, helping engineers understand which features and time segments drive each decision. The results confirm that tree-based models provide robust baselines, while sequence-aware modelling and explainable AI add practical value for monitoring signals in production.
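
The abstract describes a sliding-window LSTM that forecasts the next measurement and feeds a control-chart style residual check. The authors' implementation is not reproduced here; the code below is only a minimal Python sketch of that idea, in which the window length, network size, training setup, and the mean + 3*std residual rule are illustrative assumptions rather than details taken from the paper.

# Minimal sketch (not the authors' code) of sliding-window next-step
# forecasting with an LSTM and a simple control-chart residual flag.
# Window length, hidden size, and the k-sigma rule are assumptions.
import numpy as np
import torch
import torch.nn as nn

def make_windows(series: np.ndarray, window: int = 20):
    """Split a 1-D measurement series into (window, next-value) pairs."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.asarray(X, dtype=np.float32), np.asarray(y, dtype=np.float32)

class NextStepLSTM(nn.Module):
    """LSTM that maps a window of past measurements to the next value."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :]).squeeze(-1)

def control_chart_flags(actual, predicted, k: float = 3.0):
    """Flag timesteps whose absolute residual exceeds mean + k*std."""
    resid = np.abs(actual - predicted)
    limit = resid.mean() + k * resid.std()
    return resid > limit

if __name__ == "__main__":
    # Synthetic signal standing in for a test-system measurement channel.
    signal = np.sin(np.linspace(0, 20, 500)) + 0.01 * np.random.randn(500)
    X, y = make_windows(signal)
    Xt = torch.from_numpy(X).unsqueeze(-1)     # (N, window, 1)
    yt = torch.from_numpy(y)

    model = NextStepLSTM()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(50):                        # short training loop for the sketch
        opt.zero_grad()
        loss = loss_fn(model(Xt), yt)
        loss.backward()
        opt.step()

    preds = model(Xt).detach().numpy()
    mae = np.mean(np.abs(preds - y))
    rmse = np.sqrt(np.mean((preds - y) ** 2))
    flagged = control_chart_flags(y, preds).sum()
    print(f"MAE={mae:.4f}  RMSE={rmse:.4f}  flagged timesteps={flagged}")

In this sketch, stable (passed) signals yield small residuals and few flags, while drifting or erratic signals push residuals past the control limit, which mirrors the passed-versus-failed error gap reported in the abstract.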

Bibtex

@inproceedings{Bhuiyan7316,
author = {Md Motaher Hossain Bhuiyan and Shaibal Barua and Mobyen Uddin Ahmed and Shahina Begum},
title = {An End-to-End Explainable Fault Prediction Pipeline for Embedded Test Systems},
month = {July},
year = {2026},
booktitle = {28th International Conference on Computer and Information Technology},
url = {http://www.es.mdu.se/publications/7316-}
}