Black Box to Trust Box: Building Confidence in AI-Powered Diagnosis Through Explainable Models for COVID-19

A, Anix Mary Javitha and Z, Mary Livinsa (2025) Black Box to Trust Box: Building Confidence in AI-Powered Diagnosis Through Explainable Models for COVID-19. In: 2025 7th International Conference on Intelligent Sustainable Systems (ICISS), India.

Full text not available from this repository.

Abstract

Artificial intelligence (AI) has transformed medical diagnostics, but its opaque nature raises concerns about trust. This research focuses on explainable AI models for diagnosing COVID-19 from chest X-ray images. Deep learning models trained on labelled X-ray datasets are used for the classification task. The Gradient-weighted Class Activation Mapping (Grad-CAM) method is applied to generate visual explanations by highlighting the image regions relevant to each prediction. Additionally, the Local Interpretable Model-Agnostic Explanations (LIME) technique is used to provide local explanations for individual predictions. These interpretability techniques aim to improve the understanding of AI decision-making. The proposed model achieved a diagnostic accuracy of 94.29%, with a precision of 94% and a recall of 93%. Grad-CAM offers insight into the reasoning behind each prediction, while LIME complements this by identifying the individual features that contribute to the diagnosis. Together, these approaches help bridge the gap between model performance and human trust. Explainable AI increases transparency and enhances model reliability, allowing healthcare professionals to make informed decisions by understanding the AI's reasoning process. This study shows that combining deep learning with interpretability techniques can build trust in AI systems, and the models can be adopted in clinical settings to improve diagnostic workflows. Future research will focus on improving the explainability of AI models; incorporating multimodal data, such as CT scans and clinical records, could further enhance diagnostic accuracy. Ultimately, explainable AI has the potential to revolutionize healthcare by providing more transparent, trustworthy, and effective solutions for clinical decision-making.
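The two interpretability steps the abstract describes can be sketched compactly. This is a minimal, framework-free sketch, not the authors' implementation: `grad_cam` assumes the last-convolutional-layer activations and the gradients of the target class score with respect to them have already been extracted from the trained network (e.g. via framework hooks), and `lime_explain` is a LIME-style local linear surrogate over feature masks; the array shapes, kernel width, and the toy demo at the bottom are illustrative assumptions.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from last-conv activations/gradients, each (C, H, W)."""
    weights = gradients.mean(axis=(1, 2))              # alpha_k: pooled gradients
    cam = np.tensordot(weights, activations, axes=1)   # weighted sum over channels
    cam = np.maximum(cam, 0.0)                         # ReLU keeps positive evidence
    return cam / (cam.max() + 1e-8)                    # normalise to [0, 1]

def lime_explain(predict_fn, x, num_samples=500, seed=0):
    """LIME-style local linear surrogate for one instance x (1-D features)."""
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(num_samples, x.size)).astype(float)
    preds = predict_fn(masks * x)                      # query the black-box model
    # Weight perturbed samples by proximity to the unperturbed instance.
    dist = 1.0 - masks.mean(axis=1)
    w = np.sqrt(np.exp(-(dist ** 2) / 0.25))           # kernel width is an assumption
    coef, *_ = np.linalg.lstsq(masks * w[:, None], preds * w, rcond=None)
    return coef                                        # per-feature contribution

# Toy demo: synthetic "activations" and a linear stand-in for the classifier.
acts = np.abs(np.random.default_rng(1).normal(size=(16, 7, 7)))
grads = np.random.default_rng(2).normal(size=(16, 7, 7))
heatmap = grad_cam(acts, grads)

x = np.array([1.0, -2.0, 0.5, 3.0])
true_w = np.array([0.4, 0.1, -0.3, 0.2])
contrib = lime_explain(lambda X: X @ true_w, x)
```

In a real pipeline the heatmap would be upsampled to the X-ray's resolution and overlaid on the image; because the demo's black box is exactly linear, the LIME coefficients recover each feature's true contribution.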

Item Type: Conference or Workshop Item (Paper)
Subjects: Computer Science Engineering > Artificial Intelligence
Domains: Computer Science Engineering
Date Deposited: 29 Aug 2025 07:12
Last Modified: 29 Aug 2025 07:12
URI: https://ir.vistas.ac.in/id/eprint/10836
