An Explainable Artificial Intelligence Based Approach for Interpretation of Fault Detection Results from Deep Neural Networks
V. Pakkiriswamy,
Published by Elsevier Ltd
Volume: 250
Process monitoring is crucial to ensure operational reliability and to prevent industrial accidents. Data-driven methods have become the preferred approach for fault detection and diagnosis. Specifically, deep learning algorithms such as Deep Neural Networks (DNNs) show good potential even in complex processes. A key shortcoming of DNNs is the difficulty of interpreting their classification results. Emerging approaches from explainable Artificial Intelligence (XAI) seek to address this shortcoming. This paper proposes a method based on the Shapley value framework, implemented using integrated gradients, to identify the variables that lead a DNN to classify an input as a fault. The method estimates the marginal contribution of each variable to the DNN's output, averaged over the path from the baseline (in this case, the process's normal state) to the current sample. We illustrate the resulting variable attribution using a numerical example and the benchmark Tennessee Eastman process. Our results show that the proposed methodology provides accurate, sample-specific explanations of the DNN's predictions. These explanations can be used by the offline model developer to improve the DNN if necessary; they can also be used by the plant operator in real time to understand the black-box DNN's predictions and decide on operational strategies. © 2022 Elsevier Ltd
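The attribution scheme the abstract describes can be sketched with a minimal integrated-gradients computation. The sketch below uses a toy differentiable function in place of a trained DNN, with the baseline standing in for the process's normal state; the model, variable values, and step count are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Toy differentiable "model" standing in for a DNN's fault-class output
# (an illustrative assumption; the paper uses a trained DNN classifier).
def model(x):
    return float(np.sum(x ** 2))

def model_grad(x):
    return 2.0 * x  # analytic gradient of the toy model

def integrated_gradients(x, baseline, grad_fn, steps=200):
    """Approximate integrated gradients: average the gradient along the
    straight-line path from the baseline (normal state) to the sample,
    then scale by the deviation (x - baseline)."""
    alphas = (np.arange(steps) + 0.5) / steps            # midpoint rule
    path = baseline + alphas[:, None] * (x - baseline)   # (steps, n_vars)
    avg_grad = np.mean([grad_fn(p) for p in path], axis=0)
    return (x - baseline) * avg_grad

baseline = np.zeros(3)                  # reference: normal operating state
sample = np.array([0.1, 2.0, 0.3])      # faulty sample; variable 2 deviates most

attr = integrated_gradients(sample, baseline, model_grad)
print("attributions:", attr)
# Completeness axiom: attributions sum to f(sample) - f(baseline)
print("sum:", attr.sum(), "f-diff:", model(sample) - model(baseline))
```

The completeness property checked at the end is what makes the attributions interpretable as each variable's share of the change in the model's output relative to normal operation; the variable with the largest attribution is the one driving the fault classification.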
About the journal
Journal: 2020 Virtual AIChE Annual Meeting
Publisher: Elsevier Ltd
Open Access: No