AI, particularly deep learning, is widely used for prediction tasks such as image scene understanding and medical image diagnosis. Because deep learning models are complex, heatmaps are often used to help explain a model's prediction by highlighting the pixels that were salient to it.
While existing heatmaps are effective on clean images, real-world images are frequently degraded or 'biased', for example by camera blur or colour distortion under low light. Images may also be deliberately blurred for privacy reasons. As image clarity decreases, heatmap quality degrades, so these explanations of degraded images deviate from both reality and user expectations.
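As a hedged illustration of this effect (assuming PyTorch and torchvision; the ResNet-50 model choice and the "dog.jpg" input are placeholders, not part of the original technology), the sketch below computes a Grad-CAM-style heatmap on a clean image and on a Gaussian-blurred copy. On the blurred input, the heatmap typically drifts away from the true object:

```python
# Illustrative sketch: compare saliency heatmaps on a clean image versus a
# blurred copy. Assumes PyTorch/torchvision; "dog.jpg" is a placeholder.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from torchvision.transforms import functional as TF
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def grad_cam(img):
    """Grad-CAM over the last convolutional block of ResNet-50."""
    store = {}
    def hook(_, __, out):
        store["act"] = out
        out.retain_grad()                          # keep gradients of the activations
    handle = model.layer4.register_forward_hook(hook)
    logits = model(img.unsqueeze(0))
    logits[0, logits.argmax().item()].backward()   # backprop the top-class score
    handle.remove()
    act, grad = store["act"], store["act"].grad
    weights = grad.mean(dim=(2, 3), keepdim=True)  # pooled gradients per channel
    cam = F.relu((weights * act).sum(dim=1))       # weighted sum of feature maps
    return (cam / (cam.max() + 1e-8)).squeeze(0)   # normalise to [0, 1]

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
clean = preprocess(Image.open("dog.jpg"))            # placeholder input image
blurred = TF.gaussian_blur(clean, kernel_size=21)    # simulate camera blur

# A large mean absolute difference indicates the explanation has shifted.
print((grad_cam(clean) - grad_cam(blurred)).abs().mean().item())
```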
This novel technology, Debiased-CAM, is a method of training a convolutional neural network (CNN) to produce accurate and relatable heatmaps for degraded images. By pinpointing relevant targets in the images that align with user expectations, Debiased-CAMs increase transparency and user trust in the AI's predictions.
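A minimal sketch of the training idea follows, under the assumption (not necessarily the authors' exact formulation) that debiasing is done by multi-task fine-tuning: the CNN's class activation map (CAM) on a degraded image is penalised for deviating from the CAM computed on the clean original, alongside the usual classification loss. The toy `GapCNN`, the loss weight `lam`, and the blur stand-in are all hypothetical:

```python
# Hedged sketch of debiased training: fine-tune the CNN so that heatmaps on
# degraded inputs align with heatmaps on the clean originals, while keeping
# predictions accurate. All names and weights here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GapCNN(nn.Module):
    """Toy CNN with global average pooling, the architecture CAM assumes."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):
        feats = self.backbone(x)                   # (B, 64, H, W)
        logits = self.fc(feats.mean(dim=(2, 3)))   # global average pool, then linear
        return logits, feats

def cam(feats, fc_weight, class_idx):
    """CAM: per-class weighted sum of the final conv feature maps."""
    w = fc_weight[class_idx]                       # (B, 64)
    m = F.relu(torch.einsum("bc,bchw->bhw", w, feats))
    return m / (m.flatten(1).max(dim=1).values.view(-1, 1, 1) + 1e-8)

model = GapCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def debias_step(clean, degraded, labels, lam=0.5):
    with torch.no_grad():                          # target CAMs from clean images
        _, feats_c = model(clean)
        cam_target = cam(feats_c, model.fc.weight, labels)
    opt.zero_grad()
    logits, feats_d = model(degraded)
    cam_pred = cam(feats_d, model.fc.weight, labels)
    # Classification loss keeps predictions accurate; CAM loss debiases heatmaps.
    loss = F.cross_entropy(logits, labels) + lam * F.mse_loss(cam_pred, cam_target)
    loss.backward()
    opt.step()
    return loss.item()

# Hypothetical batch: clean images, blurred copies as the degraded inputs, labels.
x = torch.randn(4, 3, 64, 64)
x_blur = F.avg_pool2d(x, 5, stride=1, padding=2)   # crude blur stand-in
y = torch.randint(0, 10, (4,))
print(debias_step(x, x_blur, y))
```

In this sketch, because the target CAMs come from clean images, the fine-tuned network learns to highlight the same regions a user would expect even when the input is degraded.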
Debiased-CAM also makes it easier to meet regulatory standards for deploying CNN models in the following applications, where explainable AI is required.