Pixel Ablation-CAM: A New Paradigm in CNN Interpretability for Feature Map Visual Explanations
DOI:
https://doi.org/10.14429/dsj.20444

Keywords:
Convolutional Neural Network (CNN), Ablation-CAM, Grad-CAM, Explainable AI

Abstract
Many cutting-edge computer vision systems now rely heavily on convolutional neural networks (CNNs). However, conventional interpretation techniques frequently operate on whole 2D feature maps, overlooking the contributions of individual pixels. This work aims to produce “visual explanations” that improve the explainability and transparency of decisions made by CNN-based algorithms. We present Pixel Ablation-CAM, a new method that builds on the ideas of Ablation-CAM by applying pixel-wise ablation, enabling a finer-grained understanding of model decisions. In this method, activation maps are reinterpreted as arrays of one-dimensional vectors representing the channel-wise activations at each spatial location. By systematically ablating these vectors and monitoring the resulting changes in class activation scores, Pixel Ablation-CAM produces class-discriminative localisation maps with better resolution and accuracy than approaches such as Grad-CAM. Our extensive experiments demonstrate that Pixel Ablation-CAM improves model trust and interpretability, offering fresh insight into CNN behaviour and advancing the field of explainable AI.
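The core idea described in the abstract can be sketched in a few lines. The following is a minimal, illustrative NumPy implementation, not the authors' code: it assumes access to the activation map of some convolutional layer and a scoring function `score_fn` that maps activations to a class score (in practice, the forward pass of the remaining layers of the network). Each spatial location's channel vector is zeroed out in turn, and the drop in the class score is taken as that pixel's importance.

```python
import numpy as np

def pixel_ablation_cam(activations, score_fn):
    """Sketch of pixel-wise ablation.

    activations : array of shape (C, H, W), one channel vector per pixel
    score_fn    : callable mapping an activation map to a class score
                  (stands in for the rest of the network's forward pass)
    """
    base = score_fn(activations)
    C, H, W = activations.shape
    cam = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            ablated = activations.copy()
            ablated[:, i, j] = 0.0                 # ablate this pixel's channel vector
            cam[i, j] = base - score_fn(ablated)   # importance = drop in class score
    cam = np.maximum(cam, 0)                       # keep positive evidence only
    if cam.max() > 0:
        cam /= cam.max()                           # normalise to [0, 1] for display
    return cam

# Toy stand-in for a network head: weighted global average pooling of channels.
rng = np.random.default_rng(0)
A = rng.random((4, 5, 5))
w = rng.random(4)
score = lambda a: float(w @ a.mean(axis=(1, 2)))
heatmap = pixel_ablation_cam(A, score)
print(heatmap.shape)  # (5, 5)
```

In a real setting the resulting heatmap would be upsampled to the input resolution and overlaid on the image, as is standard for CAM-style visualisations; the paper itself should be consulted for the exact ablation and normalisation scheme.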
Copyright (c) 2025 Defence Scientific Information & Documentation Centre (DESIDOC). Except where otherwise noted, the articles on this site are licensed under the Creative Commons License: CC Attribution-Noncommercial-No Derivative Works 2.5 India.