Pixel Ablation-CAM: A New Paradigm in CNN Interpretability for Feature Map Visual Explanations

Authors

Debasis Chaudhuri, Akash Samanta, Aniket Kumar Singh & Manish Pratap Singh

DOI:

https://doi.org/10.14429/dsj.20444

Keywords:

Convolutional Neural Network (CNN), Ablation-CAM, Grad-CAM, Explainable AI

Abstract

Many state-of-the-art computer vision systems rely heavily on convolutional neural networks (CNNs). However, conventional interpretation techniques frequently operate on whole 2D feature maps, ignoring the contributions of individual pixels. This work aims to produce “visual explanations” that improve the explainability and transparency of decisions made by CNN-based algorithms. We present Pixel Ablation-CAM, a new method that extends Ablation-CAM with pixel-wise ablation, enabling a finer-grained understanding of model decisions. The method reinterprets activation maps as arrays of one-dimensional vectors, each representing the channel-wise activations at a single spatial position. By systematically ablating these vectors and monitoring the resulting changes in class activation scores, Pixel Ablation-CAM produces class-discriminative localisation maps with better resolution and accuracy than approaches such as Grad-CAM. Extensive experiments demonstrate that Pixel Ablation-CAM improves model trust and interpretability, offering new insights into CNN behavior and advancing the field of explainable AI.
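The core idea in the abstract can be sketched in code: treat each spatial position of the final activation maps as a one-dimensional channel vector, zero that vector, and record the drop in the class score. The following is a minimal illustrative sketch, not the authors' implementation; the toy classifier head (global average pooling plus a linear layer), the function names, and the normalisation choice are all assumptions made for demonstration.

```python
import numpy as np

def class_score(activations, weights):
    # Hypothetical classifier head: global average pooling then a linear layer.
    pooled = activations.mean(axis=(1, 2))        # (C,)
    return float(pooled @ weights)

def pixel_ablation_cam(activations, weights):
    """Saliency map via pixel-wise ablation (illustrative sketch).

    activations: (C, H, W) feature maps from the last convolutional layer.
    Each position (i, j) holds a C-dimensional activation vector; zeroing it
    and measuring the resulting drop in the class score gives that pixel's
    importance.
    """
    C, H, W = activations.shape
    base = class_score(activations, weights)
    saliency = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            ablated = activations.copy()
            ablated[:, i, j] = 0.0                # ablate one pixel vector
            drop = base - class_score(ablated, weights)
            saliency[i, j] = drop / (abs(base) + 1e-8)
    # Keep only class-positive evidence, as Ablation-CAM does via ReLU.
    return np.maximum(saliency, 0.0)

rng = np.random.default_rng(0)
acts = rng.random((8, 4, 4))                      # toy (C, H, W) activations
w = rng.random(8)                                 # toy class weights
cam = pixel_ablation_cam(acts, w)                 # (H, W) saliency map
```

In practice the saliency map would be upsampled to the input resolution and overlaid on the image; with a real CNN, `class_score` would be replaced by a forward pass through the layers above the ablated feature maps.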

Published

2025-03-24

How to Cite

Debasis Chaudhuri, Akash Samanta, Aniket Kumar Singh, & Manish Pratap Singh. (2025). Pixel Ablation-CAM: A New Paradigm in CNN Interpretability for Feature Map Visual Explanations. Defence Science Journal, 75(2), 188–198. https://doi.org/10.14429/dsj.20444

Section

Computers & Systems Studies