Accelerating deep network training for radar identification using batch normalization
DOI: https://doi.org/10.14429/dsj.74.19475
Keywords: Batch Normalization, Momentum, Beta and Gamma parameters, Batch Size
Abstract
Deep learning techniques have shown remarkable success in radar identification. However, training deep neural networks (DNNs) can be time- and resource-intensive. Batch normalization is a popular approach for accelerating the training of deep feed-forward neural networks: it reduces internal covariate shift and stabilizes the training process by normalizing the intermediate activations within each mini-batch. In this research, the convergence behavior of networks with and without batch normalization is compared. Batch normalization standardizes the input to a layer for each mini-batch and can be applied either to the activations of a preceding layer or directly to the network inputs. Our experiments indicate that batch normalization improves a range of neural network training properties. The results show that batch-normalized models achieve higher test and validation accuracies across all datasets, which we attribute to their regularizing effect and more stable gradient propagation. This research also examines the impact of several parameters, such as batch size, momentum, and the beta and gamma parameters, on the effectiveness of DNNs with batch normalization. The radar dataset used for training is the fused emitter set obtained after feature-level fusion of the tracks intercepted by ESM (Electronic Support Measures) and ELINT (Electronic Intelligence) systems.
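As a minimal illustrative sketch (not the authors' implementation), the batch-normalization transform described above can be written as follows. The function name `batch_norm_forward`, the default `momentum=0.9`, and the NumPy-based setting are assumptions for illustration; the transform normalizes each feature over the mini-batch, maintains momentum-smoothed running statistics for inference, and applies the learnable gamma (scale) and beta (shift) parameters.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, running_mean, running_var,
                       momentum=0.9, eps=1e-5, training=True):
    """Batch-normalize activations x of shape (batch_size, features).

    Illustrative sketch only; names and defaults are assumptions,
    not taken from the paper.
    """
    if training:
        # Per-feature statistics computed over the current mini-batch.
        batch_mean = x.mean(axis=0)
        batch_var = x.var(axis=0)
        # Momentum-smoothed exponential moving averages, used at inference.
        running_mean = momentum * running_mean + (1 - momentum) * batch_mean
        running_var = momentum * running_var + (1 - momentum) * batch_var
        x_hat = (x - batch_mean) / np.sqrt(batch_var + eps)
    else:
        # At inference, normalize with the accumulated running statistics.
        x_hat = (x - running_mean) / np.sqrt(running_var + eps)
    # Learnable scale (gamma) and shift (beta) restore representational power.
    return gamma * x_hat + beta, running_mean, running_var

# Example: normalize a mini-batch of 32 samples with 4 features.
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(32, 4))
gamma, beta = np.ones(4), np.zeros(4)
running_mean, running_var = np.zeros(4), np.ones(4)
y, running_mean, running_var = batch_norm_forward(
    x, gamma, beta, running_mean, running_var)
print(y.mean(axis=0))  # approximately 0 per feature
print(y.std(axis=0))   # approximately 1 per feature
```

With gamma initialized to ones and beta to zeros, the output of each feature is approximately zero-mean and unit-variance over the mini-batch; during training, gradient descent adjusts gamma and beta so the network can recover any scale and shift it needs.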