TITLE:
Estimating the Rectified Linear Unit Activation Function in Deep Learning Using the Grouping-Adjusted Median Estimator: An Evaluation of Max Pooling
AUTHORS:
Kazumitsu Nawata
KEYWORDS:
Deep Learning, ReLU, Max Pooling, Tobit Model, Median, Grouping, Adjustment, Robust Estimation
JOURNAL NAME:
Open Journal of Statistics, Vol. 16, No. 1, January 20, 2026
ABSTRACT: The Rectified Linear Unit (ReLU) activation function is widely employed in deep learning (DL). ReLU shares structural similarities with the censored regression and Tobit models common in econometrics and statistics. Although the conventional Tobit maximum likelihood estimator (CTMLE) is frequently applied in these fields, the insights gained there have not been fully incorporated into DL research. When models are affected by random observation errors or distributional misspecification, CTMLE often exhibits substantial bias. To address this limitation, we consider the Grouping-Adjusted Median Estimator (GAME). GAME is a robust method that does not rely on specific distributional assumptions; it is constructed in three stages: grouping, adjustment, and computation of adjusted medians, combined with weighted Tobit maximum likelihood estimation. Monte Carlo experiments show that GAME outperforms CTMLE in non-standard settings while incurring only minor efficiency losses under standard conditions. Max pooling is widely used in DL, and its mechanism resembles grouping. This study also evaluates the max pooling estimator (MPE). However, MPE often performs poorly because the estimated boundaries between target and background regions may be reversed relative to the true ones. While max pooling can be useful for detecting weak signals, it may yield misleading results in high-noise situations and therefore requires special care. GAME offers a promising alternative for mitigating noise effects, and the median can be generalized to arbitrary percentiles, providing additional flexibility in estimation.
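The structural similarity between ReLU and Tobit censoring, and the robustness of grouped medians, can be illustrated with a minimal simulation. The sketch below is an assumption-laden toy version of the grouping-and-median idea, not the paper's GAME (which additionally uses an adjustment stage and weighted Tobit maximum likelihood): latent responses are censored at zero exactly as ReLU censors its input, a naive least-squares fit on the censored data is biased, while a fit through within-group medians (exploiting the fact that the median commutes with the monotone map max(0, .)) recovers the true slope. All parameter values (beta0 = -1, beta1 = 2, 25 groups) are illustrative choices.

```python
import random
import statistics

random.seed(0)
beta0, beta1 = -1.0, 2.0   # true latent-model coefficients (illustrative)
n = 5000

# Simulate a Tobit/ReLU structure: y = max(0, beta0 + beta1*x + eps)
data = []
for _ in range(n):
    x = random.uniform(0.0, 2.0)
    eps = random.gauss(0.0, 1.0)
    y = max(0.0, beta0 + beta1 * x + eps)   # ReLU-style censoring at zero
    data.append((x, y))

def ols(pairs):
    """Simple least-squares line fit; returns (intercept, slope)."""
    mx = sum(p[0] for p in pairs) / len(pairs)
    my = sum(p[1] for p in pairs) / len(pairs)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pairs)
    sxx = sum((p[0] - mx) ** 2 for p in pairs)
    b1 = sxy / sxx
    return my - b1 * mx, b1

# Naive fit on the censored observations: ignores censoring, hence biased.
b0_naive, b1_naive = ols(data)

# Grouped-median fit: sort by x, split into groups, take within-group
# medians, and keep only groups whose median lies above the censoring
# point (within-group censoring below 50%, so the median is unaffected).
data.sort()
G = 25
size = n // G
groups = [data[i * size:(i + 1) * size] for i in range(G)]
med_pts = []
for g in groups:
    mx = statistics.median(p[0] for p in g)
    my = statistics.median(p[1] for p in g)
    if my > 0.0:
        med_pts.append((mx, my))
b0_med, b1_med = ols(med_pts)

print(f"true slope: {beta1}")
print(f"naive slope on censored data: {b1_naive:.3f}")
print(f"grouped-median slope:         {b1_med:.3f}")
```

Because max(0, .) is monotone, each group's median response equals max(0, median of the latent response), so groups with a strictly positive median sit on the true regression line; the naive fit, by contrast, is attenuated by the flat censored region at small x. This is only the grouping-plus-median intuition; the published GAME adds an adjustment step and a weighted Tobit likelihood on top of it.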