On non-approximability of zero loss global $\mathcal L^2$ minimizers by gradient descent in deep learning


Thomas Chen, Patricia Muñoz Ewald




We analyze geometric aspects of the gradient descent algorithm in Deep Learning (DL), and discuss in detail the circumstance that, in underparametrized DL networks, zero loss minimization cannot generically be attained. As a consequence, we conclude that the distribution of training inputs must necessarily be non-generic in order to produce zero loss minimizers, both for the method constructed in \cite{cheewa-2,cheewa-4} and for gradient descent \cite{ch-7}, both of which assume clustering of the training data.