Generalization error bounds of deep learning by Bayesian and empirical risk minimization approaches from a kernel perspective

In this talk, we discuss generalization error bounds of deep learning for both Bayesian and empirical risk minimization (ERM) methods. To derive the bounds, we express the target function in an integral form. Based on this representation, we derive a bias-variance trade-off between a finite-dimensional approximation of the true model and the estimation error within the approximated model. The trade-off is characterized by the eigenvalue decay of the reproducing kernel Hilbert space (RKHS) corresponding to each layer. The generalization error bound is derived for both the Bayesian and ERM estimators. Based on this theory, we discuss how the behavior of the kernels' eigenvalues affects the generalization error.
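The bias-variance trade-off described above can be illustrated numerically. The sketch below (my own illustrative choices of kernel, bandwidth, and sample size, not taken from the talk) eigendecomposes a Gaussian-kernel Gram matrix, whose eigenvalues approximate those of the kernel's integral operator, and compares the tail eigenvalue sum (the bias of truncating at dimension m) against a schematic m/n estimation term:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: n points on [-1, 1] with a Gaussian (RBF) kernel.
# The kernel choice and bandwidth are assumptions for this sketch.
n = 200
X = rng.uniform(-1.0, 1.0, size=n)

def rbf_kernel(x, y, bandwidth=0.5):
    """Gaussian kernel Gram matrix between 1-D point sets x and y."""
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * bandwidth**2))

# Eigenvalues of K/n approximate the integral operator's eigenvalues.
K = rbf_kernel(X, X)
eigvals = np.linalg.eigvalsh(K / n)[::-1]
eigvals = np.clip(eigvals, 0.0, None)  # remove tiny negative round-off

# Truncating the kernel expansion at dimension m leaves a bias on the
# order of the tail eigenvalue sum, while the estimation error grows
# with m (shown here schematically as m/n).
for m in (5, 10, 20, 40):
    bias = eigvals[m:].sum()   # approximation (bias) term
    variance = m / n           # estimation (variance) term, schematic
    print(f"m={m:3d}  tail eigenvalue sum={bias:.2e}  m/n={variance:.2f}")
```

For a smooth kernel like the Gaussian, the eigenvalues decay very fast, so the tail sum shrinks rapidly with m while the m/n term grows linearly; the crossover point is what determines the optimal approximation dimension in bounds of this type.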