http://www.inference.vc/everything-that-works-works-because-its-bayesian-2/
Why do Deep Nets Generalise? HINT: because they are really just an approximation to Bayesian machine learning.
Why do deep nets generalise? The theory community has not reached a consensus on this; different papers even draw opposite conclusions!
Understanding deep learning requires rethinking generalization https://arxiv.org/abs/1611.03530. This paper shows that deep networks have enough capacity to fit even randomly labelled data, so their generalisation ability cannot come from the model class itself.
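A minimal sketch of that memorisation effect (my own toy setup in PyTorch, not the paper's ImageNet/CIFAR experiments): a small MLP driven to near-perfect training accuracy on labels that are pure noise, independent of the inputs.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 20)            # random inputs
y = torch.randint(0, 2, (256,))     # labels with no relation to X at all

model = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

acc = (model(X).argmax(dim=1) == y).float().mean()
print(f"train accuracy on random labels: {acc:.2f}")  # typically close to 1.00
```

Since the labels carry no signal, fitting them is pure memorisation: the hypothesis class clearly does not forbid overfitting, so something else must explain why trained nets generalise.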
It appears that stochastic gradient descent may be responsible. Generalisation may come from SGD: the paper below argues that SGD biases learning towards flat minima rather than sharp minima. Why should flat minima generalise better? My personal take is that a flat minimum covers a larger region of parameter space, so small perturbations of the weights barely change the loss. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima https://arxiv.org/abs/1609.04836
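One informal way to make "flatness" concrete (this perturbation proxy is my own illustration, not the sharpness measure used in the paper): add small Gaussian noise to the trained weights and record how much the training loss rises. At a flat minimum the loss barely moves; at a sharp one it jumps.

```python
import copy
import torch
import torch.nn as nn

def sharpness_proxy(model, X, y, loss_fn, sigma=0.01, trials=20):
    """Mean training-loss increase under random weight perturbations of scale sigma."""
    base = loss_fn(model(X), y).item()
    rises = []
    for _ in range(trials):
        noisy = copy.deepcopy(model)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(sigma * torch.randn_like(p))  # perturb every weight
        rises.append(loss_fn(noisy(X), y).item() - base)
    return sum(rises) / len(rises)

# Toy usage; in practice you would compare a small-batch-trained model
# (expected flat) against a large-batch-trained one (expected sharp).
torch.manual_seed(0)
X, y = torch.randn(128, 10), torch.randint(0, 2, (128,))
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
print(sharpness_proxy(model, X, y, nn.CrossEntropyLoss()))
```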
But the paper below reaches the opposite conclusion, claiming that sharp minima can generalise well: Sharp Minima Can Generalize For Deep Nets https://arxiv.org/abs/1703.04933
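The core of that paper's argument is a reparameterisation trick: for ReLU networks, scaling one layer by alpha and the next by 1/alpha leaves the function (and hence the test error) unchanged, while rescaling the loss curvature along each layer's directions, so curvature-based sharpness can be made arbitrarily large without changing generalisation. A minimal numpy sketch of the weight rescaling:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((8, 4))   # first layer
W2 = rng.standard_normal((2, 8))   # second layer
x = rng.standard_normal(4)

def f(W1, W2, x):
    # ReLU is positively homogeneous: relu(a * z) == a * relu(z) for a > 0
    return W2 @ np.maximum(W1 @ x, 0.0)

alpha = 100.0
out_original = f(W1, W2, x)
out_rescaled = f(alpha * W1, W2 / alpha, x)
print(np.allclose(out_original, out_rescaled))  # True: identical function,
                                                # wildly different weight scale
```

The rescaled network computes exactly the same function, yet its loss surface is stretched by alpha in some directions and compressed in others, which is why sharpness measured in parameter space cannot by itself explain generalisation.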
Everything that works works because it is Bayesian.