All of Niklas Todenhöfer's Comments + Replies

Instead, deep learning tends to generalise incredibly well to examples it hasn't seen already. How and why it does so is, however, still poorly understood.


In my opinion, generalisation is a very interesting topic!

Are there any new insights into deep-learning generalisation along the lines of:

1) implicit regularisation through optimisation methods such as stochastic gradient descent,
2) the double-descent risk curve, where adding more parameters can reduce test error again, or
3) margin-based measures for predicting generalisation gaps?

Or more generall...
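As a concrete illustration of point (1), here is a minimal sketch of one well-known form of implicit regularisation: on an underdetermined least-squares problem, plain gradient descent initialised at zero converges to the minimum-ℓ2-norm interpolating solution, even though nothing in the loss asks for small norm. The data, dimensions, and learning rate below are arbitrary choices for the demo, not from any particular paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 50  # more parameters than samples: infinitely many interpolating solutions
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Plain gradient descent on the squared loss, starting from zero.
w = np.zeros(d)
lr = 0.005  # small enough for convergence (below 2 / largest eigenvalue of X^T X here)
for _ in range(40000):
    w -= lr * X.T @ (X @ w - y)

# The minimum-l2-norm interpolating solution, computed via the pseudoinverse.
w_min_norm = np.linalg.pinv(X) @ y

# Gradient descent picked out exactly this solution among all interpolators.
print(np.allclose(w, w_min_norm, atol=1e-6))
```

The bias here is fully understood for linear least squares (gradient descent never leaves the row space of X); the open question the quoted passage alludes to is how far analogous implicit biases explain generalisation in deep, non-linear networks.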