Statistical (deep) learning theory

Deep learning algorithms have achieved impressive results, e.g., painting pictures, beating Go champions, and driving cars autonomously, which suggests that they generalise very well (the gap between their training and test errors is small). These empirical achievements have astounded yet confounded their human creators: why do deep learning algorithms generalise so well on unseen data? The field still lacks a satisfying mathematical account. We do not know the underlying principles that guarantee this success, let alone how to interpret or deliberately strengthen the generalisation ability of these algorithms. In this project, we aim to analyse error bounds, e.g., the generalisation error bound and the excess risk bound, by measuring the complexity of the predefined (or algorithmic) hypothesis class. An algorithmic hypothesis class is the subset of the predefined hypothesis class that a learning algorithm will (or is likely to) output.
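As a sketch of the quantities involved (standard textbook definitions; the notation $\mathcal{H}$, $R$, $\hat{R}_S$ and the Rademacher-complexity bound below are illustrative, not specific to this project): given an i.i.d. sample $S = \{(x_i, y_i)\}_{i=1}^{n}$, a hypothesis class $\mathcal{H}$, a loss $\ell$ bounded in $[0,1]$, the risk $R(h) = \mathbb{E}\,[\ell(h(x), y)]$, and the empirical risk $\hat{R}_S(h) = \frac{1}{n}\sum_{i=1}^{n} \ell(h(x_i), y_i)$, the two bounds concern

\[
\underbrace{R(h) - \hat{R}_S(h)}_{\text{generalisation error}}
\qquad\text{and}\qquad
\underbrace{R(\hat{h}_S) - \inf_{h \in \mathcal{H}} R(h)}_{\text{excess risk}},
\]

where $\hat{h}_S$ denotes the hypothesis returned by the learning algorithm. A classical complexity-based result states that, with probability at least $1 - \delta$, uniformly over $h \in \mathcal{H}$,

\[
R(h) \;\le\; \hat{R}_S(h) \;+\; 2\,\mathfrak{R}_n(\mathcal{H}) \;+\; \sqrt{\frac{\log(1/\delta)}{2n}},
\]

with $\mathfrak{R}_n(\mathcal{H})$ the Rademacher complexity of $\mathcal{H}$. Replacing $\mathcal{H}$ by the smaller algorithmic hypothesis class can only shrink this complexity term, which is one motivation for the approach taken here.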
