Prof Patrick Rebeschini has been awarded a €2m ERC Consolidator Grant in Computer Science and Informatics. 

The project aims to develop novel theoretical foundations for machine learning based on the paradigm of implicit regularization. The goal is to design a new algorithmic framework that can structurally combine notions of statistical optimality with requirements of computational efficiency. The project will create a world-leading centre in statistical learning theory, recruiting a group of postdoctoral research assistants (PDRAs) working at the intersection of statistics, probability, and optimization.


There is a very strong machine learning community at Oxford, and I have found it very valuable to engage with colleagues across the University, including the Mathematical Institute, the Department of Computer Science and the Department of Engineering Science, besides the Department of Statistics. This interaction has helped to develop my ideas for this proposal.

Prof Patrick Rebeschini, Department of Statistics

More information

In the era of Big Data—characterized by large, high-dimensional and distributed datasets—we are increasingly faced with the challenge of establishing scalable methodologies that can achieve optimal statistical guarantees under computational constraints. To fundamentally address this challenge, new paradigms need to be established.

Over the past 50 years, statistical learning theory has relied on the framework of explicit regularization to control the model complexity of estimators. By design, this approach decouples notions of statistical optimality and computational efficiency and, in applications, often leads to expensive model selection procedures. This framework faces fundamental limitations in explaining the practical success of modern machine learning paradigms, which are based on running simple gradient descent methods without any explicit effort to control model complexity.
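
To illustrate the explicit-regularization workflow described above, the following sketch (a generic illustration, not code from the project) fits ridge regression on a synthetic high-dimensional dataset and selects the penalty parameter by a separate validation loop; it is precisely this extra model-selection step that decouples statistical tuning from the underlying optimization.

```python
# Minimal sketch (illustrative only): explicit regularization via ridge
# regression, where the penalty lambda must be chosen by a separate,
# potentially expensive, model-selection loop over a grid of candidates.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 200                      # high-dimensional: more features than samples
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:5] = 1.0
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Hold out part of the data purely to tune the regularization strength.
X_tr, y_tr, X_val, y_val = X[:40], y[:40], X[40:], y[40:]

def ridge(X, y, lam):
    """Closed-form ridge estimator: argmin_w ||Xw - y||^2 + lam * ||w||^2."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Explicit regularization: one fit per candidate lambda, then pick the best.
lambdas = [0.01, 0.1, 1.0, 10.0, 100.0]
errors = {lam: np.mean((X_val @ ridge(X_tr, y_tr, lam) - y_val) ** 2)
          for lam in lambdas}
best_lam = min(errors, key=errors.get)
print("validation error per lambda:", errors)
print("selected lambda:", best_lam)
```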

Overcoming these limitations prompts the investigation of the implicit regularization properties of iterative algorithms, namely the bias induced as a by-product of the very choice of optimization routine and tuning parameters. Implicit regularization structurally combines statistics with optimization, and it has the potential to promote the design of new algorithmic paradigms built around the notion of statistical and computational optimality. However, to fully realize this potential, several challenges need to be overcome.
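
A classical example of implicit regularization is plain gradient descent on unpenalized least squares: started from zero, its iterates converge to the minimum-norm interpolant, and the step size and stopping time play the role of tuning parameters. The sketch below (a generic illustration, not code from the project) demonstrates this behaviour on synthetic data.

```python
# Minimal sketch (illustrative only): implicit regularization of plain
# gradient descent on unpenalized least squares. Started from zero, the
# iterates converge to the minimum-l2-norm interpolant, and the step size
# and stopping time act as the effective tuning parameters.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 200
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:5] = 1.0
y = X @ w_true + 0.1 * rng.standard_normal(n)

w = np.zeros(d)                        # initialization matters for the implicit bias
step = 1.0 / np.linalg.norm(X, 2) ** 2 # safe step for loss 0.5 * ||Xw - y||^2
norms = []
for t in range(2000):
    w -= step * X.T @ (X @ w - y)      # plain gradient descent, no penalty term
    norms.append(np.linalg.norm(w))

w_min_norm = np.linalg.pinv(X) @ y     # minimum-norm least-squares solution
print("distance to min-norm solution:", np.linalg.norm(w - w_min_norm))
print("iterate norm is non-decreasing, so early stopping gives a smaller-norm estimate:",
      norms[10], norms[100], norms[-1])
```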

This project aims to develop a general theory of implicit regularization that can optimally address fundamental primitives in modern applications—e.g. involving sparse and low-rank noisy models, decentralized multi-agent learning, and adaptive and robust procedures—and establish novel cross-disciplinary connections with far-reaching consequences.

This goal will be achieved by combining non-asymptotic tools for the study of random structures in high-dimensional probability with the general framework of mirror descent from optimization and online learning.
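
Mirror descent generalizes gradient descent by performing the update in a geometry specified by a mirror map, and different mirror maps induce different implicit biases. The sketch below (a generic illustration, not code from the project) shows the entropic instance of mirror descent, i.e. the exponentiated-gradient update on the probability simplex, applied to a toy least-squares objective.

```python
# Minimal sketch (illustrative only): mirror descent on the probability
# simplex with the negative-entropy mirror map, i.e. the
# exponentiated-gradient update.
import numpy as np

def mirror_descent_simplex(grad, x0, step, iters):
    """Entropic mirror descent: x <- x * exp(-step * grad(x)), renormalized."""
    x = x0.copy()
    for _ in range(iters):
        x = x * np.exp(-step * grad(x))
        x = x / x.sum()               # Bregman (KL) projection back onto the simplex
    return x

# Toy objective on the simplex: f(x) = 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
grad = lambda x: A.T @ (A @ x - b)

x0 = np.ones(10) / 10                 # uniform distribution as starting point
x_hat = mirror_descent_simplex(grad, x0, step=0.05, iters=500)
print("solution on simplex:", np.round(x_hat, 3), "sum =", x_hat.sum())
```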
