Professor Patrick Rebeschini

Professor of Statistics and Machine Learning

Biographical Sketch

I received my Ph.D. in Operations Research and Financial Engineering from Princeton University in 2014. I then joined the Yale Institute for Network Science at Yale University, where I spent two years as a Postdoctoral Associate in the Electrical Engineering Department and one year as an Associate Research Scientist, with a joint appointment as a Lecturer in the Computer Science Department.

Research Interests

My research interests lie at the intersection of probability, statistics, and computer science. I investigate fundamental principles in high-dimensional probability, statistics, and optimisation to design computationally efficient and statistically optimal algorithms for machine learning.

Publications

Wu, F. and Rebeschini, P. (2020) “A continuous-time mirror descent approach to sparse phase retrieval”, in Advances in Neural Information Processing Systems 33 (NeurIPS 2020). Neural Information Processing Systems Foundation, pp. 1–12.
Vaškevičius, T., Kanade, V. and Rebeschini, P. (2020) “The statistical complexity of early-stopped mirror descent”, in Advances in Neural Information Processing Systems 33 (NeurIPS 2020). Neural Information Processing Systems Foundation, pp. 1–12.
Richards, D., Rebeschini, P. and Rosasco, L. (2020) “Decentralised learning with distributed gradient descent and random features”, in Proceedings of the 37th International Conference on Machine Learning. Proceedings of Machine Learning Research, pp. 8105–8115.
Richards, D. and Rebeschini, P. (2020) “Graph-dependent implicit regularisation for distributed stochastic subgradient descent”, Journal of Machine Learning Research, 21, pp. 1–44.
Vaškevičius, T., Kanade, V. and Rebeschini, P. (2019) “Implicit regularization for optimal sparse recovery”, in Advances in Neural Information Processing Systems 32 (NeurIPS 2019). Neural Information Processing Systems Foundation, pp. 2968–2979.
Richards, D. and Rebeschini, P. (2019) “Optimal statistical rates for decentralised non-parametric regression with linear speed-up”, in Advances in Neural Information Processing Systems 32 (NeurIPS 2019). Neural Information Processing Systems Foundation.
Martínez-Rubio, D., Kanade, V. and Rebeschini, P. (2019) “Decentralized cooperative stochastic bandits”, in Advances in Neural Information Processing Systems 32 (NeurIPS 2019). Neural Information Processing Systems Foundation.
Rebeschini, P. and Tatikonda, S. (2019) “A new approach to Laplacian solvers and flow problems”, Journal of Machine Learning Research, 20(36), pp. 1–37.
Rebeschini, P. and Tatikonda, S. (2017) “Accelerated consensus via Min-Sum Splitting”, in Advances in Neural Information Processing Systems 30 (NIPS 2017). Curran Associates, pp. 1375–1385.

Contact Details

College affiliation: Tutorial Fellow at University College

Email: patrick.rebeschini@stats.ox.ac.uk