Bio


Prior to joining the University of Oxford, I was a Lecturer in the Department of Computer Science at Yale University and a Postdoctoral Associate at the Yale Institute for Network Science, hosted by Sekhar Tatikonda. I received my Ph.D. in Operations Research and Financial Engineering from Princeton University, where I worked in Applied Probability under the supervision of Ramon van Handel.

Here is my Curriculum Vitae.

Research


Interests

My research interests lie at the intersection of Probability, Statistics, and Computer Science. I investigate fundamental principles in high-dimensional probability, statistics, and optimization to design computationally efficient and statistically optimal algorithms for machine learning.


Activities

I am a Turing Fellow at the Alan Turing Institute in London. On June 11, 2018, I co-organized the one-day workshop The Interplay between Statistics and Optimization in Learning. On January 13-14, 2020, I co-organized the two-day workshop Statistics and Computation.

I am a member of the European Laboratory for Learning and Intelligent Systems (ELLIS) and part of the management team for the Imperial-Oxford StatML Centre for Doctoral Training. I am an alumnus of the Yale Institute for Network Science and of the Princeton Statistical Laboratory.


Papers

  • Hadamard Wirtinger flow for sparse phase retrieval (with F. Wu). [arXiv] [code]
  • The statistical complexity of early-stopped mirror descent (with T. Vaškevičius and V. Kanade). [arXiv] [code]
  • Decentralised sparse multi-task regression (with D. Richards and S. Negahban). [arXiv]
  • Decentralised learning with random features and distributed gradient descent (with D. Richards and L. Rosasco), accepted to the 37th International Conference on Machine Learning (ICML), 2020. [arXiv] [code]
  • Graph-dependent implicit regularisation for distributed stochastic subgradient descent (with D. Richards), Journal of Machine Learning Research (JMLR), vol. 21, no. 34, pp. 1-44, 2020. [journal] [arXiv] [code]
  • Implicit regularization for optimal sparse recovery (with T. Vaškevičius and V. Kanade), 33rd Conference on Neural Information Processing Systems (NeurIPS), pp. 2972-2983, 2019. [proceedings] [arXiv] [code]
  • Optimal statistical rates for decentralised non-parametric regression with linear speed-up (with D. Richards), 33rd Conference on Neural Information Processing Systems (NeurIPS), pp. 1216-1227, 2019. [proceedings] [arXiv]
  • Decentralized cooperative stochastic bandits (with D. Martínez-Rubio and V. Kanade), 33rd Conference on Neural Information Processing Systems (NeurIPS), pp. 4529-4540, 2019. [proceedings] [arXiv] [code]
  • Locality in network optimization (with S. Tatikonda), IEEE Transactions on Control of Network Systems, vol. 6, no. 2, pp. 487-500, 2019. [journal] [arXiv]
  • A new approach to Laplacian solvers and flow problems (with S. Tatikonda), Journal of Machine Learning Research (JMLR), vol. 20, no. 36, pp. 1-37, 2019. [journal] [arXiv]
  • Accelerated consensus via Min-Sum Splitting (with S. Tatikonda), 31st Conference on Neural Information Processing Systems (NIPS), pp. 1374-1384, 2017. [proceedings] [arXiv] [poster]
  • Decay of correlation in network flow problems (with S. Tatikonda), 50th Conference on Information Sciences and Systems (CISS), pp. 169-174, 2016. [proceedings] [pdf]
  • Fast mixing for discrete point processes (with A. Karbasi), 28th Conference on Learning Theory (COLT), pp. 1480-1500, 2015. [proceedings] [arXiv] [poster]
  • Can local particle filters beat the curse of dimensionality? (with R. van Handel), Annals of Applied Probability, vol. 25, no. 5, pp. 2809-2866, 2015. [journal] [arXiv]
  • Phase transitions in nonlinear filtering (with R. van Handel), Electronic Journal of Probability, vol. 20, no. 7, pp. 1-46, 2015. [journal] [arXiv]
  • Comparison theorems for Gibbs measures (with R. van Handel), Journal of Statistical Physics, vol. 157, pp. 234-281, 2014. [journal] [arXiv]
  • Nonlinear filtering in high dimension, Ph.D. thesis, Princeton University, 2014. [pdf]

Talks

  • The Statistical Complexity of Early-Stopped Mirror Descent, Statistical Methods in Machine Learning, Bernoulli-IMS One World Symposium 2020, August 2020. [video]
  • The Statistical Complexity of Early-Stopped Mirror Descent, Probability Seminar (virtual lecture), Division of Applied Mathematics, Brown University, May 2020.
  • Statistically and Computationally Optimal Estimators for Sparse Recovery and Decentralized Regression, Adobe Research, San Jose, December 2019.
  • Implicit Regularization for Optimal Sparse Recovery, Information Systems Lab (ISL) Colloquium, Stanford University, December 2019.
  • On the Interplay between Statistics, Computation and Communication in Decentralised Learning, Decision and Control Systems, KTH, October 2019.
  • Implicit Regularization for Optimal Sparse Recovery, Probability and Mathematical Statistics seminar, Department of Mathematics, KTH, October 2019.
  • Implicit Regularization for Optimal Sparse Recovery, London Machine Learning Meetup, September 2019.
  • Implicit Regularization for Optimal Sparse Recovery, Theory, Algorithms and Computations of Modern Learning Systems workshop, DALI/ELLIS, September 2019.
  • On the Interplay between Statistics, Computation and Communication in Decentralised Learning, Optimization and Statistical Learning workshop (OSL 2019), Les Houches School of Physics. [slides]
  • On the Interplay between Statistics, Computation and Communication in Decentralised Learning, School of Mathematics, University of Bristol, March 2019.
  • On the Interplay between Statistics, Computation and Communication in Decentralised Learning, Algorithms & Computationally Intensive Inference Seminar, University of Warwick, February 2019.
  • Multi-Agent Learning: Implicit Regularization and Order-Optimal Gossip, Theory and Algorithms in Data Science, The Alan Turing Institute, August 2018.
  • Multi-Agent Learning: Implicit Regularization and Order-Optimal Gossip, Statistical Scalability Programme, Isaac Newton Institute, June 2018.
  • Multi-Agent Learning: Implicit Regularization and Order-Optimal Gossip, Statistics Seminar Series, Department of Decision Sciences, Bocconi University, May 2018.
  • Distributed and Decentralised Learning: Generalisation and Order-Optimal Gossip, Amazon Berlin, April 2018.
  • Locality and Message Passing in Network Optimization, Workshop on Optimization vs Sampling, The Alan Turing Institute, February 2018.
  • Accelerated Consensus via Min-Sum Splitting, Statistics Seminar, University of Cambridge, November 2017.
  • Accelerating message-passing using global information, OxWaSP Workshop, University of Warwick, October 2017.
  • Accelerating message-passing using global information, StatMathAppli 2017, Statistics Mathematics and Applications, Fréjus, September 2017.
  • Accelerated Min-Sum for consensus, Large-Scale and Distributed Optimization, LCCC Workshop, Lund University, June 2017.
  • Message-passing in convex optimization, WINRS conference, Brown University, March 2017.
  • Min-Sum and network flows, Workshop on Optimization and Inference for Physical Flows on Networks, Banff International Research Station, March 2017.
  • Locality and message-passing in network optimization, DISMA, Politecnico di Torino, January 2017.
  • Locality and message-passing in network optimization, LIDS Seminar Series, MIT, November 2016.
  • Locality and message-passing in network optimization, Probability Seminar, Division of Applied Mathematics, Brown University, November 2016.
  • Message-passing in network optimization, YINS Seminar Series, Yale University, November 2016.
  • Tractable Bayesian computation in high-dimensional graphical models, Mathematical Sciences Department, IBM Thomas J. Watson Research Center, June 2016.
  • From sampling to learning submodular functions, 2016 New England Statistics Symposium (NESS), Yale University, April 2016.
  • Scale-free sequential Monte Carlo, Seminar on particle methods in Statistics, Statistics Department, Harvard University, April 2016.
  • Decay of correlation in network flow problems, 50th Annual Conference on Information Sciences and Systems (CISS 2016), Princeton University, March 2016.
  • Locality in network optimization, INFORMS, Philadelphia, November 2015.
  • Local algorithms in high-dimensional models, Statistics Department, University of Oxford, September 2015.
  • Killed random walks and graph Laplacians: local sensitivity in network flow problems, Yale Probabilistic Networks Group seminar, Statistics Department, Yale University, September 2015.
  • Decay of correlation in graphical models; algorithmic perspectives, School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, August 2015.
  • Fast mixing for discrete point processes, 28th Annual Conference on Learning Theory (COLT 2015), Université Pierre et Marie Curie, July 2015. [poster] [video]
  • Filtering compressed signal dynamics in high dimension, 45th Annual John H. Barrett Memorial Lectures, University of Tennessee, May 2015.
  • On the role of the Hessian of submodular functions, Yale Probabilistic Networks Group seminar, Statistics Department, Yale University, April 2015.
  • Submodular functions, from optimization to probability, Probability Theory and Combinatorial Optimization, The Fuqua School of Business, Duke University, March 2015.
  • Estimating conditional distributions in high dimension, Applied Mathematics seminar, Yale University, October 2014.
  • Nonlinear filtering in high dimension, Yale Probabilistic Networks Group seminar, Statistics Department, Yale University, September 2014.
  • Particle filters and curse of dimensionality, Monte Carlo Inference for Complex Statistical Models workshop, Isaac Newton Institute for Mathematical Sciences, University of Cambridge, April 2014. [slides] [video]
  • Particle filters and curse of dimensionality, Cambridge Machine Learning Group, University of Cambridge, February 2014.
  • New phenomena in nonlinear filtering, Yale Probabilistic Networks Group seminar, Statistics Department, Yale University, February 2014.
  • Filtering in high dimension, Cornell Probability Summer School, Cornell University, July 2013.

Teaching


During the 2019/2020 academic year I am co-organizing a reading group on Theoretical Machine Learning. Notes are here.

Since 2018, I have been teaching Algorithmic Foundations of Learning, for which I received the 2019 Oxford MPLS Teaching Award.

In Spring 2018 I taught Advanced Simulation Methods.

During the 2017/2018 academic year I organized a reading group on optimization for Machine Learning. Notes are here.

While at Yale University, in Fall 2016 I served as the Head Instructor for CS50 (Introduction to Computing and Programming), taught jointly with Harvard University. Here is the coverage in the Yale Daily News. Here is the introductory class on Machine Learning and Python, or its VR version.

From 2015 to 2017 I supervised a group of senior students on research projects in Machine Learning, developing algorithms for natural language processing, sparse regression, and distributed optimization.

I was a member of the Yale Postdoctoral Association, whose goal is to facilitate and promote teaching experiences for postdocs in the sciences. For three years in a row, from 2015 to 2017, I organized the Julia Robinson Mathematics Festival at Yale, a celebration of ideas and problems in mathematics that enables junior high and high school students to explore fun math in a non-competitive setting.

In 2013 I received the Excellence in Teaching Award from the Princeton Engineering Council while serving as head teaching assistant for ORF 309 (Probability and Stochastic Systems) at Princeton University.

I was also a fellow of the McGraw Center for Teaching and Learning at Princeton University.

Contact Information

patrick.rebeschini AT stats.ox.ac.uk

Department of Statistics
University of Oxford
24-29 St Giles'
Oxford, OX1 3LB
United Kingdom

LinkedIn profile