The Bayesian Reading Group is a series of weekly informal meetings held at the Department of Statistics, typically during term. The aim of the reading group is to create a friendly environment where we can learn about each other’s work, exchange ideas, and generally broaden our knowledge of the literature and the current research in Bayesian Statistics.


In each meeting, a presentation is given by one of the participants. The common theme of the presentations is Bayesian statistics, and past presentations have touched on various aspects of the methodology and theory of the Bayesian approach, often in the nonparametric setting. This term we have chosen to focus on a specific current line of research: the theory of convergence for MCMC algorithms in high dimensions under posterior contraction assumptions, as initiated by Wang, Nickl, Bandeira, Maillard and others. Related topics are welcome as well, so we encourage people to suggest ideas for presentations even if they are not directly related to this term's topic.

We typically meet on Wednesdays, 3-4pm (although this is highly flexible) and, to encourage attendance, we hold the reading group in hybrid format, with in-person meetings at the Department of Statistics and a Zoom link for those wishing to join online. Meetings are typically one hour long, including time for questions and discussion of the presentation.

We hope to see you there! To join the reading group, learn more, or ask any questions, please do not hesitate to contact Paul.


Past organisers: Matteo Giordano, Daniel Moss, Deborah Sulem and Caroline Lawless.

Michaelmas Term 2023

Friday 27 October, 9-10am: Judith Rousseau & Matteo Giordano, Nonasymptotic convergence analysis for the unadjusted Langevin algorithm, by Durmus & Moulines (The Annals of Applied Probability, 2017) --> a follow-up to last week's presentation, focusing on some particular points

Wednesday 18 October, 3-4pm: Judith Rousseau, Nonasymptotic convergence analysis for the unadjusted Langevin algorithm, by Durmus & Moulines (The Annals of Applied Probability, 2017)

Wednesday 11 October, 3-4pm: Stefan Franssen, On free energy barriers in Gaussian priors and failure of cold start MCMC for high dimensional unimodal distributions, by Bandeira, Maillard, Nickl & Wang (Philosophical Transactions of The Royal Society A, 2023) --> a follow-up to last week's presentation, diving into the proof in more detail

Tuesday 27 September, 3-4pm: Matteo Giordano, On free energy barriers in Gaussian priors and failure of cold start MCMC for high dimensional unimodal distributions, by Bandeira, Maillard, Nickl & Wang (Philosophical Transactions of The Royal Society A, 2023)

Recent past meetings

Tuesday 27 June, 3-4pm: Matteo Giordano, On free energy barriers in Gaussian priors and failure of cold start MCMC for high dimensional unimodal distributions, by Bandeira, Maillard, Nickl & Wang (Philosophical Transactions of The Royal Society A, 2023)

Tuesday 13 June, 3-4pm: Jean-Baptiste Fermanian, High-Dimensional Multi-Task Averaging and Application to Kernel Mean Embedding, Marienwald, Fermanian & Blanchard (PMLR, 2021)

Tuesday 6 June, 3-4pm: Deborah Sulem, Scalable and accurate variational Bayes for high-dimensional binary regression models, by Fasano et al. (arXiv, 2022)

Tuesday 30 May, 3-4pm: Paul Rosa, Spectral Convergence of Graph Laplacian and Heat Kernel Reconstruction in L∞ from Random Samples, by Dunson et al. (Applied and Computational Harmonic Analysis, 2021)

Tuesday 2 May, 3-4:30pm: Xi Lin, Combining randomized and observational data via a power likelihood

Wednesday 22 March, 4pm - 5pm: Geoff Nicholls, Semi-Modular Bayesian Inference: what, why and how

Wednesday 8 March, 4pm - 5pm: Stefan Franssen, On Dependent Dirichlet Process for General Polish Spaces, by Iturriaga et al. (arXiv, 2022)

Tuesday 21 February, 4pm - 5pm: Daniel Moss, A General Framework for Cutting Feedback within Modularized Bayesian Inference, by Liu and Goudie (arXiv, 2022)

Tuesday 7 February, 4pm - 5pm: Deborah Sulem, Bayesian Analysis of Nonparanormal Graphical Models Using Rank-Likelihood, by Mulgrave and Ghosal (J. Statist. Planning Inf., 2023)

Tuesday 31 January, 4pm - 5pm: Judith Rousseau, Minimax Rate of Distribution Estimation on Unknown Submanifold under Adversarial Losses, by Tang and Yang (arXiv, 2022)

Tuesday 24 January, 4pm - 5pm: Paul Rosa, Stationary Kernels and Gaussian Processes on Lie Groups and Their Homogeneous Spaces I: the Compact Case, by Azangulov et al. (arXiv, 2022)

Tuesday 17 January, 4pm - 5pm: Matteo Giordano, Adaptive Inference Over Besov Spaces in the White Noise Model Using p-Exponential Priors, by Agapiou and Savva (arXiv preprint, 2022)

Tuesday 6 December, 4pm - 5pm: We will attend the Bayesian Analysis webinar for the discussion on the paper Deep Gaussian Processes for Calibration of Computer Models (Bayesian Analysis, 2022), by S. Marmin and M. Filippone.

Tuesday 29 November, 3.30pm - 4.40pm: Caroline Lawless, Clustering Consistency with Dirichlet Process Mixtures, by Ascolani et al. (arXiv preprint, 2022)

Tuesday 22 November, 3.30pm - 4.40pm: Judith Rousseau, Martingale Posterior Distributions, by Fong, Holmes and Walker (JRSSB, 2022)

Tuesday 8 November, 3.30pm - 4.40pm: Paul Rosa, Posterior Asymptotics in Wasserstein Metrics on the Real Line, by Chae, De Blasi and Walker (Electronic J. Statistics, 2021)

Wednesday 26 October 2022, 2pm - 3.30pm: We will attend the departmental seminar by Jianqing Fan (Princeton University) on Factor Augmented Sparse Throughput Deep ReLU Neural Networks for High Dimensional Regression

Wednesday 19 October 2022, 3.30pm - 4.30pm: Deborah Sulem, Variational Bayes Methods for Temporal Point Processes

Wednesday 12 October 2022, 3.30pm - 4.30pm: Daniel Moss, Differentially Private Partitioned Variational Inference, by Heikkilä et al. (arXiv preprint, 2022)

Wednesday 5 October 2022, 4pm - 5pm: Matteo Giordano, Coverage of Credible Intervals in Nonparametric Monotone Regression, by Chakraborty and Ghosal (Annals of Statistics, 2021)