The Corcoran Memorial Lecture 20th February 2025
Title: E-Values, Anytime-Validity and Bayes
Abstract: E-values (wikipedia) are an alternative to p-values that effortlessly deal with optional continuation: with e-value based tests and the corresponding anytime-valid (AV) confidence intervals, one can always gather additional data while keeping statistically valid conclusions. Until 2019, publications on e-values were few and far between: the concept did not even have a name. Then, in the course of a few months, four papers by different research groups (including ours; see below) appeared on arXiv that firmly established e-values as an important statistical concept. By now there are hundreds of papers on e-values and there have been two international workshops on the topic. Allowing for optional continuation is just one way in which e-values provide more flexibility than p-values – they also allow the significance/confidence level alpha to be set after seeing the data, which is a mortal sin in classical testing. In this talk I will introduce e-values, e-processes and AV confidence intervals, and discuss how, like Bayesian approaches, they employ priors, while, unlike Bayesian approaches, they provide error guarantees even if these priors misalign with the data.
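For context, a minimal worked sketch of the two guarantees the abstract alludes to, using the standard textbook definition of an e-variable rather than any notation from the talk itself (the symbols E, E', H_0 and alpha below are illustrative):

    \text{An e-variable } E \text{ for the null } H_0 \text{ is nonnegative with } \mathbb{E}_P[E] \le 1 \text{ for all } P \in H_0.
    \text{Markov's inequality gives, for every } \alpha \in (0,1):\quad P\bigl(E \ge 1/\alpha\bigr) \;\le\; \alpha \,\mathbb{E}_P[E] \;\le\; \alpha,
    \text{so rejecting } H_0 \text{ whenever } E \ge 1/\alpha \text{ is a level-}\alpha\text{ test; since the bound holds for all } \alpha \text{ simultaneously, } \alpha \text{ may be chosen after seeing } E.
    \text{Optional continuation: if fresh data yield } E', \text{ an e-value conditionally on the first study, then}
    \mathbb{E}_P[E \cdot E'] \;=\; \mathbb{E}_P\bigl[E \cdot \mathbb{E}_P[E' \mid E]\bigr] \;\le\; \mathbb{E}_P[E] \;\le\; 1,
    \text{so the product } E \cdot E' \text{ is again an e-value.}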
Main literature:
G., De Heide, Koolen. Safe Testing. Journal of the Royal Statistical Society Series B, 2024 (first version appeared on arXiv 2019).
G. Beyond Neyman-Pearson: e-values enable hypothesis testing with a data-driven alpha. Proceedings of the National Academy of Sciences of the USA (PNAS), 2024.
Please register your place here.

Bio: Formerly head of the machine learning group at CWI in Amsterdam, Peter Grünwald is currently a member of CWI Management and full professor of statistics at the Mathematical Institute of Leiden University. He recently received an ERC Advanced Grant (2024) for designing a flexible theory of statistics based on e-values and e-processes, the core concepts of a novel, rapidly evolving theory that he helped found with the breakthrough paper Safe Testing (arXiv 2019, now JRSS B) and extended in The E-Posterior (Phil. Trans. Roy. Soc. A, 2023) and Beyond Neyman-Pearson (PNAS 2024).
From 2018 to 2022 Peter served as President of the Association for Computational Learning, the organization running COLT, the world’s premier annual conference on machine learning theory, which he chaired in 2015, having earlier chaired UAI, another major ML conference. He is editor of Foundations and Trends in Machine Learning, author of the book (and standard reference) The Minimum Description Length Principle (MIT Press, 2007), and co-recipient of the Van Dantzig prize, the highest Dutch award in statistics and operations research. He has frequently appeared in Dutch national media commenting on, for example, statistical issues in court cases. He has held a lifelong interest in, and published many papers on, the foundations of statistics.