Date: Wednesday, 5th November 2025
Time: 1pm to 4pm, with a break for refreshments
Location: Alan Turing building, room G.107
You are warmly invited to the upcoming Graham Dunn Seminar on Wednesday, 5th November 2025, featuring a series of thought-provoking talks on the role of Generative AI in statistical research and teaching. Hosted in person at the University of Manchester, this event will explore how AI is transforming evidence synthesis, statistical modelling, and educational practice. Whether you're a statistician, researcher, educator, or simply curious about the future of AI in academic statistical practice, this seminar promises engaging insights and lively discussion. All are welcome, so please join us! (Disclaimer: This paragraph was drafted with the assistance of Microsoft Copilot generative AI software, based on the name and date of the meeting and the speaker-provided titles and abstracts. The exact prompts used and the responses are available upon reasonable request.)
Please register to attend
Confirmed speakers:
“Using Generative AI tools within Research and Teaching”, Dr Vinny Davies, Senior Lecturer in Statistics, University of Glasgow.
Abstract: This talk will explore how Generative AI tools such as ChatGPT can be used in research and teaching. I will begin with a conceptual introduction to how large language models (LLMs) work, outlining their main strengths and weaknesses. I will then discuss a research project conducted entirely using ChatGPT, highlighting both the opportunities and limitations of this approach. Finally, I will introduce our MOOC, ‘Generative AI for Data Science’, and explain how we plan to integrate it into the curriculum.
“Using AI to help link complex data with biological mechanisms”, Prof Thomas House, Professor of Mathematical Sciences and Head of the Probability and Statistics Group, University of Manchester.
Abstract: For some time now, biological and medical research has been producing datasets that are much more complex than the data that motivated standard statistical methods, even very sophisticated multivariate theory. Examples include electronically collected data, genetic and other “-omic” data, networks, free text, and images. Here I will give a theoretical explanation of the problems these complex datasets pose, and some understanding of how methods currently grouped under “AI” (though often with more classical precursors) deal with this complexity. While these methods can be, and often are, used in a completely data-driven way, I will argue, with examples, that we might expect the greatest benefit to biological and medical research when they are combined with “classical” statistics and domain-specific expertise.
“Behind the hype – AI in systematic reviews”, Hannah O’Keefe, Research Associate, NIHR Innovation Observatory, Newcastle University.
Abstract: Artificial intelligence is the hot topic in research circles, particularly for those working in systematic reviews and evidence synthesis. However, conversations about review automation have been ongoing for around 20 years, so why aren’t we producing AI-driven reviews? And why has it become such a hot topic now? I will discuss how we got to this point, the current uses of AI in evidence synthesis, some of the concerns researchers have about using AI, and the pitfalls and biases AI can introduce.