Optimizing LES closure models through Reinforcement Learning
Dates: | 14 February 2023 |
Times: | 15:00 - 16:00 |
What is it: | Lecture |
Organiser: | Department of Mechanical, Aerospace and Civil Engineering |
Who is it for: | University staff, External researchers, Adults, Alumni, General public |
Speaker: | Dr. Andrea Beck |
Please note that this is an online-only event.
Abstract of lecture:
Reinforcement learning (RL) is considered the third learning paradigm, alongside unsupervised and supervised learning. In RL, the learning task is framed as a Markov Decision Process (MDP), which is solved by an optimal policy. This policy is either approximated directly or obtained through the evaluation of a learned action-value function. The learned policy represents the current control strategy for solving the MDP. Its parameters are updated by repeatedly sampling actions from the policy through interaction with the environment of the MDP, which emits reward signals intermittently, and by estimating the gradient of the objective with respect to these parameters. This optimization within the context of a dynamical system makes the RL approach somewhat orthogonal to supervised learning (SL): no training samples need to be known a priori; only a definition of a meaningful reward (which could be a single scalar value) is necessary. This more indirect guidance of the learning process makes RL methods relatively sample-inefficient, and their training stability is less well understood than for SL methods; however, their benefits have been demonstrated in a range of applications, from autonomous driving and strategic games to flow control.
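The policy-gradient idea sketched in the abstract can be illustrated with a toy example: a two-armed bandit treated as a one-step MDP, optimized with a REINFORCE-style update. The rewards, learning rate, and episode count below are illustrative choices, not taken from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy two-armed bandit as a one-step MDP: arm 1 pays 1.0, arm 0 pays 0.1.
# (Invented rewards for illustration only.)
REWARDS = np.array([0.1, 1.0])

theta = np.zeros(2)  # policy parameters (softmax action preferences)
lr = 0.1

for episode in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)       # sample an action from the current policy
    r = REWARDS[a]                   # scalar reward emitted by the environment
    grad_log_pi = -probs.copy()
    grad_log_pi[a] += 1.0            # grad of log pi(a | theta) for a softmax policy
    theta += lr * r * grad_log_pi    # REINFORCE update: reward-weighted score function

print(softmax(theta))  # probability mass shifts toward the better-paying arm
```

Note that no labeled training pairs are ever used: the single scalar reward per episode is the only learning signal, which is exactly the contrast with supervised learning drawn above.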
In this talk, I will present data-driven approaches to LES modeling for implicitly filtered high-order discretizations. Whereas supervised learning of the Reynolds force tensor based on non-local data can provide highly accurate results with higher a priori correlation than any existing closures, a posteriori stability remains an issue. I will give reasons for this and introduce reinforcement learning (RL) as an alternative optimization approach. Our initial experiments with this method suggest that it is much better suited to account for the uncertainties introduced by the numerical scheme and its induced filter form on the modeling task. For this coupled RL-DG framework, I will present discretization-aware model approaches for the LES equations (cf. Fig. 1) and discuss the future potential of these solver-in-the-loop optimizations.
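The solver-in-the-loop idea can be hinted at with a deliberately simplified sketch: an environment in which the agent supplies a closure coefficient at each step and is rewarded for matching a reference solution. The scalar-decay "solver", the hidden coefficient, and the interface below are all invented stand-ins for illustration, not the actual RL-DG/FLEXI framework:

```python
import numpy as np

class ToyClosureEnv:
    """Hypothetical solver-in-the-loop environment (a stand-in, not RL-DG/FLEXI).

    The 'solver' is a scalar decay proxy: the agent picks a closure
    coefficient c each step, and the reward penalizes the mismatch to a
    reference trajectory generated with a hidden 'true' coefficient.
    """
    C_TRUE = 0.17  # hidden target coefficient (illustrative value)

    def reset(self):
        self.u = 1.0        # coarse-grained model state
        self.u_ref = 1.0    # fine-grained reference ('DNS-like') state
        self.t = 0
        return np.array([self.u])

    def step(self, c):
        dt = 0.1
        self.u -= dt * c * self.u                     # modeled dissipation with agent's c
        self.u_ref -= dt * self.C_TRUE * self.u_ref   # reference dissipation
        self.t += 1
        reward = -(self.u - self.u_ref) ** 2          # reward: match the reference
        done = self.t >= 50
        return np.array([self.u]), reward, done

# A constant policy that happens to pick the hidden coefficient incurs no penalty.
env = ToyClosureEnv()
env.reset()
total, done = 0.0, False
while not done:
    _, r, done = env.step(0.17)
    total += r
print(total)  # → 0.0
```

The point of the sketch is the coupling: the reward is only available by running the (surrogate) solver forward, so the numerical scheme and its filter are part of the optimization loop rather than removed from it, as they would be in a priori supervised training.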
Bio of speaker:
Andrea obtained an M.Sc. degree in aerospace engineering with a focus on fluid dynamics from the Georgia Institute of Technology in Atlanta (USA) and a doctoral degree from the University of Stuttgart (Germany) in computational fluid dynamics (CFD). She held the Dorothea-Erxleben professorship at the Institute of Fluid Dynamics and Thermodynamics of the Otto von Guericke University in Magdeburg (Germany) from 2020 to 2022 and is currently Professor for Numerical Methods in Fluid Dynamics at the Faculty of Aerospace Engineering and Geodesy of the University of Stuttgart. Her areas of interest include numerical discretization schemes for multiscale, multiphysics problems, in particular high-order methods, high-performance computing and visualization, Large Eddy Simulation methods and models, shock-capturing schemes, uncertainty quantification methods, and machine learning. She is a co-developer of the open-source high-order Discontinuous Galerkin CFD framework FLEXI. Recent fields of application include uncertainty quantification of feedback loops in acoustics, particle-laden flow in turbomachines, wake-boundary layer interaction for transport aircraft at realistic flight conditions, shock-droplet interactions, and data-driven models for LES closures.
Speaker
Dr. Andrea Beck
Role: Professor for Numerical Methods in Fluid Dynamics
Organisation: University of Stuttgart