Privacy models for machine learning and statistics
|Dates:||19 May 2021|
|Times:||14:00 - 15:00|
|What is it:||Seminar|
|Organiser:||Department of Computer Science|
|Who is it for:||University staff, Adults, Current University students|
Please join us for the following talk in Computer Science (online)
Joining details: https://zoom.us/j/91710725386
Data privacy studies how to take advantage of data without disclosing sensitive information. Privacy models, computational definitions of privacy, permit us to establish when data and models are considered safe with respect to disclosure. Data protection mechanisms are defined to be compliant with privacy models, and to achieve a good trade-off between disclosure risk and data utility. In this talk, I will give a brief summary of privacy models and introduce our research in this context. Some of our research focuses on masking methods for databases: that is, methods applied to data prior to their use for data analysis. Masking methods modify databases to avoid disclosure while trying to preserve data utility. A good masking method is one that achieves a good trade-off between disclosure risk and data utility. Other research focuses on methods to avoid disclosure from analyses of a database; for example, avoiding disclosure from a data-driven machine learning model.
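To make the idea of masking concrete, here is a minimal sketch (not from the talk) of one classic masking method, additive noise: each numeric value is perturbed with independent Gaussian noise before release. The function name and data are hypothetical illustrations; real masking methods (microaggregation, rank swapping, and others) are more sophisticated.

```python
import random

def mask_with_noise(records, sigma, seed=0):
    """Return a noisy copy of numeric records: each value gets
    independent Gaussian noise with standard deviation sigma."""
    rng = random.Random(seed)
    return [[v + rng.gauss(0.0, sigma) for v in row] for row in records]

# Hypothetical microdata: (age, salary) per individual.
original = [[30, 52000.0], [41, 61000.0], [36, 58000.0]]
masked = mask_with_noise(original, sigma=5.0)
```

The parameter sigma embodies the trade-off the abstract describes: a larger sigma lowers disclosure risk (masked records resemble the originals less) but also lowers data utility (aggregate statistics such as column means drift further from their true values).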
Vicenç Torra is currently a WASP Professor of AI at Umeå University (Sweden). He is an IEEE and EurAI Fellow. His fields of interest include data privacy, approximate reasoning (fuzzy sets, fuzzy measures/non-additive measures and integrals) and decision making. He has written seven books, including "Modeling Decisions" (with Y. Narukawa, Springer, 2007) and "Data Privacy" (Springer, 2017). He is the founder and editor of the Transactions on Data Privacy.
His web page is: http://www.mdai.cat/vtorra.
Role: WASP Professor of AI
Organisation: Umeå University (Sweden)