BEGIN:VCALENDAR
PRODID:-//Columba Systems Ltd//NONSGML CPNG/SpringViewer/ICal Output/3.3-
 M3//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VEVENT
DTSTAMP:20251203T171414Z
DTSTART:20251211T123000Z
DTEND:20251211T140000Z
SUMMARY:I'm Just A Large Language Model\, Please Excuse Me For Being Raci
 st: Racial Inequalities Manifested In Generative Artificial Intelligence
  - Dhiraj Murthy
UID:{http://www.columbasystems.com/customers/uom/gpp/eventid/}i16t-mhw1ub
 u7-uznzo8
DESCRIPTION:-- Please note that the start time of this seminar has been c
 hanged to 12.30pm --\n\nIn this CoDE lunchtime seminar\, Professor Dhira
 j Murthy (University of Texas at Austin) shares his research.\n\nSUMMARY
 \nGenerative Artificial Intelligence (GenAI) platforms like ChatGPT and 
 Gemini are being used throughout the world to generate text\, images\, m
 usic\, video\, and much more. Though GenAI systems have become advanced\
 , they reproduce racist biases and inequalities overtly and subtly. Spec
 ifically\, GenAI reproduces existing social inequalities – racism\, sexi
 sm\, homophobia\, transphobia\, etc. GenAI is trained from billions of h
 uman-produced data points which reflect structural inequalities. Trainin
 g data contain inherent biases and lack diversity. GenAI models are ulti
 mately Big Tech products and are overtrained with biased Global North-pr
 oduced Internet and social media content. Early iterations of GenAI like
  Microsoft’s Tay had to be taken offline due to hugely racist (and other
 ) outbursts. Musk’s GenAI Grok made the news earlier this year with anti
 semitic content calling for a new holocaust. Because GenAI training proc
 esses are black-boxed\, measuring the bias is difficult. Following the m
 ethods of small empirical experiments employed by Safiya Noble to explor
 e racism and racist content in search engines\, this talk introduces my 
 own experiments using ChatGPT\, DeepSeek\, and Meta AI regarding race-re
 lated prompts both in text and image creation. By examining these output
 s\, I render visible some of the deeply embedded biases in contemporary 
 generative large language models (LLMs) and discuss how these types of s
 mall-scale\, empirical experiments can be used to audit GenAI. Ultimatel
 y\, this study demonstrates some ways in which scholars of race and medi
 a can extend and develop theory regarding rapidly changing AI systems.  
 \n\nThis seminar will not be recorded.\n\nHOW TO JOIN US\nYou can attend
  this seminar in person at the University of Manchester or online via Te
 ams. If there are any changes to this event we will share them on this p
 age\, so please check back here before you travel.  \n\n-- Teams meeting
  details --\nMeeting ID: 316 512 406 210 66\nPasscode: ay7uS66Z\nhttps:/
 /teams.microsoft.com/meet/31651240621066?p=S48p0bOxN8Q758RHSX\n\nFINDING
  US AND ACCESSIBILITY\nSee links on this page to the University of Manch
 ester Maps page (for information on getting to the university and findin
 g the building) and AccessAble (for accessibility information). For any 
 other accessibility questions please contact us.
STATUS:TENTATIVE
TRANSP:TRANSPARENT
CLASS:PUBLIC
LOCATION:Alistair Ulph Boardroom (2nd Floor)\, Arthur Lewis Building\, Ma
 nchester
END:VEVENT
END:VCALENDAR
