24 January 2020 - Dirk Hovy: Seminar
Title: Hidden Biases. Ethical Issues in NLP, and What to Do about Them
Texts reflect their authors' demographic properties and biases, which in turn get magnified by statistical NLP models. This has unintended consequences for our analyses: if we do not pay attention to the biases these texts contain, we can easily draw the wrong conclusions and create disadvantages for our users.
In this talk, I will discuss several types of bias that affect NLP models, what their sources are, and potential countermeasures:
- bias stemming from the data, i.e., selection bias (when our texts do not adequately reflect the population we want to study), label bias (when the labels we use are skewed), and semantic bias (the latent stereotypes encoded in embeddings);
- bias deriving from the models themselves, i.e., their tendency to amplify any imbalances present in the data;
- design bias, i.e., the biases arising from our (the researchers') decisions about which topics to analyze, which data sets to use, and what to do with them.
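The semantic-bias item above can be illustrated with a minimal sketch. The vectors below are invented toy values, not real embeddings; the point is only to show how cosine similarity can reveal a profession word sitting closer to one gendered pronoun than the other, the kind of latent stereotype that embeddings trained on large corpora actually encode.

```python
import math

# Hypothetical 3-dimensional toy vectors, invented purely for illustration;
# in practice such biases are measured on embeddings trained on real text.
vectors = {
    "he":    [0.9, 0.1, 0.2],
    "she":   [0.1, 0.9, 0.2],
    "nurse": [0.2, 0.8, 0.3],
}

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of vector norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

sim_he = cosine(vectors["nurse"], vectors["he"])
sim_she = cosine(vectors["nurse"], vectors["she"])

# With these toy vectors, "nurse" lies closer to "she" than to "he",
# mimicking the gender stereotypes observed in real embedding spaces.
print(sim_she > sim_he)  # → True
```

Debiasing methods typically start from exactly this kind of similarity measurement before attempting to remove or neutralize the biased direction.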
For each bias, I will provide examples, discuss the possible ramifications for a wide range of applications, and show various ways to address and counteract these biases, ranging from simple labeling considerations to new types of models.
Dirk Hovy is associate professor of computer science at Bocconi University in Milan, Italy. Before that, he was faculty and a postdoc in Copenhagen, received a PhD from USC, and earned a master's degree in linguistics in Germany. He is interested in the interaction between language, society, and machine learning, or what language can tell us about society, and what computers can tell us about language. He has authored over 50 articles on these topics, including three best-paper awards. He has organized one conference and several workshops on abusive language, ethics in NLP, and computational social science. Outside of work, Dirk enjoys cooking, running, and leather-crafting. For updated information, see http://www.dirkhovy.com