Monday, June 20th

14:00 Introduction
14:05 Invited talk: Su Lin Blodgett (Microsoft Research)
15:00 Lightning presentations
16:00 Break
16:30 Poster session
17:30 Panel discussion

Invited talk:

Su Lin Blodgett

Su Lin Blodgett is a senior researcher in the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research Montréal. Her research examines the ethical and social implications of language technologies, focusing on the complexities of language and language technologies in their social contexts and on supporting NLP practitioners in their ethical work. She completed her Ph.D. in computer science at the University of Massachusetts Amherst, where she was supported by an NSF Graduate Research Fellowship, and was named one of the 2022 100 Brilliant Women in AI Ethics.

Accepted papers

  • Disagreement space in argument analysis
    Annette Hautli-Janisz, Ella Schad and Chris Reed
  • Change My Mind: how Syntax-based Hate Speech Recognizer can Uncover Hidden Motivations based on Different Viewpoints
    Michele Mastromattei, Valerio Basile and Fabio Massimo Zanzotto
  • Analyzing the Effects of Annotator Gender Across NLP Tasks
    Laura Biester, Vanita Sharma, Ashkan Kazemi, Naihao Deng, Steven R. Wilson and Rada Mihalcea
  • Predicting Literary Quality: How Perspectivist Should We Be?
    Yuri Bizzoni, Ida Marie Lassen and Telma Peura
  • Bias Discovery Within Human Raters: A Case Study of the Jigsaw Dataset
    Marta Marchiori Manerba, Riccardo Guidotti, Lucia Passaro and Salvatore Ruggieri
  • The Viability of Best-worst Scaling and Categorical Data Label Annotation Tasks in Detecting Implicit Bias
    Parker Glenn, Cassandra L. Jacobs, Marvin Thielk and Yi Chu
  • What if Ground Truth is Subjective? Personalized Deep Neural Hate Speech Detection
    Kamil Kanclerz, Marcin Gruza, Konrad Karanowski, Julita Bielaniewicz, Piotr Milkowski, Jan Kocon and Przemyslaw Kazienko
  • StudEmo: A Non-aggregated Review Dataset for Personalized Emotion Recognition
    Anh Ngo, Agri Candri, Teddy Ferdinan, Jan Kocon and Wojciech Korczynski
  • Annotator Response Distributions as a Sampling Frame
    Christopher Homan, Tharindu Cyril Weerasooriya, Lora Aroyo and Chris Welty
  • Variation in the Expression and Annotation of Emotions: a Wizard of Oz Pilot Study
    Sofie Labat, Naomi Ackaert, Thomas Demeester and Veronique Hoste
  • Beyond Explanation: A Case for Exploratory Text Visualizations of Non-Aggregated, Annotated Datasets
    Lucy Havens, Benjamin Bach and Beatrice Alex
  • The Measuring Hate Speech Corpus: Leveraging Rasch Measurement Theory for Data Perspectivism
    Pratik S. Sachdeva, Renata Barreto, Geoff Bacon, Alexander Sahn, Claudia von Vacano and Chris Kennedy
  • Improving Label Quality by Jointly Modeling Items and Annotators
    Tharindu Cyril Weerasooriya, Alexander Ororbia and Christopher Homan
  • Lutma: a Frame-Making Tool for Collaborative FrameNet Development
    Tiago Timponi Torrent, Arthur Lorenzi, Ely Edison Matos, Frederico Belcavello, Marcelo Viridiano and Maucha Andrade Gamonal
  • The Case for Perspective in Multimodal Datasets
    Marcelo Viridiano, Tiago Timponi Torrent, Oliver Czulo, Arthur Lorenzi, Ely Matos and Frederico Belcavello