Programme

  • 9:00-9:30 Opening remarks
  • 9:30-10:30 Keynote talk: Barbara Plank
  • 10:30-11:00 Coffee break
  • 11:00-12:00 Lightning talks (2 minutes for each paper)
  • 12:00-13:30 Poster session
  • 13:30-15:00 Lunch break
  • 15:00-16:00 Remote presentations
  • 16:00-16:30 Coffee break
  • 16:30-17:30 Panel Discussion: Barbara Plank, Alicia Parrish, Massimo Poesio
  • 17:30-17:45 Closing

Invited talks:

Barbara Plank

“From Human Label Variation and Model Uncertainty to Error Detection (and Back)”

Barbara Plank is Professor of AI and Computational Linguistics at LMU Munich, where she heads the MaiNLP lab, co-directs the Center for Information and Language Processing (CIS), and holds a part-time professorship at the IT University of Copenhagen. She received her PhD in 2011 from the University of Groningen. She currently holds an ERC Consolidator Grant, is an ELLIS Scholar, and is VP-Elect of the Association for Computational Linguistics (ACL).

Accepted Papers

  • OrigamIM: A Dataset of Ambiguous Sentence Interpretations for Social Grounding and Implicit Language Understanding
    Liesbeth Allein and Marie-Francine Moens
  • Is a picture of a bird a bird? A mixed-methods approach to understanding diverse human perspectives and ambiguity in machine vision models
    Alicia Parrish, Susan Hao, Sarah Laszlo and Lora Aroyo
  • Wisdom of Instruction-Tuned Language Model Crowds. Exploring Model Label Variation
    Flor Miriam Plaza-del-Arco, Debora Nozza and Dirk Hovy
  • Linguistic Fingerprint in Transformer Models: How Language Variation Influences Parameter Selection in Irony Detection
    Michele Mastromattei and Fabio Massimo Zanzotto
  • Exploring Cross-Cultural Differences in English Hate Speech Annotations: From Dataset Construction to Analysis
    Nayeon Lee, Chani Jung, Junho Myung, Jiho Jin, Jose Camacho-Collados, Juho Kim and Alice Oh
  • Revisiting Annotation of Online Gender-Based Violence
    Gavin Abercrombie, Nikolas Vitsakis, Aiqi Jiang and Ioannis Konstas
  • Confidence-based Ensembling of Perspective-aware Models
    Silvia Casola, Soda Marem Lo, Valerio Basile, Simona Frenda, Alessandra Teresa Cignarella, Viviana Patti and Cristina Bosco
  • A Perspectivist Corpus of Numbers in Social Judgements
    Marlon May, Lucie Flek and Charles Welch
  • Intersectionality in AI Safety: Using Multilevel Models to Understand Diverse Perceptions of Safety in Conversational AI
    Christopher Homan, Gregory Serapio-Garcia, Lora Aroyo, Mark Diaz, Alicia Parrish, Vinodkumar Prabhakaran, Alex Taylor and Ding Wang
  • An Overview of Recent Approaches to Enable Diversity in Large Language Models through Aligning with Human Perspectives
    Benedetta Muscato, Chandana Sree Mala, Marta Marchiori Manerba, Gizem Gezici and Fosca Giannotti
  • Federated Learning for Exploiting Annotators’ Disagreements in Natural Language Processing
    Nuria Rodríguez Barroso, Eugenio Martínez Cámara, Jose Camacho-Collados, M. Victoria Luzón and Francisco Herrera
  • Quantifying the Persona Effect in LLM Simulations
    Tiancheng Hu and Nigel Collier
  • A Dataset for Multi-Scale Film Rating Inference from Reviews
    Frankie Robertson and Stefano Leone
  • Disagreement in Argumentation Annotation
    Anna Lindahl
  • Moral Disagreement over Serious Matters: Discovering the Knowledge Hidden in the Perspectives
    Anny D. Alvarez Nogales and Oscar Araque
  • Perspectives on Hate: General vs. Domain-Specific Models
    Giulia Rizzi, Michele Fontana and Elisabetta Fersini
  • Soft metrics for evaluation with disagreements: an assessment
    Giulia Rizzi, Elisa Leonardelli, Massimo Poesio, Alexandra Uma, Maja Pavlovic, Silviu Paun, Paolo Rosso and Elisabetta Fersini
  • Designing NLP Systems That Adapt to Diverse Worldviews
    Claudiu Creanga and Liviu P. Dinu
  • The Effectiveness of LLMs as Annotators: A Comparative Overview and Empirical Analysis of Direct Representation
    Maja Pavlovic and Massimo Poesio
  • What Does Perspectivism Mean? An Ethical and Methodological Counter-criticism
    Mathieu Valette
  • Towards Situated Evaluation for Perspectivist Machine Learning Problems: A Pilot Study of Image Aesthetic Quality Assessment
    Samuel Goree and David Crandall
  • Consistency is Key: Disentangling Label Variation in Natural Language Processing with Intra-Annotator Agreement
    Gavin Abercrombie, Amanda Cercas Curry, Tanvi Dinkar, Verena Rieser and Dirk Hovy

Proceedings

TBA