Until recently, the dominant paradigm in natural language processing (and other areas of artificial intelligence) has been to resolve observed label disagreement into a single “ground truth” or “gold standard” via aggregation, adjudication, or statistical means. However, in recent years the field has increasingly turned to subjective tasks, such as abuse detection or quality estimation, in which multiple points of view may be equally valid and a unique “ground truth” label may not exist. At the same time, as concerns have been raised about bias and fairness in AI, it has become increasingly apparent that an approach that assumes a single “ground truth” can erase minority voices.
Strong perspectivism in NLP (Cabitza et al., 2023) pursues the spirit of recent initiatives such as Data Statements (Bender and Friedman, 2018), extending their scope to the full NLP pipeline, including modelling, evaluation, and explanation.
The first three editions of the workshop “Perspectivist Approaches in NLP” (NLPerspectives) explored current and ongoing work on the collection and labelling of non-aggregated datasets, approaches to modelling and including these perspectives, and the evaluation and applications of multi-perspective models:
- NLPerspectives 2022, co-located with LREC 2022
- NLPerspectives 2023, co-located with ECAI 2023
- NLPerspectives 2024, co-located with LREC-COLING 2024
This website collects resources and pointers for the community interested in these topics. Among our goals is to build on the work begun with the Perspectivist Data Manifesto, including the creation of a repository of perspectivist datasets with non-aggregated labels for use by researchers in perspectivist NLP modelling.
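As a purely illustrative sketch of what “non-aggregated labels” means in practice (the field names and structure below are hypothetical and not taken from any specific dataset in the repository), a perspectivist record keeps every annotator's judgement rather than a single adjudicated one:

```python
from collections import Counter

# Hypothetical non-aggregated record: each annotator's label is preserved
# instead of being collapsed into one "gold" label.
record = {
    "text": "Example sentence to be labelled.",
    "annotations": [
        {"annotator_id": "A1", "label": "abusive"},
        {"annotator_id": "A2", "label": "not_abusive"},
        {"annotator_id": "A3", "label": "abusive"},
    ],
}

# Traditional aggregation would reduce this to a single majority label,
# discarding the minority perspective:
majority_label, _ = Counter(
    a["label"] for a in record["annotations"]
).most_common(1)[0]
print(majority_label)  # "abusive" -- annotator A2's view is erased
```

Releasing the full `annotations` list lets downstream researchers model each perspective (or group of perspectives) explicitly, rather than only the majority view.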