WOAH 2021 Call for Papers

Digital technologies have brought myriad benefits for society, transforming how people connect, communicate and interact with each other. However, they have also enabled harmful and abusive behaviours, including interpersonal aggression, bullying and hate speech, to reach large audiences, and have amplified their negative effects. Already marginalised and vulnerable communities are often disproportionately at risk of receiving such abuse, compounding other social inequalities and injustices.

As academics, civil society, policymakers and tech companies devote more resources and effort to tackling online abuse, there is a pressing need for scientific research that critically and rigorously investigates how it is defined, detected and countered. Technical disciplines such as machine learning (ML), natural language processing (NLP) and statistics have made substantial advances in this field. However, concerns have been raised about the potential societal biases that many automated detection systems reflect, propagate and sometimes amplify. For example, many systems have different error rates for content produced by different groups of people (such as higher error rates on content produced in African American English) or perform better at detecting certain types of abuse than others. These issues are magnified by the lack of explainability and transparency in most abusive content detection systems. The limitations of current approaches to tackling and moderating online content are not purely engineering challenges, but raise fundamental social questions of fairness and harm. Any interventions that employ biased, inaccurate or brittle models to detect online abuse could end up exacerbating the social injustices they aim to counter.

For the fifth edition of the Workshop on Online Abuse and Harms (WOAH), we advance research in online abuse through our theme: Social Bias and Unfairness in Online Abuse Detection. We continue to emphasize the need for inter-, cross- and anti-disciplinary work on online abuse and harms, and invite paper submissions from a range of fields. These include but are not limited to: NLP, machine learning, computational social sciences, law, politics, psychology, network analysis, sociology and cultural studies. Continuing the tradition started at WOAH 4, we invite civil society, in particular individuals and organisations working with women and marginalised communities who are often disproportionately affected by online abuse, to submit reports, case studies, findings, data, and records of their lived experiences. We hope that through these engagements WOAH can directly address the issues faced by those on the front lines of tackling online abuse.

Joint session with the MWE workshop

WOAH is hosting a 1-hour joint session with the Multiword Expressions (MWE) workshop to explore how multiword expressions (e.g. “sweep under the rug”) may factor into the detection of online abuse. We believe that considering multiword expressions in abusive language can benefit both communities: it opens a new avenue of research for the WOAH community and provides an additional testbed for MWE processing technology. The main goal of the joint session is to pave the way towards the creation of a dataset for a shared task that can involve both communities. Submissions describing research on MWEs and abusive language, especially those introducing new datasets, are welcome.

Contributions

Academic papers (long and short)

Authors are invited to submit long papers of up to 8 pages of content and short papers of up to 4 pages of content, with unlimited pages for references. Accepted papers will be given an additional page of content to address reviewer comments. Previously published papers cannot be accepted, but papers currently under review at other venues are welcome.

Topics related to developing computational models and systems include but are not limited to:

  • NLP and Computer Vision models and methods for detecting abusive language online, including, but not limited to, hate speech, gender-based violence and cyberbullying

  • Application of NLP and Computer Vision tools to analyze social media content and other large datasets

  • NLP and Computer Vision models for cross-lingual abusive language detection

  • Computational models for multi-modal abuse detection

  • Development of corpora and annotation guidelines

  • Critical algorithm studies with a focus on content moderation technology

  • Human-Computer Interaction for abusive language detection systems

  • Best practices for using NLP and Computer Vision techniques in watchdog settings

  • Interpretability and social biases in content moderation technologies

Topics related to legal, social, and policy considerations of abusive language online include but are not limited to:

  • The social and personal consequences of being the target of abusive language and targeting others with abusive language

  • Assessment of current (computational and non-computational) methods of addressing abusive language

  • Legal ramifications of measures taken against abusive language use

  • Social implications of monitoring and moderating unacceptable content

  • Considerations of implemented and proposed policies for dealing with abusive language online, and of the technological means of doing so

Non-archival submissions

We welcome non-archival submissions (2 pages + 2 pages for references), which can include work previously published elsewhere.

Shared task on hateful memes

Please see the shared task page.

Civil society reports

We invite reports from civil society. These are non-archival submissions, and can include previously published work. They must be a minimum of two pages, with no upper limit. Please contact us if you have any queries about the civil society reports.

Submission Information

Submission link: https://www.softconf.com/acl2021/w02_woah2021

Submission deadline: May 3rd, 2021

Notification date: June 4th, 2021

Camera-ready date: June 30th, 2021

Submissions must follow the ACL-IJCNLP 2021 Submission Guidelines. The submission form also includes a conflict-of-interest section: please mark all potential reviewers who have been authors on the paper, who are from the same research group or institution, or who have seen versions of the paper or discussed it with you.

We request that all papers adhere to our submission policies. Submissions will be reviewed by the program committee. As reviewing is blind, please ensure that papers are anonymous. Self-references that reveal the author's identity, e.g., "We previously showed (Smith, 1991) ...", should be avoided. Instead, use citations such as "Smith previously showed (Smith, 1991) ...".