WOAH 2020 | Research Papers

Overview

Digital technologies have brought myriad benefits for society, transforming how people connect, communicate and interact with each other. However, they have also enabled harmful and abusive behaviours, including interpersonal aggression, bullying and hate speech, to reach large audiences and have amplified their negative effects. These effects are further compounded because marginalised and vulnerable communities are disproportionately at risk of receiving abuse. As policymakers, civil society and tech companies devote more resources and effort to tackling online abuse, there is a pressing need for scientific research that critically and rigorously investigates how such abuse is defined, detected, moderated and countered.

Technical disciplines such as machine learning (ML), natural language processing (NLP) and statistics have made substantial advances in detecting and modelling online abuse, primarily by leveraging state-of-the-art ML and NLP techniques such as contextual word embeddings, transfer learning and graph embeddings. However, concerns have been raised about the societal biases that many of these ML-based detection systems reflect, propagate and sometimes amplify. These concerns are magnified by the models' lack of explainability and transparency. For example, many detection systems have different error rates for content produced by different groups of people, or perform better at detecting certain types of abuse. Such issues are not purely engineering challenges but raise fundamental questions of fairness and social harm: any intervention that employs biased models to detect and moderate online abuse could end up exacerbating the very injustices it aims to counter. For instance, women are 27 times more likely to be the target of online harassment [1]. As the field matures and automated detection systems become ubiquitous, it is crucial to develop reliable and robust tools for abusive content detection and moderation in collaboration with key stakeholders such as policymakers and civil society.

For the fourth edition of the Workshop on Online Abuse and Harms (WOAH) we advance research in online abuse through our theme: Social Bias and Unfairness in Online Abuse Detection. We continue to emphasise the need for inter-, cross- and anti-disciplinary work on online abuse and harms, and invite paper submissions from a range of fields. These include, but are not limited to: NLP, machine learning, computational social science, law, politics, psychology, network analysis, sociology and cultural studies. Additionally, in this iteration we invite civil society, in particular individuals and organisations working with women and marginalised communities who are often disproportionately affected by online abuse, to submit reports, case studies, findings, data and records of their lived experiences. We hope that through these engagements we can develop computational tools that address the issues faced by those on the front lines of tackling online abuse.


Timeline

Submission deadline: September 1, 2020

Notification date: September 29, 2020

Camera-ready date: October 14, 2020


Contributions

We invite long (8 pages) and short (4 pages) academic/research papers on any of the following topics.

Related to developing computational models and systems:

  • NLP models and methods for detecting abusive language online, including, but not limited to, hate speech, gender-based violence and cyberbullying
  • Application of NLP tools to analyse social media content and other large datasets
  • NLP models for cross-lingual abusive language detection
  • Computational models for multi-modal abuse detection
  • Development of corpora and annotation guidelines
  • Critical algorithm studies with a focus on content moderation technology
  • Human-Computer Interaction for abusive language detection systems
  • Best practices for using NLP techniques in watchdog settings
  • Interpretability and social biases in content moderation technologies


Related to legal, social, and policy considerations of abusive language online:

  • The social and personal consequences of being the target of abusive language and targeting others with abusive language
  • Assessment of current (computational and non-computational) methods of addressing abusive language
  • Legal ramifications of measures taken against abusive language use
  • Social implications of monitoring and moderating unacceptable content
  • Considerations of implemented and proposed policies for addressing abusive language online, and the technological means of dealing with it


Submission Information

Submission link: https://www.softconf.com/emnlp2020/WOAH4/

We will be using the EMNLP 2020 Submission Guidelines.

Authors are invited to submit full papers of up to 8 pages of content, with up to 2 additional pages for references. We also invite short papers of up to 4 pages of content, with up to 2 additional pages for references, and abstract submissions of up to 2 pages, with up to 1 additional page for references. Accepted papers will be given an additional page of content to address reviewer comments. We also welcome papers that describe systems.

Previously published papers cannot be accepted. Submissions will be reviewed by the program committee. As reviewing will be blind, please ensure that papers are anonymous. Self-references that reveal the author's identity, e.g., "We previously showed (Smith, 1991) ...", should be avoided. Instead, use citations such as "Smith previously showed (Smith, 1991) ...".

We have also included a conflict-of-interest section in the submission form. You should mark all potential reviewers who have co-authored the paper, are from the same research group or institution, or have seen versions of this paper or discussed it with you.

Finally, we request that all papers adhere to the submission policies outlined above.


References

[1] https://time.com/4049106/un-cyber-violence-physical-violence/