WOAH 2021

The 5th Workshop on Online Abuse and Harms

WOAH 2021 Call for Papers

Digital technologies have brought myriad benefits for society, transforming how people connect, communicate and interact with each other. However, they have also enabled harmful and abusive behaviours, including interpersonal aggression, bullying and hate speech, to reach large audiences and have amplified their negative effects. Already-marginalised and vulnerable communities are often disproportionately at risk of receiving such abuse, compounding other social inequalities and injustices.

As academics, civil society, policymakers and tech companies devote more resources and effort to tackling online abuse, there is a pressing need for scientific research that critically and rigorously investigates how it is defined, detected and countered. Technical disciplines such as machine learning (ML), natural language processing (NLP) and statistics have made substantial advances in this field. However, concerns have been raised about the societal biases that many automated detection systems reflect, propagate and sometimes amplify. For example, many systems have different error rates for content produced by different groups of people (such as higher error rates on content produced in African American English) or perform better at detecting certain types of abuse than others. These issues are magnified by the lack of explainability and transparency in most abusive content detection systems. The limitations of current approaches to tackling and moderating online content are not purely engineering challenges; they raise fundamental social questions of fairness and harm. Any intervention that employs biased, inaccurate or brittle models to detect online abuse could end up exacerbating the social injustices it aims to counter.
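As a concrete illustration of the per-group error analysis described above, here is a minimal Python sketch that computes false positive rates per group for a hypothetical classifier. All data, labels and group names below are illustrative assumptions, not drawn from any real system or dataset.

from collections import defaultdict

# (true_label, predicted_label, group) for a hypothetical classifier,
# where 1 = abusive and 0 = not abusive. The "group" could be, e.g.,
# the dialect of a post (African American English vs. other).
predictions = [
    (0, 1, "aae"), (0, 0, "aae"), (0, 1, "aae"), (1, 1, "aae"),
    (0, 0, "other"), (0, 0, "other"), (0, 1, "other"), (1, 1, "other"),
]

false_positives = defaultdict(int)  # benign posts wrongly flagged as abusive
benign_total = defaultdict(int)     # all benign posts, per group

for true, pred, group in predictions:
    if true == 0:
        benign_total[group] += 1
        if pred == 1:
            false_positives[group] += 1

# A large gap between groups signals the kind of disparate impact
# that the workshop theme addresses.
for group in benign_total:
    rate = false_positives[group] / benign_total[group]
    print(f"{group}: false positive rate = {rate:.2f}")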

For the fifth edition of the Workshop on Online Abuse and Harms (WOAH), we advance research in online abuse through our theme: Social Bias and Unfairness in Online Abuse Detection. We continue to emphasize the need for inter-, cross- and anti-disciplinary work on online abuse and harms, and invite paper submissions from a range of fields. These include but are not limited to: NLP, machine learning, computational social sciences, law, politics, psychology, network analysis, sociology and cultural studies. Continuing the tradition started at WOAH 4, we invite civil society, in particular individuals and organisations working with women and marginalised communities who are often disproportionately affected by online abuse, to submit reports, case studies, findings and data, and to record their lived experiences. We hope that through these engagements WOAH can directly address the issues faced by those on the front lines of tackling online abuse.

Joint session with the MWE workshop

WOAH is hosting a one-hour joint session with the Workshop on Multiword Expressions (MWE) to explore how multiword expressions (e.g. “sweep under the rug”) may factor into the detection of online abuse. We believe that considering multiword expressions in abusive language can benefit both the WOAH and MWE communities, opening a new avenue of research for the WOAH community and providing an additional testbed for MWE processing technology. The main goal of the joint session is to pave the way towards the creation of a dataset for a shared task that can involve both communities. Submissions describing research on MWEs and abusive language, especially those introducing new datasets, are welcome.

Academic papers (long and short)

Authors are invited to submit long papers of up to 8 pages of content and short papers of up to 4 pages of content, with unlimited pages for references. Accepted papers will be given an additional page of content to address reviewer comments. Previously published papers cannot be accepted, but papers currently under review at other venues are welcome.

Topics related to developing computational models and systems include but are not limited to:

  • NLP and Computer Vision models and methods for detecting abusive language online, including, but not limited to, hate speech, gender-based violence and cyberbullying

  • Application of NLP and Computer Vision tools to analyze social media content and other large data sets

  • NLP and Computer Vision models for cross-lingual abusive language detection

  • Computational models for multi-modal abuse detection

  • Development of corpora and annotation guidelines

  • Critical algorithm studies with a focus on content moderation technology

  • Human-Computer Interaction for abusive language detection systems

  • Best practices for using NLP and Computer Vision techniques in watchdog settings

  • Interpretability and social biases in content moderation technologies


Topics related to legal, social, and policy considerations of abusive language online include but are not limited to:

  • The social and personal consequences of being the target of abusive language, and of targeting others with abusive language

  • Assessment of current (computational and non-computational) methods of addressing abusive language

  • Legal ramifications of measures taken against abusive language use

  • Social implications of monitoring and moderating unacceptable content

  • Considerations of implemented and proposed policies for dealing with abusive language online, and of the technological means of doing so

Non-archival submissions

We welcome non-archival submissions (up to 2 pages of content, plus up to 2 pages for references), which may include work previously published elsewhere.

Shared task on hateful memes

Please see the WOAH 2021 shared task page.

Civil society reports

We invite reports from civil society. These are non-archival submissions, and can include previously published work. They must be a minimum of two pages, with no upper limit. Please contact us if you have any queries about the civil society reports.


Workshop Schedule (August 6th 2021, CEST)

All times are in Central European Summer Time (CEST); 15:00 CEST is 06:00 Pacific Daylight Time (PDT).

Opening remarks

  • Introduction, 15:00-15:10

Keynote Session I

  • Leon Derczynski, 15:10-15:55

  • Murali Shanmugavelan, 15:55-16:40

  • Break, 16:40-16:45

Paper Presentations

  • 1-minute paper storm, 16:45-17:10

  • Paper Q&A Panel I, 17:10-17:40 (BERTology and online abuse; Analysing models to improve real-world performance; Resources for non-English languages)

  • Paper Q&A Panel II, 17:40-18:10 (Fairness, bias and understandability; Datasets and language resources; Dynamics and nature of online abuse)

  • Break, 18:10-18:20

Multi-Word Expressions and Online Abuse

  • MWE Panel, 18:20-19:00

  • Break, 19:00-19:15

Keynote Session II

  • Deb Raji, 19:15-20:00

  • Keynote discussion panel with all keynotes, 20:00-20:45

  • Break, 20:45-21:00

Shared Task

  • Results session, 21:00-21:45

Closing remarks

  • Closing remarks, 21:45-22:00

Keynote speakers

We are delighted to announce our three keynote speakers.

Deborah (Deb) Raji is a Ph.D. student in the Fundamental Machine Learning Research group at the University of California, Berkeley. She is also a Mozilla Fellow who works closely with the Algorithmic Justice League initiative to highlight the harms caused by deployed AI products. She has also worked with Google’s Ethical AI team and been a research fellow at the Partnership on AI and at New York University’s AI Now Institute, working on various projects to operationalize ethical considerations in ML engineering practice. Recently, she was named to the Forbes 30 Under 30 list and MIT Technology Review’s 35 Innovators Under 35.

Murali Shanmugavelan’s academic research is concerned with the disavowal of caste in media and communication studies and in digital cultures. He is a Faculty Fellow in Race and Technology at Data & Society, and is currently working on the re-manifestation of caste and social hierarchies in digital cultures such as caste-hate speech, open data and platform economies. At Data & Society, he will scrutinise everyday casteism on the Internet, develop actionable policy recommendations, and build pedagogic content about caste in communication and technology studies. In addition, he has written numerous research and policy reports and briefs on ICT policies, internet governance and caste-related digital cultures.

Leon Derczynski is an associate professor at the IT University of Copenhagen. His research focuses on natural language processing and machine learning, with a specific focus on NLP for online communications that hinder democratic participation, including misinformation and abusive language. This work includes building datasets across many languages and examining data practices and analysis methods in this complex, multidisciplinary area of NLP.


Organisers


Contact

You can contact the organizers at organizers [at] workshopononlineabuse [dot] com

WOAH 2021 Sponsors


ACL Anti-Harassment Policy

We abide by the ACL Anti-Harassment Policy.