WOAH | The 5th Workshop on Online Abuse and Harms

Overview

The goal of the Workshop on Online Abuse and Harms (WOAH) is to advance research that develops, interrogates and applies computational methods for detecting, classifying and modelling online abuse. The Proceedings of the 4th Workshop (2020) are available on the ACL Anthology; please read, discuss and use them! WOAH 2021 is co-located with ACL and will take place on August 6th.

Background

Digital technologies have brought myriad benefits for society, transforming how people connect, communicate and interact with each other. However, they have also enabled harmful and abusive behaviours, including interpersonal aggression, bullying and hate speech, to reach large audiences and have amplified their negative effects. Already marginalised and vulnerable communities are often disproportionately at risk of receiving such abuse, compounding other social inequalities and injustices.

For the fifth edition of the Workshop on Online Abuse and Harms (5th WOAH!) we advance research in online abuse through our theme: Social Bias and Unfairness in Online Abuse Detection. We continue to emphasize the need for inter-, cross- and anti-disciplinary work on online abuse and harms, and welcome paper submissions from a range of fields and an interdisciplinary mix of perspectives. We are co-located with ACL 2021. We have published proceedings from every iteration of WOAH, which you can find on the ACL Anthology.

Workshop Schedule (August 6th 2021, CEST)

All times are in Central European Summer Time (CEST); 15:00 CEST is 06:00 Pacific Daylight Time (PDT). For the full detailed schedule, including all panel participants, see here.
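If you would like to convert any slot to your own time zone, here is a minimal sketch (not part of the official programme), assuming Python 3.9+ with the standard-library zoneinfo module:

  # Convert the workshop start time (15:00 CEST on August 6th 2021) to US Pacific time.
  from datetime import datetime
  from zoneinfo import ZoneInfo

  start = datetime(2021, 8, 6, 15, 0, tzinfo=ZoneInfo("Europe/Berlin"))   # 15:00 CEST
  print(start.astimezone(ZoneInfo("America/Los_Angeles")))                # 2021-08-06 06:00:00-07:00 (PDT)

Swap "America/Los_Angeles" for your own IANA time zone name to localise the rest of the schedule.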

Opening remarks

  • Introduction, 15:00-15:10

Keynote Session I

  • Leon Derczynski, 15:10-15:55

  • Murali Shanmugavelan, 15:55-16:40

  • Break, 16:40-16:45

Paper Presentations

  • 1-minute paper storm, 16:45-17:10

  • Paper Q&A Panel I, 17:10-17:40 (BERTology and online abuse; Analysing models to improve real-world performance; Resources for non-English languages)

  • Paper Q&A Panel II, 17:40-18:10 (Fairness, bias and understandability; Datasets and language resources; Dynamics and nature of online abuse)

  • Break, 18:10-18:20

Multi-Word Expressions and Online Abuse

  • MWE Panel, 18:20-19:00

  • Break, 19:00-19:15

Keynote Session II

  • Deb Raji, 19:15-20:00

  • Keynote discussion panel with all keynotes, 20:00-20:45

  • Break, 20:45-21:00

Shared Task

  • Results session, 21:00-21:45

Closing remarks

  • Closing remarks, 21:45-22:00

Keynote speakers

We are delighted to announce our three keynote speakers.

Deborah Raji is a Ph.D. student in the Fundamental Machine Learning Research group at the University of California, Berkeley. She is also a Mozilla Fellow who works closely with the Algorithmic Justice League initiative to highlight the harm caused by deployed AI products. She has worked with Google’s Ethical AI team and has been a research fellow at the Partnership on AI and at New York University’s AI Now Institute, working on various projects to operationalize ethical considerations in ML engineering practice. Recently, she was named to Forbes 30 Under 30 and MIT Technology Review’s 35 Innovators Under 35.

Murali Shanmugavelan’s academic research is concerned with the disavowal of caste in media and communication studies and in digital cultures. Murali is a Faculty Fellow in Race and Technology at Data and Society, and is currently working on the re-manifestation of caste and social hierarchies in digital cultures such as caste-hate speech, open data and platform economies. At Data and Society, Murali will scrutinise everyday casteism on the Internet, develop actionable policy recommendations, and build pedagogic content about caste in communications and technology studies. In addition, Murali has written numerous research reports and policy briefs on ICT policies, internet governance and caste-related digital cultures.

Leon Derczynski is an associate professor at the IT University of Copenhagen. His research centres on Natural Language Processing and Machine Learning, with a particular focus on NLP for online communications that hinder democratic participation, including misinformation and abusive language. This work includes building datasets across many languages and examining data practices and analysis methods in this complex, multi-disciplinary area of NLP.

Call for Papers

The Call for Papers is now closed. WOAH has four components:

  1. Regular paper submissions, both short (4 pages) and long (8 pages).

  2. Submissions from civil society (5 to 20 pages). Previously published work can be accepted as a non-archival submission.

  3. Shared Task on hateful memes.

  4. A multidisciplinary panel discussion.

Contact

You can contact the organizers at organizers [at] workshopononlineabuse [dot] com

Sponsors for WOAH 2021

ACL Anti-Harassment Policy

We abide by the ACL anti-harassment policy outlined here.