The 7th Workshop on Online Abuse and Harms (WOAH), held on July 13th at ACL 2023.
Thursday, July 13th 2023. All times are Toronto local time. All sessions are in PIER 7&8 unless indicated otherwise.
09:00 - 09:15: Opening Remarks
09:15 - 09:45: Dirk Hovy: “Whose Truth Is It Anyway?”
09:45 - 10:15: Vinodkumar Prabhakaran
10:15 - 11:45: Poster Session in the Frontenac Ballroom
Identity Construction in a Misogynist Incels Forum
Michael Yoder, Chloe Perry, David Brown, Kathleen Carley and Meredith Pruden
DeTexD: A Benchmark Dataset for Delicate Text Detection
Artem Chernodub, Serhii Yavnyi, Oleksii Sliusarenko, Jade Razzaghi, Yichen Mo and Knar Hovakimyan
Respectful or Toxic? Using Zero-Shot Learning with Language Models to Detect Hate Speech
Flor Miriam Plaza-del-Arco, Debora Nozza and Dirk Hovy
Benchmarking Offensive and Abusive Language in Dutch Tweets
Tommaso Caselli and Hylke Van Der Veen
Concept-Based Explanations to Test for False Causal Relationships Learned by Abusive Language Classifiers
Isar Nejadgholi, Svetlana Kiritchenko, Kathleen C. Fraser and Esma Balkir
Conversation Derailment Forecasting with Graph Convolutional Networks
Enas Altarawneh, Ameeta Agrawal, Michael Jenkin and Manos Papagelis
Resources for Automated Identification of Online Gender-Based Violence: A Systematic Review
Gavin Abercrombie, Aiqi Jiang, Poppy Gerrard-Abbott, Ioannis Konstas and Verena Rieser
Disentangling Disagreements on Offensiveness: A Cross-Cultural Study
Aida Mostafazadeh Davani, Mark Diaz, Dylan Baker and Vinodkumar Prabhakaran
Evaluating the Effectiveness of Natural Language Inference for Hate Speech Detection in Languages with Limited Labeled Data
Janis Goldzycher, Moritz Preisig, Chantal Amrhein and Gerold Schneider
Factoring Hate Speech: A New Annotation Framework to Study Hate Speech in Social Media
Gal Ron, Effi Levi, Odelia Oshri and Shaul Shenhav
Toward Disambiguating the Definitions of Abusive, Offensive, Toxic, and Uncivil Comments
Pia Pachinger, Julia Neidhardt, Allan Hanbury and Anna Maria Planitzer
Harmful Language Datasets: An Assessment of Robustness
Katerina Korre, John Pavlopoulos, Jeffrey Sorensen, Léo Laugier, Ion Androutsopoulos, Lucas Dixon and Alberto Barrón-Cedeño
Robust Hate Speech Detection in Social Media: A Cross-Dataset Empirical Evaluation
Dimosthenis Antypas and Jose Camacho-Collados
[Findings] Responsibility Perspective Transfer for Italian Femicide News
Gosse Minnema, Huiyuan Lai, Benedetta Muscato and Malvina Nissim
[Findings] Scientific Fact-Checking: A Survey of Resources and Approaches
Juraj Vladika and Florian Matthes
[Findings] A New Task and Dataset on Detecting Attacks on Human Rights Defenders
Shihao Ran, Di Lu, Aoife Cahill, Joel Tetreault and Alejandro Jaimes
[Findings] ClaimDiff: Comparing and Contrasting Claims on Contentious Issues
Miyoung Ko, Ingyu Seong, Hwaran Lee, Joonsuk Park, Minsuk Chang and Minjoon Seo
[Findings] Which Examples Should be Multiply Annotated? Active Learning When Annotators May Disagree
Connor T Baumler, Anna Sotnikova and Hal Daumé III
[Findings] Disagreement Matters: Preserving Label Diversity by Jointly Modeling Item and Annotator Label Distributions with DisCo
Tharindu Cyril Weerasooriya, Alexander Ororbia, Raj B Bhensadadia, Ashiqur KhudaBukhsh and Christopher Homan
[Findings] Debiasing should be Good and Bad
Robert A. Morabito, Jad Kabbara and Ali Emami
[Findings] COBRA Frames: Contextual Reasoning about Effects and Harms of Offensive Statements
Xuhui Zhou, Hao Zhu, Akhila Yerukola, Thomas Davidson, Jena D. Hwang, Swabha Swayamdipta and Maarten Sap
[Findings] Stereotypes and Smut: The (Mis)representation of Non-cisgender Identities by Text-to-Image Models
Eddie L. Ungless, Björn Ross and Anne Lauscher
11:45 - 12:15: Maarten Sap: “The Pivotal Role of Social Context in Toxic Language Detection”
12:15 - 13:30: Lunch Break
13:30 - 14:00: Su Lin Blodgett
14:00 - 14:30: Outstanding Paper Talks
Cross-Platform and Cross-Domain Abusive Language Detection with Supervised Contrastive Learning
Md Tawkat Islam Khondaker, Muhammad Abdul-Mageed and Laks Lakshmanan, V.S.
Aporophobia: An Overlooked Type of Toxic Language Targeting the Poor
Svetlana Kiritchenko, Georgina Curto Rex, Isar Nejadgholi and Kathleen C Fraser
14:30 - 15:00: Lightning Talks for remote attendees
Towards Safer Communities: Detecting Aggression and Offensive Language in Code-Mixed Tweets to Combat Cyberbullying
Nazia Nafis, Diptesh Kanojia, Naveen Saini and Rudra Murthy
Towards Weakly-Supervised Hate Speech Classification Across Datasets
Yiping Jin, Leo Wanner, Vishakha Kadam and Alexander Shvets
Relationality and Offensive Speech: A Research Agenda
Razvan Amironesei and Mark Diaz
Auditing YouTube Content Moderation in Low Resource Language Settings
Hellina Hailu Nigatu and Inioluwa Raji
ExtremeBB: A Database for Large-Scale Research into Online Hate, Harassment, the Manosphere and Extremism
Anh V. Vu, Lydia Wilson, Yi Ting Chua, Ilia Shumailov and Ross Anderson
HOMO-MEX: A Mexican Spanish Annotated Corpus for LGBT+phobia Detection on Twitter
Juan Vásquez, Scott Andersen, Gemma Bel-Enguix, Helena Gómez-Adorno and Sergio-Luis Ojeda-Trueba
A Cross-Lingual Study of Homotransphobia on Twitter
Davide Locatelli, Greta Damo and Debora Nozza
[Findings] The State of Profanity Obfuscation in Natural Language Processing Scientific Publications
Debora Nozza and Dirk Hovy
[Findings] It’s not Sexually Suggestive; It’s Educative | Separating Sex Education from Suggestive Content on TikTok videos
Enfa Rose George and Mihai Surdeanu
[Findings] Playing the Part of the Sharp Bully: Generating Adversarial Examples for Implicit Hate Speech Detection
Nicolas Benjamin Ocampo, Elena Cabrio and Serena Villata
15:00 - 15:30: Lauren Klein: “Historical Data, Real-World Harms”
15:30 - 16:15: Coffee Break
16:15 - 17:15: Panel Discussion on the special theme with all invited speakers
17:15 - 17:25: Closing Remarks
From 17:30: Workshop Drinks; external location to be announced at the workshop