Shared task on Multimodal Hate Speech Event Detection at CASE@EACL2024

Organized by FarhanJafri


Shared task on Multimodal Hate Event Detection [CASE 2024 @ EACL 2024]

Hate speech detection is one of the most important aspects of event identification during political events like invasions. In the case of hate speech detection, the event is the occurrence of hate speech, the entity is the target of the hate speech, and the relationship is the connection between the two. Since multimodal content is widely prevalent across the internet, the detection of hate speech in text-embedded images is very important. Given a text-embedded image, this task aims to automatically identify the hate speech and its targets. This task will have two subtasks.

Sub-task A: Hate Speech Detection

The goal of this subtask is to identify whether a given text-embedded image contains hate speech. The text-embedded images in the dataset for this subtask are annotated for the presence of hate speech.

Sub-task B: Target Detection


The goal of this subtask is to identify the targets of hate speech in a given hateful text-embedded image. The text-embedded images are annotated for "community", "individual" and "organization" targets.

 

References


If you use the dataset, please cite as follows:

@inproceedings{bhandari2023crisishatemm,
title={CrisisHateMM: Multimodal Analysis of Directed and Undirected Hate Speech in Text-Embedded Images from Russia-Ukraine Conflict},
author={Bhandari, Aashish and Shah, Siddhant Bikram and Thapa, Surendrabikram and Naseem, Usman and Nasim, Mehwish},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2023}
}

Evaluation Criteria

All the images have a unique identifier called "index". The labels for training data are organized in the folder provided. For evaluation and testing, the submission format is mentioned below.

Subtask 1: Hate Speech Detection


The scoring script takes one prediction file as input. Your submission must be a single JSON file (one JSON object per line) which is then zipped. We will only read the first file in the zip, so do not zip multiple files together. Make sure the hate label is given as "1" and the non-hate label as "0".

IMPORTANT: The indices in the JSON file must be in ascending order.

{"index": 20568, "prediction": 1}
{"index": 30987, "prediction": 0}
{"index": 45805, "prediction": 0}

A sample file is available here. Each line of the JSON file maps an image (by its unique index) to a predicted label.
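A minimal sketch of producing such a file in Python (the predictions dict here is purely illustrative):

```python
import json

# Hypothetical predictions: map each image index to 0 (non-hate) or 1 (hate)
predictions = {45805: 0, 20568: 1, 30987: 0}

# Write one JSON object per line, with indices in ascending order
with open("submission.json", "w") as f:
    for index in sorted(predictions):
        f.write(json.dumps({"index": index, "prediction": predictions[index]}) + "\n")
```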

Subtask 2: Target Detection


The scoring script takes one prediction file as input. Your submission must be a single JSON file (one JSON object per line) which is then zipped. We will only read the first file in the zip, so do not zip multiple files together. Make sure the individual, community, and organization labels are given as "0", "1", and "2" respectively.

IMPORTANT: The indices in the JSON file must be in ascending order.

{"index": 23568, "prediction": 1}
{"index": 36987, "prediction": 2}
{"index": 45865, "prediction": 0}

A sample file is available here. Each line of the JSON file maps an image (by its unique index) to a predicted label.
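Before zipping, a quick sanity check along these lines can catch ordering and label mistakes (the function name and file layout are assumptions, not part of the official tooling):

```python
import json

def validate_submission(path, allowed_labels):
    """Check that a JSON-lines prediction file has ascending indices
    and only allowed label values; return the number of predictions."""
    indices = []
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            if record["prediction"] not in allowed_labels:
                raise ValueError(f"unexpected label in record: {record}")
            indices.append(record["index"])
    if indices != sorted(indices):
        raise ValueError("indices are not in ascending order")
    return len(indices)
```

For example, `validate_submission("submission.json", {0, 1, 2})` for Subtask 2, or `{0, 1}` for Subtask 1.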

For both Subtasks, the performance will be ranked by F1 score.
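For reference, binary F1 (as used for Subtask 1) is the harmonic mean of precision and recall; the averaging scheme applied to the three-class Subtask 2 (e.g. macro vs. weighted) is determined by the organizers' scoring script. A pure-Python sketch of binary F1:

```python
def binary_f1(gold, pred, positive=1):
    """F1 = 2*P*R / (P+R) for a single positive class."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    if tp == 0:
        return 0.0  # avoids division by zero when nothing is predicted/labeled positive
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```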

More about evaluation can be found here.

To submit, name your prediction file submission.json and zip it as ref.zip. Make sure the zip does not contain any sub-directories. Windows users can right-click the JSON file and compress it into a zip archive.
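The packaging step can also be scripted with the standard library; this sketch writes a hypothetical submission.json and zips it as ref.zip with no sub-directories:

```python
import json
import zipfile

# Hypothetical predictions, one JSON object per line in ascending index order
with open("submission.json", "w") as f:
    for index, label in [(23568, 1), (36987, 2), (45865, 0)]:
        f.write(json.dumps({"index": index, "prediction": label}) + "\n")

# ref.zip must contain submission.json at the top level (no sub-directories)
with zipfile.ZipFile("ref.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("submission.json", arcname="submission.json")
```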

Rules

The "Evaluation Phase" is meant for competitors to familiarize themselves with the Codalab site. We provide a Train set for training and a Dev set for testing; competitors are therefore not allowed to use the Dev set for training in this phase.

In the "Testing Phase", competitors are free to incorporate the Dev set into training. However, the Test set must not be used for training.

The use of external datasets is permitted, provided they do not violate any other terms and conditions. You should also describe their use in your paper write-up.

Organizers of the competition may publicize, analyze, and modify any content submitted as part of this task. Where appropriate, academic credit will be given to the submitting group (e.g., in a paper summarizing the task).

The organizers may penalize or disqualify participants for any violation of the above rules, or for misuse, unethical behavior, or other conduct they agree is unacceptable in a scientific competition in general and in this one in particular.

Terms and Conditions

Participants must agree to the following terms and conditions:

1. I/We agree not to share the competition data with any third party. If anyone needs the data, I/we will redirect them to the source website.
2. I/We consent to sharing my/our results with the organizers and other participants.

Training & Evaluation data available: Nov 1, 2023

Test data available: Nov 30, 2023

Test start: Nov 30, 2023

Test end: Jan 7, 2024

System Description Paper submissions due: Jan 13, 2024

Notification to authors after review: Jan 26, 2024

Camera ready: Jan 30, 2024

CASE Workshop: 21-22 Mar, 2024

Organizers

  • Surendrabikram Thapa (Virginia Tech, USA)
  • Farhan Ahmad Jafri (Jamia Millia Islamia, India)
  • Hariram Veeramani (UCLA, USA)
  • Kritesh Rauniyar (Delhi Technological University, India)
  • Usman Naseem (James Cook University, Australia)

If there are any questions related to the competition, please contact surendrabikram@vt.edu

 

ST1 Evaluation

Start: Nov. 1, 2023, midnight

Description: Develop and train your system, and try evaluating on development data.

ST1 Testing

Start: Nov. 30, 2023, midnight

Description: Run the trained system on test data and upload predictions for leaderboard scoring.

ST2 Evaluation

Start: Nov. 1, 2023, midnight

Description: Develop and train your system, and try evaluating on development data.

ST2 Testing

Start: Nov. 30, 2023, midnight

Description: Run the trained system on test data and upload predictions for leaderboard scoring.

Competition Ends

Jan. 7, 2024, 11:59 p.m.
