In the HEART-MET Activity Recognition Challenge, the task is to recognize human activities from videos. The videos are recorded from robots operating in a domestic environment and include activities such as reading a book, drinking water, and falling on the floor.
HEART-MET is one of the competitions in the METRICS project, which has received funding from European Union’s Horizon 2020 research and innovation program under grant agreement No 871252. The competition aims to benchmark assistive robots performing healthcare-related tasks in unstructured domestic environments.
Activity recognition is an important skill for a robot operating in an assistive capacity for persons who may have care needs. In addition to recognizing daily living activities, it is important for the robot to detect activities or events in which it may need to offer help or call for assistance. The datasets for this challenge were collected from real robots performing activity recognition in domestic environments, with several different volunteers performing the activities.
In order to participate, register for the competition on this page. Your request will be approved within 24 hours.
If you are participating as a team, the team leader can create a new team by clicking the Team tab, and team members can then request to join it. A team member must be registered for the challenge before requesting to join a team. See Competition Teams for more details.
The winner of this challenge will be given the opportunity to participate in the workshop at ICSR 2022, and the ICSR 2022 registration fee will be covered.
17.10.2022 - Start of competition
23.11.2022 - Start of the test phase of Codalab stage
30.11.2022 - End of Codalab stage
02.12.2022 - Deadline for submission of code and report
04.12.2022 - Winners of the Codalab stage are announced
You must submit a submission bundle (a .zip archive) containing a single JSON file named submission.json. This file should contain key-value pairs, in which the key is the name of the video and the value is a list containing the predicted class ID (an integer in the interval [0, 19]).
For example, if "video0000.mp4" is classified as "Drinking water", the key-value pair would be "video0000.mp4": [4]. A video sample considered to be an unknown activity can be represented as "video0001.mp4": [19].
{"video0000.mp4": [3], "video0001.mp4": [7], "video0002.mp4": [2], "video0003.mp4": [1], .... }
A sample submission file has been provided in the starting kit.
Use the following command to create the submission bundle:
zip -j submission.zip submission.json
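As an illustration, the following Python sketch builds the bundle from a hypothetical `predictions` dictionary (video file name mapped to predicted class ID); the variable name and example predictions are not part of the challenge materials:

```python
import json
import zipfile

# Hypothetical model output: video file name -> predicted class ID (0-19)
predictions = {"video0000.mp4": 4, "video0001.mp4": 19}

# Each value in submission.json must be a list containing the class ID
submission = {name: [class_id] for name, class_id in predictions.items()}

with open("submission.json", "w") as f:
    json.dump(submission, f)

# Equivalent to: zip -j submission.zip submission.json
with zipfile.ZipFile("submission.zip", "w") as zf:
    zf.write("submission.json")
```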
The final rank is based on the true positive rate.
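The official scoring script is not reproduced here, but a minimal sketch of a true positive rate computation, assuming flat lists of ground-truth and predicted class IDs, could look as follows. Whether the official score averages per class or over all samples is an assumption of this sketch:

```python
from collections import defaultdict

def true_positive_rate(y_true, y_pred):
    """Mean per-class true positive rate (TP / (TP + FN)), averaged over classes."""
    correct = defaultdict(int)  # true positives per class
    total = defaultdict(int)    # ground-truth samples per class
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    # Average the per-class recall over the classes present in the ground truth
    return sum(correct[c] / total[c] for c in total) / len(total)

# Example: perfect on class 4, half right on class 19 -> 0.75
print(true_positive_rate([4, 19, 19], [4, 19, 3]))
```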
This challenge is organized by HEART-MET and has received funding from the European Union’s Horizon 2020 research and innovation program under grant agreement No 871252 (METRICS).
The Codalab stage of the competition consists of two phases: Development and Testing. A training and validation set are available during the Development phase, and a test dataset is available during the Testing phase. The Remote Execution stage of the competition will take place after the Testing phase.
The dataset consists of video clips, each around 5-10 seconds long, showing a person performing an activity.
There are a total of 20 classes in this challenge: 19 activity classes plus one unknown activity class, which appears in both the training and the validation/test data.
The unknown activity class includes videos with unrelated content (e.g. a video with no person, or a person performing an activity outside of the 19 classes) and videos in which the data is corrupted (e.g. blurred or noisy videos). The corrupted videos may contain relevant activities or unrelated content.
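The clips are ordinary video files, so they can be loaded frame by frame with a library such as OpenCV. A minimal sketch (OpenCV is not part of the challenge materials, and the file name is only an example):

```python
import cv2

def load_frames(path):
    """Read all frames of a video clip into a list of BGR arrays."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:  # no more frames (or the file could not be read)
            break
        frames.append(frame)
    cap.release()
    return frames

frames = load_frames("video0000.mp4")  # example file name
if frames:
    print(f"{len(frames)} frames of shape {frames[0].shape}")
```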
The activity IDs and the corresponding classes contained in this dataset are listed below:
ID | Activity
---|---
0 | Opening the door and walking in/out
1 | Putting on a jacket
2 | Touching a hot surface
3 | Opening the fridge
4 | Drinking water
5 | Colliding against something
6 | Eating food with a fork
7 | Coughing or sneezing
8 | Wiping a table
9 | Reading a book
10 | Neck roll exercise
11 | Freehand exercise
12 | Lying down
13 | Limping
14 | Talking on the phone
15 | Using a computer
16 | Falling down
17 | Brushing teeth
18 | Writing
19 | Unknown activity
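For convenience, the table above can be transcribed into a Python dictionary for converting predicted class IDs into human-readable names (the dictionary name is illustrative, not part of the challenge code):

```python
# Class IDs and names as listed in the table above
ACTIVITY_NAMES = {
    0: "Opening the door and walking in/out",
    1: "Putting on a jacket",
    2: "Touching a hot surface",
    3: "Opening the fridge",
    4: "Drinking water",
    5: "Colliding against something",
    6: "Eating food with a fork",
    7: "Coughing or sneezing",
    8: "Wiping a table",
    9: "Reading a book",
    10: "Neck roll exercise",
    11: "Freehand exercise",
    12: "Lying down",
    13: "Limping",
    14: "Talking on the phone",
    15: "Using a computer",
    16: "Falling down",
    17: "Brushing teeth",
    18: "Writing",
    19: "Unknown activity",
}

print(ACTIVITY_NAMES[4])  # Drinking water
```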
The labels for the training set are provided as a single JSON file, consisting of key-value pairs of the video file name and the corresponding label(s). The labels are given as a list of one or two class IDs.
Example:
{"video0000.mp4": [2], "video0001.mp4": [19], "video0002.mp4": [5], .... }
Refer to the Evaluation section for the details.
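A minimal sketch for reading the labels file (the file name labels.json is an assumption; check the starting kit for the actual name):

```python
import json

with open("labels.json") as f:  # file name is an assumption
    labels = json.load(f)

for video, label_list in labels.items():
    # label_list contains one or two class IDs (see the Evaluation section)
    print(video, label_list)
```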
The data is split into training, validation and test datasets. The training and validation datasets are available during the Development phase, with labels provided only for the training dataset. You can evaluate your methods on the validation set by submitting your results to the Codalab server.
The test dataset will be available at the start of the Testing phase, during which you should submit results for the test set to the Codalab server.