QUICK START GUIDE
Hi all,
Thanks for participating in the MiGA challenge, Online Recognition Track.
This quick start guide provides a basic introduction to the MiGA challenge and some supplementary materials to help you learn more about the iMiGUE and SMG datasets.
Time needed: 10 minutes (without model training) / 35 minutes (with model training).
1. Download the starting kit code
Download the starting kit code from our challenge on the Codalab platform (MiGA challenge page -> Participate tab -> Files tab -> Starting Kit). Unzip it and you will get the startkit folder. (This code is based on a skeleton-based online recognition baseline: https://github.com/mikecheninoulu/SMG)
2. Download the dataset
Download the dataset from our challenge on the Codalab platform (MiGA challenge page -> Participate tab -> Files tab -> Public Data). Unzip it and put the contents (the imigue_skeleton_train/ and imigue_skeleton_validate/ folders) into the startkit/datasets/ folder.
3. Follow the instructions in the README.md file
These cover: setting up the environment, preparing the dataset, training the model, and testing.
Note that CUDA 10.0 is mandatory (you need to install CUDA 10.0 and point the CUDA symbolic link to CUDA 10.0 if you are currently using CUDA 10.1 or a higher version).
4. Submission
The generated results will be saved as Submission.zip in the /experi/XXXX/submission/ folder; you can submit this Submission.zip file directly to the Codalab platform to get your score.
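The starting kit produces Submission.zip for you, but as an illustration of what the platform expects (a .zip wrapping the results file, not a bare .csv), here is a minimal hedged sketch. The file name predictions.csv and its columns are assumptions for illustration, not the kit's actual output format.

```python
import csv
import zipfile
from pathlib import Path

# Hypothetical sketch: pack a predictions CSV into the required .zip.
# File and column names below are assumed, not the kit's real format.
out_dir = Path("submission")
out_dir.mkdir(exist_ok=True)

csv_path = out_dir / "predictions.csv"  # assumed file name
with csv_path.open("w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["sample_id", "label"])  # assumed columns
    writer.writerow(["0001", "12"])

zip_path = out_dir / "Submission.zip"
with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
    # arcname keeps the CSV at the archive root, with no folder prefix
    zf.write(csv_path, arcname=csv_path.name)

print(zip_path.exists())  # True
```

If your own pipeline writes results elsewhere, the key point is only that the uploaded artifact is a .zip containing the results file.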
Q&A
About datasets
1. iMiGUE and SMG datasets
The iMiGUE and SMG datasets are two datasets focusing on micro-gestures, published at CVPR 2021 and in IJCV 2023, respectively. In the iMiGUE dataset, we annotate micro-gestures of tennis players in post-match interviews to predict whether they won or lost the match. In the SMG dataset, we annotate micro-gestures of subjects in a story-telling game to predict whether they are making up the content or repeating the story.
In this challenge, we use only the iMiGUE dataset for training, validation, and testing. We encourage participants to also validate their methods on the SMG dataset when submitting their scheme to the workshop.
The skeleton modality of the SMG dataset is available: https://drive.google.com/file/d/1kzDFunbJz5ZFvdIBpNDTxGV3kecyWyPK/view?usp=share_link
The sample code for processing the SMG dataset is available: https://drive.google.com/file/d/18DvG4pgrZI418ncq3fkvjCHJHxKsnZ3S/view?usp=share_link
Chen, H., Shi, H., Liu, X., Li, X., and Zhao, G. "SMG: A Micro-gesture Dataset Towards Spontaneous Body Gestures for Emotional Stress State Analysis." International Journal of Computer Vision (IJCV 2023), 2023: 1-21.
Liu, X., Shi, H., Chen, H., Yu, Z., Li, X., and Zhao, G. "iMiGUE: An Identity-free Video Dataset for Micro-gesture Understanding and Emotion Analysis." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), pp. 10631-10642, 2021.
2. Dataset introduction
A detailed introduction to the iMiGUE dataset is given in DATA_README.MD, including descriptions of each file and tools to process the data.
3. Training, validation, and testing sets
In Phase 1 (March 27th - May 2nd), we release the training and validation sets; you need to submit results on the validation set.
In Phase 2 (May 2nd - May 15th), we release the final testing set; you need to submit results on the testing set.
4. Extract your customized skeleton data
You can use the convert_FULLskeleton_to_LIGHTbody.py script provided along with the dataset.
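To give an intuition for what such a conversion does, here is a minimal hedged sketch: keep only a chosen subset of joint indices per frame. The joint indices, joint count, and per-joint format below are assumptions for illustration; the actual joint set and data layout used by convert_FULLskeleton_to_LIGHTbody.py may differ, so consult that script for the real conversion.

```python
# Hypothetical sketch of a full-to-light skeleton conversion:
# keep only a chosen subset of joint indices in each frame.
LIGHT_JOINTS = [0, 1, 2, 3, 4, 5, 6, 7]  # assumed upper-body joint indices

def to_light_body(frames):
    """frames: list of frames, each a list of (x, y, confidence) joints."""
    return [[frame[j] for j in LIGHT_JOINTS] for frame in frames]

# One dummy frame with 25 joints (a full-body layout, assumed for the demo).
full = [[(float(j), float(j), 1.0) for j in range(25)]]
light = to_light_body(full)
print(len(light[0]))  # 8
```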
About source code
1. To learn more about how to access the data, please see iMiGUEaccessSample.py in the starting kit.
2. To learn more about how to evaluate a model and make a submission, please see iMiGUEevaluation.py in the starting kit.
3. The starting kit is based on ST-GCN; please refer to its official repository for more training details.
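iMiGUEevaluation.py implements the official scoring; as a rough illustration of how online (per-frame) predictions can be scored against ground-truth labels, here is a minimal frame-level accuracy sketch. This is only an assumed simplification of the idea, not the challenge's actual metric.

```python
# Hedged sketch: score per-frame class predictions against ground truth.
# The official metric lives in iMiGUEevaluation.py and may differ.
def frame_accuracy(pred, truth):
    """pred, truth: equal-length sequences of per-frame class labels."""
    if len(pred) != len(truth):
        raise ValueError("sequences must have the same length")
    correct = sum(p == t for p, t in zip(pred, truth))
    return correct / len(truth)

print(frame_accuracy([0, 1, 1, 2], [0, 1, 2, 2]))  # 0.75
```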
About submission
1. Note that you need to submit a .zip file, not a .csv file, to Codalab.
2. You need to submit validation results in Phase 1 (March 27th - May 2nd) and final testing results in Phase 2 (May 2nd - May 15th) to get a score. The final ranking and awards are based only on the Phase 2 results.
For any further questions regarding the MiGA-IJCAI2023 challenge, feel free to contact the lead data chair, Haoyu Chen: chen.haoyu@oulu.fi
Posted by: mikecheninoulu @ April 12, 2023, 11:01 p.m.