OmniBenchmark Challenge ECCV@2022 - ImageNet1k Pre-training Track
The Omni-Realm Benchmark (OmniBenchmark) is a diverse (21 realm-wise datasets) and concise (no overlapping concepts across realm-wise datasets) benchmark for evaluating the generalization of pre-trained models across semantic super-concepts/realms, e.g. from mammals to aircraft.
Besides the public dataset we have released, OmniBenchmark also features a hidden test set, on which the OmniBenchmark Challenge is evaluated. Participants are required to submit final prediction files, which we then evaluate.
Important: this track is restricted to ImageNet1k pre-trained models, and you are required to submit your code for us to check in the final ranking. The end time of this competition is tentative.
To access the OmniBenchmark, please visit its GitHub repository. You can also find the detailed data description and usage in the download file.
Please refer to Evaluation to organize your result file.
Please check the Terms and Conditions for further rules and details.
If you have any questions, please contact us by sending an email to zhangyh024@gmail.com.
Evaluation
You should test your model on each realm-wise dataset, then submit your results following the instructions below.
For this challenge, we use the average top-1 accuracy across all realm-wise datasets as the evaluation criterion.
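Concretely, this criterion is just the mean of the per-realm top-1 accuracies. A minimal sketch in Python, where per_realm_acc is a hypothetical mapping from realm name to that realm's top-1 accuracy:

def average_top1(per_realm_acc):
    # per_realm_acc: e.g. {"activity": 0.71, "mammal": 0.65, ...}
    # The challenge score is the unweighted mean over all realms.
    return sum(per_realm_acc.values()) / len(per_realm_acc)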
Here we introduce how to organize your results as result.json for the hidden-test evaluation. We take the realm activity as an example.
If you follow the download instructions in the GitHub repository, you should see the following folder structure.
|--- activity # realm
| |--- activity.train
| | |--- images/ # data
| | | |--- *.jpg
| | | |--- ...
| | |--- record.txt # annotation
| |--- activity.val
| | |--- images/ # data
| | | |--- *.jpg
| | | |--- ...
| | |--- *.record.txt # annotation
| |--- activity.test # current hidden test
| | |--- images/ # data
| | | |--- *.jpg
| | | |--- ...
| | |--- *.record.txt # image_path + pseudo_class
|--- ...
*/test includes the images of the hidden test set for the realm activity. Since these images come without annotations, its */record.txt looks as follows.
#path pseudo_class
images/5787130938_18a0473d7a_c.jpg 0
images/7748325622_0bb01b1c5e_c.jpg 0
images/4048192444_34f7ab3982_c.jpg 0
images/2423934713_f3a93f606b_c.jpg 0
...
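A minimal sketch for reading the image paths out of such a record.txt, assuming only the format shown above (read_record is our own hypothetical helper, reused below):

def read_record(record_path):
    # Parse a record.txt: one "image_path pseudo_class" pair per line,
    # with a leading "#path pseudo_class" header that we skip.
    paths = []
    with open(record_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            path, _pseudo_class = line.split()
            paths.append(path)
    return paths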
Note that the "pseudo_class" of all images in *.test is "0". You should run your model on these images, forming a result.json file for the realm activity as follows.
{
    "activity":
    {
        "images/5787130938_18a0473d7a_c.jpg": 0,
        "images/7748325622_0bb01b1c5e_c.jpg": 1,
        "images/4048192444_34f7ab3982_c.jpg": 2,
        "images/2423934713_f3a93f606b_c.jpg": 3,
        ...
    }
}
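One possible way to build such a per-realm dictionary, reusing the read_record helper sketched above; predict stands in for your own inference function (image path in, predicted class index out):

import os

def realm_result(test_dir, predict):
    # test_dir: the *.test folder of one realm, e.g. "activity/activity.test".
    # Locate the record file (its exact prefix may vary, hence the suffix match).
    record = next(f for f in os.listdir(test_dir) if f.endswith("record.txt"))
    paths = read_record(os.path.join(test_dir, record))
    # Map each listed image path to the predicted class index.
    return {p: predict(os.path.join(test_dir, p)) for p in paths}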
After evaluating your model on all realms, the final result file submitted to this challenge is organized as follows. You should name it result.json.
"""
It is a json dictictionary:
{
"lower_case_realm_name":
{
"path":predicted_class,
"path":predicted_class,
...
},
"lower_case_realm_name":
{
"path":predicted_class,
"path":predicted_class,
...
},
...
"""
{
    "activity":
    {
        "images/5787130938_18a0473d7a_c.jpg": 0,
        "images/7748325622_0bb01b1c5e_c.jpg": 1,
        "images/4048192444_34f7ab3982_c.jpg": 2,
        "images/2423934713_f3a93f606b_c.jpg": 3,
        ...
    },
    ...
}
Finally, you should zip this result.json and submit the zip file to CodaLab. We give an example of this JSON file in the result.json demo.
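Putting the pieces together, here is a rough sketch of writing and zipping the submission file (all_results is assumed to already hold one dictionary per realm, e.g. built with realm_result above):

import json
import zipfile

def write_submission(all_results, json_name="result.json", zip_name="result.zip"):
    # all_results: {"activity": {path: class, ...}, "mammal": {...}, ...}
    with open(json_name, "w") as f:
        json.dump(all_results, f)
    # CodaLab expects the zipped result.json.
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(json_name)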
The OmniBenchmark Challenge 2022 will run for around 4 weeks (28 days) with one phase. The challenge is held by The 4th Workshop on Sensing, Understanding and Synthesizing Humans. Participants are restricted to training their algorithms on the publicly available OmniBenchmark training dataset. The public val set reflects the general circumstances of the hidden test set, which is used to maintain a public leaderboard. The final results will be revealed around Sep. 2022. Participants are expected to develop more robust and generalized methods for representation generalization.
When participating in the competition, please be reminded that:
Before downloading and using the OmniBenchmark dataset, please agree to the following terms of use. You, your employer and your affiliations are referred to as "User". The authors and their affiliations, SenseTime, are referred to as "Producer".
If you use the OmniBenchmark dataset, please cite:
@inproceedings{zhang2022omnibenchmark,
    title={Benchmarking Omni-Vision Representation through the Lens of Visual Realms},
    author={Yuanhan Zhang and Zhenfei Yin and Jing Shao and Ziwei Liu},
    year={2022},
    archivePrefix={arXiv},
}
The download link will be sent to you once your request is approved.
Copyright © 2022, OmniBenchmark Consortium. All rights reserved. Redistribution and use of the software in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
THIS SOFTWARE AND ANNOTATIONS ARE PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Start: July 1, 2022, midnight
Description: The online evaluation results must be submitted through this CodaLab competition site of the OmniBenchmark Challenge. Please refer to the Evaluation section above for how to organize your test results.
End: Oct. 31, 2022, 11:59 p.m.