OmniBenchmark Challenge ECCV@2022:ImageNet1k-Pretrain Track

Organized by ZhangYuanhan

First phase

Final Hidden Test
July 1, 2022, midnight UTC

Competition Ends
Oct. 31, 2022, 11:59 p.m. UTC


Overview

Omni-Realm Benchmark (OmniBenchmark) is a diverse (21 semantic realm-wise datasets) and concise (realm-wise datasets have no overlapping concepts) benchmark for evaluating how well pre-trained models generalize across semantic super-concepts/realms, e.g. from mammals to aircraft.

Besides the public dataset we have released, OmniBenchmark also features a hidden test set. The evaluation for the OmniBenchmark Challenge is performed on this hidden test set: users submit final prediction files, which we then evaluate.

Important: This track is restricted to ImageNet1k pre-trained models, and you are required to submit your code for us to check for the final ranking. The end time for this competition is tentative.

 

Data Download

To access OmniBenchmark, please visit its GitHub repository. You can also find a detailed data description and usage notes in the download file.

 

Submission

Please refer to Evaluation to organize your result file.

 

General Rules

Please check the Terms and Conditions for further rules and details.

 

Contact Us

If you have any questions, please contact us by sending an email to zhangyh024@gmail.com.

 

 

Evaluation

Evaluation Criteria

You should test your model on each realm-wise dataset, then submit your results following the instructions below.

For this challenge, we use the average top-1 accuracy across all realm-wise datasets as the evaluation criterion.
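As a concrete reading of this criterion, here is a minimal sketch of the scoring logic: per-realm top-1 accuracy, then the unweighted mean across realms. All names are illustrative; this is not the official scoring script.

```python
# Illustrative sketch of the evaluation criterion, assuming predictions and
# ground-truth labels are dicts mapping image path -> class id per realm.

def top1_accuracy(preds, labels):
    """Fraction of images whose predicted class matches the label."""
    correct = sum(1 for path, cls in labels.items() if preds.get(path) == cls)
    return correct / len(labels)

def average_top1(per_realm_preds, per_realm_labels):
    """Unweighted mean of top-1 accuracy over all realm-wise datasets."""
    accs = [top1_accuracy(per_realm_preds[realm], per_realm_labels[realm])
            for realm in per_realm_labels]
    return sum(accs) / len(accs)
```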

How to organize a result.json file for hidden test evaluation?

Evaluating models on the realm-wise datasets

Here we describe how to organize your results as result.json for hidden-test evaluation, taking the realm activity as an example.

If you follow the download instructions on GitHub, you should see the following folder structure.


        |--- activity # realm
        |   |--- activity.train
        |   |   |--- images/ # data
        |   |   |   |--- *.jpg
        |   |   |   |--- ...
        |   |   |--- record.txt # annotation
        |   |--- activity.val
        |   |   |--- images/ # data
        |   |   |   |--- *.jpg
        |   |   |   |--- ...
        |   |   |--- *.record.txt # annotation
        |   |--- activity.test # current hidden test
        |   |   |--- images/ # data
        |   |   |   |--- *.jpg
        |   |   |   |--- ...
        |   |   |--- *.record.txt # image_path + pseudo_class
        |--- ...
    


*/test contains the images of the hidden test set for the realm activity. Since it carries no annotations, its */record.txt looks as follows.


        #path pseudo_class
        images/5787130938_18a0473d7a_c.jpg 0
        images/7748325622_0bb01b1c5e_c.jpg 0
        images/4048192444_34f7ab3982_c.jpg 0
        images/2423934713_f3a93f606b_c.jpg 0
        ...
    


Note that the "pseudo_class" of all images in *.test is "0". You should evaluate your model on these images, producing a result.json file for the realm activity as follows.


    {
        "activity":
        {
            "images/5787130938_18a0473d7a_c.jpg":0,
            "images/7748325622_0bb01b1c5e_c.jpg":1,
            "images/4048192444_34f7ab3982_c.jpg":2,
            "images/2423934713_f3a93f606b_c.jpg":3,
            ...
        }
    }
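The steps above can be sketched in a few lines: read a realm's hidden-test record.txt (image paths plus a pseudo-class of "0"), run your classifier on each image, and collect predictions. Here `predict` is a hypothetical stand-in for your model's inference call, not part of the challenge toolkit.

```python
def build_realm_result(record_path, predict):
    """Return {image_path: predicted_class} for one realm's hidden test set.

    `predict` is assumed to map an image path to an integer class id.
    """
    result = {}
    with open(record_path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and the "#path pseudo_class" header.
            if not line or line.startswith("#"):
                continue
            image_path, _pseudo_class = line.rsplit(" ", 1)
            result[image_path] = int(predict(image_path))
    return result
```

The returned dict is then nested under its lower-case realm name, e.g. `{"activity": build_realm_result(...)}`.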
    

How to organize your result.json?

Step 1:

After evaluating your model on all realms, organize the final result file submitted to this challenge as follows. You should name it result.json.


        """
        It is a json dictictionary:
        {
            "lower_case_realm_name":
            {
                "path":predicted_class,
                "path":predicted_class,
                ...   
            },
            "lower_case_realm_name":
            {
                "path":predicted_class,
                "path":predicted_class,
                ...   
            },     
            ...     
        """
        {
            "activity":
            {
                "images/5787130938_18a0473d7a_c.jpg":0,
                "images/7748325622_0bb01b1c5e_c.jpg":1,
                "images/4048192444_34f7ab3982_c.jpg":2,
                "images/2423934713_f3a93f606b_c.jpg":3,
                ...
            }
            ...
        }
    
Step 2:

Finally, zip this result.json and submit the zip file to CodaLab. We give an example of this JSON file at the result.json demo.
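A minimal sketch of Step 2 under the assumptions above: merge the per-realm prediction dicts into a single result.json and zip it for upload. All file names except "result.json" are illustrative.

```python
import json
import zipfile

def write_submission(per_realm_results, json_path="result.json", zip_path="result.zip"):
    """per_realm_results: {"realm_name": {"image_path": predicted_class, ...}, ...}"""
    with open(json_path, "w") as f:
        json.dump(per_realm_results, f)
    # Store the file inside the archive under the plain name "result.json".
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(json_path, arcname="result.json")
    return zip_path
```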

Terms and Conditions

General Rules

The OmniBenchmark Challenge 2022 will run for around 4 weeks (28 days) in a single phase. The challenge is held by The 4th Workshop on Sensing, Understanding and Synthesizing Humans. Participants are restricted to training their algorithms on the publicly available OmniBenchmark training dataset. The public val set reflects the general characteristics of the hidden test set and is used to maintain a public leaderboard. The final results will be revealed around Sep. 2022. Participants are expected to develop more robust and generalized methods for representation generalization.

When participating in the competition, please be reminded that:

  • Results in the correct format must be uploaded to the evaluation server. The evaluation page lists detailed information regarding how results will be evaluated.
  • Each entry must be associated with a team and provide its affiliation.
  • Using multiple accounts to increase the number of submissions and private sharing outside teams are strictly prohibited.
  • The organizer reserves the absolute right to disqualify entries that are incomplete or illegible, late entries, or entries that violate the rules.
  • The organizer reserves the right to adjust the competition schedule and rules as circumstances require.
  • The best entry of each team will be displayed publicly on the leaderboard at all times.
  • To compete for awards, participants must fill out a fact sheet briefly describing their methods. There is no other publication requirement.

Terms of Use: OmniBenchmark Dataset

Before downloading and using the OmniBenchmark dataset, please agree to the following terms of use. You, your employer and your affiliations are referred to as "User". The authors and their affiliations, SenseTime, are referred to as "Producer".

  • The OmniBenchmark dataset is used for non-commercial/non-profit research purposes only.
  • All the images in OmniBenchmark dataset can be used for academic purposes. However, the Producer is NOT responsible for any further use in a defamatory, pornographic or any other unlawful manner, or violation of any applicable regulations or laws.
  • The User takes full responsibility for any consequence caused by his/her use of OmniBenchmark dataset in any form and shall defend and indemnify the Producer against all claims arising from such uses.
  • The User should NOT distribute, copy, reproduce, disclose, assign, sublicense, embed, host, transfer, sell, trade, or resell any portion of the OmniBenchmark dataset to any third party for any purpose.
  • The User can provide his/her research associates and colleagues with access to OmniBenchmark dataset (the download link or the dataset itself) provided that he/she agrees to be bound by these terms of use and guarantees that his/her research associates and colleagues agree to be bound by these terms of use.
  • The User should NOT remove or alter any copyright, trademark, or other proprietary notices appearing on or in copies of the OmniBenchmark dataset.
  • This agreement is effective for any potential User of the OmniBenchmark dataset upon the date that the User first accesses the OmniBenchmark dataset in any form.
  • The Producer reserves the right to terminate the User's access to the OmniBenchmark dataset at any time.
  • For using OmniBenchmark dataset, please cite the following paper:
    @inproceedings{zhang2022omnibenchmark,
        title={Benchmarking Omni-Vision Representation through the Lens of Visual Realms},
        author={Yuanhan Zhang and Zhenfei Yin and Jing Shao and Ziwei Liu},
        year={2022},
        archivePrefix={arXiv},
    }
    

The download link will be sent to you once your request is approved.

Software

Copyright © 2022, OmniBenchmark Consortium. All rights reserved. Redistribution and use of this software in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
  • Neither the name of the OmniBenchmark Consortium nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE AND ANNOTATIONS ARE PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Final Hidden Test

Start: July 1, 2022, midnight

Description: The online evaluation results must be submitted through this CodaLab competition site of the OmniBenchmark Challenge. Please refer to the Evaluation page for how to organize your test results.

Competition Ends

Oct. 31, 2022, 11:59 p.m.
