PBVS 2024 Multi-modal Aerial View Image Challenge - C (SAR Classification)

Reward $1,200


PBVS @ CVPR 2024

 

Multi-modal Aerial View Image Challenge - C (SAR Classification)

Important dates

  • 2024.01.19 Release of train data (inputs and outputs) and validation data (inputs only)
  • 2024.01.21 Validation server online
  • 2024.02.21 Final test data release (inputs only)
  • *UPDATE* 2024.03.05 Test output results submission deadline
  • *UPDATE* 2024.03.05 Fact sheets and code/executable submission deadline
  • 2024.03.07 Preliminary test results released to the participants
  • 2024.03.07 Paper submission deadline for entries from the challenge (submit through https://pbvs-workshop.github.io/submission.html)
  • 2024.06.07 PBVS workshop and challenges, results and award ceremony (CVPR 2024)

 Preliminary Results

  1. IQSKJSP
  2. MITHF
  3. GuanYu

We will begin reaching out to winners to request the information needed for prize and digital certificate distribution.

Challenge overview

Electro-optical (EO) sensors, which capture images in the visible spectrum (e.g., RGB and grayscale images), have been the most prevalent in computer vision research. However, other sensors, such as synthetic aperture radar (SAR), reconstruct images from radar signals and can complement EO sensors in cases where they fail to capture significant information (e.g., adverse weather conditions, no visible light).

An ideal automated target recognition system would be based on multi-sensor information to compensate for the individual shortcomings of either sensor platform. However, it is currently unclear if and how using EO and SAR data together can improve the performance of automatic target recognition (ATR) systems. Thus, the motivation for this challenge is to understand if and how data from one modality can improve the learning process for the other modality, and vice versa. Ideas from domain adaptation, transfer learning, or fusion are welcome.

Jointly with the PBVS workshop, we organize the Multi-modal Aerial View Imagery Challenge - Track C: the task of predicting the class label of a low-resolution aerial image based on a set of prior examples of images and their class labels. The challenge uses a new dataset:

The classification track focuses on the classification of SAR data. The goal is to train a classifier that is maximally accurate on a held-out test set of SAR chips from the 10 classes and that detects out-of-distribution targets. Participants are welcome to use both the EO and SAR training datasets to accomplish this task. We expect every participant to submit a description of their method after the final phase. We will score the methods not only by accuracy but also by novelty and creativity. We reserve the right to review participants' code and replicate their results in order to adhere to honor-code guidelines.

The aim is to obtain a network design / solution capable of producing high-quality classification results, with the best accuracy with respect to the ground-truth labels and the best out-of-distribution detection.

The top-ranked participants will receive awards and will be invited to follow the CVPR workshop submission guide to describe their solution and submit to the associated PBVS workshop at CVPR 2024.

The 20th IEEE Workshop on Perception Beyond the Visible Spectrum will be held in June 2024 in conjunction with CVPR 2024.

More details can be found in the Data section of the competition.

Paper Submission

Participants are encouraged to submit a paper discussing their method. Papers should follow the CVPR format. Please submit your paper through: https://pbvs-workshop.github.io/submission.html.

Awards Summary

 

AWARDS

  • First place awards (Track 1 & 2): $1,200.00
  • Second place awards (Track 1 & 2): $800.00
  • Third place awards (Track 1 & 2): $500.00

There are prizes for first, second and third place for each respective track. To be eligible for prizes, submissions must outperform previous years' submissions. Teams can also submit a paper detailing their solution to the workshop. 

Competition

The training data has already been made available to the registered participants.

Provided Resources

  • Scripts: With the dataset, the organizers will provide scripts to facilitate reproducibility of the results and performance evaluation once the validation server is online. More information is provided on the data page.
  • Contact: You can use the forum on the data description page (highly recommended!) or directly contact the challenge organizers by email (Spencer Low: mavoc.pbvs@gmail.com).
  • Check out our solutions from last year: The results from last year's competition are discussed in greater detail in our paper.


Evaluation

The evaluation consists of comparing the predictions with the reference ground-truth labels and of measuring out-of-distribution detection.

We use the standard classification accuracy (top-1, [%]) as often employed in the literature. For each dataset we report the results over all the processed images belonging to it. We use AUROC and TNR@95TPR to measure out-of-distribution detection performance. 
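
For reference, here is a minimal sketch of how these metrics can be computed, assuming NumPy and scikit-learn are available; the function names and the convention that higher scores indicate in-distribution samples are illustrative assumptions, not the official evaluation code.

    # Minimal sketch of the evaluation metrics (illustrative, not the official code).
    # Assumes higher scores mean "more likely in-distribution".
    import numpy as np
    from sklearn.metrics import roc_auc_score

    def top1_accuracy(y_true, y_pred):
        # Standard top-1 classification accuracy, in percent.
        return 100.0 * np.mean(np.asarray(y_true) == np.asarray(y_pred))

    def ood_metrics(scores_id, scores_ood):
        # AUROC: in-distribution (label 1) vs. out-of-distribution (label 0) scores.
        scores = np.concatenate([scores_id, scores_ood])
        labels = np.concatenate([np.ones(len(scores_id)), np.zeros(len(scores_ood))])
        auroc = roc_auc_score(labels, scores)
        # TNR@95TPR: pick the threshold that keeps 95% of in-distribution
        # samples above it, then measure the fraction of OOD samples below it.
        threshold = np.percentile(scores_id, 5)
        tnr_at_95tpr = float(np.mean(np.asarray(scores_ood) < threshold))
        return auroc, tnr_at_95tpr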

To submit your results, follow these steps:

  1. Process the set of input images and extract the image IDs (example: for "SAR_345.png" the image ID is 345). Together with your class ID predictions, create a results.csv file that has a header (first line) with the fixed content "image_id, class_id, score", followed on each subsequent line by the image_id, class_id, and score for each processed input image (example: for an input file named "SAR_345.png" with class ID prediction 2 and confidence score 5.4, results.csv should include the line "345, 2, 5.4"). The SCORE value is a floating-point number that indicates your confidence in the predicted classification; it can correspond to the final class weights or to the corresponding softmax value, and it is used to calculate the out-of-distribution scoring.
     Note that results.csv should contain the header plus a number of lines with image ID, class ID, and score equal to the number of input images in the set. An example of how such a results.csv file with 2 predictions should look:
       image_id, class_id, score
       345, 2, 5.4
       1345, 5, -1.2
  2. Create a ZIP archive containing all your predictions in the results.csv file mentioned above and a readme.txt (a packaging sketch follows this list). Note that the archive should not include folders; all files should be in the root of the archive.
  3. The readme.txt file should contain the following lines, filled in with the runtime per image (in seconds) of the solution, 1 or 0 depending on whether it employs a CPU or GPU at runtime, and 1 or 0 depending on whether it employs extra data for training the models:
       runtime per image [s] : 10.43
       CPU[1] / GPU[0] : 1
       Extra Data [1] / No Extra Data [0] : 1
       Other description : Solution based on A+ of Timofte et al. ACCV 2014. We have a Matlab/C++ implementation, and report single-core CPU runtime. The method was trained on Train 91 of Yang et al. and BSDS 200 of the Berkeley segmentation dataset.
     The last part of the file can contain any description you want of the code producing the provided results (dependencies, link, scripts, etc.).
     The provided information is very important, both during the validation period, when different teams can compare their results and solutions, and for establishing the final ranking of the teams and their methods.
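
For illustration, the steps above could be automated as follows; this is a hypothetical sketch (the prediction values, file-name pattern, and readme contents are placeholders), not an official submission script.

    # Hypothetical sketch of packaging a submission (values are placeholders).
    import zipfile

    def image_id(filename):
        # "SAR_345.png" -> 345
        return int(filename.split("_")[-1].split(".")[0])

    # (image_id, class_id, score) triples produced by your classifier.
    predictions = [(345, 2, 5.4), (1345, 5, -1.2)]

    # Write results.csv: fixed header plus one line per processed input image.
    with open("results.csv", "w") as f:
        f.write("image_id, class_id, score\n")
        for img_id, class_id, score in predictions:
            f.write(f"{img_id}, {class_id}, {score}\n")

    # Write readme.txt following the required format.
    with open("readme.txt", "w") as f:
        f.write("runtime per image [s] : 10.43\n")
        f.write("CPU[1] / GPU[0] : 1\n")
        f.write("Extra Data [1] / No Extra Data [0] : 1\n")
        f.write("Other description : <your description here>\n")

    # Both files go in the root of the archive (no folders).
    with zipfile.ZipFile("submission.zip", "w") as zf:
        zf.write("results.csv")
        zf.write("readme.txt")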

Perception Beyond the Visible Spectrum (PBVS) workshop and challenges @ CVPR 2024

 

Multi-Modal Aerial View Object Classification Challenge

These are the official rules (terms and conditions) that govern how the PBVS 2024 Multi-Modal Aerial View Challenge will operate. This challenge will be referred to simply as the "challenge" or the "contest" throughout the rest of these rules, and may be named the "PBVS" or "SAR-EO" benchmark, challenge, or contest elsewhere (our webpage, our documentation, other publications).

In these rules, "we", "our", and "us" refer to the organizers (Spencer Low (mavoc.pbvs [at] gmail.com), Oliver Nina (oliver.nina.1 [at] afresearchlab.com), Bob Lee (bob.lee [at] wbi-innovates.com)) of PBVS challenge and "you" and "yourself" refer to an eligible contest participant.

Note that these official rules can change during the contest up until the start of the final phase. If at any point during the contest a registered participant considers that they can no longer meet the eligibility criteria, or does not agree with changes to the official terms and conditions, then it is the participant's responsibility to email the organizers and request to be removed from all records. Once the contest is over, no change is possible to the status of the registered participants and their entries.

1. Contest description

This is a skill-based contest and chance plays no part in the determination of the winner(s).

The goal of the contest is to predict the label of an input (SAR) image and the challenge is called Multi-Modal Aerial View Imagery Classification.

Focus of the contest: a dataset adapted to the specific needs of the challenge will be made available. The images have a large diversity of contents. We will refer to this dataset, its partitions, and related materials as the PBVS Dataset. The dataset is divided into training, validation, and testing data. We focus on the quality of the results; the aim is to achieve predictions with the best accuracy with respect to the reference ground-truth labels. The participants will not have access to the ground-truth labels of the test data. The ranking of the participants is according to the performance of their methods on the test data. The participants will provide descriptions of their methods, details on (run)time complexity, platform, and (extra) data used for modeling. The winners will be determined according to their entries, the reproducibility of the results and uploaded codes or executables, and the above-mentioned criteria as judged by the organizers.

2. Tentative contest schedule

The registered participants will be notified by email if any changes are made to the schedule. The schedule is available on the PBVS workshop web page and on the Overview of the Codalab competition.


3. Eligibility

You are eligible to register and compete in this contest only if you meet all the following requirements:

  • you are an individual or a team of people willing to contribute to the open tasks, who accepts to follow the rules of this contest
  • you are not a PBVS challenge organizer or an employee of PBVS challenge organizers
  • you are not involved in any part of the administration and execution of this contest
  • you are not a first-degree relative, partner, household member of an employee or of an organizer of PBVS challenge or of a person involved in any part of the administration and execution of this contest

This contest is void wherever it is prohibited by law.

Entries submitted but not qualified to enter the contest are considered voluntary; for any entry you submit, PBVS reserves the right to evaluate it for scientific purposes, but under no circumstances will such entries qualify for sponsored prizes. If you are an employee of, affiliated with, or a representative of any of the PBVS challenge sponsors, then you are allowed to enter the contest and be ranked; however, if you rank among the winners with eligible entries, you will receive only a diploma award and none of the sponsored money, products, or travel grants.

NOTE: industry and research labs are allowed to submit entries and to compete in both the validation phase and final test phase. However, in order to get officially ranked on the final test leaderboard and to be eligible for awards the reproducibility of the results is a must and, therefore, the participants need to make available and submit their codes or executables. All the top entries will be checked for reproducibility and marked accordingly.

We will have 3 categories of entries in the final test ranking:
1) checked with publicly released codes 
2) checked with publicly released executable
3) unchecked (with or without released codes or executables)

 

4. Entry

In order to be eligible for judging, an entry must meet all the following requirements:

Entry contents: the participants are required to submit image results and code or executables. To be eligible for prizes, the top-ranking participants should publicly release their code or executables under a license of their choice, chosen among popular OSI-approved licenses (http://opensource.org/licenses), and make their code or executables accessible online for a period of not less than one year following the end of the challenge (this applies only to the top three ranked participants of the competition). To enter the final ranking, the participants will need to fill out a survey (fact sheet) briefly describing their method. All participants are also invited (but not required) to submit a paper for peer review and publication at the PBVS Workshop and Challenges (to be held in June 2024). To be eligible for prizes, the participant's score must improve on the baseline performance provided by the challenge organizers.

Use of data provided: all data provided by PBVS are freely available to the participants from the website of the challenge under the license terms provided with the data. The data are available only for open research and educational purposes, within the scope of the challenge. PBVS and the organizers make no warranties regarding the database, including but not limited to warranties of non-infringement or fitness for a particular purpose. The copyright of the images remains the property of their respective owners. By downloading and making use of the data, you accept full responsibility for using the data. You shall defend and indemnify PBVS and the organizers, including their employees, Trustees, officers, and agents, against any and all claims arising from your use of the data. You agree not to redistribute the data without this notice.

  • Test data: The organizers will use the test data for the final evaluation and ranking of the entries. The ground truth test data will not be made available to the participants during the contest.
  • Training and validation data: The organizers will make available to the participants a training dataset with ground truth images and a validation dataset without ground truth images. At the start of the final phase, the test data without ground truth images will be made available to the registered participants.
  • Post-challenge analyses: the organizers may also perform additional post-challenge analyses using extra data, but without effect on the challenge ranking.
  • Submission: entries will be submitted online via the CodaLab web platform. During the development phase, while the validation server is online, the participants will receive immediate feedback on validation data. The final evaluation will be computed on the test data submissions; the final scores will be released after the challenge is over.
  • Original work, permissions: in addition, by submitting your entry into this contest you confirm that, to the best of your knowledge, your entry is your own original work, and your entry only includes material that you own or that you have permission to use.

5. Potential use of entry

Other than what is set forth below, we are not claiming any ownership rights to your entry. However, by submitting your entry, you:

Are granting us an irrevocable, worldwide right and license, in exchange for your opportunity to participate in the contest and potential prize awards, for the duration of the protection of the copyrights to:

  1. Use, review, assess, test, and otherwise analyze results submitted or produced by your code or executable and other material submitted by you in connection with this contest and any future research or contests by the organizers; and
  2. Feature your entry and all its content in connection with the promotion of this contest in all media (now known or later developed);

Agree to sign any necessary documentation that may be required for us and our designees to make use of the rights you granted above;

Understand and acknowledge that we and other entrants may have developed or commissioned materials similar or identical to your submission and you waive any claims you may have resulting from any similarities to your entry;

Understand that we cannot control the incoming information you will disclose to our representatives or our co-sponsor's representatives in the course of entering, or what our representatives will remember about your entry. You also understand that we will not restrict the work assignments of representatives or our co-sponsor's representatives who have had access to your entry. By entering this contest, you agree that use of information in our representatives' or our co-sponsor's representatives' unaided memories in the development or deployment of our products or services does not create liability for us under this agreement or copyright or trade secret law;

Understand that you will not receive any compensation or credit for use of your entry, other than what is described in these official rules.

If you do not want to grant us these rights to your entry, please do not enter this contest.

6. Submission of entries

The participants will follow the instructions on the CodaLab website to submit entries.

The participants will be registered as mutually exclusive teams. Each team is allowed to submit only one single final entry. We are not responsible for entries that we do not receive for any reason, or for entries that we receive but do not work properly.

The participants must follow the instructions and the rules. We will automatically disqualify incomplete or invalid entries.

7. Judging the entries

The board of PBVS will select a panel of judges to judge the entries; all judges will be forbidden to enter the contest and will be experts in causality, statistics, machine learning, computer vision, or a related field, or experts in challenge organization. A list of the judges will be made available upon request. The judges will review all eligible entries received and select (three) winners for each or for both of the competition tracks based upon the prediction score on test data. The judges will verify that the winners complied with the rules, including that they documented their method by filling out a fact sheet.

The decisions of these judges are final and binding. The distribution of prizes according to the decisions made by the judges will be made within three (3) months after completion of the last round of the contest. If we do not receive a sufficient number of entries meeting the entry requirements, we may, at our discretion based on the above criteria, not award any or all of the contest prizes below. In the event of a tie between any eligible entries, the tie will be broken by giving preference to the earliest submission, using the time stamp of the submission platform.

8. Prizes and Awards

The financial sponsors of this contest are listed on PBVS 2024 workshop web page. There will be economic incentive prizes and travel grants for the winners (based on availability) to boost contest participation; these prizes will not require participants to enter into an IP agreement with any of the sponsors, disclose algorithms, or deliver source code to them. The participants affiliated with the industry sponsors agree to not receive any sponsored money, product, or travel grant in the case they will be among the winners.

Incentive prizes for each track (tentative; the prizes depend on funds attracted from the sponsors):

 

AWARDS

  • First place awards (Track 1 & 2): $1,000.00
  • Second place awards (Track 1 & 2): $750.00
  • Third place awards (Track 1 & 2): $500.00
  • Best paper award, first place (MAVOC Challenge): $1,000.00
  • Best paper award, second place (MAVOC Challenge): $750.00

 

There are prizes for first, second, and third place for each respective track. Teams can also submit a paper detailing their solution to the workshop. There will be an award for best paper and for the runner-up: one first-place paper and one runner-up across submissions from both tracks. Judges will prioritize novelty of the solution.

9. Other Sponsored Events

Publishing papers is optional and is not a condition for entering the challenge or winning prizes. The top-ranking participants are invited to submit a paper, following the CVPR 2024 author rules, for peer review at the PBVS workshop.

The results of the challenge will be published together with PBVS 2024 workshop papers in the 2024 CVPR Workshops proceedings.

The top-ranked participants and participants contributing interesting and novel methods to the challenge will be invited to be co-authors of the challenge report paper which will be published in the 2024 CVPR Workshops proceedings. A detailed description of the ranked solution, as well as the reproducibility of the results, are a must to be an eligible co-author.

10. Notifications

If there is any change to data, schedule, instructions of participation, or these rules, the registered participants will be notified on the competition page and/or at the email they provided with the registration.

Within seven days following the determination of winners, we will send a notification to the potential winners. If the notification that we send is returned as undeliverable, or you are otherwise unreachable for any reason, we may award the prize to an alternate winner, unless forbidden by applicable law.

The prize, such as money, product, or travel grant, will be delivered to the registered team leader, given that the team is not affiliated with any of the sponsors. It is up to the team to share the prize. If this person becomes unavailable for any reason, the prize will be delivered to the authorized account holder of the e-mail address used to make the winning entry.

If you are a potential winner, we may require you to sign a declaration of eligibility, use, indemnity, and liability/publicity release and applicable tax forms. If you are a potential winner and are a minor in your place of residence, we require that your parent or legal guardian be designated as the winner, and we may require that they sign a declaration of eligibility, use, indemnity, and liability/publicity release on your behalf. If you (or your parent/legal guardian, if applicable) do not sign and return these required forms within the time period listed in the winner notification message, we may disqualify you (or the designated parent/legal guardian) and select an alternate winner.

 


The terms and conditions are inspired by, and use verbatim text from, the 'Terms and conditions' of the ChaLearn Looking at People challenges, of the NTIRE 2017, 2018, 2019, and 2020 challenges, and of the AIM 2019 and 2020 challenges.

We've provided a very simple, robust boilerplate model. It uses a ResNet50 PyTorch model, but you can change it if you like. The zip file includes a README with simple instructions on how to run the baselines.
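
As a rough illustration of what such a baseline looks like, here is a minimal, hypothetical sketch of a ResNet50 classifier for the 10 classes, with max-softmax confidence usable as the score column in results.csv. It assumes torchvision and three-channel input tensors; it is not the provided boilerplate itself.

    # Hypothetical minimal ResNet50 baseline sketch (not the provided boilerplate).
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 10  # the 10 SAR target classes

    def build_model():
        # Start from ImageNet weights and replace the 1000-way head with a 10-way head.
        model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
        return model

    @torch.no_grad()
    def predict(model, images):
        # images: float tensor of shape (N, 3, H, W); SAR chips are single-channel,
        # so replicate the channel three times before calling this function.
        model.eval()
        probs = torch.softmax(model(images), dim=1)
        scores, class_ids = probs.max(dim=1)  # max-softmax confidence per image
        return class_ids.tolist(), scores.tolist()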

Previous results can be found at: https://openaccess.thecvf.com/content/CVPR2023W/PBVS/html/Low_Multi-Modal_Aerial_View_Object_Classification_Challenge_Results_-_PBVS_2023_CVPRW_2023_paper.html

Good luck!

 

 



 

Organizers

The contact persons and direct managers of the PBVS Multi-modal Aerial View Imagery Classification challenge:

  • Spencer Low (BYU) (mavoc.pbvs@gmail.com)
  • Oliver Nina (AFRL) (oliver.nina.1 [at] afresearchlab.com)
  • Angel Sappa (angel.sappa [at] cvc.uab.es)

The PBVS classification challenge on Multi-modal Aerial View Imagery is organized jointly with the PBVS 2024 workshop. The results of the challenge will be published at the PBVS 2024 workshop and in the CVPR 2024 Workshops proceedings.

More information about the PBVS workshop and challenge organizers is available here: https://pbvs-workshop.github.io

Development

Start: Jan. 26, 2025, 11:59 p.m.

Testing

Start: Feb. 21, 2024, 11:59 p.m.

Description: Score2 corresponds to your weighted final score; Score3 corresponds to your out-of-distribution (OOD) score. We will primarily use Score2 to select the top-performing submissions.

Competition Ends

March 10, 2025, 11:59 p.m.

Leaderboard
  # Username          Score
  1 mavoc_pbvs_admin  0.75
  2 gohard12          0.45
  3 Xhnxhn            0.44