Could it be that the sum of the number of vertices per ROI does not equal the total number of vertices?
Posted by: aborovykh @ May 31, 2023, 7:12 a.m.
Yes, you are correct: the total number of Challenge vertices is higher than the sum of the individual ROI vertices. Please let us know if you have further questions on this.
The Algonauts Challenge Team
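For readers who want to check this themselves, below is a minimal sketch (in the spirit of the tutorial notebook) that counts the Challenge vertices and compares them with the summed ROI vertex counts. The all-vertices mask path is the one mentioned later in this thread; the ROI mask file names are hypothetical placeholders and may differ from the actual Challenge data release.

```python
import numpy as np

subj, hemi = "subj01", "lh"
root = f"algonauts_2023_challenge_data/{subj}/roi_masks"

# Boolean mask selecting the Challenge vertices on the fsaverage surface
all_vertices = np.load(f"{root}/{hemi}.all-vertices_fsaverage_space.npy")
n_total = int(all_vertices.sum())

# Hypothetical ROI mask file names, used only for illustration
roi_files = [f"{hemi}.prf-visualrois_challenge_space.npy",
             f"{hemi}.floc-faces_challenge_space.npy"]
n_roi_sum = 0
for fname in roi_files:
    roi = np.load(f"{root}/{fname}")
    n_roi_sum += int((roi > 0).sum())  # vertices assigned to any ROI label in this group

print(f"Challenge vertices: {n_total}; summed ROI vertices: {n_roi_sum}")
# Expect n_roi_sum < n_total, since some Challenge vertices fall outside the ROI definitions.
```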
Posted by: giffordale95 @ May 31, 2023, 6:21 p.m.
Thank you :)
I had one more question: the model in the tutorial notebook gives me a Pearson correlation score of ~0.2, but the submission gives ~0.4. When I then train my own models locally, I get a Pearson correlation of ~0.35, and the submission again gives around ~0.4. Is this expected, or am I doing something wrong? I know that the submission score is R-squared and adjusted for the noise ceiling, but it still seems peculiar. I am new to the space, so this may be a very silly question.
Posted by: aborovykh @ June 3, 2023, 2:10 p.m.
The difference is due to two reasons:
1. The tutorial score is given by the Pearson correlation between true and predicted data, whereas the submission scores consist of the same Pearson correlations normalized by the noise ceiling. In other words, the prediction accuracy scores in the tutorial and the Challenge evaluation metric from the CodaLab submissions are computed differently, and are therefore not expected to be equal.
2. Furthermore, when you train and evaluate your models locally you are evaluating them on a portion of the Challenge training split, whereas when you submit your predictions to CodaLab your models are evaluated on the withheld Challenge test split: the evaluation score is expected to change when the data on which the models are evaluated changes.
The Algonauts Team
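To make the difference in point 1 concrete, here is a minimal sketch (not the official scoring code) contrasting the tutorial metric, i.e. the raw Pearson correlation per vertex, with a noise-ceiling-normalized score. The example data, the noise-ceiling values, and the final aggregation across vertices are placeholders.

```python
import numpy as np

def pearson_per_vertex(y_true, y_pred):
    """Pearson correlation for each vertex (column) of (images x vertices) arrays."""
    yt = y_true - y_true.mean(axis=0)
    yp = y_pred - y_pred.mean(axis=0)
    return (yt * yp).sum(axis=0) / (np.linalg.norm(yt, axis=0) * np.linalg.norm(yp, axis=0))

rng = np.random.default_rng(0)
y_true = rng.normal(size=(100, 20))            # placeholder fMRI responses
y_pred = y_true + rng.normal(size=(100, 20))   # placeholder predictions
noise_ceiling = np.full(20, 0.5)               # placeholder noise ceilings (fraction of variance)

r = pearson_per_vertex(y_true, y_pred)
tutorial_score = r.mean()                                 # what the tutorial notebook reports
normalized_score = np.mean(r ** 2 / noise_ceiling) * 100  # noise-normalized R^2, in percent
# Dividing by a noise ceiling below 1 can make the normalized score larger than the raw
# correlations seen locally, which is consistent with the ~0.2 vs ~0.4 observation above.
```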
Posted by: giffordale95 @ June 3, 2023, 2:29 p.m.
Thank you! And my final question: where could I download those noise ceilings? Are they part of the Natural Scenes Dataset?
Posted by: aborovykh @ June 7, 2023, 8:31 a.m.
Yes, you can download the noise ceiling signal-to-noise ratio (ncsnr) from the AWS NSD data release (/natural-scenes-dataset/nsddata_betas/ppdata/subj0X/fsaverage/betas_fithrf_GLMdenoise_RR/Xh.ncsnr.mgh), and then index the vertices retained for the Challenge using the vertices mask we provide in the Challenge data release (/algonauts_2023_challenge_data/subj0X/roi_masks/Xh.all-vertices_fsaverage_space.npy).
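As a rough illustration of the indexing step just described, here is a minimal sketch, assuming nibabel is installed and that the NSD and Challenge files have been downloaded locally to the paths shown (adjust subject and hemisphere as needed).

```python
import numpy as np
import nibabel as nib

subj, hemi = "subj01", "lh"

# ncsnr on the full fsaverage surface, from the NSD data release
ncsnr_full = np.squeeze(nib.load(
    f"natural-scenes-dataset/nsddata_betas/ppdata/{subj}/fsaverage/"
    f"betas_fithrf_GLMdenoise_RR/{hemi}.ncsnr.mgh").get_fdata())

# Mask of the fsaverage vertices retained for the Challenge
challenge_mask = np.load(
    f"algonauts_2023_challenge_data/{subj}/roi_masks/"
    f"{hemi}.all-vertices_fsaverage_space.npy").astype(bool)

ncsnr = ncsnr_full[challenge_mask]  # ncsnr restricted to the Challenge vertices
```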
Once you have the ncsnr of the Challenge vertices, you will have to transform it into the noise ceiling based on how many trials (from 1 to 3) exist for each of the image conditions on which you will evaluate your models (i.e., the images of the Challenge train split you hold out for validation). However, you cannot infer the trial number of each image condition from the Challenge data, as this data is averaged across trials of each image condition. Here (https://drive.google.com/drive/folders/1oKxacZMPiRULxROVOZaFutkpzm1VwDMk) you will find a vector variable for each Challenge subject indicating how many trials (from 1 to 3) were averaged to obtain the fMRI responses for each image condition of the Challenge train split. For the transformation from ncsnr to noise ceiling, please use the last equation of the "Noise ceiling" paragraph in the "Functional data (NSD)" section of the NSD data manual (https://cvnlab.slite.page/p/6CusMRYfk0/Functional-data-NSD).
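For what it's worth, the following is a minimal sketch of that ncsnr-to-noise-ceiling conversion, based on one reading of the equation referenced above (please verify it against the NSD data manual). Here `ncsnr` would be the Challenge-vertex ncsnr from the previous step, and `num_trials` the per-image trial-count vector (values 1 to 3) from the linked folder, restricted to the image conditions you evaluate on; both names are just placeholders.

```python
import numpy as np

def noise_ceiling_from_ncsnr(ncsnr, num_trials):
    """Noise ceiling per vertex, as a fraction of explainable variance.

    The NSD data manual expresses this as a percentage (multiply by 100).
    """
    a = np.sum(num_trials == 1)  # image conditions averaged over 1 trial
    b = np.sum(num_trials == 2)  # image conditions averaged over 2 trials
    c = np.sum(num_trials == 3)  # image conditions averaged over 3 trials
    # Effective 1/n term for trial-averaged responses with mixed trial counts
    inv_n = (a / 1 + b / 2 + c / 3) / (a + b + c)
    return ncsnr ** 2 / (ncsnr ** 2 + inv_n)

# Example usage (placeholder variable names):
# nc = noise_ceiling_from_ncsnr(ncsnr, num_trials)
```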
Posted by: giffordale95 @ June 7, 2023, 12:48 p.m.
Hi,
I've followed the above comment to calculate the noise ceilings. Has anyone been able to get a score close to their validation score?
There is a very big difference for my submission. Are there any further steps involved in calculating the metric?
Thanks
Posted by: fractalencoders @ July 4, 2023, 12:06 p.m.
Compared to my private score, I am getting a ~10 point lower score for subjects 5 and 7, but a ~10 point higher score for subjects 6 and 8; the all-subject mean score, however, is close to the online score. I guess this is fine; maybe subjects 5 and 7 had some beer the night before the last 3 sessions, and subjects 6 and 8 had some coffee.
Posted by: huze @ July 9, 2023, 9:53 a.m.