The Algonauts Project 2023 - How the Human Brain Makes Sense of Natural Scenes Forum


> Unexpectedly low score for Colab test predictions

As a sanity check, I tried making a submission (#3 for me, titled submission03) using exactly the test predictions from the development kit Colab for subject 1, LH and RH (using the AlexNet layer features.12 downsampled to 100 PCA components). In the Colab script, the validation predictions have a mean correlation of around 0.25 (not noise-normalized), which, divided by the noise ceiling of around 0.65 reported in the NSD paper for subject 1, comes out close to the organizer baseline score of 40%. However, the noise-normalized test score for subject 1 in the submission is around 15-20%, which is much lower than expected. There are no errors in the scoring log, but there are the following warnings:
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
/opt/conda/lib/python3.9/site-packages/numpy/core/fromnumeric.py:3474: RuntimeWarning: Mean of empty slice.
return _methods._mean(a, axis=axis, dtype=dtype,
/opt/conda/lib/python3.9/site-packages/numpy/core/_methods.py:189: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
Could someone else or one of the organizers try this approach and see if they get the same behavior?
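For reference, the pipeline I submitted boils down to something like this (a simplified sketch on my end, not the exact Colab cells; the data-loading names are placeholders):

    import numpy as np
    import torch
    from torchvision.models import alexnet
    from torchvision.models.feature_extraction import create_feature_extractor
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from scipy.stats import pearsonr

    # Placeholders for the challenge data: preprocessed image tensors and
    # lh_fmri_train with shape (n_train_images, n_lh_vertices) for subject 1.
    # train_imgs, val_imgs, test_imgs, lh_fmri_train, lh_fmri_val = ...

    model = alexnet(weights='DEFAULT').eval()
    extractor = create_feature_extractor(model, return_nodes=['features.12'])

    def get_feats(imgs):
        with torch.no_grad():
            return extractor(imgs)['features.12'].flatten(start_dim=1).numpy()

    # Downsample the features.12 activations to 100 PCA components
    pca = PCA(n_components=100).fit(get_feats(train_imgs))
    X_train, X_val, X_test = (pca.transform(get_feats(x))
                              for x in (train_imgs, val_imgs, test_imgs))

    # One linear mapping from the PCA features to every LH vertex at once
    reg = LinearRegression().fit(X_train, lh_fmri_train)
    lh_pred_val = reg.predict(X_val)
    lh_pred_test = reg.predict(X_test)  # this is what went into the submission

    # Per-vertex Pearson correlation on the validation split
    corr = np.array([pearsonr(lh_pred_val[:, v], lh_fmri_val[:, v])[0]
                     for v in range(lh_fmri_val.shape[1])])
    print(corr.mean())  # ~0.25 for me, not noise-normalized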

Posted by: alex12341 @ June 20, 2023, 2:09 p.m.

Hi alex12341, thanks for the comment. We will check it out. Can you give more information on how you are predicting the other 7 subjects? Also, the warnings you see are nothing to worry about.
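(For what it's worth, those two RuntimeWarnings are exactly what NumPy emits when np.mean is evaluated over an empty array, so they presumably come from an empty selection somewhere in the scoring script rather than from your predictions. A two-line demo, purely illustrative:)

    import numpy as np

    # Reproduces both warnings from the scoring log: "Mean of empty slice."
    # followed by "invalid value encountered in double_scalars"; returns nan.
    print(np.mean(np.array([])))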

Posted by: blahner @ June 20, 2023, 2:24 p.m.

The other subjects were my attempt at recreating exactly the organizer baseline as described for the challenge submission (extracting features from all AlexNet layers, concatenating them, and reducing them to 100 PCA components). The other subjects (2-8) that use that method also have oddly low scores. However, those predictions used my own code, so I could have gotten something wrong (the ordering of the images, etc.), which is why I wanted to use the predictions straight from the Colab notebook. I would have tried the Colab predictions for all subjects, but it takes a while to run for each subject, so I only tried subject 1.
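Concretely, my recreation of the baseline feature step looks roughly like this (the layer list is my guess at "all alexnet layers" and may not match the organizers'; the regression step is the same as in the sketch above):

    import numpy as np
    import torch
    from torchvision.models import alexnet
    from torchvision.models.feature_extraction import create_feature_extractor
    from sklearn.decomposition import IncrementalPCA

    # My guess at "all AlexNet layers": the pool/ReLU outputs of the conv
    # stack plus the fully connected layers.
    layers = ['features.2', 'features.5', 'features.7', 'features.9',
              'features.12', 'classifier.2', 'classifier.5', 'classifier.6']
    model = alexnet(weights='DEFAULT').eval()
    extractor = create_feature_extractor(model, return_nodes=layers)

    def get_all_feats(imgs):
        with torch.no_grad():
            out = extractor(imgs)
        # Flatten each layer's activations and append them along axis 1
        return np.concatenate(
            [out[l].flatten(start_dim=1).numpy() for l in layers], axis=1)

    # The concatenated vector is large, so the PCA is fitted incrementally
    # (train_img_batches is a placeholder iterable of >=100-image batches)
    pca = IncrementalPCA(n_components=100, batch_size=512)
    for batch in train_img_batches:
        pca.partial_fit(get_all_feats(batch))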

Posted by: alex12341 @ June 20, 2023, 2:28 p.m.

Please have a look at the other thread on this forum, "What is 'OrganizerBaseline'", for details on how exactly the OrganizerBaseline score was computed. Two steps that are absent from your post: use all of the available data for training the linear regression, and remember to square your correlations before dividing by the noise ceiling (see the arXiv paper or the CodaLab "Evaluation" page; a short sketch of this computation follows the numbers below).
To aid in reproducing the OrganizerBaseline score, here's some info on subject 1:
LH correlation (mean, all vertices, un-normalized): 0.397283
RH correlation (mean, all vertices, un-normalized): 0.388741
LH correlation ^2 (mean, all vertices, un-normalized): 0.186479
RH correlation ^2 (mean, all vertices, un-normalized): 0.182993
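Schematically, the per-hemisphere scoring boils down to the following (a simplified illustration, not the exact scoring script; corr and nc stand for the per-vertex correlations and noise ceilings):

    import numpy as np

    def noise_normalized_score(corr, nc):
        # corr: per-vertex Pearson r between predicted and measured fMRI
        # nc:   per-vertex noise ceiling distributed with the challenge data
        # Square first, then normalize by the noise ceiling, then average.
        return np.mean(corr ** 2 / nc) * 100

Note that the squaring matters: a raw correlation of 0.25 squares to roughly 0.06, so intuitions based on un-squared correlations will considerably overestimate the normalized score.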
I hope this helps.
-The Algonauts Team

Posted by: blahner @ June 20, 2023, 4:11 p.m.

Are those correlations for the test set, or for a validation set held out from the available fMRI data?

Posted by: alex12341 @ June 20, 2023, 4:38 p.m.

Those are correlations against the test set.

Posted by: blahner @ June 20, 2023, 5:45 p.m.

Ok, thanks. So I think the problem is not in the scoring step but that the model is not the same: the average validation correlation I get is much lower than 0.4 (closer to 0.2-0.25), so I would imagine the correlation on the test set is as well. However, to the best of my knowledge I have replicated exactly what is described in the organizer baseline thread, so I'm not sure where the discrepancy could come from. Would it be possible to make the code used to create the organizer baseline publicly available?

Posted by: alex12341 @ June 21, 2023, 12:55 p.m.

Hi, just to follow up, would it be possible to post the code for the organizer baseline model?

Posted by: alex12341 @ July 4, 2023, 12:12 p.m.

Hi, at the moment we are quite busy organizing the upcoming Algonauts session at CCN, but we plan to release the code after the challenge is over. Thank you for your understanding!

The Algonauts Team

Posted by: giffordale95 @ July 4, 2023, 2:14 p.m.