This is fMRI data from Miyawaki Y et al. (2008), "Visual image reconstruction from human brain activity using a combination of multiscale local image decoders," Neuron, Dec 10;60(5):915-29. In this study, a custom algorithm, sparse multinomial logistic regression, was used to train decoders at multiple spatial scales; the decoder outputs were then combined with a linear model to reconstruct the presented stimuli.
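The authors' exact sparse multinomial logistic regression (SMLR) is a Bayesian algorithm with automatic relevance determination, which is not part of common ML toolkits. As a rough, hedged stand-in, an L1-penalized multinomial logistic regression (here via scikit-learn, on synthetic data standing in for voxel responses) illustrates the core idea: the sparsity penalty drives most voxel weights to zero, so each decoder relies on a small subset of voxels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data: 120 "trials" x 50 "voxels", 3 stimulus classes.
# (Shapes and class count are illustrative, not the actual dataset's.)
n_trials, n_voxels, n_classes = 120, 50, 3
y = rng.integers(0, n_classes, size=n_trials)
X = rng.normal(size=(n_trials, n_voxels))
X[:, :5] += y[:, None] * 1.5  # make the first 5 voxels informative

# L1-penalized multinomial logistic regression: a sparse decoder in the
# spirit of SMLR, though not the authors' exact algorithm.
clf = LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000)
clf.fit(X, y)

sparsity = float(np.mean(clf.coef_ == 0))
print(f"fraction of zero weights: {sparsity:.2f}")
print(f"training accuracy: {clf.score(X, y):.2f}")
```

In the paper, many such decoders are trained at different spatial scales (1x1, 1x2, 2x1, 2x2 patch groupings) and their predictions combined linearly; the sketch above corresponds to a single local decoder.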
The experiment consisted of a human subject viewing contrast-defined, flickering 10x10 image patches. There were two types of image-viewing task: 1. geometric-shape and alphabet-letter viewing and 2. random image viewing. Images were presented in a block design with rest periods between presentations. For 10x10 images defining common geometric shapes or alphabet letters, each presentation lasted 12 s, followed by a 12 s rest; for random 10x10 image patches, each presentation lasted 6 s, followed by a 6 s rest.
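For decoding, each acquired volume must be labeled according to this block structure. A minimal sketch, assuming a hypothetical 2 s TR (the TR is not stated above) and the random-image timing (6 s on, 6 s off), shows how a per-volume label vector might be built:

```python
import numpy as np

# Assumed timing for illustration only: 2 s TR is NOT given in the text.
TR = 2.0
stim_s, rest_s = 6.0, 6.0   # random-image condition: 6 s on, 6 s rest
n_blocks = 4                 # illustrative number of stimulus blocks

vols_per_stim = int(stim_s / TR)   # volumes per stimulus block
vols_per_rest = int(rest_s / TR)   # volumes per rest period

# Label each volume: block index during stimulation, -1 during rest.
labels = []
for block in range(n_blocks):
    labels += [block] * vols_per_stim + [-1] * vols_per_rest
labels = np.array(labels)
print(labels)
```

Volumes labeled -1 (rest) would typically be discarded or used as a baseline before training decoders; the geometric/alphabet condition would use 12 s on / 12 s off instead.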
Voxels from visual areas V1, V2, V3, V4, and VP are included in this dataset. All data are preprocessed and ready to use for machine learning.