This is fMRI data from Horikawa et al. (2013): Neural Decoding of Visual Imagery During Sleep. Science, 340(6132), 639-642. In this study, three human subjects slept in an fMRI scanner, giving verbal reports on their dreams when awakened. Contents occurring in dreams were then predicted using BOLD signal decoding analysis.
Three human subjects slept in an fMRI scanner. When a specific EEG pattern associated with dreaming was observed, subjects were awakened and gave a verbal report describing the contents of their dreams. Dream contents were then matched to synsets in WordNet, and the brain activity was labeled with those synsets. The data were then divided into training and test sets, and decoding analysis was performed to predict the dream contents (synsets) associated with the brain activity.
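The workflow above (synset-labeled voxel patterns, a train/test split, and a classifier predicting synsets from held-out activity) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' actual pipeline: the synset labels, array shapes, and the nearest-centroid decoder are all assumptions for demonstration.

```python
import numpy as np

# Synthetic stand-in for synset-labeled voxel patterns; the real
# dataset's format and labels may differ.
rng = np.random.default_rng(0)
n_per_class, n_voxels = 40, 100
synsets = ["male.n.02", "car.n.01", "building.n.01"]  # example WordNet-style labels

# One well-separated voxel-pattern cluster per synset.
X = np.vstack([rng.normal(loc=3.0 * i, scale=1.0, size=(n_per_class, n_voxels))
               for i in range(len(synsets))])
y = np.repeat(np.arange(len(synsets)), n_per_class)

# Shuffle, then divide into training and test sets.
order = rng.permutation(len(y))
X, y = X[order], y[order]
split = int(0.8 * len(y))
X_tr, y_tr, X_te, y_te = X[:split], y[:split], X[split:], y[split:]

# Nearest-centroid decoder: assign each test pattern to the synset
# whose mean training pattern is closest in Euclidean distance.
centroids = np.stack([X_tr[y_tr == k].mean(axis=0) for k in range(len(synsets))])
dists = np.linalg.norm(X_te[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == y_te).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

Any multi-class classifier could stand in for the nearest-centroid step; the point is only the shape of the problem: voxel patterns in, synset labels out.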
Voxels from the whole brain, with masks for V1, V2, V3, lateral occipital complex (LOC), fusiform face area (FFA), parahippocampal place area (PPA), lower visual cortex (LVC), and higher visual cortex (HVC) regions of interest (ROIs), are included in this dataset. All data are pre-processed and ready to use for machine learning.
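One common way to apply such ROI masks is boolean indexing over the voxel dimension, as sketched below. The array names, shapes, and mask construction here are assumptions for illustration, not the dataset's actual storage format.

```python
import numpy as np

# Hypothetical whole-brain matrix: one row per sample, one column per voxel.
n_samples, n_voxels = 10, 1000
whole_brain = np.zeros((n_samples, n_voxels))

# Hypothetical boolean ROI masks over the voxel dimension (randomly
# generated here purely for demonstration).
rng = np.random.default_rng(1)
masks = {
    "V1": rng.random(n_voxels) < 0.05,
    "V2": rng.random(n_voxels) < 0.05,
    "LOC": rng.random(n_voxels) < 0.03,
}
# An aggregate region like LVC can be formed as a union of early visual
# areas; illustrated here with V1 and V2 only.
masks["LVC"] = masks["V1"] | masks["V2"]

# Select only the voxels belonging to one ROI.
v1_data = whole_brain[:, masks["V1"]]
print(v1_data.shape)
```

The same indexing pattern works for any of the shared ROIs, so a decoder can be trained per region to compare, for example, lower versus higher visual cortex.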