Abstract. The ability to reconstruct audio-visual stimuli from human brain activity is an important step towards creating intelligent brain-computer interfaces, and it also serves as a valuable tool for cognitive neuroscience research. We propose a general method for stimulus reconstruction that simultaneously learns from multiple sources of brain activity and multiple stimulus representations.

The use of machine learning methods in functional neuroimage analysis has demonstrated increased sensitivity to cognitive function compared with previously used univariate methods (Kilian-Hütten 2011; Naselaris 2011). This, together with the continued progression of cognitive neuroscience research, has led researchers to employ more ecologically valid experimental procedures and more complex stimuli.

Much of music neuroscience research has focused on finding functionally specific brain regions, often employing highly controlled stimuli. Recent results in computational neuroscience suggest that auditory information is represented in distributed, overlapping patterns in the brain [4] and that natural sounds may be optimal for studying the functional architecture of higher-order auditory areas [3]. With this in mind, the goal of the present work was to decode musical information from brain activity collected during naturalistic music listening.

Our previous work (Casey, Thompson, Kang, Raizada, and Wheatley 2012) investigated decoding hemodynamic brain activity in the feed-forward pathways involved in music listening with rich stimuli. Our current work investigates top-down music processing via auditory imagery, using an imagined-music task. Most previous work on auditory imagery (e.g., Zatorre 2000; Zatorre, Halpern, and Bouffard 2010) used familiar tunes, such as nursery rhymes, whose associated lyrics elicit activation of language areas in the brain.
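To make the decoding setting concrete, the following is a minimal sketch, not the method proposed here: it assumes brain activity is summarized as a trial-by-voxel matrix X and the stimulus representation as a trial-by-feature matrix Y (e.g., audio spectral features), and uses cross-validated ridge regression as a stand-in decoder. All array shapes and the choice of ridge regression are illustrative assumptions, not details taken from this paper.

```python
# Hypothetical stimulus-decoding sketch: predict stimulus features from
# fMRI voxel patterns with ridge regression, scored by the correlation
# between predicted and true feature vectors on held-out trials.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 5000))  # hypothetical voxel responses (trials x voxels)
Y = rng.standard_normal((120, 40))    # hypothetical stimulus features (trials x features)
# Multiple activity sources or stimulus representations could, in this
# simplified setup, be handled by concatenating their feature blocks.

scores = []
for train, test in KFold(n_splits=5).split(X):
    decoder = Ridge(alpha=1000.0)     # strong shrinkage since n_voxels >> n_trials
    decoder.fit(X[train], Y[train])
    Y_hat = decoder.predict(X[test])
    # Correlation per held-out trial is a common reconstruction metric.
    r = [np.corrcoef(a, b)[0, 1] for a, b in zip(Y_hat, Y[test])]
    scores.append(np.mean(r))
print(f"mean held-out reconstruction correlation: {np.mean(scores):.3f}")
```

With random data the correlation hovers near zero; with real voxel responses, above-chance correlations would indicate that the stimulus representation is linearly decodable from the activity patterns.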