MusicMindHealth

High-resolution 7-Tesla fMRI data on the perception of musical genres – an extension to the studyforrest dataset

F1000 Research
Here we present an extension to the studyforrest dataset – a versatile resource for studying the behavior of the human brain in situations of real-life complexity (http://studyforrest.org). This release adds more high-resolution, ultra-high-field (7 Tesla) functional magnetic resonance imaging (fMRI) data from the same individuals. The twenty participants were repeatedly stimulated with a total of 25 music clips, with and without speech content, from five different genres using a slow event-related paradigm.

General Subjects Display Cross-Modal Responses to Musical Stimuli

Proceedings of the European Society for the Cognitive Sciences of Music (ESCOM)
We investigated the perception of music in a cognitive musicology study, employing behavioral methods to examine general associative patterns, i.e. the propensity of subjects to recruit associations when listening to music, reminiscent of synaesthetic cross-wiring (Cytowic, 2009). Although non-synaesthetic associations to music are less well explored, experiments such as Köhler’s (1929) linguistic “takete/baluba” study, a forerunner of the modern “bouba/kiki” effect, demonstrated such associations in non-synaesthetes, supporting the hypothesis that general listeners engage cross-sensory connections.

Non-Auditory Associations of Musical and Non-Musical Sounds in General Listeners

International Congress on Synaesthesia Art and Science V.
Our research explores theories based on past behavioural studies and fMRI scans with synaesthetes and general listeners. fMRI experiments have revealed that cross-modal associations to sounds are less pronounced in the general population than in synaesthetes, yet still present. The results of our psycho-musicology study with 40 synaesthetes and 40 non-synaesthetes reveal a quasi-synaesthetic spectrum (Nikolic, 2014) extending to general listeners, similar to culturally founded synaesthesia (Köhler, 1929).

Cross Modal Aesthetics From A Feature-Extraction Perspective

Proceedings of the International Society for Music Information Retrieval
This paper investigates perceptual relationships between art in the auditory and visual domains. First, we conducted a behavioral experiment asking subjects to assess similarity between 10 musical recordings and 10 works of abstract art. We found a significant degree of agreement across subjects as to which images correspond to which audio, even though neither the audio nor the images possessed semantic content. Second, we sought the relationship between audio and images within a defined feature space that correlated with the subjective similarity judgments.
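
One way to formalize this kind of feature-space comparison is a representational-similarity-style analysis: compute pairwise distances among stimuli in an audio feature space and in a visual feature space, then rank-correlate the two distance structures. The sketch below uses random placeholder features standing in for real descriptors (e.g. audio spectral features, image colour/texture statistics); all names and sizes are illustrative, not the paper's actual feature set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder feature matrices: 10 audio clips x 8 audio features,
# 10 artworks x 8 visual features. Real features (audio descriptors,
# colour/texture statistics) would replace these random values.
audio_feats = rng.normal(size=(10, 8))
image_feats = rng.normal(size=(10, 8))

def pairwise_dists(X):
    """Condensed vector of Euclidean distances between all row pairs."""
    n = len(X)
    return np.array([np.linalg.norm(X[i] - X[j])
                     for i in range(n) for j in range(i + 1, n)])

def spearman(a, b):
    """Spearman rank correlation, implemented with plain NumPy."""
    ra = a.argsort().argsort().astype(float)
    rb = b.argsort().argsort().astype(float)
    return np.corrcoef(ra, rb)[0, 1]

# Do stimuli that are close in audio-feature space tend to be close
# in image-feature space? (With random features, rho will be near 0.)
rho = spearman(pairwise_dists(audio_feats), pairwise_dists(image_feats))
print(f"audio/image distance correlation: rho = {rho:.3f}")
```

In an actual study the second distance vector would come from the subjects' similarity judgments rather than a second feature space, making rho a measure of how well the chosen features explain perceived cross-modal correspondence.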

Music Information Retrieval from Neurological Signals: Towards Neural Population Codes for Music

Society for Music Perception and Cognition
Much of music neuroscience research has focused on finding functionally specific brain regions, often employing highly controlled stimuli. Recent results in computational neuroscience suggest that auditory information is represented in distributed, overlapping patterns in the brain [4] and that natural sounds may be optimal for studying the functional architecture of higher order auditory areas [3]. With this in mind, the goal of the present work was to decode musical information from brain activity collected during naturalistic music listening.
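
A minimal illustration of this style of decoding, on simulated rather than real fMRI data, is leave-one-out nearest-centroid classification of genre from distributed voxel patterns. All sizes, the noise level, and the simulated responses below are assumptions for illustration only, not the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: 5 genres x 8 trials each, 50 "voxels" per pattern.
# Each genre gets a distinct mean activation pattern plus noise; real
# data would be response estimates from a naturalistic listening scan.
n_genres, n_trials, n_voxels = 5, 8, 50
centroids_true = rng.normal(size=(n_genres, n_voxels))
X = np.repeat(centroids_true, n_trials, axis=0) + 0.5 * rng.normal(
    size=(n_genres * n_trials, n_voxels))
y = np.repeat(np.arange(n_genres), n_trials)

# Leave-one-trial-out nearest-centroid decoding: hold out one trial,
# average the remaining trials per genre, classify by nearest centroid.
correct = 0
for i in range(len(X)):
    train = np.arange(len(X)) != i
    cents = np.array([X[train & (y == g)].mean(axis=0)
                      for g in range(n_genres)])
    pred = np.argmin(np.linalg.norm(cents - X[i], axis=1))
    correct += pred == y[i]

accuracy = correct / len(X)
print(f"decoding accuracy: {accuracy:.2f} (chance = {1/n_genres:.2f})")
```

Above-chance accuracy in such a scheme is the basic evidence that the multivariate pattern, rather than any single region's mean response, carries the musical information.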

Audio Stimulus Reconstruction Using Multi-Source Semantic Embedding

Neural Information Processing Systems (NIPS)
The ability to reconstruct audio-visual stimuli from human brain activity is an important step towards creating intelligent brain-computer interfaces and also serves as a valuable tool for cognitive neuroscience research. We propose a general method for stimulus reconstruction that simultaneously learns from multiple sources of brain activity and multiple stimulus representations.
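
The general idea of mapping brain activity into a stimulus-embedding space can be sketched with closed-form ridge regression from simulated voxel responses to a toy embedding, followed by nearest-neighbour identification of held-out stimuli. This is a common baseline, not the paper's multi-source method; every matrix, size, and parameter below is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: map "brain activity" (100 voxels) to a stimulus embedding
# (16 dims). W_true, the noise level, and all sizes are illustrative.
n_train, n_test, n_vox, n_emb = 200, 20, 100, 16
W_true = rng.normal(size=(n_vox, n_emb))
B_train = rng.normal(size=(n_train, n_vox))
B_test = rng.normal(size=(n_test, n_vox))
S_train = B_train @ W_true + 0.1 * rng.normal(size=(n_train, n_emb))
S_test = B_test @ W_true

# Closed-form ridge solution: W = (B'B + lam*I)^-1 B'S
lam = 1.0
W = np.linalg.solve(B_train.T @ B_train + lam * np.eye(n_vox),
                    B_train.T @ S_train)

# Reconstruct held-out embeddings, then "identify" each test stimulus
# by nearest neighbour among the true test embeddings.
S_hat = B_test @ W
d = np.linalg.norm(S_hat[:, None, :] - S_test[None, :, :], axis=2)
ident_acc = (d.argmin(axis=1) == np.arange(n_test)).mean()
print(f"identification accuracy: {ident_acc:.2f}")
```

Identification accuracy is a standard proxy for reconstruction quality: a predicted embedding counts as correct if it lies closer to its own stimulus than to any other candidate.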

Decoding Absolute and Relative Pitch Imagery in the Auditory Pathway

CCN Colloquium

Michael Casey - Decoding Pitch Imagery in the Auditory Pathway

Our previous work (Casey, Thompson, Kang, Raizada, and Wheatley 2012) investigated decoding hemodynamic brain activity in the feed-forward pathways involved in music listening with rich stimuli. Our current work investigates top-down music processing via auditory imagery with an imagined music task. Most previous work on auditory imagery (e.g. Zatorre 2000; Zatorre, Halpern, and Bouffard 2010) used familiar tunes, such as nursery rhymes, that have associated lyrics which elicit activation of language areas in the brain.

Normative Musicology: Automatic Tonal Induction via Entropy and Rational Expectation

Milestones in Music Cognition Workshop
A new branch of systematic musicology, “normative musicology,” is proposed and its practice demonstrated. Normative musicology is the study of optimal (“normative”) expectations about future musical signals, given some corpus of past signals. It is a formalization of many “statistical learning” approaches (e.g. [1]) and may be considered a computational counterpart to empirical musicology.
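
As a small worked example of corpus-based expectation, the sketch below induces a major key by correlating a piece's pitch-class distribution against the twelve rotations of the Krumhansl–Kessler major-key probe-tone profile, and reports the distribution's Shannon entropy. The helper names and the toy input are illustrative, not this paper's model; the profile values are the published probe-tone ratings.

```python
import math
from collections import Counter

# Krumhansl–Kessler major-key profile (probe-tone ratings, C major).
KK_MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
            2.52, 5.19, 2.39, 3.66, 2.29, 2.88]

def pc_distribution(pitches):
    """Normalised pitch-class histogram from MIDI pitch numbers."""
    counts = Counter(p % 12 for p in pitches)
    total = sum(counts.values())
    return [counts.get(pc, 0) / total for pc in range(12)]

def entropy(dist):
    """Shannon entropy (bits) of a pitch-class distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def correlate(a, b):
    """Pearson correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den

def induce_key(pitches):
    """Best-matching major key (0=C .. 11=B) by profile correlation."""
    dist = pc_distribution(pitches)
    scores = [correlate(dist, KK_MAJOR[-k:] + KK_MAJOR[:-k])
              for k in range(12)]
    return max(range(12), key=lambda k: scores[k])

# Toy input: a C-major scale fragment, which should induce key 0 (C).
c_major = [60, 62, 64, 65, 67, 69, 71, 72, 60, 64, 67]
key = induce_key(c_major)
h = entropy(pc_distribution(c_major))
print(f"induced key: {key}, pitch-class entropy: {h:.2f} bits")
```

Here entropy quantifies how concentrated the expectations are, while the profile correlation picks the key under which the observed distribution is least surprising — the "rational expectation" flavour of the proposal in miniature.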

How Humans Hear and Imagine Musical Scales

Decoding Population Responses Workshop
The cognitive representations that support our experience of pitch perception and imagery are not well understood; existing accounts generally focus on the tonotopic organization of neural columns in the brain (place-based coding of absolute frequency). From prior behavioural studies, however, we understand musical pitch space to be relative to a reference key and hierarchically organized. Our current study uses a new between-subject common representation of spatio-temporal multivariate population codes to identify the representational space of musical pitch.
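
One simple way to build a between-subject common space, sketched here under toy assumptions, is orthogonal Procrustes alignment: if subject B's response patterns are modelled as a rotation of subject A's plus noise, the SVD-based Procrustes solution recovers that rotation and puts both subjects in a shared space. All sizes, the noise level, and the simulated patterns below are placeholders, not the study's alignment method.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy between-subject setup: subject B's stimulus-response patterns are
# a rotated, noisy copy of subject A's. Sizes and noise are illustrative.
n_stimuli, n_voxels = 24, 30
A = rng.normal(size=(n_stimuli, n_voxels))          # subject A patterns
Q_true, _ = np.linalg.qr(rng.normal(size=(n_voxels, n_voxels)))
B = A @ Q_true + 0.05 * rng.normal(size=A.shape)    # subject B patterns

# Orthogonal Procrustes: R = U V' where U S V' = svd(A'B), which
# minimises ||A R - B|| over all orthogonal matrices R.
U, _, Vt = np.linalg.svd(A.T @ B)
R = U @ Vt

misalignment_before = np.linalg.norm(A - B)
misalignment_after = np.linalg.norm(A @ R - B)
print(f"||A - B|| = {misalignment_before:.2f}, "
      f"||A R - B|| = {misalignment_after:.2f}")
```

After alignment the residual shrinks to roughly the noise level, which is the sense in which the rotated patterns constitute a common representational space across subjects.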
