Each week, we take the top 10 videos on YouTube and resynthesize the #1 video using the remaining nine. We’ll continue doing so until one of our videos ends up in the top 10. The process, called “Smash Up,” is a new kind of remix/mosaicing that learns tiny perceptual fragments of audio and video using a computational model of audiovisual perception.
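The core of such a mosaicing process is fragment matching: segment the target into short frames, describe each frame perceptually, and rebuild it from the nearest fragments in the source corpus. The sketch below is a minimal illustration of that idea, not the project’s actual pipeline; the framing parameters and the log-spectrum descriptor are assumptions chosen for simplicity.

```python
import numpy as np

def frame_signal(x, frame_len=1024, hop=512):
    """Slice a 1-D signal into overlapping frames (frame_len and hop are illustrative)."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def spectral_features(frames):
    """Log-magnitude spectrum as a crude stand-in for a perceptual descriptor."""
    mag = np.abs(np.fft.rfft(frames * np.hanning(frames.shape[1]), axis=1))
    return np.log1p(mag)

def mosaic(target, corpus, frame_len=1024, hop=512):
    """Rebuild `target` from its nearest corpus fragments by spectral distance."""
    t_frames = frame_signal(target, frame_len, hop)
    c_frames = frame_signal(corpus, frame_len, hop)
    t_feat = spectral_features(t_frames)
    c_feat = spectral_features(c_frames)
    out = np.zeros(len(target))
    win = np.hanning(frame_len)
    for i, f in enumerate(t_feat):
        # pick the corpus fragment whose spectrum is closest to this target frame
        j = np.argmin(np.linalg.norm(c_feat - f, axis=1))
        start = i * hop
        out[start : start + frame_len] += c_frames[j] * win  # overlap-add
    return out
```

A real system would operate on learned audiovisual features rather than raw spectra, but the match-and-reassemble structure is the same.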
"Resynthesizing Perception" immserses participants within an audiovisual augmented reality using goggles and headphones while they explore their environment. What they hear and see is a computationally generative synthesis of what they would normally hear and see. By demonstrating the associations and juxtapositions the synthesis creates, the aim is to bring to light questions of the nature of representations supporting perception. Two modes of operation are possible.
Decoding Population Responses Workshop
The cognitive representations that support our experience of pitch perception and imagery are not well understood; existing accounts generally focus on the tonotopic organization of neural columns in the brain (place-based coding of absolute frequency). Prior behavioural studies, however, suggest that musical pitch space is relative to a reference key and hierarchically organized. Our current study uses a new between-subject common representation of spatio-temporal multivariate population codes to identify the representational space of musical pitch.
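A between-subject common representation requires mapping each subject's population responses into a shared space before decoding. As a hedged illustration of that two-step structure (align, then decode), the sketch below uses an orthogonal Procrustes alignment and a nearest-centroid classifier; both are stand-ins chosen for brevity, not the study's actual method.

```python
import numpy as np

def procrustes_map(source, reference):
    """Orthogonal map aligning one subject's response patterns (rows = trials)
    to a reference space: a minimal stand-in for a between-subject common model.
    Solves min_W ||source @ W - reference|| over orthogonal W."""
    u, _, vt = np.linalg.svd(source.T @ reference)
    return u @ vt

def nearest_centroid_decode(train_X, train_y, test_X):
    """Decode stimulus class (e.g. pitch) by distance to class-mean patterns."""
    classes = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(test_X[:, None, :] - centroids[None], axis=2)
    return classes[d.argmin(axis=1)]
```

With paired trials (both subjects responding to the same stimulus sequence), one learns the map on a training portion, projects the second subject's held-out trials into the first subject's space, and decodes there.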