2013

check (2013)

Inspired by experiments in abstract film-phonography, check explores the sonic potential of running sequences of patterns through a 16mm film projector. Collages of three different patterns and solid areas of black and white were photocopied onto strips of clear film leader. The sequences provide not only the images that are projected onto the screen, but also the soundtrack to the film.
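On an optical soundtrack, the printed pattern modulates the light reaching the projector's photocell, so the pattern itself becomes the waveform. A minimal sketch of that reading process appears below; the sample rate, frame rate, and test pattern are assumptions for illustration, not measurements from the piece.

```python
import numpy as np

SR = 44100
FPS = 24
FRAME_LINES = SR // FPS  # optical samples the sound head reads per film frame

def optical_sound(pattern, n_frames=240):
    """Simulate a projector's optical sound head reading a patterned strip.

    `pattern` is a 1-D array of transmittance values along one film frame
    (0 = black, 1 = clear); the photocell output is the pattern resampled
    to the head's read rate and repeated once per frame.
    """
    src = np.linspace(0, len(pattern) - 1, FRAME_LINES)
    frame = np.interp(src, np.arange(len(pattern)), pattern)
    audio = np.tile(frame, n_frames)
    return audio - audio.mean()  # remove DC offset so the result is audible

# Hypothetical check-like pattern: 32 black/white band pairs per frame,
# read at 24 fps -> a tone near 32 * 24 = 768 Hz, plus harmonics.
pattern = np.tile([0.0, 0.0, 1.0, 1.0], 32)
audio = optical_sound(pattern)  # about 10 seconds of projector-locked tone
```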

photosinebank

photosinebank is a piece for the 16-CdS and laptop. The performer controls swells of sine-wave clusters by casting shadows on the controller. This performance was recorded at the Audiotheque in Miami, FL as part of their Year-End Fest on December 26th, 2013.

Carlos Dominguez, laptop & 16-CdS
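The mapping can be sketched in a few lines: a bank of sine oscillators whose amplitudes follow the light falling on each photoresistor. The sketch below is illustrative rather than the piece's actual patch; the oscillator frequencies, the sensor values, and the direction of the shadow mapping are all assumptions.

```python
import numpy as np

SR = 44100
FREQS = 110 * 2 ** (np.arange(16) / 4)  # hypothetical 16-note cluster

def render_block(sensor_levels, phases, n=512):
    """Render one audio block of a 16-oscillator sine bank.

    sensor_levels: 16 values in [0, 1], one per CdS cell (dark = 0).
    Here a shadow on a cell *raises* the matching partial's amplitude;
    the real mapping in the piece may differ.
    """
    t = np.arange(n) / SR
    amps = (1.0 - np.asarray(sensor_levels)) / 16.0
    block = np.zeros(n)
    for i, (f, a) in enumerate(zip(FREQS, amps)):
        block += a * np.sin(phases[i] + 2 * np.pi * f * t)
        phases[i] = (phases[i] + 2 * np.pi * f * n / SR) % (2 * np.pi)
    return block

phases = np.zeros(16)
levels = np.full(16, 0.8)  # stand-in for serial reads from the 16-CdS
audio = render_block(levels, phases)
```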

Understory

Max Hammer, animation
Carlos Dominguez, music/sound

Synopsis: Secrets of life explained through the persistence of vision.

Drawing inspiration from insect swarms and fog, the music for Understory swells in and out of tonalities and microrhythms that accent the world around the animation's protagonist. Multiple delay lines create these clouds of sound using only two samples - a low E on an electric guitar and a Bb on a Cherokee flute - transposed and polyphonically layered.
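As a rough illustration of that layering technique, the sketch below scatters transposed copies of a single sample across feedforward delay lines. It stands in for the piece's actual patch: the synthesized tone, delay times, gains, and transpositions are all assumptions.

```python
import numpy as np

SR = 44100

def transpose(sample, semitones):
    """Varispeed transposition: resample, shifting pitch and duration."""
    ratio = 2 ** (semitones / 12.0)
    idx = np.arange(0, len(sample) - 1, ratio)
    return np.interp(idx, np.arange(len(sample)), sample)

def delay_cloud(sample, delays_s, gains, length_s=8.0):
    """Sum feedforward-delayed copies of one sample into a texture."""
    out = np.zeros(int(length_s * SR))
    for d, g in zip(delays_s, gains):
        start = int(d * SR)
        end = min(start + len(sample), len(out))
        out[start:end] += g * sample[: end - start]
    return out

# Stand-in for the low-E guitar sample (E2 is roughly 82.4 Hz); the piece
# itself used recorded guitar and Cherokee-flute samples.
t = np.arange(int(1.5 * SR)) / SR
low_e = np.sin(2 * np.pi * 82.4 * t) * np.exp(-2 * t)

rng = np.random.default_rng(0)
cloud = sum(
    delay_cloud(transpose(low_e, st), rng.uniform(0, 4, 8), np.full(8, 0.1))
    for st in (0, 7, 12, 19)  # polyphonic layering at several transpositions
)
```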

Pizza, French Fry, Waffle

Max Hammer, video
Carlos Dominguez, music

Music composed by Carlos Dominguez accompanies a video of a ski run by Max Hammer. The music is composed of harmonica and accordion samples taken from a session of improvisation with Phillip Hermans. Special thanks to Jessica Thompson for letting us use her accordion.

Study of Chinese and UK Hit Song Prediction

Proceedings of the International Symposium on Computer Music Multidisciplinary Research (CMMR)
The top 40 chart is a popular resource used by listeners to select and purchase music. Previous work on automatic hit song prediction focused on Western pop music. However, pop songs from different parts of the world exhibit significant differences.

Investigating the Relationship Between Pressure Force and Acoustic Waveform in Footstep Sounds

Proceedings of the International Conference on Digital Signal Processing (DSP)
In this paper we present an inquiry into the relationships between audio waveforms and ground reaction force in recorded footstep sounds. In an anechoic room, we recorded several footstep sounds produced while walking on creaking wood and gravel. The recordings were made using a pair of sandals, each embedded with six pressure sensors. An investigation of the relationship between recorded force and footstep sounds is presented, together with several possible applications of the system.
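One way to probe such a relationship, sketched below under assumptions not taken from the paper, is to compare the summed pressure trace of one sandal against the short-time amplitude envelope of the synchronized audio.

```python
import numpy as np

def envelope(audio, sr, win_ms=10):
    """Short-time RMS amplitude envelope of an audio signal."""
    win = int(sr * win_ms / 1000)
    x = np.asarray(audio, dtype=float) ** 2
    x = np.pad(x, (0, -len(x) % win))  # pad to a whole number of windows
    return np.sqrt(x.reshape(-1, win).mean(axis=1))

def force_audio_correlation(force, force_sr, audio, audio_sr, win_ms=10):
    """Pearson correlation between a pressure trace and the audio envelope.

    `force` is assumed to be the six sensors of one sandal summed into a
    single ground-reaction-force estimate, sampled at `force_sr`.
    """
    env = envelope(audio, audio_sr, win_ms)
    env_sr = audio_sr / int(audio_sr * win_ms / 1000)
    n = int(len(force) * env_sr / force_sr)  # resample force to env rate
    force_rs = np.interp(np.linspace(0, len(force) - 1, n),
                         np.arange(len(force)), force)
    m = min(len(env), len(force_rs))
    a = env[:m] - env[:m].mean()
    b = force_rs[:m] - force_rs[:m].mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```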

Distort Them as You Please: The Sonic Artifacts of Compositional Transformation (Ghost in the MP3)

Master's Thesis

moDernisT_v1

"moDernisT" was created by salvaging the sounds lost to mp3 compression from the song "Tom's Diner", famously used as one of the main controls in the listening tests to develop the MP3 encoding algorithm. Here we find the form of the song intact, but the details are just remnants of the original, scrambled artifacts hinting at what once was. This thesis discusses a series of compositions created over the past year that combine the use of external data with intuitive aesthetic decisions.

Groove Kernels as Rhythmic-Acoustic Motif Descriptors

Proceedings of the International Society for Music Information Retrieval Conference (ISMIR)
The “groove” of a song correlates with enjoyment and bodily movement. Recent work has shown that humans often agree whether a song does or does not have groove and how much groove a song has. It is therefore useful to develop algorithms that characterize the quality of groove across songs. We evaluate three unsupervised tempo-invariant models for measuring pairwise musical groove similarity: a temporal model, a timbre-temporal model, and a pitch-timbre-temporal model.
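As an illustration of what a tempo-invariant temporal model might look like (a sketch under assumptions, not the paper's model), the code below resamples each song's onset-strength envelope onto a fixed per-beat grid before taking a cosine similarity; tempo estimates are assumed given.

```python
import numpy as np

def onset_envelope(audio, sr, win=2048, hop=512):
    """Crude onset strength: half-wave-rectified spectral flux."""
    frames = np.lib.stride_tricks.sliding_window_view(audio, win)[::hop]
    mag = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1))
    flux = np.diff(mag, axis=0).clip(min=0).sum(axis=1)
    return flux / (flux.max() + 1e-12), sr / hop  # envelope, frame rate

def tempo_normalize(env, env_rate, tempo_bpm, beats=32, grid=16):
    """Resample `beats` beats of the envelope to `grid` frames per beat,
    so songs at different tempi become directly comparable."""
    frames_per_beat = env_rate * 60.0 / tempo_bpm
    src = np.arange(beats * grid) * frames_per_beat / grid
    src = src[src < len(env) - 1]
    return np.interp(src, np.arange(len(env)), env)

def groove_similarity(env_a, rate_a, bpm_a, env_b, rate_b, bpm_b):
    """Cosine similarity of two tempo-normalized onset envelopes."""
    a = tempo_normalize(env_a, rate_a, bpm_a)
    b = tempo_normalize(env_b, rate_b, bpm_b)
    m = min(len(a), len(b))
    a, b = a[:m] - a[:m].mean(), b[:m] - b[:m].mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```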

ACTION: Cross-Modal Cinematics, Auteur Classification, and Audio-Visual Structure in Film

Digital Music Research Network Workshop
Content-based analysis of video using audio and visual features has previously been used for the automatic tasks of scene/shot segmentation and video summarization. We present new work that extends this research to automatically extract and compare the narrative structure of feature films, discover patterns in the relationship of music, sound, and image, and classify films according to their director using audio, visual, and joint audio-visual features.
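A toy version of the director-classification task (illustrative only; the feature shapes and fusion scheme are assumptions, not the ACTION toolkit's API): represent each film as one joint audio-visual feature vector and classify it by nearest centroid.

```python
import numpy as np

def film_vector(audio_feats, visual_feats):
    """Early fusion: summarize each per-shot feature stream by its mean
    and standard deviation, then concatenate audio and visual summaries."""
    parts = [np.r_[f.mean(axis=0), f.std(axis=0)]
             for f in (audio_feats, visual_feats)]
    return np.concatenate(parts)

def nearest_centroid(train_X, train_y, x):
    """Predict the director whose films' mean vector is closest."""
    labels = sorted(set(train_y))
    cents = {d: train_X[[y == d for y in train_y]].mean(axis=0)
             for d in labels}
    return min(labels, key=lambda d: np.linalg.norm(x - cents[d]))

# Stand-in data: 6 films, 50 shots each, with MFCC-like audio (20-dim)
# and color/brightness-like visual (8-dim) features per shot.
rng = np.random.default_rng(1)
films = [film_vector(rng.normal(size=(50, 20)), rng.normal(size=(50, 8)))
         for _ in range(6)]
X, y = np.stack(films), ["A", "A", "A", "B", "B", "B"]
print(nearest_centroid(X, y, X[0]))
```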

Music Information Retrieval from Neurological Signals: Towards Neural Population Codes for Music

Society for Music Perception and Cognition
Much of music neuroscience research has focused on finding functionally specific brain regions, often employing highly controlled stimuli. Recent results in computational neuroscience suggest that auditory information is represented in distributed, overlapping patterns in the brain [4] and that natural sounds may be optimal for studying the functional architecture of higher order auditory areas [3]. With this in mind, the goal of the present work was to decode musical information from brain activity collected during naturalistic music listening.
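A minimal sketch of one common decoding setup, offered in the spirit of the abstract rather than as the study's actual pipeline: ridge regression mapping brain responses (time x voxels) to audio features of the music heard (time x features), evaluated on held-out time points. All data here is synthetic.

```python
import numpy as np

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + aI)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

rng = np.random.default_rng(0)
T, V, F = 400, 200, 12  # time points, voxels, audio features
W_true = rng.normal(size=(V, F))
brain = rng.normal(size=(T, V))                       # stand-in responses
features = brain @ W_true + rng.normal(size=(T, F))   # stand-in targets

train, test = slice(0, 300), slice(300, 400)
W = ridge_fit(brain[train], features[train], alpha=10.0)
pred = brain[test] @ W
# Decoding accuracy: per-feature correlation of predicted vs. true values.
r = [np.corrcoef(pred[:, j], features[test][:, j])[0, 1] for j in range(F)]
print(f"mean decoding r = {np.mean(r):.2f}")
```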
