The End

"The End" music: Carlos Dominguez, Dartmouth Dance Ensemble dir:John Heginbotham

Upper Valley residents joined the Dartmouth Dance Ensemble in a performance set to music composed by digital musics graduate student Carlos Dominguez, G'14. Serving as the concert’s finale, “The End” looks at how individuals relate to community and the future. Each performer draws on nine distinct dance moves, executed in various combinations throughout the piece.

YouTube Smash Up

PSY - GANGNAM STYLE (강남스타일) M/V (YouTube SmashUp)

Each week, we take the top 10 videos on YouTube and resynthesize the #1 video using the remaining 9 videos. We’ll continue doing so until one of our videos ends up in the top 10. The process, called “Smash Up,” is a new kind of remix/mosaicing process that learns tiny perceptual fragments of audio and video using a computational model of audiovisual perception.

One Million Seconds, Time and Motion Picture Study

One Million Seconds (Time and Motion Picture Study)

One Million Seconds is a study in imploded cinema. Each moment stitches together fragments sampled from 120 full-length motion pictures (24,000,000 frames / 1,000,000 seconds) from the history of cinema. The resulting film is generated from audio-visual fragments, sorted by their audio similarity to a hidden cantus firmus: Glenn Gould's 1981 recording of J. S. Bach's Goldberg Variations. The cantus firmus accompanies the opening titles. Subsequently, the movie consists entirely of matching fragments from the 120 films.
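The core selection step can be sketched as a nearest-neighbor match: for each segment of the cantus firmus, pick the film fragment whose audio feature vector lies closest. This is a minimal illustration only — the function name, the Euclidean metric, and the toy feature vectors are assumptions, not the project's actual feature pipeline.

```python
import math

def match_fragments(cantus_feats, fragment_feats):
    # For each cantus firmus segment, return the index of the film
    # fragment whose audio feature vector is nearest (Euclidean).
    # Real systems would use perceptual features (e.g. MFCCs);
    # plain tuples stand in for them here.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [min(range(len(fragment_feats)),
                key=lambda i: dist(seg, fragment_feats[i]))
            for seg in cantus_feats]

# Toy example: two cantus segments, three candidate fragments.
order = match_fragments([(0.0, 0.0), (1.0, 1.0)],
                        [(1.0, 1.0), (0.0, 0.1), (5.0, 5.0)])
```

Concatenating the chosen fragments in cantus order would then yield the output sequence.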

Exploring Film Auteurship with the ACTION toolbox

Society for Cinema and Media Studies
From exposing Jackson Pollock forgeries to clarifying the sections of the Federalist Papers written by Alexander Hamilton, computational analysis and machine learning have proven to be powerful tools in the study of authorship. Film scholar Warren Buckland used the statistical analysis of shot lengths and shot types to make a persuasive claim that Tobe Hooper, and not Steven Spielberg as rumored, directed Poltergeist (1982). However, Buckland and other scholars using Cinemetrics have had to manually enter data for these elements.

EMdrum: An Electromagnetically Actuated Drum

New Interfaces for Musical Expression
The EMdrum, a drum electromagnetically actuated in the manner of a loudspeaker, is presented. Design principles are established and implementation is described; in particular, two alternative electromagnetic actuation designs, moving-coil and moving-magnet, are discussed. We evaluate the time-frequency response of the instrument and present a musical application.

SonicTaiji: A Mobile Instrument for Taiji Performance

International Conference on Auditory Display
SonicTaiji is a mobile instrument designed for the Android platform. It utilizes accelerometer detection, sound synthesis, and data communication techniques to achieve real-time Taiji sonification. Taiji is an inner-strength martial art aimed at inducing meditative states. In this mobile music application, Taiji movements are sonified via gesture detection, connecting listening and movement. This instrument is a tool for practitioners to enhance the meditative experience of performing Taiji. We describe the implementation of gesture position selection, real-time synthesis, and data mapping.
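The data-mapping stage can be sketched as a simple function from accelerometer readings to synthesis parameters. The mapping below (motion magnitude minus gravity, scaled into a pitch range) is a hypothetical illustration under assumed parameter ranges, not the app's actual mapping.

```python
import math

def accel_to_synth_params(ax, ay, az,
                          f_min=110.0, f_max=880.0, g=9.81):
    # Map 3-axis accelerometer data (m/s^2) to synthesis parameters.
    # Deviation of the acceleration magnitude from gravity serves as
    # a rough "amount of motion" signal: at rest -> low pitch, full
    # amplitude; vigorous motion -> higher pitch, reduced amplitude.
    mag = math.sqrt(ax * ax + ay * ay + az * az)
    motion = min(abs(mag - g) / g, 1.0)   # 0.0 at rest, capped at 1.0
    freq = f_min + motion * (f_max - f_min)
    amp = 1.0 - 0.5 * motion              # calmer movement sounds fuller
    return freq, amp

# Device lying still: only gravity is measured.
freq, amp = accel_to_synth_params(0.0, 0.0, 9.81)
```

On Android, the readings would come from the accelerometer sensor callback and the parameters would drive the synthesis engine each frame.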

A Surface Controller for the Simultaneous Manipulation of Multiple Analog Components

New Interfaces for Musical Expression
This project presents a control surface that combines a grid of photocells with a microcontroller to allow a musician to manipulate multiple analog components at once. A brief background on past uses of photocells for music and film composition and instrument-building introduces a few different implementations and performance contexts for the controller. Topics such as implementation, construction, performance scenarios and reflections on past performances of the controller are also discussed.

Digitally Extending the Optical Soundtrack

Proceedings of the International Computer Music Conference
The optical soundtrack has a long history in experimental film as a means of image sonification. The technique translates image luminance into amplitude along the vertical axis, enabling the sonification of a wide variety of filmed patterns. While the technical challenges of working with film preclude casual exploration of the technique, digital implementation of optical image sonification allows interested users with skill sets outside of film to access this process as a means of sonifying video input.
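The luminance-to-amplitude translation at the heart of the technique can be shown in a few lines: a vertical strip of a grayscale frame is read top to bottom and each pixel's brightness becomes one audio sample. This is a minimal sketch of the general idea, assuming 8-bit luminance values; the paper's actual implementation is not reproduced here.

```python
def sonify_column(frame, column=0):
    # Read one vertical strip of a grayscale frame (a list of rows,
    # each a list of 0-255 luminance values) from top to bottom,
    # mapping luminance linearly to an audio amplitude in [-1.0, 1.0]
    # -- the digital analogue of an optical soundtrack.
    return [(row[column] / 255.0) * 2.0 - 1.0 for row in frame]

# A 3-pixel-tall strip: black, mid-gray, white.
samples = sonify_column([[0], [128], [255]])
```

Scanning successive columns of successive video frames and concatenating the resulting samples yields a continuous audio signal from arbitrary video input.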

Musical Audio Synthesis Using Autoencoding Neural Networks

Proceedings of the International Computer Music Conference
With an optimal network topology and tuning of hyperparameters, artificial neural networks (ANNs) may be trained to learn a mapping from low-level audio features to one or more higher-level representations. Such networks are commonly used in classification and regression settings to perform arbitrary tasks. In this work we suggest re-purposing auto-encoding neural networks as musical audio synthesizers.
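The re-purposing idea can be sketched with the decoder half of an autoencoder: once trained, the decoder alone maps a low-dimensional latent vector to a magnitude spectrum, so driving it directly with synthetic latent values — rather than round-tripping input audio — turns the network into a synthesizer. The single linear-plus-sigmoid layer and the tiny weights below are illustrative assumptions, not the paper's architecture.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def decode(latent, weights, biases):
    # Forward pass of a (hypothetical, already-trained) decoder:
    # one linear layer followed by a sigmoid maps a latent vector
    # to a magnitude spectrum. Sweeping or interpolating `latent`
    # values and resynthesizing the resulting spectra (e.g. via an
    # inverse STFT, not shown) produces novel audio.
    return [sigmoid(sum(w * z for w, z in zip(row, latent)) + b)
            for row, b in zip(weights, biases)]

# One latent dimension, one spectral bin, zero weights and bias:
spectrum = decode([0.0], [[1.0]], [0.0])
```

A practical synthesizer would decode a latent trajectory frame by frame, then reconstruct phase to invert the spectra back to a waveform.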