SoundVisionComputation

Groove Kernels as Rhythmic-Acoustic Motif Descriptors

Proceedings of the International Society for Music Information Retrieval
The “groove” of a song correlates with enjoyment and bodily movement. Recent work has shown that listeners often agree on whether a song has groove and on how much groove it has. It is therefore useful to develop algorithms that characterize the quality of groove across songs. We evaluate three unsupervised tempo-invariant models for measuring pairwise musical groove similarity: a temporal model, a timbre-temporal model, and a pitch-timbre-temporal model.
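
A minimal sketch of a purely temporal, tempo-invariant descriptor in this spirit might look like the following, assuming librosa; the lag grid, bin counts, and cosine similarity are illustrative choices, not the paper's kernel formulation.

    # Sketch of a purely temporal groove descriptor (not the paper's exact
    # kernels): autocorrelate the onset-strength envelope, then resample the
    # lag axis in beats so the descriptor is roughly tempo invariant.
    import numpy as np
    import librosa

    def temporal_groove_descriptor(path, bins_per_beat=48, n_beats=4):
        y, sr = librosa.load(path, mono=True)
        hop = 512
        env = librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop)
        tempo, _ = librosa.beat.beat_track(onset_envelope=env, sr=sr,
                                           hop_length=hop)
        frames_per_beat = (60.0 / tempo) * sr / hop
        max_lag = int(frames_per_beat * n_beats) + 2
        ac = librosa.autocorrelate(env, max_size=max_lag)
        # Fixed grid of lags measured in beats rather than frames.
        grid = np.linspace(0.0, frames_per_beat * n_beats,
                           bins_per_beat * n_beats)
        desc = np.interp(grid, np.arange(len(ac)), ac)
        return desc / (np.linalg.norm(desc) + 1e-9)

    def groove_similarity(path_a, path_b):
        # Cosine similarity of the unit-norm descriptors.
        return float(np.dot(temporal_groove_descriptor(path_a),
                            temporal_groove_descriptor(path_b)))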

ACTION: Cross-Modal Cinematics, Auteur Classification, and Audio-Visual Structure in Film

Digital Music Research Network Workshop
Content-based analysis of video using audio and visual features has previously been applied to automatic scene/shot segmentation and video summarization. We present new work that extends this research to automatically extract and compare the narrative structure of feature films, discover patterns in the relationships among music, sound, and image, and classify films by director using audio, visual, and joint audio-visual features.
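
The classification step can be sketched as follows, assuming per-film audio and visual feature vectors have already been extracted; the arrays below are random placeholders, and the SVM pipeline is an illustrative choice rather than the paper's exact method.

    # Director classification from precomputed film features (placeholders).
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Placeholder features: (n_films, n_dims) arrays and director labels.
    rng = np.random.default_rng(0)
    X_audio = rng.normal(size=(60, 40))
    X_visual = rng.normal(size=(60, 30))
    y = rng.integers(0, 4, size=60)            # four hypothetical directors

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    for name, X in [("audio", X_audio),
                    ("visual", X_visual),
                    ("joint", np.hstack([X_audio, X_visual]))]:
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.2f}")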

Exploring Film Auteurship with the ACTION toolbox

Society for Cinema and Media Studies
From exposing Jackson Pollock forgeries to clarifying which of the Federalist Papers Alexander Hamilton wrote, computational analysis and machine learning have proven to be powerful tools in the study of authorship. Film scholar Warren Buckland used statistical analysis of shot lengths and shot types to make a persuasive case that Tobe Hooper, and not Steven Spielberg as rumored, directed Poltergeist (1982). However, Buckland and other scholars using Cinemetrics have had to enter the data for these elements manually.

EMdrum: An Electromagnetically Actuated Drum

New Interfaces for Musical Expression
The EMdrum, a drum electromagnetically actuated in the manner of a loudspeaker, is presented. Design principles are established and implementation is described; in particular, two alternative electromagnetic actuation designs, moving-coil and moving-magnet, are discussed. We evaluate the time-frequency response of the instrument and present a musical application.
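
One common way to inspect a time-frequency response like the one evaluated here is to plot the spectrogram of a recorded excitation; below is a minimal sketch with SciPy, in which "response.wav" is a hypothetical mono recording, not data from the paper.

    # Spectrogram of a recorded excitation (e.g., swept sine or impulse).
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    sr, x = wavfile.read("response.wav")       # hypothetical recording
    if x.ndim > 1:
        x = x.mean(axis=1)                     # collapse to mono
    f, t, S = spectrogram(x, fs=sr, nperseg=2048, noverlap=1536)
    plt.pcolormesh(t, f, 10 * np.log10(S + 1e-12), shading="auto")
    plt.xlabel("time (s)")
    plt.ylabel("frequency (Hz)")
    plt.show()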

SonicTaiji: A Mobile Instrument for Taiji Performance

International Conference on Auditory Display
SonicTaiji is a mobile instrument designed for the Android platform. It uses accelerometer sensing, sound synthesis, and data-communication techniques to achieve real-time sonification of Taiji, an inner-strength martial art aimed at inducing meditative states. Taiji movements are sonified via gesture detection, connecting listening and movement, and the instrument serves as a tool for practitioners to enhance the meditative experience of performing Taiji. We describe the implementation of gesture position selection, real-time synthesis, and data mapping.
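
An offline sketch of this kind of gesture-to-sound mapping is given below, with accelerometer magnitude driving the pitch and loudness of a sine oscillator; the real app runs on Android with live sensor input, and the mapping here is illustrative, not the app's actual code.

    # Illustrative accelerometer-to-sound mapping: gesture energy drives
    # the frequency and amplitude of a sine oscillator.
    import numpy as np
    from scipy.io import wavfile

    SR, SENSOR_HZ = 44100, 100
    accel = np.random.default_rng(0).normal(size=(500, 3))  # stand-in data

    mag = np.linalg.norm(accel, axis=1)                # gesture energy
    mag = (mag - mag.min()) / (np.ptp(mag) + 1e-9)     # normalize to [0, 1]

    # Upsample the 100 Hz control signal to audio rate.
    n = int(len(mag) / SENSOR_HZ * SR)
    ctrl = np.interp(np.linspace(0, len(mag) - 1, n),
                     np.arange(len(mag)), mag)

    freq = 220.0 * 2.0 ** (ctrl * 2.0)                 # 220-880 Hz
    phase = 2.0 * np.pi * np.cumsum(freq) / SR         # phase accumulator
    audio = ctrl * np.sin(phase)                       # gentler motion = quieter
    wavfile.write("taiji_sketch.wav", SR, (audio * 32767).astype(np.int16))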

A Surface Controller for the Simultaneous Manipulation of Multiple Analog Components

New Interfaces for Musical Expression
This project presents a control surface that combines a grid of photocells with a microcontroller, allowing a musician to manipulate multiple analog components at once. A brief background on past uses of photocells in music, film composition, and instrument building introduces several implementations and performance contexts for the controller. Implementation, construction, performance scenarios, and reflections on past performances are also discussed.
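
On the host side, such a controller might be read as follows; this sketch assumes a hypothetical serial protocol in which the microcontroller streams one grid scan per line as comma-separated 10-bit ADC readings, and the port name and grid size are illustrative, not from the paper.

    # Hypothetical host-side reader for a photocell grid streamed over
    # serial, e.g. "512,388,...,901\n" per scan. Requires pyserial.
    import serial

    PORT, BAUD, ROWS, COLS = "/dev/ttyUSB0", 115200, 4, 4

    with serial.Serial(PORT, BAUD, timeout=1) as link:
        while True:
            line = link.readline().decode(errors="ignore").strip()
            if not line:
                continue
            vals = [int(v) for v in line.split(",") if v]
            if len(vals) != ROWS * COLS:
                continue                       # drop malformed scans
            # Normalize each cell to 0..1 and hand off as control values.
            grid = [[vals[r * COLS + c] / 1023.0 for c in range(COLS)]
                    for r in range(ROWS)]
            print(grid[0])                     # placeholder for control output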

Digitally Extending the Optical Soundtrack

Proceedings of the International Computer Music Conference
The optical soundtrack has a long history in experimental film as a means of image sonification. The technique translates image luminance into amplitude along the vertical axis, enabling the sonification of a wide variety of filmed patterns. While the technical challenges of working with film preclude casual exploration of the technique, a digital implementation of optical image sonification opens the process to users without a background in film as a means of sonifying arbitrary video input.
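
The core of the digital technique can be sketched in a few lines, assuming OpenCV and SciPy: pixel luminance down a vertical strip of each frame is read as successive audio samples, variable-density style, so the output sample rate is the frame rate times the frame height. The file names below are placeholders.

    # Digital optical-soundtrack sonification: luminance down a vertical
    # strip of each frame becomes successive audio samples.
    import cv2
    import numpy as np
    from scipy.io import wavfile

    cap = cv2.VideoCapture("input.mp4")        # placeholder file name
    fps = cap.get(cv2.CAP_PROP_FPS) or 24.0    # fall back if metadata missing
    columns = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        strip = gray[:, gray.shape[1] // 2]    # center one-pixel column
        columns.append(strip.astype(np.float32) / 255.0 - 0.5)
    cap.release()

    audio = np.concatenate(columns)            # top-to-bottom, frame by frame
    sr = int(fps * len(columns[0]))            # e.g. 24 fps * 1080 rows
    wavfile.write("optical.wav", sr, (audio * 32767).astype(np.int16))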

Musical Audio Synthesis Using Autoencoding Neural Networks

Proceedings of the International Computer Music Conference
Given an appropriate network topology and tuning of hyperparameters, artificial neural networks (ANNs) can be trained to learn a mapping from low-level audio features to one or more higher-level representations. Such networks are commonly used in classification and regression settings. In this work we propose repurposing autoencoding neural networks as musical audio synthesizers.
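
A hedged sketch of the idea in PyTorch, with illustrative layer sizes rather than the paper's network: train an autoencoder to reconstruct magnitude-spectrum frames, then discard the encoder and drive the decoder with synthetic latent activations, inverting the result to audio with Griffin-Lim.

    # Illustrative autoencoder synth (not the paper's exact architecture).
    import numpy as np
    import torch
    import torch.nn as nn
    import librosa
    import soundfile as sf

    y, sr = librosa.load(librosa.ex("trumpet"))      # any training audio
    S = np.abs(librosa.stft(y, n_fft=1024)).T        # (frames, 513)
    X = torch.tensor(S / S.max(), dtype=torch.float32)

    enc = nn.Sequential(nn.Linear(513, 64), nn.Sigmoid())
    dec = nn.Sequential(nn.Linear(64, 513), nn.ReLU())
    opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)

    for _ in range(500):                             # reconstruction training
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(X)), X)
        loss.backward()
        opt.step()

    # Synthesis: drive the decoder with hand-made latent trajectories
    # (slowly drifting random activations) instead of encoded audio.
    z = torch.rand(400, 64).cumsum(0)
    z = (z - z.min()) / (z.max() - z.min())
    with torch.no_grad():
        frames = dec(z).numpy().T * S.max()          # back to (513, frames)
    audio = librosa.griffinlim(frames, n_fft=1024)
    sf.write("ae_synth.wav", audio, sr)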

Audiovisual Resynthesis in an Augmented Reality

ACM Multimedia
"Resynthesizing Perception" immserses participants within an audiovisual augmented reality using goggles and headphones while they explore their environment. What they hear and see is a computationally generative synthesis of what they would normally hear and see. By demonstrating the associations and juxtapositions the synthesis creates, the aim is to bring to light questions of the nature of representations supporting perception. Two modes of operation are possible.
