Portfolio

My creative practice and research focus on electro-acoustic systems relating to chamber music, performance-space augmentation, and sound installation. I am interested in exploring acoustic phenomena – often feedback derived from both real and virtual systems – in live situations, and in embedding sounds or gestures into layers of automated live electronic processes. Below are some examples from my music and installations, followed by examples of my research and writing relating to these areas.

Dr. Eoin Callery CV/Resume October 2018

Music to Installations:

A scene for two musicians/actors, piano, percussion, live electronics, an Arduino/photoresistor-controlled fan, an ultrasonic speaker, and fixed electronics (utilizing Modal Distortion Processing, currently under development by Professor Jonathan Abel). At the end of the piece the fan moves the signal from the ultrasonic speaker around the performance space. Thanks to the amazing Levy Lorenzo and Dennis Sullivan… collectively aka Radical 2.

In Skry 2, a photoresistor is placed underneath a bowl of water and food coloring is added to the water. As the water changes color, the pitch of the electronic sound shifts very slowly.
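The mapping itself is simple. The sketch below is only an illustration of the idea – the piece runs on Arduino and Max, and the sensor range, pitch range, and smoothing value here are assumptions, not the piece's actual settings:

import math

# Illustrative ranges (assumptions, not the piece's actual values).
SENSOR_MIN, SENSOR_MAX = 0, 1023      # raw photoresistor reading range
FREQ_LOW, FREQ_HIGH = 110.0, 440.0    # output pitch range in Hz
SMOOTHING = 0.001                     # small coefficient -> very slow pitch changes

def sensor_to_freq(reading):
    """Map a raw light reading to a frequency on a logarithmic (pitch-like) scale."""
    x = min(max((reading - SENSOR_MIN) / (SENSOR_MAX - SENSOR_MIN), 0.0), 1.0)
    return FREQ_LOW * (FREQ_HIGH / FREQ_LOW) ** x

current_freq = sensor_to_freq(512)

def update(reading):
    """One control-rate tick: glide slowly toward the pitch implied by the sensor."""
    global current_freq
    target = sensor_to_freq(reading)
    current_freq += SMOOTHING * (target - current_freq)
    return current_freq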

For cello, prepared Disklavier (the Disklavier is prepared with several hardcover books laid across the strings, dampening all of them), and live electronics controlling variable band-pass filters. Performed at CCRMA by the incredible Séverine Ballon.

Finally, my interactive installation Short Story for headphones, using Max, Arduino, photoresistors, found objects, and a reading light.

Research – Writing and Recording:

Virtual Acoustics

In collaboration with my CCRMA colleagues Elliot K. Canfield-Dafilou and Professor Jonathan Abel, I developed a real-time multichannel virtual acoustic system – DAFx 2018 Paper. We are using this system in a variety of ongoing live performance, installation, and research projects, ranging from archaeoacoustics and ecoacoustics to composition and installation. Some of these projects explore the blending of real and virtual feedback, while others investigate how this system can provide more sophisticated audio experiences in virtual and augmented reality situations.

Recently, we have been assisting with research – led by Professor Jonathan Berger – into a codex of music possibly written for performance at Chiesa di Sant’Aniceto in the Palazzo Altemps in Rome. This space interests us because the church has undergone remarkably little renovation or alteration since its construction in the early 1600s. After taking impulse response measurements of the space in mid-2018, we created virtual acoustic models of the church, as well as slightly altered versions of those models, in the CCRMA recording studio. We then recorded an ensemble of singers and a continuo organist performing works from the codex, written by the late Renaissance composer Felice Anerio, in these real-time virtual environments. Among other things, the data from this recording session will be used to quantitatively investigate the impact of the acoustics of this space, and of spaces in general, on composition and performance practice. Papers and articles relating to this work will be published over the coming year.
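As a rough offline illustration of the underlying principle – not the real-time multichannel system described in the DAFx 2018 paper – a measured impulse response can be convolved with a dry recording to place that recording in the virtual acoustic. The file names below are hypothetical and the sketch works in mono:

import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# Hypothetical file names: a dry (close-miked) recording and a measured impulse response.
dry, fs = sf.read("dry_recording.wav")
ir, fs_ir = sf.read("church_impulse_response.wav")
assert fs == fs_ir, "recording and impulse response must share a sample rate"

# Work in mono for simplicity; the real system is multichannel.
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

# Convolving the dry signal with the measured IR places it in the virtual acoustic.
wet = fftconvolve(dry, ir)
wet /= np.max(np.abs(wet))  # normalize to avoid clipping

sf.write("virtual_render.wav", wet, fs)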

Automated Variable Band-Pass Filters

In many of my pieces I have utilized groups of automated, hard-limited, variable band-pass filters. These allow me to set very high gains on the microphone input and filter the ensuing feedback without creating unpleasant squeals. Below is a quick and simple demonstration of this filter system:
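For readers curious about the signal chain, here is a minimal offline sketch of the idea – one band-pass filter followed by a hard limiter, with assumed gain, center frequency, bandwidth, and ceiling values. The pieces themselves use groups of these filters with automated parameters:

import numpy as np
from scipy.signal import butter, lfilter

fs = 48000  # sample rate in Hz

def bandpass_limit(x, center_hz, bandwidth_hz, ceiling=0.1, gain=40.0):
    """Apply a large input gain, isolate one band, and hard-limit the result.

    The high gain encourages feedback through the band; the hard limit keeps the
    level fixed so the feedback never grows into a squeal.
    """
    low = (center_hz - bandwidth_hz / 2) / (fs / 2)
    high = (center_hz + bandwidth_hz / 2) / (fs / 2)
    b, a = butter(2, [low, high], btype="bandpass")
    y = lfilter(b, a, gain * x)
    return np.clip(y, -ceiling, ceiling)  # hard limit

# Example: one band centered at 440 Hz applied to a block of microphone samples.
mic_block = np.random.randn(fs) * 0.01   # stand-in for a microphone input block
out = bandpass_limit(mic_block, center_hz=440.0, bandwidth_hz=30.0)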

An album of more elaborate works using these systems with amplified electric guitar, violin bow, and string is available through eh? records and on Bandcamp.

Another quick example:

For a longer discussion about using these systems in more complex pieces, please see:

DMA Final Project: Developing an Electroacoustic Portfolio

Abstract:

“Taking my 2012 installation, The Workshop at the Back of the Barracks, as its starting point, this paper discusses how my reflections on that piece influenced the development of the five subsequent electroacoustic chamber compositions. During the installation the acoustic sound materials – that is, the movements and gestures of the visitors interacting with the installation – became embedded into an ongoing automated live electronic stream of sound. This embedding created what I call alternative audio images of the movements and gestures, something more than the creation of 1-plus-1 electronic processing relationships. Similarly, the acoustic materials for each of the chamber pieces were created through a series of simple reinterpretations of standard notation made during the compositional process. These reinterpretations were analogous to the parameter value shifts that produced the live electronic streams. Following some introductory remarks and a description of the installation, the original performance instructions and scores for each of the subsequent chamber pieces are presented. In addition, the main elements of the source acoustic materials for each piece and their principal reinterpretations are described. Further, each chapter includes a description of the autonomous live electronic streams into which the chamber musicians – performing the reinterpreted acoustic materials – became embedded, and of the alternative audio images that this embedding also produced.”

Modal Effects

Modal effects processors were developed by Professor Jonathan Abel at CCRMA/Stanford University. Together with doctoral student Kurt James Werner, he has written several papers explaining the processing, for example their 2015 DAFx paper “Distortion and Pitch Processing Using a Modal Reverberator Architecture”. I have been fortunate to have had the opportunity to use these processors in several of my pieces (Diegesis – Something Like In Memoriam at the top of the page uses the modal distortion effect) and to give them feedback on what would be useful to develop for real-time versions of these effects.
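Very roughly, the idea is to decompose the input into a set of resonant modes and apply a nonlinearity to each mode individually before summing. The sketch below is my own illustration of that general principle with assumed mode frequencies, decay times, and drive – not Abel and Werner's implementation:

import numpy as np
from scipy.signal import lfilter

fs = 48000  # sample rate in Hz

def mode_filter(x, freq_hz, t60_s):
    """A single resonant mode: a two-pole filter tuned to freq_hz with a T60 decay."""
    r = 10 ** (-3.0 / (t60_s * fs))             # pole radius from the decay time
    theta = 2 * np.pi * freq_hz / fs
    a = [1.0, -2.0 * r * np.cos(theta), r * r]  # resonator denominator
    b = [1.0 - r]                               # rough gain normalization
    return lfilter(b, a, x)

def modal_distortion(x, freqs, t60s, drive=20.0):
    """Split the input into modes, distort each mode, and sum the results."""
    out = np.zeros_like(x)
    for f, t60 in zip(freqs, t60s):
        mode = mode_filter(x, f, t60)
        out += np.tanh(drive * mode)            # per-mode nonlinearity
    return out / len(freqs)

# Example with a handful of assumed, illustrative modes.
freqs = [110.0, 220.0, 330.0, 550.0, 880.0]
t60s = [2.0, 1.5, 1.2, 1.0, 0.8]
x = np.random.randn(fs) * 0.05                  # stand-in input signal
y = modal_distortion(x, freqs, t60s)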