Abstract
Among the many possibilities for combining human performers and electronics, the use of computer intelligence allows us to conceive of attributing “virtuosity” to the live electronic system. Rather than trying to copy human behaviours directly, however, it is more interesting to explore ways in which the electronics is at once sufficiently similar to, and sufficiently different from, human instrumental playing to create a stimulating space for composition and performance.
This suggests a kind of collaboration between composer and electronics: even while one is creating the electronic environment and musical material, one is discovering more about, and negotiating with, the aspects in which one hopes to set up autonomous agency. It feels like making an instrument, the music, and a persona all at once.
In Gravity’s Horizon, the embedded “intelligence” does not use AI or machine learning in the usual sense, but involves ways of trading off consistency against variation, and listening against leading, within a chamber music interaction. The electronics behaves as a fourth member of the ensemble, but it has its own distinctive possibilities. It is able to capture and mediate between the sonorities of the acoustic flute, drum (played by the flautist), cello and piano, in ways that are impossible with acoustic instruments; and it carries the sound between them through space via loudspeakers placed amongst the ensemble. The same model can also produce sounding materials that are very distinct from those of the instruments. The sonic trajectories are under the control of a “score” comparable in detail to that of the instrumental parts, but one that includes aleatoric aspects of microstructure and adapts in time along with the human players.
Sound production is based on additive synthesis, at its simplest using sine tones but here much extended. Real-time capture/analysis enables sustain and extrapolation from individual instrument sounds, and intimate matching to the sounds as actually played by the human players. “Pre-learned” sounds provide targets for transformative phrases. The talk will include live demonstrations.
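As a rough illustration only (not the piece’s actual implementation, whose details the abstract does not give), the sketch below shows additive synthesis at its simplest: a bank of sine tones whose frequencies and amplitudes might come from analysis of a captured instrumental sound, so that the tone can then be sustained or extrapolated at will. The partial data and function name here are hypothetical.

```python
import numpy as np

def additive_synth(partials, duration, sr=44100):
    """Sum sine tones; each partial is a (frequency_hz, amplitude) pair.

    In a real-time setting the partial data would come from live
    analysis of a captured instrument sound; here it is hand-written.
    """
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for freq, amp in partials:
        out += amp * np.sin(2 * np.pi * freq * t)
    peak = np.max(np.abs(out))  # normalise to avoid clipping
    return out / peak if peak > 0 else out

# Hypothetical partial set loosely resembling a flute-like spectrum:
# a strong fundamental with weaker, slightly inharmonic upper partials.
partials = [(440.0, 1.0), (881.5, 0.35), (1323.0, 0.15), (1767.0, 0.05)]
signal = additive_synth(partials, duration=2.0)
```

On this reading, “sustaining” a captured sound amounts to holding such an analysed partial set steady, while transformation towards the “pre-learned” targets mentioned above could be achieved by interpolating partial frequencies and amplitudes from one set to another.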
Gravity’s Horizon has been performed by two ensembles, in different contexts. During 2019, the original commissioning ensemble and I are returning to the work to extend it. As planned from the start, it will grow in its middle section, drawing on our findings so far and, musically, expanding the negotiated arc between the distinctive materials of the beginning and ending. One kind of material here makes use of counterpoint, an interesting and exposed way to explore sensitive interactions between the four parties – three players plus electronics. I am also integrating automated score-following using Ircam’s Antescofo. Tests so far indicate that the ways in which the score following is pushed to fail, as much as the moments when it works, will create a rewarding responsive space.
| Original language | English |
| --- | --- |
| Publication status | Published - 23 Mar 2019 |
| Event | Society for Electro-Acoustic Music in the United States, 2019 conference - Berklee College of Music and Boston Conservatory at Berklee, Boston, United States. Duration: 21 Mar 2019 → 23 Mar 2019. https://www.berklee.edu/seamus |
Conference

| Conference | Society for Electro-Acoustic Music in the United States, 2019 conference |
| --- | --- |
| Abbreviated title | SEAMUS 2019 |
| Country/Territory | United States |
| City | Boston |
| Period | 21/03/19 → 23/03/19 |
| Internet address | https://www.berklee.edu/seamus |
Keywords
- live electronics, collaboration, composition, performance, AI