On Future Performance

 By TOD MACHOVER

Composing is what I love to do most. It is what best combines my
various skills and interests — imagination, reflection, organization
and the desire to communicate my thoughts and emotions to anyone who
will listen. I also love solitude: I do my creative work in an
18th-century barn on our farm near Boston, where I can pursue my ideas
without the need to explain or translate until all is ripe and ready.
So it may seem like a paradox that another large chunk of my life is
spent in one of the world’s most futuristic, collaborative and
intensive centers of technological invention — the Massachusetts Institute of Technology (M.I.T.) Media Lab.
But the attractions and complexities of merging these worlds are
central to how and why I work, and grow from seeds planted when I was
very young.



My mom is a Juilliard-trained pianist and a remarkable pedagogue, and my
dad is one of the pioneers of computer graphics, but it actually took
me a while to start combining these fields. I grew up as a cellist,
first playing solo Bach, then chamber music (I never particularly
enjoyed playing in orchestras), and then, by high school, original
composed or improvised music using a wired (or is that “weird”?)
transformed rock cello that I created by placing large headphones
around the cello for amplification, then sending the sound through tape
recorder loops and analog transformation processes.


The appearance of the Beatles’ “Sgt. Pepper’s Lonely Hearts Club
Band” changed my life: it suggested a music that ideally balanced
complexity and directness. There was a downside, though: as a product
of the recording studio, most of the Beatles’ music after 1967 couldn’t
actually be played live. That’s when I started imagining a performance
mode that would combine the physicality and intimacy of solo cello and
the unhinged creativity of the recording studio. I was driven by the
urge to bring this strange, enticing and intricate music filling my
head out through my arms and fingers and into the world.

This desire compelled me not only to compose the music I was
imagining, but also to invent new instruments and new modes of playing
them, something that I never thought as a kid that I’d end up doing. So
along with my colleagues and students at the M.I.T. Media Lab I’ve
designed hypercellos for Yo-Yo Ma and Matt Haimovitz, a Brain Opera to allow audiences to share in the creation of each performance, a Toy Symphony to induce children to fall in love with music using Music Toys that open doors to collaboration with top-level virtuosi, and composing software — Hyperscore — for enhancing music education and enabling music-modulated-health.

Inventions like these have been part of a trend that has yielded
amazing developments over the past 10 years. Technology has
democratized music in ways that are surprising even to me,
revolutionizing access to any music anytime with iPod and iTunes,
opening interactive music-making to amateurs with Guitar Hero and Rock
Band (which both grew out of a group I lead at the M.I.T. Media Lab),
providing digital production and recording facilities on any laptop
that surpass what the Beatles used at Abbey Road, and redefining the
performance ensemble with initiatives like the Stanford iPhone
Orchestra and YouTube Symphony.

In fact, at the start of 2010 one wonders whether there is any more
music technology to invent, or whether our musical imaginations and
artistic cultures simply need to catch up. The answer is both, and then
some.

For the first time in my career, I feel as if there are enough tools
on my laptop, enough brilliant and inventive playing chops amongst the
younger generation of performers, enough oomph in the iPhone, and
increasing openness and entrepreneurship in musical organizations both
large and small to stimulate my imagination and allow for the
production and dissemination of my somewhat unusual creations.

But even though these evolving music technologies are already very
powerful and increasingly ubiquitous, we can also see their current
limitations and potential risks. Guitar Hero is rhythmically exciting
but not yet expressive or creative enough — a “sticky” but not
“open-ended” experience that does not obviously lead to better
musicality, listening or ensemble awareness. The iPhone is a remarkable
little chameleon but lacks the touch and sensitivity of even the
simplest traditional instrument, better for selecting and switching
than for subtly shaping. Amplified sound is loudly present and
“surrounds” us ever more, but still emphasizes the boom box aspect
rather than the “still small voice.” And there isn’t yet a performance
measurement system that could come close to interpreting the
exuberance, range and immediacy of someone like Gustavo Dudamel or
truly enhancing the experience of an “unplugged” symphony orchestra.

As a composer, I find that each new piece I undertake suggests
exciting but daunting technological challenges; my imagination just
seems to be wired that way. My current project, the opera “Death and the Powers,” is one example.



Video: A look at the technology being used in Tod Machover’s opera, “Death and the Powers”



I had been invited to imagine a new (and unusual) opera by the Opéra de
Monte-Carlo, and two fundamental impressions came to mind early on. The
first came from thoughts about mortality and how difficult it is to sum
up one’s life in a way that can be shared and transmitted to loved ones
through generations, and how music has a particularly powerful capacity
for collecting and concentrating multiple experiences, then burning
them indelibly into our memories. And I started imagining that this web
of musical memories — the embodiment of an entire life — needed to
transcend traditional notes and instruments, jump off the stage and
physically envelop the listener, both aurally and visually. This
turned into a mental impression of floating, undulating, palpable 3-D
sounds represented visually through slowly moving, morphing objects
filling a stage — like “Fantasia” become physical (but with my music and without
dancing elephants). I felt the need to go beyond the flatness and
harshness of usual multimedia tools to create something that was at the
same time transcendent and magical but also completely human and
down-to-earth.

I then sought out collaborators — the poet Robert Pinsky and the
playwright Randy Weiner — to turn these initial impressions into an
opera, a form that has long attracted me for its use of word and image
to ground music’s abstract qualities in concrete human experience.
Together we crafted a story about a man who longs to leave the world in
order to pass to a higher level of existence, but wants everything
about himself — his memories, his ability to influence others, his
contact with those he loves, his legacy — to remain behind.

This story evolved into a full opera libretto in which the main
character, named Simon Powers, switches on The System at the end of
Scene 1: he becomes embodied more and more in his surroundings, forcing
those left behind to decide how to communicate with him or it, whether
to follow, and what part of his legacy to retain or reject. The stage
itself becomes the main character in the opera, taking over from — and
extending — the physical presence of the singer. Realizing this vision
has been a daunting challenge, but happily, with the collaboration of
the director Diane Paulus, the designer Alex McDowell, the
choreographer Karole Armitage and my group at the M.I.T. Media Lab, we
are in the process of designing sighing walls, morphing furniture,
gliding robots and even a resonating chandelier to create The System on
stage — and to make it “sing.”

In helping to tell this story and to sonify the score, all aspects
of this physical set translate and amplify Simon Powers’ human
presence, challenging the current limits of our ability to measure and
interpret all the subtleties of a great performance. The techniques
currently being developed are already yielding surprising results,
turning elegantly refined gestures, barely perceptible touch, and the
gentlest breath into sounds, shapes and movements that convey
personality and feeling without looking or sounding exactly like a
human being. It is a new kind of instrument, and we are just learning
how to play it.
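The kind of mapping described above — measured gesture in, expressive sound parameters out — can be sketched in miniature. The Python fragment below is purely illustrative and does not come from the systems discussed here: the function names, parameter choices and filter constant are all hypothetical. It smooths a noisy gesture-intensity signal with a one-pole low-pass filter, then maps the result to a few synthesis parameters.

```python
def smooth(samples, alpha=0.2):
    """One-pole low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).

    Tames sensor jitter so small, deliberate gestures read as such.
    """
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def gesture_to_params(intensity):
    """Map a smoothed 0-1 intensity to hypothetical synthesis parameters."""
    return {
        "amplitude": intensity,                 # louder with stronger gesture
        "brightness": 0.3 + 0.7 * intensity,    # filter opens with effort
        "vibrato_rate": 4.0 + 2.0 * intensity,  # Hz; subtle expressive change
    }

# Jittery sensor readings -> smoothed trajectory -> per-frame parameters.
raw = [0.0, 0.9, 0.1, 0.8, 0.2, 0.85]
params = [gesture_to_params(i) for i in smooth(raw)]
```

Real performance-measurement systems are of course far richer than this, but the basic shape — condition the signal, then map it onto musically meaningful dimensions — is the same.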

I hope these developments will lead to musical possibilities
down the line that I can’t predict right now, just as software and
hardware designed to measure Yo-Yo Ma’s bowing led — in a slightly
zigzag way — to Guitar Hero. I would not be surprised, for example, if
the sophisticated infrastructure that Simon Powers will use to
construct and communicate his legacy when “Powers” premieres next
September in Monaco were eventually to morph into a platform for
creating and sharing musical stories — a kind of “personal opera” — on,
well, your iPhone.

In fact, I think that it is precisely this kind of surprising
freshness that technology can allow — through what can be precisely
customized for each project and through the unexpected new discoveries
that each project seems to require or reveal — that remains one of its
continuing attractions for me.

But we can’t take such freshness for granted. Musical technology is
so ever-present in our culture, and we are all so very aware of it,
that techno-clichés and techno-banalities are never far away and have
become ever more difficult to identify and root out. It is deceptively
challenging these days to apply technology to music in ways that
explode our imaginations, deepen our personal insights, shake us out of
boring routine and accepted belief, and pull us ever closer to one
another.

That’s what makes this kind of work worthwhile and inspires me. But
it also leads to a paradox that I experience every single day: that the
desire to shape the future is not perfectly compatible with the
knowledge that musical experience — and its power to excite and
transform us — is fleeting, here and now, in this very moment. And that
we’d be extremely fortunate indeed to create new sounds and instruments
and technologies that approach the compact, powerful perfection of
playing, listening to or imagining Bach emanating from a solo cello.

So what do you think? Can music made by technological processes ever
match the beauty and impact of a skilled performance on a traditional
instrument? Will an iPhone or its descendants allow us to enhance our
musical imaginations while merging with our bodies, becoming —
literally — second nature as we create and communicate our deepest
thoughts and feelings through sound?


Tod Machover

Tod Machover is a composer and inventor, known for developing
new technologies for music and performance. He is professor of Music
and Media at the M.I.T. Media Lab and is currently finishing a CD of
recent music for string quartet, orchestra and electronics, to be
released this spring on the Bridge label. His Web site is todmachover.com.

CMJ: Call for works for Computer Music Journal DVD 2010


MIT Press’s Computer Music Journal is seeking submissions of recent computer music works from the Australasian region (Australia, New Zealand, Singapore, Indonesia and Malaysia) for inclusion on this year’s annual DVD, which is published with the fourth and final issue at the end of the year. Video works are especially encouraged, but audio-only works are also welcome, as are videos of performances (with good-quality audio).

Works may be audio files only, video, videos of performances and so on. Works may have been presented or published elsewhere, but in the latter case, documentation must be provided granting CMJ the nonexclusive right to publish. One work may be submitted per composer.

Please include full-quality renditions of your work:
– If multi-channel audio (up to 5.1 channels), include the mono files with clear labeling of speaker distribution, 16-bit, 48 kHz (44.1 kHz also acceptable);
– If video work, include a minimally compressed .mov file (4:3 NTSC).
In addition, please include a text file with a program note and bio (500-word maximum for each) and, optionally, a *minimum* 300-dpi black-and-white digital photo of the composer(s); TIFF files are best.

Duration: Pieces under 10 minutes in duration are preferred.

Because of DVD production requirements and publishing lead times, works must be received no later than Friday, 2 April 2010.


Please send your work to:

Paul Doornbusch – CMJ DVD Curator
Counter Delivery
c/o – Melbourne A’Beckett Street Post Office
Ground Level
410 Elizabeth Street
Melbourne
VIC 3000
AUSTRALIA

2nd Call: New Interfaces for Musical Expression (NIME++ 2010)

 June 15–18, 2010
www.nime2010.org

On behalf of the NIME 2010 Committee, we invite you to be part of the International Conference on New Interfaces for Musical Expression. The core purpose of NIME is to examine interfaces/instruments for musical expression. The ++ portion of the 2010 conference concerns projects/papers that connect music or sound with other fields through multi-disciplinary, cross-disciplinary and multimodal expression. Projects that engage other disciplines while extending the musical or sonic framework are invited, and vice versa.

NIME TOPICS
We welcome submissions on topics related to new interfaces for music performance including, but not limited to:

– Novel controllers and interfaces for musical expression
– Novel controllers for collaborative performance
– Novel musical instruments
– Computational methods of composition
– Augmented/hyper instruments
– Interfaces for dance and physical expression
– Interactive Game Music
– Robotic Music
– Interactive sound and multimedia installations
– Interactive sonification
– Sensor and actuator technologies
– Haptic and force feedback devices
– Interface protocols and data formats
– Gesture and music
– Perceptual & cognitive issues
– Interactivity design and software tools
– Musical mapping strategies
– Performance analysis and machine learning
– Performance rendering and generative algorithms
– Experiences with novel interfaces in education and entertainment
– Experiences with novel interfaces in live performance and composition
– Surveys of past work and stimulating ideas for future research
– Historical studies in twentieth-century instrument design
– Reports on student projects in the framework of NIME related courses
– Artistic, cultural, and social impact of NIME technology
– Gesture measurement
– Enabling music networks
– Bio-music

NIME++ TOPICS
We welcome submissions on topics related to multi-disciplinary, cross-disciplinary and multimodal expression, including, but not limited to:

– Mobile Technologies including Sound & Music
– Locative Media Integration
– Urban Digital Media & Media Façades
– Human-Computer Interaction
– Multimodal Expressive Interfaces
– Practice-Based Research Approaches/Methodologies/Criticism
– Sonification, Auditory Display & Multimodal Information Expression, Data Display
– NIME intersecting with Performance, Dance, Theatre, Game Design
– Sonic Expression in Architecture, Design, Wearables/Fashion
– Computational Interfaces/Methods for Expression & Creativity


CALL FOR PAPERS
Full Paper (up to 6 pages in proceedings, longer oral presentation, optional demo)
Short Paper/Poster/Demo (up to 4 pages in proceedings; choose from shorter oral presentation OR poster OR demo)

ALL PAPERS WILL BE PUBLISHED IN THE CONFERENCE PROCEEDINGS.

IMPORTANT DATES
Submission of full papers, short papers and poster/demo proposals: 29 January 2010
Notification of acceptance/rejection (papers and posters): 12 March 2010
Submission of final papers: 23 April 2010

For any further information/question/comment/suggestion, please send a message to the local organizing committee.


CALL FOR PERFORMANCES, INSTALLATIONS, EXHIBITION
As in previous years, we are calling for proposals for performances and installations to be presented in conjunction with the conference. There are two categories of performance proposal: concert performance and club-night performance. For details, see the call on the NIME2010 website.

We are pleased to announce that Ensemble Offspring <http://www.ensembleoffspring.com/> will be the ensemble in residence at NIME 2010. Performers/composers may therefore draw on any combination of their instrumental resources if they wish. See the NIME2010 website for details.

Submission of installation and performance proposals due: 29 January 2010


CALL FOR WORKSHOP/TUTORIAL PROPOSAL
In 2010, we are looking for two kinds of special event on the day preceding the full conference:

WORKSHOPS for academic-style paper presentations or discussion relating to a specialist area directed by an expert;

and

TUTORIALS for making things and developing knowledge. Tutorials can range from instruction and discussion about specialist techniques, platforms, hardware, software or pedagogical topics for the advancement of fellow NIME-ers and people with experience related to the topic, to introductory sessions for visitors to the NIME community: novices, interested student participants, people from other fields, and members of the public getting to know the potential of NIME. For details see the call on the NIME2010 website.

Submission of workshop/tutorial proposals due: 29 January 2010

Dr. Andrew Johnston
Lecturer
School of Software
Faculty of Engineering and IT
University of Technology, Sydney
PO Box 123, Broadway, NSW 2007, Australia
Ph. +61 2 9514 4497 Fax: +61 2 9514 4492
Room 10.04.341
Web: http://andrewjohnston.net/