The 60x60 project had 60 composers each write a one-minute piece, assembled into a single 60-minute work.
This time, 60 choreographers joined in to create a one-hour performance.
It's quite interesting.
You can watch the hour-long performance on HEC TV.
Enjoy:
http://www.hectv.org/programs/spec/program.php?specialid=51
By Geeta Dayal, Wednesday, January 6th, 2010 at 1:00 pm.
In Krystof Wodiczko’s striking installation Out of Here: The Veterans
Project, currently on view at the ICA in Boston, choppers roar
overhead. People scream in the distance. Glass breaks and shatters on
the floor. The viewer can see almost nothing; the large room is dark,
except for a few windows high above, created by a row of video
projections. The view from these windows is obscured; the piece is as
much about what you can’t see as about what you do see. But even more
importantly, the piece is about what you hear, and what you can’t
hear. The chants of an imam become the sounds of women wailing.
Gunshots begin to fire sporadically. Military officers yell harsh
commands. The rumble of bass—a swarm of Humvees in the distance,
drawing closer—gets louder and more threatening. The longer you stay
in the room, immersed in the increasing racket, the more palpable the
sense of dread becomes. The harrowing sounds of war are not simply
about the sounds themselves, but the spaces in between.
In the intriguing new book Sonic Warfare: Sound, Affect, and the Ecology of Fear [MIT Press], Steve Goodman
explores the power of sound as a tactic of irritation, intimidation,
or even permanent harm. Goodman analyzes “environments, or ecologies,
in which sound contributes to an immersive atmosphere or ambience of
fear and dread—where sound helps produce a bad vibe.”
Goodman catalogs a litany of military uses of sound that seem like
sinister science fiction fantasies. The “Urban Funk Campaign” was a
suite of audio harassment techniques used by the military in Vietnam
in the early 1970s. One such technique was called “The Curdler,” or
“People Repeller,” a panic-inducing oscillator with the ability to
cause deafening impact at short distances. The Windkanone, or
“Whirlwind Cannon,” was a sonic weapon planned by the Nazis. The
“Ghost Army” was a unit of the U.S. Army in World War II that
impersonated other units to fake out the enemy, employing an array of
sonic deception techniques with the help of engineers from Bell Labs.
“The Scream” was an acoustic weapon used by the Israeli military
against protesters in 2005. That same year, the Israeli air force
deployed deafening sonic booms over the Gaza Strip—producing powerful
physiological and psychological effects. “Its victims likened its
effect to the wall of air pressure generated by a massive explosion,”
Goodman writes. “They reported broken windows, ear pain, nosebleeds,
anxiety attacks, sleeplessness, hypertension, and being left ‘shaking
inside.’”
The physiological effects of sound get an extended discussion via the
concept of infrasound, or sub-20 Hz bass frequencies, which are
legendary for inducing bodily harm. Fantastical tales about infrasound
and its infamous effects on the human body abound in popular lore.
Infrasound devices generally require huge, heavy rigs to produce such
powerful waves, which limit their practicality. One of the book’s most
fascinating accounts is the story of the wily scientist Vladimir
Gavreau, who did bizarre experiments with infrasonic waves in his
French laboratory in the 1960s. According to Goodman, one such
experiment caught Gavreau and his team in a vibratory “envelope of
death,” where they allegedly suffered “sustained internal spasms as
their organs hit critical resonance frequencies.”
Goodman seizes upon these outer limits of sound, infrasound at the low
end and ultrasound at the high end, and explores them extensively. For
him, infrasound and ultrasound, at the edges of our range of
perception, illustrate the “unsound,” as he terms it, the “not yet
audible.”
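The book’s claim that infrasound rigs must be huge follows from simple arithmetic: a wave’s length is the speed of sound divided by its frequency, so sub-20 Hz tones stretch tens of meters, far longer than any ordinary loudspeaker. A back-of-the-envelope sketch (the helper name is my own, and it assumes roughly 343 m/s for sound in air):

```python
# Wavelength of a sound wave: lambda = c / f
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def wavelength(freq_hz):
    """Return the wavelength in meters of a tone at freq_hz."""
    return SPEED_OF_SOUND / freq_hz

# An ordinary subwoofer note vs. the infrasonic range Gavreau worked in:
print(wavelength(100))  # ~3.4 m
print(wavelength(7))    # ~49 m
```

A 7 Hz wave is roughly the length of a city block, which is why producing it at dangerous amplitudes takes industrial-scale hardware.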
Freakish military devices like “The Curdler” may seem like footnotes
of the historical record, curiosities from wars staged in far-flung
lands. But these devices also hit close to home. Last September,
police in Pittsburgh utilized a device known as the LRAD (Long Range
Acoustic Device) cannon against G20 protesters — the first documented
use of one of these acoustic cannons against civilians in the United
States. At top volume, the cannon is capable of emitting high-pitched
warning tones at 146 decibels — loud enough to cause permanent
hearing damage.
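That 146-decibel figure understates how violent the LRAD is, because dB SPL is a logarithmic scale: every 20 dB multiplies sound pressure by ten, so 146 dB carries roughly twenty times the pressure of the ~120 dB level commonly cited as the pain threshold. A quick sketch of the standard conversion (function names are my own):

```python
import math

REF_PRESSURE = 20e-6  # pascals; the standard reference pressure for dB SPL

def db_spl_to_pascals(db):
    """Convert a dB SPL level to RMS sound pressure in pascals."""
    return REF_PRESSURE * 10 ** (db / 20)

def pascals_to_db_spl(pressure):
    """Convert RMS sound pressure in pascals back to dB SPL."""
    return 20 * math.log10(pressure / REF_PRESSURE)

# The LRAD's peak output vs. the commonly cited pain threshold:
print(db_spl_to_pascals(146))  # ~399 Pa
print(db_spl_to_pascals(120))  # 20 Pa
```

The ratio, about 399 Pa to 20 Pa, is why even brief exposure at close range risks permanent hearing damage.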
How do we make sense of these uses of sound? Goodman sidesteps a
full-on historical survey of the subject. Nor is he interested in a
scientific analysis of the neurobiology of audition. Instead, he
presents a theoretical apparatus for understanding these acts of sonic
warfare, via thinkers such as Friedrich Kittler, Paul Virilio, and
Jacques Attali. Goodman argues for an “ontology of vibrational force,”
as a way of understanding “the not yet audible.” Goodman defines
vibrational force
as a “microrhythmic oscillation,” and uses the idea of
“rhythmanalysis,” a philosophy of rhythm developed by the philosophers
Pinheiro dos Santos, Gaston Bachelard, and Henri Lefebvre, to advance
his argument.
Along the way, Goodman delves into a bewildering array of references
from the worlds of
philosophy, psychoacoustics, art, music, and military strategy. The
Futurists’ fixation with noise, war, and speed figures in here, from
Luigi Russolo’s famed tract “The Art of Noises” to Marinetti’s fevered
exultations: “Load! Fire! What a joy to hear to smell completely
taratatata of the machine guns screaming a breathlessness under the
stings.” So, too, do the discourses of Afrofuturism, the surreal
fictional landscapes of William S. Burroughs and J.G. Ballard, the
1984 cult film Decoder, “audio
viruses,” Deleuze and Guattari’s theory of the refrain, Jamaican sound
systems, the work of the sound artist Mark Bain, and the “Mosquito
Anti-Social Device,” a high-frequency tool designed to prevent UK
teenagers from loitering.
Sonic Warfare is a heady, sprawling read, densely packed with detail.
Goodman’s wide range is, in part, influenced by his background. In
addition to being a writer and theorist, he doubles as an accomplished
producer of dubstep under the alias kode9, wandering a subterranean
world of bone-rattling bass pressure, towering speaker stacks, and
crowded rooms. His unique dual existence makes him strangely – and
ideally – suited for a book which requires not only an understanding
of theory and history, but also a close and personal understanding of
the powerful physicality of sound itself.
Geeta Dayal is the author of Another Green World
(Continuum, 2009), a new book on Brian Eno. She has written over 150
articles and reviews for major publications, including Bookforum, The
Village Voice, The New York Times, The International Herald-Tribune,
Wired, The Wire, Print, I.D., and many more. She has taught several
courses as a lecturer in new media and journalism at the University of
California – Berkeley, Fordham University, and the State University of
New York. She studied cognitive neuroscience and film at M.I.T. and
journalism at Columbia. You can find more of her work on her blog, The Original Soundtrack.
By TOD MACHOVER
Composing is what I love to do most. It is what best combines my
various skills and interests — imagination, reflection, organization
and the desire to communicate my thoughts and emotions to anyone who
will listen. I also love solitude: I do my creative work in an
18th-century barn on our farm near Boston, where I can pursue my ideas
without the need to explain or translate until all is ripe and ready.
So it may seem like a paradox that another large chunk of my life is
spent in one of the world’s most futuristic, collaborative and
intensive centers of technological invention — the Massachusetts Institute of Technology Media Lab.
But the attractions and complexities of merging these worlds are
central to how and why I work, and grow from seeds planted when I was
very young.
My mom is a Juilliard-trained pianist and a remarkable pedagogue, and my
dad is one of the pioneers of computer graphics, but it actually took
me a while to start combining these fields. I grew up as a cellist,
first playing solo Bach, then chamber music (I never particularly
enjoyed playing in orchestras), and then, by high school, original
composed or improvised music using a wired (or is that “weird”?)
transformed rock cello that I created by placing large headphones
around the cello for amplification, then sending the sound through tape
recorder loops and analog transformation processes.
The appearance of the Beatles’ “Sgt. Pepper’s Lonely Hearts Club
Band” had changed my life: it suggested a music that ideally balanced
complexity and directness. There was a downside, though: as a product
of the recording studio, most of the Beatles’ music after 1967 couldn’t
actually be played live. That’s when I started imagining a performance
mode that would combine the physicality and intimacy of solo cello and
the unhinged creativity of the recording studio. I was driven by the
urge to bring this strange, enticing and intricate music filling my
head out through my arms and fingers and into the world.
This desire compelled me not only to compose the music I was
imagining, but also to invent new instruments and new modes of playing
them, something that I never thought as a kid that I’d end up doing. So
along with my colleagues and students at the M.I.T. Media Lab I’ve
designed hypercellos for Yo-Yo Ma and Matt Haimovitz, a Brain Opera to allow audiences to share in the creation of each performance, a Toy Symphony to induce children to fall in love with music using Music Toys that open doors to collaboration with top-level virtuosi, and composing software — Hyperscore — for enhancing music education and enabling music-modulated health.
Inventions like these have been part of a trend that has yielded
amazing developments over the past 10 years. Technology has
democratized music in ways that are surprising even to me,
revolutionizing access to any music anytime with iPod and iTunes,
opening interactive musicmaking to amateurs with Guitar Hero and Rock
Band (which both grew out of a group I lead at the M.I.T. Media Lab),
providing digital production and recording facilities on any laptop
that surpass what the Beatles used at Abbey Road, and redefining the
performance ensemble with initiatives like the Stanford iPhone
Orchestra and YouTube Symphony.
In fact, at the start of 2010 one wonders whether there is any more
music technology to invent, or whether our musical imaginations and
artistic cultures simply need to catch up. The answer is both, and then
some.
For the first time in my career, I feel as if there are enough tools
on my laptop, enough brilliant and inventive playing chops amongst the
younger generation of performers, enough ooomph in the iPhone, and
increasing openness and entrepreneurship in musical organizations both
large and small to stimulate my imagination and allow for the
production and dissemination of my somewhat unusual creations.
But even though these evolving music technologies are already very
powerful and increasingly ubiquitous, we can also see their current
limitations and potential risks. Guitar Hero is rhythmically exciting
but not yet expressive or creative enough — a “sticky” but not
“open-ended” experience that does not obviously lead to better
musicality, listening or ensemble awareness. The iPhone is a remarkable
little chameleon but lacks the touch and sensitivity of even the
simplest traditional instrument, better for selecting and switching
than for subtly shaping. Amplified sound is loudly present and
“surrounds” us ever more, but still emphasizes the boom box aspect
rather than the “still small voice.” And there isn’t yet a performance
measurement system that could come close to interpreting the
exuberance, range and immediacy of someone like Gustavo Dudamel or
truly enhancing the experience of an “unplugged” symphony orchestra.
As a composer, I find that each new piece I undertake suggests
exciting but daunting technological challenges; my imagination just
seems to be wired that way. My current project, the opera “Death and the Powers,” is one example.
Video: A look at the technology being used in Tod Machover’s opera, “Death and the Powers”
I had been invited to imagine a new (and unusual) opera by the Opera of
Monte Carlo, and two fundamental impressions came to mind early on. The
first came from thoughts about mortality and how difficult it is to sum
up one’s life in a way that can be shared and transmitted to loved ones
through generations, and how music has a particularly powerful capacity
for collecting and concentrating multiple experiences, then burning
them indelibly into our memories. And I started imagining that this web
of musical memories — the embodiment of an entire life — needed to
transcend traditional notes and instruments, jump off the stage and
physically envelop the listener, both aurally and visually. This
turned into a mental impression of floating, undulating, palpable 3-D
sounds represented visually through slowly moving, morphing objects
filling a stage — like “Fantasia” become physical (but with my music and without
dancing elephants). I felt the need to go beyond the flatness and
harshness of usual multimedia tools to create something that was at the
same time transcendent and magical but also completely human and
down-to-earth.
I then sought out collaborators — the poet Robert Pinsky and the
playwright Randy Weiner — to turn these initial impressions into an
opera, a form that has long attracted me for its use of word and image
to ground music’s abstract qualities in concrete human experience.
Together we crafted a story about a man who longs to leave the world in
order to pass to a higher level of existence, but wants everything
about himself — his memories, his ability to influence others, his
contact with those he loves, his legacy — to remain behind.
This story evolved into a full opera libretto in which the main
character, named Simon Powers, switches on The System at the end of
Scene 1: he becomes embodied more and more in his surroundings, forcing
those left behind to decide how to communicate with him or it, whether
to follow, and what part of his legacy to retain or reject. The stage
itself becomes the main character in the opera, taking over from — and
extending — the physical presence of the singer. Realizing this vision
has been a daunting challenge, but happily, with the collaboration of
the director Diane Paulus, the designer Alex McDowell, the
choreographer Karole Armitage and my group at the M.I.T. Media Lab, we
are in the process of designing sighing walls, morphing furniture,
gliding robots and even a resonating chandelier to create The System on
stage — and to make it “sing.”
In helping to tell this story and to sonify the score, all aspects
of this physical set translate and amplify Simon Powers’ human
presence, challenging the current limits of our ability to measure and
interpret all the subtleties of a great performance. The techniques
currently being developed are already yielding surprising results,
turning elegantly refined gestures, barely perceptible touch, and the
gentlest breath into sounds, shapes and movements that convey
personality and feeling without looking or sounding exactly like a
human being. It is a new kind of instrument, and we are just learning
how to play it.
Hopefully, these developments will lead to musical possibilities
down the line that I can’t predict right now, just as software and
hardware designed to measure Yo-Yo Ma’s bowing led — in a slightly
zigzag way — to Guitar Hero. I would not be surprised, for example, if
the sophisticated infrastructure that Simon Powers will use to
construct and communicate his legacy when “Powers” premieres next
September in Monaco were eventually to morph into a platform for
creating and sharing musical stories — a kind of “personal opera” — on,
well, your iPhone.
In fact, I think that it is precisely this kind of surprising
freshness that technology can allow — through what can be precisely
customized for each project and through the unexpected new discoveries
that each project seems to require or reveal — that remains one of its
continuing attractions for me.
But we can’t take such freshness for granted. Musical technology is
so ever-present in our culture, and we are all so very aware of it,
that techno-clichés and techno-banalities are never far away and have
become ever more difficult to identify and root out. It is deceptively
challenging these days to apply technology to music in ways that
explode our imaginations, deepen our personal insights, shake us out of
boring routine and accepted belief, and pull us ever closer to one
another.
That’s what makes this kind of work worthwhile and inspires me. But
it also leads to a paradox that I experience every single day: that the
desire to shape the future is not perfectly compatible with the
knowledge that musical experience — and its power to excite and
transform us — is fleeting, here and now, in this very moment. And that
we’d be extremely fortunate indeed to create new sounds and instruments
and technologies that approach the compact, powerful perfection of
playing, listening to or imagining Bach emanating from a solo cello.
So what do you think? Can music made by technological processes ever
match the beauty and impact of a skilled performance on a traditional
instrument? Will an iPhone or its descendants allow us to enhance our
musical imaginations while merging with our bodies, becoming —
literally — second nature as we create and communicate our deepest
thoughts and feelings through sound?
Tod Machover is a composer and inventor, known for developing
new technologies for music and performance. He is professor of Music
and Media at the M.I.T. Media Lab and is currently finishing a CD of
recent music for string quartet, orchestra and electronics, to be
released this spring on the Bridge label. His Web site is todmachover.com