Our students arrived yesterday with their assignments, namely, a
description of a short study that uses one sensing device to control one
output device. Dawn and I were both pleased by how seriously the
students had taken the work, and by their proposals.
The process over the next 48 hours (until class on Thursday) will be to
meet individually with each student for one hour. During that time, we
will digitize any materials each one needs, and I will program the
Interactor file that maps the information from the sensor to the output
device. We will spend the remaining time (hopefully a half hour) trying
out their environment.
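For anyone curious what that mapping amounts to, here is a minimal
sketch in Python (using the mido MIDI library) rather than in Interactor
itself, whose patches are built graphically. The port names and the
choice of controller are assumptions for illustration only:

    import mido

    SENSOR_PORT = "Sensor In"     # hypothetical MIDI input port
    SAMPLER_PORT = "Sampler Out"  # hypothetical MIDI output port

    # Read each message from the sensor and rescale it onto the output
    # device -- the essential job of each student's Interactor file.
    with mido.open_input(SENSOR_PORT) as sensor, \
         mido.open_output(SAMPLER_PORT) as sampler:
        for msg in sensor:
            if msg.type == 'control_change':
                # Pass the sensor's 0-127 value through to the
                # sampler's main volume (continuous controller 7).
                sampler.send(mido.Message('control_change',
                                          control=7, value=msg.value))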
It was a conscious decision on the part of Dawn and myself not to
burden these students with learning anything about programming. We
wanted them to focus
on the relationship between content and the tools they are using to express
that content, and not on the implementation of the tool itself. If
there had been a substantial amount of time to work with the students, I
think we could have given them some of the implementation chores, but
not within a five-day workshop.
In any case, here are a few of the 13 proposals. Note that the first two
were implemented last night, and so there is a bit more detail on their
actual implementation.
-- F. wants to control the sounds of a radio (including some of the noise
found when you turn the dial) with BigEye. We let her know that we didn't
have a way to control real radios, but that we could make digital
recordings of the material she wanted and control those sounds. I told
her of the John Cage piece in which several people adjust the tuning and
volume of radios, as her piece and Cage's seem of a like mind.
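As a sketch of the dial logic we have in mind, here is the kind of
decision her patch will make, written out in Python; the station
positions, file names, and tuning width are all invented for
illustration:

    # Treat a single 0..1 value from BigEye as the position of a radio
    # dial: near a "station" you hear its recording, and in between
    # you hear the recorded static of turning the dial.
    STATIONS = {0.2: "talk.wav", 0.5: "music.wav", 0.8: "news.wav"}
    TUNE_WIDTH = 0.05  # how close the dial must be to lock onto a station

    def dial(position):
        """Return (recording, level) pairs to play for a dial position."""
        nearest = min(STATIONS, key=lambda p: abs(p - position))
        distance = abs(nearest - position)
        if distance < TUNE_WIDTH:
            clarity = 1.0 - distance / TUNE_WIDTH  # 1.0 = perfectly tuned
            return [(STATIONS[nearest], clarity),
                    ("static.wav", 1.0 - clarity)]
        return [("static.wav", 1.0)]  # between stations: noise only

Crossfading the station against the static this way preserves the
in-between noise she likes, without needing to control a real radio.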
-- J. wants to have the movements of her hand across various parts of her
body trigger a combination of intimate sounds (light kisses, the rubbing of
skin, softly shhhhsh-ing someone, etc.). She wanted to attach sensors from
the Alesis D-4 to her hands, and turn the sensitivity way up so that the
lightest contact would trigger a sound. When she tried out the piece, it
was quite effective as she sometimes made slow, languid gestures (with
unpredictable but interesting results) and sometimes pressed hard on her
body -- making a contrast between the "soft" sound and "hard" contact. Dawn
worked with J. on the movement qualities in the limited time that we had.
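The mapping itself is simple. Here is a rough Python equivalent of what
her Interactor file does, with the port name, note numbers, and file
names all assumptions; the D-4 simply sends a note-on whose velocity
reflects how hard the trigger was struck:

    import mido

    # Hypothetical assignment of D-4 trigger notes to J.'s sounds.
    SOUNDS = {36: "kiss.wav", 38: "skin.wav", 40: "shhhh.wav"}

    def play(recording, level):
        # Stand-in for the sampler; a real patch routes this to audio.
        print("play", recording, "at", round(level, 2))

    with mido.open_input("Alesis D-4") as d4:
        for msg in d4:
            if msg.type == 'note_on' and msg.velocity > 0:
                if msg.note in SOUNDS:
                    # With the sensitivity turned way up, even the
                    # lightest contact arrives here; velocity sets the
                    # playback level.
                    play(SOUNDS[msg.note], msg.velocity / 127)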
-- V. will use the MidiDancer to control the recorded sound of hip-hop
records, in a fashion similar to a club DJ's. She wants to be able to
trigger the looped phrases and control their volume with the movements
of an arm and a leg. Perhaps I will finally discover the commercial
application of my
little invention... ;-) I am interested to see if she will attempt to
recontextualize the music with her movement, or just support it with
movement that is appropriate to the club scene. We will see.
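One plausible way to split the two jobs between her limbs, sketched in
Python; the MidiDancer reports each limb's bend as a continuous value,
and the threshold numbers here are pure guesses:

    TRIGGER_ON = 100  # arm bend (0-127) past this starts the next loop
    TRIGGER_OFF = 50  # arm must relax below this before re-arming

    class LoopDJ:
        def __init__(self, loops):
            self.loops = loops
            self.index = 0
            self.armed = True

        def on_arm(self, bend):
            # The arm acts as the trigger finger.
            if self.armed and bend > TRIGGER_ON:
                print("start loop:", self.loops[self.index])
                self.index = (self.index + 1) % len(self.loops)
                self.armed = False
            elif bend < TRIGGER_OFF:
                self.armed = True  # gesture finished; allow another

        def on_leg(self, bend):
            # The leg rides the volume fader.
            print("volume:", round(bend / 127, 2))

The two thresholds give the trigger some hysteresis; without them, a
single slow bend of the arm would retrigger the loop on every sensor
reading.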
-- O. intends to use the Alesis D-4 sensors to trigger sounds. He wants to
attach the sensors to the wall of the theater, so that he can trigger the
sounds by jumping against the wall surface. He wanted to know whether,
if he hit the wall far from a sensor, the sound would be softer than if
he hit the wall right next to the sensor. I told him that the answer was
yes. He then said that the sensors should be spread out by some
distance, so that he could control the volume with his proximity to each
individual sensor. He also wanted to know if he could put one sensor
between two slices of bread, which I also told him would work.
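The reason the answer to his proximity question is yes: the vibration a
trigger picks up falls off with distance along the wall, so the D-4
reports a lower velocity for a far-away impact. A toy model of that
behavior, with the attenuation constant pure invention:

    import math

    ATTENUATION = 0.5  # per-meter decay of the impact energy (assumed)

    def reported_velocity(impact_force, distance_m):
        """MIDI velocity (1-127) the D-4 might report for an impact."""
        felt = impact_force * math.exp(-ATTENUATION * distance_m)
        return max(1, min(127, round(felt * 127)))

    # A full-force hit right at the sensor vs. three meters away:
    print(reported_velocity(1.0, 0.0))  # 127
    print(reported_velocity(1.0, 3.0))  # about 28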
The schedule for Day 3 is a marathon: From 0900 to 2300 we meet with 11
students, are interviewed in an interesting-sounding critical theory
class called "Electronic Civilization", and give a lecture in the "Digital
Dancing" class. It should make for a productive day...
More tomorrow,
Mark
================================================================
Mark Coniglio, Artistic Co-Director | troika@panix.com
Troika Ranch | http://www.art.net/~troika
================================================================