Wednesday, January 12, 2011

Time flies when you have fun....

I was just reminded that it's almost time to restart this project.... I wonder if any of the collaborators read this? Anyhow, over the next few weeks I'll try to build an array of some really strong electromagnets and design a PWM controller to drive them. Then let's see and hear what happens.
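A minimal sketch of the PWM idea in Python rather than on the actual controller (the period granularity and the function names are my assumptions): the duty cycle sets what fraction of each period the coil is energised, which in turn sets the average field strength of the magnet.

```python
def pwm_samples(duty, period=100):
    """One PWM period as a list of 0/1 coil-drive levels.

    duty is the fraction of the period the electromagnet is energised
    (0.0 = off, 1.0 = fully on); period is the number of time slots.
    """
    on = round(duty * period)
    return [1] * on + [0] * (period - on)

def average_drive(samples):
    """Average drive level over one period -- a proxy for mean coil current."""
    return sum(samples) / len(samples)

# A 25% duty cycle energises the coil for a quarter of each period.
cycle = pwm_samples(0.25)
```

On real hardware this loop would of course live in a timer peripheral; the point is just that the controller needs to expose `duty` for modulation.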

Wednesday, April 16, 2008

Getting funding

I mentioned the WHATYOUMAYCALLITRON in the departmental meeting last week, after our new Dean had expressed concerns about how we can encourage students to pursue science and engineering. My answer was: let's make art, by means of technology. I think he was actually quite positive; everybody knows that I'm raving mad, and it normally works. We need to write up a more formal doc about this.

Sunday, March 30, 2008


We need some artistic concept behind this as well; otherwise it'll be meaningless. As I've just adjusted all my steampunk clocks, perhaps TIME could be a topic? What is time, by the way? Music, as a time-based medium, is quite interesting in this respect. How fast can you hear? Is slow hearing more enjoyable than fast hearing? How long does a sound have to be to be non-musical? Is the universe singing to us but we're listening at the wrong speed?

Friday, March 21, 2008

more on interactivity

I think minimum latency is essential. As mentioned in a previous posting, camera-based interaction is normally slow. I think we can and should take on board all the ideas from Direct Manipulation, but for a tangible interface. For the last week, I've been thinking about adding to the MOD-wheel functions in my synth rig to get more control, using an Arduino and various sensors... Simple things like pots and LDRs give you a pretty direct feel for the parameters you manipulate.
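As a sketch of the mapping step involved (the 10-bit range is the usual Arduino ADC; the function name is mine): a pot or LDR reading has to be squeezed into the 7-bit range that a MIDI controller message such as the MOD wheel (CC 1) actually carries.

```python
def adc_to_cc(raw, adc_max=1023):
    """Map a 10-bit ADC reading (0-1023, as from an Arduino analog pin)
    to a 7-bit MIDI controller value (0-127)."""
    raw = max(0, min(raw, adc_max))   # clamp noisy or out-of-range readings
    return (raw * 127) // adc_max

# e.g. a pot at roughly half travel:
value = adc_to_cc(512)
```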

Wednesday, March 12, 2008


nora's post in a comment to an earlier message reminded me of a performance i saw at nime in paris in 2006 of the reactable (there are lots of youtube links to this)

these tables have a certain kind of interactivity which i think it would be nice to contrast with - they lend themselves to a slow-paced kind of gesture (the frame rates involved in camera tracking are one constraint on this) and to activities which i would characterise as 'supervisory' (there's lots of algorithmic stuff going on, which the gestures and their effects on mediating interface widgets pass parameter values to) - this makes for pieces which are 'builders' - you start off small and get big (this helps the audience work out relationships too... with the video nora found it was a little hard to work out what the interactional effects were, as it started in the midst of things...) - and you do not often do anything sudden and dramatic

it'd be nice to break that mould

synthesis methods

in one of his comments to the ferrofluids message below, the man known as cyberviking remarks that video images of such a thing could be used for wavetable synthesis - this is how i make sound from my little dreamachine - frames from a webcam are scanned, with the grey value of a given pixel giving you a wavetable value - i have experimented with a number of methods along these lines - i prefer changes in the image yielding spectral changes rather than something as gross as pitch, as that tends to end up pretty cheesy (another video theremin, anyone?)
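a minimal sketch of the pixel-scanning step in python (the frame layout and the function names are my assumptions, not how the dreamachine patch actually does it): one row of 8-bit grey values becomes a single-cycle wavetable, which is then looped to produce samples - reading a fresh frame each cycle is what makes image changes audible as spectral changes.

```python
def row_to_wavetable(frame, row):
    """take one row of 8-bit grey values (0-255) from a video frame
    and centre/scale it into a [-1, 1] wavetable."""
    return [(g - 127.5) / 127.5 for g in frame[row]]

def render(wavetable, n_samples):
    """loop the wavetable to produce n_samples of output - in practice
    you would swap in a new wavetable whenever a new frame arrives."""
    return [wavetable[i % len(wavetable)] for i in range(n_samples)]

frame = [[0, 64, 127, 191, 255]]   # a tiny fake one-row frame
table = row_to_wavetable(frame, 0)
```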

i thought i'd start off a thread here - anybody have any further interesting ideas for using an actual physical volume or surface as a means for controlling/shaping sound synthesis?

Sunday, March 9, 2008

100 fingers (or toes)

Just imagine: with the kind of instrument we're talking about here, it could be played by (for example) 10 people simultaneously, direct-manipulation style. As part of the system is electronic, we could also allow on-line collaboration. One of our students in 2001, Michelle Dillon, did a Mawashi Machine...