Look around the room you’re in. Drum your fingers against some of the objects around you. Now imagine that you could turn those touches into any imaginable sound – and all you’d need to play them is a single contact mic. And we’re not talking just simplistic sounds – think expressive, responsive transformation of the world around you, all with just that one mic, thanks to clever gestural recognition.

Bruno Zamborlin has made that idea a reality, with hold-onto-your-chair results. It’s not available yet for public consumption, but it’s coming.

Bruno explains to CDM:

Mogees is a novel way of transforming any surface into a musical instrument.

By putting a (very cheap) contact microphone on a surface, the software can recognise different types of touch and associate them with different synthesisers.

Users can train the software with their own ‘gestures’, using both bare hands and objects. In the video demo we put the microphone on different surfaces such as kitchen tables and balloons.

The sound synthesis is based on two different techniques:

1 – physical modeling, which consists of generating sound by simulating physical laws. Different materials can be simulated, such as membranes, strings, tubes and plates

2 – mosaicing, which works as follows: first, users load a folder of sounds; then the audio coming from the microphone is analysed and the software continuously finds and plays the closest-matching segment from that folder (both techniques are sketched below)
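
Neither patch is public yet, so consider these two minimal Python sketches our illustrations, not Bruno's code. First, physical modeling: the classic Karplus-Strong plucked string, the textbook case of generating sound by simulating a physical system (the parameter values here are arbitrary):

import numpy as np

def pluck(freq=220.0, duration=1.0, sr=44100, damping=0.996):
    # A delay line models the string; its length sets the pitch.
    period = int(sr / freq)
    delay = np.random.uniform(-1.0, 1.0, period)  # noise burst stands in for the strike
    out = np.empty(int(sr * duration))
    for i in range(len(out)):
        out[i] = delay[i % period]
        # Averaging adjacent samples simulates energy loss along the string.
        delay[i % period] = damping * 0.5 * (delay[i % period] + delay[(i + 1) % period])
    return out

note = pluck(freq=330.0)  # one plucked-string tone at 330 Hz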
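
And second, mosaicing as Bruno describes it: pre-analyse every segment in the user's sound folder, then continuously match the incoming mic signal to its nearest neighbour. The two-number descriptor below (loudness plus spectral centroid) is a toy stand-in for whatever analysis Mogees actually performs:

import numpy as np

def describe(frame, sr=44100):
    # Tiny feature vector: loudness (RMS) and spectral centroid.
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    rms = np.sqrt(np.mean(frame ** 2))
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([rms, centroid])

def build_corpus(segments):
    # Pre-analyse every segment from the user's sound folder.
    return [(describe(seg), seg) for seg in segments]

def closest_segment(corpus, mic_frame):
    # Play whichever corpus segment best matches the live input.
    target = describe(mic_frame)
    return min(corpus, key=lambda entry: np.linalg.norm(entry[0] - target))[1]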

Mogees has not been released yet. It could be published as a Max for Live patch in a few months.

Yes, we’ll be watching for future versions and publication, with bated breath and eager hands.

http://www.brunozamborlin.com/mogees

Update: Readers point to similar earlier work; obviously, contact mics have long been readily available. I’m not always concerned with whether something is new or not – old and cool can be cool. But what does appear to be new here is the additional gestural analysis that works more accurately with location. That takes an existing technique and refines its musicality. -PK

  • http://blog.califaudio.com tj milian

    It looks great. I’d love to see this as an iOS app. 

  • http://www.thewhyproject.com The Why Project

    In 2000, I helped do almost exactly this at the Nouvelles Scenes festival in Dijon, France. We used a contact mic on a table running through a patch programmed on the Nord MicroModular, then amplified over a PA installation.

    It did not have any processing to recognize the touch place/distance, but the patch did use amplitude and frequency content to drive other parameters… so all in all a similar concept (a rough sketch of that kind of mapping appears at the end of this comment).
    The hand tapping on the table was also projected on a large screen at the other end of the (industrial) hall.

    That, together with the huge amount of feedback from the room and the fact that we had assigned another 16 patch parameters to a MIDI controller, made for quite a spectacular result and a vast number of different soundscapes.
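
    For the curious, a bare-bones Python sketch of that kind of amplitude/frequency-content mapping (not the original Nord MicroModular patch – these functions are invented for illustration):

    import numpy as np

    def envelope(frame, prev=0.0, attack=0.5, release=0.99):
        # One-pole envelope follower: fast rise, slow fall.
        level = np.max(np.abs(frame))
        coeff = attack if level > prev else release
        return coeff * prev + (1.0 - coeff) * level

    def zero_crossing_rate(frame):
        # Crude stand-in for "frequency content": brighter or noisier
        # input crosses zero more often per sample.
        return np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0

    # Either value can then be scaled onto any synth parameter you like,
    # e.g. filter cutoff from the zero-crossing rate, reverb mix from the envelope.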

    • peterkirn

      Ah, excellent. Yeah, I think what's novel here is finding the location. But that said, did you publish at all on what you did? (I'd love to see them publish here…)

    • http://www.thewhyproject.com The Why Project

      Not much published about it, and I'd have to ask around to see if there's any video footage. http://www.theatreonline.com/guide/detail_piece.a

      That's the program guide of a festival where we did this the second time, in Paris. "Approximations" is the name of the performance. Bertrand is a friend of mine, it was his project, I did the patch on the Nord MM, and tweaked the sound during the performance. It was his hand tapping though :D

      Not sure how good your French is, but if you want, I'll message you a translation one of these days. In brief, the program guide transcribes the thought behind the performance, the relationship with the public, etc.

    • ian

      sounds great! can you share the patch?

      thanks!

  • boneless

    wavedrum mini clip without the wavedrum!

    I'm pretty sure this is exactly how that works, and it uses a special physical modeling algorithm developed by Yamaha (Sondius-XG). So does Mogees use MIDI values? The really big thing the Wavedrum has going for it is the much wider dynamic range it can capture vs. the 128 values of MIDI, in addition to combining live audio.

    super cool!

  • Kevin Hackett

    Depeche Mode was doing the exact same thing back in the '80s.

    • peterkirn

      Not with gesture analysis, they weren't, I don't think. ;)

    • Brent Williams

      No, but John Cage and Merce Cunningham (with assistance from Robert Moog) were doing something in the general ballpark… http://www.9evenings.org/variations_vii.php
      Not quite the same, but I thought you might find it interesting.
      Cheers,
      Brent Williams
      brentwilliams@ozemail.com.au

  • http://ableton_forum.com starving student

    Walking is nothing special – even babies can do it… but adults can do it much better. Give this guy the credit he deserves.

  • derekmorton

    Fun video and I like the results he gets, but using the word gesture really misled me. Gesture-based analysis implies sensing movement (even silent gestures), and his piezo can really only sense vibration data, unless I am missing something. I guess percussive movement can qualify as a type of gesture, but loads of folks are doing really nifty stuff mapping Kinect data to sound parameters without the need to generate a gesture with sound.

    • bruno

      "pattern" recognition would probably fit better

  • Aaron

    I'm excited, as an avid table/pants/anything-I-can-find drummer :)

  • vinayk

    I'm impressed!

  • scott

    more info on the software here: http://imtr.ircam.fr/imtr/Gesture_Follower

  • http://debsinha.com deb

    that is just sick. frankly, this is what would make me get max for live.
    nothing's new, and I am sure someone has done something somewhere before, but this is still fantastic. i am happy to call these gestures—because, um, they are. i need the tactile—i'm not a dancer and i don't have the control over my body that would allow me to really interact with a kinect in what i feel is a meaningful way. as a percussionist who has studied a lot of different techniques and traditions I can't wait to hear more!

  • tim

    reminds me of stretta's piezo/Max/MSP experiments: http://www.youtube.com/watch?v=CSopUi9-pUg

    • david

      Awesome. It seems essentially the same idea. Thanks for the pointer!

    • john

      it is quite different for two reasons:
      - in stretta's work there is no recognition: each instrument is standalone and the user decides which one to use;
      - the synthesis is sampling or FM; I don't hear any physical modelling in it

    • http://www.papernoise.net papernoise

      was about to say the same! :) I think stretta's works a bit differently – still, both seem like great ways to get a certain sound out of anything.

  • wetterberg

    Guys, let's not get too excited over this – Yes, Cory Doctorow got psyched, but we're better than that, aren't we? Shouldn't we have a cynical outlook on videos like this, regardless of their IRCAM'edness?

    Yes, it listens to the way the surface is being played; Max has been doing that since bonk~ et al. and MSP came along. And we can already detect things like the "noisiness" of a sound, which would work for switching between "impact" and "scrape" (a rough version of that test is sketched at the end of this comment).

    Also, it apparently looks into… the FUTURE. Check 1:54 and hear the sounds change *before* they're triggered. I hate to be "that guy", but it looks like there is some switching of sounds there, perhaps by listening for gaps in the playing?
    Come on, people. This is MSP in a nutshell.

    Now, the *hardware* – that contact mic looks reaaally really cool. Mine all sound crummy. 
    Sorry for the cynicism. :-/
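
    A rough Python version of that noisiness test, for reference: spectral flatness (geometric over arithmetic mean of the magnitude spectrum) sits near 1 for sustained, noisy scrapes and lower for pitched or transient hits. The 0.4 threshold is a made-up starting point, not a tested value:

    import numpy as np

    def flatness(frame):
        spectrum = np.abs(np.fft.rfft(frame)) + 1e-12  # avoid log(0)
        return np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum)

    def classify(frame):
        return "scrape" if flatness(frame) > 0.4 else "impact"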

  • oufant

    looks interesting… worried about latency issues.

  • joshg

    Is this actually doing any location mapping at all? I think the word "gesture" is (accidentally) implying position-tracking when this is really just recognizing the difference between taps and scratches.


  • http://www.papanoongaku.com gunboat diplomacy

    sorry, but I think this is totally misleading. No gesture analysis is being done; you don't need to flap your wrists and fingers all around to get the sounds he makes. It's all just Max analyzing vibrations, sorting them out based on some criteria (frequency or amplitude or something) and then mapping them to a MIDI note or synthesizing a sound. Stretta did this on Cycling '74's website and it was cool and lacked all the flashy (and overblown) hand movements.

    not only that, but releasing it for M4L is even worse. Just make it a Max patch. Why force people to spend the extra dough on M4L or on Ableton? Or we can all download Stretta's patch FOR FREE and get the same result.

  • Jengel

    I think a lot of you guys are missing the broader point here. As mentioned above, when you look into the details, the piezo example is just one case of multidimensional data, where gestures are learned using machine learning and then estimated probabilistically in REALTIME (a toy version is sketched at the end of this comment). This is a very powerful concept, and could be used with a great many more live inputs than just contact mics.

    As far as the sound mosaicing goes, I'm reminded of experiments in Jack Gallant's lab at UC Berkeley, where they use fMRI as the multidimensional input for their algorithms and do a "mosaic" of YouTube videos, not only producing surprisingly accurate reconstructions of what people are actually seeing, measured from their brain blood flow, but also eerily beautiful images in the way they don't perfectly recreate it.
    http://gallantlab.org/

    I think this type of eerie beauty would be great to explore in a sound context.
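
    A toy Python version of that learn-then-estimate-probabilistically-in-realtime idea – a single Gaussian per trained gesture class, a deliberately crude stand-in for real sequence models, but it shows the shape of it: accumulate a log-likelihood per class and report the best guess after every incoming feature frame.

    import numpy as np

    class RealtimeClassifier:
        def __init__(self, training):
            # training: {name: 2-D array of feature vectors, one row per frame}
            self.models = {name: (vecs.mean(axis=0), vecs.std(axis=0) + 1e-6)
                           for name, vecs in training.items()}
            self.scores = {name: 0.0 for name in self.models}

        def update(self, frame):
            # Accumulate each class's Gaussian log-likelihood for this frame,
            # then return the currently most probable gesture.
            for name, (mu, sigma) in self.models.items():
                self.scores[name] += -0.5 * np.sum(((frame - mu) / sigma) ** 2)
            return max(self.scores, key=self.scores.get)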

  • http://www.cooperam.com lelemarea

    perfect for autism and music therapy

    • http://www.magneticpitch.com John

      +1
      but "perfect" would include self-contained synth / amplifier — housed in a doll(?) or other tactile comfort/stimulus

  • Brent Williams

    <http://www.9evenings.org/variations_vii.php>
    John Cage, Merce Cunningham (with assistance from Robert Moog I believe)
    Similar ideas.
    Check it out.
    The important thing to note is that this was so horrendously difficult to realise back then, and I think this might be why Cage and Cunningham did not revisit it in a hurry. Nowadays, however, the possibilities for collaboration between choreographers and sound artists are mind-boggling.
    Cheers,
    Brent Williams

  • Brent Williams

    <http://www.9evenings.org/variations_vii.php>
    Sorry, I made a mistake with the URL before on the Cage / Cunningham thing. The above one should work. If not, copy it and paste it into your browser. It's worth looking into the Cage / Cunningham collaboration for those really interested in sound from gesture / movement…

  • apalomba

    I noticed he is using some kind of putty to hold the mic in place.
    Does anyone know what that is?

  • http://www.tradefor.net allez

    this is ingenious

  • http://www.ebenestudio.com lematt

    i'm quite interested in the brand of the contact microphone used in this video.

  • http://www.didierlahely.com 10dier

    @BrunoZamborlin and team

    Hello there, I was thrilled to discover this “tool” to experiment with some “glitch music” or whatever else you may think to use it with/in/on/under etc.! : ))

    In my dreams, I can see a stethoscope connected to an iPhone/iPad application… : )

    Hurry please!!! : ))

  • david

    I believe I have that contact mic. It's a Schaller. On average, I found the difference from a (then) $500 Schertler and the like to be negligible. The only crummy part is the cable, which is pretty stiff. Bass response in particular is very good.

    I too felt there was some use of particularly favorable samples to suspend a little disbelief here and there (the typewriter scene?). Nothing wrong with that, though. It's not like he's selling anything, and unlike the article lead-in, the creator himself puts 'gestures' in quotation marks, which is more on the mark. It doesn't recognize gestures, but you can use gestures to make unique sounds which are recognized – if the sounds the gestures make are unique enough.

  • Polite_Society

    I was thinking the same thing. Some of those sounds were triggering before contact happened.

    Not to mention in that first scene, why wasn't the vibration from the passing traffic causing any kind of feedback? Potentially good algorithms, but yeah…

    I mean… it's nothing that new, exactly. My last-generation Korg Wavedrum essentially does the same thing, though I am really impressed with the samples. They sound real nice.

  • Yasha

    Looking at the documentation for Gesture Follower at http://ftm.ircam.fr/index.php/Gesture_Follower (the work of Bruno Zamborlin's collaborator Norbert Schnell et al.), "gesture" is being loosely defined to include "any multi-dimensional curve." The same set of modules is used to analyze spatial gestures, such as with a Wiimote or a mouse, and "gestures" involving changes in audio parameters (illustrated at the end of this comment).

    It's also interesting to look at CataRT (at http://imtr.ircam.fr/imtr/CataRT) which I would guess is the basis for the "sound mosaicing" synthesis that Mogees uses.
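
    To illustrate just how general "any multi-dimensional curve" is: a stored template and a live input are both just sequences of feature vectors, so one comparison works whether the vectors come from a Wiimote, a mouse, or audio analysis frames. Here's a plain dynamic-time-warping sketch in Python (generic DTW, not IRCAM's actual Gesture Follower, which aligns incrementally in real time):

    import numpy as np

    def dtw_distance(a, b):
        # Distance between two multi-dimensional curves (arrays of vectors).
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
        return cost[n, m]

    def recognise(live_curve, templates):
        # templates: {name: recorded curve}; return the closest template's name.
        return min(templates, key=lambda name: dtw_distance(live_curve, templates[name]))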

  • http://www.papanoongaku.com gunboat diplomacy

    that's interesting, but how is it going to work with a piezo mic? There is mention of an x-y axis, and you can take frequency and amplitude from the mic, but I still don't know how it's supposed to know that you're drawing a triangle on a coffee table vs. a squiggly line on a tree trunk. I understand that being able to measure change in the signal coming from the mic is important, and all kinds of valuable metrics can be gleaned from it from a musical standpoint (aftertouch, velocity, tempo, etc.). I suppose I'm just being a party pooper.

    but I suppose my problem is with the headline: "With just one mic (and a very advanced Max/MSP patch), any surface becomes a gestural instrument."

    and to go back to Peter's update: how does this patch know anything about location? Or what surface it's being used on? Nothing in this post or the original has any info on that. To me it's just a matter of how the acoustic properties of whatever surface you're using shape the frequency and amplitude the mic picks up.