From the Lab: Airborne Beats from Oblong Industries on Vimeo.

With hand gestures recalling those that first reached the mainstream in Minority Report, “Airborne Beats” lets you make music just by gesturing with your hands and fingers in mid-air. You can drag audio samples around and use gestures for control, covering both production and performance. Coming from the labs at Oblong, it’s the latest etude in a long series of interfaces like this (see below). Oblong points out, in turn, that the approach could work with any time-based interface. And because of the nature of the interface, it also makes those tasks collaborative.

Technical details:

If you’re wondering how the app was built, Airborne Beats was programmed in C++ using g-speak and the creative coding library Cinder. As for hardware, we pulled together one screen equipped with an IR sensor, a “commodity” computer (with g-speak installed), and speakers. For the most part, it was designed and developed by a single person. Although Airborne Beats is currently a demo, the users of this application could be composers, DJs, or perhaps even children or educators. And the ability to recognize multiple hands opens up some unique collaborative possibilities (guest DJ, anyone?).
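Oblong hasn’t published source for the demo, and g-speak’s hand-tracking API is proprietary, so here’s only a hypothetical sketch of the core grab-a-sample-and-drag-it interaction, written against plain Cinder (the C++ library named above). The mouse stands in for the tracked hand and a button press stands in for a pinch gesture; only the Cinder calls are real.

```cpp
#include "cinder/app/App.h"
#include "cinder/app/RendererGl.h"
#include "cinder/gl/gl.h"

using namespace ci;
using namespace ci::app;

// Hypothetical stand-in for g-speak hand tracking: the mouse plays the "hand",
// and holding the button plays the "pinch" grab gesture.
class AirborneSketchApp : public App {
  public:
    void setup() override                   { mSamplePos = getWindowCenter(); }
    void mouseMove( MouseEvent e ) override { mHandPos = vec2( e.getPos() ); }
    void mouseDrag( MouseEvent e ) override;
    void mouseDown( MouseEvent e ) override;
    void mouseUp( MouseEvent ) override     { mGrabbing = false; }
    void draw() override;

  private:
    vec2 mHandPos, mSamplePos;
    bool mGrabbing = false;
};

void AirborneSketchApp::mouseDown( MouseEvent e )
{
    // Simple hit test: the "pinch" grabs the sample only when the hand is over it.
    mGrabbing = ( distance( vec2( e.getPos() ), mSamplePos ) < 40.0f );
}

void AirborneSketchApp::mouseDrag( MouseEvent e )
{
    mHandPos = vec2( e.getPos() );
    if( mGrabbing )
        mSamplePos = mHandPos;   // while "pinched", the sample follows the hand
}

void AirborneSketchApp::draw()
{
    gl::clear( Color::black() );
    gl::color( mGrabbing ? Color( 1.0f, 0.6f, 0.0f ) : Color( 0.3f, 0.6f, 1.0f ) );
    gl::drawSolidCircle( mSamplePos, 40.0f );   // the draggable "sample"
    gl::color( Color::white() );
    gl::drawStrokedCircle( mHandPos, 8.0f );    // cursor for the tracked hand
}

CINDER_APP( AirborneSketchApp, RendererGl )
```

Swap the mouse events for real tracker coordinates and the same hit-test-then-follow logic would carry over unchanged; that’s roughly the “grabbing audio from a pool” interaction you see in the video.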

Now, it’s clear a lot of work and talent went into the app. But I can’t help but notice that the results are, frankly, a bit awkward. (Of course, that’s why testing and experimentation are so valuable: there’s no substitute for trying things out.) There’s some really clever stuff in there, including the overlay of envelopes atop waveforms and the way the interactions work, particularly grabbing audio from a pool. But while it shows potential, it’s also hard to see many advantages over conventional input for the same interface.

In fact, it seems what this otherwise-impressive demo needs is to pair up not with a GarageBand-style timeline, but with the likes of AudioGL, the 3D interface we saw earlier today. That on-screen interface must in turn deal with the fact that the mouse was never intended as a 3D input device. (To be fair, that hasn’t stopped gamers from making lightning-quick work of using it, but it still seems worth revisiting.)

More:
http://oblong.com/what-we-do/labs

And here’s some of the other work they’re doing. As you can see, some of these experiments, built around the gestural interface, suggest more effective possibilities.

Oblong Labs from Oblong Industries on Vimeo.

In fact, this all segues nicely to an insightful post by Chris Randall today at Analog Industries.

Chris makes an impassioned argument for taking a fresh approach to designing interfaces for music software, rather than just copying existing music gear. Viewed in this context, that becomes a necessity: you can’t devise a novel musical operation or physical interaction and expect it to match up well with a copied-and-pasted UI. As Chris puts it, with regard to AudioGL:

What I really want to talk about is how this shoehorns into my latest flight of fancy. What I like about this app is that Jonathan has, for the most part, ignored the standard conventions that the music tech industry relies on. (COMMANDMENT ONE: THOU SHALT MAKE ALL COMPRESSORS LOOK LIKE A BEAT-UP 1176! ETC.) Instead, he’s just made it look cool and logical.

Now, perhaps Hollywood’s Tom Cruise should stick to the hand gestures. But approaching the problem of designing an interface afresh ought to mean just that: a fresh start.

  • PaulDavisTheFirst

    i visited oblong in barcelona about 4 years ago. to be honest, the stuff i saw then was much cooler and much deeper than this beats stuff.

    once you understand the basics of their technology, which is tremendously and seriously awesome, you can also understand how totally different it is from anything touch- or direct-sensor-based. sadly, in my estimation that means that it doesn’t have a lot of applicability to music/composition/creation/control, other than being a lot more expensive, and a lot more difficult to get right. it’s a bit like optical computing and supersonic flight – incredibly cool, but it turns out to be not that useful for the things that non-defense-contractors tend to do :)

    • Motion

      Probably been mentioned on CDM before, but maybe the upcoming Leap Motion can crack it (https://leapmotion.com), or at least get close. AudioGL should be a lot of fun if it gets support.

    • DG

      Leap Motion, though it’s very close-range, has a lot of promise, particularly for being able to scale down the cost/size of these types of systems.

  • http://twitter.com/kallepa Kalle Paulsson

    It’s interesting to see these concepts pop up now and then. But they never address the basic problem with using gestural interfaces like these for longer periods of time – “gorilla arm”.

    Try it now. Wave your arms in front of you like you were composing some fat beats. How do you feel after a couple of minutes? Fifteen minutes? Half an hour?

    http://en.wikipedia.org/wiki/Gesture_recognition#.22Gorilla_arm.22

    • DG

      Try it now. Wave your arms in front of a harp, drum kit, karate opponent. How do you feel after a couple of minutes? Like you’re actually using your body.

      http://en.wikipedia.org/wiki/Laziness

    • PaulDavisTheFirst

      it’s very different to move and then strike/pluck/stroke another object than it is to move your hands through the air with no touch-sense target. Very, very different.

    • DG

      Good argument about the lack of touch-sense targets. However, I personally feel that if the expressive capabilities of hands moving through the air in 3D space aren’t used as some kind of input source – if all the hands are used for is intermittent navigational gestures – then we are missing out on a big creative potential. From an NUI standpoint, we can’t expect physical buttons to always be in the world around us where we want them to be. The jump to virtual buttons (lacking touch-sense targets) seems to be the way of the future.

      I actually visited the Barcelona office recently and tried this demo out. Though it is difficult to track the gestures with the accuracy of, say, a mouse, extended use of the application is less tiring than you might think. Inevitably you end up taking breaks to think about your next compositional choice, during which your hands can drop down to your sides. In the meantime, the music continues, so it doesn’t interrupt the experience. Also, there are some nice bounding cues (not shown in this video) that show up when the user is too close / too far away / too far right / too far left / etc. These help guide you to the optimum place on the floor, so that you don’t become fatigued. It also helps establish spatial memory, so that you know approximately how far you need to move your hand to perform the interactions.

    • PaulDavisTheFirst

      “From an NUI standpoint, we can’t expect physical buttons to always be in the world around us where we want them to be. The jump to virtual buttons (lacking touch-sense targets) seems to be the way of the future.”

      is there something wrong with building actual physical control surfaces, aka “instruments”?

    • DG

      No, I’m not trying to advocate that. In fact, I think physical control surfaces are actually ideal where cost, wear and tear, scalability, multi-user scenarios, etc., are less of an issue.