ETHNO TEKH | live at Enig’matik Vibrations [Melb,Aus] 2012 from Ethno Tekh on Vimeo.

You’ve seen impressive tech demos and promo videos with Kinect, perhaps. But here’s a real, live performance with a crowd. (Best moment: an audience member walking in front of the camera. Breaking the flow like that oddly makes the show feel more real.)

Ethno Tekh is a dubsteppy AV performance with a flow of beats and images. Now, we’ve heard all the criticisms of the Kinect as a musical controller – its fairly high latency and broad gestures mean it can’t quite compete with acoustic instruments or more sensitive physical controllers. But in this case, it seems the performance techniques evolved around the Kinect in a way that suits it perfectly. If you are going to make big, sweeping changes to filters, modulation, and overall musical gestures, why not do it like this? Somehow, the physical expression seems to match the music better than an exaggerated twist of a knob would.

Just be prepared to get physical and bare your arms. Oh — and bonus points if you can solo with a beer bottle, as this guy can.

Full details on how they’re working are below. It really feels like an engaging, human performance – and you know they’re confident in it when they post the whole video (and you can actually enjoy watching it without edits):

Ethno Tekh is a collaboration between Brad Hammond and Chris Vik, in a motion-controlled A/V act. Our performances are totally live, using the Kinect, Ableton Live, and a number of custom tools built in Unity3D and Max/MSP/M4L.

The audio is built by using the Kinect to play and loop different synths. Along with the live instruments, there are backing drum loops triggered by the footswitches ([Behringer] FCB1010) to provide extra layering to the tracks that can’t be controlled live with the Kinect alone. Scripted sounds are kept to an absolute minimum, with the flow of the pieces and the layering completely controlled by Chris.
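This isn’t Ethno Tekh’s actual patch, but a minimal Python sketch of the two ideas above under assumed details: a Kinect joint position scaled to a synth parameter (as a Max/MSP or M4L patch might do before the value reaches Ableton), and a footswitch toggling a backing loop. The function names, parameter ranges, and the exponential mapping are all illustrative choices, not taken from the performance.

```python
# Hypothetical sketch, not Ethno Tekh's code: scaling Kinect joint
# data into synth parameters, plus a footswitch-style loop toggle.

def hand_to_cutoff(hand_y, lo_hz=200.0, hi_hz=8000.0):
    """Map a normalized hand height in [0, 1] to a filter cutoff.

    The mapping is exponential, so equal arm movements correspond to
    roughly equal musical intervals rather than equal Hz steps.
    """
    hand_y = min(max(hand_y, 0.0), 1.0)  # clamp noisy skeleton data
    return lo_hz * (hi_hz / lo_hz) ** hand_y

class LoopLayer:
    """Toggle a backing drum loop on/off per press, the way an
    FCB1010 footswitch might via a MIDI message."""

    def __init__(self):
        self.playing = False

    def press(self):
        self.playing = not self.playing
        return self.playing
```

With the bottom of the arm’s range at 200 Hz and the top at 8 kHz, the midpoint lands near 1.26 kHz rather than the linear 4.1 kHz, which tends to feel more even to the ear.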

The motion-control data, as well as audio analysis, feeds into the visual system, which renders audio-reactive abstract visuals in real time. The visuals consist mostly of generative abstract geometry, glitchy shaders, and procedural animation produced in Unity. The audio (backing, looped, and live content) can be seen on screen, representing FFT data analyzed on the audio computer and sent via OSC to the visual computer.
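To make the audio-to-visuals pipeline concrete, here is a hedged Python sketch of the two stages described: analyzing a frame of audio into a few magnitude bands (a crude stand-in for their FFT analysis), and packing those values into an OSC message for the visual computer. The band count, address `/fft`, and the naive DFT are illustrative assumptions; real systems would use a fast FFT and an OSC library.

```python
import cmath
import math
import struct

def band_magnitudes(samples, n_bands=4):
    """Crude DFT magnitude spectrum of one audio frame, grouped into
    bands. A stand-in for per-frame FFT analysis; O(n^2), demo only."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s) / n)
    per = max(1, len(mags) // n_bands)
    return [sum(mags[i * per:(i + 1) * per]) for i in range(n_bands)]

def osc_message(address, floats):
    """Encode a minimal OSC message (address plus float32 args), as it
    might travel over UDP from the audio computer to the visuals."""
    def pad(b):
        # OSC strings are null-terminated and padded to 4-byte bounds
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode()) + pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian float32
    return msg
```

Feeding a pure sine into `band_magnitudes` lights up only the band containing its frequency, which is exactly the kind of signal a shader can map to brightness or displacement on the visual side.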

There is NO post on this video except editing between the cameras and between the audio captured by the cameras and direct feed. This is literally the performance from start to finish. Keep up to date with us on

The artists are based in Australia. And yes, they’re with Enig’matik Records – see our feature story on this label with an exclusive CDM mix:

Frag’mnts – An Enig’matik Records CDM Sampler, Ethno-step Diversity to Hear [Free Listening]

Keep up the good work; we look forward to seeing more.

  • audiohufter

There will come a day when turning knobs will be an obsolete action. I will miss them. Interfaces are getting tinier and will dissolve into the body. I guess I am old-fashioned, but I am afraid this will feel a bit awkward sometimes. (Excuse my English.)

  • drumcapella

After having a short look, I don’t think I want to know what happens with the beer bottle.

  • OutlandSound

Interesting how this gets no real response from the readers of CDM when, in my opinion, it’s actually a very good working model of the physical control we’ve all been talking about for years, decades even. It looks like it works great, the music is approachable and entertaining, and these guys pulled off what looks like a small but great live show that made people feel involved. I’ll remember this as the moment where I finally saw a potential for mass appeal. These guys need to start hitting the fest circuit; they have more to offer than the usual toasting, button pressing, screen gazing, and headbanging.

    • foljs

      “””these guys pulled off what looks like a small but great live show that made people feel involved. “””

      Really? You got all that from the guy dancing badly and the tired hookup of motion controllers to MIDI / OSC data?

  • Robert Halvarsson

This is actually quite great. Looks like they pulled this one off in a really captivating manner.

  • Erki Kannus

I like the technical part of it. The sound sampling is great. But visually the “dancer” is really bad. I cannot watch him – it makes me sick.

    • foljs

      That. It’s “white people dancing” at its worst…

    • Chris Vik

      Thanks for comments! Sorry about the stomach, but at least you enjoyed the technical side. Just to clarify, nowhere in this article does it claim that I’m a “dancer” or that I’m dancing, nor would I personally do so myself.

      I’m 100% geek and musician – so if you’re watching this expecting someone to be gracefully prancing about on a stage pretending to control music, then this isn’t the video. This is live synthesis; I control ALL of the parameters of each instrument that I use (there are 12 different instruments throughout the video). For example, with the granular scratching vocal instrument, my hands are controlling: grain size, grain position, high/low-pass filter, lfo speed, lfo amount, reverb and pitch (search “granular scratching” on YouTube). I then loop the instruments that I play to build the music.

My movements are simply consequential to the input system I’ve created to use with the Kinect. So again, I’m sorry my movements weren’t graceful (indeed to the point of nausea); however, to my credit, I haven’t seen anyone control 7 parameters simultaneously whilst looking pretty 😉

    • misho

Well said! I personally very much enjoyed the performance and music. Well done, and looking forward to seeing more in the future.

  • JAKE

    AMAZING PERFORMANCE GUYS! anyone who is paying out the dancing is COMPLETELY MISSING THE POINT!!!