This week, at Germany’s re:publica conference – an event linking offline and online worlds – I addressed the question of how musical inventions can help predict the way we use tools. I started tens of thousands of years back, with the earliest object generally identified as a musical instrument. From there, I looked at how the requirements of musical interfaces – in timing and usability – can inform all kinds of design problems.

And I also suggested that musicians don’t lag in innovation as much as people might expect.

I considered whether I wanted to post this as a video, since it would be more structured if I wrote it up as an article. But it occurred to me that some people might like to hear me talk off the cuff, “ums” and all, and that those who did could provide some feedback. I really never give the same talk twice; I’m constantly revising my thoughts, partly because I’m challenged by feedback. (Yes, while blogging may seem like a solo monologue, in my experience it’s more like a feedback loop, not an echo chamber. Otherwise, I wouldn’t keep doing it.)

Full description:

From HAL to Wiimotes and Kinect, musicians have predicted the future of machine/human interaction. Because music connects with time, body, and emotion in a unique way, musicians test the limits of technology. Now it’s time to work out what comes next.

What’s going on here – how did musicians manage to invent major digital interaction tech before anyone else? Before the iPad, the first commercial multi-touch product was built for musicians and DJs. Before the Wii remote, musicians built gestural controllers, dating back to the early part of the 20th century. Before the moon landing, Max Mathews’ team of researchers taught computers to make music and sing, inspired HAL in 2001: A Space Odyssey, and may even have built the precursor to object-oriented programming. Music’s demands – to be expressive, to work in real time, and to allow playing with others – can test the limitations of technology in a way people feel deeply, and help us get beyond those limitations. Music technologist Peter Kirn will explore the history of these connections, show how those without any background in music can learn from this field, and examine how musicians may be at the forefront again, as they push the boundaries of 3D printing, data mining, online interaction, embedded hardware, and even futuristic, cyborg-like wearable technology. Even if you can’t hold a tune, you may get a sense of how to get ahead of those trends – before HAL gets there first.


  • christopher jette

    Your slide has Max Mathews (1957 – ), implying he is still around. Sadly, it should read (1957-2011).

    • Peter Kirn

      Sorry, those are the dates of the life of MUSIC (and derivatives, including Csound). And it is very much still alive.

  • Swen B

    Another very good lecture about interaction at its (current) limits. Thanks for that. You developed some interesting new aspects dealing with the relationship between imagination, gesture, and (audio) control since I last heard you speak two months ago in Duesseldorf. Well done. I’m curious about your next steps toward a deeper understanding of how musical performance will force interaction designers to close the gap between the machines and us – I mean … not physically, but in understanding.

  • ioflow

    a fascinating talk, to be sure. and, i noticed that the SketchSynth clip @ 18 minutes features one of my songs, “mnml autmn.” CC licensing in action!

  • HuskyFluxPlus

    THX Pete!

  • BBischof

    Thank you for the talk :)

  • Benjamin Carey

    Nice job Peter, thanks for posting…

  • Dave O Mahony

    Fascinating stuff Peter, thank you for sharing!
    I am going to have to reference you in my own research :)

    Also what a beautifully poignant quote, “Imagine the absurdity of a one-second delay between blowing a note and hearing it.”

  • Nicomachus

    Music is derived from the specific intonation of audible frequencies of sound in sequential time. Such frequency is currently measured in Hz (hertz), the number of cycles per second at which the surrounding air pressure oscillates. The perception of “harmony” in music is directly related to the sympathetic relationship between a certain number of sounds occurring simultaneously, sequentially, or both. The perception of physical “beauty” in human attractiveness is directly related to the various symmetrical ratios of geometry between facial features.

    Pi (the ratio of a circle’s circumference to its diameter) is an irrational number whose digits never terminate or repeat – which is of course much different from the simplicity of binary computing (ultimately just a series of 1s and 0s reflecting a position of “on” or “off”). Quantum computing is theoretically far less limited as well – relying on superpositions of 1 and 0 rather than the absolutely black-and-white 1 and 0 currently in use. Only time will tell if such computing lives up to its promises.

    “Music” is quite different from “sound”, in terms of a classical definition – in the same way that physical “beauty” is quite different from a physical “face” – although many artists throughout history have sought to blur these boundaries or even dissolve them.

    Personally, I would say that making “sound” is not the same as making “music” – but of course, that is just my humble opinion of how those terms should be understood. How harmonic intonation evokes specific emotional resonance is not currently understood by the scientific or neurological community at large – nor is it a very high priority. The priorities of our current scientific world tend to flow in other directions, perceived to be more directly attuned to various personal gains (financial, political, historical, etc.).
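    The frequency-ratio idea in the comment above can be illustrated numerically. A minimal Python sketch follows – the 2:1, 3:2, and 5:4 just-intonation ratios and A4 = 440 Hz are standard values; the comparison itself is only an illustration, not anything from the talk:

```python
import math

# Consonant intervals correspond to small-integer frequency ratios
# (just intonation); modern equal temperament approximates each one
# with a power of 2**(1/12).
A4 = 440.0  # Hz, standard concert pitch

just_ratios = {
    "octave": 2 / 1,
    "perfect fifth": 3 / 2,
    "major third": 5 / 4,
}

for name, ratio in just_ratios.items():
    just_hz = A4 * ratio
    # Nearest equal-tempered interval: round the ratio to whole semitones.
    semitones = round(12 * math.log2(ratio))
    et_hz = A4 * 2 ** (semitones / 12)
    print(f"{name}: just {just_hz:.2f} Hz, equal-tempered {et_hz:.2f} Hz")
```

    The small gap between the two columns (e.g. a just fifth at 660 Hz versus roughly 659.26 Hz in equal temperament) is exactly the kind of “sympathetic relationship” tuning systems trade off.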

  • James Levine

    Thanks Peter – watched it once through. Very engaging and well honed for a talk of this length. And since you asked for feedback, I’ll share my questions. If I were in your audience, I’d still want to know what that crowd-sourced “Daisy” sounded like, for sure. Here’s one:

    If you recall the new opera house in Oslo, I believe the disciplinary decisions and ergonomics literally came out of observing skateboarders and their interaction with space in order to anticipate movement throughout the building and its exterior. If we were to bring things back to visual design, the ancient flute being the first artifact, how might music and its interfaces inform and predict designs that solve for flow and shape?

    And a follow up would be to reconsider the monome and the culmination of the grid. Do you really think this is a musically informed interface that will predict what future technological interaction will look like? To me, it’s more like the Shuowen characters required to print and read a Chinese newspaper. Aren’t there musical, physical, and mathematical reasons why most instruments over many prototypes still have not arrived at a grid?

    • James Levine

      The grid paradigm?