Ding dong, the score is dead… or not, in fact. Photo (CC-BY) Steve Snodgrass.

There’s a peculiar false controversy going on at the moment over music notation. First, the blog for online (Flash-based) browser notation editor Noteflight introduced a manifesto:
Music Notation Today, Part 1: A Brief Manifesto

The essay by president Joe Berkovitz is a good read, but it oddly makes the comparison between notation and recorded sound, which is a bit like saying a telephone is better than a DVD. One is interactive and intended for human conversation; one is not. So, go ahead and enjoy the copy of Inception that arrived from Netflix — just don’t take it as an excuse not to call your mother. It’s an argument notation will win, to be sure; it’s just not really a very fair fight.

That is, of course, the implication of Berkovitz’s argument, but the failure to state it overtly prompts Synthtopia to run with the comparison:
Does Music Notation Matter For Electronic Music?

Synthtopia’s James Lewin then goes on to make the following argument:

While Berkovitz argues in favor of “looser” communication of music, an over-arching trend in electronic music has been to give you greater and more immediate control over sound.

I’ve heard this argument before, and the question is worth asking. But I think if you really ask it, you’ll find that notation isn’t less relevant: it’s profoundly more relevant.

Yes, indeed, electronic music does give composers direct control over sound for solo work. Lewin goes on to say, “For example, it’s fairly routine for composers to create large scale works, such as soundtracks, without the use of traditional notation.” True — so long as they don’t hire any musicians.

Involve more than yourself, and you’re back where you started. Let’s assume, for instance, you want turntablists, samplists, or controllerists. Great! Oh, wait – you might need to tell them what to do. Now, you could try to explain it to them, but the moment you want to provide any kind of structure to the improvisation, odds are you’ll need some sort of picture.

Quick — write this down. Photo (CC-BY-ND) piermario.

“Some sort of picture” has always been the core element of music notation. The issue of whether this follows traditional 19th century engraving practice is irrelevant – and entirely inappropriate to many forms of music. But if you draw a picture, whether you use a computer to make that picture or not, it’s a score.

Even working alone, these kinds of representations become critical. We might assume that the computer “marginalizes” notation because it facilitates solo work. But as we remember the contributions of Max Mathews this week, it’s important to note that from his first pioneering digital synthesis system over half a century ago, there was always the notion of some sort of musical structure. (In Csound to this day, it’s called a “score,” and not by accident.) Whether you notate on a staff, in pictures, or in code, you create a representation of musical structure in time. In a conventional score, that representation is interactive and open to interpretation. Computer programming languages and graphical patching environments give us new ways of doing this. Sharing that code or graphical patch lets us share our ideas with others. And the moment you want someone to perform a physical gesture to make your music, you return to the same set of needs that have driven music notation for millennia.
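
To make that concrete, here’s a minimal sketch, in Python and purely illustrative (none of these names come from any particular tool), of a “score” as structured data: a list of events with start times, durations, and pitches. Like a Csound score or a staff, it fixes structure in time while leaving interpretation, here just the tempo, to the moment of rendering.

    # A hypothetical "score" as data: each event is (start_beat, duration_beats, pitch).
    # The structure is fixed; the interpretation (tempo, here) is supplied at render time.
    SCORE = [
        (0.0, 1.0, "C4"),
        (1.0, 1.0, "E4"),
        (2.0, 2.0, "G4"),
        (4.0, 4.0, "C5"),
    ]

    def render(score, bpm):
        """Turn beat positions into seconds at a chosen tempo."""
        seconds_per_beat = 60.0 / bpm
        for start, dur, pitch in score:
            print(f"{pitch}: starts {start * seconds_per_beat:.2f}s, lasts {dur * seconds_per_beat:.2f}s")

    # Two different "performances" of the same score:
    render(SCORE, bpm=120)
    render(SCORE, bpm=72)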

A Delphic Hymn, 2nd century BC, complete with simple annotations for pitch. If you don’t use something like this, you must teach all your vocalists entirely by rote and hope they have good memories. Side note: if things go really badly with this whole global climate change and depleting oil thing, I expect this will be the big forward advance in tablet platforms, not Android or iOS. Create Stoneage Music, coming to you on a cliff face soon! Photo Public Domain, via Wikipedia.

There are fancy solutions — see this paper, with lots of pretty images of “spectromorphology,” for one — but how fancy it is doesn’t matter. You’ll need something, even if you scrawl on napkins.

In fact, the moment you want to think about the musical structure, you’re likely to use some sort of visual or representational metaphor. Open up any music software program, and these representations are ubiquitous. Waveforms and spectra are also accompanied by piano rolls, graphs, blocks, colors, and symbols. The Ableton Live Session View has LEGO-style colored blocks. Drum machines represent rhythmic subdivision in units derived from centuries of notation; take away even the handy notes and flags Roland added to theirs, and you still see a grid that you could quickly explain to someone who fell through a wormhole from the 16th Century.

“x0x”-style rhythmic grids on a drum machine, as translated to conventional notation. Hint: the patterns on the bottom are typically easier for humans to read, not only because of convention but because they evolved for the sake of quick readability. Photo (CC-BY-SA) The_WB.
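
If you’re curious how little translation that actually takes, here’s a small, hypothetical Python sketch of the same idea: a sixteen-step “x0x” grid standing in for one 4/4 bar of sixteenth notes, with hits converted to onset times.

    # A hypothetical x0x-style pattern: 16 steps = one 4/4 bar of sixteenth notes.
    # 'x' is a hit, '.' is a rest; the grid itself is the notation.
    PATTERN = "x...x...x.x.x..."

    def step_times(pattern, bpm=120):
        """Return the onset time in seconds of each hit in the pattern."""
        seconds_per_sixteenth = 60.0 / bpm / 4  # four sixteenths per beat
        return [i * seconds_per_sixteenth for i, step in enumerate(pattern) if step == "x"]

    print(step_times(PATTERN))  # hits land on beats 1, 2, 3, the 'and' of 3, and 4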

If you want to take any one of those patterns and give it to another musician, then you will certainly translate it into a picture. If traditional notation is the most appropriate, you’ll use that. If graphical notation gets the point across more clearly, you’ll do something non-traditional. But that question has everything to do with intention and communication. You might need to adapt the notation to the technology, but that’s always the case. The turntable requires some specialized symbols, but so, too, do fingerings on a woodwind or plucking technique on a harp.

Speaking as a composer, I’d say that what frustrated many composers in the 20th century about notation was actually the same criticism typically leveled against the computer: notation was too precise, too limiting, too entrenched in certain expectations about measuring time and tune. If you really only wish to organize sound in the privacy of your own home, never involving another human being, you might find these attributes of the computer appealing. But if anything, the computer has given us the potential to be freed from these same limitations, by allowing us to quickly create new graphical and textual languages for representing music, and by reassigning time, tune, and timbre to anything we can possibly imagine. In doing so, it presents new frontiers for other human beings to improvise and perform live, whether they’re working with another digital machine, their own voice, or a kazoo.

No one said you had to use just one system of notation to make a score. New graphical solutions assist in electronic music – but also sometimes better communicate intentions across a broader spectrum of ideas. Photo (CC-BY) Charles Kremenak of a score by Cheryl Leonard.

What has electronic music done for music notation? Simple: it’s expanded its necessity, broadened its meaning and applications, facilitated its storage, transmission, and sharing, simplified its production, exploded its possibilities in everything from graphics to interactivity, and freed it from centuries of accumulated restrictions.

What’s on the left (an MP3) doesn’t replace what’s on the right (a score) because a canned recording doesn’t replace live performance, visual communication, creation, representation – or thinking. Photo (CC-BY-SA) Yagan Kiely.

My prediction: if you want to look for the growth area in music technology, it’ll be in notation. We’ll see more of what we already have (conventional notation), and a broader category of what qualifies as musical notation – a greater spectrum of notational systems:

  • More kinds of visual musical notation. New interactive systems will facilitate explosive exploration of the connection of visual symbols to sound.
  • The display becomes a blank page. Tablets of all kinds – the iPad being only the beginning – will adapt computer displays to forms usable in performance. That’ll be a huge boon to conventional notation and new graphical notational systems alike.
  • More connected. The ongoing growth of the Web will mean new ways to edit, share, and view notation. Case in point: guitar tab is massively popular as a search term online.
  • More possibilities. Whereas engraving systems restricted notational practice to certain (largely Western) traditions, open-ended computer notation will make it easier than ever to use alternative notations and non-Western systems.
  • More people. People will continue to play instruments. And they’ll need to notate gestures for new instruments as they’re invented.
  • More improvisation. Written notation and improvisation aren’t necessarily at odds. Improvising musicians in any culture with writing will typically make some annotation on a score, no matter how simple, even if only squiggles on a sheet of lyrics.

The only way recorded sound would make this go away is if recording makes people stop making live music. But recording, for all the times it threatened to do that, hasn’t succeeded yet in making that happen.

In fact, the potential of digital technology for notation is so broad, so diverse, that it almost does it a disservice to put it in one post. So don’t look at this as a manifesto: look at it, instead, as a challenge, to look at new ideas in electronic music in terms of how they use design, visuals, and textual representation to communicate ideas.

Viewing the world of sound through the grand staff is limiting, and for certain sounds, anachronistic. But to cease to view music through any kind of representation whatsoever would mean abandoning musical thought itself.

I love this definition of music notation on Wikipedia: “Music notation or musical notation is any system that represents aurally perceived music, through the use of written symbols.”

The word “written” doesn’t really fit; if it did, engraving would have killed musical scores, and writers would have stopped “writing” when they bought typewriters. Music notation, like language itself, is fundamentally a system of symbols.

Oh, and by the way – editing and sharing scores in your browser? Pretty darned cool. And if you think Internet access isn’t capable of making revolutions happen? Well…

More exhibits:

Notation need not require linear time; it can be interactive. Mozart’s K.516f Musikalisches Würfelspiel was aleatoric music, determined by dice rolls. But it still conveyed that idea as written notation. And it’s a natural for software adaptation, as in this 1991 version for Atari.
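
As a rough illustration of why it adapts so readily to software, here’s a hypothetical Python sketch of the dice mechanic; a placeholder table stands in for the Würfelspiel’s actual measure-number chart, which isn’t reproduced here.

    import random

    # Placeholder for the Würfelspiel's lookup table: for each of 16 bar positions,
    # the sum of two dice (2-12) selects one pre-composed measure number.
    # (The real table from the score is not reproduced here.)
    MEASURE_TABLE = {
        bar: {roll: bar * 100 + roll for roll in range(2, 13)}
        for bar in range(16)
    }

    def roll_waltz():
        """Assemble one 16-bar waltz by rolling two dice for each bar."""
        return [MEASURE_TABLE[bar][random.randint(1, 6) + random.randint(1, 6)]
                for bar in range(16)]

    print(roll_waltz())  # a different sequence of measure numbers every time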

Computers can provide new interactive notations that double as interface. Iannis Xenakis translated back and forth from music to architecture and spatial form, and also pioneered work in using digital graphics tablets as ways of expressing ideas with the computer. His work is carried on in the powerful IanniX software. (Thanks, Brad!) But that’s the fundamental point here: arguably, any computer interface is some form of notation.

What about the vision-impaired? Using notation does not require sight; computers have been a boon to expanding access to notation. The late Ray Charles was a Sibelius user; sadly, it seems Dancing Dots no longer supports Sibelius, but there are other options. The GPL-licensed, open source Freedots continues to work with MusicXML scores for compatibility with many tools. Dancing Dots continues to offer a variety of software and hardware tools for varying degrees of vision impairment, from low vision to blindness. These also include interfaces that enable other music software, notably Cakewalk’s SONAR. A 2006 overview from the Texas School for the Blind and Visually Impaired discusses some of the research and tools.

What about rote learning? None of this is to take away from the power of rote musical learning. But that’s independent of the computer question; rote musical transmission is perhaps the most direct means of communicating a musical idea between people, and it illustrates how significant human communication is to the musical process. And even through rote learning, I would think you might come to understand certain patterns of mode or rhythm, which means internalizing those patterns as some kind of mental representation or symbol.

Where cultures that rely on oral transmission also have writing, they tend to have some form of notation. So, for instance, in India – even in a culture in which oral transmission is common – notation has been found as early as 200 BC.

  • brad

    Speaking of pretty pictures, don't forget to check out IanniX
    http://www.iannix.org/en/index.php

    Video intro:
    http://vimeo.com/22176407

  • http://chipflip.wordpress.com goto80

    Nice post! I started to think about trackers when reading this.

    Trackers were probably the first widely popular interface that wasn't based in traditional notation. Unlike Mathews-esque interfaces, they normally don't set the duration of each note and they scroll vertically.

    Still, it's close to traditional notation in the sense that it shows so much of the song. A chiptune in e.g. Protracker shows all voices and their ornaments simultaneously, and anyone who knows the language understands it. That's probably kind of rare. Would be interesting to see a musician use trackers as instructions. 

  • poopoo

    Being able to sit down with a guitar and piece of sheet music and play a song you've never heard before that was written 300 years ago is something special.

    Maybe in 300 years someone will pick up a piece of sheet music and play this.. http://www.youtube.com/watch?v=z4hQG-XTVa4

  • http://www.schwetter.de/blog Holger Schwetter

    quote: "The only way recorded sound would make this go away is if recording makes people stop making live music. But recording, for all the times it threatened to do that, hasn’t succeeded yet in making that happen."

    In fact, studies on pop music show that the distribution of rock music records led to a rise in self-made music, because loads of young people started forming bands. You can easily check this by looking at the fact that the music instrument suppliers largely live on the amateurs. I guess the same can be said about electronic music or hip hop as well.

  • Juno

    I think that this discussion assumes that a score = time line. All the examples are grids, and that's only one way of working. The discussion doesn't cover process music, which can be highly defined but doesn't follow a notation.

    1. Play middle C

    2. Toss a coin. If heads go up 3 notes. If tails go down 5 notes. Play that note.

    3. If not back at middle C go to 2.

    Ableton has random cell jumps. Each time you play the piece it's different, but it's a fixed score.

    I once made a DVD that played a set of melodies which jumped according to the current time. Every time you play the DVD it's different. But it's the same DVD 'score'.

    (I was once teaching a music class about processes. One of my students filled the back of a van with a drum kit and an audio recorder. He drove around fast and recorded the kit falling about. His 'score' was a street map with the route he took marked out.

    High Distinction.)

  • Peter Kirn

    @Juno: Good point! Well, music still has to exist in time to be perceived, but that doesn't mean it needs a linear timeline. Took your suggestion to heart and added Mozart's Musikalisches Würfelspiel. I'd argue your three lines of instruction are a musical notation. In fact, they require advance knowledge of what the symbols mean ("middle C," "up 3 notes"). "Graphic" symbols alone aren't always required ("molto," "cresc.," "poco a poco" ….)
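
    Just to make that point concrete, here's a rough sketch of those three lines as Python (reading "up 3 notes" as diatonic steps in C major, which is only one possible interpretation, and with a step cap added so the example always stops, since the original rules don't require one):

        import random

        C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # scale degrees as semitones above C

        def degree_to_midi(degree):
            """Map a diatonic degree (0 = middle C) to a MIDI note number."""
            octave, step = divmod(degree, 7)
            return 60 + 12 * octave + C_MAJOR[step]

        def play_process(max_steps=200):
            degree = 0                      # 1. Play middle C
            notes = [degree_to_midi(degree)]
            for _ in range(max_steps):      # cap added so this sketch always stops
                if random.random() < 0.5:   # 2. Toss a coin...
                    degree += 3             #    heads: up 3 notes
                else:
                    degree -= 5             #    tails: down 5 notes
                notes.append(degree_to_midi(degree))
                if degree == 0:             # 3. If not back at middle C, go to 2
                    break
            return notes

        print(play_process())  # a different realization of the same "score" each run

    Every run is a different performance; the instructions, the "score," never change.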

  • hookeypookey

    "The word “written” doesn’t really fit"

    Your definition of the word shouldn't be so literal. Would you not say that you wrote this blog piece? Or would you not use that word because you typed it out?

    Writing is simply the transcription of an idea or thought onto another medium. Does not have to be with pencil and paper to constitute writing.

  • Peter Kirn

    @hookeypookey: No, that's precisely my point – I think given the history of how musical representation has evolved and been used, it's the symbolic abstraction of musical pattern that matters more than whether or not you literally write it down. It's the figurative meaning, which is why rote learning has a lot in common with notation, even in cultures where written notation is less significant to the practice.

  • http://michaelclemow.com mikeclemow

    Thor Magnusson has an interesting article in the latest issue of Computer Music Journal about making music with digital musical systems. He says that a common approach when faced with the "practically infinite expressive scope" that these systems offer is to devise a "relatively high-level system of constraints, encapsulating a defined space for potential expression, whether of compositional or gestural nature."

    This got me thinking about why Chopin didn't feel the need to include a section in his score for the 21 nocturnes entitled "How to Build a Piano" or "the equal temperament scale."  Just about every composition I write these days requires a git repository and circuit schematics in order for someone else to be able to perform the piece.  In a world with no knowledge of keyed instruments, Chopin's nocturnes are unrealizable.

    I think that this situation has always been the case, only cultural standards and technology have been such that we weren't forced to notice.  Now, however, this is coming to a head and we can't help but notice.  Philip Glass says he believes that "music is something that should be written down."  It's pretty clear to me that you can't write it all down–you never could–and just like before, if we decide to write something down, we have to make serious decisions about what we're going to write down and what we're going to choose to omit from "the score."

  • http://www.hispasonic.com Mudo

    Hi Peter,

    if you are interested in new notation ways for turntablist/sampleist check this project:

    http://www.skrat.ch

    and this youtube video about pre-alpha software version.
    http://www.youtube.com/watch?v=S8izg3jHIfI

    I believe the point is something between this kind of notation and Rock Band system…

    ;)

  • Jonah

    @mikeclemow Compared to MIDI, classical notation and analog playback have the advantage of "infinite" resolution. They also have many expressive and emotional signifiers that MIDI does not. MIDI, and sequencers inspired by MIDI, are already rather severely constrained in some ways.

    The prediction that we will see an abundance of new forms of notation will prove prescient, I think, but how much value does notation lose when it no longer serves the purpose of being a unifying language? The idea that notation allows for great collaboration gets a monkey wrench thrown into it if everyone uses and knows different systems. Will proficiency also be lost if we are thinking and working in so many varied representational systems?

    The time is right (overdue in some ways) for a new system that takes lessons learned from past systems to more accurately describe the next 50 years of music. An active system that moves forward at a slow, considered pace. Standards are strengths only in living systems.

    I wonder if Fenn O'Berg or any other electronic collaborators have developed novel ways of communication and notation while they worked together.

    As a side note for my own personal notation I use color very often.

    RE: Recorded sound making live performance go away. What do you think about the proliferation of backing tracks?

  • http://ardour.org/ Paul Davis

    @mikeclemow: love this observation. superb.

    i'm not sure if its related but i also find it interesting to compare the way in which both Csound and LaTeX share the goal of being able to render the same set of instructions in EXACTLY the same way even across 30 years of their existence, and how much this contrasts with most commercial software synthesis tools, which pay vastly less attention to the notion of re-performability. in peter's terms, they've chosen to prefer the audio rendition of a piece over the notated form, which is a defensible choice, but a limiting one too.

  • http://www.ricoallthetime.com Mike Burnett

    I recently saw composer Eve Beglarian speak at Portland State University, and she was very down on traditional notation. She described teaching a vocal part to dancers who used their experience with committing complex dance steps to memory to learn their vocal parts without notation. Not sure of which piece, perhaps "I Have My Own Room"?

  • dynamique

    Great article!

  • Peter Kirn

    I should add — I'm the last person to devalue rote learning / oral (and aural) transmission. Far from it; the whole idea I'm trying to argue for is a wider definition. I'm saying it's other humans who are otherwise missing from the equation. Recall, the original argument James made on Synthtopia was, why do you need notation if you can directly manipulate sound on a computer? The issue is, you need some system of symbols to be able to share what you're doing – whether you write or speak them, whether you also learn music by repetition and muscle memory, whether you use a linear conventional score or something interactive, the ability to organize or describe your musical thoughts is a vital dimension when you want to share those thoughts with someone else.

  • http://ardour.org/ Paul Davis

    peter, if i can, i'd like to expand on what you wrote and said. Its not that its important to be able to describe your "musical thoughts" when you want to share the thoughts with someone else. i think its more subtle than that. what you are trying to share is "what i am trying to do", which is quite distinct from "the outcome of the last time i tried to do that".

    the latter is a recording – it could be audio data, MIDI or some other signal that represents the outcome of bow moving, key striking, knob twiddling, hand waving, throat shaping, membrane hitting or whatever. its nice to be able to give someone that as the outcome, but its of much less use if you are trying to tell them what you were trying to do, either so that they can try it too, or so that they can collaborate with you in their own way.

    saying "i moved the filter cutoff back and forth between X and Y at about 2Hz" is all well and good, and provides a much more detailed explanation of what a knob twiddler is doing that western notation could ever do. but it provides no sense of why the knob twiddler did that, of what effect they were trying to get, or how you might be able to interact with them when you perform together.

  • Peter Kirn

    Paul: well said. And yes, this raises a whole network of questions, so I personally meant to open up this issue more than provide any definitive answer. It's sort of the set of questions you have to ask any time you do anything musically with other people, so I'd be very disappointed if I felt I had definitive answers. :)

  • Rupert Lally

    Yet another great article from you, Peter… Two things occurred to me whilst reading it, and perhaps these have been partly covered in other people's comments:

    Firstly, a notation system is only really worthwhile if it can transcribe a part accurately, and that means pitch, rhythm and timbre: the reason most electronic composers who write using a computer no longer create a "score" is because you cannot accurately "notate" timbre when using electronic or non-conventional sounds… Some early composers used graphic scores… But they were only intended as a guide, much like process or systems music… They were never meant to wholly or accurately transcribe what the composer meant…

    Secondly, for notation to work, or should I say to be of use, to the composer – the performers all need to be able to understand it… Which I realize sounds like I'm stating the obvious, but hear me out: returning to your imaginary ensemble of turntablists and samplists

  • Rupert Lally

    Sorry… Accidentally hit "submit" ( the perils of writing on an iPhone)…

    Peter's ensemble: why would you expect any of these performers to understand notation – none of these instruments necessarily conform to western scales, pitches or equal temperament… The pitch of the turntablist's performance is dependent on whatever record he/she has on the turntable, what speed it's played at, etc… Why would this performer learn a notation system that couldn't accurately transcribe their playing…

    I make this point as someone who started in Jazz and rock before moving into soundtrack work; sometimes I've used people from those worlds in my soundtrack work, but I have never needed to use notation with them in order to enable them to understand what I want from them… Musicians who learnt to play music via improvising or by ear don't need notation to be told what they need to play; perhaps what they come up with is different from what you the composer thought of, but why should that matter… what they've thought of is often better anyhow…

  • http://www.james-ingram-act-two.de James Ingram

    Great article! It's a very long story, but I completely agree that music notation (and not just standard notation) is due for a big comeback.

    I was Stockhausen's copyist for 27 years, so I've thought a lot about this subject. For me, in contrast to Stockhausen's generation, music symbols do not mean absolute time, but require a tradition of performance practice to be performed "correctly". The advantage of writing is that it is fixed, transcending time. Large forms can be created by re-reading, thinking, changing etc. Whole books could be written on the subject…

    I came here after commenting on one of your other articles — apologies for the duplicate posting — but you and your readers might like to check out some experimental scores I've uploaded recently. These are written in SVG extended to contain MIDI information. Interestingly, MIDI info is incomplete and requires interpretation… (I'd like to see specialists re-interpreting my scores in the way performers have always done). This requires a new generation of performing software, but that does not seem too far away.

    One of the scores can be found at
    http://james-ingram-act-two.de/compositions/study
    Background info and the other scores can be found by following the link "about Study 2".

  • Pall Ivan Palsson

    I have a site where I collect interesting new “animated” notations that could be relevant: http://animatednotation.blogspot.com/