To celebrate what in the US we call 3.14, or Pi Day, today I’m offering stories that deal with mathematics and circles. First up, an app named for the great philosopher who is credited – even if perhaps ahistorically so – with finding that ratio, and with finding ratios in harmonies.

Technology has long introduced innovations that make playing easier for specialists and non-specialists alike. Just ask anyone who plays an instrument like the guitar – frets, and the simplified notation that went with them, go back centuries as a means of allowing more people to make music.

Developer Rob Fielding wants to rethink frets, to bring their disposition and playability closer to the way harmonics work in sound. He’s the creator of the microtonal iPad app Mugician; his next app in development, Pythagoras, offers some fascinating ideas. Forgive me for getting a bit theoretical in the prose for those who do speak that language; the videos are always the best way of understanding what’s going on. (The vast majority of even untrained ears can perceive pitch with astounding accuracy, so you don’t have to be an expert. Usually when people claim to be tone deaf, the problem is that they can’t sing, not that they can’t hear, in my experience.)

I’ll let Rob explain:

Pitch

Pythagoras’ fretless mode uses geometry to mark the harmonically relevant points, not fixed frets. Where the lines intersect with strings, the notes are perfect ratios to each other. This helps you to locate and get to know the useful pitches that are used in world music. [See image, top, for a beautiful visualization of how this works. -Ed.] That is explained here:
The Spectrum – Pythagoras’s interface

When you play a chord like a major third, you line up the blue notes to overlap perfectly, and you get shiningly perfect major thirds that way. Same for harmonically correct fifths and fourths. These are the pitches that you hear as overtones when you listen carefully to instruments with lots of sympathetics, etc.

I do want to respond to this one lamentation in Rob’s post: he frets (ahem) that MIDI doesn’t use frequency, and that OSC isn’t well-supported. I actually think MIDI isn’t far off – it just lacks precision. Perception of pitch is complex, but a logarithmic scale (in which equal pitch steps correspond to equal frequency ratios, so 440 Hz sounds one octave above 220 Hz) is reasonably close to how we hear. And that’s precisely what MIDI gives you; if you just wanted to number the piano, its solution of using a number like 60 for middle C makes perfect sense. (We can overlook for a moment that the definition of MIDI fumbled the octave. The basic idea was still right.)

Even outside MIDI, a numbering system like MIDI’s – mapping pitch to a logarithmic scale so that the numbers match intuitively what we hear – is not uncommon. The problem is that MIDI doesn’t have a rational way of dealing with what happens in between the notes, because it used integers for efficiency. Take MIDI’s logarithmic scale and use floating-point numbers (numbers with a decimal place, like 60.5 instead of 60), and you have a pretty decent solution. You could still, if you didn’t want integers to represent 12-tone equal-tempered pitch, apply different scales and modes. But I think if you wanted a decent way of communicating note values, unless I’m really missing something, sending floating-point numbers that default to a 12-TET logarithmic scale can’t be too bad. I understand that most instruments don’t yet respond in any standardized way, but I refuse to believe this is an intractable problem. I’m happy to discuss in comments. Heck, if we just got Max and Pd patchers to agree on something, I’d be pleased.
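
To make that concrete, here’s a tiny sketch – purely illustrative Python, not any existing spec – of what fractional note numbers on MIDI’s own logarithmic scale give you:

    import math

    def note_to_freq(note, a4=440.0):
        # Fractional MIDI-style note number to Hz. 69.0 is the A above middle C;
        # 60.0 is middle C; 60.5 sits a quarter tone above it.
        return a4 * 2.0 ** ((note - 69.0) / 12.0)

    def freq_to_note(freq, a4=440.0):
        # The inverse: Hz back to a fractional note number.
        return 69.0 + 12.0 * math.log2(freq / a4)

    print(note_to_freq(60.0))   # ~261.63 Hz, middle C in 12-TET
    print(note_to_freq(60.5))   # a quarter tone above middle C, ~269.3 Hz
    print(freq_to_note(220.0))  # 57.0, the A below middle C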

On to another very cool idea:

Octave Rounding

Pythagoras is using octave rounding in its latest incarnation when you press the “Auto” button for the octave switch. What this means is simply that it doesn’t care about what octave a note is in; it will pick the closest octave to the last played note. This allows for astounding feats of arpeggiation and pentatonic scales – even when playing fretless. Here is the more popular video:

And here is the improvement upon it from the next day (a much less viewed video), where you can slide up or down a fourth:

This octave rounding is an idea I implemented a few years ago in my Samchillian derivative called Xstrument. (Both Xstrument and Mugician are open-source projects on GitHub.) This idea is very applicable to two-octave keyboards as well.

Here is the idea with a trivial Pd program:
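
[For those who don’t patch in Pd, here is roughly the same octave-rounding logic sketched in Python; it’s only an illustration of the idea, not Rob’s code, and the note numbers are on the usual MIDI-style scale. -Ed.]

    def round_octave(pitch_class, last_note):
        # pitch_class: 0-12, octave-agnostic; last_note: the previously sounded
        # note number. Return the same pitch class in whichever octave lands
        # closest to the last note.
        base = last_note - (last_note % 12) + pitch_class
        candidates = (base - 12, base, base + 12)
        return min(candidates, key=lambda n: abs(n - last_note))

    print(round_octave(0.0, 64.0))  # last note E4 (64): C is placed at 60, not 72
    print(round_octave(0.0, 70.0))  # last note Bb4 (70): C is placed at 72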

It’s really great stuff. As it happens, I’ve been exploring new geometries for music making myself, along similar lines. And musical inventor Roger Linn has had a lot to say about this lately, too, including his respect for Rob’s work.

So, I’d love to have a discussion. What interesting interfaces have you seen for music? Are there any you find playable in practical circumstances? And why can’t we just solve this issue of how to transmit pitch information between software and hardware once and for all? (I don’t yet know how HD-MIDI will address the issue; that’ll be interesting to see.)

And don’t miss Rob’s blog:
http://rrr00bb.blogspot.com/

Finally, here’s Jordan Rudess rocking out with Mugician, Rob’s (currently-available) app.

  • ANders

    Just a geeky note :) In Pythagorean tuning the thirds are far from "perfect"; they are in the ratios of 81:64 for a major third and 32:27 for a minor third.

    In a 5-limit just intonation system (which this is?) the thirds are more "perfect": 5:4 and 6:5.
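
    [For a sense of how far apart those ratios actually are, here they are converted to cents; a quick, purely illustrative calculation. -Ed.]

        import math
        for name, num, den in [("Pythagorean major third", 81, 64), ("just major third", 5, 4),
                               ("Pythagorean minor third", 32, 27), ("just minor third", 6, 5)]:
            print(name, round(1200 * math.log2(num / den), 1), "cents")
        # Pythagorean major third 407.8 cents vs. just major third 386.3 cents
        # Pythagorean minor third 294.1 cents vs. just minor third 315.6 cents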

  • jhhl

    MIDI NoteOn/NoteOff is about what switch you hit, and not what pitch you play. There are various ways to get more precision in pitch on mainstream MIDI compatible instruments, usually by sending a calibrated pitch bend before each NoteOn (on different channels if you are going polyphonic). Various synthesizers added sysExes to let you tune them in various ways, and I  took advantage of that to write a tuning program, EE, so I could dump algorithmically defined tunings onto synths to the best of their abilities. I used this for tuning synths for various American Festival of Microtonal Music concerts in the 80s and 90s.

    The iOS devices have a few microtonally aware apps, some more practical than others as performance instruments. Some of those apps are my apps. If you want to hear differences between intervals, and keep them droning, I suggest checking out Droneo. It has an interesting "tone spiral" page which lets you drag tones around in a logarithmic interval space, and has a number of other ways to specify intervals. http://www.jhhl.net/iPhone/Droneo
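
    [For the curious, here’s a rough sketch of the calibrated-bend trick jhhl describes above: pick the nearest MIDI note, then compute the 14-bit bend that covers the remainder. The ±2-semitone range is only a common default; real devices vary, and the per-channel bookkeeping for polyphony is left out. -Ed.]

        import math

        def freq_to_note_and_bend(freq, bend_range=2.0, a4=440.0):
            # Nearest MIDI note plus the 14-bit pitch-bend value (8192 = centered)
            # needed to hit freq exactly, assuming the receiving synth's bend
            # range in semitones is bend_range.
            exact = 69.0 + 12.0 * math.log2(freq / a4)
            note = int(round(exact))
            bend = 8192 + int(round((exact - note) / bend_range * 8192))
            return note, max(0, min(16383, bend))

        print(freq_to_note_and_bend(445.0))  # -> (69, 8993): A4 plus about a fifth of a semitone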

  • lala

    no more pitch-bend hoops – cool

  • Jeff Brown

    I think often about how to represent pitch information.  Three qualities that matter a lot for a format are:
    • simplicity
    • range
    • ease of transformation

    MIDI represents 12-tone ET pitches simply, and makes chromatic transposition easy.  It's not very good at other approaches to pitch.  Its range is limited: on one channel, if you're playing more than one tone at a time, you've only got 128 choices for any but the first tone.  (That first one has 128*128 possibilities, thanks to pitchbend.)  Yes, you can represent multiple out-of-scale pitches, using different channels, but it's difficult to encode and a nightmare to edit.

    Floating point pitch representation has the best range, and a very simple format, but it sacrifices the third quality.  Like MIDI, it only easily admits parallel transposition: You can easily transpose all notes up by, say, a fifth, but how would you transpose them up two scale steps?  You'd first need some scale to be specified, and some guarantee that the pitches you're starting with are in it.

    If you want scale-degree transposition to be relatively easy, the best format would seem to be one in which pitches are represented by integers (or perhaps "degree-octave" pairs of integers) and the applicable scale is recorded as a set of pitches that changes discretely over time.  The scale could be represented as a set of ratios relative to a floating-point root pitch, or they could be just a set of floating-point values.  Again, the latter representation is simpler, but the former makes it easy to transpose the whole scale (just change the root pitch).
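
    [To make the comparison concrete, here’s one way the "integer degrees plus a scale" representation might be sketched; the names and layout are my own illustration of Jeff’s idea, not anything standardized. -Ed.]

        # A scale: floating-point root (Hz) plus ratios for each degree within one octave.
        SCALE = {"root": 261.6, "ratios": [1.0, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8]}

        def pitch_to_freq(degree, octave, scale=SCALE):
            # (degree, octave) integer pair -> Hz. Degree indexes into the scale's ratios.
            return scale["root"] * scale["ratios"][degree] * 2 ** octave

        def transpose(notes, steps):
            # Scale-degree transposition: shift every (degree, octave) pair by steps,
            # wrapping degrees and carrying octaves. Transposing the whole scale is
            # even simpler: just change the root pitch.
            size = len(SCALE["ratios"])
            return [((d + steps) % size, o + (d + steps) // size) for d, o in notes]

        triad = [(0, 0), (2, 0), (4, 0)]          # root, third, fifth
        print([round(pitch_to_freq(d, o), 1) for d, o in triad])
        print(transpose(triad, 2))                 # the same shape, two scale steps up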

    No programmer* would try to convince you that a single data structure is the best for every problem.  But neither would they tell you it's best to design a data structure from scratch every time you need one.  Rather, there are a few templates — lists, arrays, dictionaries, etc. — that get used and reused, and with which everyone gets familiar, because they make life easier.

    Digital musicians really, really ought to do the same.

    *with the possible exception of Lisp fans

  • Peter Kirn

    @Jeff: actually, that raises a really good point. The only reason that even floats between those log values make sense is that you're presuming in advance that you have a 12TET tuning. So it makes perfect sense that you should always define some tuning table along with the pitch information. It means not only is the data then meaningful to humans, but it becomes possible to perform musical transformations on the pitch set. (And we're not talking advanced theory, either – we're talking basic stepwise transposition, like you say!)

    Based on what little information we have / I've seen, it would seem a lot of early music and music from other parts of the world would tend to work the same way. You're more likely to maintain the relationship of the mode (and thus the interval relations between pitches) than absolute pitch. That means, again, it makes sense to use some sort of relative scale and then a formula to map that scale to frequency.

    It actually bothers me a little that we use the word "microtonal," as that implies arbitrary values between pitches. For most music, we're talking about even the ability to do something in tune in a commonly-used mode, even before we get to common musical gestures between those pitches. Again, as you say, without some sort of distinction of the tuning table first, nothing else makes sense.

    Not all tuning information needs to be transmitted in real time, either. It makes sense you'd transmit a tuning, then notes, not retune while you're playing, not only for acoustic reasons but for musical reasons. Even if you had something crazy like multiple simultaneous tuning systems, you could still transmit those as separate events, without having to add explicit frequencies to every single note event. 

    My bet would be that transmitting note information would beat raw frequency information nine times out of ten, if you want to have any kind of musical pitch system in play at all.

  • Jeff Brown

    > The only reason that even floats between those log values make sense is that you're presuming in advance that you have a 12TET tuning.

    Good point.  And that observation might point toward a way to relatively seamlessly introduce more flexible pitch representation into familiar-looking workflows.

    I'm imagining a piano roll with a horizontal row corresponding to every integer from 0 to 127, a vertical bitmap that repeats every T pitches, and a horizontal row parallel to the pitches that encodes the pitch set.  By default, T=12, the bitmap is an octave on a keyboard, and the horizontal row only encodes one event, at the start, which encodes middle C (261.6 Hz) and the 11 non-unison intervals (2^(1/12), 2^(2/12) … 2^(11/12)).  To someone who didn't want to mess with "microtonality" (which, I agree, is an unfortunate term), the default would look exactly like what they're already used to in most DAWs.  

    If someone wants to play in an alternate equal-tempered tuning, they change T to whatever's appropriate (31! 31!), swap the bitmap for something that repeats every 31 rows instead of 12, leave the root (C) unchanged, change the set of intervals to the 31 pitches from middle C to just under the C an octave up, and they're good to go.  For someone who wants a non-equal-tempered tuning with N notes, the approach would be exactly the same, except they might sometimes want to change the root pitch as well as the intervals.
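
    [In code, the default pitch-set event and the 31-tone swap Jeff describes are each a one-liner; this is purely illustrative, with field names of my own invention. -Ed.]

        # Default: root = middle C, plus the 11 non-unison intervals of 12-tone equal temperament.
        default_event = {"root": 261.6, "intervals": [2 ** (k / 12) for k in range(1, 12)]}

        # Alternate equal temperament: change T to 31 and regenerate the intervals.
        edo31_event = {"root": 261.6, "intervals": [2 ** (k / 31) for k in range(1, 31)]}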

    I earlier said that no data structure is perfect for every job, but I'm having trouble thinking of a composition style that this wouldn't easily accommodate.

  • lala

    im waiting for something that does this out of the box:

    78.0 cents/step = 15.385 steps/octave

    63.8 cents/step = 18.809 steps/octave
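
    [Those figures are just 1200 cents per octave divided by the step size: 1200 / 78.0 ≈ 15.385 and 1200 / 63.8 ≈ 18.809 steps per octave. -Ed.]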

  • Jeff Brown

    Hmm.  I hadn't thought of that.  You're describing a scale that doesn't repeat after an octave.  But such a scheme could still be accommodated by the data structure I described — you'd just need a bitmap that repeats every step (instead of every 12 or 31 or whatever) and a scale that had exactly one interval, repeating thereafter.

  • lala
  • Jeff Brown

    Yeah, she's a genius.  One of those scales is, as she described it elsewhere, derivable as an equal-tempered division of the perfect fifth, rather than the octave.  Trippy stuff.

    Personally, I would hate to compose in a scale that didn't repeat every octave.  It's literally the last interval I would choose to be without.

    However, it does seem like an ideal music notation system would have to allow such things.  You made me realize that in the scheme I was describing, there's another hidden assumption: the interval after which the scale repeats.  If that last datum were made explicit, I think you might have a system that covers the needs of anyone who uses scales.

    (To be more explicit about the missing datum: I was assuming one would encode a scale as, for instance, the following: {middle C = 261.6 Hz; 2^(1/12), 2^(2/12) … 2^(11/12)}.  But if you really want to be sure you've spelled out 12-tone ET, you've also got to specify that the scale repeats at the ratio 2.)
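
    [With the period made explicit, a lookup along Jeff’s lines might go something like this; the names are illustrative, and the unison is folded into the interval list for simplicity. Note that nine equal divisions of a just perfect fifth works out to about 78 cents per step, close to lala’s first figure above. -Ed.]

        def degree_to_freq(degree, scale):
            # degree: integer index into a scale that repeats at the ratio scale["period"].
            n = len(scale["intervals"])
            cycle, step = divmod(degree, n)
            return scale["root"] * scale["intervals"][step] * scale["period"] ** cycle

        # 12-tone equal temperament: repeats at the ratio 2 (the octave).
        tet12 = {"root": 261.6, "period": 2.0,
                 "intervals": [2 ** (k / 12) for k in range(12)]}

        # Nine equal divisions of the perfect fifth (roughly 78 cents per step):
        # the scale repeats at 3/2 instead of 2.
        fifth9 = {"root": 261.6, "period": 1.5,
                  "intervals": [1.5 ** (k / 9) for k in range(9)]}

        print(degree_to_freq(12, tet12))  # one octave above the root, 523.2 Hz
        print(degree_to_freq(9, fifth9))  # one period (a perfect fifth) above the root, 392.4 Hz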

  • lala

    if the scale doesnt repeat @ the ratio of 2 it sounds like nothing i’ve heard with my western ears/ brain

  • lala

    :)

  • jhhl

    Wendy Carlos' equal divisions of intervals other than octaves came about due to her realization that an octave is one of the easiest things to create with digital synthesis, and so the intervals in the alpha and beta scales are not intended to be the only pitches you will hear. This is a kind of paratactical tuning (cf. Larry Polansky) where you use the tuning you need to get the effect you want wherever it may be in the composition.

  • lala

    "paratactical tuning" isnt that just a wild mix of what ever intervals & the chromatic scale?

  • lala

    thats why i want alpha & beta tuning,

    and add a layer of chromatic stuff later on or the other way round

  • lala

    i get gussets in my brain when i try to play alpha & beta on a regular keyboard …

  • http://rrr00b.blogspot.com rob

    Ah guys, that blurb I made about MIDI is just something even more basic than all of this.  It's about the basic nature of fretless instruments.  

    On a fretless instrument, there is no scale to make a tuning table to (no discrete keys, no special notes anywhere).  MIDI assumes discrete keys that turn on and off, and that isn't true in this case.  Trying to work around it with a channel per string to get arbitrary independent pitches is an abuse of the spec that doesn't work reliably.  Here's why:

    You just have a bunch of strings (more than the number of available MIDI channels) that can bend arbitrarily… As an example, a single note down on A1 bends up to, say, C5 … way more than the 14 bits of bend that MIDI gives you.  The beauty of OSC and Pd is that you just have floating point numbers for frequency.  (A log-scaled one that is mod 12.0, as Peter Kirn suggested, is a good choice actually… I use exactly this internally in both of my instruments.)

    Anyway, the gesture data coming out of a fretless instrument is wild and arbitrary pitch bends like this, with at least 10 fingers – much more if you want drone strings too.  Every single note down is a slightly different pitch, so you really have to have a channel per finger.  Channel 10 in MIDI is generally drums, when you would like it to be yet another finger and to have dozens of such channels to handle harps, sympathetics, etc.

    I actually don't pay much attention to the academic Microtonal stuff going on. Microtonality is real-world in most world music.  It's because of fretless instruments and purely tuned 4ths/5ths … not because of any unusual keyboards or unusual mathematical music theories.

    You can wrangle MIDI into doing microtonality.  If you abuse channels, it will work polyphonically too.  But it definitely won't work consistently across MIDI devices.  Just inside OS X's GarageBand I find that some of the guitars handle microtonal scales while others just sound wrong.  It's expected, because I am abusing the spec and going beyond what is commonly supported.

    MIDI's whole design assumes discrete keys.  Many devices assume 14 bits of bending across +/- a whole tone.  It assumes 12 discrete notes per octave.

    In Arabic music, a single phrase will commonly mix Phrygian mode sharp 4th, minor mode, and Dorian with quarterflat second and sixth.  Furthermore, in these scales the 5th/4th are adjusted to be harmonically perfect with respect to the current "chord" (mode actually).  This means that almost every time you put down your finger the pitch could be different.  This is why it is said that scales don't really exist in this system.  This is one simple non-academic case.

    Anyways…  This is a reason why OSC exists.  It's why Pd Everywhere seems like such a GREAT idea to me, given the crazy amount of computing power on touchscreens now.

  • Jeff Brown

    Rob: I agree, MIDI is a woefully inadequate specification, and OSC offers the *possibility* of doing much better.  However, OSC is so flexible that it leaves a lot of musicians scratching their heads — similar to the novelist's terror of facing a blank page.  

    In order to be useful to a lot of people, a good standard has not only to be flexible, but also structured enough that the person using it doesn't have to come up with a lot of structure on their own.  (I feel like I'm echoing Peter here.)  That's why I think we need a pitch specification that's a little more concrete than OSC's "you can send any message to anything at any time".  The pitch spec could be written using OSC, but it would impose more structure — namely, by specifying a root, a set of intervals, and an interval of periodicity.

    If you're worried that such a spec couldn't carry the sort of unrestricted pitch bends you want to be able to notate, one could complicate the standard by adding an optional floating-point logarithmic bend value to each note.  My intuition is that the most elegant way to handle polyphonic out-of-scale values would be to create a label for each note as soon as it starts.  That way, later bend events could refer to that label in order to indicate which notes they're supposed to modify.

    In place of absolute logarithmic bend information, one might in some cases prefer to send bend messages that relate to the scale — e.g. "Bend up from this scale-tone to that scale-tone over this many seconds."  Ideally the standard would allow for that, too.
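
    [As a thought experiment only, a stream of messages under a spec like that could look roughly like this; the addresses and fields are invented for illustration, not an existing protocol. -Ed.]

        messages = [
            # Declare the tuning context once: root (Hz), intervals, period of repetition.
            ("/tuning", 261.6, [2 ** (k / 12) for k in range(1, 12)], 2.0),
            # Start a labeled note on scale degree 4, octave 0; the label "n17"
            # lets later messages refer back to this particular note.
            ("/note/on", "n17", 4, 0),
            # Bend that note by an absolute logarithmic offset in semitones...
            ("/note/bend", "n17", 0.31),
            # ...or glide relative to the scale: up two scale degrees over 1.5 seconds.
            ("/note/glide", "n17", 2, 1.5),
            ("/note/off", "n17"),
        ]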

  • http://rrr00b.blogspot.com rob

    Jeff Brown:  Your first paragraph sums up exactly what I meant in my blog post.  ;-)  I have seen some open source projects to make a meta-spec over OSC to standardize a frequency-oriented MIDI-equivalent.

    What MIDI does fantastically well is that if you plug in any kind of device oriented around 12 discrete keys per octave and small pitch bends, it will just work… basically perfectly.

    He's right that it boils down to a resolution issue when using the standard MIDI that everything understands.  If you could use Bend (per note and before note on!) to reach the entire range, then it would work perfectly in the microtonal case.

  • Peter Kirn

    Ah, okay, I see what you were saying now, Rob. 

    So, of course, we wind up getting into these OSC versus MIDI discussions and miss the bottom line – neither MIDI nor OSC really answers what to do with things with strings and frets. If you have an instrument with keys, note on and note off messages make perfect sense – and, honestly, whether you're using OSC or MIDI, that's what you'd want.

    But has anyone got a proposal for how to describe messages from a stringed instrument? What would it need, just a "pluck" message or "bow" message and then a separate fretting message?

    And yes, OSC provides sort of a canvas on which you could build these things – it just happens that no one has stepped up to the plate with something that could be standardized for these kinds of messages. (And, arguably, no one really did so with MIDI, either.)

  • Jeff Brown

    For a MIDI-replacement to handle real-world instruments well, I think it needs to meet two criteria: (1) the set of parameters at play when a note starts, and while it plays, can vary across instruments, and (2) the note is labeled when it starts, so that future messages can modify it.

    For instance, to encode the initial pluck of a string, you really need more than the two integers (pitch and vel) MIDI allows: there are also timbral qualities that depend on where you struck the string, at what angle, with what type of material, etc.  Moreover, after the note has begun, if you want to capture the level of nuance a real guitar offers, you need to be able to encode incremental changes (such as vibrato) and also discrete, abrupt ones (such as a temporary palm-mute).

    The same paradigm would be useful for drums — particularly hi-hats, which are often adjusted mid-vibration without ending that vibration.  In fact, I'm having trouble thinking of real-world instruments (besides the piano and its ilk) in which a note's entire evolution can be encoded simply by describing its start and end.  A violinist is constantly changing pressure, often changing direction, sometimes changing bow angle.  A wind player has a number of percussive ways to adjust a note, and the ability to use their vocal cords, too.

    Part of MIDI's failure is that it tries to describe the behavior of all sorts of different instruments with the same parameters.  Its replacement should provide a framework for any type of data, but without imposing parameters from one instrument on another where they don't make sense.  

    It might be useful to standardize parameters for each instrument family: for instance, strings would have "strike" information (position, angle, material, velocity), "interference information" (position, material, pressure), "bow" information (position, angle, pressure, material), and "pitch" information (tension, length).  This would let someone transplant a string performance from one instrument to another.  It wouldn't let them transplant the performance to a saxophone without specifying which parameters are meant to correspond — but that's appropriate.  And it's not much of a limitation, either: one could standardize default mappings from one instrument's parameters to another, for people who didn't want to give it any thought.

    It's fine if the "guitar standard data format" has more parameters than most people plan to use.  The data format should be such that as long as you specify the bare minimum required to produce a sound — pitch, I suppose — the rest can be set to default values, dependent on the synthesizer.  Making a complete set of parameters available need not conflict with keeping the interface simple.
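
    [One illustrative way to encode that "complete parameter set with defaults" idea; everything here is hypothetical, not an existing format. -Ed.]

        from dataclasses import dataclass

        @dataclass
        class StringStrike:
            # Only pitch is required; everything else falls back to a default
            # the receiving synth is free to interpret.
            pitch: float                   # fractional note number, per the discussion above
            velocity: float = 0.8
            position: float = 0.5          # where along the string it was struck (0-1)
            angle: float = 0.0             # attack angle, in radians
            material: str = "finger"       # finger, plectrum, etc.

        # Minimal use: specify pitch, accept defaults for the rest.
        note = StringStrike(pitch=60.0)
        print(note)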