“The hills are alive /
with the sound of browsers”
Ever thought you’d make sounds in a browser, or have new ways of visualizing music playback? It’s happening, with builds of Firefox anyone can download.
Work to make browsers rich with sound synthesis and visualization continues. “Compatibility” isn’t really an advantage yet: Firefox is the only browser with support, and only in its next version, though that could change in the future. And yes, Flash is capable of some of this, too (though not real 3D), and is installed, conservatively, on 90-95% of computers. But if not compatibility, what these experiments do represent is what happens when the people working on a tool (Firefox, in this case) really commit to making sound a priority, and to supporting free standards and developer tools (an emerging standard API, WebGL, Processing.js, and so on).
In fact, it’d be great if this occurred everywhere: if you’re making a platform, make sound a priority, and people will do mind-blowing stuff with your platform.
Among the latest fruits:
2. Patching in a browser – with a Pd clone. Chris McCormick is porting a subset of basic Pd objects to the browser. Now, one side of me wonders whether Pd is the best choice; it’s a somewhat idiosyncratic, if powerful, language for describing sound patching. But on the other hand, I could see this being fantastic for teaching and sharing: put basic patches up in a browser, let people play with them live, then build more advanced tools (with greater hardware access and external support than is possible in a browser) in the traditional Pd environment. As I keep saying, I think there’s far too much partisanship in the discussion (“Browsers for everything!” / “Browsers are useless!”), and far too little thinking about how the browser and the desktop tool are more powerful together.
Also – heck, I may try this out in workshops as soon as next week. The browser could host a basic vocabulary for music and visuals in Processing and Pd, then robust performance tools could be built in the native tools, with quite a lot of compatibility between the two.
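To make the synthesis side of this concrete, here’s a minimal sketch of the sort of sample-level access these experimental builds expose. The buffer-filling function is plain JavaScript; the `mozSetup`/`mozWriteAudio` calls shown in the comment are the experimental Firefox Audio Data API names from these builds, and won’t exist in other browsers:

```javascript
// Fill a Float32Array with one second of a 440 Hz sine tone -- the kind
// of raw sample buffer you'd hand to the experimental Firefox audio API.
function sineBuffer(freq, sampleRate, seconds) {
  var samples = new Float32Array(Math.floor(sampleRate * seconds));
  for (var i = 0; i < samples.length; i++) {
    samples[i] = Math.sin((2 * Math.PI * freq * i) / sampleRate);
  }
  return samples;
}

// In an audio-API-enabled Firefox build, playback would look roughly like:
//   var audio = new Audio();
//   audio.mozSetup(1, 44100);  // 1 channel, 44.1 kHz
//   audio.mozWriteAudio(sineBuffer(440, 44100, 1));
```

The point isn’t this particular API shape (which may well change as the standards discussion continues) but that the browser hands you an ordinary array of floats, the same representation Pd, Processing, or any native tool would use.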
3. Actual standards. The W3C, the standards body behind HTML, has added this discussion to an Audio Incubator group. (It’s been incubating for some time, but maybe this will help something actually hatch.) Now I’d just like to see these things in Chrome/Chromium, too – I wonder if anyone’s up to a test build, as the standards adoption discussion continues. A number of readers have pointed out that MPEG-4 had a specification that evidently included Csound wholesale. But this process seems more organic to me – you need actual tools and real-world experiments to evaluate the validity of something, not just standards on paper.
Putting the Awesomeness in Context: An Appeal
A side rant, though: why do Web geeks only care about what happens in the browser? It’s funny to me that outlets like Slashdot jump on stories like browser-based tools, but ignore exactly the same ideas if they’re in a separate app. That’s not a criticism of the Mozilla crew or these brilliant hackers – this is what development is all about, pushing your tools to the limits. But if there isn’t a broader recognition of the value of what you’re doing, or why you’re doing it in the first place, there’s a danger that unsustainable tool fetishism will miss the point. That is, synthesis in the browser is excellent, but if people don’t understand the value of the synthesis itself, we have a lot more work to do.
I have yet to see a compelling explanation of what the browser really is, and what’s possible with its interface paradigm. That should be a fascinating discussion, actually, especially with the radical transformation of the browser underway, particularly as players like Google make it the central aspect of TV-watching or tablet experiences. But the discussion is only really interesting if you don’t start out with the value as a given. For instance, if browsers become a bigger part of what we do, is the simplistic tab metaphor really sufficient? If browsers simply bundle a set of native tools, are there ways “standalone” apps might adopt similar, standards-based approaches?
Oh, side note: this isn’t about “the cloud.” The cool stuff here is happening on your local hardware, period. That’s what makes it fast, and that’s what makes it work for audio, and your local machine is getting cheaper, cooler, and less power-hungry all the time. New DSP and floating-point capabilities in devices like tablets could make sound more powerful and flexible than ever before – provided people work out how to maximize, not squander, those capabilities.
So, here’s what I’d like to ask: what form will the standards discussion take? And how can these larger discussions – many of which transcend the discussion of any one tool or standard – find a forum?
Behind the Scenes, More Info
While you ponder that (and I’m open to suggestions), here’s more reading for you:
Experiments with audio, part X [Dave Humphrey's increasingly-awesome blog]
More details on the first example, and how it was built (Minefield is Firefox 3.7):
This demo combines the CubicVR 3D engine on WebGL (www.cubicvr.org) with the Mozilla HTML5 Audio API (hacks.mozilla.org), Processing.js (www.processingjs.org) and BeatDetektor.js (www.beatdetektor.com)
The Mozilla Audio API is used to sample the HTML5 audio tag on the page; this information is processed by BeatDetektor.js, which produces timing information for the Processing.js real-time canvas textures and for the CubicVR.js procedurally generated WebGL scene that uses them.
The camera free-roams a simple chase pattern, with some probability of following a nearby cube (fully automated).
Available online at:
or if you have a Float32Array-enabled Minefield build:
you can find more info about audio api-enabled Minefield builds at:
You can also feel free to chat with us about the Audio API via the #audio channel on irc.mozilla.org
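The pipeline described above – samples in, timing information out – can be sketched in miniature. This is a toy, energy-threshold beat detector of my own devising, not BeatDetektor.js’s actual algorithm (which does far more, including per-band analysis and BPM locking); it just illustrates the shape of the analysis step that sits between the audio tag and the visuals:

```javascript
// Mean energy of one frame of samples (values in -1..1).
function frameEnergy(frame) {
  var sum = 0;
  for (var i = 0; i < frame.length; i++) {
    sum += frame[i] * frame[i];
  }
  return sum / frame.length;
}

// Flag a "beat" when the current frame's energy clearly exceeds the
// average energy of recent frames. `history` is an array of recent
// frame energies; `threshold` is a sensitivity factor, e.g. 1.5.
function isBeat(frame, history, threshold) {
  var avg = history.reduce(function (a, b) { return a + b; }, 0) / history.length;
  return frameEnergy(frame) > threshold * avg;
}
```

In a demo like the one above, the browser’s audio API would deliver each frame of samples, `isBeat` (or its far smarter real-world equivalent) would fire on the loud transients, and those events would drive the Processing.js textures and the WebGL scene.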
Enjoy! And yes, I’ll have to work out a more beginner-friendly, “here’s how to do this” post.