Hearing through a din of noise is something humans do far better than computers. A new mathematical technique promises highly accurate models of sound, even with broadband noise in the picture. Why does this matter, aside from mathematical curiosity? For one, better sonic analysis could mean more realistic models of instruments and more flexible sound-editing tools, inspiring a new generation of music software.

From our friend kokorozashi:

‘In a recent issue of the Proceedings of the National Academy of Sciences, Marcelo Magnasco, professor and head of the Mathematical Physics Laboratory at Rockefeller University, has published a paper that may prove to be a sound-analysis breakthrough, featuring a mathematical method or “algorithm” that’s far more nuanced at transforming sound into a visual representation than current methods. “This outperforms everything in the market as a general method of sound analysis,” Magnasco says. In fact, he notes, it may be the same type of method the brain actually uses.’

Full article:
New mathematical method provides better way to analyze noise [Physorg.com]

This certainly wouldn’t be the first time new algorithms yielded scientific and musical advances alike. Even the famed (or infamous) Auto-Tune plug-in benefits from data-processing techniques originally used in oil exploration. (Lesson: it takes a lot of science to make Jessica Simpson sing in tune. Sorry, couldn’t resist.) Of course, the converse is true, too: better sound processing can be very useful to a broad range of sciences, because, well, sound is just about everywhere.

[Updated] Tom Duff has managed to hunt down the actual paper so you can get this straight from the source:

Sparse time-frequency representations,
Timothy J. Gardner and Marcelo O. Magnasco
[Proceedings of the National Academy of Sciences]

While I wouldn’t normally say this of academic papers, it has really pretty pictures. (Seriously: visual renderings of the analyses not only illustrate the point, but also happen to look gorgeous.)
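If you’re wondering what the conventional baseline looks like, here’s a minimal Python sketch (using NumPy and SciPy; the test signal and every parameter are my own illustrative choices, not from the paper) of the standard fixed-window spectrogram. A single fixed window length forces a trade-off between time resolution and frequency resolution, which is the limitation sparser representations like the one in this paper aim to get around.

```python
# Conventional fixed-window spectrogram -- a baseline sketch only;
# none of these parameters come from the Gardner/Magnasco paper.
import numpy as np
from scipy import signal

fs = 8000                          # sample rate in Hz (arbitrary choice)
t = np.arange(0, 1.0, 1.0 / fs)

# Two closely spaced tones buried in broadband noise: the hard case
# the article describes.
clean = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 466 * t)
noisy = clean + np.random.normal(scale=1.0, size=t.shape)

# One fixed analysis window (nperseg) trades time resolution against
# frequency resolution: shrink it and the two tones blur together,
# grow it and transients smear in time.
freqs, times, Sxx = signal.spectrogram(noisy, fs=fs, nperseg=512)
print(Sxx.shape)   # (frequency bins, time frames)
```

The blurriness you’d see if you plotted Sxx is exactly the trade-off the paper’s method is trying to beat.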

  • Tom Duff

    The link in the story doesn't go to the paper, but to a news article that doesn't mention the title or even the publication date of the paper. Nevertheless, I think I found it.

  • http://www.createdigitalmusic.com Peter Kirn

    Thanks, Tom! Yes, I was unable to find it. Added to the story. And the paper itself looks fascinating. I'm really struck by how they've thought (theoretically, at least) about the way this might be modeled in neurons.

  • Sasa

    In fact, disciplines like Adaptive Filtering and Blind Identification have always been concerned with a similar problem – extracting meaningful signals from complex signals in the presence of unknown noise.

    I believe that what this article talks about is nothing new. Time-frequency analysis has been extensively researched. See any book on Wavelet Theory, for example.

    It would be nice to read the publication. After a quick scan through the paper mentioned by the previous poster, I am certainly not impressed.

  • http://www.createdigitalmusic.com Peter Kirn

    While the analysis isn't necessarily new, is it novel at least in the way they're mapping this to perception and cognition? (Or was that always part of these approaches, too?)

    I'm way out of my depth here, but then, that's why we have comments (and why I tend to enjoy them more than, ahem, my own writing).

  • Gilbert Bernstein

    Regarding the comment about pictures, take a look at most of the SIGGRAPH papers (computer graphics) or scientific visualization papers if you want to see really pretty pictures in a science paper. I've heard that some of the Viz people go out of their way to tweak their pictures in hopes of getting put on the cover of the journal.

  • Lance Williams

    Hey, Tom! How're you doing?

    And what do you think, by comparison, of this one?

    http://www.nature.com/nature/journal/v439/n7079/p

    ("Efficient audio coding," Evan Smith and Michael Lewicki, Nature, 23 Feb 2006)