User:Corban/AudioAPI
Revision as of 16:59, 24 February 2010
Defining an Enhanced API for Audio (Draft Recommendation)
Abstract
The HTML5 specification introduces the audio and video media elements, and with them the opportunity to dramatically change the way we integrate media on the web. The current API provides ways to play and get limited information about audio and video, but gives no way to programmatically access or create such media. We present a new API for these media elements that allows web developers to read and write raw audio data.
Authors
- David Humphrey
- Corban Brook
Current Implementation
David Humphrey has developed a proof-of-concept, experimental build of Firefox which implements the following basic API:
Reading Audio
onaudiowritten="callback(event);"
<audio src="song.ogg" onaudiowritten="audioWritten(event);"></audio>
mozFrameBuffer
var samples = [];

function audioWritten(event) {
  samples = event.mozFrameBuffer;
}
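Putting the two pieces above together, a page can process each frame of samples as it is decoded. The following is only a sketch: the RMS loudness calculation and the "loudness" output element are illustrative additions, and it assumes mozFrameBuffer is an ordinary array of floating-point samples as described above.

<audio src="song.ogg" onaudiowritten="audioWritten(event);" controls></audio>
<div id="loudness"></div>

<script>
  // Sketch only: compute the RMS loudness of each decoded frame.
  // Assumes event.mozFrameBuffer is an array of float samples (see above).
  function audioWritten(event) {
    var samples = event.mozFrameBuffer;
    var sum = 0;
    for (var i = 0; i < samples.length; i++) {
      sum += samples[i] * samples[i];
    }
    var rms = Math.sqrt(sum / samples.length);
    document.getElementById('loudness').innerHTML = rms.toFixed(3);
  }
</script>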
Getting FFT Spectrum
mozSpectrum
var spectrum = [];

function audioWritten(event) {
  spectrum = event.mozSpectrum;
}
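As an illustrative sketch, the spectrum array can be scanned for its strongest bin. The number of bins and the magnitude scaling are not specified in this draft, so both are assumptions here.

// Sketch only: find the strongest FFT bin in each frame.
// Assumes event.mozSpectrum is an array of magnitudes, one per frequency bin.
function audioWritten(event) {
  var spectrum = event.mozSpectrum;
  var maxIndex = 0;
  for (var i = 1; i < spectrum.length; i++) {
    if (spectrum[i] > spectrum[maxIndex]) {
      maxIndex = i;
    }
  }
  // maxIndex now identifies the loudest frequency bin for this frame.
}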
Writing Audio
mozSetup(channels, sampleRate, volume)
var audioOutput = document.getElementById('audio-element');
audioOutput.mozSetup(2, 44100, 1);
mozAudioWrite(length, buffer)
var samples = [0.242, 0.127, 0.0, -0.058, -0.242, ...];
audioOutput.mozAudioWrite(samples.length, samples);
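As a rough sketch of how the two write calls fit together, the following generates one second of a 440 Hz sine tone and writes it to the element. It assumes mozSetup() and mozAudioWrite() behave exactly as described above, and that a single write of a full second of samples is acceptable; the experimental build may expect smaller, repeated writes.

// Sketch only: one second of a 440 Hz sine tone, mono, 44100 Hz.
var audioOutput = document.getElementById('audio-element');
var sampleRate = 44100;
audioOutput.mozSetup(1, sampleRate, 1);

var samples = [];
for (var i = 0; i < sampleRate; i++) {
  samples[i] = Math.sin(2 * Math.PI * 440 * i / sampleRate);
}
audioOutput.mozAudioWrite(samples.length, samples);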