Audio Data API


Defining an Enhanced API for Audio (Draft Recommendation)

Abstract

The HTML5 specification introduces the <audio> and <video> media elements, and with them the opportunity to dramatically change the way we integrate media on the web. The current HTML5 media API provides ways to play and get limited information about audio and video, but gives no way to programmatically access or create such media. We present a new Mozilla extension to this API, which allows web developers to read and write raw audio data.

Authors
Other Contributors
  • Thomas Saunders
  • Ted Mielczarek
  • Felipe Gomes

API Tutorial

This API extends the HTMLMediaElement and HTMLAudioElement (e.g., affecting <video> and <audio>), and implements the following basic API for reading and writing raw audio data:

Reading Audio

Audio data is made available via an event-based API. As the audio is played, and therefore decoded, sample data is passed to content scripts in a framebuffer for processing after becoming available to the audio layer--hence the name, AudioAvailable. These samples may or may not have been played yet at the time of the event. The audio samples returned in the event are raw, and have not been adjusted for mute/volume settings on the media element. Playing, pausing, and seeking the audio also affect the streaming of this raw audio data.

Users of this API can register two callbacks on the <audio> or <video> element in order to consume this data: one for the standard loadedmetadata event, and one for MozAudioAvailable events.

<audio src="song.ogg"
       onloadedmetadata="audioInfo();">
</audio>

The LoadedMetadata event is a standard part of HTML5. When it fires, the media element (audio or video) has useful metadata loaded, which can be accessed using three new attributes:

  • mozChannels
  • mozSampleRate
  • mozFrameBufferLength

Prior to the LoadedMetadata event, accessing these attributes will cause an exception to be thrown, indicating that they are not known, or there is no audio. These attributes indicate the number of channels, audio sample rate per second, and the default size of the framebuffer that will be used in MozAudioAvailable events. This event is fired once as the media resource is first loaded, and is useful for interpreting or writing the audio data.
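
For example, a minimal sketch (assuming an <audio> element with id "audio", as in the snippets below) of both the too-early access and the safe pattern:

var audio = document.getElementById("audio");

try {
  // Throws if metadata is not yet known (or there is no audio track).
  var channels = audio.mozChannels;
} catch (e) {
  // Too early; wait for loadedmetadata instead.
}

audio.addEventListener('loadedmetadata', function () {
  // Safe to read the new attributes here.
  var channels          = audio.mozChannels,
      rate              = audio.mozSampleRate,
      frameBufferLength = audio.mozFrameBufferLength;
}, false);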

The MozAudioAvailable event provides two pieces of data. The first is a framebuffer (i.e., an array) containing decoded audio sample data (i.e., floats). The second is the time for these samples measured from the start in seconds. Web developers consume this event by registering an event listener in script like so:

<audio id="audio" src="song.ogg"></audio>
<script>
  var audio = document.getElementById("audio");
  audio.addEventListener('MozAudioAvailable', someFunction, false);
</script>

An audio or video element can also be created with script outside the DOM:

var audio = new Audio();
audio.src = "song.ogg";
audio.addEventListener('MozAudioAvailable', someFunction, false);
audio.play();

The following is an example of how both events might be used:

var channels,
    rate,
    frameBufferLength,
    samples;

function audioInfo() {
  var audio = document.getElementById('audio');

  // After loadedmetadata event, following media element attributes are known:
  channels          = audio.mozChannels;
  rate              = audio.mozSampleRate;
  frameBufferLength = audio.mozFrameBufferLength;
}

function audioAvailable(event) {
  samples = event.frameBuffer;
  var time    = event.time;

  for (var i = 0; i < frameBufferLength; i++) {
    // Do something with the audio data as it is played.
    processSample(samples[i], channels, rate);
  }
}
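
The same data can drive other kinds of analysis. As an additional illustration (not part of the original tutorial), this sketch computes a rough per-frame volume level (root mean square) from the interleaved samples:

function measureVolume(event) {
  var samples = event.frameBuffer,  // interleaved: [ch1, ch2, ..., chN, ch1, ch2, ...]
      sum = 0;

  for (var i = 0; i < samples.length; i++) {
    sum += samples[i] * samples[i];
  }

  // Roughly 0.0 for silence, approaching 1.0 for full-scale audio.
  var rms = Math.sqrt(sum / samples.length);
  return rms;
}

audio.addEventListener('MozAudioAvailable', measureVolume, false); // the element from the snippets above
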
Complete Example: Visualizing Audio Spectrum

This example calculates and displays FFT spectrum data for the playing audio:

[Image: Fft.png, a screenshot of the FFT spectrum visualization]

<!DOCTYPE html>
<html>
  <head>
    <title>JavaScript Spectrum Example</title>
  </head>
  <body>
    <audio id="audio-element"
           src="song.ogg"
           controls="true"
           onloadedmetadata="loadedMetadata();"
           style="width: 512px;">
    </audio>
    <div><canvas id="fft" width="512" height="200"></canvas></div>

    <script>
      var canvas = document.getElementById('fft'),
          ctx = canvas.getContext('2d'),
          channels,
          rate,
          frameBufferLength,
          fft;

      function loadedMetadata() {
        channels          = audio.mozChannels;
        rate              = audio.mozSampleRate;
        frameBufferLength = audio.mozFrameBufferLength;
         
        fft = new FFT(frameBufferLength / channels, rate);
      }

      function audioAvailable(event) {
        var fb = event.frameBuffer,
            t  = event.time, /* unused, but it's there */
            signal = new Float32Array(fb.length / channels),
            magnitude;

        for (var i = 0, fbl = frameBufferLength / 2; i < fbl; i++ ) {
          // Assuming interleaved stereo channels,
          // need to split and merge into a stereo-mix mono signal
          signal[i] = (fb[2*i] + fb[2*i+1]) / 2;
        }

        fft.forward(signal);

        // Clear the canvas before drawing spectrum
        ctx.clearRect(0,0, canvas.width, canvas.height);

        for (var i = 0; i < fft.spectrum.length; i++ ) {
          // multiply spectrum by a zoom value
          magnitude = fft.spectrum[i] * 4000;

          // Draw rectangle bars for each frequency bin
          ctx.fillRect(i * 4, canvas.height, 3, -magnitude);
        }
      }

      var audio = document.getElementById('audio-element');
      audio.addEventListener('MozAudioAvailable', audioAvailable, false);

      // FFT from dsp.js, see below
      var FFT = function(bufferSize, sampleRate) {
        this.bufferSize   = bufferSize;
        this.sampleRate   = sampleRate;
        this.spectrum     = new Float32Array(bufferSize/2);
        this.real         = new Float32Array(bufferSize);
        this.imag         = new Float32Array(bufferSize);
        this.reverseTable = new Uint32Array(bufferSize);
        this.sinTable     = new Float32Array(bufferSize);
        this.cosTable     = new Float32Array(bufferSize);

        var limit = 1,
            bit = bufferSize >> 1;

        while ( limit < bufferSize ) {
          for ( var i = 0; i < limit; i++ ) {
            this.reverseTable[i + limit] = this.reverseTable[i] + bit;
          }

          limit = limit << 1;
          bit = bit >> 1;
        }

        for ( var i = 0; i < bufferSize; i++ ) {
          this.sinTable[i] = Math.sin(-Math.PI/i);
          this.cosTable[i] = Math.cos(-Math.PI/i);
        }
      };

      FFT.prototype.forward = function(buffer) {
        var bufferSize   = this.bufferSize,
            cosTable     = this.cosTable,
            sinTable     = this.sinTable,
            reverseTable = this.reverseTable,
            real         = this.real,
            imag         = this.imag,
            spectrum     = this.spectrum;

        if ( bufferSize !== buffer.length ) {
          throw "Supplied buffer is not the same size as defined FFT. FFT Size: " + bufferSize + " Buffer Size: " + buffer.length;
        }

        for ( var i = 0; i < bufferSize; i++ ) {
          real[i] = buffer[reverseTable[i]];
          imag[i] = 0;
        }

        var halfSize = 1,
            phaseShiftStepReal,	
            phaseShiftStepImag,
            currentPhaseShiftReal,
            currentPhaseShiftImag,
            off,
            tr,
            ti,
            tmpReal,	
            i;

        while ( halfSize < bufferSize ) {
          phaseShiftStepReal = cosTable[halfSize];
          phaseShiftStepImag = sinTable[halfSize];
          currentPhaseShiftReal = 1.0;
          currentPhaseShiftImag = 0.0;

          for ( var fftStep = 0; fftStep < halfSize; fftStep++ ) {
            i = fftStep;

            while ( i < bufferSize ) {
              off = i + halfSize;
              tr = (currentPhaseShiftReal * real[off]) - (currentPhaseShiftImag * imag[off]);
              ti = (currentPhaseShiftReal * imag[off]) + (currentPhaseShiftImag * real[off]);

              real[off] = real[i] - tr;
              imag[off] = imag[i] - ti;
              real[i] += tr;
              imag[i] += ti;

              i += halfSize << 1;
            }

            tmpReal = currentPhaseShiftReal;
            currentPhaseShiftReal = (tmpReal * phaseShiftStepReal) - (currentPhaseShiftImag * phaseShiftStepImag);
            currentPhaseShiftImag = (tmpReal * phaseShiftStepImag) + (currentPhaseShiftImag * phaseShiftStepReal);
          }

          halfSize = halfSize << 1;
        }

        i = bufferSize/2;
        while(i--) {
          spectrum[i] = 2 * Math.sqrt(real[i] * real[i] + imag[i] * imag[i]) / bufferSize;
        }
      };
    </script>
  </body>
</html>
Writing Audio

It is also possible to set up an <audio> element for raw writing from script (i.e., without a src attribute). Content scripts can specify the audio stream's characteristics, then write audio samples using the following methods:

mozSetup(channels, sampleRate)

// Create a new audio element
var audioOutput = new Audio();
// Set up audio element with 2 channel, 44.1KHz audio stream.
audioOutput.mozSetup(2, 44100);

mozWriteAudio(buffer)

// Write samples using a JS Array
var samples = [0.242, 0.127, 0.0, -0.058, -0.242, ...];
var numberSamplesWritten = audioOutput.mozWriteAudio(samples);

// Write samples using a Typed Array
var samples = new Float32Array([0.242, 0.127, 0.0, -0.058, -0.242, ...]);
var numberSamplesWritten = audioOutput.mozWriteAudio(samples);

mozCurrentSampleOffset()

// Get current position of the underlying audio stream, measured in samples available.
var currentSampleOffset = audioOutput.mozCurrentSampleOffset();

Since the MozAudioAvailable event and the mozWriteAudio() method both use Float32Array, it is possible to take the output of one audio stream and pass it directly (or process first and then pass) to a second:

<audio id="a1" 
       src="song.ogg" 
       onloadedmetadata="loadedMetadata();"
       controls>
</audio>
<script>
var a1 = document.getElementById('a1'),
    a2 = new Audio(),
    buffer = [];

function loadedMetadata() {
  // Mute a1 audio.
  a1.volume = 0;
  // Setup a2 to be identical to a1, and play through there.
  a2.mozSetup(a1.mozChannels, a1.mozSampleRate);
}

function audioAvailable(event) {
  // Write the current framebuffer
  var frameBuffer = event.frameBuffer;
  writeAudio(frameBuffer);
}
a1.addEventListener('MozAudioAvailable', audioAvailable, false);

function writeAudio(audio) {
  // If there's buffered data, write that first
  buffer = (buffer.length === 0) ? audio :
    buffer.concat(audio);

  var written = a2.mozWriteAudio(buffer);
  // If all data wasn't written, buffer it:
  if (written < buffer.length) {
    buffer = buffer.slice(written);
  } else {
    buffer.length = 0;
  }
}
</script>

Audio data written using the mozWriteAudio() method needs to be written at a regular interval, in equal portions, in order to stay a little ahead of the hardware's current sample offset (obtainable with mozCurrentSampleOffset()), where a little means something on the order of 500ms of samples. For example, when working with 2 channels at 44100 samples per second, a writing interval of 100ms, and a pre-buffer of 500ms, one would write an array of (2 * 44100 / 10) = 8820 samples per interval, and keep writing until the total number of samples written reaches (currentSampleOffset + 2 * 44100 / 2), that is, 44100 samples (500ms) ahead of the current offset.
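
As a rough sketch of that arithmetic (illustrative only; the complete tone generator below shows the same pattern with real sample data):

var channels      = 2,
    rate          = 44100,
    portionSize   = channels * rate / 10,  // 8820 samples, i.e. 100ms per write
    prebufferSize = channels * rate / 2,   // 44100 samples, i.e. 500ms of audio
    currentWritePosition = 0;

function topUp(audioOutput) {
  // Keep writing until we are prebufferSize samples ahead of the hardware.
  while (audioOutput.mozCurrentSampleOffset() + prebufferSize >= currentWritePosition) {
    currentWritePosition += audioOutput.mozWriteAudio(new Float32Array(portionSize)); // silence
  }
}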

Complete Example: Creating a Web Based Tone Generator

This example creates a simple tone generator, and plays the resulting tone.

<!DOCTYPE html>
<html>
  <head>
    <title>JavaScript Audio Write Example</title>
  </head>
  <body>
    <input type="text" size="4" id="freq" value="440"><label for="freq">Hz</label>
    <button onclick="start()">play</button>
    <button onclick="stop()">stop</button>

    <script type="text/javascript">
      var sampleRate = 44100,
          portionSize = sampleRate / 10, 
          prebufferSize = sampleRate / 2,
          freq = undefined; // no sound

      var audio = new Audio();
      audio.mozSetup(1, sampleRate);
      var currentWritePosition = 0;

      function getSoundData(t, size) {
        var soundData = new Float32Array(size);
        if (freq) {
          var k = 2* Math.PI * freq / sampleRate;
          for (var i=0; i<size; i++) {
            soundData[i] = Math.sin(k * (i + t));
          }
        }
        return soundData;
      }

      function writeData() {
        while(audio.mozCurrentSampleOffset() + prebufferSize >= currentWritePosition) {
          var soundData = getSoundData(currentWritePosition, portionSize);
          audio.mozWriteAudio(soundData);
          currentWritePosition += portionSize;
        }
      }

      // initial write
      writeData(); 
      var writeInterval = Math.floor(1000 * portionSize / sampleRate);
      setInterval(writeData, writeInterval);

      function start() {
        freq = parseFloat(document.getElementById("freq").value);
      }

      function stop() {
        freq = undefined;
      }
  </script>
  </body>
</html>

DOM Implementation

nsIDOMNotifyAudioAvailableEvent

Audio data is made available via the following event:

  • Event: MozAudioAvailable
  • Event handler: onmozaudioavailable

The AudioAvailableEvent is defined as follows:

interface nsIDOMNotifyAudioAvailableEvent : nsIDOMEvent
{
  // frameBuffer is really a Float32Array
  readonly attribute jsval  frameBuffer;
  readonly attribute float  time;
};

The frameBuffer attribute contains a typed array (Float32Array) with the raw audio data (32-bit float values) obtained from decoding the audio (e.g., the raw data being sent to the audio hardware vs. encoded audio). This is of the form [channel1, channel2, ..., channelN, channel1, channel2, ..., channelN, ...]. All audio frames are normalized to a length of channels * 1024 by default, but could be any power of 2 between 512 and 32768 if the user has set a different length using the mozFrameBufferLength attribute.

The time attribute contains a float representing the time in seconds since the start.
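
For illustration (a sketch, not part of the interface definition), a MozAudioAvailable handler can split the interleaved frameBuffer into one Float32Array per channel, given the channel count read from the media element:

function splitChannels(event, channels) {
  var fb = event.frameBuffer,
      samplesPerChannel = fb.length / channels,
      perChannel = [];

  for (var c = 0; c < channels; c++) {
    var data = new Float32Array(samplesPerChannel);
    for (var i = 0; i < samplesPerChannel; i++) {
      data[i] = fb[i * channels + c];  // de-interleave
    }
    perChannel.push(data);
  }

  // event.time (seconds from the start) says where this frame begins.
  return perChannel;
}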

nsIDOMHTMLMediaElement additions

Audio metadata is made available via three new attributes on the HTMLMediaElement. These attributes throw if accessed before the LoadedMetadata event occurs. Users who need this info before the audio starts playing should not use autoplay, since the audio might start before a loadedmetadata handler has run.

The three new attributes are defined as follows:

  readonly attribute unsigned long mozChannels;
  readonly attribute unsigned long mozSampleRate;
           attribute unsigned long mozFrameBufferLength;

The mozChannels attribute contains the number of channels in the audio resource (e.g., 2). The mozSampleRate attribute contains the number of samples per second that will be played, for example 44100. Both are read-only.

The mozFrameBufferLength attribute indicates the number of samples that will be returned in the framebuffer of each MozAudioAvailable event. This number is a total for all channels, and by default is set to be the number of channels * 1024 (e.g., 2 channels * 1024 samples = 2048 total).

The mozFrameBufferLength attribute can also be set to a new value, if users want lower latency, or larger amounts of data, etc. The size given must be a power of 2 between 512 and 32768. The following are all valid lengths:

  • 512
  • 1024
  • 2048
  • 4096
  • 8192
  • 16384
  • 32768

Using any other size will result in an exception being thrown. The best time to set a new length is after the loadedmetadata event fires, when the audio info is known, but before the audio has started playing or MozAudioAvailable events have begun firing.
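
For example (a sketch, assuming an element with id "audio"), a page that wants lower latency for stereo audio could request smaller framebuffers:

var audio = document.getElementById("audio");

audio.addEventListener('loadedmetadata', function () {
  // Default would be audio.mozChannels * 1024 (2048 for stereo);
  // any power of 2 between 512 and 32768 is accepted.
  audio.mozFrameBufferLength = 1024;
}, false);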

nsIDOMHTMLAudioElement additions

The HTMLAudioElement has also been extended to allow write access. Audio writing is achieved by adding three new methods:

  void mozSetup(in long channels, in long rate);
  unsigned long mozWriteAudio(array); // array is Array() or Float32Array()
  unsigned long long mozCurrentSampleOffset();

The mozSetup() method allows an <audio> element to be set up for writing from script. This method must be called before mozWriteAudio or mozCurrentSampleOffset can be called, since an audio stream has to be created for the media element. It takes two arguments:

  1. channels - the number of audio channels (e.g., 2)
  2. rate - the audio's sample rate (e.g., 44100 samples per second)

The choices made for channels and rate are significant, because they determine the amount of data you must pass to mozWriteAudio(). That is, you must pass either an array with 0 elements--similar to flushing the audio stream--or enough data for each channel specified in mozSetup().

The mozSetup() method, if called more than once, will create a new audio stream (destroying the existing one, if present) with each call. Thus it is safe to call it more than once, but unnecessary.

The mozWriteAudio() method can be called after mozSetup(). It allows audio data to be written directly from script. It takes one argument, array. This is a JS Array (i.e., new Array()) or a typed float array (i.e., new Float32Array()) containing the audio data (floats) you wish to write. It must be 0 or N elements in length, where N % channels == 0, otherwise an exception is thrown.

The mozWriteAudio() method returns the number of samples that were just written, which may or may not be the same as the number in array. Only the number of samples that can be written without blocking the audio hardware will be written. It is the responsibility of the caller to deal with any samples that don't get written in the first pass (e.g., buffer and write in the next call).

The mozCurrentSampleOffset() method can be called after mozSetup(). It returns the current position (measured in samples) of the audio stream. This is useful when determining how much data to write with mozWriteAudio().

All of mozWriteAudio(), mozCurrentSampleOffset(), and mozSetup() will throw exceptions if called out of order. mozSetup() will also throw if a src attribute has previously been set on the audio element (i.e., you can't do both at the same time).
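
Putting the required call order together, a minimal sketch (output, pending, and write are illustrative names, not part of the API; the buffering mirrors the writeAudio() example in the tutorial above):

var output   = new Audio(),
    channels = 2,
    rate     = 44100,
    pending  = [];                   // samples the hardware has not yet accepted

output.mozSetup(channels, rate);     // must come first, and the element must have no src

function write(samples) {            // samples is a plain JS array, length % channels == 0
  pending = pending.concat(samples);
  var written = pending.length ? output.mozWriteAudio(pending) : 0;
  pending = pending.slice(written);  // keep whatever was not accepted for the next call
}

// Samples queued ahead of playback:
//   (total samples accepted so far) - output.mozCurrentSampleOffset()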

Security

Similar to the <canvas> element and its getImageData method, the MozAudioAvailable event's frameBuffer attribute protects against information leakage between origins.

The MozAudioAvailable event's frameBuffer attribute will throw if the origin of the audio resource does not match the document's origin. NOTE: this will affect users who have the security.fileuri.strict_origin_policy preference set, and are working locally with file:/// URIs.
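
A handler can guard against this case (a sketch; the exact exception is whatever the implementation throws for a cross-origin framebuffer):

function audioAvailable(event) {
  var fb;
  try {
    fb = event.frameBuffer;
  } catch (e) {
    // Cross-origin audio (or a local file:/// URI with strict_origin_policy): no sample data.
    return;
  }
  // ... process fb as usual ...
}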

Compatibility with Audio Backends

The current MozAudioAvailable implementation integrates with Mozilla's decoder abstract base classes, and therefore, any audio decoder which uses these base classes automatically dispatches MozAudioAvailable events. At the time of writing, this includes the Ogg and WebM decoders but not the Wave decoder.

Additional Resources

A series of blog posts document the evolution and implementation of this API: http://vocamus.net/dave/?cat=25. Another overview by Al MacDonald is available at http://weblog.bocoup.com/web-audio-all-aboard.

Bug

The work on this API is available in Mozilla bug 490705 (https://bugzilla.mozilla.org/show_bug.cgi?id=490705).

Obtaining Code and Builds

Latest Try Server Builds:

http://ftp.mozilla.org/pub/mozilla.org/firefox/tryserver-builds/david.humphrey@senecac.on.ca-ecf5c7f4e806/

JavaScript Audio Libraries

  • We have started work on a JavaScript library to make building audio web apps easier. Details are on the Audio Data API JS Library page.
  • dynamicaudio.js (http://github.com/bfirsh/dynamicaudio.js) - An interface for writing audio with a Flash fallback for older browsers. NOTE: not necessarily up-to-date with this version of the API.

Working Audio Data Demos

A number of working demos have been created, including:

  • Writing Audio from JavaScript, Digital Signal Processing
      ◦ API Example: Inverted Waveform Cancellation (http://code.bocoup.com/audio-data-api/examples/inverted-waveform-cancellation)
      ◦ API Example: Stereo Splitting and Panning (http://code.bocoup.com/audio-data-api/examples/stereo-splitting-and-panning)
      ◦ API Example: Mid-Side Microphone Decoder (http://code.bocoup.com/audio-data-api/examples/mid-side-microphone-decoder/)
      ◦ API Example: Ambient Extraction Mixer (http://code.bocoup.com/audio-data-api/examples/ambient-extraction-mixer/)
      ◦ API Example: Worker Thread Audio Processing (http://code.bocoup.com/audio-data-api/examples/worker-thread-audio-processing/)
  • Beat Detection (also showing use of WebGL for 3D visualizations)
      ◦ http://cubicvr.org/CubicVR.js/bd3/BeatDetektor1HD.html (video: http://vimeo.com/11345262)
      ◦ http://cubicvr.org/CubicVR.js/bd3/BeatDetektor2HD.html (video of an older version: http://vimeo.com/11345685)
      ◦ http://cubicvr.org/CubicVR.js/bd3/BeatDetektor3HD.html (video: http://www.youtube.com/watch?v=OxoFcyKYwr0&fmt=22)
      ◦ http://cubicvr.org/CubicVR.js/bd3/BeatDetektor4HD.html (video: http://www.youtube.com/watch?v=dym4DqpJuDk&fmt=22)

NOTE: If you try to run demos created with the original API using a build that implements the new API, you may encounter bug 560212. We are aware of this, as is Mozilla, and it is being investigated.

Demos Needing to be Updated to New API

  • FFT visualization (calculated with js)
      ◦ http://weare.buildingsky.net/processing/dsp.js/examples/fft.html
  • Writing Audio from JavaScript, Digital Signal Processing
      ◦ Reverb effect: http://code.almeros.com/code-examples/reverb-firefox-audio-api/ (video: http://vimeo.com/13386796)
      ◦ Csound shaker instrument ported to JavaScript via Processing.js: http://scotland.proximity.on.ca/dxr/tmp/audio/shaker/instruments/shaker.htm
      ◦ http://weare.buildingsky.net/processing/dft.js/audio.new.html (video: http://vimeo.com/8525101)
      ◦ JS Multi-Oscillator Synthesizer: http://weare.buildingsky.net/processing/dsp.js/examples/synthesizer.html (video: http://vimeo.com/11411533)
      ◦ JS IIR Filter: http://weare.buildingsky.net/processing/dsp.js/examples/filter.html (video: http://vimeo.com/11335434)
      ◦ Biquad filter: http://www.ricardmarxer.com/audioapi/biquad/ (demo by Ricard Marxer)
      ◦ Interactive Audio Application, Bloom: http://code.bocoup.com/bloop/color/bloop.html (videos: http://vimeo.com/11346141 and http://vimeo.com/11345133)

Third Party Discussions

A number of people have written about our work, including:

  • http://news.slashdot.org/story/10/05/26/1936224/Breakthroughs-In-HTML-Audio-Via-Manipulation-With-JavaScript
  • http://ajaxian.com/archives/amazing-audio-api-javascript-demos
  • http://www.webmonkey.com/2010/08/sampleplayer-makes-your-browser-sing-sans-flash/