removing obsolete sections from the UC&R draft
author Olivier Thereaux <Olivier.Thereaux@bbc.co.uk>
Thu, 30 Aug 2012 08:46:30 +0100
changeset 150 5b010953b78b
parent 149 a65c8e15497d
child 151 a1c1325607fa
removing obsolete sections from the UC&R draft
- All requirements have been covered in detail within each scenario; removing Section 3.
- Consequently, the mapping table is obsolete. The editors may create a new one at a later date.
- Section 5 (Features out of scope) is no longer the best repository for such decisions. Bugzilla has markers for WONTFIX and LATER, as well as milestones; the group should use those instead to list features it has not or will not implement.
reqs/Overview.html
--- a/reqs/Overview.html	Thu Aug 30 08:42:47 2012 +0100
+++ b/reqs/Overview.html	Thu Aug 30 08:46:30 2012 +0100
@@ -1,4 +1,3 @@
-
 <!DOCTYPE html>
 <html>
   <head>
@@ -396,391 +395,6 @@
     
     </section>
       
-      <section>
-    
-      <h2>Requirements </h2>
-      <section>
-      <h3>Sources of audio</h3>
-      
-      <p>The Audio Processing API can operate on a number of sources of audio: </p>
-      
-      <ul>
-        <li>a DOM element can be a source: HTML &lt;audio&gt; elements (with both remote and local sources)</li>
-        <li>memory-resident PCM data can be a source: individual memory-resident “buffers” of PCM audio data which are not associated with &lt;audio&gt; elements</li>
-        <li>programmatically calculated data can be a source: on-the-fly generation of audio data</li>
-        <li>devices can act as a source: audio captured from devices such as microphones and instruments</li>
-        <li>a remote peer can act as a source: audio from a remote peer (e.g. a WebRTC source)</li>
-      </ul>
-      
-      
-      <h4>Support for primary audio file formats </h4>
-      <p>Sources of audio can be compressed or uncompressed, in typical standard formats found on the Web and in the industry (e.g. MP3 or WAV)</p>
-      
-      <h4>One source, many sounds </h4>
-      <p>It should be possible to load a single source of sound once and play multiple, overlapping instances of it, mixed together, without reloading it.</p>
-          
-      <h4>Playing / Looping  sources of audio </h4>
-      <p>A subrange of a source can be played. It should be possible to start and stop playing a source of audio at any desired offset within the source. This would then allow the source to be used as an audio sprite. 
-      See: <a href="http://remysharp.com/2010/12/23/audio-sprites/" class="external free" title="http://remysharp.com/2010/12/23/audio-sprites/">http://remysharp.com/2010/12/23/audio-sprites/</a>
-      And: <a href="http://lists.w3.org/Archives/Public/public-audio/2012JanMar/0006.html" class="external free" title="http://lists.w3.org/Archives/Public/public-audio/2012JanMar/0006.html">http://lists.w3.org/Archives/Public/public-audio/2012JanMar/0006.html</a></p>
-      <p>A source can be looped. It should be possible to loop memory-resident sources. It should be possible to loop on a whole-source and intra-source basis, or to play the beginning of a sound leading into a looped segment.</p>
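The audio-sprite pattern above can be sketched in plain JavaScript. The sprite map and helper below are hypothetical names, not part of any API; the resulting pair is what would be passed to the real `AudioBufferSourceNode.start(when, offset, duration)` call.

```javascript
// Hypothetical sprite map: named clips inside a single audio file,
// each described by its offset and duration in seconds.
const spriteMap = {
  laser: { offset: 0.0, duration: 0.4 },
  jump:  { offset: 0.5, duration: 0.3 },
};

// Resolve a clip name to the [offset, duration] pair that would be
// handed to AudioBufferSourceNode.start(when, offset, duration).
function spriteArgs(map, name) {
  const clip = map[name];
  if (!clip) throw new Error("unknown sprite: " + name);
  return [clip.offset, clip.duration];
}
```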
-      
-      <h4>Capture of audio from microphone, line in, other inputs </h4>
-      <p>Audio from a variety of sources, including line in and microphone input, should be made available to the Web Audio API for processing.</p>
-      
-      <h4>Adding effects to the audio part of a video stream, keeping it in sync with the video playback</h4>
-      <p>The API should have access to the audio part of video streams being played in the browser, and it should be possible to add effects to it and output the result in real time during video playback, keeping audio and video in sync.</p>
-      
-      <h4>Sample-accurate scheduling of playback </h4>
-      <p>In the case of memory-resident sources it should be possible to trigger the playback of the audio in a sample-accurate fashion. </p>
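As an illustration of what "sample-accurate" implies, the sketch below resolves a start time in seconds to an exact sample-frame index; the function name is ours, not part of any API.

```javascript
// Sample-accurate scheduling means resolving times to whole sample
// frames: at 44.1 kHz a single frame lasts roughly 22.7 microseconds.
function timeToFrame(seconds, sampleRate) {
  return Math.round(seconds * sampleRate);
}
```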
-      
-      
-      <h4>Buffering </h4>
-      <p>From:  <a href="http://lists.w3.org/Archives/Public/public-audio/2012JanMar/0006.html" class="external free" title="http://lists.w3.org/Archives/Public/public-audio/2012JanMar/0006.html">http://lists.w3.org/Archives/Public/public-audio/2012JanMar/0006.html</a></p>
-      <dl>
-        <dd>It would be nice to have something like AudioNode.onready(123.2, callback)</dd>
-        <dd>if the browser is really sure to playback properly.</dd>
-      </dl>
-      
-      
-      <h4>Support for basic polyphony </h4>
-      
-      <p>It must be possible to play back a large number of sources simultaneously. As a guideline, the use cases have identified that 32 [TODO: validate the number based on FMOD] simultaneous audio sources are required for typical music and gaming applications.</p>
-      
-      <h4>Multi-channel support</h4>
-      <p>The API should support multi-channel (surround) sounds, in particular:</p>
-      <ul>
-        <li>Channel layouts and mapping should be supported. Mapping of typical layouts (mono, stereo, quad, 5.1 and 7.1) should be clearly specified.</li>
-        <li>Channel up and down-mixing should be allowed. That means
-        <ul>
-          <li>The ability to match the number of channels supported by the hardware (with no upper limit?)</li>
-          <li>The ability to up mix any source to match the number of channels of the system</li>
-          <li>The ability to use as source an audio stream with more channels than the system supports</li>
-          <li>The ability to down mix a source or stream to any number of channels, down to mono</li>
-        </ul>
-        </li>
-      </ul>  
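As one concrete instance of the down-mixing requirement above, the sketch below averages an interleaved stereo buffer down to mono, the simplest case listed; the helper is illustrative, not a Web Audio API call.

```javascript
// Down-mix interleaved stereo [L0, R0, L1, R1, ...] to mono by
// averaging the two channels of each frame.
function stereoToMono(interleaved) {
  const mono = new Float32Array(interleaved.length / 2);
  for (let i = 0; i < mono.length; i++) {
    mono[i] = 0.5 * (interleaved[2 * i] + interleaved[2 * i + 1]);
  }
  return mono;
}
```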
-      <h4>Rapid scheduling of many independent sources </h4>
-      <p>The ability to construct and schedule the playback of approximately 100 notes or sonic events per second across all voices would be required for typical music synthesis and gaming applications. 
-      </p>
-      <h4>Triggering of audio sources </h4>
-      <p>It should be possible to trigger playback of audio sources in response to DOM events (onMouseOver, onKeyPress etc.); in addition, it should be possible for the client to ascertain (for example using a callback) when an event has started and finished.
-      </p><p>A conforming specification MUST be able to play pre-loaded sounds with less than 'x' milliseconds of latency from the time the JavaScript code is executed until the time the sound is heard through the speakers, where the audio path is not running through any external sound devices and the browser is using the default sound driver provided by the operating system.
-      </p><p><i>Examples:</i>
-      </p>
-      <pre>
-      document.addEventListener( 'keypress', function(){
-        source.play();
-      }, false );
-
-      </pre>
-      <h4>Audio quality </h4>
-      <p>As a general requirement audio playback should be free of glitches, jitter and other distortions.
-      </p>
-      
-      </section>
-      <section>
-      <h3>Transformations of sources of audio </h3>
-      <p>Each of the sources of audio described above should be able to be transformed in real time. This processing should, as much as possible, have a low latency on a wide variety of target platforms. 
-      </p>
-      <h4>Modularity of transformations </h4>
-      <p>The Audio Processing API should allow arbitrary combinations of transforms. A number of use-cases have the requirement that the developer has control over the transforms in a “modular” fashion.
-      </p>
-      <h4>Transformation parameter automation </h4>
-      <p>Where there are parameters for these effects, it should be possible to automatically modify these parameters in a programmatic, time-dependent way. Parameter changes must be able to be scheduled relative to a source’s onset time which may be in the future. Primary candidates for automation include gain, playback rate and filter frequency.
-      </p><p>Transformations include:
-      </p>
-      <h4>Gain adjustment </h4>
-      <h4>Playback rate adjustment </h4>
-      <h4>Spatialization </h4>
-      <ul><li> equal power/level panning
-      </li><li> binaural HRTF-based spatialization 
-      </li><li> including the influence of the directivity of acoustic sources
-      </li><li> including the attenuation of acoustic sources by distance
-      </li><li> including the effect of movement on acoustic sources
-      </li></ul>
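The equal-power panning case above has a simple closed form, sketched here in plain JavaScript (the Web Audio API's StereoPannerNode implements this internally; the helper below is illustrative).

```javascript
// Equal-power panning: map pan in [-1, 1] onto a quarter circle so
// that left² + right² = 1 and perceived loudness stays constant.
function equalPowerGains(pan) {
  const theta = (pan + 1) * Math.PI / 4; // -1..1 mapped to 0..π/2
  return { left: Math.cos(theta), right: Math.sin(theta) };
}
```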
-      <h4>Filtering </h4>
-      <ul><li> graphic EQ
-      </li><li> low-pass/high-pass/band-pass filters
-      </li><li> impulse response filters
-      </li><li> pitch shifting
-      </li><li> time stretching
-      </li></ul>
-      <h4>Noise gating </h4>
-      <p>It is possible to apply a real-time noise gate to a source to automatically mute it when its average power level falls below some arbitrary threshold (is this a reflexive case of ducking?).
-      </p>
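A minimal block-based sketch of the noise gate described above, in plain JavaScript; the function name and threshold convention are ours, not part of any API.

```javascript
// Block-based noise gate: mute a block of samples whenever its
// average power falls below the given threshold.
function noiseGate(samples, powerThreshold) {
  let power = 0;
  for (const s of samples) power += s * s;
  power /= samples.length;
  return power < powerThreshold ? samples.map(() => 0) : samples;
}
```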
-      <h4>Dynamic range compression </h4>
-      <p><b>TBA</b>
-      </p>
-      <h4>The simulation of acoustic spaces </h4>
-      <p>It should be possible to give the listener the impression that a source is in a specific acoustic environment. The simulation of this environment may also be based on real-world measurements.
-      </p>
-      <h4>The simulation of occlusions and obstructions </h4>
-      <p>It should be possible to give the listener the impression that an acoustic source is occluded or obstructed by objects in the virtual space.
-      </p>
-      </section>
-      <section>
-        <h3>Source Combination and Interaction </h3>
-      <h4>Mixing Sources </h4>
-      <p>Many independent sources can be mixed.
-      </p>
-      <h4>Ducking </h4>
-      <p>It is possible to mute or attenuate one source based on the average power level of another source, in real time.
-      </p>
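The ducking behaviour above amounts to choosing a gain from the power of a controlling source. A minimal sketch, with hypothetical names, assuming per-block processing:

```javascript
// Ducking: pick the gain for a background source based on the
// average power of a controlling source (e.g. a voice-over).
function duckGain(controlSamples, powerThreshold, duckedGain) {
  let power = 0;
  for (const s of controlSamples) power += s * s;
  power /= controlSamples.length;
  return power > powerThreshold ? duckedGain : 1.0;
}
```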
-      <h4>Echo cancellation </h4>
-      <p>It is possible to apply echo cancellation to a set of sources based on real-time audio input (say, a microphone).
-      </p>
-      </section>
-      <section>
-        <h3>Analysis of sources </h3>
-      <h4>Level detection </h4>
-      <p>A time-averaged volume or power level can be extracted from a source in real time (for visualisation or conversation-status purposes).
-      </p>
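For illustration, the level extraction described above can be computed per block as an RMS value in dBFS; the helper is a sketch, not a Web Audio API function.

```javascript
// Time-averaged level of one block of samples, reported in dBFS,
// where a constant full-scale signal (all 1.0) reads as 0 dBFS.
function rmsDb(samples) {
  let sum = 0;
  for (const s of samples) sum += s * s;
  const rms = Math.sqrt(sum / samples.length);
  return 20 * Math.log10(rms);
}
```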
-      <h4>Frequency domain analysis </h4>
-      <p>A frequency spectrum can be extracted from a source in real time (for visualisation purposes).
-      </p>
-      </section>
-      <section>
-        <h3>Synthesis of sources </h3>
-      <h4>Generation of common signals for synthesis and parameter modulation purposes </h4>
-      <p>For example: sine, sawtooth, square and white noise.
-      </p>
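A minimal sketch of generating one block of each signal named above into a Float32Array (in the Web Audio API this role is played by OscillatorNode; the helper and shape names below are illustrative).

```javascript
// Generate one block of a common test signal from a normalised phase.
function generate(shape, freq, sampleRate, length) {
  const out = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    const phase = (freq * i / sampleRate) % 1; // normalised 0..1
    switch (shape) {
      case "sine":     out[i] = Math.sin(2 * Math.PI * phase); break;
      case "square":   out[i] = phase < 0.5 ? 1 : -1; break;
      case "sawtooth": out[i] = 2 * phase - 1; break;
      case "noise":    out[i] = 2 * Math.random() - 1; break;
      default: throw new Error("unknown shape: " + shape);
    }
  }
  return out;
}
```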
-      <h4>The ability to read in standard definitions of wavetable instruments </h4>
-      <p>(e.g. Sound Font, DLS)
-      </p>
-      <h4>Acceptable performance of synthesis </h4>
-      <p>TBD</p>
-
-      <h2>Other Considerations </h2>
-      <h4> Performance and Hardware-acceleration friendliness </h4>
-      <ul><li> From the WebRTC WG: audio processing needs to be feasible for real-time communications; this means efficient processing, using hardware capabilities as much as possible.
-      </li><li> Improved performance and reliability to play, pause, stop and cache sounds, especially using the HTML5 AppCache offline caching for the HTML audio element. From Paul Bakaus, Zynga (see <a href="http://lists.w3.org/Archives/Public/public-audio/2011AprJun/0128.html" title="http://lists.w3.org/Archives/Public/public-audio/2011AprJun/0128.html">thread</a>)
-      </li></ul>
-      </section>
-
-
-    </section>
-  <section>
-
-      <h2>Mapping Use Cases and Requirements</h2>
-
-      <table>
-      <tr>
-        <th style="width: 35em">Requirement Family</th>
-        <th style="width: 55em">Requirement</th>
-        <th style="width: 20em">Requirement Priority</th>
-        <th>Video Chat</th>
-        <th>HTML5 game with audio effects, music</th>
-        <th>Online music production tool</th>
-        <th>Online radio broadcast</th>
-        <th>writing music on the web</th>
-        <th>wavetable synthesis of a virtual music instrument</th>
-        <th>Audio / Music Visualization</th>
-        <th>UI/DOM Sounds</th>
-        <th>UC-9: Language learning</th>
-        <th>UC-10: Podcast on a flight</th>
-        <th>UC-11: DJ music at 125 BPM</th>
-        <th>UC-12: Soundtrack and sound effects in a video editing tool</th>
-        <th>UC-13: Web-based guitar practice service</th>
-        <th>UC-14: User Control of Audio</th>
-        <th>UC-15: Video commentary</th>
-      </tr>
-      <tr>
-        <th colspan="2">Use Case Priority</th>
-        <td></td>
-        <th>High</th>
-        <th>High</th>
-        <th>Low</th>
-        <th>Low</th>
-        <th>Low</th>
-        <th>High</th>
-        <th>High</th>
-        <th>Low</th>
-        <th>Low</th>
-        <th>Low</th>
-        <th>Low</th>
-        <th>Low</th>
-        <th>Low</th>
-        <th>Low</th>
-        <th>Low</th>
-      </tr>
-      <tr>
-      <th rowspan='11'>Sources of audio</th>
-        <td>Support for primary audio file formats</td>
-        <td>Baseline</td>
-    <!--    1            2            3            4            5            6            7            8            9            10           11          12           13           14         15    -->
-
-        <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>  <td> </td>  <td>✓</td>
-      </tr>
-      <tr>
-        <td> One source, many sounds </td>
-        <td>Minority, but important</td>
-        <td> </td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-        <td> Playing / Looping sources of audio </td>
-        <td>Baseline</td>
-        <td> </td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>  <td>✓</td>  <td>✓</td>
-      </tr>
-      <tr>
-        <td>Capture of audio from microphone, line in, other inputs</td>
-        <td>Minority, but important</td>
-        <td>✓</td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td>✓</td>  <td> </td>  <td>✓</td>
-      </tr>
-      <tr>
-        <td>Adding effects to the audio part of a video stream, and keep it in sync with the video playback</td>
-        <td>Minority, but important</td>
-        <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>  <td> </td>  <td>✓</td>
-      </tr>
-      <tr>
-        <td> Sample-accurate scheduling of playback </td>
-        <td>Minority, but important</td>
-        <td> </td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-        <td> Buffering </td>
-        <td>Minority, but important</td>
-        <td> </td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td>✓</td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-        <td> Multi-channel support</td>
-        <td>Baseline</td>
-        <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>  <td>✓</td>  <td>✓</td>
-      </tr>
-      <tr>
-        <td> Support for basic polyphony </td>
-        <td>Baseline</td>
-        <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td> </td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-
-        <td> Rapid scheduling of many independent sources </td>
-        <td>Minority, but important</td>
-        <td> </td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-        <td> Triggering of audio sources </td>
-        <td>Minority, but important</td>
-        <td> </td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>  <td> </td>  <td>✓</td>
-      </tr>
-      <tr>
-        <td> Audio quality </td>
-        <td>Baseline</td>
-        <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td>✓</td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-
-    <!--    1            2            3            4            5            6            7            8            9            10           11          12           13           14         15    -->
-
-
-      <th rowspan='10'>Transformations of sources of audio </th>
-        <td> Modularity of transformations </td>
-        <td>Baseline</td>
-        <td>✓</td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-        <td> Transformation parameter automation </td>
-        <td>Baseline</td>
-        <td>✓</td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td> </td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-        <td> Gain adjustment </td>
-        <td>Baseline</td>
-        <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td>✓</td>  <td>✓</td>  <td>✓</td>
-      </tr>
-      <tr>
-        <td> Simple playback rate adjustment </td>
-        <td>Baseline</td>
-        <td> </td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td> </td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-        <td> Spatialization </td>
-        <td>Minority, but important</td>
-        <td>✓</td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-        <td> Filtering </td>
-        <td>Baseline</td>
-        <td>✓</td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td>✓</td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-        <td> Noise gating </td>
-        <td>Minority, but important</td>
-        <td>✓</td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td>✓</td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-        <td> Dynamic range compression </td>
-        <td>Minority, but important</td>
-        <td> </td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td>✓</td>   <td>✓</td>  <td>✓</td>  <td>✓</td>
-      </tr>
-      <tr>
-        <td> The simulation of acoustic spaces </td>
-        <td>Minority, but important</td>
-        <td>✓</td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-        <td>The simulation of occlusions and obstructions </td>
-        <td>Minority, but important</td>
-        <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>  <td> </td>  <td> </td>
-      </tr>
-
-    <!--    1            2            3            4            5            6            7            8            9            10           11          12           13           14         15    -->
-
-      <tr>
-      <th rowspan='3'>Source Combination and Interaction </th>
-        <td> Mixing Sources </td>
-        <td>Baseline</td>
-        <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-        <td> Ducking </td>
-        <td>Minority, but important</td>
-        <td>✓</td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td> </td>  <td> </td>  <td>✓</td>
-      </tr>
-      <tr>
-        <td> Echo cancellation </td>
-        <td>Minority, but important</td>
-        <td>✓</td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-      <th rowspan='2'>Analysis of sources </th>
-        <td> Level detection </td>
-        <td>Minority, but important</td>
-        <td>✓</td>   <td> </td>   <td>✓</td>   <td>✓</td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-        <td> Frequency domain analysis </td>
-        <td>Minority, but important</td>
-        <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-      <th rowspan='3'>Synthesis of sources</th>
-        <td> Generation of common signals for synthesis and parameter modulation purposes </td>
-        <td>Minority, but important</td>
-        <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-        <td> The ability to read in standard definitions of wavetable instruments </td>
-        <td>Minority, but important</td>
-        <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>  <td> </td>  <td> </td>
-      </tr>
-      <tr>
-        <td> Acceptable performance of synthesis </td>
-        <td>Minority, but important</td>
-        <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td>✓</td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>   <td> </td>  <td> </td>  <td> </td>
-      </tr>
-      </table>
-
-    </section>
-
-        <section>
-        <h2>Features out of scope</h2>
-        <p>During its lifetime, the W3C Audio Working Group also considered a number of features and requirements which were not deemed important enough to keep in scope for the first revision of the Web Audio API, but which were worth recording for future reference.</p>
-        
-        <h3>An AudioParam constructor in the context of a JavaScriptAudioNode</h3>
-        <p>TBA: https://www.w3.org/2011/audio/track/issues/6</p>
-
-</section>
     
     <section class='appendix'>
       <h2>Acknowledgements</h2>