Add explicit mentions of lack of speed/pitch change support in Web Audio API.
--- a/reqs/Overview.html Tue Aug 21 15:36:48 2012 -0400
+++ b/reqs/Overview.html Wed Aug 22 14:00:16 2012 -0400
@@ -140,6 +140,7 @@
</li>
<li><p>This scenario is also a good example of the need for audio capture (from line in, internal microphone or other inputs). We expect this to be provided by <a href="http://www.w3.org/TR/html-media-capture/" title="HTML Media Capture">HTML Media Capture</a>.</p></li>
<li><p>The <a href="http://tools.ietf.org/html/draft-ietf-rtcweb-use-cases-and-requirements-06#section-4.2.1">first scenario in WebRTC's Use Cases and Requirements document</a> has been a strong inspiration for this scenario. Most of the technology, described above should be covered by the <a href="http://www.w3.org/TR/webrtc/" title="WebRTC 1.0: Real-time Communication Between Browsers">Web Real-Time Communication API</a>. The scenario illustrates, however, the need to integrate audio processing with the handling of RTC streams, with a technical requirement for processing of the audio signal at both ends (capture of the user's voice and output of its correspondents' conversation).</p></li>
+ <li><p>Speed and pitch changes are currently unsupported by the Web Audio API.</p></li>
</ol>
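The limitation noted in the list above can be made concrete: variable-rate playback (as with <code>AudioBufferSourceNode.playbackRate</code>) resamples the signal, which inherently couples speed and pitch — playing at twice the rate also raises pitch by an octave. The following pure-function sketch is illustrative only; the function name and the linear-interpolation strategy are assumptions, not part of the Web Audio API.

```javascript
// Illustrative sketch: naive variable-rate playback via linear
// interpolation. Reading the source faster shortens the output AND
// shifts its pitch up -- which is why an independent speed change
// needs a custom time-stretching algorithm.
function resample(samples, rate) {
  const out = [];
  for (let pos = 0; pos < samples.length - 1; pos += rate) {
    const i = Math.floor(pos);
    const frac = pos - i;
    // Linear interpolation between neighbouring samples.
    out.push(samples[i] * (1 - frac) + samples[i + 1] * frac);
  }
  return out;
}
```

At `rate = 2` the output is half as long as the input: duration and pitch change together, so neither can be altered independently this way.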
</section>
@@ -236,7 +237,7 @@
<li><p>Ducking affects the level of multiple audio sources at once, which implies the ability to associate a single <em>dynamic audio parameter</em> to the gain associated with these sources' signal paths. The specification's <code>AudioGain</code> interface provides this.</p></li>
<li><p>Smooth muting requires the ability to <em>smoothly automate gain changes</em> over a time interval, without using browser-unfriendly coding techniques like tight loops or high-frequency callbacks. The <em>parameter automation</em> features associated with <code>AudioParam</code> are useful for this kind of feature.</p></li>
<li><p>Pausing and resuming the show on the audience side implies the ability to <em>buffer data received from audio sources</em> in the processing graph, and also to <em>send buffered data to audio destinations</em>.</p></li>
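The smooth gain automation mentioned above follows a simple piecewise-linear rule. As a sketch, the value of a linear ramp between two scheduled points can be re-implemented outside the API (this is illustrative only, not the <code>AudioParam</code> interface itself):

```javascript
// Illustrative re-implementation of the linear ramp an AudioParam
// performs between a start point (t0, v0) and an end point (t1, v1).
// Not the API itself; shown only to make the automation rule concrete.
function rampValue(v0, v1, t0, t1, t) {
  if (t <= t0) return v0;   // before the ramp starts: hold start value
  if (t >= t1) return v1;   // after the ramp ends: hold target value
  return v0 + (v1 - v0) * ((t - t0) / (t1 - t0));
}

// Smooth mute: gain ramps from 1 to 0 over a two-second interval,
// with no tight loops or high-frequency callbacks in script.
```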
- <li><p>The functionality for audio speed changing, a custom algorithm, requires the ability to <em>create custom audio transformations</em> using a browser programming language (e.g. <code>JavaScriptAudioNode</code>). When audio delivery is slowed down, audio samples will have to be locally buffered by the application up to some allowed limit, since they continue to be delivered by the incoming stream at a normal rate.</p></li>
+ <li><p>Since speed changes are not natively supported by the Web Audio API, this functionality requires a custom algorithm, i.e. the ability to <em>create custom audio transformations</em> using a browser programming language (e.g. <code>JavaScriptAudioNode</code>). When audio delivery is slowed down, audio samples will have to be buffered locally by the application, up to some allowed limit, since they continue to arrive from the incoming stream at the normal rate.</p></li>
<li><p>There is a standard way to access a set of <em>metadata properties for media resources</em> with the following W3C documents:
<ul><li><p> <a href="http://www.w3.org/TR/mediaont-10/" title="http://www.w3.org/TR/mediaont-10/">Ontology for Media Resources 1.0</a>. This document defines a core set of metadata properties for media resources, along with their mappings to elements from a set of existing metadata formats.
</p></li><li><p> <a href="http://www.w3.org/TR/mediaont-api-1.0/" title="http://www.w3.org/TR/mediaont-api-1.0/">API for Media Resources 1.0</a>. This API provides developers with a convenient access to metadata information stored in different metadata formats. It provides means to access the set of metadata properties defined in the Ontology for Media Resources 1.0 specification.
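The custom speed-change transformation described in this list could, as one possible sketch, duplicate incoming samples into a bounded local buffer from inside a <code>JavaScriptAudioNode</code> processing callback. All names below are illustrative assumptions; only the pure buffering core is shown, without the node wiring or any pitch correction:

```javascript
// Illustrative sketch only: naive slow-down by sample duplication.
// The cap on the local buffer reflects the requirement above: while
// playback is slowed, the incoming stream keeps delivering samples
// at the normal rate, so local buffering must be bounded.
function stretchInto(fifo, input, stretch, limit) {
  for (const s of input) {
    // Emit each incoming sample `stretch` times (2 = half speed),
    // but never let the local buffer exceed `limit` samples.
    for (let k = 0; k < stretch && fifo.length < limit; k++) {
      fifo.push(s);
    }
  }
  return fifo;
}
```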
@@ -329,7 +330,7 @@
<ol>
<li><p>Local audio can be downloaded, stored and retrieved using the <a href="http://www.w3.org/TR/FileAPI/">HTML File API</a>.</p></li>
<li><p>This scenario requires a special audio transformation that can compress the duration of speech
- without affecting overall timbre and intelligibility. In the Web Audio API this could be accomplished through
+ without affecting overall timbre and intelligibility. This transformation is not natively supported by the Web Audio API, but could be accomplished by
attaching custom processing code to a <code>JavaScriptAudioNode</code>.</p></li>
<li><p>The "Noisy Environment" setting could be accomplished through equalization features in the Web Audio API such as <code>BiquadFilterNode</code> or <code>ConvolverNode</code>.</p></li>
</ol>
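As an illustration of the equalization approach in the last item, a hypothetical "Noisy Environment" preset might boost the speech band with a peaking <code>BiquadFilterNode</code>. The filter's <code>gain</code> parameter is expressed in decibels, so a linear amplitude ratio must be converted first. All specific values below (frequency, Q, boost) are assumptions for illustration, not recommendations from the specification:

```javascript
// Helper: convert a linear amplitude ratio to decibels, the unit
// BiquadFilterNode uses for its `gain` parameter.
function linearToDb(ratio) {
  return 20 * Math.log10(ratio);
}

// Hypothetical "Noisy Environment" preset (illustrative values only).
const noisyEnvironmentPreset = {
  type: 'peaking',      // peaking-filter configuration of BiquadFilterNode
  frequency: 2500,      // Hz -- assumed centre of the speech presence band
  Q: 1.0,               // assumed bandwidth
  gain: linearToDb(2),  // roughly a +6 dB boost
};
```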