adding second diagram for DJ scenario but removing from text
author	Olivier Thereaux <Olivier.Thereaux@bbc.co.uk>
Mon, 13 Aug 2012 16:39:46 +0100
changeset 117 66505af998a0
parent 116 af0f448cd8b3
child 118 1440a379df3b
reqs/DJ2.png
reqs/Overview.html
Binary file reqs/DJ2.png has changed
--- a/reqs/Overview.html	Mon Aug 13 15:10:04 2012 +0100
+++ b/reqs/Overview.html	Mon Aug 13 16:39:46 2012 +0100
@@ -283,10 +283,10 @@
       <ol>
         <li>As in many other scenarios in this document, it is expected that APIs such as the <a href="http://www.w3.org/TR/webrtc/" title="WebRTC 1.0: Real-time Communication Between Browsers">Web Real-Time Communication API</a> will be used for the streaming of audio and video across a number of clients.</li>
         <li>
-          <p>One of the specific requirements illustrated by this scenario is the ability to seamlessly switch audio destinations (in this case: switching from listening on headphones to streaming sound output to a variety of local and connected clients) while retaining the exact state of playback and processing of a source. in a modular, graph-based architecture like the Web Audio API, this can be achieved if sub-graphs in a given <code>AudioContext</code> can use several <code>AudioDestinationNode</code> and switch from one to the other by simultaneously modifying the gain in <code>AudioGainNode</code>s, as illustrated:</p>
-          <p><img src="DJ.png" alt="example graph showing the mixing of two tracks to two different AudioDestinationNodes" /></p> 
+          <p>One of the specific requirements illustrated by this scenario is the ability to seamlessly switch audio destinations (in this case: switching from listening on headphones to streaming sound output to a variety of local and connected clients) while retaining the exact state of playback and processing of a source. This may not be easy to achieve in the current Web Audio API draft, where a given <code>AudioContext</code> can only use one <code>AudioDestinationNode</code> as destination.</p> 
         </li>
-        <li>This scenario makes heavy usage of audio analysis capabilities, both for automation purposes (beat detection and beat matching) and visualization (spectrum, level and other abstract visualization modes)</li>
+        <li>This scenario makes heavy usage of audio analysis capabilities, both for automation purposes (beat detection and beat matching) and visualization (spectrum, level and other abstract visualization modes).</li>
+        <li>The requirement for pitch/speed change is not currently covered by the Web Audio API's native processing nodes. Such processing would probably have to be handled with custom processing nodes.</li>
       </ol>
     </section>
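The seamless-switching requirement discussed in the diff relies on simultaneously adjusting the gains feeding two outputs so the source's playback state is never interrupted. A minimal sketch of that pattern is below: `crossfadeGains` is a hypothetical helper (not part of the Web Audio API) computing equal-power gain pairs, and the wiring in the trailing comments assumes browser-only objects such as `AudioContext`, so only the helper itself is runnable outside a browser.

```javascript
// Equal-power crossfade gains for moving a source between two outputs
// (e.g. headphone cue vs. main mix) without an audible level dip.
// t runs from 0 (fully on output A) to 1 (fully on output B);
// the squares of the two gains always sum to 1.
function crossfadeGains(t) {
  const clamped = Math.min(1, Math.max(0, t));
  return {
    a: Math.cos(clamped * 0.5 * Math.PI), // gain feeding output A
    b: Math.sin(clamped * 0.5 * Math.PI), // gain feeding output B
  };
}

// Hypothetical wiring inside one AudioContext graph (browser only):
//   const ctx = new AudioContext();
//   const gainA = ctx.createGain();
//   const gainB = ctx.createGain();
//   source.connect(gainA);
//   source.connect(gainB);
//   gainA.connect(headphoneOutput);  // assumed sub-mix nodes, since a
//   gainB.connect(mainOutput);       // context has a single destination
//   const { a, b } = crossfadeGains(0.5);
//   gainA.gain.value = a;
//   gainB.gain.value = b;
```

Because a single `AudioContext` exposes only one `AudioDestinationNode`, the two gain nodes here would feed sub-mixes within the same graph rather than two true destinations, which is exactly the limitation the revised scenario text points out.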