fixing the implementation notes for the DJ scenario
author Olivier Thereaux <Olivier.Thereaux@bbc.co.uk>
Wed, 15 Aug 2012 12:52:03 +0100
changeset 122 444c00812209
parent 121 494f8a09609e
child 123 3a75c6b8488f
reqs/DJ.png
reqs/DJ2.png
reqs/Overview.html
Binary file reqs/DJ.png has changed
Binary file reqs/DJ2.png has changed
--- a/reqs/Overview.html	Tue Aug 14 10:18:19 2012 -0400
+++ b/reqs/Overview.html	Wed Aug 15 12:52:03 2012 +0100
@@ -283,7 +283,8 @@
       <ol>
         <li>As in many other scenarios in this document, it is expected that APIs such as the <a href="http://www.w3.org/TR/webrtc/" title="WebRTC 1.0: Real-time Communication Between Browsers">Web Real-Time Communication API</a> will be used for the streaming of audio and video across a number of clients.</li>
         <li>
-          <p>One of the specific requirements illustrated by this scenario is the ability to seamlessly switch audio destinations (in this case: switching from listening on headphones to streaming sound output to a variety of local and connected clients) while retaining the exact state of playback and processing of a source. This may not be easy to achieve in the current Web Audio API draft, where a given <code>AudioContext</code> can only use one <code>AudioDestinationNode</code> as destination.</p> 
+          <p>One of the specific requirements illustrated by this scenario is the ability to have two different outputs for the sound: one for the headphones, and one for the music stream sent to all the clients. With typical web-friendly hardware, this would be difficult or impossible to implement by treating both as audio destinations, since such devices seldom have, or allow, two sound outputs to be used at the same time. And indeed, in the current Web Audio API draft, a given <code>AudioContext</code> can only use one <code>AudioDestinationNode</code> as destination.</p>
+          <p>However, if we consider that the headphones are the audio output, and that the streaming DJ set is not a typical audio destination but an outgoing <code>MediaStream</code> passed on to the WebRTC API, it should be possible to implement this scenario, sending output to both the headphones and the stream and gradually shifting the sound from one to the other without affecting the exact state of playback and processing of a source.</p>
         </li>
         <li>This scenario makes heavy usage of audio analysis capabilities, both for automation purposes (beat detection and beat matching) and visualization (spectrum, level and other abstract visualization modes).</li>
        <li>The requirement for pitch/speed change is not currently covered by the Web Audio API's native processing nodes. Such processing would probably have to be handled with custom processing nodes.</li>
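
A minimal sketch of the dual-output approach described in the new paragraphs above. It assumes the createMediaStreamDestination() method (MediaStreamAudioDestinationNode) and the gain/ramp methods from later Web Audio API drafts are available, and uses a hypothetical deckElement <audio> element for the deck currently playing; RTCPeerConnection signalling is omitted.

    // Sketch only: createMediaStreamDestination(), createGain() and the
    // AudioParam ramp methods are assumed available; "deckElement" is a
    // hypothetical <audio> element carrying the deck being played.
    var context = new AudioContext();
    var source = context.createMediaElementSource(deckElement);

    // One gain node per output so the two can be cross-faded independently.
    var headphoneGain = context.createGain();
    var streamGain = context.createGain();
    source.connect(headphoneGain);
    source.connect(streamGain);

    // Headphones: the context's single AudioDestinationNode.
    headphoneGain.connect(context.destination);

    // Outgoing DJ set: not a second AudioDestinationNode, but a MediaStream
    // handed over to the WebRTC API.
    var streamDestination = context.createMediaStreamDestination();
    streamGain.connect(streamDestination);

    var peerConnection = new RTCPeerConnection();
    streamDestination.stream.getAudioTracks().forEach(function (track) {
      peerConnection.addTrack(track, streamDestination.stream);
    });
    // (offer/answer signalling with the remote clients omitted)

    // Move the sound gradually from headphones to the stream; the source
    // keeps playing and any processing upstream of the gains is untouched.
    var now = context.currentTime;
    headphoneGain.gain.setValueAtTime(1, now);
    streamGain.gain.setValueAtTime(0, now);
    headphoneGain.gain.linearRampToValueAtTime(0, now + 5);
    streamGain.gain.linearRampToValueAtTime(1, now + 5);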