adding intro text. Revising tone of scenario 1 for proposal to the WG
author Olivier Thereaux <Olivier.Thereaux@bbc.co.uk>
Wed, 11 Jul 2012 10:50:10 +0100
changeset 92 37c07098b60b
parent 91 1079a18d951e
child 93 cbd42b4b8e9f
reqs/Overview.html
--- a/reqs/Overview.html	Mon Jul 09 12:25:58 2012 -0700
+++ b/reqs/Overview.html	Wed Jul 11 10:50:10 2012 +0100
@@ -99,37 +99,47 @@
     </section>
     <section>
     <h2>Introduction</h2>
-    <p>TBA</p>
+    <p>What should the future web sound like? That was, in essence, the mission of the W3C Audio Working Group when it was chartered in early 2011 to “support the features required by advanced interactive applications including the ability to process and synthesize audio”. Bringing audio processing and synthesis capabilities to the Open Web Platform should allow developers to re-create well-loved audio software on the open web and add great sound to web games and applications; it may also enable web developers to reinvent the world of audio and music by making it more connected, linked and social.</p>
+
+    <p>This document attempts to describe the scenarios considered by the W3C Audio Working Group in its work to define Web Audio technologies. Not intended to be a comprehensive list of things which the Web Audio standards will make possible, it nevertheless attempts to:</p>
+
+<ul>
+<li>document a number of key applications of audio which Web Audio standards should enable,</li>
+<li>provide a basis for discussion on how the technologies could be used,</li>
+<li>offer examples for early uses of the technology, which can then be used to gather feedback on the draft standard, and</li>
+<li>extract technical and architectural requirements for the Web Audio APIs or libraries built upon it.</li>
+</ul>
+
+    
+    
     </section>  
     <section>
-      <h2>Use Cases and Scenarios</h2>
+      <h2>Web Audio Scenarios</h2>
       
+      <p>This section will introduce a number of scenarios involving the use of Web Audio processing or synthesis technologies, and discuss implementation and architectural considerations.</p>
     
       <section>
-      <h3>UC 1: Video Chat</h3>
+      <h3>Scenario 1: Video Chat</h3>
       <p>Two or more users have loaded a video communication web application into their browsers, provided by the same service provider, and logged into the service it provides. When one online user selects a peer online user, a 1-1 video communication session between the browsers of the two peers is initiated.  If there are more than two participants, and if the participants are using adequate hardware, binaural processing is used to position remote participants.</p>
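The binaural positioning of remote participants mentioned above ultimately reduces to per-participant spatial gains. As a minimal sketch (the function name and the [-1, 1] pan convention are ours, not from any draft specification; a real implementation would more likely use a dedicated panner node), equal-power panning keeps total power constant as a voice moves between channels:

```javascript
// Equal-power stereo panning: pan in [-1, 1], where -1 is hard left,
// 0 is centre and 1 is hard right. Mapping pan onto a quarter circle
// keeps left^2 + right^2 == 1, so perceived loudness stays constant.
function equalPowerGains(pan) {
  const angle = (pan + 1) * Math.PI / 4; // map [-1, 1] to [0, PI/2]
  return { left: Math.cos(angle), right: Math.sin(angle) };
}

// A centre-panned participant gets ~0.707 in each channel.
const centre = equalPowerGains(0);
```

Each remote participant's stream would be scaled by its own pair of gains before mixing, giving the listener a stable left-to-right placement of voices.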
       
      <p>In one version of the service, an option allows users to distort their voice (pitch, speed, other effects) for fun. Such a feature could also be used to protect a participant's privacy in some applications.</p>
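The simplest voice "distortion" of the kind described above is a playback-rate change, which shifts pitch and speed together. A hedged sketch (the function is illustrative, not part of any API): resampling a buffer by linear interpolation.

```javascript
// Resample a buffer of audio samples at the given rate using linear
// interpolation. rate > 1 shortens the signal and raises its pitch;
// rate < 1 lengthens it and lowers the pitch.
function resample(samples, rate) {
  const out = [];
  for (let pos = 0; pos < samples.length - 1; pos += rate) {
    const i = Math.floor(pos);
    const frac = pos - i;
    out.push(samples[i] * (1 - frac) + samples[i + 1] * frac);
  }
  return out;
}

// rate = 2 halves the duration: the classic "chipmunk" voice effect.
const chipmunk = resample([0, 1, 2, 3, 4, 5, 6, 7], 2);
```

More sophisticated effects (pitch shift without speed change, for instance) need windowed techniques such as granular or phase-vocoder processing, but the resampler conveys the basic requirement: arbitrary sample-level processing of the outgoing stream.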
       
      <p>During the session, each user can also pause sending of media (audio, video, or both) and mute incoming media. An interface gives each user control over the incoming sound volume from each participant, with an option to have the software adjust it automatically. Another interface offers user-triggered settings (EQ, filtering) for voice enhancement, a feature which can be useful for people with hearing difficulties, in imperfect listening environments, or to compensate for poor transmission environments.</p>
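The per-participant volume and mute controls described above amount to a gain-weighted mix of the incoming streams. A minimal sketch, under our own assumptions about names and data shapes (real implementations would apply gains per audio node rather than on raw arrays):

```javascript
// Convert a user-facing volume setting in decibels to a linear gain.
function dbToGain(db) {
  return Math.pow(10, db / 20);
}

// Mix several incoming streams, each with its own volume and mute state.
// streams: [{ samples: number[], volumeDb: number, muted: boolean }]
function mix(streams) {
  const length = Math.max(...streams.map(s => s.samples.length));
  const out = new Array(length).fill(0);
  for (const s of streams) {
    const gain = s.muted ? 0 : dbToGain(s.volumeDb);
    s.samples.forEach((x, i) => { out[i] += x * gain; });
  }
  return out;
}

const mixed = mix([
  { samples: [1, 1], volumeDb: 0, muted: false },       // unity gain
  { samples: [1, 1], volumeDb: -6.0206, muted: false }, // roughly halved
  { samples: [9, 9], volumeDb: 0, muted: true },        // contributes nothing
]);
```

The "automatic" option in the scenario would replace the fixed `volumeDb` values with gains derived from measured levels of each stream, i.e. a simple automatic gain control.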
       
-      <h4>UC1 — Notes</h4>
+      <h4>Notes and Implementation Considerations</h4>
+      
       <ol>
-        <li> This scenario is heavily inspired from <a href="http://tools.ietf.org/html/draft-ietf-rtcweb-use-cases-and-requirements-06#section-4.2.1" title="http://tools.ietf.org/html/draft-ietf-rtcweb-use-cases-and-requirements-06#section-4.2.1">the first scenario in WebRTC's Use Cases and Requirements document</a>
-      </li>
-      <li> One aspect of the scenario has participants using a "voice changing" feature on the way out (input device to server). This would mean that processing should be possible both for incoming and outgoing audio streams.
-      </li>
+        <li><p>This scenario is a good example of the need for audio capture (from line in, internal microphone or other inputs).</p></li>
+        <li><p>This scenario is heavily inspired by <a href="http://tools.ietf.org/html/draft-ietf-rtcweb-use-cases-and-requirements-06#section-4.2.1">the first scenario in WebRTC's Use Cases and Requirements document</a>. Most of the technology described by this scenario should be covered by the <a href="http://www.w3.org/TR/webrtc/" title="WebRTC 1.0: Real-time Communication Between Browsers">Web Real-Time Communication API</a>. The scenario illustrates, however, a technical requirement for processing of the audio signal at both ends: capture of the user's voice, and playback of the other participants' voices.</p></li>
+        <li><p>The processing effects needed by this scenario would include:</p>
+          <ul>
+            <li>Controlling the gain (mute, pause and volume) of several audio sources</li>
+            <li>Filtering (EQ, voice enhancement)</li>
+            <li>Pitch and speed distortion</li>
+          </ul>
+        </li>
       </ol>
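The filtering item in the list above can be illustrated with a one-pole low-pass filter, a deliberately simple stand-in (our own, not drawn from the draft API) for the configurable filters a Web Audio implementation would expose:

```javascript
// One-pole low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
// alpha in (0, 1]; smaller alpha means a lower cutoff (more smoothing).
function lowpass(samples, alpha) {
  const out = [];
  let y = 0;
  for (const x of samples) {
    y = y + alpha * (x - y);
    out.push(y);
  }
  return out;
}

// A step input is smoothed rather than jumping straight to 1.
const smoothed = lowpass([1, 1, 1, 1], 0.5);
```

An EQ or voice-enhancement feature would chain several such filters with different responses; the architectural point is that the platform needs composable per-sample processing stages between capture and output.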
-      <h4>UC1 — Priority</h4>
-      <pre> <i>Priority: <b>HIGH</b></i></pre>
       
-      <p>… consensus reached during the teleconference on <a href="http://www.w3.org/2012/02/13-audio-minutes" title="http://www.w3.org/2012/02/13-audio-minutes">13 Feb 2012</a>. 
-      </p>
-      
-      <p>This use case is based on the needs of the Real-Time Web working group, and thus a high priority.
-      </p>
-      
-
       </section>
       
       <section>