Clean up Online Music Production Tool scenario and inject references to API features.
author Joe Berkovitz <joe@noteflight.com>
Fri, 10 Aug 2012 12:12:23 -0400
changeset 107 16aa9f403df4
parent 106 e27bd7c386de
child 108 12ef0779f863
reqs/Overview.html
--- a/reqs/Overview.html	Fri Aug 10 11:55:53 2012 -0400
+++ b/reqs/Overview.html	Fri Aug 10 12:12:23 2012 -0400
@@ -170,7 +170,7 @@
 
       <h3>Online music production tool</h3>
       
-      <p>A user arranges a musical composition using a web-based Digital Audio Workstation (DAW) application.</p>
+      <p>A user creates a musical composition from audio media clips using a web-based Digital Audio Workstation (DAW) application.</p>
       
       <p>Audio "clips" are arranged on a timeline representing multiple tracks of audio.  Each track's volume, panning, and effects
       may be controlled separately.  Individual tracks may be muted or soloed to preview various combinations of tracks at a given moment.
@@ -195,7 +195,12 @@
         
         <li><p>Building such an application may only be reasonably possible if the technology enables the control of audio with acceptable performance, in particular for <em>real-time processing</em> and control of audio parameters and <em>sample accurate scheduling of sound playback</em>. Because performance is such a key aspect of this scenario, it should probably be possible to control the buffer size of the underlying Audio API: this would allow users with slower machines to pick a larger buffer setting that does not cause clicks and pops in the audio stream.</p></li>
         
-        <li><p>The ability to visualise the samples and their processing would highly benefit from <em>real-time time-domain and frequency analysis</em>.</p></li>
+        <li><p>The ability to visualise the samples and their processing benefits from <em>real-time time-domain and frequency analysis</em>, as supplied by the Web Audio API's <code>RealtimeAnalyserNode</code>.</p></li>
+        <li><p>It must be possible to load clips into memory for fast playback. The Web Audio API's <code>AudioBuffer</code> and <code>AudioBufferSourceNode</code> interfaces address this requirement (see the first sketch following this list).</p></li>
+        <li><p>The ability to schedule both audio clip playback and effect parameter changes in advance is essential to support automated mixdown.</p></li>
+        <li><p>To export an audio file, the audio rendering pipeline must be able to yield buffers of sample frames directly, rather than being routed to an audio device destination (see the second sketch following this list). Built-in codecs to translate these buffers into standard audio file formats are also desirable.</p></li>
+        <li><p>Typical per-channel effects such as panning, gain control, compression, and filtering must be readily available in a native, high-performance implementation.</p></li>
+        <li><p>Typical master bus effects such as room reverb must be readily available. Such effects are applied to the entire mix as a final processing stage. A single <code>ConvolverNode</code> is capable of simulating a wide range of room acoustics.</p></li>
       </ol>
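+
+      <p>The two sketches below are non-normative illustrations of how these features might be
+      combined in such a DAW. They use interface and method names from a later revision of the
+      Web Audio API than the one referenced in this document (for example <code>start()</code>
+      in place of <code>noteOn()</code>), and helper names such as <code>loadClip</code> and
+      <code>playClipAt</code> are invented for illustration. The first sketch wires a single
+      track's channel strip into a master reverb bus with an analyser for visualisation, and
+      schedules clip playback and gain automation sample-accurately:</p>
+
+      <pre>
+const ctx = new AudioContext();
+
+// Load a clip into memory for fast, repeatable playback (yields an AudioBuffer).
+async function loadClip(url) {
+  const encoded = await (await fetch(url)).arrayBuffer();
+  return ctx.decodeAudioData(encoded);
+}
+
+// Master bus: convolution reverb feeding the destination, tapped by an analyser
+// for real-time time-domain and frequency visualisation.
+const reverb = ctx.createConvolver();    // an impulse response AudioBuffer is assigned to reverb.buffer
+const analyser = ctx.createAnalyser();
+reverb.connect(analyser);
+analyser.connect(ctx.destination);
+
+// Per-track channel strip: filter, compressor, panner and gain feeding the master bus.
+const filter = ctx.createBiquadFilter();
+const comp = ctx.createDynamicsCompressor();
+const pan = ctx.createStereoPanner();
+const gain = ctx.createGain();
+filter.connect(comp);
+comp.connect(pan);
+pan.connect(gain);
+gain.connect(reverb);
+
+// Schedule clip playback and a gain automation ramp sample-accurately, in advance.
+function playClipAt(buffer, when) {
+  const src = ctx.createBufferSource();
+  src.buffer = buffer;                   // in-memory AudioBuffer
+  src.connect(filter);
+  src.start(when);                       // sample-accurate start time
+  gain.gain.setValueAtTime(0, when);     // mixdown automation: fade in over half a second
+  gain.gain.linearRampToValueAtTime(1, when + 0.5);
+}
+</pre>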
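+
+      <p>The second sketch addresses the export requirement. It assumes the
+      <code>OfflineAudioContext</code> interface of a later API revision, and
+      <code>someClipBuffer</code> stands in for an <code>AudioBuffer</code> prepared earlier;
+      encoding the rendered frames into a file format is left to the application:</p>
+
+      <pre>
+// Render two seconds of stereo audio at 44.1 kHz with no audio device involved.
+const offline = new OfflineAudioContext(2, 2 * 44100, 44100);
+
+const src = offline.createBufferSource();
+src.buffer = someClipBuffer;             // placeholder: an AudioBuffer prepared earlier
+src.connect(offline.destination);
+src.start(0);
+
+offline.startRendering().then(function (rendered) {
+  // "rendered" is an AudioBuffer: one Float32Array of raw sample frames per channel.
+  const left = rendered.getChannelData(0);
+  // ... hand the frames to an encoder or write them into a WAV container ...
+});
+</pre>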