Update audio workstation usecase to incorporate numerical synthesis of audio.
author Joe Berkovitz <joe@noteflight.com>
Thu, 27 Sep 2012 07:09:25 -0400
changeset 164 43bd2c110478
parent 163 4b8ced7d6bb7
child 166 ce8b3d6e50ed
child 167 144acfdb1e13
reqs/Overview.html
--- a/reqs/Overview.html	Wed Sep 26 18:22:36 2012 -0400
+++ b/reqs/Overview.html	Thu Sep 27 07:09:25 2012 -0400
@@ -197,6 +197,7 @@
         
         <li><p>The ability to visualize the samples and their processing benefits from <em>real-time time-domain and frequency analysis</em>, as supplied by the Web Audio API's <code>RealtimeAnalyserNode</code>.</p></li>
         <li><p>Clips must be loadable into memory for fast playback. The Web Audio API's <code>AudioBuffer</code> and <code>AudioBufferSourceNode</code> interfaces address this requirement.</p></li>
+        <li><p>Some sound sources may be purely algorithmic in nature, such as oscillators or noise generators. This implies the ability to generate sound from arbitrary sample data, whether precomputed or computed on the fly. The Web Audio API addresses this requirement through its ability to create an <code>AudioBuffer</code> from arrays of numerical samples, and through the ability of <code>JavaScriptAudioNode</code> to supply numerical samples dynamically.</p></li>
         <li><p>The ability to schedule both audio clip playback and effects parameter value changes in advance is essential to support automated mixdown.</p></li>
         <li><p>To export an audio file, the audio rendering pipeline must be able to yield buffers of sample frames directly, rather than being forced to render to an audio device destination. Built-in codecs to translate these buffers to standard audio file output formats are also desirable.</p></li>
         <li><p>Typical per-channel effects such as panning, gain control, compression and filtering must be readily available in a native, high-performance implementation.</p></li>
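The requirement added in this changeset can be illustrated with a short sketch. This is a minimal example, not part of the spec text: it assumes a 2012-era prefixed implementation (webkitAudioContext) and the interface and method names current in the draft at the time (createJavaScriptNode, noteOn). It fills one AudioBuffer with precomputed sine samples and lets a JavaScriptAudioNode compute noise samples on the fly.

    // Precomputed samples: one second of a 440 Hz sine written into an AudioBuffer.
    var ctx = new webkitAudioContext();   // prefixed constructor is an assumption
    var length = ctx.sampleRate;          // one second of sample frames
    var buffer = ctx.createBuffer(1, length, ctx.sampleRate);
    var data = buffer.getChannelData(0);
    for (var i = 0; i < length; i++) {
      data[i] = Math.sin(2 * Math.PI * 440 * i / ctx.sampleRate);
    }
    var source = ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(ctx.destination);
    source.noteOn(0);                     // start playback immediately

    // Dynamically computed samples: a JavaScriptAudioNode generating white noise.
    var generator = ctx.createJavaScriptNode(1024, 1, 1);
    generator.onaudioprocess = function (event) {
      var output = event.outputBuffer.getChannelData(0);
      for (var i = 0; i < output.length; i++) {
        output[i] = Math.random() * 2 - 1;  // samples supplied on the fly
      }
    };
    generator.connect(ctx.destination);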
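The scheduling requirement for automated mixdown admits a similar sketch. This example also assumes the prefixed 2012-era API, and a silent placeholder buffer stands in for a decoded clip; it schedules clip playback two seconds ahead and automates a one-second fade using AudioParam automation methods.

    var ctx = new webkitAudioContext();   // prefixed constructor is an assumption
    var clip = ctx.createBuffer(1, ctx.sampleRate, ctx.sampleRate); // placeholder for a loaded clip
    var gain = ctx.createGainNode();
    gain.connect(ctx.destination);

    var voice = ctx.createBufferSource();
    voice.buffer = clip;
    voice.connect(gain);

    // Schedule playback two seconds from now, then ramp the gain down over one second.
    var when = ctx.currentTime + 2;
    voice.noteOn(when);
    gain.gain.setValueAtTime(1.0, when);
    gain.gain.linearRampToValueAtTime(0.0, when + 1.0);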