Incorporate Olivier's feedback on Music Creation Environment scenario.
author Joe Berkovitz <joe@noteflight.com>
Fri, 10 Aug 2012 10:28:25 -0400
changeset 104 8343c64772d7
parent 103 57f7f0f9ece8
child 105 e6829eee9db4
child 106 e27bd7c386de
reqs/Overview.html
--- a/reqs/Overview.html	Fri Aug 10 10:16:27 2012 -0400
+++ b/reqs/Overview.html	Fri Aug 10 10:28:25 2012 -0400
@@ -248,7 +248,7 @@
       
       
       <h3>Music Creation Environment with Sampled Instruments</h3>
-      <p>A user is employing a web-based application to create and edit a musical composition. The user interface for composing can take a number of forms ranging from a beat grid or piano-roll display to conventional Western notation. Whatever the visual representation, the key idea of the scenario is that the user is editing a document that is sonically rendered as a series of precisely timed and modulated audio events (notes) that collectively make up a piece of music.</p>
+      <p>A user is employing a web-based application to create and edit a musical composition with live synthesized playback. The user interface for composing can take a number of forms, including conventional Western notation and a piano-roll-style display. The document can be sonically rendered on demand as a piece of music, <em>i.e.</em> a series of precisely timed, pitched and modulated audio events (notes).</p>
       <p>The user occasionally stops editing and wishes to hear playback of some or all of the score they are working on to take stock of their work. At this point the program performs sequenced playback of some portion of the document. Some simple effects such as instrument panning and room reverb are also applied for a more realistic and satisfying effect.</p>
       <p>Compositions in this editor employ a set of instrument samples, i.e. a pre-existing library of recorded audio snippets. Any given snippet is a brief audio recording of a note played on an instrument with some specific and known combination of pitch, dynamics and articulation. The combinations in the library are necessarily limited in number to avoid bandwidth and storage overhead. During playback, the editor must simulate the sound of each instrument playing its part in the composition.  This is done by transforming the available pre-recorded samples from their original pitch, duration and volume to match the characteristics prescribed by each note in the composed music.  These per-note transformations must also be scheduled to be played at the times prescribed by the composition.</p>
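+      <p>The per-note transformation and scheduling described above might be realized roughly as in the following sketch. It is illustrative only: <code>ctx</code> is an <code>AudioContext</code>, the <code>note</code> object and its fields are hypothetical, and current Web Audio API method names (<code>createGain</code>, <code>start</code>/<code>stop</code>) are used rather than the interface names cited later in this document.</p>
+      <pre><code>
+// Sketch: render one note of the composition from a shared, pre-loaded sample buffer.
+function playNote(ctx, sampleBuffer, note) {
+  const src = ctx.createBufferSource();
+  src.buffer = sampleBuffer;                      // shared AudioBuffer, not copied per note
+
+  // Pitch: resample the recording up or down to the composed pitch.
+  // note.semitoneOffset is the distance from the sample's recorded pitch (assumed field).
+  src.playbackRate.value = Math.pow(2, note.semitoneOffset / 12);
+
+  // Duration: loop a sustained region of the recording so a brief sample
+  // can cover a note of arbitrary length (loop points in seconds, assumed fields).
+  src.loop = true;
+  src.loopStart = note.sampleLoopStart;
+  src.loopEnd = note.sampleLoopEnd;
+
+  // Volume: per-note dynamics via a gain node.
+  const gain = ctx.createGain();
+  gain.gain.value = note.velocity;                // 0..1, assumed field
+
+  src.connect(gain);
+  gain.connect(ctx.destination);
+
+  // Scheduling: start and stop at the composed time, in AudioContext seconds.
+  src.start(note.startTime);
+  src.stop(note.startTime + note.duration);
+  return { src, gain };
+}
+      </code></pre>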
       <p>During playback a moving cursor indicates the exact point in the music that is being heard at each moment.</p>
@@ -256,15 +256,15 @@
 
       <h4>Notes and Implementation Considerations</h4>
       <ol>
-        <li><p> Instrument samples must be able to be loaded into memory for fast processing during music rendering. These pre-loaded audio snippets must have a one-to-many relationship with objects in the API representing specific notes, to avoid duplicating the same sample in memory for each note in a composition that is rendered with it. The API's <code>AudioBuffer</code> and <code>AudioBufferSourceNode</code> interfaces address this requirement.</p></li>
-        <li><p>It must be possible to schedule large numbers of individual events over a long period of time, each of which is a transformation of some original audio sample, without degrading real-time browser performance. The API's graph-based approach makes the construction of any given transformation practical, by supporting simple recipes for creating subgraphs built around a sample's pre-loaded <code>AudioBuffer</code>.  These subgraphs can be constructed and scheduled to be played in the future. In one approach to supporting longer compositions, the construction and scheduling of future events can be kept "topped up" via periodic timer callbacks, to avoid the overhead of creating huge graphs all at once.</p></li>
+        <li><p>It must be possible to load instrument samples into memory for fast processing during music rendering. These pre-loaded audio snippets must have a one-to-many relationship with the objects in the Web Audio API that represent specific notes, so that the same sample is not duplicated in memory for every note in a composition that is rendered with it. The API's <code>AudioBuffer</code> and <code>AudioBufferSourceNode</code> interfaces address this requirement.</p></li>
+        <li><p>It must be possible to schedule large numbers of individual events over a long period of time, each of which is a transformation of some original audio sample, without degrading real-time browser performance. A graph-based approach such as that in the Web Audio API makes the construction of any given transformation practical by supporting simple recipes for creating subgraphs built around a sample's pre-loaded <code>AudioBuffer</code>. These subgraphs can be constructed and scheduled to be played in the future. In one approach to supporting longer compositions, the construction and scheduling of future events can be kept "topped up" via periodic timer callbacks, avoiding the overhead of creating huge graphs all at once; a sketch of this pattern appears after this list.</p></li>
         <li><p>It must be possible to transform a given sample arbitrarily in pitch and volume to match a note in the music. <code>AudioBufferSourceNode</code>'s <code>playbackRate</code> attribute provides the pitch-change capability, while <code>AudioGainNode</code> allows the volume to be adjusted.</p></li>
         <li><p>It must be possible to transform a given sample arbitrarily in duration (without changing its pitch) to match a note in the music. <code>AudioBufferSourceNode</code>'s looping parameters provide sample-accurate start and end loop points, allowing a note of arbitrary duration to be generated even though the original recording may be brief.</p></li>
-        <li><p>Looped samples by definition do not have a clean ending. To avoid an abrupt glitchy cutoff at the end of a note, a gain and/or filter envelope must be applied. Such envelopes normally follow an exponential trajectory during key time intervals in the life cycle of a note. The <code>AudioParam</code> features of the API in conjunction with <code>AudioGainNode</code> and <code>BiquadFilterNode</code> support this requirement.</p></li>
+        <li><p>Looped samples by definition do not have a clean ending. To avoid an abrupt, glitchy cutoff at the end of a note, a gain and/or filter envelope must be applied. Such envelopes normally follow an exponential trajectory during key time intervals in the life cycle of a note. The <code>AudioParam</code> features of the Web Audio API, in conjunction with <code>AudioGainNode</code> and <code>BiquadFilterNode</code>, support this requirement; the scheduling sketch after this list applies such a release ramp.</p></li>
         <li><p>It is necessary to coordinate the visual display (such as a moving cursor or a highlighting effect applied to notes) with sequenced playback of the document. This implies the need to programmatically determine the exact time offset, within the performance, of the sound currently being rendered through the computer's audio output channel. This time offset must, in turn, have a well-defined relationship to the time offsets in prior API requests that scheduled various notes at various times. The API provides such a capability in the <code>AudioContext.currentTime</code> attribute; a sketch of this coordination appears after this list.</p></li>
         <li><p> To export an audio file, the audio rendering pipeline must be able to yield buffers of sample frames directly, rather than being forced to an audio device destination. Built-in codecs to translate these buffers to standard audio file output formats are also desirable.</p></li>
-        <li><p>Typical per-channel effects such as stereo pan control must be readily available (<code>AudioPannerNode</code>).</p></li>
-        <li><p>Typical master bus effects such as room reverb must be readily available (<code>ConvolverNode</code>).</p></li>
+        <li><p>Typical per-channel effects such as stereo pan control must be readily available. Panning allows the sound output for each instrument channel to appear to occupy a different spatial location in the output mix, adding greatly to the realism of the playback. Adding and configuring an <code>AudioPannerNode</code> from the Web Audio API in each channel's output path provides this capability; the mixing sketch after this list sets up one such panner per channel.</p></li>
+        <li><p>Typical master bus effects such as room reverb must be readily available. Such effects are applied to the entire mix as a final processing stage. A single <code>ConvolverNode</code> is capable of simulating a wide range of room acoustics.</p></li>
       </ol>
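+      <p>The "topped up" scheduling and the note-release envelope described above could be combined roughly as follows. This is a minimal sketch: the <code>score</code> array, its note fields and the <code>buffers</code> map are assumptions, and, as in the earlier sketch, current method names are used.</p>
+      <pre><code>
+// Sketch: keep about one second of upcoming notes scheduled ahead of playback.
+const LOOKAHEAD_SECONDS = 1.0;   // how far ahead to schedule
+const TIMER_INTERVAL_MS = 200;   // how often to top up the schedule
+
+function startSequencer(ctx, score, buffers) {
+  const playbackStart = ctx.currentTime + 0.1;  // small offset before the first note
+  let nextIndex = 0;                            // next unscheduled note; score is sorted by time
+
+  const timer = setInterval(() => {
+    const windowEnd = ctx.currentTime + LOOKAHEAD_SECONDS;
+    while (nextIndex !== score.length) {
+      const note = score[nextIndex];
+      const when = playbackStart + note.time;   // note.time: offset within the piece (assumed)
+      if (when > windowEnd) break;              // outside the lookahead window; retry on a later tick
+      scheduleNote(ctx, buffers[note.instrument], note, when);
+      nextIndex += 1;
+    }
+    if (nextIndex === score.length) clearInterval(timer);
+  }, TIMER_INTERVAL_MS);
+
+  return playbackStart;
+}
+
+// Build a small subgraph for one note, as in the earlier sketch, and add an
+// exponential release ramp so the looped sample does not end with a click.
+function scheduleNote(ctx, buffer, note, when) {
+  const src = ctx.createBufferSource();
+  src.buffer = buffer;
+  src.playbackRate.value = Math.pow(2, note.semitoneOffset / 12);
+
+  const gain = ctx.createGain();
+  const releaseStart = when + note.duration - 0.05;
+  gain.gain.setValueAtTime(note.velocity, when);
+  gain.gain.setValueAtTime(note.velocity, releaseStart);
+  gain.gain.exponentialRampToValueAtTime(0.001, when + note.duration);
+
+  src.connect(gain);
+  gain.connect(ctx.destination);
+  src.start(when);
+  src.stop(when + note.duration);
+}
+      </code></pre>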
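+      <p>The display coordination described above can be driven from <code>AudioContext.currentTime</code>, roughly as sketched below; the <code>drawCursorAt</code> callback and the recorded <code>playbackStart</code> value are assumptions, and output latency is ignored.</p>
+      <pre><code>
+// Sketch: map rendered audio time back to a score position on each animation frame.
+function animateCursor(ctx, playbackStart, pieceDuration, drawCursorAt) {
+  function frame() {
+    // Seconds of the piece rendered to the audio output so far.
+    const elapsed = ctx.currentTime - playbackStart;
+    drawCursorAt(Math.max(0, Math.min(elapsed, pieceDuration)));
+    if (elapsed >= pieceDuration) return;       // playback finished; stop animating
+    requestAnimationFrame(frame);
+  }
+  requestAnimationFrame(frame);
+}
+      </code></pre>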
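+      <p>The per-channel panning and master-bus reverb described above might be set up once per playback session, roughly as follows. The impulse-response buffer, the channel list and the send level are assumptions; per-note subgraphs would connect to their channel's panner rather than directly to the destination.</p>
+      <pre><code>
+// Sketch: one panner per instrument channel, all feeding a shared reverb on the master bus.
+function buildMixGraph(ctx, channelNames, reverbImpulseResponse) {
+  // Master bus: dry mix plus a single convolution reverb applied to the whole mix.
+  const masterGain = ctx.createGain();
+  const reverb = ctx.createConvolver();
+  reverb.buffer = reverbImpulseResponse;        // pre-decoded room impulse response (assumed)
+  const reverbGain = ctx.createGain();
+  reverbGain.gain.value = 0.25;                 // reverb level relative to the dry mix
+
+  masterGain.connect(ctx.destination);          // dry path
+  masterGain.connect(reverb);                   // wet path
+  reverb.connect(reverbGain);
+  reverbGain.connect(ctx.destination);
+
+  // Per-channel panners: spread the instruments across the stereo field.
+  const channelInputs = {};
+  channelNames.forEach((name, i) => {
+    const panner = ctx.createPanner();
+    const x = channelNames.length > 1 ? (i / (channelNames.length - 1)) * 2 - 1 : 0;
+    panner.setPosition(x, 0, 1 - Math.abs(x));  // roughly left-to-right in front of the listener
+    panner.connect(masterGain);
+    channelInputs[name] = panner;               // per-note subgraphs connect here
+  });
+
+  return channelInputs;
+}
+      </code></pre>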
 
     </section>
@@ -457,7 +457,7 @@
       <p>A source can be looped. It should be possible to loop memory-resident sources. It should be possible to loop on a whole-source and intra-source basis, or to play the beginning of a sound leading into a looped segment.</p>
       
       <h4>Capture of audio from microphone, line in, other inputs </h4>
-      <p>Audio from a variety of sources, including line in and microphone input, should be made available to the API for processing.</p>
+      <p>Audio from a variety of sources, including line in and microphone input, should be made available to the Web Audio API for processing.</p>
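+      <p>A rough sketch of routing live input into a processing graph, assuming the browser exposes the input through the promise-based <code>getUserMedia</code> API:</p>
+      <pre><code>
+// Sketch: bring microphone or line-in audio into the processing graph.
+async function captureInput(ctx) {
+  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
+  const input = ctx.createMediaStreamSource(stream);  // live input as a source node
+  const analyser = ctx.createAnalyser();              // e.g. for level metering
+  input.connect(analyser);
+  return { input, analyser };
+}
+      </code></pre>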
       
       <h4>Adding effects to the audio part of a video stream and keeping it in sync with the video playback</h4>
       <p>The API should have access to the audio part of video streams being played in the browser, and it should be possible to add effects to it and output the result in real time during video playback, keeping audio and video in sync.</p>
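+      <p>Assuming the video is played through an HTML <code>video</code> element, this might look roughly like the sketch below (the choice of a lowpass filter is only an example):</p>
+      <pre><code>
+// Sketch: tap the audio of a playing video element, add an effect, and render it in real time.
+function addVideoEffect(ctx, videoElement) {
+  const source = ctx.createMediaElementSource(videoElement);  // element's audio now flows through the graph
+  const filter = ctx.createBiquadFilter();
+  filter.type = 'lowpass';
+  filter.frequency.value = 2000;
+  source.connect(filter);
+  filter.connect(ctx.destination);   // effected audio plays in real time alongside the video
+}
+      </code></pre>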