Rework UI/DOM sounds scenario into "Playful sonification" scenario and add Notes/Implementation section.
author Joe Berkovitz <joe@noteflight.com>
Fri, 10 Aug 2012 12:38:38 -0400
changeset 108 12ef0779f863
parent 107 16aa9f403df4
child 109 1e84896a21ef
reqs/Overview.html
--- a/reqs/Overview.html	Fri Aug 10 12:12:23 2012 -0400
+++ b/reqs/Overview.html	Fri Aug 10 12:38:38 2012 -0400
@@ -158,7 +158,7 @@
       <ol>
        <li><p>Developing the soundscape for a game such as the one described above can benefit from a <em>modular, node based approach</em> to audio processing. In our scenario, some of the processing needs to happen for a number of sources at the same time (e.g. room effects) while other processing (e.g. mixing and spatialization) needs to happen on a per-source basis. A graph-based API makes it easy to envision, construct and control the necessary processing architecture, in ways that would be possible, but more difficult to implement, with other kinds of APIs. The fundamental <code>AudioNode</code> construct in the Web Audio API supports this approach.</p></li>
         <li><p>While a single looping music background can be created today with the <a href="http://www.w3.org/TR/html5/the-audio-element.html#the-audio-element" title="4.8.7 The audio element &#8212; HTML5">HTML5 &lt;audio&gt; element</a>, the ability to transition smoothly from one musical background to another requires additional capabilities that are found in the Web Audio API including <em>sample-accurate playback scheduling</em> and <em>automated cross-fading of multiple sources</em>. Related API features include <code>AudioBufferSourceNode.noteOn()</code> and <code>AudioParam.setValueAtTime()</code>.</p></li>
-        <li><p>The scenario illustrates many aspects of the creation of a credible soundscape. The game character is evolving in a virtual three-dimensional environment and the soundscape is, at all time, spatialized: a <em>panning model</em> can be used to spatialize sound sources in the game (<code>AudioPanningNode</code>); <em>obstruction / occlusion</em> modelling is used to muffle the sound of the clock going through walls, and the sound of flies buzzing around would need <em>Doppler Shift</em> simulation to sound believable (also supported by <code>AudioPanningNode</code>).</p></li>
+        <li><p>The scenario illustrates many aspects of the creation of a credible soundscape. The game character moves through a virtual three-dimensional environment and the soundscape is at all times spatialized: a <em>panning model</em> can be used to spatialize sound sources in the game (<code>AudioPanningNode</code>); <em>obstruction / occlusion</em> modelling is used to muffle the sound of the clock as it passes through walls, and the sound of flies buzzing around would need <em>Doppler Shift</em> simulation to sound believable (also supported by <code>AudioPanningNode</code>). The listener's position is part of this 3D model as well (<code>AudioListener</code>).</p></li>
         <li><p>As the soundscape changes from small room to large hall, the game benefits from the <em>simulation of acoustic spaces</em>, possibly through the use of a <em>convolution engine</em> for high quality room effects as supported by <code>ConvolverNode</code> in the Web Audio API.</p></li> 
        <li><p>Many sounds in the scenario are triggered by events in the game, and would need to be played with low latency. The sound of the bullets as they are fired and ricochet off the walls, in particular, illustrates a requirement for <em>basic polyphony</em> and <em>high-performance playback and processing of many sounds</em>. These are supported by the general ability of the Web Audio API to include many sound-generating nodes with independent scheduling and high-throughput native algorithms.</p></li>
       </ol>
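The cross-fade requirement described above lends itself to a short sketch. This example is illustrative only and not part of the changeset: `equalPowerGains` and `crossFade` are hypothetical helper names, and the 2012-era scheduling method `noteOn()` is assumed, as in the surrounding text.

```javascript
// Equal-power cross-fade curve: at any point x in [0, 1] the two gains
// satisfy out^2 + in^2 === 1, avoiding the perceived volume dip that a
// linear cross-fade produces at its midpoint.
function equalPowerGains(x) {
  return {
    out: Math.cos(x * Math.PI / 2),  // gain for the track fading out
    in:  Math.sin(x * Math.PI / 2)   // gain for the track fading in
  };
}

// Hypothetical wiring: schedule the curve onto two AudioGainNode params
// and start the incoming loop sample-accurately at "when".
function crossFade(oldGainNode, newGainNode, newSource, when, duration) {
  var steps = 32;  // piecewise approximation of the curve
  for (var i = 0; i <= steps; i++) {
    var g = equalPowerGains(i / steps);
    var t = when + duration * (i / steps);
    oldGainNode.gain.setValueAtTime(g.out, t);
    newGainNode.gain.setValueAtTime(g.in, t);
  }
  newSource.noteOn(when);  // sample-accurate start of the new music loop
}
```

The curve could equally be scheduled with `AudioParam.setValueCurveAtTime()`; the stepwise `setValueAtTime()` loop is shown only because the surrounding text names that method.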
@@ -286,11 +286,18 @@
       </p>
       </section>
       
-      <section>
+    <section>
       
-      <h3>UI/DOM Sounds</h3>
-      <p>A child is visiting a Kids' website where the playful, colorful HTML interface is accompanied by sound effects played as the child hovers or clicks on some of the elements of the page. For example, when filling a form the sound of a typewriter can be heard as the child types in the form field. Some of the sounds are spatialized and have a different volume depending on where and how the child interacts with the page. When an action triggers a download visualised with a progress bar, a rising pitch sound accompanies the download and another sound (ping!) is played when the download is complete.
-      </p>
+      <h3>Playful sonification of user interfaces</h3>
+
+      <p>A child is visiting a social website designed for kids. The playful, colorful HTML interface is accompanied by sound effects played as the child hovers over or clicks on some of the elements of the page. For example, when filling in a form the sound of a typewriter can be heard as the child types in the form field. Some of the sounds are spatialized and have a different volume depending on where and how the child interacts with the page. When an action triggers a download visualised with a progress bar, a sound of gradually rising pitch accompanies the download and another sound (ping!) is played when the download is complete.</p>
+
+      <h4>Notes and Implementation Considerations</h4>
+      <ol>
+        <li><p>Although the web UI incorporates many sound effects, its controls are embedded in the site's pages using standard web technology such as HTML form elements and CSS stylesheets. JavaScript event handlers may be attached to these elements, causing graphs of <code>AudioNodes</code> to be constructed and activated to produce sound output.</p></li>
+        <li><p>Modularity, spatialization and mixing play an important role in this scenario, as in the other scenarios in this document.</p></li>
+        <li><p>Various effects can be achieved through programmatic variation of these sounds using the Web Audio API. The download progress could smoothly vary the pitch of an <code>AudioBufferSourceNode</code>'s <code>playbackRate</code> using an exponential ramp function, or a more realistic typewriter sound could be achieved by varying an output filter's frequency based on the keypress's character code.</p></li>
+        <li><p>In a future version of CSS, stylesheets may be able to support simple types of sonification, such as attaching a "typewriter key" sound to an HTML <code>textarea</code> element or a "click" sound to an HTML <code>button</code>. These can be thought of as an extension of the visual skinning concepts already embodied by style attributes such as <code>background-image</code>.</p></li>
+      </ol>
     </section>
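The rising-pitch download idea in the notes above can be sketched as follows. This is a minimal illustration, not part of the changeset: `progressToRate` and `sonifyDownload` are hypothetical names, and the 2012-era methods `noteOn()`/`noteOff()` are assumed from the surrounding text.

```javascript
// Map download progress p in [0, 1] to a playback rate that rises
// exponentially from startRate to endRate. An exponential (rather than
// linear) ramp sounds like a smooth pitch glide, since pitch perception
// is logarithmic in frequency.
function progressToRate(p, startRate, endRate) {
  return startRate * Math.pow(endRate / startRate, p);
}

// Hypothetical wiring: a looping buffer whose playbackRate follows an
// XMLHttpRequest's progress events, plus a "ping!" on completion.
function sonifyDownload(ctx, loopBuffer, pingBuffer, xhr) {
  var src = ctx.createBufferSource();
  src.buffer = loopBuffer;
  src.loop = true;
  src.connect(ctx.destination);
  src.noteOn(0);  // start the rising loop immediately

  xhr.onprogress = function (e) {
    if (e.lengthComputable) {
      var p = e.loaded / e.total;  // fraction downloaded
      src.playbackRate.setValueAtTime(
          progressToRate(p, 0.5, 2.0), ctx.currentTime);
    }
  };
  xhr.onload = function () {
    src.noteOff(0);  // silence the rising loop
    var ping = ctx.createBufferSource();
    ping.buffer = pingBuffer;
    ping.connect(ctx.destination);
    ping.noteOn(0);  // "ping!" when the download completes
  };
}
```

With the rates chosen here the loop glides over two octaves, from half speed to double speed, passing through its original pitch at the halfway point.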
     
     <section>