--- a/webaudio/specification.html Mon Aug 13 17:17:41 2012 +0100
+++ b/webaudio/specification.html Mon Aug 13 13:26:52 2012 -0700
@@ -6,7 +6,7 @@
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Web Audio API</title>
<meta name="revision"
- content="$Id: Overview.html,v 1.8 2012/03/14 06:48:02 tmichel Exp $" />
+ content="$Id: Overview.html,v 1.4 2012/07/30 11:44:57 tmichel Exp $" />
<link rel="stylesheet" href="style.css" type="text/css" />
<!--
<script src="section-links.js" type="application/ecmascript"></script>
@@ -38,7 +38,7 @@
<dl>
<dt>This version: </dt>
<dd><a
- href="https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html">https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html</a>
+ href="https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html">https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html</a>
</dd>
<dt>Latest published version: </dt>
<dd><a
@@ -105,7 +105,7 @@
the <a href="http://www.w3.org/TR/">W3C technical reports index</a> at
http://www.w3.org/TR/. </em></p>
-<p>This is the public Working Draft of the <cite>Web Audio API</cite>
+<p>This is the Editor's Draft of the <cite>Web Audio API</cite>
specification. It has been produced by the <a
href="http://www.w3.org/2011/audio/"><b>W3C Audio Working Group</b></a> , which
is part of the W3C WebApps Activity.</p>
@@ -131,15 +131,16 @@
<div class="toc">
<ul>
- <li><a href="#introduction">1. Introduction</a></li>
+ <li><a href="#introduction">1. Introduction</a>
<ul>
<li><a href="#Features">1.1. Features</a></li>
<li><a href="#ModularRouting">1.2. Modular Routing</a></li>
<li><a href="#APIOverview">1.3. API Overview</a></li>
</ul>
+ </li>
<li><a href="#conformance">2. Conformance</a></li>
<li><a href="#terminology">3. Terminology and Algorithms</a></li>
- <li><a href="#API-section">4. The Audio API</a></li>
+ <li><a href="#API-section">4. The Audio API</a>
<ul>
<li><a href="#AudioContext-section">4.1. The AudioContext Interface</a>
<ul>
@@ -290,6 +291,7 @@
<li><a href="#MediaStreamAudioSourceNode">4.25. The
MediaStreamAudioSourceNode Interface</a></li>
</ul>
+ </li>
<li><a href="#AudioElementIntegration">5. Integration with the
<code>audio</code> and <code>video</code> elements</a></li>
<li><a href="#MixerGainStructure">6. Mixer Gain Structure</a>
@@ -1091,7 +1093,7 @@
<div id="methodsandparams-AudioNode-section" class="section">
<h3 id="methodsandparams-AudioNode">4.2.2. Methods and Parameters</h3>
<dl>
- <dt id="dfn-connect">The <code>connect</code> to AudioNode method</dt>
+ <dt id="dfn-connect-AudioNode">The <code>connect</code> to AudioNode method</dt>
<dd><p>Connects the AudioNode to another AudioNode.</p>
<p>The <dfn id="dfn-destination_2">destination</dfn> parameter is the
AudioNode to connect to.</p>
@@ -1113,7 +1115,7 @@
</dd>
</dl>
<dl>
- <dt id="dfn-connect">The <code>connect</code> to AudioParam method</dt>
+ <dt id="dfn-connect-AudioParam">The <code>connect</code> to AudioParam method</dt>
<dd><p>Connects the AudioNode to an AudioParam, controlling the parameter
value with an audio-rate signal.
</p>
@@ -1129,7 +1131,7 @@
<p>The <dfn id="dfn-destination_3">destination</dfn> parameter is the
AudioParam to connect to.</p>
- <p>The <dfn id="dfn-output_3">output</dfn> parameter is an index
+ <p>The <dfn id="dfn-output_3-destination">output</dfn> parameter is an index
describing which output of the AudioNode from which to connect. An
out-of-bound value throws an exception.</p>
</dd>
@@ -1137,7 +1139,7 @@
<dl>
<dt id="dfn-disconnect">The <code>disconnect</code> method</dt>
<dd><p>Disconnects an AudioNode's output.</p>
- <p>The <dfn id="dfn-output_3">output</dfn> parameter is an index
+ <p>The <dfn id="dfn-output_3-disconnect">output</dfn> parameter is an index
describing which output of the AudioNode to disconnect. An out-of-bound
value throws an exception.</p>
</dd>
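The out-of-bound rule shared by these `connect` and `disconnect` descriptions can be sketched as a plain-JavaScript model. This is illustrative only; `checkOutputIndex` and the `numberOfOutputs` field are stand-ins for the spec's internal bookkeeping, not the API itself:

```javascript
// Toy model of the spec's rule for the "output" parameter: the index
// must name one of the node's existing outputs, otherwise an exception
// is thrown.
function checkOutputIndex(node, output) {
  if (output < 0 || output >= node.numberOfOutputs) {
    throw new Error("INDEX_SIZE_ERR: output index out of bounds");
  }
  return output;
}

// A stand-in node with two outputs.
var fakeNode = { numberOfOutputs: 2 };
```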
@@ -1154,13 +1156,13 @@
<li>A <em>normal</em> JavaScript reference obeying normal garbage collection rules. </li>
<li>A <em>playing</em> reference for an <code>AudioSourceNode</code>. Please see details for each specific
<code>AudioSourceNode</code> sub-type. For example, both <code>AudioBufferSourceNodes</code> and <code>OscillatorNodes</code> maintain a <em>playing</em>
-reference to themselves while they are in the SCHEDULED_STATE or PLAYING_STATE.
+reference to themselves while they are in the SCHEDULED_STATE or PLAYING_STATE.</li>
<li>A <em>connection</em> reference which occurs if another <code>AudioNode</code> is connected to it. </li>
<li>A <em>tail-time</em> reference which an <code>AudioNode</code> maintains on itself as long as it has
any internal processing state which has not yet been emitted. For example, a <code>ConvolverNode</code> has
a tail which continues to play even after receiving silent input (think about clapping your hands in a large concert
hall and continuing to hear the sound reverberate throughout the hall). Some <code>AudioNodes</code> have this
- property. Please see details for specific nodes.
+ property. Please see details for specific nodes.</li>
</ol>
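The rule these four reference types imply can be shown with a toy liveness model (hypothetical names; real implementations track this internally): a node must not be deleted while any kind of reference to it is held.

```javascript
// Toy liveness model: an AudioNode stays alive while any of the four
// reference kinds (normal JS, playing, connection, tail-time) is held.
function isAlive(refs) {
  return refs.normal > 0 || refs.playing > 0 ||
         refs.connection > 0 || refs.tailTime > 0;
}

// E.g. an AudioBufferSourceNode in PLAYING_STATE whose last JS
// reference has been dropped is still alive via its playing reference.
var playingNoJsRef = { normal: 0, playing: 1, connection: 0, tailTime: 0 };
```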
<p>
@@ -1359,7 +1361,8 @@
the time coordinate system of AudioContext.currentTime. The events define a mapping from time to value. The following methods
can change the event list by adding a new event into the list of a type specific to the method. Each event
has a time associated with it, and the events will always be kept in time-order in the list. These
-methods will be called <em>automation</em> methods:
+methods will be called <em>automation</em> methods:</p>
+
<ul>
<li>setValueAtTime() - <em>SetValue</em></li>
<li>linearRampToValueAtTime() - <em>LinearRampToValue</em></li>
@@ -1367,7 +1370,6 @@
<li>setTargetValueAtTime() - <em>SetTargetValue</em></li>
<li>setValueCurveAtTime() - <em>SetValueCurve</em></li>
</ul>
-</p>
<p>
The following rules will apply when calling these methods:
@@ -1380,7 +1382,7 @@
<li>If setValueCurveAtTime() is called for time T and duration D and there are any events having a time greater than T, but less than
T + D, then an exception will be thrown. In other words, it's not ok to schedule a value curve during a time period containing other events.</li>
<li>Similarly an exception will be thrown if any <em>automation</em> method is called at a time which is inside of the time interval
-of a <em>SetValueCurve</em> event at time T and duration D.
+of a <em>SetValueCurve</em> event at time T and duration D.</li>
</ul>
<p>
</p>
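The two <em>SetValueCurve</em> rules above amount to an interval-overlap test; here is a hedged sketch in plain JavaScript (the helper name and event representation are hypothetical, not part of the API):

```javascript
// Sketch of the overlap rules: a SetValueCurve occupying (T, T + D)
// may not contain other events, and no automation event may land
// strictly inside an existing curve's interval.
// "duration" is passed only when the new event is itself a curve.
function curveConflicts(events, time, duration) {
  return events.some(function (e) {
    if (e.type === "SetValueCurve") {
      // New event strictly inside an existing curve's interval?
      return time > e.time && time < e.time + e.duration;
    }
    // Existing event strictly inside the new curve's interval?
    return duration !== undefined && e.time > time && e.time < time + duration;
  });
}
```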
@@ -1396,7 +1398,7 @@
</p>
<p>
If the next event (having time T1) after this <em>SetValue</em> event is not of type <em>LinearRampToValue</em> or <em>ExponentialRampToValue</em>,
- then, for t: time <= t < T1, v(t) = value.
+ then, for t: time &lt;= t &lt; T1, v(t) = value.
In other words, the value will remain constant during this time interval, allowing the creation of "step" functions.
</p>
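The "step" behavior described above can be sketched as a small evaluator over a time-ordered list of <em>SetValue</em> events (illustrative only; the function name and default value are assumptions):

```javascript
// Sketch: for SetValue events sorted by time, the value holds constant
// from each event's time until the next event's time.
function stepValue(events, t) {
  var v = 0; // assumed default before the first event
  for (var i = 0; i < events.length; i++) {
    if (events[i].time <= t) v = events[i].value;
  }
  return v;
}
```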
<p>
@@ -1415,7 +1417,7 @@
<p>The <dfn id="dfn-time_3">time</dfn> parameter is the time in the same time coordinate system as AudioContext.currentTime.</p>
<p>
- The value during the time interval T0 <= t < T1 (where T0 is the time of the previous event and T1 is the time parameter passed into this method)
+ The value during the time interval T0 &lt;= t &lt; T1 (where T0 is the time of the previous event and T1 is the time parameter passed into this method)
will be calculated as:
</p>
<pre>
@@ -1442,7 +1444,7 @@
or equal to 0, or if the value at the time of the previous event is less than or equal to 0.</p>
<p>The <dfn id="dfn-time_4">time</dfn> parameter is the time in the same time coordinate system as AudioContext.currentTime.</p>
<p>
- The value during the time interval T0 <= t < T1 (where T0 is the time of the previous event and T1 is the time parameter passed into this method)
+ The value during the time interval T0 &lt;= t &lt; T1 (where T0 is the time of the previous event and T1 is the time parameter passed into this method)
will be calculated as:
</p>
<pre>
@@ -1477,7 +1479,7 @@
to reach the value 1 - 1/e (around 63.2%) given a step input response (transition from 0 to 1 value).
</p>
<p>
- During the time interval: <em>T0</em> <= t < <em>T1</em>, where T0 is the <em>time</em> parameter and T1 represents the time of the event following this
+ During the time interval: <em>T0</em> &lt;= t &lt; <em>T1</em>, where T0 is the <em>time</em> parameter and T1 represents the time of the event following this
event (or <em>infinity</em> if there are no following events):
</p>
<pre>
@@ -1502,13 +1504,11 @@
<p>The <dfn id="dfn-duration_5">duration</dfn> parameter is the
amount of time in seconds (after the <em>time</em> parameter) where values will be calculated according to the <em>values</em> parameter..</p>
<p>
- During the time interval: <em>time</em> <= t < <em>time</em> + <em>duration</em>, values will be calculated:
+ During the time interval: <em>time</em> &lt;= t &lt; <em>time</em> + <em>duration</em>, values will be calculated:
</p>
- <p>
<pre>
v(t) = values[N * (t - time) / duration], where <em>N</em> is the length of the <em>values</em> array.
</pre>
- </p>
</dd>
</dl>
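The lookup formula above can be restated as a runnable sketch. Note one hedged reading: the spec text leaves the exact rounding of the index unstated, so this sketch truncates it and clamps to the last array element.

```javascript
// v(t) = values[N * (t - time) / duration], where N = values.length.
// The index is floored and clamped here; treat that as one plausible
// interpretation rather than normative behavior.
function curveValue(values, time, duration, t) {
  var n = values.length;
  var index = Math.floor(n * (t - time) / duration);
  index = Math.min(Math.max(index, 0), n - 1);
  return values[index];
}
```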
<dl>
@@ -1554,7 +1554,7 @@
var curveLength = 44100;
var curve = new Float32Array(curveLength);
-for (var i = 0; i < curveLength; ++i)
+for (var i = 0; i &lt; curveLength; ++i)
curve[i] = Math.sin(Math.PI * i / curveLength);
param.setValueAtTime(0.2, t0);
@@ -1570,7 +1570,6 @@
</div>
</div>
</div>
-</div>
@@ -2450,7 +2449,7 @@
float power = 0;
- for (size_t i = 0; i < numberOfChannels; ++i) {
+ for (size_t i = 0; i &lt; numberOfChannels; ++i) {
float* sourceP = buffer->channel(i)->data();
float channelPower = 0;
@@ -2466,7 +2465,7 @@
power = sqrt(power / (numberOfChannels * length));
// Protect against accidental overload.
- if (isinf(power) || isnan(power) || power < MinPower)
+ if (isinf(power) || isnan(power) || power &lt; MinPower)
power = MinPower;
float scale = 1 / power;
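The normalization in the C++ fragment above can be restated in plain JavaScript, taking `channels` as an array of sample arrays. This is a sketch of the same RMS computation; the `MinPower` guard value here is assumed for illustration, not taken from the spec.

```javascript
// RMS-based normalization scale, mirroring the C++ fragment above.
var MinPower = 0.000125; // guard against division by ~0 (value assumed)

function normalizationScale(channels) {
  var numberOfChannels = channels.length;
  var length = channels[0].length;
  var power = 0;
  for (var i = 0; i < numberOfChannels; i++) {
    var channelPower = 0;
    for (var j = 0; j < length; j++) {
      var sample = channels[i][j];
      channelPower += sample * sample;
    }
    power += channelPower;
  }
  power = Math.sqrt(power / (numberOfChannels * length));
  // Protect against accidental overload.
  if (!isFinite(power) || isNaN(power) || power < MinPower) power = MinPower;
  return 1 / power;
}
```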
@@ -2631,7 +2630,7 @@
<img alt="channel splitter" src="images/channel-splitter.png" />
<p>Please note that in this example, the splitter does <b>not</b> interpret the channel identities (such as left, right, etc.), but
-simply splits out channels in the order that they are input.
+simply splits out channels in the order that they are input.</p>
<p>One application for <code>AudioChannelSplitter</code> is for doing "matrix
mixing" where individual gain control of each channel is desired. </p>
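Numerically, such a splitter, per-channel gain, merger graph computes the following (a plain-JavaScript sketch of the math, not the API; in the graph itself each split channel would pass through its own gain node before being merged):

```javascript
// Sketch of "matrix mixing": each input channel is scaled by its own
// gain value before the channels are merged back together in order.
function applyChannelGains(channels, gains) {
  return channels.map(function (channel, i) {
    return channel.map(function (sample) {
      return sample * gains[i];
    });
  });
}
```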
@@ -2677,7 +2676,7 @@
<img alt="channel merger" src="images/channel-merger.png" />
<p>Please note that in this example, the merger does <b>not</b> interpret the channel identities (such as left, right, etc.), but
-simply combines channels in the order that they are input.
+simply combines channels in the order that they are input.</p>
<p>Be aware that it is possible to connect an <code>AudioChannelMerger</code>
@@ -3179,13 +3178,13 @@
</dd>
</dl>
<dl>
- <dt id="dfn-noteOn">The <code>noteOn</code>
+ <dt id="dfn-noteOn-AudioBufferSourceNode">The <code>noteOn</code>
method</dt>
<dd><p>defined as in <a href="#AudioBufferSourceNode-section"><code>AudioBufferSourceNode</code></a>. </p>
</dd>
</dl>
<dl>
- <dt id="dfn-noteOff">The <code>noteOff</code>
+ <dt id="dfn-noteOff-AudioBufferSourceNode">The <code>noteOff</code>
method</dt>
<dd><p>defined as in <a href="#AudioBufferSourceNode-section"><code>AudioBufferSourceNode</code></a>. </p>
</dd>
@@ -3213,6 +3212,7 @@
</code></pre>
</div>
</div>
+</div>
<div id="MediaStreamAudioSourceNode-section" class="section">
<h2 id="MediaStreamAudioSourceNode">4.25. The MediaStreamAudioSourceNode
@@ -3742,11 +3742,11 @@
// Source in front or behind the listener.
double frontBack = projectedSource.dot(listenerFrontNorm);
-if (frontBack < 0)
+if (frontBack &lt; 0)
azimuth = 360 - azimuth;
// Make azimuth relative to "front" and not "right" listener vector.
-if ((azimuth >= 0) && (azimuth <= 270))
+if ((azimuth >= 0) && (azimuth &lt;= 270))
azimuth = 90 - azimuth;
else
azimuth = 450 - azimuth;
@@ -3755,7 +3755,7 @@
if (elevation > 90)
elevation = 180 - elevation;
-else if (elevation < -90)
+else if (elevation &lt; -90)
elevation = -180 - elevation;
</code></pre>
</div>
@@ -3768,7 +3768,6 @@
<em>mono->stereo</em> and <em>stereo->stereo</em> panning must be supported.
<em>mono->stereo</em> processing is used when all connections to the input are mono.
Otherwise <em>stereo->stereo</em> processing is used.</p>
-<p>
<p>The following algorithms must be implemented: </p>
<ul>
@@ -3785,8 +3784,8 @@
<ol>
<li>
- </p>
- The <em>azimuth</em> value is first contained to be within the range -90 <= <em>azimuth</em> <= +90 according to:
+ <p>
+ The <em>azimuth</em> value is first contained to be within the range -90 &lt;= <em>azimuth</em> &lt;= +90 according to:
</p>
<pre>
// Clamp azimuth to allowed range of -180 -> +180.
@@ -3794,7 +3793,7 @@
azimuth = min(180, azimuth);
// Now wrap to range -90 -> +90.
- if (azimuth < -90)
+ if (azimuth &lt; -90)
azimuth = -180 - azimuth;
else if (azimuth > 90)
azimuth = 180 - azimuth;
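The clamp-and-wrap step in the fragment above can be restated as a runnable function (the name `foldAzimuth` is ours, not the spec's):

```javascript
// Fold an arbitrary azimuth into [-90, +90], matching the pseudocode:
// first clamp to [-180, +180], then mirror values beyond +/-90.
function foldAzimuth(azimuth) {
  azimuth = Math.max(-180, azimuth);
  azimuth = Math.min(180, azimuth);
  if (azimuth < -90) azimuth = -180 - azimuth;
  else if (azimuth > 90) azimuth = 180 - azimuth;
  return azimuth;
}
```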
@@ -3813,7 +3812,7 @@
Or for <em>stereo->stereo</em> as:
</p>
<pre>
- if (azimuth <= 0) { // from -90 -> 0
+ if (azimuth &lt;= 0) { // from -90 -> 0
// inputL -> outputL and "equal-power pan" inputR as in mono case
// by transforming the "azimuth" value from -90 -> 0 degrees into the range -90 -> +90.
x = (azimuth + 90) / 90;
@@ -3843,7 +3842,7 @@
</pre>
<p>Else for <em>stereo->stereo</em>, the output is calculated as:</p>
<pre>
- if (azimuth <= 0) { // from -90 -> 0
+ if (azimuth &lt;= 0) { // from -90 -> 0
outputL = inputL + inputR * gainL;
outputR = inputR * gainR;
} else { // from 0 -> +90