W3C

CSS Syntax Module Level 3

Editor's Draft

This version:
http://dev.w3.org/csswg/css3-syntax/
Editor's draft:
http://dev.w3.org/csswg/css3-syntax/
Previous version:
http://www.w3.org/TR/2003/WD-css3-syntax-20030813/
Issue Tracking:
W3C Bugzilla
Feedback:
www-style@w3.org with subject line “[css-syntax] … message topic …” (archives)
Editors:
(Google, Inc.),

Abstract

CSS is a language for describing the rendering of structured documents (such as HTML and XML) on screen, on paper, in speech, etc. This module describes, in general terms, the basic structure and syntax of CSS stylesheets. It defines, in detail, the syntax and parsing of CSS - how to turn a stream of bytes into a meaningful stylesheet.

Status of this document

This is a public copy of the editors' draft. It is provided for discussion only and may change at any moment. Its publication here does not imply endorsement of its contents by W3C. Don't cite this document other than as work in progress.

The (archived) public mailing list www-style@w3.org (see instructions) is preferred for discussion of this specification. When sending e-mail, please put the text “css3-syntax” in the subject, preferably like this: “[css3-syntax] …summary of comment…”

This document was produced by the CSS Working Group (part of the Style Activity).

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

The following features are at risk: …

Table of contents

1. Introduction

This section is not normative.

This module defines the abstract syntax and parsing of CSS stylesheets and other things which use CSS syntax (such as the HTML style attribute).

It defines algorithms for converting a stream of codepoints (in other words, text) into a stream of CSS tokens, and then further into CSS objects such as stylesheets, rules, and declarations.

1.1. Module interactions

This module defines the syntax and parsing of CSS stylesheets. It supersedes the lexical scanner and grammar defined in CSS 2.1.

2. Description of CSS's Syntax

This section is not normative.

A CSS document is a series of qualified rules, which are usually style rules that apply CSS properties to elements, and at-rules, which define special processing rules or values for the CSS document.

A qualified rule starts with a prelude then has a {}-wrapped block containing a sequence of declarations. The meaning of the prelude varies based on the context that the rule appears in - for style rules, it's a selector which specifies what elements the declarations will apply to. Each declaration has a name, followed by a colon and the declaration value. Declarations are separated by semicolons.

A typical rule might look something like this:

p > a {
	color: blue;
	text-decoration: underline;
}

In the above rule, "p > a" is the selector, which, if the source document is HTML, selects any <a> elements that are children of a <p> element.

"color: blue;" is a declaration specifying that, for the elements that match the selector, their ‘color’ property should have the value ‘blue’. Similiarly, their ‘text-decoration’ property should have the value ‘underline’.

At-rules are all different, but they have a basic structure in common. They start with an "@" character followed by their name. Some at-rules are simple statements, with their name followed by more CSS values to specify their behavior, and finally ended by a semicolon. Others are blocks; they can have CSS values following their name, but they end with a {}-wrapped block, similar to a qualified rule. Even the contents of these blocks are specific to the given at-rule: sometimes they contain a sequence of declarations, like a qualified rule; other times, they may contain additional blocks, or at-rules, or other structures altogether.

Here are several examples of at-rules that illustrate the varied syntax they may contain.

@import "my-styles.css";

The ‘@import’ at-rule is a simple statement. After its name, it takes a single string or ‘url()’ function to indicate the stylesheet that it should import.

@page :left {
	margin-left: 4cm;
	margin-right: 3cm;
}

The ‘@page’ at-rule consists of an optional page selector (the ‘:left’ pseudoclass), followed by a block of properties that apply to the page when printed. In this way, it's very similar to a normal style rule, except that its properties don't apply to any "element", but rather the page itself.

@media print {
	body { font-size: 10pt }
}

The ‘@media’ at-rule begins with a media type and a list of optional media queries. Its block contains entire rules, which are only applied when the ‘@media’s conditions are fulfilled.

Property names and at-rule names are always identifiers, which have to start with a letter or a hyphen followed by a letter, and then can contain letters, numbers, hyphens, or underscores. You can include any character at all, even ones that CSS uses in its syntax, by escaping it with a backslash (\) or by using a hexadecimal escape.
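
For example, an identifier containing a colon (here in a class selector) can be written by escaping the colon directly, or with a hexadecimal escape for U+003A COLON:

.foo\:bar { color: green; }
.foo\3A bar { color: green; }

Both select elements with the class "foo:bar"; the space after the hexadecimal escape terminates the escape and is not part of the identifier.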

The syntax of selectors is defined in the Selectors spec. Similarly, the syntax of the wide variety of CSS values is defined in the Values & Units spec. The special syntaxes of individual at-rules can be found in the specs that define them.

2.1. Error Handling

This section is not normative.

When errors occur in CSS, the parser attempts to recover gracefully, throwing away only the minimum amount of content before returning to parsing as normal. This is because errors aren't always mistakes - new syntax looks like an error to an old parser, and it's useful to be able to add new syntax to the language without worrying about stylesheets that include it being completely broken in older UAs.

The precise error-recovery behavior is detailed in the parser itself, but it's simple enough that a short description is fairly accurate:

3. Tokenizing and Parsing CSS

User agents must use the parsing rules described in this specification to generate the CSSOM trees from text/css resources. Together, these rules define what is referred to as the CSS parser.

This specification defines the parsing rules for CSS documents, whether they are syntactically correct or not. Certain points in the parsing algorithm are said to be parse errors. The error handling for parse errors is well-defined: user agents must either act as described below when encountering such problems, or must abort processing at the first error that they encounter for which they do not wish to apply the rules described below.

Conformance checkers must report at least one parse error condition to the user if one or more parse error conditions exist in the document and must not report parse error conditions if none exist in the document. Conformance checkers may report more than one parse error condition if more than one parse error condition exists in the document. Conformance checkers are not required to recover from parse errors, but if they do, they must recover in the same way as user agents.

3.1. Overview of the Parsing Model

The input to the CSS parsing process consists of a stream of Unicode code points, which is passed through a tokenization stage followed by a tree construction stage. The output is a CSSStyleSheet object.

Implementations that do not support scripting do not have to actually create a CSSOM CSSStyleSheet object, but the CSSOM tree in such cases is still used as the model for the rest of the specification.

3.2. The input byte stream

The stream of Unicode code points that comprises the input to the tokenization stage may be initially seen by the user agent as a stream of bytes (typically coming over the network or from the local file system). The bytes encode the actual characters according to a particular character encoding, which the user agent must use to decode the bytes into characters.

To decode the stream of bytes into a stream of characters, UAs must follow these steps.

The algorithms to get an encoding and decode are defined in the Encoding Standard.

First, determine the fallback encoding:

  1. If HTTP or equivalent protocol defines an encoding (e.g. via the charset parameter of the Content-Type header), get an encoding for the specified value. If that does not return failure, use the return value as the fallback encoding.
  2. Otherwise, check the byte stream. If the first several bytes match the hex sequence
    40 63 68 61 72 73 65 74 20 22 (not 22)* 22 3B
    then get an encoding for the sequence of (not 22)* bytes, decoded per windows-1252.

    Note: Anything ASCII-compatible will do, so using windows-1252 is fine.

    Note: The byte sequence above, when decoded as ASCII, is the string "@charset "…";", where the "…" is the sequence of bytes corresponding to the encoding's name.

    If the return value was utf-16 or utf-16be, use utf-8 as the fallback encoding; if it was anything else except failure, use the return value as the fallback encoding.

    This mimics HTML <meta> behavior.

  3. Otherwise, get an encoding for the value of the charset attribute on the <link> element or <?xml-stylesheet?> processing instruction that caused the style sheet to be included, if any. If that does not return failure, use the return value as the fallback encoding.
  4. Otherwise, if the referring style sheet or document has an encoding, use that as the fallback encoding.
  5. Otherwise, use utf-8 as the fallback encoding.

Then, decode the byte stream using the fallback encoding.

Note: the decode algorithm lets the byte order mark (BOM) take precedence, hence the usage of the term "fallback" above.
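
A rough, non-normative sketch of the steps above in Python. The get_an_encoding() helper merely approximates the Encoding Standard's "get an encoding" algorithm with Python's codec registry, and the protocol_encoding and environment_encoding parameters stand in for the information consulted in steps 1, 3, and 4:

  import codecs
  import re

  def get_an_encoding(label):
      # Stand-in for the Encoding Standard's "get an encoding"; None means failure.
      try:
          return codecs.lookup(label.strip()).name
      except LookupError:
          return None

  def determine_fallback_encoding(stream, protocol_encoding=None, environment_encoding=None):
      # Step 1: encoding from HTTP or an equivalent protocol.
      if protocol_encoding and get_an_encoding(protocol_encoding):
          return get_an_encoding(protocol_encoding)
      # Step 2: a leading @charset rule, i.e. the byte sequence
      # 40 63 68 61 72 73 65 74 20 22 (not 22)* 22 3B.
      m = re.match(rb'@charset "([^"]*)";', stream)
      if m:
          enc = get_an_encoding(m.group(1).decode('windows-1252'))
          if enc in ('utf-16', 'utf-16-le', 'utf-16-be'):
              return 'utf-8'  # mimic HTML <meta> behavior
          if enc:
              return enc
      # Steps 3-4: charset of the referring <link>/<?xml-stylesheet?> or document.
      if environment_encoding and get_an_encoding(environment_encoding):
          return get_an_encoding(environment_encoding)
      # Step 5: default.
      return 'utf-8'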

Anne says that steps 3/4 should be an input to this algorithm from the specs that define importing a stylesheet, to make the algorithm as a whole cleaner. Perhaps abstract it into the concept of an "environment charset" or something?

Should we only take the charset from the referring document if it's same-origin?

3.3. Preprocessing the input stream

The input stream consists of the characters (individual Unicode code points) pushed into it as the input byte stream is decoded.

Before sending the input stream to the tokenizer, implementations must make the following character substitutions:
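
As a rough, non-normative illustration, and assuming the substitutions are the newline normalization described in the note on "newline" in section 4.2 (U+000D CARRIAGE RETURN, U+000C FORM FEED, and CR LF pairs each becoming a single U+000A LINE FEED) together with replacing U+0000 NULL by U+FFFD REPLACEMENT CHARACTER (an assumption not stated here), preprocessing could be sketched as:

  def preprocess(text):
      # Newline normalization (see the note on "newline" in section 4.2):
      # CR LF pairs, lone CR, and FF each become a single LF.
      text = text.replace('\r\n', '\n').replace('\r', '\n').replace('\f', '\n')
      # Assumed substitution: NULL becomes U+FFFD REPLACEMENT CHARACTER.
      return text.replace('\x00', '\ufffd')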

4. Tokenization

Implementations must act as if they used the following algorithms to tokenize CSS. To transform a stream of characters into a stream of tokens, repeatedly consume a token until an 〈EOF〉 is encountered, collecting the returned tokens into a stream. Each call to the consume a token algorithm returns a single token, so it can also be used "on-demand" to tokenize a stream of characters during parsing, if so desired.

The output of the tokenization step is a stream of zero or more of the following tokens: 〈ident〉, 〈function〉, 〈at-keyword〉, 〈hash〉, 〈string〉, 〈bad-string〉, 〈url〉, 〈bad-url〉, 〈delim〉, 〈number〉, 〈percentage〉, 〈dimension〉, 〈unicode-range〉, 〈include-match〉, 〈dash-match〉, 〈prefix-match〉, 〈suffix-match〉, 〈substring-match〉, 〈column〉, 〈whitespace〉, 〈CDO〉, 〈CDC〉, 〈colon〉, 〈semicolon〉, 〈comma〉, 〈[〉, 〈]〉, 〈(〉, 〈)〉, 〈{〉, and 〈}〉.

The type flag of hash tokens is used in the Selectors syntax [SELECT]. Only hash tokens with the "id" type are valid ID selectors.

As a technical note, the tokenizer defined here requires only three characters of look-ahead. The tokens it produces are designed to allow Selectors to be parsed with one token of look-ahead, and additional tokens may be added in the future to maintain this invariant.
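
For example, the input "#foo { margin: 1.5em }" tokenizes (informally) as 〈hash "foo"〉 with its type flag set to "id", 〈whitespace〉, 〈{〉, 〈whitespace〉, 〈ident "margin"〉, 〈colon〉, 〈whitespace〉, a 〈dimension〉 with value 1.5 and unit "em", 〈whitespace〉, and 〈}〉.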

4.1. Token Railroad Diagrams

This section is non-normative.

This section presents an informative view of the tokenizer, in the form of railroad diagrams. Railroad diagrams are more compact than an explicit parser, but often easier to read than a regular expression.

These diagrams are informative and incomplete; they describe the grammar of "correct" tokens, but do not describe error-handling at all. They are provided solely to make it easier to get an intuitive grasp of the syntax of each token.

Diagrams with names between 〈〉 brackets represent tokens. The rest are productions referred to by other diagrams.

[Railroad diagrams are not reproduced in this text rendering. Diagrams are defined for the comment, newline, whitespace character, escape, and url-unquoted productions, and for the 〈whitespace〉, 〈ident〉, 〈function〉, 〈at-keyword〉, 〈hash〉, 〈string〉, 〈url〉, 〈number〉, 〈dimension〉, 〈percentage〉, and 〈unicode-range〉 tokens. The remaining tokens are fixed character sequences: 〈include-match〉 "~=", 〈dash-match〉 "|=", 〈prefix-match〉 "^=", 〈suffix-match〉 "$=", 〈substring-match〉 "*=", 〈column〉 "||", 〈CDO〉 "<!--", and 〈CDC〉 "-->".]

4.2. Definitions

This section defines several terms used during the tokenization phase.

next input character
The first character in the input stream that has not yet been consumed.
current input character
The last character to have been consumed.
reconsume the current input character
Push the current input character back onto the front of the input stream, so that the next time you are instructed to consume the next input character, it will instead reconsume the current input character.
EOF character
A conceptual character representing the end of the input stream. Whenever the input stream is empty, the next input character is always an EOF character.
digit
A character between U+0030 DIGIT ZERO (0) and U+0039 DIGIT NINE (9).
hex digit
A digit, or a character between U+0041 LATIN CAPITAL LETTER A (A) and U+0046 LATIN CAPITAL LETTER F (F), or a character between U+0061 LATIN SMALL LETTER A (a) and U+0066 LATIN SMALL LETTER F (f).
uppercase letter
A character between U+0041 LATIN CAPITAL LETTER A (A) and U+005A LATIN CAPITAL LETTER Z (Z).
lowercase letter
A character between U+0061 LATIN SMALL LETTER A (a) and U+007A LATIN SMALL LETTER Z (z).
letter
An uppercase letter or a lowercase letter.
non-ASCII character
A character with a codepoint equal to or greater than U+0080 <control>.
name-start character
A letter, a non-ASCII character, or U+005F LOW LINE (_).
name character
A name-start character, a digit, or U+002D HYPHEN-MINUS (-).
non-printable character
A character between U+0000 NULL and U+0008 BACKSPACE, or U+000B LINE TABULATION, or a character between U+000E SHIFT OUT and U+001F INFORMATION SEPARATOR ONE, or U+007F DELETE.
newline
U+000A LINE FEED. Note that U+000D CARRIAGE RETURN and U+000C FORM FEED are not included in this definition, as they are converted to U+000A LINE FEED during preprocessing.
whitespace
A newline, U+0009 CHARACTER TABULATION, or U+0020 SPACE.
maximum allowed codepoint
The greatest codepoint defined by Unicode. This is currently U+10FFFF.

4.3. Tokenizer Algorithms

The algorithms defined in this section transform a stream of characters into a stream of tokens.

4.3.1. Consume a token

This section describes how to consume a token from a stream of characters. It will return a single token of any type.

Consume the next input character.

whitespace
Consume as much whitespace as possible. Return a 〈whitespace〉.
U+0022 QUOTATION MARK (")
Consume a string token with the ending character U+0022 QUOTATION MARK (") and return it.
U+0023 NUMBER SIGN (#)
If the next input character is a name character or the next two input characters are a valid escape, then:
  1. Create a 〈hash〉.
  2. If the next 3 input characters would start an identifier, set the 〈hash〉’s type flag to "id".
  3. Consume a name, and set the 〈hash〉’s value to the returned string.
  4. Return the 〈hash〉.

Otherwise, return a 〈delim〉 with its value set to the current input character.

U+0024 DOLLAR SIGN ($)
If the next input character is U+003D EQUALS SIGN (=), consume it and return a 〈suffix-match〉.

Otherwise, return a 〈delim〉 with its value set to the current input character.

U+0027 APOSTROPHE (')
Consume a string token with the ending character U+0027 APOSTROPHE (') and return it.
U+0028 LEFT PARENTHESIS (()
Return a 〈(〉.
U+0029 RIGHT PARENTHESIS ())
Return a 〈)〉.
U+002A ASTERISK (*)
If the next input character is U+003D EQUALS SIGN (=), consume it and return a 〈substring-match〉.

Otherwise, return a 〈delim〉 with its value set to the current input character.

U+002B PLUS SIGN (+)
If the input stream starts with a number, reconsume the current input character, consume a numeric token and return it.

Otherwise, return a 〈delim〉 with its value set to the current input character.

U+002C COMMA (,)
Return a 〈comma〉.
U+002D HYPHEN-MINUS (-)
If the input stream starts with a number, reconsume the current input character, consume a numeric token, and return it.

Otherwise, if the input stream starts with an identifier, reconsume the current input character, consume an ident-like token, and return it.

Otherwise, if the next 2 input characters are U+002D HYPHEN-MINUS U+003E GREATER-THAN SIGN (->), consume them and return a 〈CDC〉.

Otherwise, return a 〈delim〉 with its value set to the current input character.

U+002E FULL STOP (.)
If the input stream starts with a number, reconsume the current input character, consume a numeric token, and return it.

Otherwise, return a 〈delim〉 with its value set to the current input character.

U+002F SOLIDUS (/)
If the next input character is U+002A ASTERISK (*), consume it and all following characters up to and including the first U+002A ASTERISK (*) followed by a U+002F SOLIDUS (/), or up to an EOF character. Then consume a token and return it.

Otherwise, return a 〈delim〉 with its value set to the current input character.

U+003A COLON (:)
Return a 〈colon〉.
U+003B SEMICOLON (;)
Return a 〈semicolon〉.
U+003C LESS-THAN SIGN (<)
If the next 3 input characters are U+0021 EXCLAMATION MARK U+002D HYPHEN-MINUS U+002D HYPHEN-MINUS (!--), consume them and return a 〈CDO〉.

Otherwise, return a 〈delim〉 with its value set to the current input character.

U+0040 COMMERCIAL AT (@)
If the next 3 input characters would start an identifier, consume a name, create an 〈at-keyword〉 with its value set to the returned value, and return it.

Otherwise, return a 〈delim〉 with its value set to the current input character.

U+005B LEFT SQUARE BRACKET ([)
Return a 〈[〉.
U+005C REVERSE SOLIDUS (\)
If the input stream starts with a valid escape, reconsume the current input character, consume an ident-like token, and return it.

Otherwise, this is a parse error. Return a 〈delim〉 with its value set to the current input character.

U+005D RIGHT SQUARE BRACKET (])
Return a 〈]〉.
U+005E CIRCUMFLEX ACCENT (^)
If the next input character is U+003D EQUALS SIGN (=), consume it and return a 〈prefix-match〉.

Otherwise, return a 〈delim〉 with its value set to the current input character.

U+007B LEFT CURLY BRACKET ({)
Return a 〈{〉.
U+007D RIGHT CURLY BRACKET (})
Return a 〈}〉.
digit
Consume a numeric token, and return it.
U+0055 LATIN CAPITAL LETTER U (U)
U+0075 LATIN SMALL LETTER U (u)
If the next 2 input characters are U+002B PLUS SIGN (+) followed by a hex digit or U+003F QUESTION MARK (?), consume the next input character. Note: don't consume both of them. Consume a unicode-range token and return it.

Otherwise, reconsume the current input character, consume an ident-like token, and return it.

name-start character
Reconsume the current input character, consume an ident-like token, and return it.
U+007C VERTICAL LINE (|)
If the next input character is U+003D EQUALS SIGN (=), consume it and return a 〈dash-match〉.

Otherwise, if the next input character is U+007C VERTICAL LINE (|), consume it and return a 〈column〉.

Otherwise, return a 〈delim〉 with its value set to the current input character.

U+007E TILDE (~)
If the next input character is U+003D EQUALS SIGN (=), consume it and return an 〈include-match〉.

Otherwise, return a 〈delim〉 with its value set to the current input character.

EOF
Return an 〈EOF〉 token.
anything else
Return a 〈delim〉 with its value set to the current input character.

4.3.2. Consume a numeric token

This section describes how to consume a numeric token from a stream of characters. It returns either a 〈number〉, 〈percentage〉, or 〈dimension〉.

Consume a number.

If the next 3 input characters would start an identifier, then:

  1. Create a 〈dimension〉 with the same representation, value, and type flag as the returned number, and a unit set initially to the empty string.
  2. Consume a name. Set the 〈dimension〉’s unit to the returned value.
  3. Return the 〈dimension〉.

Otherwise, if the next input character is U+0025 PERCENTAGE SIGN (%), consume it. Create a 〈percentage〉 with the same representation and value as the returned number, and return it.

Otherwise, create a 〈number〉 with the same representation, value, and type flag as the returned number, and return it.

4.3.3. Consume an ident-like token

This section describes how to consume an ident-like token from a stream of characters. It returns an 〈ident〉, 〈function〉, 〈url〉, or 〈bad-url〉.

Consume a name.

If the returned string's value is an ASCII case-insensitive match for "url", and the next input character is U+0028 LEFT PARENTHESIS ((), consume it. Consume a url token, and return it.

Otherwise, if the next input character is U+0028 LEFT PARENTHESIS ((), consume it. Create a 〈function〉 token with its value set to the returned string and return it.

Otherwise, create an 〈ident〉 token with its value set to the returned string and return it.

4.3.4. Consume a string token

This section describes how to consume a string token from a stream of characters. It returns either a 〈string〉 or 〈bad-string〉.

This algorithm must be called with an ending character, which denotes the character that ends the string.

Initially create a 〈string〉 with its value set to the empty string.

Repeatedly consume the next input character from the stream:

ending character
EOF
Return the 〈string〉.
newline
This is a parse error. Create a 〈bad-string〉 and return it.
U+005C REVERSE SOLIDUS (\)
If the stream starts with a valid escape, consume an escaped character and append the returned character to the 〈string〉’s value.

Otherwise, if the next input character is a newline, consume it.

Otherwise, this is a parse error. Create a 〈bad-string〉 and return it.

anything else
Append the current input character to the 〈string〉’s value.

4.3.5. Consume a url token

This section describes how to consume a url token from a stream of characters. It returns either a 〈url〉 or a 〈bad-url〉.

This algorithm assumes that the initial "url(" has already been consumed.

Execute the following steps in order:

  1. Initially create a 〈url〉 with its value set to the empty string.
  2. Consume as much whitespace as possible.
  3. If the next input character is EOF, create a 〈bad-url〉 and return it.
  4. If the next input character is a U+0022 QUOTATION MARK (") or U+0027 APOSTROPHE ('), then:
    1. Consume a string token with the current input character as the ending character.
    2. If a 〈bad-string〉 was returned, consume the remnants of a bad url, create a 〈bad-url〉, and return it.
    3. Set the 〈url〉’s value to the returned 〈string〉’s value.
    4. Consume as much whitespace as possible.
    5. If the next input character is U+0029 RIGHT PARENTHESIS ()) or EOF, consume it and return the 〈url〉; otherwise, consume the remnants of a bad url, create a 〈bad-url〉, and return it.
  5. Repeatedly consume the next input character from the stream:
    U+0029 RIGHT PARENTHESIS ())
    EOF
    Return the 〈url〉.
    whitespace
    Consume as much whitespace as possible. If the next input character is U+0029 RIGHT PARENTHESIS ()) or EOF, consume it and return the 〈url〉; otherwise, consume the remnants of a bad url, create a 〈bad-url〉, and return it.
    U+0022 QUOTATION MARK (")
    U+0027 APOSTROPHE (')
    U+0028 LEFT PARENTHESIS (()
    non-printable character
    This is a parse error. Consume the remnants of a bad url, create a 〈bad-url〉, and return it.
    U+005C REVERSE SOLIDUS
    If the stream starts with a valid escape, consume an escaped character and append the returned character to the 〈url〉’s value.

    Otherwise, this is a parse error. Consume the remnants of a bad url, create a 〈bad-url〉, and return it.

    anything else
    Append the current input character to the 〈url〉’s value.

    4.3.6. Consume a unicode-range token

    This section describes how to consume a unicode-range token. It returns a 〈unicode-range〉 token.

    This algorithm assumes that the initial "u+" has been consumed, and the next character verified to be a hex digit or a "?".

    Execute the following steps in order:

    1. Create a new 〈unicode-range〉 with an empty range.
    2. Consume as many hex digits as possible, but no more than 6. If less than 6 hex digits were consumed, consume as many U+003F QUESTION MARK (?) characters as possible, but no more than enough to make the total of hex digits and U+003F QUESTION MARK (?) characters equal to 6.

      If any U+003F QUESTION MARK (?) characters were consumed, then:

      1. Interpret the consumed characters as a hexadecimal number, with the U+003F QUESTION MARK (?) characters replaced by U+0030 DIGIT ZERO (0) characters. This is the start of the range.
      2. Interpret the consumed characters as a hexadecimal number again, with the U+003F QUESTION MARK (?) character replaced by U+0046 LATIN CAPITAL LETTER F (F) characters. This is the end of the range.
      3. Set the 〈unicode-range〉’s range, then return it.

      Otherwise, interpret the digits as a hexadecimal number. This is the start of the range.

    3. If the next 2 input characters are U+002D HYPHEN-MINUS (-) followed by a hex digit, then:
      1. Consume the next input character.
      2. Consume as many hex digits as possible, but no more than 6. Interpret the digits as a hexadecimal number. This is the end of the range. Set the 〈unicode-range〉’s range, then return it.
    4. Set the 〈unicode-range〉’s range and return it.
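
    For example, given the input "U+4??", the initial "u+" has already been consumed when this algorithm is called; it then consumes the hex digit "4" and the two U+003F QUESTION MARK (?) characters, producing a range whose start is 0x400 and whose end is 0x4FF. Given "U+0025-00FF", it consumes "0025" as the start and then "-00FF", producing an end of 0xFF.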

      4.3.7. Consume an escaped character

      This section describes how to consume an escaped character. It assumes that the U+005C REVERSE SOLIDUS (\) has already been consumed and that the next input character has already been verified to not be a newline or EOF. It will return a character.

      Consume the next input character.

      hex digit
      Consume as many hex digits as possible, but no more than 5. Note that this means 1-6 hex digits have been consumed in total. If the next input character is whitespace, consume it as well. Interpret the hex digits as a hexadecimal number. If this number is zero, or is greater than the maximum allowed codepoint, return U+FFFD REPLACEMENT CHARACTER (�). Otherwise, return the character with that codepoint.
      anything else
      Return the current input character.

      4.3.8. Check if two characters are a valid escape

      This section describes how to check if two characters are a valid escape. The algorithm described here can be called explicitly with two characters, or can be called with the input stream itself. In the latter case, the two characters in question are the current input character and the next input character, in that order.

      This algorithm will not consume any additional characters.

      If the first character is not U+005C REVERSE SOLIDUS (\), return false.

      Otherwise, if the second character is a newline or EOF character, return false.

      Otherwise, return true.

      4.3.9. Check if three characters would start an identifier

      This section describes how to check if three characters would start an identifier. The algorithm described here can be called explicitly with three characters, or can be called with the input stream itself. In the latter case, the three characters in question are the current input character and the next two input characters, in that order.

      This algorithm will not consume any additional characters.

      Look at the first character:

      U+002D HYPHEN-MINUS
      If the second character is a name-start character or the second and third characters are a valid escape, return true. Otherwise, return false.
      name-start character
      Return true.
      U+005C REVERSE SOLIDUS (\)
      If the first and second characters are a valid escape, return true. Otherwise, return false.
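
      A rough, non-normative Python sketch of the checks in sections 4.3.8 and 4.3.9, using the empty string to stand in for EOF:

        def is_valid_escape(first, second):
            # Section 4.3.8: a backslash not followed by a newline or EOF.
            return first == '\\' and second not in ('\n', '')

        def is_name_start(c):
            # A letter, a non-ASCII character, or U+005F LOW LINE (_).
            return ('a' <= c <= 'z') or ('A' <= c <= 'Z') or c == '_' or c >= '\u0080'

        def would_start_identifier(first, second, third):
            # Section 4.3.9.
            if first == '-':
                return is_name_start(second) or is_valid_escape(second, third)
            if is_name_start(first):
                return True
            return is_valid_escape(first, second)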

      4.3.10. Check if three characters would start a number

      This section describes how to check if three characters would start a number. The algorithm described here can be called explicitly with three characters, or can be called with the input stream itself. In the latter case, the three characters in question are the current input character and the next two input characters, in that order.

      This algorithm will not consume any additional characters.

      Look at the first character:

      U+002B PLUS SIGN (+)
      U+002D HYPHEN-MINUS (-)
      If the second character is a digit, return true.

      Otherwise, if the second character is a U+002E FULL STOP (.) and the third character is a digit, return true.

      Otherwise, return false.

      U+002E FULL STOP (.)
      If the second character is a digit, return true. Otherwise, return false.
      digit
      Return true.
      anything else
      Return false.
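
      Similarly, a non-normative Python sketch of this check, again with the empty string standing in for EOF:

        def is_digit(c):
            # U+0030 DIGIT ZERO (0) through U+0039 DIGIT NINE (9) only.
            return '0' <= c <= '9'

        def would_start_number(first, second, third):
            # Section 4.3.10.
            if first in ('+', '-'):
                return is_digit(second) or (second == '.' and is_digit(third))
            if first == '.':
                return is_digit(second)
            return is_digit(first)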

      4.3.11. Consume a name

      This section describes how to consume a name from a stream of characters. It returns a string containing the largest name that can be formed from adjacent characters in the stream, starting from the first.

      This algorithm does not do the verification of the first few characters that are necessary to ensure the returned characters would constitute an 〈ident〉. If that is the intended use, ensure that the stream starts with an identifier before calling this algorithm.

      Let result initially be an empty string.

      Repeatedly consume the next input character from the stream:

      name character
      Append the character to result.
      the stream starts with a valid escape
      Consume an escaped character. Append the returned character to result.
      anything else
      Return result.

      4.3.12. Consume a number

      This section describes how to consume a number from a stream of characters. It returns a 3-tuple of a string representation, a numeric value, and a type flag which is either "integer" or "number".

      This algorithm does not do the verification of the first few characters that are necessary to ensure a number can be obtained from the stream. Ensure that the stream starts with a number before calling this algorithm.

      Execute the following steps in order:

      1. Initially set repr to the empty string and type to "integer".
      2. If the next input character is U+002B PLUS SIGN (+) or U+002D HYPHEN-MINUS (-), consume it and append it to repr.
      3. While the next input character is a digit, consume it and append it to repr.
      4. If the next 2 input characters are U+002E FULL STOP (.) followed by a digit, then:
        1. Consume them.
        2. Append them to repr.
        3. Set type to "number".
        4. While the next input character is a digit, consume it and append it to repr.
      5. If the next 2 input characters are U+0045 LATIN CAPITAL LETTER E (E) or U+0065 LATIN SMALL LETTER E (e) followed by a digit, then:
        1. Consume them.
        2. Append them to repr.
        3. Set type to "number".
        4. While the next input character is a digit, consume it and append it to repr.
      6. Convert repr to a number, and set the value to the returned value.
      7. Return a 3-tuple of repr, value, and type.

      4.3.13. Convert a string to a number

      This section describes how to convert a string to a number. It returns a number.

      This algorithm does not do any verification to ensure that the string contains only a number. Ensure that the string contains only a valid CSS number before calling this algorithm.

      Divide the string into seven components, in order from left to right:

      1. A sign: a single U+002B PLUS SIGN (+) or U+002D HYPHEN-MINUS (-), or the empty string. Let s be the number -1 if the sign is U+002D HYPHEN-MINUS (-); otherwise, let s be the number 1.
      2. An integer part: zero or more digits. If there is at least one digit, let i be the number formed by interpreting the digits as a base-10 integer; otherwise, let i be the number 0.
      3. A decimal point: a single U+002E FULL STOP (.), or the empty string.
      4. A fractional part: zero or more digits. If there is at least one digit, let f be the number formed by interpreting the digits as a base-10 integer and d be the number of digits; otherwise, let f and d be the number 0.
      5. An exponent indicator: a single U+0045 LATIN CAPITAL LETTER E (E) or U+0065 LATIN SMALL LETTER E (e), or the empty string.
      6. An exponent sign: a single U+002B PLUS SIGN (+) or U+002D HYPHEN-MINUS (-), or the empty string. Let t be the number -1 if the sign is U+002D HYPHEN-MINUS (-); otherwise, let t be the number 1.
      7. An exponent: zero or more digits. If there is at least one digit, let e be the number formed by interpreting the digits as a base-10 integer; otherwise, let e be the number 0.

      Return the number s·(i + f·10^(-d))·10^(t·e).
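
      A non-normative Python sketch of this conversion, assuming the input has already been verified to contain only a valid CSS number:

        import re

        def convert_string_to_number(repr_str):
            # The seven components: sign, integer part, decimal point, fractional
            # part, exponent indicator, exponent sign, exponent.
            m = re.match(r'([+-]?)(\d*)(\.?)(\d*)([eE]?)([+-]?)(\d*)$', repr_str)
            sign, integer, _, fraction, _, exp_sign, exponent = m.groups()
            s = -1 if sign == '-' else 1
            i = int(integer) if integer else 0
            f = int(fraction) if fraction else 0
            d = len(fraction)
            t = -1 if exp_sign == '-' else 1
            e = int(exponent) if exponent else 0
            return s * (i + f * 10**-d) * 10**(t * e)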

      4.3.14. Consume the remnants of a bad url

      This section describes how to consume the remnants of a bad url from a stream of characters, "cleaning up" after the tokenizer realizes that it's in the middle of a 〈bad-url〉 rather than a 〈url〉. It returns nothing; its sole use is to consume enough of the input stream to reach a recovery point where normal tokenizing can resume.

      Repeatedly consume the next input character from the stream:

      U+0029 RIGHT PARENTHESIS ())
      EOF
      Return.
      the input stream starts with a valid escape
      Consume an escaped character. This allows an escaped right parenthesis ("\)") to be encountered without ending the 〈bad-url〉. This is otherwise identical to the "anything else" clause.
      anything else
      Do nothing.

      4.3.15. Set the 〈unicode-range〉’s range

      This section describes how to set a 〈unicode-range〉’s range so that the range it describes is within the supported range of unicode characters.

      It assumes that the start of the range has been defined, the end of the range might be defined, and both are non-negative integers.

      If the start of the range is greater than the maximum allowed codepoint, the 〈unicode-range〉’s range is empty.

      If the end of the range is defined, and it is less than the start of the range, the 〈unicode-range〉’s range is empty.

      If the end of the range is not defined, the 〈unicode-range〉’s range is the single character whose codepoint is the start of the range.

      Otherwise, if the end of the range is greater than the maximum allowed codepoint, change it to the maximum allowed codepoint. The 〈unicode-range〉’s range is all characters between the character whose codepoint is the start of the range and the character whose codepoint is the end of the range.

      5. Parsing

      The input to the parsing stage is a stream or list of tokens from the tokenization stage. The output depends on how the parser is invoked, as defined by the entry points listed later in this section. The parser output can consist of at-rules, qualified rules, and/or declarations.

      The parser's output is constructed according to the fundamental syntax of CSS, without regards for the validity of any specific item. Implementations may check the validity of items as they are returned by the various parser algorithms and treat the algorithm as returning nothing if the item was invalid according to the implementation's own grammar knowledge, or may construct a full tree as specified and "clean up" afterwards by removing any invalid items.

      The items that can appear in the tree are:

      at-rule
      An at-rule has a name, a prelude consisting of a list of component values, and an optional block consisting of a simple {} block.

      This specification places no limits on what an at-rule's block may contain. Individual at-rules must define whether they accept a block, and if so, how to parse it (preferably using one of the parser algorithms or entry points defined in this specification).

      qualified rule
      A qualified rule has a prelude consisting of a list of component values, and a value consisting of a list of at-rules or declarations.

      Most qualified rules will be style rules, where the prelude is a selector.

      declaration
      A declaration has a name, a value consisting of a list of component values, and an important flag which is initially unset.

      Should we go ahead and generalize the important flag to be a list of bang values? Suggested by Zack Weinberg.

      Declarations are further categorized as "properties" or "descriptors", with the former typically appearing in qualified rules and the latter appearing in at-rules. (This categorization does not occur at the Syntax level; instead, it is a product of where the declaration appears, and is defined by the respective specifications defining the given rule.)

      component value
      A component value is one of the preserved tokens, a function, or a simple block.
      preserved tokens
      Any token produced by the tokenizer except for 〈function〉s, 〈{〉s, 〈(〉s, and 〈[〉s.

      The non-preserved tokens listed above are always consumed into higher-level objects, either functions or simple blocks, and so never appear in any parser output themselves.

      The tokens 〈}〉s, 〈)〉s, 〈]〉s, 〈bad-string〉, and 〈bad-url〉 are always parse errors, but they are preserved in the token stream by this specification to allow other specs, such as Media Queries, to define more fine-grained error-handling than just dropping an entire declaration or block.

      function
      A function has a name and a value consisting of a list of component values.
      simple block
      A simple block has an associated token (either a 〈[〉, 〈(〉, or 〈{〉) and a value consisting of a list of component values.
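
      For example, with these definitions, the style rule from section 2 is represented as a qualified rule: its prelude is the list of component values produced by "p > a" (an 〈ident〉, 〈whitespace〉, a 〈delim〉 with the value ">", 〈whitespace〉, another 〈ident〉, and a trailing 〈whitespace〉), and its value is a list of two declarations, ‘color’ with a value containing the 〈ident "blue"〉 and ‘text-decoration’ with a value containing the 〈ident "underline"〉 (whitespace tokens aside).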

      5.1. Parser Railroad Diagrams

      This section is non-normative.

      This section presents an informative view of the parser, in the form of railroad diagrams. Railroad diagrams are more compact than a state-machine, but often easier to read than a regular expression.

      These diagrams are informative and incomplete; they describe the grammar of "correct" stylesheets, but do not describe error-handling at all. They are provided solely to make it easier to get an intuitive grasp of the syntax.

      [Railroad diagrams are not reproduced in this text rendering. Diagrams are defined for the following productions: Stylesheet, Rule list, At-rule, Qualified rule, Declaration list, Declaration, !important, ws*, Component value, {} block, () block, [] block, and Function block.]

      5.2. Definitions

      current input token
      The token or component value currently being operated on, from the list of tokens produced by the tokenizer.
      next input token
      The token or component value following the current input token in the list of tokens produced by the tokenizer. If there isn't a token following the current input token, the next input token is an 〈EOF〉.
      〈EOF〉
      A conceptual token representing the end of the list of tokens. Whenever the list of tokens is empty, the next input token is always an 〈EOF〉.
      reconsume the current input token
      Push the current input token back onto the list of tokens produced by the tokenizer, so that the next time a mode instructs you to consume the next input token, it will instead reconsume the current input token.
      ASCII case-insensitive
      When two strings are to be matched ASCII case-insensitively, temporarily convert both of them to ASCII lower-case form by adding 32 (0x20) to the value of each codepoint between U+0041 LATIN CAPITAL LETTER A (A) and U+005A LATIN CAPITAL LETTER Z (Z), inclusive, and then compare them on a codepoint-by-codepoint basis.
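
      A non-normative Python sketch of this comparison; note that only the ASCII uppercase letters are folded, so "URL" matches "url" but non-ASCII characters are compared exactly:

        def ascii_lowercase(s):
            # Add 0x20 only to codepoints in the range U+0041..U+005A.
            return ''.join(chr(ord(c) + 0x20) if 'A' <= c <= 'Z' else c for c in s)

        def ascii_case_insensitive_match(a, b):
            return ascii_lowercase(a) == ascii_lowercase(b)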

      5.3. Parser Entry Points

      The algorithms defined in this specification can be invoked in multiple ways to convert a stream of text into various CSS concepts.

      All of the algorithms defined in this section begin in the parser. It is assumed that the input preprocessing and tokenization steps have already been completed, resulting in a stream of tokens.

      Other specs can define additional entry points for their own purposes.

      The following notes should probably be translated into normative text in the relevant specs, hooking this spec's terms:

      • "Parse a stylesheet" is intended to be the normal parser entry point, for parsing stylesheets.
      • "Parse a rule" is intended for use by the CSSStyleSheet#insertRule method, and similar functions which might exist, which parse text into a single rule.
      • "Parse a list of declarations" is for the contents of a style attribute, which parses text into the contents of a single style rule.
      • "Parse a component value" is for things that need to consume a single value, like the parsing rules for ‘attr()’.
      • "Parse a list of component values" is for the contents of presentational attributes, which parse text into a single declaration's value.

      All of the algorithms defined in this spec may be called with either a list of tokens or of component values. Either way produces an identical result.

      5.3.1. Parse a stylesheet

      To parse a stylesheet from a stream of tokens:

      1. Create a new stylesheet.
      2. Consume a list of rules from the stream of tokens, with the top-level flag set.
      3. Assign the returned value to the stylesheet's value.
      4. Return the stylesheet.

      5.3.2. Parse a rule

      To parse a rule from a stream of tokens:

      1. Consume 〈whitespace〉s from the token stream until a non-〈whitespace〉 is encountered.
      2. If the current input token is a 〈CDO〉, 〈CDC〉, or 〈EOF〉, return a syntax error.

        Otherwise, if the current input token is an 〈at-keyword〉, consume an at-rule.

        Otherwise, consume a qualified rule. If nothing was returned, return a syntax error.

      3. Consume 〈whitespace〉s from the token stream until a non-〈whitespace〉 is encountered.
      4. If the current input token is an 〈EOF〉, return the rule obtained in step 2. Otherwise, return a syntax error.

      5.3.3. Parse a list of declarations

      To parse a list of declarations:

      1. Consume a list of declarations. If anything was returned, return it.

      5.3.4. Parse a component value

      To parse a component value:

      1. Discard 〈whitespace〉s from the token stream until a non-〈whitespace〉 is reached. If the token stream is exhausted without finding a non-〈whitespace〉, return a syntax error.
      2. Consume a component value. If nothing is returned, return a syntax error.
      3. Discard 〈whitespace〉s from the token stream until a non-〈whitespace〉 is reached. If the token stream is exhausted without finding a non-〈whitespace〉, return the value found in the previous step. Otherwise, return a syntax error.

      5.3.5. Parse a list of component values

      To parse a list of component values:

      1. Repeatedly consume a component value until an 〈EOF〉 is returned, appending the returned values into a list. Return the list.

      5.4. Parser Algorithms

      The following algorithms comprise the parser. They are called by the parser entry points above.

      These algorithms may be called with a list of either tokens or of component values. (The difference being that some tokens are replaced by functions and simple blocks in a list of component values.) Similar to how the input stream returned EOF characters to represent when it was empty during the tokenization stage, the lists in this stage must return an 〈EOF〉 when the next token is requested but they are empty.

      An algorithm may be invoked with a specific list, in which case it consumes only that list (and when that list is exhausted, it begins returning 〈EOF〉s). Otherwise, it is implicitly invoked with the same list as the invoking algorithm.

      5.4.1. Consume a list of rules

      Create an initially empty list of rules.

      Repeatedly consume the next input token:

      〈whitespace〉
      Do nothing.
      〈EOF〉
      Return the list of rules.
      〈CDO〉
      〈CDC〉
      If the top-level flag is set, do nothing.

      Otherwise, reconsume the current input token. Consume a qualified rule. If anything is returned, append it to the list of rules.

      〈at-keyword〉
      Reconsume the current input token. Consume an at-rule. If anything is returned, append it to the list of rules.
      anything else
      Reconsume the current input token. Consume a qualified rule. If anything is returned, append it to the list of rules.

      5.4.2. Consume an at-rule

      Create a new at-rule with its name set to the value of the current input token, its prelude initially set to an empty list, and its value initially set to nothing.

      Repeatedly consume the next input token:

      〈semicolon〉
      〈EOF〉
      Return the at-rule.
      〈{〉
      Consume a simple block and assign it to the at-rule's block. Return the at-rule.
      simple block with an associated token of 〈{〉
      Assign the block to the at-rule's block. Return the at-rule.
      anything else
      Consume a component value. Append the returned value to the at-rule's prelude.

      5.4.3. Consume a qualified rule

      Create a new qualified rule with its prelude initially set to an empty list, and its value initially set to nothing.

      Repeatedly consume the next input token:

      〈EOF〉
      This is a parse error. Return nothing.
      〈{〉
      Consume a simple block. Consume a list of declarations from the block's value. If anything was returned, assign it to the qualified rule's value. Return the qualified rule.
      simple block with an associated token of 〈{〉
      Consume a list of declarations from the block's value. If anything was returned, assign it to the qualified rule's value. Return the qualified rule.
      anything else
      Consume a component value. Append the returned value to the qualified rule's prelude.

      5.4.4. Consume a list of declarations

      Create an initially empty list of declarations.

      Repeatedly consume the next input token:

      〈whitespace〉
      〈semicolon〉
      Do nothing.
      〈EOF〉
      Return the list of declarations.
      〈at-keyword〉
      Consume an at-rule. Append the returned rule to the list of declarations.
      〈ident〉
      Initialize a temporary list initially filled with the current input token. Repeatedly consume a component value from the next input token until a 〈semicolon〉 or 〈EOF〉 is returned, appending all of the returned values up to that point to the temporary list. Consume a declaration from the temporary list. If anything was returned, append it to the list of declarations.
      anything else
      This is a parse error. Repeatedly consume a component value from the next input token until it is a 〈semicolon〉 or 〈EOF〉.

      5.4.5. Consume a declaration

      Create a new declaration with its name set to the value of the current input token.

      Repeatedly consume 〈whitespace〉s until a non-〈whitespace〉 is reached. If this token is anything but a 〈colon〉, this is a parse error. Return nothing.

      Otherwise, repeatedly consume a component value from the next input token until an 〈EOF〉 is reached, appending all of the returned values up to that point to the declaration's value.

      If the last two non-〈whitespace〉s in the declaration's value are a 〈delim〉 with the value "!" followed by an 〈ident〉 with a value that is an ASCII case-insensitive match for "important", remove them from the declaration's value and set the declaration's important flag to true.

      Return the declaration.
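
      For example, consuming a declaration from the token list for "color: red !important" produces a declaration named "color" whose value contains the 〈ident "red"〉 (whitespace tokens aside) and whose important flag is set; the 〈delim〉 "!" and the 〈ident〉 "important" are removed from the value.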

      5.4.6. Consume a component value

      This section describes how to consume a component value.

      If the current input token is a 〈{〉, 〈[〉, or 〈(〉, consume a simple block and return it.

      Otherwise, if the current input token is a 〈function〉, consume a function and return it.

      Otherwise, return the current input token.

      5.4.7. Consume a simple block

      This section describes how to consume a simple block.

      The ending token is the mirror variant of the current input token. (E.g. if it was called with 〈[〉, the ending token is 〈]〉.)

      Create a simple block with its associated token set to the current input token.

      Repeatedly consume the next input token and process it as follows:

      〈EOF〉
      ending token
      Return the block.
      anything else
      Consume a component value and append it to the value of the block.

      5.4.8. Consume a function

      This section describes how to consume a function.

      Create a function with a name equal to the value of the current input token, and with a value which is initially an empty list.

      Repeatedly consume the next input token and process it as follows:

      〈EOF〉
      〈)〉
      Return the function.
      anything else
      Consume a component value and append the returned value to the function's value.

      6. The An+B microsyntax

      Several things in CSS, such as the ‘:nth-child()’ pseudoclass, need to indicate indexes in a list. The An+B microsyntax is useful for this, allowing an author to easily indicate single elements or all elements at regularly-spaced intervals in a list.

      The An+B notation defines an integer step (A) and offset (B), and represents the An+Bth elements in a list, for every positive integer or zero value of n, with the first element in the list having index 1 (not 0).

      For values of A and B greater than 0, this effectively divides the list into groups of A elements (the last group taking the remainder), and selects the Bth element of each group.

      The An+B notation also accepts the ‘even’ and ‘odd’ keywords, which have the same meaning as ‘2n’ and ‘2n+1’, respectively.

      Examples:

      2n+0   /* represents all of the even elements in the list */
      even   /* same */
      4n+1   /* represents the 1st, 5th, 9th, 13th, etc. elements in the list */

      The values of A and B can be negative, but only the positive results of An+B, for n ≥ 0, are used.

      Example:

      -n+6   /* represents the first 6 elements of the list */

      If both A and B are 0, the pseudo-class represents no element in the list.

      6.1. Informal Syntax Description

      This section is non-normative.

      When A is 0, the An part may be omitted (unless the B part is already omitted). When An is not included and B is non-negative, the ‘+’ sign before B (when allowed) may also be omitted. In this case the syntax simplifies to just B.

      Examples:

      0n+5   /* represents the 5th element in the list */
      5      /* same */

      When A is 1 or -1, the 1 may be omitted from the rule.

      Examples:

      The following notations are therefore equivalent:

      1n+0   /* represents all elements in the list */
      n+0    /* same */
      n      /* same */

      If B is 0, then every Ath element is picked. In such a case, the +B (or -B) part may be omitted unless the A part is already omitted.

      Examples:

      2n+0   /* represents every even element in the list */
      2n     /* same */

      Whitespace is permitted on either side of the ‘+’ or ‘-’ that separates the An and B parts when both are present.

      Valid Examples with white space:

      3n + 1
      +3n - 2
      -n+ 6
      +6

      Invalid Examples with white space:

      3 n
      + 2n
      + 2

      6.2. The <an+b> type

      The An+B notation was originally defined using a slightly different tokenizer than the rest of CSS, resulting in a somewhat odd definition when expressed in terms of CSS tokens. This section describes how to recognize the An+B notation in terms of CSS tokens (thus defining the <an+b> type for CSS grammar purposes), and how to interpret the CSS tokens to obtain values for A and B.

      The <an+b> type is defined (using the Value Definition Syntax in the Values & Units spec) as:

      <an+b> =
        odd | even |
        <integer> |
      
        <n-dimension> |
        '+'? n |
        -n |
      
        <ndashdigit-dimension> |
        '+'? <ndashdigit-ident> |
        <dashndashdigit-ident> |
      
        <n-dimension> <signed-integer> |
        '+'? n <signed-integer> |
        -n <signed-integer> |
      
        <n-dimension> ['+' | '-'] <signless-integer> |
        '+'? n ['+' | '-'] <signless-integer> |
        -n ['+' | '-'] <signless-integer>

      where:

      • <n-dimension> is a 〈dimension〉 with its type flag set to "integer", and a unit that is an ASCII case-insensitive match for "n"
      • <ndashdigit-dimension> is a 〈dimension〉 with its type flag set to "integer", and a unit that is an ASCII case-insensitive match for "n-*", where "*" is a series of one or more digits
      • <ndashdigit-ident> is an 〈ident〉 whose representation is an ASCII case-insensitive match for "n-*", where "*" is a series of one or more digits
      • <dashndashdigit-ident> is an 〈ident〉 whose representation is an ASCII case-insensitive match for "-n-*", where "*" is a series of one or more digits
      • <integer> is a 〈number〉 with its type flag set to "integer"
      • <signed-integer> is a 〈number〉 with its type flag set to "integer", and whose representation starts with "+" or "-"
      • <signless-integer> is a 〈number〉 with its type flag set to "integer", and whose representation starts with a digit

      The clauses of the production are interpreted as follows:

      odd
      A is 2, B is 1.
      even
      A is 2, B is 0.
      <integer>
      A is 0, B is the integer.
      <n-dimension>
      '+'? n
      -n
      A is the dimension's value, 1, or -1, respectively. B is 0.
      <ndashdigit-dimension>
      '+'? <ndashdigit-ident>
      <dashndashdigit-ident>
      A is the dimension's value, 1, or -1, respectively. B is the dimension's unit or ident's representation, as appropriate, with the first two characters removed and the remainder interpreted as a base-10 number.
      <n-dimension> <signed-integer>
      '+'? n <signed-integer>
      -n <signed-integer>
      A is the dimension's value, 1, or -1, respectively. B is the integer.
      <n-dimension> ['+' | '-'] <signless-integer>
      '+'? n ['+' | '-'] <signless-integer>
      -n ['+' | '-'] <signless-integer>
      A is the dimension's value, 1, or -1, respectively. B is the integer. If a ‘-’ was provided between the two, B is instead the negation of the integer.
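
      For example, "3n+1" tokenizes as an <n-dimension> (value 3, unit "n") followed by the <signed-integer> "+1", matching the <n-dimension> <signed-integer> clause with A=3 and B=1. With whitespace, "3n + 1" instead tokenizes as the same <n-dimension>, a '+' 〈delim〉, and the <signless-integer> "1", matching the <n-dimension> ['+' | '-'] <signless-integer> clause with the same result.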

      7. Defining Grammars for Rules and Other Values

      The Values spec defines how to specify a grammar for properties. This section does the same, but for rules.

      Just like in property grammars, the notation <foo> refers to the "foo" grammar term, assumed to be defined elsewhere. Substituting the <foo> for its definition results in a semantically identical grammar.

      Several types of tokens are written literally, without quotes:

      • 〈ident〉s (such as ‘auto’, ‘disc’, etc.)
      • 〈at-keyword〉s, which are written as an @ character followed by the token's name, like "@media".
      • 〈function〉s, which are written as the function name followed by a ( character, like "translate(".
      • The 〈colon〉 (written as :), 〈comma〉 (written as ,), 〈semicolon〉 (written as ;), 〈(〉, 〈)〉, 〈{〉, and 〈}〉s.

      〈delim〉s are written with their value enclosed in single quotes. For example, a 〈delim〉 containing the "+" character is written as '+'. Similarly, the 〈[〉 and 〈]〉s must be written in single quotes, as they're used by the syntax of the grammar itself to group clauses. 〈whitespace〉 is never indicated in the grammar; 〈whitespace〉s are allowed before, after, and between any two tokens, unless explicitly specified otherwise in prose definitions. (For example, if the prelude of a rule is a selector, whitespace is significant.)

      When defining a function or a block, the ending token must be specified in the grammar, but if it's not present in the eventual token stream, it still matches.

      For example, the syntax of the ‘translateX()’ function is:
      translateX( <translation-value> )

      However, the stylesheet may end with the function unclosed, like:

      .foo { transform: translate(50px

      The CSS parser parses this as a style rule containing one declaration, whose value is a function named "translate". This matches the above grammar, even though the ending token didn't appear in the token stream, because by the time the parser is finished, the presence of the ending token is no longer possible to determine; all you have is the fact that there's a block and a function.

      7.1. Defining Block Contents: the <declaration-list>, <rule-list>, and <stylesheet> productions

      The CSS parser is agnostic as to the contents of blocks, such as those that come at the end of some at-rules. Defining the generic grammar of the blocks in terms of tokens is non-trivial, but there are dedicated and unambiguous algorithms defined for parsing this.

      The <declaration-list> production represents a list of declarations. It may only be used in grammars as the sole value in a block, and represents that the contents of the block must be parsed using the consume a list of declarations algorithm.

      Similarly, the <rule-list> production represents a list of rules, and may only be used in grammars as the sole value in a block. It represents that the contents of the block must be parsed using the consume a list of rules algorithm.

      Finally, the <stylesheet> production represents a list of rules. It is identical to <rule-list>, except that blocks using it default to accepting all rules that aren't otherwise limited to a particular context.

      For example, the ‘@font-face’ rule is defined to have an empty prelude, and to contain a list of declarations. This is expressed with the following grammar:
      @font-face { <declaration-list> }

      This is a complete and sufficient definition of the rule's grammar.

      For another example, ‘@keyframes’ rules are more complex, interpreting their prelude as a name and containing keyframe rules in their block. Their grammar is:

      @keyframes <keyframes-name> { <rule-list> }

      For rules that use <declaration-list>, the spec for the rule must define which properties, descriptors, and/or at-rules are valid inside the rule; this may be as simple as saying "The @foo rule accepts the properties/descriptors defined in this specification/section.", and extension specs may simply say "The @foo rule additionally accepts the following properties/descriptors.". Any declarations or at-rules found inside the block that are not defined as valid must be removed from the rule's value.

      Within a <declaration-list>, !important is automatically invalid on any descriptors. If the rule accepts properties, the spec for the rule must define whether the properties interact with the cascade, and with what specificity. If they don't interact with the cascade, properties containing !important are automatically invalid; otherwise using !important is valid and has its usual effect on the cascade origin of the property.

      For example, the grammar for ‘@font-face’ in the previous example must, in addition to what is written there, define that the allowed declarations are the descriptors defined in the Fonts spec.
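
      To illustrate (the specific declarations are examples only; the descriptor names come from the Fonts spec), given a rule such as:

      @font-face {
        font-family: "Example Font";
        src: url(example-font.woff);
        color: purple;
        font-weight: bold !important;
      }

      the ‘color’ declaration is not a descriptor defined for ‘@font-face’, and ‘!important’ is automatically invalid on descriptors, so both the ‘color’ and ‘font-weight’ declarations are removed from the rule's value; the ‘font-family’ and ‘src’ descriptors are unaffected.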

      For rules that use <rule-list>, the spec for the rule must define what types of rules are valid inside the rule, same as <declaration-list>, and unrecognized rules must similarly be removed from the rule's value.

      For example, the grammar for ‘@keyframes’ in the previous example must, in addition to what is written there, define that the only allowed rules are <keyframe-rule>s, which are defined as:
      <keyframe-rule> = <keyframe-selector> { <declaration-list> }

      Keyframe rules, then, must further define that they accept as declarations all animatable CSS properties, plus the ‘animation-timing-function’ property, but that they do not interact with the cascade.
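
      For example, the following rule (using a made-up animation name) contains two keyframe rules, whose selectors are ‘from’ and ‘to’ and whose blocks each contain a single declaration for the animatable ‘opacity’ property:

      @keyframes fade {
        from { opacity: 0; }
        to { opacity: 1; }
      }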

      For rules that use <stylesheet>, all rules are allowed by default, but the spec for the rule may define what types of rules are invalid inside the rule.

      For example, the ‘@media’ rule accepts anything that can be placed in a stylesheet, except more ‘@media’ rules. As such, its grammar is:
      @media <media-query-list> { <stylesheet> }

      It additionally defines a restriction that the <stylesheet> cannot contain ‘@media’ rules, which causes them to be dropped from the outer rule's value if they appear.
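
      For example, in the following fragment the inner ‘@media’ rule is dropped from the outer rule's value, while the ‘body’ style rule remains valid:

      @media print {
        body { font-size: 10pt; }
        @media (monochrome) { body { color: black; } }
      }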

      8. Serialization

      This specification does not define how to serialize CSS in general, leaving that task to the CSSOM and individual feature specifications. However, there is one important facet that must be specified here regarding comments, to ensure accurate "round-tripping" of data from text to CSS objects and back.

      The tokenizer described in this specification does not produce tokens for comments, or otherwise preserve them in any way. Implementations may preserve the contents of comments and their location in the token stream. If they do, this preserved information must have no effect on the parsing step, but must be serialized in its position as "/*" followed by its contents followed by "*/".

      If the implementation does not preserve comments, it must insert the text "/**/" between the serialization of adjacent tokens when the two tokens are of the following pairs:

      • a 〈hash〉 or 〈at-keyword〉 followed by a 〈number〉, 〈percentage〉, 〈ident〉, 〈dimension〉, 〈unicode-range〉, 〈url〉, or a 〈function〉 token;
      • 〈number〉s, 〈ident〉s, and 〈dimension〉s in any combination;
      • a 〈number〉, 〈ident〉, or 〈dimension〉 followed by a 〈percentage〉, 〈unicode-range〉, 〈url〉, or 〈function〉;
      • an 〈ident〉 followed by a 〈(〉;
      • a 〈delim〉 containing "#" or "@" followed by any token except 〈whitespace〉;
      • a 〈delim〉 containing "-", "+", ".", "<", ">", or "!" following or followed by any token except 〈whitespace〉;
      • a 〈delim〉 containing "/" following or followed by a 〈delim〉 containing "*".

      The preceding pairs of tokens can only be adjacent due to comments in the original text, so the above rule reinserts the minimum number of comments into the serialized text to ensure an accurate round-trip. (Roughly. The 〈delim〉 rules are slightly too powerful, for simplicity.)
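
      For example, if the original text was ‘10px/* gap */auto’ and the implementation does not preserve comments, it must serialize the now-adjacent 〈dimension〉 and 〈ident〉 as ‘10px/**/auto’; serializing them as ‘10pxauto’ would re-tokenize as a single 〈dimension〉 with the unit "pxauto" and so would not round-trip.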

      8.1. Serializing <an+b>

      To serialize an <an+b> value, let s initially be the empty string:

      A and B are both zero
      Append "0" to s.
      A is zero, B is non-zero
      Serialize B and append it to s.
      A is non-zero, B is zero
      Serialize A and append it to s. Append "n" to s.
      A and B are both non-zero
      Serialize A and append it to s. Append "n" to s. If B is positive, append "+" to s Serialize B and append it to s.

      Return s.
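
      For example, this algorithm serializes A=2, B=1 as "2n+1"; A=0, B=4 as "4"; A=3, B=-6 as "3n-6"; A=-1, B=0 as "-1n"; and A=0, B=0 as "0".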

      9. Changes from CSS 2.1 and Selectors Level 3

      This section is non-normative.

      Note that the point of this spec is to match reality; changes from CSS 2.1 are nearly always because CSS 2.1 specified something that doesn't match actual browser behavior, or left something unspecified. If some detail doesn't match browsers, please let me know, as it's almost certainly unintentional.

      Changes in decoding from a byte stream:

      • Only detect ‘@charset’ rules in ASCII-compatible byte patterns.
      • Ignore ‘@charset’ rules that specify an ASCII-incompatible encoding, as that would cause the rule itself to not decode properly.
      • Refer to the Encoding Standard rather than the IANA registry for character encodings.

      Tokenization changes:

      • Any U+0000 NULL character in the CSS source is replaced with U+FFFD REPLACEMENT CHARACTER.
      • Any hexadecimal escape sequence such as ‘\0’ that evaluates to zero produces U+FFFD REPLACEMENT CHARACTER rather than U+0000 NULL.
      • The definition of non-ASCII character was changed to be consistent with every definition of ASCII. This affects characters U+0080 to U+009F, which are now name characters rather than 〈delim〉s, like the rest of non-ASCII characters.
      • Tokenization does not emit COMMENT or BAD_COMMENT tokens anymore. BAD_COMMENT is now considered the same as a normal token (not an error). Serialization is responsible for inserting comments as necessary between tokens that need to be separated, e.g. two consecutive 〈ident〉s.
      • The 〈unicode-range〉 token is now more restrictive. (Issue: should it be? I can’t find a case where this change is even testable. Align the definition with the Fonts spec.)
      • Apply the EOF error handling rule in the tokenizer and emit normal 〈string〉 and 〈url〉 tokens rather than BAD_STRING or BAD_URI on EOF.
      • The 〈prefix-match〉, 〈suffix-match〉, and 〈substring-match〉 tokens have been imported from Selectors 3.
      • The BAD_URI token (now 〈bad-url〉) is "self-contained". In other words, once the tokenizer realizes it's in a 〈bad-url〉 rather than a 〈url〉, it just seeks forward to look for the closing ), ignoring everything else. This behavior is simpler than treating it like a 〈function〉 and paying attention to opened blocks and such. Only WebKit exhibits this behavior, but it doesn't appear that we've gotten any compat bugs from it.
      • The 〈comma〉 has been added.
      • The 〈number〉, 〈percentage〉, and 〈dimension〉 tokens have been changed to include the preceding +/- sign as part of their value (rather than as a separate 〈delim〉 that needs to be manually handled every time the token is mentioned in other specs). The only consequence of this is that comments can no longer be inserted between the sign and the number.
      • Scientific notation is supported for numbers/percentages/dimensions to match SVG, per WG resolution.
      • 〈column〉 has been added, to keep Selectors parsing in single-token lookahead.

      Parsing changes:

      • Any list of declarations now also accepts at-rules, like ‘@page’, per WG resolution. This makes a difference in error handling even if no such at-rules are defined yet: an at-rule, valid or not, ends at a {} block without a 〈semicolon〉 and lets the next declaration begin.
      • The handling of some miscellaneous "special" tokens (like an unmatched 〈}〉) showing up in various places in the grammar has been specified, with some reasonable behavior shown by at least one browser. Previously, stylesheets with those tokens in those places just didn't match the stylesheet grammar at all, so their handling was totally undefined. Specifically:
        • [] blocks, () blocks and functions can now contain {} blocks, 〈at-keyword〉s or 〈semicolon〉s
        • Selectors can now contain 〈semicolon〉s
        • Selectors and at-rule preludes can now contain 〈at-keyword〉s

      An+B changes from Selectors Level 3 [SELECT]:

      • The An+B microsyntax has now been formally defined in terms of CSS tokens, rather than with a separate tokenizer. This has resulted in minor differences:
        • In values starting with "+n", a space is now allowed between the "+" and "n". (This is an accidental consequence of the "+" and "n" parsing as separate CSS tokens, and CSS's value grammar ignoring whitespace.)
        • In some cases, "-" characters or digits can be escaped (when they appear as part of the unit of a 〈dimension〉 or 〈ident〉).

      10. Conformance

      10.1. Document conventions

      Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.

      All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]

      Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:

      This is an example of an informative example.

      Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:

      Note, this is an informative note.

      10.2. Conformance classes

      Conformance to CSS Syntax Module Level 3 is defined for three conformance classes:

      style sheet
      A CSS style sheet.
      renderer
      A UA that interprets the semantics of a style sheet and renders documents that use them.
      authoring tool
      A UA that writes a style sheet.

      A style sheet is conformant to CSS Syntax Module Level 3 if it is syntactically valid according to this module.

      A renderer is conformant to CSS Syntax Module Level 3 if it parses a stylesheet according to this module.

      An authoring tool is conformant to CSS Syntax Module Level 3 if it writes style sheets that are syntactically valid according to this module.

      Acknowledgments

      Thanks for feedback and contributions from David Baron, 呂康豪 (Kang-Hao Lu), and Simon Sapin.

      References

      Normative references

      [RFC2119]
      S. Bradner. Key words for use in RFCs to Indicate Requirement Levels. Internet RFC 2119. URL: http://www.ietf.org/rfc/rfc2119.txt

      Other references

      [SELECT]
      Tantek Çelik; et al. Selectors Level 3. 29 September 2011. W3C Recommendation. URL: http://www.w3.org/TR/2011/REC-css3-selectors-20110929/

      Index

      • A, 6.
      • <an+b>, 6.2.
      • An+B, 6.
      • are a valid escape, 4.3.8.
      • ASCII case-insensitive, 5.2.
      • at-rule, 5., 2.
      • authoring tool, 10.2.
      • B, 6.
      • check if three characters would start an identifier, 4.3.9.
      • check if three characters would start a number, 4.3.10.
      • check if two characters are a valid escape, 4.3.8.
      • component value, 5.
      • Consume a component value, 5.4.6.
      • Consume a declaration, 5.4.5.
      • Consume a function, 5.4.8.
      • Consume a list of declarations, 5.4.4.
      • Consume a list of rules, 5.4.1.
      • Consume a name, 4.3.11.
      • Consume an at-rule, 5.4.2.
      • Consume an escaped character, 4.3.7.
      • Consume an ident-like token, 4.3.3.
      • Consume a number, 4.3.12.
      • Consume a numeric token, 4.3.2.
      • Consume a qualified rule, 5.4.3.
      • Consume a simple block, 5.4.7.
      • Consume a string token, 4.3.4.
      • Consume a token, 4.3.1.
      • Consume a unicode-range token, 4.3.6.
      • Consume a url token, 4.3.5.
      • Consume the remnants of a bad url, 4.3.14.
      • Convert a string to a number, 4.3.13.
      • current input character, 4.2.
      • current input token, 5.2.
      • <dashndashdigit-ident>, 6.2.
      • declaration, 5.
      • <declaration-list>, 7.1.
      • decode, 3.2.
      • digit, 4.2.
      • ending token, 5.4.7.
      • end of the range, 4.3.15.
      • 〈EOF〉, 5.2.
      • EOF character, 4.2.
      • function, 5.
      • get an encoding, 3.2.
      • hex digit, 4.2.
      • <integer>, 6.2.
      • letter, 4.2.
      • lowercase letter, 4.2.
      • maximum allowed codepoint, 4.2.
      • name character, 4.2.
      • name-start character, 4.2.
      • <ndashdigit-dimension>, 6.2.
      • <ndashdigit-ident>, 6.2.
      • <n-dimension>, 6.2.
      • newline, 4.2.
      • next input character, 4.2.
      • next input token, 5.2.
      • non-ASCII character, 4.2.
      • non-printable character, 4.2.
      • Parse a component value, 5.3.4.
      • Parse a list of component values, 5.3.5.
      • Parse a list of declarations, 5.3.3.
      • Parse a rule, 5.3.2.
      • Parse a stylesheet, 5.3.1.
      • parse error, 3.
      • preserved tokens, 5.
      • qualified rule, 2., 5.
      • reconsume the current input character, 4.2.
      • reconsume the current input token, 5.2.
      • renderer, 10.2.
      • <rule-list>, 7.1.
      • Set the 〈unicode-range〉’s range, 4.3.15.
      • <signed-integer>, 6.2.
      • <signless-integer>, 6.2.
      • simple block, 5.
      • start of the range, 4.3.15.
      • starts with an identifier, 4.3.9.
      • starts with a number, 4.3.10.
      • starts with a valid escape, 4.3.8.
      • start with an identifier, 4.3.9.
      • start with a number, 4.3.10.
      • <stylesheet>, 7.1.
      • style sheet
        • as conformance class, 10.2.
      • uppercase letter, 4.2.
      • whitespace, 4.2.
      • would start an identifier, 4.3.9.
      • would start a number, 4.3.10.

      Property index

      Property Values Initial Applies to Inh. Percentages Media