Copyright © 2013 W3C® (MIT, ERCIM, Keio, Beihang), All Rights Reserved. W3C liability, trademark and document use rules apply.
CSS is a language for describing the rendering of structured documents (such as HTML and XML) on screen, on paper, in speech, etc. This module describes, in general terms, the basic structure and syntax of CSS stylesheets. It defines, in detail, the syntax and parsing of CSS - how to turn a stream of bytes into a meaningful stylesheet.
This is a public copy of the editors' draft. It is provided for discussion only and may change at any moment. Its publication here does not imply endorsement of its contents by W3C. Don't cite this document other than as work in progress.
The (archived) public mailing list www-style@w3.org (see instructions) is preferred for discussion of this specification. When sending e-mail, please put the text “css3-syntax” in the subject, preferably like this: “[css3-syntax] …summary of comment…”
This document was produced by the CSS Working Group (part of the Style Activity).
This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
The following features are at risk: …
This section is not normative.
This module defines the abstract syntax and parsing of CSS stylesheets and other things which use CSS syntax (such as the HTML style attribute).
It defines algorithms for converting a stream of codepoints (in other words, text) into a stream of CSS tokens, and then further into CSS objects such as stylesheets, rules, and declarations.
This module defines the syntax and parsing of CSS stylesheets. It supersedes the lexical scanner and grammar defined in CSS 2.1.
This section is not normative.
A CSS document is a series of qualified rules, which are usually style rules that apply CSS properties to elements, and at-rules, which define special processing rules or values for the CSS document.
A qualified rule starts with a prelude then has a {}-wrapped block containing a sequence of declarations. The meaning of the prelude varies based on the context that the rule appears in - for style rules, it's a selector which specifies what elements the declarations will apply to. Each declaration has a name, followed by a colon and the declaration value. Declarations are separated by semicolons.
A typical rule might look something like this:
p > a { color: blue; text-decoration: underline; }
In the above rule, "p > a" is the selector, which, if the source document is HTML, selects any <a> elements that are children of a <p> element.
"color: blue;" is a declaration specifying that, for the elements that match the selector, their ‘color’ property should have the value ‘blue’. Similarly, their ‘text-decoration’ property should have the value ‘underline’.
At-rules are all different, but they have a basic structure in common. They start with an "@" character followed by their name. Some at-rules are simple statements, with their name followed by more CSS values to specify their behavior, and finally ended by a semicolon. Others are blocks; they can have CSS values following their name, but they end with a {}-wrapped block, similar to a qualified rule. Even the contents of these blocks are specific to the given at-rule: sometimes they contain a sequence of declarations, like a qualified rule; other times, they may contain additional blocks, or at-rules, or other structures altogether.
Here are several examples of at-rules that illustrate the varied syntax they may contain.
@import "my-styles.css";
The ‘@import’ at-rule is a simple statement. After its name, it takes a single string or ‘url()’ function to indicate the stylesheet that it should import.
@page :left { margin-left: 4cm; margin-right: 3cm; }
The ‘@page’ at-rule consists of an optional page selector (the ‘:left’ pseudoclass), followed by a block of properties that apply to the page when printed. In this way, it's very similar to a normal style rule, except that its properties don't apply to any "element", but rather the page itself.
@media print { body { font-size: 10pt } }
The ‘@media’ at-rule begins with a media type and a list of optional media queries. Its block contains entire rules, which are only applied when the ‘@media’'s conditions are fulfilled.
Property names and at-rule names are always identifiers, which have to start with a letter or a hyphen followed by a letter, and then can contain letters, numbers, hyphens, or underscores. You can include any character at all, even ones that CSS uses in its syntax, by escaping it with a backslash (\) or by using a hexadecimal escape.
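The identifier shape described above can be approximated by a simple check. This is an illustrative sketch only: it covers just the prose rule (letter, or hyphen followed by a letter, then letters, digits, hyphens, or underscores) and deliberately ignores backslash escapes and hex escapes, which a real CSS tokenizer must handle.

```python
import re

# Simplified identifier check based on the prose rule above. Escapes
# (backslash and hexadecimal) are not handled in this sketch.
IDENT_RE = re.compile(r'^-?[A-Za-z][A-Za-z0-9_-]*$')

def is_simple_identifier(text: str) -> bool:
    """True if `text` matches the basic (escape-free) identifier shape."""
    return IDENT_RE.match(text) is not None
```

For example, `color` and `-moz-box` pass this check, while `2px` does not, because an identifier may not begin with a digit.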
The syntax of selectors is defined in the Selectors spec. Similarly, the syntax of the wide variety of CSS values is defined in the Values & Units spec. The special syntaxes of individual at-rules can be found in the specs that define them.
This section is not normative.
When errors occur in CSS, the parser attempts to recover gracefully, throwing away only the minimum amount of content before returning to parsing as normal. This is because errors aren't always mistakes - new syntax looks like an error to an old parser, and it's useful to be able to add new syntax to the language without worrying about stylesheets that include it being completely broken in older UAs.
The precise error-recovery behavior is detailed in the parser itself, but it's simple enough that a short description is fairly accurate:
User agents must use the parsing rules described in this specification to generate the CSSOM trees from text/css resources. Together, these rules define what is referred to as the CSS parser.
This specification defines the parsing rules for CSS documents, whether they are syntactically correct or not. Certain points in the parsing algorithm are said to be parse errors. The error handling for parse errors is well-defined: user agents must either act as described below when encountering such problems, or must abort processing at the first error that they encounter for which they do not wish to apply the rules described below.
Conformance checkers must report at least one parse error condition to the user if one or more parse error conditions exist in the document and must not report parse error conditions if none exist in the document. Conformance checkers may report more than one parse error condition if more than one parse error condition exists in the document. Conformance checkers are not required to recover from parse errors, but if they do, they must recover in the same way as user agents.
The input to the CSS parsing process consists of a stream of Unicode code points, which is passed through a tokenization stage followed by a tree construction stage. The output is a CSSStyleSheet object.
Implementations that do not support scripting do not have to actually create a CSSOM CSSStyleSheet object, but the CSSOM tree in such cases is still used as the model for the rest of the specification.
The stream of Unicode code points that comprises the input to the tokenization stage may be initially seen by the user agent as a stream of bytes (typically coming over the network or from the local file system). The bytes encode the actual characters according to a particular character encoding, which the user agent must use to decode the bytes into characters.
To decode the stream of bytes into a stream of characters, UAs must follow these steps.
The algorithms to get an encoding and decode are defined in the Encoding Standard.
First, determine the fallback encoding:
If the byte stream begins with the byte sequence 40 63 68 61 72 73 65 74 20 22, followed by a sequence of bytes that are not 22 (written (not 22)* below), followed by 22 3B, then get an encoding for the sequence of (not 22)* bytes, decoded per windows-1252.
Note: Anything ASCII-compatible will do, so using windows-1252 is fine.
Note: The byte sequence above, when decoded as ASCII, is the string "@charset "…";", where the "…" is the sequence of bytes corresponding to the encoding's name.
If the return value was utf-16 or utf-16be, use utf-8 as the fallback encoding; if it was anything else except failure, use the return value as the fallback encoding.
Note: This mimics HTML <meta> behavior.
Otherwise, get an encoding for the value of the charset attribute on the <link> element or <?xml-stylesheet?> processing instruction that caused the style sheet to be included, if any. If that does not return failure, use the return value as the fallback encoding.
Otherwise, use utf-8 as the fallback encoding.
Then, decode the byte stream using the fallback encoding.
Note: the decode algorithm lets the byte order mark (BOM) take precedence, hence the usage of the term "fallback" above.
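The fallback-determination steps above can be sketched roughly as follows. This is an illustrative simplification, not the normative algorithm: the label-to-encoding mapping ("get an encoding") is reduced to trimming and lowercasing, and `env_charset` stands in for the charset obtained from the <link> element or <?xml-stylesheet?> processing instruction, if any. A real implementation must follow the Encoding Standard.

```python
def detect_fallback_encoding(data: bytes, env_charset=None) -> str:
    """Sketch of the fallback-encoding steps for a CSS byte stream."""
    prefix = b'@charset "'  # the bytes 40 63 68 61 72 73 65 74 20 22
    if data.startswith(prefix):
        end = data.find(b'\x22\x3b', len(prefix))  # the closing '";' (22 3B)
        if end != -1:
            # Decode the label bytes per windows-1252 (ASCII-compatible).
            label = data[len(prefix):end].decode('windows-1252').strip().lower()
            if label in ('utf-16', 'utf-16be'):
                return 'utf-8'  # utf-16 labels fall back to utf-8, per the step above
            if label:
                return label
    if env_charset:
        # Charset from the referring <link> / processing instruction, if any.
        return env_charset.strip().lower()
    return 'utf-8'
```

Note that the decode algorithm itself still lets a byte order mark override this result, which is why the value is only a "fallback".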
Anne says that steps 3/4 should be an input to this algorithm from the specs that define importing stylesheet, to make the algorithm as a whole cleaner. Perhaps abstract it into the concept of an "environment charset" or something?
Should we only take the charset from the referring document if it's same-origin?
The input stream consists of the characters (individual unicode code-points) pushed into it as the input byte stream is decoded.
Before sending the input stream to the tokenizer, implementations must make the following character substitutions:
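The substitution list is not reproduced above; per the spec's preprocessing rules, the substitutions are newline normalization (CR, FF, and CRLF pairs each become a single LF) and replacement of NULL with U+FFFD. A minimal sketch, assuming those two rules:

```python
def preprocess(text: str) -> str:
    """Apply the input-stream character substitutions (sketch)."""
    # Normalize newlines: CRLF pairs, lone CR, and FF all become LF.
    text = text.replace('\r\n', '\n').replace('\r', '\n').replace('\f', '\n')
    # Replace NULL with U+FFFD REPLACEMENT CHARACTER.
    return text.replace('\x00', '\ufffd')
```

The CRLF replacement must run before the lone-CR replacement, or each CRLF pair would produce two LFs instead of one.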
Implementations must act as if they used the following algorithms to tokenize CSS. To transform a stream of characters into a stream of tokens, repeatedly consume a token until an 〈EOF〉 is encountered, collecting the returned tokens into a stream. Each call to the consume a token algorithm returns a single token, so it can also be used "on-demand" to tokenize a stream of characters during parsing, if so desired.
The output of the tokenization step is a stream of zero or more of the following tokens: 〈ident〉, 〈function〉, 〈at-keyword〉, 〈hash〉, 〈string〉, 〈bad-string〉, 〈url〉, 〈bad-url〉, 〈delim〉, 〈number〉, 〈percentage〉, 〈dimension〉, 〈unicode-range〉, 〈include-match〉, 〈dash-match〉, 〈prefix-match〉, 〈suffix-match〉, 〈substring-match〉, 〈column〉, 〈whitespace〉, 〈CDO〉, 〈CDC〉, 〈colon〉, 〈semicolon〉, 〈comma〉, 〈[〉, 〈]〉, 〈(〉, 〈)〉, 〈{〉, and 〈}〉.
The type flag of hash tokens is used in the Selectors syntax [SELECT]. Only hash tokens with the "id" type are valid ID selectors.
As a technical note, the tokenizer defined here requires only three characters of look-ahead. The tokens it produces are designed to allow Selectors to be parsed with one token of look-ahead, and additional tokens may be added in the future to maintain this invariant.
This section is non-normative.
This section presents an informative view of the tokenizer, in the form of railroad diagrams. Railroad diagrams are more compact than an explicit parser, but often easier to read than a regular expression.
These diagrams are informative and incomplete; they describe the grammar of "correct" tokens, but do not describe error-handling at all. They are provided solely to make it easier to get an intuitive grasp of the syntax of each token.
Diagrams with names between 〈〉 brackets represent tokens. The rest are productions referred to by other diagrams.
This section defines several terms used during the tokenization phase.
The algorithms defined in this section transform a stream of characters into a stream of tokens.
This section describes how to consume a token from a stream of characters. It will return a single token of any type.
Consume the next input character.
Otherwise, return a 〈delim〉 with its value set to the current input character.
Otherwise, emit a 〈delim〉 with its value set to the current input character.
Otherwise, return a 〈delim〉 with its value set to the current input character.
Otherwise, return a 〈delim〉 with its value set to the current input character.
Otherwise, if the input stream starts with an identifier, reconsume the current input character, consume an ident-like token, and return it.
Otherwise, if the next 2 input characters are U+002D HYPHEN-MINUS U+003E GREATER-THAN SIGN (->), consume them and return a 〈CDC〉.
Otherwise, return a 〈delim〉 with its value set to the current input character.
Otherwise, return a 〈delim〉 with its value set to the current input character.
Otherwise, return a 〈delim〉 with its value set to the current input character.
Otherwise, return a 〈delim〉 with its value set to the current input character.
Otherwise, return a 〈delim〉 with its value set to the current input character.
Otherwise, this is a parse error. Return a 〈delim〉 with its value set to the current input character.
Otherwise, return a 〈delim〉 with its value set to the current input character.
Otherwise, reconsume the current input character, consume an ident-like token, and return it.
Otherwise, if the next input character is U+007C VERTICAL LINE (|), consume it and return a 〈column〉.
Otherwise, return a 〈delim〉 with its value set to the current input character.
Otherwise, return a 〈delim〉 with its value set to the current input character.
This section describes how to consume a numeric token from a stream of characters. It returns either a 〈number〉, 〈percentage〉, or 〈dimension〉.
If the next 3 input characters would start an identifier, then:
Otherwise, if the next input character is U+0025 PERCENTAGE SIGN (%), consume it. Create a 〈percentage〉 with the same representation and value as the returned number, and return it.
Otherwise, create a 〈number〉 with the same representation, value, and type flag as the returned number, and return it.
This section describes how to consume an ident-like token from a stream of characters. It returns an 〈ident〉, 〈function〉, 〈url〉, or 〈bad-url〉.
If the returned string's value is an ASCII case-insensitive match for "url", and the next input character is U+0028 LEFT PARENTHESIS ((), consume it. Consume a url token, and return it.
Otherwise, if the next input character is U+0028 LEFT PARENTHESIS ((), consume it. Create a 〈function〉 token with its value set to the returned string and return it.
Otherwise, create an 〈ident〉 token with its value set to the returned string and return it.
This section describes how to consume a string token from a stream of characters. It returns either a 〈string〉 or 〈bad-string〉.
This algorithm must be called with an ending character, which denotes the character that ends the string.
Initially create a 〈string〉 with its value set to the empty string.
Repeatedly consume the next input character from the stream:
Otherwise, if the next input character is a newline, consume it.
Otherwise, this is a parse error. Create a 〈bad-string〉 and return it.
This section describes how to consume a url token from a stream of characters. It returns either a 〈url〉 or a 〈bad-url〉.
This algorithm assumes that the initial "url(" has already been consumed.
Execute the following steps in order:
Otherwise, this is a parse error. Consume the remnants of a bad url, create a 〈bad-url〉, and return it.
This section describes how to consume a unicode-range token. It returns a 〈unicode-range〉 token.
This algorithm assumes that the initial "u+" has been consumed, and the next character verified to be a hex digit or a "?".
Execute the following steps in order:
If any U+003F QUESTION MARK (?) characters were consumed, then:
Otherwise, interpret the digits as a hexadecimal number. This is the start of the range.
This section describes how to consume an escaped character. It assumes that the U+005C REVERSE SOLIDUS (\) has already been consumed and that the next input character has already been verified to not be a newline or EOF. It will return a character.
Consume the next input character.
This section describes how to check if two characters are a valid escape. The algorithm described here can be called explicitly with two characters, or can be called with the input stream itself. In the latter case, the two characters in question are the current input character and the next input character, in that order.
This algorithm will not consume any additional characters.
If the first character is not U+005C REVERSE SOLIDUS (\), return false.
Otherwise, if the second character is a newline or EOF character, return false.
Otherwise, return true.
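The three steps above translate directly into code. In this sketch, EOF is modeled as the empty string, an assumption of the example rather than of the spec:

```python
def is_valid_escape(first: str, second: str) -> bool:
    """Check whether two characters form a valid escape (sketch).

    EOF is modeled here as the empty string.
    """
    if first != '\\':          # must begin with U+005C REVERSE SOLIDUS
        return False
    if second == '\n' or second == '':   # a newline or EOF cannot be escaped
        return False
    return True
```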
This section describes how to check if three characters would start an identifier. The algorithm described here can be called explicitly with three characters, or can be called with the input stream itself. In the latter case, the three characters in question are the current input character and the next two input characters, in that order.
This algorithm will not consume any additional characters.
Look at the first character:
This section describes how to check if three characters would start a number. The algorithm described here can be called explicitly with three characters, or can be called with the input stream itself. In the latter case, the three characters in question are the current input character and the next two input characters, in that order.
This algorithm will not consume any additional characters.
Look at the first character:
Otherwise, if the second character is a U+002E FULL STOP (.) and the third character is a digit, return true.
Otherwise, return false.
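The whole check can be sketched as a pure three-character predicate. The sign and full-stop branches follow the cases shown above; note this sketch assumes ASCII digits only and does not consult the input stream itself:

```python
def starts_number(c1: str, c2: str, c3: str) -> bool:
    """Would these three characters start a number? (sketch)"""
    digits = '0123456789'
    if c1 in '+-':
        # A sign must be followed by a digit, or by '.' and then a digit.
        if c2 in digits:
            return True
        return c2 == '.' and c3 in digits
    if c1 == '.':
        # A full stop must be followed by a digit.
        return c2 in digits
    return c1 in digits
```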
This section describes how to consume a name from a stream of characters. It returns a string containing the largest name that can be formed from adjacent characters in the stream, starting from the first.
This algorithm does not do the verification of the first few characters that are necessary to ensure the returned characters would constitute an 〈ident〉. If that is the intended use, ensure that the stream starts with an identifier before calling this algorithm.
Let result initially be an empty string.
Repeatedly consume the next input character from the stream:
This section describes how to consume a number from a stream of characters. It returns a 3-tuple of a string representation, a numeric value, and a type flag which is either "integer" or "number".
This algorithm does not do the verification of the first few characters that are necessary to ensure a number can be obtained from the stream. Ensure that the stream starts with a number before calling this algorithm.
Execute the following steps in order:
This section describes how to convert a string to a number. It returns a number.
This algorithm does not do any verification to ensure that the string contains only a number. Ensure that the string contains only a valid CSS number before calling this algorithm.
Divide the string into seven components, in order from left to right:
Return the number s·(i + f·10^(−d))·10^(t·e).
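The formula above can be checked with a short sketch. The regular expression here is an assumption of the example, standing in for the component-splitting step; as in the algorithm, the string is assumed to already be a valid CSS number:

```python
import re

def css_number_value(text: str) -> float:
    """Compute s·(i + f·10^(−d))·10^(t·e) for a valid CSS number (sketch)."""
    m = re.fullmatch(
        r'(?P<sign>[+-]?)(?P<int>\d*)(?:\.(?P<frac>\d+))?'
        r'(?:[eE](?P<esign>[+-]?)(?P<exp>\d+))?', text)
    s = -1 if m.group('sign') == '-' else 1       # sign
    i = int(m.group('int') or 0)                  # integer part
    frac = m.group('frac') or ''
    f, d = int(frac or 0), len(frac)              # fractional part, digit count
    t = -1 if m.group('esign') == '-' else 1      # exponent sign
    e = int(m.group('exp') or 0)                  # exponent
    return s * (i + f * 10**-d) * 10**(t * e)
```

For instance, "-3.25e2" splits into s = −1, i = 3, f = 25, d = 2, t = +1, e = 2, giving −325.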
This section describes how to consume the remnants of a bad url from a stream of characters, "cleaning up" after the tokenizer realizes that it's in the middle of a 〈bad-url〉 rather than a 〈url〉. It returns nothing; its sole use is to consume enough of the input stream to reach a recovery point where normal tokenizing can resume.
Repeatedly consume the next input character from the stream:
This section describes how to set a 〈unicode-range〉’s range so that the range it describes is within the supported range of unicode characters.
It assumes that the start of the range has been defined, the end of the range might be defined, and both are non-negative integers.
If the start of the range is greater than the maximum allowed codepoint, the 〈unicode-range〉’s range is empty.
If the end of the range is defined, and it is less than the start of the range, the 〈unicode-range〉’s range is empty.
If the end of the range is not defined, the 〈unicode-range〉’s range is the single character whose codepoint is the start of the range.
Otherwise, if the end of the range is greater than the maximum allowed codepoint, change it to the maximum allowed codepoint. The 〈unicode-range〉’s range is all characters between the character whose codepoint is the start of the range and the character whose codepoint is the end of the range.
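The four cases above can be sketched as a small function. Representing a range as a `(first, last)` codepoint pair, with `None` for an empty range and for an undefined end, is an assumption of this example:

```python
MAX_CODEPOINT = 0x10FFFF  # maximum allowed codepoint

def set_unicode_range(start: int, end=None):
    """Clamp a unicode-range to the supported codepoints (sketch).

    Returns a (first, last) pair, or None for an empty range.
    """
    if start > MAX_CODEPOINT:
        return None                    # range is empty
    if end is not None and end < start:
        return None                    # range is empty
    if end is None:
        return (start, start)          # single character
    return (start, min(end, MAX_CODEPOINT))  # clamp the end of the range
```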
The input to the parsing stage is a stream or list of tokens from the tokenization stage. The output depends on how the parser is invoked, as defined by the entry points listed later in this section. The parser output can consist of at-rules, qualified rules, and/or declarations.
The parser's output is constructed according to the fundamental syntax of CSS, without regard for the validity of any specific item. Implementations may check the validity of items as they are returned by the various parser algorithms and treat the algorithm as returning nothing if the item was invalid according to the implementation's own grammar knowledge, or may construct a full tree as specified and "clean up" afterwards by removing any invalid items.
The items that can appear in the tree are:
This specification places no limits on what an at-rule's block may contain. Individual at-rules must define whether they accept a block, and if so, how to parse it (preferably using one of the parser algorithms or entry points defined in this specification).
Most qualified rules will be style rules, where the prelude is a selector.
Should we go ahead and generalize the important flag to be a list of bang values? Suggested by Zack Weinberg.
Declarations are further categorized as "properties" or "descriptors", with the former typically appearing in qualified rules and the latter appearing in at-rules. (This categorization does not occur at the Syntax level; instead, it is a product of where the declaration appears, and is defined by the respective specifications defining the given rule.)
The non-preserved tokens listed above are always consumed into higher-level objects, either functions or simple blocks, and so never appear in any parser output themselves.
The 〈}〉, 〈)〉, 〈]〉, 〈bad-string〉, and 〈bad-url〉 tokens are always parse errors, but they are preserved in the token stream by this specification to allow other specs, such as Media Queries, to define more fine-grained error-handling than just dropping an entire declaration or block.
This section is non-normative.
This section presents an informative view of the parser, in the form of railroad diagrams. Railroad diagrams are more compact than a state-machine, but often easier to read than a regular expression.
These diagrams are informative and incomplete; they describe the grammar of "correct" stylesheets, but do not describe error-handling at all. They are provided solely to make it easier to get an intuitive grasp of the syntax.
The algorithms defined in this specification can be invoked in multiple ways to convert a stream of text into various CSS concepts.
All of the algorithms defined in this section begin in the parser. It is assumed that the input preprocessing and tokenization steps have already been completed, resulting in a stream of tokens.
Other specs can define additional entry points for their own purposes.
The following notes should probably be translated into normative text in the relevant specs, hooking this spec's terms:
The CSSStyleSheet#insertRule method, and similar functions which might exist, which parse text into a single rule.
The style attribute, which parses text into the contents of a single style rule.
The ‘attr()’ function.
All of the algorithms defined in this spec may be called with either a list of tokens or of component values. Either way produces an identical result.
To parse a stylesheet from a stream of tokens:
To parse a rule from a stream of tokens:
Otherwise, if the current input token is an 〈at-keyword〉, consume an at-rule.
Otherwise, consume a qualified rule. If nothing was returned, return a syntax error.
To parse a list of declarations:
To parse a list of component values:
The following algorithms comprise the parser. They are called by the parser entry points above.
These algorithms may be called with a list of either tokens or of component values. (The difference being that some tokens are replaced by functions and simple blocks in a list of component values.) Similar to how the input stream returned EOF characters to represent when it was empty during the tokenization stage, the lists in this stage must return an 〈EOF〉 when the next token is requested but they are empty.
An algorithm may be invoked with a specific list, in which case it consumes only that list (and when that list is exhausted, it begins returning 〈EOF〉s). Otherwise, it is implicitly invoked with the same list as the invoking algorithm.
Create an initially empty list of rules.
Repeatedly consume the next input token:
Otherwise, reconsume the current input token. Consume a qualified rule. If anything is returned, append it to the list of rules.
Create a new at-rule with its name set to the value of the current input token, its prelude initially set to an empty list, and its value initially set to nothing.
Repeatedly consume the next input token:
Create a new qualified rule with its prelude initially set to an empty list, and its value initially set to nothing.
Repeatedly consume the next input token:
Create an initially empty list of declarations.
Repeatedly consume the next input token:
Create a new declaration with its name set to the value of the current input token.
Repeatedly consume 〈whitespace〉s until a non-〈whitespace〉 is reached. If this token is anything but a 〈colon〉, this is a parse error. Return nothing.
Otherwise, repeatedly consume a component value from the next input token until an 〈EOF〉 is reached, appending all of the returned values up to that point to the declaration's value.
If the last two non-〈whitespace〉s in the declaration's value are a 〈delim〉 with the value "!" followed by an 〈ident〉 with a value that is an ASCII case-insensitive match for "important", remove them from the declaration's value and set the declaration's important flag to true.
Return the declaration.
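The "!important" check at the end of the algorithm can be sketched as follows. Modeling tokens as `(type, value)` pairs is an assumption of this example, not part of the spec:

```python
def strip_important(tokens):
    """Detect and remove a trailing '!important' from a declaration's value.

    Tokens are modeled as (type, value) pairs in this sketch.
    Returns (remaining_tokens, important_flag).
    """
    # Indices of the non-whitespace tokens, so we can inspect the last two.
    idx = [i for i, t in enumerate(tokens) if t[0] != 'whitespace']
    if (len(idx) >= 2
            and tokens[idx[-2]] == ('delim', '!')
            and tokens[idx[-1]][0] == 'ident'
            and tokens[idx[-1]][1].lower() == 'important'):  # ASCII case-insensitive
        # Remove exactly those two tokens from the value.
        keep = [t for i, t in enumerate(tokens) if i not in (idx[-2], idx[-1])]
        return keep, True
    return tokens, False
```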
This section describes how to consume a component value.
If the current input token is a 〈{〉, 〈[〉, or 〈(〉, consume a simple block and return it.
Otherwise, if the current input token is a 〈function〉, consume a function and return it.
Otherwise, return the current input token.
This section describes how to consume a simple block.
The ending token is the mirror variant of the current input token. (E.g. if it was called with 〈[〉, the ending token is 〈]〉.)
Create a simple block with its associated token set to the current input token.
Repeatedly consume the next input token and process it as follows:
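The mirroring and the implicit close at EOF can be sketched together. For brevity this example models tokens as single-character strings and represents a simple block as a dict; both are assumptions of the sketch, and nested blocks are consumed recursively rather than via a mutual consume-component-value call:

```python
# The mirror variant of each opening token, per the prose above.
MIRROR = {'{': '}', '[': ']', '(': ')'}

def consume_simple_block(tokens, i):
    """Consume a simple block starting at tokens[i] (sketch).

    Returns (block, index_after_block).
    """
    ending = MIRROR[tokens[i]]
    block = {'token': tokens[i], 'value': []}
    i += 1
    while i < len(tokens):
        t = tokens[i]
        if t == ending:
            return block, i + 1            # found the mirror token
        if t in MIRROR:                    # nested simple block
            inner, i = consume_simple_block(tokens, i)
            block['value'].append(inner)
        else:
            block['value'].append(t)
            i += 1
    return block, i                        # EOF: the block ends implicitly
```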
This section describes how to consume a function.
Create a function with a name equal to the value of the current input token, and with a value which is initially an empty list.
Repeatedly consume the next input token and process it as follows:
Several things in CSS, such as the ‘:nth-child()’ pseudoclass, need to indicate indexes in a list. The An+B microsyntax is useful for this, allowing an author to easily indicate single elements or all elements at regularly-spaced intervals in a list.
The An+B notation defines an integer step (A) and offset (B), and represents the An+Bth elements in a list, for every positive integer or zero value of n, with the first element in the list having index 1 (not 0).
For values of A and B greater than 0, this effectively divides the list into groups of A elements (the last group taking the remainder), and selects the Bth element of each group.
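A brute-force sketch of which 1-based indices An+B matches, for all non-negative n, makes the grouping behavior easy to check:

```python
def anb_matches(a: int, b: int, length: int):
    """1-based indices in a list of `length` elements matched by An+B,
    for every non-negative integer n (illustrative sketch)."""
    matched = set()
    for n in range(0, length + abs(b) + 1):
        index = a * n + b
        if 1 <= index <= length:
            matched.add(index)
        if a > 0 and index > length:
            break            # indices only grow past the end from here
        if a <= 0 and index < 1 and n > 0:
            break            # indices only shrink below 1 from here
    return sorted(matched)
```

For example, 2n+1 over a 10-element list matches the odd indices 1, 3, 5, 7, 9, and −n+6 matches the first 6 elements.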
The An+B notation also accepts the ‘even’ and ‘odd’ keywords, which have the same meaning as ‘2n’ and ‘2n+1’, respectively.
Examples:
2n+0 /* represents all of the even elements in the list */
even /* same */
4n+1 /* represents the 1st, 5th, 9th, 13th, etc. elements in the list */
The values of A and B can be negative, but only the positive results of An+B, for n ≥ 0, are used.
Example:
-n+6 /* represents the first 6 elements of the list */
If both A and B are 0, the pseudo-class represents no element in the list.
This section is non-normative.
When A is 0, the An part may be omitted (unless the B part is already omitted). When An is not included and B is non-negative, the ‘+’ sign before B (when allowed) may also be omitted. In this case the syntax simplifies to just B.
Examples:
0n+5 /* represents the 5th element in the list */
5 /* same */
When A is 1 or -1, the 1 may be omitted from the rule.
Examples:
The following notations are therefore equivalent:
1n+0 /* represents all elements in the list */
n+0 /* same */
n /* same */
If B is 0, then every Ath element is picked. In such a case, the +B (or -B) part may be omitted unless the A part is already omitted.
Examples:
2n+0 /* represents every even element in the list */
2n /* same */
Whitespace is permitted on either side of the ‘+’ or ‘-’ that separates the An and B parts when both are present.
Valid Examples with white space:
3n + 1
+3n - 2
-n+ 6
+6
Invalid Examples with white space:
3 n
+ 2n
+ 2
The <an+b> type
The An+B notation was originally defined using a slightly different tokenizer than the rest of CSS, resulting in a somewhat odd definition when expressed in terms of CSS tokens. This section describes how to recognize the An+B notation in terms of CSS tokens (thus defining the <an+b> type for CSS grammar purposes), and how to interpret the CSS tokens to obtain values for A and B.
The <an+b> type is defined (using the Value Definition Syntax in the Values & Units spec) as:
<an+b> = odd | even | <integer> | <n-dimension> | '+'? n | -n | <ndashdigit-dimension> | '+'? <ndashdigit-ident> | <dashndashdigit-ident> | <n-dimension> <signed-integer> | '+'? n <signed-integer> | -n <signed-integer> | <n-dimension> ['+' | '-'] <signless-integer> | '+'? n ['+' | '-'] <signless-integer> | -n ['+' | '-'] <signless-integer>
where:
<n-dimension>
is a 〈dimension〉 with its type flag set to "integer", and a unit that is an ASCII case-insensitive match for "n"
<ndashdigit-dimension>
is a 〈dimension〉 with its type flag set to "integer", and a unit that is an ASCII case-insensitive match for "n-*", where "*" is a series of one or more digits
<ndashdigit-ident>
is an 〈ident〉 whose representation is an ASCII case-insensitive match for "n-*", where "*" is a series of one or more digits
<dashndashdigit-ident>
is an 〈ident〉 whose representation is an ASCII case-insensitive match for "-n-*", where "*" is a series of one or more digits
<integer>
is a 〈number〉 with its type flag set to "integer"
<signed-integer>
is a 〈number〉 with its type flag set to "integer", and whose representation starts with "+" or "-"
<signless-integer>
is a 〈number〉 with its type flag set to "integer", and whose representation starts with a digit
The clauses of the production are interpreted as follows:
‘odd’
A is 2, B is 1.
‘even’
A is 2, B is 0.
<integer>
A is 0, B is the integer's value.
<n-dimension>
A is the dimension's value, B is 0.
'+'? n
A is 1, B is 0.
-n
A is -1, B is 0.
<ndashdigit-dimension>
A is the dimension's value, B is the dimension's unit with the leading "n" removed, interpreted as a base-10 number (and thus negative).
'+'? <ndashdigit-ident>
A is 1, B is the ident with the leading "n" removed, interpreted as a base-10 number (and thus negative).
<dashndashdigit-ident>
A is -1, B is the ident with the leading "-n" removed, interpreted as a base-10 number (and thus negative).
<n-dimension> <signed-integer>
A is the dimension's value, B is the integer's value.
'+'? n <signed-integer>
A is 1, B is the integer's value.
-n <signed-integer>
A is -1, B is the integer's value.
<n-dimension> ['+' | '-'] <signless-integer>
A is the dimension's value, B is the integer's value. If a ‘-’ was provided between the two, B is instead the negation of the integer's value.
'+'? n ['+' | '-'] <signless-integer>
A is 1, B is the integer's value. If a ‘-’ was provided between the two, B is instead the negation of the integer's value.
-n ['+' | '-'] <signless-integer>
A is -1, B is the integer's value. If a ‘-’ was provided between the two, B is instead the negation of the integer's value.
The Values spec defines how to specify a grammar for properties. This section does the same, but for rules.
Just like in property grammars, the notation <foo> refers to the "foo" grammar term, assumed to be defined elsewhere. Substituting the <foo> for its definition results in a semantically identical grammar.
Several types of tokens are written literally, without quotes:
〈ident〉s (such as ‘auto’, ‘disc’, etc.), which are written as their value.
〈colon〉 (written as :), 〈comma〉 (written as ,), 〈semicolon〉 (written as ;), 〈(〉, 〈)〉, 〈{〉, and 〈}〉s.
〈delim〉s are written with their value enclosed in single quotes. For example, a 〈delim〉 containing the "+" character is written as '+'. Similarly, the 〈[〉 and 〈]〉s must be written in single quotes, as they're used by the syntax of the grammar itself to group clauses. 〈whitespace〉 is never indicated in the grammar; 〈whitespace〉s are allowed before, after, and between any two tokens, unless explicitly specified otherwise in prose definitions. (For example, if the prelude of a rule is a selector, whitespace is significant.)
When defining a function or a block, the ending token must be specified in the grammar, but if it's not present in the eventual token stream, it still matches.
For example, the grammar of the ‘translateX()’ function is:

translateX( <translation-value> )
However, the stylesheet may end with the function unclosed, like:
.foo { transform: translate(50px
The CSS parser parses this as a style rule containing one declaration, whose value is a function named "translate". This matches the above grammar, even though the ending token didn't appear in the token stream, because by the time the parser is finished, the presence of the ending token is no longer possible to determine; all you have is the fact that there's a block and a function.
The CSS parser is agnostic as to the contents of blocks, such as those that come at the end of some at-rules. Defining the generic grammar of the blocks in terms of tokens is non-trivial, but there are dedicated and unambiguous algorithms defined for parsing this.
The <declaration-list> production represents a list of declarations. It may only be used in grammars as the sole value in a block, and represents that the contents of the block must be parsed using the consume a list of declarations algorithm.
Similarly, the <rule-list> production represents a list of rules, and may only be used in grammars as the sole value in a block. It represents that the contents of the block must be parsed using the consume a list of rules algorithm.
Finally, the <stylesheet> production represents a list of rules. It is identical to <rule-list>, except that blocks using it default to accepting all rules that aren't otherwise limited to a particular context.
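The dispatch these three productions imply can be sketched as follows. This is a non-normative illustration: the function names are placeholders standing in for the spec's "consume a list of declarations" and "consume a list of rules" algorithms, not real API.

```python
# Sketch: a rule's grammar names exactly one of these productions as the
# sole value of its block, and that choice selects the algorithm used to
# parse the block's contents.

def consume_list_of_declarations(tokens):
    # Placeholder for the spec's "consume a list of declarations".
    return ("declarations", list(tokens))

def consume_list_of_rules(tokens):
    # Placeholder for the spec's "consume a list of rules".
    return ("rules", list(tokens))

def parse_block_contents(production: str, tokens):
    if production == "<declaration-list>":
        return consume_list_of_declarations(tokens)
    if production in ("<rule-list>", "<stylesheet>"):
        # <stylesheet> differs from <rule-list> only in which rules the
        # block accepts by default, not in the parsing algorithm.
        return consume_list_of_rules(tokens)
    raise ValueError("unknown block production: " + production)
```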
For example, the ‘@font-face’ rule is defined to have an empty prelude, and to contain a list of declarations. This is expressed with the following grammar:
@font-face { <declaration-list> }
This is a complete and sufficient definition of the rule's grammar.
For another example, ‘@keyframes’ rules are more complex, interpreting their prelude as a name and containing keyframes rules in their block. Their grammar is:
@keyframes <keyframes-name> { <rule-list> }
For rules that use <declaration-list>, the spec for the rule must define which properties, descriptors, and/or at-rules are valid inside the rule; this may be as simple as saying "The @foo rule accepts the properties/descriptors defined in this specification/section.", and extension specs may simply say "The @foo rule additionally accepts the following properties/descriptors.". Any declarations or at-rules found inside the block that are not defined as valid must be removed from the rule's value.
Within a <declaration-list>, !important is automatically invalid on any descriptors.
If the rule accepts properties, the spec for the rule must define
whether the properties interact with the cascade, and with what
specificity. If they don't interact with the cascade, properties
containing !important
are automatically invalid;
otherwise using !important
is valid and has its usual
effect on the cascade origin of the property.
For example, ‘@font-face’ in the previous example must, in addition to what is written there, define that the allowed declarations are the descriptors defined in the Fonts spec.

For rules that use <rule-list>, the spec for the rule must define what types of rules are valid inside the rule, same as for <declaration-list>, and unrecognized rules must similarly be removed from the rule's value.
For example, ‘@keyframes’ in the previous example must, in addition to what is written there, define that the only allowed rules are <keyframe-rule>s, which are defined as:
<keyframe-rule> = <keyframe-selector> { <declaration-list> }
Keyframe rules, then, must further define that they accept as declarations all animatable CSS properties, plus the ‘animation-timing-function’ property, but that they do not interact with the cascade.
For rules that use <stylesheet>, all rules are allowed by default, but the spec for the rule may define what types of rules are invalid inside the rule.
For example, the ‘@media’ rule accepts anything that can be placed in a stylesheet, except more ‘@media’ rules. As such, its grammar is:
@media <media-query-list> { <stylesheet> }
It additionally defines a restriction that the <stylesheet> cannot contain ‘@media’ rules, which causes them to be dropped from the outer rule's value if they appear.
This specification does not define how to serialize CSS in general, leaving that task to the CSSOM and individual feature specifications. However, there is one important facet that must be specified here regarding comments, to ensure accurate "round-tripping" of data from text to CSS objects and back.
The tokenizer described in this specification does not produce tokens for comments, or otherwise preserve them in any way. Implementations may preserve the contents of comments and their location in the token stream. If they do, this preserved information must have no effect on the parsing step, but must be serialized in its position as "/*" followed by its contents followed by "*/".
If the implementation does not preserve comments, it must insert the text "/**/" between the serialization of adjacent tokens when the two tokens are of the following pairs:
The preceding pairs of tokens can only be adjacent due to comments in the original text, so the above rule reinserts the minimum number of comments into the serialized text to ensure an accurate round-trip. (Roughly. The 〈delim〉 rules are slightly too powerful, for simplicity.)
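This reinsertion rule can be sketched as a filter over a serialized token stream. The sketch is non-normative: the pair table below is a small illustrative subset, not the spec's full list, and the (type, value) token model is an assumption.

```python
# Sketch: when comments are not preserved, write "/**/" between adjacent
# serialized tokens whose concatenation would otherwise re-tokenize
# differently. Only a few illustrative pairs are listed here.

RISKY_PAIRS = {
    ("ident", "ident"),       # "foo" + "bar" would merge into "foobar"
    ("ident", "number"),      # "foo" + "4" would merge into "foo4"
    ("number", "ident"),      # "4" + "px" would merge into a dimension
    ("at-keyword", "ident"),  # "@media" + "x" would merge
}

def serialize(tokens):
    out = []
    prev_type = None
    for ttype, value in tokens:
        if prev_type is not None and (prev_type, ttype) in RISKY_PAIRS:
            out.append("/**/")  # minimal comment keeping tokens apart
        out.append(value)
        prev_type = ttype
    return "".join(out)
```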
To serialize an <an+b> value, let s initially be the empty string:

A and B are both zero
Append "0" to s.

A is zero, B is non-zero
Serialize B and append it to s.

A is non-zero, B is zero
Serialize A and append it to s, then append "n" to s.

A and B are both non-zero
Serialize A and append it to s, then append "n" to s. If B is positive, append "+" to s. Serialize B and append it to s.

Return s.
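A non-normative sketch of this serialization, building up the string s from the integer values A and B:

```python
# Sketch: serialize an <an+b> value from its integers A and B.

def serialize_anb(a: int, b: int) -> str:
    s = ""
    if a == 0 and b == 0:
        s += "0"
    elif a == 0:
        s += str(b)        # B alone
    elif b == 0:
        s += str(a) + "n"  # An alone
    else:
        s += str(a) + "n"
        if b > 0:
            s += "+"       # explicit sign between An and B
        s += str(b)        # str() already prefixes "-" when B < 0
    return s
```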
This section is non-normative.
Note that the point of this spec is to match reality; changes from CSS 2.1 are nearly always because CSS 2.1 specified something that doesn't match actual browser behavior, or left something unspecified. If some detail doesn't match browsers, please let me know, as it's almost certainly unintentional.
Changes in decoding from a byte stream:
Only detect ‘@charset’ rules in ASCII-compatible byte patterns.

Ignore ‘@charset’ rules that specify an ASCII-incompatible encoding, as that would cause the rule itself to not decode properly.
Tokenization changes:
Escapes such as ‘\0’ that evaluate to zero produce U+FFFD REPLACEMENT CHARACTER rather than U+0000 NULL.
Parsing changes:
Any list of declarations now also accepts at-rules, like ‘@page’, per WG resolution. This makes a difference in error handling even if no such at-rules are defined yet: an at-rule, valid or not, ends at a {} block without a 〈semicolon〉 and lets the next declaration begin.
An+B changes from Selectors Level 3 [SELECT]:
Conformance requirements are expressed with a combination of descriptive assertions and RFC 2119 terminology. The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC 2119. However, for readability, these words do not appear in all uppercase letters in this specification.
All of the text of this specification is normative except sections explicitly marked as non-normative, examples, and notes. [RFC2119]
Examples in this specification are introduced with the words “for example” or are set apart from the normative text with class="example", like this:
This is an example of an informative example.
Informative notes begin with the word “Note” and are set apart from the normative text with class="note", like this:
Note, this is an informative note.
Conformance to CSS Syntax Module Level 3 is defined for three conformance classes:
A style sheet is conformant to CSS Syntax Module Level 3 if it is syntactically valid according to this module.
A renderer is conformant to CSS Syntax Module Level 3 if it parses a stylesheet according to this module.
An authoring tool is conformant to CSS Syntax Module Level 3 if it writes style sheets that are syntactically valid according to this module.
Thanks for feedback and contributions from David Baron, 呂康豪 (Kang-Hao Lu), and Simon Sapin.
<dashndashdigit-ident>, 6.2.
<integer>, 6.2.
<ndashdigit-dimension>, 6.2.
<ndashdigit-ident>, 6.2.
<n-dimension>, 6.2.
<signed-integer>, 6.2.
<signless-integer>, 6.2.