Many national, regional and local governments, as well as other organizations in- and outside of the public sector, collect numeric data and aggregate this data into statistics. There is a need to publish these statistics in a standardized, machine-readable way on the Web, so that they can be freely integrated and reused in consuming applications.
In this document, the W3C Government Linked Data Working Group presents use cases and lessons supporting a recommendation of the RDF Data Cube Vocabulary [QB-2013]. We describe case studies of existing deployments of an earlier version of the Data Cube Vocabulary [QB-2010] as well as other possible use cases that would benefit from using the vocabulary. In particular, we identify benefits and challenges in using a vocabulary for representing statistics. Also, we derive lessons that can be used for future work on the vocabulary as well as for useful tools complementing the vocabulary.
The rest of this document is structured as follows. We will first give a short introduction to modeling statistics. Then, we will describe use cases that have been derived from existing deployments or from feedback to the earlier version of the Data Cube Vocabulary. In particular, we describe possible benefits and challenges of use cases. Afterwards, we will describe lessons derived from the use cases.
We use the term "Data Cube Vocabulary" throughout the document when referring to the vocabulary.
In the following, we describe the challenge of authoring an RDF vocabulary for publishing statistics as Linked Data. Describing statistics — collected and aggregated numeric data — is challenging for the following reasons:
The Statistical Data and Metadata eXchange [SDMX] — the ISO standard for exchanging and sharing statistical data and metadata among organizations — uses a "multidimensional model" to meet the above challenges in modeling statistics. It describes statistics as observations: each observation exhibits values (Measures) that depend on dimensions (Members of Dimensions). Since the SDMX standard has proven applicable in many contexts, the Data Cube Vocabulary adopts the multidimensional model that underlies SDMX and will be compatible with SDMX.
Statistics is the study of the collection, organization, analysis, and interpretation of data. Statistics comprise statistical data.
The basic structure of statistical data is a multidimensional table (also called a data cube) [SDMX], i.e., a set of observed values organized along a group of dimensions, together with associated metadata. We refer to aggregated statistical data as "macro-data" and unaggregated statistical data as "micro-data".
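The multidimensional-table idea can be sketched in a few lines of Python (the dimension and measure names are illustrative, not part of the vocabulary): each observed value is addressed by the member it takes on each dimension.

```python
# Minimal sketch of a data cube (not the RDF vocabulary itself): a set of
# observations, each locating one observed value along a group of dimensions.
observations = [
    {"refArea": "uk", "refPeriod": "2011", "sex": "T", "population": 60},
    {"refArea": "uk", "refPeriod": "2012", "sex": "T", "population": 61},
]

def lookup(obs_set, **dims):
    """Return the observed values whose dimension members match `dims`."""
    return [o["population"] for o in obs_set
            if all(o.get(k) == v for k, v in dims.items())]

print(lookup(observations, refArea="uk", refPeriod="2011"))  # [60]
```

Fixing all dimensions selects a single value; fixing only some of them selects a slice of the cube.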
Statistical data can be collected in a dataset, typically published and maintained by an organization [SDMX]. The dataset contains metadata, e.g., about the time of collection and publication or about the maintaining and publishing organization.
Source data is data from data stores such as relational databases or spreadsheets that acts as a source for the Linked Data publishing process.
Metadata about statistics defines the data structure and gives contextual information about the statistics.
A format is machine-readable if it is amenable to automated processing by a machine, as opposed to presentation to a human user.
A publisher is a person or organization that exposes source data as Linked Data on the Web.
A consumer is a person or agent that uses Linked Data from the Web.
A registry allows a publisher to announce that data or metadata exists and to add information about how to obtain that data [SDMX 2.1].
This section presents scenarios that are enabled by the existence of a standard vocabulary for the representation of statistics as Linked Data.
(Use case taken from SDMX Web Dissemination Use Case [SDMX 2.1])
Since we have adopted the multidimensional model that underlies SDMX, we also adopt the "Web Dissemination Use Case", the prime use case for SDMX: it represents an increasingly popular use of SDMX and enables organizations to build a self-updating dissemination system.
The Web Dissemination Use Case involves three actors: a structural metadata Web service (registry) that collects metadata about statistical data in a registration fashion; a data Web service (publisher) that publishes statistical data and the metadata registered in the registry; and a data consumption application (consumer) that first discovers data via the registry, then queries the corresponding publisher for the selected data, and finally visualizes the data.
(This use case has been summarized from Ian Dickinson et al. [COINS])
More and more organizations want to publish statistics on the Web, for reasons such as increasing transparency and trust. Ideally, published data can be understood by both humans and machines; in practice, data is often published simply as CSV, PDF, XLS, etc., lacking elaborate metadata, which makes reuse and analysis difficult.
Therefore, the goal in this scenario is to use a machine-readable and application-independent description of common statistics, expressed using open standards, to foster usage and innovation on the published data. In the "COINS as Linked Data" project [COINS], the Combined Online Information System (COINS) shall be published using a standard Linked Data vocabulary. Via the Combined Online Information System (COINS), HM Treasury, the principal custodian of financial data for the UK government, releases previously restricted financial information about government spending.
The COINS data has a hypercube structure. It describes financial transactions using seven independent dimensions (time, data-type, department, etc.) and one dependent measure (value). It also allows thirty-three attributes that may further describe each transaction. COINS is an example of one of the more complex statistical datasets being published via data.gov.uk.
Part of the complexity of COINS arises from the nature of the data being released:
The published COINS datasets cover expenditure related to five different years (2005–06 to 2009–10). The actual COINS database at HM Treasury is updated daily. In principle at least, multiple snapshots of the COINS data could be released throughout the year.
The actual data and its hypercube structure are to be represented separately, so that an application can first examine the structure before deciding to download the actual data, i.e., the transactions. The hypercube structure also defines, for each dimension and attribute, a range of permitted values that are to be represented.
An access or query interface to the COINS data, e.g., via a SPARQL endpoint or the Linked Data API, is planned. Queries that are expected to be interesting include: "spending for one department", "total spending by department", "retrieving all data for a given observation", etc.
According to the COINS as Linked Data project, the reasons for publishing COINS as Linked Data are threefold:
The COINS use case leads to the following challenges:
(This use case has been contributed by Rinke Hoekstra. See CEDA_R and Data2Semantics for more information.)
There is a need, not only in government, to publish considerable amounts of statistical data for consumption in various (including unanticipated) application scenarios. Typically, Microsoft Excel sheets are made available for download.
For instance, one goal of the CEDA_R and Data2Semantics projects is to publish and harmonize Dutch historical census data (from 1795 onwards). These censuses are currently available only as Excel spreadsheets (obtained by data entry) that closely mimic the way in which the data was originally published; they shall be published as Linked Data.
Those Excel sheets contain single spreadsheets with several multidimensional data tables, having a name and notes, as well as column values, row values, and cell values.
Another concrete example is the Stats2RDF project that intends to publish Excel sheets with biomedical statistical data. Here, Excel files are first translated into CSV and then translated into RDF using OntoWiki, a semantic wiki.
(Use case has been taken from [QB4OLAP] and from discussions at publishing-statistical-data mailing list)
Statistical data often contains some kind of "overall" figure that is then broken down into parts.
Example (in pseudo-turtle RDF):
ex:obs1 sdmx:refArea <uk>;              sdmx:refPeriod "2011"; ex:population "60" .
ex:obs2 sdmx:refArea <england>;         sdmx:refPeriod "2011"; ex:population "50" .
ex:obs3 sdmx:refArea <scotland>;        sdmx:refPeriod "2011"; ex:population "5" .
ex:obs4 sdmx:refArea <wales>;           sdmx:refPeriod "2011"; ex:population "3" .
ex:obs5 sdmx:refArea <northernireland>; sdmx:refPeriod "2011"; ex:population "2" .
We are looking for the best way (in the context of the RDF/Data Cube/SDMX approach) to express that the values for England, Scotland, Wales, and Northern Ireland ought to add up to the value for the UK and constitute a more detailed breakdown of the overall UK figure. Since we might also have population figures for France, Germany, EU28, etc., it is not as simple as just taking a qb:Slice where you fix the time period and the measure.
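The desired constraint can be sketched as follows; the explicit breakdown relation is a hypothetical addition, not something the Data Cube Vocabulary currently provides. Note that a slice fixing only the period and the measure would also pick up France, which is why the part-whole relation must be stated explicitly.

```python
# Illustrative sketch of the part-whole constraint described above; the
# `breakdown` mapping is an assumption, not part of the Data Cube Vocabulary.
population = {"uk": 60, "england": 50, "scotland": 5, "wales": 3,
              "northernireland": 2, "france": 65}

# Hypothetical breakdown relation: whole -> its constituent parts.
breakdown = {"uk": ["england", "scotland", "wales", "northernireland"]}

def breakdown_is_consistent(whole):
    """Check that the parts add up to the overall figure."""
    return population[whole] == sum(population[p] for p in breakdown[whole])

print(breakdown_is_consistent("uk"))  # True
```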
Similarly, Etcheverry and Vaisman [QB4OLAP] present the use case to publish household data from StatsWales and Open Data Communities.
This multidimensional data contains, for each fact, a time dimension with one level Year and a location dimension with levels Unitary Authority, Government Office Region, Country, and ALL. The measure is given in units of 1000 households.
In this use case, one wants to publish not only a dataset on the bottommost level, i.e., the number of households at each Unitary Authority in each year, but also datasets on more aggregated levels. For instance, in order to publish a dataset with the number of households at each Government Office Region per year, one needs to aggregate the measure of each fact having the same Government Office Region using the SUM function.
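The roll-up described above can be sketched as a grouped SUM (the area names and the level mapping below are invented for illustration, not taken from StatsWales):

```python
from collections import defaultdict

# (unitary authority, year) -> households (in units of 1000 households);
# illustrative figures only.
households = {
    ("Cardiff", 2011): 142, ("Swansea", 2011): 105, ("Newport", 2011): 62,
}
# Hypothetical level mapping: Unitary Authority -> Government Office Region.
region_of = {"Cardiff": "South East Wales", "Newport": "South East Wales",
             "Swansea": "South West Wales"}

# Aggregate the measure with SUM for each (region, year) group.
regional = defaultdict(int)
for (authority, year), count in households.items():
    regional[(region_of[authority], year)] += count

print(regional[("South East Wales", 2011)])  # 204
```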
Similarly, for many uses, the population broken down by some category (e.g., ethnicity) is expressed as a percentage. Separate datasets give the actual counts per category and aggregate counts. In such cases it is common to talk about the denominator (often DENOM), which is the aggregate count against which the percentages can be interpreted.
(Use case has been provided by Epimorphics Ltd, in their UK Bathing Water Quality deployment)
As part of their work with data.gov.uk and the UK Location Programme, Epimorphics Ltd have been working to pilot the publication of both current and historic bathing water quality information from the UK Environment Agency as Linked Data.
The UK has a number of areas, typically beaches, that are designated as bathing waters where people routinely enter the water. The Environment Agency monitors and reports on the quality of the water at these bathing waters.
The Environment Agency's data can be thought of as structured in 3 groups:
The most important dimensions of the data are bathing water, sampling point, and compliance classification.
The Met Office, the UK's National Weather Service, provides a range of weather forecast products, including openly available site-specific forecasts for the UK. The site-specific forecasts cover over 5000 forecast points; each forecast predicts 10 parameters and spans a 5-day window at 3-hourly intervals, and the whole forecast is updated each hour. A proof-of-concept project investigated the challenge of publishing this information as Linked Data using the Data Cube Vocabulary.
This weather forecasts case study leads to the following challenges:
The World Meteorological Organization (WMO) develops and recommends data interchange standards, and within that community compatibility with ISO 19156 "Geographic information — Observations and measurements" (O&M) is regarded as important. Thus, this supports lesson Modelers using ISO19156 - Observations & Measurements may need clarification regarding the relationship to the Data Cube Vocabulary.
Solution in this case study: O&M provides a data model for an Observation with an associated Phenomenon, measurement ProcessUsed, Domain (feature of interest), and Result. Prototype vocabularies developed at CSIRO and extended within this project allow this data model to be represented in RDF. For the site-specific forecasts, a 5-day forecast for all 5000+ sites is regarded as a single O&M Observation.
To represent the forecast data itself (the Result in the O&M model), the relevant standard is ISO 19123 "Geographic information — Schema for coverage geometry and functions". This provides a data model for a Coverage, which can represent a set of values across some space. It defines different types of Coverage, including a DiscretePointCoverage suited to representing site-specific forecast results.
It turns out that it is straightforward to treat an RDF Data Cube as a particular concrete representation of the DiscretePointCoverage logical model. The cube has dimensions corresponding to the forecast time and location and the measure is a record representing the forecast values of the 10 phenomena. Slices by time and location provide subsets of the data that directly match the data packages supported by an existing on-line service.
Note that in this situation an observation in the sense of qb:Observation and an observation in the sense of ISO 19156 Observations and Measurements are different things. The O&M Observation is the whole forecast, whereas each qb:Observation corresponds to a single GeometryValuePair within the forecast results Coverage.
Each hourly update comprises over 2 million data points and forecast data is requested by a large number of data consumers. Bandwidth costs are thus a key consideration and the apparent verbosity of RDF in general, and Data Cube specifically, was a concern. This supports lesson Publishers and consumers may need more guidance in efficiently processing data using the Data Cube Vocabulary.
Solution in this case study: Regarding bandwidth costs, the key is not raw data volume but compressibility, since such data is transmitted in compressed form. A Turtle representation of a non-abbreviated data cube compressed to within 15–20% of the size of compressed, handcrafted XML and JSON representations, obviating the need for abbreviations or a custom serialization.
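The compressibility argument can be illustrated with a rough sketch. The Turtle text below is synthetic (invented prefixes and property names, not Met Office data); the point is only that the property URIs repeated in every observation compress away almost entirely.

```python
import gzip

# A non-abbreviated cube repeats the same property URIs in every
# observation; this repetition is exactly what DEFLATE-style compression
# removes. The observations below are synthetic.
obs_template = ('ex:obs{i} a qb:Observation ; qb:dataSet ex:forecast ; '
                'ex:refTime "2013-01-01T{h:02d}:00:00Z" ; '
                'ex:temperature "{t}" .\n')
turtle = "".join(obs_template.format(i=i, h=i % 24, t=10 + i % 5)
                 for i in range(1000))

raw = len(turtle.encode())
packed = len(gzip.compress(turtle.encode()))
print(f"compressed to {100 * packed // raw}% of raw size")
```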
(This use case has been taken from Eurostat Linked Data Wrapper and Linked Statistics Eurostat Data, both deployments for publishing Eurostat SDMX as Linked Data using the draft version of the Data Cube Vocabulary)
As mentioned already, the ISO standard for exchanging and sharing statistical data and metadata among organizations is Statistical Data and Metadata eXchange [SDMX]. Since this standard has proven applicable in many contexts, we adopt the multidimensional model that underlies SDMX and intend the standard vocabulary to be compatible with SDMX. Therefore, in this use case we explain the benefits and challenges of publishing SDMX data as Linked Data.
As one of the main adopters of SDMX, Eurostat publishes large amounts of European statistics from a data warehouse as SDMX and other formats on the Web. Eurostat also provides an interface to browse and explore the datasets. However, linking such multidimensional data to related datasets and concepts would require downloading the interesting datasets and integrating them manually. The goal here is to improve integration with other datasets: Eurostat data should be published on the Web in a machine-readable format, possibly linked with other datasets, and possibly freely consumed by applications. Both the Eurostat Linked Data Wrapper and Linked Statistics Eurostat Data intend to publish Eurostat SDMX data as 5-star Linked Open Data.

Eurostat data is published partly as SDMX and partly as tabular data (TSV, similar to CSV). Eurostat provides a table of contents of published datasets, a feed of modified and new datasets, and a list of the code lists used, i.e., the ranges of permitted dimension values. Any Eurostat dataset contains a varying set of dimensions (e.g., date, geo, obs_status, sex, unit) as well as measures (a generic value whose content is specified by the dataset, e.g., GDP per capita in PPS, total population, employment rate by sex).
(This use case has mainly been taken from [COGS])
In several applications, relationships between statistical data need to be represented.
The goal of this use case is to describe provenance, transformations, and versioning around statistical data, so that the history of statistics published on the Web becomes clear. This may also relate to the issue of having relationships between datasets published.
A concrete example is given by Freitas et al. [COGS], where transformations on financial datasets (e.g., the addition of derived measures, conversion of units, aggregations, OLAP operations, and enrichment) are executed before the data is shown in a Web-based report.
See SWPM 2012 Provenance Example for screenshots about this use case.
Making transparent the transformation a dataset has been exposed to increases trust in the data.
qb:DataSet (e.g., ex:populationCount and ex:populationPercent)?
(Use case taken from SMART natural sciences research project)
Data that is published on the Web is typically visualized by transforming it manually into CSV or Excel and then creating a visualization on top of these formats using Excel, Tableau, RapidMiner, Rattle, Weka etc.
This use case shall demonstrate how statistical data published on the Web can be visualized inside a webpage with little effort and without using commercial or highly-complex tools.
An example scenario is environmental research done within the SMART research project. Here, statistics about environmental aspects (e.g., measurements about the climate in the Lower Jordan Valley) shall be visualized for scientists and decision makers. It should also be possible to integrate and display statistics together. The data is available as XML files on the Web and is re-published as Linked Data using the Data Cube Vocabulary. On a separate website, specific parts of the data shall be queried and visualized in simple charts, e.g., line diagrams.
Easy, flexible and powerful visualizations of published statistical data.
(Use case taken from Google Public Data Explorer (GPDE))
Google Public Data Explorer (GPDE) provides an easy possibility to visualize and explore statistical data. Data needs to be in the Dataset Publishing Language (DSPL) to be uploaded to the data explorer. A DSPL dataset is a bundle that contains an XML file, the schema, and a set of CSV files, the actual data. Google provides a tutorial to create a DSPL dataset from your data, e.g., in CSV. This requires a good understanding of XML, as well as a good understanding of the data that shall be visualized and explored.
In this use case, the goal is to take statistical data published as Linked Data re-using the Data Cube Vocabulary and to transform it into DSPL for visualization and exploration using GPDE with as little effort as possible.
For instance, consider Eurostat data about the unemployment rate downloaded from the Web, as shown in the following figure:
There are different possible approaches, each having advantages and disadvantages: 1) a consumer downloads the data into a triple store, and SPARQL queries on this data transform it into DSPL, which is then uploaded and visualized using GPDE; or 2) one or more XSLT transformations on the RDF/XML translate the data into DSPL.
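Whichever approach is taken, the target of the transformation is the same: DSPL expects the actual data as CSV files, one column per dimension or metric. This can be sketched in plain Python (the observations and column names below are illustrative, not real Eurostat output):

```python
import csv
import io

# Illustrative observations after querying the published Linked Data;
# dimension and measure names are assumptions for this sketch.
observations = [
    {"geo": "DE", "time": "2011", "unemployment_rate": 5.9},
    {"geo": "FR", "time": "2011", "unemployment_rate": 9.2},
]

# Serialize the observations as a DSPL-style CSV table.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["geo", "time", "unemployment_rate"])
writer.writeheader()
writer.writerows(observations)
print(out.getvalue().splitlines()[0])  # geo,time,unemployment_rate
```

The accompanying DSPL XML schema, which declares the dimensions and metrics, would be generated from the cube's structure definition in a similar fashion.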
(Use case taken from Financial Information Observation System (FIOS))
Online Analytical Processing (OLAP) [OLAP] is an analysis method for multidimensional data. It is an explorative analysis method that allows users to interactively view the data from different angles (rotate, select) or at different granularities (drill-down, roll-up), and to filter it for specific information (slice, dice).
OLAP systems are commonly used in industry to analyze statistical data on a regular basis. They first use ETL pipelines to extract, transform, and load relevant data into a data warehouse and then provide interfaces to efficiently issue OLAP queries on the data.
The goal in this use case is to allow analysis of published statistical data with common OLAP systems [OLAP4LD].
For that a multidimensional model of the data needs to be generated. A multidimensional model consists of facts summarized in data cubes. Facts exhibit measures depending on members of dimensions. Members of dimensions can be further structured along hierarchies of levels.
An example scenario of this use case is the Financial Information Observation System (FIOS) [FIOS], where XBRL data provided by the SEC on the Web is re-published as Linked Data and made possible to explore and analyze by stakeholders in a Web-based OLAP client Saiku.
The following figure shows an example of using FIOS. Here, for three different companies, the Cost of Goods Sold as disclosed in XBRL documents is analyzed. As cell values, either the number of disclosures or, if only one is available, the actual number in USD is given:
(Use case motivated by Data Catalog vocabulary and RDF Data Cube Vocabulary datasets in the PlanetData Wiki)
After statistics have been published as Linked Data, the question remains how to communicate the publication and to let users discover the statistics. There are catalogs to register datasets, e.g., CKAN, datacite.org, da|ra, and Pangea. Those catalogs require specific configurations to register statistical data.
The goal of this use case is to demonstrate how to expose and distribute statistics after publication, for instance by allowing automatic registration of statistical data in such catalogs so that datasets can be found and evaluated. To solve this issue, it should be possible to transform the published statistical data into formats that can be used by data catalogs.
A concrete use case is the structured collection of RDF Data Cube Vocabulary datasets in the PlanetData Wiki. This list is supposed to describe statistical datasets on a higher level — for easy discovery and selection — and to provide a useful overview of RDF Data Cube deployments in the Linked Data cloud.
The use cases presented in the previous section give rise to the following lessons that can motivate future work on the vocabulary as well as associated tools or services complementing the vocabulary.
The draft version of the vocabulary builds upon SDMX Standards Version 2.0. A newer version of SDMX, SDMX Standards, Version 2.1, is available.
The requirement is to at least build upon Version 2.0; if specific use cases derived from Version 2.1 become available, the working group may consider building upon Version 2.1.
Background information:
Supporting use cases:
There should be a consensus on the issue of flattening or abbreviating data; one suggestion is to author data without the duplication, but have the data publication tools "flatten" the compact representation into standalone observations during the publication process.
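The suggested "flatten" step can be sketched as follows. The compact structure below is a simplification of a qb:Slice carrying shared dimension values, not the vocabulary's exact shape:

```python
# Compact (abbreviated) form: dimension values shared by all observations
# are stated once at the slice level. This structure is illustrative.
compact = {
    "slice_values": {"refArea": "uk", "sex": "T"},  # shared, stated once
    "observations": [
        {"refPeriod": "2011", "population": 60},
        {"refPeriod": "2012", "population": 61},
    ],
}

# "Flatten" during publication: copy the shared values into every
# observation so each one stands alone.
flat = [{**compact["slice_values"], **obs} for obs in compact["observations"]]
print(flat[0]["refArea"])  # uk
```

After flattening, every observation carries its full set of dimension values and can be consumed without reference to the slice it came from.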
Background information:
Regarding qb:subslice: the vocabulary should clarify or drop the use of qb:subslice; issue: http://www.w3.org/2011/gld/track/issues/34
Supporting use cases:
First, hierarchical code lists may be supported via SKOS [SKOS]; this allows for cross-location and cross-time analysis of statistical datasets.
Second, one can think of non-SKOS hierarchical code lists, e.g., if simple skos:narrower/skos:broader relationships are not sufficient or if a vocabulary uses specific hierarchical properties, e.g., geo:containedIn.
Also, the use of hierarchy levels needs to be clarified. It has been suggested to allow skos:Collections as values of qb:codeList.
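Working with such a hierarchical code list amounts to a transitive traversal of skos:narrower links, which can be sketched as follows (the geography below is illustrative, not an official code list):

```python
# Illustrative skos:narrower links: parent code -> child codes.
narrower = {
    "uk": ["england", "scotland", "wales", "northernireland"],
    "england": ["london", "north-west"],
}

def descendants(code):
    """All codes reachable from `code` via transitive skos:narrower links."""
    out = []
    for child in narrower.get(code, []):
        out.append(child)
        out.extend(descendants(child))
    return out

print(len(descendants("uk")))  # 6
```

A consumer can use such a traversal, for instance, to select all observations whose area code falls anywhere under a chosen region.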
Richard Cyganiak gave a summary of different options for specifying the allowed dimension values of a coded property, possibly including hierarchies (see mail):
Background information:
Supporting use cases:
A number of organizations, particularly in the Climate and Meteorological area, already have some commitment to the OGC "Observations and Measurements" (O&M) logical data model, also published as ISO 19156. Are there any statements about compatibility and interoperability between O&M and Data Cube that can be made to give guidance to such organizations?
Partly solved by description for Publisher Case study: Site specific weather forecasts from Met Office, the UK's National Weather Service.
Background information:
Supporting use cases:
Background information:
Supporting use cases:
Background information:
Supporting use cases:
Background information:
Supporting use cases:
Background information:
Supporting use cases:
Background information:
Supporting use cases:
Background information:
Supporting use cases:
Clarify the relationship between DCAT and QB.
Background information:
Supporting use cases:
We thank Phil Archer, John Erickson, Rinke Hoekstra, Bernadette Hyland, Aftab Iqbal, James McKinney, Dave Reynolds, Biplav Srivastava, Boris Villazón-Terrazas for feedback and input.
We thank Hadley Beeman, Sandro Hawke, Bernadette Hyland, George Thomas for their help with publishing this document.