To foster the development of the Linked Data Platform specification, this document includes a set of user stories, use cases, scenarios and requirements that motivate a simple read-write Linked Data architecture, based on HTTP access to web resources that describe their state using RDF. The starting point for the development of these use cases is a collection of user stories that provide realistic examples describing how people may use read-write Linked Data. The use cases themselves are captured in a narrative style that describes a behavior, or set of behaviors, based on and using scenarios from these user stories. The aim throughout has been to avoid details of protocol (specifically the HTTP protocol) and the use of any specific vocabulary that might be introduced by the LDP specification.

Scope and Motivation

Linked Data was defined by Tim Berners-Lee with the following guidelines [[LINKED-DATA]]:

  1. Use URIs as names for things
  2. Use HTTP URIs so that people can look up those names
  3. When someone looks up a URI, provide useful information, using the standards (RDF*, SPARQL)
  4. Include links to other URIs, so that they can discover more things

These four rules have proven very effective in guiding and inspiring people to publish Linked Data on the web. The amount of data, especially public data, available on the web has grown rapidly, and an impressive number of extremely creative and useful “mashups” have been created using this data as a result.

The goal for the [[LINKED-DATA-PLATFORM]] is to define the specification needed to allow the definition of a writable Linked Data API, equivalent to the simple application APIs that are often written on the web today using the Atom Publishing Protocol (APP). Such an API shares some characteristics with APP, such as the use of HTTP and URLs, but relies on a flexible data model based on RDF that allows for multiple representations.

Organization of this Document

This document is organized as follows: a set of user stories that motivate a read-write Linked Data platform, the use cases derived from those stories together with illustrative scenarios, and the functional and non-functional requirements that arise from the use cases.

User Stories

Maintaining Social Contact Information

Many of us have multiple email accounts that include information about the people and organizations we interact with – names, email addresses, telephone numbers, instant messenger identities and so on. When someone’s email address or telephone number changes (or they acquire a new one), our lives would be much simpler if we could update that information in one spot and all copies of it would automatically be updated. In other words, those copies would all be linked to some definition of “the contact.” There might also be good reasons (like off-line email addressing) to maintain a local copy of the contact, but ideally any copies would still be linked to some central “master.”

Agreeing on a format for “the contact” is not enough, however. Even if all our email providers agreed on the format of a contact, we would still need to use each provider’s custom interface to update or replace the provider’s copy, or we would have to agree on a way for each email provider to link to the “master”. If we look outside our own personal interests, it would be even more useful if the person or organization exposed their own contact information so we could link to it.

What would work in either case is a common understanding of the resource, a small number of agreed formats, and access guidance for these resources. This would cover how to acquire a link to a contact; how to use those links to interact with a contact (including reading, updating, and deleting it); how to easily create a new contact and add it to my contacts; and, when deleting a contact, how it would be removed from my list of contacts. It would also be good to be able to add some application-specific data about my contacts that the original design didn’t consider. Ideally we’d like to eliminate multiple copies of contacts, but additional valuable information about my contacts may be stored on separate servers, so we need a simple way to link this information back to the contacts. Regardless of whether a contact collection is my own, shared by an organization, or all contacts known to an email provider (or to a single email account at that provider), it would be nice if they all worked pretty much the same way.

Keeping Track of Personal and Business Relationships

In our daily lives, we deal with many different organizations in many different relationships, and they each have data about us. However, it is unlikely that any one organization has all the information about us. Each of them typically gives us access to the information (at least some of it), many through websites where we are uniquely identified by some string – an account number, user ID, and so on. We have to use their applications to interact with the data about us, and we have to use their identifier(s) for us.

Would it not be simpler if at least the Web-addressable portion of that data could be linked to consistently, so that instead of maintaining various identifiers in different formats and instead of having to manually supply those identifiers to each one’s corresponding custom application, we could essentially build a set of bookmarks to it all? When we want to examine or change their contents, would it not be simpler if there were a single consistent application interface that they all supported?

The information held by any single organization might be a mix of simple data and collections of other data, for example, a bank account balance and a collection of historical transactions. Our bank might easily have a collection of accounts for each member of its collection of customers.
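As a purely illustrative sketch (the vocabulary below is hypothetical rather than drawn from any existing banking ontology), such a mix of simple data and collections might be represented as:

@prefix : <http://example.org/bank#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# A customer with a collection of accounts
<customer/42> a :Customer ;
	:account <account/1001> .

# An account mixing simple data (the balance) with a collection of transactions
<account/1001> a :Account ;
	:balance "2500.00"^^xsd:decimal ;
	:transaction <account/1001/tx/1>, <account/1001/tx/2> .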

System and Software Development Tool Integration

System and software development tools typically come from a diverse set of vendors and are built on various architectures and technologies. These tools are purpose-built to meet the needs of a specific domain scenario (modeling, design, requirements and so on). Tool vendors often view integrations with other tools as a necessary evil rather than as additional value for their end users. Even more of an afterthought is how these tools’ data – such as people, projects, customer-reported problems and needs – integrates and relates to corporate and external applications that manage data such as customers, business priorities and market trends. The problem can be reduced by standardizing on a small set of tools, or on tools from a single vendor, but this rarely occurs, and when it does it usually does so only within small organizations. As these organizations grow in size and complexity, they need to work with outsourced development and with diverse internal organizations that have their own sets of tools and processes. There is a need for better support of more complete business processes (system and software development processes) that span the roles, tasks, and data addressed by multiple tools. This demand has existed for many years, and the tool vendor industry has tried several different architectural approaches to address the problem. Here are a few:

It is fair to say that although each of those approaches has its adherents and can point to some successes, none of them is wholly satisfactory. The use of Linked Data as an application integration technology has a strong appeal [[OSLC]].

Library Linked Data

The W3C Library Linked Data Working Group has a number of use cases cited in their Use Case Report [[LLD-UC]]. These use cases focus on the need to extract and correlate library data from disparate sources. Variants of these use cases that provide consistent formats, as well as ways to improve or update the data, would enable simplified methods for both efficiently sharing this data and producing incremental updates, without the need for repeated full extraction and import of data.

The 'Digital Objects Cluster' contains a number of relevant use cases:

The 'Collections' cluster also contains a number of relevant use cases:

Municipality Operational Monitoring

Across cities, towns, counties, and other municipalities there is a growing number of services managed and run by the municipality that produce and consume a vast amount of information. This information is used to help monitor services, predict problems, and handle logistics. To collect, produce, and analyze all this data effectively and efficiently, a fundamental set of loosely coupled standard data sources is needed. A simple, low-cost way to expose data from the diverse set of monitored services is needed, one that can easily integrate with the municipality's other systems that inspect and analyze the data. All these services have links and dependencies on other data and services, so having a simple and scalable linking model is key.

Healthcare

Analyzing, diagnosing, and proposing treatment for patients requires physicians to draw on a vast amount of complex, changing and growing knowledge. This knowledge needs to come from a number of sources, including physicians’ own subject knowledge, consultation with their network of other healthcare professionals, public health sources, food and drug regulators, and other repositories of medical research and recommendations.

Diagnosing a patient’s condition requires current data on the patient’s medications and medical history. In addition, recent pharmaceutical advisories about these medications are linked into the patient’s data. If the patient experiences adverse effects from medications, the physician needs to publish information about this to an appropriate regulatory source. Other medical professionals require access to both validated and emerging effects of the medication. Similarly, if there are geographical patterns around outbreaks that raise awareness of new symptoms and treatments, this information needs to quickly reach a very distributed and diverse set of medical information systems. Also, reporting back to these regulatory agencies regarding new occurrences of an outbreak, including additional details of symptoms and causes, is critical in producing the most effective treatment for future incidents.

Metadata Enrichment in Broadcasting

There are many different use cases in which broadcasters show interest in metadata enrichment:

This supports more effective information management and data/content mining (if you can't find your content, it is as though you don't have it and must either recreate it or acquire it again, which is not financially effective).

However, there is a need for solutions facilitating linkage to other data sources and taking care of issues such as discovery, automation, disambiguation, etc. Other important issues that broadcasters would face are the editorial quality of the linked data, its persistence, and usage rights.

Aggregation and Mashups of Infrastructure Data

For infrastructure management (such as storage systems, virtual machine environments, and similar IaaS and PaaS concepts), it is important to provide an environment in which information from different sources can be aggregated, filtered, and visualized effectively. Specifically, the following use cases need to be taken into account:

In this scenario, the important factors are to have abstractions that allow easy aggregation and filtering, are independent of the internal data model of the sources being combined, and can be used for pull-based as well as push-based interactions.

Sharing Payload of RDF Data Among Low-End Devices

Several projects around the idea of downscaling the Semantic Web need to be able to ship payloads of RDF data across the member nodes of a given network. The transfers are done in a context constrained in terms of bandwidth, the scope of the local semantics employed by the nodes, and the computing capabilities of the nodes. In a P2P style, every node has the capability to act either as a data consumer or a data provider, serving its own data or acting as a relay to pass others' data along (typically in mesh networks).

The transfer of an arbitrary payload of RDF data could be implemented through a container mechanism, adding sets of RDF triples to the container and removing them from it. Currently, the SemanticXO [[XO]] project uses named graphs and the graph store protocol to create/delete/copy graphs across the nodes, but this (almost) imposes the usage of a triple store. Unfortunately, triple stores are rather demanding pieces of software that are not always usable on limited hardware. Some generic REST-like interaction backed by a lightweight column store would be a better approach.

Sharing Binary Resources and Metadata

When publishing datasets about stars one may want to publish links to the pictures in which those stars appear, and this may well require publishing the pictures themselves. Vice versa: when publishing a picture of space we need to know which telescope took the picture, which part of the sky it was pointing at, what filters were used, which identified stars are visible, who can read it, and who can write to it.

If Linked Data contains information about resources that are most naturally expressed in non-RDF formats (be they binary, such as pictures or videos, or human-readable documents in XML formats), those non-RDF formats should be just as easy to publish to the Linked Data server as the RDF relations that link those resources up. A Linked Data server should therefore allow publishing of non-RDF resources too, and make it easy to publish and edit metadata about those resources.

The resource comes in two parts: the image and information about the image (which may be embedded in the image file but is better kept external to it, as it is more general). The information about the image is vital. The resource is a compound of image data and other data (application metadata about the image) that are not distinguished from the platform's point of view.

Data Catalogs

The Asset Description Metadata Schema [[ADMS]] provides the data model to describe semantic asset repository contents, but this leaves many open challenges when building a federation of these repositories to serve the needs of asset reuse. These include accessing and querying individual repositories and efficiently retrieving updated content without having to retrieve the whole content. The Data Warehousing integration approach allows us to cope with the heterogeneity of source technologies and to benefit from the optimized performance it offers, given that individual repositories do not usually change frequently. With Data Warehousing, the federation requires one to:

Repository owners can maintain de-referenceable URIs for their repository descriptions and contained assets in a Linked Data compatible manner. ADMS provides the necessary data model to enable meaningful exchange of data. However, the challenge of efficient access to the data is not fully addressed.

Constrained Devices and Networks

Information coming from resource-constrained devices in the Web of Things [[WOT]] has been identified as a major driver in many domains, from smart cities to environmental monitoring to real-time tracking. The amount of information produced by these devices is growing exponentially and needs to be accessed and integrated in a systematic, standardized and cost-efficient way. By using the same standards as on the Web, integration with applications will be simplified and higher-level interactions among resource-constrained devices, abstracting away heterogeneities, will become possible. Upcoming IoT/WoT standards such as '6LowPAN' [[6LOWPAN]] – IPv6 for resource-constrained devices – and the Constrained Application Protocol [[COAP]], which provides a downscaled version of HTTP on top of UDP for use on constrained devices, are already at a mature stage. The next step is to support RESTful interfaces on resource-constrained devices as well, adhering to the Linked Data principles. Due to the limited resources available both on the device and in the network (such as bandwidth, energy, and memory), a solution based on SPARQL Update [[RDF-SPARQL-UPDATE]] is currently not considered useful or feasible. An approach based on the HTTP-CoAP Mapping [[COAP-MAP]] would enable constrained devices to directly participate in a Linked Data-based environment.

Services Supporting the Process of Science

Many fields of science now include branches with in silico data-intensive methods, e.g. bioinformatics and astronomy. To support these new methods we look to move beyond the established platforms provided by scientific workflow systems to capture, assist, and preserve the complete lifecycle, from the record of the experiment, through local trusted sharing, analysis, and dissemination (including publishing of experimental data "beyond the PDF"), to re-use.

Project Membership Information

Information about people and projects changes as roles change, as organisations change and as contact details change. Finding the current state of a project is important in enabling people to contact the right person in the right role. It can also be useful to look back and see who was performing what role in the past.

A use of a Linked Data Platform could be to give responsibility for managing such information to the project team itself, instead of requiring updates to be requested from a centralised website administrator.

This could be achieved with:

To retain the history of the project, old versions of a resource, including container resources, should be retained, so there is a need to address both specific versions and a notion of the "current" version.

Access to information has two aspects:

Cloud Infrastructure Management

Cloud operators offer API support to provide customers with remote access for the management of Cloud infrastructure (IaaS). Infrastructure consists of Systems, Computers, Networks, Discs, etc. The overall structure can be seen as mostly hierarchical (a Cloud contains Systems, Systems contain Machines, etc.), complemented with crossing links (e.g. multiple Machines connected to a Network).

The IaaS scenario places specific requirements on lifecycle management and discovery, the handling of non-instant changes, and history capture and query:

Infrastructure management may be viewed as the manipulation of the underlying graph of resources.
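A minimal sketch of such a resource graph (the class and property names below are illustrative only, not drawn from any standard vocabulary) shows the hierarchy of containment plus a crossing link from machines to a shared network:

@prefix : <http://example.org/infra#> .

<cloud1> a :Cloud ;
	:system <system1> .

<system1> a :System ;
	:machine <machine1>, <machine2> .

# Crossing link: both machines are connected to the same network
<machine1> a :Machine ;
	:connectedTo <network1> .

<machine2> a :Machine ;
	:connectedTo <network1> .

<network1> a :Network .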

Use Cases

The following use cases are each derived from one or more of the user stories above. These use cases are explored in detail through the development of scenarios, each motivated by some key aspect exemplified by a single user story. The examples they contain are included purely for illustrative purposes, and should not be interpreted normatively.

UC1: Compose resources

A number of user stories introduce the idea of a container as a mechanism for composing resources within the context of an application. A composition would be identified by a URI, being a linked resource in its own right. Its properties may represent the affordances of the application, enabling clients to determine what other operations they can perform. These operations may include descriptions of application-specific services that can be invoked by exchanging RDF documents.

Primary scenario: create a container

Create a new container resource within the LDP server. In Services Supporting the Process of Science, Research Objects are semantically rich aggregations of resources that bring together data, methods and people in scientific investigations. A basic workflow research object will be created to aggregate scientific workflows and their artefacts [[RESEARCH-OBJECTS]]. These artefacts will be added to the research object throughout the lifecycle of the project.

The RDF description below captures the initial state of the research object. For the purposes of the example, we have included the time of creation. It is a Linked Data resource addressed via a URL from which the following RDF can be retrieved. The null-relative URL <> should be understood as a self-reference to the research object itself.


@prefix ro:  <http://purl.org/wf4ever/ro#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix ore: <http://www.openarchives.org/ore/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<> a ro:ResearchObject, ore:Aggregation ;
    dct:created "2012-12-01T00:00:00Z"^^xsd:dateTime .
(see functional requirement F1.1)

Alternative scenario: create a nested container

The motivation for nested containers comes from the System and Software Development Tool Integration user story. The OSLC Change Management vocabulary allows bug reports to have attachments referenced by the membership predicate oslc_cm:attachment. This may be viewed as nested containment. The top-level-container contains issues, and each issue is itself a container of attachments. In the example, issue1234 is a member of the container top-level-container. In turn, attachment324 and attachment251 are attachments within issue1234. Treating these as containers makes it easier to manage them as self-contained units.


@prefix dcterms: <http://purl.org/dc/terms/>.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
@prefix oslc_cm: <http://open-services.net/ns/cm#>.
@prefix : <http://example.org/>.

:top-level-container rdfs:member :issue1234 .

:issue1234 a oslc_cm:ChangeRequest;
      dcterms:identifier "1234";
      dcterms:type "a bug";
      oslc_cm:attachments :attachments.

:attachments a oslc_cm:AttachmentList;
      oslc_cm:attachment :attachment324, :attachment251.
(see functional requirement F1.2)

Alternative scenario: delete a container

If a container can be deleted, it seems natural that any contained resources and nested containers should also be deleted.
(see functional requirement F1.3).

UC2: Manage resource lifecycle

This use case addresses the managed lifecycle of a resource and is concerned with resource ownership. The responsibility for managing resources belongs to their container. For example, a container may accept a request from a client to make a new resource. This use case focuses on creation and deletion of resources in the context of a container, and the potential for transfer of ownership by moving resources between containers. The ownership of a resource should always be clear; no resource managed in this way should ever be owned by more than one container.

Primary scenario: create resource

Resources begin life by being created within a container. From the user story Maintaining Social Contact Information, it should be possible to "easily create a new contact and add it to my contacts." This suggests that resource creation is closely linked to the application context. The new resource is created in a container representing "my contacts." The lifecycle of the resource is linked to the lifecycle of its container. So, for example, if "my contacts" is deleted then a user would also reasonably expect that all contacts within it would be deleted.

Contact details are captured as an RDF description whose properties include "names, email addresses, telephone numbers, instant messenger identities and so on." The description may include non-standard RDF: "data about my contacts that the original design didn’t consider." The following RDF could be used to describe contact information using the FOAF vocabulary [[FOAF]]. A contact is represented here by a foaf:PersonalProfileDocument, defining a resource that can be created and updated as a single unit, even though it may describe ancillary resources, such as a foaf:Person, below.


@prefix foaf:  <http://xmlns.com/foaf/0.1/> .

<> a foaf:PersonalProfileDocument;
	foaf:primaryTopic [ 
		a foaf:Person;
		foaf:name "Timothy Berners-Lee";
		foaf:title "Sir";
		foaf:firstName "Timothy";
		foaf:surname "Berners-Lee";
		foaf:nick "TimBL", "timbl";
		foaf:homepage <http://www.w3.org/People/Berners-Lee/>;
		foaf:weblog <http://dig.csail.mit.edu/breadcrumbs/blog/4>;
		foaf:mbox <mailto:timbl@w3.org>;
		foaf:workplaceHomepage <http://www.w3.org/>
	] .
(see functional requirement F2.1)

Alternative scenario: delete resource

Delete a resource and all its properties. If the resource resides within a container it will be removed from that container; however, other links to the deleted resource may be left as dangling references. In the case where the resource is a container, the server may also delete any or all contained resources. In normal practice, a deleted resource cannot be reinstated. There are, however, edge cases where a limited undelete may be desirable. Best practice states that "Cool URIs don't change" [[COOLURIS]], which implies that deleted URIs shouldn't be recycled.

(see functional requirement F2.2)

Alternative scenario: moving contained resources

Resources may have value beyond the life of their membership in a container. This implies methods to revise container membership, for example by adding and removing membership references. A change of ownership may or may not imply a change of URI, depending upon the naming policy. While assigning a new URI to a resource is discouraged [[WEBARCH]], it is possible to indicate that a resource has moved with an appropriate HTTP response.
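As an illustrative sketch only (the container URIs below are hypothetical, and rdfs:member stands in for whatever membership predicate an application uses), moving a contact from a personal container to a team container could be reflected by revising the membership triples while the contact's own URI is left unchanged:

@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix : <http://example.org/> .

# Membership before the move: the contact is owned by the personal container
:personal-contacts rdfs:member :contact42 .

# Membership after the move: the triple above is removed and replaced by
:team-contacts rdfs:member :contact42 .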

(see functional requirement F2.3)

UC3: Retrieve resource description

Access the current description of a resource, containing properties of that resource and links to related resources. The representation may include descriptions of related resources that cannot be accessed directly. Depending upon the application, a server may enrich the retrieved RDF with additional triples. Examples include adding incoming links, owl:sameAs closure and rdf:type closure. The HTTP response should also include versioning information (e.g. last update time or entity tag) so that subsequent updates can ensure they are being applied to the correct version.
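As a sketch of such enrichment (the resources and enriching triples below are invented purely for illustration), a request for a person resource might return the triples held about that resource together with an incoming link and an owl:sameAs assertion added by the server:

@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix :     <http://example.com/> .

# Triples held directly about the requested resource
:alice a foaf:Person ;
	foaf:name "Alice" .

# Enrichment added by the server: an incoming link and owl:sameAs closure
:project1 foaf:maker :alice .
:alice owl:sameAs <http://other.example.org/people/alice> .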

Primary scenario: retrieve resource description

The user story Project Membership Information discusses the representation of information about people and projects. It calls for "Resource descriptions for each person and project" allowing project teams to review information held about these resources. The example below illustrates the kinds of information that might be held about organizational structures based on the Epimorphics organizational ontology [[ORG-ONT]].

The two examples below define two resources that would be hosted on an LDP server based at <http://example.com/>. The first representation describes <http://example.com/member1>, while the second describes the role resource <http://example.com/director>. A client reading the first would have to separately retrieve the second in order to get role information such as its descriptive label.

Note that the representations of these resources may include descriptions of related resources, such as <http://www.w3.org/>, that fall under a completely different authority and therefore can't be served directly from the LDP server at this location.


@prefix org: <http://www.w3.org/ns/org#> .
@prefix owltime: <http://www.w3.org/2006/time#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@base <http://example.com/> .
     
<member1> a org:Membership ;
	org:member <http://www.w3.org/People/Berners-Lee/card#i> ;
	org:organization <http://www.w3.org/> ;
	org:role <director> ;
	org:memberDuring [a owltime:Interval; owltime:hasBeginning [
		owltime:inXSDDateTime "1994-10-01T00:00:00Z"^^xsd:dateTime]] .

<http://www.w3.org/> a org:FormalOrganization ;
	skos:prefLabel "The World Wide Web Consortium"@en ;
	skos:altLabel "W3C" .

@prefix org: <http://www.w3.org/ns/org#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@base <http://example.com/> .

<director> a org:Role ;
	rdfs:label "Director" .
 
(see functional requirement F3.1)

Alternative scenario: retrieve description of a non-document resource (hash URI)

In many cases, the things that are of interest are not the things that are resolvable. The example below demonstrates how a FOAF profile may be used to distinguish between the person and the profile, the former being the topic of the latter. Where the fragment is defined relative to the base, as in this example, the URL including the fragment may be used to access the containing document. The HTTP protocol requires that the fragment part be stripped off before requesting the URI from the server. The client can then read properties of the hash URI <#i> from the retrieved description.


@base <http://www.w3.org/People/Berners-Lee/card> .
@prefix foaf: <http://xmlns.com/foaf/0.1/>.
@prefix dc: <http://purl.org/dc/elements/1.1/>.

<> a foaf:PersonalProfileDocument ;
	dc:title "Tim Berners-Lee's FOAF file" ;
	foaf:homepage <http://www.w3.org/People/Berners-Lee/> ;
	foaf:primaryTopic <#i> .
(see functional requirement F3.2)

UC4: Update existing resource

Change the RDF description of an LDP resource, potentially removing or overwriting existing data. This allows applications to enrich the representation of a resource by adding additional links to other resources.

Primary scenario: enrichment

This relates to the user story Metadata Enrichment in Broadcasting and is based on the BBC Sports Ontology [[BBC-SPORT]]. The resource-centric view of Linked Data provides a natural granularity for substituting, or overwriting, a resource and its data. The simplest kind of update would simply replace what is currently known about a resource with a new representation.

There are two distinct resources in the example below: a sporting event and an associated award. The granularity of the resource would allow a user to replace the information about the award without disturbing the information about the event.


@prefix : <http://example.com/>.
@prefix sport: <http://www.bbc.co.uk/ontologies/sport/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
 
 :mens_sprint a sport:MultiStageCompetition;
    rdfs:label "Men's Sprint";
    sport:award <#gold_medal> .

<#gold_medal> a sport:Award .

The description can be enriched as events unfold, adding a link to the winner of the gold medal by substituting the above description with the following.


@prefix : <http://example.com/>.
@prefix sport: <http://www.bbc.co.uk/ontologies/sport/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
 
 :mens_sprint a sport:MultiStageCompetition;
    rdfs:label "Men's Sprint";
    sport:award <#gold_medal> .
<#gold_medal> a sport:Award; 
    sport:awarded_to [
        a foaf:Agent ;
        foaf:name "Chris Hoy" .
    ] .
(see functional requirement F4.1)

Alternative scenario: selective update of a resource

This relates to the user story Data Catalogs. A catalogue is described by the following RDF model, based on the Data Catalog Vocabulary [[vocab-dcat]], which provides a standard format for representing the metadata held by organizations.


@prefix : <http://example.com/>.
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
   
 :catalog a dcat:Catalog ;
    dcat:dataset <dataset/001>;
    dcterms:issued "2012-12-11"^^xsd:date.

A catalogue may contain multiple datasets, so when linking to new datasets it would be simpler and preferable to selectively add just the new dataset links. For this example, a Changeset [[CHANGESET]] might be used to selectively add a new dcat:dataset link. The following update would be directed to the catalogue to add an additional dataset.


@prefix : <http://example.com/>.
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix cs: <http://purl.org/vocab/changeset/schema#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>.

<change1>
  a cs:ChangeSet ;
  cs:subjectOfChange :catalog ;
  cs:createdDate "2012-01-01T00:00:00Z" ;
  cs:changeReason "Update catalog datasets" ;
  cs:addition [
    a rdf:Statement ;
    rdf:subject :catalog ;
    rdf:predicate dcat:dataset ;
    rdf:object <dataset/002>
  ] .
(see functional requirement F4.2)

UC5: Determine if a resource has changed

It should be possible to retrieve versioning information about a resource (e.g. last modified or entity tag) without having to download a representation of the resource. This information can then be compared with previous information held about that resource to determine if it has changed. This versioning information can also be used in subsequent conditional requests to ensure they are only applied if the version is unchanged.

Primary scenario: determine if a resource has changed

Based on the user story Constrained Devices and Networks, an LDP server could be configured to act as a proxy for a CoAP-based [[COAP]] Web of Things [[WOT]]. As an observer of CoAP resources, the LDP server registers its interest so that it will be notified whenever the sensor reading changes. LDP clients can then interrogate the server to determine if the state has changed.

In this example, the information about a sensor and corresponding sensor readings can be represented as RDF resources. The first resource below represents a sensor described using the Semantic Sensor Network [[SSN]] ontology.


@prefix : <http://example.com/energy-management/>.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ssn: <http://purl.oclc.org/NET/ssnx/ssn#> .

<> a :MainsFrequencySensor;
  rdfs:comment "Sense grid load based on mains frequency";
  ssn:hasMeasurementCapability [
	a :FrequencyMeasurementCapability;
	ssn:hasMeasurementProperty <#property_1>
  ] .

The value of the sensor changes in real time as measurements are taken. The LDP client can interrogate the resource below to determine if it has changed, without necessarily having to download the RDF representation. As different sensor properties are represented disjointly (as separate RDF representations), they may change independently.


@prefix : <http://example.com/energy-management/> .
@prefix ssn: <http://purl.oclc.org/NET/ssnx/ssn#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .


<http://example.com/energy-management#property_1> :hasMeasurementPropertyValue <> .
<> a :FrequencyValue;
	:hasQuantityValue "50"^^xsd:float.
(see functional requirement F5.1)

UC6: Aggregate resources

There is a requirement to be able to manage collections of resources. The concept of a collection overlaps with, but is distinct from, that of a container. These collections are (weak) aggregations, unrelated to the lifecycle management of resources, and distinct from the ownership relation between a resource and its container. However, the composition of a container may be reflected as a collection to support navigation of the container and its contents. There is a need to be able to create collections by adding and deleting individual membership properties. Resources may belong to multiple collections, or to none.

Primary scenario: add a resource to a collection

This example is from the Library Linked Data user story and the LLD Use Case Report [[LLD-UC]], specifically the Subject Search use case.

There is an existing collection at <http://example.com/concept-scheme/subject-heading> that defines a collection of subject headings. This collection is defined as a skos:ConceptScheme, and the client wishes to insert a new concept into the scheme, which will be related to the collection via a skos:inScheme link. In the example below, a new subject heading, "outer space exploration", is added to the scheme:subject-heading collection. The following RDF describes the (item-level) description of the collection, also demonstrating that the relationship between the parent and child resources may run in a seemingly counter-intuitive direction, from child to parent.


@prefix scheme: <http://example.com/concept-scheme/>.
@prefix concept: <http://example.com/concept/>.
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

scheme:subject-heading a skos:ConceptScheme.

concept:OuterSpaceExploration skos:inScheme scheme:subject-heading.
(see functional requirement F6.1)

Alternative scenario: add a resource to multiple collections

Logically, a resource should not be owned by more than one container. However, it may be a member of multiple collections which define a weaker form of aggregation. As this is simply a manipulation of the RDF description of a collection, it should be possible to add the same resource to multiple collections.

As a machine-readable collection of medical terms, the SNOMED CT ontology [[SNOMED]] is of key importance in the user story Healthcare. SNOMED CT allows concepts with more than one parent. In the example below, SNOMED concepts are treated as collections (aggregations) of narrower concepts. We see that the concept :TissueSpecimenFromHeart belongs to two parent collections, as it is both a :TissueSpecimen and a :SpecimenFromHeart. This example also demonstrates how composition and aggregation support different scenarios, as having multiple parents should not be possible with composition.


@prefix : <http://example.com/snomed/>.
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:TissueSpecimen a skos:Concept ;
	:conceptID "119376003";
	skos:prefLabel "Tissue specimen"
	skos:narrowerTransitive :TissueSpecimenFromHeart.
   
:SpecimenFromHeart a skos:Concept ;
	:conceptID "127462005";
	skos:prefLabel "Specimen from heart"
	skos:narrowerTransitive :TissueSpecimenFromHeart.

:TissueSpecimenFromHeart a skos:Concept;
	:conceptID "128166000";
	rdfs:label "Tissue specimen from heart".
(see functional requirement F6.2)

UC7: Filter resource description

This use case extends the normal behaviour of retrieving an RDF description of a resource by dynamically excluding specific (membership) properties. For containers, it is often desirable to be able to read a collection-level description that excludes the container membership, or an item-level description of the members.

Primary scenario: retrieve collection-level description

This scenario, based on Library Linked Data, uses the Dublin Core Metadata Initiative Collection-Level description [[DC-COLLECTIONS]]. A collection can refer to any aggregation of physical or digital items. This scenario covers the case whereby a client can request a collection-level description as typified by the example below, without necessarily having to download a full listing of the items within the collection.


@prefix dc: <http://purl.org/dc/elements/1.1/>.
@prefix : <http://example.org/bookshelf/>.
@prefix dcmitype: <http://purl.org/dc/dcmitype/>.
@prefix cld: <http://purl.org/cld/terms/>.
@prefix dcterms: <http://purl.org/dc/terms/>.
 
<> dc:type dcmitype:Collection ;
	dc:title "Directory of organizations working with Linked Data" ;
	dcterms:abstract "This is a directory of organisations specializing in Linked Data." ;
	cld:isLocatedAt <http://dir.w3.org> ;
	cld:isAccessedVia <http://dir.w3.org/directory/pages/landing-page.xhtml?view> .
(see functional requirement F7.1)

Alternative scenario: retrieve item-level description of a collection

This use case scenario, also based on Library Linked Data, focuses on obtaining an item-level description of the resources aggregated by a collection. The simplest scenario is where the members of a collection are returned within a single representation, so that a client can explore the data by following these links. Different applications may use different membership predicates to capture this aggregation. The example below uses rdfs:member, but many different membership predicates are in common use, including RDF Lists. Item-level descriptions can be captured using the Functional Requirements for Bibliographic Records (FRBR) ontology [[FRBR]] [[FRBR-CORE]].

Based on the example below, the item-level description should include as a minimum all the rdfs:member relationships. It need not include other properties of the collection, and it need not include additional properties of the members.


@prefix frbr: <http://purl.org/vocab/frbr/core#>.
@prefix dc: <http://purl.org/dc/elements/1.1/>.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

<> rdfs:member <#ebooks97>, <#ebooks21279>.

<#work97> a frbr:LiteraryWork;
    dc:title "Flatland: a romance of many dimensions" ;
	frbr:creator <#Abbott_Edwin>;
	frbr:manifestation <#ebooks97>.
 
<#work21279> a frbr:LiteraryWork;
	dc:title "2 B R 0 2 B" ;
	frbr:creator <#Vonnegut_Kurt>;
	frbr:manifestation <#ebooks21279>.
(see functional requirement F7.2)

UC8: Retrieve a large resource description in multiple parts

This use case addresses a problem with the “resource-centric” approach to interacting with RDF data. The problem is that some resources participate in a very large number of triples, and therefore a “resource-centric” granularity leads to resource descriptions that are too large to be practically processed in a single HTTP request. This use case applies to all resources, not just containers.

Primary scenario: Pagination

In user story, Maintaining Social Contact Information, it is not uncommon for users to have a very large number of contacts. This leads to a very large resource description, especially if some basic information about the contacts is included as well. The size of this representation may be so large that retrieval in a single HTTP request is impractical.

In this example the response to the first request includes a reference to the next resource in an ordered collection of resources. For the purposes of the example, we make use of the next property defined by the XHTML Metainformation Vocabulary. There is no presumption that the LDP specification will recommend the use of this vocabulary.


@prefix : <http://example.com/people/>.
@prefix xhv: <http://www.w3.org/1999/xhtml/vocab#>.
@prefix foaf: <http://xmlns.com/foaf/0.1/>.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.

:alice a foaf:Person;
   rdfs:label "Alice";
   foaf:mbox <mailto:alice@example.com>.
   
<> xhv:next <http://example.com/1234567890>.
		

When the client requests the resource identified by next, the response includes additional content that can be merged with the earlier data to construct a more complete model of the originally requested resource. It may also contain further next links, which may be requested in turn.

The following representation is the response to the resource identified by next, completing the contacts list.


@prefix : <http://example.com/people/>.
@prefix foaf: <http://xmlns.com/foaf/0.1/>.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.

:bob a foaf:Person;
   rdfs:label "Bob";
   foaf:mbox <mailto:bob@example.com>.
		
(see functional requirement F8.1)

UC9: Manage binary resources

It should be possible to easily add non-RDF binary resources to containers that accept them. Binary resources may be updated and removed during the lifecycle of the container.

Primary scenario: access binary resources

From the user story Sharing Binary Resources and Metadata it should be possible to easily add non-RDF resources to containers that accept them. Clients submit a non-RDF representation to a container in a media type accepted by that container. The container creates a URI to represent this media resource, and creates a link from the container to the new URI. The binary resource may be accompanied by an explicit RDF description. It should be possible to find the metadata about such a resource and to access and edit it in the usual ways.

This example uses the Ontology for Media Resources to describe a media resource added to a collection [[MEDIAONT]].


@prefix ma: <http://www.w3.org/ns/ma-ont#> .

<dataset> a ma:Collection ;
	ma:hasMember <dataset/image1.jpg> .

<dataset/image1.jpg> a ma:MediaResource ;
	ma:hasFormat "image/jpeg" .
(see functional requirement F9.1)

Alternative scenario: media-resource attachments

A resource may have multiple renditions. For example, you can have a PDF and a JPEG representing the same thing. A user is trying to create a work order along with an attached image showing a faulty machine part. To the user and to the work order system, these two artifacts are managed as a set. A single request may create the work order, the attachment, and the relationship between them, atomically. When the user retrieves the work order later, they expect a single request by default to retrieve the work order plus all attachments. When the user updates the work order, e.g. to mark it completed, they only want to update the work order proper, not its attachments. Users may add/remove/replace attachments to the work order during its lifetime.
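A sketch of the created set, reusing the oslc_cm:attachment predicate from the earlier nested-container example and the Ontology for Media Resources [[MEDIAONT]] for the attachment metadata (the work order and image URIs below are hypothetical, and treating the work order as an oslc_cm:ChangeRequest is purely illustrative), might look like this:

@prefix oslc_cm: <http://open-services.net/ns/cm#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ma: <http://www.w3.org/ns/ma-ont#> .

<workorder17> a oslc_cm:ChangeRequest ;
	dcterms:title "Replace faulty machine part" ;
	oslc_cm:attachment <workorder17/photo.jpg> .

# The attachment is a binary (non-RDF) resource with its own metadata
<workorder17/photo.jpg> a ma:MediaResource ;
	ma:hasFormat "image/jpeg" .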

(see functional requirement F9.2)

Requirements

This section lists the functional and non-functional requirements arising from the use-cases catalogued in this document. Specific requirements that have been de-prioritized or rejected have been left in the document for completeness, but are shown as struck out.

Functional Requirements

F1.1:
The system shall provide the ability to create containers for composing resources, from UC1.
F1.2:
The system shall provide the ability to create nested containers, from UC1.
F1.3:
On deletion of a container, the system shall delete any contained resources and nested containers, from UC1.
F2.1:
The system shall provide the ability to create resources within a container, from UC2.
F2.2:
The system shall provide the ability to delete resources, from UC2.
F2.3:
The system shall provide the ability to move resources between containers, from UC2.
F3.1:
The system shall provide the ability to retrieve resource descriptions, from UC3.
F3.2:
The system shall enable the client to retrieve the description of a hash URI, from UC3.
F4.1:
The system shall provide the ability to update an existing resource by substitution, from UC4.
F4.2:
The system shall provide the ability to perform a selective update of a resource, from UC4.
F5.1:
The system shall provide the ability to determine if a resource has changed, from UC5.
F6.1:
The system shall provide the ability to aggregate resources, from UC6.
F6.2:
The system shall support the addition of a resource to multiple aggregations, from UC6.
F7.1:
The system shall provide the ability to retrieve a collection-level description of a composition, from UC7.
F7.2:
The system shall provide the ability to retrieve an item-level description of a composition or aggregation, from UC7.
F8.1:
The system shall provide the ability to retrieve a paginated description of a composition or aggregation, from UC8.
F9.1:
The system shall provide the ability to store and access media resources, from UC9.
F9.2:
The system shall provide the ability to add media-resource attachments, from UC9.

Non-Functional Requirements

NF1.1:
The system shall provide access guidance to resources, from UC1.
NF2.1:
The system shall encourage non-duplication of resources, from UC2.
NF2.2:
The system shall support distribution of resources, from UC2.
NF2.3:
The system shall support consistent, global naming, from UC2.
NF3.1:
The system shall support the use of standard vocabularies where appropriate, from UC3.
NF3.2:
The system shall provide a scalable linking model, from UC3.
NF4.1:
The system shall permit unrestricted vocabulary, from UC4.
NF5.1:
The LDP shall ensure consistent access in the case of multiple simultaneous attempts to access a resource, from UC5.
NF6.1:
The system shall allow resource descriptions that are a "mix of simple data and collections", from UC6.
NF6.2:
The system shall support relative URIs enabling sharing of collections, from UC6.

Acknowledgements

We would like to acknowledge the contributions of user story authors: Christophe Guéret, Roger Menday, Eric Prud'hommeaux, Steve Speicher, John Arwe, Kevin Page.