In my last post, I argued for designing RESTful hypermedia around the capabilities and needs of the client rather than around a specific service. A reasonable question to ask is whether the constraints of REST require this, or whether it is simply a good practice. The issue turns on REST’s Uniform Interface constraint, which requires that messages be self-descriptive. What isn’t always clear is what is meant by “self-descriptive” in the context of data formats. Roy Fielding explains on the REST-discuss list:
Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to its intent. The specification does not need to be a standard (a.k.a., a measure that everyone agrees to). It would help, but most useful standards are defined through use. Whoever starts sending the data first should define the specification according to what is being sent, not try to get everyone to agree first.
This is one of those gray areas of increasing RESTfulness that will doubtless drive some people nuts. The problem is that I can’t say “REST requires media types to be registered” because both Internet media types and the registry controlled by IANA are a specific architecture’s instance of the style — they could just as well be replaced by some other mechanism for metadata description.
The broader question is what does it take to create an *evolving* set of standard data types? Obviously, I can’t say that all data types have to be *the* standard before they are used in a REST-based architecture. At the same time, I do require enough standardization to allow the data format sent to be understood as such by the recipient. Hence, both sender and recipient agree to a common registration authority (the standard) for associating media types with data format descriptions. The degree to which the format chosen is a commonly accepted standard is less important than making sure that the sender and recipient agree to the same thing, and that’s all I meant by an evolving set of standard data types.
And so standardization isn’t necessarily the deciding factor in self-descriptiveness. At the end of the day, the sender and receiver need to agree on the format used for a representation (indicated by the media type in the transfer protocol) and have a well-understood way to map that format name to a specification. This doesn’t seem to say much about the nature of the formats themselves, though.
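To make the mechanics concrete, here is a minimal sketch of a server labeling a representation with a media type. The vendor type application/vnd.example.bookstore+xml is invented for illustration; self-descriptiveness comes from both parties resolving that name, through an agreed registry such as IANA’s, to a format specification.

```python
# Minimal sketch: the media type travels as protocol metadata.
# "application/vnd.example.bookstore+xml" is a hypothetical vendor type;
# sender and recipient must agree on a registry that maps this name to
# a specification. The message carries the name, not the spec itself.
from http.server import BaseHTTPRequestHandler, HTTPServer

class BookstoreHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<book><title>RESTful Web Services</title></book>"
        self.send_response(200)
        # The recipient processes the body according to the specification
        # this name points to, not according to knowledge of this server.
        self.send_header("Content-Type",
                         "application/vnd.example.bookstore+xml")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), BookstoreHandler).serve_forever()
```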
In the same posting, Roy identifies section 5.2.1 of his dissertation as the definition of the self-descriptiveness constraint (at least as it pertains to the data format). I find this section particularly useful because it compares the RESTful architecture of the Web to three alternative architectures for distributed hypermedia systems. Contrasting the Web with other possible solutions makes the distinguishing features of REST much clearer and highlights the strengths and weaknesses of the style.
The three alternatives proposed can be summarized as follows:
1) A client-server architecture where requested information is rendered on the server and the resulting image is sent to the client. A key benefit of this architecture is the encapsulation of the server-side data. This decouples the client from the server, isolating it from implementation details and allowing the service to evolve independently of the client – new features could easily be added to a web site in this architecture; the image sent to the client would simply change appropriately. The cost is scalability: the server must take on all of the processing associated with rendering, and each page must be transmitted as an image, which is expensive in bandwidth. Additionally, the client is restricted to being a browser – or at least something that can deal with page images. It would be practically impossible, for example, to implement a spider in this architecture.
2) A mobile-object style architecture where the server sends both the data and a rendering engine (i.e. processing logic or code) to the client. This provides data encapsulation within the downloaded rendering engine running on the client. The server also offloads some processing to the client, at the cost of the increased bandwidth needed to download application code. Here the client is even further restricted in what it can do – little more than run the downloaded application code.
3) An architecture where the server sends the raw data to the client along with a media type that indicates the data format, allowing the client to choose a rendering engine for the data. This offloads the rendering work from the server and limits bandwidth requirements, as only the raw data is downloaded. The client is fairly unrestricted – it could be a browser, a spider, or anything else that can process the raw data. However, this comes at the cost of data encapsulation: the client is exposed to the server’s raw data, increasing coupling and limiting the server’s ability to evolve or change its implementation.
Option 3 addresses the shortcomings of Options 1 and 2 at the cost of data encapsulation. Also, while Option 3 allows for a wider range of clients, the processing that can be performed by any single client implementation in Option 3 is fixed. Option 2, through downloaded code, affords a single client more flexibility.
One might think that Option 3 is REST, but it isn’t – there are subtle differences. Fielding describes REST as follows:
REST provides a hybrid of all three options by focusing on a shared understanding of data types with metadata, but limiting the scope of what is revealed to a standardized interface. REST components communicate by transferring a representation of a resource in a format matching one of an evolving set of standard data types, selected dynamically based on the capabilities or desires of the recipient and the nature of the resource. Whether the representation is in the same format as the raw source, or is derived from the source, remains hidden behind the interface. The benefits of the mobile object style are approximated by sending a representation that consists of instructions in the standard data format of an encapsulated rendering engine (e.g., Java).
The first key distinction is that the raw data is not sent to the client in REST; instead, the data is converted into a representation format that is:
a) “Matching one of an evolving set of standard types” – this is elaborated in detail above.
b) “Selected dynamically based on the capabilities or desires of the recipient and the nature of the resource” – this is essentially content negotiation. The format used to represent the data when transmitting it to the client isn’t fixed; an appropriate format can be selected for each request. Obviously, different sets of formats apply to different classes of resource (e.g. there are distinct formats for images and for hypermedia). The client can constrain the set of formats it is willing to accept in the request (the HTTP Accept header is designed for this, though separate URIs are often used in practice).
For example, rather than sending raw database query results to the client, you convert them into the standardized format most appropriate for that client before responding. For a web browser, this is likely HTML; for a voice browser, VoiceXML. Of course, as Fielding points out, some resources are simply static hypermedia pages. Here, the raw data format is the same as the representation format, but should they differ, the client only ever sees (i.e. depends on) the representation format.
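A rough sketch of that selection step (the handler, the formats chosen, and the field names are illustrative assumptions, not anything from Fielding’s text): the same raw data is rendered as HTML for a web browser or VoiceXML for a voice browser, and internal fields never leak into either representation. Real negotiation would also parse Accept q-values rather than doing substring checks.

```python
# Sketch of dynamic format selection (content negotiation). The raw data
# never leaves the server; only a negotiated representation does.
def render_book(book: dict, accept_header: str) -> tuple:
    """Return (media_type, body) based on what the client will accept."""
    if "text/html" in accept_header:
        return "text/html", f"<html><body><h1>{book['title']}</h1></body></html>"
    if "application/voicexml+xml" in accept_header:
        body = ('<vxml version="2.1"><form><block>'
                f"<prompt>{book['title']}</prompt>"
                "</block></form></vxml>")
        return "application/voicexml+xml", body
    # In HTTP, 406 Not Acceptable would be the usual fallback.
    raise ValueError("no acceptable representation")

# The internal row id below never appears in any representation;
# that is the data encapsulation the representation layer provides.
book = {"title": "REST in Practice", "internal_row_id": 42}
print(render_book(book, "text/html"))
print(render_book(book, "application/voicexml+xml"))
```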
The second key distinction is that some of the standard data formats supported by a client can be bytecode or scripts – code on demand. This provides benefits similar to those of Option 2, as a single client gains increased flexibility. Of course, the more downloadable code is used, the more you are exposed to the drawbacks of Option 2. It is up to the architect to balance the tradeoffs for a specific system’s needs.
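Viewed this way, code on demand is just one more entry in the evolving set of standard types a client may support. A hypothetical sketch (the function and script are invented for illustration):

```python
# Code on demand as "just another standard media type": the negotiated
# representation is executable content the client's engine will run.
def get_widget(accept_header: str) -> tuple:
    if "text/javascript" in accept_header:
        # The client asked for behavior rather than data; what the
        # script does is decided by the server at runtime.
        return "text/javascript", "document.title = 'rendered client-side';"
    # A client with no script engine can still negotiate for plain data.
    return "application/json", '{"title": "rendered server-side"}'
```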
There is another set of tradeoffs to consider in the nature of the hypermedia format used – this gets us back to the subject of my previous post. As we have just seen, a RESTful service does not send its raw data to the client (that’s Option 3, not REST); it must be converted to a self-descriptive format. Given the data encapsulation benefits that this is intended to provide, it should be clear that simply serializing your raw data into XML or JSON doesn’t cut it. Angle brackets and curly braces don’t hide implementation details.
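To make the contrast concrete (both payloads below are invented for this sketch), compare a raw serialization, which exposes the service’s internals, with a hypermedia representation, which exposes only standard semantics and links:

```python
import json

# (a) Raw serialization; this is Option 3, not REST. The JSON is just a
# database row, so clients end up coupled to internal column names.
db_row = {"bk_id": 42, "ttl": "REST in Practice", "prc_cents": 3999}
raw_dump = json.dumps(db_row)

# (b) A hypermedia representation in a standard format: internal names
# and structure stay hidden behind standard semantics and a link the
# client can follow to drive the application forward.
hypermedia = (
    '<div class="book">'
    "<h1>REST in Practice</h1>"
    "<span>$39.99</span>"
    '<a rel="purchase" href="/books/42/order">Buy</a>'
    "</div>"
)
```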
I would say that the minimum amount of abstraction is provided by a standardized format specific to an application – for example, a standard format for online booksellers. (I’m using Roy Fielding’s definition of application, which is basically “something the user wants to do with computers”, such as buying a book. I’ve used “service” to denote a specific instance of an application.) If an interface can be exported by multiple distinct implementations, that’s a sure sign it isn’t coupling the client to a specific one. Of course, such a format would limit the ability of the service to evolve, as I described previously. For example, if the service changed to also support trading books or selling other types of goods, the application-specific format would likely not be sufficient for these new features.
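As a sketch of what such an application-specific format might look like, and where its evolution limit bites (the format, namespace, and media type are all invented):

```python
# Hypothetical application-specific format for online booksellers.
# Because any bookstore could emit it, a client written against this
# format isn't coupled to one particular store's implementation...
BOOKSTORE_MEDIA_TYPE = "application/vnd.example.bookstore+xml"

offer = """<bookstore-offer xmlns="urn:example:bookstore">
  <title>REST in Practice</title>
  <price currency="USD">39.99</price>
  <link rel="purchase" href="/books/42/order"/>
</bookstore-offer>"""

# ...but the vocabulary has no element for, say, a trade-in offer or a
# non-book product, so evolving the service beyond "selling books"
# requires a new (or extended) format.
```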
Data encapsulation and service evolvability can be maximized by instead designing (or selecting) a format around the client’s needs. This is because such a format more closely matches the full space of applications that the client is able to support. If you want a single service to be accessible by two distinct clients that don’t use the same format(s) then simply support them both, using content negotiation or distinct URIs to allow a client to choose the right one.
This data encapsulation may come at the cost of some efficiency as compared to a format designed around the application. It also requires that the service designer, at implementation time, have some understanding of the clients that will use the service. It is up to the system designers to consider the tradeoffs and choose a format that meets their needs. But REST does not constrain you to a broader, standard format – an application-specific format is RESTful as long as it meets the self-descriptiveness criteria described above. Roy Fielding touches on this in one of the postings already linked above, saying:
Sure, it is easier to deploy the use of a commonly understood data format. However, it is also more efficient to use a format that is more specifically intended for a given application. Where those two trade-offs intersect is often dependent on the application. REST does not demand that everyone agree on a single format for the exchange of data — only that the participants in the communication agree. Beyond that, designers need to apply their own common sense and choose/create the best formats for the job.
If we want to call one more RESTful than the other, then we have to take the goal of evolution into account. I would say it is more RESTful to use a specific standard type when applicable or to define a new type that is specific to a given purpose AND intended to be standardized for that application type (i.e., proprietary types are less RESTful than industry-wide standard types, but new standard types are not less RESTful than old standard types). But that is really only my personal preference, since the style does not constrain REST-based architectures to a single standard.
February 10, 2011 at 4:41 pm
I think that calls for the application of a standardized description format whose instances can be serialized into appropriate representation formats that are consumable by the interacting clients (instead of creating new domain-specific media types all the time to describe domain models).
So, I would suggest using the RDF model as a basis and utilizing existing Semantic Web knowledge representation languages, and the specific ontologies built on top of that knowledge representation structure, to create such domain-specific descriptions. If you can’t find an appropriate Semantic Web ontology to describe (parts of) your domain model, you can create a new one. However, it is generally better to utilize existing Semantic Web ontologies, to more easily establish a shared understanding.
What do you think about that approach?