Hypermedia is an Event Filter

January 31, 2011

In an earlier post, I examined multiple definitions of the term “hypermedia”. A key aspect shared by all those definitions was that hypermedia is interactive – an entity interacting with the system is able to effect change through some sort of input or action. In fact, Roy Fielding summarizes “hypertext” as data-guided controls. Unfortunately, little focus is typically given to input event processing in the context of REST even though the style offers some definite advantages in this area.

In another post, I discussed a comparison of various alternative architectures for the Web found in section 5.2.1 of Roy Fielding’s dissertation. The focus of that analysis was the transfer of information from server to client, as the movement of data to the processor is what distinguishes REST from other architectural styles that move processing agents to the data.

In this post, I will re-examine these alternatives, extending the analysis to the information flowing from client to server and the protocols used to exchange information. By doing so, I hope to clarify the advantages provided by REST for input event processing.

Read the rest of this entry »

Web Linking

December 1, 2010

A couple of months ago, Mark Nottingham’s Web Linking Internet-Draft made its way to RFC status as RFC 5988. This is a pretty significant specification for the web. It does three key things:

  1. It provides a generic definition of a “link”;
  2. It establishes a registry for link relations; and
  3. It defines the HTTP link header.

The first point is one of those things that, surprisingly, hadn’t been done before – at least as far as I know. Sure, links have been defined in the context of specific formats, and the semantic web has a fairly generic definition of a link, but the Web Linking RFC provides an application- and serialization-agnostic definition, which is a pretty useful thing to have.
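To give a feel for the third item, the Link header carries typed links in HTTP message headers rather than in the representation body. Here is a minimal sketch of extracting those links; it is a simplification that assumes no commas or semicolons appear inside quoted parameter values (a full parser would have to honor quoting):

```python
def parse_link_header(value):
    """Parse a Link header value into (target, params) pairs.

    A simplified sketch: assumes no commas or semicolons appear
    inside quoted parameter values.
    """
    links = []
    for link_value in value.split(","):
        segments = link_value.split(";")
        # The link target is enclosed in angle brackets
        target = segments[0].strip().lstrip("<").rstrip(">")
        params = {}
        for segment in segments[1:]:
            key, _, val = segment.strip().partition("=")
            params[key.strip()] = val.strip().strip('"')
        links.append((target, params))
    return links

header_value = '</book/chapter2>; rel="next", </book/chapter0>; rel="prev"'
print(parse_link_header(header_value))
# → [('/book/chapter2', {'rel': 'next'}), ('/book/chapter0', {'rel': 'prev'})]
```

Note that the link relations (`next`, `prev`) are exactly the sort of thing the new registry is for: the header gives you the link, and the registered relation tells you what it means.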

Read the rest of this entry »

Machine-to-Machine Hypermedia

August 25, 2010

Most developers and architects trying to create new RESTful hypermedia formats today are focused on “machine-to-machine” systems where the client is not driven by a user interface (UI). Hypermedia formats already exist for UI-driven clients: there’s obviously HTML, plus a whole family of related standards (SVG, SMIL, etc.), for graphical UIs, and for voice UIs there are standards such as VoiceXML. But while there are many great examples of hypermedia formats for UI-driven clients, it’s not even clear what “hypermedia” actually means outside of the context of a UI.

Let’s take a look at the Wikipedia definition of “hypermedia”:

Hypermedia is used as a logical extension of the term hypertext in which graphics, audio, video, plain text and hyperlinks intertwine to create a generally non-linear medium of information. This contrasts with the broader term multimedia, which may be used to describe non-interactive linear presentations as well as hypermedia.

This seems to define hypermedia as an extension of media designed for human consumption. So does it make sense to use the term hypermedia for something that isn’t consumed through some sort of user interface?

Read the rest of this entry »

Self-Descriptive Hypermedia

July 19, 2010

In my last post, I argued for designing RESTful hypermedia around the capabilities and needs of the client rather than around a specific service. A reasonable question to ask is whether the constraints of REST require this, or whether it is simply a good practice. The issue is closely related to REST’s Uniform Interface constraint, which requires that messages be self-descriptive. What isn’t always clear is what is meant by “self-descriptive” in the context of data formats. Roy Fielding explains on the REST-discuss list:

Self-descriptive means that the type is registered and the registry points to a specification and the specification explains how to process the data according to its intent. The specification does not need to be a standard (a.k.a., a measure that everyone agrees to). It would help, but most useful standards are defined through use. Whoever starts sending the data first should define the specification according to what is being sent, not try to get everyone to agree first.

Roy later goes on to explain:

This is one of those gray areas of increasing RESTfulness that will doubtless drive some people nuts. The problem is that I can’t say “REST requires media types to be registered” because both Internet media types and the registry controlled by IANA are a specific architecture’s instance of the style — they could just as well be replaced by some other mechanism for metadata description.

The broader question is what does it take to create an *evolving* set of standard data types? Obviously, I can’t say that all data types have to be *the* standard before they are used in a REST-based architecture. At the same time, I do require enough standardization to allow the data format sent to be understood as such by the recipient. Hence, both sender and recipient agree to a common registration authority (the standard) for associating media types with data format descriptions. The degree to which the format chosen is a commonly accepted standard is less important than making sure that the sender and recipient agree to the same thing, and that’s all I meant by an evolving set of standard data types.
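As a concrete illustration of sender and recipient agreeing on a registration (the media type name here is hypothetical, not a registered one), the sender defines a format, registers a name for it, and then labels its messages with that name so any recipient who knows the specification can process the data according to its intent:

```
GET /orders/42 HTTP/1.1
Host: api.example.com
Accept: application/vnd.example.orders+xml

HTTP/1.1 200 OK
Content-Type: application/vnd.example.orders+xml
```

The point of the quoted passage is that what matters is the shared registration authority, not how widely adopted the format itself is.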

Read the rest of this entry »

Hypermedia is the Client’s Lens

June 9, 2010

RESTful systems are by definition supposed to be based on the architectural style of the Web; however, there is one big fat glaring difference between the Web and almost all of the other systems out there that claim to be RESTful. I’m not talking about use of methods or any other aspect of HTTP. I’m not talking about the structure of the URIs, the resources in the system, or even whether or not the representations contain links. Many systems that can honestly claim to be at Level 3 of the Richardson Maturity Model have this deficiency.

I’m talking about over-constrained, service-specific hypermedia formats that precisely represent a service’s resources and workflow. It’s hard to think of this as a “problem” — it’s what we’re used to doing in software interfaces. But this is certainly not how things work on the Web. Here we have a single format, HTML, used by a wide variety of services: Google, Facebook, Amazon, etc., that all do very different things. The markup language is not designed around the semantics of any of the resources exposed by these services. There is no <book> element used to represent a book on Amazon.com. The Web’s interface is uniform not only because of HTTP but also because of HTML.
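For example, a page describing a book is built entirely from generic HTML elements, with forms and links as the controls; nothing in the markup itself is specific to books or to any one service (this sketch is purely illustrative, not taken from any real site):

```html
<!-- Hypothetical markup: the "book" is represented with generic HTML.
     Class names and URIs here are illustrative assumptions. -->
<div class="product">
  <h1>RESTful Web Services</h1>
  <p class="author">Leonard Richardson and Sam Ruby</p>
  <form action="/cart" method="post">
    <input type="hidden" name="isbn" value="9780596529260">
    <input type="submit" value="Add to cart">
  </form>
</div>
```

A client that understands HTML can render and act on this without knowing anything about books, which is exactly the property that service-specific formats give up.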

Read the rest of this entry »


May 17, 2010
I’ve been standing silently at the edge of the ongoing party that is the tech blogosphere for quite a while, just listening to the discussion. Slowly, I’ve joined in the conversation, first in blog comments, forums, and mailing lists, and more recently on Twitter. But it’s awfully hard to really communicate without a blog of your own, and I’ve finally decided it’s time for me to take the floor.

I’m certain that most of what I have to say here will be related to software architecture. It’s what I spend my days working on, and while you’d think that it would be the last thing I’d want to spend my nights writing about, it’s a topic on which I have a lot of opinions and insights to share. While I’ve had a variety of roles in my career, I’ve been largely focused on loosely coupled, hyperlink-driven systems based on standard protocols and formats — in a nutshell: Web-inspired software architecture. I imagine that this will be the prominent theme here (though I don’t have long-term plans for what I am going to cover beyond the first few posts). The title of this blog, “linked, not bound”, is meant to reflect the nature of the subject matter, though it incidentally describes the format as well.

Read the rest of this entry »