
In this post, I think about OData, in particular where it came from and why it looks and acts like it does. I also consider why I think it was a good protocol for an organisation like SAP to embrace.

OData. Or as some people write it (which causes me to gnash my teeth) “oData”. Not as bad as “O’Data” as Brenton O’Callaghan writes it, just to annoy me, though. Anyway, on a more serious note, I’ve been thinking about OData recently in the context of the fully formed and extensible CRUD+Q server that you can get for free with a small incantation of what seems to be magic in the form of the tools of the Application Programming Model for SAP Cloud Platform. I was also thinking about OData because of Holger Bruchelt’s recent post “Beauty of OData” – nice one Holger.

 

OData fundamentals

OData is a protocol and a set of formats. It is strongly resource oriented, as opposed to service oriented, which to me as a fan of simplicity and RESTfulness is a very good thing. Consider Representational State Transfer (REST) as an architectural style, which it is, rather than a specific protocol (which it isn’t), and you’ll come across various design features that this style encompasses. For me, though, the key feature is the uniform interface – there are a small fixed number of verbs (OData operations) and an infinite set of nouns (resources) upon which the verbs operate. These OData operations map quite cleanly onto the HTTP methods that we know & love, and understand at a semantic level:

OData operation    HTTP method
C – Create         POST
R – Read           GET
U – Update         PUT
D – Delete         DELETE
Q – Query          GET

 

There’s more to this (e.g. the use of PATCH for merge semantics, or the batching of multiple operations within an HTTP POST request) but basically that’s it. We have a standard set of CRUD+Q operations that cover the majority of use cases when thinking about the manipulation of resources. And for the edge cases where thinking in terms of resources and relationships between them would be too cumbersome, there’s the function import mechanism (with which I have a love-hate relationship, as it’s useful but also rather service oriented and therefore opaque).
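As a quick sketch of how that uniform interface plays out in practice, here’s how the CRUD+Q operations might be dispatched to HTTP methods. The service root and entity set names here are made up for illustration, and nothing is actually sent over the wire:

```python
# A sketch of the CRUD+Q-to-HTTP mapping, not a real client.
# The service root and entity set names are hypothetical.
import urllib.request

SERVICE = "https://example.com/odata/Catalogue.svc"  # hypothetical service root

METHODS = {
    "create": "POST",    # C: add a new entry to a collection
    "read":   "GET",     # R: read a single entry by its key
    "update": "PUT",     # U: replace an entry (PATCH for merge semantics)
    "delete": "DELETE",  # D: remove an entry
    "query":  "GET",     # Q: query a collection, e.g. with $filter
}

def odata_request(operation, resource, payload=None):
    """Build (but don't send) the HTTP request for a CRUD+Q operation."""
    return urllib.request.Request(
        f"{SERVICE}/{resource}", data=payload, method=METHODS[operation]
    )

req = odata_request("query", "Products")
print(req.method, req.full_url)  # → GET https://example.com/odata/Catalogue.svc/Products
```

Real services layer refinements on top of this (ETags, $batch and so on), but the core mapping really is this small.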

Beyond the protocol itself, there’s the shape of the data upon which the OData operations are carried out. I don’t mean the format – that’s separate, and multi-faceted too. OData formats, which relate to the RESTful idea of multiple possible representations of a resource, come in different flavours – predominantly XML and JSON based. What I mean by “shape” is how the data in OData resources is represented.

One of the things I used to say a lot was that if something was important enough it should be addressable. More particularly, business data should be addressable, in that elements should have addresses rather than being hidden behind some sort of opaque web services endpoint. In the case of an HTTP-based protocol like OData, these addresses are URLs. And the shape of the data can be seen in the way those URL addresses are made up*.

*some absolute RESTful purists might argue that URLs should be opaque, that we should not imply meaning from their structure. That to me is a valid but extreme position, and there has to be a balance between the beautiful theory of absolute purity and the wonderful utility of real life pragmatism.
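To make the idea of addressable business data concrete, here’s a sketch of the URL shapes involved. The service root and entity names are invented, but the patterns (entity set, key in parentheses, navigation, system query options) follow OData’s conventions:

```python
# Illustrative OData-style addresses; the service root and names are invented,
# but the patterns follow OData's URL conventions. (In a real request the
# spaces in the query options would be percent-encoded.)
root = "https://example.com/odata/Catalogue.svc"

urls = {
    "service document": f"{root}/",                       # which collections exist?
    "entity set":       f"{root}/Products",               # the whole collection (Q)
    "single entity":    f"{root}/Products(42)",           # one addressable entry (R)
    "property":         f"{root}/Products(42)/Name",      # drill into an entry
    "navigation":       f"{root}/Products(42)/Category",  # follow a relationship
    "query options":    f"{root}/Products?$filter=Price lt 20&$orderby=Name",
}

for kind, url in urls.items():
    print(f"{kind:17} {url}")
```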

And the shape of the data, which itself is uniform and predictable, allows this to happen. To understand what this shape is and how it works, I wanted to take a brief look at OData’s origins.

 

OData’s origins

OData goes back further than you might think. Here’s an image from a session on OData that I gave a few years ago:

The protohistory of OData

 

I’d suggest that if one looks at the big picture, OData’s origins go back to 1995, with the advent of the Meta Content Framework (MCF). This was a format created by Ramanathan V. Guha while working in Apple’s Advanced Technology Group, and its application was in providing structured metadata about websites and other web-based data – a machine-readable version of information that humans dealt with.

A few years later, in 1999, Dan Libby worked with Guha at Netscape to produce the first version of a format that many of us still remember and perhaps a good portion of us still use, directly or indirectly – RSS. This first version of RSS built on the ideas of MCF and was specifically designed to describe websites and in particular weblog-style content – entries that were published over time, entries that generally had a timestamp, a title and some content. RSS was originally written to work with Netscape’s “My Netscape Network”, to allow the combination of content from different sources (see Spec: RSS 0.9 (Netscape) for some background). RSS stood then for RDF Site Summary, as it used the Resource Description Framework (RDF) to provide the metadata language itself. (I have been fascinated by RDF over the years, but I’ll leave that for another time.)

I’ll fast-forward through the period directly following this, as it involved changes to RSS as it suffered at the hands of competing factions, primarily because some parties were unwilling to cooperate in an open process, and it wasn’t an altogether pleasant time (I remember, as I was around, close to the ongoing activities, and knew some of the folks involved). But what did come out of this period was the almost inevitable fresh start at a new initiative, called Atom. Like RSS, the key to Atom was the structure with which weblog content was described, and actually that structure was very close indeed to what RSS looked like.

An Atom feed, just like an RSS feed, was made up of some header information describing the weblog in general, and then a series of items representing the weblog posts themselves:

header
  item
  item
  ...

And like RSS feeds, Atom feeds – also for machine consumption – were made available in XML, in parallel to the HTML-based weblogs themselves, which of course were for human consumption.
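To show that header-plus-items shape, here’s a hand-written, minimal Atom-style feed (invented content, not a real feed) being picked apart with Python’s standard library:

```python
# A hand-written, minimal Atom-style feed: header information describing the
# weblog, followed by entry elements for the individual posts.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom's XML namespace

feed_xml = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>My Weblog</title>
  <updated>2018-07-16T07:00:00Z</updated>
  <entry>
    <title>Monday morning thoughts</title>
    <updated>2018-07-16T06:30:00Z</updated>
    <content type="text">Some content</content>
  </entry>
  <entry>
    <title>Another post</title>
    <updated>2018-07-09T06:30:00Z</updated>
    <content type="text">More content</content>
  </entry>
</feed>"""

feed = ET.fromstring(feed_xml)
print("header:", feed.find(f"{ATOM}title").text)
for entry in feed.findall(f"{ATOM}entry"):
    print("item:  ", entry.find(f"{ATOM}title").text)
```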

A few years later, in 2005, the Atom format became an Internet Engineering Task Force (IETF) standard, specifically RFC 4287, and became known as the Atom Syndication Format:

“Atom is an XML-based document format that describes lists of related information known as “feeds”. Feeds are composed of a number of items, known as “entries”, each with an extensible set of attached metadata. For example, each entry has a title.”

What was magic, though, was that in addition to this format, there was a fledgling protocol that was used to manipulate data described in this format. It was first created to enable remote authoring and maintenance of weblog posts – back in the day some folks liked to draft and publish posts in dedicated weblog clients, which then needed to interact with the server that stored and served the weblogs themselves. This protocol was the Atom Publishing Protocol, “AtomPub” or APP for short, and a couple of years later in 2007 this also became an IETF standard, RFC 5023:

“The Atom Publishing Protocol is an application-level protocol for publishing and editing Web Resources using HTTP [RFC2616] and XML 1.0 [REC-xml]. The protocol supports the creation of Web Resources and provides facilities for:

  • Collections: Sets of Resources, which can be retrieved in whole or
    in part.
  • Services: Discovery and description of Collections.
  • Editing: Creating, editing, and deleting Resources.”

Is this starting to sound familiar, OData friends?

Well, yes, of course it is. OData is exactly this – sets of resources, service discovery, and manipulation of individual entries.

AtomPub and the Atom Syndication Format were adopted by Google in its Google Data (GData) APIs Protocol while this IETF formalisation was going on, and the publish/subscribe protocol known as PubSubHubbub (now called WebSub) originally used Atom as a basis. And as we know, Microsoft embraced AtomPub in the year it became an IETF standard, and OData was born.

Microsoft released the first three major versions of OData under the Open Specification Promise, and then OData was transferred to the guardianship of the Organization for the Advancement of Structured Information Standards (OASIS) and the rest is history.

 

Adoption at SAP

I remember a TechEd event quite a few years back (it may have been ten or more) where I had a conversation with a chap at SAP who had been a member of a group searching for a data protocol to adopt, to take SAP into a new era of interoperability and integration. After a lot of technical research they decided upon OData. It was an open standard, a standard with which they could get involved, alongside Microsoft, IBM and others. For example, in 2014 OData version 4.0 was announced as an OASIS standard.

It was clear to me why such a standard was needed. In the aftermath of the WS-deathstar implosion there was clearly a desire for simplicity, standardisation, openness, interoperability and perhaps above all (at least in my view) a need for something that humans could understand, as well as machines. The resource orientation approach has a combination of simplicity, power, utility and beauty that is reflected in (or by) the web as a whole. One could argue that the World Wide Web is the best example of a hugely distributed web service, but that’s a discussion for another time.

OData has constraints that make for consistent and predictable service designs – if you’ve seen one OData service you’ve seen them all. And it passes the tyre-kicking test, in that the tyres are there for you to kick – to explore an OData service using read and query operations all you need is your browser.

OData’s adoption at SAP is paying off big time. From the consistency we see across various SAP system surfaces, especially in the SAP Cloud Platform environment, through the simple ability to eschew the OData flavour itself and navigate OData resources as simple HTTP resources (how often have I seen UI5 apps retrieving OData resources and plonking the results into a JSON model?), to the crazy (but cool) ability to consume OData from other tools such as Excel. (Why you’d want to use these tools is a complete mystery to me, but that’s yet another story for another time, one best told down the pub.)

If you do one thing before your next coffee, have a quick look at an OData service. The Northwind service maintained by OASIS will do nicely. Have a look at the service document and, say, the Products collection.

Excerpts from the service document and from the Products collection

 

Notice how rich and present Atom’s ancestry is in OData today. In the service document, entity sets are described as collections, and the Atom standard is referenced directly in the “atom” XML namespace prefix. In the Products entity set, notice that the root XML element is “feed”, an Atom construct (we refer to weblog Atom and RSS “feeds”) and the product entities are “entry” elements, also a direct Atom construct.
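For a flavour of what those excerpts contain, here’s a hand-written, abbreviated sketch of a V2-style Products feed – the values are invented in the spirit of Northwind’s response, not fetched live – showing Atom’s feed and entry elements wrapping OData’s own property data:

```python
# A hand-written, abbreviated sketch of an OData V2-style Products feed.
# The Atom namespace carries the feed/entry structure; OData's metadata (m)
# and data (d) namespaces carry the entity properties. Values are invented.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"
M = "{http://schemas.microsoft.com/ado/2007/08/dataservices/metadata}"
D = "{http://schemas.microsoft.com/ado/2007/08/dataservices}"

products_xml = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
      xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices">
  <title type="text">Products</title>
  <entry>
    <content type="application/xml">
      <m:properties>
        <d:ProductID>1</d:ProductID>
        <d:ProductName>Chai</d:ProductName>
      </m:properties>
    </content>
  </entry>
</feed>"""

feed = ET.fromstring(products_xml)  # the root element is Atom's "feed"
for entry in feed.findall(f"{ATOM}entry"):  # each product is an Atom "entry"
    props = entry.find(f"{ATOM}content/{M}properties")
    print(props.find(f"{D}ProductName").text)  # → Chai
```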

Today’s business API interoperability and open standards are built upon a long history of collaboration and invention.

 

This post was brought to you by Pact Coffee’s Planalto and the delivery of the milk by the milkman even earlier than usual.

 

Read more posts in this series here: Monday morning thoughts.

 

13 Comments


  1. Nabheet Madan

    As always, great post sir. This, I believe, is one of the most important steps which SAP took, apart from the others – #UI5, #Cloud, #HANA etc. To me it stands out; the reason is simple: by embracing OData/REST/API stuff you open your doors for collaboration. Earlier, getting data out of SAP was a big pain via SOAP and the like, each with a different approach. Now we have a uniform approach and anything can consume this data, whether in apps, webapps etc.

    I think from the moment SAP started thinking in the direction of #TogetherWeCanAchiveMore, things have changed for good. Having said that, this is a long journey – a journey where you keep on inventing without worrying about end results. God knows what is next… bots, programming etc. But one thing is for sure: the future will be great only if we collaborate for the greater good, else it will be like a war.

    Thanks once again for the great Monday morning thoughts series; I hope someday we have a book from SAP Press with the title Past, Present, Future and much more.

     

    Nabheet

     

    1. DJ Adams Post author

      Thanks Nabheet, I appreciate your comment. I think a key point is the “uniform approach” that you mention, that is definitely one of the wider benefits of adopting an open standard that has the qualities of OData.

      I think that we’ll see a continued stream of invention, and that’s not necessarily a bad thing. We just have to be mindful of that, and work out what are good inventions and what are not.

      1. Martin Fischer

        Hi DJ,

        great post, as usual! Uniformity is very important, but almost as important – and for me “the” reason for the success – is that it’s not only an SAP standard, as protocols often have been in SAP’s past; it’s an open industry standard!

         

        Cheers,
        Martin

         

         

        1. DJ Adams Post author

          Yep, totally. Thanks Martin – it’s important to stress this. I know it’s crazy even to consider alternatives at this stage, but look at the success that came from embracing HTTP. Unthinkable not to these days, but of course it had to start somewhere … Cheers!

  2. Chris Paine

    I have a t-shirt (albeit a little faded now) that has on the back the text:


    “This Jen is the Internet”

    GET

    PUT

    POST

    DELETE

    –MERGE–*


    * (with two red lines crossing out the MERGE)

     

    I still remember reading the OData v2 spec and MERGE not PATCH being the supported verb and getting thoroughly annoyed. Then BATCH caused me much grief as I feared it was a clear step away from the RESTful APIs we had hoped SAP was going to adopt.

    My earliest memories of OData creeping into SAP were with the integrations with Microsoft Sharepoint and Duet. (as usual HR was right in the thick of trying out the new stuff first). Twas not so pretty back then. (Thinks back to the fights about how on earth these API calls were going to be licenced!)

    In the end, though, it was a process of iteration, and V4 isn’t that bad – we may even be able to get rid of the XML layout with the JSON metadata representation (if anyone implements it!)


    *some absolute RESTful purists might argue that URLs should be opaque, that we should not imply meaning from their structure. That to me is a valid but extreme position, and there has to be a balance between the beautiful theory of absolute purity and the wonderful utility of real life pragmatism.

    Loved that aside – because I can feel that point. It just implies that we have to worry more about people relying on what might be implementation detail to navigate directly to resources – everything is an endpoint, which might not be something you want.

    The thing I tell myself, however, when I get cranky at OData for having a clearly not opaque URL structure (possibly not a bad thing, I will concede), for supporting BATCH, and for having “function imports” that clearly don’t follow the spec’s requirement of not having observable side effects (http://docs.oasis-open.org/odata/odata/v4.01/cs01/part1-protocol/odata-v4.01-cs01-part1-protocol.html#sec_Functions), is that it isn’t what I might have traditionally called a RESTful API. Instead it is a very good way of representing data structures of underlying database views/tables in a way that can be consumed over an HTTP call. “ODBC for the web”, I believe, was one quote (https://www.slideshare.net/fredverheul/gateway-for-the-uninformed-sitnl-edition – not sure who Fred Verheul got the quote from; his was pretty close to the earliest reference I could find, other than one outside the SAP world: https://www.apptius.com/apptitude/NewsLetter.aspx?n=7-9-2010).

    I would never implement an OData interface for one of my web apps – it’s just far too hard work for so little gain. But when you are building generic APIs for many people to consume, it makes sense.

    My fear is, however, that we become so fixated on the use of OData that we forget about other possibilities. The use of Web Sockets to deliver real-time data, for example, has the possibility to improve our user experience greatly; yet while it’s quite possible to bind a UI5 JSON model to data retrieved from a web socket event (done that, works really well!), because of the fixation on retrieving and sending data via OData and HTTP I fear we are bypassing some really cool stuff.

    Sorry to ramble so, your Monday morning posts are often far too inspiring.

    Cheers,

    Chris

     

     

    1. DJ Adams Post author

      A great ramble, Chris. Yes, the opaque URL structure (or lack of) does niggle a bit, but I find that reality gets the better of us and the fact that it’s not opaque has, I think, been one of the reasons for OData’s success – people can grok it quite easily with almost zero tools.

      I am intrigued by what you say here: “everything is an endpoint, which might not be something you want”. I might be misinterpreting the sentence, but it reminds me of the issues in the fundamental design of web services (WS style) where the endpoints were indeed opaque and not addressable – not a situation I’d like to see return.

      “ODBC for the web” – I remember that phrase too (perhaps from when Microsoft was involved) and think it’s both accurate and inaccurate in different ways. There’s a part of me that wants to dismiss it, but I think it does help folks understand it’s about DATA, not (remote) FUNCTIONS.

      Implementing an OData service (outside of Gateway) is still too hard, generally. Or has been. I do think that the Application Programming Model’s tooling is changing that. Look, for example, at the relevant sections of the tutorial group “Use the Application Programming Model to create a full stack app” – even just reading the tutorials will give you a sense that it’s possible to conjure up a service very quickly that does CRUDQ out of the box, which is lovely.

      You’re absolutely right to point out that there are other styles of data plumbing – web sockets is one, as you say – but moving up a level in the stack from there, publish/subscribe is, I think, due to make a flourished fresh appearance in the enterprise world. I for one am watching that space with interest.

       

  3. Eng Swee Yeoh

    Hi DJ

     

    Thanks for bringing us down memory lane. I enjoy reading your Monday morning posts, as they provide a different flavor to things.

     

    I’d love to hear your opinion on how granular an OData resource should be. Many of the OData APIs that I’ve come across (e.g. SuccessFactors, Dynamics CRM) have fine-grained resources, where header-level and item-level details are separated into different entities. However, IMHO traditional SAP ERP business objects do not lend themselves that well to such a granular resource-oriented paradigm. The database design of ERP objects (say, sales/purchase orders) tends to be so tightly coupled that it’s unlikely that a CRUD-Q operation can be performed independently at either header or item level. It often needs to be handled as an entire object, which may feel more service-oriented.

     

    Regards

    Eng Swee

    1. DJ Adams Post author

      That’s a very good question, thanks for joining the conversation too. It’s a difficult one to answer definitively; I do think that the facilities that OData already affords (deep inserts, batch operations) do go quite far in allowing us to “remain granular”, keeping header and item entities separately defined. I guess it’s down to how important we see the purity of the data model in some ways; if we want to “pre-compress” or “pre-compile” (for want of a better term) relationships, then we can do, and OData won’t mind, as a protocol. But I think that as an overall design, it encourages us to remain thinking at the individual entity level. One practical issue I could think of straight away from handling header/item relationships as entire objects is that paging of items could become more, rather than less, difficult. Great food for thought, though!

      1. Eng Swee Yeoh

        Thanks for sharing your thoughts.

         

        If I read you correctly, I’d say that if we were designing a new service from scratch (including backend implementation), it would be better to have fine granularity (i.e. header and item entities separate). This allows more flexibility in accessing the entities separately when we want to, or together as well (deep insert for CREATE, batch for UPDATE/DELETE, and $expand for READ/QUERY).

         

        And if we are exposing a service for an existing backend model (say ERP objects), it might be better to model it similar to how it has always been used – not necessarily splitting it into header/item entities if they were not meant to be handled as such.

        1. DJ Adams Post author

          Yes, you read me correctly. It’s an interesting point about modelling existing backend structures, and comes down to how important one sees an OData service as an abstraction rather than a pass-through.

          At a trivial level, I am mindful of the emphasis placed, in early docu, on using “human” (non-SAP-5-character) fieldnames in the entity definitions, and specifying just a subset of data according to what was required. At a more complex level, I think decisions have to be made on a case-by-case basis as to how much “reshaping” of the backend data structures is done when they’re exposed. Food for thought!

