
In this post, I think about GraphQL, and its relationship with existing ideas for managing data and structured exposure to that data over a wire protocol. 

Last week, Chris Paine shared with us on Twitter some comments about GraphQL and how it compared with OData. It was an intriguing thought and led to all sorts of discussions. I didn’t know much about GraphQL so I took a bit of time to look into it. Not too much time so far, so please take these thoughts as coming from someone with a very limited exposure to GraphQL itself.

 

What is GraphQL?

GraphQL is an open-source specification that originated in one of Facebook’s engineering teams. One of the pieces I consumed was a talk by one of GraphQL’s creators, Lee Byron: “Lessons from 4 years of GraphQL“, and one of my takeaways was the clear passion that has driven the early success GraphQL has been seeing. Beyond the specification, there’s a reference implementation in JavaScript, and over time more than a dozen implementations have emerged, in different languages.

 

The sample on GraphQL’s homepage is a nice overview of what it looks and feels like.

 

GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. Like many combinations of specification and implementation, it’s also a community, with developers creating implementations of server-side components as well as client-side libraries.

One of the pieces of software you’ll see everywhere is GraphiQL, a browser-based REPL*-like explorer where you can enter GraphQL queries on the left-hand side (see the “Ask for what you want” panel in the image above) and see the results of those queries on the right-hand side (see the “Get predictable results” panel). Queries and responses are in the form of JSON-like structures, which is nice.

*REPL stands for Read-Evaluate-Print Loop; in many languages it’s a simple interactive environment in which you can interact with the language or service directly, to manually program or explore the possibilities it offers.

 

A fundamentally different data store

One of the core features of GraphQL is the shape of the data store. It’s not hierarchical, it’s not relational, it’s a graph. Entities persisted are generally either nodes – things – or arcs – relationships between things. I’m using the words “nodes” and “arcs” because the data store concept is nothing new – certainly not in my experience.

Back in the day, even before the SAP Community Network was born (which was in 2003), there was a lot of activity and thinking around the theme of the Semantic Web, the idea that the content of what was stored and retrieved via HTTP could be described in a separate, rich, semantic layer that could bring meaning to entities on the Web. Meaning in terms of what types of things the resources represented, and meaning in terms of how they were related.

This meaning was expressed in terms of nodes (the resources) and arcs (the relationships between those nodes), and the language that was used to describe these nodes and arcs was the Resource Description Framework (RDF), and the various ontological languages that were based on and used in conjunction with RDF (OWL – the “Web Ontology Language” is one of the more well known of those, along with Dublin Core).

 


RDF nodes and arcs

 

I have been fascinated by RDF for a long time, and dabbled in various aspects of it in the early 2000s – see the RDF tag on my blog for some posts on that subject. Of course, RDF can be seen as one of the ancestors* of OData, which in a way is rather ironic.

*RSS was originally an RDF-based language (RSS stood for RDF Site Summary at the time), Atom was a successor to RSS, and OData was an extension of Atom (along with the Atom Publishing Protocol). But that’s a story for another time.

 

RDF and graph databases

RDF information is expressed in so-called “triples” in the form:

[subject] ---[predicate]---> [object]

where “subject” and “object” entities are nodes and “predicates” are arcs – the lines between them. Triples are stored in “triplestores”, and a more generalised form of a triplestore is … wait for it … a graph database.

To me, there is a fundamental beauty in the tension between the simple structure of triples and the unstructured, or perhaps unrestricted, nature of what you can store with them. It’s quite far from the schema-constrained model of relational data stores. With a graph data store you can store information you hadn’t planned for, and ask questions of the data that you could never have foreseen at the outset. You can imagine an ever-growing network of arcs and nodes, of differing types and properties, being added to a graph database, and new queries being made on that database, filtering on properties and relationships that weren’t even around at the outset.
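To make that flexibility concrete, here’s a minimal sketch of a triplestore in Python. The triples and the query are invented for illustration – the point is that facts of any shape can be added at any time, with no upfront schema, and questions nobody anticipated can still be asked later.

```python
# A minimal triplestore: just a set of (subject, predicate, object) tuples.
triples = set()

def add(s, p, o):
    triples.add((s, p, o))

def query(s=None, p=None, o=None):
    """Return all triples matching the given parts; None acts as a wildcard."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Facts added at different times, with no upfront schema.
add("dj", "wrote", "post-123")
add("post-123", "tagged", "rdf")
add("dj", "plays", "pipe-organ")   # a property nobody planned for

# A question that wasn't foreseen at the outset: what do we know about "dj"?
for s, p, o in query(s="dj"):
    print(s, p, o)
```

A real triplestore adds indexing and a query language (such as SPARQL) on top, but the underlying model really is this simple.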

The graph model behind GraphQL, then, is rather powerful. There are of course other examples of graph data stores – one that comes to mind immediately is Neo4j, which has been around for a long time.

 

Irony

Why is it ironic that RDF, with its closely related graph data store concept and link to GraphQL, is an ancestor of OData?

Because when you move up the stack from the data store to the protocol, things could hardly be more different. Here’s a quick summary of the major differences I see between OData and GraphQL, at a protocol level.

OData treats HTTP as an application protocol and aims for rough parity between OData operations and HTTP methods; GraphQL treats HTTP mainly as a transport protocol.

OData operations are transparent at the HTTP level; GraphQL operations are opaque at the HTTP level.

OData data structures are static, schema-based and fixed at design time; GraphQL responses are shaped dynamically, per query.

OData offers a fixed set of query options; GraphQL offers powerful, composable queries.

OData is easy to reason about from a security perspective; GraphQL is harder to reason about because of the opaque nature of the protocol implementation.

OData exposes endpoints that represent the business data; GraphQL exposes a single endpoint that represents the query “socket”.

 

In some ways GraphQL at the protocol level reminds me of web services. Not the entire WS-Deathstar panoply of specifications — rather, the way that there’s a single endpoint for all operations and all queries. This makes me somewhat uncomfortable. Not because it feels like a return to the bad old days of web services, but because it just doesn’t feel right to me, as an advocate of what HTTP is (an application protocol and arguably the best example of a powerful, distributed web service, but that’s a story for another time).

 

Comparing and contrasting

The OData protocol treats data (entities) as first-class citizens, in that it gives each resource a URL – a URL that can then be semantically described, a URL that is part of a near-infinite set of resources (nouns) with a very finite and predictable set of methods (verbs). Moreover, it comes with a built-in metadata and annotation layer, which consuming clients can use to great effect.

GraphQL, on the other hand, seems to treat data as merely the by-product of a query. If I want to point to some data which I want to describe with its own address, and then further annotate that (even at the RDF level, but perhaps that’s going too far), I’m not sure how I’d do it with the GraphQL protocol.

Talking of GraphQL queries, it would also seem that there’s a URL encoding of the query which to my eyes is rather unappealing. Yes, URLs should be opaque, we’ve touched on that in a previous Monday morning thoughts post, but I still maintain that there’s a pragmatic aspect that means, at least for me, human readable URLs are super useful at the practical level.
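To illustrate the readability point, here’s a small sketch contrasting the two styles. The resource, field and parameter names are invented for illustration; the encoding is just Python’s standard urllib at work.

```python
from urllib.parse import quote

# A hypothetical OData URL: the resource and its query options are
# readable as-is in the browser's URL bar.
odata_url = "/service/Products?$filter=Price lt 20&$select=Name,Price"

# A roughly equivalent GraphQL query (invented field names), URL-encoded
# for transmission as a GET request parameter.
graphql_query = '{ products(filter: {price_lt: 20}) { name price } }'
graphql_url = "/graphql?query=" + quote(graphql_query)

print(graphql_url)
# The encoded form is dominated by %7B, %20 and friends -- hard on human eyes.
```

In practice most GraphQL traffic goes over POST with the query in the body, which sidesteps the encoding but makes the URL say nothing at all about the data being requested.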

And talking of pragmatism and practicality, there’s been a question about GraphQL implementation in SAP systems. The SAP systems with which I’m most acquainted — the successors to the R/2 and R/3 line of products — are based on a relational database design, not a graph design. That’s not to say that things can’t change … indeed, we moved from hierarchical to relational in the late 1980s, when SAP introduced support for IBM’s (then) new relational data store DB2, which eventually superseded the hierarchical DL/1 data store of IMS DB. Moreover, the power and simplicity that HANA brings is not graph-store-based, it’s column-store-based. So I can’t imagine any straightforward conversion any time soon, even if it were the right decision.

That doesn’t mean, however, that we can’t embrace the ideas of GraphQL in different areas. While I can’t imagine a straightforward replacement at the enterprise data store level, I can, for example, more easily imagine an annotation model in UI5 that would support data-driven UIs with Fiori elements, based on a GraphQL-powered backend.

As I said at the start of this post, it’s still very early days for me with GraphQL, and I have a lot more to learn. Rather than seeing GraphQL as any sort of competitor to the status quo, or as simply the new kid on the block that is, by default, better than anything that has come before, I see GraphQL as something wonderful in how it challenges our thinking, reminds us of our past, and adds to the richness of how we consider data and protocol architecture at a high level.

What are your thoughts about GraphQL and what it can bring to our architectural and development landscape? I’d love to hear what they are.

 

Our canal boat, moored this morning between Altrincham and Dunham Massey

 

This post was brought to you from the peace and quiet of a Monday morning here on the Bridgewater Canal, where I’m spending time on a canal boat with M on my birthday today.

 

Read more posts in this series here: Monday morning thoughts.

 

Update 07 Sep 2018: This is the tweet from Chris Paine that started it off: https://twitter.com/wombling/status/1034949320519245824, referring to a tweet by Jeff Handley.

Update 14 Sep 2018: Jeff published a very interesting blog post yesterday: GraphQL is not OData. It’s a super read, with lots of history and balanced thoughts, and I enjoyed it very much. I will have to read it at least one more time for the thoughts to sink in. I’d recommend it heartily.

I certainly agree with the post’s premise – GraphQL is certainly not OData, as you can see from Jeff’s post, but perhaps also from this one. The key takeaway for me so far from Jeff’s post (apart from the title) is that GraphQL and OData can actually live side by side, as GraphQL’s fit seems to be as an intermediary. I need to think about that some more, but for now, let the conversation and education continue!

 

 

 

32 Comments

  1. Fred Verheul

    Happy Birthday DJ, and thanks for digging into GraphQL. It was on my to-do list, but after reading this excellent blog (post ;-)) I think I can safely postpone the deep dive.

    Cheers, Fred

  2. Chris Paine

Hi DJ – and a belated Happy Birthday – thanks for this post, very informative!

I’d add to your comparison chart the ability to optimise a particular query; this is much easier when every entity in the schema isn’t also an “endpoint” that can be hit from almost any angle. This was one of the main points raised by Jeff Handley in his Twitter conversation about the difficulties he faced using OData and why he favoured GraphQL.

And then, of course, I’d link in the other conversation about how Fiori Elements currently only supports OData (and specifically SAP-specific annotations on top of OData).

    Just like last week, last Month, last year, last decade … we live in interesting times where everything is in a state of flux – and that’s awesome!

    Continual learning!

    Cheers,

    Chris

     

    1. DJ Adams
      Post author

      Thanks Chris for always making great contributions to posts – I really appreciate them (and did expect / hope that you’d chime in here too of course).

      You’re right, the question about endpoint optimisation did come up in the conversation. I think the challenge with this, at least to me, is that endpoints are not the only target of optimisation. I’m thinking that having only a single endpoint that every query hits is also a story for optimisation. That’s not to say that either is not suitable for optimisation, it’s just that the optimisation target is a different shape.

On Fiori elements and OData, I know this is something that bothers you, but as the conversation continued with Graham Robinson et al., there’s no reason why this model approach couldn’t be extended. I don’t see why a first implementation of an annotation-driven UI approach should be pilloried for being what it is (the first, and a good example). Are there other annotation-rich data models with wire protocols that are suitable candidates for consideration? I only know of OData, so I know my thinking is limited.

      We do indeed live in interesting times, but as you point out, that’s nothing new! One thing I try to remember to do is be open minded and be driven by what makes sense and works. And as always, that’s within the scope of my own knowledge and understanding.

      To continual learning!

      1. Chris Paine

On the whole topic of UI5 supporting more than OData for annotated data models, it was very heartening to hear from Andreas Kunz about the sap.ui.mdc part of UI5 and how this will explicitly support other annotated data sources … just not quite yet. But then again UI5 is innovating so hard and building so much new stuff, I wouldn’t be surprised to see something come through by this time next year.

It would also make sense, as per Uwe’s comment, that CRM might be a perfect target for GraphQL, so perhaps C/4HANA might drive that space? We shall see.
        1. Chris Paine

And to respond to my placing of Fiori Elements in the stocks and throwing rotten tomatoes at them: my thoughts are that I have rarely seen SAP build out such complex solutions and later retrofit them to new designs. If the sap.ui.mdc framework does allow for other protocols/models to be consumed, then I would only see Fiori Elements being retrofitted if another area of SAP that decided not to use OData (perhaps Concur, or C/4HANA!) had a need to create many UIs that fitted the Elements patterns. They are not part of OpenUI5, after all!

          Perhaps in one year from now, I’m going to be very happy that my extension builds for SuccessFactors can also (easily) leverage some of the great work done in Fiori Elements, without having to resort to building an OData endpoint to do it (hence the “easily”). Let’s see! The way progress is going right now, it could well be the case!

          1. DJ Adams
            Post author

            Totally understand. I share your positive outlook, too. Stepping back for a moment, I tried to imagine a[n SAP] developer world without Fiori elements now, and it’s hard. But it’s only a short time ago that we didn’t have anything like that.

            I must also point out that the whole idea of data driven UIs has me wondering where to place that paradigm – is it inside-out again, or still outside-in, or somewhere in between? (See this old post from (originally) mid 2012 “SAPUI5 – the future direction of SAP UI development?” for context here).

             

      2. Chris Paine

        I should also probably explain some of my experience with OData and why I’m so impressed with the Gateway and HANA teams that have built tooling that can generate OData entities from simple definitions.

        I spent quite some time a few years back building out some tooling that allowed a user to enhance a standard data model and extend it themselves, and then used this new model to provide an OData interface that could be queried by other tools.

Doing this, and using the Olingo OData libs to help me build such a solution, I became very aware of the complexity that supporting the whole OData specification entails. It became obvious very quickly that the only way to build out such functionality was at a generic base level and have this inherited by all areas. Building a specific optimisation for a specific case was very, very hard. There were some queries that were taking far too long to run, just because the code had to follow a generic path, and overriding the generic path without causing side effects was hard.

You can see this in the implementations of the SAP SuccessFactors OData API – there are quirks in their implementation caused by needing to reconcile different internal data representations with a consistent external representation (OData). In most cases they have succeeded; in some, not quite. Their complete disregard for function calls not having side effects is perhaps the most major deviation, but there are others.

        But, for all this, I really like being able to consume data from an OData source – it gives me the power that I might have had if I were to have direct SQL access to the underlying database.

        1. DJ Adams
          Post author

          Definitely, and I can tell your comments are based on direct experience like this. I too like consuming data from an OData source, and I also like the “shape” of what I’m consuming (that’s not to say I therefore dislike the shape of GraphQL, mind you).

          I make this point about consuming, because that’s what the majority of my time with OData has been, as a consumer, rather than as a producer. Traditionally, it’s fair to say, it has taken some effort to build an OData service (outside of the SAP Gateway context) – the Olingo libraries have helped, but it’s not been as simple as pointing a generator at a set of data definitions and saying “Go!”. So I think our collective experience in writing performant OData services “manually” is not as large as it might otherwise be.

          That said, you may be interested to take a look at what the application programming model for SAP Cloud Platform is doing for OData services – I’m a big fan and I’d encourage anyone to look more into this subject. I’ve been manually collecting posts together on this subject with a user tag: https://blogs.sap.com/tag/applicationprogrammingmodel/ .

          1. Chris Paine

            I’m getting there with the application programming model – I’m not entirely sure that I like how it ties my applications into a defined data model. But it does also seem to have the option for a lot of flexibility.

I was having a discussion with my team the other day about the need to differentiate between data access and data presentation, whilst allowing for the optimisation of the former depending on the needs of the latter. It’s a complex space, which again goes back to my desire not to expose my application APIs in a generic format.

            I’m not wedded to, but I do like the BFF (Back-end for Front-end) approach to application development.

            This is something I will be pondering on for the next few months.

            But having got a framework that works (my own multi-tenant JPA adaptor and multi-tenant SuccessFactors OData consumption libraries) it’s going to take a bit of convincing to try out a different approach.

            Cheers

            1. DJ Adams
              Post author

Thanks Chris. That looks like a good article – I’ve saved it for consumption in my Learning Continuum. I know what it feels like to have a framework that works well for you, and of course it takes a good reason to move away to something different. I do think the application programming model can be different things to different folks, but only time will tell. I know for sure that the folks behind the model are super switched on and big thinkers to boot – there’s a lot of consideration going into that area.

              I think you have the “pondering scale” about right, too – things like this take more than days or weeks to consider. I like the idea of “slow food” (and slow canal boats) and perhaps moreso “slow pondering”!

  3. Uwe Fetzer

    Hi DJ,

excellent blog post, as always.

Especially in CRM I can see use cases for GraphQL (or graph databases generally). You can create strange queries which you were not able to imagine at design time of the data model.

If you want to play around with ABAP and the graph database Neo4j, have a look at my old blog post Neo4a: The Neo4j ABAP Connector.

    1. DJ Adams
      Post author

      Hey Uwe, thanks! I didn’t know about your work with Neo4j – thanks for the pointer! Trust you to be hacking on great stuff as always 😉 Cheers!

    1. DJ Adams
      Post author

      Hey Leo, thanks 🙂

      Cheers for the pointer to this podcast, downloading it for a listen.

      [Next day] I listened to (most of) the podcast on my run back from dropping the car off in Denton this morning. Some great chat, and quite a nicely balanced discussion. Of course, there are things that I didn’t quite agree with, but it would have been a very dull podcast if that had not been the case! 

      Some interesting points I picked up (not in any order):

• it’s even more like SOAP than I thought – not only a single endpoint, but the convention seems to be to use POST for everything, and to carry the response code (success, error, etc.) back in the payload (always returning a successful 200 OK at the HTTP header level)
• a single endpoint is indeed the case – the comment made was that it helps developers go faster (I may or may not have said out loud, in between running breaths, “yeah, faster towards joining the other SOAP and SOAP-like protocols in the graveyard”)
      • The GraphQL API layer can be seen perhaps as an abstraction layer, in the logical sense but also in the physical sense, which suggests that latency may be an issue in some circumstances as the resolvers (a GraphQL API term) make HTTP (or other) calls of their own to satisfy the incoming original GraphQL query
      • there doesn’t seem to be any versioning of APIs that made sense to me – just deprecating fields without changing versions explicitly seems a little dangerous to me (to drive this home even more, I’d only just recently listened to the ever wise Rich Hickey in an interview on this podcast episode: Problem Solving and Clojure 1.9 with Rich Hickey, where he’d talked about the dangers of breaking changes)
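The first point above can be illustrated with the response envelope the GraphQL spec defines. Even when part of a query fails, the HTTP layer typically reports 200 OK and the failure travels in-band, in an errors array alongside any partial data. The field names below are invented for illustration; the data/errors envelope itself is the standard shape.

```python
import json

# What a GraphQL server typically sends back with HTTP status 200 OK,
# even though part of the query failed: partial data plus an errors array.
response_body = json.dumps({
    "data": {"hero": {"name": "R2-D2", "friends": None}},
    "errors": [
        {"message": "Could not resolve friends",
         "path": ["hero", "friends"]}
    ]
})

payload = json.loads(response_body)
# The transport says "success"; the application-level failure is in-band,
# so clients (and caches, proxies, monitoring) can't rely on the status code.
print("errors" in payload)
```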

      Anyway, if you, dear reader / commenter, listen to this podcast too, please share your thoughts here – I’d love to see what they are.

       

  4. Tiago Almeida

    Thanks for the really interesting post and discussion 🙂

     

OData and GraphQL aim to solve the same problems as pure REST (e.g. multiple requests, analysis paralysis over the multiple options w.r.t. caching/paging/selection etc.)

    GraphQL pros and cons:

    + Nicer query language (nicer than fiddling with $filter, $select and $expand in the URL)

+ The only real option in the JavaScript ecosystem (there are OData libs for Node but nowhere near as mature). Same in the Python world.

    – Goes on top of HTTP yet ignores all of its principles by using a single method and a single endpoint. This may create problems from a security and performance perspective as you mentioned.

    ? Claimed better tooling. Not sure what these are. Anyone knows?

     

    OData:

    + Clean implementation on top of HTTP.

    – Created by Microsoft when it wasn’t cool. Also, it’s no longer new and shiny and we all know how the IT crowd loves gadgets and new things.

– Hard to implement in key technologies. This is probably the main reason why GQL is so popular. I’m sure Apache Olingo is solid but you need to write thousands of lines of Java to get a server running. Alternatively you could go with C# and Entity Framework, or SAP. All these options scream “Enterprise bloatware” really loudly. If it becomes dead easy to implement an OData server in Node and Python then it may have a chance in the non-enterprise space.

     

    Does SAP need GraphQL ? 

    Not until GraphQL has some killer feature that OData does not. This is not true yet, imho.

     

    Does UI5 need GraphQL?

It doesn’t need it, but it would certainly help the adoption of UI5 and other SAP technologies. For example, if we were to build a UI5 app with a Node backend API in SCP then it would be easier to make that backend in GraphQL. It would be really nice to have the option of a GraphQL Model in UI5.

     

    My 2 cent 🙂

    1. DJ Adams
      Post author

      What a great comment Tiago, thanks very much! I must say you’re spot on with that analysis, it resonates with what my research is telling me too. Especially the part about OData having an “unfortunate” association and also not being shiny any more.

      Your point about the challenges of implementing an OData service are well made, and as I mentioned to Chris Paine earlier in this comment thread, the application programming model has some goodies to offer there.

      Along with Graham Robinson and others, I do also like the prospect of an annotation model in UI5 that could support various transports, including GraphQL perhaps.

      BTW your comment about claimed better tooling is an interesting one. I’m not versed well enough in GraphQL tooling yet, but considering we have Graphiql as a frontend to try out queries against a GraphQL endpoint, I’m wondering (a) whether that also supports CUD operations (would be pretty cool if it did) and (unrelated) (b) whether you really need Graphiql for read and query operations (with OData, you can do read and query operations with the URL bar alone, as many of us here know of course).

    2. Helmut Tammen

      Great answer Tiago and of course a great blog DJ.

I agree with you that implementing an OData server with Node.js is not as easy as implementing a GraphQL server seems to be. A couple of years ago I started to implement a Node OData server but, as you mentioned, mine too is far from being mature.

      I think the two main advantages of GraphQL are

      • The clean query language and
• The broader adoption in the programming world

      So I would say UI5 needs GraphQL to get out of the niche.

      Regards Helmut

      1. DJ Adams
        Post author

        Hey Helmut, great to see you here, thanks for your comment. The GraphQL query language is indeed clean, and I’d go so far as to say rather powerful – because of its basis in graph data storage. I don’t see a reason why we can’t have a GraphQL model mechanism … after all, others have written model mechanisms for UI5 already – for Firebase, for example (paging Tiago Almeida!) and even, ahem, Doge 😉

      2. Robin Panneels

        Great blog and conversation in the comments.

        I think that UI5 might get more traction outside the SAP community if we have a GraphQL model mechanism.

In the React community they all jumped on GraphQL, so it would benefit the UI5 community if we could get even a small percentage of them looking into UI5.

In the Syntax.fm podcast there was also an episode on GraphQL a while back, which was interesting.

I’m not sure, but I thought that with Graphiql you could also try out the CUD operations.

        Robin

        1. DJ Adams
          Post author

          Hey Robin – thanks for the comment! The more I think about it, the more that a wider array of models in UI5 appeal, for sure. I guess it’s just down to getting the right tuits lined up 🙂

          I’m guessing that the jumping onto GraphQL from React happened partly because the two fit together quite nicely, where React was “the V in MVC”. But anything that gets others looking at UI5 can only be a good thing.

          Thanks for the pointer to this Syntax.FM podcast – subscribed!

          At some stage (perhaps after TechEd) I definitely want to find some time to dig into GraphQL more, and my first stop will indeed be Graphiql (and the non-read-only operations). Cheers!

  5. Chuan Miao

The most interesting part of GraphQL, for me, is that it allows you to write nested queries in a declarative way from the client side. I am not very sure how OData implements nested queries, but overall I like the declarative style better than the annotations. It’s just more readable and therefore easier to debug. And also, I think OData leaves little space for the client side.

There is another interesting open source project, PostGraphile, that implements a GraphQL interface directly on top of a PostgreSQL database. It provides similar functionality to CDS views from SAP. But I think there is a similar problem with CDS views as with OData. It would be wonderful to implement GraphQL with HANA.

    1. DJ Adams
      Post author

      Interesting thoughts indeed, thank you! After looking into GraphQL for this post, and considering it subsequently, I do think there is space for both approaches, there are pros and cons to each. I’m not sure what you mean by “I like the declarative style better than the annotations” – can you give an example of what you mean? Also, what do you mean by “there is a similar problem with CDS view as odata”? Just want to understand more.

      1. Chuan Miao

Pardon my lousy English vocabulary, I probably used the wrong word for what I wanted to say.

I really like your summary and the discussion here. I am trying to learn GraphQL myself. It’s a wonderful open source project. The design principles come from tons of discussions from different angles. I am interested to understand more about it.

        so here is what I meant.

Consider aggregation in a query. In a CDS view, one can add an annotation line like this:

        @DefaultAggregation: #MIN
        price as Flightprice

It’s semantic, which is very cool. But the syntax – the ‘@’ and ‘#’ signs – is to me a bit off-putting. And it can hardly be debugged, if it can be debugged at all. You have to treat it as a black box. You would need a completely different approach to build a customised aggregation (experts, please correct me if I am wrong). And when the code base becomes very large, it will be a headache to spot a mistake in those annotation lines.

         

        In graphql, you can build something like this

        query LikeData {
          viewer {
            allArticles(where:{ createdAt: { gt: "2016", lt: "2017" }}) {
              aggregations {
                sum {
                  reads
                }
              }
            }
          }
        }

True, it is less semantic (to be fair, it’s also not too bad if you stick to some naming conventions), and it does indeed take more lines (from the entity “aggregations” to the closing brackets, there are 5 lines). And for the entities “aggregations”, “sum” and “reads”, the user needs to define their own resolvers, which are functions.

It looks like more effort, but one can try to be smart and reuse some of the resolvers, like the ones for “aggregations” and “sum” in the above example. And as a developer, it is immediately clear how many function calls there are. And it’s possible to browse to those functions easily (one could think of building some code-peeking tools).
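[Editorial aside: the resolver idea mentioned above can be sketched without any GraphQL library at all – resolvers are just named functions that compute field values. The data and names below are invented for illustration; real GraphQL servers dispatch resolvers per type and field, with arguments and a schema.]

```python
# A hand-rolled sketch of the resolver idea: each field name maps to a
# function that computes its value; executing a selection walks the map.
ARTICLES = [
    {"reads": 120, "createdAt": "2016-05"},
    {"reads": 80,  "createdAt": "2016-11"},
]

resolvers = {
    # top-level field: fetch the collection
    "allArticles": lambda: ARTICLES,
    # aggregation field: reusable across any list of articles
    "sum_reads":   lambda articles: sum(a["reads"] for a in articles),
}

articles = resolvers["allArticles"]()
total = resolvers["sum_reads"](articles)
print(total)   # 200
```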

        That’s just my 2 cents.

        Cheers,

        Chuan

        1. DJ Adams
          Post author

          Thanks Chuan, that’s a great explanation. It wasn’t your lousy English btw, it was my inability to understand! 🙂 That’s a really nice example, and teaches me how aggregation can be done in GraphQL. I’m still learning (that learning is suspended right now as I have other priorities) but it’s something I’ll come back to and will want to dig into that. Cheers! DJ

  6. Lars Breddemann

    This is another well-written blog post, good work.

    From what I take from it, it is about another abstraction layer that should enable application developers to access data in other processes. That’s fine, of course, and the technology is evidently usable, given that Facebook uses it for their purposes.

    The question then is, of course, if and how this would play any role for developers that build something other than Facebook. Would it make a difference in the productivity or quality of the systems they build?

    When I read that the data access is “declarative” I could not help thinking “oh, yet another declarative language that will be wrongly used most of the time.”

    Having seen my fair share of SQL written by the same developer group that also writes Fiori applications and that commonly cannot wait to get the latest version of tool XYZ into their hands, I have a hard time envisioning that this specific declarative language will have a better fate.

    Maybe it is too much of a mental cliff to switch between pseudo-functional JavaScript or ABAP and SQL, but I’ve yet to see the development team that smoothly switches between the “how-to-compute” and the “what-to-compute” approaches.

    The last few years have seen major turnover in development tools and techniques in the SAP world; I'm not sure how many of the new and shiny tools have actually caught on. What I am sure about, though, is that despite the volumes of material about the functions and features of said tools, there is a massive gap where the explanations of concepts, reasoning and application approaches should be.
    Everyone can explain trivial SQL but barely anyone is able to effectively analyze and improve a query with certainty (running all five hints one recalls doesn’t count).
    Most SAP related developers by now have heard about CDS and about a “VDM” but it’ll be nearly hopeless to get those same developers to explain the ideas and concepts and come up with their own VDMs that incorporate those concepts (without simply copying what they find in S/4).

    In many places development on SAP systems seems to be development by imitation (look at existing code and change it so that it does what you need it to do).
    This is practical and saves a lot of “thinking time” during sprints. Where it does not work too well is when the task is not to change the recipe of how to compute what you want but the description of it. In the declarative world, it’s usually not the optimal solution to say “I’ll have what he’s having – just with fries instead of salad and also with dessert served first”.

    Another aspect that has been mentioned a few times in the comments is that having something like GraphQL in the technology portfolio would help adoption of SAP products because there are “more” developers for GraphQL “out there”.
    Now is that really the case? I doubt it. Non-SAP developers that need to interact with SAP solutions will use whatever they can. Of course, it would be nice if their favourite way to interact were supported, but if it isn’t, this won’t change the requirement for interfacing, and whatever is actually available gets used.
    Also, in my experience the kind of folks who actually know what “GraphQL” or “OData” means are not the ones leading decisions about purchasing systems like SAP.

    I don’t have a great conclusion here apart from the impression that GraphQL for SAP systems looks a lot like another layer of tyres on the good ol’ fire.

     

    1. DJ Adams
      Post author

      Thanks Lars, and gosh, this is a wonderful set of thoughts. It’s a shame it’s “only” within a comment to this blog post – I’d love to see you expand those thoughts in a post of your own, if you have time. A lot of what you write here resonates with me, and I think it would be valuable to share those thoughts with the rest of the community.

      I’ll also be ruminating on what you wrote here, so that when I do get to read more from you, I’ll have a richer consumption experience.

      Thanks! (and no pressure! ;-))

       

      ps thanks for the link to the video – added to my Watch Later list!

