Developing GraphQL API on ABAP: Architecture and Technical Details
This article, the second in the series, explains the technical details and includes a few screenshots as a preview. The background (and the boring stuff) is covered in the first part.
Before going into the details, here are a few key features of the product vision I considered:
- Leverage what SAP/ABAP already offers, e.g. ICM and OOP
- Modularize the implementation
- Independent JSON converters
- GraphQL Service Handler knows nothing about the schema
- Resolvers can be independent of Schema generation logic
- BOL Resolvers to generate a dynamic schema and resolve all CRUD operations using GenIL operations
- Schema can be saved to Database Tables to be re-loaded/configured manually
- SAP Sessions to improve performance with BOL and to retain the schema across requests
- Testable Code and ABAP Test Scripts for code coverage
How the server works, end to end:
- The GraphQL server is exposed as an ICF service from ABAP, which accepts and returns JSON data
- The ICF handler class takes the JSON string, converts it into the ABAP GraphQL schema classes, and validates it
- The schema is built only on the first request when the session is enabled
- The user's query is parsed and an Execution Context is created
- The resolver class is called to identify the individual resolvers
- A resolver goes through BOL/GenIL or APIs, or talks directly to the database, to fetch the data
- The data is submitted back to the Execution Context as JSON classes
- Finally, the JSON classes are validated and converted to a JSON string as the ICF service response
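The same pipeline can be sketched in a few lines of Python for illustration. This is a deliberately naive toy (the query "parsing" is a string hack, and all names here are hypothetical); the real implementation lives in the ABAP classes described later in this article:

```python
import json

# Toy resolver registry: maps a field name to a function that fetches data.
# In the ABAP implementation this role is played by the BOL/GenIL, API,
# or direct-DB resolvers.
RESOLVERS = {
    "ping": lambda args: "pong",
}

def handle_request(body: str) -> str:
    """Mimics the ICF handler: JSON string in, JSON string out."""
    request = json.loads(body)                           # 1. parse the JSON payload
    field = request["query"].strip("{} ").split("(")[0]  # 2. (very naive) query parsing
    try:
        data = {field: RESOLVERS[field](request.get("variables", {}))}  # 3. resolve
        return json.dumps({"data": data})                # 4. serialize the response
    except KeyError:
        # Unknown fields surface in the standard GraphQL "errors" array.
        return json.dumps({"errors": [{"message": f"Cannot query field '{field}'"}]})

print(handle_request('{"query": "{ ping }"}'))  # → {"data": {"ping": "pong"}}
```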
Accessing the API
Note: this project is currently under active development; watch this space for updates and code.
MYSAPSSO2 token for authentication
The service follows standard SAP authentication, i.e. a MYSAPSSO2 token passed as a header/cookie in the request.
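From outside SAP, a call could be prepared like this. A sketch only: the SICF endpoint path, host, and field names are assumptions, and the actual node name depends on how the handler is registered:

```python
import json
import urllib.request

# Hypothetical SICF endpoint; the real path depends on the service registration.
url = "https://sap-host:8443/sap/bc/zgraphql"

query = '{ businessPartners(searchTerm: "ACME", maxHits: 10) { partnerNumber name1 } }'
payload = json.dumps({"query": query}).encode("utf-8")

req = urllib.request.Request(url, data=payload, method="POST")
req.add_header("Content-Type", "application/json")
# Standard SAP SSO: the MYSAPSSO2 ticket travels as a cookie on the request.
req.add_header("Cookie", "MYSAPSSO2=<ticket>")

# urllib.request.urlopen(req) would send it; omitted here, since no server is assumed.
```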
The GraphQL API provides API documentation by default, accessible through schema (introspection) queries. This gives the frontend team a browsable contract showing which fields and operations the API supports.
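For example, a client can fetch the contract with a standard introspection query. The query shape below comes from the GraphQL specification itself, not from this implementation; the sample response is trimmed and illustrative:

```python
import json

# Standard GraphQL introspection: list the root query type and all schema types.
introspection_query = """
{
  __schema {
    queryType { name }
    types { name kind }
  }
}
"""
payload = json.dumps({"query": introspection_query})

# A trimmed, illustrative sample of the response shape:
sample_response = {
    "data": {
        "__schema": {
            "queryType": {"name": "Query"},
            "types": [{"name": "BusinessPartner", "kind": "OBJECT"}],
        }
    }
}
type_names = [t["name"] for t in sample_response["data"]["__schema"]["types"]]
print(type_names)  # → ['BusinessPartner']
```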
Queries — using BP Advanced Search
A query is the way to read data from the API, comparable to an HTTP "GET" operation. A query can take arguments that filter the results, and it ends by requesting exactly the fields the UI needs.
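For illustration, a search query of this shape passes its filters as arguments (here via variables) and selects only the fields the UI needs. The field and argument names below are hypothetical; the real ones come from the generated schema:

```python
import json

# Arguments filter the result set; the selection set limits the returned fields.
query = """
query BPSearch($name: String, $maxHits: Int) {
  businessPartnerSearch(name: $name, maxHits: $maxHits) {
    partnerNumber
    firstName
    lastName
    city
  }
}
"""

payload = json.dumps({
    "query": query,
    "variables": {"name": "ACME*", "maxHits": 20},
})
```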
BP Advanced Search Results
Here we can see the results sent back by the API, which match exactly the 4 fields requested from the server.
Mutations — Creating BP with Deep Data
Mutations are the way to change data on the server. Here we create a BP together with child relations such as Address and Marketing Attributes.
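On the wire, such a deep create could look like the sketch below, with the child entities nested inside a single input object. All field, input, and type names here are hypothetical:

```python
import json

# One mutation creates the BP header together with its nested child entities.
mutation = """
mutation CreateBP($bp: BusinessPartnerInput!) {
  createBusinessPartner(input: $bp) {
    partnerNumber
  }
}
"""

variables = {
    "bp": {
        "firstName": "Jane",
        "lastName": "Doe",
        "addresses": [{"city": "Walldorf", "country": "DE"}],            # child relation
        "marketingAttributes": [{"attribute": "SEGMENT", "value": "A"}],  # child relation
    }
}

payload = json.dumps({"query": mutation, "variables": variables})
```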
Mutations — Creating Child data of BP
Mutations — Updates and Deletes
At a Glance
- ZGRAPHQL — Main Package
- ZGRAPHQL_LANG — contains all the Language/Schema Implementation classes
- ZGRAPHQL_API — contains base Service classes to expose CRM data as API
- ZGRAPHQL_CORE — Base Server Classes
Message Class — ZGRAPHQL
- responsible for creating the GraphQL Schema (Type System)
- can create either from Internal Tables or from Database Tables
- Validates and Creates Schema Object
- loads all required components into Memory
- Builds Introspection Resolver and Holds User Resolver
- Base SICF handler class to handle basic functionality of the GraphQL server
- All sub-classes are only expected to provide the schema from the PREPARE_SCHEMA method, which builds the schema using the given ABAP table names
- Class to resolve all Business Partner related BOL operations
- ZGQLCL_BUIL_CUST_RESOLVER is a specialization which allows restricting the exposed BOL entities and relationships, and mapping the relationship names
- Resolver to provide links to Operation and Data
- All implementation classes must provide the schema to be initiated and a resolving method for each operation
- responsible for resolving Schema Introspection queries
- Base exception class, available globally, to throw errors
- The server automatically handles this exception and passes it to the client as the “errors” array in the response
- can handle BOL errors and BAPI errors
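On the client side, this means every response should be checked for the standard errors array before reading data. The errors-array shape is defined by the GraphQL specification; the sample message below is made up:

```python
import json

def unwrap(response_body: str):
    """Raise if the GraphQL response carries errors, otherwise return its data."""
    response = json.loads(response_body)
    if response.get("errors"):
        # Per the GraphQL spec, every errors entry has at least a "message".
        raise RuntimeError("; ".join(e["message"] for e in response["errors"]))
    return response["data"]

# A failed BAPI call, for example, could surface like this (message invented):
failed = '{"errors": [{"message": "Partner 4711 does not exist"}]}'
try:
    unwrap(failed)
except RuntimeError as e:
    print(e)  # → Partner 4711 does not exist
```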
- the class is responsible for keeping the current Context until the end of the operation execution
- Validates the current Operation submitted by Client before starting the execution
- holds Schema, Operation currently being executed, Selections requested by Client
- the class is responsible for executing Queries and Mutations and for handling Errors
- the method resolves Query operations
- the method performs Mutation (update) operations
- method to resolve each field requested by the Client
- recursively resolves each Custom Type until it reaches Leaf Nodes
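The recursive descent over the selection set can be sketched as below. This is a simplified illustration with hypothetical names, not the ABAP method itself:

```python
def resolve_field(value, selections):
    """Resolve one value against the client's selection set.

    Leaf nodes (scalars) are returned as-is; custom types recurse into
    their selected sub-fields, mirroring the executor described above.
    """
    if not selections:                      # leaf node: a scalar was reached
        return value
    result = {}
    for field, sub_selections in selections.items():
        child = value.get(field)
        if isinstance(child, list):         # list types: resolve every element
            result[field] = [resolve_field(c, sub_selections) for c in child]
        else:
            result[field] = resolve_field(child, sub_selections)
    return result

bp = {"partnerNumber": "4711",
      "addresses": [{"city": "Walldorf", "street": "Main St"}]}
selections = {"partnerNumber": {}, "addresses": {"city": {}}}
print(resolve_field(bp, selections))
# → {'partnerNumber': '4711', 'addresses': [{'city': 'Walldorf'}]}
```

Note how the unrequested `street` field is dropped: only the client's selections survive the descent.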
- used for logging the requests to the GraphQL server
- logs Request Operations, Response Errors
- Transaction: SLG0, Object — ZGRAPHQL, Sub Object — ZGQL_GEN
- utility functions for reading AppSet Entries and Domain Values
The benefits I observe from this approach are:
- Higher Abstraction of the API Implementation
- Easy Integration using widely popular toolchain on UI running outside SAP
- Above all, performance can be tuned within SAP, since the logic runs close to the database
Even though I started this as an exploratory project, spending my evenings and holidays on it without knowing whether I could finish it on my own, after seeing the final result I asked my manager for some time for a small demo.
I had no expectations, but with his CRM experience my manager immediately recognized the potential of this solution and the advantages our organization could gain from exposing CRM data using GraphQL. With his support, I managed to present it to higher management.
And today we are experimenting with and building unified, enterprise-level services for multiple web applications, with minimal friction and high performance. The feedback from the web developers: fast development times, ease of use, and more options when choosing client libraries.
Other sources to track this topic.