Part 7 – Understanding API Policy Flow Routing
Recap of Part 6
In Part 6 of this document series, we were introduced to the idea of API Policies.
The Apigee Edge software that powers SAP API Management divides the request/response cycle into two main segments: the Proxy Endpoint segment and the Target Endpoint segment. Within each of these segments, there is the same set of two (and optionally three) processing stages.
Once you have created an instance of an API Policy and configured it with the details of your own functionality, that policy instance must then be assigned to a specific processing stage of a specific segment.
What we will look at here is how the entire request/response cycle is divided up, first into segments, then within each segment, into processing stages.
Overview of API Policy Flow Routing
The easiest way to understand the processing stages into which the request/response cycle is divided is by means of the diagram below. As we move through this explanation, more and more detail will be added to the diagram; but to start with, we will simply take a high-level overview.
The incoming request originates with the client on the left hand side of the diagram. It then follows the green arrow in a clockwise path across the top of the diagram, eventually arriving at the backend system. The response generated by the backend system then follows the grey arrow, still moving in a clockwise direction and is passed back to the client.
During this cycle, the incoming request moves first through the processing segment called the “Proxy Endpoint”, then on to the “Target Endpoint” segment. On the way back, the outbound response moves through exactly the same processing segments, but in reverse order.
The first segment is called the Proxy Endpoint segment and it contains all the processing needed either for preparing or rejecting the incoming request. Within this segment are at least two (often three) processing stages.
The first of these stages is called “PreFlow”.
PreFlow is always the very first stage to be processed in any segment. Any policies assigned to the PreFlow stage are run unconditionally and are always run first. This is where we perform the most fundamental checks on the incoming request – for instance, checking that the request contains an API Key.
If all the policies assigned to the PreFlow stage run successfully (i.e. they find no reason to reject the request), then we move on to the next stage, called Condition Flows.
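The behaviour described above can be sketched in Python. This is purely an illustrative model, not real Apigee configuration (real policies are defined in XML); the `verify_api_key` policy and the request shape are invented for the example:

```python
def verify_api_key(request):
    # Hypothetical PreFlow policy: reject any request lacking an API key header
    return "apikey" in request.get("headers", {})

def run_preflow(request, policies):
    # PreFlow policies run unconditionally, in order, before anything else
    for policy in policies:
        if not policy(request):
            return False   # request rejected; later stages never run
    return True            # all checks passed: continue to the Condition Flows

ok = run_preflow({"headers": {"apikey": "abc123"}}, [verify_api_key])
rejected = run_preflow({"headers": {}}, [verify_api_key])
```

The point of the sketch is the control flow: every PreFlow policy runs, and the first one that fails stops the whole cycle.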
It is possible for there to be multiple Condition Flow stages. In fact, if you are consuming an OData service, then you will see that for every collection in the OData service, a specific Condition Flow stage has been generated.
On the inbound request side, the purpose of a Condition Flow stage is to associate a specific set of API Policies with a particular pattern in the URL.
So taking the case of an OData collection, when you perform a READ or GET operation against a collection, the URL contains first the OData service’s base URL, followed by the name of the collection. This forms a recognisable “signature” in the URL that identifies this request as being targeted at a specific collection. The name of the collection can then be tested for in the condition string assigned to this Condition Flow.
Using the example of the BusinessPartnerSet collection in the GWSAMPLE_BASIC OData service, the Condition Flow for this OData collection has had the following condition string generated for it. (The string has been indented and formatted for clarity; the OData collection name appears in the MatchesPath clause.)

Condition String for the BusinessPartnerSet Condition Flow:

(proxy.pathsuffix MatchesPath "/BusinessPartnerSet") and
  ((request.verb = "DELETE") or
   (request.verb = "POST") or
   (request.verb = "PUT") or
   (request.verb = "GET"))
In other words, if the above condition evaluates to true, then all of the policies assigned to this processing stage will be executed. Think of the condition string as a guard clause for a set of policies: the policies will only be executed if the guard clause evaluates to true.
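The guard-clause behaviour can be modelled in Python. Again, this is only a sketch: the real MatchesPath operator supports wildcard patterns, which the simple equality test below does not attempt to reproduce, and the `log_policy` function is invented for the example:

```python
def business_partner_guard(path_suffix, verb):
    # Simplified stand-in for the generated condition string above
    return (path_suffix == "/BusinessPartnerSet"
            and verb in ("DELETE", "POST", "PUT", "GET"))

def run_condition_flow(path_suffix, verb, policies):
    # The guard clause decides whether this flow's policies run at all
    if not business_partner_guard(path_suffix, verb):
        return []                                  # guard false: flow skipped
    return [policy(path_suffix, verb) for policy in policies]

log_policy = lambda path, verb: f"ran for {verb} {path}"
hit = run_condition_flow("/BusinessPartnerSet", "GET", [log_policy])
miss = run_condition_flow("/SalesOrderSet", "GET", [log_policy])
```

A request for a different collection (the `miss` case) leaves this Condition Flow untouched and processing moves on to the next one.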
If the condition string attached to a Condition Flow evaluates to false, then the request processing continues on to the next Condition Flow. If there are no subsequent Condition Flows, then we proceed to the third and final processing stage of the Proxy Endpoint segment.
If any policies are assigned to the PostFlow stage, then these are run as the last stage of processing in this segment, and they are always run unconditionally.
At this point, all the processing stages of the Proxy Endpoint segment are complete and we must now decide whether or not to pass the request on to the backend system. This is where the Route Rules come in.
The Route Rules are where we must make the Go/No Go decision for routing the request to the backend system.
For instance, it is often the case that we do not want to send requests to the backend system that were made using the HTTP verb HEAD. HEAD behaves exactly like GET (in that it asks the backend system for some data), except that only the HTTP headers are returned – there is no response body. So unless the backend system specifically caters for HEAD requests, allowing them to hit the system is usually a waste of processing time.
SAPUI5 OData models can now request the entire OData Service Document using an HTTP HEAD request. As part of the response to this request, the ABAP server sends the client a CSRF token needed to validate subsequent OData requests.
Remember that without a valid CSRF token, SAPUI5 OData model objects will not function correctly. Therefore, if you do want to block HTTP HEAD requests, you should coordinate this action with the SAPUI5 developers to ensure that they switch on the OData model flag disableHeadRequestForToken.
See the OData Model documentation for more details.
Here’s where we can filter out such requests.
The route rules are a simple set of conditions that are processed in strict sequential order. If a route rule’s condition evaluates to false, that rule is skipped and the next route rule is processed. The first matching rule wins and the remaining rules are not processed. Therefore, you should always place the most specific route rules first and the more generic ones last.
If we make it through the route rules (i.e. all the filtering route rules evaluate to false), then we can be certain that we want this request to pass through to the backend system.
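The first-match-wins behaviour can be sketched as follows. This is an illustrative model only; the rule set, the use of None to mean “do not forward”, and the default target name are all invented for the example:

```python
def route(request, rules, default_target="backend"):
    # Rules are checked in strict sequential order; the first rule whose
    # condition evaluates to true wins and the rest are never evaluated
    for condition, target in rules:
        if condition(request):
            return target
    return default_target        # no rule matched: pass through as normal

# Hypothetical rule set: one specific rule filtering out HEAD requests
# (a target of None meaning "do not forward to the backend")
rules = [
    (lambda r: r["verb"] == "HEAD", None),
]

blocked = route({"verb": "HEAD"}, rules)
forwarded = route({"verb": "GET"}, rules)
```

Because evaluation stops at the first match, putting a very generic rule ahead of this HEAD filter would prevent the filter from ever running – hence the “most specific first” advice above.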
Now we come to the Target Endpoint segment. At this point in the request/response cycle we can be certain that the incoming request is valid and acceptable and can therefore be passed to the backend system. So here we perform any processing needed to prepare the request for the specific requirements of the backend system – for instance, converting the request from a set of URL parameters into an XML SOAP request body.
Here we see the same set of processing stages as were present in the Proxy Endpoint segment:
- Unconditional execution of the zero or more policies assigned to the PreFlow stage
- Conditional execution of the zero or more policies assigned to the zero or more Condition Flow stages
- Unconditional execution of the zero or more policies assigned to the PostFlow stage
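The three stages of a single segment can be combined into one sketch. As before, this is only an illustrative Python model of the stage ordering, with policy names invented for the example:

```python
def run_segment(request, preflow, condition_flows, postflow):
    # One segment = unconditional PreFlow, guarded Condition Flows,
    # unconditional PostFlow – always in that order
    executed = list(preflow)                      # PreFlow: always run, first
    for guard, policies in condition_flows:       # each flow has a guard clause
        if guard(request):
            executed.extend(policies)
    executed.extend(postflow)                     # PostFlow: always run, last
    return executed

order = run_segment(
    {"verb": "GET"},
    preflow=["verify-api-key"],
    condition_flows=[(lambda r: r["verb"] == "GET", ["get-mapping"])],
    postflow=["write-log"],
)
```

Running the same segment with a request whose verb matched no guard would execute only the PreFlow and PostFlow policies – which is exactly the “some processing stages might well do nothing” case mentioned below.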
Now (finally!), the request is passed to the backend system and a response is generated. This response is picked up by API Management and again we run through the same set of processing stages that we’ve seen before.
Since we are now dealing with an outbound response, it makes no sense to talk about Route Rules. Route Rules apply only to the inbound request, not the outbound response.
Finally, the response passes from the Target Endpoint segment to the Proxy Endpoint segment, and the same set of three processing stages are performed.
Remember that any one of these processing stages can have zero or more policies assigned to it; so in many cases, some processing stages might well do nothing.
As you can see, the possibility exists for you to implement some very comprehensive processing within an API Proxy.
In the next document, we will look at the specific details of the individual API Policies and how they can be assigned to different processing stages in order to implement the desired functionality.