
Recap of Part 6

In Part 6 of this document series, we were introduced to the idea of API Policies.

An API Policy is a container for some specific unit of functionality such as a Quota Check, a Spike Arrest (which prevents a Denial of Service attack), or a script containing custom code written in JavaScript or Python.  However, to have any effect, these policies must be assigned to different processing stages in the request/response cycle.

The Apigee Edge software that powers SAP API Management divides the request/response cycle into two main segments, the Proxy Endpoint segment and the Target Endpoint segment.  Within each of these segments is the same set of processing stages: two that always run, plus an optional third type of stage.

Once you have created an instance of an API Policy and placed within it the details of your own functionality, that policy instance must then be assigned to a specific processing stage of a specific segment.

What we will look at here is how the entire request/response cycle is divided up, first into segments, then within each segment, into processing stages.

Overview of API Policy Flow Routing

The easiest way to understand the processing stages into which the request/response cycle is divided is by means of the diagram below.  As we move through this explanation, more and more detail will be added to the diagram; but to start with, we will simply take a high-level overview.

[Figure: Picture7.png]

The incoming request originates with the client on the left-hand side of the diagram.  It then follows the green arrow in a clockwise path across the top of the diagram, eventually arriving at the backend system.  The response generated by the backend system then follows the grey arrow, still moving in a clockwise direction, and is passed back to the client.

During this cycle, the incoming request moves first through the processing segment called the “Proxy Endpoint”, then on to the “Target Endpoint” segment.  On the way back, the outbound response moves through exactly the same processing segments, but in reverse order.

The first segment is called the Proxy Endpoint segment and it contains all the processing needed for preparing or rejecting the incoming request.  Within this segment are at least two (often three) processing stages.

The first of these stages is called “PreFlow”.

[Figure: Picture8.png]

PreFlow is always the very first stage to be processed in any segment.  Any policies assigned to the PreFlow stage are run unconditionally and are always run first.  This is where we perform the most fundamental checks on the incoming request – for instance, checking that the request contains an API Key.
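As a concrete illustration, an API key check in the PreFlow stage would typically use Apigee’s VerifyAPIKey policy.  The following is a minimal sketch only – the policy name and the query parameter name are illustrative, not taken from any particular proxy:

```xml
<!-- Verify that the incoming request carries a valid API key.
     Here the key is expected in a query parameter called "apikey". -->
<VerifyAPIKey async="false" continueOnError="false" enabled="true" name="Verify-API-Key">
    <APIKey ref="request.queryparam.apikey"/>
</VerifyAPIKey>
```

If the key is missing or invalid, the policy raises a fault and the request never reaches the later stages.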

If all the policies assigned to the PreFlow stage run successfully (i.e. they find no reason to reject the request), then we move on to the next stage, called Condition Flows.

[Figure: Picture9.png]

It is possible for there to be multiple Condition Flow stages.  In fact, if you are consuming an OData service, then you will see that for every collection in the OData service, a specific Condition Flow stage has been generated.

On the inbound request side, the purpose of a Condition Flow stage is to associate a specific set of API Policies with a particular pattern in the URL.

So taking the case of an OData collection, when you perform a READ or GET operation against a collection, the URL contains first the OData service’s base URL, followed by the name of the collection.  This forms a recognisable “signature” in the URL that identifies this request as being targeted at a specific collection.  The name of the collection can then be tested for in the condition string assigned to this Condition Flow.

Using the example of the BusinessPartnerSet collection in the GWSAMPLE_BASIC OData service, the Condition Flow for this OData collection has had the following condition string generated for it.  (The string has been indented and formatted for clarity.)

Condition String for BusinessPartnerSet Condition Flow

(proxy.pathsuffix MatchesPath "/BusinessPartnerSet"    OR
 proxy.pathsuffix MatchesPath "/BusinessPartnerSet/**" OR
 proxy.pathsuffix MatchesPath "/BusinessPartnerSet(**)")

AND

(request.verb = "DELETE" OR
 request.verb = "POST"   OR
 request.verb = "PUT"    OR
 request.verb = "GET")

In other words, if the above condition evaluates to true, then every policy assigned to this processing stage will be executed.  Think of the condition string as a guard clause for a set of policies: the policies will only be executed if the guard clause evaluates to true.

If the condition string attached to a Condition Flow evaluates to false, then the request processing continues on to the next Condition Flow.  If there are no subsequent Condition Flows, then we proceed to the third and final processing stage of the Proxy Endpoint segment.
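In the proxy definition itself, each Condition Flow is represented as a Flow element whose Condition element holds the guard clause.  The sketch below is a simplified illustration – the flow name, condition, and policy name are examples rather than generated configuration:

```xml
<!-- One Condition Flow within the Proxy Endpoint segment.
     The policy steps in <Request> run only if the condition is true. -->
<Flow name="BusinessPartnerSet">
    <Condition>(proxy.pathsuffix MatchesPath "/BusinessPartnerSet" OR
                proxy.pathsuffix MatchesPath "/BusinessPartnerSet/**") AND
               (request.verb = "GET")</Condition>
    <Request>
        <Step><Name>Check-Quota</Name></Step>
    </Request>
    <Response/>
</Flow>
```

Each Condition Flow is tried in the order in which the Flow elements appear.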

[Figure: Picture10.png]

If any policies are assigned to the PostFlow stage, then these are run as the last stage of processing in this segment, and are always run unconditionally.

At this point, all the processing stages of the Proxy Endpoint segment are complete and we must now decide whether or not to pass the request on to the backend system.  This is where the Route Rules come in.

[Figure: Picture11.png]

The Route Rules are where we must make the Go/No Go decision for routing the request to the backend system.

For instance, it is often the case that we do not want to send HTTP requests to the backend system that were made using the HTTP verb HEAD.  The HTTP verb HEAD behaves exactly like GET (in that it makes a request to the backend system for some data) except that in a HEAD request, only the HTTP headers are returned – there is no response body.  So unless the business backend system specifically allows for the HEAD request, it is usually a big waste of processing time to allow HEAD requests to hit the system.

IMPORTANT

SAPUI5 OData models can now request the entire OData Service Document using an HTTP HEAD request.  As part of the response to this request, the ABAP server sends the client a CSRF token needed to validate subsequent OData requests.

Remember that without a valid CSRF token, SAPUI5 OData model objects will not function correctly.  Therefore, if you do want to block HTTP HEAD requests, you should coordinate this action with the SAPUI5 developers to ensure that they switch on the OData model flag disableHeadRequestForToken.

See the OData Model documentation for more details.

Here’s where we can filter out such requests.

The route rules are a simple set of conditions that are processed in strict sequential order.  If a route rule’s condition evaluates to false, that rule is ignored and the next route rule is processed.  The first rule whose condition evaluates to true wins, and the rest are not processed.  Therefore, you should always place the most specific route rules first and the more generic ones last.
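Route rules are defined in the Proxy Endpoint configuration.  The sketch below (rule names are illustrative) blocks HEAD requests by routing them to no target at all, while the final, condition-less rule acts as the generic catch-all:

```xml
<!-- Most specific rule first: a RouteRule with no <TargetEndpoint>
     returns a response to the client without calling any backend. -->
<RouteRule name="Block-HEAD-Requests">
    <Condition>request.verb = "HEAD"</Condition>
</RouteRule>

<!-- Generic rule last: no condition, so it matches everything else. -->
<RouteRule name="default">
    <TargetEndpoint>default</TargetEndpoint>
</RouteRule>
```

If the two rules were reversed, the condition-less rule would always win and the HEAD filter would never fire – hence the “most specific first” advice above.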

If we make it through the route rules (i.e. the request has not been blocked or diverted by one of the more specific rules), then we can be certain that we want this request to pass through to the backend system.

Now we come to the Target Endpoint segment.  At this point in the request/response cycle we can be certain that the incoming request is valid and acceptable and can therefore be passed to the backend system.  So here we perform any processing needed to prepare the request for the specific requirements of the backend system – for instance, converting the request from a set of URL parameters into an XML SOAP request body.
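As a sketch of this kind of preparation, Apigee’s AssignMessage policy can rebuild the request body before it is sent to the backend.  Everything in this example – the policy name, the SOAP body elements, and the query parameter name – is purely illustrative:

```xml
<!-- Replace the request body with a SOAP envelope, copying the
     value of the "partnerId" query parameter into the payload. -->
<AssignMessage name="Build-SOAP-Request">
    <Set>
        <Payload contentType="text/xml">
            <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
                <soapenv:Body>
                    <GetBusinessPartner>
                        <PartnerId>{request.queryparam.partnerId}</PartnerId>
                    </GetBusinessPartner>
                </soapenv:Body>
            </soapenv:Envelope>
        </Payload>
    </Set>
    <AssignTo createNew="false" type="request"/>
</AssignMessage>
```

The curly-brace syntax is Apigee’s flow-variable substitution, so the query parameter value is injected into the payload at runtime.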

[Figure: Picture12.png]

Here we see the same set of processing stages as were present in the Proxy Endpoint segment:

  • Unconditional execution of the zero or more policies assigned to the PreFlow stage
  • Conditional execution of the zero or more policies assigned to the zero or more Condition Flow stages
  • Unconditional execution of the zero or more policies assigned to the PostFlow stage

Now (finally!), the request is passed to the backend system and a response is generated.  This response is picked up by API Management and again we run through the same set of processing stages that we’ve seen before.

[Figure: Picture13.png]

Since we are now dealing with an outbound response, it makes no sense to talk about Route Rules.  Route Rules apply only to the inbound request, not the outbound response.

Finally, the response passes from the Target Endpoint segment to the Proxy Endpoint segment, and the same set of three processing stages are performed.

Remember that any one of these processing stages can have zero or more policies assigned to it; so in many cases, some processing stages might well do nothing.

[Figure: Picture14.png]

As you can see, the possibility exists for you to implement some very comprehensive processing within an API Proxy.

In the next document, we will look at the specific details of the individual API Policies and how they can be assigned to different processing stages in order to implement the desired functionality.

Part 8 – Understanding the API Policy Designer

Chris W


6 Comments


  1. Kai-Christoph Mueller

    Hi Chris,


    thank you very much for this clear explanation of the flow.

    I have a question:


    Thinking of the processing flow of a request/response I am wondering why there are two segments I can insert policies for handling the response from the target endpoint (first target segment then proxy segment, both for the response). Isn’t this redundant?


    Thank you in advance and best regards,

    kc

    1. Chris Whealy Post author

      Hi KC

      At first glance it might seem to be overly complex, but this complexity is a consequence of the fact that Apigee has designed this software to give you maximum flexibility.

      Consider the following.  If you split the request handling processing into two stages (separated by Route Rules), then it allows you first to filter out invalid or unacceptable requests, and then secondly, only to process the ones you know you want.

      1. In the Proxy EndPoint segment, you need to answer the following question “Do I want this request to hit one of my backend systems?”

        By implementing the required policies, you can arrive at a simple Yes/No answer.

        If your answer is “No”, then you can abort the entire request and shield your backend systems from having to handle unnecessary requests.

        If however, you answer “Yes” then you proceed to part two.

      2. In the Target EndPoint segment, you know that the incoming request is both valid and acceptable, and should therefore be directed to one of your backend systems.

        Now you can implement those policies that perform any required backend-specific processing.

        For instance, you might need to transform the request from JSON to SOAP format, or extract an ad campaign identifier from the query string and fire off another request to a second backend system that analyzes ad clicks…

      Hopefully, this gives you an idea of the rationale behind the architecture?

      Regards

      Chris W

      1. Kai-Christoph Mueller

        Hi Chris,

        thanks for the answer.

        I was more thinking of the phase where a response was already received from the target endpoint.

        At that point to my understanding there are two sections, which look pretty much the same:

        Response handling sections for

        Target Endpoint and Proxy Endpoint (each with their pre, cond, post flow parts).

        Am I missing something?

        Thank you,

        kc

        1. Chris Whealy Post author

          Hi KC

          Remember that the policies assigned to the outbound stream are designed to handle many more situations than simply the response generated by the backend system.

          It is entirely possible that the API Management layer will need to create a response in the event of some policy failure.  For instance, if a quota limit policy assigned to the preflow stage of the proxy endpoint segment fails, then it can set an internal error flag (of your own making) and allow processing to continue.  Then this error flag will be picked up later by a general purpose policy that traps errors and generates a meaningful HTTP response, rather than just the generic HTTP 500.

          So there are several situations where response processing might be required.  This list is certainly not complete…

          1. Format conversion of the backend response – e.g. JSON to SOAP, or SOAP to JSON
          2. Injection of extra information into the response that comes from a different system – e.g. targeted adverts as part of an ad campaign
          3. URL rewriting to mask internal host and path names
          4. Setting the correct HTTP response code in the event of an earlier policy failure

          Although it may not be obvious at first, I think that as you start to get into the details of building API proxies, you will discover many situations in which this high degree of flexibility will prove useful.

          Regards

          Chris W

  2. Murali Shanmugham

    Hi Chris,

    Firstly, thanks for the blog series. It is very useful.

    I agree with KC. There are just too many places to add policies. I understand it’s how it was designed and gives more flexibility and control when handling APIs.

    In order to make it easy for API developers, it would be good to have the following:

    1) A best-practice document which lists several examples of the types of policies that can be used for each stage within a segment.

    2) When I am in the segment “ProxyEndPoint” and select the stage “Preflow”, the system should only provide a set of policies which are applicable for this stage. As explained by you, it makes sense to use “verifyAPIKey” policy in this stage, but may not be a good choice to have “XML to JSON” policy for this stage. This probably will help the API developers to assign the policies to the right stages.

    Cheers,

    Murali.

    1. Chris Whealy Post author

      Hi Murali

      I agree that at first, the number of places where you can add policies seems to be too flexible (I thought exactly the same thing when I first started looking at Apigee Edge), and it is indeed true that a beginner can potentially create muddled policy definitions that lead to self-contradictory or nonsensical configuration.  (Been there, done that! 😉 )

      However, as I said to KC, as you get into the details of building real life API proxies, you will start to discover that the apparent excessive flexibility actually provides you with precisely the tools needed to handle the subtle details of real life.


      For instance, Apigee Edge provides you with a cache facility so that you can provide answers from a cache rather than the backend system.  If you choose to implement these caching features, then you are most certainly going to need to place a much greater quantity of functionality into the outbound stream segments of your proxy definition than simply basic URL rewriting.

      However, if we take a step back and look at the big picture, it is still relatively early days for the API Management tools in HCP; so over time, you will see more documentation and training material become available – and this will include some “real life” scenarios in which it will be very difficult to implement the required functionality without this high degree of flexibility.

      Regards

      Chris W

