
In this blog post I’ll outline how one can implement coarse-grained access control with the help of SAP API Management. The code examples can be found in the serban-petrescu/sapim-scopes GitHub repository; instructions on how to get it up and running are in the repository’s README file.


Let’s say that you have an API that you want to protect with OAuth. If you can’t change the API itself, the API Manager could be a viable solution for achieving this.

Generally, it is a good idea to rely on some other mechanism to create the tokens themselves, like for example the SCP NEO OAuth Service or the SCP CF XSUAA Service. Only the enforcement of the access control should be done on API Management.

In the case of Cloud Foundry, you even have the possibility of forcing all requests towards an app to go through the API Manager via the API Management route service. So you might handle this cross-cutting concern centrally for a suite of microservices instead of spreading the logic across each microservice.

Access Control

Access control can be boiled down to three big components:

  • Authentication: the requests coming in must be authenticated. When we talk about OAuth, this translates to “the requests must have an OAuth token attached”.
  • Coarse-grained: the requester must have access to the URL / verb combination used in the request. In OAuth, we can use scopes to model these permissions. For example, one could say that a requester with a “ReadBooks” scope may make “GET /books” HTTP requests.
  • Fine-grained: the requester must have access to each individual resource that they want to work with. For example, the European regional manager of a multinational company may not access the employee information of North American employees. This can also be modelled through OAuth via claims (~ attributes).

We’ll talk a little about coarse-grained access control and how to implement it with the API Manager in the next chapters.

JSON Web Tokens

We will work with JWT OAuth tokens, which can be decoded to extract the OAuth claims without having to call the Authorization Server. Such a token looks like so:


It has three components separated by dots: the header, the body (containing the claims) and the signature (which is used to check that the token was truly issued by the authorization server). We are interested in the body for doing the coarse-grained access control.

The header and the body are Base64 encoded JSON strings. Decoding the example token from above results in the following body:

  "scope": ["HttpBin.Read", "HttpBin.Create"],
  "cid": "my-client-id",
  "grant_type": "authorization_code",
  "user_id": "test",
  "user_name": "",
  "exp": 9999999999
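Since the body is just Base64-encoded JSON, it can be decoded without any cryptography. A Node.js sketch (the token here is built inline purely for illustration):

```javascript
// Build an illustrative unsigned token of the form header.body.signature:
var body = { scope: ["HttpBin.Read", "HttpBin.Create"], user_id: "test" };
var token = Buffer.from(JSON.stringify({ alg: "none" })).toString("base64") + "." +
    Buffer.from(JSON.stringify(body)).toString("base64") + ".signature";

// Decode the second (body) segment and parse the claims:
var claims = JSON.parse(Buffer.from(token.split(".")[1], "base64").toString("utf8"));
console.log(claims.scope); // [ 'HttpBin.Read', 'HttpBin.Create' ]
```

Note that no signature check happens here; in a real setup the token must still be verified against the authorization server's key.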

The Proxy

Our example API Proxy will secure the “anything” endpoint of httpbin.org. We want to have the following coarse-grained access control scopes:

  • HttpBin.Read:
    • GET /entities/* (any read on the entities)
    • POST /entities/search
  • HttpBin.Create:
    • POST /entities
    • PUT /entities/*

We store this information (the mapping between scopes and access control rules – we’ll call these entries “specs” from now on) in a key value map, to be able to configure it more flexibly.
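Concretely, the key value map entry holds a JSON array of specs. A sketch of what one entry could look like (the field names match what the JavaScript policy shown later expects):

```json
[
  {
    "scope": "HttpBin.Read",
    "exact": true,
    "patterns": [
      { "verb": "GET", "url": "^/entities/?.*$", "exact": false },
      { "verb": "POST", "url": "/entities/search", "exact": true }
    ]
  }
]
```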


Our policy will have four main steps, modelled as individual policies:

  1. [KeyValueMapOperations] readSpecs: Read the key value map containing the specs.
  2. [ExtractVariable] extractToken: Extract the OAuth token from the header.
  3. [JavaScript] checkScopes: Parse the OAuth token and check the scopes against the specs.
  4. [RaiseFault] raiseFault: Respond with an error message if the check has failed.
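SAP API Management policies use the Apigee-style XML policy format. As an illustration of step 2, the extractToken policy could look roughly like this (a sketch; the exact policy used is in the repository):

```xml
<ExtractVariables async="false" continueOnError="false" enabled="true" xmlns="http://www.sap.com/apimgmt">
    <Source>request</Source>
    <Header name="Authorization">
        <Pattern ignoreCase="true">Bearer {token}</Pattern>
    </Header>
    <VariablePrefix>ro.spet</VariablePrefix>
    <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
</ExtractVariables>
```

With the “ro.spet” prefix, the extracted value becomes available to later policies as the ro.spet.token flow variable.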


We can already see that there will be two types of specs: exact ones (e.g. “POST /entities/search”) and “fuzzy” ones (e.g. “GET /entities/*”). For writing up the “fuzzy” ones, we simply go with a RegExp for specifying the URL. To cover tokens from authorization servers that auto-generate scope prefixes (like the XSUAA), I’ve decided to also allow for the possibility of using a RegExp for matching the scopes.
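For instance, an XSUAA-issued token might carry a prefixed scope such as “myapp!t123.HttpBin.Read” (the prefix here is made up); a non-exact spec can match it with a RegExp:

```javascript
// A "fuzzy" spec: the scope name is a RegExp pattern, not an exact string.
var spec = { scope: "^.+\\.HttpBin\\.Read$", exact: false };

// Scopes as they might appear in an XSUAA token (the prefix is illustrative):
var scopes = ["myapp!t123.HttpBin.Read"];

// String.prototype.match accepts a string and compiles it into a RegExp:
console.log(scopes[0].match(spec.scope) !== null); // true
```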

As a result, each spec will have the following components:

  • The scope name.
  • A flag indicating if this name is exact.
  • An array of URL + verb patterns, each consisting of:
    • The HTTP verb (e.g. GET, POST) or the “*” wildcard.
    • A URL or a RegExp for it.
    • A flag indicating if the URL is exact (i.e. if it is not a RegExp).


We will use the sapim library for deployment, so we can easily write the specs into a YAML manifest file. The library will take care of creating a key-value map out of it for us.

We don’t really have any need for placeholders, so we won’t template the API Proxy files. The resulting manifest looks like the following:

  name: oauth-http-bin
  path: ./src/
  templated: false
  specs:
    - scope: HttpBin.Read
      exact: true
      patterns:
        - verb: GET
          url: ^/entities/?.*$
          exact: false
        - verb: POST
          url: /entities/search
          exact: true
    - scope: HttpBin.Create
      exact: true
      patterns:
        - verb: POST
          url: /entities
          exact: true
        - verb: PUT
          url: ^/entities/.+$
          exact: false


Let’s dig into the core logic of the API Proxy: the checkScopes JavaScript policy. It relies on a single .js file which does the following steps:

  • Parses the token and retrieves the scopes.
  • Parses the specs and retrieves all the other necessary information (like HTTP verb, url).
  • Checks if any spec matches the scopes.

For parsing the token, we simply do the following:

// note that atob needs to be polyfilled in the API Manager
// also, HttpException is a custom made Error JS object
function getScopesFromToken(token) {
    try {
        return JSON.parse(atob(token.split(".")[1])).scope || [];
    } catch (e) {
        throw new HttpException(403, "Forbidden", "OAuth token missing or malformed.");
    }
}

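The API Manager's JavaScript runtime has no built-in atob, so it has to be polyfilled. A minimal sketch, based on a common public-domain variant (standard Base64 alphabet only; no handling of the URL-safe alphabet):

```javascript
// Base64 alphabet used for decoding.
var B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

// Decode a Base64 string into a binary string, ignoring padding.
function atob(input) {
    var str = String(input).replace(/=+$/, ""), output = "";
    for (var bc = 0, bs = 0, buffer, idx = 0; (buffer = str.charAt(idx++)); ) {
        buffer = B64.indexOf(buffer);
        if (buffer === -1) { continue; }           // skip unknown characters
        bs = bc % 4 ? bs * 64 + buffer : buffer;   // accumulate 6 bits per character
        if (bc++ % 4) {                            // emit a byte for every group after the first
            output += String.fromCharCode(255 & (bs >> ((-2 * bc) & 6)));
        }
    }
    return output;
}

console.log(atob("aGVsbG8=")); // hello
```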
Then we need a helper function for checking that there is at least one OAuth token scope matching a given spec:

function scopeExists(spec, scopes) {
    for (var i = 0; i < scopes.length; ++i) {
        if (spec.exact ? scopes[i] === spec.scope : scopes[i].match(spec.scope)) {
            return true;
        }
    }
    return false;
}

We also need a couple of helper functions for checking if the request path and verb match the patterns of a spec:

function patternMatches(pattern, verb, url) {
    return (pattern.verb === verb || pattern.verb === "*") &&
        (pattern.exact ? pattern.url === url : url.match(pattern.url));
}

function anyPatternMatches(patterns, verb, url) {
    for (var i = 0; i < patterns.length; ++i) {
        if (patternMatches(patterns[i], verb, url)) {
            return true;
        }
    }
    return false;
}

We can combine all of this and make a single function for checking the overall access control:

function checkSecurity(specs, scopes, verb, url) {
    for (var i = 0; i < specs.length; ++i) {
        if (scopeExists(specs[i], scopes) && anyPatternMatches(specs[i].patterns, verb, url)) {
            return true;
        }
    }
    return false;
}

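The behaviour of these helpers can be verified standalone (the functions are repeated from above; the sample spec is made up):

```javascript
function scopeExists(spec, scopes) {
    for (var i = 0; i < scopes.length; ++i) {
        if (spec.exact ? scopes[i] === spec.scope : scopes[i].match(spec.scope)) {
            return true;
        }
    }
    return false;
}

function patternMatches(pattern, verb, url) {
    return (pattern.verb === verb || pattern.verb === "*") &&
        (pattern.exact ? pattern.url === url : url.match(pattern.url));
}

function anyPatternMatches(patterns, verb, url) {
    for (var i = 0; i < patterns.length; ++i) {
        if (patternMatches(patterns[i], verb, url)) {
            return true;
        }
    }
    return false;
}

function checkSecurity(specs, scopes, verb, url) {
    for (var i = 0; i < specs.length; ++i) {
        if (scopeExists(specs[i], scopes) && anyPatternMatches(specs[i].patterns, verb, url)) {
            return true;
        }
    }
    return false;
}

// A single sample spec: HttpBin.Read grants any GET under /entities.
var specs = [{
    scope: "HttpBin.Read",
    exact: true,
    patterns: [{ verb: "GET", url: "^/entities/?.*$", exact: false }]
}];

console.log(checkSecurity(specs, ["HttpBin.Read"], "GET", "/entities/1")); // true
console.log(checkSecurity(specs, ["HttpBin.Read"], "POST", "/entities"));  // false
```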
Finally, after combining this function with the specs retrieved from the key value map and the context variables, we obtain the main body of the JS policy:

try {
    var specs = JSON.parse(context.getVariable("ro.spet.specs")),
        scopes = getScopesFromToken(context.getVariable("ro.spet.token")),
        verb = context.getVariable("request.verb"),
        url = context.getVariable("proxy.pathsuffix") || context.getVariable("request.path");
    if (!checkSecurity(specs, scopes, verb, url)) {
        throw new HttpException(403, "Forbidden", "Missing necessary scopes.");
    }
} catch (e) {
    context.setVariable("ro.spet.code", e.statusCode || 500);
    context.setVariable("ro.spet.phrase", e.statusText || "Internal Server Error");
    context.setVariable("ro.spet.content", e.message);
}



Testing

Naturally, we want to test everything that we implemented. First, we unit test the checkScopes JavaScript policy locally, using Mocha to build up the tests and a simple hardcoded OAuth token to run them with.

One of these tests looks like the following:

it("should return forbidden for non-matching method", function () {
    var result = run({
        "ro.spet.specs": JSON.stringify([{
            scope: "MyApp.Read",
            exact: true,
            patterns: [{ verb: "GET", url: "/something", exact: true }]
        }]),
        "ro.spet.token": token,
        "request.verb": "POST",
        "proxy.pathsuffix": "/something/else"
    });

    assert.deepEqual(result, {
        "ro.spet.code": 403,
        "ro.spet.phrase": "Forbidden",
        "ro.spet.content": "Missing necessary scopes."
    });
});


We would also like to test the API Proxy directly on API Management. Of course, this testing should also be automated. Postman is a good tool for writing these kinds of tests in JavaScript:

pm.test("Response status is Forbidden", function () {
    pm.response.to.have.status(403);
});

pm.test("Response body is 'Missing necessary scopes.'", function () {
    pm.expect(pm.response.text()).to.include("Missing necessary scopes.");
});

We group our requests into a single Postman collection. The base path of the API Proxy is specified as a Postman variable so that we can fill it in dynamically when running the tests. The newman library is a convenient way of running the collection automatically; we also write a simple script to invoke it:

require("sapim").default().getManifestUrl("proxy.yaml").then(function (url) {
    require("newman").run({
        collection: require("../../postman.json"),
        globals: {
            values: [{
                "key": "base-path",
                "value": url,
                "type": "text",
                "enabled": true
            }]
        },
        reporters: ["cli"]
    });
});

Finishing Touches

For convenience’s sake, we also write some small npm scripts to run the various operations more easily:

  • “test”: runs the Mocha tests.
  • “deploy”: deploys the API proxy using the sapim library.
  • “integration”: runs the Postman tests.
  • “build”: does all of the above.
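In package.json, this could be wired up roughly as follows (the script bodies are illustrative; the actual commands are in the repository):

```json
{
  "scripts": {
    "test": "mocha",
    "deploy": "node scripts/deploy.js",
    "integration": "node scripts/integration.js",
    "build": "npm run test && npm run deploy && npm run integration"
  }
}
```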

We also integrate our simple repository with Travis CI. First, we include the following .travis.yml file:

language: node_js
node_js: '7'
cache:
  directories:
    - node_modules
script:
  - npm run build

Then we go to the Travis settings, enable the build for the repository and add the environment variables needed by the sapim library.

Now, if we trigger a build, we see that the unit tests are executed, then the proxy is deployed to the API Manager and lastly the integration tests are run.

