
Amazon RDS PostgreSQL consumption on SAP Cloud Platform

**NOTE: This feature is now available on SAP Cloud Platform Cloud Foundry Trial Accounts.**

Introduction

SAP has made a strategic decision to embrace the hyper-scalers in its mission to make SAP Cloud Platform a true business platform for the Intelligent Enterprise. With the retirement of SAP ‘managed’ open-source backing services, we need to understand the way forward for running our applications on SAP Cloud Platform. Customers and partners can now leverage the best of both worlds: running applications that use the Business Services available on SAP Cloud Platform while powering them with their existing hyper-scaler infrastructure.

In this direction, we now have the capability on SAP Cloud Platform to consume Amazon RDS PostgreSQL. This means that essential operations such as service instance creation and deletion, service binding/unbinding, and credentials management can be performed on SAP Cloud Platform. Many more features will be supported in the days to come.

Prerequisites

  1. SAP Cloud Platform Cloud Foundry account on AWS with a Sub-account and Space.
  2. AWS Account with necessary authorisations.
  3. AWS Access Key ID and Secret Access Key.
    1. If you do not have these already, log in to your AWS account and open the IAM console, which has a “Security Credentials” tab. Click on “Create access key”. A new Access Key ID and Secret Access Key will be displayed (make note of these values or download the CSV, as you cannot retrieve them later).
  4. AWS Region where you wish to create your PostgreSQL instances. It is recommended to choose a region that is co-located with your SAP Cloud Platform account, to reduce network latency.
  5. Publicly accessible VPC on AWS where you want the PostgreSQL instances to be launched. Make note of the VPC ID from the VPC console. If you have not already created one on your AWS account, you could follow the steps below. Otherwise, skip to the next section: “Consuming Amazon RDS PostgreSQL on SAP Cloud Platform”.

 


Create a publicly accessible AWS VPC 

An AWS CloudFormation template which automates the setup of the AWS account listed in the steps below can be found in the SAP GitHub samples.

The following steps create a public VPC with two subnets. This is a one-time setup activity required to create and manage PostgreSQL instances from SAP Cloud Platform.

Navigate to the VPC console in your AWS account.

  1. Create a VPC using the VPC Wizard:
    1. Click on Launch VPC Wizard and ensure you have selected the correct Region, ideally the same region your SAP Cloud Platform account is located in.
    2. Choose the appropriate VPC configuration; we will create a VPC with a Single Public Subnet to begin with.
    3. DNS hostnames must be explicitly enabled. Select an applicable IP CIDR (the largest one allowed is 10.0.0.0/16). Provide the values and click on ‘Create VPC’.
    4. Make note of the generated ‘VPC ID’; we will need it for the configuration in the next steps.
  2. The VPC must have at least two subnets mapped to two separate availability zones (with subsets of the CIDR). Create another subnet and map it to a separate availability zone.
    1. Choose ‘Subnets’ in the navigation menu under the VPC Dashboard and Click on ‘Create Subnet’
    2. Provide a Name tag for this subnet and ensure that its IP CIDR is a different subset of the VPC’s CIDR. Choose a different availability zone for this subnet, then click on “Create”.
    3. Once created, ensure both the subnets that were created are in “available” state.
  3. Once the VPC is created with two subnets in different availability zones, we create a Route Table and assign it to the VPC. The route table [3] is necessary to route traffic from the VPC’s CIDR to the appropriate internet gateway.
    1. Choose ‘Route Tables’ in the navigation menu under the VPC Dashboard and Click on ‘Create Route table’.
    2. Provide a Name Tag and choose the same VPC that we created in the previous steps and click “Create”.
  4. Now that we have created a VPC with two subnets and an associated route table, we can create an Internet Gateway [4] to allow internet-routable traffic to the instances created within this VPC.
    1. Provide a Name tag for the internet gateway and click “Create”. 
    2. Now you should find a new Internet Gateway created in ‘detached’ state. 
    3. Next, we need to attach the internet gateway as a routable link to the Route Table created earlier. 
    4. Click on ‘Edit Routes’ for the route table created earlier. Add a route with destination CIDR 0.0.0.0/0 and the internet gateway as the target, and save the routes.
  5. Once the VPC is created, we need to allow inbound traffic from applications on SAP Cloud Platform to the database instances. For this, we create a Security Group [5] that allows inbound traffic on port 5432 (the default PostgreSQL port).
    1. Navigate to the Security Groups console.
    2. Create a security group that is mapped to the VPC (note the security group ID for the next steps).
    3. Once the security group is created, create rules to allow access to the PostgreSQL instances from the internet.
    4. Click on “Edit rules” under the Inbound Rules section for the security group.
    5. Click on “Add rule”, provide the port range “5432”, set Source to “Anywhere”, and provide an appropriate description for the rule. Click on “Save rules”.
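As an aside, the subnet layout required above (two non-overlapping subsets of the VPC's CIDR, one per availability zone) can be planned with a short script. The following is a minimal sketch using Python's `ipaddress` module; the availability-zone names are only illustrative:

```python
import ipaddress

def plan_subnets(vpc_cidr: str, count: int = 2, new_prefix: int = 24):
    """Split a VPC CIDR into `count` non-overlapping subnet CIDRs."""
    vpc = ipaddress.ip_network(vpc_cidr)
    subnets = [str(s) for s in vpc.subnets(new_prefix=new_prefix)]
    if len(subnets) < count:
        raise ValueError("VPC CIDR too small for the requested subnets")
    return subnets[:count]

# Illustrative availability zones for a VPC co-located with eu-central-1
for az, cidr in zip(["eu-central-1a", "eu-central-1b"],
                    plan_subnets("10.0.0.0/16")):
    print(az, cidr)
```

Each resulting subnet CIDR then goes into the “Create Subnet” dialog, paired with a distinct availability zone.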

 


 

Consuming Amazon RDS PostgreSQL on SAP Cloud Platform

  1. Log in to your SAP Cloud Platform Cloud Foundry account and, at the Global Account level, click on the “Resource Providers” option in the navigation menu. Here we configure your AWS account credentials, which are subsequently required to create and manage the RDS PostgreSQL instances. The AWS account credentials shared with SAP are saved in a secure store.
  2. Click on “New Provider”. This opens a pop-up dialog where you provide the hyperscaler account credentials; in this case, we will key in the AWS account details.
  3. Key in the values for the above parameters as per the following:
    1. Provider: Choose among the supported Hyperscalers, in this case we will go with AWS as the Hyperscaler.
    2. Display Name: Provide a suitable display name for the provider for identification on the cockpit.
    3. Technical Name: Provide a unique technical name. This name would be required by the application developers as a parameter when creating service instances from this provider.
    4. Description: Provide an optional description for this resource provider.
    5. AWS Access Key ID: Make use of the Access Key ID that was created as part of the prerequisites item 3.
    6. AWS Access Key Secret: Make use of the Secret Access Key that was created as part of the prerequisites item 3.
    7. AWS VPC ID: Make use of the VPC ID that was created as part of the prerequisites item 5.
    8. AWS Region: Provide the AWS region where you want to create the PostgreSQL instances, i.e., where the VPC has been created. E.g., us-east-1 or eu-central-1.
  4. Once you have all the values above, provide them in the dialog pop-up to create a new Resource Provider.
  5. Once the new Resource Provider is created, we need to assign entitlements to the sub-accounts where you want to create PostgreSQL instances. Click on “Entitlements” -> “Sub-account Assignments” and choose the sub-accounts for which you wish to provide this service entitlement. Click on “Add Service Plans”.
  6. You will now have to choose the “PostgreSQL on Amazon (AWS)” service from the catalog and choose the service plans from the resource provider created in Step 4. Click on “Add Service Plan” to assign the services to the sub-account.
  7. Once the entitlement is made available to a sub-account, you can also limit the number of PostgreSQL instances that can be instantiated in that sub-account. Provide that limit on the entitlements screen and click ‘Save’. For example, for the sub-account ‘Demo Account’ we provided the PostgreSQL on Amazon (AWS) service entitlement with a quota of 2 units.
  8. Creation of PostgreSQL Instance via the SAP Cloud Platform Cockpit
    1. Log in to the sub-account that was given the entitlement and go to the ‘Service Marketplace’ tab. You should now be able to see the “PostgreSQL on Amazon (AWS)” service.
    2. Click on the “PostgreSQL on Amazon (AWS)” service tile to see the available plans and their documentation. Click on the “Instances” option in the navigation menu and click “New Instance”.
    3. Choose the appropriate service plan and provide the instance parameters in JSON format. You can also choose not to provide any values, in which case the default parameters are set and database credentials are auto-generated. (More detailed information on the parameters, along with the default configuration, can be obtained here)
      • Choose a service plan as per the requirement and click “Next”
      • If you do not wish to provide any additional configuration parameters, you can leave them blank. However, if you have more than one resource provider in your account, you must specify which resource provider should be used to create the PostgreSQL instance (the resourceTechnicalName parameter). Additional parameters that can be configured are as below:
        {
        	"adminPassword": "<Your Password>", // At least 12 characters long
        	"adminUsername": "<Your Admin Username>", // At least 12 characters long
        	"backupRetentionPeriod": 14, // Period in days
        	"dbEngineMajorVersion": "9.6", // PostgreSQL DB engine version
        	"dbInstanceType": "db.t2.micro", // Instance type; more options can be found in the help
        	"dbName": "mynewdb", // Name of the database instance
        	"multiAz": true, // Multi-availability-zone support required
        	"resourceTechnicalName": "demo_provider", // Technical name of the resource provider to be used
        	"storageEncrypted": false, // Flag to choose if the data needs to be encrypted
        	"storageGb": 20 // Storage in GB required
        }

      • You can leave the application assignment blank for now; confirm the service instance creation with a suitable instance name and click ‘Finish’.
      • Instance creation will start and may take some time to complete.
  9. Creation of PostgreSQL instance via CLI: We can also create a new PostgreSQL instance via the command line interface using the command below:
    cf create-service <SERVICE> <SERVICE_PLAN> <SERVICE_INSTANCE> -c '{...}'
    
    E.g.:
    cf create-service aws-rds-postgresql development myinstance -c '{"dbInstanceType": "db.t2.micro", "storageGb": 100, "backupRetentionPeriod": 20, "resourceTechnicalName": "demo_provider"}'

  10. Once the instance is created, you can click on it. This instance can be bound to an application running on SAP Cloud Platform.
  11. Alternatively, you can create a new Service Key and obtain the service connection parameters.
  12. Once you create a new service key, you will be able to view the service instance credentials: URL, Username, Password, DB Name, and the SSL Certificate, which can be used by the application running on SAP Cloud Platform.
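Before submitting an instance creation request (via the cockpit or the CLI), the parameter JSON from steps 8 and 9 can be sanity-checked locally. The sketch below is only illustrative, based on the parameter names and the 12-character password note shown earlier; it is not the service's actual validation logic:

```python
ALLOWED_PARAMS = {
    "adminPassword", "adminUsername", "backupRetentionPeriod",
    "dbEngineMajorVersion", "dbInstanceType", "dbName", "multiAz",
    "resourceTechnicalName", "storageEncrypted", "storageGb",
}

def validate_params(params: dict) -> list:
    """Return a list of problems found in the instance-creation parameters."""
    problems = [f"unknown parameter: {k}" for k in params
                if k not in ALLOWED_PARAMS]
    pw = params.get("adminPassword")
    if pw is not None and len(pw) < 12:
        problems.append("adminPassword must be at least 12 characters")
    if "storageGb" in params and not isinstance(params["storageGb"], int):
        problems.append("storageGb must be an integer number of GB")
    return problems

# A misspelled key or short password is caught before the request is sent.
print(validate_params({"adminPassword": "short",
                       "instance_type": "db.t2.micro"}))
```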

With this approach we can now create and consume an Amazon RDS PostgreSQL database on SAP Cloud Platform. This service will soon be enhanced with more features to make the consumption and management of PostgreSQL instances a lot easier.
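For example, an application bound to the instance (or reading a service key) can assemble a libpq-style connection string from the credentials mentioned above. A minimal sketch; the field names and values below are hypothetical placeholders, so check your actual service key for the exact keys:

```python
import json

# Hypothetical service key content; real keys and values will differ.
service_key = json.loads("""
{
  "hostname": "mydb.abc123.eu-central-1.rds.amazonaws.com",
  "port": "5432",
  "dbname": "mynewdb",
  "username": "admin_user",
  "password": "s3cret-p4ssw0rd",
  "sslrootcert": "-----BEGIN CERTIFICATE-----..."
}
""")

def to_libpq_dsn(key: dict) -> str:
    """Build a libpq-style DSN from service-key fields.

    To verify the server certificate, write the `sslrootcert` value to a
    file and use `sslrootcert=<path> sslmode=verify-full` instead of
    `sslmode=require`.
    """
    return (f"host={key['hostname']} port={key['port']} "
            f"dbname={key['dbname']} user={key['username']} "
            f"password={key['password']} sslmode=require")

print(to_libpq_dsn(service_key))
```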

Consumption of PostgreSQL on Azure Database is now available on SAP Cloud Platform.

SAP Cloud Platform will also enable consumption of PostgreSQL Database services from other Hyperscalers like GCP and AliCloud soon.

 

References

[1] SAP Help Documentation

[2] VPC: https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html

[3] Route Table: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html

[4] Internet Gateway: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html

[5] Security Group: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html

Other Useful Links

Announcement of retirement of SAP ‘managed’ backing services.

FAQ regarding the retirement and way forward

Consuming hyperscaler services as ‘User-provided Services’ on SAP Cloud Platform.

8 Comments
  • Thanks for the blog post. It mentions as prerequisite:

    AWS Account with necessary authorizations.

    Can you please elaborate more precisely which AWS authorizations are needed in order to use these credentials with the Resource Provider on SCP? I think it would be a good practice to not just copy an own Admin user but to have a dedicated one with restrictive permissions…

    Thanks!

  • Hi Suhas

    Great informative blog, thanks for posting.

    Could you let me know how long the service (SCP to consume Amazon RDS PostgreSQL) is going to be available? On the website of SCP under capabilities matrix – the PostgreSQL service is going to be retired on January 15th, 2020

    https://cloudplatform.sap.com/capabilities/product-info.PostgreSQL-on-SAP-Cloud-Platform.d03d9706-13e7-4c0f-b9ca-53b5abe88afc.html

    We have a requirement to connect our SCP to Amazon RDS PostgreSQL as part of our project deliverable.

    Appreciate your response

    Best regards

    Maddy

    • Hi Maddy,

      I believe you are slightly mistaken here. We are actually retiring the SAP ‘managed’ PostgreSQL service on SAP Cloud Platform.

      This blog is regarding the new feature of consuming Amazon RDS PostgreSQL on SAP Cloud Platform that was released on 1st August, 2019.

      The difference here is that you would need an Amazon AWS account as a prerequisite, which is then configured on SAP Cloud Platform. Through this new service on SCP, once you configure the AWS credentials on SAP Cloud Platform, you can create and manage your AWS PostgreSQL instances from SAP Cloud Platform.

      Hope this answers your question.

      Thanks

      Suhas

  • Just in case anyone else runs into the same issue I did 😉

     

    Make sure you do step 4. iv on AWS for both subnets you created; otherwise some instances will be created successfully while the creation of others will hang, depending on which subnet (and thus availability zone) the request went to.