Technical Articles

Rajesh PS

Integrating Amazon Simple Storage Service (Amazon S3) and SAP ECC v6.0 via SAP PI v7.5 using AWS Signature Version 4 and the HMAC-SHA256 Signing Algorithm

It is not at all surprising that more than a million active customers, from Airbnb to GE, use AWS Cloud solutions to gain flexible, fast, scalable, reliable and inexpensive data storage infrastructure. Companies like Netflix, Airbnb, Disney, NASA, BMW and many more use AWS to make business decisions in real time. These companies use data collection systems for nearly everything, from business analytics and near-real-time operations to executive reporting, computing and storage.

As part of AWS Storage, Amazon Simple Storage Service (S3) provides scalable object storage for data backup, archival and analytics and used to store and retrieve any amount of data, at any time, from anywhere on the web.

Benefits

Amazon Simple Storage Service (S3) is low cost, offers 99.99% availability, is secure by default, can transfer large amounts of data and is easy to handle.

Concepts

Amazon Simple Storage Service (S3) is built around the concepts of buckets, objects, regions, keys and the Amazon S3 data consistency model.

Data is stored as objects within resources called “buckets”, and a single object can be up to 5 terabytes in size. S3 features include capabilities to append metadata tags to objects, move and store data across the S3 Storage Classes, configure and enforce data access controls, secure data against unauthorized users, run big data analytics, and monitor data at the object and bucket levels.

Amazon S3 is designed for 99.999999999% (11 9’s) of durability, and stores data for millions of applications for companies all around the world.

Integration:

Amazon S3 supports the REST API. Support for SOAP over HTTP is deprecated, but it is still available over HTTPS. However, new Amazon S3 features will not be supported for SOAP. Amazon recommends that you use either the REST API or the AWS SDKs.

Recently I developed a unidirectional interface integrating Amazon Simple Storage Service (Amazon S3) and SAP ECC 6.0 via SAP PI 7.5 using AWS Signature Version 4 and the HMAC-SHA256 signing algorithm. This integration scenario implements an IDoc-to-REST flow by means of IDoc-to-CSV conversion using the REST adapter.

On the server side, i.e. Amazon Simple Storage Service (Amazon S3), the file must be delivered as comma-separated values (.csv) using AWS Signature Version 4 and the HMAC-SHA256 signing algorithm. Amazon documents the authentication method and the request-signing process used to calculate the signature. Below are the methods to generate the header values:

  1. Generate Signature – Authorization
  2. Generate Content Hash – X-Amz-Content-Sha256
  3. Generate Date Stamp – X-Amz-Date
  4. Generate Content Type – Content-Type
  5. Dynamically generate HTTP Headers

Basically, Amazon S3 expects all of the above-mentioned mandatory header values in order to authenticate the client; the headers look as described below:

Common Request Headers:

The following table describes headers that can be used by various types of Amazon S3 REST requests.

Authorization
The information required for request authentication. It starts with AWS4-HMAC-SHA256 and the value looks like:
AWS4-HMAC-SHA256 Credential=access-key-id/date/aws-region/aws-service/aws4_request, SignedHeaders=content-length;content-type;host;x-amz-content-sha256;x-amz-date, Signature=256-bit signature expression
where the date value is specified in YYYYMMDD format and the aws-service value is s3 when sending a request to Amazon S3.

Content-Type
The content type of the resource, in case the request has content in the body. Example: text/plain

Content-MD5
The base64-encoded 128-bit MD5 digest of the message (without the headers) according to RFC 1864. This header can be used as a message integrity check to verify that the data is the same data that was originally sent.

Host
For path-style requests, the value is s3.amazonaws.com. For virtual-hosted-style requests, the value is BucketName.s3.amazonaws.com.

x-amz-content-sha256
When using Signature Version 4 to authenticate the request, this header provides a hash of the request payload. For more information, see Signature Calculations for the Authorization Header: Transferring Payload in a Single Chunk (AWS Signature Version 4). When uploading an object in chunks, set the value to STREAMING-AWS4-HMAC-SHA256-PAYLOAD to indicate that the signature covers only the headers and that there is no payload.

x-amz-date
The current date and time according to the requester. Example: Wed, 01 Mar 2006 12:00:00 GMT. When you specify the Authorization header, you must specify either the x-amz-date or the Date header. If you specify both, the value specified for the x-amz-date header takes precedence.
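For illustration, a signed PUT request from this interface carries headers along the following lines (bucket, file name, timestamp and access key ID are hypothetical placeholders, not real values):

PUT /INV_20231115_093000.csv HTTP/1.1
Host: bucketName.s3.amazonaws.com
Content-Type: text/plain
x-amz-date: 20231115T093000Z
x-amz-content-sha256: <hex-encoded SHA-256 hash of the CSV payload>
Authorization: AWS4-HMAC-SHA256 Credential=AKIAIOSFODNN7EXAMPLE/20231115/ap-south-1/s3/aws4_request, SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date, Signature=<hex-encoded HMAC-SHA256 signature>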

 

Calculating a Signature:

To calculate a signature, you first need a string to sign. You then calculate an HMAC-SHA256 hash of the string to sign by using a signing key. The following diagram illustrates the process, including the various components of the string that you create for signing.

The process of putting a request in an agreed-upon form for signing is called ‘canonicalization’.
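As a compact sketch of what canonicalization produces (pseudocode following the AWS Signature Version 4 documentation; region and service values match this scenario):

CanonicalRequest = HTTPMethod + "\n"
                 + CanonicalURI + "\n"
                 + CanonicalQueryString + "\n"
                 + CanonicalHeaders + "\n"
                 + SignedHeaders + "\n"
                 + Hex(SHA256(RequestPayload))

StringToSign = "AWS4-HMAC-SHA256" + "\n"
             + RequestTimestamp + "\n"
             + Date + "/" + Region + "/s3/aws4_request" + "\n"
             + Hex(SHA256(CanonicalRequest))

SigningKey = HMAC(HMAC(HMAC(HMAC("AWS4" + SecretKey, Date), Region), "s3"), "aws4_request")
Signature  = Hex(HMAC(SigningKey, StringToSign))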

Deriving the Header Values Using Java (User-Defined Functions)

  1. Generate Signature – Authorization
  2. Generate Content Hash – X-Amz-Content-Sha256
  3. Generate Date Stamp – X-Amz-Date
  4. Generate Content Type – Content-Type
  5. Generate Payload
  6. Dynamically generate HTTP Headers

First, define individual global attributes and methods for the above header parameters.

In the Enterprise service repository:

Step 1: Create a new function library and specify the attributes and methods (global variables) as below:

String dateStamp = "";
String signature = "";
String method = "PUT";
String FileName = "";
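The UDFs below additionally need the following classes in the function library's import instructions (a minimal set; it assumes org.apache.commons.codec is available on the PI system for Hex.encodeHexString, otherwise the signature can be hex-encoded with the same String.format("%02x", b) loop used for the hashes):

java.security.MessageDigest
java.security.NoSuchAlgorithmException
java.text.SimpleDateFormat
java.util.Date
java.util.TimeZone
javax.crypto.Mac
javax.crypto.spec.SecretKeySpec
org.apache.commons.codec.binary.Hex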

 

Step 2: Create a user defined function to Generate Signature

public String generateSignature(String lcl_filePath, String lcl_dateTimeStamp, String contentType, String awsAccessKeyId, String awsSecretKey, String payload, Container container) throws StreamTransformationException {

    AbstractTrace trace = container.getTrace();
    String authorization = "";

    try {
        String algorithm = "HmacSHA256";
        Mac mac = Mac.getInstance(algorithm);

        // Derive the short date stamp (yyyyMMdd) from the full timestamp (yyyyMMdd'T'HHmmss'Z')
        SimpleDateFormat dt1 = new SimpleDateFormat("yyyyMMdd'T'HHmmss'Z'");
        Date parsedDate = dt1.parse(lcl_dateTimeStamp);
        SimpleDateFormat dt2 = new SimpleDateFormat("yyyyMMdd");
        String lcl_dateStamp = dt2.format(parsedDate);
        trace.addWarning("Date:" + lcl_dateStamp);

        // Hex-encoded SHA-256 hash of the request payload (value of x-amz-content-sha256)
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] hashPayloadInBytes = md.digest(payload.getBytes("UTF-8"));
        StringBuilder payloadSb = new StringBuilder();
        for (byte b : hashPayloadInBytes) {
            payloadSb.append(String.format("%02x", b));
        }
        String hashPayload = payloadSb.toString();
        trace.addWarning(hashPayload);
        trace.addWarning(lcl_dateTimeStamp);

        // Step 1: canonical request (method, URI, empty query string, headers, signed headers, payload hash)
        StringBuffer canonicalRequest = new StringBuffer();
        canonicalRequest.append("PUT").append("\n");
        canonicalRequest.append(lcl_filePath).append("\n\n");
        canonicalRequest.append("content-type:" + contentType).append("\n");
        canonicalRequest.append("host:bucketName.s3.amazonaws.com").append("\n");
        canonicalRequest.append("x-amz-content-sha256:" + hashPayload).append("\n");
        canonicalRequest.append("x-amz-date:" + lcl_dateTimeStamp).append("\n\n");
        canonicalRequest.append("content-type;host;x-amz-content-sha256;x-amz-date").append("\n");
        canonicalRequest.append(hashPayload);

        // Hex-encoded SHA-256 hash of the canonical request
        byte[] hashCanonicalReqInBytes = md.digest(canonicalRequest.toString().getBytes("UTF-8"));
        StringBuilder hashCanonicalSb = new StringBuilder();
        for (byte b : hashCanonicalReqInBytes) {
            hashCanonicalSb.append(String.format("%02x", b));
        }
        trace.addWarning(hashCanonicalSb.toString());

        // Step 2: string to sign (algorithm, timestamp, credential scope, hashed canonical request)
        StringBuffer stringToSignSb = new StringBuffer();
        stringToSignSb.append("AWS4-HMAC-SHA256").append("\n");
        stringToSignSb.append(lcl_dateTimeStamp).append("\n");
        stringToSignSb.append(lcl_dateStamp + "/" + "ap-south-1/s3/aws4_request").append("\n");
        stringToSignSb.append(hashCanonicalSb.toString());
        String stringToSign = stringToSignSb.toString();
        System.out.println(stringToSign);
        trace.addWarning(stringToSign);

        // Step 3: derive the signing key (date, region, service, terminator) and sign the string to sign
        byte[] kSecret = ("AWS4" + awsSecretKey).getBytes("UTF-8");
        mac.init(new SecretKeySpec(kSecret, algorithm));
        byte[] kDate = mac.doFinal(lcl_dateStamp.getBytes("UTF-8"));
        mac.init(new SecretKeySpec(kDate, algorithm));
        byte[] kRegion = mac.doFinal("ap-south-1".getBytes("UTF-8"));
        mac.init(new SecretKeySpec(kRegion, algorithm));
        byte[] kService = mac.doFinal("s3".getBytes("UTF-8"));
        mac.init(new SecretKeySpec(kService, algorithm));
        byte[] kSigning = mac.doFinal("aws4_request".getBytes("UTF-8"));
        mac.init(new SecretKeySpec(kSigning, algorithm));
        byte[] kSignature = mac.doFinal(stringToSign.getBytes("UTF-8"));
        String signature = Hex.encodeHexString(kSignature);

        // Step 4: assemble the Authorization header value
        authorization = "AWS4-HMAC-SHA256 Credential=" + awsAccessKeyId + "/" + lcl_dateStamp
                + "/ap-south-1/s3/aws4_request,SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date,Signature="
                + signature;

    } catch (Exception e) {
        e.printStackTrace();
    }

    return authorization;
}

 

Step 3: Create a user defined function to Generate Date Stamp

public String generateDateTimeStamp(Container container) throws StreamTransformationException {
    AbstractTrace trace = container.getTrace();
    SimpleDateFormat dt1 = new SimpleDateFormat("yyyyMMdd'T'HHmmss'Z'");
    dt1.setTimeZone(TimeZone.getTimeZone("GMT"));
    dateStamp = dt1.format(new Date());
    trace.addWarning(dateStamp);
    return dateStamp;
}
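For example (a hypothetical run): a message processed at 09:30:00 UTC on 15 November 2023 yields the X-Amz-Date value 20231115T093000Z, and the date stamp that generateSignature later derives from it for the credential scope is 20231115.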

 

Step 4: Create a user defined function to Generate Content Hash

public String generateContentHashing(String payload, Container container) throws StreamTransformationException {
    AbstractTrace trace = container.getTrace();
    StringBuilder payloadSb = new StringBuilder();
    try {
        // Hex-encoded SHA-256 hash of the payload; UTF-8 is used so the value matches the hash built in generateSignature
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] hashPayloadInBytes = md.digest(payload.getBytes("UTF-8"));
        for (byte b : hashPayloadInBytes) {
            payloadSb.append(String.format("%02x", b));
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    trace.addWarning(payload);
    return payloadSb.toString();
}
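As a quick sanity check of the hashing logic (the expected value is the well-known SHA-256 digest of the empty string, which AWS also documents as the x-amz-content-sha256 value for an empty request body):

// generateContentHashing("", container) is expected to return
// e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855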

 

Step 5: Create a user defined function to Generate CSV Payload

public void generatePayload(String[] SKUId, String[] EANNumber, String[] Warehouse, String[] Quantity, String[] UOM, String[] Cost, String[] Entity, String[] TransactionType, ResultList rs, Container container) throws StreamTransformationException {
    AbstractTrace trace = container.getTrace();
    try {
        // Field names taken from your first (source) structure
        String header = "SKUId,EANNumber,Warehouse,Quantity,UOM,Cost,Entity,TransactionType";
        String content = header + "\n";
        for (int i = 0; i < SKUId.length; i++) {
            // Adjust the line below with the field names from your first structure
            content = content + SKUId[i] + "," + EANNumber[i] + "," + Warehouse[i] + "," + Quantity[i] + ","
                    + UOM[i] + "," + Cost[i] + "," + Entity[i] + "," + TransactionType[i] + "\n";
        }
        trace.addInfo(content);
        // CSV content is returned via the ResultList (it can also be used to create an attachment with the CSV data)
        rs.addValue(content);
    } catch (Exception e) {
        e.toString();
    }
}
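One observation worth stressing about this design: the exact CSV string built here must be the same string that is passed to generateContentHashing and generateSignature and that ends up as the HTTP request body; if any of the three differ, S3 rejects the call with a SignatureDoesNotMatch error.

// The same CSV string must flow to all three consumers in the second mapping:
//   content --> target payload field (request body)
//   content --> generateContentHashing(content)  --> X-Amz-Content-Sha256
//   content --> generateSignature(..., content)  --> Authorization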

 

Step 6: Create a user defined function to Generate Dynamic HTTP Headers

public String HTTPHeaders(String dateStamp, String signature, String contentHash, String fileName, Container container) throws StreamTransformationException {
    // Write the computed values into adapter-specific attributes of the REST adapter namespace
    DynamicConfiguration conf2 = (DynamicConfiguration) container.getTransformationParameters().get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);
    DynamicConfigurationKey key3 = DynamicConfigurationKey.create("http://sap.com/xi/XI/System/REST", "XAmzDate");
    conf2.put(key3, dateStamp);
    DynamicConfigurationKey key4 = DynamicConfigurationKey.create("http://sap.com/xi/XI/System/REST", "Authorization");
    conf2.put(key4, signature);
    DynamicConfigurationKey key5 = DynamicConfigurationKey.create("http://sap.com/xi/XI/System/REST", "XAmzContentSha256");
    conf2.put(key5, contentHash);
    DynamicConfigurationKey key6 = DynamicConfigurationKey.create("http://sap.com/xi/XI/System/REST", "FileName");
    conf2.put(key6, fileName);
    return "";
}

 

Below are the detailed steps explaining how the API header parameters (key and value) are read and how the file name is sent dynamically:

In the first graphical mapping (IDoc to XML), declare the Adapter-Specific Message Attributes via a user-defined function to derive the file name scheme as below; map it to the target field 'fileName' and also pass the respective file path to the target field 'filePath'.

User defined function to generate file name dynamically:

public String getASMAFileName(String CREDAT, String CRETIM, Container container) throws StreamTransformationException {
    // File name derived from the IDoc creation date and time
    String filename = "INV_" + CREDAT + "_" + CRETIM + ".csv";
    return filename;
}

1st Mapper:

In the second graphical mapping (XML to CSV), use the function libraries mentioned in the above steps.

Hard-coded values (like the file path, content type, AWS access key ID and AWS secret key) can be moved to value mapping appropriately.

2nd Mapper:

 

In the Integration Directory:

Coming to the REST receiver communication channel: under the REST URL tab, the header variables defined as pattern variables are replaced dynamically by the respective values from the request message. For each part, I use an adapter-specific attribute to read the respective value dynamically from the Adapter-Specific Message Attributes.

The URL Pattern describes the full URL produced by this channel by using named placeholders for dynamic parts. Placeholder variable names must be enclosed in curly braces.

Here the value source is 'Adapter-Specific Attribute', which retrieves the value from an adapter-specific attribute by name. The predefined names are: service, resource, id, resource2, id2, operation.
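For illustration (an assumed configuration, not a screenshot of the actual channel), the URL pattern could look like:

https://bucketName.s3.amazonaws.com/{FileName}

where the pattern variable FileName has Value Source = Adapter-Specific Attribute and Attribute Name = FileName, i.e. the attribute written by the HTTPHeaders UDF above.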

Switch to the REST Operation tab. Here, the HTTP operation source is set to the static value PUT.

Next, define the format of the messages of the RESTful service. Switch to the Data Format tab: here the request format is JSON and the response is expected in XML.

Finally, in the HTTP Headers section, define the header and value patterns appropriately. These are generated dynamically using the user-defined functions. The header value may contain any placeholder defined on the REST URL tab.
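An illustrative header configuration (assuming the placeholders are defined as pattern variables bound to the corresponding adapter-specific attributes set in the HTTPHeaders UDF):

Authorization: {Authorization}
x-amz-date: {XAmzDate}
x-amz-content-sha256: {XAmzContentSha256}
Content-Type: text/plain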

 

Run the Scenario:

A background job is scheduled in ECC; subsequently an IDoc is generated and delivered to PI for transformation and message exchange.

 

SAP PI Middleware Server:

Amazon S3 target server:

The comma-separated value (.csv) file looks like:
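A hypothetical example of the generated CSV content (values invented purely for illustration):

SKUId,EANNumber,Warehouse,Quantity,UOM,Cost,Entity,TransactionType
100123,4006381333931,WH01,25,EA,110.50,1000,GR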

 

Conclusion

In this blog, I accomplished this integration using Java mapping as per the recommendations provided by AWS. Below are the references:

https://docs.aws.amazon.com/general/latest/gr/Welcome.html

https://docs.aws.amazon.com/index.html

Other benefits include low cost, 99.99% availability, security by default, the ability to transfer large amounts of data, and ease of handling.

 

In upcoming blogs I will cover integration with MS Azure and Kafka applications using external adapters.

Thank you!


      16 Comments
      Vikas Kumar Singh

      Hi Rajesh,

       

      I have been working on this for a month now with Postman's native authorization type, but I was facing a lot of hurdles with POST, headers and signature mismatch errors. Your blog really helps. Kudos!

      Regards,

      Vikas

      Rajesh PS
      Blog Post Author

      You're Welcome!

      This is indeed a really tricky and challenging one. At last, yes, kudos!

       

      My next blog soon on Kafka and MS Azure Integration. Thank you!

      Anupam Ghosh

      Complex process explained in a simple manner. Thank you.

      Rajesh PS
      Blog Post Author

      Thanks much Anupam.

      Sitakant Tripathy

      Hi Rajesh,

      Any specific reason you chose to develop the integration through PI for this particular use case?

      Thinking aloud, S3 storage functions should sit really close to the business application for read and write, something like DMS. Is it really necessary to introduce PI into the mix, even when AWS clearly mentions supporting REST going forward?

      The world is changing, and I think we are not very comfortable breaking the mould of "PI is the answer to all integrations in SAP".

      In my view, this should surely sit on top of ECC, and maybe SAP needs to come up with a framework for consuming REST services directly on top of the ABAP stack wherever required. Introducing PI into consuming external REST services does not seem to provide any major value, only overhead.

      It would be good to have your thoughts on this.

      Anupam Ghosh, your thoughts as well.

      Regards.

       

      Anupam Ghosh

      Hi Sitakant,

      The ECC server is already loaded with business rules and business logic. The integration overhead would add more complexity to already complex scenarios. PI/PO handles all issues related to external servers that operate with different protocols. Otherwise the ECC or S/4HANA server would have to be loaded with all kinds of adapters to talk to different web services. Even to resolve integration issues, consultants would need to log in to ECC, creating memory constraints. During year end, when the volume of data grows, business will definitely not take on this overhead. Hence the use of PI/PO seems reasonable.

       

      Regards

      Anupam

      Rajesh PS
      Blog Post Author

      Hi Sitakant,

       

      A valid question indeed. Appreciate it.

      For this integration, SAP ECC and SAP PI are within the intranet and AWS is a third-party system (internet).

      As a thumb rule, SAP ECC data should ideally not be exposed directly to a third-party system, so the EAI broker SAP PI is used here to connect to AWS with routing and transformation, and as a preventive measure against DDoS attacks, XSS, viruses, etc.

      AWS supports REST, but the PI REST communication channel has no form-data attachments; only from SP9 onwards is this doable.

      PI or CPI may be the answer for any integration in SAP, since it is built for that and is obviously a powerhouse orchestration tool.

      Probably S/4HANA could answer with a direct P2P connection, but that depends wholly on business requirements and other criteria when looking from an end-to-end perspective.

       

      Thanks and Regards - Rajesh PS

      Sanjeev Shekhar Singh

      Hi Rajesh,

      Nicely explained blog. Out of curiosity, did you consider using the AWS libraries instead of creating the payload in AWS native format, and if so, what was the driving factor to go with this mechanism rather than using the library? The reason I ask is that I have a set of interfaces where I need to integrate with SQS (with the extended library) option to pull and push messages to AWS. I am trying to find out if anyone else has come across these patterns and if there are any lessons learnt.

      I am more leaning towards implementing these scenarios using:

      • For reading from queue –> Using a Java mapping to read SQS payload using AWS library
      • For writing to queue –> Using receiver Java proxy to write to the SQS queue

      Appreciate if you have any inputs. Apologies in advance for raising this in your blog, but would appreciate thoughts from people who could have already considered these design decisions.

      PS: Raised a forum thread as well: 

      https://answers.sap.com/questions/12740800/sap-po-integration-with-sqs-extended-library-possi.html

      Cheers,

      Sanjeev

      Muni M

      Hi Rajesh

      I have worked on a similar requirement and I used the AWS Java libraries to generate all the required headers instead of writing it again.

      I had to use Java mapping as the requirement was to fetch a .csv file from a file server and upload it into S3. Using Java mapping I avoided the CSV-to-XML conversion.

      Regards,

      Muni

       

       

      Rajesh PS
      Blog Post Author
      Muniyappan Marasamy Sounds good. Please elaborate with more details so that it can be seen and read by others too.
      Rajani Duddupudi

      Could you please share some more details on your approach?

      Vijayashankar Konam

      Hello Rajesh,

      Did you actually try uploading a large XML file (no CSV conversion needed) with your interface? I suspect the adapter may not support chunked-mode transfer even if I move all the UDFs to an adapter module. Could you please let me know your experience dealing with larger files?

       

      Vijay Konam (VJ)

      Arsh Gupta

      Rajesh PS Thanks for a wonderful blog expressed with such prowess; it really helped me with a similar integration!

      Santosh Ibrampurkar

      Hi Rajesh

       

      Is the reverse scenario possible, i.e. pulling a file from the Amazon S3 service using PI 7.5?

       

      Santosh Ibrampurkar

       

      Anshul Jain

      Hi Rajesh,

       

      I am trying to do a similar integration with AWS Selling Partner APIs with real-time calls in JSON format, but I am getting the error below:

      "message": "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
      In my case, as these are GET calls, the payload is empty, so I can't pass it as an input to the generate signature function.
      Can you please help in rectifying the same?
      Best Regards,
      Anshul Jain
      Former Member

      Hi Rajesh,

      I am in a similar situation, with no PI involved.

      Could you please advise on the procedure to write a file to AWS S3 with just ABAP code?

      Thanks.