
Working with files in CAP (and Amazon S3)

Working with media resources (i.e. files) is well covered in the CAP documentation, so let's jump straight into a very simple example.

1 – Default handlers

First, we need to create a new CAP project (Node.js, OData v4) and a cds file to define our data model (db\schema.cds):

namespace media;

entity Pictures {
  key ID : UUID;
  @Core.MediaType: 'image/png'
  content : LargeBinary;
}


Then we can create a simple service (srv\media-service.cds):

using media as db from '../db/schema';

service MediaService {
  entity Pictures as projection on db.Pictures;
}
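As an aside, the media type does not have to be hard-coded in the model. In CAP, @Core.MediaType can also reference another element of the entity, which becomes useful once we want to handle more than PNG files. A possible variant of the model (a sketch, not part of this example):

```cds
namespace media;

entity Pictures {
  key ID : UUID;
  // the media type is read from the mediaType element at runtime
  @Core.MediaType: mediaType
  content : LargeBinary;
  // marks the element that carries the MIME type of the content
  @Core.IsMediaType: true
  mediaType : String;
}
```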


As we are using OData v4, storing an image takes two requests. The first one creates the object:

POST: https://host/media/Pictures

Request Headers:
Content-Type: application/json

Request Body: {}

Note: the request body is an empty object in this case because the ID, our only mandatory property, is generated by the framework (type UUID); we still need to send an application/json payload, even an empty one. Thanks to Uwe Fetzer for pointing this out.


And then a second request to upload the image, using the ID returned by the first request:

PUT: https://host/media/Pictures(xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)/content

Request Headers:
Content-Type: image/png

Request Body: <MEDIA>


We can then get the image back with the following request:

GET: https://host/media/Pictures(xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)/content


Pretty simple, right? However, using the default handlers means that the files are stored in the database, which is often not a good idea. We can instead store the files in Amazon S3.

2 – Storing files in Amazon S3

SAP Cloud Platform provides an ObjectStore service, so let's see how we can leverage it. You'll need an SCP subaccount on Cloud Foundry (AWS) for this. Alternatively, you can run the project locally thanks to Gregor Wolf's contribution.

First let’s create a service instance of the objectstore service:

> cf create-service objectstore s3-standard s3-pictures


Then update the mta.yaml file to add a new resource and dependency:

modules:
...
  - name: cap-media-node-srv
   ...
    requires:
      - name: cap-media-node-db-hdi-container
      - name: s3-pictures
resources:
...
  - name: s3-pictures
    type: objectstore
    parameters:
      service: objectstore
      service-plan: s3-standard
      service-name: s3-pictures


Then add a dependency in the srv\package.json file for the AWS SDK:

...
    "dependencies": {
        "@sap/cds": "^3.18.1",
        "aws-sdk": "^2.559.0",
        "express": "^4.17.1",
        "hdb": "^0.17.1"
    },
...


After building the srv module, we can then use the AWS SDK to interact with S3. We just need to implement the UPDATE handler to store the file in S3 using the upload method, and the READ handler to retrieve the file from S3 using the getObject method:

module.exports = srv => {

	const vcap_services = JSON.parse(process.env.VCAP_SERVICES)
	const AWS = require('aws-sdk')
	const credentials = new AWS.Credentials(
		vcap_services.objectstore[0].credentials.access_key_id,
		vcap_services.objectstore[0].credentials.secret_access_key)
	AWS.config.update({
		region: vcap_services.objectstore[0].credentials.region,
		credentials: credentials
	})
	const s3 = new AWS.S3({
		apiVersion: '2006-03-01'
	})

	srv.on('UPDATE', 'Pictures', async req => {
		const params = {
			Bucket: vcap_services.objectstore[0].credentials.bucket,
			Key: req.data.ID,
			Body: req.data.content,
			ContentType: "image/png"
		};
		// await the upload so that errors are reported to the client
		// instead of being logged after the response has already been sent
		try {
			const data = await s3.upload(params).promise()
			console.log(data)
		} catch (err) {
			req.error(500, `Upload to S3 failed: ${err.message}`)
		}
	})

	srv.on('READ', 'Pictures', (req, next) => {
		// no key in the request: it is a list query, delegate to the default handler
		if (!req.data.ID) {
			return next()
		}

		// stream the file content back from S3
		return {
			value: _getObjectStream(req.data.ID)
		}
	})

	/* Get object stream from S3 */
	function _getObjectStream(objectKey) {
		const params = {
			Bucket: vcap_services.objectstore[0].credentials.bucket,
			Key: objectKey
		};
		return s3.getObject(params).createReadStream()
	}
}


Note: the VCAP_SERVICES environment variable has to be parsed to retrieve the parameters for the AWS configuration (region, bucket and credentials). These parameters can also be listed with the cf env command:

> cf env <APP_NAME>
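To keep that parsing in one place, the lookup could be wrapped in a small helper that fails fast when the binding is missing. This is a hypothetical helper (not part of the project above), sketched for the same VCAP_SERVICES shape:

```javascript
// Hypothetical helper: extract the objectstore credentials from a raw
// VCAP_SERVICES JSON string, failing fast when the binding is missing.
function getObjectstoreCredentials(vcapServicesJson) {
	const vcap = JSON.parse(vcapServicesJson)
	const instances = vcap.objectstore
	if (!instances || instances.length === 0) {
		throw new Error('No objectstore service binding found in VCAP_SERVICES')
	}
	// region, bucket, access_key_id and secret_access_key are the
	// fields used by the AWS configuration above
	return instances[0].credentials
}

module.exports = { getObjectstoreCredentials }
```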


Now when we send the same PUT request we used earlier, we can see in the console that the file is stored on S3:

PUT /media/Pictures(<ID>)/content
{ ETag: '"<ETAG>"',
ServerSideEncryption: 'AES256',
Location: '<LOCATION>',
key: '<ID>',
Key: '<ID>',
Bucket: '<BUCKET>' }


3 – Next steps

Now that we have a basic example, we could improve it in many ways:

  • add some error handling
  • handle different MIME types
  • implement a DELETE handler
  • create a generic service to handle interactions with S3 and reuse it in other projects
  • investigate the pros and cons of other methods to access S3 (service broker, user-provided services)
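For instance, a DELETE handler could remove the object from S3 before the row is deleted. The makeDeleteHandler factory below is a hypothetical helper (not in the project) that takes the S3 client and bucket name so it can be wired into the service; a minimal sketch, assuming the aws-sdk v2 client used above:

```javascript
// Hypothetical factory: builds a handler that removes the matching
// object from S3, to be registered for the 'DELETE' event of Pictures.
function makeDeleteHandler(s3, bucket) {
	return async req => {
		const params = { Bucket: bucket, Key: req.data.ID }
		// deleteObject(...).promise() is the aws-sdk v2 way to await the call
		await s3.deleteObject(params).promise()
	}
}

// Possible wiring inside the service implementation, before the
// default handler deletes the database row:
//   srv.before('DELETE', 'Pictures', makeDeleteHandler(s3, bucket))

module.exports = { makeDeleteHandler }
```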

Note: the source code is available on GitHub.

Cheers,

Pierre

Edit: added some details about the first POST request and how to run the project locally.
