Technical Articles
Lalit Mohan Sharma

Use Private Registry for Containerize a CAP Application – Part 2 (Amazon ECR)

Welcome back! In my first blog post (click here), we explored the advantages of private registries over public ones, learned how to choose the best container registry based on customer needs, and compared the leading container registries available in the market.

One of the top registries in the current market is Amazon Elastic Container Registry (ECR), an AWS-managed container image registry service that is secure, scalable, and reliable. It offers resource-based permissions on private and public image repositories using AWS IAM. For additional information, click here.

This blog article shows how to use Amazon ECR to containerize a simple CAP application. It covers tagging the local image for the private registry, pushing it to an Amazon ECR private repository, and finally pulling the application's image and running it.


You need to have the following:

  1. A valid AWS account configured with multi-factor authentication (MFA) to help protect your AWS resources. If you don't have one, please follow Create and Activate a new AWS account.
  2. Install the latest version of the AWS CLI for your operating system. If already installed, check the CLI version and update it if required.
  3. Install the CDS tools by following the steps mentioned in the link.
  4. Git is the version control system that you need to download files. If you don’t have it, go to Git downloads, pick the installer appropriate for your operating system and install it.
  5. Choose VS Code as an editor. If you don’t have it, go to the Visual Studio Code homepage and install it.
  6. You need the server-side JavaScript runtime environment Node.js. Download and install it (Node.js version 12.x or 14.x is recommended).

Let’s proceed step by step

Before we start, download the repository and check out the relevant branch. Open your computer's terminal and enter the following:

git clone
cd cloud-cap-risk-management
git checkout create-ui-fiori-elements

Step 1: Run the CAP Application in a Docker Container Locally   

Since we intend to run the CAP application in a container, we need to create a Docker image for it. You must create a file called Dockerfile that specifies how to build the image and what to do when it is executed.

Create a file named Dockerfile and add the following lines to it:

FROM node:14-slim

WORKDIR /usr/src/app
COPY gen/srv .
RUN npm install
COPY app app/
COPY db/data db/data/
RUN find app -name '*.cds' | xargs rm -f

EXPOSE 4004
USER node
CMD [ "npm", "start" ]

Since you don’t want to start from scratch, the FROM directive at the beginning of the file specifies the base image you want to utilise. In this case, you utilise a public image that comes pre-installed with Node.js 14.x. Following that, you declare with EXPOSE that the CAP default port 4004 is open to outside traffic, and CMD starts the CAP server with npm.

To keep things simple, we can try out the scenario without an external database server. First remove sqlite3 from the development dependencies, then add it back as a runtime dependency:

npm i sqlite3 -P

Add the following snippet to the package.json file:

  "name": "capapp",
  "cds": {
    "requires": {
      "db": {
        "kind": "sql"

Run cds build first, because the image is built from the output in the gen/srv folder.

cds build

Build the docker image locally:

docker build -t capapp .

Docker images are made up of multiple “filesystem layers.” Your customized Docker image is a layer on top of the base image; each layer can add or remove files.

Run the Docker Container:

docker run --rm -p 4004:4004 -t capapp

This instructs Docker to expose port 4004 on the host to traffic from Port 4004 on the Docker container. You could also use a different host port, but let’s keep things straightforward.

The CAP service is now available at http://localhost:4004.

Step 2: Create the IAM user’s Access Keys & Add Permission.

Open the IAM console by logging into the AWS Management Console at

Select Users in the navigation pane, choose the user’s name, and open the Security credentials tab. Make a note of the serial number of your virtual MFA device, or select Assign MFA device under Multi-factor authentication (MFA) if you don’t have one yet.

Select Create access key from the Access keys section.

Note: Select Show to display the new access key pair. After this dialogue box closes, you won’t have access to the secret access key again.

Your credentials should resemble these:

Amazon ECR provides several managed policies that you can attach to IAM users. These policies allow differing levels of control over access to Amazon ECR resources and API operations. You can apply these policies directly or use them as starting points for creating your own policies.
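If you prefer to scope permissions down rather than grant full access, a minimal custom policy for pushing and pulling images could look like the sketch below. The action names are standard Amazon ECR API actions; narrowing Resource to specific repository ARNs is left as an exercise.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:UploadLayerPart",
        "ecr:CompleteLayerUpload"
      ],
      "Resource": "*"
    }
  ]
}
```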

To manage Amazon ECR, we need full administrator access. Hence we attach the AmazonEC2ContainerRegistryFullAccess policy to your IAM user.

Choose the Permissions tab. Click on Add permissions. 


Configure the AWS CLI:

To access a resource in your AWS account, the AWS CLI requires three necessary arguments (access key, secret key, and region). When you create an IAM user, you receive the access key and secret key. The region option specifies the region in which the AWS CLI will access the resources.
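Behind the scenes, aws configure writes these values to two files under the .aws folder in your home directory. The values below are placeholders for illustration only:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAEXAMPLEACCESSKEY
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[default]
region = us-east-1
output = json
```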

aws configure

To check whether the AWS CLI is properly configured, run the sts get-caller-identity command; it returns the 12-digit identification number of your AWS account.

aws sts get-caller-identity
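If everything is configured correctly, the output resembles the following (the values shown are placeholders):

```json
{
    "UserId": "AIDAEXAMPLEUSERID",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/your-user-name"
}
```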

Use an MFA token to authenticate access to AWS resources

Multi-factor authentication (MFA) is the best practice for securing your account and its resources. You need to create a temporary session if you want to use an MFA device to access your resources using the AWS CLI.

Run the sts get-session-token AWS CLI command, replacing the variables with information from your account, resources, and MFA device:

aws sts get-session-token \
--serial-number arn:aws:iam::123456789012:mfa/ \
--token-code 470242

You receive output containing temporary credentials and an expiration time (by default, 12 hours).
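The response has the following shape (placeholder values shown for illustration):

```json
{
    "Credentials": {
        "AccessKeyId": "ASIAEXAMPLEACCESSKEY",
        "SecretAccessKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
        "SessionToken": "IQoJb3JpZ2luX2VjEXAMPLETOKEN...",
        "Expiration": "2022-07-14T02:14:13+00:00"
    }
}
```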


Using temporary credentials with named profiles

You can also use named profiles for commands that require MFA authentication. To do so, edit the credentials file in the .aws folder in your home directory and add a new profile configuration for issuing MFA-authenticated commands. Here’s an example profile configuration:
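A credentials file with such a profile might look like this (placeholder values; the mfa profile name is the one passed via --profile mfa in the later steps, and its values come from the get-session-token output):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAEXAMPLEACCESSKEY
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# Profile for MFA-authenticated commands, filled from get-session-token output
[mfa]
aws_access_key_id = ASIAEXAMPLEACCESSKEY
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
aws_session_token = IQoJb3JpZ2luX2VjEXAMPLETOKEN...
```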

After the credentials expire, run the get-session-token command again and then export the returned values to the environment variables or to the profile configuration.
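Exporting the returned values to environment variables looks like this (the values are placeholders, assumed for illustration; substitute the real ones from the get-session-token output):

```shell
# Placeholder temporary credentials from the get-session-token output
export AWS_ACCESS_KEY_ID="ASIAEXAMPLEACCESSKEY"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_SESSION_TOKEN="IQoJb3JpZ2luX2VjEXAMPLETOKEN"
```

The AWS CLI picks these variables up automatically, taking precedence over the default profile in the credentials file.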

Step 3: Create a private Repository

Now that you have an image to push to Amazon ECR, you must create a repository to hold it. To create a repository, open the ECR console by logging into the AWS Management Console at

Click on the Get Started Button.

You will be taken to the Create repository page, where you can enter all the details for your new repository. Choose visibility Private, name the repository

Click on the Create repository Button.

After creating the repository, you will see it listed as shown below. Make a note of the URI of your repo; we’ll use it in the next steps.

Step 4: Log in to AWS ECR with profile MFA

After configuring the AWS CLI, authenticate the Docker CLI to your default registry. This allows the docker command to push and pull images with Amazon ECR. The get-login-password command is the preferred way to authenticate to an Amazon ECR private registry when using the AWS CLI.

Use the value AWS as the username and specify the Amazon ECR registry URI you want to authenticate to when passing the Amazon ECR authorization token to the docker login command. Your private registry’s default URL is

If authenticating to multiple registries, you must repeat the command for each registry.

aws ecr get-login-password --region us-east-1 --profile mfa \
 | docker login --username AWS --password-stdin

Step 5: Tag the local image to private registry tags

Tag your image with the combination of Amazon ECR registry, repository, and (optionally) image tag you want to use. The registry format is aws_account_id.dkr.ecr.region.amazonaws.com. The repository name should match the repository that you created for your image. If you omit the image tag, the tag latest is assumed.
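Putting the pieces together, the full image reference can be assembled like this (the account ID, region, and repository name below are hypothetical placeholders; substitute your own):

```shell
# Hypothetical values -- replace with your account ID, region, and repo name
ACCOUNT_ID=123456789012
REGION=us-east-1
REPO=capapp

# registry/repository:tag
IMAGE_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:latest"
echo "${IMAGE_URI}"
```

You can then pass "$IMAGE_URI" as the target of docker tag, docker push, and docker pull.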

docker tag capapp:latest

Step 6: Push the docker image

Push the image using the docker push command:

docker push

After you push your docker image, you will see the new image has been added to the repository with the tag.

Step 7: Pull the image

With get-login-password, we already authenticated Docker to the private Amazon ECR registry in Step 4. Therefore, use the docker pull command to fetch the image. To pull an image by tag, the image name format is registry/repository[:tag].

docker pull

Step 8: Run your private container

To run an image by tag, the image name format is registry/repository[:tag]. Simply run it as shown below.

docker run --rm -p 4004:4004 -t

What’s next?

This concludes the blog entry; I hope you now understand how to use the Amazon ECR private registry to containerize your simple CAP application. Stay tuned for the next article, in which you will see how to use other available container registries to containerize a CAP application.


Amazon ECR private registry
Run a CAP Application on Kyma
How to choose a Container Registry
Amazon Elastic Container Registry
