DJ Adams

Boosting tutorial UX with dev containers part 3 – containers into action

In this three-part series I outline and demonstrate an approach to help newcomers get started more quickly with our tutorials, by describing and providing an environment with all the prerequisite tools installed ready to go. This is part three, where I put the image definition and container configuration into action.

🚨We’ll be covering this topic in a #HandsOnSAPDev live stream “Let’s explore dev containers with VS Code and Tutorial Navigator content” on Fri 04 Feb at 0800 UK – pop by and say hi, everyone is always welcome:

(Image: Thumbnail of upcoming live stream video)

See also the previous posts in this three-part series.

Reviewing what we have created

At the end of part 2 we had completed the definition of the image, in the form of the Dockerfile contents, which are as follows:

ARG VARIANT="16-buster"
FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:${VARIANT}

RUN wget -q -O - https://packages.cloudfoundry.org/debian/cli.cloudfoundry.org.key | apt-key add - ; \
    echo "deb https://packages.cloudfoundry.org/debian stable main" | tee /etc/apt/sources.list.d/cloudfoundry-cli.list

RUN apt-get update \
    && apt-get -y install --no-install-recommends cf7-cli sqlite3

RUN su node -c "npm install -g @ui5/cli @sap/cds-dk yo"
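
By the way, if you want to sanity-check that this image definition builds cleanly on its own, independently of VS Code, you could build it manually with the Docker CLI. Here's a minimal sketch – the tag "cap-tut-image" is just a name I've made up for illustration:

# Run this in the directory containing the Dockerfile
docker build -t cap-tut-image .

# The VARIANT argument we declared can be overridden at build time if needed
docker build --build-arg VARIANT=16-buster -t cap-tut-image .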

Finalising the configuration

We also had a basic devcontainer.json based configuration, to help VS Code know what to do – where to get the container from and what extensions to install.

Before we continue, there are a couple more properties that we might want to add to this configuration.

What we’ll be building initially, in the actual tutorials (specifically in the Create a CAP Application and SAP Fiori UI group), is a CAP application, starting with the first tutorial in that group: Create a CAP-Based Application.

Those of you who have read ahead and browsed the tutorial, or who have built applications and services with CAP before, will know that 4004 is the default port used to listen for and respond to HTTP requests.

One of the things we have to think about when using containers is that they’re independent of the host environment and have their own environment, in much the same way that virtual machines are independent of the host too.

This means that if a service or application listens on a port inside a container – which will be the case here, because our development will take place inside the container that we’ll get VS Code to connect to – then by default only clients inside that container will be able to connect to that port. This is where the concept of port “publishing” or “forwarding” comes in: it makes a port inside a container accessible from outside the container.

The upshot of this is that you’ll be able to continue to use the browser on your local machine to connect with and send requests to the app or service that’s running inside your container – in this case, via http://localhost:4004 for example.

Docker refers to this concept as port publishing, while in the context of VS Code and dev containers it’s called port forwarding (a term that’s common in other networking areas too).
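
To make the distinction a little more concrete, here's roughly what port publishing looks like in plain Docker terms – just an illustrative sketch (reusing the made-up "cap-tut-image" name from earlier), not something we need to run here, as VS Code will take care of the forwarding for us:

# Publish container port 4004 on host port 4004 ("-p <host>:<container>")
docker run -p 4004:4004 cap-tut-image

# Without -p, a server listening on 4004 inside the container
# would only be reachable from within that container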

With the forwardPorts property in the devcontainer.json configuration file, we can specify which ports should be automatically forwarded, or published, from the container to the host. So we will use this to specify that port 4004 should be made available.

Also, if running commands as the “root” user makes you nervous (as it does me), there’s the remoteUser property, with which you can specify a different user to use inside the container. Typically this is “vscode” (in Microsoft’s dev container base images) or, in this sort of Node.js container environment, “node”.

Adding these two properties to the configuration, we end up with this as the final content for our devcontainer.json file:

{
  "name": "Tutorial dev container",
  "build": {
    "dockerfile": "Dockerfile",
  },
  "extensions": [
    "sapse.vscode-cds",
    "sapse.sap-ux-fiori-tools-extension-pack"
  ],
  "forwardPorts": [ 4004 ],
  "remoteUser": "node"
}

 

Putting everything into action

We’re just about ready to try things out!

Creating a project working directory

First, there’s a tiny bit more general setup required, described in the last tutorial of the Prepare Your Development Environment for CAP group, which is to create a directory for development.

This isn’t related to the tools, or to the container directly; it’s just about creating a directory so you have somewhere to store the app that you’re going to create, and somewhere to keep a copy of a set of templates that will help you along the way.

Basically all we need to do here is create a directory, and then (if we want to follow along closely with the tutorials) a subdirectory within that called “cpapp/”. Let’s do that now. Note that this is on your local machine, not in the container.

I’ll create the two directories inside a local “~/work/” directory that I already have – you can put yours where you want. I’ll use the name “cap-tut/” for the higher level directory, and have “cpapp/” within that:

# ~/work
; mkdir -p cap-tut/cpapp
# ~/work
; tree cap-tut/
cap-tut/
└── cpapp

1 directory, 0 files

The “cpapp/” directory will be the focus of our development, and it’s the directory we’ll open up shortly in VS Code.

The Create a Directory for Development tutorial also mentions cloning a repository to get some app templates that you can copy.

Let’s do that too:

# ~/work
; cd cap-tut/
# ~/work/cap-tut
; git clone https://github.com/SAP-samples/cloud-cap-risk-management tutorial
Cloning into 'tutorial'...
remote: Enumerating objects: 3286, done.
remote: Counting objects: 100% (3286/3286), done.
remote: Compressing objects: 100% (1050/1050), done.
remote: Total 3286 (delta 1870), reused 3235 (delta 1824), pack-reused 0
Receiving objects: 100% (3286/3286), 11.16 MiB | 8.31 MiB/s, done.
Resolving deltas: 100% (1870/1870), done.
# ~/work/cap-tut
; ls
./ ../ cpapp/ tutorial/

It also mentions creating a new repository of your own on GitHub. We don’t need that to test things out here, so we can leave that for now.

Bringing in Dockerfile and devcontainer.json

What we will need to do, however, is bring in the Dockerfile and devcontainer.json file. We want to put them in a specially named directory within our “cpapp/” directory that we’ll open up in VS Code, so that VS Code recognises that there’s some remote container setup to do.

The directory that we want to put our Dockerfile and devcontainer.json files in is “.devcontainer/” – this is what VS Code will recognise – and should be at the root of our app directory (“cpapp/”). Let’s create that now too:

# ~/work
; cd cap-tut/cpapp/
# ~/work/cap-tut/cpapp
; mkdir .devcontainer

Finally, the Dockerfile and devcontainer.json file should go into that new “.devcontainer/” directory.
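
In command form, that's something like this – a sketch that assumes you saved the two files from part 2 somewhere like "~/Downloads" (adjust the source paths to wherever yours actually are):

# In ~/work/cap-tut/cpapp - copy the two files into the new .devcontainer/ directory
cp ~/Downloads/Dockerfile ~/Downloads/devcontainer.json .devcontainer/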

Here’s what it looks like when it’s all ready:

# ~/work/cap-tut/cpapp
; tree -a
.
└── .devcontainer
    β”œβ”€β”€ Dockerfile
    └── devcontainer.json

1 directory, 2 files

Starting up VS Code

Taking our cue from the first part of the first tutorial in the Create a CAP Application and SAP Fiori UI group, i.e. the Create a CAP Application tutorial, it’s now time to open up the “cpapp/” directory in VS Code and get started.

As I’m still in the “cpapp/” directory from just before, I can use the following command (which is also shown in the tutorial):

; code .

This will start VS Code and the “.” is of course a reference to the current directory, i.e. “cpapp/”.

Note that the tutorial mentions carrying out “cds init” – we don’t need to do that here, and shouldn’t: we’ll be doing that within the container! The whole point, of course, is that if you’ve followed along and haven’t already installed the @sap/cds-dk package globally, you wouldn’t even be able to run “cds init” on your local machine anyway 🙂

Let’s see what happens. First, we get a nice shiny VS Code screen:

(Image: VS Code screen on startup)

But hey, what’s that message in the bottom right corner? Let’s take a closer look:

(Image: Message offering the option to open the remote container)

Oooh!

This has happened because VS Code has indeed recognised the “.devcontainer/” directory. Of course, the only sensible option for curious people like us is to press the Reopen in Container button, right?

Doing that causes VS Code to reopen; in doing so, it acts upon the contents of the devcontainer.json file, causing a container to be created based on the image described by the Dockerfile that’s referenced there. It also causes our specified extensions to be installed.

During this process, you may have seen this message appear briefly in the bottom right corner:

(Image: Option to show the log)

If you’d selected the link, you’d have been taken to the details of what was going on, details that look like this:

(Image: Container log detail)

If you missed it, you can always get to the log via the Command Palette with the command Remote-Containers: Show Container Log as well:

(Image: Command Palette - show container log)

The eagle-eyed amongst you may be wondering about that (1) next to the “PORTS” heading in the screenshot of the dev container log.

You will probably not be surprised that this is because there’s an entry in the list of ports that are exposed, just like we requested with the forwardPorts property:

(Image: Ports list)

So at this stage we’re all set.

Starting the tutorial

At this stage, you are running VS Code on your local machine, the extensions specified are installed, and all the tools needed are in the container which VS Code has instantiated for you.

Opening a terminal

What’s more, opening up a terminal now in VS Code will open up a shell inside the container, with access to those tools.

Let’s do that now, selecting “bash” from this menu:

(Image: Selecting a Bash shell)

This gives us a lovely command line environment within which to work, and to carry out the commands specified in the tutorial:

(Image: Bash shell ready)

The prompt here looks a little different to what we’ve seen in the previous tutorials in this series:

node ➜ /workspaces/cpapp $

But actually it’s the same pattern:

[username] ➜ [current directory] $

Remember that we specified that we wanted the user “node” (instead of “root”) in our container, and that we’re now in our app directory “cpapp/”.
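
Before carrying on, you might like to sanity-check that the tools we baked into the image are indeed available in this shell – something along these lines (the exact version numbers will of course vary):

# Quick check that the prerequisite tools are all on the PATH
node --version
cf --version
cds version
ui5 --version
sqlite3 --version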

Running cds and npm commands

Following the tutorial instructions, we’re guided to initialise the CAP project with “cds init” and then install the Node.js packages that are listed in the package.json file that (amongst other files) the “cds init” process creates.

Are you ready?

Running “cds init” creates various files and directories (you’ll see them appear in the Explorer on the left hand side), all of which should be familiar to you if you’ve developed CAP apps or services before:

(Image: Result of running cds init)

Let’s pause here for the briefest of moments, to reflect on something: We’re running VS Code locally, on our host machine, and it’s showing the sudden appearance of files and directories … but those files and directories are not actually local – they’re inside the dev container that VS Code has instantiated and connected to. In fact, everything now happens inside the container.

OK, let’s continue. We’re now instructed to run “npm install”, which does what you expect.
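
For reference, the two commands from the tutorial, run one after the other in the container terminal (i.e. in /workspaces/cpapp), are simply these:

# Initialise the CAP project, then install its Node.js dependencies
cds init
npm install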

Starting up the skeleton CAP service

The final part of the tutorial we’re going to do in testing out our container is to run “cds watch”.

Bear in mind that at this point we have nothing in the app – neither schema definitions nor service definitions. This is therefore what we get:

node ➜ /workspaces/cpapp $ cds watch

cds serve all --with-mocks --in-memory? 
watching: cds,csn,csv,ts,mjs,cjs,js,json,properties,edmx,xml,env,css,gif,html,jpg,png,svg... 
live reload enabled for browsers 
_______________________


No models found in db/,srv/,app/,schema,services.
Waiting for some to arrive...

Fair enough!

Adding a schema and connecting to the service

At this stage the tutorial suggests copying a schema definition from the templates directory of the repository (SAP-samples/cloud-cap-risk-management) that we cloned into the “tutorial/” directory.

So let’s do that, by creating a new file in the “db/” directory called schema.cds and pasting the following contents in there – you can copy/paste this content here as it’s exactly that schema definition in the repository:

namespace sap.ui.riskmanagement;
using { managed } from '@sap/cds/common';

entity Risks : managed {
  key ID : UUID @(Core.Computed : true);
  title : String(100);
  prio : String(5);
  descr : String;
  miti : Association to Mitigations;
  impact : Integer;
  criticality : Integer;
}

entity Mitigations : managed {
  key ID : UUID @(Core.Computed : true);
  description : String;
  owner : String;
  timeline : String;
  risks : Association to many Risks on risks.miti = $self;
}

Once you’ve pasted it in, unless you’ve disabled automatic save, VS Code will save the contents of your new schema.cds file (do it manually if it doesn’t).

By the way, notice at this point that the CDS content saved in schema.cds is automatically colour-coded, and that’s thanks to the SAP CDS Language Support extension that has been installed according to our devcontainer.json configuration.

Then, through the magic of the still-running “cds watch” process, our fledgling service comes to life!

(Image: The service comes to life)

What’s more, VS Code helpfully points out that we now have a service listening on port 4004:

(Image: Listening on port 4004)

So what are we going to do here? Open in Browser of course!

As you do that, remember that this is a link from VS Code running locally, to your browser also running locally on your machine, but it’s connecting to the CAP based service running inside the container. This would be a good time to stop again and think about what this means for a second.

Once you’ve finished pondering life, the universe and everything related to development containers and turtles, you can turn to what your browser is displaying, which will look something like this:

(Image: Service running)

Yes, that URL is indeed a “localhost” URL, but the service responding on 4004 isn’t running on your local machine – the connection is being forwarded to the container.

And yes, that’s the CAP service inside your container sending that response. It’s not a very exciting response right now as there’s hardly any data or service definition to work with. But it’s there, it’s alive, and it’s ready for the next part of your tutorial based learning!
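
Incidentally, if you prefer the command line to the browser, you can confirm the same thing from a separate terminal on your host machine – a small sketch, assuming you have curl available locally:

# From the host - the request is forwarded into the container
curl http://localhost:4004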

Wrapping up

I’ll leave it to you to continue with the tutorial and the group – it’s a great set of learning resources!

What I hope you take away from this series is that with the power of containers, we can improve the developer experience in many contexts – this tutorial-prerequisite-based learning context is just the start. But it’s a start that’s simple enough for us to build on and that gives us further ideas, right?

There are of course many questions I’ve deliberately left unanswered for now. How do we make this accessible to more than just me? How best can we distribute images, or image definitions, and dev container configuration? What if I don’t want to use VS Code? (That’s an easy one – I use dev containers for many things, and I don’t use VS Code).

Perhaps more fundamental, however, is this question, and I’d love to hear from you in the comments below: Does this resonate with you? Can you see yourself using dev containers to make your life easier? Would you enjoy tutorial-prerequisite-specific container configuration?

Please share your thoughts, and if you’ve got this far in the series, thank you for reading!


      14 Comments
      Christian Drumm

Great three-part series DJ Adams 👏👏! I will try this immediately for Python development as well. Now you only need to write part four on doing all this inside vim 😉

      Christian

      Nils Lutz

Awesome series! Now we only need to add some 📄 dotfiles for a prettier shell 🎨 prompt and some globally installed 📦 npm packages like prettier (pun unintended 😉) to the 📁 devcontainer.

I know you're a huge fan of the 👾 terminal, as am I, so I can't get around it. You really have to check out the tool Fig.io. (Not meant as an advertisement)

      Nils

      DJ Adams
      Blog Post Author

      Thanks Nils! I remember taking a look at fig.io a while ago, then completely forgot about it. I'll definitely check it out again.

      I'd love to see where these explorations take us, perhaps to sharing config, setup and so on ...

      Nils Lutz

Yep, it is now out of private beta, the community is very active, and I think it should be easy to add some config for SAP-related CLI tools like ui5-cli, cds, the btp CLI and so on 😊 I chatted some time ago with one of their devs in their Discord about integrating SAP CLI tools and he was very interested 🙂

      Sascha Weidlich

      Hi DJ Adams, awesome stuff!

Just a quick question, maybe it's a silly one, but I currently have no clue about this topic at all:

Is there any way to store the Dockerfile in a central way? The idea would be that a whole dev team could simply do something like "npm install" to get the latest dev environment for their team.

      In your blog-series it looks like everyone needs to create the dockerfile etc. when creating a new repository.

I mean, you could store this in a git repo and clone & copy/paste it every time you create a new repo, but this doesn't feel right to me, at least..

      Thanks for your Inputs/Thoughts,

      Sascha

      DJ Adams
      Blog Post Author

      Hey Sascha, not a silly question at all!

      There are many ways to approach this and it certainly is something that teams will want to do. We can think about the answer in general by considering the two different "levels" at play here.

      The first is the Dockerfile (and devcontainer.json) context. Those files can easily be stored in a shared repository, say on GitHub, for anyone to clone and use. Even better would be to store them in the actual repository that contains the entire project that you're working on - the VS Code conventions here even dictate a directory name to use: ".devcontainer/".

If the image from which the containers are to be built is fairly stable, i.e. it is generic and contains tools common to more than a specific project or tutorial, then the image itself can be pushed to Docker Hub - a sort of clearing house for Docker images, that can be public or private. Docker Hub is the most well known, but there are other "container registries" available elsewhere - for example, GitHub has its own container registry, see https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry for example.
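
To make that a little more concrete, the publishing side might look something like this - just a sketch with placeholder names, using GitHub's container registry as an example:

# Build the image locally, tagging it for the GitHub container registry
# ("your-org" and "cap-tut-image" are placeholder names)
docker build -t ghcr.io/your-org/cap-tut-image:latest .devcontainer

# Log in and push, so others can pull the ready-made image
docker login ghcr.io
docker push ghcr.io/your-org/cap-tut-image:latest

Team members (or indeed a devcontainer.json, via the "image" property instead of "build") can then simply pull that ready-made image rather than building it themselves.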

      HTH!

      Sascha Weidlich

      Thanks for the detailed answer DJ!

I will definitely give it a try and use Docker Hub to publish an image there. Keep up the good work, will follow you on your livestream tomorrow. 😉

      Sascha

      Huseyin Dereli

Thanks again DJ, I feel lucky that we have this content at the SAP community 🎉🎉

After that session I wanted to try to add the btp CLI to the image that we worked on. In one of the previous episodes, there was a script you'd built as a utility in order to install the btp CLI. I thought I might use that. Then I added a couple of lines, faced a couple of issues, and this is the result.

      https://github.com/hdereli/devco-sap-cds/blob/main/Dockerfile

      1- First line downloads the script I mentioned and saves it to the image. (I'm sure we can execute without downloading)

      RUN curl -LJO https://raw.githubusercontent.com/SAP-samples/sap-tech-bytes/2021-09-01-btp-cli/getbtpcli

2- We need permissions to execute the downloaded script

RUN chmod +x getbtpcli

3- And in this line I needed to run it, but the script needed an interaction, which in this case caused the build to fail. The license has to be accepted to proceed. Then I found a way to send Enter through a pipe. I'm not 100% sure what's going on, but it let me run the script and accept the agreement.

RUN echo -ne '\n' | ./getbtpcli

4- Well, just like after installing the btp CLI locally, I know we add the btp bin directory to the PATH. That was easy to find, but I'm not sure if I should use the node user or the root user when executing all these commands. So I had to use ./root/ explicitly. It could be a bad practice.

ENV PATH "$PATH:./root/bin/"

Well, I've built and run it to see if the btp command was working. It worked!! 🙂

For the 3rd step, I tried to add a -y flag to the installation to make it clearer for me and also to do it a little bit more elegantly. I copied the existing -t flag and tried to handle the -y flag similarly: https://github.com/hdereli/btp-cli/blob/main/getbtpcli . Then I simply changed the 3rd step like below;

RUN ./getbtpcli -y

No errors, build is OK. But it didn't install the binary to the image. When I try to run the script "./getbtpcli -y" manually inside the image, it works 😬

I'm so curious to see what kind of stupid mistakes I've made 😀
      DJ Adams
      Blog Post Author

      No stupid mistakes! We're all learning and sharing together here, as always. I'm just about to retire for the evening, and I'm out of the office tomorrow, so here's a drive-by suggestion for the first part of your puzzle - how to download the getbtpcli script and execute it, in one go, sending a confirmation for the licence in that too.

      Replace all the lines above (in your comment) with this one:

      RUN bash -c "echo -ne '\n' | bash <(curl -L https://raw.githubusercontent.com/SAP-samples/sap-tech-bytes/2021-09-01-btp-cli/getbtpcli)"

      The plain output from the build, for this step, will look something like this (use --progress=plain to get the detail when running docker build):

      #8 [5/5] RUN bash -c "echo -ne '\n' | bash <(curl -L https://raw.githubusercontent.com/SAP-samples/sap-tech-bytes/2021-09-01-btp-cli/getbtpcli)"
      #8 sha256:bc5ab6d4b93c4581349288bdc932260f8c4d647567eb128a04a59693d24375a8
      #8 0.225 % Total % Received % Xferd Average Speed Time Time Time Current
      #8 0.232 Dload Upload Total Spent Left Speed
      100 3266 100 3266 0 0 46657 0 --:--:-- --:--:-- --:--:-- 47333
      #8 0.307 Proceed (with Enter) only if you accept the SAP Developer Licence 3.1
      #8 0.307 (see https://tools.hana.ondemand.com/developer-license-3_1.txt) ...Version is 2.14.0
      #8 DONE 1.1s

      Note that btp is indeed installed (version 2.14.0 in this case).

If you're wondering about the <(...) part, that's Process Substitution. I use it in this blog post: Mass deletion of GitHub Actions workflow runs, and explain it in context in this related one: GitHub Actions workflow browser.

      I'll hand the baton over to you and others here to see where the btp executable ends up, and how best to deal with that.

      HTH!

      dj

      Huseyin Dereli

      Yes, this is better and it worked, thanks.

I was looking for something like a verbose option. Good to learn --progress=plain.

      I actually checked Process Substitution while I was exploring the source code of getbtpcli.

        "$tempdir/btp" --config "$tempdir/config" --version 2> /dev/null | grep -P -o '(?<=v)\d+\.\d+\.\d+'
      Cesar Felce

      Hi @DJ Adams,

      Thanks for this blog, it's awesome!

I'm trying to do a cds deploy --to hana from inside the container, but I'm receiving the following error

      [ERROR] ENOENT: no such file or directory, chmod '/workspaces/cppapp/gen/db/node_modules/@sap/hdi-deploy/node_modules/@sap/hana-client/prebuilt/linuxppc64le-gcc48/.libdbcapiHDB.so.icloud'

Is it possible that the image doesn't support this library?

      regards,

      Cesar

      DJ Adams
      Blog Post Author

      Hey Cesar, thanks! Perhaps we can dig into some further information together to see what's going on here. Are you using the container as-is from this blog post series? What would you normally do here from an npm module install context? The "ppc" part of the name in the path of the module there doesn't look right - that's the Power PC architecture and if the module is pre-compiled for that target, then that could be part of the issue too.

      Cesar Felce

      Hi DJ Adams,

      Yes we can dig into it, more than happy to help a legend!
I'm using the container almost as-is from the blog; I added the hana-cli.

      I just did basic stuff
      1.- cds init <project> --add mta,hana,pipeline
      2.- npm i
      3.- hana-cli createModule
      4.- modify the schema.cds with a basic structure
      5.- cds deploy --to hana

It works nicely with cds watch using SQLite, but the deploy gives me that error.

      How can we do this together?

      thanks again!

       

      Cesar Felce

      Hi DJ Adams,

I noticed that my project had also been deployed locally and not in the container. I removed the package-lock.json and the node_modules directory, and then it worked!

       

      kind regards,
      Cesar