Annotated links: Episode 22 of Hands-on SAP dev with qmacro
This is a searchable description of the content of a live stream recording, specifically “Episode 22 – Live stream community share – dotfiles and Google Cloud Run goodness” in the “Hands-on SAP dev with qmacro” series. There are links directly to specific highlights in the video recording. For links to annotations of other episodes, please see the “Catch the replays” section of the series blog post.
This episode, titled “Live stream community share – dotfiles and Google Cloud Run goodness”, was streamed live on Wed 01 May 2019 and is approximately one hour in length. The stream recording is available on YouTube.
Below is a brief synopsis, and links to specific highlights – use these links to jump directly to particular places of interest in the recording, based on ‘hh:mm:ss’ style timestamps.
In this midweek episode we enjoy a little off-piste activity with two special guests from the #HandsOnSAPDev community. Ronnie Sletta describes his setup and takes us through his dotfiles for new machine setups, and Nabheet Madan shows us how he deployed a CAP project to Google Cloud Run.
Links to specific highlights
00:02:05: Drawing our attention to Scott Dillon who runs a regular series of “Garage” online sessions on SAP Cloud Platform technical topics. You can find out more at his WhySCP page and the recordings of the sessions are available in a YouTube playlist SAP Cloud Platform in the Garage on the SAP Technology channel. Great stuff!
00:03:36: Pointing out another coding live streamer, Brendan Enrick who streams live on Twitch with the DevChatter handle. All sorts of interesting stuff and right now he’s building something that will help folks like you and me find other code based live streams. Definitely worth following.
00:04:28: Mentioning an update to SAP Web IDE relating to building Node.js based CAP apps – see the blog post Develop Business Services with Node.js in SAP Web IDE by Liat Borenshtein for more info.
00:08:50: Ronnie introduces himself and what he’s going to be showing us, with some background about how he uses virtual machines and spins them up at a moment’s notice – which means that he’s had to come up with a solution for automating some of the post-creation setup, using dotfiles and shell scripts. Cross platform too!
00:15:10: Ronnie switches over to show his screen, and explains dotfiles, giving us examples from his own set, including .tmux.conf and – possibly the most important here – .bashrc (which differs from .bash_profile, as described in this StackExchange post).
00:22:58: There’s also a .functions file that allows Ronnie to keep his code and setup nice and modular, alongside other files organised in the same way.
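As an aside, here’s a hedged sketch (not Ronnie’s actual code) of how this modular approach commonly works – a .bashrc sourcing each auxiliary file that exists:

```shell
# Minimal sketch of the modular dotfiles pattern: keep functions,
# aliases, exports etc. in separate files, and source each one
# from .bashrc if it exists and is readable.
for file in "$HOME"/.{functions,aliases,exports}; do
  if [ -r "$file" ]; then
    . "$file"   # pull the file's definitions into this shell
  fi
done
unset file
```

This keeps each concern in its own small file, which makes the whole setup easier to maintain and share.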
00:24:20: Ronnie uses nvm to manage his Node.js installations; this is a great way to do it – and we’ve seen its use also in a previous episode in this series. One bonus of using nvm is that you can install and manage Node.js without needing root access. If you’re in charge of your own VMs that is not so much of an issue, but it’s nice to know you can be independent of any BOFHs! 🙂
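As a hedged sketch of what this looks like in practice (assuming nvm is installed in its default location): everything lands under $HOME/.nvm, which is why no root access is required.

```shell
# Sketch of typical nvm usage; versions are installed entirely
# under $HOME/.nvm, so no root access is needed.
export NVM_DIR="$HOME/.nvm"
if [ -s "$NVM_DIR/nvm.sh" ]; then
  . "$NVM_DIR/nvm.sh"   # load nvm into the current shell
  nvm install 10        # install a Node.js 10.x release
  nvm alias default 10  # make it the default in new shells
fi
```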
00:24:55: Starting to look at Ronnie’s .bash_profile, which has all sorts of goodness, including platform specific logic. He has taken some of the content from others, which is a great way to bootstrap your shell skills; the references he gives are in his repo’s README. In fact all the code Ronnie is showing is available online in the repo, which is here: https://github.com/rsletta/dotfiles.
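Platform specific logic in a shell startup file typically branches on the kernel name – here’s a hedged sketch (the variable name is just for illustration, not from Ronnie’s files):

```shell
# Sketch of platform-specific logic in a .bash_profile:
# branch on the kernel name reported by uname.
case "$(uname -s)" in
  Darwin) platform="macos" ;;
  Linux)  platform="linux" ;;
  *)      platform="unknown" ;;
esac
echo "Detected platform: $platform"
```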
00:28:20: Moving on now to the real star of the repo, which is the bootstrap.sh script. Ronnie talks about how he came to write this, and why. At this point we dig in deeper with a demo where Ronnie uses his iPad as a terminal, using an SSH client (Blink Shell), and shows us how he sets up a newly minted (but otherwise bare) virtual machine.
He follows the instructions in his repo’s README, which are essentially to clone the repo and run the bootstrap.sh script:

git clone https://github.com/rsletta/dotfiles.git <DIRECTORY_NAME> && cd <DIRECTORY_NAME> && ./bootstrap.sh

Running this script sets up lots of different things, including Node.js itself and of course the dotfiles themselves.
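For a feel of what such a script does, here’s a hypothetical, much-simplified sketch (Ronnie’s real bootstrap.sh does considerably more – see the repo): symlink each dotfile in the repo into $HOME.

```shell
#!/usr/bin/env bash
# Hypothetical, simplified sketch of a dotfiles bootstrap script:
# symlink each dotfile in the repo directory into $HOME.
repo_dir="$(cd "$(dirname "$0")" && pwd)"
for dotfile in "$repo_dir"/.[a-z]*; do
  name="$(basename "$dotfile")"
  # skip the repo's own metadata, and anything that isn't a file
  if [ "$name" != ".git" ] && [ -f "$dotfile" ]; then
    ln -sf "$dotfile" "$HOME/$name"
    echo "Linked $name"
  fi
done
```

Symlinking (rather than copying) means a later `git pull` in the repo updates the live dotfiles immediately.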
00:33:25: At this stage, the shell’s prompt is nice and attractive and informative, and he can jump into vim to finalise the setup (for example, the Vim plugins). All done!
00:34:20: Ronnie encourages you to go into the repo and check out what’s inside the platform specific scripts, such as Ubuntu.sh. So what are you waiting for? 😉
00:36:50: Now it’s time to bring Nabheet onto the stream. After being dialled in, Nabheet introduces himself and sets the scene for what he’s going to show us. He’s written it all up in a blog post too, which you can read here: CAPM meet Google Cloud Run – Serverless Containers.
00:38:40: Nabheet highlights Lucia Subatin‘s blog post Post Google Next ’19 curiosity – Playing with Cloud Run and HANA which got him started on this particular journey of discovery of deploying a CAP app to the Google Cloud and running it there.
00:40:30: He also highlights his series of posts #HelloWorld SAPUI5 meets Kubernetes – Containers (a series which also contains a post from Ronnie – nice teamwork!), thinking about progressing from serverless functions to serverless containers … and Google Cloud Run is all about serverless containers at scale.
00:41:55: Nabheet describes his starting point with a CAP app based on the tutorial Create a Business Service with Node.js using Visual Studio Code on developers.sap.com. The plan is first to deploy it to Google Cloud and run it “locally” in that it will be run from the Google Cloud Shell, and then build a container image and deploy & run that.
00:42:42: And now it’s demo time, where Nabheet shows us the new Cloud Run feature in the Google Cloud Platform console.
00:44:25: Looking at the Google Cloud Shell (of which I’m a big fan and user too) – a browser-accessible shell environment which gives you a pretty decent environment with many of the tools you expect, plus the gcloud tool, and a 5GB filesystem which persists (so you can install other tools there* and store files too).
* There’s a new feature, still in alpha, which now allows you to define your own build instructions for the image that is used for Google Cloud Shell.
00:46:03: Nabheet shows us that as well as regular tools such as the Vim editor, Google Cloud Shell has a web-based IDE too, which you can invoke with a button or from the command line with the edit command. Lucia had referred to this in the chat too.
00:47:35: Running the CAP app locally, with cds run, we see, as expected, the service available on http://localhost:4004. This port on the Google Cloud Shell is proxied so it can be accessed directly from the browser … but how does this actually work?
There’s a bit of Google magic at play here that I think we missed, and during the playback just now (while I write these annotations) I looked into it. Nabheet clicked on the http://localhost:4004 link, and what actually happened is that that link (made available to us in the shell as an HTML5 hyperlink) pointed to this URL:
This seems to be a generic service that causes ports to be automatically proxied by what appears to be a Google App Engine based service (I’m assuming that from the resulting URL, which includes the giveaway string appspot). In my experiment just now, doing what Nabheet did, the end URL is:
00:47:50: So at this stage we see the familiar “Welcome to cds.services” page in the browser, showing that the CAP app is running successfully … in Nabheet’s Google Cloud Shell.
00:48:10: So now it’s time to create a container image. Here is the build command that Nabheet used, referring to the “capm3” image definition (steel-signifier-225916 is a project identifier specific to Nabheet’s session):

gcloud builds submit --tag gcr.io/steel-signifier-225916/capm3
00:48:55: We have a quick look at the Dockerfile, which contains the build instructions for the image; it’s based on the official Node.js 10 image which Lucia has also used (note that the contents of the file are in Nabheet’s post mentioned earlier).
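For flavour, here’s a hypothetical sketch of a Dockerfile along these lines – the actual file is in Nabheet’s blog post, so treat everything here as an assumption. One detail worth knowing: Cloud Run expects the app to listen on the port given in the PORT environment variable (8080 by default), rather than CAP’s usual 4004.

```dockerfile
# Hypothetical sketch only - the real Dockerfile is in Nabheet's post.
# Based on the official Node.js 10 image, as described above.
FROM node:10
WORKDIR /usr/src/app
# Install dependencies first, so Docker can cache this layer
COPY package*.json ./
RUN npm install
# Copy the rest of the CAP project
COPY . .
# Start the CAP server (assumes cds is a project dependency)
CMD [ "npx", "cds", "run" ]
```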
00:49:40: At this point the image has now been built (builds can be checked in the Cloud Build section of the console). So now it’s time to deploy, with:
gcloud beta run deploy --image gcr.io/steel-signifier-225916/capm3
00:52:00: Going to the Cloud Run section of the console we now see the new “capm3” service where we look briefly at the logs before accessing the CAP service that’s running there, via this URL:
There’s a small issue with the service URLs not appearing below the “Welcome to cds.services” message, but they are available all the same, via the usual paths, and we can retrieve the book and author entities via OData query operations as usual.
(Incidentally, Google Cloud Shell also supports launching interactive tutorials from the command line, with cloudshell launch-tutorial -d <path-to-the-tutorials>.)
I really enjoyed Ronnie and Nabheet’s share of experimentation and knowledge – thanks so much again. Until next time!