
Annotated links: Episode 42 of Hands-on SAP dev with qmacro

This is a searchable description of the content of a live stream recording, specifically “Ep.42 – Impromptu stream on GitHub CAP Community work” in the “Hands-on SAP dev with qmacro” series. There are links directly to specific highlights in the video recording. For links to annotations of other episodes, please see the “Catch the replays” section of the series blog post.

This episode was streamed live on Fri 01 Nov 2019 and is approximately 60 minutes in length. The stream recording is available on YouTube.

Brief synopsis: After being unable to stream reliably since the Catalina upgrade, I took the opportunity to investigate Lightstream, a Chrome extension for streaming. In this impromptu, unplanned stream I work through an issue assigned to me in the CAP Community repository.

Note: Although the details can still be made out OK, this particular recording has some encoding and compression artifacts, the result of me running my display at too high a resolution for the stream's target resolution. I’ve since fixed this, and in fact reduced the artifact issue halfway through this particular session too.

00:01:30 Start to look at the work that is assigned to me on the CAP Community GitHub repo, which was issue #11 “Add ‘calculated fields’ example to /examples please”.

00:03:00 Looking at the Tiling Window Manager for Chrome OS, and mentioning my modification of it that gives me the ability to increase and decrease the window gaps dynamically via keyboard shortcuts (I recorded a brief demo of this) – give me a shout if you’re interested in learning more about this.

00:05:30 Explaining the background to this Calculated Fields issue, by referring to a blog post I wrote as an extended reply to a question from Pierre Dominique. The post is this one: “Computed field example in CAP” and describes one way of using computed properties.

00:10:00 I’ve already forked the CAP Community repo and the first thing I do is to clone this fork. In this session you’ll see how I keep a fork of a repo up to date.

00:12:15 In order to keep the fork up to date, I need to reference the upstream source repo. The upstream repo is the original repo at https://github.com/sapmentors/cap-community and my fork is at https://github.com/qmacro/cap-community.

Right now, there’s no reference in my clone to the upstream repo, as I can see with this command, which shows all the remote sources:

-> git remote -v
origin  git@github.com:qmacro/cap-community.git (fetch)
origin  git@github.com:qmacro/cap-community.git (push)

Only “origin” is shown; this is my fork, which is the source of this clone on my local machine.

To refer to the upstream repo, I add a remote, and call it “upstream”, by convention:

-> git remote add upstream git@github.com:sapmentors/cap-community.git

Now when I check again to see the remotes that are defined, I see both the origin and the upstream:

-> git remote -v
origin  git@github.com:qmacro/cap-community.git (fetch)
origin  git@github.com:qmacro/cap-community.git (push)
upstream        git@github.com:sapmentors/cap-community.git (fetch)
upstream        git@github.com:sapmentors/cap-community.git (push)

00:13:35 At this stage, we see that my fork is behind the source (upstream) by a number of commits, so we have to bring the fork up to date by applying the upstream commits to it.

This is what we do now, and the commands used are shown here, along with sample output.

First we fetch the commits from upstream:

-> git fetch upstream
remote: Enumerating objects: 2, done.
remote: Counting objects: 100% (2/2), done.
Unpacking objects: 100% (5/5), done.
remote: Total 5 (delta 2), reused 2 (delta 2), pack-reused 3
From github.com:sapmentors/cap-community
 * [new branch]      master     -> upstream/master

Next, we make sure we’re on the “master” branch locally:

-> git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.

There’s actually only the “master” branch right now anyway, as we can see:

-> git branch
* master

Now we can merge the commits into the “master” branch of this clone of the fork:

-> git merge upstream/master
Updating 994fa05..81236e9
Fast-forward
 README.md | 3 ++-
 [... other changes shown too ]
 .
 .
 .

Finally, we can push those changes to the origin, i.e. the fork that this local clone is from:

-> git push origin master
Total 0 (delta 0), reused 0 (delta 0)
To github.com:qmacro/cap-community.git
   994fa05..81236e9  master -> master

Now our fork is up to date with the original source!

00:17:10 Creating the new directory for the example, with:

-> cds init --modules db,srv computed-field

Subsequent exploration of what has been created shows us a nice fresh CAP Node.js project to start with.

00:18:45 At this stage we start to refer to the original blog post and fill out the example code based upon the samples in that post, starting with the data-model.cds contents, following on with the cat-service.cds contents, which is where the computed field comes in.
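
For orientation, the service definition ends up with a shape roughly like this. This is a sketch only, reconstructed from the entity and property names visible in the EDMX output later on (CatalogService, Authors, numberOfBooks); the namespace, file layout and exact syntax are assumptions, so refer to the blog post for the real code:

```cds
// srv/cat-service.cds — illustrative sketch, not the exact code from the episode
using my.bookshop as my from '../db/data-model';

service CatalogService {
  entity Books as projection on my.Books;
  entity Authors as projection on my.Authors {
    *,
    // "virtual" keeps the element out of the persistence layer entirely —
    // it exists only at the service definition layer, and is filled in
    // at runtime by the service implementation
    virtual null as numberOfBooks : Integer
  };
}
```

The `virtual` modifier is what gives the behavior explored in the next couple of steps: no column in the generated SQL, and an automatic Core.Computed annotation in the EDMX.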

00:23:10 We check what the computed field definition has given us at the service definition layer with cds compile srv/cat-service.cds --to sql, and note that the numberOfBooks property really only exists at the service definition layer, not at the data model layer.

00:24:05 Renaming the .cds files and references to match what’s in the blog post.

00:24:40 Compiling to EDMX, with cds compile srv/service.cds --to edmx, we can see that the numberOfBooks property has been automatically annotated as a Core.Computed property:

      <Annotations Target="CatalogService.Authors/numberOfBooks">
        <Annotation Term="Core.Computed" Bool="true"/>
      </Annotations>

Nice!

00:25:10 Starting to look at the service definition layer implementation, in the form of an event, by creating service.js alongside the service.cds file – the former will be taken automatically as an “implementation” for the latter.

Note: Please refer to the blog post for details and discussion of the service.js implementation.

00:35:40 Noting that localhost on the Crostini-hosted Linux VM is available to the parent host (Chrome OS) via a special hostname, <containername>.linux.test; in my case, with the default container name, that’s http://penguin.linux.test.

00:37:55 Looking briefly at the device I’m working on, which is the Asus Chromebox 3, running the excellent Chrome OS of course.

00:38:20 Getting together some sample data for the example, in the form of CSV files. Luckily I remember there is some CSV data that we can use in one of the tutorials from the SAP TechEd 2019 mission on CAP, Cloud SDK and S/4HANA extensions.

00:41:15 After adding the data, we run a cds deploy and restart the service, which now gives us some books and authors data. Lovely. And a quick test shows us that the computed field is indeed appearing and is filled with the correct data. Also lovely!

00:44:15 At this stage it’s just time to add some helpful information to this example, in the form of a README which explains what the example is and how to try it out.

00:49:05 Arranging the example directory contents so that it’s ready to be pushed and run by others. We test this briefly by blasting away the node_modules/ directory and starting again, with:

-> npm i && cds run --in-memory

All seems to work as expected (including a slight digression installing jq on this VM to format the JSON output nicely!).

00:55:55 So we add the README to a commit and push the changes too (also removing the .vscode/ directory that we don’t really want in this context). This push is of course to the origin, i.e. my fork of the main CAP Community repo, and we’re then advised by GitHub, appropriately, that “This branch is 1 commit ahead of sapmentors:master”.

00:57:48 So we create a Pull Request (PR) which GitHub allows us to do, specifically for requesting a pull of these changes in my fork to the upstream source repository, adding Volker Buzek as a reviewer.

All done!
