Event Information
Annotated links: Episode 9 of Hands-on SAP dev with qmacro
This is a searchable description of the content of a live stream recording, specifically “Episode 9 – Continuing with data retrieval for Northbreeze” in the “Hands-on SAP dev with qmacro” series. There are links directly to specific highlights in the video recording. For links to annotations of other episodes, please see the “Catch the replays” section of the series blog post.
This episode, titled “Continuing with data retrieval for Northbreeze”, was streamed live on Wed 06 Mar 2019 and is approximately one hour in length. The stream recording is available on YouTube.
Below is a brief synopsis, and links to specific highlights – use these links to jump directly to particular places of interest in the recording, based on ‘hh:mm:ss’ style timestamps.
Brief synopsis
In episode 8 last Friday we were constructing the data retrieval script using Axios and promises. In this episode we continue to build that out, and then convert the data thus retrieved to CSV for loading into the database with cds deploy.
Links to specific highlights
00:02:20: Starting off with some SAP Inside Track announcements! Maidenhead (Friday 15 March), Frankfurt (Saturday 30 March) and Oslo (Saturday 17 August).
00:03:00: Spotting an update to the VS Code extension for CDS – from 1.0.36 to 1.1.1.
00:04:50: Looking at the SAP Developers YouTube channel, to which you should definitely subscribe!
00:06:50: Looking at where we left off with the Northbreeze project, specifically at the model and service definitions that we set up before detouring to grab some seed data from the original Northwind service with our “grab” project.
00:08:20: Opening a tmux session and looking at the “grab” project details, in particular reminding ourselves how the skipToken mechanism works, and looking at where we’ve saved local copies of the entity data, to serve to ourselves as we develop the script.
00:10:30: Opening up the grab.js script and looking through where we’d got to, and also starting up our local HTTP server in another tmux pane to serve the local data to the script.
00:16:55: Starting up nodemon in yet another tmux pane, to monitor and rerun the script as we make modifications.
00:18:00: Inserting another entry in the dot chain to see how many entities there are for each type, thus:
.then(xs => xs.length)
This shows us that there’s something not quite right … of course, we’ve hardcoded “Products” in the baseurl constant, so we’ll never manage to get any Categories or Suppliers in this configuration.
We fix this by using placeholders in the constant instead:
http://localhost:8000/ENTITY-TOKEN
Also, even for the minimum number of pages of data, we’re still getting 40 entities. That’s because, for reasons unknown, I’d added 1 to the number of skip tokens like this:
range(entities[entity].tokens + 1)
I can’t for the life of me remember why I did that, and it’s not correct. So we remove that + 1.
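As a side note, here is a minimal sketch of the sort of range helper implied here (the actual implementation in grab.js may well differ):

const range = n => [...Array(n).keys()]   // sketch only: range(4) produces [0, 1, 2, 3]

Called with the token count directly, it yields one index per page to request, with no stray extra page.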
00:21:00: To simplify things now, we create a new function buildurl that takes an entity name and a token count and constructs a real URL to be used in the Axios calls. This function includes the value of the baseurl constant so it can be pure.
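A sketch of what such a buildurl function could look like (assumed for illustration, not verbatim from the episode; the shape of the local URLs is also an assumption), where ENTITY and TOKEN are the placeholders from the baseurl value above:

const buildurl = (entity, token) =>
  'http://localhost:8000/ENTITY-TOKEN'
    .replace('ENTITY', entity)
    .replace('TOKEN', token)

buildurl('Categories', 0)   // e.g. 'http://localhost:8000/Categories-0'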
00:21:50: The nice thing about the replace function in JS is that it’s well behaved, in that (a) it doesn’t mutate the source, and (b) what it produces is a new string, which we can then of course use to chain a further call to replace or whatever operation we want on that string.
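A tiny illustration of that chaining (generic, not the episode’s exact code):

const template = 'ENTITY-TOKEN'
const result = template.replace('ENTITY', 'Categories').replace('TOKEN', '0')
// result is 'Categories-0'; template is still 'ENTITY-TOKEN', untouched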
00:23:39: With those changes, we’ve now fixed what we had, and can see the correct numbers of entities being returned: 77 Products, 29 Suppliers and 8 Categories.
00:24:15: Looking at another package, json2csv, which will convert JSON to, you guessed it, CSV. It comes in the form of a command line utility as well as an API that can be used from within a JS script.
00:25:38: For now, to build out the next part, we’ll just focus on Categories, and for that we insert a filter into the dot chain thus:
.filter(x => x == 'Categories')
In looking at what we get as output, we notice that there’s a whole load of data for a picture property – some sort of encoded image, which we don’t want. And in digging deeper, we also notice that the properties we’ve defined in our entities (in CDS) are not the same as the properties we have in the JSON, so we’ll have to convert them (for example we have name in the entity definition but CategoryName in the Northwind source that has come through as JSON).
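To make that mismatch concrete, a Categories record from Northwind looks roughly like this (values abbreviated):

const sampleCategory = {
  CategoryID: 1,
  CategoryName: 'Beverages',    // our CDS entity calls this property 'name'
  Description: 'Soft drinks, coffees, teas, beers, and ales',
  Picture: '...'                // the large encoded image we don't want at all
}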
00:30:08: In the context of talking about Chris coming onto the next live stream episode to talk about functional programming, I point out that the filter we just added is a little ugly, in that it’s hardcoded and also only useful there. What we would like to do ideally (partly, admittedly, to explore some functional programming aspects) is to write something like:
.filter(onlyCategories)
because that’s easy to understand and also “solid state”.
00:30:50: To do this, we start by defining a new function is:
is = (val, x) => x == val
With this we can now say:
.filter(x => is('Categories', x))
which is a bit nicer but not much. But we’re on our way to a micro-nirvana.
00:32:55: Defining the is function like this instead:
is = val => x => x == val
we can now define an onlyCategories like this:
onlyCategories = is('Categories')
which is effectively the result of partially applying the call to the is function with the first argument it wants; and now we have a function onlyCategories which expects a single argument, the equivalent of the second argument to is.
We can then do what we want, which is:
.filter(onlyCategories)
Stare at that for a bit.
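Putting those pieces together as one condensed recap (a sketch, not the script verbatim):

const is = val => x => x == val                   // curried: takes val, returns a predicate
const onlyCategories = is('Categories')           // partially applied with the first argument
const names = ['Products', 'Suppliers', 'Categories']
names.filter(onlyCategories)                      // ['Categories']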
00:36:01: Installing json2csv and double-checking, with npm list --depth=1, what packages we have installed locally in this project (directory).
00:37:30: Looking at Example 3 in the json2csv package documentation, which shows how we can match up and rename properties during the conversion.
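In rough terms, that renaming looks like this with the json2csv API (a sketch based on the package documentation; the field list for Categories and the categoryData sample are assumptions for illustration):

const { Parser } = require('json2csv')

const fields = [
  { label: 'ID', value: 'CategoryID' },            // CSV column name : property in the source JSON
  { label: 'name', value: 'CategoryName' },
  { label: 'description', value: 'Description' }
]

const categoryData = [
  { CategoryID: 1, CategoryName: 'Beverages', Description: 'Soft drinks', Picture: '...' }
]

const csv = new Parser({ fields }).parse(categoryData)   // csv now holds just the renamed, reduced columns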
00:40:05: Starting to introduce the json2csv package into our script, looking in particular at the ES6 shorthand property construct in the example:
const opts = { fields }
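This is the ES6 shorthand for a property whose name matches the variable holding its value, in other words:

const fields = [ /* ... */ ]
const opts = { fields }          // exactly equivalent to: const opts = { fields: fields }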
00:42:00: A reference to a recording of a great talk by Brian Lonsdorf called “Hey Underscore, You’re Doing It Wrong!” which talks a lot about the importance of parameter order in function definitions, and how that affects how well (or not) functions can be used in a functional (point free) style.
00:44:33: Defining the fields that we want in the entities map we defined earlier, alongside the token information. We can then refer to this information in the options parameter in the call to json2csv.
The next execution of the script shows us that it’s doing what we want (in this case, creating CSV data with the new (and reduced) field set).
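So the entities map ends up looking something like this (a sketch; the token value is purely illustrative, and the Products and Suppliers entries are elided):

const entities = {
  Categories: {
    tokens: 1,                                              // illustrative value only
    fields: [ { label: 'ID', value: 'CategoryID' } /* ... */ ]
  }
  // ... Products and Suppliers defined along the same lines
}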
00:50:51: Removing the call to console.log in the dot chain to see if we get the CSV written to files, and in fact we do!
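The tail end of the chain presumably does something along these lines (the helper name and output filename are assumptions for illustration):

const fs = require('fs')

// write the CSV string for a given entity out to a file named after it
const writeCsv = (entity, csv) => fs.writeFileSync(`${entity}.csv`, csv)

writeCsv('Categories', 'ID,name,description\n1,Beverages,Soft drinks')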
00:51:42: So we bring those CSV files into a new csv directory within the db directory in our CAP-based Northbreeze project, remembering the CSV file naming convention: <namespace>-Entity.csv.
00:55:00: Installing the sqlite3 package, so we can deploy the data model and service definitions, and the CSV seed data, to a SQLite database like this:
cds deploy --to sqlite:northbreeze.db
00:56:10: In response to a question from Jluiizz, I explain that the keyboard I’m using is the Vortex Race 3 with Cherry MX Blue switches. I buy my keyboards from Candykeys.
00:57:45: We use the sqlite3 command line client to check if everything has worked … and it has!
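For reference, a quick check in the sqlite3 client might look like this (the table name assumes the namespace is northbreeze; adjust for the actual namespace in the model):

sqlite3 northbreeze.db
sqlite> .tables
sqlite> select count(*) from northbreeze_Categories;

The count for Categories should come back as 8, matching the seed data we loaded.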