In my introduction to this blog series I laid out my fairly specific objectives. I’ll restate them here for the stragglers … in this series we’re going to build a lightweight proof-of-concept dashboard that displays SAP technical performance data in an interesting and visually stimulating way. I have decided to use the following structure:

  1. Picking the Visualization (you are here) – introducing D3.js and the concept of data joins;
  2. Introduction to the SAPUI5 Shell ‘bootstrap’ environment and how to integrate components;
  3. Description of the SAP SMON functions and how to create a new service in the SAP ICF;
  4. Bringing it all together.

Picking the right visualization tools

So how do you pick the right visualization? This is a highly subjective topic, so please don’t be offended by my personal choices. I am a respectable SAP Analytics consultant by day, but the darkness hides a closet geek. I blame my parents and a certain Clive Sinclair (and his ZX81 back in 1981) for my Hyde-esque alter-ego. My Dr. Jekyll persona absolutely gets the argument for application solutions. After all, I’ve built a multi-decade career on the back of enterprise applications. It’s literally my bread and butter.

However from time to time the ‘best-of-breed’ argument haunts me, like the ghosts of Christmas Past and Christmas Future that plagued Scrooge! So I dabble, on occasion, ‘to feed the beast’.

If that’s too dark for you, there is also a lighter, professional aspect to this dabbling: if I know how to build things from the ground up, I am better equipped to evaluate the value proposition of each application solution I consult on, and can therefore better advise clients on the relative benefits of the specific choices they face.

In addition, IMHO, there are a few use-cases that warrant the effort of building ‘from the ground up’:

  • One of these is the kind of highly-specific visualizations that the likes of Mike Bostock and his team produce at the NY Times. And therefore, by extension, these techniques are relevant to the rise of ‘infographics’ as a new branch on the visualization tree. This is top-end stuff, the place you go to very occasionally, but when you go there you want to be unconstrained.
  • The other use-case I foresee sits at the opposite end of the analytics spectrum: simple, small-volume ‘info-vignettes’ that have a short shelf life and don’t warrant a large investment of development effort, time or (most importantly for this series) infrastructure.

So Why D3?

Looking around I found a few ‘visualization’ libraries. Most purported to deliver a charting engine – nice, but that functionality is already part of the various application solutions I advise on every day. What I wanted was a fully configurable library that was open-ended in terms of functionality (I wanted to be constrained by imagination only), yet still offered some abstraction from the core browser functions. As they say in the classics: Voilà! D3.

Also take a look at the jaw-dropping visualizations at the New York Times. Beware! This is serious data p0rn.

How does D3 work?

Many have tried to answer this question and, to be fair, most succeed – so I won’t waste your time with my own detailed explanation. Ask Dr. Google, and look for anything by Scott Murray, Sebastian Gutierrez, and the good Mr Bostock himself.

But I do think it’s worth explaining the basic principles so that you are at least aware of the core D3 functions we will shortly enable.

Let’s be clear: D3 is not for sissies! You need a deep knowledge of JavaScript, the Document Object Model and Cascading Style Sheets to win in this area … however, if you are a noob, don’t be discouraged … there are many ‘giants’ out there and I’ve found that it is reasonably easy to clamber onto their shoulders. Have a look at the D3 example galleries for literally hundreds of examples and a healthy community of committed developers. There’s also a vibrant Google group for D3.

Basically, D3 (Data-Driven Documents) is an abstraction layer that allows you to interact with the Document Object Model through your data. Yes, it ‘does what it says on the tin’: the data drives the document.

What this means is that we are no longer concerned with writing code at length to test for existence in our data (the library does all that heavy lifting), nor with the nuances of different browser restrictions. Based upon the arrival of, changes to, or even demise of data, we can interact with various elements within our document (even those that do not yet exist …!).
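To make that idea concrete, here is a pure-JavaScript sketch of the data-join mental model. This is not D3’s actual internals – it is a simplified, index-based partitioning (ignoring D3’s key functions) that shows the three selections D3 computes for you: elements that already have data (update), data with no element yet (enter), and elements whose data has gone (exit).

```javascript
// A sketch of the "data join" idea: given the elements already on the
// page and a new data array, partition the work into three selections.
function dataJoin(existingElements, data) {
  var join = { update: [], enter: [], exit: [] };
  var shared = Math.min(existingElements.length, data.length);
  // Elements that already exist and receive a datum: the "update" selection.
  for (var i = 0; i < shared; i++) {
    join.update.push({ element: existingElements[i], datum: data[i] });
  }
  // Data with no element yet: the "enter" selection (D3 creates these).
  for (var j = shared; j < data.length; j++) {
    join.enter.push({ datum: data[j] });
  }
  // Elements whose datum has gone: the "exit" selection (typically removed).
  for (var k = shared; k < existingElements.length; k++) {
    join.exit.push({ element: existingElements[k] });
  }
  return join;
}

// Two <p> elements on the page, five numbers in the dataset:
var result = dataJoin(["p1", "p2"], [5, 10, 15, 20, 25]);
// result.update has 2 entries, result.enter has 3, result.exit has 0.
```

In D3 itself this partitioning happens when you call .data() on a selection; the enter and exit selections are then available via .enter() and .exit().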

Freed from the shackles of looping through the data and hand-coding interesting transitions, we can spend our time imagining the visualization technique that best fits our data. Consider the Motion Chart examples from Hans Rosling … well, we can easily do that with D3:

[Image: Motion Chart example]

Here is a simple example of D3 to get you started:

var dataset = [ 5, 10, 15, 20, 25 ];

d3.select("body").selectAll("p")
    .data(dataset)
    .enter()
    .append("p")
    .text(function(d) { return d; });

This is JavaScript: a call to the D3.js library. When it runs, the D3 logic selects the ‘body’ element of the DOM and binds the dataset to every ‘p’ element within it; if there are insufficient ‘p’ elements to match the data it creates new ‘p’ elements (that’s the kicker!) and assigns each of them, in turn, the next integer in the dataset array.

It produces the following output – five new paragraphs, one per value:

5

10

15

20

25
We can, and will, do better. Have a look below to get a preview of the example we will be using to present our gauges:

var gauges = [];
var timestamp;
var free_mem_pc;
var cpu_idle;
var lan_packets;

function createGauge(name, label, min, max)
{
    var config =
    {
        size: 120,
        label: label,
        min: undefined != min ? min : 0,
        max: undefined != max ? max : 100,
        minorTicks: 5
    };

    var range = config.max - config.min;
    config.yellowZones = [{ from: config.min + range*0.75, to: config.min + range*0.9 }];
    config.redZones = [{ from: config.min + range*0.9, to: config.max }];

    gauges[name] = new Gauge(name + "GaugeContainer", config);
}

function createGauges()
{
    createGauge("memory", "Memory");
    createGauge("cpu", "CPU");
    createGauge("network", "Network");
}

function getData() {
    // the URL of our data service goes here (we build it later in the series)
    d3.text("", function(error, result) {
        if (error) return console.warn(error);
        var obj = JSON.parse(result);
        timestamp = obj.Timestamp;
        free_mem_pc = obj.free_mem_pc;
        cpu_idle = obj.idle_total;
        lan_packets = obj.packets;
        updateGauges();   // refresh the gauges once new values arrive
    });
}

function updateGauges()
{
    gauges["memory"].redraw(100 - parseInt(free_mem_pc));
    gauges["cpu"].redraw(100 - parseInt(cpu_idle));
    d3.select("#timestamp")   // target element for the refresh label
        .text("last refreshed " + timestamp);
}

function initialize()
{
    createGauges();
    setInterval(getData, 5000);
}
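One part of the gauge code worth dwelling on is the zone arithmetic in createGauge: the yellow warning band covers 75–90% of the gauge’s range and the red band the top 10%, whatever min and max happen to be. Here it is pulled out into a standalone function (a sketch for illustration; the real logic lives inside createGauge above) so you can check the numbers for yourself:

```javascript
// The yellow/red zone arithmetic from createGauge, in isolation.
// Defaults mirror the gauge config: min 0, max 100.
function gaugeZones(min, max) {
  min = undefined != min ? min : 0;
  max = undefined != max ? max : 100;
  var range = max - min;
  return {
    // warning band: 75% to 90% of the range above min
    yellowZones: [{ from: min + range * 0.75, to: min + range * 0.9 }],
    // critical band: top 10% of the range
    redZones:    [{ from: min + range * 0.9,  to: max }]
  };
}

var zones = gaugeZones();   // defaults: 0..100
// zones.yellowZones[0] is { from: 75, to: 90 }
// zones.redZones[0]    is { from: 90, to: 100 }
```

Because the bands are computed from the range rather than hard-coded, a gauge created with, say, min 50 and max 150 turns red from 140 upwards without any extra code.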
It’s a big leap I know, but have a look at the links above and soak up the work of the good guys and gals who have sacrificed their own time so you and I don’t have to.

How Do I Start?

Good question! There are many commentators on the best way to choose a visualization (consult the good doctor …). My personal approach is to analyse the subject of the dataset on the one hand and, on the other, think about the story I want to tell. The guys at the NY Times spend more than just a few hours thinking this through. My requirements are usually not that unique or specialized, and they can usually be satisfied by one of the published examples.

I literally scan the galleries and pick out an example that fits my use case. I copy and paste it into my IDE, plug in my data (this is usually the stage that takes the longest) and sit back to check it out.

The reason data preparation takes up the majority of the development time is that your data may not easily fit the current interface to the visualization of your choosing. So you’ll have to adapt either the data or the interface – which you choose will be determined by your proficiency in JavaScript!
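To illustrate the ‘adapt the data’ option, here is a sketch of the kind of reshaping step I mean. It takes a raw payload shaped like the one getData parses above (fields Timestamp, free_mem_pc, idle_total) and flattens it into the { label, value } records that many published chart examples expect; the utilisation maths mirrors updateGauges, i.e. 100 minus the free/idle percentage. The toGaugeRecords name and the record shape are my own illustration, not part of any library:

```javascript
// Reshape a raw monitoring payload into flat { label, value } records,
// the sort of structure a published gauge/bar-chart example might bind to.
function toGaugeRecords(payload) {
  return [
    // used memory = 100 minus the free-memory percentage
    { label: "Memory", value: 100 - parseInt(payload.free_mem_pc, 10) },
    // CPU load = 100 minus the idle percentage
    { label: "CPU",    value: 100 - parseInt(payload.idle_total, 10) }
  ];
}

var sample = { Timestamp: "20140101 120000", free_mem_pc: "42", idle_total: "85" };
var records = toGaugeRecords(sample);
// records is [ { label: "Memory", value: 58 }, { label: "CPU", value: 15 } ]
```

A small adapter function like this is often easier than rewriting the example’s internals: the visualization stays untouched and only the data is massaged into shape.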

When we get to the SAP ICF we’ll see that this is actually less of an issue when you have full control over the data structure through an ABAP handler interface, assuming you know a good ABAPer or can ‘code SAP’ yourself.

What IDE to Choose?

That’s simple: I use Notepad++. There are others, including Eclipse, which has certain benefits from a SAPUI5 point of view, but that’s a later story … covered in the next instalment (Charles Dickens eat your heart out!)

In the next instalment I will explore the next component of this build – SAPUI5.


