CRM and CX Blogs by SAP
Stay up-to-date on the latest developments and product news about intelligent customer experience and CRM technologies through blog posts from SAP experts.
svenhaiges
Product and Topic Expert
Both AI and Edge Computing are hot topics on their own, but the combination of the two is even more fun – and it enables completely new use cases and customer experiences, too.

There are a couple of reasons why I am writing this blog post. First and foremost, AI on the Edge is a topic that our SAP CX Labs team has identified as trending, and internally we're also working on prototypes that touch these technologies in one way or another. Secondly, I am honestly super happy to have found a nice way to get my hands dirty with AI. I think this could work for you, too, which is why I am sharing my experience.

Enough good reasons? I hope so.

AI and Edge – what does it really mean?


Before I combine things and reason about them, some quick definitions first.

AI is a huge topic and a big buzzword; more specifically, we should probably be talking about machine learning (ML) – especially with the example I'll use later, which is about machine vision, e.g. image classification or object recognition (for starters, object recognition is like image classification plus the XY boundaries of where the detected objects are). As AI is the more generic term for all this ‘smartness’, I'll stick with AI for now. Just a few years back, you would have needed a very powerful computer and special GPUs (Graphics Processing Units) to use AI. Over the last few years, many cloud platforms such as SAP Leonardo Machine Learning have made a lot of these features easily accessible via cloud APIs. Did I mention that our very own SAP CX Labs team member Lars Gregori has written an awesome book about SAP Leonardo Machine Learning?
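
To make the classification vs. object recognition distinction a bit more concrete, here is a tiny, library-agnostic sketch of what the two kinds of results typically look like – the labels, scores and boxes are made up:

```python
# Illustrative only: the shape of the results is what distinguishes the two tasks
# (labels, scores and boxes below are made up).

# Image classification: one ranked list of labels for the whole frame.
classification_result = [
    {"label": "espresso", "score": 0.91},
    {"label": "cappuccino", "score": 0.06},
]

# Object recognition / detection: a label, a score and an XY bounding box per object.
detection_result = [
    {"label": "person", "score": 0.88,
     "box": {"xmin": 0.12, "ymin": 0.05, "xmax": 0.47, "ymax": 0.93}},
    {"label": "shopping cart", "score": 0.76,
     "box": {"xmin": 0.55, "ymin": 0.40, "xmax": 0.98, "ymax": 0.95}},
]
```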

Edge Computing is also a big trending topic, but I'd argue not quite as new and fancy. It's about processing data as close as possible to where it originates (sensors, cameras, etc.). Speaking from my past labs experience, we've used a lot of Raspberry Pis, arguably the makers' poster child of an edge device. A Raspberry Pi is a little Linux computer with lots of I/O, e.g. it's often trivial to connect sensors or cameras. The recent release of the Raspberry Pi 4 has increased the processing power of this little device by roughly a factor of four – these devices are now about as powerful as cheap laptops.

Without special help, AI on the Edge is painfully sloooow…


So far, AI has pretty much been happening on powerful machines, or in the cloud (on powerful machines) accessed via APIs. AI on edge devices like a Pi meant waiting a few seconds for an image to be classified. The CPU of a typical Pi, even the latest one, is not made for AI workloads. This is changing as special AI edge devices come to the market. Two examples worth mentioning (but not the only ones available today) are the Nvidia Jetson Nano (which adds a powerful GPU) and the Google Coral line of products (which adds TPUs, tensor processing units). While GPUs speed up AI tasks, they are also good for other areas such as image processing and in general have a pretty wide range of applications. TPUs specifically target AI workloads and speed up the processing of TensorFlow-based AI projects.

Feeding a live image stream from a Raspberry Pi camera into a MobileNet v1 image classification task on the latest Raspberry Pi 4 yields about 1 result per second (in my own testing). Switching to the Edge-TPU-compiled TensorFlow model and adding the TPU via the new USB 3 ports, e.g. using the Coral USB Accelerator, yields about 30 results per second. So that's roughly 30 times more results – 30 times faster. And it's these speed improvements that enable new use cases.
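
If you want to reproduce numbers like these, here is a minimal sketch of what the TPU-accelerated path can look like in Python with the tflite_runtime package and the Edge TPU delegate – the file names are placeholders, and this is just one way to wire it up, not the exact code I used:

```python
# Minimal sketch: classify a single image with a MobileNet TFLite model on the Coral Edge TPU.
# Assumes the Edge TPU runtime (libedgetpu) and the tflite_runtime package are installed;
# the model and labels file names are placeholders for an Edge-TPU-compiled MobileNet.
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter, load_delegate

MODEL = "mobilenet_v1_1.0_224_quant_edgetpu.tflite"  # placeholder: Edge-TPU-compiled model
LABELS = "imagenet_labels.txt"                       # placeholder: one label per output index

# Load the model and hand the heavy lifting to the Edge TPU via its delegate.
interpreter = Interpreter(
    model_path=MODEL,
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]
_, height, width, _ = input_details["shape"]

# Prepare one frame; on the Pi this would come from the camera stream instead of a file.
image = Image.open("test.jpg").convert("RGB").resize((int(width), int(height)))
interpreter.set_tensor(input_details["index"],
                       np.expand_dims(np.asarray(image, dtype=np.uint8), 0))
interpreter.invoke()

# The quantized MobileNet outputs uint8 scores; pick the best class and print its label.
scores = np.squeeze(interpreter.get_tensor(output_details["index"]))
labels = [line.strip() for line in open(LABELS)]
top = int(np.argmax(scores))
print(labels[top], scores[top] / 255.0)  # rough 0-1 score for this quantized model
```

Swapping in the regular (non-Edge-TPU) .tflite model and dropping the experimental_delegates argument gives you the CPU-only baseline, which is a nice way to see the speed difference for yourself.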

Let’s shake it up – but wait, why exactly is the combination of AI and Edge Computing so relevant?


I can think of three main reasons, but maybe you can find more: Privacy, Offline, Realtime. If you have these requirements, then AI on edge devices is for you:

  • Privacy: processing sensitive data such as image data (or your heartbeat) directly on the edge means that this sensitive stream of data never has to be transmitted, hence there is no cloud to break into and steal it from. If something does have to be communicated to the cloud, then only the results and not the raw data stream need to be transmitted, saving a ton of bandwidth. It could also allow you to switch from a powerful but power-hungry communication method such as LTE or future 5G to low-power RF communication (think LoRaWAN, for example).

  • Offline: yeah, a no-brainer, but if AI services were only accessible on powerful cloud platforms, then offline use cases would not be possible. AI-enabled edge devices enable offline and – maybe a good addition – mobile use cases. As an example, let's imagine a drone in an industrial setting detecting certain objects, picking them up and bringing them to a central place where temporary internet access (maybe Wi-Fi) exists to transmit the outcome of the last drone flight.

  • Realtime: even if your edge device is online and AI cloud services are available to you, the network round trip adds to the time it takes to process the data. If you need to decide quickly, then offloading this decision to the edge is the solution. I think a good example is autonomous delivery. Autonomously driving vehicles that deliver goods to customers will need to stop immediately if a person suddenly steps into their path. Every millisecond counts if a person's life is at stake.


I would also like to quickly note that all of the above can affect the customer experience. Think of offline and real-time shopping tools that help blind people decipher price tags or products. Or what about a follow-me-style shopping cart that decides in which direction to move based on an object recognition model running on it in real time? What about a store analytics system that tracks people arriving in the store, but with privacy in mind and without transmitting the data to the cloud (just the numbers, no images)? Or what about a product recommendation system, similar to our smart mirror, that store owners can easily retrain and update themselves?


Point made: applicable to all of us in Customer Experience. Undeniably.

As I've written above, there are surely more good reasons for AI on the edge – please let me know via the comments on this post.

Getting started with Edge AI – this worked for me.


I believe in a playful and creative approach to learning, and this blog post is part of that process, by the way (the sharing part). As a maker, I am also attracted to physical objects. So when I first read about Mike Tyka's Teachable Machine implementation, I was on fire.

The original teachable machine is a web-based demonstration of the capabilities of TensorFlow.js. More specifically, it shows how transfer learning can be used to “teach” a machine to detect multiple objects that are presented via the webcam. It uses a JavaScript implementation of the MobileNet model for TensorFlow and, interestingly, it does not require training your own model: instead, it takes the output of one of the network's layers (an embedding of the image) and compares it to the classes/labels that you want to detect. There's a great article that explains in detail how transfer learning with the teachable machine works.
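
To give a feeling for the idea, here is a small conceptual sketch in Python – this is not the code of the teachable machine itself, just an illustration of “embedding plus nearest neighbor”; the embed() function is a hypothetical stand-in for running MobileNet without its classifier head:

```python
# Conceptual sketch only (not the teachable machine's actual code): the trick is to
# skip training a new network and instead compare embeddings with nearest neighbor.
import numpy as np

def embed(frame):
    """Placeholder: run MobileNet without its classifier head and return the feature vector."""
    raise NotImplementedError  # stand-in for a real feature extractor

class TeachableClassifier:
    def __init__(self):
        # (embedding, class_id) pairs collected while a "teach" button is pressed
        self.examples = []

    def learn(self, frame, class_id):
        """Button press: store the current frame's embedding under the chosen class."""
        self.examples.append((embed(frame), class_id))

    def predict(self, frame, k=5):
        """Classify a new frame by majority vote among its k nearest stored embeddings."""
        if not self.examples:
            return None
        query = embed(frame)
        nearest = sorted(self.examples, key=lambda ex: np.linalg.norm(ex[0] - query))[:k]
        votes = [class_id for _, class_id in nearest]
        return max(set(votes), key=votes.count)
```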

Mike Tyka has taken this web-based teachable machine and ported it to a maker-style version running on a Raspberry Pi in combination with the Coral USB Accelerator to speed it up. He's written an awesome description of his teachable machine project here.

This means a lasercut base plate, buttons, screws, connector cables and all this goodness. Here are some pictures of the version I created at home, right from the start of the build. As a small Sven-specific change, I've lasered a custom felt base and added the URL of the labs blog – have a look:


And as I used a Raspberry Pi 4, I got to discover a rather odd “Segmentation fault” when I started the teachable machine from the Python 3 command line. It turned out that I needed to connect the Pi to an HDMI display to make it work – running it headless and connecting via SSH causes this issue (but this might soon be fixed, as Mike might update the RPi 4 image). Here's also a short video showing how the teachable machine is used: first the machine needs to be trained by holding the object it should detect close to the camera and then pressing one of the buttons; later it will recognize these objects and the corresponding LED will light up – transfer learning using MobileNet – et voilà!

With this new knowledge, I was curious to find out whether I would be able to run my own custom TensorFlow model. I uploaded a model to the Pi which I had trained for a recent design thinking workshop. It is able to label four different drinks: Mai Tai, Cuba Libre, a kind of Martini and a Mojito-style drink.

It works – I had to fiddle a tiny bit with the labels file (getting the correct order and tab-based spacing of the number/label combinations; see the sketch below the video) – but otherwise it was as easy as specifying a different model file for the raspicam TensorFlow example. Here's a quick video of it (note: the cam streams the ceiling, which apparently looks closest to a martini 🙂).

https://youtu.be/eR1KhsRIklA
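
In case you want to reproduce the labels tweak, here is roughly what my labels file looked like and how such a file is typically parsed – the file name is a placeholder, and the exact parsing in the raspicam example may differ slightly:

```python
# Sketch of the labels file layout that worked for me (file name is made up, and the
# exact parsing in the raspicam example may differ): one class per line, with the
# numeric index and the label separated by a tab, in the order of the model's outputs.
#
# drinks_labels.txt:
# 0<TAB>mai tai
# 1<TAB>cuba libre
# 2<TAB>martini
# 3<TAB>mojito

def load_labels(path="drinks_labels.txt"):
    labels = {}
    with open(path) as f:
        for line in f:
            index, label = line.strip().split("\t", 1)
            labels[int(index)] = label
    return labels
```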

All right, this was a longer blog post and I hope you enjoyed it. Maybe this intro to practical AI works for you, too – let me know.