TomVanDoo

When I started my career in SAP more than 10 years ago, the first thing I said when confronted with the infamous grey screens was:


“Nope”.


Ever since, I have been working on improving the user experience of these screens: initially by using the screen controls to their full extent (you know, ALV grids, images, splitter containers and the like), but later on also by using the HTML viewer in full-screen mode and having the ABAP generate HTML and react to events from the DOM.

Back then, people said I was crazy and shouldn’t invest time in those screens: SAP is about functionality, not user experience, and HTML will never provide a better user interface than the GUI.

Well… I’d say history has proved them wrong.

UX <> Functionality?

But then again, there’s one point in particular that really bugged me about that mindset. Since when is functionality not an integral part of user experience? If I turn the steering wheel of my car, I expect the car to turn. If it doesn’t and I run into a brick wall… that’s some pretty nasty user experience, right?

That point lingered, and when I was asked back in May to prepare a demo and a talk around user experience for an event, I decided not to just look at the screen, but at the way we interact with our IT systems in general. It would’ve been easy to show a couple of standard Fiori apps, a Personified GUI screen and a mobile app, but I figured: “Go big, or go home”.

BobbyTables

That’s how the idea of the UX demo table started. My colleague Leon had once built a setup using a projector, a webcam and some nifty image processing to create a touch-sensitive table that could detect when an object moved and display information around it. We decided to take that idea a step further.

New technologies and innovations must first be applied in a sandboxed environment, so that’s what we did: we built a sandbox.

The projector beams a Fiori application onto the surface of the sandbox. The Fiori application runs on the HANA Cloud Platform and is affectionately called “BobbyTables”. This makes it easier to consume it from a variety of demo devices and to collaborate on the development.

5 stages of User Experience

The application features the 5 stages of user experience.

  • Phase 0: Bad user experience. Obviously, I did not include this phase in the demo, as most attendees were already quite familiar with the SAP GUI.
  • Phase 1: Visual improvements (the screen, in other words; the phase most people consider to be all that user experience is about).
  • Phase 2: Intuitive input: what, no keyboard and mouse?
  • Phase 3: Alternative feedback: what, no screen?
  • Phase 4: What, no user?

The visual improvements are obvious: we’re running a custom Fiori application from the HANA Cloud Platform.

Touch-sensitive wooden panel...?

The interaction, however, is very different. Using a Kinect, we can detect hand movements and create a sensation similar to a touch screen. Interaction can also be done on an actual touchscreen, such as a mobile device: given the UI5 technology behind the demo, we can embed mobile features.

In order to use the Kinect as a mouse pointer, I had to revert to C# programming, which was annoying because I had never programmed in C# until about four months ago. I’ve published the solution via OneDrive:

https://netorg689932-my.sharepoint.com/personal/tom_vandoorslaer_fiddle_be/_layouts/15/guestaccess.a...

In the solution, I use Fleck for the WebSocket server (yes, I was working on an older laptop which did not have embedded support for WebSockets). The Kinect I’m using is the Xbox 360 version, so SDK v1.8 is required.
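
To make that concrete, here is a minimal sketch of what the browser side of such a bridge can look like: the C# program tracks the hand and pushes coordinates over the Fleck WebSocket server, and a small script in the Fiori app turns them into clicks. The port, the message format and the “push” flag are assumptions made for this sketch, not the exact protocol of the published solution.

var socket = new WebSocket("ws://localhost:8181"); // port of the C# Kinect bridge (assumed for this sketch)

socket.onmessage = function (oEvent) {
    // the bridge is assumed to send normalized hand coordinates, e.g. { "x": 0.42, "y": 0.77, "push": true }
    var oHand = JSON.parse(oEvent.data);
    var iX = Math.round(oHand.x * window.innerWidth);
    var iY = Math.round(oHand.y * window.innerHeight);
    var oTarget = document.elementFromPoint(iX, iY);

    if (oTarget && oHand.push) {
        // treat a forward "push" gesture as a tap on whatever control sits under the hand
        oTarget.dispatchEvent(new MouseEvent("click", { bubbles: true, clientX: iX, clientY: iY }));
    }
};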

There is a company called Ubi Interactive (http://www.ubi-interactive.com/) that partnered with Microsoft to create this kind of “touch control” with the Kinect as well, and they charge a nice sum of money for it. My source code is free :wink:

Toddler-level interaction

You’ll also notice that the Kinect project contains a SandboxControl module. That’s there for a sub-demo in which I use the sandbox part of the table to interact with the application. Using the Kinect, the application scans the surface of the sand, measures the height of every pixel and, based on that, creates a color map, which is in turn projected onto the sand surface, creating the illusion of a 3D altitude map.
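
The color map itself is conceptually simple. The sketch below is purely illustrative (the depth range, thresholds and color scale are made up for the example): given an array of depth samples from the Kinect, it paints a canvas pixel by pixel so the projector can beam an altitude map back onto the sand.

function drawAltitudeMap(oCanvas, aDepth, iWidth, iHeight) {
    // aDepth: one Kinect depth sample (in millimetres) per projected pixel
    var oCtx = oCanvas.getContext("2d");
    var oImage = oCtx.createImageData(iWidth, iHeight);

    for (var i = 0; i < aDepth.length; i++) {
        // closer to the sensor means higher sand; clamp to a 0..1 "altitude" (the range is an assumption)
        var fLevel = Math.min(Math.max((2000 - aDepth[i]) / 500, 0), 1);
        oImage.data[i * 4]     = Math.round(255 * fLevel);       // red grows with altitude
        oImage.data[i * 4 + 1] = 128;                            // fixed green component
        oImage.data[i * 4 + 2] = Math.round(255 * (1 - fLevel)); // blue marks the valleys
        oImage.data[i * 4 + 3] = 255;                            // fully opaque
    }
    oCtx.putImageData(oImage, 0, 0); // the projector beams this canvas back onto the sand
}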

This is a very intuitive way in which you can interact with an application, and it can radically change certain processes.

The governmental department responsible for inspecting our roads uses a similar approach. With a camera mounted on the bumper of the inspection vehicle, they scan the road surface while driving and can automatically identify damage, map it to the geolocation, calculate the size of the pothole and report it back into the system along with a picture. This is actually already an example of “no-user experience”, since the employee isn’t consciously working with the IT application.

Robotics?

In the third phase, we also want to change the output.

We don’t always necessarily need a screen.

Suppose we want to direct a robotic forklift with our input. Do we really need to see the result on a screen, or do we want to see the forklift move physically?

By combining this with a different way of creating input, we can let the computer process the commands and detect collisions itself, and then send the resulting command to the machine. No screen has to sit in between.

We demonstrated this by using a Leap Motion in our Fiori application to send IR commands to a LEGO Technic front loader. As we move our hand over the Leap Motion, it detects the intention of the operator and translates it into the proper movement signals for the machine. This could lead to much safer machine operations: the operator can keep a safe distance, has more intuitive input, and the computer calculates possible collisions and blocks operations that could lead to balance disruptions, collisions and the like.

(obviously, we used low-cost LEGO Technic instead of an actual bulldozer)
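
For the browser side, the open-source leapjs library gives you the hand frames directly. The sketch below shows the general idea only: the palm position is reduced to a simple drive command, a guard lets the computer veto unsafe moves, and the result goes to whatever bridges to the IR transmitter. The thresholds, isSafe and sendToIrBridge are placeholders I made up for the example; they are not the demo code.

// requires the leap.js script (https://github.com/leapmotion/leapjs) and a connected Leap Motion
function isSafe(oCommand) { return oCommand.drive !== undefined; }          // stand-in for the collision check
function sendToIrBridge(oCommand) { console.log("IR command", oCommand); }  // stand-in for the IR transmitter

Leap.loop(function (frame) {
    if (frame.hands.length === 0) {
        return; // no operator hand above the sensor: do nothing
    }
    var aPalm = frame.hands[0].palmPosition; // [x, y, z] in millimetres

    var oCommand = {
        drive: aPalm[2] < -50 ? "forward" : aPalm[2] > 50 ? "reverse" : "stop",
        lift:  aPalm[1] > 250 ? "raise"   : aPalm[1] < 150 ? "lower"  : "hold"
    };

    if (isSafe(oCommand)) {       // the computer gets to veto unsafe moves
        sendToIrBridge(oCommand); // forward the command to the machine, no screen in between
    }
});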

We also investigated whether we could use the Leap Motion as an embedded module in Fiori to control the mouse. It’s possible, it even works, but the device has a limited interaction range, which proved too small for our full-scale table.

The LeapFiori module is a separate JS class which you can include in your own Fiori projects.

https://netorg689932-my.sharepoint.com/personal/tom_vandoorslaer_fiddle_be/_layouts/15/guestaccess.a...

Mind you, you do need to have the Leap Motion drivers installed on your machine and an active Leap Motion device connected. I can’t help you with that.

Activating Leap Motion control is simple.

In your app.controller.js, require the LeapFiori module:

jQuery.sap.require("be.fiddle.uxtable.util.LeapFiori");

And then, in the same controller, in onAfterRendering:


LeapFiori.drawCanvas(this.getView().getDomRef() );

There, done. Now you can wave at yourself.
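
Putting the two snippets together, a minimal controller can look like the sketch below. The controller name is just an example, and the assumption is that the module exposes itself as LeapFiori once required, as in the snippet above.

jQuery.sap.require("be.fiddle.uxtable.util.LeapFiori");

sap.ui.controller("be.fiddle.uxtable.controller.App", {
    onAfterRendering: function () {
        // overlay the LeapFiori cursor canvas on top of this view's DOM
        LeapFiori.drawCanvas(this.getView().getDomRef());
    }
});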

No Screen, no user

In the last stage, we don’t want the user to interact with the IT application at all anymore. We want people, whoever they are, to do what they are good at, and have sensors monitor whatever they do. So we showed a demo in which an OBD-II dongle registers all the sensor data from the car and uploads it into HCP.

From there, a dashboard shows the trips, routes and speed. Much more information is available, but not shown yet:

  • Rain sensor data
  • Fuel consumption
  • Engine problems
  • Tire pressure
  • Temperature
  • Turbo pressure

The source code, however, is not for publication (mainly because it’s 90% spoofed).
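
If you want to experiment with something similar yourself, the essence is just a device pushing readings to a service on the cloud platform. The sketch below is purely illustrative: the endpoint path and field names are invented for the example and have nothing to do with the (spoofed) demo code.

function uploadReading(oReading) {
    // one OBD-II sample, pushed to a hypothetical REST endpoint on the platform
    return fetch("/telemetry/readings", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
            vehicleId: oReading.vehicleId,
            timestamp: new Date().toISOString(),
            speedKmh: oReading.speedKmh,         // vehicle speed
            fuelRateLph: oReading.fuelRateLph,   // fuel consumption
            coolantTempC: oReading.coolantTempC, // engine temperature
            lat: oReading.lat,                   // geolocation for the routes on the dashboard
            lon: oReading.lon
        })
    });
}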

So you see: the best user experience is no user experience.

PS: There is a video currently being edited, which will be released soon. I will add the link here later.
