By now you may have had a chance to try out virtual reality (VR) for yourself, perhaps in one of those pop-up demo booths that have started appearing in some shopping centres.  There are some great-looking VR demos by SAP, and much has been written on the topic already; however, all that hardware can make VR look esoteric and expensive.

In this blog I want to show how accessible VR is to both developers and consumers.  The barrier to entry for VR is extremely low, and the potential for new applications is high.  As to just how high, witness Facebook’s acquisition of Oculus, one of the leading VR companies, and Google’s increasing commitment to VR with the recently announced Google Daydream, not to mention established players like Sony, Valve and HTC.

To demonstrate how easy it is to get “entry level” VR working, we’re going to visualise phone sensor data collected using Pat Colucci‘s excellent SensorPhone app in a simple VR app called “SensorPhone VR Visualisation”, developed using Unity (more on this later).  To get the VR experience, we will run this app on our phone and place the phone into a Google Cardboard VR headset (more on this later too).  The end-to-end flow looks like this:


The diagram above shows sensor data being collected on the left-hand side, stored in the HANA Cloud Platform (HCP) Internet of Things (IoT) service, and exposed using an OData service.  The VR phone app on the right-hand side needs your HCP login details and the URL of the OData service.  It then fetches the sensor data and transforms it so it can be rendered in a VR world.
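As a rough sketch of the right-hand side of that flow, fetching the stored messages amounts to an authenticated OData query.  The Python below is illustrative only; the service URL and table name are placeholders, and the real app does this inside Unity:

```python
import base64
import json
import urllib.request

def build_query_url(service_url, top=400):
    """Append OData query options: JSON format and a row limit
    (the app does a single fetch of up to 400 points)."""
    return "%s?$format=json&$top=%d" % (service_url, top)

def basic_auth_header(user, password):
    """HCP accepts standard HTTP basic authentication,
    using your HCP user ID and password."""
    token = base64.b64encode(("%s:%s" % (user, password)).encode("utf-8")).decode("ascii")
    return {"Authorization": "Basic " + token}

def fetch_sensor_rows(service_url, user, password, top=400):
    """Fetch stored sensor messages from the IoT OData service."""
    request = urllib.request.Request(build_query_url(service_url, top),
                                     headers=basic_auth_header(user, password))
    with urllib.request.urlopen(request) as response:
        # OData v2 JSON responses wrap the result rows in d.results
        return json.load(response)["d"]["results"]

# Placeholder URL; yours comes from the MMS Cockpit (see the steps later)
# rows = fetch_sensor_rows("https://iotmms-example.hana.ondemand.com/...", "P000001", "secret")
```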

But Why?

So why might we want to do this?  Well, first be clear that VR does not just mean a 3D viewer.  VR means a 3D world plus your presence in that world.  Objects seem to physically exist in front of you; they appear tangible, lit and solid.  You can look around like you would in real life.  On good hardware this feeling of presence is very strong.  I had a chance to try a demo of the HTC Vive recently, and there is a really impressive, if quite hard to describe, sensation of “being there”.  Even in our little demo app here you’ll get a taste of it.

This “presence” aspect has led some VR researchers to explore it as an additional dimension to data visualisation.  There is a fascinating paper from 2014 called “Data Visualization Using Virtual Reality” where researchers use “colors, sizes, transparencies and shapes of data points, their textures, orientations, rotation, pulsation” to encode additional dimensions from large datasets.  The idea is that people can more easily spot patterns in their data in VR.  The paper mentions some measurable cases where VR has allowed more effective interpretation of datasets.  A recent competition sponsored by the Wellcome Trust called the “Big Data VR Challenge” showed what some companies are achieving in this space already.  Others have written that the first company to achieve a decent generic VR data visualisation tool will become a unicorn.  Having tried VR myself, I’d caution against believing all the hype, though.  It is good, but I don’t believe it is so revelatory that it’s going to change how we interact with data anytime soon.  It is still early days: the headsets are heavy when worn for extended periods, and there are many other challenges, not least how to move and interact with objects in VR.

So that sets the context.  Now, what do the VR results look like for phone sensor data?

The End Result

The dataset I used for this demo is based on the data from Witalij Rudnicki‘s very interesting road trip blog series, which he kindly lent me to test the app.  This is a decent dataset because there are many points, collected during a long road trip, spread over a large area with plenty of changes in height.  The real-world spatial coordinates of the phone are transformed to fit into a VR “world”, and the data points are shown as cubes coloured according to the phone’s accelerometer: the more red, the more tilt; the more green, the more twist; the more blue, the more lift.  All sorts of further mappings could be done, from mapping gyroscope readings or time to visual properties like texture.  The output is like this:


One problem of course with trying to show 2D images of VR in a blog is that, well, you can’t.  The phone screenshot on the above right looks a bit rubbish, but once you’re wearing the headset the image is stretched across a wide part of your field of view, and the image reacts realistically to your head movement.  You can look around, and move through the world to see the data from different viewpoints.  You can only really experience what the view looks like if you use a VR headset.  So let’s proceed with doing that.
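The transformation just described, scaling real-world coordinates into the VR world and mapping accelerometer readings to colour channels, can be sketched roughly as follows.  The function names, value ranges and axis meanings here are my own illustrative assumptions, not the app’s actual code:

```python
def normalise(values, world_size=100.0):
    """Scale a list of real-world coordinates (e.g. all the latitude
    values) to fit one axis of the VR world, preserving relative positions."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    return [(v - lo) / span * world_size for v in values]

def accel_to_rgb(tilt, twist, lift, max_g=2.0):
    """Map accelerometer readings to a cube colour: more red = more tilt,
    more green = more twist, more blue = more lift.  Each channel is
    clamped to the 0..1 range expected by a renderer."""
    clamp = lambda g: min(abs(g) / max_g, 1.0)
    return (clamp(tilt), clamp(twist), clamp(lift))
```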

What You Will Need

Before we get started, here is what you will need:

  • Some phone sensor data stored in HCP, collected according to the SensorPhone tutorial using app version 1.3.2.  If for some reason you can’t complete this part, the VR viewer app does have some built-in demo data for offline use (thanks again to Witalij), so you can still try out the viewing experience without this step.
  • A Google Cardboard VR headset.  There are many available, see the Google Cardboard official list. I’ve been using the BrizTech VR for a mere $15 which works very well.
  • An Android phone to place into the Google Cardboard VR headset.  This needs to be running Android 4.4 (KitKat) or higher.  At present, the VR visualisation app is only available for Android devices.

Using the SensorPhone VR Visualisation App

To use the app, follow these steps:

Step 1) Install the app

The app is only available in Google Play at present; use the link below or search the Google Play store for “SensorPhone VR Visualisation”.  A version for iOS may be possible in future; see the Development Details section later.

Get it on Google Play

Step 2) Configure the “SensorPhone VR Visualisation” app

Open the SensorPhone VR Visualisation app, and you are prompted to enter your HCP details:


The screen is fairly self-explanatory: you need to enter your HCP user ID and password.  The only fiddly bit is getting the OData service URL.  To get this, start at the top-level HCP Dashboard.  Then choose Services -> Internet of Things Services -> Go To Service.  From the Internet of Things Services Cockpit, click the icon labelled MMS Cockpit at the bottom right.  This brings you to the Message Management Service (MMS) Cockpit, which should look something like this:

[Screenshot: MMS Cockpit]

Now click on the Application Data tile that says “Display Stored Messages”.  This brings up a list of tables, something like this:

[Screenshot: list of stored message tables]

Click on the table that holds the message data you want to use; remember, this must be data collected by the SensorPhone app v1.3.2 so that the message type is in the expected format.  At the top right of the next screen, there should be a link saying “OData API”, like this:

[Screenshot: OData API link]

That link is what we need to enter into that top box on the VR app configuration screen.  The URL should look something like this:

The “NEO_” part is not needed, so in fact this URL also works:

Enter this URL into the top box on the app config screen, then press “Check Connection”.  If all is well you should see something like this:

[Screenshot: configuration screen after Check Connection]

As mentioned, if you don’t have internet access, you can try an offline demo.  Tap the “Offline” checkbox (a small white dot appears to the left of the checkbox label), then press Start to see some of the “road trip” data from Witalij that was used to provide the screenshots for this blog.

Step 3) Move around and view the data

Finally, press the Start button and place the phone into your Google Cardboard headset.  Data points will start to appear, in timestamp order, over a period of 10 seconds or so.  To move around, look in the direction you want to go and press the Google Cardboard button.  This gives you a gentle “kick” in the direction you’re facing, and you’ll float off that way.  In the centre of the screen is a small dot representing your “gaze”.  If you gaze at a cube, an overlay appears showing more data about it:


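The movement mechanic described in the step above can be sketched in plain vector maths.  This is an illustrative approximation, not the actual Unity code; the impulse strength and the damping factor are assumptions:

```python
def kick(gaze_direction, impulse=1.0):
    """Each button press adds a velocity impulse along the gaze
    direction.  The gaze vector is normalised so the kick strength
    is the same regardless of the vector's original length."""
    length = sum(c * c for c in gaze_direction) ** 0.5 or 1.0
    return [c / length * impulse for c in gaze_direction]

def drift(position, velocity, dt, damping=0.98):
    """Each frame the viewer floats along their current velocity;
    a damping factor (an assumption here) slows the drift gradually."""
    new_position = [p + v * dt for p, v in zip(position, velocity)]
    new_velocity = [v * damping for v in velocity]
    return new_position, new_velocity
```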
Development Details

For the technically minded, here are more details on how the VR app was built.  The app was developed using Unity, a tool originally built as a game engine but whose makers also encourage use cases in data visualisation.  Unity supports C# and JavaScript and takes a lot of the pain out of building for multiple platforms; the same code base can target around a dozen platforms, depending on licensing.  There are Unity plugins for Google VR development, Microsoft’s HoloLens and some Augmented Reality (AR) products too.

The source code for this app is available on GitHub, freely available for any purpose.  More technical details of the project structure are in the repository, in particular how to change the visuals (materials, textures etc.) according to the sensor data.

Some ideas on extending the project:

  • Given that Unity is so well-suited to multi-platform development, an iOS version should be a straightforward addition.
  • It would be interesting to allow user-defined mappings of sensor measurements to in-world properties like texture, colour and so on.  Perhaps this could be defined in HCP in a UI5 dashboard and the mappings sent to the app.
  • At present the app does a single fetch to retrieve a maximum of 400 data points.  It would be better to make many smaller calls so that the data streams in.
  • At present the app renders a maximum of 400 data points.  Depending on the phone, rendering a few thousand data points at once is fine, and decent phones will cope with more.  To get beyond that would need some optimisation work.  A downside of VR is that consistently high framerates are needed to avoid discomfort when viewing.
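On the streaming point in the list above, OData’s standard $skip and $top query options would let the data come in as a series of small batches rather than one large fetch.  A minimal sketch (the service URL is a placeholder):

```python
def paged_query_urls(service_url, total=400, page_size=50):
    """Build a series of OData queries using $skip/$top so data can be
    pulled in small batches, each rendered as it arrives."""
    return ["%s?$format=json&$skip=%d&$top=%d" % (service_url, skip, page_size)
            for skip in range(0, total, page_size)]

# e.g. paged_query_urls("https://example.com/T_IOT_DEMO", total=100, page_size=50)
# yields two URLs, one for rows 0-49 and one for rows 50-99
```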

Closing Thoughts

With some free phone applications, a free HCP trial account and a VR headset costing a few dollars, we’ve managed to achieve entry-level VR with cloud integration.  Hopefully this shows how low the barrier to entry is to get a taste of VR.

If we were to extend the idea shown in this blog way beyond the “entry level” that it is, then we could imagine the following.  Imagine a team of footballers (soccer players) with IoT position sensors sewn into their boots and clothing.  Throughout the football match the sensors stream position information to HCP, say 60 readings per second per sensor.  A similar, but much improved, VR app could play back this information with 3D models of the player positions overlaid.  It would then be possible to view a playback of the game, from any position, at any instant, and have an accurate view of how the game looked.  I’m not aware of this being available in the UK for soccer yet, but it does look like the sort of thing a company called STRIVR already does in the US (their website is light on detail).  With VR becoming more accessible, this sort of tech may become more widespread.




  1. Raphael Pacheco

    Very cool blog!  It’s great that we’re bringing more people to this type of experience.  For a few months I have been studying this area and have even thought about starting to develop solutions in virtual reality and augmented reality (which is a big subject too).


    Raphael Pacheco.

    1. Kevin Small Post author

      Thanks Raphael, glad it was of interest.

      Unity supports AR solutions too; there are a bunch of plugins for various AR SDKs.  It isn’t such a big step from VR to AR: add a front-facing camera feed and overlay content on the display.

      Pokémon Go was built using Unity.

