Your phones have become much “smarter”, no doubt about that.

They are starting to know a whole lot about the user context.

That includes, to name a few:

  • Current location
  • Next meeting
  • Ambient light level
  • Whether you are walking or riding in a car

Some of those inputs are just cool or nice to have...until someone finds an innovative way to leverage them.

For example, the accelerometer and gyroscope were mostly used to manage portrait/landscape orientation.

Then, people managed to improve indoor location accuracy based on your acceleration data!

We can detect the pattern of your footsteps reflected in the acceleration data and, in addition to that, if your horizontal acceleration is zero, your speed doesn't change (ask Newton :smile: ).
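
To make the footstep idea concrete, here is a minimal sketch of threshold-based step counting on raw accelerometer samples. The sample format, the constants, and both thresholds are illustrative assumptions; real step counters add filtering and per-device tuning.

```python
import math

GRAVITY = 9.81        # m/s^2; subtract it so a phone at rest reads ~0
STEP_THRESHOLD = 1.5  # m/s^2 spike that we treat as a step (illustrative)
MIN_STEP_GAP = 0.3    # seconds; peaks closer than this count as one step

def count_steps(samples):
    """Count footsteps in a list of (timestamp, ax, ay, az) tuples.

    A step shows up as a spike in the magnitude of the acceleration
    vector, so we count peaks that clear a threshold, with a minimum
    gap so a single step is not counted twice.
    """
    steps = 0
    last_step_time = None
    for t, ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az) - GRAVITY
        if magnitude > STEP_THRESHOLD:
            if last_step_time is None or t - last_step_time >= MIN_STEP_GAP:
                steps += 1
                last_step_time = t
    return steps

# A resting phone (~9.81 m/s^2 total) produces no steps; spikes do.
print(count_steps([(0.0, 0.0, 0.0, 9.81), (0.5, 0.0, 3.0, 11.0),
                   (1.0, 0.0, 0.0, 9.81), (1.5, 0.0, 3.2, 11.2)]))  # -> 2
```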

Anyway, my point is:

  • We already have plenty of great sensors
  • Combining sensors can create a new powerful “meta-sensor”
  • Combining hardware sensors + external context data (Facebook likes, calendar appointments, emails, etc.) can bring your applications to a whole new level. For example, if you are out running outside work hours, we could automatically message your friends and propose they join you for a jog. On your way back, after checking the weather, we could suggest a store where you can grab an umbrella (see the sketch below).
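
As a rough illustration of that jogging scenario, here is a sketch of one rule that fuses several context signals into a single action. Every provider function (get_activity, is_work_hours, forecast_is_rainy, and the two notifiers) is a hypothetical placeholder; in a real app they would wrap the platform's activity-recognition, calendar, and weather APIs.

```python
from datetime import datetime

# Hypothetical context providers; real ones would wrap platform APIs.
def get_activity():      return "running"   # e.g. from motion sensors
def is_work_hours(now):  return 9 <= now.hour < 18
def forecast_is_rainy(): return True        # e.g. from a weather service
def notify_friends(msg): print("To friends:", msg)
def suggest(msg):        print("Suggestion:", msg)

def on_context_change():
    """Fuse activity + calendar + weather into higher-level actions."""
    now = datetime.now()
    if get_activity() == "running" and not is_work_hours(now):
        notify_friends("I'm out jogging, want to join?")
        if forecast_is_rainy():
            suggest("Rain expected: there is a store nearby with umbrellas.")

on_context_change()
```

The point is less the rule itself than the pattern: each individual signal is unremarkable, but the combination is what feels smart.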

As we can see that’s very good, but your own (hardware + software) => (human body + brain) is outperforming all devices today in many areas.

Let's take an example. Stop reading for a second and look around you. (Really, do it!)

Instantly, you know so many things about your surroundings that your phone can only dream of!

People

  • How many people are around you
  • Who they are, or an idea of their age/gender/social status
  • What they are probably doing
  • Your emotional state, and an estimation of theirs.
    (Humans and some animals, such as dogs, are very good at that; it helps us predict how someone might act and optimize our interactions with them)

Geometry & Movements

  • How big the room is and its general geometry
  • What type of objects you can find and how you can use them
  • Where you would go to reach the exit door
    (Unless you are in a Las Vegas casino, where you will get lost no matter what :D )
  • Movements: who is walking where, whether your coffee mug is about to fall off the table, and that a colleague is throwing an Angry Birds teddy bear at you. (Thanks, peripheral vision!)

Other

  • Smell. Since we don’t use this sense that much we never put so much efforts into odor sensor, but we use chemical sensors in some industries.

For years there has been a lot of work on face detection, aiming to infer many things about a person (a minimal detection sketch follows this list):

  • Estimated Age
  • Gender
  • Currently displayed emotion
  • Body type
  • Etc.
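
The detection step itself is easy to try today: OpenCV, for instance, ships with pretrained Haar cascades for frontal faces. Below is a minimal sketch; the image filename is a placeholder, and estimating age, gender, or emotion would require separate models applied to each detected face.

```python
import cv2  # pip install opencv-python

# Load the frontal-face Haar cascade that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("people.jpg")  # placeholder: any photo with faces
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns one (x, y, w, h) rectangle per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Found {len(faces)} face(s)")

for (x, y, w, h) in faces:
    # Each crop is what you would feed to age/gender/emotion models.
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("people_annotated.jpg", image)
```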

Regarding geometry, that's where a lot of crazy things are happening at the moment, and that's the focus of the next section.

Scanning and Mapping your World

Here comes the fun part. It seems the most basic computation we perform daily from a young age (What is around me? How far is this door? Let's turn left so we don't hit this wall) is very tricky for a device at the moment.

But a wide range of new devices is coming to the market!

  • Project Tango
    • The aim is to “give a human-level understanding of your surroundings”.
      Basically, we want a way for a machine to parse your surroundings and get the same kind of data as you currently do, as we mentioned in the previous section.
    • It is an Android tablet at the moment, but it is only a matter of time before the required sensors and processing become available on your everyday phone.
    • Have a look here! Say hello to Project Tango! - YouTube

  • Structure Sensor
    • An extra hardware component you can attach to an iPad
    • Provides depth and texture information
    • http://technabob.com/blog/wp-content/uploads/2013/11/structure-sensor-ipad.jpg
    • You can see an example of myself scanned :grin: here

  • Panono
    • A consumer device: a 360° camera that you throw into the air
    • Video there

They allow amazing new possibilities, but they are still external devices we need to carry around.

For software to know the context of your current surroundings, you don't want to have to take a device out of your pocket and scan the room.

Ideally, it should be available to the software seamlessly, just like for your brain.

One idea is to feed this device the same input your brain gets, especially from your two front-facing HD cameras, also known as your eyes :smile:

Some attempts have been made, especially in the shape of “smart glasses”; let's talk about them more in the next section.

From “Augmented Reality” to Reality V2

I’ve tried a few smart glasses.

  • Vuzix M100
  • Google Glass V1
  • Kopin Golden-I
  • Atheer Labs Glass
  • Osterhoutgroup Glass

http://www.smartglassesnews.org/wp-content/uploads/2014/02/compare-smart-glasses.jpg

From my perspective, each device available today made at least one big compromise that keeps it from mass-market adoption.

Here are a few things to keep in mind when evaluating smart glasses:

  • Screen resolution
  • Speed
  • Battery Life
  • Field of View
  • Focus distance (the apparent distance at which the virtual content appears)
  • Transparent / non-transparent overlay for your eyes
  • Contrast
  • Weight / Overall Comfort
  • Ability to use on top of other glasses OR with correcting lenses
  • Interaction Mode
  • OS
  • Both eyes vs. a single screen
  • Price

Among the top compromises are:

  • Battery life (30 minutes on Google Glass!)
  • Contrast, especially for see-through devices.
    Sometimes it's so hard to see that you just wish you had an iPad :grin:
  • Comfort. With many of these devices, after five minutes you wish you could be “free” again from wearing them.
    It could be the weight or the way they change what you see vs. real life.
  • Interaction mode. That's a tough challenge. Glasses have to come up with a new UX paradigm, but voice is not there yet.
    Some detect your gestures, which is very innovative, but it requires practice, and when it does not work on the first try, the time-saving benefit is gone.

In the end, if the goal is to have a way to interact quickly with the user at all times, you may consider something like this.

http://4.bp.blogspot.com/-6l-f7fKVoHw/UI9s-6P1ueI/AAAAAAAAChg/OG2QOGoEeuc/s1600/Smartwatch.jpg

If you do need augmented reality, the lack of comfort I have experienced with smart glasses makes me consider simply holding a tablet and looking around through it as a great (and cheap!) alternative. I expect much better glasses to come to market within the next two to three years: wearable AR devices people feel comfortable wearing in the street, not just crazy early adopters :grin: There is room for more than one winner in this market, and I'm very curious to see who they will be.

Looking further into the long term, we are going to merge with technology (wearables and/or implants) as much as technology will become more human, and as the border between those two worlds gets thinner, our experience of reality will change. The most impressive change at the moment is Augmented Reality, but it only works while you wear or use the device, and that will be nothing compared to our future… Reality V2?

So, to conclude, you are still the best device…for now!

Follow me on Twitter! @omercier

Visit www.sap.com/mobileinnovation and learn how to achieve your business objectives.
