At TechEd Madrid, Nic Doodson and I (Will Powell) presented Keytree's CEO Vision. CEO Vision is a spatial operating system that lets you interact with augmented reality objects in the most natural way possible… with your hands. No mice, keyboards, tablets or mobile phones: you simply look at what you want to know more about, then reach out and touch the data. Here is a condensed video of CEO Vision at Demo Jam Madrid 2012:
CEO Vision uses off-the-shelf technology. The hardware list for the latest version, CEO Vision generation 2, is:
- Microsoft Kinect x 2
- Vuzix Star 1200
- HD Camera
These devices provide a total of six cameras (two infrared and four RGB) to track the environment, and they make up the three hardware components of the front end. The back end is powered by SAP NetWeaver Cloud and SAP HANA.
CEO Vision uses SAP NetWeaver Cloud and SAP HANA in the cloud. The demoed version has a staggering 1.2 billion rows in the database, which it can access near instantaneously. This speed makes the platform an ideal choice for CEO Vision because we cannot put up a loading bar or pause the user's reality while we get data into the right place at the right time. It needs to be seamless, as if the augmented reality and real reality were indistinguishable. SAP NetWeaver Cloud handles RESTful requests from the front end, marshalling them and managing the interaction with SAP HANA in the cloud. Once the required queries and analysis are complete, the data set is returned to the front end as JSON, keeping the transfer as light and quick as possible. The data structure for CEO Vision is sales-based: we store information such as quantity, sales person, product id, description, location, online or offline, date and more.
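To make the JSON handoff concrete, here is a minimal sketch of how the front end might consume such a payload. The field names and values are illustrative assumptions based on the data structure described above, not the actual CEO Vision schema.

```python
import json

# Hypothetical JSON payload as the back end might return it to the front
# end -- field names are illustrative, not the real CEO Vision schema.
payload = """
[
  {"product_id": "P100", "description": "Widget", "quantity": 3,
   "sales_person": "A. Smith", "location": "Madrid", "online": true},
  {"product_id": "P100", "description": "Widget", "quantity": 5,
   "sales_person": "B. Jones", "location": "London", "online": false},
  {"product_id": "P200", "description": "Gadget", "quantity": 2,
   "sales_person": "A. Smith", "location": "Madrid", "online": true}
]
"""

def quantity_by_product(rows):
    """Aggregate quantities per product, ready to display in an augmentation."""
    totals = {}
    for row in rows:
        totals[row["product_id"]] = totals.get(row["product_id"], 0) + row["quantity"]
    return totals

rows = json.loads(payload)
print(quantity_by_product(rows))  # {'P100': 8, 'P200': 2}
```

Keeping the payload as a flat JSON array like this is what keeps the transfer light: the heavy aggregation happens in SAP HANA and only the result set travels to the glasses.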
The CEO Vision front end has three functional parts: augmented reality, hand tracking and face recognition. It is the combination of augmented reality and hand tracking that makes CEO Vision's interaction possible.
We use markerless augmented reality so the user experience is as close to real life as possible, rather than relying on predefined markers. When CEO Vision starts, it parses known objects to form a set that the system can recognise and then trigger the packaged augmentations on top of. Once an object is recognised, CEO Vision pulls data from SAP HANA and SAP NetWeaver Cloud to drive the augmentations. The glasses themselves are fully transparent and the GPU-accelerated environment is delivered stereoscopically. Stereoscopic means that we provide two separate images, one for each eye, allowing for the approximately 6.6cm separation between the eyes, which really does give a full 3D experience in the glasses. For augmented reality we actually scale 1080p down to 720p for recognition and scale the augmented world back up afterwards, to improve performance and recognition speed. A good place to start with augmented reality is OpenCV, one of the largest open-source computer vision projects.
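The 1080p-to-720p trick above boils down to a coordinate mapping: recognition runs on the smaller frame, and detected positions are scaled back up for rendering. A minimal sketch (the function names are illustrative, only the resolutions come from the text):

```python
# Recognition runs on frames downscaled from 1080p to 720p; detections are
# scaled back up for the augmented overlay. 1080/720 gives a factor of 1.5.
FULL_H, WORK_H = 1080, 720
SCALE = FULL_H / WORK_H  # 1.5

def to_working_resolution(x, y):
    """Map a full-resolution pixel into the downscaled recognition frame."""
    return x / SCALE, y / SCALE

def to_full_resolution(x, y):
    """Map a detection in the 720p frame back into the 1080p augmented world."""
    return x * SCALE, y * SCALE

# An object recognised at (640, 360) in the 720p frame is rendered at
# (960.0, 540.0) in the 1080p overlay.
print(to_full_resolution(640, 360))
```

Because recognition cost grows with pixel count, running it at 720p roughly halves the work per frame while the overlay still renders at full resolution.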
CEO Vision uses two Kinects to do precise hand tracking by comparing IR camera images with pre-sampled hand position images. This tracking lets us detect when a hand moves and when a pinch interaction happens. Once we have these basic interactions, more complex multi-hand user interactions can be built on top, for instance pinch-to-zoom or full 3D rotation, which we can do with the world map in our demo.
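The pre-sampled comparison can be sketched as a nearest-template classifier over IR hand crops. The tiny synthetic templates and the sum-of-squared-differences metric here are assumptions for illustration, not the production matcher:

```python
import numpy as np

# Tiny synthetic stand-ins for pre-sampled IR hand-pose images -- in a real
# system these would be captured crops, one set per supported pose.
templates = {
    "open":  np.zeros((8, 8)),
    "pinch": np.ones((8, 8)),
}

def classify_pose(ir_crop, templates):
    """Return the label of the template closest to the observed IR crop,
    measured by sum of squared differences."""
    best_label, best_score = None, float("inf")
    for label, template in templates.items():
        score = float(np.sum((ir_crop - template) ** 2))
        if score < best_score:
            best_label, best_score = label, score
    return best_label

def pinch_started(prev_pose, curr_pose):
    """A pinch interaction fires on the open -> pinch transition."""
    return prev_pose == "open" and curr_pose == "pinch"

frame = np.full((8, 8), 0.9)            # closer to the all-ones "pinch" template
print(classify_pose(frame, templates))  # pinch
```

Layering gestures like pinch-to-zoom then becomes a matter of tracking pose transitions per hand and combining them across both hands.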
Face recognition is the final piece of CEO Vision's functionality. Unfortunately it did not work in the demo jam because I looked at Nic with the large screen behind him, so the system saw multiple Nics. This functionality provides a fast way to identify people you have interacted with and surface relevant content, so you can concentrate on the more important aspects of your interaction. Once a face is recognised, data is pulled and displayed around the face of the person you are looking at. This content can be their emails, sales figures or latest tweets, all giving you a quicker way of getting to the data that matters. The face recognition uses Haar cascades for faces and eyes, looking for positive pattern matching within the images.
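Haar cascade detectors return a list of candidate bounding boxes, which is exactly how the "multiple Nics" situation arises. One simple heuristic, sketched below with illustrative box values, is to keep only the largest detection, since the person you are actually looking at is usually nearest the camera:

```python
# Haar cascades report candidate faces as (x, y, w, h) boxes. When a screen
# behind the subject also shows faces, several boxes come back at once.
# One simple remedy: keep only the largest box (nearest face). The box
# values below are illustrative.
def dominant_face(boxes):
    """Pick the detection with the largest area, or None if there are none."""
    if not boxes:
        return None
    return max(boxes, key=lambda b: b[2] * b[3])

detections = [
    (420, 180, 160, 160),  # the real person, close to the camera
    (900, 120, 60, 60),    # a face on the screen behind them
    (980, 140, 55, 55),    # another on-screen face
]
print(dominant_face(detections))  # (420, 180, 160, 160)
```

Requiring a positive eye detection inside the face box, as the cascade pipeline above does, is another common way to reject false positives before spending time on recognition.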
The hand tracking, face recognition and augmented reality work together to create a rich, immersive front end user experience, and with the speed of HANA we can interact with data without any worry about lag or delay.
History of CEO Vision
Dan McNamara (CEO, Keytree) originally came up with the idea in late 2011, and generation 1 was completed in April 2012. It encompassed face recognition and markerless recognition and tracking, and used only a single Kinect. It relied on gesture interaction and had a spatial resolution for interaction of about 10cm; that is to say, distinct objects could not be closer than 10cm. In May 2012 CEO Vision won Inno Jam online. Generation 2, completed in October 2012, improved and enhanced generation 1: it has a spatial resolution of about 2cm and no longer relies on gestures, as it has precise hand tracking.
CEO Vision will continue to evolve, and at Keytree we think it includes some of the technologies we may well see in the not too distant future, especially with the introduction of Project Glass from Google in early 2013 and other companies patenting smart glasses technologies. Parts of the CEO Vision technology are being taken forward into other applications, as well as the spatial operating system in its entirety.
We are thrilled at Keytree to have won Demo Jam in Madrid 2012 with CEO Vision. I think Demo Jam is a fantastic platform to be inspired by technology; there were some fantastic entries and tough competition. I personally urge anyone to get involved. Bring on 2013…
If you have any questions about CEO Vision please post them or visit http://www.keytree.co.uk