Smile, you are on Qualtrics!

Authors: Gianluigi Bagnoli, Trinidad Martinez, Thiago Mendes, Edward Neveux

We talk a lot about experience management: we identify a suitable audience and carefully build surveys to get meaningful feedback; we send them out and collect the responses; we slice and analyze the experience data and try to extract some useful operational insight.

But, wait a moment: what is the best way to collect that feedback?

You are in a sales cycle, presenting a solution to a prospect. How do you know whether it is being well received and resonating with your counterpart, or whether they are just daydreaming about their next holiday? Another example: you are speaking in front of an audience during an event. How do you know whether what you are presenting makes sense to them, or whether they are already looking forward to the evening gala? Or perhaps you give someone a gift. How do you know whether they are happy about it, or already thinking about whom to re-gift it to?

Simple: by their face. And out of its many expressions, there is one that is a great predictor of anyone's true inner feelings: their smile.

So wouldn't it be great to measure satisfaction or disappointment from the smiles on the faces of our audience, and use that as an operational metric? Of course, this requires a lot of attention when it comes to data privacy: our face is something deeply personal, and we don't like the idea of it being used in unfavorable ways. There are legal frameworks to deal with that, starting with the GDPR. For this article, suffice it to say that these frameworks exist and that they must be applied in any real application.

Let's now focus on the code and, as an example, build a solution for a simple scenario. Imagine a survey with one single, simple question: whether or not you like something (a product, an event, a brand, whatever). Imagine then extracting the answer to this question from the face of each member of your audience: whether they smile or not. Look at this video:

Of course we can imagine even more complex scenarios, for instance taking a picture of a whole audience and measuring the overall feedback as the average "smile-ness" across all the faces, or more articulated surveys across several dimensions, but let's keep it very simple. From a high-level point of view, implementing this scenario should be pretty simple: we just need:

  • first, some code that takes a picture, for instance from a mobile device, detects whether there is a face in it and, if so, applies some intelligence to tell whether that face is smiling;
  • and then a system that stores all the feedback data, provides the statistical tools to slice and analyze it, and infers some operational insight from it.

It should come as no surprise that we will use SAP's latest acquisition, Qualtrics, to store the feedback data. But instead of explicitly asking people in a form what they think, we will infer it from their expressions.

The link between these two parts of the solution is the Qualtrics API. In fact, like any 21st-century cloud solution, Qualtrics offers a very complete set of APIs that expose all of its capabilities to the web. Our two components are therefore connected in a loosely coupled way, and the only dependency between them is the service interface provided by Qualtrics itself.

 

Measuring your smile

The technology exists to build algorithms that (1) recognize whether there is a face in a picture and (2) measure whether that face contains a smile or not. Building it from scratch involves a lot of machine-learning code, dataset curation, training, and so on. There is plenty of literature available and it is an interesting research area, but we are pretty lazy here, so we shamelessly reuse building blocks that are already widely available:

  • To recognize whether the picture contains a face we can use YOLO. We already showed how that can be done in another article, so we will not repeat it here.
  • Once a face has been identified, we must measure its level of "smile-ness". Amazon Rekognition provides this capability, together with an API to consume it.

Since we have an API, the code to measure the smile-ness of a face is pretty straightforward. Here it is:

const AWS = require('aws-sdk');

// Calls the Rekognition DetectFaces API and returns, via the callback,
// the highest-confidence emotion of each detected face
function DetectFaces(imageBase64, callback) {
    // Decode the base64 string into the raw image bytes Rekognition expects
    var imageBytes = Buffer.from(imageBase64.toString(), "base64");

    AWS.config.update({region: '<your_region>'});
    AWS.config.credentials = new AWS.CognitoIdentityCredentials({
        IdentityPoolId: '<your_ID>'
    });

    var rekognition = new AWS.Rekognition();
    var params = {
        Image: {
            Bytes: imageBytes
        },
        Attributes: ['ALL'] // include emotions, age range, gender, etc.
    };

    rekognition.detectFaces(params, function (err, data) {
        if (err) return callback(err);

        // Return the array element with the highest value for prop;
        // confidences are floats, so compare with parseFloat
        function getMax(arr, prop) {
            var max = null;
            for (var k = 0; k < arr.length; k++) {
                if (max == null || parseFloat(arr[k][prop]) > parseFloat(max[prop])) {
                    max = arr[k];
                }
            }
            return max;
        }

        // Retrieve the highest-graded emotion of each face
        for (var i = 0; i < data.FaceDetails.length; i++) {
            var maxConf = getMax(data.FaceDetails[i].Emotions, "Confidence");
            callback(null, maxConf.Type);
        }
    });
}

 

At this point, all we have to do is deploy this code on some cloud infrastructure, for instance the SAP Cloud Platform.

Please note that we use just one facial feature here: the smile. Much more complex analysis is possible, all based on automatic detection of facial features. For instance, we could segment the feedback by the estimated age of the respondent, using just an API call for that. We could also classify the data by the respondent's gender, again using another Rekognition attribute.
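Since our call already requests Attributes: ['ALL'], the same DetectFaces response carries these attributes too. The helper and the hand-crafted sample object below are our own illustration; only the field names come from the Rekognition response shape:

```javascript
// Pull the age range and gender estimates out of a Rekognition
// FaceDetail object (the same objects we iterate over in DetectFaces)
function describeRespondent(faceDetail) {
    return {
        ageLow: faceDetail.AgeRange.Low,
        ageHigh: faceDetail.AgeRange.High,
        gender: faceDetail.Gender.Value
    };
}

// Hand-crafted sample shaped like a Rekognition FaceDetail, for illustration
var sample = {
    AgeRange: { Low: 26, High: 40 },
    Gender: { Value: 'Female', Confidence: 99.1 }
};
console.log(describeRespondent(sample));
```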

Also, using yet another attribute of the same API, we can extract feedback from a richer set of emotions than just happiness/unhappiness, and check whether the message we are conveying makes the respondent angry, confused, surprised, or afraid.
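To turn a detected emotion into the 1-5 rating our one-question survey expects, a simple lookup table is enough. The emotion labels below are the ones Rekognition can return; the score assigned to each is purely our own assumption:

```javascript
// Map Rekognition emotion labels to a 1-5 survey rating.
// The score assignments are illustrative, not prescribed by any API.
var EMOTION_TO_RATING = {
    HAPPY: 5,
    SURPRISED: 4,
    CALM: 3,
    CONFUSED: 2,
    SAD: 1,
    ANGRY: 1,
    DISGUSTED: 1,
    FEAR: 1
};

function emotionToRating(emotionType) {
    // Fall back to a neutral rating for unknown labels
    return EMOTION_TO_RATING[emotionType] || 3;
}

console.log(emotionToRating('HAPPY'));   // 5
console.log(emotionToRating('UNKNOWN')); // 3
```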

 

Storing and analyzing all the smiles

Back to our simple case: we have now extracted a measure of the smile. We can then build a feedback response and send it to Qualtrics. The Qualtrics API makes this a simple task; here is the code to fill an answer into a survey:

const req = require('request');

// Posts a survey response to Qualtrics through the Responses API
function FillSurvey(surveyData, callback) {
    var sData = JSON.parse(surveyData);
    var uri = "https://<qualtricsTenant>.qualtrics.com/API/v3/surveys/<SurveyID>/responses";
    var resp = {};

    // Response body: the metadata fields are sample values,
    // the actual rating extracted from the smile goes into QID1
    var data = {
        "values": {
            "startDate": "2019-10-17T14:45:54Z",
            "endDate": "2019-10-17T14:46:18Z",
            "status": 0,
            "ipAddress": "95.122.177.240",
            "progress": 100,
            "duration": 23,
            "finished": 1,
            "recordedDate": "2019-10-17T14:46:19.048Z",
            "locationLatitude": "40.4143066406",
            "locationLongitude": "-3.70159912109",
            "distributionChannel": "anonymous",
            "userLanguage": "EN",
            "QID1": sData.QID1, // Rating goes here
            "QID1_DO": ["1", "2", "3", "4", "5"]
        }
    };

    // Set HTTP request options
    var options = {
        uri: uri,
        body: JSON.stringify(data),
        headers: {
            'Content-Type': 'application/json',
            'X-API-TOKEN': '<your_qualtrics_API_token>'
        }
    };

    console.log("Filling survey in Qualtrics " + uri);
    // Make the request
    req.post(options, function (error, response, body) {
        if (!error && response.statusCode == 200) {
            console.log("Posted successfully to Qualtrics\n");
            callback(null, resp);
        } else {
            callback(error || response.statusMessage, response);
        }
    });
}

 

Done. The feedback is stored in Qualtrics exactly as if it came from explicit questions in real forms, and it can then be analyzed with the standard Qualtrics statistics tools. Besides, once your experience data (X-data) is stored in Qualtrics, you can start integrating it with your operational data (O-data), as we already discussed in this article.
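The two building blocks can now be chained end to end. The sketch below assumes the DetectFaces and FillSurvey functions shown above; the smile-to-rating mapping (5 for a happy face, 1 otherwise) is just an assumption for our one-question survey:

```javascript
// Chain the two steps: detect the dominant emotion in the picture,
// translate it into a rating, and post it to Qualtrics.
// Both functions use Node-style (err, result) callbacks.
function SmileToSurvey(imageBase64, callback) {
    DetectFaces(imageBase64, function (err, emotionType) {
        if (err) return callback(err);
        // Illustrative mapping: a smile scores 5, anything else scores 1
        var rating = emotionType === 'HAPPY' ? 5 : 1;
        FillSurvey(JSON.stringify({ QID1: rating }), callback);
    });
}
```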

A final note: in architectural terms, we have just orchestrated a few services, some from SAP and some not, into an intelligent XM solution. Here is a high-level architecture of this solution:

Conclusion

You do not need to ask for feedback explicitly in forms; just use the faces of your audience to get X-data out of them!

All the code used in this article is freely available in a GitHub repository.

 
