Technical Articles

There’s more than a smile on your face!

Authors: Gianluigi Bagnoli, Thiago Mendes, Eddy Neveux.

Starring on stage: Finn Backer.

 

We have seen in a previous blog post how it is possible to get valuable feedback from people’s faces, skipping the tedious survey loop and getting an immediate sense of what is going on within your audience.

 

Live from SMB Innovation Summit 2020

We showed this capability during our annual innovation event, where thousands of partners gather to share their experiences and learn about our roadmaps and innovations. This year participants could rate the keynote not by entering a rating but just by smiling (or not) into their mobile phones. You can get an idea of this by watching some tweets – or in this image:

The overall architecture of this demo is pretty straightforward and was already discussed in a previous post. It basically relies on capabilities that these days are pretty mainstream: we used the AWS Rekognition service to detect the presence (or not) of a smile on participants’ faces and then sent a calculated rating back into a Qualtrics system, with no need to explicitly send surveys and collect feedback.

Some interesting notes:

  • The connection with Qualtrics is mediated by a set of APIs provided by Qualtrics itself, a pretty standard way of interacting with systems in these cloudy days.
  • We used AWS services, but we could also have used other similar services available in the cloud – for instance the ones provided by Azure. These days face recognition algorithms are a commodity provided via sets of services.
  • The application itself is then just an orchestration of all these services, residing in an account on the SAP Cloud Platform.
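To make the orchestration concrete, here is a minimal sketch of how a rating could be derived from the detected smile. Note that the helper name and the confidence thresholds below are our own illustrative assumptions, not the demo’s actual scoring logic:

```javascript
// Hypothetical helper: derive a 1-5 rating from a Rekognition-style
// FaceDetail object. The thresholds are illustrative assumptions.
function computeRating(faceDetail) {
  var smile = faceDetail.Smile || {};
  if (!smile.Value) {
    // No smile detected: low rating, slightly less harsh when uncertain.
    return smile.Confidence > 90 ? 1 : 2;
  }
  // Smile detected: scale the rating with the detection confidence.
  if (smile.Confidence > 95) return 5;
  if (smile.Confidence > 80) return 4;
  return 3;
}

// Example: a confident smile maps to the best rating.
console.log(computeRating({ Smile: { Value: true, Confidence: 99.2 } })); // 5
```

The rest of the pipeline then just forwards this number, together with the other detected attributes, to Qualtrics.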

 

Much more than just a smile …

So far, we have focused on the smile on your face.

The point is that your face tells more about you than just a smile. It tells your gender, your age, whether you have glasses or not, whether you wear a moustache, and so on. And all these data

  • can be relevant to segment the responses,
  • and can also be automatically detected by the same services that look for a smile on your face. Actually, detecting a smile is part of a much broader processing of the image of your face.

In fact, the recognition algorithms return many more attributes than just the smile. These are the attributes you can get from the algorithm:

// Payload built from the face attributes returned by Rekognition
body = {
  ...
  beard: String(faceAnalysis.Beard.Value),
  age: estimatedAge,
  eyeglasses: String(faceAnalysis.Eyeglasses.Value),
  eyesOpen: String(faceAnalysis.EyesOpen.Value),
  gender: String(faceAnalysis.Gender.Value),
  mouthOpen: String(faceAnalysis.MouthOpen.Value),
  mustache: String(faceAnalysis.Mustache.Value),
  smile: String(faceAnalysis.Smile.Value),
  sunglasses: String(faceAnalysis.Sunglasses.Value),
  emotion: String(emotion)
};
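Upstream, all these attributes come from a single DetectFaces call; the key detail is to request the full attribute set, since by default Rekognition returns only a small subset. A minimal sketch, assuming the AWS SDK for JavaScript v2 (the helper name is ours):

```javascript
// Build the DetectFaces request. Attributes: ["ALL"] asks Rekognition
// for the full FaceDetail set (Smile, Beard, Gender, AgeRange, Emotions...).
function buildDetectFacesParams(imageBytes) {
  return {
    Image: { Bytes: imageBytes },
    Attributes: ["ALL"]
  };
}

// With the SDK this would be used roughly as:
//   var AWS = require("aws-sdk");
//   new AWS.Rekognition().detectFaces(buildDetectFacesParams(bytes),
//     function(err, data) { /* data.FaceDetails[0] is faceAnalysis */ });
```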

Based on these, we added the corresponding custom fields to the survey data back in Qualtrics.

The code that POSTs the calculated rating to Qualtrics must then include all these different dimensions:

var data = {
    values: {
      startDate: currDate,
      endDate: currDate,
      status: 0,
      ipAddress: "127.0.0.1",
      progress: 100,
      duration: 1,
      finished: 1,
      recordedDate: currDate,
      locationLatitude: "49.3008",
      locationLongitude: "8.6442",
      distributionChannel: "anonymous",
      userLanguage: "EN",
      SessID_CED6t672nk: surveyData.sessionID,
      QID1: surveyData.question1,
      QID2_TEXT: surveyData.question2,
      Age_CED3xr977x: surveyData.age,
      Gender_CEDjhthsdn: [surveyData.gender],
      Eyegla_CED0k3z2q8: [surveyData.eyeglasses],
      EyesOp_CEDxwjpth4: [surveyData.eyesOpen],
      MouthO_CEDjyxmrnw: [surveyData.mouthOpen],
      Mustac_CED359e9s4: [surveyData.mustache],
      Smile_CEDo6lw72w: [surveyData.smile],
      Sungla_CED208nwbb: [surveyData.sunglasses],
      Beard_CEDgucai3u: [surveyData.beard],
      Emotio_CEDol7vonu: [surveyData.emotion]
    }
  };

  //Set HTTP Request Options
  var options = {
    uri: uri,
    body: JSON.stringify(data),
    headers: {
      "Content-Type": "application/json",
      "X-API-TOKEN": process.env.QUALTRICS_TOKEN
    }
  };

  //Make Request
  req.post(options, function(error, response, body) {
    if (!error && response.statusCode == 200) {
      var obj = JSON.parse(body);
      var resp = obj.result.responseId;
      callback(null, resp);
    } else {
      callback(response.statusMessage, response);
    }
  });

This way we were able, for instance, to segment the responses by age, without explicitly asking the participants for any information.

What to do then with all these smiles?

We then collected plenty of surveys. What to do with all this X-data? It should have some operational effect through interaction with our O-data back-end, as we already discussed here.

We then listen for the completion of survey responses, using Qualtrics webhooks: we register the callback https://xo.cfapps.eu10.hana.ondemand.com for the CompleteResponse event.
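Such a subscription goes through Qualtrics’ Event Subscriptions API (POST /API/v3/eventsubscriptions/). The sketch below builds the subscription payload; the helper name and the survey id in the usage comment are our own illustrative assumptions:

```javascript
// Build the body for the Qualtrics Event Subscriptions API.
// The topic "surveyengine.completedResponse.<surveyId>" fires once
// per finished survey response.
function buildSubscription(surveyId, callbackUrl) {
  return {
    topics: "surveyengine.completedResponse." + surveyId,
    publicationUrl: callbackUrl
  };
}

// Registered with the same request module used elsewhere, e.g.:
//   req.post({
//     uri: baseUrl + "/API/v3/eventsubscriptions/",
//     headers: { "X-API-TOKEN": process.env.QUALTRICS_TOKEN,
//                "Content-Type": "application/json" },
//     body: JSON.stringify(buildSubscription(surveyId,
//       "https://xo.cfapps.eu10.hana.ondemand.com"))
//   }, handler);
```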

That means that every time a survey is filled with a response, a “Completed Response” event is fired and the registered callback is called. This callback can then do some analysis and, based on that, interact with the O-data backend.

In our case, we decided to act when the survey result is bad: if the rating is under some threshold (for instance 2), we want to create an Activity in the back-end CRM system. Here’s the code to do so using the SAP Business One Service Layer:

// Retrieve the completed response and act when the rating is bad (1 or 2)
req.get(options, function(error, response, body) {
    if (!error && response.statusCode == 200) {
      var obj = JSON.parse(body);
      var rating = obj.result.values.QID1;
      if (rating == 1 || rating == 2) {
        var surveyText = obj.result.values.QID2_TEXT;
        B1SL.PostActivity(surveyText, callback);
      }
    } else {
      callback(response.statusMessage, response);
    }
  });

// Create a follow-up Activity in the SAP Business One back-end via the
// Service Layer. Date formatting relies on the "moment" package.
function PostActivity(surveyText, callback) {
  var oDataEndpoint = "/Activities?";
  var bodyActivity = {
    CardCode: null,
    Notes: surveyText,
    ActivityDate: moment().format("YYYY-MM-DD"),
    StartDate: moment().format("YYYY-MM-DD"),
    Subject: 11,
    DocType: "-1",
    DocNum: null,
    DocEntry: null,
    Priority: "pr_High",
    Details: surveyText,
    Activity: "cn_Task",
    ActivityType: 3
  };

  ServiceLayerRequest(false, "POST", oDataEndpoint, bodyActivity, function(
    error,
    response,
    body
  ) {
    if (!error && response.statusCode == 201) {
      body = JSON.parse(body);
      callback(null, body);
    } else {
      callback(error);
    }
  });
}
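For completeness: the Service Layer call above only works within an authenticated session, which is opened with POST /b1s/v1/Login. The sketch below builds the login body (the field names follow the Service Layer API; the helper name and environment variables are placeholders of our own):

```javascript
// Build the SAP Business One Service Layer login body. The response sets
// a B1SESSION cookie that later calls (such as the Activity POST) must send.
function buildLoginBody(companyDb, userName, password) {
  return { CompanyDB: companyDb, UserName: userName, Password: password };
}

// Typical use with the request module (jar: true keeps the session cookie):
//   req.post({
//     uri: serviceLayerUrl + "/b1s/v1/Login",
//     json: buildLoginBody(process.env.B1_COMPANY,
//                          process.env.B1_USER, process.env.B1_PASSWORD),
//     jar: true
//   }, handler);
```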

 

Conclusion

Face recognition is a very powerful feature that can be leveraged in experience management not only to calculate a rating but also to segment it along different dimensions such as age, gender, and so on – and all of this without asking the participants any extra questions. The rating experience is pretty much immediate and simple, and all the segmentation is automatic and runs behind the scenes.

Once the surveys are completed, we are back to XO normal: we catch all the completions and see whether we can extract some signal for our operations.

You can find the code for this demo, together with the instructions on how to deploy it, in this public repository. Please keep in mind that a real-world productive app like this will have to comply with all applicable privacy and data-ownership laws.

 
