Replicate data from SuccessFactors to Commissions using Speech-To-Text
This blog post gives you an overview of how to replicate information from SAP SuccessFactors to SAP Commissions using REST APIs. Sounds pretty basic, right? How about we also leverage SAP Conversational AI and Speech-To-Text to accomplish our goal? Sounds interesting? Let us get started.
Requirements:
- An account at https://cai.tools.sap/ (this is where we will create our chatbot)
- An SAP BTP trial account
- Access to an SAP SuccessFactors instance
- Access to an SAP Commissions instance
- A code editor, for example Visual Studio Code
I’ve broken this blog post down into three sections:
- Create iFlow to replicate Employee Info from SuccessFactors to Commissions
- Create bot using SAP CAI to initiate replication process
- Create a simple UI5 app and embed chatbot with Speech-To-Text functionality.
1. Create iFlow to replicate Employee Info from SuccessFactors to Commissions.
Ensure your BTP trial account is set up and that you have subscribed to SAP Integration Suite; then you can start creating your iFlow. Please note: before you start building the iFlow, set up the basic authentication credentials for your SuccessFactors and Commissions tenants under Monitor > Integrations > Security Material.
Once done, create and configure the iFlow as shown in the images below.
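For reference, the SuccessFactors request in this iFlow needs to read the PerPerson entity with its personal-info and email navigations expanded, since the script in the next step parses exactly those elements. Assuming the OData V2 API and the standard SuccessFactors receiver adapter, the query would use something along the lines of $expand=personalInfoNav,emailNav together with a $filter on personIdExternal for the requested employee; treat the exact resource path and query options as illustrative and adapt them to your setup.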
The Set Participant Properties step is a Groovy script that extracts the employee’s name and email address from the SuccessFactors response:
import com.sap.gateway.ip.core.customdev.util.Message

def Message processData(Message message) {
    // Parse the SuccessFactors response body as XML
    def body = message.getBody(java.lang.String) as String
    def parseXML = new XmlParser().parseText(body)

    // Extract the employee's first name and email address
    // from the expanded navigations
    String name = parseXML.PerPerson.personalInfoNav.PerPersonal.firstName.text()
    String email = parseXML.PerPerson.emailNav.PerEmail.emailAddress.text()

    // Store them as exchange properties for use later in the iFlow
    message.setProperty("name", name)
    message.setProperty("email", email)
    return message
}
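To make the parsing logic concrete, the script assumes the message body is XML whose root contains a PerPerson element with the expanded navigations, roughly of the following shape. The root element name and the values here are made up for illustration; only the nested paths PerPerson > personalInfoNav > PerPersonal > firstName and PerPerson > emailNav > PerEmail > emailAddress matter to the script.

<PerPersonCollection>
  <PerPerson>
    <personIdExternal>1234</personIdExternal>
    <personalInfoNav>
      <PerPersonal>
        <firstName>John</firstName>
      </PerPersonal>
    </personalInfoNav>
    <emailNav>
      <PerEmail>
        <emailAddress>john.doe@example.com</emailAddress>
      </PerEmail>
    </emailNav>
  </PerPerson>
</PerPersonCollection>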
Deploy the iFlow once the setup is done. After deployment, copy the endpoint URL of your iFlow; we will need it when we create our bot.
2. Create bot using SAP CAI to initiate replication process
Next, let us create a bot in SAP Conversational AI as shown in the images below. Create a new bot and give it any name you like. Now it is time to define our intent and entities.
Under the Entities tab, create a Free entity named Employeeid.
Then go to the Intents tab and create a new intent named replicateemployee, with training expressions for the replication request, as shown below.
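For illustration, the training expressions could look something like the ones below, with the number in each expression tagged as the Employeeid entity (the IDs are made up):
- Replicate employee 1234
- Replicate employee 100021
- Please replicate employee 82561
One way to wire everything together is a skill that triggers on the replicateemployee intent, makes Employeeid a required entity, and in its actions calls the iFlow endpoint URL we copied earlier, passing along the captured employee ID.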
3. Create a simple UI5 app and embed chatbot with Speech-To-Text functionality.
In your UI5 app, add the following three files:
- webclient.js
- webclientBridge.js
- webclientBridgeImpl.js
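webclient.js loads the SAP CAI Web Client into the page. The snippet below is a minimal sketch that injects the standard Web Client bootstrap script; <your-channel-id> and <your-token> are placeholders for the values from your bot's Web Client channel configuration.

// webclient.js - injects the SAP CAI Web Client bootstrap script.
// Replace the placeholders with the channel ID and token from your bot's
// Web Client channel; load this file after the two bridge files below.
const caiScript = document.createElement('script')
caiScript.id = 'cai-webclient-custom'
caiScript.src = 'https://cdn.cai.tools.sap/webclient/bootstrap.js'
caiScript.setAttribute('data-channel-id', '<your-channel-id>')
caiScript.setAttribute('data-token', '<your-token>')
document.body.appendChild(caiScript)

webclientBridge.js defines the bridge object the Web Client looks up on window.sapcai and forwards each Speech-To-Text call to whatever implementation is registered: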
const webclientBridge = {
  callImplMethod: async (name, ...args) => {
    console.log(name)
    if (window.webclientBridgeImpl && window.webclientBridgeImpl[name]) {
      return window.webclientBridgeImpl[name](...args)
    }
  },
  // if this function returns an object, WebClient will enable the microphone button.
  sttGetConfig: async (...args) => {
    return webclientBridge.callImplMethod('sttGetConfig', ...args)
  },
  sttStartListening: async (...args) => {
    return webclientBridge.callImplMethod('sttStartListening', ...args)
  },
  sttStopListening: async (...args) => {
    return webclientBridge.callImplMethod('sttStopListening', ...args)
  },
  sttAbort: async (...args) => {
    return webclientBridge.callImplMethod('sttAbort', ...args)
  },
  // only called if useMediaRecorder = true in sttGetConfig
  sttOnFinalAudioData: async (...args) => {
    return webclientBridge.callImplMethod('sttOnFinalAudioData', ...args)
  },
  // only called if useMediaRecorder = true in sttGetConfig
  sttOnInterimAudioData: async (...args) => {
    // send interim blob to STT service
    return webclientBridge.callImplMethod('sttOnInterimAudioData', ...args)
  },
}
window.sapcai = {
  webclientBridge,
}
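webclientBridgeImpl.js holds the actual Speech-To-Text implementation. It relies on the browser's built-in speech recognition (webkitSpeechRecognition), so Google Chrome is the safest choice here: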
// Handles working with the browser speech recognition API
class SpeechToText {
  constructor(onFinalised, onEndEvent, onAnythingSaid, language = 'en-US') {
    if (!('webkitSpeechRecognition' in window)) {
      throw new Error("This browser doesn't support speech recognition. Try Google Chrome.")
    }
    const SpeechRecognition = window.webkitSpeechRecognition
    this.recognition = new SpeechRecognition()
    // set interim results to be returned if a callback for them has been passed in
    this.recognition.interimResults = !!onAnythingSaid
    this.recognition.lang = language
    let finalTranscript = ''
    // process both interim and finalised results
    this.recognition.onresult = (event) => {
      let interimTranscript = ''
      // concatenate all the transcribed pieces together (SpeechRecognitionResult)
      for (let i = event.resultIndex; i < event.results.length; i += 1) {
        const transcriptionPiece = event.results[i][0].transcript
        // check for a finalised transcription in the cloud
        if (event.results[i].isFinal) {
          finalTranscript += transcriptionPiece
          onFinalised(finalTranscript)
          finalTranscript = ''
        } else if (this.recognition.interimResults) {
          interimTranscript += transcriptionPiece
          onAnythingSaid(interimTranscript)
        }
      }
    }
    this.recognition.onend = () => {
      onEndEvent()
    }
  }
  startListening() {
    this.recognition.start()
  }
  stopListening() {
    this.recognition.stop()
  }
}
// Contains callbacks for when results are returned
class STTSpeechAPI {
  constructor(language = 'en-US') {
    this.stt = new SpeechToText(this.onFinalResult, this.onStop, this.onInterimResult, language)
  }
  startListening() {
    this.stt.startListening()
  }
  stopListening() {
    this.stt.stopListening()
  }
  abort() {
    this.stt.recognition.abort()
    this.stt.stopListening()
  }
  onFinalResult(text) {
    const m = {
      text,
      final: true,
    }
    window.sap.cai.webclient.onSTTResult(m)
  }
  onInterimResult(text) {
    const m = {
      text,
      final: false,
    }
    window.sap.cai.webclient.onSTTResult(m)
  }
  onStop() {
    const m = {
      text: '',
      final: true,
    }
    window.sap.cai.webclient.onSTTResult(m)
  }
}
// Contains the methods SAP Conversational AI needs for handling
// chatbot UI events
let stt = null
const sttSpeech = {
  sttGetConfig: async () => {
    // returning an object enables the microphone button;
    // useMediaRecorder: false means we capture audio ourselves
    return {
      useMediaRecorder: false,
    }
  },
  sttStartListening: async (params) => {
    const [metadata] = params
    const { language } = metadata
    stt = new STTSpeechAPI(language)
    stt.startListening()
  },
  sttStopListening: () => {
    stt.stopListening()
  },
  sttAbort: () => {
    stt.abort()
  },
}
window.webclientBridgeImpl = sttSpeech
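Reference all three files from your UI5 app's index.html, loading webclientBridgeImpl.js and webclientBridge.js before webclient.js, so that window.webclientBridgeImpl and window.sapcai.webclientBridge already exist when the Web Client bootstraps.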
Open the UI5 page and you should see our bot showing ‘Chat with me’. Click on it and the chatbot should open, including a microphone icon.
Click the microphone icon and say ‘Replicate employee <employeeId>’.
Please note that the employeeId here is the personIdExternal, the unique identifier of the PerPerson entity in SuccessFactors. Make sure you provide a valid personIdExternal that exists in SuccessFactors but is not yet present in Commissions as a participant.
And there you have it, a voice-enabled chatbot that replicates information from SuccessFactors to Commissions. Thanks for reading and happy learning!
You may also want to try out IBM Watson’s Speech to Text service. Below are the references that helped me write this blog post.
https://github.com/SAPConversationalAI/WebClientDevGuide/tree/main/examples/WebClientBridge
https://blogs.sap.com/2022/03/31/how-to-implement-the-new-speech-to-text-in-chatbots/
https://answers.sap.com/questions/13631383/speech-to-text-for-the-sap-cai-web-client-using.html
https://developers.sap.com/tutorials/conversational-ai-speech-2-text-simple.html