
Day 5 | Skybuffer Community Chatbot | How to Create Multilingual “Question and Answer” Complex Scenario

Business Challenge

Companies quite frequently start with a simple “Questions and Answers” chatbot development. But as they move ahead and continuously improve the chatbot’s AI skills, they face a challenge: the “Questions and Answers” skill needs a cascade structure that captures the user’s input and, having considered it, provides a clarifying answer. Let us assume this should work for a multilingual chatbot that, for cost-saving purposes, is properly trained in English only, yet is expected to provide high-quality (not automatically translated) replies in other languages as well, simply by taking those replies from the chatbot glossary.

Solution

Skybuffer’s offering provides “Questions and Answers” (QnA) mechanisms for creating and configuring your multilingual skills, with an additional get-skill structure to capture user data or collect more information in order to provide the best answer. All you need to do is get acquainted with the previous posts about our open-source AI content developed on the SAP Conversational AI platform and learn how to use the template skills group to enrich your chatbot with a new advanced QnA skill quickly and efficiently.

By following the steps in this guide, you will be able to take a template skills group from our open-source SAP Conversational AI content and create an advanced-level QnA chatbot based on it.

Note: we are using the Skybuffer open-source community version of the chatbot to develop an advanced QnA skill on top of it. You can always get access to this bot from our Day 1 post or by navigating directly to:

Organization: https://cai.tools.sap/skybuffer-community

Chatbot: https://cai.tools.sap/skybuffer-community/skybuffer-foundation-content

We will start with the Function skills group. First of all, we’d like to point out that we are using the Skybuffer-invented “object-oriented conversational AI” (OO-CAI) implementation approach. This OO-CAI approach has been used to develop the community version of the Conversational AI content.

Consider Skills Group as a Class Entity

We use skills groups to encapsulate the set of skills that execute one business step (function). It may be either a complex QnA skill or a skill that requests data from the backend system (via a webhook or API integration). So, the skills group acts as a class entity in object-oriented programming.

Consider Skills in the Skills Group as Methods of Class

Once we have an analogue of the class, we need an analogue of the interface, so we established the rule that each skills group can have a maximum of four skills. It can have fewer, but never more than four, because each skill in the group plays a specific role in consistently encapsulating the business step. The skills group can consist of:

Trigger skill – this skill is used as the “main door” to execute a business step: it is the entry point that can be called by intent, entity or memory parameter values. We always use the -trigger postfix for these skills.

Input skill – this skill is used to capture the user’s input. We separate the user-input skill because we can process it differently: either we allow jumping out of the input skill to another skills group (for example, when we ask for the user’s name), or we capture any user input and ignore the intents/entities triggered by the utterance (for example, when we need to capture the comment for a leave request). We always use the -input postfix for these skills.

Function skill – this skill is used to execute an external function: we use it either to call a webhook to the backend system or to call the API of a cloud service. We always use the -function postfix for these skills.

Fallback skill – we also call this skill a “business fallback”; it is a “back door” by nature. We use it to return the user to the skills group (business step) in case the user – whether by chance or on purpose – leaves it. For example, if the user jumped from one skills group to another and completed that business step, we return them to the skill they requested before. In this case, as soon as SAP Conversational AI routes the user to the general fallback skill (the grey one), we execute NLP once again from the general fallback skill and use that chance to return to the skills group via the business fallback skill. We always use the -fallback postfix for these skills.
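To make the class analogy concrete, here is a minimal Python sketch. It is illustrative only: real skills are configured in the SAP Conversational AI UI, and the group name, return values and memory keys below (e.g. rt_comment) are invented for the demo. The skills group plays the role of a class, and its four role-postfixed skills play the role of methods:

```python
class LeaveRequestGroup:
    """One business step ('class'); its skills act as 'methods' (OO-CAI)."""

    def trigger(self, memory):
        # -trigger: entry point, fired by intent/entity/memory values
        memory["rt_scenario_active"] = "leaveRequest"
        return "ask-for-comment"

    def input(self, memory, utterance):
        # -input: capture the user's utterance (ignoring intents/entities)
        memory["rt_comment"] = utterance
        return "goto-function"

    def function(self, memory):
        # -function: call a backend webhook / cloud-service API (stubbed here)
        return f"leave request submitted with comment: {memory['rt_comment']}"

    def fallback(self, memory):
        # -fallback: 'business fallback' routes the user back while
        # the scenario is still active
        return memory.get("rt_scenario_active") == "leaveRequest"
```

One group, at most four roles: this is the whole “interface” contract of OO-CAI in miniature.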

Step 1: Find the zxas-function-skill-template-fallback business fallback skill of the Function skills group in your forked version of the Skybuffer Foundation Content chatbot:

Step 2: Fork this skill into your bot, rename it and add the skill to the group. How to do this was already shown in the Day 3 blog post.

Step 3: Create an Intent for your QnA skill.

Step 4: Start adapting the skills in the Function skills group. Go to the Build tab and select the -trigger skill (explained above) from the newly created skills group. Go to the Trigger section of the skill and replace the template intent with the brand-new intent you created in the previous step.

Step 5: For the Function skills groups, the trigger section contains an additional block with the rt_scenario_active memory parameter. This runtime memory parameter is used to activate a new scenario in the context of the dialogue. A key concept of the Skybuffer open-source AI content is that each skills group can be steered via a memory parameter, so it is possible to dynamically disable a skill in a chatbot channel where you would not like the skill to be executable.

Note: Please remember that the memory parameter rt_scenario_active must be unset at the logical end of the scenario.

Replace the technical name of the template scenario that is mapped to the rt_scenario_active memory parameter with the technical name of your own scenario (feel free to invent one):

Note: We use camel case for the rt_scenario_active memory parameter value to make it easier to read.

Step 6: Now your trigger condition of the -trigger skill is ready and should look as follows:

Step 7: Go to the Actions section. Switch to edit mode in the first logical block. Replace the value of the rt_return_to_function memory parameter with the name of your -trigger skill:

Step 8: Replace the name of the memory parameter rt_scenario_<scenario name> with the name of your scenario (highlighted in yellow in the picture below).

Note: Our concept places no restriction on the name of this memory parameter; however, to simplify debugging, we always try to keep the name of the rt_scenario_<scenario name> memory parameter the same as the value we saved in the rt_scenario_active memory parameter.

According to the Skybuffer development concept for the open-source AI content, the rt_scenario_<scenario name> memory parameter is activated at the beginning of the skillset and deactivated at the end of the skillset.

Note (important information about the central fallback skill): In our Skybuffer AI development methodology, we stick to two possible values for the rt_scenario_<scenario name> memory parameter: “active” and “discussed”. rt_scenario_<scenario name> is used as a trigger condition in the -trigger and -fallback (business fallback) skills of the skills group to return to the main skill from the central fallback skill (the grey-coloured fallback). Once the chatbot lands in the central fallback skill, we call the SAP Conversational AI dialogue API to re-trigger NLP and try to return to the previous active scenario that has not been processed completely or cancelled by the user. In this way, the “return to the skill” action is controlled by the rt_scenario_<scenario name> memory parameter being set to “active”.
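The “active”/“discussed” lifecycle can be sketched as follows. This is a simplified model with a plain Python dict standing in for dialogue memory, and the scenario name supportFeedback is invented for the demo:

```python
def should_return_to_scenario(memory, scenario):
    """Central-fallback check: only re-enter scenarios still marked 'active'."""
    return memory.get(f"rt_scenario_{scenario}") == "active"

# Scenario starts: the -trigger skill marks it active.
memory = {"rt_scenario_supportFeedback": "active"}
# The central (grey) fallback re-triggers NLP and may route the user back here.
print(should_return_to_scenario(memory, "supportFeedback"))   # True
# Logical end of the scenario: mark it discussed; no more returns.
memory["rt_scenario_supportFeedback"] = "discussed"
print(should_return_to_scenario(memory, "supportFeedback"))   # False
```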

Step 9: Press the Save button.

Note: the rt_return_to_function memory parameter is used in the Skybuffer Hybrid Chats solution to configure categorization (please refer to the Day 2 blog post).

Step 10: Now let us configure the next action blocks. We need to replace the [Your question in English] text in two logical blocks of the skill.

Step 10.1 (English block): In the text section, switch to Edit mode, input the text of the reply and save your entries.

Step 10.2 (Non-English block): Here we need to modify the rt_source memory parameter value.

Note: to translate more than one phrase at a time, we use special conditions and translate arrays of phrases. To translate an array, activate the rt_list_captions memory parameter before calling the /translate webhook.

The array in the rt_source memory parameter should be enclosed in square brackets [], and each phrase should be put in quotes and separated by commas.
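For example, a valid rt_source value for two phrases could look like this (the phrases themselves are invented). The bracket-quote-comma format matches ordinary JSON array syntax, which we can verify in Python:

```python
import json

# A two-phrase rt_source value: square brackets, quoted phrases, commas.
rt_source = '["How can I help you?", "Please choose one of the options."]'

# If it parses as JSON, the format is well-formed.
phrases = json.loads(rt_source)
assert phrases == ["How can I help you?", "Please choose one of the options."]
```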

Note: Customize the translation of your bot replies using the Hybrid Chats – Chatbot Vocabulary application (find more details in the Day 2 blog post).

Step 11: To display the translated array, we use a specific memory parameter, rt_message_captions. This is an array, so to show an element you need to use the structure {{memory.rt_message_captions.[n]}}, where n is the sequence number of the translated phrase.

Note: please remember that numbering in the array starts at 0.
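As an illustration of the zero-based lookup, the following sketch resolves {{memory.rt_message_captions.[n]}} placeholders against a memory dict. The tiny renderer and the German captions are our own invention for the demo; the real rendering is done by the SAP Conversational AI platform:

```python
import re

def render(template, memory):
    """Resolve {{memory.<param>.[n]}} placeholders against a memory dict."""
    def replace(match):
        param, index = match.group(1), int(match.group(2))
        return memory[param][index]
    return re.sub(r"\{\{memory\.(\w+)\.\[(\d+)\]\}\}", replace, template)

memory = {"rt_message_captions": ["Hallo!", "Wie kann ich Ihnen helfen?"]}
# Index 0 is the FIRST translated phrase.
assert render("{{memory.rt_message_captions.[0]}}", memory) == "Hallo!"
assert render("{{memory.rt_message_captions.[1]}}", memory) == "Wie kann ich Ihnen helfen?"
```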

Step 12: Let us create a skill (the second-level skills group) that will capture the user’s answer and store it in a memory variable, so that we can move to the embedded QnA layer of our advanced QnA skill.

Return to the Build tab. Find the zxas-get-template-trigger skill in your forked version of the Skybuffer Foundation Content chatbot:

Note: we use so-called “get skills”, which we usually do not call directly, to model re-usable actions in chatbot business steps, for example, “get name”, “get email”, “get comment”, etc.

Fork this skill into your bot, rename it and add the skill to the group. In this way you will follow the object-oriented conversational AI (OO-CAI) implementation approach and keep all skills encapsulated in the skills group, which simplifies development and keeps your chatbot skills better organized.

Step 13: Start adapting the just-forked get skill. Go to the Build tab and select the Triggers section of the new skill. Replace the template rt_scenario_active value with the specific scenario parameter that will be used in this skill. As we have already explained the technical purpose of the rt_scenario_active memory parameter and what we usually set as its value, we will not repeat it here; however, please keep using camel case in the memory parameter value to simplify debugging.

Step 14: Go to the Actions section. This is the place to capture the user’s input. Technically, we can capture all data types – numbers, date and time, locations – using entities; we can also capture the whole user input or set parameters according to recognized sentiments. In our case we will use sentiments, because our question invites emotional answers.

So, in the conditions section of our action, we will check that rt_sentiment is in the negative, vnegative value set (the user does not want our support), and also that the target memory parameter is absent.

As a result of the sentiment analysis, we will, for example, set the rt_assistance_level memory parameter to “low”, which we might then use in the main layer of the advanced QnA skill to provide an answer according to the user’s input captured in the get-skill.

Note: please remember to unset the rt_scenario_active memory parameter.

Step 15: Add a block with the opposite condition: sentiment is not in the negative, vnegative value set.
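Steps 14-15 can be summarized in one sketch. Note that the “high” value for the non-negative branch is our own assumption for the demo; the post only names “low”:

```python
def capture_assistance_level(memory):
    """Get-skill sentiment branching from Steps 14-15 (simplified model)."""
    if memory.get("rt_assistance_level") is None:        # target param absent
        if memory.get("rt_sentiment") in {"negative", "vnegative"}:
            memory["rt_assistance_level"] = "low"        # user declines support
        else:
            memory["rt_assistance_level"] = "high"       # assumed opposite value
    memory.pop("rt_scenario_active", None)               # unset at logical end
    return memory

state = capture_assistance_level({"rt_sentiment": "vnegative",
                                  "rt_scenario_active": "supportFeedback"})
assert state == {"rt_sentiment": "vnegative", "rt_assistance_level": "low"}
```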

Note: the rt_return_to_function memory parameter is set on the main (root) skills layer of the advanced QnA skill and is used for dynamic redirection to the business scenario skill after the user’s input is captured in the get-skill.

Step 16: We have configured the get skill (the embedded layer of the advanced QnA skill), and now we can return to the Function skills group (the root layer of our advanced QnA skill).

Go to the Build tab and select the -trigger function skill (yes, the one with the -trigger postfix at the end of the skill name).

Step 17: Go to the previously configured actions and replace the template parameters with the details of the newly created get skill.

Note: let us explain the meaning of those memory parameters according to our AI skills development concept:

rt_scenario_active – runtime memory parameter used to activate a new scenario in the dialogue. We use the same value of this memory parameter in all the skills of the skills group, so we can see which AI scenario (skills group) is currently active. That is very meaningful, for example, for mapping an AI skill to the backend API that is supposed to return a value for the skills group. Please remember that rt_scenario_active should be unset at the logical end of the scenario.

rt_skill – memory parameter for recording the current skill; it is checked in the trigger section of the -function skill (yes, the one that has -function as the postfix of its name) of the skills group. That is very useful when we deal with cascades of AI skill groups.

rt_goto_function – memory parameter for recording the target skill; it is used for dynamic redirection to the get-skill, the central fallback skill, or another business skill outside the current skills group. The main purpose of dynamic redirection between skills from different skills groups is to break the link between skills groups and avoid undesired relationship-based forking of other skills groups when we fork the one skills group we actually want forked.
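Put together, one snapshot of dialogue memory during this step might look as follows. All values here are invented for illustration; only the parameter names come from the concept above:

```python
# Hypothetical memory snapshot while the advanced QnA scenario is running.
memory = {
    # which scenario (skills group) is currently active
    "rt_scenario_active": "supportFeedback",
    # the skill recorded for the -function skill's trigger check
    "rt_skill": "support-feedback-input",
    # dynamic redirect target (a get-skill outside this skills group)
    "rt_goto_function": "get-comment-trigger",
}

# The redirect target is a plain value, not a hard link between skills groups,
# which is what keeps the groups independently forkable.
assert memory["rt_goto_function"] == "get-comment-trigger"
```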

Step 18: Repeat the same activities for the non-English action block of the skill.

Note: You can add as many embedded QnA layers (get-skill skills groups) as your advanced QnA skill requires: simply create new get-skill skills groups to collect specific user inputs. We also recommend using our standard open-source AI content of get skills to collect general user data like name, phone number, email, etc. You can locate the foundation get skills we provide in the open-source version of our AI content by skills group name, for example.

Step 19: As an example for this blog post on advanced QnA skill creation, let us ask the user for additional information in case they aren’t satisfied with our support.

Add a condition block where rt_assistance_level is “low” and redirect to the get comment skill provided as part of the Skybuffer open-source AI content, so that you can see the value of the content and how quickly it can be used to capture the user’s input.

Step 20: Add the same block for non-English actions (replies). Please note that you should use the /translate webhook to access Hybrid Chats – Vocabulary (a Fiori application) where you can store phrases translated into the language you want the chatbot to use with your users.

Note: you can always get access to Hybrid Chats demo tenant from our Day 1 post.

Step 21: Once we are in the -input skill (one that has -input as a postfix in the skill’s name) to capture the user’s input, we need to think about the redirection to the -function skill.

(1) Check that the parameters needed for Function execution are present.

(2) Set:

rt_scenario_active as “<current business scenario id>”

rt_skill as “<current skill ID>” for check in -trigger of -function skill

(3) Redirect to the -function skill
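The three-part checklist above can be sketched as one routing function, with a plain dict standing in for dialogue memory and the redirect modeled as a return value. The parameter names in the example call are invented:

```python
def route_from_input_skill(memory, required_params, scenario_id, skill_id):
    """Input-skill exit logic: steps (1)-(3) from the checklist above."""
    # (1) check that the parameters needed for Function execution are present
    if not all(param in memory for param in required_params):
        return None                       # keep waiting in the -input skill
    # (2) set the routing parameters checked by the -function trigger
    memory["rt_scenario_active"] = scenario_id
    memory["rt_skill"] = skill_id
    # (3) redirect to the -function skill
    return "-function"

memory = {"rt_comment": "please call me back"}
assert route_from_input_skill(memory, ["rt_comment"], "supportFeedback",
                              "support-feedback-input") == "-function"
# A missing required parameter keeps the dialogue in the -input skill.
assert route_from_input_skill({}, ["rt_comment"], "supportFeedback",
                              "support-feedback-input") is None
```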

Step 22: Modify the input skill. The purpose of the -input skill is to serve as the redirection target, from both outside and inside the skills group, wherever user input is expected. Access your -input skill, open the Triggers section and correct the condition for the rt_skill memory parameter (expected previous skill value) and the condition for the rt_scenario_active memory parameter (name of the active scenario for the get-skill).

Step 23: Go to the Actions section and navigate to the last message block. This block allows passing by the -input skill and triggering another scenario with the intent the NLP engine recognized from the user’s utterance:

Step 24: Set the current value of the rt_scenario_<scenario_name> memory parameter to “discussed”.

Step 25: Now let us modify the -function skill. Access your -function skill, open the Triggers section and correct the value of the rt_skill memory parameter (expected previous skill value) and the value of the rt_scenario_active memory parameter (name of the active scenario for the get-skill).

Step 26: Navigate to the Actions section and perform the following activities, marked in the same sequence in the picture below:

(1) Check the memory parameter which we got in the get-skill.

(2) Replace the template answer.

(3) Set the current scenario as discussed.

(4) Unset the memory parameter.

Before the activities execution:

After the activities execution:

Step 27: Configure the non-English block of the skill.

Step 28: Now repeat Steps 26-27 for the other values of the rt_assistance_level memory parameter.

Step 29: Perform the modification of the business -fallback skill. Access your new -fallback skill, go to the Trigger section and replace the value of the memory parameter _memory.rt_intent[0].slug with the name of your new intent without the @ sign and hit <ENTER>:

Your fallback skill adjustment is now completed.

Step 30: Test and make sure that your new skill is ready to provide replies in any language! And please do not forget to adjust the Hybrid Chats – Vocabulary list of chatbot replies, so that you can avoid automatic translation of the replies.

Conclusion

Now you have completed the Day 5 guidelines and you know how to build a support chatbot that can collect user data and additional information from users to provide more person-oriented, better-quality support. It takes only about 15-20 minutes to add the get and -function skills groups to the chatbot.

Generally speaking, after you go through Day 1, Day 2, Day 3, Day 4 and Day 5 and follow the guidelines, your chatbot will be able to:

  • Capture verified users’ contact details and generate new leads for you
  • Speak about the provided services
  • Seamlessly integrate an operator
  • Categorize conversations
  • Provide replies in various languages without any additional training
  • Provide customized replies that are not translated automatically
  • Capture support requests in case operators are offline or Hybrid Chats are connected to SAP Conversational AI chatbot in the operator-free mode
  • Save all conversations so that you could always review them
  • Provide information according to QnA knowledge base that is added as a set of QnA skills
  • Collect user data and provide more personalized support to your clients.

Looks like it is ready to Go Live and support your Clients, Employees or Business Partners. What would you say?

P.S. You can also find the entire list of our blog posts under the links below:

Day 1 | Skybuffer Enterprise-Level Conversational AI Content Made Public for SAP Community Development

Day 2 | Skybuffer Community Chatbot | How to Customize Your Chatbot “get-help” skill

Day 3 | Skybuffer Community Chatbot | How to Create Multilingual Question-and-Answers Scenario Fast and Easy

Day 4 | Skybuffer Community Chatbot | How to Bring Your Own Bot to Hybrid Chats

Day 5 | Skybuffer Community Chatbot | How to Create Multilingual “Question and Answer” Complex Scenario

Day 6 | Success Story | SAP Innovation Award | Cognitively Automated Customer Care
