SAP Intelligent RPA 2.0 is an updated product version that offers simple-to-use bot-building capabilities following the Low-Code/No-Code approach, paving the way to Hyperautomation.
With SAP Intelligent RPA 2.0, expert developers, citizen developers and business process experts can build bots to augment and automate human tasks across lines of business (LoBs) to save time, cut costs and reduce human error in business operations.
Before jumping into more new features, a quick reminder: the SAP Intelligent RPA Cloud Studio is the web application used to design bots that automate your applications. It is already available in SAP Intelligent RPA 1.0 (Cloud Studio documentation – SAP Intelligent RPA Trial tenant), and it will be complemented with the features covered in this blog post. At the time of writing, to participate in the SAP Intelligent RPA 2.0 BETA program, please read the last paragraph of this blog post, where Andreas explains what to do. Note that when SAP Intelligent RPA 2.0 is released, the 2.0 features will automatically appear in your tenant; the only thing you will need to do is update your agent to the latest version to run the new 2.0 features.
This is the 3rd part of “No code” with SAP Intelligent RPA 2.0 (part 1 and part 2) from the “Low Code – No code” series. In my previous blog posts on “No Code” in SAP Intelligent RPA 2.0, I explained key concepts such as automations, input and output parameters, variables, data types, conditions, loops, automation modularity, SDK activities, the Excel helper and the tester.
Now, I will explain the concepts of application screen capture, application element declaration and the design of application automations. If you want your bot to interact with applications such as SAP GUI transactions, SAP UI5 forms, websites or Windows applications, understanding these concepts is essential. The automations you design will then be executed by the Desktop Agent, which interacts with those applications.
The first step is to create an application in the Cloud Studio: identify, capture and define its screens, then identify and declare each screen's elements. The second step is to create an automation and associate activities with the screens' elements via drag and drop.
Let’s start with the relationships between these concepts. An application is composed of screens. A screen has a technology (SAP GUI, UI5, Web, UI Automation, Windows, …) and is composed of captures. A capture is composed of declared elements.
- An application can have screens based on several technologies, as we will showcase in the videos below with UIAutomation and SAPGUI.
- By default, there is one capture per screen, in which case the capture list is not displayed. All examples in this blog post showcase only one capture per screen. Multiple captures are typically used when only part of the screen changes and bot designers want to share some elements between the screen's states.
So, what really matters for now is that you remember the relationship: an application is composed of screens, which are in turn composed of captures, which are themselves composed of elements.
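To make that hierarchy concrete, here is a minimal Python sketch of the data model. The class and field names are purely illustrative and not part of the product; this is just one way to picture how the pieces nest.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """A declared UI element (input field, output field, button, ...)."""
    name: str
    criteria: dict  # e.g. {"text": ("is", "Log On")}

@dataclass
class Capture:
    """One captured state of a screen, holding its declared elements."""
    elements: list[Element] = field(default_factory=list)

@dataclass
class Screen:
    """A screen has a technology and one or more captures."""
    technology: str  # "SAPGUI", "UI5", "Web", "UIAutomation", ...
    captures: list[Capture] = field(default_factory=list)

@dataclass
class Application:
    """An application is composed of screens."""
    name: str
    screens: list[Screen] = field(default_factory=list)

# Application -> screens -> captures -> declared elements
login = Element("login_button", {"text": ("is", "Log On")})
app = Application("SAPLogon", [Screen("UIAutomation", [Capture([login])])])
print(app.screens[0].captures[0].elements[0].name)  # login_button
```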
It is also important to understand that applications, screens, captures and elements must be identified by the bot during automation execution. This identification is done through a set of criteria you define for applications, screens, captures and elements, which helps the bot recognize and automate all of them in the running application.
To help the user, the Cloud Studio performs a smart capture and provides default criteria based on the technology. Of course, these default criteria for applications, screens, captures and elements must be reviewed by the designer to make sure they are correctly set and generic enough. One simple way to make a criterion more generic is to replace its is operator with either the contains or starts operator.
For example, when defining the criteria of an application, the user should check that they are generic enough to stay valid and identify the application throughout its execution. Similarly, for a screen, the user must validate that its criteria are generic enough to support, for example, all targeted machines, including screens displayed in different languages.
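Conceptually, each criterion is a predicate evaluated against a live attribute of the application or screen. The following Python sketch mirrors the three operators mentioned above; the `matches` function itself is a hypothetical analogy, not a Cloud Studio API.

```python
def matches(value: str, operator: str, expected: str) -> bool:
    """Evaluate one recognition criterion against a live attribute value."""
    if operator == "is":        # exact match: brittle if the value varies
        return value == expected
    if operator == "contains":  # substring match: more generic
        return expected in value
    if operator == "starts":    # prefix match: tolerates changing suffixes
        return value.startswith(expected)
    raise ValueError(f"unknown operator: {operator}")

# An "is" criterion breaks as soon as the window title carries a
# version- or language-specific suffix; "starts" keeps it recognizable.
title = "SAP Logon 760"
print(matches(title, "is", "SAP Logon"))      # False: too strict
print(matches(title, "starts", "SAP Logon"))  # True: generic enough
```

This is why relaxing is to contains or starts is often enough to make a declaration valid across machines and languages.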
Lastly, the user must declare the capture's elements so the bot can interact with input fields, output fields, buttons and so on. Here again, the Cloud Studio helps project designers by providing default criteria definitions, which the designer must review and adapt. Like applications, screens and captures, elements must be uniquely identified on the page so they can be recognized during bot execution and acted upon.
Finally, to make sure the Desktop Agent will correctly recognize your application declaration and its screens, captures and elements during execution, project designers can test everything with the tester. The video below showcases this tester, as well as how to capture SAP Logon with the UIAutomation technology and declare its elements in order to automate it.
Once the project designer has declared and tested the application, screens, captures and elements, they can add an application screen to an automation in order to define how the bot interacts with the running application.
The project designer uses an application screen in an automation by associating an activity with a declared element, simply by dragging and dropping the activity onto its visual representation. For example, they can drag the Click activity and drop it onto a button to have that button automated during bot execution. The video below showcases how to automate the previously declared SAP Logon application screen's connection list and login button.
Until now, the application screen elements were as simple as input fields, output fields and buttons. But a screen can also contain more complex elements, such as a hierarchical tree. Again, the project designer can automate them simply by dragging and dropping the right activities onto them and passing the right context. For example, the video below showcases the capture of a SAP GUI technology screen as well as the automation of a hierarchical tree: the node context is retrieved with the Get Node activity and then passed to the Expand or Double Click activities.
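The pattern of resolving a node context first and then handing it to a follow-up activity can be sketched in plain Python. The real Get Node and Expand activities live in the Cloud Studio; the functions below are invented analogies to illustrate the flow of the node context.

```python
# A tree modeled as nested dicts: node name -> children.
tree = {"Favorites": {}, "SAP Menu": {"Logistics": {}, "Accounting": {}}}

def get_node(tree: dict, path: list[str]) -> dict:
    """Analogy of the Get Node activity: resolve a node context by path."""
    node = tree
    for name in path:
        node = node[name]
    return node

def expand(node: dict) -> list[str]:
    """Analogy of the Expand activity: act on a resolved node context.
    Here, "expanding" just returns the sorted child names."""
    return sorted(node)

# Get the node context first, then pass it to the follow-up activity.
ctx = get_node(tree, ["SAP Menu"])
print(expand(ctx))  # ['Accounting', 'Logistics']
```

The key point is the same as in the video: the second activity never searches the tree itself; it receives the context produced by the first one.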
Up to now, we have used only one application screen per automation. But it is possible to add as many application screens as needed and in this way cascade screen automations. Note that application screens can be combined with the previously presented SDK activities, data or controls, either inside the application screens or after them.
The video below showcases how to capture and automate a cascade of application screens, all via drag and drop.
As explained in my previous blog post, automation modularity is a key concept: it decomposes a complex automation into simpler automations, thereby reducing duplicated work, enabling reuse of automations across multiple projects and making it possible to divide a large task among several developers or project stages. Automation modularity also helps divide and conquer an application automation by associating one or several applications with an automation that is then used in a higher-level automation.
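The decomposition described above can be pictured as simple function composition. The automation and step names below are hypothetical; the point is only that each sub-automation stays small and reusable while a higher-level automation orchestrates them.

```python
# Sub-automations as small, reusable units (names are illustrative).
def log_on(connection: str) -> list[str]:
    """Lower-level automation: open a connection and log in."""
    return [f"open {connection}", "log in"]

def create_order(material: str, qty: int) -> list[str]:
    """Lower-level automation: create an order for a material."""
    return [f"create order: {qty} x {material}"]

def order_flow(connection: str, material: str, qty: int) -> list[str]:
    """Higher-level automation composed from the simpler ones."""
    return log_on(connection) + create_order(material, qty)

print(order_flow("PRD", "M-01", 2))
# ['open PRD', 'log in', 'create order: 2 x M-01']
```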
The video below showcases how to use automation modularity to automate a complete SAP GUI application flow with SAP Intelligent RPA 2.0.
I hope this third blog post on “No-Code development with SAP Intelligent RPA 2.0” has given you a good idea of how to automate applications. In the next blog post, we will review how to carry out a typical, well-known RPA project entirely in a No-Code manner.
For more information on SAP Intelligent RPA:
- Exchange knowledge:
- Learn more:
- Try SAP Intelligent RPA for Free:
- Follow us on: