
Introduction

A BRF object can be seen, placing ourselves in an artificial intelligence context, as an intelligent agent acting in an environment; building a BRF object can then be seen as building a deterministic agent. An intelligent agent is essentially composed of sensors and actuators, by means of which it perceives and interacts with the environment, and a decision logic (behavior) that processes the data received from the sensors (current or past information, if the agent is equipped with a memory) and commands the actuators. Our BRF object is activated, i.e. it perceives and interacts, whenever an event is raised (triggered). A BRF object runs within the entire SAP system, so, purely theoretically, the environment is composed of all the system’s data. The sensors of our BRF object are nothing but its context, through which it perceives the features of interest for its decision logic; the actuators are the software entities that perform all the possible actions it can take (function modules, BAdIs etc.). What is interesting is that the behavior of our agent is fully defined within the BRF object: to change its behavior it is not necessary to modify the ABAP code directly. In particular, but without loss of generality, we can say that the decision logic of an agent is based on the recognition of states of the environment (i.e. specific combinations of perceived values: current or past, if previously stored). The recognition of a state is then linked to a set of actions.
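The sense-decide-act cycle described above can be sketched outside SAP as a few lines of plain code. This is a minimal, hypothetical Python sketch (not SAP code): rules link recognized states to actions, and the first matching rule fires.

```python
def run_agent(context, rules, actions):
    """rules: ordered list of (predicate, action_names) pairs.
    The predicate reads the context (sensors); the named actions
    (actuators) are executed for the first state that is recognized."""
    for predicate, action_names in rules:
        if predicate(context):
            for name in action_names:
                actions[name](context)
            return action_names
    return []

# Toy usage: a thermostat-like agent with a single perception 'temp'.
actions = {
    "HEAT_ON":  lambda ctx: ctx.update(heater=True),
    "HEAT_OFF": lambda ctx: ctx.update(heater=False),
}
rules = [
    (lambda ctx: ctx["temp"] < 18, ["HEAT_ON"]),   # cold state recognized
    (lambda ctx: True,             ["HEAT_OFF"]),  # default state
]

ctx = {"temp": 15}
fired = run_agent(ctx, rules, actions)  # first rule matches, heater goes on
```

Changing the agent's behavior means changing only the `rules` list, which is exactly the property the BRF object gives us.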

Agent definition

Mathematically, calling:

  • P the array of perceptions
  • F the array of functions of recognition of a particular state on the basis of perceptions
  • S the array of states of the environment
  • A the array of possible actions
  • B the array of subsets Pi of perceptions used by the respective functions fi
  • Z the array of subsets Ai of actions used by the rules
  • R the array of rules

Suppose we have a number p of perceptions, a number s of states of interest (and therefore a number s of functions for the recognition of each state, a number s of subsets of P and a number s of rules) and a number a of possible actions. Our agent is completely defined by:

agent definition
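The original figure is not reproduced here; based on the definitions above, the agent can plausibly be written as the following tuple (this is a reconstruction from the surrounding text, not necessarily the author's exact notation):

```latex
\mathrm{Agent} = (P, F, S, A, B, Z, R), \quad \text{where}

\begin{aligned}
P &= (p_1,\dots,p_p) && \text{perceptions}\\
F &= (f_1,\dots,f_s), \quad f_i : P_i \to \{\text{true},\text{false}\} && \text{state-recognition functions}\\
S &= (s_1,\dots,s_s) && \text{states of interest}\\
A &= (a_1,\dots,a_a) && \text{possible actions}\\
B &= (P_1,\dots,P_s), \quad P_i \subseteq P && \text{perception subsets}\\
Z &= (A_1,\dots,A_s), \quad A_i \subseteq A && \text{action subsets}\\
R &= (r_1,\dots,r_s), \quad r_i : f_i(P_i)=\text{true} \Rightarrow \text{execute } A_i && \text{rules}
\end{aligned}
```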

Example: a Tic Tac Toe Player

To define our tic tac toe player, we must first build a model of the environment, perceptions, actions, game states and rules.

Environment

The environment will obviously be the tic tac toe game grid, in which each cell is identified by a label indicating its alphanumeric coordinates.

A1 B1 C1
A2 B2 C2
A3 B3 C3

Perceptions

Our array of perceptions will be defined by the symbol (O or X) in each box and by the symbol with which the agent is playing (my sign); the latter is needed to understand whether the game situation is advantageous or disadvantageous. Then:

array of perceptions

Synthetically denoted by:

array of perceptions

States

Thinking about which states are of interest for the definition of a strategy, and to simplify the analysis, we can divide them into:

Winning states

These are the states in which the agent can win, i.e. those where two my sign symbols are aligned (horizontally, vertically or diagonally) and the remaining cell is free. There are 24 possible states of this type. Assuming we have an operator (function) isfree(cell) which returns true if the cell is free, the states are:

3 for each column
column winning states
3 for each row
row winning states
3 for each diagonal
diagonal winning states
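For illustration, the 24 winning-state predicates can be enumerated mechanically: one per (line, free cell) pair over the 8 lines of the grid. This is a hypothetical Python sketch of the same idea, not the BRF expressions themselves (the grid is a dict mapping cell labels to symbols, and `isfree` mirrors the operator assumed above):

```python
LINES = [("A1", "A2", "A3"), ("B1", "B2", "B3"), ("C1", "C2", "C3"),  # columns
         ("A1", "B1", "C1"), ("A2", "B2", "C2"), ("A3", "B3", "C3"),  # rows
         ("A1", "B2", "C3"), ("A3", "B2", "C1")]                      # diagonals

def isfree(grid, cell):
    """True if the cell holds no symbol (the operator assumed in the text)."""
    return grid.get(cell, "") == ""

def winning_states(agent):
    """One predicate per (line, free cell) pair: the two other cells of the
    line hold the agent's symbol and the remaining cell is free.
    8 lines x 3 cells = 24 states."""
    states = []
    for line in LINES:
        for empty in line:
            filled = [c for c in line if c != empty]
            def pred(grid, filled=filled, empty=empty):
                return (all(grid.get(c) == agent for c in filled)
                        and isfree(grid, empty))
            states.append((empty, pred))
    return states

states = winning_states("X")  # 24 (target cell, predicate) pairs
```

The same enumeration scheme covers the losing states of the next section by swapping the agent's symbol for the opponent's.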

Losing states

These are the states in which two of my opponent’s symbols are aligned and the remaining cell is empty. There are 24 states of this type and they are similar to the winning ones:

losing states

Attacking states

These are the states in which it is possible to align two my sign symbols on a line whose remaining cell is empty. There are 24 of these states as well.

3 for each corner cell (A1,A3,C1,C3)
attack corner cell states
2 for each center border cell (A2,B1,C2,B3)
attack border center cells
4 for the cell B2 (grid center)
attack center grid

Other states

  • Grid is empty
  • A specific cell is free
empty grid cell free states

So, altogether:

states array

Actions

Our agent can perform a single kind of action, that is, to fill, if possible, a specific cell with its symbol. The possible actions, considering the filling of each cell as a separate action, are nine. So:

actions array

Rules

The rules define the game strategy of our agent, linking the states of the game to the possible actions. Thinking about a game strategy, we can say that a good tic tac toe player performs the first applicable action of one of the 6 groups of rules below:

GR1. I am the first to play, so I occupy the grid center

GR2. I can win, so I fill the winning cell

GR3. My opponent is going to win, so I block him by filling the right cell

GR4. The grid center is empty, so I fill it

GR5. I can attack (there is a line with one cell containing my sign and two empty cells), so I fill the cell that leads me to have two symbols aligned

GR6. Fill an empty cell

GR1 GR2 GR3
rules of group1 rules of group2 rules of group3
GR4 GR5 GR6
rules of group4 rules of group5 rules of group6

So:

rules array

Building the BRF object

First of all we start by defining the needed structures and tables:

ztrismap

where ZTRISVALUE is a data element defined over the namesake domain:

ztrisvalue

Each table line holds a snapshot of the current grid state:

  • MAPNO is the unique id of the match
  • MAPEVO is the current round (at most 9)
  • AGENT identifies the round player
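For reference, the line type just described can be mirrored by a plain data structure. This is a hypothetical Python sketch of one ZTRISMAP row (the real field names and types live in the DDIC definition shown above):

```python
from dataclasses import dataclass, field

@dataclass
class ZtrisMapRow:
    """One ZTRISMAP line: a snapshot of the grid after a move.
    Cell values are 'X', 'O' or '' (empty)."""
    mapno: int   # unique id of the match
    mapevo: int  # current round (at most 9)
    agent: str   # symbol of the round player
    cells: dict = field(default_factory=lambda: {c: "" for c in
        ("A1", "B1", "C1", "A2", "B2", "C2", "A3", "B3", "C3")})

row = ZtrisMapRow(mapno=1, mapevo=1, agent="X")
row.cells["B2"] = "X"  # the round player fills the grid center
```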

For further use, the following structure has also been defined:

ztrispoint_s

where ZTRISPOINT is a data element defined over the namesake domain:

ztrispoint

Now we run the transaction BRF and use the wizard. First of all we need a name for the new application class; I named it ZTRIS.

application class definition

Then select all the possible object types (even if we won’t use all of them).

object types

Now define the context as the table ZTRISMAP.

context definition

Define an event named PLAY:

event definition

Finish and reload the object.

object

First of all, let’s define the expressions; the idea is to exploit the state of the grid perceived in the context to define boolean expressions useful for choosing the actions to be performed. It therefore becomes essential to read the context. We proceed by defining the expressions (right click on the expressions branch and choose ‘new expression’) A1, A2, A3, B1, B2, B3, C1, C2, C3, AGENT, MAPNO, MAPEVO. Each of these expressions returns the value of its field in the context structure. These expressions are implemented by the class 0CA001 (Access To Simple Context):

simple context class

defined this way:

expression A1

Save and refresh the left tree. Once all the expressions for reading the content of each field of the context structure (our sensors) have been defined, I proceed to encode the expressions for the recognition of the previously defined states. For example, let’s define the winning state W01 using the implementation class ‘SAP FormulaInterpreter’:

formula interpreter

defined this way:

winning state w01

At this point, using the formula editor in expert mode, write the formula:

A1 = AGENT AND A1 = A2 AND IS_INITIAL(A3)

The meaning is “cell A1 contains my symbol, cell A2 contains the same symbol as A1, and cell A3 is empty”.
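The same predicate can be sketched outside BRF as a plain boolean function (illustrative Python, with a dict-based context; IS_INITIAL is mimicked by an emptiness check):

```python
def is_initial(value):
    """Mimics BRF's IS_INITIAL: true if the field holds its initial value."""
    return value in ("", None)

def w01(ctx):
    """A1 = AGENT AND A1 = A2 AND IS_INITIAL(A3)."""
    return (ctx["A1"] == ctx["AGENT"]
            and ctx["A1"] == ctx["A2"]
            and is_initial(ctx["A3"]))

w01({"A1": "X", "A2": "X", "A3": "", "AGENT": "X"})  # the winning state holds
```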

Now we’re going to define all the other previously identified states (winning, losing, etc.) in a similar way. To proceed faster, it is possible to copy previously defined expressions (using the right mouse button).

For example to define the losing state L01:

A1 <> AGENT AND NOT IS_INITIAL(A1) AND A1 = A2 AND IS_INITIAL(A3)

To express the attacking state A01:

A1 = AGENT AND IS_INITIAL(B1) AND IS_INITIAL(C1)

To express that cell A1 is empty:

IS_INITIAL(A1)

To express the state of empty grid we can make use of expressions previously defined:

FA1 AND FA2 AND FA3 AND FB1 AND FB2 AND FB3 AND FC1 AND FC2 AND FC3

Now that we have defined the states of interest, we can proceed to define the actions. Right click on the actions branch and choose “new action”. Call it FILL_A1, enter a brief description and save. Then define the type of the new action: choose 0FM001 (Function Module as Action) and save.

action type

Open a new session with the transaction SE37 and create a function module called ZTRISPLAY. The code is available at

https://www.dropbox.com/s/lday4b3fcms5xb4/ztrisplay.txt?dl=1.

The function interface must be defined exactly as shown. The function simply inserts a new line into the table ZTRISMAP with the new grid snapshot (right after the agent’s move); it uses the name of the action to decide in which cell to place the symbol. Expand the tree of our action and choose new, then add our function module (it should appear in the match-code) in the section of function modules to be executed. Then, in the parameter section, add the expressions MAPNO and AGENT (their values will be passed to our function module).

ztrisplay action

Repeat for each cell: define the action FILL_A2 and so on, using the same function module and passing the same parameters. As with the expressions, it is possible to proceed by copy, but remember to specify the parameters (they won’t be copied). Now that we have defined expressions and actions, we can proceed with the definition of the rules. To simplify and reduce the number of rules, we can define more complex expressions that group the previously defined expressions by the type of action to which they will later be linked. For the winning states we define the expressions WIN_IN_A1, WIN_IN_A2, …, WIN_IN_C3 where, for example, WIN_IN_A1 = W03 OR W12 OR W21; the expressions W03, W12 and W21 represent states in which the agent, moving in A1, would win the game. Similarly, for the losing states we define DEFEND_IN_A1, DEFEND_IN_A2, …, DEFEND_IN_C3 where, for example, DEFEND_IN_A1 = L03 OR L12 OR L21. As for the attacking expressions, we divide them into 3 sets, in decreasing order of usefulness:

  • Expressions that identify a state in which I could attack the grid center: define ATTACK_B2 = A03 OR A06 OR A09 OR A12 OR A14 OR A16 OR A18 OR A20; there are 8 attacking states in which I can fill the grid center.
  • Expressions that identify a situation in which I could attack towards the opposite side (example: my symbol in A1, cells A2 and A3 free): define these expressions as ATTACK_OS_A1, ATTACK_OS_A3, ATTACK_OS_C1, ATTACK_OS_C3 (each of which consists of one or two attacking positions of this type).
  • Expressions that identify a situation in which I could attack only an adjacent cell (example: my symbol in A2, cells A1 and A3 free): define these expressions as ATTACK_AD_A3, ATTACK_AD_A1, ATTACK_AD_C1, ATTACK_AD_B1, ATTACK_AD_A2.
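The grouping above is just a disjunction of already-defined state expressions. A hypothetical Python sketch of the same pattern (the three inline predicates stand in for W03, W12 and W21, whose real definitions live in the BRF object):

```python
def any_of(*predicates):
    """OR-combination, like WIN_IN_A1 = W03 OR W12 OR W21 in the text."""
    return lambda ctx: any(p(ctx) for p in predicates)

# Stand-ins for the three winning states that all target cell A1:
w_col  = lambda ctx: ctx["A2"] == ctx["AGENT"] == ctx["A3"] and ctx["A1"] == ""
w_row  = lambda ctx: ctx["B1"] == ctx["AGENT"] == ctx["C1"] and ctx["A1"] == ""
w_diag = lambda ctx: ctx["B2"] == ctx["AGENT"] == ctx["C3"] and ctx["A1"] == ""

win_in_a1 = any_of(w_col, w_row, w_diag)  # true if moving in A1 wins
```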

Now we are ready to define the game strategy of the agent by means of rules: click on the event PLAY and then on new; then add the rules to be processed in the desired order. As previously expressed, a good game strategy would be:

  1. I am the first to play, so I occupy the grid center
  2. I can win, so I fill the winning cell
  3. My opponent is going to win, so I block him by filling the right cell
  4. The grid center is empty, so I fill it
  5. I can attack (there is a line with one cell containing my sign and two empty cells), so I fill the cell that leads me to have two symbols aligned
  6. Fill an empty cell

Rule 5, after what was previously said about the attacking expressions, can be split this way:

5.1 I can attack the grid center

5.2 I can attack an opposite side cell

5.3 I can attack an adjacent cell

 

So, the final list of rules is:

  1. I am the first to play, so I occupy the grid center
  2. I can win, so I fill the winning cell
  3. My opponent is going to win, so I block him by filling the right cell
  4. The grid center is empty, so I fill it
  5. I can attack the grid center
  6. I can attack an opposite side cell
  7. I can attack an adjacent cell
  8. Fill an empty cell
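Condensed to plain code, the eight rules in priority order with termination after the first match look roughly like this. This is a hypothetical Python sketch, not the BRF rule set itself: rules 5-7 are collapsed into one attack scan, and the grid is a dict holding only the filled cells.

```python
CELLS = ("A1", "B1", "C1", "A2", "B2", "C2", "A3", "B3", "C3")
LINES = [("A1", "A2", "A3"), ("B1", "B2", "B3"), ("C1", "C2", "C3"),
         ("A1", "B1", "C1"), ("A2", "B2", "C2"), ("A3", "B3", "C3"),
         ("A1", "B2", "C3"), ("A3", "B2", "C1")]

def choose_move(grid, me, opp):
    """First matching rule plays the single move (termination code 1)."""
    # Rules 2 and 3: win if possible, otherwise block the opponent's win.
    # (Rule 1, the first move, is subsumed: an empty grid falls through to rule 4.)
    for symbol in (me, opp):
        for line in LINES:
            vals = [grid.get(c) for c in line]
            if vals.count(symbol) == 2 and vals.count(None) == 1:
                return line[vals.index(None)]
    # Rules 1 and 4: take the grid center if it is empty.
    if grid.get("B2") is None:
        return "B2"
    # Rules 5-7 (collapsed): a line with one of my symbols and two free cells.
    for line in LINES:
        vals = [grid.get(c) for c in line]
        if vals.count(me) == 1 and vals.count(opp) == 0:
            return line[vals.index(None)]
    # Rule 8: fill any empty cell.
    for c in CELLS:
        if grid.get(c) is None:
            return c
    return None
```

The sketch makes the priority ordering explicit: each `return` is a rule firing, which corresponds to the termination code 1 set on every BRF rule below.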

So, the first rule to add is:

first rule

We should also set the termination code to 1 (the highlighted column) for all the rules: the agent performs a single move per turn (a turn being one invocation of the event PLAY). Now we’re ready to add the winning rules.

winning rules

Right after winning rules add the defending rules.

defending rules

Next the grid center attack rule.

center grid attack rule

Next the opposite side cell attack rules.

opposite side attack rules

Next the adjacent cell attack rules.

adjacent cell attack rules

At last the single cell filling rules.

single cell attack rules

Perform a syntax check on the event; if everything’s OK, we’re ready to play against our agent. Obviously you need to write a program to test it in a game against a human opponent. The program takes advantage of the (core) classes if_controller_brf, cl_event_tcontext_simple_brf and cl_event_base_brf. The program code is available at:

https://www.dropbox.com/s/yqwfkozz1qvq0yg/zbrftris.txt?dl=1

https://www.dropbox.com/s/v575iohwesjq1og/zbrftris_top.txt?dl=1

https://www.dropbox.com/s/485ie9y85bmtz0s/zbrftris_forms.txt?dl=1

https://www.dropbox.com/s/5luux8al5f3uqk4/zbrftris_text_elements.txt?dl=1

Grouping expressions

We have defined a large number of expressions; it would be very useful to group them logically. This is possible using the program SAPLBRF_CUST. We could, for example, combine the expressions into 8 subsets: WINNING, WIN_IN, LOSING, DEFEND_IN, ATTACK, CONTEXT, ATTACK_IN and FREE. This is just a logical grouping: it does not have any structural impact on the BRF object. Assigning an expression to a group is relatively easy; you simply need to write the group name in the right field and save. The same goes for actions etc.

grouping expressions

Test

Now we are ready to face our agent: I run the program, choose my own sign and decide to play first.

program

I fill B2 cell.

fill b2

The agent replies by filling A1.

fill a1

I fill cell C1 and the agent replies by filling A3, blocking my line. I fill cell A2 to block him and the agent blocks me in C2. I fill cell B3 and the agent replies in B1. I can only fill the last cell, C3. The game is a draw. Below, the game table:

game table

Conclusions

The BRF framework can be exploited for the realization of intelligent agents operating in an environment. The agent perceives the environment in which it operates through the context of the event. How the agent interacts with the environment, and with which strategy, is completely defined within the BRF object. Therefore, to change the behavior of the agent it is not necessary to write any code: we simply define/change its strategy, namely its rules. For example, I could define an additional event and use similar expressions to define a more or less effective strategy, in order to face a player of different skill. Obviously, should you ever need new types of action or a new kind of perception, you would have to write new function modules or extend the context. Nevertheless, the strategy (i.e. the behavior of the agent) remains completely defined in our BRF object.
