In part 2 we took a quick look at containerizing the FioriFlask app with Docker, which SAP Data Hub depends on to build data science solutions in a scalable way. We also introduced OrientDB (ODB) as the database of choice for starting up quickly and scaling out in terms of data modeling and storage. Beyond the technology, the people have been instrumental in advising this build. Thanks Luca and Luigi!

In this 3rd part of the series, we are going to use ODB in a semi-structured NoSQL way to store a network of vertices that represents parts of a dialog and can return responses given any text as an input. It will be "learning" from existing dialogs and monologues taken from open source data sets, including movie lines, Q&A, and random samples such as labeled Myers-Briggs (MBTI) personality test results. A sample is included in the GitHub repository, and a portion of it is extracted during application setup.

We will continue to demonstrate the Fiori library for network graph visualization, which ODB is specifically suited for as a multi-model database with a graph structure. The demo app includes other views of data which we will explore later, but we start introducing components now to show how to build an intuitive user experience (UX) that does not require a data science or business intelligence background.

Lastly, we will show how to connect our FioriFlask app to the containerized ODB instance we set up in part 2. We will start with a review of the goals and project structure and then dive deep into a few important facets, for which I again invite ideas for improvement. We will continue in part 4 by connecting to SAP Data Hub and building the UX for more use cases, taking an approach that is not industry-centric but rather aimed at the user who is not a data scientist yet needs information from various sources.

Why would you be interested in this series?



  • You are struggling to locally develop Fiori apps

  • You don’t have connections to ABAP or HANA

  • You want to try something new and unopinionated.

  • You really like SAP Data Hub and want to see what else Python can do


What should you know to complete this series?



  • Basic Python, JavaScript, and HTML/XML

  • Familiarity with Fiori resources, specifically hana.ondemand.com.

  • Access to GitHub to clone the sample code.


What do you need to run it?



  • Any machine with Docker, Docker Compose, and 4 GB of RAM


Environment Setup


There are a variety of ways to test out the application, but I like to go the fast and cheap route: set up a cloud service to host a ready-made box and then clone the app. The tested method used DigitalOcean and a ready-made Ubuntu Docker 18.09.02 image. These run about 15 USD a month, are set up within minutes, and come with all the prerequisites including Docker and Docker Compose.

Once you have your box, SSH into it as the 'root' user with the emailed password to get to the command prompt, then set up a new non-root sudo user. To do this run the following commands:
> adduser demo
# (set a password, confirm it, and enter through the prompts)
> usermod -aG sudo demo
> su demo
> cd /home/demo
> sudo git clone https://github.com/Lscheinman/FioriFlask3.git
> cd FioriFlask3
> sudo docker-compose build

This will start a process lasting about 2 minutes, running the build steps defined in the docker-compose.yml and Dockerfile, including the containerized environment build. It should end with output like the following (your tags will be different).
Step 7/12 : RUN mkdir -p $INSTALL_PATH
---> Using cache
---> 01a7b54e104d
Step 8/12 : WORKDIR $INSTALL_PATH
---> Using cache
---> 6989d9e16454
Step 9/12 : COPY requirements.txt requirements.txt
---> Using cache
---> 8ed25fecb0dc
Step 10/12 : RUN pip install -r requirements.txt
---> Using cache
---> 8125b33f3cb4
Step 11/12 : COPY . .
---> Using cache
---> 33c26d58b6ae
Step 12/12 : CMD gunicorn -b 0.0.0.0:8000 --access-logfile - "fioriapp.app:create_app()"
---> Using cache
---> 975581bc144f
Successfully built 975581bc144f
Successfully tagged fioriapp_celery:latest

 

Lastly, run the command below to set up the application and serve it on port 8000 over HTTP. There was a fix for the application to find the right IP address for OrientDB within the Docker network, which was contributed by Remi Astier and will be described later. Thanks Remi!
> sudo docker-compose up

Following the setup commands above should result in command line output ending in something similar to what we saw in part 2. Unlike in part 2, this time when the up command executes, the FioriFlask app under the service name 'website_1' has some new steps to automate deployment. First, it waits for OrientDB to be set up and listening on ports 2480 and 2424. Then it tries each of the known Docker network addresses it could be on, since this sometimes changes and localhost doesn't work. It then starts deploying the demo data from the CSV in a separate thread so as not to hold up the rest of the setup.
orientdb_1  | +-----------------------+------+-----------------------+-----+---------+---------------+---------------+-----------------------+
orientdb_1 | |Name |Status|Databases |Conns|StartedOn|Binary |HTTP |UsedMemory |
orientdb_1 | +-----------------------+------+-----------------------+-----+---------+---------------+---------------+-----------------------+
orientdb_1 | |node1551968111025(*)(@)|ONLINE|test=ONLINE (MASTER) |0 |12:51:22 |172.19.0.2:2424|172.19.0.2:2480|361.18MB/3.83GB (9.20%)|
orientdb_1 | | | |Dialogs=ONLINE (MASTER)| | | | | |
orientdb_1 | +-----------------------+------+-----------------------+-----+---------+---------------+---------------+-----------------------+
orientdb_1 | [OHazelcastPlugin]
orientdb_1 | 2019-03-28 12:51:27:805 WARNI Authenticated clients can execute any kind of code into the server by using the following allowed languages: [sql] [OServerSideScriptInterpreter]
orientdb_1 | 2019-03-28 12:51:27:806 INFO OrientDB Studio available at http://172.19.0.2:2480/studio/index.html [OServer]
website_1 | [OrientModel_init__2019-03-28 12:51:32] localhost failed
website_1 | [OrientModel_init__2019-03-28 12:51:33] successfully connected to 172.19.0.2
website_1 | [Extractor_init_2019-03-28 12:51:33] Running diagnostics on ODB
website_1 | [OrientModel_open_db_2019-03-28 12:51:33] Dialogs opened
website_1 | [OrientModel_check_classes_2019-03-28 12:51:33] All 4 classes found
website_1 | [OrientModel_fill_index_2019-03-28 12:51:33] filling index...
website_1 | [OrientModel_fill_index_2019-03-28 12:51:33] ...of 8 vertices...
website_1 | [Extractor_init_2019-03-28 12:51:33] Following files in data

If you go to your machine's address on port 2480 you should see the same OrientDB screen we saw in part 2, but now with a Dialogs database attached. This represents the repository for our application, and you can look at the schema that was automatically set up by the FioriFlask app startup scripts.



We can use the preset login credentials (root, root) and change them another time, but let's take a look at the schema, which consists of the Monologue vertex and 2 edge classes representing how Monologues will be related to form conversations. There is also a vertex class for reports, which will be used during long-running extractions and covered in greater detail in part 4.



One edge type will be Nextline, which links the sentences of a multi-line Monologue. Let's take the examples "I like this post. It makes me happy" and "I like this post. It's funny." The two Monologues consist of 2 sentences each but start from the same single sentence. Therefore, we will have a network with 3 nodes and 3 Nextline edges, and those edges have the full monologue stored within.

The other type of edge will be Response, which is how Dialogs are built. If we extend the example above, we can make the same response back to each of the Monologues. We can respond to both with "Good for you", which will create another Monologue vertex.

This will mean a lot of edges, and we are not optimizing with lightweight edges, a nice ODB feature in which pointers are created instead of new table entries. This is possibly a point where readers can suggest a way to increase performance. One of the ways the application will respond is by selecting a random or similar edge to continue a dialog. When many edges exist between two Monologue entities, such as "No" and "Well maybe", the randomization will tend to select the most common response or next line, where most common is defined by the quantity of edges with the same or similar tags.
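To make that weighting idea concrete, here is a minimal sketch of how such a duplicate-weighted random pick could be done through pyorient. It assumes a connected client like the one built in the OrientModel class later in this post, and the property names (cont_id, content) are illustrative assumptions rather than the app's exact schema.

# Illustrative only: pick one outgoing Response at random. Because there is one row per
# Response edge, replies that occur more often are proportionally more likely to be chosen.
import random

rows = client.command(
    "SELECT in.content AS reply FROM "
    "(SELECT expand(outE('Response')) FROM Monologue WHERE cont_id = 'no')")
reply = random.choice(rows).oRecordData['reply'] if rows else None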

An example of how this structure looks in a graph from OrientDB's point of view is shown below, selecting "No" and all the incoming and outgoing sentences in which it is part of a conversation or monologue.



Of note here is the ease of use of OrientDB's interface. While it is perfect for the data scientist, we need something suited to a normal, non-technical user, hence the need to build our Fiori app.

Application Overview


To break down this problem we're going to need a few things: a client for OrientDB, a file manager, a text extractor, and an interface through which users can try it out. These components will be the goal of a new blueprint that we add to the existing SaaS started in parts 1 and 2.

This will be the first extension of the basic app we set up before. The user will click on a new tile, not for dashboards but for a chat app we'll call Dialogs. In it, the primary objective is to have a chat and update the database with new chat material as it's tested. There are 2 ways this can be done: by uploading a file, or by making single entries directly in the UX. Therefore, in the UX we need a file uploader, a text entry, buttons for processing, and views that might help the user understand what's already in the Dialogs database.



The user will click on the Dialogs tile, which brings them to 3 functionalities:

  1. Getting a response given a statement

  2. Creating a multi-line monologue

  3. Creating a dialog with a single statement and subsequent response


In all of these, the tags enable follow-on data modeling and search.



Moving to the bottom part of the application, the "Insights" section, we can see 5 different table icons, each giving a different view of relevant data. For this blog chapter we will concentrate only on the middle icon for viewing the network graph. Clicking on it prior to any activity shows the default graph, which can be copied directly from the SAPUI5 library on hana.ondemand.com and gives an example of a nicely organized graph.



The snapshot also shows how responsive Fiori is, collapsing based on screen size and providing hide buttons for momentarily unneeded functions.

To update the graph and show the interactivity between the app and OrientDB, we can start by getting a response. Testing with 'No' results in a Fiori toast message showing the response and an update to the graph, replacing the default view with our new data.



This shows all the incoming and outgoing nodes related to our entered text, No. We can continue to update the graph by entering more text in any of the 3 interactivity options in this way. The Fiori Network Graph has built-in functionality that makes it very pleasant to work with as a developer.

First we can expand the view to full screen when the number of nodes gets out of hand...



 

...search it for nodes and edges...



...or look at details of each node.



This makes the Network Graph library perfect for translating technical interfaces into business interfaces. Let's take a look at the details under the hood that enabled us to get here.

Technical Overview | Back End


The first thing we need to do is add a new blueprint, and since we're building an ODB instance of related sentences to build conversations, we'll call it Dialogs as mentioned before. Just as with the other blueprints' __init__.py files, we need to import the blueprint object from the views file where it is defined. Therefore we have a new folder within blueprints to hold all the back end logic, and we will separate the SAPUI5 code into its UX-centric location within the static folder. More on the UX after the Python review.
#fioriapp/blueprints/dialogs/__init__.py
from fioriapp.blueprints.dialogs.views import dialogs

Before, we had only URL routes, which were handled through the views.py file. We now add the Flask 'M' part of MVC with models.py. This will handle all logic that deals with ODB, while views will handle routing app-specific requests and returning ODB results.
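For completeness, the blueprint still has to be registered on the Flask app created by create_app(), which the gunicorn command from the build output refers to. Below is a minimal sketch of that registration, assuming an app factory like the one from parts 1 and 2; the exact code and the url_prefix in the repository may differ.

#fioriapp/app.py (sketch only; the repository's factory registers more blueprints and configuration)
from flask import Flask
from fioriapp.blueprints.dialogs import dialogs

def create_app():
    app = Flask(__name__, static_folder='static')
    # A /Dialogs prefix gives routes such as /Dialogs/get_response used by the UX later
    app.register_blueprint(dialogs, url_prefix='/Dialogs')
    return app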

The models.py is just one approach to handling ODB and the necessary functions to get files and transform them into a graph of conversations. The problem can be handled in a variety of ways, but in this case we will use 4 custom classes to encapsulate functionality that is common to a lot of data science solutions:

  • OrientModel: Client capable of communicating with ODB. As mentioned previously, we are using an existing library called pyorient to establish under-the-hood connectivity like passing queries and accepting results. This library only works with version 2.2.x. A follow-on project will build a driver for version 3+.

  • DataPrep: Worker with access to an app specific folder, ‘data’. This is a new folder in the application structure and is meant to hold user uploaded and out-of-the-box data sets.

  • Extractor: Handles files by cleaning text, creating unique IDs, tags, and relationships between sentence vertices we’ll call Monologues.

  • Queries: Main interface to orchestrate the other 3 classes. This is the main class imported from models.py into views.py to establish communication with the back end.


Given the new libraries we are using to handle data preparation and ODB connectivity, we need to update the requirements.txt file. We use a well-known library built around panels of data for preparation, hence the name Pandas. It is capable of much more than we will show. The latest versions of Pandas and pyorient at the time of writing are 0.24.2 and 1.5.5 respectively, so we need to update the requirements file to ensure Docker includes them in the build.
#requirements.txt
...
pandas==0.24.2
pyorient==1.5.5

For the sake of this post's length, we won't dive into the Python for each piece of functionality, but we will highlight a few useful points to understand what's going on with connecting to OrientDB.
#fioriapp/blueprints/dialogs/models.py
import pyorient  # NEW LIBRARY FOR ORIENT DB CLIENT
import os, time, string, socket  # socket is needed for the Docker host discovery shown below
import pandas as pd  # NEW LIBRARY FOR DATA CLEANSING
import json
import click
from threading import Thread
from datetime import datetime
from difflib import SequenceMatcher


def clean(content):
    """
    Utility function for returning cleaned strings into a normalized format for keys
    :param content:
    :return:
    """
    try:
        content = content.lower().translate(str.maketrans('', '', string.punctuation)).replace(" ", "")
    except Exception as e:
        click.echo('%s %s' % (get_datetime(), str(e)))
        content = None

    return content


def get_datetime():
    """
    Utility function for returning a common standard datetime
    :return:
    """
    return datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S')


We start with 2 utility functions, clean and get_datetime above, that are used frequently by all classes. We can also see threading, which will handle long-running processes like file extraction, and SequenceMatcher, used to look at the similarity between strings. These are standard Python.
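As a quick illustration of why those helpers matter, here is how clean() and SequenceMatcher could be combined to compare two statements. This is only a sketch of the idea, not a function from models.py.

# Illustrative only: score how similar two statements are after normalization with clean()
from difflib import SequenceMatcher

a = clean("I like this post. It makes me happy")   # -> 'ilikethispostitmakesmehappy'
b = clean("I like this post. It's funny")          # -> 'ilikethispostitsfunny'
similarity = SequenceMatcher(None, a, b).ratio()    # between 0 and 1, higher means more similar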
#fioriapp/blueprints/dialogs/models.py
...
class OrientModel():

    def __init__(self):
        """
        Set up the OrientDB specifically for graphing conversations
        Start with a work around for connecting to Dockerized ODB.
        1) Wait for ODB to setup and start with sleep
        2) Cycle through potential addresses and try connecting to each, breaking when one works
        """
        time.sleep(10)
        # Line improved by Remi Astier to get away from hard coded values
        possible_hosts = socket.gethostbyname_ex(socket.gethostname())[-1]
        if len(possible_hosts) > 0:
            hostname = possible_hosts[0][:possible_hosts[0].rfind('.')]
            i = 1
            while i < 10:
                possible_hosts.append("%s.%d" % (hostname, i))
                i += 1
        possible_hosts.append("localhost")
        self.user = "root"
        self.pswd = "root"
        self.stderr = False
        self.db_name = "Dialogs"

        for h in possible_hosts:
            self.client = pyorient.OrientDB("%s" % h, 2424)
            try:
                self.session_id = self.client.connect(self.user, self.pswd)
                click.echo('[OrientModel_init__%s] successfully connected to %s' % (get_datetime(), h))
                break
            except:
                click.echo('[OrientModel_init__%s] %s failed' % (get_datetime(), h))

The standard Docker image we set up in part 2 for OrientDB uses root and root for connecting a standard client, but the tricky part is getting our Flask app to find the appropriate channel. I was unable to find any solutions to the socket issues when using the standard "localhost", but this work-around solved it and was then improved by Remi so as not to be hard coded. However, I noticed that in some instances possible_hosts was a single address, but at least it provides the host name. From this we take off the last octet and build a new set of candidate addresses (the loop above tries .1 through .9), as that range is where I've seen the ODB IP address assigned.
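As an alternative worth noting (untested here), Docker Compose networks resolve service names via DNS, so if the ODB service is named orientdb in docker-compose.yml, as the orientdb_1 log prefix suggests, connecting by service name might avoid scanning addresses entirely. This is only a hedged suggestion, not the repository's implementation.

# Possible alternative: connect by Compose service name instead of scanning addresses
client = pyorient.OrientDB("orientdb", 2424)   # assumes the service is called "orientdb"
session_id = client.connect("root", "root")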

Initializing the schema of the database is achieved with the following function and is only called if not all of the models are found at startup. This is checked with a helper called check_classes().
#fioriapp/blueprints/dialogs/models.py
...
class OrientModel():
    ...

    def initialize_db(self):
        """
        Build the schema in OrientDB using the models established in __init__
        1) Create the DB if it hasn't been created
        2) Open it if it is not already
        3) Cycle through the model configuration
        4) Use a rule that if 'id' is part of the model, then it should have an index
        :return:
        """
        click.echo('[OrientModel_initialize_db_%s] Starting process...' % (get_datetime()))
        if self.checks['created'] == False:
            self.create_db()
        if self.checks['open_db'] == False:
            self.open_db()
        sql = ""
        for m in self.models:
            sql = sql + "create class %s extends %s;\n" % (m, self.models[m]['class'])
            for k in self.models[m].keys():
                if k != 'class':
                    sql = sql + "create property %s.%s %s;\n" % (m, k, self.models[m][k])
                    if 'id' in str(k):
                        sql = sql + "create index %s_%s on %s (%s) UNIQUE ;\n" % (m, k, m, k)

        sql = sql + "create sequence idseq type ordered;"
        click.echo('[OrientModel_initialize_db_%s]'
                   ' Initializing db with following batch statement'
                   '\n*************** SQL ***************\n'
                   '%s\n*************** SQL ***************\n' % (get_datetime(), sql))
        self.checks['initialized'] = True

The function prints the generated SQL to the startup log, which you should see in the Docker command line output from before. This makes it easy to adjust the model programmatically. It can of course also be updated in the ODB schema interface shown earlier, but we want to automate deployment, so this approach enables scaling out for new use cases.
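The self.models configuration that the loop walks over is not shown in this post. Below is a hedged sketch of what such a configuration could look like for the schema described earlier; the class and property names are assumptions, not the repository's exact dictionary.

# Sketch of a model configuration as it might be set in __init__ (names are assumptions)
self.models = {
    'Monologue': {'class': 'V', 'cont_id': 'string', 'content': 'string',
                  'tags': 'string', 'create_date': 'datetime'},
    'Nextline':  {'class': 'E', 'tags': 'string'},
    'Response':  {'class': 'E', 'tags': 'string'},
    'Report':    {'class': 'V', 'status': 'string', 'create_date': 'datetime'}
}
# For Monologue the loop above would then emit lines such as:
#   create class Monologue extends V;
#   create property Monologue.cont_id string;
#   create index Monologue_cont_id on Monologue (cont_id) UNIQUE ;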

All other functionality in the OrientModel class consists of support functions for Create, Read, Update, and Delete (CRUD) actions. The next step up from the ODB level where we persist data is the back end file manager, a simple class that handles operating-system-level interactions, namely figuring out what files are in its directory and keeping that information on hand for other functions. This is the DataPrep class.
#fioriapp/blueprints/dialogs/models.py
...
class DataPrep():

    def __init__(self):
        """
        Class to deal with the application's back end folder structure. It knows to find the data and upload paths
        which will be used to orchestrate interactions between extractions and database transactions.
        """
        self.path = os.getcwd()
        self.data = os.path.join(self.path, "data")
        self.upload = os.path.join(self.data, "upload")
        self.acceptable_files = ['csv', 'txt', 'xls', 'xlsx']
        self.files = []

    def get_folders(self):
        for f in os.listdir(self.data):
            if os.path.isdir(os.path.join(self.data, f)):
                for sub1 in os.listdir(os.path.join(self.data, f)):
                    if os.path.isdir(os.path.join(self.data, f, sub1)):
                        for sub2 in os.listdir(os.path.join(self.data, f, sub1)):
                            if os.path.isfile(os.path.join(self.data, f, sub1, sub2)):
                                self.files.append(os.path.join(self.data, f, sub1, sub2))
                    elif os.path.isfile(os.path.join(self.data, f, sub1)):
                        self.files.append(os.path.join(self.data, f, sub1))
            elif os.path.isfile(os.path.join(self.data, f)):
                self.files.append(os.path.join(self.data, f))

The main functionality is a nested loop that searches 2 levels deep in any identified folder, from the application parent directory down. We also define here what the acceptable file types are, which is then inherited by the views and prevents, from a back end perspective, any other files from entering the system.
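As a design note, the same scan could also be written with os.walk, which handles arbitrary depth. This is only an alternative sketch, not the repository's implementation.

# Alternative sketch using os.walk instead of the explicit two-level loop
def get_folders(self):
    for root, _dirs, filenames in os.walk(self.data):
        for name in filenames:
            self.files.append(os.path.join(root, name))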

Next up, we have a middle-level class to handle the majority of the tasks involved in extracting data from files and input from the UX. We will call this the Extractor, and it creates its own instances of an OrientModel and a DataPrep object.
#fioriapp/blueprints/dialogs/models.py
...
class Extractor():

    def __init__(self):

        self.odb = OrientModel()
        click.echo('[Extractor_init_%s] Running diagnostics on ODB' % (get_datetime()))
        self.odb.run_diagnostics()
        if self.odb.checks['created'] == False:
            self.odb.create_db()
        if self.odb.checks['open_db'] == False:
            self.odb.open_db()
        if self.odb.checks['initialized'] == False:
            self.odb.initialize_db()
        self.dp = DataPrep()
        self.dp.get_folders()
        self.report_every = 100
        self.last_report_dtg = 0
        self.last_lap = 0
        # Set up to look at the headers of files and determine the mapping to a common Dialog extraction pattern
        self.acceptable_headers = (
            {'content': ['posts', 'text'],
             'tags': ['type'],
             'd_to': ['to'],
             'd_from': ['from'],
             'd_id': ['dialogueID']
             }
        )
        click.echo('[Extractor_init_%s] Following files in data' % (get_datetime()))
        for f in self.dp.list_files()['files']:
            click.echo('\t\t%s' % f)
        if self.odb.checks['demo_data'] == False:
            self.odb.checks['demo_data'] = self.set_demo_data()

It first sets up the ODB client through a series of checks and then gathers any necessary data. Of note here is acceptable_headers. This is just a first pass at handling multiple file types; in this case it maps the MBTI template of labeled conversations to what the Extractor understands as dialog entities and their relationships. It can be enhanced by testing on other file types, such as the Q&A data also used here, to determine the appropriate mapping. In this case, we have either a simple monologue or, in the more complex case, a to and from identified.
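To illustrate how such a mapping could be applied, here is a sketch of matching a file's columns against acceptable_headers with pandas. The file name is hypothetical, and this is not necessarily how the Extractor's own code does it.

# Illustrative only: map a CSV's columns to the extractor's dialog fields
df = pd.read_csv(os.path.join(self.dp.data, 'mbti_sample.csv'))   # hypothetical file name
mapping = {}
for field, aliases in self.acceptable_headers.items():
    for col in df.columns:
        if col in aliases:
            mapping[field] = col
# e.g. for the MBTI set this would yield {'content': 'posts', 'tags': 'type'}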

Lastly in models.py we have the Queries class, the high-level abstraction that views will interact with. Really, this is all just for readability and reusability, and the breakdown of the solution can be approached in a variety of ways. In this case, we have abstracted all of the models.py functionality into this class, as it wraps the middle Extractor layer, which in turn holds the OrientModel and DataPrep instances.
#fioriapp/blueprints/dialogs/models.py
...
class Queries:

    def __init__(self):
        self.ex = Extractor()

    def create_duo(self, **kwargs):
        """
        Create a conversation between a from and to entity. The conversation is a simple exchange but each entity can
        have multiple lines which is why we have from_lines and to_lines in which a connection is made between the last
        thing said by the from entity and the first thing said by the to entity.
        :param kwargs:
        :return:
        """
        data = {'tags': kwargs['tags']}
        from_lines = self.ex.ex_segs_from_lines(data, kwargs['nfrom'], False)
        from_id = from_lines[-1]
        to_lines = self.ex.ex_segs_from_lines(data, kwargs['nto'], False)
        to_id = clean(to_lines[0])

        if from_id not in self.ex.odb.cache:
            try:
                self.ex.odb.create_content_node(content=kwargs['nfrom'], tags=kwargs['tags'])
            except Exception as e:
                if 'RecordDuplicatedException' in str(e):
                    pass
                else:
                    click.echo('%s UNKNOWN ERROR in create_duo %s' % (get_datetime(), str(e)))
            self.ex.odb.cache.append(from_id)
        if to_id not in self.ex.odb.cache:
            try:
                self.ex.odb.create_content_node(content=kwargs['nto'], tags=kwargs['tags'])
            except Exception as e:
                if 'RecordDuplicatedException' in str(e):
                    pass
                else:
                    click.echo('%s UNKNOWN ERROR in create_duo %s' % (get_datetime(), str(e)))
            self.ex.odb.cache.append(to_id)

        self.ex.odb.create_edge(rtype='Response', nfrom=from_id, nto=to_id, tags=kwargs['tags'])

        return {
            'cont_id': list(set(from_lines + to_lines)),
            'message': 'Dialog created from %s to %s' % (from_id, to_id)
        }

The create_duo function is an example of how this high-level class helps with reusability. We can model similar interactions between views and the back end components with these template functions; the same steps, with slight variations, are used for getting a response or creating a monologue, and all use different aspects of the Extractor class. The main point to carry into views.py is the return statement.
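The CRUD helpers called along the way, create_content_node and create_edge, are not shown in this post. As a purely hypothetical sketch (the real methods in the repository will differ), create_content_node could boil down to a single OrientDB command; the unique index created by initialize_db on the id property is what would raise the RecordDuplicatedException that create_duo tolerates.

# Hypothetical sketch of a CRUD helper on OrientModel; property names are assumptions
def create_content_node(self, **kwargs):
    cont_id = clean(kwargs['content'])
    self.client.command(
        "CREATE VERTEX Monologue SET cont_id = '%s', content = '%s', tags = '%s', create_date = sysdate()"
        % (cont_id, kwargs['content'].replace("'", ''), kwargs['tags']))
    return cont_id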

Back in the views, further handling may be done depending on the contents of that return template, and one of the main keys is cont_id, the content identifier. The application simply normalizes a statement into lower case with no punctuation or spaces. Like many functions here, there are other options, but this provides a simple form of indexing that ties all the way back to the ODB.
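A quick worked example of that normalization, using the clean() utility shown earlier:

clean("I like this post. It makes me happy")    # -> 'ilikethispostitmakesmehappy'
clean("I LIKE this post! It makes me happy.")   # -> 'ilikethispostitmakesmehappy' (same key)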

Moving to the views file, we can take a look at an example of this, such as creating the graph and formatting it to the SAPUI5 standard. Here we can see in the comments an example of what is returned by the models.py Queries class and then the graph dictionary format required to render in the UX. We can also see the Q.ex.odb.get_node(cont_id=r) command, an example of how the abstraction in models.py helps us here. If we are going to be developing with UX specialists, we can show them how some of their data is prepared without exposing all the complexities in models.py. Again, an approach, not THE method.
#fioriapp/blueprints/dialogs/views.py
...
def layout_graph(data):
    """
    1) Prepare the data list for the graph as a JSON based on data received from the Queries model class in the form:
        'a_content': response['d'][0].oRecordData['a_content'],
        'a_pid': response['d'][0].oRecordData['a_pid'],
        'a_tags': response['d'][0].oRecordData['a_tags'],
        'a_create_date': response['d'][0].oRecordData['a_create_date'],
        'v_in': [],
        'v_out': []
    2) Add some additional metrics to provide additional testing opportunity such as random lat/long, string length
    3) Create a report which summarizes the data
    :param data:
    :return:
    """

    graph = {
        "nodes": [],
        "lines": [],
        "keys": [],
        "report": {'total_nodes': 0,
                   'avg_str_len': 0,
                   'tot_str_len': 0}
    }

    for r in data:
        nodes = Q.ex.odb.get_node(cont_id=r)
        if r not in graph['keys']:

            graph['keys'].append(r)
            graph['nodes'].append({
                'key': nodes['a_pid'],
                'title': str(nodes['a_content'])[:10],
                'icon': "sap-icon://message-popup",
                'attributes': [
                    {"label": "Content",
                     "value": nodes['a_content']},
                    {"label": "ID",
                     "value": nodes['a_pid']},
                    {"label": "Key",
                     "value": r},
                    {"label": "Tags",
                     "value": nodes['a_tags']},
                    {"label": "Created on",
                     "value": nodes['a_create_date']},
                    {"label": "Length",
                     "value": len(nodes['a_content'])},
                    {"label": "Geo_lat",
                     "value": get_random_lat()},
                    {"label": "Geo_lon",
                     "value": get_random_lon()}
                ]
            })
            graph['report']['total_nodes'] += 1
            graph['report']['tot_str_len'] += len(nodes['a_content'])

        for rel in nodes['v_in']:
            graph['lines'].append({
                'from': rel['pid'],
                'to': nodes['a_pid']
            })
            if rel['cont_id'] not in graph['keys']:
                graph['report']['total_nodes'] += 1
                graph['nodes'].append({
                    'key': rel['pid'],
                    'title': str(rel['content'])[:10],
                    'icon': "sap-icon://message-popup",
                    'attributes': [
                        {"label": "Content",
                         "value": rel['content']},
                        {"label": "ID",
                         "value": rel['pid']},
                        {"label": "Key",
                         "value": rel['cont_id']},
                        {"label": "Tags",
                         "value": rel['tags']},
                        {"label": "Created on",
                         "value": rel['create_date']},
                        {"label": "Length",
                         "value": len(rel['content'])},
                        {"label": "Geo_lat",
                         "value": get_random_lat()},
                        {"label": "Geo_lon",
                         "value": get_random_lon()}
                    ]
                })
                graph['keys'].append(rel['cont_id'])
                graph['report']['tot_str_len'] += len(rel['content'])
        for rel in nodes['v_out']:
            graph['lines'].append({
                'from': nodes['a_pid'],
                'to': rel['pid']
            })
            if rel['cont_id'] not in graph['keys']:
                graph['report']['total_nodes'] += 1
                graph['nodes'].append({
                    'key': rel['pid'],
                    'title': str(rel['content'])[:10],
                    'icon': "sap-icon://message-popup",
                    'attributes': [
                        {"label": "Content",
                         "value": rel['content']},
                        {"label": "ID",
                         "value": rel['pid']},
                        {"label": "Key",
                         "value": rel['cont_id']},
                        {"label": "Tags",
                         "value": rel['tags']},
                        {"label": "Created on",
                         "value": rel['create_date']},
                        {"label": "Length",
                         "value": len(rel['content'])},
                        {"label": "Geo_lat",
                         "value": get_random_lat()},
                        {"label": "Geo_lon",
                         "value": get_random_lon()}
                    ]
                })
                graph['keys'].append(rel['cont_id'])
                graph['report']['tot_str_len'] += len(rel['content'])

    graph['report']['avg_str_len'] = graph['report']['tot_str_len'] / graph['report']['total_nodes']

    return graph

The graph is written into a JSON with labels that should be familiar from the screenshots. It also creates a report which will give us some metrics to work with in other charts in future developments and provide more exposure to the Fiori libraries. The same goes for the random lat and lon values, coordinates we can use in a geographic map in one of those future developments. Given that, let's look at some of the changes we made to the Fiori code.
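For reference, here is a trimmed, made-up example of the dictionary layout_graph returns (serialized to JSON for the UX); the controller functions in the next section consume the nodes, lines, and keys arrays. A real node carries the full attribute list shown above.

# Trimmed, illustrative payload only
{
    "nodes": [
        {"key": 12, "title": "No", "icon": "sap-icon://message-popup",
         "attributes": [{"label": "Content", "value": "No"}, {"label": "Key", "value": "no"}]},
        {"key": 15, "title": "Well maybe", "icon": "sap-icon://message-popup",
         "attributes": [{"label": "Content", "value": "Well maybe"}, {"label": "Key", "value": "wellmaybe"}]}
    ],
    "lines": [{"from": 12, "to": 15}],
    "keys": ["no", "wellmaybe"],
    "report": {"total_nodes": 2, "avg_str_len": 6.0, "tot_str_len": 12}
}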

Technical Overview | Front End


In manifest.json we add a new pattern with the name Dialogs and then tie to the respective object in targets:
"routes": [
{
"pattern": "",
"name": "home",
"target": ["home"]
},
{
"pattern": "FlexibleColumnLayout",
"name": "DashboardAnalytics",
"target": ["FlexibleColumnLayout"]
},
{
"pattern": "FlexibleColumnLayout",
"name": "FlexibleColumnLayout",
"target": ["FlexibleColumnLayout"]
},
{
"pattern": "Dialogs",
"name": "Dialogs",
"target": ["Dialogs"]
}
],
"targets": {
"home": {
"viewName": "Home",
"viewId": "home",
"viewLevel": 1,
"title": "{i18n>title}"
},
"FlexibleColumnLayout": {
"viewType": "XML",
"transition": "slide",
"clearControlAggregation": false,
"viewName": "FlexibleColumnLayout"
},
"Dialogs": {
"viewType": "XML",
"transition": "slide",
"clearControlAggregation": false,
"viewName": "Dialogs"
}

Now we'll add a new view within the top level of the views folder. We will keep all the Fiori library elements within the static folder of the application structure to avoid separating useful components into blueprint-specific routes. This lets UX developers take the code and test it on its own, while the back end developers concentrate on their own folder area. Maybe this doesn't make sense to some, but it is what we'll do here as we are focusing on back end and front end code in a single stack.



We can see there is some division of the front-end application code, similar to how we divided back end functionality into different blueprints. Here we are using the Fragments concept and naming it the same as the blueprint in Python. It will contain all the elements needed for this application that is accessible from the launchpad. In this case we will only concentrate on the NetworkGraph fragment; however, it, like all other fragments, is called by Dialogs.view, which is in turn controlled by Dialogs.controller.js. So, we will look at that first.
//fioriapp/static/controller/Dialogs.controller.js
...
onInit: function() {
    this.oModelSettings = new JSONModel({
        maxIterations: 200,
        maxTime: 500,
        initialTemperature: 200,
        coolDownStep: 1
    });
    this.getView().setModel(this.oModelSettings, "settings");
    this.getView().setModel(sap.ui.getCore().getModel("DialogsModel"), "DialogsModel");
    // Initialize the Graph network
    this.oGraph = this.byId("graph");
    this.oGraph._fZoomLevel = 0.75;
    this.demoGraph();
},
...

getResponse: function(sType) {
    MessageToast.show("Getting a response");
    sap.ui.core.BusyIndicator.show(0);
    var oThis = this;
    var oData = {
        'rtype': oThis.byId('Dialog.get_response.rtype').getSelectedItem().getKey(),
        'phrase': oThis.byId('Dialog.get_response.input').getValue(),
        'rel_text': oThis.byId('Dialog.get_response.tags').getValue()
    };

    jQuery.ajax({
        url: "/Dialogs/get_response",
        type: "POST",
        dataType: "json",
        async: true,
        data: oData,
        success: function(response) {
            MessageToast.show(response.data.message);
            oThis.makeGraph(response.graph);
            sap.ui.core.BusyIndicator.hide();
        },
        error: function(response) {
            console.log(response);
        }
    });

    console.log(sType);
},

updateGraph: function(oData, curModel) {

    for (var i = 0; i < oData.nodes.length; i++) {
        if (!((curModel.keys.indexOf(oData.nodes[i].attributes[2].value) > -1))) {
            curModel.oData.nodes.push(oData.nodes[i]);
            curModel.keys.push(oData.nodes[i].attributes[2].value);
        }
    }
    for (var i = 0; i < oData.lines.length; i++) {
        if (!((curModel.oData.lines.indexOf(oData.lines[i]) > -1))) {
            curModel.oData.lines.push(oData.lines[i]);
        }
    }
    var oModel = new JSONModel(curModel.oData);
    oModel['keys'] = curModel.keys;
    this.getView().setModel(oModel);
},

When the Dialogs tile is pressed, the app is initialized and the graph model is set on the container identified in the Dialogs.view fragment. Other visualization elements within the application that depend on data would also have their models initialized in a similar fashion.

We also see the getResponse function, one of the UX interactivity functions described earlier, which maps to the URL /Dialogs/get_response defined as a route in views.py. The JSON is posted to the back end, and the response is then processed in two ways: a toast message reads the content out to the screen, and a graph update adds the elements related to the returned statement and response.
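The route itself is not reproduced in this post, so here is a minimal sketch of what such a handler could look like, reusing the Q instance and layout_graph shown earlier. The route body and the Q.get_response method name are assumptions (the repository's actual handler will differ), and the /Dialogs prefix assumes the blueprint registration sketched earlier; the only grounded part is the contract the controller relies on: data.message for the toast and graph for makeGraph().

#fioriapp/blueprints/dialogs/views.py (sketch only)
from flask import request, jsonify

@dialogs.route('/get_response', methods=['POST'])
def get_response():
    phrase = request.form.get('phrase')                  # posted by the controller above
    tags = request.form.get('rel_text')
    result = Q.get_response(phrase=phrase, tags=tags)    # hypothetical Queries method name
    graph = layout_graph(result['cont_id'])              # reuse the formatter shown earlier
    return jsonify({'data': {'message': result['message']}, 'graph': graph})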

Last, we look at the file responsible for rendering the majority of the functionality we will use more of as the app progresses. This can be copied directly from the SAPUI5 library, showing the ease of developing apps with this method. Then you can change code and experiment with the results. In this case the only change was the layout algorithm, from force-directed to force-based. Again, just a preference, not the only one.
<!-- fioriapp/static/view/Fragments/Dialogs/NetworkGraph.fragment.xml -->
<core:FragmentDefinition
    xmlns="sap.suite.ui.commons.networkgraph"
    xmlns:l="sap.ui.layout"
    xmlns:core="sap.ui.core"
    xmlns:layout="sap.suite.ui.commons.networkgraph.layout"
    xmlns:m="sap.m">
    <l:FixFlex>
        <l:fixContent>
            <m:FlexBox fitContainer="true" renderType="Bare" wrap="Wrap" id="graphWrapper2">
                <m:items>
                    <Graph
                        nodes="{/nodes}"
                        lines="{/lines}"
                        groups="{/groups}"
                        id="graph">
                        <layoutData>
                            <m:FlexItemData/>
                        </layoutData>
                        <layoutAlgorithm>
                            <layout:ForceBasedLayout/>
                        </layoutAlgorithm>
                        <nodes>
                            <Node
                                height="{settings>/height}"
                                key="{key}"
                                title="{title}"
                                icon="{icon}"
                                group="{group}"
                                attributes="{path:'attributes', templateShareable:true}"
                                shape="{shape}"
                                status="{status}"
                                x="{x}"
                                y="{y}">
                                <attributes>
                                    <ElementAttribute
                                        label="{label}"
                                        value="{value}"/>
                                </attributes>
                            </Node>
                        </nodes>
                        <lines>
                            <Line
                                from="{from}"
                                to="{to}"
                                status="{status}"/>
                        </lines>
                        <groups>
                            <Group
                                key="{key}"
                                title="{title}"/>
                        </groups>
                    </Graph>
                </m:items>
            </m:FlexBox>
        </l:fixContent>
    </l:FixFlex>
</core:FragmentDefinition>

That wraps up this 3rd part of the series. Please feel free to test out the code, suggest better or simply other preferred methods, and keep a look out for part 4, when we start integrating this as a micro-service image alongside SAP Data Hub.