
Introduction

Data quality can be measured through metrics, which in turn help to identify issues and enable the performance engineer to create or modify data so that it adheres to the required quality. Data quality depends on the type of application, the tables/views used in the application, and so on. If the data quality metrics are not met, the performance measurement is compromised.

SAP applications are used by many companies. With the availability of the SAP HANA platform, the business applications developed at SAP have undergone a paradigm shift: complex operations are pushed down to the database. The rule of thumb is therefore to get the best performance by doing as much as possible in the database. Applications developed on SAP HANA use a data modeling infrastructure known as Core Data Services (CDS). With CDS views, data models are defined and consumed on the database rather than on the application server, and the application developer can use various built-in functions, extensions, and so on.

 

Performance Test Process

 

Fiori applications are built to make the most of the SAP HANA platform. In S/4HANA applications, whenever the Fiori application makes a request to retrieve information, the request hits the CDS views. An SQL query with the CDS view name in the FROM clause, along with the filters, is passed to the HANA database. The query is executed in the HANA database and the result set is returned to the Fiori UI.

To measure the performance of a Fiori application for a single user, the performance test usually starts with dry runs, after which the performance of the application is measured. The measured performance is then compared with the defined thresholds and violations are identified.

 

Data Quality

 

Data quality plays a crucial role in measuring the performance of an application. The test system in which the performance measurement is taken should have data of adequate quality. Some CDS views are used more frequently and carry a higher transaction volume than others, so there is a need to distinguish the high-volume, frequently used views. These views need to adhere to the performance standards, and a lag in response is not acceptable. A lag in response may occur due to the following factors:


  1. Filters are not pushed down correctly
  2. Join conditions or cyclic joins degrade the performance
  3. Redundant unions or joins exist in the CDS views
  4. Currency conversion is not modeled appropriately
  5. Execution of the CDS view generates many temporary tables, for example because fields are materialized with aggregation over a large set of rows

The above factors need to be considered while designing CDS views. In addition, when the system contains very little data (<1,000 rows), performance issues are often not identified. The inherent problems within a CDS view only become visible when the system holds at least the minimum amount of data implied by annotations such as service quality and size category.

 

Data Quality Metrics:

 

CDS views are categorized into ‘S’, ‘M’, ‘L’, ‘XL’ and ‘XXL’. In the CDS view, the ObjectModel.usageType.sizeCategory annotation is used to define the size category based on the volume of data the view is expected to handle.

The resource consumption on HANA is mainly driven by two factors:

  • The set of data that has to be searched through, and
  • The set of data that has to be materialized in order to compute the result set.

These metrics help to identify whether a sufficient number of rows exist in the HANA database. They are only an indicator of whether a single-user performance test can be performed at all; if this bare minimum is not met, one will not be able to unearth the defects that creep in over time as the data grows.

As a guideline, size category S corresponds to fewer than 1,000 rows, M to fewer than 10^5 rows, and L to up to 10^7 rows. A minimal check against these thresholds is sketched below.
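These thresholds can be checked directly on the HANA database. The following is a minimal SQL sketch; the table name AENR is taken from the example later in this post, and the monitoring view M_TABLES reports the current record count per table:

-- Current record count of an underlying table (table name is an example)
SELECT SCHEMA_NAME, TABLE_NAME, RECORD_COUNT
  FROM "SYS"."M_TABLES"
 WHERE TABLE_NAME = 'AENR';

If RECORD_COUNT is below the guideline value for the view's size category, test data has to be created before any measurement is taken.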

 

SQL statement to retrieve the table information:

 

A report program can be written that takes a CDS view name as user input. With the statement below, it is first determined whether the given input is a valid CDS view.

 

SELECT OBJECTNAME FROM DDLDEPENDENCY WHERE STATE = 'A' AND OBJECTTYPE = 'STOB' AND DDLNAME = '<CDS_Name>' INTO TABLE @DATA(ENTITYTAB).

 

Here, <CDS_Name> is to be replaced with the CDS view name.

For example:

 

SELECT OBJECTNAME FROM DDLDEPENDENCY WHERE STATE = 'A' AND OBJECTTYPE = 'STOB' AND DDLNAME = 'A_CHANGEMASTER' INTO TABLE @DATA(ENTITYTAB).

 

“API_CHANGEMASTER” is a whitelisted service listed in the SAP API Business Hub (https://api.sap.com/). When this OData service is invoked by a client-side application (Fiori or a custom application), it internally hits the “A_CHANGEMASTER” CDS view.

When we execute the above query, we retrieve the object name; in this case, the input is a valid and activated CDS view. Once we have the object name, we can determine the tables used by the CDS view.

If we want to retrieve all CDS views whose names start with ‘A_C’, it can be done as follows:

SELECT DISTINCT SRC~DDLNAME, TADIR~DEVCLASS AS PACKAGE, TADIR~AUTHOR AS AUTHOR
  FROM TADIR INNER JOIN DDDDLSRC AS SRC ON TADIR~OBJ_NAME = SRC~DDLNAME
  WHERE TADIR~PGMID = 'R3TR' AND TADIR~OBJECT = 'DDLS'
    AND SRC~AS4LOCAL = 'A'
    AND SRC~DDLNAME LIKE 'A_C%'
  INTO TABLE @DATA(DDLS).

IF LINES( DDLS ) > 0.
  SELECT OBJECTNAME FROM DDLDEPENDENCY FOR ALL ENTRIES IN @DDLS
    WHERE STATE = 'A' AND OBJECTTYPE = 'STOB' AND DDLNAME = @DDLS-DDLNAME
    INTO TABLE @DATA(ENTITYTAB).
ENDIF.

 

Now loop through the tables to find the number of rows present in the database. For the CDS view size category, this is the starting point for judging the quality of the data. To be more stringent, depending on the type of application, we can also check the number of distinct entries present in the table. This helps to identify whether enough data is present. If there are too few entries, the quality engineer must create data before taking the performance measurement.

IF SY-SUBRC = 0.
  " Collector that determines the tables used by each CDS view
  CREATE OBJECT LR_VISITOR TYPE CL_DD_DDL_META_NUM_COLLECTOR
    EXPORTING
      DESCEND = ABAP_TRUE.

  DATA L_NAME    TYPE STRING.
  DATA TABLENAME TYPE STRING.
  DATA COUNT     TYPE I.

  LOOP AT ENTITYTAB INTO DATA(LV_ENAME).
    L_NAME = LV_ENAME-OBJECTNAME.
    LR_VISITOR->VISITDDLSOURCE( IV_DSNAME = L_NAME ).
    DATA(LR_NUMBERMAP) = LR_VISITOR->GETNUMBERMAP( ).

    READ TABLE LR_NUMBERMAP ASSIGNING FIELD-SYMBOL(<CDS_VIEW>)
      WITH KEY ENTITY = LV_ENAME-OBJECTNAME.
    IF SY-SUBRC NE 0.
      CONTINUE.
    ENDIF.

    " Collected list of tables, declared with the same line type as TABLE_TAB
    DATA LR_TABS LIKE <CDS_VIEW>-NUMBERS-TAB_INFO-TABLE_TAB.

    " Loop over the tables used by the CDS view and count their rows
    LOOP AT <CDS_VIEW>-NUMBERS-TAB_INFO-TABLE_TAB ASSIGNING FIELD-SYMBOL(<TAB_INFO>).
      COLLECT <TAB_INFO> INTO LR_TABS.
      TABLENAME = <TAB_INFO>-TABNAME.
      SELECT COUNT(*) FROM (TABLENAME) INTO @COUNT.
    ENDLOOP.
  ENDLOOP.
ENDIF.

Here we get the number of entries present in each table. For the above example, there is only one table, AENR. Querying the number of entries in AENR using COUNT(*) returns 100 rows. This is too little, since the size category annotation is specified as ‘L’; the bare minimum number of rows required before starting the performance measurement is therefore somewhere between 10^5 and 10^7. A corresponding check, including distinct values, is sketched below.
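The same check can be run directly in SQL, including a distinct count. This is a minimal sketch; AENNR (the change number) is used as an example key field of AENR, and the schema is omitted:

-- Total and distinct entries in the underlying table (field name is an example)
SELECT COUNT(*) AS TOTAL_ROWS,
       COUNT(DISTINCT AENNR) AS DISTINCT_CHANGE_NUMBERS
  FROM AENR;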

 

Conclusion:

 

As a prerequisite before starting performance testing, one must know which CDS views are consumed by the S/4HANA application and verify whether enough data exists in the system. If not, the data must be created, for example by executing scripts. This helps to uncover defects that would not be detected if the system did not have enough data.

 

References:

https://help.sap.com/doc/f2e545608079437ab165c105649b89db/7.5.6/en-US/index.html

 

When researching data we want to find features that help us understand the information. We look for insight in areas like Machine Learning or other fields in Mathematics and Artificial Intelligence. I want to present here a tool initially coming from Mathematics that can be used for exploratory data analysis and give some geometric insight before applying more sophisticated algorithms.

The tool I want to describe is Persistent Homology, a member of a family of methods known as Topological Data Analysis [1,2]. In this post I will describe the basic methodology when facing a common data analysis scenario: clustering.

 

SOME IDEAS FROM TOPOLOGY

A space is a set of data points with no structure. The first step is to give it some structure that can help us understand the data and also make it more interesting. If we define a notion of how close the points are, we are giving structure to this space. This notion is a neighborhood, and it tells us whether two points are close. With this notion we already have important information: we now know whether our data is connected.

The neighborhoods can be whatever we want, and the data points can be numbers, words, or other types of data. These concepts and ideas are the subject of study of Topology; for us, Topology is the study of the shape of data.

We need to give some definitions, but all are very intuitive. From our point space or dataset, we define the following notion: a simplex. It is easy to visualize what we mean.

So, a 0-simplex is a point, and every point in our data is a 0-simplex. A “line” joining two points is a 1-simplex, and so on. Of course, a 4-simplex and higher analogues are difficult to visualize. We can immediately see what connectedness is: in the image, we have four connected components, a 0-simplex, a 1-simplex, a 2-simplex and a 3-simplex. If we join them with, for example, lines, we connect the dataset into one single component. Like this:

The next notion is the neighborhood. We’ll use Euclidean distance to say when our points are close, and we’ll use circles as neighborhoods. The neighborhood depends on a parameter, the radius of the circle; if we change this parameter, we change the size of the neighborhood.

Persistence is an algorithm that varies this parameter from zero to a very large value, one large enough to cover the entire set. With this maximal radius we enclose the whole dataset. The algorithm [4] can be summarized as follows:

  1. We construct a neighborhood for each point and set the parameter to zero.
  2. Increment the value of the parameter; whenever two neighborhoods intersect, draw a line between the points, forming a 1-simplex. Higher n-simplices form in the same way at each step, until the whole space is filled with lines.
  3. Describe in some way the holes our data has as we increase the parameter, and keep track of when they emerge and when they disappear. If the holes and voids persist as we move the parameter, we can say that we have found an important feature of our data.

The “some way” part is called Homology and is a field in Mathematics specialized in detecting the structure of space. The reader can refer to the bibliography for these concepts [2].
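For readers who want a slightly more formal picture, the construction in steps 1 and 2 corresponds, up to convention, to the Vietoris–Rips complex of the point cloud X at scale ε. This is only a sketch, and conventions differ on whether the threshold is the radius or the diameter of the neighborhoods:

\[ \mathrm{VR}_{\varepsilon}(X) = \{\, \sigma \subseteq X \ \text{finite} : d(x,y) \le \varepsilon \ \text{for all } x,y \in \sigma \,\} \]

Persistent homology then records, for each hole in each dimension, the interval \([\varepsilon_{\mathrm{birth}}, \varepsilon_{\mathrm{death}})\) of parameter values for which it exists; these intervals are exactly the bars in the barcode shown later in this post.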

This algorithm can be shown to detect holes and voids in datasets. One achievement we can mention is that Persistent Homology was used to detect a new subtype of breast cancer by detecting clusters in the data [3].

We will use the R language integrated with the SAP HANA database to work with these tools.

 

VEHICLE DATASET

The dataset is available at [5]. It is about road accidents and their attributes. We query in HANA only the data we need for this demo: an accident ID, the spatial coordinates, and two categorical fields, Local Highway Authority and Road Type. That is all we need to start. A sketch of such a query is shown below, followed by what the retrieved data looks like:
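A minimal sketch of such a query; the raw table name ACCIDENTS_RAW and the column names are illustrative and depend on how the CSV from [5] was imported into HANA:

-- Illustrative selection of the fields used in this demo
SELECT "ACCIDENT_INDEX"          AS "id",
       "LONGITUDE"               AS "longitude",
       "LATITUDE"                AS "latitude",
       "LOCAL_AUTHORITY_HIGHWAY" AS "local_highway_authority",
       "ROAD_TYPE"               AS "road_type"
  FROM "ACCIDENTS_RAW";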

 

 

Then we visualize this data:

 

Now we use the Topological Data Analysis (TDA) library in R to study the data, and store the information to make a visualization later.

DROP PROCEDURE "TDA";
-- procedure with R script using TDA package
CREATE PROCEDURE "TDA" (IN vehic_data "VEHIC_DATA", OUT persistence "PERSISTENCE")
LANGUAGE RLANG AS 
BEGIN
library(TDA)
persist <- function(vehic_data){

    #We point out that the columns are only the spatial coordinates
    #You can find how to construct this example in TDA package documentation

    vehic_vector <- cbind("V1" = vehic_data$longitude, "V2" = vehic_data$latitude)
    xlimit <- c(-5.5, 5.5)
    ylimit <- c(50, 60)
    by <- 0.05
    x_step <- seq(from = xlimit[1], to = xlimit[2], by = by)
    y_step <- seq(from = ylimit[1], to = ylimit[2], by = by)
    grid <- expand.grid(x_step, y_step)  # evaluation grid implied by lim and by
    diag <- gridDiag(X = vehic_vector, FUN = distFct, lim = cbind(xlimit, ylimit), by = by,
                 sublevel = FALSE, library = "Dionysus", printProgress = FALSE)
    # gridDiag returns a list; extract the persistence diagram from it
    diagram <- diag[["diagram"]]
    topology <- data.frame(cbind("dimension" = diagram[, 1], "death" = diagram[, 2], "birth" = diagram[, 3]))
    return(topology)
    }
# Use the function
persistence <- persist(vehic_data)
END;
-- call to keep results in a table
CALL "TDA" ("VEHIC_DATA", "PERSISTENCE") WITH OVERVIEW;

 

Next, we visualize the results. Here I show the results in R, using the TDA package itself, just as an example.

 

This is a barcode. The barcode shows the persistence of the topological features of our data versus the parameter “time”, which is the radius of our neighborhoods as we increase it. The red line tells us that there is a “hole”, an empty space, and we can check this in the visualization. The other lines represent connected components of the dataset, which means we have clustering. The barcode shows that we can expect 2 or 3 important clusters that persist even if the data has noise.

The ability to persist is a topological property of the data.

After this analysis, we can start the usual Machine Learning approach: K-means…

Since this data was too dense in its parameters, we have to use other settings in Topological Data Analysis to find better approximations of the persistent characteristics. Euclidean distance only helps us as a start; we can change it to a more specialized filtration of our data. But we can be sure we have a good approximation: Persistent Homology is robust against noise and small changes in the data.

We will explore some of these ideas in the next blogs and compare to the usual approaches in Machine Learning.

 

References

1. Carlsson, Gunnar; Zomorodian, Afra; Collins, Anne; Guibas, Leonidas J. (2005-12-01). “Persistence barcodes for shapes“. International Journal of Shape Modeling. 11 (02): 149–187.

2. Carlsson, Gunnar (2009-01-01). “Topology and data“. Bulletin of the American Mathematical Society. 46 (2): 255–308.

3. Nicolau M., Levine A., Carlsson G. (2010-07-23), “Topology based data analysis identifies a subgroup of breast cancer with a unique mutational profile and excellent survival“, PNAS, 108(17).

4. Otter, Nina; Porter, Mason A.; Tillmann, Ulrike; Grindrod, Peter; Harrington, Heather A. (2015-06-29). “A roadmap for the computation of persistent homology“. arXiv:1506.08903

5. https://data.gov.uk/

 

In this blog, I would like to provide a step-by-step procedure to install and configure the new SAP HANA 2.0 Cockpit to monitor and maintain multiple SAP HANA databases. SAP HANA 2.0 Cockpit is a native, web-based, centralized HANA administration tool used to perform database management and monitoring of multiple SAP HANA 2.0 and SAP HANA 1.0 SPS 12 databases.

HANA 2.0 Cockpit was introduced in SAP HANA 2.0 SPS 00 and absorbs functionalities of SAP DB Control Center.  This tool is built as an SAP HANA XS Advanced application and uses SAPUI5 user interfaces.  HANA 2.0 Cockpit is installed as a single stack but does not require a dedicated instance of SAP HANA to operate.

In the future, the SAP HANA 2.0 Cockpit will be the main and only native tool for administration and monitoring of SAP HANA databases, running both on premise and in the cloud.

Unlike the previous release, SAP HANA Cockpit 1.0, Cockpit 2.0 comes as a separate SAP HANA system. It runs on a special version of SAP HANA, express edition, with the XS Advanced runtime environment included. You can’t deploy the cockpit as an XS Advanced application on an existing SAP HANA instance, nor can you deploy XSA applications to the HANA Cockpit. For a production environment, it is recommended to install SAP HANA Cockpit on dedicated hardware. For non-production, you can deploy it on an existing SAP HANA server, as discussed in the SAP HANA Cockpit Installation and Update Guide.

 

SAP HANA 2.0 Cockpit requires a minimum of 16 GB of RAM but may need more depending on the number of systems monitored. It is important to review the installation and upgrade guide and the relevant SAP Notes for hardware and operating system requirements before installation.

The following users, with the privileges listed below, need to exist in the monitored resources (a sketch of the corresponding SQL follows after the list):

User: COCKPIT_TECH_USER (to gather information such as state, status, and other generalized KPIs)

  • Roles: sap.hana.ide.roles::SecurityAdmin, sap.hana.ide.roles::TraceViewer
  • System privileges: CATALOG READ
  • Object privileges: SELECT on _SYS_STATISTICS

User: COCKPIT_ADMIN_USER (for remote DB login to manage the resource)

  • System privileges: CATALOG READ, DATABASE ADMIN, BACKUP ADMIN
  • Object privileges: SELECT on _SYS_STATISTICS
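As a rough sketch, these users could be set up on each monitored resource with SQL along the following lines; the password is a placeholder, and repository roles are granted through the standard _SYS_REPO procedure:

-- Technical user for gathering KPIs (password is a placeholder)
CREATE USER COCKPIT_TECH_USER PASSWORD Password1 NO FORCE_FIRST_PASSWORD_CHANGE;
GRANT CATALOG READ TO COCKPIT_TECH_USER;
GRANT SELECT ON SCHEMA _SYS_STATISTICS TO COCKPIT_TECH_USER;
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('sap.hana.ide.roles::SecurityAdmin','COCKPIT_TECH_USER');
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('sap.hana.ide.roles::TraceViewer','COCKPIT_TECH_USER');

-- Administration user for remote database management
CREATE USER COCKPIT_ADMIN_USER PASSWORD Password1 NO FORCE_FIRST_PASSWORD_CHANGE;
GRANT CATALOG READ, DATABASE ADMIN, BACKUP ADMIN TO COCKPIT_ADMIN_USER;
GRANT SELECT ON SCHEMA _SYS_STATISTICS TO COCKPIT_ADMIN_USER;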

HANA 2.0 Cockpit can be installed using the hdblcm tool on the command line or via the GUI. The installation includes a special version of SAP HANA, express edition, with the XS Advanced runtime engine, which you can monitor and administer using the Cockpit. The following screenshots show some of the installation steps:

Once you have successfully installed HANA Cockpit, you need to log in to the Cockpit Manager portal to configure resource groups, register resources, and create cockpit users.

You monitor and administer registered resources using the Cockpit user portal. The user portal provides resource information in tabs for performance and monitoring KPIs, security, and other SAP HANA options such as Application Lifecycle Management and Platform Lifecycle Management.

You can download the complete document here. I hope you can follow it to install and configure HANA 2.0 Cockpit in your environment and successfully monitor and administer multiple HANA databases.

If you have any questions or comments, please post in the comments section.

 

Thank you.

Venkata Tanguturi

 

With the release of the HANA 2.0 SPS 02 database, and the fact that HANA 2.0 is mandatory for the S/4HANA 1709 release, many customers embarking on the S/4HANA journey will be faced with the task of upgrading their current HANA 1.0 database to HANA 2.0.

For HANA 2.0, there are minimum operating system and hardware requirements that need to be fulfilled. These may require OS and/or hardware upgrades, depending on your current infrastructure. Recently, I had to recommend hardware and OS upgrades to a customer who was running their HANA 1.0 DB on Intel Ivy Bridge based hardware and an old OS version; they were planning to upgrade S/4HANA 1610 to S/4HANA 1709, which requires HANA 2.0. These upgrades, if necessary, add time to the overall project, so it is recommended to plan ahead so that the minimum requirements are met before the actual database upgrade. Also, with HANA 2.0, Multitenant Database Containers (MDC) are mandatory starting with HANA 2.0 SPS 01. So, when upgrading to HANA 2.0 SPS 01 or higher, if your HANA 1.0 system is a single-container system, the upgrade will automatically migrate it to MDC.
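Before planning the upgrade, it can help to check the current database version and, on a multitenant system, the tenant databases. A minimal SQL sketch (the tenant overview is available in the system database):

-- Current database version
SELECT VERSION FROM "SYS"."M_DATABASE";

-- On a multitenant system: list the tenant databases (run in the system database)
SELECT DATABASE_NAME, ACTIVE_STATUS FROM "SYS"."M_DATABASES";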

Below, I have detailed the steps to perform the HANA 2.0 upgrade, along with the relevant SAP Notes and release information.

High level steps include:

  • Download SAP HANA 2.0 Platform Edition Software
  • Check Hardware and Operating system requirements
  • Backup database
  • Update the SAP Host Agent
  • Run hdblcm
  • Check upgrade logs
  • Update LCAPPS
  • Update HANA DB Client
  • Backup database

 

Relevant Documentation and SAP Notes:

Product Availability Matrix (PAM): SAP HANA Platform Edition 2.0

SAP HANA 2.0 – What’s New : HANA 2.0

SAP HANA Server Installation and Update Guide: Installation and Upgrade Guide

Important Release Notes: Release Notes

Note 2423367 – Multitenant database containers will become the standard and only op. mode

Note 2399995 – Hardware requirement for SAP HANA 2.0

Note 2055470 – HANA on POWER Planning and Installation Specific Central Note

Note 2380257 – SAP HANA Platform 2.0 SP00 Release Notes

 

SAP HANA 2.0 Upgrade Procedure:

HDB = SAP HANA Database

 

Please click here to download the HANA 2.0 DB upgrade document in PDF format. I hope this blog is helpful for the SAP community. If you have any questions or comments, please let us know in the comments section.

 

Thank you.

Venkata Tanguturi

Hi Guys,

I was having a challenge deleting the index on table COMC_ATTRIBUTE in the upgrade phase from Solution Manager 7.1 to 7.2.

https://archive.sap.com/discussions/message/15480643?tstart=0#15480643
https://archive.sap.com/discussions/thread/3388311
in reference to this thread https://archive.sap.com/discussions/thread/3293015 – “Cannot drop index”: “we tried tens of options like single and double quotes, but still nothing. Please advise how to drop this index.”

I used SAP Note 2276722:
Title: Upgrade fails in SHADOW_IMPORT_INC due to conflicting unique indexes on table COMC_ATTRIBUTE – SAP ASE for Business Suite
Link: https://launchpad.support.sap.com/#/notes/2276722

kind regards,
elizabeth

Posting FI accounting documents is a vital step for any company to manage its business flow. SAP provides different ways to post FI documents; one of them is using a flat file to upload and post FI documents.

Interfacing is one of the techniques used in SAP, and of course in other technologies as well, for uploading and then updating/inserting data into the database. The loaded data must pass validation checks to make sure that correct data is passed to the application and saved to the database.

There are many interfacing techniques available within the SAP environment, such as IDocs, XI/PI, BDC and LSMW. In this blog, I explain the process of uploading data using a flat file for financial postings.

 

Below are the steps to set up the flat file and the object development steps.

 

(1) For demonstration purposes, I am taking a single record with 26 fields coming into the system. Below is a screen capture of the flat file.

 

 

 

 

(2) The next step is the development of the file upload program. I leveraged an existing file upload program and changed it based on my requirements. Below is the screen capture for creating the file upload program.

 

 

(3) Run the program and see what the selection screen of the file upload program looks like. This one has an option to upload the file either from the application server or as a local flat file.

 

(4) Now let us put a breakpoint in the program to see how the file upload program handles the flat file data.

(a) Initially, the data is loaded from the flat file into the it_data1 internal table using the file upload program.

(b) Data within the internal table, loaded from the flat file.

(5) Now let’s move to the next step, where we arrange the data in a separate internal table it_data2 for further processing by the interface. I have used the SPLIT statement to split each record into the desired fields, as shown below.

 

 

 

 

(6) The data is set up in the second internal table for further processing, as shown in the above screen capture.

 

(7) The data setup for G/L account, currency amount and tax in their respective tables is shown in the screen capture below.

 

 

(8) Now we call the BAPI BAPI_ACC_DOCUMENT_CHECK to see whether there are any errors in the posting we are about to make.

 

 

As you can see, the lt_return table has no entries, indicating there are no errors in the document.

Now let’s move to the function module BAPI_ACC_DOCUMENT_POST to post the financial entries.

 

 

The return table contains a success message; the document is successfully posted.

 

Thanks for reading.

 

References:

file upload program :

https://wiki.scn.sap.com/wiki/display/ABAP/F4+Help+for+Application+Server+and+GUI+Files+using+CL_RSAN_UT_FILES

 

 

 

 

What is SAP HANA Automated Predictive Library (APL)?

SAP HANA APL is an Application Function Library (AFL) which lets you use the data mining capabilities of the SAP Predictive Analytics automated analytics engine on your customer datasets stored in SAP HANA.

The APL is:

  • A set of functions that you use to implement a predictive modeling process in order to answer simple business questions on your customer datasets.
  • A set of simplified APL procedures, SAPL (Simple APL), that you can also use to call the APL functions.

You can create the following types of models to answer your business questions:

– Classification/Regression models
– Clustering models
– Time series analysis models
– Recommendation models

Installing SAP APL v2.5.10.x on SAP HANA SP10

 

Software Requirements

You must have the following software installed in order to use this version of SAP APL:

  1. SAP HANA SPS 10 or higher
  2. SAP AFL SDK 1.00.090 or greater (this is part of SAP HANA)
  3. unixODBC (64-bit)

SAP APL 2.0 Software download path in service market place

unixODBC 64 bits

APL has a dependency on the libodbc.so.1 library included in unixODBC. In the latest unixODBC versions, this library is available only as libodbc.so.2. The workaround is to create a symbolic link named libodbc.so.1 that points to libodbc.so.2 in the same folder.

http://www.unixodbc.org/

unixODBC installation

cd <unixODBC install folder> (for example /usr/lib64)

ln -s libodbc.so.2 libodbc.so.1


 

SAP APL deployment on the HANA server

Note: You need root privileges (sudo) to run the installer.

 

We can check the add-on installation from SAP HANA Studio:

 

After the function library has been installed, the HANA script server must be enabled, and the HANA index server should be restarted. The following views should contain APL entries, which show that the APL is available:

 

-- Check that the APL functions are there

select * from "SYS"."AFL_AREAS";

select * from "SYS"."AFL_PACKAGES";

select * from "SYS"."AFL_FUNCTIONS" where AREA_NAME='APL_AREA';

select * from "SYS"."AFL_FUNCTION_PARAMETERS" where AREA_NAME='APL_AREA';

select "F"."SCHEMA_NAME", "A"."AREA_NAME", "F"."FUNCTION_NAME", "F"."NO_INPUT_PARAMS", "F"."NO_OUTPUT_PARAMS", "F"."FUNCTION_TYPE", "F"."BUSINESS_CATEGORY_NAME"
  from "SYS"."AFL_FUNCTIONS_" F, "SYS"."AFL_AREAS" A
 where "A"."AREA_NAME"='APL_AREA' and "A"."AREA_OID" = "F"."AREA_OID";

 

Configuration

This is the script delivered along with the software; it can be found under the samples directory:

hostname:/hana/data/HDB/SAP_APL/apl-2.5.0.0-hanasp10-linux_x64/samples/sql/direct # more apl_admin.sql

-- Run this as SYSTEM

connect SYSTEM password manager;



-- Enable script server

alter system alter configuration ('daemon.ini', 'SYSTEM') set ('scriptserver', 'instances') = '1' with reconfigure;

-- Check that APL functions are there

select * from "SYS"."AFL_AREAS";

select * from "SYS"."AFL_PACKAGES";

select * from "SYS"."AFL_FUNCTIONS" where AREA_NAME='APL_AREA';

select "F"."SCHEMA_NAME", "A"."AREA_NAME", "F"."FUNCTION_NAME", "F"."NO_INPUT_PARAMS", "F"."NO_OUTPUT_PARAMS", "F"."FUNCTION_TYPE", "F"."BUSINES

S_CATEGORY_NAME"

from "SYS"."AFL_FUNCTIONS_" F,"SYS"."AFL_AREAS" A

where "A"."AREA_NAME"='APL_AREA' and "A"."AREA_OID" = "F"."AREA_OID";

select * from "SYS"."AFL_FUNCTION_PARAMETERS" where AREA_NAME='APL_AREA';



-- Create a HANA user known as USER_APL, who's meant to run the APL functions

drop user USER_APL cascade;

create user USER_APL password Password1;

alter user USER_APL disable password lifetime;

-- Sample datasets can be imported from the folder /samples/data provided in the APL tarball

-- Grant access to sample datasets

grant select on SCHEMA "APL_SAMPLES" to USER_APL;



-- Grant execution right on APL functions to the user USER_APL

grant AFL__SYS_AFL_APL_AREA_EXECUTE to USER_APL;

grant AFLPM_CREATOR_ERASER_EXECUTE TO USER_APL; 

 

There is one step not shown in the sample SQL: creating the APL_SAMPLES schema. This is straightforward, for example:

create schema APL_SAMPLES;

 

Create the table types by running the delivered script “apl_create_table_types.sql”.

 

Import the sample data from the download directory.

Check the imported content in SAP HANA Studio:

 

The samples should now all be configured and available for use directly via SQL or using Predictive Analysis 2.0.

 

With the release of SAP Screen Personas 3.0 SP06, support for mobile devices is now available. This feature is provided through the new Slipstream Engine.

Launching the Slipstream Engine

The Slipstream Engine is a separate ICF service from the “personas” service that has been used to launch the Screen Personas runtime/editor on a desktop or laptop, so ensure that the correct URL is provided to the end users.

For example, whereas the path to the traditional Screen Personas service would be /sap/bc/personas, the path to the Slipstream Engine service is /sap/bc/se/m.

The Slipstream Engine in Action

A couple of notes before we start: In this scenario, I’m using an iPhone 8 and its native Safari browser. Furthermore, I don’t have single sign-on on this device, hence the logon screen.

The Logon Screen

From the beginning, we can see that the SAP Fiori design principles were kept in mind. The logon screen is responsive to the mobile device and, thereby, provides a user interface that is both beautiful and easy to use. No zooming in or awkward scrolling is needed in order to sign into the system.

The Loading Screen

The loading screen too is beautiful. Look at that logo!

The SAP Easy Access Screen

Once the Slipstream Engine has loaded, we are presented with the SAP Easy Access screen that users of the SAP GUI are familiar with. As before, the user is presented with the menu and can navigate to another screen by tapping on a menu item, but notice how the text is easy to read and that the screen elements are adapted to the screen dimensions of the mobile device. This is out-of-box; no flavor was needed to achieve this.

Something that you may be wondering at this point is, “where is the ‘water ripple’ image or corporate logo that appears on the right side of the screen in the SAP GUI?” It’s still there and can be seen by clicking on the right-most circle at the bottom of the screen. This is an ingenious way of providing an adaptive user interface; that is, instead of squeezing all of the content into the screen for the sake of having all of it visible at the same time, the layout is re-arranged so that all content is displayed in the best manner for the given device.

To prove the point above, launch the Slipstream Engine on a desktop. As we can see, since there is a much larger viewing area, both the menu and the image on the right are visible at the same time.

The Header and Toolbar

Other things to notice in the Slipstream Engine are the merged header and toolbar (design elements which were introduced with SAP Belize). The transaction code input field and the toolbar are automatically adapted for the mobile device; they are replaced by respective icons instead of simply having the screen elements overlap each other.

Tapping on the transaction code icon will display the input field so that the user can enter a t-code.

Likewise, tapping the toolbar icon displays the application, system, and menu toolbars.

Supported Scenarios

With the initial release of the Slipstream Engine, support is limited to a certain set of transactions (see SAP Note 2557076). Nonetheless, customers can make a request to the SAP Screen Personas product team to add a transaction to the set of supported scenarios (see this video for more on how to do this).

A whitelist/blacklist is maintained and can be viewed in the newly-redesigned SAP Screen Personas admin transaction. However, there is no blacklist at this time.

When the user executes a transaction, the Slipstream Engine will check the transaction against a whitelist and blacklist.

If the transaction is on the blacklist, then the transaction will not launch and the following message will be displayed in the status bar: “Transaction <t-code> is not allowed with Slipstream Engine (SAP Note 2557076)”.

However, if the transaction is neither on the whitelist of supported transactions nor the blacklist, then the transaction will launch and the following warning will be displayed in the status bar: “Full support for transaction <t-code> is not guaranteed (SAP Note 2557076)”.

Conclusion

Overall, the Slipstream Engine is a giant step for SAP Screen Personas and makes it easier for organizations to achieve the SAP Fiori user experience. With the Slipstream Engine, organizations are enabled to leverage their existing SAP screens in order to provide user experiences for scenarios that involve mobile devices. I look forward to seeing the awesome flavors that organizations design and provide to their end users.


About the author

Daniel Sanchez is a User Experience Strategy & Technologies consultant at SAP and is based in Palo Alto, California.

Disclaimer: The information in this article is as up-to-date as it could be on the date that the article was last updated. The author’s opinions are his own.

It is common knowledge that actions which cannot be directly modeled via the CRUDQ operations are implemented via function imports. Examples include: retrieving data for a different entity that is not directly in the context of the bound entity but depends on a few of its values, updating the status of some items (and not the entire item), on-demand fetching of data from a third-party service, and so on.

 

While creating function imports via the SEGW (Gateway Service Builder) transaction, there is no provision for marking parameters as optional. We may have use cases where the same function import behaves differently with different parameters, not all of which are mandatory. To achieve this, we can override the definition of the function import in the DEFINE method of the respective model provider extension class.

 

Sample implementation is demonstrated below.

Figure 1: oData Service with Function Import

 

Sample code is

METHOD define.
*----------------------------------------------------------------------
* Make the 'FetchUserConfigurations' function import parameters optional
*----------------------------------------------------------------------
    " Data declarations
    DATA:
      lo_entity_type TYPE REF TO /iwbep/if_mgw_odata_entity_typ,
      lo_property    TYPE REF TO /iwbep/if_mgw_odata_property,
      lo_action      TYPE REF TO /iwbep/if_mgw_odata_action.

    " Start of code
    CALL METHOD super->define( ).

    " Mark the parameter as optional for the 'FetchUserConfigurations' function import
    lo_action   = model->get_action( iv_action_name = 'FetchUserConfigurations' ).
    lo_property = lo_action->get_input_parameter( iv_name = 'EmployeeUser' ).
    lo_property->set_nullable( iv_nullable = abap_true ).

    " End of code
  ENDMETHOD.

Code Snippet 1: Optional parameters implementation

Note 1: Pagination cannot be implemented in a function import, because the IO_TECH_CONTEXT parameter of a function import and that of the GET_ENTITYSET method are typed with different classes, even though the parameter name is the same.

Note 2: Also note that the property names above are in camel case, not all upper case.

Please share feedback and questions.