
Welcome back!

Thanks for completing Part 1 – Prerequisites. We will now set up and install the HCP SDK for iOS.

Download the SDK installer

 

Download it from here (as of 19 January, the version is v1.0.301):

(Note: the above link points to a private group; you need permission to access the group.)

Since the SDK has not yet been released to general availability, customers and partners will have to wait a little longer. To my knowledge, it will be released before SAPPHIRE 2017.

Move the four folders to your disk (any location is OK; just remember where you put them).

 

The download contains four folders:

1.     Assistant

2.     Documentation

3.     Frameworks

4.     Tools

Navigate to the Assistant folder on your disk.

 

 

 

Move the HCPSDKforiOSAssistant app to your Applications folder

 

Open the HCPSDKforiOSAssistant and click on Settings.

 

HCPms Instances:

Select one from the dropdown

 

iOS Library path:

Point HCP SDK for iOS library path to the Frameworks folder on your Disk.

 

Click on Save button.

 

Click on New HCPms Configuration under the Configured HCPms Instances section.

 

Add a name for your connection

Add the root URL of your HCPms Admin API, for example:

https://hcpms-######trial.hanatrial.ondemand.com

Press the Tab key, which should auto-fill the HCPms Admin UI URL:

Add the user (for example “i302342”) and the password (for example “yourSAPallPWD”).

Click Save

If you need more details about the SDK, here are the links.

JAM Links

SAP + Apple Internal Resource Site (Public group) https://jam4.sapjam.com/groups/about_page/BG0xQqpa0ZMnvSNnSfBKwK

We have now set up the SDK successfully on our Mac machines. In the next blog, we will have a look at the HCPms app configuration.

Thank you.

By default, after a service order is saved, a distribution lock is set, which prevents you from editing the order while it is in status Transferring. If you click the Edit button, you will get the error message below until the order has been successfully transferred.
In the blog Regarding Service Order distribution lock and status I introduce the steps to avoid this distribution lock. Nevertheless, a BDoc and an inbound CSA queue are still generated automatically. The purpose of the CSA queue has already been well explained by Rohit Sharma in this thread:
If, for whatever reason, you really want no BDoc and no inbound CSA queue to be generated for a given transaction type, you can suppress this behavior with an enhancement on function module CRM_ORDER_SAVE_OW.
In this function module, the BDoc and CSA queue are created by function module CRM_ORDER_UPLOAD_SINGLE only when ALL THREE conditions in the IF statement are fulfilled.
The second condition, lv_send_bdoc, is controlled by a switch.
So the solution would be:
Create a new enhancement on function module CRM_ORDER_SAVE_OW and insert one line as shown below.
Suppose you would like that no BDoc is generated for transaction type ZSRV; the source code of the run method is then:
METHOD run.
    DATA lv_process_type    TYPE crmt_process_type.

    LOOP AT it_object_list ASSIGNING FIELD-SYMBOL(<guid>).
      CALL FUNCTION 'CRM_ORDERADM_H_READ_OW'
        EXPORTING
          iv_orderadm_h_guid     = <guid>
        IMPORTING
          ev_process_type        = lv_process_type
        EXCEPTIONS
          admin_header_not_found = 1
          OTHERS                 = 2.

      CHECK lv_process_type = 'ZSRV'.

      CALL FUNCTION 'CRM_ORDER_SET_NO_BDOC_SEND_OW'
        EXPORTING
          iv_guid = <guid>
          iv_flag = 'N'.
    ENDLOOP.

  ENDMETHOD.
Now lv_send_bdoc will be set to false at runtime according to the switch you set in the enhancement, and as a result no BDoc and no inbound queue will be created anymore.
Now, after the service order is saved, you can still continue to edit it.

SAP HANA Vora is an in-memory, distributed computing solution that helps organizations uncover actionable business insights from Big Data. SAP HANA Vora can be used to quickly and easily run enriched, interactive analytics on both enterprise and Hadoop data.

In a series of tutorial videos the SAP HANA Academy's Tahir Hussain “Bob” Babar details how to install and use the newest release of SAP HANA Vora, SAP HANA Vora 1.3. Bob walks through the steps necessary to install SAP HANA Vora 1.3 on a single-node system.

SAP HANA Vora 1.3 Overview

Watch the video below for an introduction to SAP HANA Vora 1.3 and for an overview of the series’ architecture.

Although Hadoop is highly scalable, it is a challenging infrastructure to manage, lacking, for instance, schema flexibility and out-of-the-box enterprise-grade analytics. Specialized programming skills are often required to extract business value from the data stored there. SAP HANA Vora is an in-memory computing framework composed of specialized processing engines purposefully designed for big data environments.

For developers and data scientists, SAP HANA Vora allows the mash-up of enterprise data and data from a Hadoop data lake. For business users, SAP HANA Vora provides modeling capabilities and enterprise features such as graph processing, which displays complex relationships, and also time series modeling, which forecasts future values based on historical data.

Big data is both distributed and processed on multiple nodes. At the lowest layer is the Hadoop Distributed File System, HDFS. HDFS is the primary storage system used by Hadoop applications. It’s distributed to provide high performance access to data across all of the nodes within a cluster.

To process the data you can use tools like Apache Spark. Spark is an open source big data processing framework, which runs in-memory. SAP HANA Vora is an in-memory query engine that plugs into the execution framework to provide enriched interactive analytics on data stored in Hadoop.

As well as being able to perform business intelligence on that data, you can also build your own apps, and you can connect SAP HANA Vora to notebooks such as Jupyter and Zeppelin. It is also easy to connect SAP HANA Vora to SAP HANA, and this connection is bi-directional: if you build apps on the SAP HANA side, you can connect to data in SAP HANA or in Hadoop, and if you build apps that connect directly to SAP HANA Vora, you can use data contained in SAP HANA as a data source.

For this series, imagine that you're a user who wants to investigate and play with SAP HANA Vora and install it yourself. First you will create a SUSE Linux instance in Amazon Web Services. You will then use SSH through a Mac Terminal to access the instance. You can also use PuTTY if you have a Windows machine.

Then you will use an easy deployment tool, named Ambari, to install and monitor the Hadoop cluster. Next, you will install the bare minimum of Hadoop services, such as HDFS, YARN, and Spark. After testing HDFS and Spark, you will install SAP HANA Vora.

The two main SAP HANA Vora tools that you will examine are the SAP HANA Vora Manager and the SAP HANA Vora Tools. There will be upcoming videos on the SAP HANA Vora engines.

After getting data into SAP HANA Vora, you will want to get it out. So you will install and use Apache Zeppelin to graphically visualize the data. You can use Apache Zeppelin or any BI tool to connect to SAP HANA Vora. This will enable you to connect to your data in HDFS.

Also, soon there will be some videos on how to connect SAP HANA Vora 1.3 to SAP HANA.

All of the commands and code used through this series can be found on the SAP HANA Vora 1.3 file on the SAP HANA Academy’s GitHub.

Create Linux Instance

In the series’ second video, linked below, Bob shows how to create a Linux Instance in Amazon Web Services.

First Bob creates a VPC network in AWS to house his instances. This ensures that every time the server is stopped and/or started, the server name remains the same. Next, within EC2, Bob launches a SUSE Linux 11 Service Pack 4 image. The image version that Bob has tested and confirmed works for the SAP HANA Vora installation is suse-sles-11-sp4-sapcal-v20160515-hvm-ssd-x86_64.

While configuring, Bob disables the Auto-assign Public IP as he will be using an elastic IP. Bob elects to use an existing security group that will be modified later. When launching the instance, Bob creates a key pair via a PEM file. Make sure to download and store the Key Pair so you can log into your server.

Then, Bob allocates an elastic IP to his recently created VPC. The elastic IP will never change. Finally, Bob associates his elastic IP to his instance. Next, Bob sets the security group so that all traffic can only come from his Mac computer. Bob will be using Terminal on his Mac to access the Linux server.

Connecting to the Instance

In the SAP HANA Vora 1.3 series’ next tutorial Bob shows how to connect to the AWS Linux Node using SSH from Terminal. Bob then details how to prepare the instance for the installation of Ambari, Hadoop and SAP HANA Vora.

If you want to connect to your instance using PuTTY on a Windows machine instead of using Terminal on a Mac, then please watch this video from the SAP HANA Academy's SAP HANA Vora 1.2 playlist.

In Terminal, Bob copies his Vora13.pem.txt key to his HOME/.ssh folder.

Next, Bob changes the rights on the PEM file so he can log in using SSH. Finally, Bob logs in by entering the command shown below, using his public IP address.
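
As a rough sketch, the login from a Mac Terminal looks like the following. The key file name and the ec2-user login are assumptions here; your image may use a different default user, and the IP address is the elastic IP you allocated earlier.

chmod 400 ~/.ssh/Vora13.pem.txt                           # restrict the key file rights so SSH accepts it
ssh -i ~/.ssh/Vora13.pem.txt ec2-user@<your-elastic-ip>   # log in to the SUSE instance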

Once logged in you need to enter a few commands to install some packages that are required by the various SAP HANA Vora Servers. All of the scripts are listed on the SAP HANA Academy’s GitHub in the Vora_1.3_InstallNotes.txt file. For more information on SAP HANA Vora 1.3 Installation please read the SAP HANA Vora Installation and Administration guide found on the Vora 1.3 help.sap.com page.

First, as the root user, Bob makes sure his network time protocol daemon is running. Next, Bob installs a libaio file and changes the config file’s max size.

Next, Bob appends a line to the limits file. Then, Bob exports the locale for everything as US English. Then, Bob installs a pair of packages, numactl and libtool, which prepare the document store server and the disk engine server respectively. You may need to install additional packages depending on your environment.
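
A minimal sketch of these preparation steps is shown below. The package names (libaio, numactl, libtool) are assumptions; the authoritative commands and values, including the limits and config file changes, are in the Vora_1.3_InstallNotes.txt file on GitHub.

sudo service ntp status                 # confirm the NTP daemon is running (service name may differ on your image)
sudo zypper install libaio              # library required by the SAP HANA Vora servers
sudo zypper install numactl libtool     # needed by the document store and disk engine servers
export LC_ALL=en_US.UTF-8               # set the locale for everything to US English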

Ambari Installation

In the next video, linked below, Bob shows how to install Ambari on the Linux Instance. Ambari is a cluster provisioning tool which is used to both install and monitor Hadoop.

First, in Terminal as the root user, Bob pastes in the set of commands shown below to create a new specific user, cluster_admin, for installing Ambari. These commands can be copied from lines 56-61 of the Vora_1.3_InstallNotes.txt file on GitHub.
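
The exact commands are on lines 56-61 of the Vora_1.3_InstallNotes.txt file; an illustrative equivalent, assuming a plain local user with a home directory, would be:

sudo useradd -m cluster_admin       # create the cluster_admin user with a home directory
sudo passwd cluster_admin           # set its password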

After logging in as the cluster_admin user, Bob generates a public/private RSA key pair. Then Bob changes the rights on the files and appends the public key to an authorized_keys file.
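
A hedged sketch of this step, assuming the default key file names, looks like this:

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa          # generate an RSA key pair without a passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   # allow the user to SSH to itself (needed by Ambari)
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys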

Next Bob enters the command below to put the Ambari 2.2.2 repository into the Linux server’s repository.
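
Something along these lines registers the repository; the actual repository URL for Ambari 2.2.2 on SLES 11 is in the install notes and is only a placeholder here:

sudo zypper addrepo <ambari-2.2.2-repo-url> ambari-2.2.2   # register the Ambari repository
sudo zypper refresh                                        # make sure the repo metadata is up to date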

After ensuring the repo is up to date, Bob installs the Ambari server. Once Ambari is installed, Bob sets it up on the Linux server; this includes using Oracle JDK 1.8. Bob elects not to change any of the advanced database configuration, as PostgreSQL is installed automatically. Finally, Bob restarts the server and then connects to the Ambari login page using port 8080 and his public IP address.
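
In outline, and assuming the ambari-server package name from the repository added above, the steps look roughly like this:

sudo zypper install ambari-server   # install the Ambari server package
sudo ambari-server setup            # interactive setup: accept Oracle JDK 1.8 and the default database options
sudo ambari-server start            # start Ambari, then browse to http://<public-ip>:8080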

Hadoop Installation

In the next part of the series, Bob details how to install Hadoop. The Hadoop components HDFS, Hive, Spark and YARN are installed and configured. Hadoop is used for the processing and storage of extremely large data sets in a distributed computing environment and is a prerequisite for SAP HANA Vora on the Linux instance.

First, Bob logs into Ambari as the default user and goes through the various tasks in the installation wizard. The most important part is that you choose the stack HDP 2.4. Please check the installation guide for the exact versions that are supported.

Bob copies the target host name and the private key from his Terminal and adds them to the install wizard before confirming his host.

For the services, Bob chooses HDFS, YARN+MapReduce2, Spark and Ambari Metrics. HDFS is the Apache Hadoop Distributed File System and is where your files are stored. YARN+MapReduce2 helps you do processing on the server. Spark is an open source processing engine built around speed; it is needed because SAP HANA Vora complements Apache Spark. Ambari Metrics provides information about network and disk space usage. After clicking next, the wizard also includes the necessary services ZooKeeper, Hive, Pig and Tez.

Hive Configuration

To continue on with the Hadoop installation, Hive must be configured and in the video below, Bob shows how to configure the Hive Service. This is a prerequisite for installing the Spark Service.

If you want to use the PostgreSQL database you need to create a database on the server. You need a Hive schema and user on the repository database. You will be installing the postgresql.jar file on the Linux server.

Back in Terminal, as the root user, Bob installs the postgresql-jdbc file and changes the rights on the file before making Ambari aware of the file.

Next, Bob logs in as the postgres user, accesses psql and then runs the commands shown below to create a database, user and password. He calls all three of these hive.
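
A sketch of this step is shown below; the password is only an example here and should be your own choice.

sudo su - postgres                                 # switch to the postgres OS user
psql                                               # open the PostgreSQL prompt
CREATE DATABASE hive;                              -- repository database for Hive
CREATE USER hive WITH PASSWORD 'hive';             -- repository user
GRANT ALL PRIVILEGES ON DATABASE hive TO hive;
\q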

Now, back as the root user, Bob copies the psql config file before opening it. Then Bob modifies the file to give Hive access to the database.

After the database is restarted, back in Ambari, Bob completes the Hadoop installation. Once done make sure that all of the services have been started.

Testing HDFS & Spark

Now that Hadoop is installed, Bob shows how to test it in the tutorial video below. These tests ensure that the Linux instance is ready for the SAP HANA Vora 1.3 installation.

First, to test HDFS, in Terminal Bob logs in as his HDFS user and creates a new folder in the cluster_admin directory. After giving the user rights, Bob tests HDFS by creating a simple test.csv file that contains a single row with three columns, as the cluster_admin user. Bob then puts that file into the folder he just created and shows that he can output the file as the cluster_admin user. The cluster_admin user can both read and write to HDFS.
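
The sequence is roughly the following; the directory path /user/cluster_admin is an assumption used purely for illustration.

sudo su - hdfs                                         # the hdfs superuser creates and hands over a directory
hdfs dfs -mkdir /user/cluster_admin
hdfs dfs -chown cluster_admin /user/cluster_admin
exit
echo "1,2,3" > test.csv                                # as cluster_admin: a single row with three columns
hdfs dfs -put test.csv /user/cluster_admin/            # write the file to HDFS
hdfs dfs -cat /user/cluster_admin/test.csv             # read it back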

Next, to test Spark, Bob first sets some paths for the user in the .bashrc file by locating his Java and Hadoop homes. Bob then inserts five commands from the GitHub file into the .bashrc file.

The first test is to run the Spark shell, which Bob successfully does as the cluster_admin user. Then, still as the cluster_admin user, Bob runs the command shown below to use a Spark library to return the value of Pi.
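
A typical way to run that example, assuming the Spark examples jar shipped with HDP (the exact jar path depends on your installation), is:

spark-submit --master yarn-client --class org.apache.spark.examples.SparkPi <path-to-spark-examples-jar> 10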

SAP HANA Vora Installation

In the next tutorial video Bob details how to download, install and configure the SAP HANA Vora package on the Linux Instance that contains Ambari and Hadoop.

First, go to the SAP Service Marketplace and select SAP HANA Vora and choose SAP HANA VORA FOR AMBARI 1. Download the file and then place it into the root directory of your cluster_admin user. Then extract the file into the HDP services folder by running the command below.
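
In outline, and assuming the Ambari stacks directory is in its default location (the exact archive name and stack version folder come from the download and the installation guide):

cd ~
tar -xzf <downloaded-vora-ambari-package>.tgz -C /var/lib/ambari-server/resources/stacks/HDP/2.4/services/
ls /var/lib/ambari-server/resources/stacks/HDP/2.4/services/   # vora-manager should now be listed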

In SAP HANA Vora 1.2 you installed multiple services within Ambari; the difference in 1.3 is that there is now only one service in Ambari. Instead, a separate tool, used independently of Ambari, manages and configures the services. Now you should see vora-manager if you do an ls on your services folder.

Now you need to restart both the Ambari agent and the Ambari server using a pair of commands from the GitHub file. As it restarts, Ambari becomes aware of the additional vora-manager service, which is then available for installation in Ambari.
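
The pair of commands should look like this:

sudo ambari-agent restart
sudo ambari-server restart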

Log back into Ambari and make sure all of the services are running. Click on add services and scroll down to find and select Vora Manager. You need to install three things, Vora Manager Master, Vora Manager Worker and Vora Client on the single node.

On the next screen in the wizard choose advanced vora-manager-config and add your vora_default_java_home and your vora_default_spark_home. You can confirm the path of your Java and Spark home in the Terminal. Finally click deploy at the end of the wizard to complete the installation of SAP HANA Vora 1.3. To confirm make sure that the Vora Manager Master, Vora Manager Worker and Vora Client have all started.

Post Installation

In the next video linked below, Bob covers the post installation steps for SAP HANA Vora 1.3.

If you're using AWS you may notice that the Vora Manager Worker goes from Live to Not Live after the installation. This is the result of a mismatch in AWS between the internal and external machine names. To fix it, go back into the Terminal and create a new AWS file as the cluster_admin user. Then modify the params.py file in the HDP/2.4/services/vora-manager/package/scripts folder by inserting a few lines. These lines are import socket and self_host = socket.getfqdn(), as shown below.

After stopping and then starting the Vora Manager Worker, it should be Live once again.

To access the Vora Manager, copy your IP address and then append :1900 to it. However, you can't access it until you generate the password file. So back in Terminal, as the root user, locate the genpasswd.sh file in the HDP services directory (for example with find / -name 'genpasswd.sh'). Then create a user name and password for the Vora Manager.

Next, make the Vora user the owner of the htpassword file and give it the rights. Then place the file into the Vora Manager folder.

Then, after you stop and start the Vora Manager, you can access it on port 1900 using your login and password.

Vora Manager

In the tutorial linked below Bob details how the SAP HANA Vora Manager works. The Vora Manager is a new feature in SAP HANA Vora 1.3. It is a UI which is used to configure, start/stop and troubleshoot the SAP HANA Vora Servers.

The SAP HANA Vora Manager has four tabs. The User Management tab enables you to create and edit other users. The Nodes tab details the services running on your Vora nodes and their status.

The most important tab is Services. There you can configure and switch on and off each of your SAP HANA Vora services. Each service has both a configuration and a node assignment tab. Unlike in Vora 1.2, you won't need to change anything for the individual services at the moment; the only thing you need to do is start all of the services. In the Nodes tab you can then see that all of the services have started up.

Next Bob covers some troubleshooting steps in case not all of the SAP HANA Vora services start up. The log files for the Vora Manager are in the /var/log folder and can be accessed as the cluster_admin user in the Terminal. Within the log folder there is a subfolder for each of the services.

Back in the Vora Manager, if you click on the Connection Status icon in the top right corner, you can see the pair of third-party tools, Consul and Nomad, that are utilized. You can then use these tools for advanced troubleshooting.

Back in the Terminal, turn on the Consul UI, which is a monitoring tool, and then locate the nomad tool found inside its bin folder. Then use the nomad tool to check the status of the services. Next, if you put the name of the SAP HANA Vora service after nomad, then you can see if any issues are occurring.

To turn on the web-based UI for Consul, append :8500 to the external IP Address in a new tab. This works the same way as it did for SAP HANA Vora 1.2.

The final tab, External Links, links you to the SAP HANA Vora Tools, which are used to model within SAP HANA Vora. Make sure you paste the external IP address in front of port 9225 to open the Tools. The SAP HANA Vora Tools allow you to create tables in SAP HANA Vora and to model various views. They also allow you to combine datasets together in SAP HANA Vora.

Testing SAP HANA Vora 1.3

In the next tutorial video, linked below, Bob shows how to test to make sure that installation of SAP HANA Vora 1.3 worked. Bob tests both the Vora Spark Shell and the Vora Tools UI.

Back in the Terminal on the Linux server as the cluster_admin user, navigate to the vora bin folder where SAP HANA Vora Spark Shell is contained. Then run the start-spark-shell command.

Once the shell has started, run the command to import the spark.sql.SapSQLContext. Next assign it to a variable called vc. Then enter the SQL below to create a test table. After, enter a show table command to see that the table exists and then enter a select * from command to see the table’s data. This proves the SAP HANA Vora Spark Shell works.

The next test creates a similar table using the SQL Editor in the SAP HANA Vora Tools. Bob runs the same command in the SQL Editor to create the table and then performs a select * from to view the data.

Back in the SAP HANA Vora Tools home page the testtable is now contained in the data browser.

Zeppelin Installation

In the next video Bob shows how to install Apache Zeppelin. Apache Zeppelin is a new and incubating multi-purpose web-based notebook which brings data ingestion, data exploration, visualization, sharing and collaboration features to Hadoop, Spark and SAP HANA Vora.

For information about Apache Zeppelin visit its website. To see which version of Zeppelin you need, you must check which version of Scala you're using. Scala is how you connect to Spark.

To find out your version of Scala, launch the SAP HANA Vora Spark Shell as the cluster_admin user in the Terminal on your Linux instance. Then run the command below to see your version of Scala. Which version of Scala corresponds to which version of Zeppelin is detailed in the installation guide.

On the Apache Zeppelin downloads page, select the binary package with all interpreters for your proper version of Zeppelin to download it. Download it to Dropbox or an FTP server so it can then be transferred to your Linux instance.

Back in Terminal go to the Home directory of the cluster_admin user and run the wget command shown below.
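
Something like the following fetches and unpacks the package in the cluster_admin home directory; the URL and version are placeholders for whatever you staged on Dropbox or your FTP server.

cd ~
wget <url-to-your-staged>/zeppelin-<version>-bin-all.tgz   # download the staged Zeppelin binary package
tar -xzf zeppelin-<version>-bin-all.tgz                    # unpack it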

After the Zeppelin tar file is unpacked, add its location to your .bashrc file. Then log out and back in as the cluster_admin user and navigate into the zeppelin folder. Now that Zeppelin has been installed, it must be linked to SAP HANA Vora.

SAP provides a SAP HANA Vora Spark extension. This is an interpreter that the UI within Zeppelin will use. First, copy the Zeppelin jar file from the Ambari Spark folder and insert it into the Zeppelin folder. Next open the Zeppelin folder that now contains both jar files and run the command shown below. This will remove the interpreter-setting.json file.

Then run a similar uf command to merge the interpreter-setting.json files together. The file now contains an interpreter for both Spark and SAP HANA Vora.

Zeppelin Configuration

In this tutorial video Bob shows how to configure Apache Zeppelin to work with SAP HANA Vora by combining the SAP HANA Vora Interpreter with Zeppelin.

Back in the Terminal, navigate to the Zeppelin conf folder as the cluster_admin user. Copy the zeppelin-env.sh.template file and change its rights so it can be edited. Open the environment file and insert some class paths for YARN, Hadoop and Spark using the commands shown below.

Then copy the zeppelin-site.xml.template file, change the rights and modify the file by adding sap.zeppelin.spark.SapSqlInterpreter as the second interpreter. Also change the Zeppelin server port from 8080 to 9099.

Next, go into Ambari and select the YARN service. Then navigate to the advanced tab from the Configs tab and choose to add a custom yarn-site. Give the property a key and then specify the version of HDP as the value. Save the configuration and restart YARN.

To start Zeppelin run the command shown below in the Terminal.
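
Assuming the standard Zeppelin daemon script in the unpacked folder, that command is:

cd ~/zeppelin-<version>-bin-all
./bin/zeppelin-daemon.sh start     # Zeppelin is then reachable on the port configured above (9099)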

To access Zeppelin, append the port 9099 to the end of the public IP address. Next, choose the Interpreters option in Zeppelin. Find and remove the Spark interpreter. Then create a new interpreter named Spark and put it in the Spark group. Change the master to yarn-client and then add the jar file as an artifact in the dependencies. Then restart the Spark interpreter.

Using Zeppelin

In the final video of the series Bob shows how to use Apache Zeppelin to work with SAP HANA Vora. This confirms that the installation and configuration of Zeppelin on Vora is working.

When using Zeppelin you create notes, so Bob creates a sample note called MyFirst note. Unlike the SAP HANA Vora Tools, Zeppelin allows you to display data graphically. Because you're using an interpreter in Zeppelin, you must always prefix statements with %spark.vora. Bob uses the same command to build a table as he did in the Vora Tools' SQL Editor, but with the prefix tacked on.

To view the table run the show table command and to view the data run a select * from statement. With this test we know that the Zeppelin interpreter works when connecting to SAP HANA Vora. That means that both Spark and Hadoop work as well.

To load data into HDFS, open Terminal on the Linux server and log in as the cluster_admin user. Then create a simple file (aggdata.csv) with the commands shown below.

Then use the hdfs dfs -put command to add the file to HDFS.
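
For example (the sample rows below are made up purely for illustration; the real file comes from the video and the GitHub notes):

cat > aggdata.csv <<EOF
A,10
B,20
A,30
EOF
hdfs dfs -put aggdata.csv /user/cluster_admin/   # load the file into HDFS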

Then, back in Zeppelin, Bob creates a new note and runs a command to create a new table from aggdata.csv. After running a select * from command, Bob is able to use the different graphics in Zeppelin to visualize the data.

That concludes the tutorial series on how to install and use SAP HANA Vora 1.3.

Please visit the SAP HANA Academy to learn about SAP HANA, SAP Analytics, and the SAP HANA Cloud Platform from more than 1,800 free tutorial videos. Subscribe to keep up to date with the latest videos.

All code snippets used in every video are available on GitHub.

Please follow on Twitter @saphanaacademy and connect with us on LinkedIn.

Introduction

This blog is a summary of my presentation at SAP Inside Track Walldorf about connecting SAP's rule engine BRFplus to an ABAP program, user exit or any other piece of runnable code in ABAP.

The intention of this presentation is to point out the possibilities for dynamically calling a function defined in BRFplus, besides the standard approach of generating the necessary code to call a BRF function from ABAP.

You can find the slides at http://tinyurl.com/sitwdfslides and the source code at https://github.com/andau/zsitwdf.

Since SIT Walldorf this year was IoT-related, I built an IoT-related example. Well, I am not sure if this example will ever really go into production anytime, anywhere 😉 but at least it does its job as a simplified model for explaining the interface between ABAP and BRFplus.

I am not giving an introduction to BRFplus in this post; there is already a lot of content here on the SCN community space. From a business perspective I would recommend the following YouTube video as a starting point: https://www.youtube.com/watch?v=2ouhJeH02HU. From a technical perspective, for example, the blog BRFplus a real time example gives a good introduction.

The focus of this document is on the pretty cool class CL_FDT_FACTORY, which can be used to extract metadata and later to call a BRFplus function dynamically. Why would you do this? Well, normally you won't. If you have a well-defined interface, it is surely better to go with the standard way: generating the necessary source code, for example with the report FDT_TEMPLATE_FUNCTION_PROCESS. But what if you do not yet know which input parameters you will need in your function, or the interface is expected to change a lot in the future?

 

The business case

That brings me to my chosen example. Let's assume you own a production company and, among others, you have a special machine. This machine has a lot of sensors, and modernisation continuously comes around to say “Hello” and add some new sensors. The maintenance people have a status monitor in the SAP ERP system where machine problems, together with ERP information (e.g. the needed spare parts – type, actual location …), are visualised.

The maintenance people can add the sensors to the machine and enrich the status information already sent from the production machine to the ERP system with the new sensor data, but they are not able to program in ABAP or Java. For some reason they like BRFplus and are keen on changing the maintenance data evaluation function with the BRFplus GUI.

Now let's have a closer look at our production machine and its connection to the ERP system (OK, to be honest, I am switching to an easier example 😉).


BRFplus connected to a simple IoT dev  😉

The coffee machine is a really sophisticated one: if there is a change in the sensors (usually when somebody takes a coffee), the machine sends the data to the SAP ERP system. The SAP ERP system calls the BRFplus function where the maintenance people define how the sensor data should be evaluated.
Nothing special, just the straightforward way to use BRFplus, with only one difference: before the BRFplus function is called, the SAP ERP system reads the context data of the BRFplus function – particularly the defined input data (sensors) – and propagates to the BRFplus function only the sensor data received from the coffee machine that is also defined in the function.

The configuration of BRFplus would look something like this. The two screenshots shown below do not cover the entire configuration of the BRFplus function, but I am sure you will quickly get some useful ideas on how to convert the input values BEANS and WATER into valuable maintenance messages.

Signature of the BRFplus function

Rules added to the BRFplus function

 

Extracting metadata with CL_FDT_FACTORY
To get the context data there is a nice class, CL_FDT_FACTORY, which you can use to retrieve the function metadata.

 

Code – Extract context parameters of a function

class ZCL_BRFPLUS_METADATA definition
  public
  final
  create public .

public section.

CLASS-METHODS:

               getFunctionContextParams
                     importing pFunctionId type if_fdt_types=>id
                     returning value(pContextParams) type ZT_CONTEXT_PARAMS.

protected section.
private section.
ENDCLASS.



CLASS ZCL_BRFPLUS_METADATA IMPLEMENTATION.

method getFunctionContextParams.
  DATA: contextParam like line of pContextParams.

* get all context Ids
  DATA(contextObjectIds) = CL_FDT_FACTORY=>get_instance(
                                )->get_function( pFunctionId )->get_context_data_objects( ).

  LOOP AT contextObjectIds assigning FIELD-SYMBOL(<contextObjectId>).

* get instance by context id
      CL_FDT_FACTORY=>get_instance_generic(
                    EXPORTING iv_id         = <contextObjectId>
                    IMPORTING eo_instance   = DATA(lo_instance) ).

* populate result table
      contextParam-name = lo_instance->get_name( ).
      contextParam-type = CONV string( CAST if_fdt_element( lo_instance )->get_element_type( ) ).
      append contextParam to pContextParams.

  ENDLOOP.

endmethod.

ENDCLASS.

 

Unittest – Extract Context parameters of a function

class ZCL_BRFPLUS_METADATA_UNIT definition FOR TESTING.
"#AU Risk_Level Harmless
  PUBLIC SECTION.
  private section.
  CONSTANTS: 
             sitWdfFunctionId   type if_fdt_types=>id
                                VALUE '0241750C32391EE6B4D8A1790D2D5E0C',
             sitWdfFunctionName type IF_FDT_TYPES=>NAME
                                VALUE 'COFFEE_MACHINE_STATUS'.

  METHODS:    testGetFunctionContextParams FOR TESTING.
                        
ENDCLASS.

CLASS ZCL_BRFPLUS_METADATA_UNIT IMPLEMENTATION.


 method testGetFunctionContextParams.

   DATA(contextParams)  = ZCL_BRFPLUS_METADATA=>getFunctionContextParams(
                                       pFunctionId = sitWdfFunctionId ).
   CL_AUNIT_ASSERT=>assert_equals( EXP = 2  ACT = lines( contextParams ) ).

   read table contextParams with key name = 'ZSENSOR_FILL_QUANTITY_BEANS' into DATA(contextParam).
   CL_AUNIT_ASSERT=>assert_equals( EXP = 'T' ACT = contextParam-type  ).

   clear contextParam.
   read table contextParams with key name = 'SENSOR_NOT_EXISTING' into contextParam.
   CL_AUNIT_ASSERT=>assert_initial( contextParam ).

 endmethod.

ENDCLASS.

 

With the defined context parameters, the function can be called dynamically. When a new input parameter (sensor) is added to the BRF function, the call from ABAP does not have to be changed.

Code – Dynamic call of BRFplus function

class zcl_brfplus_function definition
  public
  final
  create public .

public section.
    class-methods process
      importing pFunctionname type IF_FDT_TYPES=>NAME
                pSensorValues type ref to ZCL_SENSOR_VALUES
      returning
        value(pMaintenanceMessages) type ZT_MAINTENANCE_MESSAGES.
protected section.
private section.
endclass.



class zcl_brfplus_function implementation.

  method process.
    DATA: contextParams TYPE abap_parmbind_tab,
          contextparam like line of contextParams.
    FIELD-SYMBOLS <resultDataAny> TYPE any.

    GET TIME STAMP FIELD DATA(currentTimestamp).

    "get defined context params of brfplus function
    DATA(functionId) = zcl_brfplus_metadata=>getFunctionId( pFunctionname  ).
    DATA(definedContextParams) = zcl_brfplus_metadata=>getfunctioncontextparams( functionId ).

    "build context information by matching defined context params and sensor values
    pSensorValues->getSensorValues( importing pSensorValues = DATA(sensorValues) ).
    loop at definedContextParams assigning field-symbol(<definedContextParam>).
      loop at sensorValues assigning field-symbol(<sensorvalue>).
        if <definedContextParam>-name = <sensorvalue>-name.
          "move definedcontextParams into context params format for BRFplus call.
          contextparam-name = <definedContextParam>-name.
          GET REFERENCE OF <sensorvalue>-value INTO contextparam-value.
          INSERT contextparam INTO TABLE contextparams.
        endif.
      endloop.
   endloop.

   "prepare and process BRFplus function
   cl_fdt_function_process=>get_data_object_reference( EXPORTING iv_function_id      = functionId
                                                                iv_data_object      = 'ZTABLE_MAINTAINANCE_MESSAGES'
                                                                iv_timestamp        = currentTimestamp
                                                                iv_trace_generation = abap_false
                                                      IMPORTING er_data             = DATA(resultData) ).
   ASSIGN resultData->* TO <resultDataAny>.

  cl_fdt_function_process=>process( EXPORTING iv_function_id = functionId
                                              iv_timestamp   = currentTimestamp
                                    IMPORTING ea_result      = <resultDataAny>
                                    CHANGING  ct_name_value  = contextParams ).


  "return result of brfplus function
  pMaintenanceMessages = <resultDataAny>.

  endmethod.

endclass.

 

Unittest – Dynamic call of BRFplus function

class ltcl_ definition final for testing
  duration short
  risk level harmless.

  private section.
    DATA brfplusFunction type ref to zcl_brfplus_function.
    methods:
      setup,
      okTest2Sensors for testing raising cx_static_check,
      okTest3Sensors for testing raising cx_static_check,
      okTest2SensorsWithInputFrom3 for testing raising cx_static_check,
      failTest2Sensors for testing raising cx_static_check,

      generateDataFor2Sensors
         importing pSensorFillQuantityBeans type int2
                   pSensorFillQuantityWater type int2
         returning value(pSensorValues) type ref to ZCL_SENSOR_VALUES,
      generateDataFor3Sensors
         importing pSensorFillQuantityBeans type int2
                   pSensorFillQuantityWater type int2
                   pSensorFillQuantityTrash type int2
         returning value(pSensorValues) type ref to ZCL_SENSOR_VALUES.

endclass.


class ltcl_ implementation.

  method setup.
    create object brfplusFunction.
  endmethod.

  method okTest2Sensors.

    DATA(sensorValues) = generateDataFor2Sensors( exporting pSensorFillQuantityBeans = 80
                                                                pSensorFillQuantityWater = 80 ).
    DATA(maintenanceMessages) = brfplusFunction->process( exporting pFunctionName = `COFFEE_MACHINE_STATUS` pSensorValues = sensorValues ).


    cl_abap_unit_assert=>assert_equals( exp = 0 act = lines( maintenanceMessages ) ).

  endmethod.

  method okTest3Sensors.

    DATA(sensorValues) = generateDataFor3Sensors( exporting pSensorFillQuantityBeans = 80
                                                            pSensorFillQuantityWater = 80
                                                            pSensorFillQuantityTrash = 20    ).
    DATA(maintenanceMessages) = brfplusFunction->process( exporting pFunctionName = `COFFEE_MACHINE_STATUS_3SENSORS` pSensorValues = sensorValues ).


    cl_abap_unit_assert=>assert_equals( exp = 0 act = lines( maintenanceMessages ) ).

 endmethod.

  method okTest2SensorsWithInputFrom3.

    DATA(sensorValues) = generateDataFor3Sensors( exporting pSensorFillQuantityBeans = 80
                                                            pSensorFillQuantityWater = 80
                                                            pSensorFillQuantityTrash = 80 ).
    DATA(maintenanceMessages) = brfplusFunction->process( exporting pFunctionName = `COFFEE_MACHINE_STATUS` pSensorValues = sensorValues ).

    cl_abap_unit_assert=>assert_equals( exp = 0 act = lines( maintenanceMessages ) ).

  endmethod.

  method failTest2Sensors.

    DATA(sensorValues) = generateDataFor2Sensors( exporting pSensorFillQuantityBeans = 20
                                                            pSensorFillQuantityWater = 80 ).
    DATA(maintenanceMessages) = brfplusFunction->process( exporting pFunctionName = `COFFEE_MACHINE_STATUS` pSensorValues = sensorValues ).
    cl_abap_unit_assert=>assert_equals( exp = 1 act = lines( maintenanceMessages ) ).

    sensorValues = generateDataFor2Sensors( exporting pSensorFillQuantityBeans = 80
                                                      pSensorFillQuantityWater = 10 ).
    maintenanceMessages = brfplusFunction->process( exporting pFunctionName = `COFFEE_MACHINE_STATUS` pSensorValues = sensorValues ).
    cl_abap_unit_assert=>assert_equals( exp = 1 act = lines( maintenanceMessages ) ).

  endmethod.


  method generateDataFor2Sensors.

    create object pSensorValues.
    pSensorValues->addSensorValue( pSensorname = `ZSENSOR_FILL_QUANTITY_BEANS`
                                 pSensorValue = pSensorFillQuantityBeans  ).
    pSensorValues->addSensorValue( pSensorname = `ZSENSOR_FILL_QUANTITY_WATER`
                                 pSensorValue = pSensorFillQuantityWater ).
  endmethod.

  method generateDataFor3Sensors.

    create object pSensorValues.
    pSensorValues->addSensorValue( pSensorname = `ZSENSOR_FILL_QUANTITY_BEANS`
                                 pSensorValue = pSensorFillQuantityBeans  ).
    pSensorValues->addSensorValue( pSensorname = `ZSENSOR_FILL_QUANTITY_WATER`
                                 pSensorValue = pSensorFillQuantityWater ).
    pSensorValues->addSensorValue( pSensorname = `ZSENSOR_FILL_QUANTITY_TRASH`
                                 pSensorValue = pSensorFillQuantityTrash ).

  endmethod.

endclass.

 

 

Conclusion

Finally, to come back to our initial coffee machine example: if you, for example, start with two sensors – water and beans – and decide later to add a trash sensor, a milk sensor or whatever other sensor, you have nothing to change in the ABAP code. The key to doing this is the ABAP class CL_FDT_FACTORY. Furthermore, you can define your maintenance messages without the need for custom tables and enrich your messages with useful data from the ERP system, for example by selecting that data with BRFplus.

 

Questions and answers at the presentation
During the Q&A part after the presentation, some interesting questions were raised. First of all, to stress it once more: this is not a productively running system and was only created for the presentation at SIT Walldorf. I want to point out some of the questions:

  • How is the performance of BRFplus?
    When using BRFplus, performance is of course always a topic, so extracting the metadata before the function call is sometimes neither possible nor recommendable.
  • Is BRFplus the right tool for a business department?
    A main goal of BRFplus is to hand over the configuration to the business department. Is BRFplus really the right interface for non-techie people? I think there is no general answer, but it's worth a try. Starting to work with BRFplus certainly needs some patience, but there are some really nice features like the integrated simulation and the upload/download of decision tables with Excel.
  • Handling of several production clients or systems with different configurations.
    As far as I know, in this case you should also include SAP Decision Service Management (DSM) – but I am not yet very familiar with this system.
  • What are some examples of scenarios where BRFplus is already used in productive systems?
    In the book “Business Rule Management with ABAP” from SAP Press, some cases in collections management, claims management and finance statistics are shown.
    Variant configuration is one area where BRFplus fits well, in my opinion. Frequently changing product configurations that are not performance critical can be handled with the example shown above, as variant configuration works with characteristic (key/value) pairs – and that was also the first area where I was thinking about using BRFplus.
  • Is it possible to unit test BRFplus?
    Of course it's possible to test it with ABAP unit tests. There should also be a standalone unit test framework in BRFplus; if somebody has more information, please add it below in a comment.

Well, there is surely a lot more to discuss, so feel free to use the comment section below. It impatiently awaits your questions, remarks or hints.

Back when I started developing software, life was easy.  I got my stack of 80 column keypunch cards together, walked over to the reader (praying I didn’t drop the stack), loaded them and off the program went.  I know you think I’m joking (or don’t even know what I’m talking about, which is more likely), but sadly, I am not.  The truth is I was taking pre-built stacks and only supplying parameter cards but that’s how I started.  Soon after, thankfully, we moved to green screen terminals, and then to PCs.  Talk about progress!  It was for them anyway.

Seriously though, when I started really developing software, it was mostly client-server stuff (using PowerBuilder) on Windows, and as long as it looked okay and worked on a 1024×768 screen I was okay (don't even get me started on the QA person whose first test case was to sit on the keyboard).  Even for my first few web jobs we only supported Internet Explorer and the same resolution commitment had to be met.

My, how things have changed!  Welcome to the new digital reality, where apps run on everything from watches (and sometimes much smaller in headless fashion) to systems with gigantic screens and everything in between.

Even when you narrow the field to just include tablets and smart phones, there is still an enormity of form factors, screen resolutions and networks that, when matrixed together pose a formidable testing challenge.

This leads us to a pretty fundamental question – How do I validate my mobile app?

In my opinion, the answer to that question is ultimately test automation.  The more tests you automate, the broader the coverage you have and the more efficient the end to end process becomes.  But that, my friends, is the subject for another blog.

Let's answer a simpler question – How do I validate my app on more devices and networks in a way that doesn't require me to own all of them?  One answer is Device Test Clouds.  Device Test Cloud vendors provide cloud-based devices, wired to real networks, that organizations can subscribe to in order to meet their testing needs.

In some ways this is not exactly a new topic.  Milja Gillespie discussed the topic in her blog SAP HANA Cloud Platform 3rd Party Integration Framework: Testing Mobile Apps with Keynote Mobile Testing.  Since we released that blog we have also added Perfecto Mobile (http://www.perfectomobile.com) as a second Device Test Cloud provider.

Up until now, however, use of these services required a Fiori mobile developer to first build his app, then switch context into the Fiori mobile admin console, and then test his app.  Workable, but not optimal.  All that changes with the integration of Device Test Cloud support into the Fiori mobile developer experience.  With this most recent update, developers can build and test their application without switching context, launching the Device Test Cloud provider from within SAP Web IDE.

So how do you set up this awesome new feature?  I'm glad you asked, because I'm about to tell you!

Step 1 – Enable SAP HCP, mobile service for SAP Fiori:  This process is described in Dhimant Patel’s Blog SAP HCP, mobile service for SAP Fiori – Part 1.

Step 2 – (Optional): Enable a Device Test Cloud provider:  Once the service is enabled, click on the Fiori Mobile tile, then click Go to Admin Console.  Once the console loads, click on Account > Device Test Cloud and select one of the listed providers.  Currently two providers are integrated, Perfecto Mobile and Keynote by dynatrace.  You’ll notice that there is space for a custom provider as well.  Any third party that wants to integrate with our interface can provide you with the appropriate Base URL and the integration should work just fine:

Note: Licensing and support in all of these scenarios comes from the vendor, not SAP.

Another Note: In a production environment, this feature is only available to Account administrators.  If you’re not an account admin, well then you’re pretty much stuck.  You’ll have to contact your account admin and ask him to perform these steps.

Still Another Note: In the trial environment, this step can be skipped, as a Developer can enable the Device Test Cloud provider as part of first use.  However, we’re seeing a few integration issues right now, so I’d recommend that you perform this step now so as to avoid any issues later.

Step 3 – Build Your Packaged App: This process is mostly covered in my blog titled The Fiori Mobile Developer Experience has arrived!, so I won't go into the process in all that much detail here.  But suffice it to say it's really two steps.  The first step is to take one of the Fiori templates and create / extend your Fiori web app.  The second step is to create your packaged app using Fiori mobile and the Cloud Build Service.  This is the process of taking an app that runs purely in a web browser and packaging it so that it can execute within the context of a native container.  Once that's done, you're ready to bring the Device Test Cloud into the picture.

Once you’ve completed the build process, you’ll be left looking at a screen with a QR code:

As discussed previously, if you have an iOS or Android device you can scan the barcode, install the app and do all the testing you want.  But testing only on your own device isn’t really full app validation, is it?  What if you don’t have an iOS or Android device, but your app needs to support it?  What if you need to test on multiple networks?  Device Test Cloud integration allows you to validate your app on all the platforms and networks that your app is intended to support.

Step 4: Launch your app on a cloud device.  Accessing the Device Test Cloud is easy.  Simply click on the Close button to close this window (the QR code can be accessed again later at any time by right-clicking on the project name and selecting Fiori Mobile > Show Build Results).  Right-click on the project name, select Fiori Mobile, and you should see two menu items – Launch on Device Cloud (iOS) and Launch on Device Cloud (Android):

These menu items only show up if the app has been signed during the build process, so make sure you sign your app when you build!

If you skipped Step 2 and the Device Test Cloud provider has not been configured, you will be prompted to configure the provider now:

Follow the steps outlined in Step 2, and then come back and launch the Device Test Cloud again.  At this point you’ll see a confirmation message:

Basically this message just says that we’re about to turn you over to a 3rd party, and that as a customer you will be interacting directly with the third party for topics like support and data privacy practices.

A new window will open, and typically the third party will present you with a Welcome screen.

After that, you will need to approve the transfer of user information and the app binary to the third party:

Once you are done with those screens, you will be presented with the third party home screen.  In the case of the Device Test Cloud integration, SAP transfers not only the app, but different information about the app, such as form factors supported and minimum OS level supported so that the third party can intelligently display the correct devices for this app.  In the scenario below, you will notice that only Android devices are displayed, because we are testing an Android app.

Voila!  At this point you can select a device and begin your testing.  How easy was that??

Which third party should you pick? Well, that's up to you.  Since SAP considers this a category (Device Test Cloud providers), you should expect that everyone listed provides similar basic capabilities (real devices on actual networks, support for automation, etc.).  But each vendor has its own unique value proposition, and pricing models and service levels may vary.  You need to do your own due diligence.

Good luck and good testing!

When you use the transition scenario conversion to S/4HANA 1610, a pre-check report has to be run in the Prepare phase to identify the important steps needed to make the system compatible with the conversion process. This report should be run in every system in the landscape (SBX/DEV/QAS/PROD); although they should all be identical, in reality we see that this is often not the case.
You can also refer to System Conversion to S/4HANA 1610 – Part 2 – media download using Maintenance Planner and System Conversion to S/4HANA 1610 – Part 3 – Custom Code Migration Worklist in this series

STEP 1: PRE-CHECKS

In the ERP system, apply Note 2182725 – S4TC Delivery of the SAP S/4HANA System Conversion Checks. Apply the note in client 000; the report also has to be run from client 000.

Note 2182725 requires the 21 notes listed above. Each of them has to be downloaded and applied manually. Once all these notes are applied, please confirm the dialog above with the checkmark.

Here is the list of notes applied in an ERP 6.0/EHP7/HANA system. The dependent notes are shown on the right, and Note 2185960 has 36 dependent notes. None of the notes require manual steps, so it should be click … click … click and go!

Normally this task is just assigned to some Basis guy, and all of the above is treated as just some “functional stuff”. You do need to have a functional person go through the notes, as they provide valuable information. For example, Note 2216943 above (S4TC SRM_SERVER Master Check for S/4 System Conversion Checks) points to Note 2251946 – Informational note about migration of SAP SRM shopping carts to SAP S/4 HANA SSP. This note has an attached guide, sc_mig_guide_v1.pdf, titled “Migration of SAP SRM Shopping Carts during system conversion to SAP S/4HANA”. The issues in your system will be flagged in the pre-check report, as shown later in the blog.

TIP: The report R_S4_PRE_TRANSITION_CHECKS has to be run in client 000.

In client 000, enter transaction SE38 and run program R_S4_PRE_TRANSITION_CHECKS.
Initially the screen will show the field names below, but after the first execution you will see the proper text.

Please see report below:

If required, open a ticket on the corresponding component for the check class mentioned in the error message. For an add-on, contact the add-on vendor.

TIP: In the above part of the report we see some items that are already explained in detail in the Simplification List, for example that the SAP ERP shopping cart is considered obsolete and is replaced by an SAP S/4HANA application. All this is explained in detail in the sc_mig_guide_v1.pdf file attached to the note discussed earlier.

STEP 2: ACCOUNTING PRE-CHECKS

This is a check for Finance as per Note 2333236 – Composite SAP Notes: Enhancements and message texts for the program RASFIN_MIGR_PRECHECK

Also please refer to the comprehensive document FIN_S4_CONV_1610.PDF – Converting Your Accounting Components to SAP S/4HANA attached to Note 2332030 – Conversion of Accounting to SAP S/4HANA.

Apply Note 1939592 – SFIN: Pre-Check Report for migrating to New Asset Accounting to get the latest version of report RASFIN_MIGR_PRECHECK.

TIP: Please note that the report has to be run in every productive client.

In client 100, enter transaction SE38 and execute RASFIN_MIGR_PRECHECK.

You also need to run reports below:

Hope the above information has been useful in your efforts for S/4HANA conversion.

Mahesh Sardesai
Product Expert – S/4HANA RIG (Regional Implementation Group)
SAP Canada

 

The SAP BI 2017 Conference will be in Orlando from February 27-March 2nd and it is considered a premier event for SAP BI professionals, consultants and customers.  The program includes 165 sessions and over 60 new hours of in-depth educational sessions, hands-on labs, ask the expert sessions, interactive discussion forums, networking events and live demos. The event will bring out some of the smartest and most well respected individuals and customers in the SAP business intelligence, analytics and reporting communities.

It's an excellent value for the high-quality education offered.  In 4 days, you'll have access to educational sessions, workshops, how-to clinics, roundtables, keynote presentations directly from SAP about its future direction and the latest trends, private one-on-one consultations with partners and peers, and networking with peers from other companies.

I will be attending this year and look forward to attending several case study customer stories:

  • Case study: How Callaway Golf utilizes embedded SAP BW functionality with SAP ECC on SAP HANA for real-time reporting
  • Case study: British American Tobacco's road map to enable analytics alongside SAP BW 7.5 on SAP HANA and SAP BusinessObjects BI 4.2
  • Case study: How C Spire leverages SAP BusinessObjects Design Studio to build interactive dashboards for multiple user types and multiple data sources
  • Case study: How Newcastle University migrated 9 SAP Business Suite, SAP BW, and SAP Java-based systems to SAP HANA
  • Case study: How Lockheed Martin leverages SAP ONE Support Launchpad and SAP BusinessObjects Lumira to optimize its SAP investment

I also look forward to attending Ingo Hilgefort's sessions:

What you need to know to build, implement, and manage an effective BI road map and strategy that delivers measurable results!

Using SAP BusinessObjects Design Studio for self-service BI

Program Content – There will be several sessions that cover core areas such as BI implementations and upgrades, SAP HANA, SAP BusinessObjects, self-service BI, business analytics, reporting and much more. There is truly something for everyone, and the event is being held alongside HANA 2017, Admin & Infrastructure 2017 and IoT 2017 – all sessions are open to attendees at BI 2017.

Networking – One of the underrated parts of any SAP event, and something I always find most useful, is networking. The BI 2017 event is a great opportunity to network with peers and meet new people. The who's who of the SAP BI industry will be at this event looking to swap war stories and business cards, connect on LinkedIn and Twitter, and share information, as well as discuss the sessions from each day.

New this year – expect to hear and learn more about SAP BusinessObjects Lumira 2.0.  See more of what is new here.

Hopefully I have shared why you should attend BI 2017. If you are an SAP BusinessObjects user or provide technical support for SAP BusinessObjects BI, this is one event that you won't want to miss, so don't forget to register today.

 

A look back at this BI2015 wrap video:

 

 

To start with, I install VMware ESXi on my Intel NUC6i5SYK. Next I could just deploy the HANA 2.0, express edition Server + applications virtual machine, but that is a 12 GB download and does not allow a fully qualified host name for the XS advanced applications.

Therefore, I am using an alternative approach, similar to Upgrade your HANA, express edition to HANA 2, and start with installing the HANA, express edition Server only virtual machine. For this I use the VMware OVF Tool:

And follow the progress in my VMware ESXi Embedded Host Client:

Since this is a small virtual machine, the deployment does not take long, and to prepare for the subsequent Server + applications installation, I increase the memory and add a second drive onto which I will mount the data volumes later:

I then start the virtual machine, secure the installation, and connect to it with HANA Studio to stop the database for uninstallation:

Uninstalling the HANA, express edition is straight forward and does not take long:

Before I restart the system, I change the hostname to a fully qualified one that I had previously reserved with Dyn, to avoid having to edit any hosts files and, especially, to be able to set a signed certificate for my XSA applications later as well:

After a restart, I sFTP the two HANA 2.0, express edition binary installer files for the Server only Installer and the Applications to the /tmp folder of my virtual machine (do NOT use a folder under /root since the installation would fail reproducibly due to an authorization issue!):

Next, I extract both archives:

And start the installation:

As a result, I got a fully working HANA 2.0, express edition Server + applications system that I can connect to with the XS Advanced Command-Line Client and set a signed SSL certificate (please check SAP Note 2243019 – Providing SSL certificates for domains defined in SAP HANA extended application services, advanced model to ensure that your key and certificate are in the required format – if they were not, this is a guide How to convert a certificate into the appropriate format):

Since I have to restart my system for these changes to take effect anyway, I use this opportunity to mount the data volumes to the second virtual disk I had created earlier.

First I check which device my new virtual disk has been mapped to:

Then I create a primary partition:
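
A minimal sketch, assuming the new disk shows up as /dev/sdb (check with lsblk or fdisk -l first):

sudo fdisk /dev/sdb    # in the interactive prompt: n (new), p (primary), accept the defaults, then w (write)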

And format it with xfs:

sudo mkfs.xfs /dev/sdb1

As a result, I can mount my device and move the data volumes onto it (please remember to sudo each command ;):

To automatically mount the device after a restart, I add a respective configuration line to /etc/fstab:
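
The entry looks roughly like the line below; the mount point /hana is only an assumption here and should match wherever you moved the data volumes to.

/dev/sdb1   /hana   xfs   defaults   0 0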

After a restart I am rewarded with a nice disk layout:

And a secure login to my XSA applications without the need for any hosts file modifications:

 

 

Can you give me a short introduction of yourself?

My name is Arndt Sieburg and I’m SAP Solution Architect and Project Manager at SolunITy. My focus areas are SAP BW, Business Intelligence in general and SAP HANA. I love mountains, hiking in summer and skiing in winter.

Why you are here?

I want to learn and try out something new, especially at the Learn, Play & Hack day. In addition, I attend to network with others and to share knowledge and experiences.

What presentation/track was the most impressive/interesting for you? What sort of presentations/tracks you have followed until now?

The learn, play and hack day on Friday was a great challenge to get something new to run. I also liked the geospatial modeling in HANA, the different analytic options in SAP Business Suite and S/4HANA and the business sketching session.

What is your opinion about the IoT sessions and what was your developing task?

Our team had the challenge to use FitBit heart beat sensor data in order to control a drone. My task was to develop the communication for the drone. Finally we could use only SMS to send data to the drone and we got a working prototype with an Android smartphone.

How did you learn about the SAP Community and what made you want to join?

I heard about SAP Inside Track from the SAP Stammtisch in Munich several years ago and I attended a SIT event in Hamburg too. The event concept is great for networking, knowledge sharing and learning something new beyond my current scope of work.

Have you experienced the Community helping you to drive/to support some of your topics?

The community network motivated me to dig into new areas like SAP HANA development. I expect more of a long-term benefit from participating; an immediate business benefit is not important.

Finally, any advice for us?

  • The new community platform sap.com makes it difficult to find content.
  • I love the concept of combining a hands-on day with the networking event and I will definitely attend several SITs per year.

In SAP HANA 2.0 SPS 00, the dynamic tiering option supports HANA system replication to protect your production system against unexpected downtime due to disasters.

What is system replication and how does it work? System replication is a mirroring configuration where you set up a secondary system that is an exact copy of the main, or primary, system.

The primary system sends redo log buffers to the secondary system, where they are persisted  and replayed to build a shadow database.

The secondary system is passive until the primary system becomes unavailable and you perform a takeover. At this point, the secondary system becomes a standalone server using the data from the redo log buffers.

And the benefit?  System replication supports rapid failover for planned downtime or to address local faults or storage corruption.
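
As a hedged sketch (not the official procedure; the exact parameters vary by HANA revision and are documented in the SAP HANA Administration Guide), enabling replication and performing a takeover with hdbnsutil looks roughly like this:

# on the primary system (as the <sid>adm user):
hdbnsutil -sr_enable --name=SiteA
# on the secondary system (installed identically and stopped), register it against the primary:
hdbnsutil -sr_register --remoteHost=<primary-host> --remoteInstance=<instance-nr> --replicationMode=sync --name=SiteB
# on the secondary, when the primary becomes unavailable, perform the takeover:
hdbnsutil -sr_takeover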

For a thorough background in HANA system replication, see the resources in blog post HANA System Replication – Take-over process by Frank Bannert.

You’ll need to be aware that there are some behavior differences when your landscape includes dynamic tiering.  For example:

  • You can only enable 2-tier synchronous mode replication. In 2-tier replication, there is only one secondary system per primary system. Synchronous mode means that the primary system waits for confirmation that the log is persisted in the secondary before committing a transaction.
  • Failback always requires full initialization (full data shipping) for the dynamic tiering service. The landscape is not available for takeover (not replication safe) until initialization is complete for all services.
  • Some downtime is required for software upgrade.

See SAP Note 2356851 for a complete description of behavior differences between DT and HANA and a list of supported configuration parameters.

Want to learn more?  Check this blog in March for a link to a video tutorial for setting up system replication and performing takeover in a landscape with dynamic tiering.