
In general nearly everyone knows that with new installations based on NW 7.50 there are no cluster tables anymore. When you migrate to HANA, one prerequisite is that all cluster and pool tables have to be declustered/depooled.
But is this really correct? In my last projects there were a lot of issues regarding cluster and pool tables. SUM and SWPM did not work as expected, so some data was not migrated / depooled / declustered automatically because of bugs.
So I want to shed some light on this topic to help you understand the usage, the procedure and the checks.

  1. Clarification Cluster and Pool tables
  2. Check Pool and Cluster tables in the system
  3. Procedure of depooling and declustering using R3load
  4. Splitting behaviour
  5. ABAP dictionary adaption and checks
  6. HANA checks
  7. Technical takeaways
  8. SAP notes

 

1) Clarification Cluster and Pool tables

First of all, what are cluster and pool tables in detail? (SAP details regarding releases are in note 1892354.)

  • Cluster tables combine information from several tables that logically belong together. They allow efficient access to a whole application object without joins on database level. This means they can only be read logically through the DBSL; a plain native SQL statement will not return correct data by default (see the sketch below this list).
  • Pool tables combine a large number of individual small tables into one database table, which addressed problems with large numbers of individual database objects. These tables can be read by native SQL by default without any trouble.
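
To illustrate the first point, here is a minimal ABAP sketch (hypothetical snippet for a system on which CDCLS still exists as a cluster): Open SQL on the logical table CDPOS goes through the DBSL, which unpacks the cluster transparently, while native SQL via ADBC on the physical cluster CDCLS only returns the raw, compressed records.

" Open SQL on the logical cluster table CDPOS: the DBSL resolves the CDCLS cluster transparently
SELECT objectclas, objectid, changenr
  FROM cdpos
  INTO TABLE @DATA(lt_cdpos)
  UP TO 10 ROWS.

" Native SQL (ADBC) on the physical cluster CDCLS: only raw records with compressed VARDATA come back
DATA(lo_result) = NEW cl_sql_statement( )->execute_query( `SELECT * FROM CDCLS` ).
lo_result->close( ).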

Do you know how many cluster/pool tables exist in an ERP EhP8 system? Have a look into table DD06L. In my system I count 162:
Cluster: 53
Pool: 109


2) Check Pool and Cluster tables in the system

First of all, the claim that HANA can't run with cluster and pool tables is wrong. HANA can handle them, but it is not wise to keep them, for technical reasons.
There are still some cluster / pool tables left after the migration by default!
Check out your systems and the note 1849890:
“There are pooled tables that cannot be converted into transparent tables for technical reasons. Keep these tables as pooled tables. This might apply to the following tables:
GLP1, GLP2, GLPPC, GLS1, GLS2, GLSPC, GLT1, GLT2, JVS1, T157T. It might also apply to all pooled tables with the prefix M_.”

Check for yourself in your systems (DBACOCKPIT -> Diagnostics -> SQL Editor, HANA Studio or any other SQL client):

SELECT * FROM M_TABLES where TABLE_NAME in (SELECT TABNAME FROM DD02L WHERE TABCLASS = 'CLUSTER') ORDER BY record_count DESC;
SELECT * FROM M_TABLES where TABLE_NAME in (SELECT TABNAME FROM DD02L WHERE TABCLASS = 'POOL') ORDER BY record_count DESC;

#All in one:
SELECT * FROM M_TABLES where TABLE_NAME in (SELECT TABNAME FROM DD02L WHERE TABCLASS = 'POOL' OR TABCLASS = 'CLUSTER') ORDER BY record_count DESC;

So which tables should be declustered / depooled after the migration, and what exactly happens to the data and the ABAP dictionary?
OK, let's start with the tables with such a classification that currently exist in the system. This can be answered by a query on table DD06L; the DD06L subquery works on anyDB _AND_ HANA, while the surrounding M_TABLES view is HANA-specific:

 

SELECT * FROM M_TABLES where TABLE_NAME in (SELECT SQLTAB FROM DD06L WHERE SQLCLASS = 'POOL') ORDER BY record_count DESC
SELECT * FROM M_TABLES where TABLE_NAME in (SELECT SQLTAB FROM DD06L WHERE SQLCLASS = 'CLUSTER') ORDER BY record_count DESC

#All in one:
SELECT * FROM M_TABLES where TABLE_NAME in (SELECT SQLTAB FROM DD06L WHERE SQLCLASS = 'CLUSTER' OR SQLCLASS = 'POOL') ORDER BY record_count DESC

 


 

3) Procedure of depooling and declustering using R3load

The second step, after we have identified the filled tables, is to understand what happens during declustering/depooling.
You can find out which transparent table belongs to which cluster/pool table by querying DD02L:

SELECT TABNAME, TABCLASS, SQLTAB FROM DD02L WHERE TABCLASS = 'POOL' OR TABCLASS = 'CLUSTER';

Where TABNAME is the transparent table and SQLTAB is the cluster/pool table.

Let's take as an example a well-known cluster called CDCLS. This is normally one of the biggest in an ERP system.
SE11 layout:

This table will be split into two transparent tables called CDPOS and PCDPOS. This can be done in the ABAP layer as described in note 2227432, or with SWPM or SUM DMO as part of a migration procedure => even after a failed migration it can still be done afterwards using note 2054699.

For small tables you can use the ABAP approach. For bigger clusters like CDCLS I strongly recommend the R3load procedure because of the size and duration. R3load is called with the option decluster=true.

This results in different handling:

  • for pool tables it is easy to select the data, so there is no difference to other tables
  • for cluster tables a logical structure mapping will be created.

I won't go too deep, because most migrations today happen via SUM, where everything is done automatically, but here are a few words to understand the high level:

#Declustering will happen with this option in the properties file
export_monitor_cmd.properties:
decluster=true

Packages (depending on procedure SWPM/SUM):

  • SAPCLUST.STR => Cluster table structures
  • SAPCDCLS.STR.logical => logical structure of CDCLS

 

tab: PCDPOS
att: SDOCU 2 ?N Tc all PCDPOS~0 SDOCU 2
ref: CDCLS
fld: MANDANT CLNT 3 0 0 not_null 1
fld: OBJECTCLAS CHAR 15 0 0 not_null 2
fld: OBJECTID CHAR 90 0 0 not_null 3
fld: CHANGENR CHAR 10 0 0 not_null 4
fld: TABNAME CHAR 30 0 0 not_null 5
fld: TABKEY CHAR 70 0 0 not_null 6
fld: FNAME CHAR 30 0 0 not_null 7
fld: CHNGIND CHAR 1 0 0 not_null 8
fld: TEXT_CASE CHAR 1 0 0 not_null 0
fld: UNIT_OLD UNIT 3 0 0 not_null 0
fld: UNIT_NEW UNIT 3 0 0 not_null 0
fld: CUKY_OLD CUKY 5 0 0 not_null 0
fld: CUKY_NEW CUKY 5 0 0 not_null 0
fld: VALUE_NEW CHAR 254 0 0 not_null 0
fld: VALUE_OLD CHAR 254 0 0 not_null 0

tab: CDPOS
att: SDOCU 6 ?N Tc all CDPOS~0 SDOCU 6
ref: CDCLS
fld: MANDANT CLNT 3 0 0 not_null 1
fld: OBJECTCLAS CHAR 15 0 0 not_null 2
fld: OBJECTID CHAR 90 0 0 not_null 3
fld: CHANGENR CHAR 10 0 0 not_null 4
fld: TABNAME CHAR 30 0 0 not_null 5
fld: TABKEY CHAR 70 0 0 not_null 6
fld: FNAME CHAR 30 0 0 not_null 7
fld: CHNGIND CHAR 1 0 0 not_null 8
fld: TEXT_CASE CHAR 1 0 0 not_null 0
fld: UNIT_OLD UNIT 3 0 0 not_null 0
fld: UNIT_NEW UNIT 3 0 0 not_null 0
fld: CUKY_OLD CUKY 5 0 0 not_null 0
fld: CUKY_NEW CUKY 5 0 0 not_null 0
fld: VALUE_NEW CHAR 254 0 0 not_null 0
fld: VALUE_OLD CHAR 254 0 0 not_null 0
fld: _DATAAGING DATS 8 0 0 not_null 0

 


4) Splitting behaviour

Be careful if you have split your cluster tables with SWPM and R3ta. This results in a behaviour which differs from transparent tables:
CDCLS-1*
SAPCDCLS-1*
SAPCDCLS.STR.logical

The SAPCDCLS* files are the correct files to search for errors. They are needed and automatically created by the procedure for the logical mapping. Don't get irritated by the import/export logs or by things like the migration time stats from migtime.

 


 

5) ABAP dictionary adaption and checks

OK, now we know that declustering and depooling happen during the export. But as you know, the ABAP dictionary also has to be adjusted; otherwise the new transparent tables are not known and can't be used by the ABAP stack.
For this the following reports exist:

  • RUTCSADAPT => adjust cluster dictionary structures
  • RUTPOADAPT => adjust pool dictionary structures

The reports are called automatically by SWPM (if you check declustering/depooling in the dialog phase) and by SUM. They only adjust the dictionary; they won't migrate or decluster any data!

This can only happen if the cluster tables are empty, which should be the result of the declustering/depooling, because all data was imported into the transparent tables.
To check whether all data was transferred successfully, execute report SHDB_MIGRATION_CHECK (note 1785060). This report should always be part of your cutover procedure as a post-processing task.

So if this report finishes without errors AND warnings you should be fine, shouldn't you? Unfortunately not, because the report does not check all cluster / pool tables. Some of them are excluded for a reason (see 1849890), some are simply not checked, for example pool tables like KAPOL, KBPOL and UTAB. There is no official documentation for their existence as pool tables on HANA.
They are getting depooled but are not checked by the report. Maybe SAP will adjust the documentation and the check report in the future.

 


 

6) HANA checks

There is another check on HANA with the SQL statements from the SQL collection attached to note 1969700.
You can use the Minichecks (HANA_Configuration_MiniChecks*) or HANA_Tables_SpecialTables.
The statements check whether there are still any records in the tables CDCLS, EDI40, KAPOL, KOCLU and RFBLG.
Other ones like KBPOL or UTAB are not checked in the current version.
I currently have several customers with different pool tables which were correctly split and filled into transparent tables, but where some entries exist in the new tables AND in the old original pool table. The dictionary structures are correct, the transparent tables are in use and there were no migration errors. OSS messages about why this happened are still in process... I assume that the procedure is buggy or there are technical reasons which are not officially documented.
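
A quick manual cross-check is possible with the same M_TABLES view used in section 2 (a sketch; extend the table list as needed):

SELECT TABLE_NAME, RECORD_COUNT
  FROM M_TABLES
 WHERE TABLE_NAME IN ('CDCLS', 'EDI40', 'KAPOL', 'KBPOL', 'KOCLU', 'RFBLG', 'UTAB')
 ORDER BY RECORD_COUNT DESC;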

 


 

7) Technical takeaways

  • difference between cluster and pool tables
  • design cluster and pool tables on HANA
  • depooling and declustering procedure
  • ABAP adaption with reports RUTPOADAPT and RUTCSADAPT
  • check report SHDB_MIGRATION_CHECK
  • HANA statements for checking cluster/pool tables: HANA_Configuration_MiniChecks* and HANA_Tables_SpecialTables

8) SAP notes

1849890 – Changing tables from pooled to transparent

1785060 – Recommendations for performing the migration to SAP HANA

1892354 – SAP Strategy for Cluster and Pool Tables

2054699 – Subsequent declustering of an SAP system on SAP HANA using R3load

2227432 – How to: Declustering and depooling with NW 7.4 SP03 and higher on databases other than SAP HANA

1784377 – Checking pool tables and cluster tables

2634739 – How to get a list about the existing pool and cluster tables in the system?

1920165 – Downport: Find access to physical pool/cluster tables

1897665 – R3load: declustering support for tables in / namespace

I got a blog-worthy surprise when I did a quick performance test on two different ways to use the new-ish (7.4) table expressions to perform a search on an internal table.

My scenario involved internal table searches where the values would be missing a lot of the time. The unexpected result was that when a row can’t be found, using table expressions with exceptions is ten times slower on average than all other lookup methods, including the good ol’ fashioned READ TABLE with SY-SUBRC.

Quick primer for those still getting accustomed to 7.4 ABAP (feel free to skip to the results):

With table expressions, finding an entry in a table is really neat. I have a table that includes a field “ID”. So the following READ TABLE can be rewritten using a table expression (the bit in square brackets):

READ TABLE itab INTO row WITH KEY id = find_id.
"Equivalent to:
row = itab[ id = find_id ].

If the matching entry cannot be found, you need to catch the exception:

  TRY.
      row = itab[ id = find_id ].
    CATCH cx_sy_itab_line_not_found.
      ...
  ENDTRY.

Another alternative is to use a VALUE constructor expression where you can add the keyword OPTIONAL (or DEFAULT) to just initialize any non-found values.

row = VALUE #( itab[ id = find_id ] OPTIONAL ).
IF row IS INITIAL.
  ...

This particular usage of VALUE may seem a little awkward. I mean, why use x = VALUE #( y ) when you can just write x = y? The VALUE constructor’s sole purpose here is the OPTIONAL bit that lets us do away with the exception.

As I was working on a performance-sensitive component, I tested it to see what performs best.

Results:

For completeness I also added two other ways to look for an entry in a table: the trusty READ TABLE … WITH KEY, and using line_exists( ). I tested a million failed lookups on a 7.50 HANA system. Here are the results:

READ TABLE itab INTO row WITH KEY id = find_id.    :   619,614
IF line_exists( itab[ id = find_id ] ).            :   539,158
row = VALUE #( itab[ id = find_id ] OPTIONAL ).    :   889,382
TRY. row = itab[ id = find_id ].
  CATCH cx_sy_itab_line_not_found.                 : 6,228,768

I did not expect the last one at all.

So the take-away here for me is that a TRY-CATCH may be easier to read, but should not be used in performance-sensitive code unless you expect the values to be found most of the time. I’m happy to sacrifice a little performance for readability, but this is a significant impact.

I suspect that this applies to CATCH blocks in general, but that’s another analysis for another day.
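
For reference, this kind of measurement can be reproduced with a loop along these lines (a minimal sketch, not the original test code; the table, its size and the field name are made up):

TYPES: BEGIN OF ty_row,
         id TYPE i,
       END OF ty_row,
       tt_rows TYPE STANDARD TABLE OF ty_row WITH DEFAULT KEY.

DATA(itab) = VALUE tt_rows( FOR i = 1 UNTIL i > 1000 ( id = i ) ).
DATA: row TYPE ty_row,
      t0  TYPE i,
      t1  TYPE i.

GET RUN TIME FIELD t0.
DO 1000000 TIMES.
  TRY.
      row = itab[ id = 0 ].   " id 0 never exists, so the exception is always raised
    CATCH cx_sy_itab_line_not_found.
  ENDTRY.
ENDDO.
GET RUN TIME FIELD t1.
WRITE: / |TRY/CATCH variant: { t1 - t0 } microseconds|.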

For comparison, I also re-ran the same but this time with a lookup value that did exist. Two things happened:

  • the exception was no longer an influence (to be expected)
  • line_exists( ) became a poor choice in my scenario, because we need to double up the search in order to also read the record:
  IF line_exists( itab[ id = find_id ] ).
    row = itab[ id = find_id ].

To summarise:

  • If you don’t need the data, line_exists( ) is fastest.
  • If performance is number 1 priority and you need the data, READ TABLE is fastest.
  • For compact and/or readable code, use table expressions.
    (Yes, I know, ‘new’ ABAP can be used to make code either more readable or more cryptic, but that’s another discussion)
  • TRY-CATCH with table expressions can be a useful way to structure your code (e.g. to use a single catch handler for multiple expressions), but be mindful of the expected failure rate and performance-criticality of the component. If we’re talking anything less than thousands in a short space of time then you can safely ignore the impact.


Jerry's previous article, "Enterprise Digital Transformation and SAP Cloud Platform", introduced the important role SAP Cloud Platform plays in the digital transformation of enterprises. As a Platform-as-a-Service (PaaS) solution, SAP Cloud Platform provides a high degree of virtualization of CPU compute resources, storage, network, databases and so on, so that users can consume these resources on demand. From an implementation point of view, however, these virtualized resources ultimately still have to run on physical servers. The geographical locations where these physical servers reside are called data centers in the world of cloud computing.
The SAP website has a dedicated page describing the SAP data center strategy.
The picture below shows the global distribution of SAP data centers as published on the SAP website as of June 20, 2018.
The data centers in the picture correspond to the list we see on the Regions tab of the SAP Cloud Platform Cockpit.
Jerry was quite curious what an SAP data center actually looks like in the real world; I finally found a description at this link on the SAP website.

The components inside an SAP data center

The diagram below shows the layout and the components of the SAP data center located in St. Leon-Rot, Germany.
We use the Cloud Service Level Agreement (SLA) to measure the high availability of a cloud service. For example, an SLA of 99.99% means the service may be unavailable for only 0.01% of the time per month, i.e. a little over 4 minutes, or, calculated per year, roughly 53 minutes of downtime at most.
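A quick check of those numbers (assuming a 30-day month):

\[
(1 - 0.9999) \times 30 \times 24 \times 60\ \text{min} \approx 4.3\ \text{min per month}, \qquad
(1 - 0.9999) \times 365 \times 24 \times 60\ \text{min} \approx 52.6\ \text{min per year}
\]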
To keep a cloud service highly available, an uninterrupted power supply and proper cooling of the hardware are indispensable for a data center. The data center is connected to two independent power grids operated by the local utility company; if one of them fails, the other keeps supplying power. Each data center also houses 13 diesel generators (the "Diesel generators" in the diagram above) with a combined output of 29 megawatts to secure the power supply in an emergency. The local utility and the diesel generators deliver power at 20 kilovolts, which the transformers shown above step down to 380 volts. In addition, battery blocks can supply power for 15 minutes, bridging the time window between a power failure and the start of the diesel generators. The maximum charge capacity of the batteries is checked regularly and they are replaced if it has dropped significantly.
The heat produced by high-performance servers is considerable, so cooling is a perennial topic in every data center. The placement of cabinets and racks in the server rooms follows strict rules; there is even a set of international standards for it, usually prefixed ANSI/TIA (American National Standards Institute / Telecommunications Industry Association). The ANSI/TIA/EIA-606-A standard, for example, requires every 600 mm raised-floor tile in a data center room to be labeled with two letters or Arabic numerals.
For the front clearance, rear clearance and height of cabinet/rack rows, the ANSI/TIA/EIA standards specify the following:
  • the rear clearance between cabinet/rack rows must be at least 0.6 m (2 ft), with 1 m (3 ft) recommended;
  • the maximum allowed cabinet/rack height is 2.4 m (8 ft), with a recommended maximum of 2.1 m (7 ft);
  • the front clearance between cabinet/rack rows must be at least 1 m (3 ft), with 1.2 m (4 ft) recommended when deeper equipment is installed in the racks/cabinets.
Here are some real-world pictures found on the internet:
Back to the layout diagram of the SAP data center in St. Leon-Rot: the following three components are all related to cooling:
  • Cooling water
  • Turbo-cooling
  • Heat exchanger
When the outside temperature is below 12-13 °C, the air-conditioning units simply exhaust the hot air produced by the servers to the outside and replace it with cold outside air. When the outside temperature rises above 13 °C, the air-conditioning system switches to water cooling (the same principle PC enthusiasts used around the turn of the century to cool overclocked CPUs). The Rot data center has six turbo-cooling units in total; only some of them run at any given time while the others stay on standby. If a running cooling unit fails, a reserve of 300,000 liters of ice water below 4 °C covers the cooling demand until the standby units have started up.
The cooling units themselves also need to be cooled. The Rot data center has 18 heat exchangers that release the heat produced by the cooling units to the outside. In summer, when the outside temperature exceeds 26 °C, the heat exchangers additionally use evaporative water cooling to achieve a better cooling effect. The Rot data center has its own dedicated water plant to secure the water supply for the heat exchangers in summer, and the local municipal water system reserves part of its capacity as a backup for this dedicated plant.
Hopefully the 3D illustration below gives you a more intuitive picture of what a data center looks like inside.

Security of the SAP data centers

Physical site security: SAP data centers are monitored around the clock (7×24). So-called man-trap rooms ensure that only authorized persons can enter a security-controlled area. Developers can think of a man trap as the authorization check performed before business logic is executed in code: essentially it is a small room in front of a secured area, a deliberately designed "trap", in which the security system verifies each visitor; any attempt at unauthorized entry triggers an alarm. Access to high-security areas requires biometric authentication.
Data security: on the one hand, the data center's intrusion detection system monitors network traffic in real time, inspecting incoming data and identifying suspicious activity; on the other hand, firewalls from different vendors protect the data. In addition, backup files and data are exchanged with customers in encrypted form or transmitted over secure fiber-optic cables.
Hardware security: all virtual and physical servers, SAP HANA databases, storage units and networks have access to a pool of physical hardware. If a single component fails, its workload can be shifted immediately to other components without affecting system stability. If hardware fails because of a fire, data can be restored from the backup systems.
Fire protection: with so many servers packed together, what happens if a fire breaks out? The data center is divided into several fire compartments. Thousands of fire detectors and aspirating smoke detectors monitor all server rooms. As soon as a detector senses the characteristic gases emitted by overheating electronic components, it raises an early warning. In case of fire, the fire department is alerted automatically and clean INERGEN extinguishing gas is released to put the fire out. The pressure of the extinguishing gas is also checked regularly to make sure it stays within specification.
Building security: the data center is built from 100,000 tons of reinforced concrete resting on 480 concrete pillars, each reaching 16 meters into the ground. The outer walls, also reinforced concrete, are up to 30 centimeters thick, and the server rooms are surrounded by concrete walls on three sides. This design effectively protects the data center against natural and man-made hazards, from storms up to the crash of a small aircraft.
Data privacy: customer data is processed only within the scope authorized by the customer and is never forwarded to third parties.
Data backup: backups are made by disk-to-disk copy, which allows them to be created and restored quickly. In addition to the daily full backup, several interim backup versions are created at different times of the day and, like the full backups, archived on separate media.

Green operations of the SAP data centers

When SAP built its first data center in St. Leon-Rot, Germany, the efficient use of energy was already a top priority. Thanks to the introduction of green electricity and the efficient use of other forms of energy, SAP's data centers around the globe achieved zero greenhouse gas emissions in 2014. Rhineland, Germany's second-largest TÜV (Technischer Überwachungs-Verein, Technical Inspection Association) testing organization, gave the SAP data center the highest possible rating in its annual audit: Premium.
As early as 2014, SAP announced that its data centers run on 100% renewable energy.

SAP Cloud Platform programming environments

Back to the SAP Cloud Platform Cockpit: every region on the Regions tab represents one data center somewhere on the globe.
You may have noticed that for some data centers the infrastructure provider is Amazon, Microsoft or Google, while for others SAP itself plays the role of the infrastructure provider. Why is that?
The difference comes from the two development environments of SAP Cloud Platform, Neo and Cloud Foundry. For the Neo environment, SAP provides both the infrastructure (the I in IaaS) and the platform (the P in PaaS). For the Cloud Foundry environment, as the figure above shows, the underlying infrastructure is built by third-party data center providers, and SAP only builds and operates the platform layer, i.e. Cloud Foundry itself.
Jerry will cover the differences between the Neo and Cloud Foundry environments and their respective use cases in a future article.

Conversational User Interfaces (CUIs) are the new UIs. We are all aware of the rise of intelligent assistants in the market, and they are becoming part of our day-to-day conversations. As consumers we are beginning to use them to interact with enterprise software too – look up clothing trends and place an order in your favourite retail store. Analysts are predicting this to be the next biggest paradigm shift in information technology.

Most of you would be aware that SAP acquired a bot building platform called Recast.AI early this year. It has been integrated into the SAP portfolio and is now generally available as SAP Conversational AI.

Some of the biggest strengths of SAP Conversational AI are its Natural Language Processing (NLP) capabilities, off-the-shelf bots which are pre-trained for each industry, and integration with multiple channels like Slack, Skype, Messenger etc. I have been working with my colleague Joni Liu (chatbot expert) on how to integrate the chatbot with an application on SAP Cloud Platform. Since the Fiori Launchpad is the central point of accessing business information, we thought we would try and integrate the chatbot with the Launchpad. Below are the steps we followed, and you can try this too using the trial account.

In order to enhance the Fiori Launchpad on SAP Cloud Platform, you need to build shell plugins. If you would like to know more about shell plugins and how to create your own Fiori Launchpad, I highly recommend going through this openSAP course, where this is covered in detail.

Before we start configuring things in the Cloud Platform, you obviously need to create your chatbot first. You can register for a free trial account on the Recast.AI website. There are plenty of tutorials which can help you get started.

In the screen below, I have created a chatbot for supplier interaction and added a few intents to support the interaction with suppliers.

In the “Connect” tab, you can configure the chatbot to be embedded within other channels like Skype, Twitter etc. For this demo, I am selecting Webchat as the bot needs to be embedded within a web page.

Create a new Webchat configuration and select the color scheme, header logo and title which you would like to show within the Fiori Launchpad.

You also have the option to customize the bot/user picture along with the welcome message.

At the bottom, you can provide further customizations and give a name to the webchat channel.

When you click the “Create” button, you get a webchat script. Copy it for use within the shell plugin which will be created later. In particular, you will need the Channel ID and Token details.

Now it's time to switch to your Cloud Platform trial account. Launch the “SAP Web IDE Full-Stack” service. We no longer use the old “Web IDE” service, based on the announcement here.

 

Enable the feature “SAP Fiori Launchpad Extensibility” and restart the IDE.

Create a new project based on a template

Select “SAP Fiori Launchpad Plugin”

Provide a project name

In the template customization, provide the plugin ID and a title. Since we don't need sample code for the header/footer, leave the checkboxes unchecked.

In the Component.js file, add a line within the init() function to invoke the function renderRecastChatbot(). Below is the definition of renderRecastChatbot(). Note that the channelId and token values are the ones copied earlier from Recast.AI.

renderRecastChatbot: function() {
			// inject the Recast.AI webchat script only once
			if (!document.getElementById("recast-webchat")) {
				var s = document.createElement("script");
				s.setAttribute("id", "recast-webchat");
				s.setAttribute("src", "https://cdn.recast.ai/webchat/webchat.js");
				// channel ID and token copied earlier from the Recast.AI Webchat channel
				s.setAttribute("channelId", "49b174d8-1246-4721-ae8c-c84104a28fbf");
				s.setAttribute("token", "358a49c73ddfbba38ebbb36c78e5253b");
				document.body.appendChild(s);
			}
},

This is how the Component.js file looks after making the changes.
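
A minimal sketch of the corresponding init() hook (assuming the generated Component extends sap.ui.core.UIComponent, as the plugin template does):

init: function () {
			// run the generated initialization first, then inject the webchat
			sap.ui.core.UIComponent.prototype.init.apply(this, arguments);
			this.renderRecastChatbot();
},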

You don’t need to change any other file. You can deploy the application to your SAP Cloud Platform account after saving the changes.

The next step is to prepare a Fiori Launchpad site. In the Fiori Configuration Cockpit, add a new app based on the app type “Shell Plugin”. Below are the values which I provided based on my application.

Property            Value
App Type            Shell Plugin
Shell Plugin Type   Custom
Component URL       /
SAPUI5 Component    com.sap.myushellplugin
HTML5 App name      myshellplugin

Below is the configuration of my Shell plugin app

Publish your site and test the Launchpad. You should be able to see the chatbot in the bottom right-hand corner with the onboarding message.

You can click on it and start interacting with it. In this case, since this is a supplier portal which I have set up, the chatbot can assist the supplier with queries around the status of orders.

 

 

The Case for BI

When an SMB wants to gain further insight into the company's data with ease, one of the more powerful options is SAP Analytics Cloud, especially when comparing the value gained with the low monthly user license costs.

SAP Analytics helps companies make better decisions. We believe that digital transformation is changing the nature of decision making in the enterprise.

The evolution in decision-making mirrors an evolution in data and analytics. Democratization and visualization made it possible to decentralize decision-making. But companies are now finding that access and simplicity aren't enough to support mission-critical business processes. They need to be able to analyze the past while planning and predicting the future, moment by moment in the present.

SAP’s BI strategy update, announced in February, means that all future innovation for data discovery use cases will be made in SAP Analytics Cloud. The following FAQ will provide further insight.

SAP Analytics Cloud is a single experience for decision making that allows users to discover, visualize, plan and predict, all in one place, giving everyone, whether in front of the customer or in the boardroom, the power to find new insights and take action.

SAP Analytics Cloud for Business Intelligence

Connect and prepare your data

  • Prepare and model cloud and on-premise data from your browser
  • Data connectors from SAP (BW, HANA, SAP Cloud Platform, Universes) and non-SAP sources
  • Linking aggregated data
  • Cross dataset calculations

Visualize and build your views

  • Design, visualize, and create your stories live
  • Add simple location analytics into your visualizations
  • Personalize your own dashboard views

Share with your team

  • Simply collaborate with your team
  • Take action on your data
  • Use permissions to control who can view and edit your analytics

 

 

SAP Business One out-of-the-box Analytics vs. SAP Analytics Cloud

SAP Business One Analytics                   SAP Analytics Cloud
Operational/Tactical Analytics               Strategic Analytics
BI and some Predictive capabilities          BI, Predictive & Planning capabilities
Empowers Key User, Facilitates End User      Empowers the End User
Limited self-service & flexibility           Self-service & flexibility for everyone
Analytics within the context of B1           Agnostic platform open to all applications & data
On-premise or Managed cloud                  Pure Cloud solution (public or private)
No built-in geospatial support               Built-in geospatial visualization included (Esri)

Connecting SAP Analytics Cloud to SAP Business One

The following illustrates all the possible options of live data connections for connecting SAC to other systems (the SAP Business One connection will be via SAP HANA).

There are 2 main options for Live Data Connections to SAP HANA (more can be found here):

  • Direct Connection (using CORS)
    • You don’t want to set up a reverse proxy on your local network and put SAP Analytics Cloud behind it.
    • You are not connecting to an SAP Cloud Platform system.
  • Path Connection (using Reverse Proxy to map into the HANA app framework / XS engine)
    • You already have a reverse proxy set up on your local network and must access SAP Analytics Cloud through it.
    • You do not want to enable CORS support on your SAP HANA system.
    • You do not want to configure CORS on multiple systems. You can add multiple systems as paths instead of enabling CORS on every system.
    • You are not connecting to an SAP Cloud Platform system.

Connecting SAP Analytics Cloud to SAP Business One using a reverse proxy

Configure the reverse proxy
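
For the path option, the mapping in the reverse proxy could look roughly like this (a sketch for Apache httpd only; host name and instance number are placeholders for your own SAP Business One HANA / XS endpoint, not values from this post):

# map a path on the proxy host to the XS engine of the B1 HANA system (HTTPS port 43<instance>)
ProxyPass        /b1hana/  https://<b1-hana-host>:43<instance>/
ProxyPassReverse /b1hana/  https://<b1-hana-host>:43<instance>/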

SAP Analytics Cloud connection configuration

 

Hi!

SAP Agile Data Preparation is a great tool for business users in the context of self-service analytics. And for IT professionals/developers it is suitable for prototyping or even for quick development of data combinations or transformations.

Based on this example…

https://help.sap.com/viewer/81fc9ae476d74e3f9ecad81de1a5f085/1.0.22/en-US/98bc010e33de479cb44bcc50dbbc9978.html

I recorded a 7-minute video in which I show the following:

a) Get data from an Excel file

b) Unpivot the data

c) Add a calculated column

d) Publish the result as a HANA calculation view

e) Access the view from Analysis for Office

 

Are you using Agile data preparation? In which context? Let me know your thoughts.

Cheers,

Miguel

When you hear the words “Central Finance” – what is your perception? How would you define it?

Coming from the ASUG Annual Conference, and specifically from a pre-conference workshop, a few misconceptions surfaced about what Central Finance is, and what it is not. So let's talk about the terminology, and then take a look at the options for getting there.

Terminology – What’s What?

First, the definitions, which are also illustrated in the diagram below:

  • SAP S/4HANA Finance – the financial software that is a part of the SAP S/4HANA Enterprise Management solution, the next generation business suite. The functionality encompasses all capabilities of the Finance portfolio, including Financial Planning and Analysis; Accounting and Financial Close; Finance Operations; Treasury Management; Enterprise Risk and Compliance; and Cybersecurity and Data Protection.
  • Central Finance – a deployment mode that allows organizations to move to SAP S/4HANA Finance by bringing together Finance information from multiple back-ends, whether they are SAP or non-SAP back-ends. That being said – from a systems perspective, Central Finance IS a SAP S/4HANA Finance system. It is simply a deployment mode of how SAP S/4HANA Finance is implemented in relation to other systems that may already be in place.
  • SAP S/4HANA for central finance foundation – the formal name for the deployment mode of Central Finance, which from a systems perspective, is specifically the tool within SAP S/4HANA Finance that allows organizations to map the data between their back-ends (SAP and non-SAP) and the instance of SAP S/4HANA Finance (Central Finance) that centralizes the information from those back-ends.

Options of Moving to Central Finance

Unlike in the nineties, “big bang” implementations of new systems and scenarios are becoming increasingly rare. And while there are several options for moving from multiple back-ends into one instance of SAP S/4HANA Finance, it is certainly not an “all or nothing” scenario.

Keep in mind that with each option, you need to consider where your system of record lies for financial disclosures, meaning deciding where the information required for financial disclosures is stored.

Consider the following options:

  • Moving one ERP at a time to Central Finance. In this scenario, for a period-end close, the system of record remains in each source system until all ERP systems have been mapped to Central Finance.
  • Moving one Finance process at a time to Central Finance. You may move just accounts receivable into Central Finance, or just asset accounting. Here again, for the period-end close, the system of record remains in the source systems until all processes relevant to the close are mapped to Central Finance.
  • Moving one legal entity at a time. You can also move one legal entity or one business unit at a time into Central Finance. The same is true here for the period-end close: the system of record remains in the source systems until all legal entities relevant to the close are mapped to Central Finance.
  • Moving all Finance processes in all back-end systems to Central Finance. While this is a “big bang” approach for Finance, it does not affect any of the operations and logistics processes that are still carried out in the various back-ends. In this scenario, Central Finance can become the system of record, which is especially valuable if there are consolidations processes and intercompany reconciliations that are carried out here.

The biggest benefit of using the Central Finance deployment mode is that finance teams can leverage the benefits of SAP S/4HANA Finance, without disrupting the processes in the source ERP systems, and without needing to immediately convert each backend system to SAP S/4HANA.

Once manufacturing and operations are ready, they can then move their processes into the Central Finance instance. Again, Central Finance is a full SAP S/4HANA system in which only the Finance capabilities have been implemented. The logistics processes can then also gradually move into the Central Finance instance; there is no need to re-implement, companies simply begin to use the other capabilities of this SAP S/4HANA instance.

 

For more Information

For additional information, please visit

Most modern managers rely on some type of productivity or time tracking software to understand their workers’ productivity habits. Unfortunately, this software can’t tell you everything.

As with most forms of technology, productivity software isn’t a catch-all solution to eliminate your problems; it’s a tool, and the secret to using it effectively is to be judicious, understanding where its strengths and weaknesses lie.

So what is it that productivity software can tell you about your employees’ work habits, and where does it come up short?

What Software Can Tell You

Productivity software is adept at giving you objective, quantitative data about what your employees are doing. For example, it can typically tell you:

  • How long certain tasks and projects take. Any software with time tracking integration can tell you how long it takes your employees to complete certain tasks and projects. This is valuable for a number of reasons; it can help you gauge your employees’ efforts, recognize problematic tasks, and reward your best employees for an exceptional performance.
  • Employee efficiency. Performance tracking software is also good for conducting employee performance reviews—at least to an extent. It’s easy to compare two workers against each other, and zoom in on efficiency issues as they arise, especially as they become chronic.
  • Billing information. Many employers use time tracking as a way to handle billing efficiently. With independent contractors, you can figure out the rate for a given project instantly. You can also use employee hours to determine what to bill your clients.
  • Relative ROI (to an extent). If you know the hourly rates of your employees, you can also gauge the relative ROI of each task or project. For example, if it took your employees 20 hours to complete tasks that resulted in a $20,000 project, you can consider it a win as long as you’re paying them less than $1,000 an hour.

Where Software Falls Short

However, more subjective factors are more difficult to determine:

  • Sources of inefficiency. Your software may be able to tell you that your employees are taking longer to complete tasks than they should, but it can’t tell you why. For example, if your employees show a drop in productivity, you might never be aware that it’s due to a window treatment that isn’t letting in enough natural light.
  • Employee software use. You should also treat self-reported hours with a slight degree of skepticism. Employees have the power to start and stop a time tracking clock at will, and may even be able to log hours retroactively. Accordingly, there’s an honor system in place.
  • Methods of improvement. Performance tracking software might help you see where an issue exists, but it can’t give you firm recommendations on how to improve. If a project has a negative ROI, it won’t help you learn why.
  • Morale, retention, and longevity. Performance isn’t the only worthwhile metric to track for your employees. Performance tracking software can’t tell you much about morale, or about your rates of employee retention.

Relying on Multiple Tools

Your best path forward is to rely on productivity software as one of several tools you use to monitor, analyze, and understand your employees. Obtaining both quantitative and qualitative data, from multiple sources, will help you establish a better-rounded perspective, and provide you with all the insights you need to make a judgment call.

No system is perfect, but some productivity software platforms are inherently better than others. If you’re interested in upgrading your current performance-related software, make sure to sign up for a trial of SAP’s core platform.

  • Motivation

Having introduced the native DataStore Object (NDSO) in general in part one of this blog series, I put a focus on the basic services it offers (e.g. delta capabilities) in part two.

As already mentioned in the first blog, SAP supports both the application-driven approach and the native, SQL-driven approach to Enterprise Data Warehousing and offers corresponding applications and tools in its portfolio. Especially for customers using SAP BW/4HANA there is of course always the option to implement parts of certain scenarios natively on the SAP HANA database and use the capabilities the SAP HANA platform provides in the context of EDW. In those 'mixed scenarios' integration is key, so here we go with blog three and take a closer look at how to integrate the NDSO (and its data) in mixed scenarios with SAP BW/4HANA.

 

  • Introduction

To set the scene, think about a scenario where a company wants to integrate sales data from a recently acquired company in the US for some first joint sales reports.

Picture 1: Scenario overview

 

As a starting point for some first proofs of concept, an NDSO could be chosen which can regularly be fed with flat files. To integrate the data from the US with the sales data from the rest of the company, a CompositeProvider can serve as the reporting layer for a joint analysis of the data.

In addition, the advanced DataStore object (ADSO) in SAP BW/4HANA that holds the European sales data is not shown explicitly here, nor is the creation of the NDSO for the US data (the creation of the NDSO was shown in blog 1). However, both objects have the same structure. The NDSO is field-based; for the columns of the ADSO, InfoObjects have been created in SAP BW/4HANA. Both objects already contain some data. The scenario was built with SAP HANA Data Warehouse Foundation 2.0 SP03 on top of an SAP HANA 2.0 database and SAP BW/4HANA SP08. Both the data of the ADSO in BW/4HANA and the data of the NDSO are located on the same database in the same schema.

Picture 2: ADSO in BW/4HANA, structure at the top, data from EMEA only

 

 

  • Scenario creation

DataSource:

Accessing the data of the NDSO from SAP BW/4HANA is quite straightforward: create a DataSource in SAP BW/4HANA.

Picture 3: Creation of the DataSource, option to directly choose a NDSO

 

In this scenario we want to access the data of the NDSO directly (without persisting the data in BW/4HANA), so direct access should be allowed:

Picture 4: General extraction properties of the DataSource

 

The fields of the DataSource are automatically taken from the NDSO object.

 

 

Open ODS View:

Having activated the DataSource, an Open ODS View can be created. One important thing to facilitate access from BW/4HANA to the NDSO is to grant access rights to the system user which SAP BW/4HANA uses to access the NDSO. For this, a new role is created on SAP HANA:

 

Picture 5: Role to access schema of the NDSO

And this role is assigned to the system user in SAP BW/4HANA:

 

Picture 6: Role assignment system user in SAP BW/4HANA
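
In SQL terms, the grant behind such a role could look roughly like this (a sketch; the schema, role and user names are placeholders, not the ones from the screenshots):

-- placeholders: adapt schema, role name and the BW/4HANA database user to your landscape
CREATE ROLE "NDSO_READ_ROLE";
GRANT SELECT ON SCHEMA "NDSO_SALES_SCHEMA" TO "NDSO_READ_ROLE";
GRANT "NDSO_READ_ROLE" TO "SAPHANADB";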

 

Building the Open ODS View on top of the DataSource is also quite straightforward. The fields of the DataSource are copied to the structure of the view. From a semantic perspective, the characteristics and key figures in the ADSO (containing data from EMEA) and in the Open ODS View (sales data from the US) are the same. In the ADSO the fields are represented by InfoObjects; in the Open ODS View the structure definition just contains fields. However, to be able to define the UNION conditions in the CompositeProvider, the objects should have the same technical names. Alternatively, the functionality to associate the plain fields of the Open ODS View structure with the corresponding InfoObjects from the ADSO can be used to establish the logical relationship between the objects and, at the same time, fulfil the technical requirement for defining the UNION join conditions in the CompositeProvider.

 

Picture 7: Structure Open ODS View; Association of characteristic SalesOrderID

 

CompositeProvider:

Finally, to bring the data together for a joint analysis, a CompositeProvider must be defined. In this case the NDSO and the Open ODS View are combined with a UNION condition.

 

Picture 8: Definition of the CompositeProvider

 

The field ‘Region’ of the Open ODS View is filled with a constant value of ‘2’ because the source data does not provide this information. In the master data of the BW InfoObject for region, this value represents the region US. A simple report created with SAP Analysis for Microsoft Office shows the combined results:

 

Picture 9: Combined report SAP BW/4HANA and native DSO

 


  •  Further Information

Scenario:

We will create an OData service for creating and reading media files and consume it in a SAPUI5 application.

Prerequisites:

  1. A trial account on SAP Cloud Platform with the Web IDE service enabled.
  2. A destination to the on-premise system is maintained in SAP Cloud Platform.
  3. The SAP HANA Cloud Connector should be in active state.
  4. Components IW_FND and IW_BEP both have to be at least on SAP NetWeaver Gateway 2.0 SP09, or component SAP_GWFND is on SAP NetWeaver 7.40 SP08.
  5. Active security session management.
  6. Currently, URLs containing segment parameters such as ;mo, ;o, or ;v= cannot be processed in soft-state processing mode.

Step-by-Step Procedure:

  1. Create the OData service.
  2. Register the OData service and test it from the Gateway.
  3. Consume the OData service in a UI5 application.

STEP 1: Creating the OData Service

  1. Create a table in SE11. (You may use any name you like; I have used the name ZFILE for the table – see the assumed structure sketched after this list.)
  2. Go to SEGW and create an OData project, then import your table (ZFILE) from the DDIC structure to create the ENTITY_TYPE.
  3. Select the entity type “ZFILE” you just created and check the Media checkbox.
  4. Now generate the Runtime Artifacts and redefine the DEFINE method of the model provider extension class.
    method DEFINE.
      super->define( ).

      DATA: lo_entity   TYPE REF TO /iwbep/if_mgw_odata_entity_typ,
            lo_property TYPE REF TO /iwbep/if_mgw_odata_property.

      lo_entity = model->get_entity_type( iv_entity_name = 'ZFILE' ).
      IF lo_entity IS BOUND.
        lo_property = lo_entity->get_property( iv_property_name = 'Filename' ).
        lo_property->set_as_content_type( ).
      ENDIF.
    endmethod.
  5. Click on the Methods tab and redefine the method /IWBEP/IF_MGW_APPL_SRV_RUNTIME~CREATE_STREAM .
    method /IWBEP/IF_MGW_APPL_SRV_RUNTIME~CREATE_STREAM.
    
    data: lw_file type zfile.
    field-symbols:<fs_key> type /iwbep/s_mgw_name_value_pair.
    read table it_key_tab assigning <fs_key> index 1.
    lw_file-filename = iv_slug.
    lw_file-value    = is_media_resource-value.
    lw_file-mimetype = is_media_resource-mime_type.
    lw_file-sydate  = sy-datum.
    lw_file-sytime  = sy-uzeit.
    
    insert into zfile values lw_file.
    
    endmethod.​
  6. Now click on the Methods tab and redefine the method  /IWBEP/IF_MGW_APPL_SRV_RUNTIME~GET_STREAM .
    method /IWBEP/IF_MGW_APPL_SRV_RUNTIME~GET_STREAM.
      DATA: ls_stream   TYPE ty_s_media_resource,
            ls_upld     TYPE zfile,
            lv_filename TYPE char30.
      FIELD-SYMBOLS: <fs_key> TYPE /iwbep/s_mgw_name_value_pair.

      " the second key field contains the file name
      READ TABLE it_key_tab ASSIGNING <fs_key> INDEX 2.
      lv_filename = <fs_key>-value.

      SELECT SINGLE * FROM zfile INTO ls_upld WHERE filename = lv_filename.
      IF ls_upld IS NOT INITIAL.
        ls_stream-value     = ls_upld-value.
        ls_stream-mime_type = ls_upld-mimetype.
        copy_data_to_ref( EXPORTING is_data = ls_stream
                          CHANGING  cr_data = er_stream ).
      ENDIF.
    endmethod.
  7. Now click on the Methods tab and redefine the method /IWBEP/IF_MGW_APPL_SRV_RUNTIME~UPDATE_STREAM.
    method /IWBEP/IF_MGW_APPL_SRV_RUNTIME~UPDATE_STREAM.
    
    data: lw_file type zfile.
    field-symbols:<fs_key> type /iwbep/s_mgw_name_value_pair.
    
    read table it_key_tab assigning <fs_key> index 1.
    lw_file-filename = <fs_key>-value.
    lw_file-value    = is_media_resource-value.
    lw_file-mimetype = is_media_resource-mime_type.
    lw_file-sydate  = sy-datum.
    lw_file-sytime  = sy-uzeit.
    modify zfile from lw_file.
    
    endmethod.
    ​
  8. Now click on the Methods tab and redefine the method FILESET_GET_ENTITYSET.

    METHOD fileset_get_entityset.
      DATA:
          it_final   TYPE STANDARD TABLE OF zfile,
          lt_filters TYPE                   /iwbep/t_mgw_select_option,
          ls_filter  TYPE                   /iwbep/s_mgw_select_option,
          ls_so      TYPE                   /iwbep/s_cod_select_option,
          p_name     TYPE c LENGTH 15.
    
      SELECT
        mandt
        filename
        SYDATE
        SYTIME
        VALUE
        MIMETYPE
    
      FROM zfile
      INTO TABLE et_entityset.
    
      ENDMETHOD.
    ​

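For reference, the ZFILE table used in these methods is assumed to look roughly like this (a sketch inferred from the code above; types and lengths are illustrative, not the original DDIC definition):

TYPES: BEGIN OF ty_zfile,               " assumed shape of table ZFILE
         mandt    TYPE mandt,           " client (key field)
         filename TYPE c LENGTH 30,     " file name (key field), filled from the SLUG header
         sydate   TYPE d,               " upload date
         sytime   TYPE t,               " upload time
         value    TYPE xstring,         " raw file content (is_media_resource-value)
         mimetype TYPE c LENGTH 100,    " MIME type of the uploaded file
       END OF ty_zfile.
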
STEP 2: Register the OData Service

  1. Register the OData service on the Gateway server.
  2. Open the OData service with the SAP Gateway Client.
  3. Check the GET_STREAM method with the following URL (will be used for downloading):
    /sap/opu/odata/sap/ZFILE_SRV/FILESet(Mandt='200',Filename='mobile1.jpg')/$value
  4. Check the CREATE_STREAM method with the following URL (will be used for uploading):
    /sap/opu/odata/sap/ZFILE_SRV/FILESet
    Select the POST radio button and the FILESet entity set. Click the “Add File” button and choose the file to be uploaded. Then click the “Add Header” button and add a header with the name SLUG and the name of the file as its value.
    NOTE: The response of the request may contain an error, but this should not cause any problem; the file is uploaded successfully.

STEP 3: Consume the OData Service in a UI5 Application

  1. Go to SAP Web IDE and create a new project from the SAPUI5 application template (assuming the name of the view and controller is View1).
  2. Open the View1.view.xml file and put the code below there.
    <mvc:View xmlns:html="http://www.w3.org/1999/xhtml" xmlns:mvc="sap.ui.core.mvc" xmlns="sap.m" xmlns:u="sap.ui.unified" controllerName="File_Upload.controller.View1" displayBlock="true">
    <App>
    <pages>
    <Page title="Upload The File">
    <content>
    
    <Label text="Put your Documents here" width="100%" id="__label0"/>
    <u:FileUploader id="fileUploader" useMultipart="false" name="myFileUpload" uploadUrl="/destinations/TRN/sap/opu/odata/sap/ZFILE_SRV/FILESet" width="400px" tooltip="Upload your file to the local server" uploadComplete="handleUploadComplete"/>
    <Button text="Upload File" press="handleUploadPress"/>
     
    <List id="itemlist" headerText="Files" class="sapUiResponsiveMargin" width="auto" items="{ path : 'Data>/FILESet' }">
    <items>
    <ObjectListItem id="listItem" title="{Data>Filename}">
    <ObjectAttribute text="Download" active="true" press="fun"/>
    </ObjectListItem>
    </items>
    </List>
    </content>
    </Page>
    </pages>
    </App>
    </mvc:View>​
  3. Open the View1.controller.js file and put the code below there.
    sap.ui.define([
        "sap/ui/core/mvc/Controller",
        "sap/ui/model/Filter",
        "sap/ui/model/FilterOperator",
        "sap/ui/model/odata/ODataModel",
        "sap/m/MessageToast",
        "sap/m/Button",
        "sap/m/Dialog",
        "sap/m/MessageBox",
        "sap/m/List",
        "sap/m/StandardListItem"
    ], function(Controller, Filter, FilterOperator, ODataModel, MessageToast, Button, Dialog, MessageBox, List, StandardListItem) {
        "use strict";
    
    
    
        var name;
        var mandt;
    
        var oModel = new sap.ui.model.odata.ODataModel("Put path of Odata with destination Here");
        return Controller.extend("File_Upload.controller.View1", {
            handleUploadComplete: function() {
                sap.m.MessageToast.show("File Uploaded");
                var oFilerefresh = this.getView().byId("itemlist");
                oFilerefresh.getModel("Data").refresh(true);
                sap.m.MessageToast.show("File refreshed");
    
            },
            handleUploadPress: function() {
                var oFileUploader = this.getView().byId("fileUploader");
                if (oFileUploader.getValue() === "") {
                    MessageToast.show("Please Choose any File");
                }
                oFileUploader.addHeaderParameter(new sap.ui.unified.FileUploaderParameter({
                    name: "SLUG",
                    value: oFileUploader.getValue()
                }));
                oFileUploader.addHeaderParameter(new sap.ui.unified.FileUploaderParameter({
                    name: "po",
                    value: "12234"
                }));
    
                oFileUploader.addHeaderParameter(new sap.ui.unified.FileUploaderParameter({
                    name: "x-csrf-token",
                    value: oModel.getSecurityToken()
                }));
                oFileUploader.setSendXHR(true);
    
                oFileUploader.upload();
    
    
    
            },
            fun: function(oEvent) {
                var ctx = oEvent.getSource().getBindingContext("Data");
                name = ctx.getObject().Filename;
                mandt = ctx.getObject().Mandt;
                var oModel = new sap.ui.model.odata.ODataModel("Put path of Odata with destination Here");
                oModel.getData("/Data");
                oModel.read("/FILESet(Mandt='" + mandt + "',Filename='" + name + "')/$value", {
    
                    success: function(oData, response) {
                        var file = response.requestUri;
                        window.open(file);
    
                    },
                    error: function() {
    
    
                    }
                });
    
            },
    
        });
    });​
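
One detail to watch: the XML view binds its list against a model named "Data" (items="{ path : 'Data>/FILESet' }"), so that model has to be registered under this name for the binding to resolve, for example in the controller's onInit (a sketch, reusing the oModel variable declared above):

        onInit: function() {
            // register the OData model under the name "Data" so the 'Data>' bindings in the view resolve
            this.getView().setModel(oModel, "Data");
        },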