
While attending our Partner Advisory Council for Innovation in NYC last week, I was inspired to write another blog on innovation by a presentation from delaware, a global SAP Build Partner and a member of this Partner Council. Using SAP technology such as SAP Cloud Platform or SAP Leonardo services, delaware is at the leading edge of what digitalization offers for innovation. Thierry and Steven, two engineers from delaware whom I met at this Partner Council event, have a wealth of experience with digital innovation, and I would therefore like to share this interview with you.

Delaware has a very special way of embracing innovation: they co-innovate with customers in a way where the customer is part of the innovation process without any sales intention in the first place. It is about joint learning, creativity and diversity. In fact, thanks to their co-innovation program with customers, they claim a fast joint learning curve with their customers on innovative techniques (AI, IoT, AR/VR, bots, blockchain, …). Now in the second year of the so-called Del20 program, they have an ecosystem of 40 companies involved in the co-innovation process, with an average of 100 participants.

But innovation is also about failure, as long as failure is discovered fast! We all know that innovation can fail in different phases. The worst case is that the product is already out on the market when you realize that usability or other aspects do not have the desired effect. Other times, failure is a matter of not bringing the right resources together, or of being too narrow-minded to get to the best solution.

Q: Thierry and Steven, you are running the Del20 program – an innovation project sponsored by your company which fosters digital experiments, as you call it. Tell our readers a bit about this program.

Thierry: “Our reasoning was that by working on a specific problem brought by our customers around a use case and with real data, innovation would be more efficient and effective. The initial idea was to sponsor 4 experiments annually, injecting 25 man-days into each of those experiments free of charge for our customers.”

Steven: “Customers are enthusiastic about this innovation program, as for them the challenge is not to find a problem or a business case; it is more a lack of resources, capabilities, ideas and courage to do something about it.”

Q: In my introduction to this blog, I mentioned failure being part of the game. You gave an interesting example about one experiment applied to bakeries, which unfortunately failed. Please share some insights and tell us more about the “culture of failure”.

Steven: “As the Del20 program has the objective of increasing trust in unproven innovative techniques, we encourage the members of the ecosystem to get out of their comfort zone and propose experiments that carry a high risk. That is why they are called experiments. The whole ecosystem is doing its best to get the best possible learning experience, and therefore we also appreciate learnings from experiments which didn’t succeed.”

Thierry: “Indeed, last year we had an experiment around artificial intelligence: the aim was to intervene proactively in the production parameters of a bakery the moment the algorithm detects a high correlation of events that would cause scrap in the production chain. However, the per-batch machine data were not enough; we needed to be able to track the data of the product along the whole production process and also report reason codes for the defects, waste and scrap of each step. These data were missing. The biggest challenge was the manufacturing steps with extreme temperatures, baking at +60 degrees and freezing the bread at -20 degrees, which made the sensors very difficult to track. The learning curve from this failed experiment was probably a lot steeper, and much more appreciated by the group, than it would have been with a success. Failure is an option in innovation, but fail fast and for sure: “Dare & Do”.”

Q: I remember from your presentation that your innovation program has certain “rules”, such as: any customer can apply to join, no paperwork (that is, IP is not a show-stopper) and no competition (meaning participants are very open in this program). Apart from these aspects, what is the essence of the success of your co-innovation setting?

Steven: “On the IP (intellectual property) side, we made it clear from the beginning that there would not be any IP restraints in this network. In short: if you fear that you are going to share something very strategic or disruptive for your market and really do not want to disclose it, well, then don’t. The participants know that what they share is their own choice, and the community always returns valuable feedback.”

Thierry: “Correct, we foster speed, fun and convenience, so we tried to eliminate all hurdles such as membership fees, competition in class, sector specialization, paperwork on IP, etc. Our members like the idea of being part of a network of companies originating from different sectors, as they can learn across sectors, explore ideas from other industries, or even start new business models together. “Trust” is the biggest success factor of the ecosystem.”

Q: Now a personal question: there is a lot of innovation changing our personal lives. Sometimes we experience minor failures, or let’s call them “user-experience misunderstandings”: perhaps you accidentally hit the child lock of your digital stove and wonder how to unlock it, or you are looking desperately for the sensor of the toilet to get the flush working. Have you experienced digital failure in your personal lives recently?

Steven: “During my recent overnight flight with an upgrade to business class, I wanted to switch on my light, but everything around me was touchscreen-based. So, intuitively, I went through the digital menu to find the light switch, only to conclude after a while that there was nothing on the touchscreen. Then I tried the non-digital approach and just turned the head of the light, and the light switched on. This is a nice example of how we tend to forget that simple things such as switching on a light can still work the analog way.”

 

Thanks a lot, Thierry Bruyneel and Steven Lenaerts, for your time and insights.

# # #

Ulrike Fempel is a book author and a senior business development manager with over 20 years of global experience in IT. Recognized as a significant driver by peers, management and partners alike, she has a passion for the sciences. Contact her on Twitter / LinkedIn.

For more information about SAP Cloud Platform, please refer to the following links:

  • SAP Cloud Platform general product and pricing information for partners and customers.
  • SAP App Center: the SAP digital marketplace where customers can buy solutions directly from SAP partners.

 

Introduction

As OpenUI5 is becoming more popular at large SAP accounts, there is a demand for an open-source version of the backend as well, in order to have a fully open-source end-to-end UI5 solution. One combination that makes this possible is:

  • Payara – a Java Enterprise Edition (Java EE) application server

Together with

  • Olingo library – for handling OData calls

and

  • OpenShift – to manage and deploy your Java EE OData services as containers.

Required steps

To set up the open-source backend, you need a server platform. It can be a physical server, a local VirtualBox VM or a virtual private server (VPS). Start by installing OpenShift on your server. A detailed description for Ubuntu 16.04 can be found in the OpenShift blog:

Installing OpenShift (for Ubuntu 16.04): https://blog.openshift.com/installing-openshift-3-7-1-30-minutes/

Once OpenShift is installed, set up an OpenShift project and add the Payara template to it:

Add the Payara template to your OpenShift project by following the Readme from the Payara OpenShift example: https://github.com/nextstepman/payara-openshift-example

Here is what a correctly added Payara template looks like:

Now, using the template created in the previous step (“payara-maven3”), you can create a Payara Olingo application. You can either create your own based on the Olingo tutorial (http://olingo.apache.org/doc/odata2/tutorials/AnnotationProcessorExtension.html) or use our example repository: https://bitbucket.org/nype/triton-example

Here is the screenshot after our repository import:

Now your OData service is available for consumption by an SAPUI5 or OpenUI5 application at: http://triton.YOUR_SERVER/Triton/ODataService.svc
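To quickly verify the deployed service, you can consume it from OpenUI5. The snippet below is a minimal sketch; the entity set name "EntitySet" is a placeholder (an assumption, not part of the example repository), so replace it with a set actually listed in your service's $metadata document:

// Minimal OpenUI5 sketch that reads from the deployed OData v2 service.
// "EntitySet" is a placeholder - check $metadata for your real entity set.
sap.ui.define([
  "sap/ui/model/odata/v2/ODataModel"
], function (ODataModel) {
  "use strict";

  // Point the model at the service URL from above.
  var oModel = new ODataModel("http://triton.YOUR_SERVER/Triton/ODataService.svc");

  // Read all entities of one set and log them to the console.
  oModel.read("/EntitySet", {
    success: function (oData) {
      console.log("Entities received:", oData.results);
    },
    error: function (oError) {
      console.error("OData request failed:", oError);
    }
  });
});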

If you like this blog, please consider indicating this to other members of the community by pressing the like button above.

Having deployed my SAP Data Hub Distributed Runtime to the Google Cloud Platform manually and connected the first SAP Data Hub, developer edition to my SAP Data Hub Cockpit, in this blog I will show how to make the full system landscape of SAP Data Hub 1.0 SPS 03, trial edition work for you.

The SAP Data Hub 1.0 SPS 03, trial edition is delivered via the SAP Cloud Appliance Library and deployed to the Google Cloud Platform. If you prefer an installation on Microsoft Azure, you can have a look at my blog on how to Deploy your SAP Data Hub Distributed Runtime to the Microsoft Azure Container Service.

To start with, I chose the SAP Data Hub 1.0 SPS 03, trial edition from the SAP Cloud Appliance Library:

In preparation for creating an instance, I need to create a project in my Google Cloud Platform account and, within that, a service account with the respective roles so that the SAP Cloud Appliance Library can access it, as described in Getting Started with SAP Data Hub, trial edition:

  • Compute Instance Admin (v1)
  • Compute Network Admin
  • Compute Security Admin
  • Kubernetes Engine Admin
  • Service Account User
  • Storage Admin

In addition, I also need to activate the:

  • Cloud Resource Manager API
  • Compute Engine API
  • Kubernetes Engine API

With this in place, I export my service account key in JSON format and create my instance:

After a while my instance is ready and I connect to my SAP Data Hub Cockpit to check the connection to my SAP Data Hub Distributed Runtime:

While this is a so-called full system landscape, out of the box it is missing the Hadoop integration, which would have to be added manually.

If you like this blog, please consider indicating this to other members of the community by pressing the like button in the header.

SAP Solution Manager 7.2 Dashboard Builder is a powerful framework for SAP Systems Management.

It helps in visualizing the Solution Manager database content, so that it becomes possible to create useful reporting dashboards.

In some cases (e.g. when creating an “SAP Security Dashboard”) number-based tiles for a general system overview are necessary:

  • Total Count of all managed Systems
  • Total Count of all ABAP Systems
  • Total Count of all Java Systems
  • Total Count of other Systems
  • Number of Systems with outdated synchronization data

Unfortunately, there is no simple way to do this “out of the box”.

This blogpost describes how to use a custom function module to display LMDB data in SAP Solution Manager Dashboard Builder.

Data Source

The following data from LMDB is needed:

  • SID
  • System Type
  • Last Synchronization Timestamp

These values are stored in LMDB table “LAPI_TECH_SYSTEM”:

Custom Function Module as Interface LAPI_TECH_SYSTEM -> Dashboard Builder

SAP provides:

“DSH_SAMPLE_FM_DATASET – test function module for dashboard builder”

as an example of how to deliver data to Dashboard Builder.

Using this template, it is easy to create a custom function module, e.g.:

“Z_DSH_SYSINFO_FM_DATASET – lapi_tech_system data function module for dashboard builder”

Step 1 (SE37):

Copy “DSH_SAMPLE_FM_DATASET” to “Z_DSH_SYSINFO_FM_DATASET”.

Step 2 (Tables tab):

Customize Structure “INC_DATA_S” to:

 SID   | CAPTION   | SYSTEMTYPE   | SYNC_TIMESTAMP      | COUNTER_ST | COUNTER_EXPIRED
 <SID> | <CAPTION> | <SYSTEMTYPE> | YYYY.MM.DD HH:MM:SS | 1          | 0 or 1
  • COUNTER_ST is a key figure for filtering and counting different System Types.
  • COUNTER_EXPIRED is a key figure for filtering Systems with outdated synchronization data.

Sample Structure as in INC_DATA_S:

Customized Structure ZDSH_SYSINFO:

Step 3 (Source code tab):

FUNCTION z_dsh_sysinfo_fm_dataset.
*"----------------------------------------------------------------------
*"*"Local Interface:
*"  IMPORTING
*"     VALUE(I_T_SEL_OPTIONS) TYPE  DSH_RSPARAMS_TT OPTIONAL
*"  EXPORTING
*"     REFERENCE(ET_TEXTS) TYPE  DSH_SRC_TEXT_TABLE
*"  TABLES
*"      DATASET STRUCTURE  ZDSH_SYSINFO
*"----------------------------------------------------------------------

* Function module Z_DSH_SYSINFO_FM_DATASET selects data from Solution Manager table LAPI_TECH_SYSTEM (Technical Systems for DIAGLS landscape API)
* and provides it for Solution Manager Dashboard Builder.
* It helps to create number-based Tiles like:
* - Total Count of all ABAP Systems
* - Total Count of all Java Systems
* - Total Count of all other Systems
* - Number of Systems that have a synchronization timestamp older than 30 days

* See function module DSH_SAMPLE_FM_DATASET as a reference

  TABLES: lapi_tech_system. " Import table
  DATA: etab         TYPE zdsh_sysinfo, " Temporary structure for dataset export
       lv_date      TYPE datum,
       lv_date_char TYPE char14,
       lv_duration  TYPE int4,
       lv_time      TYPE uzeit,
       lv_date_ext  TYPE char10,
       lv_time_ext  TYPE char8.

  SELECT * FROM lapi_tech_system.

    MOVE lapi_tech_system-sid TO etab-sid. " SID
    MOVE lapi_tech_system-caption TO etab-caption. " Caption
    MOVE lapi_tech_system-type TO etab-systemtype. " System Type

*   Processing Last synchronization timestamp:
    MOVE lapi_tech_system-sync_timestamp TO lv_date_char.
    MOVE lv_date_char(8) TO lv_date.
    MOVE lv_date_char+8(6) TO lv_time.
    CONCATENATE lv_date_char(4) lv_date_char+4(2) lv_date_char+6(2) INTO lv_date_ext SEPARATED BY '.'.
    WRITE lv_time TO lv_time_ext.
    CONCATENATE lv_date_ext lv_time_ext INTO etab-sync_timestamp SEPARATED BY space.

*   Determine systems with synchronization timestamp older than 30 days
*   (expired systems)
    CALL FUNCTION 'DURATION_DETERMINE'
      EXPORTING
        unit                       = 'TAG'
*       FACTORY_CALENDAR           =
      IMPORTING
        duration                   = lv_duration
      CHANGING
        start_date                 = lv_date
        start_time                 = lv_time
        end_date                   = sy-datum
        end_time                   = sy-uzeit
      EXCEPTIONS
        factory_calendar_not_found = 1
        date_out_of_calendar_range = 2
        date_not_valid             = 3
        unit_conversion_error      = 4
        si_unit_missing            = 5
        parameters_not_valid       = 6
        OTHERS                     = 7.
    IF lv_duration < 30.
      MOVE 0 TO etab-counter_expired. " Counter KPI for Expired Systems
    ELSE.
      MOVE 1 TO etab-counter_expired.
    ENDIF.

    MOVE 1 TO etab-counter_st. " Counter KPI for System Type

    APPEND etab TO dataset.

  ENDSELECT.

ENDFUNCTION.

 

Display Data in Dashboard Builder

Tile: Total Count of All Systems

KPI Type: Custom
Name: Total
Subhead:
Description: Systems
Visualization: Number-based
Size: 1 X 1
Unit:
Data Source Type: Function Module
Data Source Name: Z_DSH_SYSINFO_FM_DATASET
Detail Page Template: Drill-Down views
Rows:
Columns: Key Figures
Filter 1: Key Figures: Counter Systemtype
Thresholds: Counter Systemtype: >= 0 Show as Green

 

Tile: Systems with outdated synchronization data

KPI Type: Custom
Name: Outdated Data
Subhead: Older than 30 days
Description: Systems
Visualization: Number-based
Size: 1 X 1
Unit:
Data Source Type: Function Module
Data Source Name: Z_DSH_SYSINFO_FM_DATASET
Detail Page Template: Drill-Down views
Rows:
Columns: Key Figures
Filter 1: Key Figures: Expired System
Thresholds: Expired System: >= 0 Show as Red

 

Tile: Distribution by System Type

KPI Type: Custom
Name: Distribution
Subhead: by System Type
Description:
Visualization: Pie chart
Size: 2 X 2
Unit:
Data Source Type: Function Module
Data Source Name: Z_DSH_SYSINFO_FM_DATASET
Detail Page Template: None
Rows: Technical System Type
Columns: Key Figures
Filter 1: Key Figures: Counter Systemtype
Thresholds:

 

Tiles: Total Count of NW AS ABAP, NW AS JAVA & Other Systems

KPI Type: Custom
Name: NW AS ABAP [NW AS JAVA, Other]
Subhead:
Description: Systems
Visualization: Number-based
Size: 1 X 1
Unit:
Data Source Type: Function Module
Data Source Name: Z_DSH_SYSINFO_FM_DATASET
Detail Page Template: Drill-Down views
Rows:
Columns: Key Figures
Filter 1: Key Figures: Counter Systemtype
Filter 2 (ABAP Systems): Technical System Type: ABAP
Filter 2 (JAVA Systems): Technical System Type: JAVA
Filter 2 (Other Systems): Technical System Type: ! ABAP && ! JAVA
Thresholds:

 

Related Links:

https://help.sap.com/doc/saphelp_sm72_sp03/7.2.03/en-US/86/7e155646183a35e10000000a44538d/frameset.htm SAP Documentation: SAP Solution Manager Dashboard Builder – Using a Function Module as a Data Source

https://blogs.sap.com/2017/02/28/sap-solution-manager-7.2-dashboard-builder/ SAP Solution Manager 7.2 – Dashboard Builder

https://blogs.sap.com/2017/05/16/sap-solution-manager-7.2-dashboard-builder-configuration/ SAP Solution Manager 7.2 – Dashboard Builder configuration

https://blogs.sap.com/2017/11/14/sap-solution-manager-7.2-dashboard-builder-new-features-with-sp06/ SAP Solution Manager 7.2 Dashboard Builder – new features with SP06

This page is a summary translation of the English page below. For the latest information, please refer to the English page.

https://blogs.sap.com/2014/06/18/factors-to-consider-for-utilizing-materialized-views/

The original of this article was posted to sybase.com by Glenn Paulley in September 2009. In it, Glenn describes the two types of materialized views and discusses the points to consider when using them.

Since Version 10 (released in 2006), SQL Anywhere has supported deferred-maintenance materialized views. Version 11 (released in 2008) added support for immediate-maintenance materialized views. The main differences between the two types are as follows:

  • With deferred-maintenance materialized views, the query optimizer answers queries using one or more materialized views that may contain stale data. The “staleness” of every view, and whether a view may be used to answer a query, are entirely under the database administrator’s control. The administrator can therefore trade off data currency against the performance gains the materialized view provides and the cost of refreshing that view.
  • Immediate-maintenance materialized views, by contrast, are updated within the same transaction as the updates to the base tables underlying the view’s definition. Immediate views provide an up-to-the-minute copy of the underlying base table data, at the cost of having to maintain the view with every update operation.

In summary, deferred-maintenance views allow the maintenance cost of a materialized view to be amortized over multiple update transactions, whereas immediate-maintenance views incur view-maintenance overhead in every updating transaction, which can lead to contention between concurrent transactions.

Whether to make a materialized view deferred or immediate is one of the factors a database administrator must consider when deciding whether to utilize materialized views at all; literally, this is known as the “view selection problem”.

There are further points to consider beyond the deferred-versus-immediate trade-off.

Here is a checklist of items to consider when evaluating the use of materialized views:

  • Which set of queries would benefit from creating a materialized view?

Answering this question requires an analysis of the system’s query workload, in addition to considering the details of both the definition and the frequency of the individual queries.

A good starting point is frequently executed, expensive queries, in particular expensive queries with critical response-time requirements.

The application profiling feature of SAP SQL Anywhere, included in the SQL Anywhere plug-in for Sybase Central (now SQL Central), lets you capture an application workload and identify the “heavy hitters” within it.

A materialized view that benefits several common queries promises the biggest payoff.

This is because the storage and maintenance costs of the view are constant, while the benefit of a materialized view grows with its use.

Also remember that a single query can utilize more than one materialized view.

Splitting a complex materialized view into multiple views may enable the optimizer to use materialized views to assist a larger set of queries.

If you are considering a materialized view containing aggregation (GROUP BY), it is often better to materialize basic functions that are more widely applicable to multiple queries.

For example, AVG() can be derived from the combination of SUM() and COUNT(*).

The SQL Anywhere query optimizer is intelligent enough to use SUM() and COUNT() from a materialized view when the original query contains AVG().

  • Does the potential improvement in query performance outweigh the storage and maintenance costs of the materialized view?

The database administrator must weigh the space requirements of the materialized view (and its indexes) and the potential query performance improvement it brings against the maintenance cost of that view.

In doing so, the database administrator needs to be aware of the update patterns caused by the application’s requests.

A materialized view over heavily updated base tables may come with an unacceptable maintenance cost, for two reasons:

the cost of the updates to the materialized view itself, and the increased lock contention among update transactions caused by concurrent updates to the tables (or indexes) involving the materialized view. The second problem is hard to assess without proper capacity planning.

Database administrators often do not realize that a materialized view can be indexed just like any other base table.

Indexes are particularly useful when the application’s queries contain additional joins to tables that are not part of the view.

When indexes exist, the optimizer has a larger choice of physical operators, in particular the indexed nested-loop join, which can improve speed considerably.

  • If the optimizer chooses to use stale data from a materialized view in one case, but processes the underlying (and up-to-date) base tables in another, is it acceptable for the same query to return different results?
  • May the data stored in a materialized view become stale?
  • How stale can the data become before it is no longer acceptable?

The latter questions relate to the trade-off between immediate and deferred maintenance of materialized views.

As explained above, deferred maintenance lets the database administrator amortize view maintenance across multiple update transactions, at the expense of data staleness.

Whether an application can benefit from deferred-maintenance views is mainly a business decision, not a system-driven one.

My thanks to my colleague Anil Goel for providing additional information for this article.

 

===

 

For more information about SAP SQL Anywhere, please refer to the SAP SQL Anywhere Community page (in English).

 

The technical information published in that community is being posted, in turn, to the SAP SQL Anywhere Japanese community.

 

For a first look at SAP SQL Anywhere, please start here. The free Developers Edition, which can be used indefinitely, is also available for download from the same page.

 

If you have technical questions about SAP SQL Anywhere, please register with the community and use the “Ask a Question” function under “+ Actions”.

For Language, select “Japanese”; for Primary Tag, select “SAP SQL Anywhere”; and for User Tags, select “sql anywhere” and “sql anywhere japanese question”.

For bug reports, please use the dedicated inquiry channel for customers with a support contract.

 

======================
Purchase inquiries

Please contact us here.

SAP Cloud Platform Big Data Services is a High Performance Big Data as a Service (BDaaS) solution based on Apache Hadoop.

In this blog, we will show you how to connect SAP Lumira to this BDaaS solution to visualize a Hive dataset.

We will use the desktop edition of SAP Lumira 1.31.10 installed on Windows 10 64-bit. You can also use SAP Lumira Discovery 2.1 to do the same.

Prior to establishing the connection between SAP Lumira and Big Data Services, you need to ensure that your PuTTY profile is configured with an SSH tunnel that locally forwards port 10000 on your Windows machine to port 10000 of the Hive Server 2 service running within Big Data Services. For more details, you can consult the Big Data Services documentation.

Then, follow the steps below:

1. Connect to your Big Data Services Workbench using PuTTY

 

2. Launch SAP Lumira

 

3. Create a new dataset (menu File -> New) by selecting the SQL on Hadoop source

 

4. Enter your credentials to connect to the Hive Server 2. Host should be set to localhost, Port should be set to 10000 and User should be set to your Big Data Services Workbench username.
Do not enter anything in the Password field, as PuTTY is already using a private SSH key to connect to the Workbench

 

5. Choose any Hive table in your schema and select the columns you want to be part of the dataset. In this example, we chose to include all the columns. Click the Create button to prepare and acquire the data

 

6. Build a visualization of your dataset

 

 

You can see how easy it is to load a dataset and create a report to display Hive data.

One fine day, everything was moving smoothly, until a basic assignment operation failed. 😐

a = { val: 10 }
{val: 10}
b = a
{val: 10}
b.val = 30
30
a
{val: 30}

A basic assignment operation made me debug everything for hours.

Later I found out that because a was assigned to b by reference, any change to b was reflected in a at the same time.

It was the object reference held in b that was acting as a pointer to a and thus changing its value.

How to solve this? Break the shared reference by making a copy of the object.

How I did it:

b = JSON.parse(JSON.stringify(a))
{val: 30}
// the JSON round trip created a brand-new object,
// so b no longer shares a reference with a.
b.val = 40 // assigned new value to b
40
a
{val: 30} // value of a is unchanged

This is not a big issue, but for people who are new to JavaScript it can be a pain.
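For reference, here is a short sketch of the common copying options and their trade-offs. Note that the JSON round trip drops functions and undefined values and turns Dates into strings; structuredClone (available in modern browsers and in Node.js 17+) is a more robust alternative:

const a = { val: 10, nested: { x: 1 } };

// Shallow copy: top-level properties are copied,
// but nested objects are still shared with "a".
const shallow = { ...a };
shallow.val = 20;        // does NOT affect a.val
shallow.nested.x = 99;   // DOES affect a.nested.x

// Deep copy via JSON round trip: simple, but loses functions,
// undefined values, and converts Dates to strings.
const viaJson = JSON.parse(JSON.stringify(a));
viaJson.nested.x = 123;  // "a" stays untouched

// Deep copy via structuredClone: also handles Dates, Maps,
// Sets and even cyclic references.
const viaClone = structuredClone(a);
viaClone.nested.x = 456; // "a" stays untouched

console.log(a.nested.x); // 99 - only the shallow copy leaked through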

 

PiLog’s Material Master Taxonomy for SAP MDG

PiLog Group and Partner Innovation Lifecycle Services from SAP were engaged in a ‘co-innovated with SAP’ program to build an extension to the SAP Master Data Governance solution.


Partner Brief:

PiLog Group has 21 years of overall industry experience and is internationally recognized as a leader and subject-matter expert in quality master data solutions and services across industry domains, ensuring trusted, authoritative, quality data that adds value to society. PiLog’s MDG processes and systems are designed for compliance with ISO 8000, an international standard for quality master data. For further details, you can visit the PiLog website.

Product Brief:

PiLog’s Material Master Taxonomy solution is fully integrated into SAP Master Data Governance. It effectively manages material classification to deliver duplicate-free, multi-lingual descriptions with industry-proven content, following ISO processes and methodology.

More information on the detailed features of PiLog’s Material Master Taxonomy solution can be found in the SAP App Center and the SAP Certified Solutions Directory.


Testimonials:

“Customers were often adding such taxonomies individually on project base. Now we have a partner with proven industry content that offers such a taxonomy also on top of Master Data Governance in a standardized way. We are looking forward to leverage the partner’s expertise for a cloud-based taxonomy solution as well!” Ingo Rothley, SAP Product Manager Master Data Governance

“The ‘Co-Innovated with SAP’ service from Partner Innovation Lifecycle Services is an amazing program, offering loads of guidance throughout the engagement, ensuring that the partner adheres to SAP standards, along with the comprehensive documentation that eases the rollout to customers. The brand gives customers high assurance that the product meets world-class standards and has been thoroughly tested by SAP professionals. It also helped us go to market quicker than we anticipated, and hence we highly recommend the program to other partners, as there is great value for money.” Imad A Syed, Chief Information Officer, PiLog Group


Success Story:

https://www.sap.com/documents/2018/03/7e487ea6-f67c-0010-82c7-eda71af511fa.html

Close alignment between Partner Innovation Lifecycle Services, the development and product/solution management teams, and PiLog India helped bring the required expertise together and ensure a competitive partner solution in the marketplace.

To learn more about the ‘co-innovated with SAP’ program, kindly visit the website or contact us at coinnovate@sap.com.

The requirement I am going to describe is the following (taken from this guide):

A Cloud user creates the prospect 123. When the Cloud team is ready to convert them to an account, an email is sent to the master data governance team. The master data governance team manually creates customer 9001 and references the Cloud prospect 123 in a text field. By entering (and persisting) the customer ID of Cloud in the ERP customer master, it is guaranteed that the IDoc that is sent out from ERP identifies the corresponding Cloud instance, does not create a duplicate but updates the existing prospect instance, and finally removes the prospect flag.

When I set up this requirement, I did not find detailed instructions on how to do so. I only found this documentation, which actually helped me. But the instructions of note 577502 on how to get the new field onto the account screen are, in my point of view, not detailed enough. That’s why I am creating this detailed blog now, to share my learnings.

First of all, it is necessary to bring a custom text field onto the customer master screen in ERP. In this field, the C4C prospect ID should be entered. This value is then sent to C4C and will trigger the removal of the prospect flag. Here is a preview screenshot of how this will look after executing all the steps I am going to describe:

To get this field onto the screen, note 577502 was helpful but did not provide all the steps in detail. So I am trying to describe this in more detail:

  • The screen field we want to add should be linked to an append in database table KNA1.
  • To do this, the append needs to be created first. Go to transaction SE11, enter KNA1 for ‘Database Table’ and press Display.
  • Press the Create Append button and choose a name for your new append.
  • Enter the field which should contain the C4C prospect ID as a 10-character text field:
  • Save and activate the append. Now this field should be available in table KNA1.

As a next step, the new tab of the customer master screen, which will contain this additional field, has to be created. These are the necessary steps:

  • Go to transaction SPRO, path Logistics -> General -> Business Partner -> Customers -> Control -> Adoption of Customer’s Own Master Data Fields -> Prepare Modification-Free Enhancement of Customer Master Record
  • Add a new entry as shown on the screenshot
  • Select this line and click on ‘Label Tab Pages’. Enter a function code and a description
  • Save.
  • To make the new screen group visible on the screen, an active implementation of BADI CUSTOMER_ADD_DATA must be created. It is only necessary to implement one method, CHECK_ADD_ON_ACTIVE. This is my coding for the new screen group ZC:
    METHOD if_ex_customer_add_data~check_add_on_active.
      IF i_screen_group = 'ZC'.
        e_add_on_active = 'X'.
      ENDIF.
    ENDMETHOD.
  • Don’t forget to activate your BADI implementation.
  • Now that the screen group and tab are ready, a subscreen which can be embedded there has to be created.
  • For this, a new function pool needs to be created. Go to transaction SE80, choose Function Group in the dropdown list and choose a name.
  • Navigate in the new function group to Screens and create a new subscreen (right mouse-click). Any number is fine; I chose 9000.
  • On the subscreen, add the field which should contain the prospect ID (i.e. the field should link to the dictionary field which you created as an append to table KNA1).
  • The screen field should only be ready for input in transactions XD01 and XD02; in XD03, of course, it should be read-only. So I added a module status_9000 with the following coding logic for the screen:

*&---------------------------------------------------------------------*
*&      Module  STATUS_9000  OUTPUT
*&---------------------------------------------------------------------*
MODULE status_9000 OUTPUT.
  LOOP AT SCREEN.
    IF sy-tcode EQ 'XD03'.
      screen-input = '0'.
    ELSEIF sy-tcode EQ 'XD01' OR sy-tcode EQ 'XD02'.
      screen-input = '1'.
    ENDIF.
    MODIFY SCREEN.
  ENDLOOP.
ENDMODULE.
  • The next step is to create an implementation for BADI CUSTOMER_ADD_DATA_CS. This implementation will contain the application logic to access the field and save its value on the database.
  • Go to transaction SE18 and enter the BADI CUSTOMER_ADD_DATA_CS. In the BAdI display screen, click on Enhancement Implementation ‘Create’ and create a new implementation.
  • First, this implementation should only be called for our screen group ‘ZC’. That’s why we first set a filter value: go to the Properties tab of the new implementation and add a new filter for screen group ‘ZC’.
  • Next, implementations for methods GET_TAXI_SCREEN, SET_DATA and GET_DATA must be created. See my coding for these methods below.
    METHOD if_ex_customer_add_data_cs~get_taxi_screen.
      IF flt_val = 'ZC' AND i_taxi_fcode = 'ZC4C'.
        e_screen  = '9000'.
        e_program = 'SAPLZC4C_ENH'.
      ENDIF.
    ENDMETHOD.

    METHOD if_ex_customer_add_data_cs~set_data.
      CALL FUNCTION 'Z_SET_C4C_DATA'
        EXPORTING
          iv_zc4cprospid = s_kna1-zc4cprospid.
    ENDMETHOD.

    METHOD if_ex_customer_add_data_cs~get_data.
      CALL FUNCTION 'Z_GET_C4C_DATA'
        IMPORTING
          ev_zc4cprospid = s_kna1-zc4cprospid.
    ENDMETHOD.
    
  • In this implementation, I call my screen 9000 in case the screen group is ZC; this brings the field onto the screen. In method SET_DATA, the data entered in the new screen field is transported to table KNA1, to make sure it is saved to the database. Method GET_DATA, on the contrary, retrieves the data of our Z-field to make the value available on the screen. Both methods call a Z function module, Z_SET_C4C_DATA and Z_GET_C4C_DATA respectively, which are created in our new function group ZC4C_ENH. In the top include of this function group (LZC4C_ENHTOP) I added the statement TABLES: KNA1. Through this statement, access to the runtime structure KNA1 is provided inside the function modules. Because of this, by setting the data in KNA1, the standard application logic will update our prospect ID field on the database.
  • The following is the coding for the mentioned Z function modules and the top include:
    LZC4C_ENHTOP:
    FUNCTION-POOL zc4c_enh.                     "MESSAGE-ID ..
    TABLES: kna1.
    * INCLUDE LZC4C_ENHD...

    FUNCTION z_get_c4c_data.
    *"----------------------------------------------------------------------
    *"*"Local Interface:
    *"  EXPORTING
    *"     REFERENCE(EV_ZC4CPROSPID) TYPE  CHAR10
    *"----------------------------------------------------------------------
      ev_zc4cprospid = kna1-zc4cprospid.
    ENDFUNCTION.

    FUNCTION z_set_c4c_data.
    *"----------------------------------------------------------------------
    *"*"Local Interface:
    *"  IMPORTING
    *"     REFERENCE(IV_ZC4CPROSPID) TYPE  CHAR10
    *"----------------------------------------------------------------------
      kna1-zc4cprospid = iv_zc4cprospid.
    ENDFUNCTION.

With those steps, the new Z-field is on the customer master screen, and our logic makes sure that the value is persisted and changed on the database.

The next step is to get the content of this field into the IDoc which is sent to Cloud for Customer.

For this purpose, an implementation of BADI CUSTOMER_ADD_DATA_BI must be created.

In this implementation, method FILL_ALE_SEGMENTS_OWN_DATA is the right one to implement. The content of the Z-field must be mapped to IDoc segment E1KNA1H (of IDoc DEBMAS_CFS) with TDOBJECT ‘SOD_ID’, and TDNAME must get the value of the Z-field (ZC4CPROSPID). These data must be appended to the IDoc data structure. Please see my coding below for how I achieved this; it speaks for itself.

  METHOD if_ex_customer_add_data_bi~fill_ale_segments_own_data.
    DATA: ls_idoc_data TYPE edidd.
    DATA: zc4cprospid TYPE char10.
    FIELD-SYMBOLS: <ls_data> TYPE kna1.
    DATA: ls_e1kna1h TYPE e1kna1h.

    IF i_segment_name = 'E1KNA11'.
      ASSIGN i_data_ref->* TO <ls_data>.
      SELECT SINGLE zc4cprospid FROM kna1 INTO zc4cprospid WHERE kunnr EQ <ls_data>-kunnr.
      IF zc4cprospid IS NOT INITIAL.
        ls_e1kna1h-tdobject = 'SOD_ID'.
        ls_e1kna1h-tdname   = zc4cprospid.
        ls_idoc_data-segnam = 'E1KNA1H'.
        ls_idoc_data-sdata  = ls_e1kna1h.
        APPEND ls_idoc_data TO t_idoc_data.
      ENDIF.
    ENDIF.
  ENDMETHOD.

 

In HCI, there is a standard mapping for these fields, which makes sure the prospect is converted to an account on the Cloud for Customer side.

That’s basically it.

Summarized: with these changes, the prospect ID maintained in the customer master on the ERP side is sent to Cloud for Customer. There, it is checked whether this prospect exists. If it does, the prospect flag is removed and the prospect in Cloud for Customer is converted into an account.

 

In this blog, I am going to describe step by step how to add an extension field to the sales order in ERP and send this extension to Cloud for Customer. This will also include the way back: sending the field from Cloud for Customer to ERP.

Required steps on ERP side

Add the enhancement field to the Sales Order Screen

  1. The first step is to create a new text field as an append to table VBAK. This is the field which will contain the data for my replication scenario. In my parallel blog I describe in detail how to create an append for customer master table KNA1; just follow the same steps for table VBAK. On the screenshot below you can see my field.
  2. This field should be made available on a sales order screen. In the SD sales order, there is a tab Additional Data B on header level, which is reserved for custom fields, and that’s where I am putting my field now. The respective screen for this tab is number 8309 in program SAPMV45A. Open the screen layout (through transaction SE80 or the Screen Painter) and add an input/output field which points to your append field in VBAK (i.e. NAME is the field of the append). Save and activate the screen changes.
  3. Then navigate to the flow logic tab of screen 8309 and add a new module in the PBO logic. I added the new MODULE field_attribute; see my coding for this module below.
    MODULE field_attribute OUTPUT.
      IF t180-trtyp EQ 'A'.
        LOOP AT SCREEN.
          IF screen-name = 'VBAK-ZZC4CINFO'.
            screen-input = 0.
            MODIFY SCREEN.
          ENDIF.
        ENDLOOP.
      ELSE.
        LOOP AT SCREEN.
          IF screen-name = 'VBAK-ZZC4CINFO'.
            screen-input = 1.
            MODIFY SCREEN.
          ENDIF.
        ENDLOOP.
      ENDIF.
    ENDMODULE.
    

    What I do in this coding is simply make the field ready for input in create and change mode (i.e. transactions VA01 and VA02). In display mode (i.e. transaction VA03) the field should not be ready for input.

That’s all the logic needed for the sales order itself. Since the field points to the append in table VBAK, the application logic will handle the values entered on the screen, and they will be saved and changed on the database.

Add the content of the enhancement field to the IDoc being sent to C4C

  1. My first idea for adding the custom field to the IDoc being sent to C4C was to create an IDoc extension for the respective IDoc, like I did in my parallel blog. But I soon figured out that this is not possible for the sales order IDoc, since this IDoc is generated from a BAPI. Due to this, the correct way to enhance the IDoc is to use the BAPI extensions. How to do this is described in detail in note 143580. Based on this note, as we deal with an enhancement on structure VBAK, the following structures need to be enhanced:
    1. VBAKKOZ (add your enhancement field as an append, as done at the beginning for table VBAK)
    2. VBAKKOZX (add your enhancement field as an append, but only as a character field of length 1. This structure controls whether or not the field should be updated; it will only contain the values ‘X’ or blank.)
    3. BAPE_VBAK (add your enhancement field as an append)
    4. BAPE_VBAKX (add your enhancement field as an append, but only as a character field of length 1, analogous to VBAKKOZX.)

Through this enhancement, the value of the extension field is automatically put into the IDoc. But these data have a special structure; SAP Note 143580 describes this as well, with an example.

As an example, this is how the IDoc content looks when I put the text DemoC4C into the text field:

These data are sent to C4C. But in C4C, I only want the content of the field itself (i.e. DemoC4C without the leading 000000036). The challenge is how to get these data into the right format for C4C. I did not find a BADI on the ERP side with which I could cut the content. Maybe there is one, but I managed to solve this with an offset rule in the iFlow in HCI. I will describe in a later step how we managed this complex offset rule. Before that, I am going to describe the steps executed on the C4C side.

 

Required steps on C4C and HCI side

  1. The first step is to add this field to the sales order on the C4C side. To do so, go to the Sales Orders work center and open a sales order. In the menu, choose ‘Adapt -> Edit Master Layout’.
    1. Place the cursor where you want to add the field and choose ‘Add Field’. In the pop-up which opens, choose ‘New Field’ and create the field as required (see my screenshot for the data I used). Press Save.
    2. Now this field must be added to the service definition. Press the change-properties symbol on this new field and choose Field Definition.
    3. In the window which opens, choose the tab Services. The service where this field should be added is CustomerOrderReplicationIn. Press the button ‘Add Field’ in the Action column, then save the changes. Now the field is added to the service.
    4. This new service definition should now be uploaded to HCI. To do so, you must download it from the communication scenario. The communication scenario from which to download is Sales Order Replication to SAP Business Suite; the service is Replicate Sales Order to SAP Business Suite. Please see this blog, which describes how to do so: https://blogs.sap.com/2013/12/12/guide-how-to-download-wsdl-and-api-documentation-of-a-business-object/
    5. Log on to your HCI account and choose the package SAP Hybris Cloud for Customer Integration with SAP ERP.
    6. Click on Replicate Sales Order and Sales Quote from SAP Business Suite. Select Mapping and click on the resource name. Within the mapping, you need to upload the WSDL downloaded in the previous step on the right side. Then change the mapping to link the fields STRUCTURE and VALUEPART1 in segment E101BAPIPAREX with field ExtERPText, and create a mapping expression to extract the value entered on the ERP side from the special structure of this BAPI field. See the screenshot of how the mapping looks.

 

This mapping rule does the following:

If field “STRUCTURE” in segment “E101BAPIPAREX” of the ERP IDoc contains the string “BAPE_VBAK”, the extension field “ExtERPText” in C4C is filled based on the following condition:

If field “VALUEPART1” has more than 10 characters, the first 10 characters are cut off and the remaining characters are mapped to the field “ExtERPText” in C4C.

If field “VALUEPART1” has 10 characters or less, an empty string is mapped to the field “ExtERPText” in C4C.
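In code form, the logic of this offset rule looks roughly like the sketch below. This is only an illustration of the rule's behavior in plain JavaScript, not the exact HCI mapping-expression syntax, and the sample value with its 10-character leading key is an assumption based on the IDoc example above:

function extractExtensionValue(structure, valuepart1) {
  // Only BAPE_VBAK rows carry our extension field.
  if (structure === "BAPE_VBAK" && valuepart1.length > 10) {
    // Strip the 10-character key prefix; the rest is the actual value.
    return valuepart1.substring(10);
  }
  // 10 characters or less means there is no payload after the key.
  return "";
}

// Example (assumed prefix): "0000000036DemoC4C" -> "DemoC4C"
console.log(extractExtensionValue("BAPE_VBAK", "0000000036DemoC4C"));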

 

  1. For the other direction, Cloud for Customer to ERP, the changed WSDL definition must also be downloaded from Cloud for Customer and uploaded to HCI.
  2. The field must be added, as in the previous step, here also to the outbound request Sales Order Request – General. See my screenshot on the fields added to the service.
  3. To download it, the respective communication arrangement is Sales Order Replication to SAP Business Suite.
  4. Select Mapping and click on the resource name. Within the mapping, upload the WSDL definition you previously downloaded from Cloud for Customer on the left side.
    1. For this direction, I did the mapping without any additional rule. I just mapped the custom field to VALUEPART1 of E1BPPAREX. Putting the data into the right format to be handled by the BAPI on the ERP side is done with an ERP BADI in the next step.

 

 

Implement a Mapping BADI on ERP side

  1. To get the data coming from C4C in the enhancement field into the correct format, which can be handled by the ERP BAPI to create/change the sales order, I implemented enhancement spot COD_SLS_SE_SPOT_SALESORDER.

That’s my coding for the mentioned enhancement spot:

METHOD if_cod_sls_se_salesorder_repl~adjust_import_data.
    DATA: ls_extension TYPE bapiparex.
    TYPES: BEGIN OF zfield,
             vbeln TYPE vbak-vbeln,
             info  TYPE vbak-zzc4cinfo,
           END OF zfield.

    TYPES: BEGIN OF zfieldx,
             vbeln TYPE vbak-vbeln,
             info  TYPE char1,
           END OF zfieldx.

    DATA: ls_zfield TYPE zfield.
    DATA: ls_zfieldx TYPE zfieldx.


    LOOP AT ct_order_extensions INTO ls_extension.
      IF ls_extension-structure EQ 'BAPE_VBAK' AND ls_extension-valuepart1 IS NOT INITIAL.
        ls_zfield-vbeln = iv_salesdocument.
        ls_zfield-info = ls_extension-valuepart1.
        CLEAR ls_extension-valuepart1 .
        ls_extension-valuepart1 = ls_zfield.

        MODIFY ct_order_extensions FROM ls_extension.
        CLEAR ls_extension-valuepart1.
        ls_zfieldx-vbeln = iv_salesdocument.
        ls_zfieldx-info = 'X'.
        ls_extension-structure = 'BAPE_VBAKX'.
        ls_extension-valuepart1 = ls_zfieldx.
        APPEND ls_extension TO ct_order_extensions.
        EXIT.

     ELSEIF ls_extension-structure EQ 'BAPE_VBAK' AND ls_extension-valuepart1 IS INITIAL.
        CLEAR ls_extension.
        MODIFY ct_order_extensions FROM ls_extension.
      ENDIF.
    ENDLOOP.


  ENDMETHOD.

This coding basically brings the value of the enhancement field into the format explained in the mentioned note 143580.

 

That’s it: with those steps, you should be able to replicate custom fields in the sales order bidirectionally.