
There is an old habit from the early days of SAP on IBM i:
When copying an SAP system on IBM i via homogeneous system copy, or when refreshing the current SAP database schema with the data from a different SAP system, the data is sometimes simply copied with plain OS tools (SAVLIB/RSTLIB) from a save file, without using the SAP Software Provisioning Manager (SWPM).
While this can work in some scenarios, it is not the officially recommended way to do a homogeneous system copy import or database refresh, because it can cause errors.

More detailed information:
Starting with NW 7.40, the SAP database schema typically contains permanent objects of type *PGM and *SRVPGM (such as SQL triggers, user-defined functions, and variables). Unlike objects of type *FILE (for SQL tables, views, indexes), references from programs or service programs to other database objects (for example, the base table on which a trigger was defined) are not adjusted automatically during restore. This means objects of type *PGM and *SRVPGM will continue to “point” to the original location, resulting in data inconsistencies in the worst case. This can happen if your SAP user profiles are for some reason configured (or misconfigured) to have access to SAP systems other than their own and the source schema resides on the same LPAR.
The only way to correct the references is to recreate the affected objects with the new schema name.
SWPM performs all required steps in the post-load phase using dedicated ABAP reports.
For more information about how the affected objects are created, see SAP Note 2368628.

We strongly recommend that you do a homogeneous System Copy import or a DB Refresh by using the SWPM.

On IBM i, the SWPM offers two methods to export and import the SAP schema:
a) SAP generic: R3Load (for ABAP) / JLoad (for Java), or
b) Database specific: the IBM i specific database tools SAVLIB/RSTLIB.
When you are using the IBM i specific tools, the SWPM will restore (RSTLIB) the database library from a save file. The creation of the database library export (SAVLIB) into a save file is done manually.

The recommended steps of a system copy or DB Refresh are described in the SAP System Copy documentation.
In general, when using the IBM i specific database tools SAVLIB/RSTLIB, you must perform the following steps:

  1. Manually create a save file of the SAP database library. This save file is called the SAP database export. For convenience reasons, the SWPM is not involved when performing the database library export using the command SAVLIB. For more information, see SAP Note 585277.
  2. Only for the DB Refresh: Before you start the Refresh Database Content option on IBM i, you must delete the SAP database library of your target SAP system. This is described in detail in the SAP System Copy documentation, section Copying the Database Only – Refresh Database Content.
  3. Start the SWPM as described in the SAP System Copy documentation for IBM i, section Running the Installer.
  4. In the SWPM Welcome screen choose the option you want to use:
    a) <SAP_Product_Name> → IBM Db2 for i → System Copy → Target System or
    b) Generic Options → IBM Db2 for i → Refresh Database Content

  5. Fill in the parameter values in the dialog phase of the SWPM as described in the system copy guide. Provide the name and location of your save file (your SAP database export).
  6. Review all parameters in the Summary screen of the SWPM at the end of the dialog phase and click Next to execute the SAP System Copy Import or DB Refresh.
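For orientation, the manual export in step 1 boils down to creating a save file and saving the SAP database library into it. The following is a pseudocode-style sketch of the IBM i CL commands involved; the library name R3<SID>DATA and the save file name QGPL/SAPEXP are placeholders, and SAP Note 585277 describes the exact procedure:

```
CRTSAVF FILE(QGPL/SAPEXP)                             /* create an empty save file      */
SAVLIB LIB(R3<SID>DATA) DEV(*SAVF) SAVF(QGPL/SAPEXP)  /* save the SAP database library  */
```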

In the execution phase of the SWPM, the data of the save file is restored (RSTLIB) into the SAP database library. Finally, ABAP reports perform additional configuration steps. In particular, for NW 7.40 and higher, objects of type *PGM and *SRVPGM are recreated consistently for the new SAP database library.

SAP recommends that you use the latest SWPM. At the time of writing, this is SWPM 1.0 SP 23 patch level 05. For more information, see SAP Note 1680045.

Four years ago, we started building tools to enhance the developer experience for creating hybrid mobile apps with the SAP Mobile SDK. Over the years, we have transitioned from local tools to cloud-based tools and services. We now leave the last piece of the HAT local tools behind us and focus on the cloud only.

At the end of last year, we published a blog post in which we informed you about our plans to announce “end-of-maintenance” for the local add-on component of HAT. Some took this as the official announcement, but that was not the case. We still had to release a couple of features with which users could continue developing hybrid apps in the cloud.

To be absolutely clear, this announcement is about the HAT LOCAL ADD-ON. This is the piece you had to download from the SAP Store and install on your local machine. The other parts of Hybrid App Toolkit continue as normal. Our main component is the SAP Web IDE feature, which you can find in the SAP Web IDE Preferences. This will continue to provide our developer experience around building hybrid apps, in the cloud. You don’t need to install anything, besides a web browser.

Below are a number of known issues for the current version of the HAT Local add-on:

  1. Apps built using UI5 debug library version show a blank screen on Android.
    • Users are advised to use the release build option.
  2. For the installer, Bower is outdated. As a result, the installer will not start.
    • Users are advised to either:
      • update Bower using `npm install -g bower`
      • or add this line to the .bowerrc file: "registry": "https://registry.bower.io"
  3. The SAP Mobile SDK used for the last release of the HAT Local Add-on is version 3.0 SP15, which can be considered outdated by now. Version 3.0 SP16, with several patch releases, has since shipped. The SDK team has already released Mobile SDK version 3.1, and our Cloud Build Service will soon update to this version (together with an Apache Cordova update).
    • Users can update to newer versions of the SDK.
    • To be clear: we don’t provide support for this.
  4. The Cordova version we’ve used for the last release is quite dated (version 6.5.0). Meanwhile, Cordova is already at version 8.0 with bug fixes and security patches.
    • Users can update to newer Cordova versions.
    • To be clear: we don’t provide support for this.
  5. The Node.js version (and npm) we’ve used is rather dated.
    • Users can update to newer versions at their own risk.
    • To be clear: we don’t provide support for this.
  6. The UI5 libraries packed in the apps are quite dated.
    • Users who need the latest version of UI5 in their hybrid app can use the CLI to manually build the app with the SDK. Again, we don’t provide support for this.
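Written out as a complete file, the .bowerrc fix from issue 2 above would look like the following minimal configuration fragment (assuming your .bowerrc contains no other settings):

```json
{
  "registry": "https://registry.bower.io"
}
```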

Advice

Users who want to continue building hybrid apps with Hybrid App Toolkit and avoid any of the above issues are strongly advised to use our Cloud Build Service.

A few other points of advice:

  • Users who rely on our tooling for publishing App Updates (update web container content via Mobile Services) are advised to switch to using the SAP Mobile SDK and CLI tools.
  • Users having an issue with cloud builds (due to security, corporate policies, etc.) are advised to switch to the Mobile SDK (with CLI and/or other tooling/IDE).
  • Advanced users are most likely already working with the SDK and local tooling. But just to highlight this again: you will be able to get the most out of hybrid app development with CLI tools (and the SAP Mobile SDK).
  • If you have already installed the HAT local add-on on your machine, and it works … don’t touch it unless really necessary.
  • If you are new to mobile app development and want to get started, please consider using Mobile Development Kit (MDK), SAP Cloud Platform SDK for iOS and SAP Cloud Platform SDK for Android.

At this point, we stop supporting the local add-on. We do not plan any more (patch) releases. What will stay available for now are:

  • The user interface items in SAP Web IDE related to it
  • The user manual section
  • SAP Store download

 

If you don’t use the HAT local add-on or the IoT feature, please disable these settings in the Preferences, so we can keep your SAP Web IDE menus clean:

 

Removal

At some point in the future (probably before 2019), we expect that the underlying tools and frameworks will have evolved too much to keep the HAT local add-on viable. Think about changes in the SAP Mobile SDK, Apache Cordova, Node.js, Bower, Android tools, iOS tools, UI5, etc. Once we have reached this point, we will proceed to remove anything related to the HAT local add-on.

What remains

Hybrid App Toolkit will continue to be available in SAP Web IDE Full-Stack as a feature allowing you to develop hybrid apps. You will be able to build those apps in the cloud, using our Cloud Build Service provided through Mobile Services. It is not required to install any tooling on your local machine.

Feedback

We are open to feedback on this. Feel free to comment on this blog post, or if you prefer, you can reach out to me or my colleagues from Product Management.

 

Ludo Noens , Product Owner – Hybrid Application Toolkit

Eric Solberg , Senior Director – Product Management

Britt Womelsdorf , Area Product Manager

This blog describes how to develop Python-based applications for SAP HANA Extended Application Services, advanced model, using the SHINE application as an example.

A few words about SHINE before we begin.
SAP HANA Interactive Education, or SHINE, is a demo application that makes it easy to learn how to build applications on SAP HANA extended application services, advanced model. SHINE is available for free download via the service marketplace and is also available on GitHub. The standard SHINE application showcases the use of the Node.js and Java runtimes in XS Advanced.

In this blog, we describe the implementation of business logic as a Python-based microservice in SHINE.

Scenario:

The SHINE application has a scenario which illustrates a comprehensive Purchase Order (PO) Worklist that acts as an interface for a Purchase Department Manager to manage the purchase orders created by his or her department.

In this blog, we showcase the usage of python runtime in XS Advanced in a simple Export to Excel functionality in PO Worklist scenario. This feature downloads all purchase-order data into an Excel spreadsheet.

Prerequisites:

You have a system with SAP HANA 2.0 and XS Advanced installed. For details on how to install XS Advanced, see here. You can also use a HANA express installation.

Setting up the Python build from sources

You must upload a Python runtime to your XS Advanced system that will be used to run your Python applications. To check whether a Python runtime is already set up, run `xs runtimes`. If you don't see an entry for Python, you need to set one up by following the steps below.

Log in to the XS Advanced system via a terminal as the root user.

  1. Building Python from sources requires several development packages to be installed on the OS. The repositories providing these packages need to be configured and are specific to each Linux distribution. Here is a list of the packages and sample commands to install them on SLES:

 

Package          Install on SLES
tk-devel         zypper install tk-devel
tcl-devel        zypper install tcl-devel
libffi-devel     zypper install libffi-devel
openssl-devel    zypper install openssl-devel
readline-devel   zypper install readline-devel
sqlite3-devel    zypper install sqlite3-devel
ncurses-devel    zypper install ncurses-devel
xz-devel         zypper install xz-devel

 

  2. Download the Python-3.6.x sources from https://www.python.org/downloads/ and extract them to a local folder.
  3. Run the build from the local folder.

Before executing the commands, create an empty directory python_runtime where the Python runtime will be installed.

./configure --prefix=<path to python_runtime folder> --exec-prefix=<path to python_runtime folder>

make -j4

make altinstall  

The build installs setuptools and pip by default.

The build results directory, i.e. python_runtime, looks like this:

├── bin
├── include
├── lib
└── share

 

Note: If any missing packages are reported by the commands above, you need to install the missing devel packages as explained in step 1 and re-run the ./configure command.

Create Python runtime in XS Advanced

To use the Python you just built to run applications on XS Advanced, you need to create a new runtime using the `xs create-runtime` command. Here is an example using the build directory specified in the previous section:

 xs create-runtime -p <path to python_runtime> 

 

Download/Clone SHINE

 Download or clone SHINE from here.

The Python-specific source code is present in the core-python folder.

 

Vendor Python dependencies

 The overall recommendation for XSA applications is for them to be deployed as self-contained: they need to carry all their dependencies so that the staging process does not require any network calls. The Python buildpack provides a mechanism for this: applications can vendor their dependencies by creating a vendor folder in their root directory. To do so, execute the following steps:

  1. As we are using SAP-developed Python modules, follow these steps to get them:
    1. Go to the SAP Support Portal
    2. Click on Support Packages and Patches
    3. Expand By Alphabetical Index (A-Z)
    4. Click on the letter X
    5. Select XS PYTHON 1
    6. Download the ZIP
    7. Extract the ZIP to a folder sap_python_dependencies (you need to create this folder locally)
  2. Execute the following command from the shine/core-python directory to download the dependencies into the vendor folder:
 pip download -d vendor -r requirements.txt -f <sap_python_dependencies_path> 

If the pip command is not recognized, or if you have multiple pip versions, set an alias for it using the following command:

 alias pip=<path to python_runtime>/bin/pip3.6

You can check this by executing the command `pip -V`.

 <python_runtime> is the directory where you installed Python-3.6.x in the steps above.

 Note: You should always make sure you are vendoring the dependencies for the correct platform. The above steps are the recommended way to vendor the dependencies on the Ubuntu platform. If you are developing on anything other than Ubuntu, use the --platform flag. See pip download for more details.

Build and Deploy SHINE with python runtime on XS Advanced

Build and deploy SHINE with the Python runtime by following these steps:

1.     Set up the MTA Archive Builder

Follow the steps here, to download and set up the MTA Archive builder.

2.     Build and deploy the SHINE application

  1. Copy the downloaded mta.jar into the root folder of the SHINE project.
  2. In the mta.yaml, replace the UAA endpoint by following the steps below:
    a. Navigate to the resource uaa.
    b. Replace the url property of the resources uaa and controller with your respective UAA and controller endpoint URLs. In a port-based routing system, they have the following format:
http(s)://<host-name>:3<instance-number>32/uaa-security
http(s)://<host-name>:3<instance-number>30/

For example, in HANA express the UAA endpoint can be:

https://hxehost:3<instance-number>32/uaa-security

 Note: In HANA express, the VM install uses 90 as the default instance number; for the binary install, the instance number is user-defined.

3. Run the following command to build the MTAR archive:

java -jar mta.jar --build-target=XSA --mtar=shine.mtar build 

  4. After the build is done, navigate to the corresponding space where you want the application to be deployed:
xs target -s <space-name> 
  5. Then, deploy the generated mtar using the following command:
 xs deploy shine.mtar  
  6. Open the deployed shine-web application's URL to see the running application.
  7. Click on the Purchase Order Worklist tile.
  8. Click on the Export to excel button.

The Excel file containing the purchase orders is downloaded via the Python service.

 

 

Project Structure and Code Snippets

 The core-python module folder structure is explained below:

 requirements.txt → This file is used to specify all dependencies for the Python module.

 runtime.txt → This file is used to specify the version of the Python runtime.

 Procfile → This file is used to specify the command to run the Python application.

 check.py → This file contains the application authorization and authentication code.

 server.py → This file contains the application logic to download the Purchase Order excel.

 

server.py

This code snippet is the REST service to download the Excel file; it internally calls a method get_po_work_list_data(). The path is mapped using the route() decorator @app.route('/some/path') at the method level.

This code snippet is responsible for querying the database and generating the excel workbook.
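The routing pattern described above can be sketched in plain Python. This is an illustrative stand-in, not SHINE's actual code: the App class, the handler body, and the URL path below are all hypothetical, and the real server.py uses a Flask-style framework whose route() decorator works the same way.

```python
# Minimal, self-contained sketch of the route() decorator pattern.
# All names and the path below are illustrative, not from the SHINE source.

class App:
    """Tiny stand-in for a Flask-style application object."""

    def __init__(self):
        self._routes = {}

    def route(self, path):
        """Decorator that maps a URL path to a handler function."""
        def register(handler):
            self._routes[path] = handler
            return handler
        return register

    def dispatch(self, path):
        """Look up and invoke the handler registered for a path."""
        return self._routes[path]()


app = App()

@app.route('/po/worklist/download')  # hypothetical path
def get_po_work_list_data():
    # The real implementation queries the database and writes the rows
    # into an Excel workbook; here we return placeholder rows instead.
    return [('PO-0001', 'Vendor A', 100.0), ('PO-0002', 'Vendor B', 250.5)]

print(len(app.dispatch('/po/worklist/download')))  # prints 2
```

The decorator registers the handler at definition time, so the framework can later dispatch an incoming request path straight to get_po_work_list_data().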

 

If you would like to know more about SHINE, the complete documentation can be found here.

Happy Coding!

There are many important components to know when working with the integration framework for SAP Business One. In this blog we list and describe them to give you an overview. In addition, we are working on different documents to help you in your daily work with the integration framework.

To get an overview of integration and the integration framework, please refer to the central blog or the other blogs:
Integration Framework for SAP Business One (B1if) – Central Blog
Integration Framework Version 1 – Concept of Scenario Development

 

Additional Components in the Integration Framework for SAP Business One

Around the scenario packages there are other components that are required for data exchange or that support the work with the integration framework.

Find a summarized description of each of them below:

  • System Landscape Directory (SLD)
    The System Landscape Directory (SLD) is the central place to maintain and administer all systems and their respective connections to each other. Each system entry contains the parameters for the connections that are used by the integration framework. To interact with different entities, each SLD entry provides a set of connectivity types: for SAP Business One, the Service Layer, DI API, and JDBC; the SAP ERP system type has settings for RFC and Web services, and so on. The SLD furthermore provides the File System, HTTP System, Web Service, Connecting to Databases, FTP System, E-Mail, and Timer system types. These system types are templates, pre-configured with the connectivity parameters for the inbound and outbound channel types used in scenarios.

    To define a new SLD entry, select an existing system type, then complete the system definition by adding specific network connectivity information, such as host name, IP address, or login information for a database.

    A system type for each SAP Business One major release is available. To interact with SAP Business One, you can use the standard APIs, such as the DI API, or the Service Layer for SAP Business One, version for SAP HANA.

    An SAP Business One company database is considered a system. When the integration framework is installed as part of SAP Business One, the SLD is synchronized with the SLD of the local SAP Business One server. The company databases are automatically registered as systems in the SLD of the integration framework. It is also possible to create each system manually. And, for even more flexibility, it is also possible to define connectivity in the functional atoms, independent of the SLD entries.

    Further information:
    Document about Company SLD Configuration

  • BizStore
    The BizStore is a persistency layer where all relevant documents related to the integration framework and the created scenarios are stored. This means that when a new scenario is created, all information is stored here, as well as each scenario step with its respective XML documents. The BizStore can be accessed by using the embedded XML editor, or by using an external editor, which shows the scenario structure. If an XML editor supports WebDAV, it is possible to connect to the web folders of the BizStore directly.
  • Scenario Activation
    Existing or newly created scenarios must be set up for the specific system landscape and activated before they can be used. This setup and activation can be done manually or by running the Setup Wizard, which walks step by step through the scenario activation and can help in finding issues or missing information within the scenario setup. If a scenario is set up with a timer trigger, it can also be triggered manually after activation, at any time and without any specific limitations.
  • Monitoring – Message Log
    To analyze the integration runtime, the integration framework provides monitoring of messages and processes. The message log captures all messages being processed through the integration framework for troubleshooting scenarios. This includes information such as status, scenario package and step, sender and receiver system, trigger, date and time, plus any error messages.
  • Scenario Transport
    Scenario packages or individual scenario steps can be transported from one framework installation to another. To transport them, the scenario package or step can be exported as a zip file from the source integration framework and imported into any destination integration framework. One example: a scenario is created on a test environment and, after final testing, is transported to the productive environment. This is considered best practice for creating scenarios.

Integration Framework Model and its Benefit

The concept of having a generic inbound and outbound phase for each scenario step provides flexibility when transporting scenarios from one system environment to another. Scenario packages do not reference the actual sender or receiver system directly as hard-coded entries; the concrete inbound and outbound systems, defined in the System Landscape Directory (SLD), are selected only at package setup. This means that after the transport of a scenario is complete, only the SLD settings need to be set.

This concept is graphically represented below:

Integration Model 1 Entities

The left side shows the different scenario packages stored within the integration framework. These scenario packages contain one or multiple scenario steps.

The right side shows the System Landscape Directory (SLD) that contains all systems of your integration landscape. The different scenario packages rely on the information stored within the SLD.

 

Happy integrating!

Krisztian Papai, Justin McGuire, Annemarie Kiefer, and Miriam Rieger

 

Related Links

openSAP course: In Action – Integration Framework for SAP Business One
SAP Help Portal: Integration Framework for SAP Business One
Document: Configure Connectivity to SAP Business One Service Layer


Related Blogs

Integration Framework for SAP Business One (B1if) – Central Blog
Integration Framework Version 1 – Concept of Scenario Development

To rework a Las Vegas truism, what happens at SAP TechEd 2018 most definitely doesn’t stay there. That’s because all three events ─ Las Vegas, Barcelona, and Bangalore ─ are delivering a brilliantly crafted new SAP TechEd Learning Journey for people to pursue before, during, and after each conference.

“SAP TechEd attendees are at the forefront of the evolution to the Intelligent Enterprise, and this year’s learning journey is designed to get them there faster,” said Björn Goerke, chief technology officer and president of SAP Cloud Platform. “People can quickly get ahead of the curve with education before SAP TechEd, prioritize their experience onsite, and set a path for achievement afterwards based on their evolving, individual learning goals for innovation.”

In this VIDEO, Christoph Liebig, head of strategy and product management, SAP Cloud Platform, explains what participants can expect at SAP TechEd.

Recommendations for non-stop learning

It couldn’t be easier for participants to find all the relevant content for the tracks that interest them. Attendees have their pick of 35 first-rate learning journeys across eight conference tracks. Watch for updates as each learning journey will be fully displayed very soon.

  • Integration Out-of-the-Box
  • Application Landscapes and Cloud-Native Architectures
  • Consumer Grade Experience
  • Applied Intelligence
  • Next Gen Data Management
  • Open Platform
  • Security by Default
  • Explore SAP

One click on each conference track reveals the full SAP TechEd Learning Journey for that topic. Participants can select what content interests them the most, depending on their individual skill levels and interests. Here’s an example of how it works. Suppose you’re interested in Applied Intelligence.

Curious specialists and generalists, managers and others can start with “Explore,” which shows relevant overview lectures covering SAP Leonardo technologies.

Architects, IT decision makers, or anyone with some basic knowledge can enter the “Discover” path to find sessions with greater details on what these technologies ─ machine learning, blockchain, IoT, analytics, and more ─ can do for your business.

Experienced developers, architects, IT specialists, and others can click on “Learn” to find hands-on, how-to sessions on applying these innovations in your real life environment.

Anyone who’s ready to build on their knowledge in this area can enter “Expand Skills” for learning opportunities outside of SAP TechEd events. These encompass courses on the openSAP MOOCs platform, as well as expert resources on the SAP Help Portal and SAP Community.

SAP TechEd attendees this year also receive access to selected learning content in the SAP Learning Hub that’s relevant to the event. Registration for SAP TechEd in Las Vegas, Barcelona, and Bangalore has kicked off. Don’t miss this unprecedented opportunity to chart your personal learning journey with SAP.

Follow me @smgaler

Related content:

SAP TechEd 2018: Register for this Year’s Ultimate How-to Learning Adventure

VIDEO: Find out more about the SAP TechEd Learning Journey

VIDEOS: How SAP TechEd 2018 Helps You Make Your Enterprise Intelligent

VIDEO: Can’t-miss demos from last year’s SAP TechEd

 

A key element in application development is creating and sharing reusable components, to ensure consistency and quality, and to make the development easier and faster.

SAPUI5 provides the option to build your own control and other reusable components but SAP Web IDE was lacking the support in this important capability.

Not anymore!

With our latest release, you can create, build, deploy and reuse SAP Fiori/UI5 libraries/controls.

Here’s how:

 

Create a new SAP Fiori Library

The best way to create something new in SAP Web IDE is using our rich collection of templates.

We have now added a template for creating an SAP Fiori Library.

Click on File > New > Project from Template.

In the Category dropdown select SAP Fiori Library.

Select the SAP Fiori Library template and click Next.

 

Provide a Title and make sure the Add default control checkbox remains checked. This will create a sample control in your new library.

Click Finish.

Review the generated content in your workspace. You can see the library was created with an src folder that contains the sample control and future code, and a test folder that contains an existing QUnit test and future automated tests.

 

Add an SAPUI5 Control

You can easily add a new SAPUI5 control to your library, or add one from an existing library.

Right-click the library and choose New > Control or New > Control from Existing Library.

The wizard will guide you through the needed steps and a new control will be added to your controls folder.

 

Add Reference to Library

You can easily add a reference to a library in your SAPUI5 app.

Right-click your existing SAPUI5 app and choose Project > Add Reference to Library.

Select the repository containing the desired library, select the checkbox of the library and click Add.

Then your app is updated with a dependency to the library (in its manifest.json file), and this allows you to use any control included in it.
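For illustration, the dependency entry that ends up in the app's manifest.json follows the standard SAPUI5 manifest structure shown below; the library name my.demo.library is a hypothetical placeholder:

```json
{
  "sap.ui5": {
    "dependencies": {
      "libs": {
        "my.demo.library": {}
      }
    }
  }
}
```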

 

Use a Control from the Library

For the sake of the example, I have created a new library from the template; it contains the sample control.

Then in my SAPUI5 app, in my XML view, I have added the control from the library:
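As a sketch of what this looks like (the library name my.demo.library and the control name SampleControl are hypothetical placeholders, not the actual generated names), the XML view declares a namespace for the library and then uses the control through it:

```xml
<mvc:View xmlns:mvc="sap.ui.core.mvc"
          xmlns:demo="my.demo.library">
    <demo:SampleControl/>
</mvc:View>
```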

 

Run the App

If you are referring to already deployed libraries, there are no further steps.

However, if you want to run the libraries that are located in your workspace, do the following:

Enter the Run Configurations of your app and create a new one.

In the URL Components tab, add the sap-ui-xx-lesssupport parameter and set its value to true.

In the Advanced Settings tab, under the Application Resources section, select the Use my workspace first checkbox.

Save and run your app.

 

This is how my app with the sample control looks:

 

Pretty cool right?

We still have a way to go, but hopefully these capabilities will be beneficial for you 🙂

For more information, see the documentation.

 

 

The SAP Business One (SAP B1) Query Generator helps you develop SQL queries on the fly very easily. You have two major options to generate a query:

  1. You can select the required tables and conditions in the generator to build the required query.
  2. You can build the query in SQL Server Management Studio (SSMS) on the concerned SAP B1 company database directly, and paste it into the query generator’s output-query window.

The second option allows you to develop complex queries, and it may be easier if you are already comfortable with SSMS.

But in the second case there is a catch: you may create a complex query that gives you the right result in SSMS but does not behave properly in the Query Generator’s output. This happens especially when the query contains a union operation.

The document number column shows invoices and credit notes, as in the screenshot below.

Overall, the result looks the same in both SSMS and the Query Generator’s output, but the behavior of the document links in the Query Generator’s output will be erratic. For example, if you have written a complex query of invoices and their linked corresponding credit notes using a union operator, the document links to the corresponding credit notes will take you to the wrong documents (documents that have no links to their corresponding listed parent documents) from the Query Generator’s result window.

AR Invoice with the same number as the Credit Memo:

Code for the Invoice and Credit Memo query:

SELECT T0.[DocNum], T0.[DocDate], T0.[DocStatus], T0.[CardCode], T0.[CardName], T0.[NumAtCard], T0.[Address2],
T1.[LineNum], T1.[ItemCode], T1.[Dscription], T1.[Quantity], T1.[Price], T1.[LineTotal],
  T2.[CityS], T2.[StateS], T2.[ZipCodeS], T2.[CountryS] FROM OINV T0 INNER JOIN INV1 T1
   ON T0.DocEntry = T1.DocEntry INNER JOIN INV12 T2 ON T0.DocEntry = T2.DocEntry WHERE T0.[DocDate] >= '20171001'
and T0.[DocDate] < '20180101'
and T1.[ItemCode] Like '%%90700%%'
UNION ALL

 

SELECT T0.[DocNum], T0.[DocDate], T0.[DocStatus], T0.[CardCode], T0.[CardName], T0.[NumAtCard], T0.[Address2], 
T1.[LineNum], T1.[ItemCode], T1.[Dscription], T1.[Quantity], T1.[Price], T1.[LineTotal],null AS CityS,null AS StateS,
null AS ZipCodeS,null AS CountryS 
FROM ORIN T0 INNER JOIN RIN1 T1 ON T0.DocEntry = T1.DocEntry WHERE T0.[DocDate] >= '20140101'
and T0.[DocDate] < '20150101'
and T1.[ItemCode] Like '%%90700%%'

SAP Business One (SAP B1) has earned the top spot for small and medium-sized enterprises (SMEs) in recent times. The productivity and efficiency it offers are unparalleled among the ERPs in the market within the given segment. It is simple yet highly capable software that covers the requirements of any SME. Now, you can easily integrate SAP Business One with eCommerce, marketplace, CRM, shipping, and POS systems to automate business processes with the help of APPSeCONNECT.

About APPSeCONNECT:

APPSeCONNECT is a smart iPaaS solution for enterprises to integrate SAP Business One with other business applications.

This is part 3 of my ongoing blog series about setting up our central ATC system.

Part 1: Setting the stage and Part 2: Preparing the systems

By now we’ve become fairly comfortable with how our new central ATC system behaves and how to connect satellite systems to it. So it’s time to get down to the nitty-gritty details like prototyping the baseline checks, tweaking the message priorities and setting up the exemption process. In parallel, I’m putting together the necessary documentation to share with the developers before going live with central ATC checks a couple of weeks from now.


Setting the baseline

The first thing I needed to do was to decide which checks to include in the variant that would define our baseline. Before I could do that, however, I first had to think about which checks should go into our future check variants used during task/transport release, where prio 1 & 2 findings will prevent the release. As my related question unfortunately didn’t get any responses, I just went ahead and started with the check variant we already had, removing and adding checks not quite at random, but essentially following my “gut feeling” of what would and wouldn’t make sense for us.

We made the decision to have some checks prevent task/transport release even if the findings come from “old” code and not just from newly created or changed sections. One such example is hard updates to SAP tables (with the exception of TVARVC), of which we know we have far too many in legacy code. We plan to use the exemption process to at least document the places and reasons why these updates are in the code (and why they cannot easily be replaced with better methods). Because of this decision, the check variant used for the baselines contains a few checks fewer than the one in the works for transport release.

Once I had the check-variant defined I set up a run series to check all objects in all packages starting with “Z*” in the sandbox system:

Some 5 hours later, the job had finished with just a “few” findings and check failures:

The “missing prerequisites” errors no longer occurred for the impacted objects in a rerun after we downloaded an updated version of OSS-Note 2270689 – Remote Analysis (for source system) into the sandbox system. The other errors – as far as I can tell – are actual issues in the source code, like syntax errors or missing screen definitions.

One thing was a bit weird while the checks for the baseline were being executed: most of the time, refreshing the status screen showed updates in increments of 50 objects and the process kept running at a steady pace. But during some periods (more than an hour, actually) not much happened as far as visible progress goes. Checking via SM50, it looked as if the process was somewhat stuck in routines related to class CL_CI_PROVIDE_CHECKSUM where – judging from some peeks via SAT – quite a lot happened in just a few seconds, with some counts reaching the high tens of thousands within 10 seconds.

After getting the results I added them to the baseline:

I then “played” with the results to check that this really works as described in the related blog post published by Olga Dolinskaja. And, guess what? It does!


Tweaking the message priorities

One task I was looking forward to the least was tweaking the message priorities. Even though this can be done easily enough from either transaction ATC or SCI and the check variant definition by going to Code Inspector –> Management of –> Message Priorities, it is a bit of a tedious task.

Our plan is to start with just a few checks preventing a task/transport release which in turn entails having to reclassify quite a lot of checks from message priority 1 or 2 to 3. In order to know which checks needed to be tweaked, we did the following steps:

  1. Properly connected our development systems via trusted-RFC connections to the central ATC system
  2. Defined the central check variant to be used during task/transport release: CHECKS_TRANSPORT_RELEASE (tip: copy the name to the clipboard as there’s no F4-help in the satellite systems!)
  3. Defined the check variants in the satellite development systems with reference to the central variant in the check system: CENTRAL_TRANSPORT_RELEASE (I plan to make more such referenced checks available for developers and will have all the variant names start with “CENTRAL”)
  4. Have a program select all open transports once a day and execute the ATC-checks for its objects via the central check now available (some more details about this step can be found here)
  5. Take a look each day at the results in the satellite systems to see which findings get flagged with which priority and update them as needed to get closer to the envisioned future state for task/transport release

Doing it like this has the advantage that I can see how the findings for a transport change from day to day depending on the tweaks I apply to the priorities. I therefore don’t have to guess what will happen but will actually know (or so I hope!). It should also help with not inconveniencing the developers too much once we go live, as I can tell – and show! – them what to expect beforehand.


Defining the exemption process

ABAP development happens in many locations and also by external developers around the globe, so we knew in advance that we’d have to take a close look at how best to define the exemption process before going live with checks preventing the release of tasks/transports. Of particular concern was the apparent restriction (described in another of Olga’s neat blog posts) that a developer needs to select a specific approver for an exemption request, as most developers wouldn’t really know which of the perhaps handful of potential approvers might actually be available to “hit the button” at any given time.

We have now figured out a way to do this within the current environment:

  1. We defined a new user “ATC_APPROVER” which is available in the satellite systems and the central ATC system with the needed authorizations
  2. We will have an email group address created for this user
  3. Emails triggered by the exemption process will go to this group address from where they’ll be passed on to the actual approvers
  4. The point person (or whoever gets to it first) logs in to the central ATC-system with her/his own user ID and approves or rejects the request(s)

We already checked: as long as everybody a) has the required authorizations and b) is listed as an approver in the ATC “Maintain Approver” settings, exemption requests can be updated by whoever is available – it doesn’t have to be the ID listed as approver. Once the approval or rejection is done, the username switches to whoever actually did it.

At first, it looked as if we’d have to manually tweak the settings of the job that sends out notifications about requested and processed exemptions because the standard configuration would only run the job once per day – obviously not enough for our requirements! Luckily enough, we found OSS-Note 2619387 – ATC: Downport Immediate E-Mail Notification and after its implementation, it’s now possible to trigger notifications immediately:

This should hopefully give us the needed flexibility to turn around exemption requests fairly quickly – esp. as we hope to start all of this with just a small set of checks where exemptions might play a role and get requested.


Next Steps

To get our local, global and external developers on board with the new process, I already invited them to another workshop-like session during which I’ll show them what has been set up and how we plan to implement it. After the session – and before I go on vacation 🙂 – we’ll go live with the newly defined check-variant BUT without preventing task/transport release for a couple of weeks. That way, everybody can get accustomed to the new checks and can also try the exemption process without potential rejections having any kind of real effect. The time can also be used to collect feedback about which message priorities might still need to be tweaked before really hitting the switch.

That’s it for now – I’ll keep you posted about how it goes the closer we get to the final stage.

This year, SAP TechEd is the ultimate how-to learning adventure for developers of the Intelligent Enterprise and we’re inviting some lucky learners to join us and meet the SAP Knowledge and Education team at the events!

Get ready for the ultimate SAP technical education conference, SAP TechEd! The conference takes place at three locations throughout October and November, Las Vegas (Oct 2-5), Barcelona (Oct 23-25), and Bangalore (Nov 28-30). If you’re learning with our digital platforms, openSAP and SAP Learning Hub, we’d like to invite you to enter our contest to win one of ten tickets per SAP TechEd event.

So, how do you enter?

We want to know how you learn about SAP! To share this with us, we want you to create a short video (max 2 minutes) telling us about how you learn with SAP. Here are some hints about the type of things you can include:

  • How long have you been learning with openSAP and SAP Learning Hub?
  • How do you use openSAP and SAP Learning Hub?
  • What topics/learning journeys at SAP TechEd are you most interested in?
  • Tell us why you should get a ticket and why attending SAP TechEd 2018 will support your further growth!

Once your video is ready, here are the next steps:

  • Upload it to a video platform of your choice (such as YouTube or Vimeo); you can set it to private as long as we can access it via a direct link. Alternatively, you can upload it to a sharing platform (such as Dropbox, Google Drive, or Microsoft OneDrive).
  • Go to the openSAP course, SAP TechEd Recap 2017. (You will need to enroll if you haven’t already).
  • Under Learnings, you’ll see the entry, “2018 Ticket Entry”
  • In the discussion forum, “Start a new topic”
  • Add the link to your video and don’t forget to include which SAP TechEd location you’d like to attend. (We will not be able to reach out to everyone to follow up so please include this information or your application will not be judged.)
  • Deadline: Entries will close on September 3 at 09:00 UTC

All valid entries will be entered in a final draw. If you’re one of the lucky winners, the SAP Knowledge and Education team will be looking forward to meeting you at the event! You can find all the information you need about attending SAP TechEd on the official website.

Best of luck with your entry! Find additional information about SAP TechEd learning journeys.

Please note: the winner(s) will receive free entry to the SAP TechEd conference in either Las Vegas, Barcelona, or Bangalore – the location must be included in your entry. The winners will be responsible for organizing and paying for their own travel expenses related to attending the conference. Tickets are non-transferable.

The new version of the SAP S/4HANA Cloud SDK Java libraries is available as of today. You can update your dependencies to version 2.3.1 and consume the new version from Maven Central.
In this blog post, we will walk you through the highlights of this release. For a complete overview, visit our release notes for the Java libraries. The release notes also include the change log of all our releases so far.
At the end of the article, you will find a set of instructions on how to update to the new version.

Java Libraries: Release Highlights

Update of Java VDM to SAP S/4HANA Cloud 1808

Recently, SAP released SAP S/4HANA Cloud 1808.

With version 2.3.1, the SAP S/4HANA Cloud SDK updates the Java virtual data model (VDM) to support all newly released or updated OData services of an SAP S/4HANA Cloud 1808 system. As explained in the blog post about the VDM itself, the VDM greatly simplifies reading and writing data from an SAP S/4HANA system in your Java code.

You can use the SDK to connect to all OData services listed in the SAP API Business Hub for SAP S/4HANA Cloud. As usual, the Java representations of all OData services are available from the package com.sap.cloud.sdk.s4hana.datamodel.odata.services.
Furthermore, there are selected BAPIs available that can be accessed via the Java VDM for BAPIs, which we have also updated for SAP S/4HANA Cloud 1808.

Executing OData requests in a Resilient Manner

Applications that access SAP S/4HANA systems or other downstream services are, by definition, distributed systems. In a distributed system, developers need to take special precautions against failures when accessing those dependencies in order to build a resilient, fault-tolerant application.
Making it easy to build resilient applications has always been an important goal of the SAP S/4HANA Cloud SDK, for example, by providing the ErpCommand class and other classes which make constructing a Hystrix command easy. For more details about the concept of resilience and the support available in the SDK so far, consult this tutorial.

Now, with version 2.3.1, we make it even easier to execute OData requests using the Java VDM in a resilient manner, automatically wrapped in a Hystrix command. Simply use the new method asResilientCommand available from any fluent helper of the Java VDM. This method returns an instance of a pre-configured Hystrix command that you can use like any Hystrix command to perform a resilient call, for example, by executing the command synchronously or asynchronously. This is supported for any OData operation of SAP S/4HANA.

To execute an OData request in a resilient manner using the new method, instead of calling, for example, service.getAllBusinessPartner().execute(), call service.getAllBusinessPartner().asResilientCommand().execute(). That is, build up the OData request as you are used to with the Java VDM, and then turn it into a command by calling asResilientCommand before executing it. A more complete example could look as follows. Note that we can supply a fallback as a lambda function (which receives the exception that occurred during execution as a parameter).

// construct OData request as usual
new DefaultBusinessPartnerService()
    .getAllBusinessPartner()
    .filter(...).select(...)
    // wrap in Hystrix command and specify a fallback
    .asResilientCommand()
    .withFallback( (e) -> Collections.emptyList() ) // ignoring execution exception "e"
    // execute the Hystrix command (or queue / observe it)
    .execute();

Under the hood, this convenient way of executing Java VDM requests resiliently uses the new class ODataRequestCommand, which represents a sensible default implementation of ErpCommand for OData requests.
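Conceptually, wrapping a request in a command with a fallback can be pictured with the following plain-Java sketch. This is only an illustration of the pattern – it is not the SDK’s actual ErpCommand/Hystrix implementation, and all names in it (ResilientCommand, the sample supplier) are hypothetical:

```java
import java.util.Collections;
import java.util.List;
import java.util.function.Function;
import java.util.function.Supplier;

// Minimal, hypothetical stand-in for a resilient command wrapping a request.
class ResilientCommand<T> {
    private final Supplier<T> request;
    private Function<RuntimeException, T> fallback;

    ResilientCommand(Supplier<T> request) {
        this.request = request;
    }

    // Register a fallback that receives the exception raised during execution.
    ResilientCommand<T> withFallback(Function<RuntimeException, T> fallback) {
        this.fallback = fallback;
        return this;
    }

    // Run the request; on failure, delegate to the fallback if one was given.
    T execute() {
        try {
            return request.get();
        } catch (RuntimeException e) {
            if (fallback != null) {
                return fallback.apply(e);
            }
            throw e;
        }
    }
}

public class ResilienceSketch {
    public static void main(String[] args) {
        // A failing "OData call" falls back to an empty list instead of propagating.
        List<String> result = new ResilientCommand<List<String>>(
                () -> { throw new IllegalStateException("ERP unreachable"); })
            .withFallback(e -> Collections.emptyList())
            .execute();
        System.out.println(result.isEmpty()); // prints "true"
    }
}
```

The real Hystrix command additionally provides circuit breaking, timeouts, and asynchronous execution (queue/observe), which this sketch deliberately omits.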

Further Improvements

We fixed the issue that .gitignore files had not been correctly generated by archetypes. Now, each project generated by one of our archetypes (see this tutorial for an example) includes a .gitignore file in the root folder with sensible defaults. Most importantly, credential files will be ignored to avoid accidentally committing them to Git. Please ensure that you have never committed those to your Git repository.

The application programming model for SAP Cloud Platform makes provisioning OData services easy, among other features. Projects that want to use both the OData consumption capabilities (or other features) of the SAP S/4HANA Cloud SDK and the libraries for OData provisioning will now find it easier to use the two together with consistent versions. The SAP S/4HANA Cloud SDK manages the version of the OData v2 provisioning libraries of the application programming model in the BOM of the SAP S/4HANA Cloud SDK. This allows you to quickly use the corresponding libraries for providing OData v2 services by simply adding the required dependencies to your project; consistent versions are ensured automatically thanks to the BOM of the SAP S/4HANA Cloud SDK. In particular, we manage the following dependencies required for provisioning OData v2 or v4 services: com.sap.cloud.servicesdk.prov:api (v2, v4), com.sap.cloud.servicesdk.prov:odata2.web, com.sap.cloud.servicesdk.prov:odata2.xsa, com.sap.cloud.servicesdk.prov:odatav2-hybrid, com.sap.cloud.servicesdk.prov:odatav2-prov, and com.sap.cloud.servicesdk.prov:odatav4.
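For example, once the sdk-bom is imported in the dependency management of your pom.xml, one of the provisioning libraries listed above can be declared without an explicit version (the artifact coordinates below are taken from the list in this post; whether you need this particular artifact depends on your project):

```xml
<dependency>
    <groupId>com.sap.cloud.servicesdk.prov</groupId>
    <artifactId>odatav2-prov</artifactId>
    <!-- no <version> element needed: the version is managed by the sdk-bom -->
</dependency>
```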

At the same time, we have upgraded the dependencies relevant for the application programming model for SAP Cloud Platform from version 1.18.0 to 1.19.0.

When errors, for example, regarding null handling and type conversion, occur during the parsing of OData payloads in the VDM, we now throw unchecked exceptions. Previously, checked exceptions would be thrown. In cases where the errors cannot be recovered from, an ODataPayloadParsingFailedException will be thrown, which is a RuntimeException.
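Because the new exception is unchecked, calling code no longer needs a throws clause or mandatory try/catch, but it can still catch parsing failures explicitly where recovery makes sense. The sketch below illustrates this; note that the exception class here is a stand-in defined only for the example (the real class ships with the SDK), and parseQuantity is a hypothetical parser:

```java
// Stand-in for the SDK's exception type, defined here only for illustration.
class ODataPayloadParsingFailedException extends RuntimeException {
    ODataPayloadParsingFailedException(String message) {
        super(message);
    }
}

public class UncheckedParsingSketch {
    // Hypothetical parser that fails on malformed payload values.
    static int parseQuantity(String payload) {
        try {
            return Integer.parseInt(payload.trim());
        } catch (NumberFormatException e) {
            throw new ODataPayloadParsingFailedException("cannot parse: " + payload);
        }
    }

    public static void main(String[] args) {
        // The compiler does not force handling: no throws clause required.
        System.out.println(parseQuantity(" 42 ")); // prints "42"

        // Explicit handling is still possible where a failure is recoverable.
        try {
            parseQuantity("not-a-number");
        } catch (ODataPayloadParsingFailedException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```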

Several further improvements are listed in the full release notes.

How to Update the Java Libraries

To update the version of the SAP S/4HANA Cloud SDK Java libraries used in an existing project, proceed as follows:

  • Open the pom.xml file in the root folder of your project.
  • Locate the dependency management section and therein the sdk-bom dependency.
  • Update the version of that dependency to 2.3.1.

With this, you are already done, thanks to the “bill of materials” (BOM) approach. Your dependency should look like this:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.sap.cloud.s4hana</groupId>
            <artifactId>sdk-bom</artifactId>
            <version>2.3.1</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
    <!-- possibly further managed dependencies ... -->
</dependencyManagement>

You can now recompile your project (be aware of the compatibility notes, though) and leverage the new features of the SAP S/4HANA Cloud SDK in version 2.3.1.

Of course, you can also generate a new project that uses version 2.3.1 from the start by running the Maven archetypes for Neo or Cloud Foundry with -DarchetypeVersion=2.3.1 (or RELEASE).