Prototyping your scenarios with BW7.4-on-HANA in the Cloud – a piece of cake with SDATA
I hope you have all heard about the exciting new functionality offered by BW-on-HANA, especially with the latest feature package BW7.4 SP08 (SCN or openSAP course). And of course you know about the easy way to get a free trial version of BW7.4 SP08 on HANA (link). Now the big question is: how can you test your reporting scenarios with your own data in such an environment? With the SDATA tool we have made exactly this extremely easy and comfortable!
The SDATA tool allows you to export a complete reporting scenario, including all data and metadata, from your production, test, or development system and then import it, e.g., into a new trial cloud system. There you can run it, test it, play around with it, and gain experience with the new possibilities, not on an abstract and artificial data set, but with your own data.
I guess I have the attention of most of you by now ;-), so let's take a deeper look at how this works.
The SDATA tool is available with the following releases:
- BW7.4 SP08 or higher,
- BW7.31 SP11 or higher,
- BW7.30 SP11 or higher.
For pilot scenarios the SDATA tool can be made available on lower SP levels of the above releases. Please see note 2117680 for details.
My demo example is a BW Query SDATADEMO with sales data. It is defined on a MultiProvider and contains a key figure with exception aggregation (Customer Count) and a calculated key figure (Plan-Actual). The source system is on release BW7.30 on a classic database. I want to test whether this query is sped up by the exception aggregation pushdown when I migrate to HANA on the latest BW release.
Transaction RSDATA allows you to specify a transfer path, in my case the source is a local BW system and the target is a folder on my PC. In the next step I specify my scenario. A scenario can be a simple InfoProvider (an InfoCube or a DSO) or a MultiProvider, but it can also be a specific BW Query with complex calculations attached.
Once I have identified one or several scenarios, I start the “object collection”. In this process step, all dependent objects that are necessary to run the scenario are collected. For our query scenario this means all required query elements (like CKFs, RKFs, variables, …), the MultiProvider and its definition, the PartProviders, and the InfoObjects. In contrast to a metadata transport collection, this collection also includes the data itself. In the resulting list after the collection you see not only an entry for the metadata, but also the transactional data (request-wise) and the master data, including texts and hierarchy data.
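To picture what such an object collection does conceptually, here is a hedged Python sketch of a dependency walk over a scenario. All object names, the dependency table, and the data entries are invented for illustration; this is not SDATA's actual implementation.

```python
# Hypothetical sketch of the "object collection" step: starting from a query,
# walk the dependency graph and gather both the metadata objects and the
# associated data entries (request-wise transactional data, master data).
# Everything below is illustrative, not real SDATA internals.

DEPENDENCIES = {
    "QUERY:SDATADEMO": ["MPRO:SALES_MP", "CKF:PLAN_ACTUAL", "RKF:CUST_COUNT"],
    "MPRO:SALES_MP": ["CUBE:SALES_2010", "CUBE:PLAN"],
    "CUBE:SALES_2010": ["IOBJ:0CUSTOMER", "IOBJ:0CALYEAR"],
    "CUBE:PLAN": ["IOBJ:0CUSTOMER", "IOBJ:0CALYEAR"],
}

DATA_ENTRIES = {  # request-wise transactional data and master data per object
    "CUBE:SALES_2010": ["DREQ:SALES_2010/REQ1", "DREQ:SALES_2010/REQ2"],
    "CUBE:PLAN": ["DREQ:PLAN/REQ1"],
    "IOBJ:0CUSTOMER": ["MDAT:0CUSTOMER", "TEXT:0CUSTOMER", "HIER:0CUSTOMER"],
}

def collect(scenario):
    """Return all metadata objects and data entries needed for a scenario."""
    seen, stack = set(), [scenario]
    while stack:
        obj = stack.pop()
        if obj in seen:
            continue
        seen.add(obj)
        stack.extend(DEPENDENCIES.get(obj, []))
    data = [d for obj in seen for d in DATA_ENTRIES.get(obj, [])]
    return sorted(seen), sorted(data)

meta, data = collect("QUERY:SDATADEMO")
```

Note how the collection yields two lists, mirroring the result screen: the metadata objects themselves and, separately, the data entries that belong to them.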
In this screen I can also deselect parts of the scenario. For example, since my query only reads data from the year 2010 and the “plan” InfoCube, I could deselect the data files (object type DREQ) of the 2006 Sales InfoCube and thus reduce the data volume for the transfer.
Once I start the transfer, the data of these objects is written into the specified folder in a compressed, internal format. The data files cannot be read directly since they are compressed; they can only be imported into another BW system where the SDATA tool is available. The metadata files are stored as (“readable”) XML files.
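Conceptually, the export thus produces a readable XML metadata file plus compressed data packages. The following Python sketch mimics that file layout under assumed names (`SDATADEMO_meta.xml`, `SDATADEMO_data_NNN.gz`); the real SDATA file format is internal and different.

```python
# Illustrative sketch of the export file layout: metadata as readable XML,
# data split into packages and written compressed. File names, the gzip/JSON
# choice, and the package size are assumptions for illustration only.
import gzip
import json
import os
import tempfile

def export_scenario(folder, rows, package_size=2):
    # Metadata goes to a plain, readable XML file.
    with open(os.path.join(folder, "SDATADEMO_meta.xml"), "w") as f:
        f.write("<scenario name='SDATADEMO'>...</scenario>")
    # Data is split into packages and stored compressed.
    for i in range(0, len(rows), package_size):
        path = os.path.join(folder, f"SDATADEMO_data_{i // package_size:03d}.gz")
        with gzip.open(path, "wt") as f:
            json.dump(rows[i:i + package_size], f)

folder = tempfile.mkdtemp()
rows = [{"customer": c, "sales": c * 100} for c in range(5)]
export_scenario(folder, rows)
```

Because the data is written package by package, memory usage stays bounded regardless of how many rows a scenario contains.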
In the target system I again start transaction RSDATA. This time my transfer path is not from BW system to file, but from file (folder) to the BW system. I then specify the name of my scenario, here the name of my BW Query SDATADEMO.
SDATA reads the files and collects the objects that are included in them: queries, query elements, InfoProviders, … . Once I start the object collection, SDATA determines the status of these objects in the target system; in my case it is an “empty” system, so all objects are marked as “does not exist”. As in the export process, I can deselect parts of the scenario, e.g. some of the data packages of InfoProviders that are not relevant here.
I can then start the data import – currently only in dialog mode, but in the future also in batch mode (which will then require the files to be located on the application server). The process creates all InfoObjects, InfoProviders, and other TLOGO objects in the correct sequence and then starts the load of the transactional and master data.
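The “correct sequence” requirement is essentially a topological sort of the object dependency graph: InfoObjects must exist before the InfoProviders that use them, and those before the query. A minimal sketch with Python's standard `graphlib` (the object names and the dependency graph are illustrative, not taken from SDATA):

```python
# Sketch of creating TLOGO objects in dependency order via topological sort.
# The graph maps each object to the objects it depends on; static_order()
# then yields every object after all of its dependencies.
from graphlib import TopologicalSorter  # Python 3.9+

deps = {
    "QUERY:SDATADEMO": {"MPRO:SALES_MP"},
    "MPRO:SALES_MP": {"CUBE:SALES_2010", "CUBE:PLAN"},
    "CUBE:SALES_2010": {"IOBJ:0CUSTOMER"},
    "CUBE:PLAN": {"IOBJ:0CUSTOMER"},
}

order = list(TopologicalSorter(deps).static_order())
# The InfoObject comes first, the query last; data loads start afterwards.
```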
Once the complete scenario is available, I can see my InfoProviders in the Admin Workbench and can immediately execute the query. But I can also load additional data, re-model the scenario using the new BW-on-HANA objects CompositeProvider and/or advanced DSO, and run additional tests. All this with my metadata and my data!
Enjoy 🙂 !
Please note that SDATA does not replace, and should not be used instead of, the standard BW metadata transports. SDATA is designed only for copying test and prototype reporting scenarios into non-productive systems, e.g. test scenarios within cloud systems!
This example shows the easiest case because the import is done into an “empty” system. In this case there are no metadata overwrites or merges; everything is newly created exactly as it was in the source system. If you import a second scenario into this system, the SDATA tool will determine what already exists and does not need to be imported again. If the metadata is consistent between source and target, the import is unproblematic; if there are differences, the SDATA tool tries to merge the metadata.
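The per-object decision on import can be pictured as a simple status check against the target system: create what is missing, skip what is already consistent, and attempt a merge where the metadata differs. A purely illustrative sketch, not SDATA's actual merge logic:

```python
# Hedged sketch of the import decision per object. The object names and
# metadata dictionaries are invented for illustration.

def import_action(obj_name, source_meta, target_system):
    """Decide what to do with one object on import."""
    existing = target_system.get(obj_name)
    if existing is None:
        return "create"   # "does not exist" -> newly created as in the source
    if existing == source_meta:
        return "skip"     # already consistent, no re-import needed
    return "merge"        # differing metadata -> tool attempts a merge

target = {"IOBJ:0CUSTOMER": {"length": 10}}
assert import_action("CUBE:SALES_2010", {"dims": 3}, target) == "create"
assert import_action("IOBJ:0CUSTOMER", {"length": 10}, target) == "skip"
assert import_action("IOBJ:0CUSTOMER", {"length": 20}, target) == "merge"
```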
The data is exported and imported package-wise, so technically there are no limits with respect to size. Nevertheless, the current lack of batch mode and parallelization will probably force you to be restrictive here and, e.g., deselect the data of InfoProviders that are not required. To accommodate long export and import times, it may be necessary to increase the maximum runtime for dialog processes in the system. We have tested SDATA with real customer systems and successfully transferred scenarios with several tens of millions of rows.
SDATA supports the most important InfoProviders, such as MultiProviders, InfoCubes, InfoObjects, and DSOs (active data table only). It currently does not support customer exits or other customizing like currency settings, fiscal year variants, etc. So if you import into an “empty” system like the AWS trial system, you may have to do some customizing first to adapt the system to your requirements. We are working on more tool support in this area as well. For a list of the current restrictions, please see note 2098307.
Instead of importing into a cloud system, you can of course use the same mechanism for an import into an on-premise sandbox system.
SDATA offers many more options than described in this document. Please refer to the official online help and the more detailed description attached as a PDF to note 2018326.