Technology Blogs by Members
Explore a vibrant mix of technical expertise, industry insights, and tech buzz in member blogs covering SAP products, technology, and events. Get in the mix!
pfefferf
Active Contributor

Current Situation (at least <= SPS10)

In many projects there is the need to create catalog objects which, unlike repository objects, cannot be transported by the lifecycle management tools. Such catalog objects are for example:

  • Virtual Tables
  • Triggers
  • Indices and Full Text Indices on Catalog Tables
  • ...

Because of the lack of corresponding repository objects or a transport mechanism, by default the creation of these objects has to be done manually on each target system. In many project setups the development team is not allowed to create the objects on its own on test, quality and production systems, so other parties like e.g. application operations have to perform the manual steps. But in big landscapes with automatic deployments and several system copies/system setups, doing these things manually is not really an option because it is too error-prone and time consuming.

Why did I write "at least <= SPS10" in the header line of that paragraph? With SPS11 the new HANA Development Infrastructure (HDI) will be introduced. It is a new service layer of the HANA database that simplifies the deployment of artifacts. For that new approach it is planned to support many artifacts (like triggers and virtual tables) which are not supported in the "old" world. With SPS11, HDI will be shipped in a beta state. It is expected to be generally available with SPS12.

Solution Approach

In this space the question of how to deliver such objects automatically has already been discussed several times, so I want to share how I solved that issue in our projects for almost all situations (there are still some gaps regarding special object dependency situations which require some manual effort, but 90% are running automatically now). Consider that the following approach is just one of several possible ones.

The idea behind the "transport" of catalog objects like triggers is to deliver the create statements for the objects in transportable repository objects like procedures (.hdbprocedure). So in a first step I started to create repository procedures containing the create statements for the required objects. That was already an improvement, because the "complexity" of the create statements was encapsulated in a procedure which just had to be executed by the responsible team on the target systems. But because the procedures still had to be executed manually on the target systems, I searched for an option to automate the execution. The answer was something HANA XS provides out of the box: an XSJOB. So I implemented an XSJOB which calls an XSJS function; the XSJS function in turn calls the procedures which create the catalog objects. On the target systems the XSJOB can then be scheduled in the required time intervals (in my case hourly, right after the hourly deployment of new builds). All objects involved, the XSJOB definition, the XSJS file and the procedures, are repository objects which can be transported by the standard lifecycle management tools (e.g. transport of changes via CTS+).

Maybe someone asks now why the XSJS function is necessary, since an XSJOB can call a procedure directly. The answer is that XSJS provides better options for logic orchestration, better error handling and SMTP usage for mail notifications in case of errors.

The following picture gives a brief overview of the objects/services used:

One point which has to be considered for the procedure implementation is that the procedures have to check whether the object to be created already exists. That is necessary because of the scheduling: each time the procedure is called after the object has been created, it would run into an error due to the already existing object. There are two options to react to that situation. The first option is to skip the create functionality. That makes sense for cases in which no changes are expected in the created objects. The second option is to drop the object and create it anew, so that changes are also reflected by the re-creation on the target systems. For that option, however, dependencies on other objects should be analyzed/considered upfront.
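The second option could look like the following SQLScript fragment inside such a create procedure (a sketch only; the schema "TEST" and the trigger name are hypothetical):

```sql
-- Sketch of option 2: drop the object if it already exists, then re-create it.
-- The existence check uses the SYS.TRIGGERS catalog view; DDL statements
-- have to be executed via dynamic SQL inside a SQLScript procedure.
SELECT COUNT(*) INTO lv_count
    FROM "SYS"."TRIGGERS"
    WHERE "SCHEMA_NAME" = 'TEST'
      AND "TRIGGER_NAME" = 'eTest01InsertTrigger';

IF :lv_count > 0 THEN
    EXEC 'DROP TRIGGER "TEST"."eTest01InsertTrigger"';
END IF;

EXEC 'CREATE TRIGGER "TEST"."eTest01InsertTrigger" ...';
```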

Example

With a simple example I want to describe the necessary steps and what they could look like. In this example a trigger will be created which inserts a log entry into a log table after each insert into a specific table.

So we have these two tables: eTest01, for which the insert trigger should be created, and eTest01Log, into which the trigger should insert an entry after each insert on eTest01 (the id value of eTest01 and a time stamp).
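The table definitions could look like the following (a sketch; the schema name "TEST" and any columns besides the id and the time stamp are assumptions for illustration):

```sql
-- Hypothetical definitions of the two catalog tables.
CREATE COLUMN TABLE "TEST"."eTest01" (
    "id"   INTEGER PRIMARY KEY,
    "text" NVARCHAR(100)
);

CREATE COLUMN TABLE "TEST"."eTest01Log" (
    "id"      INTEGER,
    "logTime" TIMESTAMP
);
```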

First we define the procedure which creates the insert trigger for table eTest01. It is a very simple case in which the procedure checks whether the trigger already exists. If not, the trigger is created.
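A minimal sketch of such a procedure, assuming the schema "TEST" and a package test.procedures (both hypothetical), could look like this. The existence check runs against the SYS.TRIGGERS catalog view, and the DDL is executed via dynamic SQL:

```sql
-- test.procedures::createTriggerETest01 (.hdbprocedure, sketch)
PROCEDURE "TEST"."test.procedures::createTriggerETest01" ()
    LANGUAGE SQLSCRIPT
    SQL SECURITY INVOKER
AS
    lv_count INTEGER;
BEGIN
    -- check in the catalog whether the trigger already exists
    SELECT COUNT(*) INTO lv_count
        FROM "SYS"."TRIGGERS"
        WHERE "SCHEMA_NAME" = 'TEST'
          AND "TRIGGER_NAME" = 'eTest01InsertTrigger';

    IF :lv_count = 0 THEN
        -- create the insert trigger which logs each insert into eTest01
        EXEC 'CREATE TRIGGER "TEST"."eTest01InsertTrigger"
                  AFTER INSERT ON "TEST"."eTest01"
                  REFERENCING NEW ROW AS newRow
                  FOR EACH ROW
              BEGIN
                  INSERT INTO "TEST"."eTest01Log"
                      VALUES (:newRow."id", CURRENT_TIMESTAMP);
              END';
    END IF;
END;
```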

Next, the XSJS function which executes the procedure is shown. Consider that the schema, package and procedure name are defined in a JSON object which can be enhanced with further procedures, so a separate call does not have to be implemented for each new procedure.
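A sketch of such an XSJS function, here shown with the legacy $.db interface (this code only runs inside HANA XS; the schema, package and procedure names are assumptions):

```javascript
// createCatalogObjects.xsjs (sketch)
// List of procedures to execute; can be enhanced with further entries.
var procedures = [
    { schema: "TEST", name: "test.procedures::createTriggerETest01" }
];

function createCatalogObjects() {
    var errors = [];
    var conn = $.db.getConnection();
    try {
        procedures.forEach(function(proc) {
            try {
                var cs = conn.prepareCall(
                    'CALL "' + proc.schema + '"."' + proc.name + '"()');
                cs.execute();
                cs.close();
            } catch (e) {
                // collect the errors; exception handling has to be done here,
                // e.g. sending a mail via the SMTP XSJS library afterwards
                errors.push(proc.name + ": " + e.message);
            }
        });
        conn.commit();
    } finally {
        conn.close();
    }
}
```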

After the XSJS function is defined, an XSJOB definition is created which calls the XSJS function.
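The XSJOB definition (.xsjob file) is a JSON document. A sketch, assuming a package test.jobs and the function name createCatalogObjects (both hypothetical), could look like this; the xscron expression schedules an hourly run:

```json
{
    "description": "Create catalog objects which cannot be transported",
    "action": "test.jobs:createCatalogObjects.xsjs::createCatalogObjects",
    "schedules": [
        {
            "description": "Run hourly after the deployment of new builds",
            "xscron": "* * * * * 0 0"
        }
    ]
}
```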

The XSJOB then has to be activated in the XSJOB scheduler. The XSJOB scheduler is available in the HANA XS Admin tool, which can be reached via the URL http(s)://<host>:<port>/sap/hana/xs/admin.

For the activation the following information has to be supplied:

  • User name (and password) which is used to execute the job.
  • Language
  • Start/End time interval in which the job will be executed
  • Active flag has to be set to true

If everything worked fine, a schedule entry and entries for the executed runs should be found in the job log.

Finally, the working trigger can be found in the catalog.

Hints

If you want to try out the approach yourself, consider the following hints:

  • The user used in the XSJOB scheduler for the job execution must have the privileges to create (and if necessary drop) the objects in the defined schemas. Consider also that this user becomes the owner of the created objects, so please use a dedicated user which is never removed from the system; otherwise your objects will be lost as well. I would also not recommend using DEFINER MODE procedures which are executed by _SYS_REPO.
  • In the XSJS file I added a comment that exception handling has to be done in case the execution of a procedure raises an exception. For my case I implemented a logic which collects the errors and then sends them to a defined pool mail address using the SMTP XSJS library.
  • In case it is necessary to define system-specific things for the execution of a job (e.g. the ABAP schema name, which in most cases differs per system following the template SAP<system id>), this can be done via parameters. Parameters can be entered directly in the XSJOB file, but also in the XSJOB scheduler to define them system-specifically.
  • To activate the XSJOB scheduler itself, the role sap.hana.xs.admin.roles::JobSchedulerAdministrator is necessary.
  • To schedule an XSJOB, the role sap.hana.xs.admin.roles::JobAdministrator is necessary.
  • The XSDS Procedures library is available as of HANA SPS09. In earlier releases you can call the procedures via the legacy DB interface $.db.
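The parameter hint above could look like the following in a .xsjob file; the parameter is then passed along to the XSJS function when the job runs (the parameter name and values are hypothetical, as are the package and function names):

```json
{
    "description": "Create catalog objects with a system-specific parameter",
    "action": "test.jobs:createCatalogObjects.xsjs::createCatalogObjects",
    "schedules": [
        {
            "xscron": "* * * * * 0 0",
            "parameter": {
                "abapSchema": "SAPABC"
            }
        }
    ]
}
```

In the XSJOB scheduler the parameter value can then be overwritten per system, e.g. with the actual SAP<system id> schema name of that system.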
