Having done a few database ports before, I never expected this to be an easy one. It turned out to be a real challenge, and I think the tips I got from SDN and from the SAP experts who helped me through the port are well worth passing on to future generations.
My personal approach to any database port is to split the actual work into two big phases. The result of phase one is the initial DDL (data definition language) script, correctly runnable on the target database platform and producing the required tables, indexes, and all other DDL elements. Phase two is fixing the actual code and producing correct SQL queries for the target database platform. If the queries are simple, this is usually mostly QA work to make sure they run on the target platform.
Since the Open SQL platform was completely new to us, I first read all the available documentation about it from SAP and SDN. As is often the case, the documentation was very explicit about some things and very silent about others. So, by trial and error, I compiled several important points that one needs to know before planning and plunging into the port effort.
1. Schema deployment
Before even getting to the schema deployment, we found out that we needed to shorten quite a few table, index, and even some column names, due to the 18-character restriction imposed by the Open SQL standard.
Unlike the “normal” non-Open SQL way of doing things, we can’t use DDL (data definition language) SQL scripts to populate the initial database schema. The only way application tables get defined in Open SQL is via a tool called “Java Dictionary”. Currently, Java Dictionary is available as part of NetWeaver Developer Studio (we used the Sneak Preview version 6.40). For a large schema, defining everything this way sounds like a lot of work, especially since there is no official way to import an existing SQL script into Java Dictionary. We store the application database schema definitions in the ERwin database modeling tool and use it to generate the database scripts; there are over 300 tables and 500 indexes in our application schema. Since we must also support non-Open SQL deployments, keeping the two schemas in sync becomes a dilemma. Thanks to the project import feature in NetWeaver Studio, we found that we could generate the schema definition in Java Dictionary XML format, import it into Java Dictionary, and use it to deploy the schema into the database. Once the schema is ready, it can be packaged as an SDA file, deployable via the regular SDM infrastructure, so NetWeaver Studio is not needed for a customer deployment. The SDA can be deployed using the Java Dictionary or directly via the SAP SDM. It was really important for us to avoid any manual schema manipulation, so that schema compatibility between Open SQL and non-Open SQL deployments is uncompromised.
Another important consideration is that the Open SQL schema is deployed into the default schema used by SAP WebAS. Since multiple applications could be deployed on the same WebAS instance, we made sure that our table and index names do not conflict with those of other applications. To achieve this, we prefixed all our application table and index names with a vendor-specific prefix, “NRX_”.
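The two naming constraints above (the 18-character limit and the vendor prefix) are easy to violate by hand, so a small guard helps. The sketch below is a hypothetical helper, not part of our actual codebase; only the limit and the "NRX_" prefix come from the text.

```java
// Hypothetical helper capturing the Open SQL naming rules discussed above:
// every table/index name carries our vendor prefix and must still fit the
// 18-character limit imposed by the Open SQL standard.
public class OpenSqlNames {
    private static final String PREFIX = "NRX_";
    private static final int MAX_LEN = 18;

    /** Prefixes the base name and verifies the result fits the 18-character limit. */
    public static String tableName(String base) {
        String name = PREFIX + base.toUpperCase();
        if (name.length() > MAX_LEN) {
            throw new IllegalArgumentException(
                "Name too long for Open SQL (" + name.length() + " > " + MAX_LEN + "): " + name);
        }
        return name;
    }
}
```

Failing fast at build or deployment time is much cheaper than discovering a truncated name after the schema is already in the Java Dictionary.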
2. Schema initialization
Since our application schema is normalized to a reasonable extent, we do use standard dictionary and type tables to enforce data integrity and object typing.
These dictionary tables are initialized before the application is executed; normally this is done at the same time the schema is created. However, SAP WebAS restricts database initialization scripts to 2000 bytes, which is too small for us. So the solution was to run the script from inside the application when it is bootstrapped for the first time.
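The run-once bootstrap check can be sketched roughly as follows. This is a minimal illustration, not our actual code: the row-count supplier and the loader are injected placeholders that would, in the real application, wrap Open SQL/JDBC calls (a `SELECT COUNT(*)` on a dictionary table and the large insert script, respectively).

```java
import java.util.function.IntSupplier;

// Sketch of "initialize on first bootstrap": the dictionary data is loaded
// from inside the application rather than by a deployment-time script, which
// sidesteps the 2000-byte limit on WebAS database initialization scripts.
public class DictionaryBootstrap {
    private final IntSupplier dictionaryRowCount; // e.g. SELECT COUNT(*) on a dictionary table
    private final Runnable loadDictionaryData;    // runs the (large) insert script

    public DictionaryBootstrap(IntSupplier count, Runnable loader) {
        this.dictionaryRowCount = count;
        this.loadDictionaryData = loader;
    }

    /** Loads the dictionary data only on the very first startup. Returns true if it ran. */
    public boolean initializeIfNeeded() {
        if (dictionaryRowCount.getAsInt() > 0) {
            return false; // already initialized on a previous bootstrap
        }
        loadDictionaryData.run();
        return true;
    }
}
```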
3. Foreign key constraints
The current version of Open SQL does not allow specifying foreign key constraints on application tables. This really turns the database into a “dumb object store,” but I was assured that this would probably change in future versions of Open SQL. I haven’t come up with a final solution to this one yet. One possibility is to provide a database script with the constraint specifications for each platform that Open SQL supports (likely most, if not all, of them could use the same SQL-92 standard script); however, schema upgrades via Java Dictionary might then not work, since Java Dictionary doesn’t take foreign key constraints into account.
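For illustration, such a per-platform script would consist of plain SQL-92 `ALTER TABLE` statements like the one below; the table, column, and constraint names here are hypothetical, not from our actual schema.

```sql
-- Hypothetical excerpt of a per-platform constraint script (SQL-92 syntax);
-- applied after the Java Dictionary deployment, outside of Open SQL's control.
ALTER TABLE NRX_ORDER_ITEM
    ADD CONSTRAINT NRX_FK_ITEM_ORDER
    FOREIGN KEY (ORDER_ID)
    REFERENCES NRX_ORDERS (ORDER_ID);
```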
4. Unique ID generation
Since Open SQL is a true cross-database solution, it can’t rely on a database-specific way of getting a unique sequence number or on an auto-generated ID column. It leaves it up to the developer to decide how to get a new ID. I was advised to use GUIDs instead of numeric IDs, but that would be a huge performance hit for a large-scale database with millions of rows and complex joins. So we had to come up with a custom non-blocking mechanism, similar to Oracle’s sequence objects, to generate unique numeric sequences in a clustered, multi-transaction environment.
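The core idea of such a generator can be sketched as a block allocator: each cluster node reserves a whole block of IDs in one short database transaction (typically an `UPDATE` on a sequence table), then hands out IDs from memory with no further database round-trips. The sketch below is illustrative, not our actual implementation; the block allocator is injected so it can be simulated here without a database.

```java
import java.util.function.LongSupplier;

// Block-allocating sequence, similar in spirit to an Oracle sequence with a
// cache: one short DB transaction reserves a block, subsequent IDs are cheap.
public class BlockSequence {
    private final LongSupplier allocateBlock; // returns the first ID of a freshly reserved block
    private final int blockSize;
    private long next;
    private long limit; // first ID past the current block

    public BlockSequence(LongSupplier allocateBlock, int blockSize) {
        this.allocateBlock = allocateBlock;
        this.blockSize = blockSize;
    }

    /** Returns the next unique ID, reserving a new block when the current one is used up. */
    public synchronized long nextId() {
        if (next >= limit) {
            next = allocateBlock.getAsLong(); // the only point that touches the database
            limit = next + blockSize;
        }
        return next++;
    }
}
```

Because each node holds its own block, concurrent transactions on different nodes never contend for the same row except during the rare block reservation, which is what makes the scheme effectively non-blocking.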
5. Current database timestamp
Since our application is deployed in a cluster, in some places it requires a cluster-wide timestamp that allows us to correctly synchronize data updates and deltas. Currently we use the database server to provide this timestamp, since the database server is a shared resource across the cluster. However, this is not a standard feature across database platforms, so Open SQL doesn’t provide a way to retrieve a database-generated timestamp. The SQL-92 standard specifies the CURRENT_TIMESTAMP and CURRENT_DATE functions, but since the current version of Open SQL supports only the entry level of SQL-92, it prohibits the use of any functions. We haven’t settled on a final solution yet, since using one of the application servers in the cluster as a timestamp service is not a 100 percent safe solution.
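One workaround we looked at (not a final solution) is to fetch the timestamp with a vendor-specific query issued through the native SQL layer, bypassing Open SQL. The per-vendor statements below are the customary ones for each database; the dispatch helper itself is hypothetical.

```java
// Maps a database vendor to its usual "current timestamp" query. The query
// would then be executed through the native SQL layer, outside of Open SQL.
public class DbTimestampQuery {
    public static String forVendor(String vendor) {
        if ("oracle".equalsIgnoreCase(vendor)) {
            return "SELECT CURRENT_TIMESTAMP FROM DUAL";
        } else if ("db2".equalsIgnoreCase(vendor)) {
            return "SELECT CURRENT TIMESTAMP FROM SYSIBM.SYSDUMMY1";
        } else if ("sqlserver".equalsIgnoreCase(vendor)) {
            return "SELECT CURRENT_TIMESTAMP";
        }
        throw new IllegalArgumentException("No timestamp query known for: " + vendor);
    }
}
```

The obvious downside is exactly the one Open SQL is designed to avoid: a new vendor means another branch in this mapping.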
6. Case-insensitive searches
Since our application has to deal with user-initiated searches on large data sets, it is impractical to do these searches in memory, so the current search model is SQL-based. To support case-insensitive searches, we use the standard SQL UPPER/LOWER functions to eliminate the character case. Of course, it is not an ideal solution and it works only for standard European character sets, but since it satisfied our current customer requirements, it was relatively inexpensive to implement and easy for our development team to use. With Open SQL, however, it doesn’t work, due to the function-use restriction: as I mentioned before, Open SQL conforms to the entry level of the SQL-92 standard, and there is almost no support for SQL functions apart from a few basic numeric and aggregate functions. So it appeared we couldn’t move forward with the port, since this was a show-stopper for us. After some consideration, we decided to make use of the com.sap.sql.NativeSQLAccess class, provided for situations like this: it allows the developer to completely bypass the Open SQL layer in extreme cases like ours. Thanks to this class, we were not forced into a major development effort to support case-insensitive searches with Open SQL in the current version of our application. And it was safe for us to use the UPPER/LOWER SQL functions, because they are officially supported on all platforms that Open SQL supports.
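The UPPER-based filter can be sketched as below. Building the WHERE fragment and normalizing the search term are runnable; the actual execution through com.sap.sql.NativeSQLAccess needs a running WebAS, so it is only indicated in a comment, and the exact method name there should be checked against the SAP documentation.

```java
// Case-insensitive filtering via the standard UPPER() function: uppercase
// both the column (in SQL) and the user's term (in Java) before comparing.
public class CaseInsensitiveSearch {
    /** WHERE fragment comparing a column case-insensitively. */
    public static String filter(String column) {
        return "UPPER(" + column + ") LIKE ?";
    }

    /** Normalizes the user's term to match the UPPER() side, adding wildcards. */
    public static String parameter(String userTerm) {
        return "%" + userTerm.toUpperCase() + "%";
    }

    // Inside WebAS, the statement is prepared through the native layer,
    // bypassing Open SQL's function restriction -- roughly (illustrative):
    //   PreparedStatement ps = NativeSQLAccess.createNativePreparedStatement(
    //       connection, "SELECT ... FROM NRX_CUSTOMER WHERE " + filter("NAME"));
    //   ps.setString(1, parameter(term));
}
```

Note that an `UPPER()` in the WHERE clause usually defeats a plain index on the column, so on large tables a function-based index (where the platform offers one) may be needed to keep these searches fast.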
7. Restricting the number of retrieved rows
As I mentioned before, our application needs to support many complex, SQL-based, user-initiated searches.
Normally we restrict the number of rows a user can retrieve from the database, to save network traffic and database resources. To enforce that in Oracle, the ROWNUM pseudo-column is used. Naturally, given its declared cross-database support, Open SQL can’t provide a similar feature. Therefore we could only use the standard JDBC settings to adjust the fetch size and have the application read only the required number of rows.
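In JDBC terms this comes down to two standard calls, shown in the sketch below (a minimal illustration, not our production code): `setMaxRows()` caps the result set size driver-side, and `setFetchSize()` controls how many rows travel per network round-trip.

```java
import java.sql.SQLException;
import java.sql.Statement;

// JDBC-level row limiting: portable, so it works through Open SQL as well.
public class RowLimiter {
    /** Caps a user-requested limit at a hard application maximum. */
    public static int effectiveLimit(int requested, int hardCap) {
        return (requested <= 0 || requested > hardCap) ? hardCap : requested;
    }

    /** Applies the cap to any JDBC statement before the query is executed. */
    public static void apply(Statement stmt, int requested, int hardCap) throws SQLException {
        int limit = effectiveLimit(requested, hardCap);
        stmt.setMaxRows(limit);                  // driver discards rows past the limit
        stmt.setFetchSize(Math.min(limit, 100)); // rows per network round-trip
    }
}
```

Unlike ROWNUM, `setMaxRows()` does not help the database *plan* a smaller query, so the server may still do the full amount of work; it only stops the rows from being transferred and materialized on the client.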
8. Binary object (BLOB) support
Unlike the Oracle JDBC driver, which supports streaming an object directly into the database via its getBinaryOutputStream() call, Open SQL supports only the basic JDBC setBinaryStream() call. Albeit standard, setBinaryStream() is more expensive for the application, since the application has to know the exact size of the stream beforehand, which means an HTTP-based stream must be saved to a file first and then transferred to the database.
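The spool-to-file step can be sketched as follows; the copy loop is runnable, while the subsequent `setBinaryStream()` call is only indicated in a comment since it needs a live connection.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Spools a length-unknown stream (e.g. an HTTP upload) to a temp file so
// that the exact length required by setBinaryStream(int, InputStream, int)
// is known before the JDBC call.
public class BlobSpooler {
    /** Copies an arbitrary input stream to a temp file and returns the file. */
    public static File spoolToTempFile(InputStream in) throws IOException {
        File tmp = File.createTempFile("blob", ".tmp");
        tmp.deleteOnExit();
        OutputStream out = new FileOutputStream(tmp);
        try {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        } finally {
            out.close();
        }
        // Afterwards the upload proceeds with a known length:
        //   ps.setBinaryStream(1, new FileInputStream(tmp), (int) tmp.length());
        return tmp;
    }
}
```

The extra disk write is the price of portability here: every BLOB is written twice, once to the local file system and once to the database.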
So, with a few exceptions, the Open SQL port worked out pretty well and really helped us make the application database schema more cross-database compatible. I very much hope that some of the above issues will be taken into consideration for the next Open SQL release, so that it truly becomes the de facto standard for cross-database support in SAP.
Also, I would like to thank all the SAP employees who helped us with suggestions and advice during this port, especially Stephan Boecker, Solution Architect, EMEA, xApp partner program, and Zornitsa Yankulova, an active member of the SDN community. Good luck to you all.