
This blog is part of the larger series on all new developer features in SAP HANA SPS 09: http://scn.sap.com/community/developer-center/hana/blog/2014/12/02/sap-hana-sps-09-new-developer-features

In this blog we will have a first look at the new XSJS database interface in SAP HANA SPS 09.  This is a completely redesigned and rebuilt database interface that replaces the existing implementation in the $.db package.  The new interface, available as a separate API in $.hdb, focuses on several key areas of improvement.

Performance gains

  • Achieves higher throughput on both read and write operations within a single session
  • Better scale-out support and usage of multiple nodes in a HANA scale-out scenario
  • Reduces the amount of remote process communication, even in distributed query scenarios

Usability improvements

  • Simple, easy-to-use JavaScript interface that accepts and returns JavaScript variables and JSON objects; no more type-specific getters and setters
  • Code reduction
  • No boilerplate code

Light-weight architecture

  • Based upon a new, thin C++ client library
  • No SQL processing in the XS layer itself; all SQL processing is pushed down into the Index Server
  • Uses internal HANA communication protocol optimizations

The usage of this new API is best explained with a few samples.

The old database interface

First, let's look at the pre-SPS 09 database interface.


var productId = $.request.parameters.get("ProductId");
productId = typeof productId !== 'undefined' ? productId : 'HT-1000';
var conn = $.db.getConnection();
var query = 'SELECT * FROM "SAP_HANA_EPM_NEXT"."sap.hana.democontent.epmNext.data::EPM.Purchase.Item"' +
            ' WHERE "PRODUCT.PRODUCTID" = ?';
var pstmt = conn.prepareStatement(query);
pstmt.setString(1, productId);
var rs = pstmt.executeQuery();
var body = '';
while (rs.next()) {
  var gross = rs.getDecimal(6);
  if (gross >= 500) {
    body += rs.getNString(1) + "\t" + rs.getNString(2) + "\t" +
            rs.getNString(3) + "\t" + gross + "\n";
  }
}
rs.close();
pstmt.close();
conn.close();
$.response.setBody(body);
$.response.contentType = 'application/vnd.ms-excel; charset=utf-16le';
$.response.headers.set('Content-Disposition',
  'attachment; filename=Excel.xls');
$.response.status = $.net.http.OK;

Notice in this example how you build the query string but then must set the input parameters via a separate setString call. Not only is this extra code, it is also error-prone, because you must use the correct function for the data type being set.

More troublesome, however, is the result set object returned from the query.  This rs object is a special object that can only be iterated over once, in order, with no direct index support.  Its contents aren't visible in the debugger, and you have to use similar type-specific getters to retrieve individual column values.

The new database interface

Now for the same example rewritten with the new database interface in SPS 09.


var productId = $.request.parameters.get("ProductId");
productId = typeof productId !== 'undefined' ? productId : 'HT-1000';
var conn = $.hdb.getConnection();
var query = 'SELECT * FROM "SAP_HANA_EPM_NEXT"."sap.hana.democontent.epmNext.data::EPM.Purchase.Item"' +
            ' WHERE "PRODUCT.PRODUCTID" = ?';
var rs = conn.executeQuery(query, productId);
var body = '';
for (var i = 0; i < rs.length; i++) {
  if (rs[i]["GROSSAMOUNT"] >= 500) {
    body += rs[i]["HEADER.PURCHASEORDERID"] + "\t" + rs[i]["PURCHASEORDERITEM"] + "\t" +
            rs[i]["PRODUCT.PRODUCTID"] + "\t" + rs[i]["GROSSAMOUNT"] + "\n";
  }
}
$.response.setBody(body);
$.response.contentType = 'application/vnd.ms-excel; charset=utf-16le';
$.response.headers.set('Content-Disposition',
  'attachment; filename=Excel.xls');
$.response.status = $.net.http.OK;

The most striking difference is the removal of the type-specific getters and setters.  Now you simply pass in your JavaScript variable and the interface determines the type.  The result set is no longer a special object type but a JSON object: you process it in your JavaScript as you would any other JSON (direct index access, easy looping, or a combination of both), accessing the columns by name.  The result set may look and act like a JSON object, but it is in fact still rather special: it doesn't materialize the data into the JavaScript VM. As long as only read operations are performed on the data, the JavaScript VM holds only pointers to it.  This helps keep the memory requirements of the JavaScript VM lower.
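To make the lazy, pointer-based idea more concrete, here is an illustrative sketch in plain JavaScript. This is not the actual $.hdb implementation (the real mechanism lives inside the engine); it merely shows how an object can look like an indexed array while deferring the work of materializing each row until that row is touched:

```javascript
// Illustration only (not the actual $.hdb implementation): a result
// set that looks like an array but defers fetching each row until it
// is actually indexed. fetchRow stands in for the engine-side pointer
// dereference that happens inside the indexserver.
function makeLazyResultSet(rowCount, fetchRow) {
  var fetched = {};  // cache of rows materialized so far
  return new Proxy({ length: rowCount }, {
    get: function(target, prop) {
      if (typeof prop === 'string' && /^\d+$/.test(prop)) {
        var idx = Number(prop);
        if (idx < rowCount) {
          if (!(idx in fetched)) { fetched[idx] = fetchRow(idx); }
          return fetched[idx];
        }
      }
      return target[prop];
    }
  });
}

// Usage: only the rows that are touched get materialized.
var lazyRs = makeLazyResultSet(3, function(i) {
  return { PURCHASEORDERITEM: i, GROSSAMOUNT: 100 * (i + 1) };
});
// lazyRs.length is 3; lazyRs[2].GROSSAMOUNT is 300, and only row 2
// has been fetched so far.
```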

This also means that you can view the contents of this result set object easily within the debugger.

[Screenshot HANABlog3.png: result set contents shown in the debugger]

Another excellent advantage of this new interface is that, because the result set of a query is JSON, it is ready for output. Often, much of the processing in an XSJS service was just converting the result set to JSON so it could be passed to the client side. Now we can take these results and insert them directly into the response object.


var connection = $.hdb.getConnection();
var results = connection.executeQuery(
  'SELECT * FROM "sap.hana.democontent.epmNext.data::EPM.MasterData.Employees" ' +
  'WHERE LOGINNAME <> ?', 'EPM_USER');
$.response.setBody(JSON.stringify(results));

But this new interface doesn't just help with SQL statements.  It also provides similar benefits when calling SQLScript stored procedures from XSJS. The interface creates what appears to be a JavaScript function to serve as a proxy for calling the stored procedure.  We can then easily pass JavaScript variables and JSON objects in and out through the procedure interface. No more inserting data into temporary tables just to pass it into a procedure call.


var connection = $.hdb.getConnection();
var partnerRole = $.request.parameters.get("PartnerRole");
partnerRole = typeof partnerRole !== 'undefined' ? partnerRole : '01';
var getBpAddressesByRole = connection.loadProcedure("SAP_HANA_EPM_NEXT",
"sap.hana.democontent.epmNext.procedures::get_bp_addresses_by_role");
var results = getBpAddressesByRole(partnerRole);
//Pass output to response
$.response.status = $.net.http.OK;
$.response.contentType = "application/json";
$.response.setBody(JSON.stringify(results));
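The proxy idea itself is plain JavaScript: loadProcedure hands back a function closed over the connection and the procedure name. The following sketch is illustrative only (the real parameter binding and call protocol are engine internals; the stub connection and its call method are made up here):

```javascript
// Illustration of the proxy pattern behind loadProcedure (not SAP's
// actual implementation): a factory returns a plain function closed
// over the connection and procedure name; calling it forwards the
// arguments to the connection.
function loadProcedureSketch(connection, schema, procName) {
  return function() {
    var args = Array.prototype.slice.call(arguments);
    return connection.call(schema, procName, args);
  };
}

// Usage against a stub connection that just echoes the call details.
var stubConn = {
  call: function(schema, proc, args) {
    return { schema: schema, proc: proc, args: args };
  }
};
var callGetBpAddresses = loadProcedureSketch(stubConn, 'SAP_HANA_EPM_NEXT',
  'sap.hana.democontent.epmNext.procedures::get_bp_addresses_by_role');
var procResult = callGetBpAddresses('01');
// procResult.args is ['01']: the caller never touches temp tables.
```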


36 Comments


  1. Sreehari V Pillai

    The yummy part – We can then easily pass in/out JavaScript variables and JSON objects for the procedure interface. No more having to insert data into temporary tables just to pass it into a procedure call.

  2. Priya Jha

    Hi Thomas,

    While we have a foreign key defined for a table, and we use DELETE on the parent table, will it cause deletion of that particular record in the child table also?

    Please guide.

    Thanks

    Priya

  3. Monissha Agil

    Hi Thomas,

    This might be a very elementary doubt, but kindly correct me if my understanding is wrong.

    You have specified the following as one of the advantages of the new hdb interface:

    'No SQL processing in the XS Layer itself. Push all SQL processing into the Index Server'

    We are already pushing the complex logic to xsjs instead of the client so that it can happen fast within the database, hence minimizing the flow of data between client and server.

    So was the older interface executing all the processing logic in some XS layer and not in the index server where all the data resides? Doesn't this also involve the overhead of moving data between the two, the same as between server and client?

    Thanks & Regards,

    Monissha

    1. Thomas Jung Post author

      >So was the older interface executing all the processing logic in some XS layer and not in index server where all the data resides?

      No, not exactly.  But you have to remember that the xsengine process is basically a copy of the indexserver, meaning it has access to all the same APIs and source code.  Therefore, in the past it tried to pre-process the SQL statement before sending it to the indexserver for processing. In theory this sounded good, but it sometimes led to inefficiencies, particularly when working with distributed queries created by partitioned tables across multiple nodes in HANA.  The new database interface does a better job of offloading this work to the indexserver.


      > Does not this also involves the overhead of moving data between between the two same as it could happen between server and client?

      We always had to move the data between the indexserver and xsengine processes. However, this is not at all the same as moving it between server and client; we are talking about moving data between two processes at the OS level. That closeness still exists and can be used to optimize the interface. For example, the result set object of a SQL query can look like JSON in XSJS, but the data isn't really materialized into the xsengine; the xsengine just has pointers to where the data still resides in the indexserver. Still, this isn't an excuse to bring large result sets back into the XS layer. It remains a lightweight pass-through layer, and all the data aggregation, filtering, etc. should already be done in the database layer regardless.

  4. Kevin Che

    Nice post Thomas.

    I have a question regarding the concurrency request of HANA XS app.

    For example, two users want to edit the same record in HANA Database at the same time.

    Is there any mechanism in the HANA XS layer to prevent overwriting, or maybe just to lock the table when the first user wants to edit it?

    Or do we have to implement the lock mechanism with SQL transactions in the DB layer rather than the XS layer?

    Thanks a lot.

    Kevin

      1. Lars Breddemann

        Having heard questions around locking in the context of UI5/OData application development, I think it would be worthwhile to prepare some presentation/documentation/blog/… that shows the actual usage and the consequences of the optimistic locking approach along with actual examples.

        Especially ABAP-veterans and DB-developers used to ENQUEUE/DEQUEUE or the implicit record lock handling of the DBMS usually have a hard time wrapping their head around this.

        – Lars

        1. Kevin Che

          Hi Lars,

          I just implemented a simple method as below reply.

          I believe there would be more and better solutions.

          Looking forward to your sharing of the optimistic locking approach. 🙂

          Could you please just show the general idea of the approach regarding XS app if possible?

          So that I could compare with my current design and learn to improve it.

          Many thanks.

          Cheers,

          Kevin

      2. Kevin Che

        Hi Thomas,

        Thanks for your answer and the post you provided.

        I am currently trying to implement a lock mechanism by adding a column named "lock" to the database table being edited, and then having the xsjs service first check the lock value to judge whether the record is being edited or not.

        My assumed use case is this:

        From the UI side, the user maintains the info by selecting a time range and typing content into a text box. The user can insert new data into the table or modify existing data, and I use Ajax to call xsjs to send the data to the database.

        The table would have four columns:

        (Date, PersonID, Content, Lock)

        During the editing process, the xsjs sets the lock value to 1 so that another process cannot edit the record. After editing or canceling, the lock value is set back to 0. It works fine when editing existing records.

        But a problem arises when multiple users want to insert new records at the same time.

        If the first user is editing content and has not yet submitted the added data, there is no record in the database, so I cannot set the lock value, and another user could insert data for the same time period without an alert.

        Currently my design is: when the user selects the time range, xsjs checks whether the DB has an existing record.

        If not, it first inserts default values into the database and then treats the insert request as an update request, so that we can lock the record by setting the lock value.

        That's the solution I came up with. I believe there are more and better solutions to be shared by you. 🙂

        Thanks and best regards,

        Kevin

        1. Thomas Jung Post author

          I really wouldn't suggest the approach you described; it's full of problems. What happens to orphaned locks, for instance? I would really suggest optimistic locking.

          You don't have to wait for someone from SAP to describe optimistic locking for you; it's hardly anything SAP-specific. I would suggest a Google search on the term for some very good results. Also study the odata.org materials on the concurrency control specification:

          http://www.odata.org/documentation/odata-version-2-0/operations/#ConcurrencycontrolandETags

          Once again, OData and concurrency control aren't SAP- or HANA-specific, but they are something we plan to use when it comes to transactional UI5 and Fiori applications.

          It's important to note that eTag/concurrency control isn't like placing a lock, as Lars mentioned. You have to design and think about this very differently.
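          For readers new to the idea, optimistic concurrency can be sketched independently of HANA or OData. The following is a minimal plain-JavaScript illustration (not SAP code, and not an eTag implementation): each record carries a version number, and an update only succeeds if the version the client read is still current.

```javascript
// Illustrative sketch of optimistic locking (not SAP code): every
// record carries a version number; an update succeeds only if the
// version the client read is still current, otherwise it is rejected
// and the caller must re-read and retry.
function makeStore() {
  var rows = {};  // key -> { value: ..., version: n }
  return {
    put: function(key, value) {
      rows[key] = { value: value, version: 1 };
    },
    read: function(key) {
      var r = rows[key];
      return { value: r.value, version: r.version };
    },
    update: function(key, value, expectedVersion) {
      var r = rows[key];
      if (r.version !== expectedVersion) {
        return false;  // stale: someone else changed the record
      }
      r.value = value;
      r.version += 1;
      return true;
    }
  };
}

// Two "users" read the same record; only the first update wins.
var store = makeStore();
store.put('PO-1', 'draft');
var a = store.read('PO-1');
var b = store.read('PO-1');
var first = store.update('PO-1', 'approved', a.version);   // true
var second = store.update('PO-1', 'rejected', b.version);  // false: stale
```

          Note how nothing is ever locked: the second writer is simply told his data is stale, which avoids the orphaned-lock problem entirely.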

          1. Kevin Che

            Thanks a lot Thomas.

            Your information is very helpful to me.

            I would do more research on the optimistic concurrency control as you suggested.

            Thanks & Regards,

            Kevin

  5. Jay Thoden van Velzen

    The most striking difference is the removal of the need for the type specific getters or setters.  Now you simply pass in your JavaScript variable and the interface determines the type.

    That also means it is incumbent on the developer to do type checking and variable/parameter sanitation before passing anything through blindly. Many of these parameters will be user-supplied and can thus be anything, including things you may not expect.

    This may be obvious and assumed by Thomas, but let’s make it explicit:

    Never pass anything into any query when you don't know what it is. I'd go much further than simply checking whether it's undefined, as in the code sample: check for length, boundaries, weird characters (single quotes in whatever encoding, <script> tags, embedded JavaScript, etc.), and whether it converts to a number if you know it should be a number.
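    A small helper along the lines described might look like this (a sketch only, not part of any SAP API; the function name, options, and patterns are all made up for illustration):

```javascript
// Hypothetical input-sanitation helper in the spirit of the comment
// above: reject anything that is undefined, too long, or outside a
// whitelist pattern before it ever reaches a query.
function checkParam(value, opts) {
  if (typeof value === 'undefined' || value === null) {
    return { ok: false, reason: 'missing' };
  }
  var s = String(value);
  if (s.length > opts.maxLength) {
    return { ok: false, reason: 'too long' };
  }
  if (!opts.pattern.test(s)) {
    return { ok: false, reason: 'unexpected characters' };
  }
  return { ok: true, value: s };
}

// Usage: a product id like 'HT-1000' must be short and alphanumeric.
var idRules = { maxLength: 10, pattern: /^[A-Z0-9-]+$/ };
var good = checkParam('HT-1000', idRules);
var bad = checkParam("x'; DROP TABLE t--", idRules);
// good.ok is true; bad.ok is false (rejected before any query runs)
```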

    1. Thomas Jung Post author

      I think you are misrepresenting the database interface here.  The HDB interface will still do type checks: if you pass character data into a numeric database column, it will throw an exception which you can catch.  Or you can check this before going to the database. That's a design decision, and not unlike the old DB interface; you just no longer have the data types hard-coded into your source via setter/getter functions.

      Also, SQL injection is protected against by the parameterized HDB interface (just like the old DB interface). You make it sound like the application developer has to check for embedded commands, but in fact the DB interface escapes input and protects against SQL injection.

      I think it is rather obvious that if you don't want certain patterns of data inserted, the application has to check for that. I don't see why you would expect the database interface to look for script tags, JavaScript, etc. within a character string.

      1. Jay Thoden van Velzen

        Agreed, and I made the point that this was rather obvious; I certainly don't expect the DB to perform these checks for us. I come from Java, and actually like that it forces me to match data types for my own protection. And that is in a strongly-typed language, not JavaScript, which has some challenges in that area. Making things easier is great, but it always comes with some trade-offs as well.

        I just wanted to make sure we mention that this new way of setting parameters doesn't take away the developer's responsibility to verify inputs into the application, so that we don't get new or less experienced developers thinking it is OK to just grab a parameter value and pass it into a query as long as it is not undefined.

      2. Matthias Nott

        I have somewhat of a yes-and-no feeling here. I agree with you that things should happen where they should happen. On the other hand, one approach worth considering is helping the user of the interface we give him to avoid making mistakes. A well-designed architecture should not break just because a programmer is under stress, his timeline forces him to take shortcuts, and so on. That happens all the time, and what I really think we should prevent is the case where an application that, as you said, should have done proper checking didn't do it, and it is then HANA from SAP that exposed that customer data. The implication would be: "could not SAP have checked that?" Remember, "1" + 2 = "12".

        If we make something an SEP (somebody else's problem), then we should make that very explicit. For example, we could easily define a dictionary of regular expressions that we check any query, parameterized or not, against. The user would have the option to amend that dictionary and also to remove from it, but then he would have to sign with his own blood that this is what he really wants to do. We could even publicly distribute updates to these kinds of rules, very much like the JavaScript blockers these days that ship pre-defined rule sets which you can include (and overwrite) and which are maintained elsewhere. And of course, he would also sign with his own blood that it is ultimately up to him to verify; it is meant as an assistance.

        Just a suggestion. Again, I agree that if an application is fundamentally flawed, there is only so much you can do to fix it. But what we probably really want is for our customers to simplify their developments and get to market quicker. That should involve providing them with functionality that allows them to be less than perfect, i.e. to respond better to changing requirements and restricted timelines. Especially in times of Agile, with users focusing on the user level of the architecture rather than the architecture itself, this might be quite useful.

  6. Bruno Terlizzi De Renzo

    Hello Thomas,

    Can you please clarify one thing?

    I'm using this new API to call a procedure that should insert data into a table.

    The XSJS is called every minute by an XSJOB.

    The problem is that this call is not committing, so although the INSERT command is executed, the data is not saved.

    The new API does not use "$.db.getConnection()", so I have no way to call the commit() function after the procedure call in XSJS.

    Via debug I’ve looked for another object that could have the commit() function but could not find any.

    So, can you please clarify how to commit this stored procedure call via XSJS?

    Best regards,

    Bruno Renzo

      1. Bruno Terlizzi De Renzo

        Hi Thomas,

        Thanks for the quick response.

        I’m using the template from the documentation and my code is like this:

        --------------------------------------------------------

        // import XS Procedures library
        var XSProc = $.import("sap.hana.xs.libs.dbutils", "procedures");

        // set a schema where temporary tables can be created for passing table-valued parameters to the procedure
        XSProc.setTempSchema("SAPHANADB");

        // load the procedure
        var proc = XSProc.procedure("SAPHANADB", "PackageName", "ProcedureName");

        // call the procedure
        var result = proc("String1", variable, "String2", "String3", 15);

        --------------------------------------------------------

        As you can see, I have no connection object, and the call fully works (I have a remote debug session in HANA Studio and the procedure is called normally by the XSJS job); it just doesn't commit.

        I'd appreciate it if you could tell me what I am doing wrong…

        I'm on HANA rev 94.

        Thanks again

        1. Thomas Jung Post author

          So you are using the XSDS library, not the new database library ($.hdb) as you said in your previous posting.  Actually, for your scenario I'd probably just use the new $.hdb instead.  But if you want to use XSDS to call the procedure, you have to create the connection object and pass it into the interface:

          JSDoc: Namespace: procedures

          Its included in the optional config parameter:

          <static> procedure(schema, package, proc, config) → {procedures~procedurecaller}

          Returns a function which calls a stored procedure.

          Parameters:
            • schema (string): the schema name where the stored procedure is located
            • package (string): the package name of the stored procedure
            • proc (string): the name of the stored procedure
            • config (Object, optional): a configuration object containing one or several of the following properties:
              • connection ($.db.Connection, optional): a connection object to be used for calling the procedure
              • input (function, optional): an array of functions, each of which is applied to the corresponding element in the array of input arguments before invoking the procedure
              • output (function | string, optional): an array of functions, each of which is applied to the corresponding element in the array of output objects; it can also be the string 'raw' if the unmodified ResultSet should be returned

            1. Bruno Terlizzi De Renzo

              Thanks! This will help me!

              I’ll try again with the XSDS transaction object.

              Yes, currently I've switched back to $.hdb.

              Cheers!

              P.S.: If you are going to SAPPHIRE, watch the S/4HANA demo during the keynote of the last day.

  7. Sergio Guerrero

    Thomas Jung, when I am using the new DB API to execute a stored procedure, I try to read the result set (or array on SPS 09) as result['$resultSets'], and I am able to see it. Based on the documentation, result['$resultSets'] is not enumerable, so I cannot see it in a for loop. Anyway, when I view the response in the Postman REST client, it shows as { 0: {}, 1: {} … n: {} }. How can I change those objects into an actual array, so that in my browser I see something like

    {

      result: [ {}, {}, …, {} ], // this is the format I would like to see

      maybeOtherProperties: {}

    }

    1. Thomas Jung Post author

      Well, you can iterate over it, but you have to take into account that the first level of the hierarchy is the array of record numbers. Personally, I sometimes like to flatten that out and remove that level of the hierarchy.  I use this generic routine to do so:

      /**
      @function Puts a JSON object into the Response Object
      @param {object} jsonOut - JSON Object
      */
      function outputJSON(jsonOut){
        var out = [];
        for(var i = 0; i < jsonOut.length; i++){
          out.push(jsonOut[i]);
        }
        $.response.status = $.net.http.OK;
        $.response.contentType = "application/json";
        $.response.setBody(JSON.stringify(out));
      }

      So here is the resultSet JSON as returned from $.hdb:

      [Screenshot Ex1.png: the nested result set JSON returned from $.hdb]

      but then after passing it through the above routine it looks like this:

      [Screenshot Ex2.png: the flattened result after passing through the routine]

      Is that more of what you wanted?
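      If the $.hdb result set truly behaves as an array-like object (numeric keys per row plus a length property), the same flattening can likely be done with a standard array helper. This is a sketch against a plain stand-in object, not tested against the real interface:

```javascript
// Flatten an array-like object (numeric keys + length) into a real
// array; equivalent to the loop in the routine above.
function flatten(arrayLike) {
  return Array.prototype.slice.call(arrayLike);
}

// Usage with a stand-in for the result set shape:
var rsLike = { 0: { ID: 'A' }, 1: { ID: 'B' }, length: 2 };
var rows = flatten(rsLike);
// rows is a plain array, so JSON.stringify(rows) yields a JSON array.
```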

      1. Sergio Guerrero

        Yes, that's kind of what I am looking for; however, I am still seeing the number index.

        This may still work though. Thank you.

        Actually this also works:

        var output = {
          products: result['$resultSets'],
          maybeAnotherOutArray: ['abc', 'bsv', 'shuhs'],
          someintArray: [1, 2, 3, 4, 556, 44]
        }

        then my output in the browser would be what i need

        {
          products: // some objects {}, {}, {}
          maybeAnotherOutArray: ['abc', 'bsv', 'shuhs'],
          someintArray: [1, 2, 3, 4, 556, 44]
        }

        Thank you again

        1. Thomas Jung Post author

          What level did you pass into that routine? If you sent in the whole JSON ResultSet, then yes, the record numbers would still be there; there is also the level for the return parameter name.  So in my example above I explicitly passed in my one parameter:

          function hdbFlattenedTest(){

            outputJSON(_selection().EX_BP_ADDRESSES);

          }

          You’ll have to adjust the logic according to your situation.

              1. Thomas Jung Post author

                That's just the name of the function inside my example. I wrote the database selection once and called it from two different entry points: one that outputs the $.hdb result directly and one that flattens it. Nothing special.

                Here’s the whole example I created to take the screen shots:

                function hdbDirectTest(){
                  var results = _selection();
                  //Pass output to response
                  $.response.status = $.net.http.OK;
                  $.response.contentType = "application/json";
                  $.response.setBody(JSON.stringify(results));
                }

                function hdbFlattenedTest(){
                  outputJSON(_selection().EX_BP_ADDRESSES);
                }

                function _selection(){
                  var connection = $.hdb.getConnection();
                  var partnerRole = $.request.parameters.get("PartnerRole");
                  partnerRole = typeof partnerRole !== 'undefined' ? partnerRole : '01';
                  var getBpAddressesByRole = connection.loadProcedure("SAP_HANA_EPM_NEXT",
                    "sap.hana.democontent.epmNext.procedures::get_bp_addresses_by_role");
                  var results = getBpAddressesByRole(partnerRole);
                  return results;
                }

                /**
                @function Puts a JSON object into the Response Object
                @param {object} jsonOut - JSON Object
                */
                function outputJSON(jsonOut){
                  var out = [];
                  for(var i = 0; i < jsonOut.length; i++){
                    out.push(jsonOut[i]);
                  }
                  $.response.status = $.net.http.OK;
                  $.response.contentType = "application/json";
                  $.response.setBody(JSON.stringify(out));
                }

                var aCmd = $.request.parameters.get('cmd');
                switch (aCmd) {
                case "direct":
                  hdbDirectTest();
                  break;
                case "flattened":
                  hdbFlattenedTest();
                  break;
                default:
                  hdbDirectTest();
                  break;
                }

  8. Lucas Ferraz

    It's a much better interface, but if you select a CLOB column with the $.hdb interface and the column's content is a JSON string, it doesn't work: it returns an empty object instead of the JSON string. This doesn't happen with the $.db interface.

    I can't use the NVARCHAR or VARCHAR types because of their length limitation of 5,000 characters, nor TEXT, because input parameters in procedures cannot be TEXT.

  9. Ashutosh Hans

    Hi Thomas,

    In my application I have used the old database API and I am calling a stored procedure.

    The procedure takes approx. 1 hour to run, and after exactly 10 minutes a "connection timed out" message is displayed.

    Is there any way to increase timeout/time for this database connection.

    The following parameters are set to corresponding values :

    idle_connection_timeout_application_hdbstudio = -1

    idle_connection_timeout_application_statisticsserver = 60

    tcp_keepalive_time = 3600

    sessiontimeout = 3600

    This issue is similar to Connection timed out | SCN

    Thanks for help

    Regards,

    Ashutosh

    1. Thomas Jung Post author

      As your question is not at all related to the topic of this blog, please post it as its own question in the discussion forums.

  10. Nehal Fonseca

    Hi Thomas Jung

    I think the new HDB interface is great: you don't have to worry about the data types, which makes concentrating on the business part of the code a lot easier. However, I did face some issues and was wondering if you have any solutions.

    1. Columns starting with special characters like underscore or slash don't seem to be allowed in the new syntax. I get syntax errors here.

    2. How do you deal with RAW binary or VARBINARY data transfers? I get runtime errors here, something to the effect of the data type being incompatible.

    I faced both issues while posting data from my on-premise HANA DB to HCP over XS HTTP destinations and the cloud connector. I used JSON parsing as a medium. The tables on either side were identical. The errors were on the HCP XSJS side; on-premise went through just fine.

    Nehal.

