As an independent consultant, I need to keep track of my billable hours, send invoices, follow up on overdue invoices, file my income taxes/VAT every month/quarter, follow up on prospective new clients, and so on. Since I would rather focus on my daily job than on these dreadful chores, which consume a considerable amount of time I'd rather spend with my wife and kids, I would like to automate as many of these tasks as possible.

And so, for a couple of years now (1), I've been developing an app on SAP HANA Cloud Platform that ultimately performs all these tasks without much intervention on my part.

This blog — and I intend to make it a series — is aimed at beginners who have (some) J2EE knowledge, but would like to know how to develop on a Platform-as-a-Service such as SAP HANA Cloud Platform. In each blog, I will focus on a specific part of my application that uses some kind of SAP HCP functionality, and explain what I did and why. Hopefully, the scenarios I'm about to outline more or less match your own, and this blog series can be of help.

So, as an appetizer, I would like to focus on a rather common task: uploading an MS Excel spreadsheet to a database, and using OData to display, modify and submit its contents.

Uploading and persisting

In order to store the spreadsheet data, we need a table. Luckily, SAP HANA Cloud Platform comes out of the box with both MaxDB and HANA DB for persistence (2). For local testing, I prefer a locally installed Derby database (see Essentials – Working with the local database for how to set it up).

Now, the spreadsheet I would like to upload contains mutations (transactions) from my bank account, which I download from my online banking website. As the spreadsheet contains all the data I'm interested in, I created a JPA project in Eclipse, with a JPA entity 'MutationStaging' whose attributes match my mutations spreadsheet:

[Screenshot: the MutationStaging JPA entity with attributes matching the spreadsheet columns]

(The reason for a 'staging' table is that I want to perform some cleaning and modification while — from a user perspective — still in the 'upload' phase. Once done, all the data is transferred to the more definitive — but identical in structure — 'Mutation' table, and the 'MutationStaging' table is then cleared to cater for a subsequent upload.)

To actually upload the spreadsheet, I skipped the extra overhead of MIME uploads via OData/REST and instead used a standard HttpServlet. Of course, to do something meaningful with a blob of data in the form of an .xls file, I used a few external libraries: Apache HttpComponents and Commons FileUpload for basic file handling, and the lightweight JExcelAPI (3).

The relevant dependencies in my Maven POM file look like this (the versions shown are simply ones that worked for me; use whatever is current):

  <!-- Apache HttpComponents -->
  <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>4.3.6</version>
  </dependency>

  <!-- Apache Commons FileUpload -->
  <dependency>
      <groupId>commons-fileupload</groupId>
      <artifactId>commons-fileupload</artifactId>
      <version>1.3.1</version>
  </dependency>

  <!-- JXL -->
  <dependency>
      <groupId>net.sourceforge.jexcelapi</groupId>
      <artifactId>jxl</artifactId>
      <version>2.6.12</version>
  </dependency>

The doPost method of my upload servlet looks like this:


  /**
   * @see HttpServlet#doPost(HttpServletRequest request, HttpServletResponse response)
   */
  protected void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {

      try {
          if (ServletFileUpload.isMultipartContent(request)) {
              FileItemFactory   factory        = new DiskFileItemFactory();
              ServletFileUpload upload         = new ServletFileUpload(factory);
              List<FileItem>    items          = upload.parseRequest(request);
              byte[]            excelFileBytes = this.getDataFromExcelWorkSheet(items);

              if (excelFileBytes == null) {
                  throw new Exception("Data not imported successfully.");
              }

              this.processExcelFile(excelFileBytes);
          }
      } catch (FileUploadException e) {
          response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage());
      } catch (Exception e) {
          response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage());
      }
  }

The Apache Commons FileUpload API parses the request into a List of FileItems, which is then converted to an array of bytes in the getDataFromExcelWorkSheet method:


    /**
     * Gets the data from the Excel worksheet as a byte array.
     *
     * @param items the parsed multipart form items
     * @return byte[] the file contents, or null if no file was uploaded
     */
    private byte[] getDataFromExcelWorkSheet(List<FileItem> items) {

        byte[] excelFileBytes = null;

        if (items != null) {
            Iterator<FileItem> iter = items.iterator();

            while (iter.hasNext()) {
                FileItem item = iter.next();

                if (!item.isFormField() && item.getSize() > 0) {
                    try {
                        excelFileBytes = item.get();
                    } catch (Exception e) {
                        return null;
                    }
                }
            }
        }

        return excelFileBytes;
    }
after which the file is processed, and each worksheet row is stored as a new record in the MutationStaging table:

  private void processExcelFile(byte[] excelFile) {

      EntityManager em = emf.createEntityManager();

      try {
          Workbook wb    = Workbook.getWorkbook(new ByteArrayInputStream(excelFile));
          Sheet    sheet = wb.getSheet(0);

          em.getTransaction().begin();

          // Skip the header row, and persist every data row as a MutationStaging record
          for (int i = 1; i < sheet.getRows(); i++) {
              MutationStaging mutation = this.getMutationFromExcelRow(sheet, i);
              em.persist(mutation);
          }

          em.getTransaction().commit();
      } catch (BiffException e) {
          e.printStackTrace();
      } catch (IndexOutOfBoundsException e) {
          e.printStackTrace();
      } catch (IOException e) {
          e.printStackTrace();
      } finally {
          em.close();
      }
  }

  private MutationStaging getMutationFromExcelRow(Sheet sheet, int rowNumber) {

      MutationStaging mutation    = new MutationStaging();
      Cell[]          cellsInRow  = sheet.getRow(rowNumber);
      BigDecimal      amount      = new BigDecimal(cellsInRow[2].getContents());
      BigDecimal      tax         = amount.multiply(TAX_PERCENTAGE);

      tax = tax.divide(HUNDRED, BigDecimal.ROUND_HALF_UP);

      // Which column maps to which attribute depends on your bank's export format;
      // the indices and setters below are examples
      mutation.setDate(cellsInRow[0].getContents());
      mutation.setDescription(cellsInRow[1].getContents());
      mutation.setAmount(amount);
      mutation.setTax(tax);

      return mutation;
  }
As you can see, I also fill an extra field in the table called 'tax', which I calculate by taking 'TAX_PERCENTAGE' of the 'amount' value. Also, I don't fill all the fields defined in the JPA entity, but this is just an example.
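Since BigDecimal rounding can be subtle, here is a minimal, self-contained sketch of that tax calculation; the 21% rate and the helper name taxFor are assumed examples, not values from my actual app:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class TaxCalc {

    // Assumed example rate; substitute your own TAX_PERCENTAGE
    static final BigDecimal TAX_PERCENTAGE = new BigDecimal("21");
    static final BigDecimal HUNDRED        = new BigDecimal("100");

    // Mirrors the two-step calculation in getMutationFromExcelRow:
    // multiply by the percentage, then divide by 100, rounding half-up
    static BigDecimal taxFor(BigDecimal amount) {
        BigDecimal tax = amount.multiply(TAX_PERCENTAGE);
        return tax.divide(HUNDRED, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        System.out.println(taxFor(new BigDecimal("121.00"))); // prints 25.41
    }
}
```

Note that divide(divisor, roundingMode) keeps the dividend's scale, so an amount with two decimals yields a tax with two decimals.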

That’s the J2EE middleware part. For the frontend, I use the SAPUI5 library to create the user interface, and interact with the middleware. In the UI, I use a simple sap.ui.commons.FileUploader control:







    <!-- uploadUrl: the relative path of your upload servlet -->
    <FileUploader id="fileUploader"
        uploadUrl="upload"
        tooltip="Upload your file to the local server"
        uploadComplete="handleUploadComplete" />

    <Button text="Upload File" press="handleUploadPress" />

where the property 'uploadUrl' points to the relative path of the upload servlet, and the 'uploadComplete' event is caught by the 'handleUploadComplete' event handler in the view controller.

The upload itself is triggered by pressing the button, whose 'press' event triggers the 'handleUploadPress' event handler in the controller.

The controller JavaScript code for these two event handlers is fairly simple:

    handleUploadPress : function(oEvent) {
        var oFileUploader = this.getView().byId("fileUploader");
        oFileUploader.upload();
    },

    handleUploadComplete : function(oEvent) {
        var iReadyState = oEvent.getParameter("readyStateXHR"),
            iHTTPStatus = oEvent.getParameter("status");

        if (iReadyState === 4 && iHTTPStatus == 200) {
            sap.ui.commons.MessageBox.alert("Upload OK!");
        }
        else {
            sap.ui.commons.MessageBox.alert("Upload Failed! ReadyState=" + iReadyState + ", HTTP Status=" + iHTTPStatus);
        }
    },

So now we’re able to upload the Excel spreadsheet, and store its contents in a database table.

Displaying and updating the uploaded data

We now need a means to display the just-stored data from the 'MutationStaging' table. We could of course use an HttpServlet again to retrieve the data, but a much better way is to use OData's built-in CRUD (Create, Read, Update, Delete) functionality. For that, we use another highly useful API: the Apache Olingo library.

First, we need to create a factory class which accesses our JPA persistence:

public class JpaEntityManagerFactory {

  public static final String               DATA_SOURCE_NAME      = "java:comp/env/jdbc/DefaultDB";
  public static final String               PERSISTENCE_UNIT_NAME = "qbiz-jpa";
  private static      EntityManagerFactory entityManagerFactory  = null;

  /**
   * Returns the singleton EntityManagerFactory instance for accessing the default database.
   *
   * @return the singleton EntityManagerFactory instance
   * @throws NamingException
   *     if a naming exception occurs during initialization
   * @throws SQLException
   *     if a database error occurs during initialization
   */
  public static synchronized EntityManagerFactory getEntityManagerFactory() throws NamingException, SQLException {

      if (entityManagerFactory == null) {
          InitialContext      ctx        = new InitialContext();
          DataSource          ds         = (DataSource) ctx.lookup(DATA_SOURCE_NAME);
          Map<String, Object> properties = new HashMap<String, Object>();

          properties.put(PersistenceUnitProperties.NON_JTA_DATASOURCE, ds);
          entityManagerFactory = Persistence.createEntityManagerFactory(PERSISTENCE_UNIT_NAME, properties);
      }

      return entityManagerFactory;
  }
}

In this class, we specify the JNDI datasource path for the SAP HANA Cloud Platform database, as well as the name of our JPA persistence unit, in this case 'qbiz-jpa'.
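For completeness, the matching persistence unit in META-INF/persistence.xml could look something like the sketch below. Since the factory class above uses EclipseLink's PersistenceUnitProperties, EclipseLink is assumed as the JPA provider; the entity package names are made-up examples:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <persistence-unit name="qbiz-jpa" transaction-type="RESOURCE_LOCAL">
        <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
        <!-- package names are examples; use your own -->
        <class>com.example.qbiz.model.MutationStaging</class>
        <class>com.example.qbiz.model.Mutation</class>
    </persistence-unit>
</persistence>
```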

Then, in order to access our data via OData services, we create another factory class, which extends Olingo's ODataJPAServiceFactory class and hooks into the just-created JpaEntityManagerFactory class:

  public class QBizServiceFactory extends ODataJPAServiceFactory {

      private static final String PERSISTENCE_UNIT_NAME = "qbiz-jpa";

      @Override
      public ODataJPAContext initializeODataJPAContext() throws ODataJPARuntimeException {

          ODataJPAContext oDataJPAContext = this.getODataJPAContext();

          try {
              EntityManagerFactory emf = JpaEntityManagerFactory.getEntityManagerFactory();

              oDataJPAContext.setEntityManagerFactory(emf);
              oDataJPAContext.setPersistenceUnitName(PERSISTENCE_UNIT_NAME);

              return oDataJPAContext;
          } catch (Exception e) {
              throw new RuntimeException(e);
          }
      }
  }
In the application's web.xml file, we then define the ODataServlet, which links to the QBizServiceFactory class, and specify a mapping so we can access the OData service via a URL. If the context root of my application is 'qbiz-web' and the URL mapping specified in web.xml is 'qbiz.svc', I can now test my OData service locally, using any of the supported OData functions such as sorting, with the URL http://localhost:8080/qbiz-web/qbiz.svc/MutationStagings?$orderby=Date :

[Screenshot: the OData service response in the browser, sorted by Date]
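For reference, the web.xml wiring could look like the snippet below. This follows the usual Apache Olingo 2 setup with a CXF-based JAX-RS servlet; the package name of the service factory is a made-up example:

```xml
<servlet>
    <servlet-name>ODataServlet</servlet-name>
    <servlet-class>org.apache.cxf.jaxrs.servlet.CXFNonSpringJaxrsServlet</servlet-class>
    <init-param>
        <param-name>javax.ws.rs.Application</param-name>
        <param-value>org.apache.olingo.odata2.core.rest.app.ODataApplication</param-value>
    </init-param>
    <init-param>
        <param-name>org.apache.olingo.odata2.service.factory</param-name>
        <param-value>com.example.qbiz.web.QBizServiceFactory</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>ODataServlet</servlet-name>
    <url-pattern>/qbiz.svc/*</url-pattern>
</servlet-mapping>
```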

Awesome! This means we can now define an ODataModel in our UI, retrieve the data from the 'MutationStaging' JPA entity, and display it in an SAPUI5 table.

Now, since the ODataModel uses one-way binding by default, any update to the data results in an update request to the J2EE layer, and I wanted some extra flexibility. In addition, I don't want updates to go to the 'MutationStaging' table, but to the 'Mutation' table, which is identical in structure.

So after the ODataModel is populated, I copy its data into a client-side JSONModel. This allows you to make any modifications without a separate request to the backend.
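Stripped of the SAPUI5 specifics (in the app, this amounts to calling oJSONModel.setData(...) with the ODataModel's read results), the copy is essentially a deep clone of the OData result set, so edits to the client-side model never touch the original data. A framework-free sketch, with cloneResults as a hypothetical helper:

```javascript
// Deep-clone the OData results so edits to the UI copy never
// touch the original server data (assumes plain JSON-style data)
function cloneResults(oResults) {
    return JSON.parse(JSON.stringify(oResults));
}

var oServerData = { MutationStagings: [ { Amount: "100.00", Tax: "21.00" } ] };
var oUIData = cloneResults(oServerData);

// Modify the client-side copy only
oUIData.MutationStagings[0].Tax = "0.00";

console.log(oServerData.MutationStagings[0].Tax); // still "21.00"
```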

Once we're done modifying the data, we can simply use OData batch operations to submit everything at once. The SAPUI5-supplied ODataModel comes in handy again here, so we can code everything in JavaScript; no additional Java coding is needed.

If we want to submit our data at the push of a button, we only need to code the event handler for the ‘press’ event for that button:

    doAccept: function(oEvent) {
        var aBatchOperations = [],
            oUIModel         = this.getView().getModel(),
            oODataModel      = this.getView().getModel("OData");

        var aMutationStaging = oUIModel.getProperty("/MutationStagings");

        for (var i = 0; i < aMutationStaging.length; i++) {
            aBatchOperations.push(oODataModel.createBatchOperation("/Mutations", "POST", aMutationStaging[i]));
        }

        oODataModel.addBatchChangeOperations(aBatchOperations);
        oODataModel.submitBatch(
            function(oData, oResponse, aErrorResponses) {
                if (util.ExceptionHandler.checkBatchResponse(oResponse)) {
                    sap.ui.commons.MessageBox.alert("Data submitted successfully!");
                }
            },
            function(oError) {
                sap.ui.commons.MessageBox.alert("Error submitting data: " + oError.message);
            }
        );
    },

…and our modified data is then submitted to the ‘Mutation’ table. A small check using the OData service reveals the data is successfully submitted:

[Screenshot: the OData service showing the records submitted to the Mutation table]

Next time…

In the next blog — which will probably arrive shortly after SAPPHIRENOW — I would like to show how to utilize the Document Service of SAP HANA Cloud Platform to store customized documents (invoices and reminder letters, in this case) which are generated using Apache Velocity.

In the meantime, please let me know about any issues, doubts, or room for improvement in the comments section.

1) Since it was called NetWeaver Cloud, anyone remember? 😉

2) Although there’s nothing wrong with MaxDB, it kinda feels like having a Mercedes S class, but with the smallest engine possible and velour upholstery… So HANA DB it is 😉

3) I may need to switch to Apache POI soon, but this has worked for me for quite some years now without issues


  1. Rui Nogueira

    Thanks a lot Robin for this great blog post!

    I’ve used Apache POI myself and it is easy to use. Let me know if you’d need a helping hand.



    1. Robin van het Hof Post author

      Thanks Rui, much appreciated!

      And stay tuned for the next episode about SAP HCP Document Service, together with Apache Velocity document templating and PDF creation.

