
Just a quick thank-you to the team before I crash. ZZZZ. I thought delivering a “Predictive Market” was a cool mini-project on which to base all the cool SAP technologies – HANA, Gravity, Gateway, Mobility, Yahoo Pipes integration. I did worry that maybe I had bitten off a little too much for 30 hours – but that worry turned out to be unfounded. This has been the most fun I’ve had in a while.

It’s a nice takeaway from the InnoJam to have the honor of conceptualizing the project for the first team to win it in Australia. I hesitate to use the word ‘lead’, as all the members were quite senior and I believe each made the most of the exercise. The achievement of the “Predictive Market” components is almost just a side-effect of everyone’s keenness to get us through.

All the other teams were quite strong too and I’m privileged to be part of this event.

  • (Us) Harness the forecasting power of the crowd through “Predictive Markets”
  • Geo-aware iPad app for disaster management
  • UML auto-documentation in SAP StreamWork (maddoc)
  • An iPhone app to manage assets in real time
  • A mobile/financial scenario supported by iPad-based financial reporting

Chris P and maddoc: you have my respect for taking on the most technical of the projects and bending Gravity/BPML to your will.

To the asset management team – you’re the only team to implement SUP (Sybase Unwired Platform) in the timeframe. I consider that an amazing achievement, given that none have been implemented in Melbourne (apart from demo ones, to my knowledge).

To the team – Tony, Rod, Nigel, Paul, John, Glen, Sarat – I’m privileged to be a part of your company. I also know there’s a lot we did not cover in the demo: the Gravity schema, the market settlement engine, the iOS user interface, some HANA objects. I hope you consider that a reflection of the good stuff we collectively achieved and that Rod and Tony presented.

I’m also oddly proud of the fact that our HANA loads took down Sydney twice last night. Nice stress test! Hehehe.

Thank you Andrew, Dan, Jurgen, Rui and the rest of the SAP team. Thank you, University of Technology Sydney.

Next year, we’ll try not to clog the whole AsiaPac region with information.




  1. Former Member
    I’m looking forward to InnoJam in Las Vegas, even more so after your blog.

    “I’m also oddly proud of the fact that our HANA loads took down Walldorf twice yesterday night. Nice stress test!”

    Love it!  When you can take something down in a non-critical way – It means you’ve found an issue.  Of course, it’s only funny to those of us that didn’t have to fix the issue.  To those that did, well now they know where the problem is.

    What did you do that took the system down?  Curious minds want to know.


      1. Former Member

        Even more good news. Now you know what not to do with HANA.

        Maybe another blog about what not to do with data loads?  Wilbo / Chris?

        It would be nice to know what you did so I don’t do it. Odds are pretty good that I’ll find out what not to do anyway. Someone once said that if you’re not making mistakes, you’re not trying hard enough. Or something like that 🙂

        Great blog!  (And response – of course)


        1. Wilbert Sison Post author
          Had a quick chat with Sarat over email. It looks like we had problems with too many database commits in the same session. He said the first try was with SQLScript; the second time was from Java.

          We were concentrating on getting the scenario running, so we didn’t care too much about programming practice. We must have done something you wouldn’t want to do in a productive environment. I’ve asked Sarat to comment here. He’s in the middle of writing his blog too. That would be good reading. 🙂
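The “too many database commits in the same session” pattern can be sketched in plain Java. This is an illustrative sketch only, not the actual InnoJam code: `insertAll` is a hypothetical helper with the JDBC `INSERT` and `commit()` calls stubbed out as counters, so it just shows how many commits each strategy issues for the same row volume.

```java
// Sketch: commit-per-row vs. batched commits for the same number of inserts.
// Hypothetical helper, not the InnoJam code; the INSERT and commit() calls
// are stubbed out as counters.
public class CommitBatching {

    // Returns how many commits are issued when inserting `rows` records
    // with a commit after every `batchSize` inserts.
    static int insertAll(int rows, int batchSize) {
        int commits = 0;
        int pending = 0;
        for (int i = 0; i < rows; i++) {
            pending++;                  // stand-in for one INSERT
            if (pending == batchSize) { // stand-in for connection.commit()
                commits++;
                pending = 0;
            }
        }
        if (pending > 0) commits++;     // flush the final partial batch
        return commits;
    }

    public static void main(String[] args) {
        int rows = 12_000; // roughly one minute of the load described below
        System.out.println("commit per row:   " + insertAll(rows, 1));    // 12000 commits
        System.out.println("batches of 1000:  " + insertAll(rows, 1000)); // 12 commits
    }
}
```

Committing once per row multiplies the per-commit overhead by the row count – exactly the kind of thing you would batch in a productive environment.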

          Hi Sarat – how many records did we end up loading? Was it 5 million?

          1. Former Member
            You are right Wilbert.

            We were trying to have fun rather than mimic a productive environment.
            The observed slowness might have been due to many factors.

            There are too many variables, including the Java algorithm, garbage collection, the driver, the network, other teams trying to do things, someone trying to copy massive files onto the server, the procedure we wrote in SQLScript, the nature of row commits coming in at thousands of records per minute… many things.


            1. Former Member
              Sarat, Wilbert, Kaushik,

              Thank you for all the comments!   I’ve learned a lot from both the blogs and the comments.  Sarat – I’m looking forward to your blog.

              Of course, good programming practices get thrown out the door when you have to program VERY fast. We do code reviews here; that would be counterproductive at an InnoJam, when we’re usually scrambling to finish. So yes, I would guess that I’m going to be “slamming some code” into the system as well in Vegas at InnoJam. Hopefully not too badly, but it just depends on how close we are running to our deadlines.

              Again thank you for the great comments.  I learn so much from SCN!


        2. Former Member
          Hi Michelle,

          It was great fun at InnoJam, especially with HANA.
          The possibilities of what can be done are amazing (even for a HANA novice like me :-)).

          We worked on a minimal-setup HANA box located somewhere in Sydney.

          We did make a couple of interesting observations, including the Modeling Studio becoming unresponsive for some time when we tried to execute a SQLScript with 10,000 insert statements.

          We also developed a simple Java class to push records into a column-based table using JDBC. In that class, we executed the HANA procedure call in an infinite loop. We launched nearly half a dozen of these instances from the command prompt – around six infinite loops generating data and invoking the same insert procedure on the same table. We did a very basic row-count check on the table; on average, we were pumping 12,000 rows per minute.
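That setup can be sketched as follows. This is a sketch under assumptions, not our actual class: `RecordSink` is a hypothetical in-memory stand-in for the JDBC call to the HANA insert procedure, and the loops are bounded so the sketch terminates without a HANA box.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the load pattern: several generator threads all feeding the same
// table. RecordSink is a hypothetical stand-in for the JDBC call to the HANA
// insert procedure; at InnoJam the loops were infinite, here they are bounded.
public class LoadGenerators {

    interface RecordSink { void insert(); }

    static long run(int generators, int recordsPerGenerator) throws InterruptedException {
        AtomicLong rowCount = new AtomicLong();      // stand-in for the table row count
        RecordSink sink = rowCount::incrementAndGet; // stand-in for the insert procedure

        Thread[] loops = new Thread[generators];
        for (int i = 0; i < loops.length; i++) {
            loops[i] = new Thread(() -> {
                for (int j = 0; j < recordsPerGenerator; j++) {
                    sink.insert(); // one procedure call per record
                }
            });
        }
        for (Thread t : loops) t.start();
        for (Thread t : loops) t.join();
        return rowCount.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // "around six infinite loops ... invoking the same insert procedure"
        System.out.println("rows inserted: " + run(6, 2_000)); // 6 x 2000 = 12000
    }
}
```

All six generators contend on the same sink, which is the shape of the load that the single shared table saw.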

          The goal was to have a couple of million rows in the table within a few hours, and then execute another set of procedures in HANA to process them simultaneously – to look at the processing power and run some good analytics.

          After a couple of hours, we observed there were only 500,000+ rows (as opposed to the expected 1.5 million rows, based on our crude calculation), and the output on the console of our Java instances was slowing down.
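The crude calculation is simple: at the measured 12,000 rows per minute, roughly two hours of loading should have produced about 1.44 million rows (the “expected 1.5 million” above), so ~500,000 actual rows meant the effective rate had dropped to roughly a third of what the early row counts suggested.

```java
// Back-of-the-envelope check behind the expected row count above.
public class ThroughputCheck {

    static long expectedRows(long rowsPerMinute, long minutes) {
        return rowsPerMinute * minutes;
    }

    public static void main(String[] args) {
        long expected = expectedRows(12_000, 2 * 60); // measured rate x ~2 hours
        long observed = 500_000;
        System.out.println("expected ~" + expected + " rows, observed ~" + observed);
        // expected 1,440,000 vs observed 500,000: the load had slowed to
        // roughly a third of the measured rate.
    }
}
```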

          At that time, we were told that the server was running slow and would be restarted – rather than trying to understand why it was slow – so that the InnoJamming teams could keep going with their work.

          After that, we did run some other infinite loops, but not at the rate we did before.

          If you have any questions on what we did or how we did it, or any other observations, please feel free to let me know.
          I am no expert in HANA as of now, but I will try to elaborate on what we did in those 30 hours.


