Lessons from an ice cream maker – developing hybrid apps (HANA / JVM) on the HANA Cloud Platform
One of the most important enhancements in the HANA Cloud Platform (HCP) last year was the ability to use the HANA XS engine to create applications – an ever-increasing number of threads in the HCP developer forum on HANA XS usage reflects the popularity of this feature. Yet this functionality is still largely segregated from the existing platform and its JVM focus. The relative isolation of the two fundamental feature-sets means that developers really can’t optimally use the platform. This problem is especially true for HANA XS developers in that they can’t exploit the framework services already present in the JVM-based environment. It is as if such applications were ice cream cones with two flavors (not scoops!) that exist side-by-side – touching but not really blended.
This isn’t something I made up – there are developers already building such hybrid applications, and they are struggling with the architectural complexity involved in creating them.
Note: I know that “hybrid” usually has a different meaning in the SAP Cloudsphere (OnPremise vs. OnDemand, PrivateCloud vs. PublicCloud, etc.), but in this blog I use it to describe applications that blend HANA XS and JVM-based components.
I’d like to explore the existing integration possibilities between these two flavors.
Existing usage scenarios
Note: In this blog, I’ll focus on Java-based components when I talk about JVM-focused components, but applications written in any programming language that can run in the HCP JVM face the same issues.
Using HANA XS without JVM integration
In this scenario, the HANA XS components are using the HCP infrastructure as an IaaS and no integration with JVM-based components is present. Many of the existing threads on this topic in the HCP discussion forum on SCN concern this scenario.
Using HANA as a data source in JVM components via JPA
In this scenario, the JVM-based components use the Java Persistence API (JPA) to access HANA data in the database. These components could also access the data via JDBC or via stored procedures.
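As an illustration, a JPA persistence unit on HCP typically binds to the platform-managed default data source rather than hard-coding HANA connection details. This is only a sketch: the unit name and entity class are hypothetical, and EclipseLink is shown because it is a commonly used JPA provider on HCP.

```xml
<!-- persistence.xml: hypothetical persistence unit bound to the
     HCP default data source via JNDI -->
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="application" transaction-type="RESOURCE_LOCAL">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <!-- hypothetical entity class -->
    <class>com.example.Person</class>
    <!-- the platform-provided HANA data source -->
    <non-jta-data-source>java:comp/env/jdbc/DefaultDB</non-jta-data-source>
  </persistence-unit>
</persistence>
```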
Using HANA XS functionality in JVM-based components via OData
In this scenario, JVM-based components use REST calls to access OData interfaces that have been created by HANA-XS based code. There are a variety of frameworks available for JVM-based components to easily access OData interfaces including Apache Olingo.
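To make this concrete, here is a minimal, self-contained Java sketch that builds the URL for such an OData call. The service path, entity set, and filter are hypothetical; in a real application a framework such as Apache Olingo would also issue the request and parse the response.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class ODataQuery {
    // Builds an OData query URL with a $filter option and JSON formatting.
    static String buildQuery(String serviceRoot, String entitySet,
                             String filter) {
        String encoded = URLEncoder.encode(filter, StandardCharsets.UTF_8);
        return serviceRoot + "/" + entitySet
             + "?$filter=" + encoded + "&$format=json";
    }

    public static void main(String[] args) {
        // Hypothetical service root exposed by an XS .xsodata service
        String url = buildQuery(
            "https://myaccount.hana.ondemand.com/sales/sales.xsodata",
            "Orders", "Amount gt 100");
        System.out.println(url);
        // The JSON result could then be fetched with any HTTP client.
    }
}
```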
Using JVM-based functionality in HANA XS components via HTTP
One difficulty here is that the destination administration required for HANA XS outbound connectivity is distinct from the destination administration used by the Connectivity Service in the HCP – this increases the complexity of the administrative work.
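For illustration, an XS outbound destination is maintained as an .xshttpdest design-time file along the lines of the sketch below (the host and path are hypothetical), while the equivalent destination for JVM-based components is maintained separately via the HCP Connectivity Service – hence the duplicated administration.

```
description = "Hypothetical destination pointing at a JVM-based service";
host = "myjvmapp.hana.ondemand.com";
port = 443;
pathPrefix = "/api";
useSSL = true;
authType = none;
```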
As this final scenario depicts, HANA XS apps can access JVM-based applications via HTTP. Yet this access is relatively primitive and depends on the JVM-based applications providing an HTTP interface. Existing Java frameworks that might provide useful functionality but don’t expose HTTP-based interfaces, however, won’t be available to HANA XS-based applications.
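One way to make existing Java functionality reachable from XS is to wrap it in a thin HTTP layer. The sketch below uses the JDK’s built-in com.sun.net.httpserver for self-containedness (on HCP a servlet would be the more typical choice); the endpoint path and JSON payload are hypothetical.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class GreetingEndpoint {
    // The existing Java logic an XS app wants to consume,
    // kept separate from the HTTP plumbing so it stays testable.
    static String greet(String name) {
        return "{\"greeting\": \"Hello " + name + "\"}";
    }

    public static void main(String[] args) throws Exception {
        // Minimal HTTP wrapper so XS code can call this logic
        // via an outbound destination and $.net.http.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/greet", exchange -> {
            byte[] body = greet("XS").getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type",
                                              "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```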
Future usage scenarios based on River / RDL
River is a new development language for HANA. My assumption is that sometime in the future it will also be available in the HCP environment.
Note: Yes – I know that River is just a design-time and not a run-time tool. My assumption is that it generates code to deal with such integration-related tasks.
Note: River-designed HANA XS components could also support OData integrations as mentioned above.
River’s additional ability to use SOAP out-of-the-box, however, doesn’t bring the fundamental integration shift that might otherwise be possible.
A recent blog about the design principles for River provides some interesting areas to consider:
In today’s technology landscape, applications are rarely built using one technology. While most developers like to “start fresh” when coding an application, the reality is that there are considerably large, well established, code bases out there with years of work put into them. Even new applications will most likely be built on top of these.
Additionally, it is very likely that a developer would encounter the need to leverage some technology-specific runtime capability that isn’t easily expressed in River or supported by the River compiler.
To this end, we adopt an openness principle: River allows the consumption of components created using other technologies, or “breaking out” from River code to other technologies. The challenge is of course to allow this consumption without compromising other aspects mentioned above, namely simplicity and coherency, but also River’s ability to optimize the running code – invoking a runtime-specific code binds the execution to that runtime container.
These are very intriguing statements, and a recent picture seen on Twitter shows which languages might be relevant for such “break-outs”.
What is unclear is exactly how the break-out functionality might be integrated in River. Would you develop in RDL and have the framework generate Java code (this reminds me of Spring Roo)? Or would you embed Java code directly into your RDL code?
In our model of HCP integration patterns, this evolution results in an interesting diagram.
To be truthful, I don’t know if this would bring the two HCP flavors closer together. Under the covers, the only existing integration path is still HTTP / OData. JVM-based components without a HTTP interface still have to be wrapped with such functionality to allow access.
In this blog, I’ve focused on the architectural considerations when blending the two development flavors. There are a variety of other related topics that are interesting to examine.
- How do you determine which functionality is developed in each flavor? Currently, there are limitations associated with the use of HANA XS within HCP (for example, can you use HANA XS UI Integration Services?) – some of which exist due to fundamental characteristics of HCP as a PaaS that supports multi-tenancy; future platform enhancements may alleviate some of these issues. As the level of sophistication of HANA XS – and indeed the HANA DB itself – increases, there will be some overlap in the feature-sets, and decisions must be made about which HANA XS features will be available in the HCP.
- There are currently just a few JVM-based Framework Services provided by the HCP itself. Many of these already have an HTTP interface that could be accessed by HANA XS developers. As partners and other developers start providing their own services in the JVM world, they must be made aware that HANA XS developers are potential users of their features, and these new JVM-based services must be developed accordingly.
Each scoop of ice cream with multiple flavors is unique and has different amounts of each flavor – resulting in different swirls. This visual individuality, combined with the merged taste, is why we love ice cream. Hybrid applications in HCP are similar – there may be general patterns, but each application is unique.
When I look at the integration scenarios described above, I get the feeling that the HANA XS and JVM-based flavors in the HCP will largely remain separate. Although a tighter integration is desirable, the question is whether a deeper integration is even possible. The two environments run in two separate runtime containers – JVM vs. HANA database – so a tighter integration is difficult. The challenge is to optimally use each environment to create innovative applications. As more developers work on this platform and publicly describe their experiences, working “recipes” will help guide developers to exploit the platform’s potential.
Interesting post once again! In general, I believe that we (HCP team) and you are on the same page when it comes to interoperability of platform services: yes, these should be usable from all supported runtimes (as applicable!)
There are some exceptions though: so, XS would not need the persistence service to access the underlying datasource given it already resides right at the source, agree?
True, (as of now) you cannot directly invoke a XS JS function directly from within a JVM-based application, but that's not a limitation in my point of view. In fact, I believe that the scenarios you mentioned that leverage REST/OData are addressing the best way to do polyglot development and what I would personally recommend to focus on (at least for the foreseeable future.)
I see few drawbacks with this approach... actually, it allows you to leverage (external) interfaces you'd most likely have to provide for the UI layer or mobile devices anyway. In fact, that may be truer to componentization best-practices or the general SOA mindset (Note: I'm referring to SOA as a concept, not the technical mix of standards and protocols!)
It also avoids introducing another wrapper layer that would otherwise be required (see JCo as a RFC/Java bridge.)
Well, at least that's true for consuming XS JS functions from the JVM side. I do agree that on the XS side it would be good to have a higher-level API to ease the usage of the platform services on top of REST/OData. This could in fact be very similar to the persistence API provided as part of XS, which is indeed very similar to its Java counterpart 😉
Still, I think we are well on track: you can already(!) use the Document Service via the provided REST/Atom interface, and we certainly won't stop there! As I said, I think we are all on the same page and we are fully committed to finalize this symbiosis of NEO and HANA for good. Stay tuned!!!
PS: Oh, and yes... I'd rather avoid using the term "hybrid" in this context as we are typically using it to refer to scenarios in which cloud apps are connected to on-premise systems. From all I know, polyglot may be the most fitting term. cc: Ethan Jewett
I agree completely - there is an important distinction between an integration method being possible and it being the best choice. With all the work going on regarding OData, it is probably the ideal choice for many scenarios.
I also agree that the access from the JVM-based flavor towards the HANA XS / HANA world is currently more mature than the other direction. This is probably because the first scenario is older and more standardized, whereas the HANA XS -> JVM scenario is relatively new.
That is good to hear. I could imagine HCP-specific add-ons that might be added to the HANA DB to increase the efficiency of this interaction.
P.S. I also agree that polyglot is more appropriate than hybrid. I'll remember that in my next blog.
Trying to make me angry Matthias? 😉
Great blog Dick. One question: Won't River expose OData web services that could be consumed from the Java-based HCP platform? My understanding was that River generated OData services by default.
I definitely like 'polyglot' better than 'hybrid' - it is a more accurate description of the scenarios in which I'm interested.
Yes that is my understanding as well. I mentioned it in a note in the blog but didn't have a diagram showing that possibility.
I can confirm. Via annotations you can expose any function as an OData service via River.
Not at all, the opposite! It was meant as a sign of my appreciation for teaching me that after all "polyglot" may indeed be a term that developers use! (And rightly so!)
Yup - I got that! And thanks - I was pleased to see that you are considering the term now. It does make sense here, I think, though there are many options. I need to use more winking smileys in my messages to make them clear. ;;;-)
Interesting blog again.. Comparison with Blended Ice-cream... 😎