
Open Source for the win

One of the best aspects of Open Source software is that it is easy to use! You can freely download the software, and usually there is great documentation and sample code that comes along with it, resulting in a very low entry barrier. Furthermore, the code quality of popular projects is usually really good and sets the gold standard to strive for. These two factors pave the way to mass adoption, which further improves the quality of the code, as any bugs there may be are usually discovered and fixed. Last, but not least - this mass adoption results in steadily growing communities experienced with these Open Source projects, and these communities freely share their expertise by answering questions on forums like Stack Overflow. These are the reasons that drove us to start sharing samples, tools and frameworks as Outbound Open Source projects on GitHub.

As mentioned in part 1 of this series, we have forked a very simple demo application called "Granny's Addressbook" and want to make it more enterprise-ready over the course of the next weeks. We'll share our approach and considerations in accompanying blogs, yet as a software developer there's only a single point of truth - the source code!

Let's get started

So, before we start enhancing the application we first need to obtain it and make it run! Once we've accomplished this we can have a closer look, analyze the architecture of the application and then start to make it better. But we said we'd take it step by step, so let's focus on getting the application to run as-is first.

"Enterprise Granny" is a Maven-based application and the source code is available on github here: https://github.com/SAP/cloud-enterprise-granny

Those who have never worked with Git and Maven before may want to check out this tutorial I wrote a while ago, which walks you through the process of installing the necessary tools. With that knowledge you should be able to clone the GitHub repo (repository) and import it into your local Eclipse IDE.

[Ref] Essentials - Working with Git & Maven in Eclipse

Making it run

If you browse the commit history of the project to see the first changes I applied after forking it, you'll notice mainly two changes called 'meta' and 'DB abstraction'. The first was mainly about getting rid of some old artefacts (Eclipse-specific configuration files) and some meta-information. Much more interesting are the changes I applied to make the application DB-agnostic. The original "Granny" application used HSQL or PostgreSQL as the underlying DB [Ref]. The first is merely an in-memory database (which does not store data permanently) and the second isn't available on the SAP HANA Cloud Platform.

Even more important, we don't want to tie our application to a particular database anyway, as this would limit us to a specific environment. Instead we want to leverage a concept called "DB as a Service", which allows us to use standard APIs to interact with a default datasource that is automatically provisioned by the platform at runtime. This comes in handy during development, as we can deploy the application both locally and to the cloud without having to apply any changes to the configuration. Let's have a closer look at how this is accomplished...

Making Granny DB-agnostic

So, let's have a closer look at the changes I applied (here's the corresponding commit log). The following five files have been changed: the pom.xml, the Address object, the persistence.xml, the app-context.xml and the web.xml.

Let's go through them one by one:

pom.xml

When talking to people who are not too familiar with Maven, I usually describe it as a sort of recipe (as in cooking). In a nutshell, it lists all the required dependencies (ingredients) and the build instructions (how to cook). In this file I basically removed all the dependencies we no longer need (HSQL, PostgreSQL and Hibernate) and replaced them with the required EclipseLink dependencies. I also added a dependency to Derby, which is the DB I use locally. (For more information on how to use Derby as your local DB, please see this blog: Essentials - Working with the local database.)
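Just to give you an idea, the relevant dependency declarations look roughly like this (the version numbers below are placeholders - please check the pom.xml in the repository for the ones actually used):

<!-- EclipseLink as the JPA provider -->
<dependency>
    <groupId>org.eclipse.persistence</groupId>
    <artifactId>org.eclipse.persistence.jpa</artifactId>
    <version>2.5.0</version>
</dependency>

<!-- Derby as the local development database -->
<dependency>
    <groupId>org.apache.derby</groupId>
    <artifactId>derby</artifactId>
    <version>10.9.1.0</version>
</dependency>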

Address object

While trying to make the project run locally I ended up with some strange SQL errors. On closer inspection I realized that they were caused by the leading underscore used for the ID field in the Address object. Some databases may be able to handle it, but Derby doesn't (see here), and for the sake of portability I decided to make a minimally invasive change by introducing a single @Column annotation, which makes sure that the column name in the DB is always just "ID". This change has no impact on the application itself though.
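In essence, the mapping now looks along these lines (the class is stripped down to the ID field here, and the field type is simplified for illustration):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Address
{
    @Id
    @Column(name = "ID") // explicit column name, so the underscore-prefixed field maps to a plain "ID" column
    private String _id;

    // remaining attributes, getters and setters omitted
}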

persistence.xml

All that needed to be done here was to replace the JPA provider Hibernate with EclipseLink.
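In practice that boils down to the provider element of the persistence unit, roughly like this (the persistence unit name is just a placeholder):

<persistence-unit name="granny">
    <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
    <!-- entity classes and provider-specific properties omitted -->
</persistence-unit>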

app-context.xml

Again, I mostly changed the JPA-related configuration from Hibernate to EclipseLink. The most important change to notice, though, is the way I declared the DataSource:

<jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/DefaultDB" />

With the JNDI lookup we get rid of such hard-coded JDBC configuration and instead declare that the DataSource is provided by the runtime container, to be obtained via a standard JNDI lookup with the given name. So, how does that work? How does the DataSource become accessible via JNDI?

web.xml

Well, the answer is simple and brings us to the last change, applied in the web.xml file. Here, we declare a so-called resource reference to the DataSource. The rest is taken care of by the platform runtime. As stated in the documentation:

The persistence service binds the default data source that maps to your Web application's database schema as a resource to Tomcat's JNDI (Java Naming and Directory Interface) registry. Before you can consume the data source in your Web application, you need to declare a reference to the default data source provided by the persistence service.
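Such a resource reference typically looks like this (a sketch of the common declaration; the name matches the JNDI lookup shown above):

<resource-ref>
    <res-ref-name>jdbc/DefaultDB</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
</resource-ref>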

Wrap-up

That's it: the application is now ready to run on both your local server as well as in the cloud. Before we start applying further changes to make the application more enterprise-ready it may make sense to have a closer look at its architecture in order to get a better understanding of the pain points. So, in the next part of this series we'll do exactly that and discuss the good, the bad and the ugly. Stay tuned...

PS: If you have any further questions related to the changes discussed here, please feel encouraged to ask! :wink:
