matt_steiner
Active Contributor

Intro


Once you've worked in this fast-paced industry for a while, you understand that technologies come and go. The longer you are in the business, the more familiar the patterns of the hype cycle accompanying the latest trends become. That's when you slowly get fed up with terms like game-changing, disruptive, innovative and so on... and when you start to automatically take it with a grain of salt when you hear someone talk about 'best practices'.

I have to confess that I'm guilty of using the term myself in the context of presenting proven software architecture and design patterns to my peers. In my defense, for several years in my role as a software architect I was in charge of leading the development of enterprise applications using emerging technologies. Since there was no prior art one could have relied on, it made sense to share our experiences with other pioneers and document what worked and what didn't in order to establish some sort of guidelines. In that context it seems legitimate to refer to best practices...

So, what separates good architecture from the pack? In my opinion, it boils down to simplicity and flexibility!

It takes a while to develop an understanding of how to design software by breaking it up into modular components and to grasp the importance of separation of concerns (SoC). The cleaner the design of these components, the simpler it is to understand how exactly a piece of software works (= simplicity) and what it would take to alter or enhance it (= flexibility).
"It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is most adaptable to change. In the struggle for survival, the fittest win out at the expense of their rivals because they succeed in adapting themselves best to their environment." - attributed to Charles Darwin [REF]

Your mileage may vary, but for me being "adaptable to change" has always been a key characteristic of good architecture. If that's the case, then it shouldn't be too hard to... let's say... get Granny to run on Cloud Foundry, right? After all, we always promote Granny as a role model for quality coding, don't we?

Challenge accepted! [REF]

Making Granny PaaS-agnostic


Using Spring Cloud it is fairly easy to adjust the source code of Granny to make it PaaS-agnostic! In fact, it's as easy as 1-2-3...

Step 1 - Managing dependencies


Well, naturally we have to add the dependencies for Spring Cloud, or more specifically for the respective Spring Cloud Connectors, to the Maven pom.xml file.

<!-- CloudFoundry/Heroku -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-cloudfoundry-connector</artifactId>
    <version>${org.springframework.cloud-version}</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-heroku-connector</artifactId>
    <version>${org.springframework.cloud-version}</version>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-spring-service-connector</artifactId>
    <version>${org.springframework.cloud-version}</version>
</dependency>
<!-- SAP HANA Cloud Platform -->
<dependency>
    <groupId>com.sap.hana.cloud</groupId>
    <artifactId>spring-cloud-sap-connector</artifactId>
    <version>${com.sap.hana.cloud-version}</version>
</dependency>
<dependency>
    <groupId>com.sap.hana.cloud</groupId>
    <artifactId>spring-cloud-cloudfoundry-hana-service-connector</artifactId>
    <version>${com.sap.hana.cloud-version}</version>
</dependency>




 

Note: We also added the dependencies for the recently released Spring Cloud Connectors for SAP HANA Cloud Platform. Please read this blog post for further information.
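For completeness: the version placeholders used above (org.springframework.cloud-version and com.sap.hana.cloud-version) need to be defined in the properties section of the pom.xml. The version numbers below are merely placeholders for illustration; use whatever connector releases you actually build against:

<properties>
    <!-- versions shown are placeholders - adjust to the connector releases you use -->
    <org.springframework.cloud-version>1.0.0.RELEASE</org.springframework.cloud-version>
    <com.sap.hana.cloud-version>1.0.0.RELEASE</com.sap.hana.cloud-version>
</properties>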



Step 2 - Adjusting the configuration


Very similar to what the NEO stack of HCP does out-of-the-box, the Spring Cloud Connector for Cloud Foundry provides an existing (relational) datasource to a running application by auto-wiring it using dependency injection. Thanks to the respective Spring Cloud Connector for HCP, which provides the datasource acquired via JNDI in the same fashion, we now have a common way of declaring the datasource within our application configuration:




<!--
<beans profile="dev, prod">
    <jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/DefaultDB" />
</beans>
-->
<beans profile="dev, prod, cloud">
    <cloud:data-source id="dataSource"/>
</beans>
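For the cloud:data-source element to resolve, the cloud XML namespace provided by the Spring Cloud Connectors has to be declared in the application context as well. As a rough sketch (the exact schema location may vary depending on the connector version you use):

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:jee="http://www.springframework.org/schema/jee"
       xmlns:cloud="http://www.springframework.org/schema/cloud"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd
                           http://www.springframework.org/schema/cloud http://www.springframework.org/schema/cloud/spring-cloud.xsd">
    <!-- profile-specific <beans> elements as shown above -->
</beans>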

Likewise, we slightly adjust the environment profile specified in the web.xml to state cloud. One could argue about the fitness of cloud as an environment profile name, given that there may be multiple instances of the app running in the cloud (e.g. dev, staging, QA and production), but I guess the authors of the Spring Cloud Connectors assumed that each instance has its own DB wired automatically and that it was more important to distinguish between local and cloud environments. Anyway, you get the idea and should be able to make an educated decision in your own applications.
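In the web.xml this boils down to the standard spring.profiles.active context parameter (assuming that's how the profile is activated; the parameter name is Spring's, the value is ours):

<!-- hard-coded active profile - see the note below for a smarter alternative -->
<context-param>
    <param-name>spring.profiles.active</param-name>
    <param-value>cloud</param-value>
</context-param>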



 

Note: To be honest, the approach of providing a hard-coded environment profile name within the web.xml is indeed sub-optimal, and it would be much better to develop a custom ApplicationContextInitializer that is smart enough to detect the environment it is running in via environment variables (e.g. as illustrated here). We'll fix that in a subsequent commit!
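Just to sketch the idea (the class name is made up and the detection logic is deliberately naive): such an initializer could check for the VCAP_APPLICATION variable that Cloud Foundry injects into every application instance and activate the cloud profile accordingly, falling back to a local profile otherwise.

import org.springframework.context.ApplicationContextInitializer;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.core.env.ConfigurableEnvironment;

/**
 * Hypothetical sketch (not part of the current code base): activates the
 * "cloud" profile if a Cloud Foundry environment is detected, "dev" otherwise.
 */
public class CloudProfileInitializer implements ApplicationContextInitializer<ConfigurableApplicationContext>
{
    @Override
    public void initialize(ConfigurableApplicationContext applicationContext)
    {
        ConfigurableEnvironment environment = applicationContext.getEnvironment();

        // Cloud Foundry exposes VCAP_APPLICATION to every running application instance
        if (System.getenv("VCAP_APPLICATION") != null)
        {
            environment.addActiveProfile("cloud");
        }
        else
        {
            environment.addActiveProfile("dev");
        }
    }
}

Such an initializer would then be registered via the contextInitializerClasses context parameter in the web.xml instead of hard-coding spring.profiles.active.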



Step 3 - Providing a Cloud Foundry-specific manifest


The last remaining step is to provide a Cloud Foundry-specific manifest.yml that defines several attributes needed for the deployment/execution of the application. The content is pretty much self-explanatory, but maybe we should highlight the services section. Right now, a service called hana_shared is referenced, which is not yet (!!!) available outside of SAP. If you want to run this application in your own Cloud Foundry environment you need to adjust that services section to match your environment (e.g. by replacing hana_shared with postgresql or the like).
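For reference, a minimal manifest.yml along those lines could look as follows; the application name, memory setting and path are assumptions for illustration and need to be adapted to your project:

---
# illustrative values - adjust name, memory, path and services to your environment
applications:
- name: granny
  memory: 512M
  path: target/granny.war
  services:
  - hana_shared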

Outro


So, that's pretty much all it takes. Now, we have "one [codebase] for all [cloud platforms]" - pretty cool, don't you think?