Blogs by Aron MacDonald

In two recent blogs I demonstrated how easy it is to call HANA views from Apache Spark and push down more complex SQL logic to HANA. Calling HANA Views from Apache Spark | SCN Optimising

In a recent blog I demonstrated how easy it is to call HANA views from Apache Spark. Calling HANA Views from Apache Spark | SCN As you start using it with larger tables or views
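The two teasers above both concern reading HANA views from Spark over JDBC. As a rough sketch of how that connection is typically wired up (the helper names, host, port, and view name below are placeholder assumptions, not taken from the blogs; `com.sap.db.jdbc.Driver` is the class name of SAP's `ngdbc.jar` JDBC driver, which must be on Spark's classpath):

```python
def build_hana_jdbc_options(host, port, user, password, dbtable):
    """Assemble the option map Spark's JDBC data source needs to read from HANA.

    `dbtable` can be a view name such as '"_SYS_BIC"."my.package/MY_VIEW"',
    or a parenthesised subquery alias, which is one common way to push more
    complex SQL logic down to HANA instead of filtering in Spark.
    """
    return {
        "url": f"jdbc:sap://{host}:{port}",
        "driver": "com.sap.db.jdbc.Driver",  # from ngdbc.jar on the classpath
        "user": user,
        "password": password,
        "dbtable": dbtable,
    }


def read_hana_view(spark, **kwargs):
    """Return a Spark DataFrame over the HANA view (needs a live HANA system)."""
    # Imported lazily so the helper above stays usable without pyspark installed.
    from pyspark.sql import SparkSession  # noqa: F401

    return spark.read.format("jdbc").options(**build_hana_jdbc_options(**kwargs)).load()
```

With a running SparkSession, `read_hana_view(spark, host="myhana", port=30015, user="SPARK_USER", password="...", dbtable='"_SYS_BIC"."demo/SALES_VIEW"')` would return a DataFrame backed by the HANA view.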

Open source Apache Spark is fast becoming the de facto standard for Big Data processing and analytics. It’s an ‘in-memory’ data processing engine, utilising the distributed computing power of tens or even thousands of logical

For those building apps with on-premise native HANA, the Web IDE is now available from SPS11 onwards. See Developing with XS Advanced: A TinyWorld Tutorial for a great intro. The minor downside, at the

SAP HANA Vora is a ‘Big Data’ in-memory reporting engine sitting on top of a Hadoop cluster. Data can be loaded into the Hadoop cluster’s memory from multiple sources, e.g. HANA, the Hadoop File System

For an overview of what SAP HANA Vora is, please check out: SAP HANA Vora: An Overview [SAP HANA Academy] Learn How to Install SAP HANA Vora on a Single Node SAP HANA Vora

Most people using HANA XS are probably familiar by now with the SHINE demo content: http://help.sap.com/hana/SAP_HANA_Interactive_Education_SHINE_en.pdf It has a nice example of downloading the results of a query to Excel. With a small tweak I’ve

For those of you that have started to explore Hadoop, you might be familiar with HUE (Hadoop User Interface) or the Linux command line for moving files to and from the Hadoop Distributed File System (HDFS).
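The command-line route mentioned above usually comes down to the `hdfs dfs -put` and `hdfs dfs -get` subcommands of the Hadoop client. A minimal sketch of wrapping those two commands from Python (the function names and paths are illustrative assumptions; running them requires the Hadoop client on the PATH):

```python
import subprocess


def hdfs_put_cmd(local_path, hdfs_path):
    """Command line to copy a local file into HDFS (`hdfs dfs -put`)."""
    return ["hdfs", "dfs", "-put", local_path, hdfs_path]


def hdfs_get_cmd(hdfs_path, local_path):
    """Command line to copy a file out of HDFS (`hdfs dfs -get`)."""
    return ["hdfs", "dfs", "-get", hdfs_path, local_path]


def run(cmd):
    """Execute one of the commands above; raises if the Hadoop client fails."""
    subprocess.run(cmd, check=True)
```

For example, `run(hdfs_put_cmd("sales.csv", "/user/demo/sales.csv"))` uploads a local CSV into the demo user's HDFS directory, much as you would from the HUE file browser.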

An interesting new feature of SPS8 is the ability to easily see the temporary memory used by a long-running statement. To activate this feature you need to switch on ‘enable_tracking’ and ‘memory_tracking’ in
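Those two parameters live in the `resource_tracking` section of `global.ini`, so switching them on can be done with an `ALTER SYSTEM ALTER CONFIGURATION` statement along these lines (a hedged sketch; verify the section and column names against the documentation for your revision):

```sql
-- Switch on statement memory tracking (global.ini -> [resource_tracking])
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('resource_tracking', 'enable_tracking') = 'on',
      ('resource_tracking', 'memory_tracking') = 'on'
  WITH RECONFIGURE;

-- Peak statement memory should then be visible in the expensive statements
-- trace, e.g. the MEMORY_SIZE column of M_EXPENSIVE_STATEMENTS.
```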

This is a companion to my earlier blog, where I demonstrated Hadoop HBase records being read by HANA and presented using SAPUI5. Tip of the iceberg: Using Hana with Hadoop Hbase In this blog I