
First look at Google’s SAP Accelerator for Order to Cash

You may have heard that Google has released some content for SAP. Sure enough, within a month I had a connection between the two beasts (my S/4HANA and Google BigQuery) up and running, and after some trial and error I have a few observations to share.

 

Terminology

For those SAP-minded readers willing to get familiar with Google jargon:

  • BigQuery – a data warehousing suite of sorts; conceptually, the closest SAP comparisons would be the upcoming Data Warehouse Cloud or BW
  • Pipelines in Cloud Data Fusion – roughly a Transformation, a DTP, and a Process Chain blended together (in BW-speak)
  • Looker – a visualization tool, comparable to SAP Analytics Cloud
  • SAP Accelerators in Google Cloud – a pre-delivered set of pipelines and visualizations that Google has come up with

As the release notes suggest, there is both back-end content (pipelines and related schemas) and front-end content (Looker visualizations). In this post I focus on the back-end.

 

Observations

  1. The content is written to serve both ECC and S/4HANA. Which, once you think about it, actually makes more sense for the ECC customers: those running S/4HANA can already do quite a lot of analytics directly in the system, or using (embedded) BW. So if it’s S/4HANA, the data does not need to leave the premises in the first place – that would be like putting a spaceship on a train.
  2. The above (an Accelerator fit for both ECC and S/4) results in content wherein “naked tables” are extracted (imagine how many of them are needed just to build up Material master data), and only then, after renaming the fields and storing that data in BigQuery, does Google build up a Material master-data table using ETL steps. While this makes sense for some (mostly those dealing with ECC), it does not for the others. Keeping Material as an example, we know that S/4HANA already ships a bunch of standard views, culminating in the dimension view I_Material. The latter is, in fact, the result of joining 10-20 other tables, with all the fields neatly renamed, etc. In other words, SAP has already done quite a bit of work for us. The Accelerator in Google Cloud performs a similar operation, but does so by running quite a number of pipelines (I have not yet found any scheduling tool that would sequence those pipelines by dependency). None of this hurts if we just plug-and-play the Accelerator-delivered content. But in reality we never really plug and play: we extend, enrich, add logic, etc. Thus my primary design question is: why am I not reusing the content SAP has already delivered in the form of views? That would spare a few days of work on unnecessary tasks like renaming fields (anybody volunteering to rename the 537 fields of ACDOCA in a fancy cloud web-based user interface 😉?). A sketch of the rename-and-join work involved follows this list.
  3. The good news is that the connector Google is using (SAP Table Reader) understands SQL views just as well as it understands tables. So I went ahead and produced a Material master-data table not out of 10 staging tables plus the product of their joins on the Google BigQuery side, but by extracting everything I needed from a single S/4HANA view, IMATERIAL (the SQL view behind the CDS view I_Material). This worked like a charm (see the second sketch after this list).
  4. Delta. Ah, yes! At least when examining the standard Google-delivered content in the SAP Accelerator, I have not found anything that would perform an “update” operation – so far everything I see is insert-only. I am therefore trying to imagine why one would reload millions of historical records (e.g. the same ACDOCA table) every few hours for no apparent reason. We would probably need to come up with something like “load me the moving year, but not all the rest”. Split the table into History and Current? Put a union on top? A smarter BigQuery developer would be handy here to come up with a pseudo-delta approach (one candidate pattern is sketched after this list).
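
To make the renaming point in item 2 concrete, below is a minimal sketch of the kind of join-and-rename ETL the delivered content performs in BigQuery after the raw tables have landed. This is my own illustration, not the Accelerator’s actual pipeline: the dataset and table names (sap_raw, sap_cdc) are hypothetical, and only two of the many source tables behind Material are shown.

```python
# Minimal sketch of Accelerator-style ETL in BigQuery: rebuild a material
# dimension by joining raw SAP tables landed 1:1 and renaming their cryptic
# fields. Dataset/table names ("sap_raw", "sap_cdc") are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

sql = """
CREATE OR REPLACE TABLE sap_cdc.material_dimension AS
SELECT
  mara.MATNR AS material,             -- rename MARA fields one by one...
  mara.MTART AS material_type,
  mara.MATKL AS material_group,
  makt.MAKTX AS material_description  -- ...and join the text table MAKT
FROM sap_raw.mara AS mara
LEFT JOIN sap_raw.makt AS makt
  ON  makt.MATNR = mara.MATNR
  AND makt.SPRAS = 'E'                -- keep English texts only
"""
client.query(sql).result()  # .result() waits for the job to finish
```

Multiply this by the 10-20 tables behind I_Material and the appeal of extracting from the SAP-delivered view instead becomes obvious.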
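The SAP Table Reader plugin itself is configured in the Cloud Data Fusion UI, so there is nothing to paste for item 3. But the underlying idea – an SQL view being addressable exactly like a table – can be illustrated with a small pyrfc sketch. The connection parameters are placeholders, the field names are my assumption based on the CDS element names (I would verify them in SE11 first), and this is not necessarily what the plugin does internally.

```python
# Sketch: the S/4HANA SQL view IMATERIAL (behind the CDS view I_Material)
# read exactly like a transparent table. Connection parameters are
# placeholders; the field list is an assumption to verify in SE11.
from pyrfc import Connection

conn = Connection(
    ashost="s4hana.example.com", sysnr="00",
    client="100", user="EXTRACT_USER", passwd="***",
)

result = conn.call(
    "RFC_READ_TABLE",
    QUERY_TABLE="IMATERIAL",  # the SQL view, addressed like a table
    DELIMITER="|",
    FIELDS=[{"FIELDNAME": "MATERIAL"}, {"FIELDNAME": "MATERIALTYPE"}],
    ROWCOUNT=10,              # just a peek at the first rows
)
for row in result["DATA"]:
    print(row["WA"])          # pipe-delimited field values
```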
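On the delta question in item 4, one candidate pseudo-delta pattern is to keep the full history in the target table, reload only a “moving year” staging table from SAP every few hours, and merge it in. The sketch below shows the BigQuery side only; the table names, the simplified ACDOCA key, and the single updated measure are illustrative assumptions, not delivered content.

```python
# Pseudo-delta sketch in BigQuery: history stays in the target table;
# only a "moving year" staging table is reloaded from SAP, then merged.
# Table names and the (simplified) ACDOCA key are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client()

merge_sql = """
MERGE sap_cdc.acdoca AS target
USING sap_raw.acdoca_moving_year AS staging   -- reloaded every few hours
ON  target.RLDNR  = staging.RLDNR             -- ledger
AND target.RBUKRS = staging.RBUKRS            -- company code
AND target.GJAHR  = staging.GJAHR             -- fiscal year
AND target.BELNR  = staging.BELNR             -- document number
AND target.DOCLN  = staging.DOCLN             -- line item
WHEN MATCHED THEN
  UPDATE SET HSL = staging.HSL                -- refresh changed measures
WHEN NOT MATCHED THEN
  INSERT ROW                                  -- schemas match 1:1
"""
client.query(merge_sql).result()
```

The alternative mentioned above – physically splitting the table into History and Current with a union view on top – avoids the MERGE cost entirely, at the price of a slightly more complex consumption layer.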

 

To sum it all up, my first impression is: the connector is good; the design Google delivers in the form of the Accelerator is questionable (it probably works best for old-school ECC systems); and the absence of delta extraction will need some good engineering. Keep on discovering things yourselves 😉 it’s not as hard as it seems.

 

Cheers,

Dmitry Kuznetsov


      5 Comments
      Dhrubojyoti Saha

      Great blog, Dmitry. The synergies are getting interesting. DWC must have some ace up its sleeve to carve a place for itself amongst the competition.

      Pierre-Yves Guillou

      Thanks for taking the time to share this information.

      We are always in such a situation: trying to extract everything, but not covering the delta mechanisms.

      I am surprised that Google decided to replicate the data... I would have expected them to consume it directly from the underlying DB to avoid data duplication.

      Mani Chinnaiah

      Timely blog, Dmitry.

      Frank Hoffmann

      I found your blog because there is little documentation on getting the joins to work, and it is difficult to process.

      The Accelerators do not bring basic tables for very common SAP models. Large S/4 tables stop working, without meaningful errors. It looks like one of Google's intern projects: a nice idea, but incomplete for real SAP customers.

      Have you used Data Fusion in production, or seen anyone who has? Without delta and scheduling, it sounds like no.

      Thank you

       

      Dmitry Kuznetsov
      Blog Post Author

      Frank, my personal opinion here: it is the very first released version, and probably nobody really uses it productively yet. That is just guessing on my part – hence the name of the blog, "first look". So I'd advise analyzing it at your own pace, too.