Keep your North Star in sight: staying true to Agile Analytics in SAP Analytics Cloud
Yesterday, my colleague David announced major upcoming improvements to SAP Analytics Cloud’s data preparation. Let me go back to the origin of those changes and how we came up with this project.
One of the most interesting aspects of building a product is its evolution. Products are in constant change, adjusting to the many factors that influence them: customer requests, new technologies, architectures, business opportunities, new ideas. The more success a product has in the market, the more input the product team receives. It is in those moments that having a clear and focused product direction is important; a North Star to guide the navigation. Without one, there is a high risk of getting diverted along the way, taking a few detours, and eventually getting lost.
Reflecting on where SAC data preparation was some time ago, that was one of those defining moments. Looking holistically at the input our users were providing when dealing with data in SAC, they were asking for:
- More flexibility when dealing with modelled data
- More data preparation capabilities to avoid forcing manipulations to be done in the source
The overall direction of SAP Analytics Cloud was oriented towards Agile Analytics with a different workflow, so that data analysts can quickly shape their raw data into information and answer last-minute requests. That use case needed strong improvements. Planners using SAC have a dedicated experience that works well: they can start by defining the structure of their model before importing the data. The struggle was more in situations where data analysts needed flexibility and rapid adjustments to cope with the full cycle of self-service. We focused our attention on bringing greater agility to the “data access, data cleansing, and analysis” workflow.
We also faced a choice: should we make incremental improvements, with the benefit of shorter timelines, or was there a greater opportunity to rethink the whole solution and take a more patient approach? We took the risk of thinking long term: to provide the best agile experience, the targeted workflow required a brand-new architecture for dealing with acquired data. This new architecture would add to the benefits SAC already provides and leverage the differentiators that make it unique: bringing predictive and planning capabilities to self-service BI workflows.
We are nearing the end of this effort and starting a new, exciting journey to be the tool of choice for data analysts in their never-ending quest to transform data into information and decisions.
At the core of this innovation are three key ingredients:
- Building a new wrangling stack based on SAC Datasets, with the immediate benefit of making data preparation much more tightly integrated with stories.
- Leveraging the flexibility and efficiency of Datasets for non-governed situations: Datasets are ready-to-be-analyzed data objects that can be edited at any point in time, whereas models represent the ideal next step in terms of governed data and can be created from a dataset.
- Introducing a powerful new wrangling language that will gradually add capabilities for directly editing the underlying transformation graphs.
The overall experience that this new stack brings to SAC for acquired data was shared with Beta customers, and their reaction has been fantastic. The most common feedback was about the efficiency and speed users were gaining when immersed in this new experience. How tightly the data preparation steps are now integrated with the story was seen as a game changer, and a few features are noteworthy.
The whole UI has changed, with the introduction of a new model overview panel where all of the modelling happens via an easy drag-and-drop experience.
The expression editor, which has been optimized to provide a comfortable programming experience, is home to a new language, codenamed “Omega”. This language provides many functions to manipulate strings, dates, and numeric values, as well as geo data (I am curious to see how users are going to leverage the new “distance” function!).
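The exact semantics of Omega’s “distance” function are not documented here, but geo distance functions of this kind typically compute the great-circle distance between two coordinates using the haversine formula. A minimal Python sketch of that calculation (the function name and signature are illustrative, not Omega’s actual API):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    # Haversine formula
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * r * asin(sqrt(a))

# Paris to Berlin: roughly 880 km as the crow flies
print(haversine_km(48.8566, 2.3522, 52.5200, 13.4050))
```

Paired with the new wrangling stack, a function like this lets an analyst derive, say, a “distance to nearest warehouse” column directly during data preparation instead of precomputing it in the source system.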
Overall, I am very excited to see this brand-new experience coming out in SAP Analytics Cloud QRC.Q3 and to see how well it addresses our users’ needs for Agile Analytics on acquired data. This was no small feat for the team, but it stayed true to its North Star and guiding principles, and hopefully our users will now reap the benefits.
Does this only apply to datasets - the model is still unchanged?
Refresh of data - can it be scheduled? Is that possible?
Yes, this applies to Datasets. Those data objects have the benefit of being much more flexible and can be brought directly into stories, as a semantic layer is inferred as soon as a dataset is created.
For quick ad-hoc analysis, when creating a DataSet + Story from a data source connection, if you have to add one more column/field because you missed it in the query, do you then have to start over from scratch, creating an entirely new dataset, redoing the transformations, and redoing your story? Is that correct?
How is this more flexible/agile compared to the model approach, where we have a similar issue, but at least the link between model & story stays, saving the work of remaking the story entirely?
Why is the dataset available only in CF? Can it be expected to appear in Neo too?
With QRC.Q3, Datasets will be available for both CF and NEO tenants.
Is it correct that it would also be possible to change the underlying model in a story, i.e. to re-align a story with another model containing identical dimensions? This could be interesting when copying content to your own custom model.