Witalij Rudnicki

SAP Tech Bytes: Your first Predictive Scenario in SAP Analytics Cloud

If you get interested in Machine Learning, then sooner rather than later you will hear about or discover the website called Kaggle. Founded in 2010, Kaggle “allows users to find and publish data sets, explore and build models in a web-based data-science environment, work with other data scientists and machine learning engineers, and enter competitions to solve data science challenges.” (from Wikipedia)

The very first challenge participants enter is the famous Titanic ML competition. The competition is simple: use machine learning to create a model that predicts which passengers survived the Titanic shipwreck. In Kaggle’s own words:

In this competition, you’ll gain access to two similar datasets that include passenger information like name, age, gender, socio-economic class, etc. One dataset is titled `train.csv` and the other is titled `test.csv`.

`train.csv` will contain the details of a subset of the passengers on board (891 to be exact) and importantly, will reveal whether they survived or not, also known as the “ground truth”.

The `test.csv` dataset contains similar information but does not disclose the “ground truth” for each passenger. It’s your job to predict these outcomes.

Using the patterns you find in the `train.csv` data, predict whether the other 418 passengers on board (found in `test.csv`) survived.

In our case, we are not going to submit the final answers to the competition. Our goal here is to discover, learn, and experiment.

This Kaggle challenge is a classic classification problem. What’s more, it is binary classification, where the output takes one of only two possible values.
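To make the binary setting concrete, here is a minimal Python sketch of a well-known rule-of-thumb baseline for this dataset (predict survival for female passengers). It only illustrates what a binary classifier is; it is not the Smart Predict model we build in this post.

```python
# A deliberately simple binary classifier: the output takes one of
# exactly two values, 1 (survived) or 0 (did not survive).
# The rule itself (predict survival for female passengers) is a
# common Titanic baseline, used here purely as an illustration.
def baseline_predict(passenger: dict) -> int:
    return 1 if passenger.get("Sex") == "female" else 0

predictions = [baseline_predict(p) for p in (
    {"PassengerId": 892, "Sex": "male"},
    {"PassengerId": 893, "Sex": "female"},
)]
print(predictions)  # -> [0, 1]
```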

I would like to use this very popular exercise to introduce you to using Machine Learning with different SAP products.

The Machine Learning process usually follows a cycle of well-defined steps, from data acquisition and preparation to building an ML model and using it to get predictions based on new data.

Check the openSAP course Getting Started with Data Science by Stuart Clarke if you are interested in the in-depth details of the ML process.

We are starting with SAP Analytics Cloud

For that, I am using the trial edition of SAP Analytics Cloud, so that you can replicate the steps too.

Please note that the current version of the trial edition is 2021.13. Some of the functionality or UI navigations presented here might not yet be available in productive instances with quarterly update cycles.

One of the Augmented Analytics features of the product is Smart Predict. This feature offers no-code Machine Learning: it automatically learns from your historical data and finds the best relationships or patterns of behavior to easily generate predictions for future events, values, and trends.

Let’s see how Smart Predict helps us address Kaggle’s Titanic challenge.

You can also watch me do all the steps in this SAP Tech Bytes video 📺

1. Identify the ML Scenario

So, our task is: “using the patterns you find in the train data, predict whether the other 418 passengers onboard (found in test data) survived.” The solution should be provided in the form of a file with two columns:

  1. The ID of a passenger,
  2. The predicted value: Yes or No, e.g. encoded as 1 and 0.
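As a sketch of that two-column file, here is how it could be produced with Python’s standard csv module; the passenger IDs and predictions below are made up for the illustration.

```python
import csv
import io

# Hypothetical predictions: PassengerId -> 1 (survived) or 0 (did not)
predictions = {892: 0, 893: 1, 894: 0}

buffer = io.StringIO()
writer = csv.writer(buffer, lineterminator="\n")
writer.writerow(["PassengerId", "Survived"])  # the two expected columns
for passenger_id, survived in sorted(predictions.items()):
    writer.writerow([passenger_id, survived])

submission = buffer.getvalue()
print(submission)
```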

2a. Data Acquisition

Go to the Datasets application and create a new dataset by importing the CSV file train.csv.

To keep all related artifacts in one place, I created a new folder Titanic.

2b. Data Discovery

Once the file is loaded, the dataset is available for us to work with.

The dataset has 891 rows, or “observations”. All records together are called the “population”.

There are 12 columns in the dataset, or “variables”. In our example we will be predicting the Survived variable; that is our “target”. The “influencers” are variables that describe your data and serve to explain the target.

The Output view of dimensions and measures, while important when building stories and visualizations, is not relevant at this stage when our focus is on training predictive models.

Let’s switch to the Columns view.

Now we can check the details of the columns to understand the data better and to check their quality. For example:

  • There are 342 records (or 38.38%) where Survived is 1 (meaning “true”).
  • There are passenger classes 1, 2, and 3 in Pclass and no missing values in records.
  • 77.1% of records are missing a cabin number.
  • A single ticket is not always for a single person but might be for a group/family.
  • The histogram for the Age variable seems to show a lot of kids onboard younger than 4 years old. In fact, there is a big number of empty values in this column in the dataset.
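Checks like these can be sketched in plain Python; the four rows below are made up to mimic the shape of train.csv, not real Titanic records.

```python
# Made-up rows mimicking the shape of train.csv (not real Titanic data);
# empty strings stand for missing values, as in the raw CSV
rows = [
    {"Survived": 0, "Pclass": 3, "Age": "22", "Cabin": ""},
    {"Survived": 1, "Pclass": 1, "Age": "38", "Cabin": "C85"},
    {"Survived": 1, "Pclass": 3, "Age": "",   "Cabin": ""},
    {"Survived": 0, "Pclass": 2, "Age": "35", "Cabin": ""},
]

survival_rate = sum(r["Survived"] for r in rows) / len(rows)
missing_cabin = sum(r["Cabin"] == "" for r in rows) / len(rows)
missing_age = sum(r["Age"] == "" for r in rows) / len(rows)

print(f"survived: {survival_rate:.0%}, "
      f"cabin missing: {missing_cabin:.0%}, age missing: {missing_age:.0%}")
```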

To get a better picture of the age distribution, let’s replace empty values with null to differentiate them from the value 0. To achieve that, create a transformation that replaces all empty values with null.
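The effect of that transformation can be sketched in Python, where None plays the role of null:

```python
# Raw Age values as read from the CSV; "" marks a missing age
ages_raw = ["22", "", "35", "0.92", ""]

# Replace empty strings with None (null) so that a missing age is not
# confused with an age of 0
ages = [float(a) if a != "" else None for a in ages_raw]

known = [a for a in ages if a is not None]
print("nulls:", ages.count(None), "mean of known ages:", sum(known) / len(known))
```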

Now we get a much better view of the distribution of the passengers’ ages, plus information about the number of records with the null value in the Age column.

Save the dataset.

2c. Data Processing

The transformation done in the previous step is an example of data processing to prepare the dataset to be used in machine learning.

We are not going to do many more data transformations in this cycle, but there is one mandatory activity to prepare variables for ML training: checking and assigning proper statistical types to them.

Here are the suggested types, in the alphabetical order of the columns:

Age: Continuous
Cabin: Nominal
Embarked: Nominal
Fare: Continuous
Name: Textual
Parents or Children (Parch): Continuous
PassengerId: Nominal
Class (Pclass): Ordinal
Sex: Nominal
Siblings or Spouses (SibSp): Continuous
Survived: Nominal
Ticket: Nominal

Please note as well that the data type of the Survived column is Boolean.
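If you want to keep those assignments handy, e.g. for a later scripted check, they can be captured as a simple mapping; the type names follow the list above.

```python
# Statistical types as assigned in the dataset above
statistical_types = {
    "Age": "Continuous",
    "Cabin": "Nominal",
    "Embarked": "Nominal",
    "Fare": "Continuous",
    "Name": "Textual",
    "Parch": "Continuous",    # Parents or Children
    "PassengerId": "Nominal",
    "Pclass": "Ordinal",      # Class: ordered categories 1 < 2 < 3
    "Sex": "Nominal",
    "SibSp": "Continuous",    # Siblings or Spouses
    "Survived": "Nominal",    # the target; its data type is Boolean
    "Ticket": "Nominal",
}

print(len(statistical_types), "columns,",
      sum(t == "Continuous" for t in statistical_types.values()), "continuous")
```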

Save the dataset.

3. Model Creation

A Predictive Scenario is a workspace where you create and compare predictive models to find the one that brings the best predictions.

Let’s go to the Predictive Scenarios application and create a new Classification.

Save it as Titanic in the folder with the same name. Next:

  1. Select the train dataset as the training source.
  2. Edit the column details to verify the statistical types.
  3. Check PassengerId as the key variable.
  4. Select Survived as the target.

Training is a process that takes these values and uses SAP machine learning algorithms to explore relationships in your data source and come up with the best combinations for the predictive model.

We are not going to change any other settings for now, so just click Train.

In a few minutes, you should see that the first model, “Model 1”, has been trained (the status is shown in the Status Panel). All that with a single click!

Now it’s time for the debriefing stage in which you assess the predictive model to decide whether or not this model is ready for use. After that, you can decide to apply the model to generate predictions, repeat the training phase with different inputs to improve the model, or create a new predictive model from scratch.

Let’s look closer at what we got:

  1. Our source dataset train has been split into two partitions: Training and Validation. The first one was used to build multiple models, and the second one was used to select the best model, i.e. the model with the best indicators.
  2. Predictive Power is the main measure of predictive model accuracy. The closer its value is to 100%, the more confident you can be when you apply the predictive model to obtain predictions.
  3. Prediction Confidence is your predictive model’s ability to achieve the same degree of accuracy when you apply it to a new dataset that has the same characteristics as the training dataset. This value should be as close as possible to 100%.
  4. During the training, Smart Predict calculates an optimized set of influencers to include in your predictive model.
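The random partitioning from step 1 can be sketched with the standard library; the 75/25 ratio and the fixed seed are assumptions for the illustration, as the actual proportion Smart Predict uses is internal.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

observations = list(range(891))  # stand-ins for the 891 training rows
random.shuffle(observations)

cut = int(len(observations) * 0.75)  # assumed 75/25 split
training, validation = observations[:cut], observations[cut:]

print(len(training), "training rows,", len(validation), "validation rows")
```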

We will look closer at how these indicators are calculated later; for now, they look good enough. Please note that they might differ slightly between runs on the same dataset, because the random partitioning into training and validation parts only tries to keep the data distributions similar.

Let’s check which six variables have been computed as influencers by Smart Predict. Understanding the influencers and their contributions gives you an explanation of the automatically generated model and, therefore, an understanding of how it makes predictions.

The influencers are sorted by decreasing importance. Gender and cabin class are two top influencers.

For each influencer, we can analyze the influence of different categories (single values, ranges of values, or groups of values) on the target. The higher the absolute value of the influence, the stronger the influence of the category is. The influence of a category can be positive or negative.

Taking the cabin class variable Pclass as an example in the screenshot above:

  • Traveling in the 3rd class has a strong negative influence,
  • Traveling in the 1st and 2nd classes has a positive influence, but it is traveling in the 1st class that has a much stronger influence.
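The sign of a category’s influence can be sketched by comparing its survival rate with the overall rate; the records below are made up for the illustration, not taken from the real dataset.

```python
from collections import defaultdict

# Made-up (Pclass, Survived) pairs, not real Titanic data
records = [(1, 1), (1, 1), (1, 0), (2, 1), (2, 0), (3, 0), (3, 0), (3, 0), (3, 1)]

overall = sum(s for _, s in records) / len(records)

by_class = defaultdict(list)
for pclass, survived in records:
    by_class[pclass].append(survived)

# A category pushes the target up when its rate exceeds the overall
# rate, and down when it falls below it
influence = {}
for pclass, outcomes in sorted(by_class.items()):
    rate = sum(outcomes) / len(outcomes)
    influence[pclass] = "positive" if rate > overall else "negative"
    print(f"Pclass {pclass}: rate {rate:.2f} vs overall {overall:.2f} "
          f"-> {influence[pclass]} influence")
```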

We will spend more time going into the details of ML models later. For now, with quite good indicators and an understanding of the influencers’ contributions, let’s move on.

4. Generating Predictions

In the previous step, the model has been trained and automatically deployed on SAP Analytics Cloud infrastructure.

We need a dataset with the population (records with observations) to which we want to apply the model to get the predicted category. The results will be saved to a generated dataset.

So, first, we need to prepare a SAC dataset with the data from a Kaggle-provided test file.

Go to the Datasets application, and import the test.csv file into the dataset test in the Titanic folder.

The test dataset has 418 records (or observations) and only 11 columns, as the column Survived is missing. That’s the column we want to predict.

To be consistent with the train dataset, let’s replace missing values in the Age variable with null.

Save the dataset and go back to the Predictive Scenarios application. Open the Titanic scenario if it is closed.

Click the Apply Predictive Model icon.

  1. Choose test dataset as the data source.
  2. Leave replicated columns empty. Only the key column, PassengerId, which we marked as a key before training, will be replicated from the input test dataset to the generated dataset.
  3. Choose only Predicted Category from Statistics and Predictions. This will add a column with the calculated prediction to the generated dataset.
  4. Output as test-predictions in the same Titanic folder. This will be our generated dataset.

Click Apply.

Expand the status panel, and you’ll see the status changing from “Trained” to “Applying Pending” to “Applying” and, finally, to “Applied”.

Now go to the Files app, where you should find the generated dataset test-predictions.

You should see two columns: PassengerId and Predicted Category. Out of this group of passengers, 149 are in the category with the value 1, i.e. predicted to survive.
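Counting the predicted categories in the generated dataset can be sketched like this; the rows are hypothetical, standing in for the exported PassengerId and Predicted Category columns.

```python
from collections import Counter

# Hypothetical rows of the generated dataset: (PassengerId, Predicted Category)
predicted = [(892, 0), (893, 1), (894, 0), (895, 1), (896, 1)]

counts = Counter(category for _, category in predicted)
print(f"predicted to survive: {counts[1]} of {len(predicted)}")
```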

These two columns are exactly the format in which Kaggle expects participants to submit their predictions.

So, is that it? Are we done?

Well, we would be, if submission to Kaggle’s challenge were our goal. But the real goal is to discover, learn, and experiment. So the answer is:

No.

While we used this exercise to create our first predictive scenario and our first predictive classification model in SAP Analytics Cloud, in the next parts we will look closer at classification predictions in Smart Predict and see whether we can get a better Machine Learning model.


Stay tuned!
-Vitaliy, aka @Sygyzmundovych

10 Comments

Peter Baumann

Hi Vitaly,

I already did something similar (but in German) 🙂 https://datasciencedummy.wordpress.com/2020/12/28/sac-smart-predict-classification/

I used Smart Discovery and Smart Insights for explorative analysis in the beginning and found it very useful.

Nice to see your workflow here. Thank you!

Regards,

Peter

Witalij Rudnicki (Blog Post Author)

      Thank you for sharing your work Peter Baumann!

      I've done a search if someone used this example before but missed your post. Maybe indeed because it was in German.

      I planned the next post with a bit more validation, explanation, plus an attempt to improve the trained model 🙂

      Best regards,
      -Vitaliy

Peter Baumann

Hi Vitaly,

looking forward to seeing more here 🙂

Best regards,

Peter

Witalij Rudnicki (Blog Post Author)

      Ok, I just posted the second part: SAP Tech Bytes: Understand your classification model in Smart Predict

      Have a good weekend!

Philipp Nell

      Very interesting blog post. Thanks for the effort. Take care

Witalij Rudnicki (Blog Post Author)

      Thanks Philipp Nell! More to come 🙂

Luiz Souza

      Witalij Rudnicki ,

Such a great step-by-step process about Predictive!

      I am preparing myself to take the certification in the next days, and it helped me a lot with some concepts!

      Thank you very much!

Witalij Rudnicki (Blog Post Author)

      Good luck with your certification exam, Luiz Souza!

Luiz Souza

      Yeah I am SAP Certified SAP Analytics Cloud now!

      Thank you!

Witalij Rudnicki (Blog Post Author)

      👏🏻 Luiz Souza 

      I need to add this challenge to get SAC-certified to my own list now!