After getting caught up on the available AI/ML tools within the SAP ecosystem and finding yourself ready to progress on the journey, you might ask - what's next?

If you, like me, have been curious about how SAP's AI/ML offerings can be evaluated in your landscape and are looking for some practical first steps, this may be a great place to start. The points below are what we observed while selecting the appropriate SAP AI/ML tool to productionize our models - it would be great if you could share your own experience and findings in the comments as well.

This blog might be useful to you if:

  1. Your organization has started its AI/ML journey, having at least (1) awareness of and (2) a vision for using AI/ML - both are early steps for a data-driven business transformation (source: Gartner, subscription required)

  2. You have identified a few tools within the SAP ecosystem that apply to your AI/ML use case; do have a look at my previous blog post on said tools for more details

  3. You have reviewed articles on how to compare tools/models but want to start with something more basic


The scenario used as a reference for pointers discussed in this blog post:

  1. Uses a structured and labeled training data set outside S/4HANA for supervised learning on a real business case (~100,000 observations) that cannot be shared here, i.e. it is not taken from open datasets such as Kaggle

  2. Has access to SAP and non-SAP AI/ML tools, primarily through trial versions

  3. Uses the AutoML feature in the tools being evaluated, keeping the same parameters (e.g. training time, class balancing) across all tools


The idea: Standardizing the data, approach, and parameters across the different tools for evaluation will provide a data-driven basis for identifying which tool is best for my use case.

The caveats:

  1. You do not have to compare the tools every time you have a use case. If your organization has some use cases in mind that you want to pilot but have not decided on a tool yet, this might be a worthwhile exercise

  2. As mentioned above, the hints here are based on a supervised learning use case with structured and labeled training data


Now, for the pointers.

1. Start small and think big


According to Gartner's Roadmap for Data Literacy and Data-Driven Business Transformation (subscription required), the next steps after (1) selling the value and (2) building the vision are (3) assessments and (4) education before (5) embedding AI/ML into the business. While the broader organization may be looking at roadmap steps (3) and (4), it is beneficial to run pilots that have a high probability of contributing measurable success to the business. An example of a pilot area would be the automated classification of data entries, which can be addressed by the Data Attribute Recommendation (DAR) service in SAP AI Business Services. You can use CRISP-DM as your framework for deploying the use case.

The pilot areas explored should align with the broader vision: to ensure the success of the organization.

2. Review architectural fit


Assess how the tools directly address your use case.

Do you require online inference using data within and outside S/4HANA? SAP AI Core or SAP Data Intelligence may be suitable, but do make use of the provided capacity unit calculator (see the next point).

Are you looking at batch reporting of AI/ML predictions for a fixed number of stakeholders? SAP Analytics Cloud may be suitable.

Do you have a use case specific to S/4HANA? Then it may be worthwhile looking at ISLM (Intelligent Scenario Lifecycle Management).

Does your team have data scientists who prefer to work on hyperscaler ML platforms? You can consider SAP FedML to avoid replicating training data from your business systems to the hyperscaler platforms.

3. Analyze tool pricing against your use case


Different tools have varying pricing metrics - SAP AI Core, for instance, is sized in capacity units (hence the calculator mentioned in the previous point), while other offerings use their own licensing models.

This blog does not aim to compare the publicly available costing of the various tools. It may be interesting to analyze the pricing across the various tools given a sample use case - please let me know in the comments if this is something you would like to see in a blog.

4. Maintain consistent data set, type, and AutoML configurations among the tools you are comparing


After you have done the preparation activities, you are now ready for the technical steps such as tool set-up and assessment. Using the same dataset across said tools is straightforward; you have complete control of that most of the time. However, ensuring that your data is read as the same types across the tools you are comparing can be a challenge.

For example, when using the H2O.ai AutoML library in Python (whether in SAP Data Intelligence or as a Docker image via SAP AI Core), the data type of each column is assigned automatically when the data is loaded. A column may end up classified as categorical (e.g., enum in H2O) when you do not intend it to be, which then results in an unusable model. This can be avoided by manually assigning the data types (the syntax depends on the library) and/or by ensuring your data is clean (no stray text entries like "TBC" in a column meant for integers).
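To make this concrete, here is a minimal sketch using the H2O Python package; the file name and column names are placeholders I am using for illustration, not the actual data set from this exercise.

```python
# Minimal sketch assuming the H2O Python package; file name and column names
# ("quantity", "material_group", "target_class") are placeholders.
import h2o

h2o.init()

# Let H2O guess the column types first and inspect the result - a numeric
# column containing stray text such as "TBC" is imported as "enum".
guessed = h2o.import_file("training_data.csv")
print(guessed.types)

# Re-import with the intended types assigned explicitly (after cleaning the
# stray text), so every tool you compare sees the same schema.
train = h2o.import_file(
    "training_data.csv",
    col_types={"quantity": "numeric",
               "material_group": "enum",
               "target_class": "enum"},
)
print(train.types)
```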

You also need to ensure that AutoML configurations such as training runtime, class balancing, and the variables used in the training phase are consistent across the tools you are testing.
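Continuing the sketch above, the shared AutoML settings can be pinned in code so that the same training budget and balancing choice are reused everywhere; the values below are illustrative assumptions, not recommendations.

```python
# Continuation of the sketch above; parameter values are illustrative only.
from h2o.automl import H2OAutoML

aml = H2OAutoML(
    max_runtime_secs=3600,   # same training-time budget in every tool
    balance_classes=True,    # same class-balancing choice in every tool
    seed=42,                 # fixed seed for reproducibility
)

# Use the same feature list and target column in every tool being compared.
aml.train(x=["quantity", "material_group"], y="target_class", training_frame=train)
print(aml.leaderboard)
```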


Check the data types!



5. Maintain a consistent holdout sample for testing


Not all tools allow for custom splitting of train, validation, and test data. For example, this comprehensive blog by evgeny.arnautov describes the automated train, test, and validation split in SAP AI Business Services Data Attribute Recommendation as 80%, 10%, and 10% respectively. Some of the tools you are comparing may have a different, unchangeable built-in split, while others may let you customize it. In the screenshot below, the F1 score (a measure of a model's accuracy) ranged from 0.66 to 0.94 across the tools tested, largely because of their differing train-test split approaches.

While comparing the metrics of your AutoML models may be useful in providing an idea of their performance, said metrics may not be representative of how the models would work in the real world.

To standardize the test data across the tools you are evaluating, ensure that you have a holdout sample that is not used during model training.
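One simple way to do this, assuming you prepare the data locally with pandas and scikit-learn (the file and column names below are placeholders), is to carve out the holdout once with a fixed seed and score every tool's model against it.

```python
# Minimal sketch assuming pandas and scikit-learn; file and column names are
# placeholders. The holdout set is carved out once, never uploaded for
# training, and reused to score every tool's model on identical data.
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("full_dataset.csv")

train_df, holdout_df = train_test_split(
    data,
    test_size=0.10,                 # same holdout fraction for every tool
    stratify=data["target_class"],  # preserve class proportions
    random_state=42,                # fixed seed so the split never changes
)

train_df.to_csv("train_for_all_tools.csv", index=False)
holdout_df.to_csv("holdout_never_trained_on.csv", index=False)

# After each tool has produced predictions for holdout_df, compare them with
# the same metric, e.g. sklearn.metrics.f1_score(holdout_df["target_class"],
# predictions_from_tool_x, average="weighted").
```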


Different tools have different training, test, and validation split approaches and some cannot be changed. Take note of this!



Wrapping up


As with almost everything in life, things are usually not that simple, so I hope the considerations above give you ideas on how to start your AI/ML journey. The pointers here also loosely apply if you are comparing ML models within the same tool. Do share in the comments section if you have other pointers for selecting the right AI/ML tool for a use case.

Special thanks to daniel.dahlmeier for his guidance during the early stages of my tool comparison exercise.

Do follow my profile, leojmfrancia, for upcoming posts on Data Science, AI, and ML.

 

Invariably stochastically yours,

Leo