SAP Data Intelligence – What’s New in DI:2010
SAP Data Intelligence, cloud edition DI:2010 is now available.
In this blog post, you will find updates on the latest enhancements in DI:2010, along with descriptions of the new functions and features of SAP Data Intelligence for the Q4 2020 release.
If you would like to review what was made available in the previous release, please have a look at this blog post.
This section gives you only a quick preview of the main developments in each topic area. All details are described in the sections below for each individual topic area.
Connectivity & Integration
This topic area focuses mainly on all kinds of connection and integration capabilities which are used across the product – for example in the Metadata Explorer or on operator level in the Pipeline Modeler.
Integration with cloud solutions from SAP and ABAP (legacy) systems
A new Application Consumer operator is now available, which supports reading from OData, CDI, and SCP Open Connectors.
A new Application Producer operator is now available, which supports:
- BW Service – writing into Advanced DataStore Objects under the /Datastore folder of the BW connection. This works only for SAP BW/4HANA.
- OData Service – using write-enabled OData services as targets
Flowagent file producer operators enhancements
Flowagent file producers provide a new option to define the maximum package size (in records) per file; once the limit is reached, a new file is created automatically.
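As a rough illustration of this rollover behavior, the plain-Python sketch below groups records into files of at most a given size. The function name and parameter are invented for the example; this is not the operator's actual implementation or configuration key.

```python
def split_into_files(records, max_records):
    """Group records into chunks of at most max_records,
    one chunk per output file, mirroring the new rollover option."""
    return [records[i:i + max_records]
            for i in range(0, len(records), max_records)]

# 10 records with a maximum package size of 4 yield three files: 4 + 4 + 2
files = split_into_files(list(range(10)), 4)
```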
Metadata & Governance
In this topic area you will find all features dealing with discovering metadata, working with it, and data preparation functionality. You may occasionally find similar information about newly supported systems in more than one area; this is intentional, so that readers who look into only one area do not miss information, and so that each area can add details specific to it.
Advanced Rule Script Editing
A new advanced mode is supported for creating an advanced script, with buttons for parameters, operators, and functions, for a new rule or an imported rule.
Rule Filtering Result Enhancements
Now, when specifying filtering within a rule, the results screen provides a specific count and percentage at the rule level for the number of records that have been filtered.
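The rule-level figures amount to simple arithmetic over the filtered record set. The sketch below uses invented names and numbers purely for illustration:

```python
def filter_stats(total_records, filtered_records):
    """Return the count of filtered records and their share of the total,
    as now shown at the rule level on the results screen."""
    percentage = (100.0 * filtered_records / total_records) if total_records else 0.0
    return filtered_records, round(percentage, 2)

# Example: 250 of 1000 records were filtered out by the rule
count, pct = filter_stats(1000, 250)
```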
Rename enrich result column
Data preparation now allows renaming the output columns by double-clicking the column name that you want to change and entering a new name.
Source and target file mapping
Data preparation now allows mapping each source file to its target (result) file when a preparation runs on multiple source files and generates multiple result files.
Redesigned fact sheet UI to have 360-degree views of data without switching context
You can now view all the information about a dataset, better organized into relevant sections, as a fact sheet that includes an overview of its metadata, a preview of the data, lineage, and tagging. You can also view the dataset's related terms, rules, and rulebooks, and add comments, descriptions, and tags to the dataset to extend its relationships and information.
Pipeline Modeling
This topic area covers new operators and enhancements of existing operators, as well as improvements and new functionality in the Pipeline Modeler and in the development of pipelines.
Admin features for schedules
Now an administrator can search and manage the schedules of all users of the tenant to control workload generation.
Archiving of pipelines
Pipelines can now be archived (formerly "cleaned up") to free tenant resources while still retaining audit information about past executions.
Improved UI style
Style updates for improved user experience and alignment with UI5 standards.
Machine Learning
This topic area includes all improvements, updates, and the way forward for machine learning in SAP Data Intelligence.
Improvements for HANA ML operators, which now support:
- outputting the result of the HANA ML Inference and HANA ML Forecast operators in JSON format in row-based order
- writing the HANA ML inference or forecast result directly into a HANA table
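"Row-based order" can be pictured as one JSON object per result row, pairing column names with values and preserving the row order. The following is an illustrative sketch only; the column names and values are invented, and the function does not represent the operators' internal code.

```python
import json

def rows_to_row_based_json(columns, rows):
    """Serialize result rows as a JSON array of objects,
    one object per row, preserving the input row order."""
    return json.dumps([dict(zip(columns, row)) for row in rows])

# Invented prediction results for two input rows
payload = rows_to_row_based_json(["ID", "SCORE"], [[1, 0.87], [2, 0.12]])
```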
Store ML scenarios in central Solution Repository
Now, a more convenient and consistent way is provided to back up ML scenarios created in the ML Scenario Manager.
Improvements for supporting large artifacts
In a pipeline, it is now possible to register and retrieve artifacts in batches, which makes it easier to process larger volumes.
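The idea behind batching is to replace one call per artifact with a few calls over fixed-size groups. The sketch below is a generic illustration; `register_batch` is an invented stand-in for whatever registration call a pipeline would use, not an actual SDK function.

```python
def batched(items, batch_size):
    """Yield fixed-size batches so that many artifacts can be
    registered or retrieved in a few calls instead of one call each."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Hypothetical registration call: simply records each batch it receives.
registered = []
def register_batch(batch):
    registered.append(batch)

for batch in batched([f"artifact-{n}" for n in range(5)], 2):
    register_batch(batch)
# registered now holds 3 batches of 2 + 2 + 1 artifacts
```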
Way forward for ML scenarios in SAP Data Intelligence
- The focus of SAP Data Intelligence is on ML orchestration as well as on selected ML operationalization use cases. For more information on changes in the SAP Data Intelligence data science tooling, please refer to SAP Note 2958072.
- Boundary conditions for ML reference scenarios in SAP Data Intelligence:
  - Data integration and data management is a crucial matter (with a particular focus on SAP applications)
  - The focus is on orchestration of data-driven ML processes and operationalization of selected ML scenarios
System Management
This topic area includes all services that are provided by the system – such as administration, user management, and system management.
Import/export to Solution Repository
Users can now import and export solutions to/from the Solution Repository for reliable content sharing. Solutions can be installed to the tenant directly from the Solution Repository.
Solution import with conflict resolution
When importing solutions into the User Workspace, conflict resolution detects existing files and asks how each conflict should be resolved. This works with imports from the Solution Repository as well as from the file system.
Tenant resource quotas
Administrators can create Memory, CPU, and Kubernetes Pod quotas for their SAP DI tenant.
These are the new functions, features and enhancements in SAP Data Intelligence, cloud edition DI:2010 release.
We hope you like them and, by reading the above descriptions, have already identified some areas you would like to try out.
Thank you & Best Regards,
Eduardo and the SAP Data Intelligence PM team
If you are interested, please refer to SAP Data Intelligence 3.1, on-premise edition blog post.
Please also refer to SAP Data Intelligence Community topic page.