Information Steward SP12: Updates for Data Services
In this blog post, we will discuss the new updates that the latest Service Pack 12 brings to Information Steward 4.2 and what is most useful for Data Services: the pros, the cons, and issues with their solutions.
Before getting there, please read my previous blog post, which introduced Information Steward, here:
Version: SAP Information Steward 4.2 with the SP12 upgrade package (the latest to date).
We will focus on a few major items related to BODS and to reporting.
Let us start with the updates:
- Excel issues: I can surely say these are the biggest issues faced by any developer 😉
We have had several problems with Excel sheets, such as the column count being capped at 250 (as of the SP6 update) and Excel formatting issues.
While generating a report, keeping the column count within 250 is a big challenge.
This update gives us some relief from those aches. 🙂
- From this update, you can choose your preferred format: XLS or XLSX.
- The column count limit has been increased enormously (details below).
But here is what you need to keep in mind before choosing a default format:
- The Excel XLS file format permits 65,536 rows and 256 columns.
- The Excel XLSX file format permits 1,048,576 rows and 16,384 columns.
Being a techie, my suggestion is to use the XLSX format, as it has all the potential advantages for reporting and for metadata as well (discussed below).
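To make those limits concrete, here is a minimal standalone Python sketch (my own illustration, not Information Steward functionality) that picks the smallest Excel format able to hold a report of a given size:

```python
# Documented Excel worksheet limits, as quoted above.
XLS_MAX_ROWS, XLS_MAX_COLS = 65_536, 256
XLSX_MAX_ROWS, XLSX_MAX_COLS = 1_048_576, 16_384

def pick_excel_format(rows: int, cols: int) -> str:
    """Return the smallest Excel format that can hold a report of this size."""
    if rows <= XLS_MAX_ROWS and cols <= XLS_MAX_COLS:
        return "xls"
    if rows <= XLSX_MAX_ROWS and cols <= XLSX_MAX_COLS:
        return "xlsx"
    raise ValueError("Report exceeds Excel limits; split it into several files.")

print(pick_excel_format(70_000, 300))  # prints "xlsx"
```

Anything wider than 256 columns or longer than 65,536 rows forces XLSX, which is exactly the relief this service pack delivers.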
As discussed above, Information Steward now supports the Excel XLSX file format for exporting metadata relationship views to Microsoft Excel spreadsheets.
Additionally, you can select the XLSX file format when importing user-defined relationships into Information Steward, whereas previously Metadata Management supported only XLS for these tasks.
From this version on, you can use the new XLSX format for importing or exporting metadata from Excel.
The procedure is here:
You need to create a file connection in the CMC by providing the file path.
Then log in to Information Steward, go to Data Insight, and follow the steps below:
- Go to the File format, click New, and select the file format.
- Give the file format a name and description, and upload a sample file whose schema exactly matches your source (flat) file, or upload the flat file directly.
- Select the delimiter, i.e. whether your file is separated by a tab, a space, a comma, etc.
- Select "First row contains field names", then propose the schema.
- Save it and close it.
- Go to the workspace home and add files by selecting the file connection and file format.
- Then add the files to the project.
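As an aside, the delimiter detection and "propose schema" steps above have a close analogue in Python's standard csv module. The snippet below is only an illustration of what the tool does internally, with made-up sample data:

```python
import csv
import io

# Made-up sample data standing in for the uploaded flat file.
sample = "ID,NAME,AGE\n1,Asha,34\n2,Ravi,29\n"

sniffer = csv.Sniffer()
dialect = sniffer.sniff(sample)          # detect the delimiter
has_header = sniffer.has_header(sample)  # "first row contains field names"

reader = csv.reader(io.StringIO(sample), dialect)
fields = next(reader) if has_header else []

print(dialect.delimiter)  # ","
print(fields)             # ['ID', 'NAME', 'AGE']
```

Just as in Information Steward, a sample whose schema matches the real source file is what makes the proposed schema trustworthy.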
In more detail:
Go to the Data Insight tab, select your project, and create a workspace.
Then add files.
Go to the File format, click New, and select the file format.
Fill in the file format and save: give it a name and description, and upload a sample file whose schema exactly matches your source (flat) file, or upload the flat file directly.
You can now see the saved file format, and clicking on it shows its details.
Finally, once the (Excel) file format has been created:
Go to the workspace home and add files by selecting the file connection and file format.
SAP ECC Support:
This is an interesting topic, as it involves ECC support.
SAP has added two new integrator objects:
- SAP Extractor
- SAP Function Module
Along with these objects come two new parameters:
- collectDomains false
- collectDataElements false
These extractors are part of the data retrieval mechanism in the SAP source system. An extractor can fill the extraction structure of a DataSource with data from SAP source system datasets.
There are application-specific extractors, each of which is hard-coded for the DataSource delivered with BI Content and fills the extraction structure of that DataSource.
In addition, there are generic extractors with which you can extract further data from the SAP source system and transfer it into BI. Only when you call a generic extractor by naming the DataSource does it know which data to extract, from which tables to read it, and in which structure. This is how it fills different extraction structures and DataSources.
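That distinction can be mimicked in a few lines of Python. This is a conceptual sketch only; real extractors are ABAP developments inside the SAP source system, and the catalogue below is invented for illustration:

```python
# Hypothetical catalogue: DataSource name -> source table and extraction
# structure. In SAP this metadata lives in the source system, not in code.
DATASOURCES = {
    "0CUSTOMER_ATTR": {"table": "KNA1", "fields": ["KUNNR", "NAME1"]},
    "0MATERIAL_ATTR": {"table": "MARA", "fields": ["MATNR", "MTART"]},
}

def generic_extract(datasource: str) -> dict:
    """A generic extractor: only when called with a DataSource name does it
    know which table to read and which extraction structure to fill."""
    meta = DATASOURCES[datasource]
    return {"source_table": meta["table"], "structure": meta["fields"]}

def customer_extract() -> dict:
    """An application-specific extractor: hard-coded for one DataSource."""
    return {"source_table": "KNA1", "structure": ["KUNNR", "NAME1"]}

print(generic_extract("0MATERIAL_ATTR")["source_table"])  # prints "MARA"
```

The generic extractor is parameterized by the DataSource name, while the application-specific one carries its table and structure baked in, which is the trade-off described above.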
In upcoming blog posts, we can discuss more about processing.
That's all for this blog post.