In this last step of the series, we’ll show you how to test SDI by loading data from a file into HANA on HCP. Make sure you have finished setting up the SDI agent in the previous steps. Once you’ve tested this simple scenario, you can go back to the parent blog for links to resources to learn more about SDI: Smart Data Integration available for HCP
For this demo we will use a very simple file (data.csv) which looks like this:
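The exact file contents are not reproduced here, so as a stand-in, assume a small CSV with a header row and a few records (the column names and values below are purely illustrative):

```csv
ID,NAME,CITY
1,Alice,Walldorf
2,Bob,Heidelberg
3,Carol,Palo Alto
```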
In order for the SDI data provisioning agent to read this file, you need a configuration file that describes its structure. Check the documentation on the SDI file adapter ( File – Administration Guide for SAP HANA Smart Data Integration and SAP HANA Smart Data Quality – SAP Library ) for details on how to create such a file (it can even be generated automatically), but a simple configuration file (data.cfg) for the data file above would be:
File format definition to read the sample data
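The cfg listing itself is not reproduced here. Based on the file adapter’s documented key=value syntax, a minimal data.cfg for a comma-separated file with, say, an ID and a NAME column could look roughly like this (the property names and column definitions are a sketch, not taken from the original post; verify them against the file adapter documentation linked above):

```
FORMAT=CSV
FORCE_FILENAME_PATTERN=data.csv
CODEPAGE=UTF-8
ROW_DELIMITER=\r\n
COLUMN_DELIMITER=,
SKIP_HEADER_LINES=1
COLUMN=ID;INTEGER
COLUMN=NAME;VARCHAR(50)
```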
Note: the value (file name) you use for FORCE_FILENAME_PATTERN is case sensitive, so make sure your file name matches exactly. A wildcard (%) can be used to select multiple files that match the pattern.
We assume you have the configuration file (data.cfg) in a directory D:\Data\Config and the data file (data.csv) in D:\Data\Data.
In order to read these files, you need to configure the file adapter in the data provisioning agent. This is done through the same configuration tool we used to configure and connect the agent. First, we need to set some properties for the file adapter:
- Go to the menu “File” – “Preferences”
- Navigate to the FileAdapter
- Set the values for the 3 properties:
- Root directory : directory where the data files are stored. We will use directory D:\Data\Data where the data.csv file is stored.
- File format root directory : directory where the configuration files are stored. We use a directory D:\Data\Config for this, with the data.cfg file above.
- Access token : a “password” that you define here in the preferences and will be needed later in the HANA Web IDE when you create the remote source to read the files via this agent.
- Click OK to store the values.
Back on the main screen in the configuration tool, you can select the FileAdapter in the list of adapters and click “Register Adapter”. Once this is completed, the file adapter on this agent will be visible in the HANA Web IDE to create your remote source.
So log in to the HANA Web IDE now (use the SDI_USER to make sure you have the required authorizations as set up in step 4) and open the Catalog editor (https://<your HANA server>.hana.ondemand.com/sap/hana/ide/catalog). Go to the “Provisioning” folder and create a “New Remote Source”.
Complete the required fields to define the remote source:
- Source Name: any name you like as name for the remote source, e.g. “Files”
- Adapter name: select the “FileAdapter” you registered in the previous step. Note: in the list of available adapters you will also see system adapters for database sources used by SDA (Smart Data Access), but these cannot be used on HCP. At the bottom of the list you will see the adapters registered from the agents (these names are in Title Case, as opposed to the system adapters, which are all in UPPER CASE).
- Location and Agent Name: no change needed. When FileAdapter is selected, the location switches to “agent” (as opposed to “indexserver”). In case you have multiple agents with the File Adapter registered, you can select the correct agent in the Agent Name dropdown box.
- Connection info properties: no change needed. The root directory and the directory for the file format definitions (cfg files) will be picked up from the agent.
- Credentials – Access Token: here you provide the same access token you entered in the preferences on the agent. This is a security measure to ensure you have permission to access the files available through the selected agent.
Once you save the remote source, the connection is tested and you should see a “Remote Source saved successfully” message.
Now you can browse the remote source by expanding the Files remote source in the browser; you should see several tables. There are some default tables with technical metadata, like CODEPAGES, FILECONTENT, etc. But you will also see the “Data” table, which corresponds to the data.cfg file format we created earlier. Each cfg file format results in an additional entry in this list.
In order to access the data in the file, we can now create a virtual table by right-clicking the “Data” entry and selecting “New Virtual Table”. Provide a name for the virtual table and the schema to create it in.
Finally, you can go to the Catalog, browse to your schema, and you should see the newly created virtual table. You can run SQL against this table now, or just open the content in the Web IDE. Once you see the data, you have successfully proven that the agent is able to connect to on-premise sources and send the data to HANA on HCP.
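If you prefer SQL over the Web IDE dialogs, these last two steps can be sketched as follows. The schema, virtual table, and remote source names are placeholders, and the “<NULL>” database/owner parts reflect how the file adapter typically exposes its objects, so verify the exact path against what you see in your own remote source browser:

```sql
-- Create a virtual table pointing at the "Data" file format
-- exposed by the "Files" remote source (names are examples).
CREATE VIRTUAL TABLE "MYSCHEMA"."V_DATA"
  AT "Files"."<NULL>"."<NULL>"."Data";

-- Query the file contents through the agent.
SELECT * FROM "MYSCHEMA"."V_DATA";
```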
This concludes this blog series. Next, you can continue your learning by really replicating the data via a replication task, so that you can schedule regular data loads; you can explore flowgraphs to add additional transformation steps before storing the data; and of course you can look into more exciting adapters like the (real-time) database adapters, Twitter, OData, etc. You will find links to more resources to continue your learning on the main page of this blog series: Smart Data Integration available for HCP.
Have fun !