Technical Articles
How to use Data Lake Files
What is Data Lake Files?
Data lake Files is a component of SAP HANA Cloud that provides secure, efficient storage for large amounts of structured, semi-structured, and unstructured data. Data lake Files is automatically enabled when you provision a data lake instance.
Provisioning creates the data lake Files container as a storage location for files. This file store lets you use data lake as a repository for big data. For more information on provisioning data lake Files with your data lake instance, see Creating SAP HANA Cloud Instances.
Configuring the File Container
I will walk through the steps using the REST API.
- Create an SAP HANA database on BTP with a data lake.
- Note: select SAP Native as the storage service type.
- Go to SAP HANA Cloud on BTP, open the data lake instance's Actions menu, and choose Open SAP HANA Cloud Central.
- Next, configure the file container by following the guide Setting Up Initial Access to HANA Cloud data lake Files.
With that, the data lake Files configuration is complete.
Using the File Container
We can now fetch and upload files through the REST API.
- Copy the instance ID and run the following curl commands locally, from the folder that contains your client certificate and key.
Get list status:
curl --insecure -H "x-sap-filecontainer: {{instance-id}}" --cert ./client.crt --key ./client.key "https://{{instance-id}}.files.hdl.canary-eu10.hanacloud.ondemand.com/webhdfs/v1/user/home/?op=LISTSTATUS" -X GET
You will see a JSON response (a FileStatuses document) listing the contents of /user/home/.
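The same LISTSTATUS request can be made from Python using only the standard library. This is a minimal sketch based on the curl example above; the instance ID, the certificate and key paths, and the canary-eu10 endpoint are placeholders you must replace with your own values.

```python
import json
import ssl
import urllib.request

INSTANCE_ID = "<your-instance-id>"  # placeholder: copy it from SAP HANA Cloud Central
BASE = f"https://{INSTANCE_ID}.files.hdl.canary-eu10.hanacloud.ondemand.com"

def webhdfs_url(path, op):
    """Build a WebHDFS REST URL for the given file path and operation."""
    return f"{BASE}/webhdfs/v1{path}?op={op}"

def liststatus(path="/user/home/"):
    """Return the FileStatuses JSON for `path`, authenticating with the client certificate."""
    ctx = ssl.create_default_context()
    ctx.load_cert_chain("./client.crt", "./client.key")  # same files as in the curl example
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # mirrors curl --insecure; verify the server properly in production
    req = urllib.request.Request(
        webhdfs_url(path, "LISTSTATUS"),
        headers={"x-sap-filecontainer": INSTANCE_ID},
    )
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)

# Example usage (requires a real instance ID, certificate, and key):
# print(liststatus())
```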
- To upload a file, run the following command:
curl --location-trusted --insecure -H "Content-Type:application/octet-stream" -H "x-sap-filecontainer: {{instance-id}}" --cert ./client.crt --key ./client.key --data-binary "@Studies.csv" "https://{{instance-id}}.files.hdl.canary-eu10.hanacloud.ondemand.com/webhdfs/v1/user/home/Studies.csv?op=CREATE&data=true&overwrite=true" -X PUT
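The upload can likewise be sketched in Python. As above, the instance ID, certificate paths, and endpoint are placeholders; the CREATE operation with data=true and overwrite=true matches the curl command.

```python
import ssl
import urllib.request

INSTANCE_ID = "<your-your-instance-id>".replace("your-your", "your")  # placeholder
BASE = f"https://{INSTANCE_ID}.files.hdl.canary-eu10.hanacloud.ondemand.com"

def create_url(path):
    """Build the WebHDFS CREATE URL, sending the data in one request and overwriting any existing file."""
    return f"{BASE}/webhdfs/v1{path}?op=CREATE&data=true&overwrite=true"

def upload(local_file, remote_path):
    """PUT `local_file` to `remote_path` in the file container."""
    ctx = ssl.create_default_context()
    ctx.load_cert_chain("./client.crt", "./client.key")
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # mirrors curl --insecure
    with open(local_file, "rb") as f:
        req = urllib.request.Request(
            create_url(remote_path),
            data=f.read(),
            method="PUT",
            headers={
                "x-sap-filecontainer": INSTANCE_ID,
                "Content-Type": "application/octet-stream",
            },
        )
    with urllib.request.urlopen(req, context=ctx) as resp:
        return resp.status

# Example usage (requires a real instance ID, certificate, and key):
# upload("Studies.csv", "/user/home/Studies.csv")
```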
- Now get the list status again; you can see the file you just uploaded.
Loading the File into a Database Table
Go to the SAP HANA database explorer and open your database.
Note that in this step, the table columns in the database must match the columns in the CSV file.
Here I load the data into an IQ (data lake Relational Engine) table; refer to the following statement:
CALL SYSHDL_BUSINESS_CONTAINER.REMOTE_EXECUTE('
LOAD TABLE MANAGEMENT_STUDIES
(status_code,study_num,description,study_ID,protocol_ID,lastSubjectLastVisit,isLeanStudy,studyPhase,ID)
FROM ''hdlfs:///user/home/archiving/Studies.csv''
FORMAT CSV
SKIP 1
DELIMITED BY '',''
ESCAPES OFF' );
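The REMOTE_EXECUTE call above can also be issued from Python. This is a minimal sketch: the helper builds the statement (doubling the single quotes inside the inner LOAD TABLE, as in the example), and the commented lines show how it could be executed with SAP's hdbcli driver, assuming you have the host and credentials for your database. The schema name SYSHDL_BUSINESS_CONTAINER and the file path are taken from the statement above.

```python
def build_load_sql(table, columns, hdlfs_path):
    """Wrap a LOAD TABLE statement in a REMOTE_EXECUTE call.

    Single quotes inside the inner statement are doubled, matching the
    escaping used in the SQL example above.
    """
    inner = (
        f"LOAD TABLE {table} ({', '.join(columns)}) "
        f"FROM ''hdlfs://{hdlfs_path}'' "
        f"FORMAT CSV SKIP 1 DELIMITED BY '','' ESCAPES OFF"
    )
    return f"CALL SYSHDL_BUSINESS_CONTAINER.REMOTE_EXECUTE('{inner}')"

sql = build_load_sql(
    "MANAGEMENT_STUDIES",
    ["status_code", "study_num", "description", "study_ID", "protocol_ID",
     "lastSubjectLastVisit", "isLeanStudy", "studyPhase", "ID"],
    "/user/home/archiving/Studies.csv",
)

# Example execution via hdbcli (pip install hdbcli), with placeholder connection details:
# from hdbcli import dbapi
# conn = dbapi.connect(address="<host>", port=443, user="<user>", password="<password>")
# cur = conn.cursor()
# cur.execute(sql)
# cur.close()
# conn.close()
```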
You can use data lake Files to store unstructured data or to keep archived files, which makes it a good choice alongside object stores such as those from AWS.
Hi Lucia Wu,
Thanks for your explanation of data lake Files. I recreated the steps mentioned above and am facing an unauthorized error in the final step (i.e., loading data into the IQ table). Can you please help me with it?
Did you have a table created in the HDL Relational Engine database called "MANAGEMENT_STUDIES"?
This blog didn't show the step of creating the database table itself, rather it just gave an example of the LOAD TABLE syntax that could be used to load the data from HDLFS assuming that the destination table already existed in the database.
Hi
Thanks for the overview. How can we analyze unstructured data in SAP HANA Cloud? For example, video files, AVI files, and PDF files.
Regards
Vinieth