Shraddha Shetty

Export/Import an SAP HDI Container for Copy Purposes to a Cloud Store

Introduction

This blog post describes how you can export a source HDI container as an HDI container-group administrator by calling the built-in procedure _SYS_DI#<CONTAINER_GROUP_NAME>.EXPORT_CONTAINER_FOR_COPY, and then import it into the same or a different SAP HANA database instance. In this example, we use the Amazon Simple Storage Service (Amazon S3). The procedure's inputs are the source container and a table containing any parameters, including the Amazon S3 storage path. After the export procedure completes, the cloud store contains the objects of the source container's schema (C1#DI) and the deployed objects of the source container's run-time schema (C1), including all dependent data. In this blog, we will go through the steps below in detail.

  • Create S3 Object Store in SAP BTP
  • Download SSL Certificate as PEM format
  • Register the SSL certificates for the Cloud service both in the source and target databases
  • Prepare the container export
  • Export the container
  • Import the container objects into the same or a different database

Create S3 Object Store in SAP BTP

In your SAP BTP Subaccount, create an object store instance.

Note: If the object store entitlement is not assigned, follow the steps in Add a New Entitlement to Your Subaccount.

Skip step 2, review the details, and create the object store instance. Once the status shows "Created", click ">" to open the hdi-backup details view.

Create a Service Key.

You now have a JSON document that contains the credentials for the S3 object store.
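For orientation, the service key for an AWS object store instance typically looks similar to the sketch below. The exact field names depend on the service plan and provider, so treat this as illustrative only; what matters is that it contains the access key, secret key, bucket name, and region.

{
  "access_key_id": "<access_key>",
  "secret_access_key": "<secret_key>",
  "bucket": "<bucket_name>",
  "region": "<region>",
  "host": "<s3_host>",
  "username": "<technical_user>"
}

The access_key_id, secret_access_key, bucket, and region values are what you will later place into the target_path parameter of the export procedure.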

Download SSL Certificate as PEM format

Run the command below to download the certificate, replacing <endpoint> with <bucket>.s3.<region>.amazonaws.com.

openssl s_client -host <endpoint> -port 443 -prexit -showcerts

The table below lists, for each cloud service, the endpoint that the HANA database (HDB) accesses and the endpoint from which to download the SSL certificate.

Google Cloud Storage
  Endpoint that HDB accesses: storage.googleapis.com
  Endpoint to download the SSL certificate: upload.video.google.com or <bucket>.storage.googleapis.com. It is also recommended to register GTS Root R1 and GTS Root R2 (https://pki.goog/repository/).

Google BigQuery
  Endpoint that HDB accesses: www.googleapis.com
  Endpoint to download the SSL certificate: www.googleapis.com or upload.video.google.com. It is also recommended to register GTS Root R1 and GTS Root R2 (https://pki.goog/repository/).

CCloud Swift (object storage)
  Endpoint that HDB accesses: objectstore-3.<region>.cloud.sap
  Endpoint to download the SSL certificate: objectstore-3.<region>.cloud.sap

Azure Storage
  Endpoint that HDB accesses: <storage_account>.blob.core.windows.net
  Endpoint to download the SSL certificate: <storage_account>.blob.core.windows.net (certificates differ by storage account)

AWS Athena
  Endpoints that HDB accesses: both athena.<region>.amazonaws.com and <bucket>.s3.amazonaws.com, where the region is given as remote source information and the bucket is the temporary data location given as remote source information
  Endpoints to download the SSL certificates: both athena.<region>.amazonaws.com (certificates differ by region) and s3.amazonaws.com. Additionally, registering the certificates listed as "Root CAs" at https://www.amazontrust.com/repository/ makes the setup more stable.

AWS S3
  Endpoint that HDB accesses: <bucket>.s3.<region>.amazonaws.com, where the bucket and region are those given in the path
  Endpoint to download the SSL certificate: s3.<region>.amazonaws.com (certificates differ by region). Additionally, registering the certificates listed as "Root CAs" at https://www.amazontrust.com/repository/ makes the setup more stable.

Alibaba Cloud OSS
  Endpoint that HDB accesses: <bucket>.<region>.aliyuncs.com
  Endpoint to download the SSL certificate: <region>.aliyuncs.com (certificates may differ by region)

 

The command above returns more than one certificate; together they form the certificate chain. The hyperscalers' support centers recommend using the certificate of the endpoint server, which is the first one in the downloaded chain.
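To keep only that first (server) certificate and save it in PEM format, you can pipe the output through openssl x509, which reads the first certificate it finds on stdin. This is a generic OpenSSL pattern rather than anything SAP-specific; the file name s3_cert.pem is just an example:

openssl s_client -connect <endpoint>:443 -showcerts </dev/null 2>/dev/null | openssl x509 -outform PEM > s3_cert.pem

The content of s3_cert.pem is what you paste between the BEGIN CERTIFICATE and END CERTIFICATE markers in the next step.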

Register the SSL certificates for the Cloud service both in the source and target databases

create pse openssl3;

-- Paste the certificate downloaded in the previous step of this blog
create certificate from '
-----BEGIN CERTIFICATE-----
<obtain the correct S3 certificate from AWS support>
-----END CERTIFICATE-----
' comment 'S3';

select CERTIFICATE_ID from CERTIFICATES where COMMENT = 'S3';
-- Copy the returned ID into <SELECTED_CERTIFICATE_ID> of the next statement.

alter pse openssl3 add certificate <SELECTED_CERTIFICATE_ID>;
-- e.g. alter pse openssl3 add certificate 123456;

set pse openssl3 purpose REMOTE SOURCE;
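To verify that the PSE exists and that the certificate is attached to it, you can query the standard monitoring views; note that PSE names created without quotes are stored in upper case:

-- Check the PSE and its purpose
select * from PSES where NAME = 'OPENSSL3';

-- Check which certificates are assigned to the PSE
select * from PSE_CERTIFICATES where PSE_NAME = 'OPENSSL3';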

Note: SSL certificates come with a limited validity period. To check the certificate details, you can execute the SQL below.

select * from certificates where comment = 'S3';
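If you are only interested in the validity window, a narrower projection on the CERTIFICATES view does the job; VALID_FROM and VALID_UNTIL are standard columns of that view:

select CERTIFICATE_ID, SUBJECT_DISTINGUISHED_NAME, VALID_FROM, VALID_UNTIL from CERTIFICATES where COMMENT = 'S3';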

To renew the certificate, execute the SQL script below.

-- Alter the existing PSE and drop the expired certificate
ALTER PSE <pse_name> DROP CERTIFICATE <certificate_id>;

-- Create the new certificate, following the previously mentioned steps
CREATE CERTIFICATE FROM <certificate> [ COMMENT <comment> ];

-- Then repeat the block above
select CERTIFICATE_ID from CERTIFICATES where COMMENT = 'S3';
-- Copy the returned ID into <SELECTED_CERTIFICATE_ID> of the next statement.

alter pse openssl3 add certificate <SELECTED_CERTIFICATE_ID>;
-- e.g. alter pse openssl3 add certificate 123456;

set pse openssl3 purpose REMOTE SOURCE;

Prepare the container export

Create a service user for the export.

-- Create a user group, the operator user, and workload limits
CREATE USERGROUP "hc-exim";

-- Drop the user first if it already exists
DROP USER "HDI-EXIM-OPERATOR";
CREATE USER "HDI-EXIM-OPERATOR" PASSWORD <Password>
    NO FORCE_FIRST_PASSWORD_CHANGE
    -- optionally add: VALID UNTIL FOREVER
    SET USERGROUP "hc-exim";

CREATE WORKLOAD CLASS "hc-exim-workload" SET 'STATEMENT MEMORY LIMIT' = '100', 'STATEMENT THREAD LIMIT' = '16' ENABLE;
CREATE WORKLOAD MAPPING "hc-exim-mapping" WORKLOAD CLASS "hc-exim-workload" SET 'USER NAME' = 'HDI-EXIM-OPERATOR';

-- Grant the EXPORT privilege to the operator user
GRANT EXPORT TO "HDI-EXIM-OPERATOR";

Connect to the source system as an HDI container-group administrator user.

Note: The container-group administrator must have the necessary privileges on the container group; you can grant them as shown in the sketch below (link).
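A minimal sketch of such a grant, assuming the default privilege set shipped in _SYS_DI.T_DEFAULT_CONTAINER_GROUP_ADMIN_PRIVILEGES and a hypothetical administrator user EXIM_ADMIN; it has to be executed by a user with _SYS_DI administration privileges:

-- Collect the default container-group administrator privileges for the target user
CREATE LOCAL TEMPORARY COLUMN TABLE #PRIVILEGES LIKE _SYS_DI.TT_API_PRIVILEGES;
INSERT INTO #PRIVILEGES (PRINCIPAL_NAME, PRIVILEGE_NAME, PRINCIPAL_SCHEMA_NAME)
    SELECT 'EXIM_ADMIN', PRIVILEGE_NAME, PRINCIPAL_SCHEMA_NAME
    FROM _SYS_DI.T_DEFAULT_CONTAINER_GROUP_ADMIN_PRIVILEGES;

-- Grant the privileges for the container group
CALL _SYS_DI.GRANT_CONTAINER_GROUP_API_PRIVILEGES('<CONTAINER_GROUP_NAME>', #PRIVILEGES, _SYS_DI.T_NO_PARAMETERS, ?, ?, ?);

DROP TABLE #PRIVILEGES;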

Export the container

CREATE LOCAL TEMPORARY COLUMN TABLE #PARAMETERS LIKE _SYS_DI.TT_PARAMETERS; 

INSERT INTO #PARAMETERS (KEY, VALUE) VALUES ('target_path', 's3-<region>://<access_key>:<secret_key>@<bucket_name>/<object_id>');

CALL _SYS_DI#<CONTAINER_GROUP_NAME>.EXPORT_CONTAINER_FOR_COPY('<container name>', '', '', #PARAMETERS, ?, ?, ?); 

DROP TABLE #PARAMETERS;

Add the S3 target path, including the folder name, in the value field. Replace <CONTAINER_GROUP_NAME> with the respective container group and <container name> with the container you want to export.

Note: In the SAP BTP environment, <CONTAINER_GROUP_NAME> is BROKER_CG by default. You can view the respective <CONTAINER_GROUP_NAME> by executing the script below.

select * from _sys_di.m_all_container_groups;
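If you need the group of one specific container, you can filter by container name instead; this assumes the standard _SYS_DI.M_ALL_CONTAINERS monitoring view (used again later in this post), which also lists the group each container belongs to:

select CONTAINER_NAME, CONTAINER_GROUP_NAME from _SYS_DI.M_ALL_CONTAINERS where CONTAINER_NAME = '<container name>';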

 

The result of executing the procedure will look as shown below:

Import the container objects

To import the container, similar steps have to be followed, such as registering the SSL certificates for the cloud service in the target database.

Note: The target system should run the same cloud version as the source system (or a more recent one), and the target HDI container must already exist or be created before the import operation. If the target HDI container is not empty, its content is overwritten.

The object owner <ContainerName>#OO must have the required dependencies and privileges to access the external objects in any remote containers.

CREATE LOCAL TEMPORARY COLUMN TABLE #PARAMETERS LIKE _SYS_DI.TT_PARAMETERS; 

INSERT INTO #PARAMETERS (KEY, VALUE) VALUES ('source_path', 's3-<region>://<access_key>:<secret_key>@<bucket_name>/<object_id>');

INSERT INTO #PARAMETERS (KEY, VALUE) VALUES ('original_container_name', '<CONTAINER_NAME>');

-- Call this procedure if the container does not exist yet and a new one needs to be created
CALL _SYS_DI#BROKER_CG.CREATE_CONTAINER('FLIGHTRESERVATION_HDI_DB_100', _SYS_DI.T_NO_PARAMETERS, ?, ?, ?);

CALL _SYS_DI#<CONTAINER_GROUP_NAME>.IMPORT_CONTAINER_FOR_COPY('<TARGET_CONTAINER_NAME>', '', '', #PARAMETERS, ?, ?, ?);

DROP TABLE #PARAMETERS;

The result of creating the container:

Creating the containers in the SAP BTP cockpit:

Create the service key, which contains details such as the schema, the run-time user, and so on.

In the Database Explorer, execute the script below.

select * from _sys_di.m_all_containers;

You will get the following result:

Execute the steps described above, replacing the container name with F0DA548431EA42598AB7821C912E46B6.

Importing the container in a different database

You can check the objects of the container by executing the script below:

select * from sys.tables where schema_name = 'FLIGHTRESERVATION_HDI_DB_200'
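To cover views, procedures, and other object types as well, you can run a broader check against the SYS.OBJECTS catalog view; the schema name here is the example container used above:

select SCHEMA_NAME, OBJECT_NAME, OBJECT_TYPE from SYS.OBJECTS where SCHEMA_NAME = 'FLIGHTRESERVATION_HDI_DB_200';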

 

SAP Help Documentation Links

Export an SAP HDI Container for Copy Purposes to a Cloud Store

Import an SAP HDI Container for Copy Purposes from a Cloud Store

Please share your valuable feedback


      2 Comments
      Michael Cocquerel

      Hello Shraddha Shetty,
      In addition to database objects, an HDI container is also composed of a related HDI service in CF or XSA that may be bound to other services or contain some shared service keys. When using the procedure CREATE_CONTAINER, I suppose it creates the database objects only, but not the related HDI service. Am I right? If yes, how do you manage the copy of the HDI service, including bindings and service keys?
      Regards,
      Michael

      Shraddha Shetty (Blog Post Author)

      Hi Michael,

      It is a valid point. In that case, we have to create the container using the SAP BTP cockpit (SAP HANA Schemas & HDI Containers) and use that container for importing the objects. I will extend this blog post accordingly. Thank you very much for pointing this out.

      Regards,

      Shraddha