BODS Command-Line Deployment
Command-line or scripted deployment of BODS ETL code (.atl files) is not straightforward; SAP recommends the central repository approach for code movement and deployment across environments. This paper provides a proven method for command-line deployment of .atl files to local repositories. This approach grew out of our own requirements, and we are currently using it for our code deployment.
I have developed and tested this for BODS 4.x in Linux and UNIX environments; with minor adjustments these commands can be executed on Windows installations. All code snippets were tested for BODS 4.x on Linux.
"al_engine" is the centerpiece: it facilitates all operations on repository metadata, such as deploying code from the command line, altering datastores, and generating the executable script for jobs. The GUI approach uses this same utility to manage repository metadata.
Note: al_engine command usage and options are provided in the Appendix.
All we have developed is basic shell scripts and a couple of Perl scripts (where a hash comparison was needed). Working with datastores was the hardest part of the whole exercise; creating the execution command was the next most complex.
This is a back-end deployment process and needs an .atl export of the code to be deployed to the repository.
We developed a separate script for each purpose: checking active processes in the repository, backing up the repository, deploying an .atl file to the repository, updating datastores with environment-specific configurations, and generating the execution command for a specific job. The next challenge was to tie all of these together in a master script that drives the individual pieces with greater flexibility and takes care of all possible exceptions. I achieved that by developing one master script with many custom options, which gives the user the flexibility to drive any of the functionality; i.e., one script can do the whole job, the user just needs to supply the options. This was the major achievement.
I have organized this document into sections; each section covers one individual operation needed for deployment, and the final section covers the master script usage. Some code snippets are provided in the Appendix to help your development.
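As an illustration only (a sketch, not the actual script; the option letters and function names here are invented), the master script's option handling could look like this, with each option dispatching to one of the operations described in the sections below:

UNIX/LINUX code snippet:
#!/bin/sh
# Sketch of the master script's option dispatch; each stub stands in for
# one of the operations described in the sections below.
check_active_processes() { echo "checking active processes (section 1)"; }
backup_repository()      { echo "backing up repository (section 2)"; }
import_atl_file()        { echo "importing $1 (section 3)"; }
update_datastores()      { echo "updating datastores from $1 (section 4)"; }
generate_exec_command()  { echo "generating execution command for $1 (section 5)"; }

while getopts "cbf:j:e:" OPT; do
    case $OPT in
        c) check_active_processes ;;
        b) backup_repository ;;
        f) import_atl_file "$OPTARG" ;;
        j) update_datastores "$OPTARG" ;;
        e) generate_exec_command "$OPTARG" ;;
        *) echo "Usage: $0 [-c] [-b] [-f atl_file] [-j ds_config_file] [-e job_name]" >&2; exit 1 ;;
    esac
done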
1. Check active jobs in the repository
If there are any jobs actively running in the repository, the deployment may be inconsistent and could corrupt the repository when the active job and the .atl file being deployed share any common objects. If there are no shared objects it may sometimes be safe to proceed, but it is not recommended to move forward with a deployment while any jobs are actively running.
This script is responsible for reporting any actively running jobs in the target repository; it can also notify the user by e-mail once processing is complete and there are no active processes in the repository.
Code snippet:
SELECT INST_MACHINE SERVER,
SERVICE DI_PROCESS,
to_char(START_TIME, 'MM/DD/YYYY HH24:MI:SS') START_TIME
FROM al_history
WHERE service NOT IN ('di_job_al_mach_info', 'CD_JOB_d0cafae2')
AND end_time IS NULL;
Note: This query needs to be executed in the target repository database.
This query lists all processes/jobs with a NULL end time, which means the process is still in progress; for all completed jobs the end time will be the timestamp of completion. We cannot rely on this query alone, because after an abnormal job abort the BODS metadata may leave NULL in the end time field. If, in combination with this, we also check the operating system's process pool, we get solid evidence of whether an active process is running in the background; below is the shell code snippet for that.
UNIX/LINUX code snippet:
PROCESS_COUNT=$(ps -ef | grep al_engine | grep -i $REPOSITORY | grep -i $REPOSITORY_DATABASE | grep -v grep | wc -l)
Refer to the Appendix for parameter details.
A shell script built from these two snippets can poll both checks at 5-minute intervals and notify the user when there are no active processes in the repository; this saves time and gives the administrator flexibility.
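Below is a minimal sketch of such a polling script, assuming an Oracle repository reachable via sqlplus and a mailx command for the notification ($NOTIFY_EMAIL is an invented variable):

UNIX/LINUX code snippet:
#!/bin/sh
while : ; do
    # Check 1: repository metadata -- count jobs with a NULL end_time
    DB_COUNT=$(sqlplus -s "$REPOSITORY/$REPOSITORY_PASSWORD@$DATABASE_INSTANCE" <<EOF | tr -d '[:space:]'
SET HEADING OFF FEEDBACK OFF PAGESIZE 0
SELECT COUNT(*) FROM al_history
WHERE service NOT IN ('di_job_al_mach_info', 'CD_JOB_d0cafae2')
AND end_time IS NULL;
EOF
)
    # Check 2: operating system process pool
    PROCESS_COUNT=$(ps -ef | grep al_engine | grep -i "$REPOSITORY" | grep -v grep | wc -l)

    if [ "$DB_COUNT" -eq 0 ] && [ "$PROCESS_COUNT" -eq 0 ]; then
        echo "No active processes in $REPOSITORY" | mailx -s "Repository idle" "$NOTIFY_EMAIL"
        break
    fi
    sleep 300   # poll at 5-minute intervals
done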
2. Backup current Repository code
This step is important and essential: we can easily restore the previous state of the repository if something goes wrong with the current deployment. Again, the al_engine command is used to back up the repository code. There are also options to export specific objects from the repository.
BODS 4.x mandates a passphrase for every .atl export from the repository, and the same passphrase needs to be supplied while importing the .atl file into any other repository. If the passphrase is wrong or blank at import time, all datastore passwords become invalid and the password of each datastore has to be re-entered.
The code snippet below exports a repository:
UNIX/LINUX code snippet:
al_engine -U$REPOSITORY -P$REPOSITORY_PASSWORD -S$DATABASE_INSTANCE -N$DATABASE_TYPE -passphrase$PASS_PHRASE -Q$DATABASE_SERVER -X
Refer to the Appendix for parameter details.
This command generates an "export.atl" file in the current directory with a full backup of the repository; we can rename and move this file to a desired location for future use.
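For example, a small wrapper can archive the export with a timestamp (a sketch; $BACKUP_DIR is an invented variable):

UNIX/LINUX code snippet:
# Export the repository, then archive the generated file with a timestamp.
al_engine -U$REPOSITORY -P$REPOSITORY_PASSWORD -S$DATABASE_INSTANCE \
          -N$DATABASE_TYPE -passphrase$PASS_PHRASE -Q$DATABASE_SERVER -X \
&& mv export.atl "$BACKUP_DIR/${REPOSITORY}_$(date +%Y%m%d_%H%M%S).atl"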
3. Import .atl file
The previous two sections are preliminary steps, to recover if something goes wrong (backup) and to prevent something going wrong (checking active processes). This section covers the actual deployment: importing the .atl file into the repository. This is the typical way to import an .atl file/code into a repository.
The code snippet below imports an .atl file into a repository:
UNIX/LINUX code snippet:
al_engine -U$REPOSITORY -P$REPOSITORY_PASSWORD -S$DATABASE_INSTANCE -N$DATABASE_TYPE -passphrase$PASS_PHRASE -Q$DATABASE_SERVER -f$FILE_NAME
Refer to the Appendix for parameter details.
This command imports the .atl file into the repository. If there are any errors/issues with the file or the import, the command throws an error; if the import is successful, it displays a success message on the console. Only one .atl file can be imported per command, so the command has to be executed several times if more than one .atl file needs to be imported.
$PASS_PHRASE should match the passphrase given for the .atl file at the time of export from the local repository; with an incorrect or blank passphrase the .atl file still gets deployed/imported to the repository, but with blank passwords for all datastores in the .atl file.
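Since only one .atl file can be imported per invocation, a simple loop covers multi-file deployments (a sketch; $DEPLOY_DIR is an invented variable holding the files to deploy):

UNIX/LINUX code snippet:
for FILE_NAME in "$DEPLOY_DIR"/*.atl; do
    al_engine -U$REPOSITORY -P$REPOSITORY_PASSWORD -S$DATABASE_INSTANCE \
              -N$DATABASE_TYPE -passphrase$PASS_PHRASE -Q$DATABASE_SERVER \
              -f"$FILE_NAME" || { echo "Import failed: $FILE_NAME"; exit 1; }
done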
4. Update DataStores
The datastore configuration may not be the same across Development, QA, and Production environments; we can automate updating a repository's datastore information through a command. If a datastore is already available with the respective configuration in each environment, we can skip that datastore configuration in the .atl export. A brand-new datastore needs to be configured correctly in each environment. We can achieve this through the command line.
UNIX/LINUX code snippet:
al_engine -N$DATABASE_TYPE -Q$DATABASE_SERVER -S$DATABASE_INSTANCE -U$REPOSITORY -P$REPOSITORY_PASSWORD -jd$DATASTORE_XML_FILE
Refer to the Appendix for parameter details.
One configuration text file for all datastores, with details (datastore name, configuration name, database instance, schema, and password), helps to keep the configurations in one place and use the same file to update the datastores; the code snippet needs to run iteratively for each datastore configuration, as in the sketch at the end of this section.
We have tested this process only for Oracle databases; other databases and other types of datastore configurations must be tested before implementation.
Datastore configuration information is case sensitive; make sure you follow the same case.
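Below is a sketch of that iteration, generating one XML file per datastore configuration on the fly and deleting it immediately after the update (the file name datastores.cfg and its pipe-delimited layout are invented for illustration; see the Appendix for the XML format):

UNIX/LINUX code snippet:
# Assumed file layout: DATASTORE|CONFIGURATION|INSTANCE|SCHEMA|PASSWORD
while IFS='|' read -r DS_NAME CFG_NAME DB_INSTANCE SCHEMA PASSWORD; do
    XML_FILE="/tmp/${DS_NAME}_${CFG_NAME}.xml"
    {
        printf '00000000000000000000000000000000'   # 32-zero prefix required by BODS 4.x
        printf '<?xml version="1.0" encoding="UTF-8" ?>'
        printf '<Datastore name="%s"><DSConfigurations>' "$DS_NAME"
        printf '<DSConfiguration default="true" name="%s">' "$CFG_NAME"
        printf '<oracle_host_string>%s</oracle_host_string>' "$DB_INSTANCE"
        printf '<user>%s</user><password>%s</password>' "$SCHEMA" "$PASSWORD"
        printf '</DSConfiguration></DSConfigurations></Datastore>'
    } > "$XML_FILE"
    al_engine -N$DATABASE_TYPE -Q$DATABASE_SERVER -S$DATABASE_INSTANCE \
              -U$REPOSITORY -P$REPOSITORY_PASSWORD -jd"$XML_FILE"
    rm -f "$XML_FILE"   # never leave clear-text passwords on disk
done < datastores.cfg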
5. Create execution command
al_engine does not play a big role in creating the execution command; instead, we gather all the values and construct the execution command for a job ourselves. We need to read the object ID from the metadata tables; all other values come from the environment. Generally the execution command is exported from the Management Console, but that does not let the developer pass dynamic values to the global variables defined in the job; i.e., if we need to pass values for the global variables from the external environment, it is not possible to generate such a generic executable command from the Management Console.
The code snippet below constructs the execution command:
UNIX/LINUX code snippet:
${LAUNCHER_DIRECTORY} \"${LOG_DIRECTORY}\" -w \"inet:${HOST_NAME}:${PORT_NUMBER}\" \" -PLocaleUTF8 -R\\\"${REPOSITORY_PASSWORD_FILE_NAME}.txt\\\" -G\"${JOB_GUID}\" -r1000 -T14 -Ck -no_use_cache_stats ${CONFIGURATION_NAME} -LocaleGV ${GLOBAL_VARIABLES} -CtBatch -Cm${HOST_NAME} -CaAdministrator -Cj${HOST_NAME} -Cp${PORT_NUMBER} \"
UNIX/LINUX code snippet with dynamic values for the global variables:
${LAUNCHER_DIRECTORY} \"${LOG_DIRECTORY}\" -w \"inet:${HOST_NAME}:${PORT_NUMBER}\" \" -PLocaleUTF8 -R\\\"${REPOSITORY_PASSWORD_FILE_NAME}.txt\\\" -G\"${JOB_GUID}\" -r1000 -T14 -Ck -no_use_cache_stats ${CONFIGURATION_NAME} -LocaleGV -GV\"\$gv_parameter_1=`AL_Encrypt "'$1'"`;\$gv_parameter_2=`AL_Encrypt "'$2'"`;\" -CtBatch -Cm${HOST_NAME} -CaAdministrator -Cj${HOST_NAME} -Cp${PORT_NUMBER} \"
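The $JOB_GUID comes from the repository metadata (the query is in the Appendix); for an Oracle repository it can be fetched inside the script itself, for example (a sketch assuming sqlplus is available and $JOB_NAME is supplied by the caller):

UNIX/LINUX code snippet:
JOB_GUID=$(sqlplus -s "$REPOSITORY/$REPOSITORY_PASSWORD@$DATABASE_INSTANCE" <<EOF | tr -d '[:space:]'
SET HEADING OFF FEEDBACK OFF PAGESIZE 0
SELECT DISTINCT TRIM(GUID) FROM AL_LANG WHERE UPPER(NAME) = UPPER('$JOB_NAME');
EOF
)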
Appendix
1. Parameters in detail
Parameter | Description
$REPOSITORY | Local repository name; typically the database schema name used for the repository
$REPOSITORY_PASSWORD | Password for the above schema
$DATABASE_INSTANCE | Database instance name on which this repository (schema) is available
$PASS_PHRASE | Mandatory for BODS 4.x, optional for BODS 3.x. An alphanumeric string that serves as a password for the .atl file. If the same passphrase is not provided at the time of import of the .atl file, all datastores will have blank passwords.
$DATABASE_SERVER | Physical machine name of the server on which $DATABASE_INSTANCE is installed/available
$DATABASE_TYPE | Type of the database, e.g. Oracle
$FILE_NAME | Name of the .atl file to import into the repository; this can be a fully qualified absolute path with the file name (including extension)
$DATASTORE_XML_FILE | Absolute path of the XML file with datastore credentials; see below for a sample XML file
Creating shell/executable script:
$LAUNCHER_DIRECTORY | Absolute path of the BODS launcher, typically /local/apps/bods3/dataservices/bin/AL_RWJobLauncher
$LOG_DIRECTORY | Absolute path in which to produce log files, typically /local/apps/bods3/dataservices/log
$HOST_NAME | BODS server machine/host name
$PORT_NUMBER | BODS executable port number (configured at the time of installation)
$REPOSITORY_PASSWORD_FILE_NAME | Password file name. If the password file is at a different location than the default, provide the absolute path. Default password file path: /local/apps/bods3/dataservices/conf
$JOB_GUID | Job ID from the metadata tables. The query to get the job ID: select distinct trim(GUID) from AL_LANG where upper(name) = upper('<JOB_NAME>'). This query should be executed in the target repository database schema.
$CONFIGURATION_NAME | System configuration name if there are multiple configurations in the BODS environment; blank for default or no system configurations
2. Datastore XML file
Below is a sample XML file with a datastore configuration. The first 32 zeros are specific to BODS 4.x; for the earlier 3.x versions the first 32 zeros can be skipped.
There should be one XML file per configuration per datastore; for example, if you have 10 datastores with two configurations each, you need to generate 20 XML files.
Automate this with a shell/Perl script that reads the datastore configuration file, generates the XML file below per configuration, updates the respective datastore dynamically within the script, and removes the XML file immediately after the update is done (see the sketch in section 4).
Initially, for BODS 3.x, a plain XML file was used (without the 32 zeros at the beginning of the file). For BODS 4.x the tool expects every value/file passed to it to be in encrypted format; the string of 32 zeros says that this file is encrypted (actually it is not). This is a workaround.
Sample XML file
00000000000000000000000000000000<?xml version="1.0" encoding="UTF-8" ?>
<Datastore name="MY_DATA_STORE">
<DSConfigurations>
<DSConfiguration default="true" name="Configuration1">
<oracle_host_string>ORACLEDATABASESID</oracle_host_string>
<user>SCHEMA</user>
<password>PASSWORD</password>
</DSConfiguration>
</DSConfigurations>
</Datastore>
3. Al_engine Options in detail
-A : BW Request ID consisting of RequestID [30 characters], Selection Date [8 characters], Selection Time [6 characters]
-v : Print version number
-D : Print debug messages
-DRowCount=<n>
-DASID=<s>
-DEngineID=<s>
-DGuiID=<s>
-Did<id> : Specify the Designer session’s unique id
-Dit<it> : Specify the Designer session’s execution iteration number
-Dt<timestamp> : Specify the Designer session’s execution timestamp
-Dscan : Execute in Data Scan mode
-Dclean : Cleanup any Data Scan temporary files
-DDataScanRows =<n>
-T<TraceNumber> : Trace numbers. The numbers are:
-1 : Trace all
1 : Trace row
2 : Trace plan
4 : Trace session
8 : Trace dataflow
16 : Trace transform
32 : Trace user transform
64 : Trace user function
128 : Trace ABAP Query
256 : Trace SQL For SQL transforms
512 : Trace SQL For SQL functions
1024 : Trace SQL For SQL readers
2048 : Trace SQL For SQL loaders
4096 : Trace Show Optimized DataFlows
8192 : Trace Repository SQL
524288 : Trace Nested View Processing
1048576 : Trace Assemblers
4194304 : Trace SAP RFC(BAPI) Function Call
33554432 : Trace adapter/client calls
67108864 : Trace broker communication layer
2147483648 : Trace Audit data
-l<FileName> : Name of the trace log file
-z<FileName> : Name of the error log file (only if any error occurs)
-c<FileName> : Name of the config file
-w<FileName> : Name of the monitor file (must be used together with option -r)
-r : Monitor sample rate (# of rows)
-test : Execute real-time jobs in batch test mode
-nt : Execute in single threaded mode
-np : Execute in single process mode
-no_audit : Execute with Audit turned off
-no_dq_capture : Execute with Data quality statistics capture turned off
-Ksp<SystemConfiguration> : Execute with system configuration
-Ck : Execute in checkpoint mode
-Cr : Execute in checkpoint recovery mode
-Cm<MachineName> : Name of machine that administrates this job
-Ca<AccessServerName> : Name of access server that administrates this job
-Ct<JobType> : Type of this job (e.g. -CtBatch or -CtRTDF)
-Cj<JobServerHostName> : Name of job server’s host that executes this job
-Cp<Port> : Port of job server that executes this job
-CSV : Commandline Substitution Parameters (e.g. -CSV"$$DIR_PATH=C:/temp")
-U<User> : Repository login user
-P<Password> : Repository login password
-S<Server> : Repository server name
-N<DatabaseType> : Repository database type
-Q<Database> : Repository database
-g : Repository using Windows Authentication (Microsoft SQL Server only)
-X : Export the repository to file "repo_export.atl"
-XX[L] : Export the repository to file "export.xml"
-XI<Filename.xml> : Import information into the repository
-Xp@<ObjectType>@<FileName> : Exports all repository objects of the specified type to the specified file in ATL format.
-Xp@<ObjectType>@<FileName>@<ObjectName> : Export the specific repository object to the ATL file
-Xp@<ObjectType>@<FileName>@<ObjectName>@DE: Export the specific repository object and its dependents with datastore information to the ATL file.
-Xp@<ObjectType>@<FileName>@<ObjectName>@D : Exports the specified repository object and its dependents to the specified file in ATL format, excluding datastore information.
-XX[L]@<ObjectType>@<FileName> : Export the specific repository objects to the XML file
-XX[L]@<ObjectType>@<FileName>@<ObjectName> : Export the specific repository object to the XML file
-XX[L]@<ObjectType>@<FileName>@<ObjectName>@DE: Export the specific repository object and its dependents with datastore information to the XML file
-XX[L]@<ObjectType>@<FileName>@<ObjectName>@D : Export the specific repository object and its dependents without datastore information to the xml file
<ObjectType> can be one of the following
P : Exports all Projects
J : Exports all Jobs
W : Exports all Workflows
D : Exports all Dataflows
T : Exports all Idocs
F : Exports all user defined File formats
X : Exports all XML and DTD Message formats
S : Exports all Datastores
C : Exports all Custom functions
B : Exports all COBOL Copybooks
E : Exports all Excel workbooks
p : Exports all System Profiles
v : Exports all Substitution Parameter Configurations
K : Exports all SDK transform Configurations
[L] – Optionally, export a lean XML.
-XC : Compact repository
-XV<ObjectType>@<ObjectName> : Validate object of type <ObjectType> that exists in the repository
<ObjectType> can be one of the following when validating objects
J : Job
W : Workflow
D : Dataflow
T : ABAP Transform
F : File format
X : XML Schema or DTD Message format
S : Datastore
C : Custom function
B : COBOL Copybook
E : Excel workbook
p : System Profile
v : Substitution Parameter Configuration
K: SDK Transform Configuration
-XR<ObjectType>@<ObjectName> : Remove object of type <ObjectType> from the repository where ObjectName can be "datastore"."owner"."name" in case of objects (for example, table, stored procedure, domain, hierarchy, or IDOC) contained in a datastore.
<ObjectType> can be any of the object types mentioned for XV option. In addition they can be one of the following
P : Project
t : Table or Template Table
f : Stored procedure or function
h : Hierarchy
d : Domain
i : IDOC
a : BW Master Transfer Structure
b : BW Master Text Transfer Structure
c : BW Master Transaction Transfer Structure
e : BW Hierarchy Transfer
x : SAP Extractor
-Xi<ObjectType>@<ObjectName> : Imports the specified object into the repository.
<ObjectType> is the same as -XR above
-x : Export internal built-in function information
-xi<datastore> : Print datastore's imported objects to file "<datastore>_imported_objects.txt"
-f<Filename.atl>[@NoUpgrade] : Import information from ATL into the repository. By default this option upgrades the SDK Transforms prior to importing them to repository, and does not import the read-only configurations. Specify @NoUpgrade to ignore the upgrade step or to import read-only configuration ATLs (e.g. sample_sdk_transform.atl).
-F<Datastore.Owner.Function> : Import function(s)
-H<filename> : Import a DTD or XML file to Repo
-I<Datastore.Owner.Table> : Import a single table
-M<Datastore> : Import tables and functions
-Y<Datastore.Owner.Treename> : Import a tree class
-el<Datastore.Owner.DBLink> : Import a database link.
-et<Datastore> : Print all imported database links for the current Datastore.
-G<guid> : Execute a session specified by a GUID
-s<Session> : Execute a session
-p<Plan> : Execute a plan
-passphrase<Passphrase> : Import/export the passwords from/to atl using the passphrase.
-epassphrase<base64-encoded-passphrase> : Same as -passphrase except that it accepts base64 encoded data to allow any special character in the passphrase. The passphrase must have been transcoded to UTF8 character set prior to applying base64 encoding.
-GV<global var assign list> : A list of global variable assignments, separated by semicolons. Put the whole list in double-quotes.
-a<ABAPProgram> : Generate ABAP code
-V<name=value> : Set the environment variable <name> with <value>
-L<list of value> : List of Object Labels from UI (separated by , or ; or space) to filter Use double quotes around list if space used as a separator.
-yr"<repository parameter file in quotes>" : Read repository information from "file" (default path: %link_dir%/conf/)
-gr"<repository parameter file in quotes>" : Write repository information to "file" (default path: %link_dir%/conf/)
-jd"<datastore delta file in quotes>" : Modify datastore values using "file" (default path: %link_dir%/conf/)
-test_repo : Test repository connection
-b : Populate AL_USAGE table
-ep : Populate AL_PARENT_CHILD table
-ec : Populate AL_COLMAP and AL_COLMAP_TEXT tables
The following three options are for portable database targets (controlled release).
-WE : Delete properties of portable targets for datastore’s database types other than default.
-WP : Populate properties of all portable targets for all datastore’s database types.
-WD<datastore> : Datastore to which -WE and/or -WP is applied. If <datastore> was not specified, the option will apply to all portable targets.
-ClusterLevel<Distribution level> : Execute job with distribution level (e.g. -ClusterLevelJOB, -ClusterLevelDATAFLOW, -ClusterLevelTRANSFORM for sub data flow)
I'm not having any luck using AL_Encrypt in the command line (See Section 5 above) in a Windows environment. Any tips?
Are you encountering any error? Please provide the error details.
It ended up being a problem with our third-party scheduler and we've figured out a work-around. Thanks for this nice reference.
How do I trace "Work flow"? It's an option in Data Services Designer and Data Services Management Console.
Do you mean the Lineage Analysis?
Hi Kiran
How did you determine the schema for the XML file used in the al_engine -jd option? It is quite different from the XML you see when you export a datastore as XML?
Thanks
Maurice
Here, with the -jd option, $DATASTORE_XML_FILE is an XML version of the datastore definition.
Kiran
I realise that! The question is what is the schema for that XML?
OK, I believe I understand your question now 🙂
Well, this XML is a simplified version of the XML export of the datastore: when we export a datastore it comes with all the attributes needed to create a new datastore, whereas in our case we just need to update the essential information for the respective environment.
In that XML file the first line should be prefixed with 32 zeros (for BODS 4.x) to pretend that the XML file is encrypted; if those 32 zeros are missing, al_engine will not update the datastore details.
We have created a Perl script that reads a text file with all the required configuration details and generates this XML file.
Well, the XSD could be like below (make sure to prefix the XML file with 32 zeros):
*********************************************************************************
<xs:schema attributeFormDefault="unqualified" elementFormDefault="qualified" xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="Datastore">
<xs:complexType>
<xs:sequence>
<xs:element name="DSConfigurations">
<xs:complexType>
<xs:sequence>
<xs:element name="DSConfiguration">
<xs:complexType>
<xs:sequence>
<xs:element type="xs:string" name="oracle_host_string"/>
<xs:element type="xs:string" name="user"/>
<xs:element type="xs:string" name="password"/>
</xs:sequence>
<xs:attribute type="xs:string" name="default"/>
<xs:attribute type="xs:string" name="name"/>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
<xs:attribute type="xs:string" name="name"/>
</xs:complexType>
</xs:element>
</xs:schema>
*********************************************************************************
Thanks Kiran, so it's a matter of adjusting the elements depending on your database type e.g. Oracle or SQL Server. I presume that in your example the <PASSWORD></PASSWORD> was in clear text rather than encrypted.
Yes, it's clear text in the password field.
In our project the owners of the password files for production environments will be DBAs, and all deployments will be done by them.
While updating any datastore configurations we generate this XML dynamically through a Perl script and delete the file immediately after the update, so that we are not exposing unencrypted database passwords.
Well, there is a little risk too: what if someone comments out the code that deletes this XML file? That would be an extreme case, but it is possible.
Let me know if you find a better alternative to a plain-text password.
Hi Kiran, is it possible to update the TNS names and usernames for the datastores from the command line? I was able to update the password but not the TNS name of the datastore.
Yes, it is possible; we need to supply all the details in an XML file in the following format:
00000000000000000000000000000000<?xml version="1.0" encoding="UTF-8" ?>
<Datastore name="Mydatastore">
<DSConfigurations>
<DSConfiguration default="true" name="Configuration1">
<oracle_host_string>TNSNAME</oracle_host_string>
<user>DB_USER</user>
<password>PASSWORD</password>
</DSConfiguration>
</DSConfigurations>
</Datastore>
Hi,
I want to perform the below step by using command line:
Access the L_<<APPLICATION_NAME>>_REPO Repository via BODS Designer.
1. Right-click on the ‘datastores’ window on the left and highlight all datastores. Once highlighted, right click on one and select Export. This will create an ‘Export’ in the main window.
2. Highlight all the datastores in the main window (named datastores to export), right click on one and select ‘Exclude Tree’
3. Again with the same datastores highlighted, right click and select ‘Include’.
4. Expand one of the datastores and ensure that the lower levels have red crosses on them.
5. Right click on the ‘datastores to export’ window and select ‘Export to ATL File'.
6. A prompt may be displayed to enter a passphrase. Please enter a passphrase and remember while reimporting the file.
The step is just to take a back up of the connection and datastore profile details of datastores present in the repo.
Could you please let me know if that is possible through the command line, and if so, which of the options mentioned above can be used?
When i check the options above i can see "S : Exports all Datastores"
But it says all datastores.
What do you mean by all datastores?
Is it only the details from the above steps, or will it export the tables and functions in the datastore as well?
Here is the template to export only the datastore without its tables in it.
al_engine -U<REPO_NAME> -P<REPO_PASSWORD> -S<DATABASE_TNS_OD_REPO> -Noracle -passphrase1234 -Q<REPO_DATABASE_SERVER> -Xp@S@<ATL_FILENAME>@<DATASTORE_NAME_TO_BE_EXPORTED>
Hi Kiran,
Thanks for your reply.
But I want to know: if I need to export the datastore details (excluding tables) for all the datastores present in the repo rather than only a specific datastore, will the below command work?
al_engine -U<REPO_NAME> -P<REPO_PASSWORD> -S<DATABASE_TNS_OD_REPO> -Noracle -passphrase1234 -Q<REPO_DATABASE_SERVER> -Xp@S@<ATL_FILENAME>
I have verified with the above command, and it seems it is exporting the table details for each of the datastores as well.
It doesn't seem that we can export all datastores at once excluding the tables/functions in them 😉 (why can't they provide one?).
The only option left is to export each datastore individually with a series of scripts:
1. Get all datastores in repo
2. Generate export command, execute and generate individual DS file
To import, you may need to write a similar script to import them all.
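Something like the below could drive it (a sketch, assuming the datastore names have first been collected into a text file, one name per line):

UNIX/LINUX code snippet:
while read -r DS_NAME; do
    al_engine -U$REPOSITORY -P$REPOSITORY_PASSWORD -S$DATABASE_INSTANCE \
              -Noracle -passphrase$PASS_PHRASE -Q$DATABASE_SERVER \
              -Xp@S@"${DS_NAME}.atl"@"$DS_NAME"
done < datastore_list.txt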
How can I export a single job? I tried with below option:
al_engine -Urepo_user -Prepo_pass -Srepodb -Noracle -Qdbserver -XpJ@atlfilename@jobname but it has exported all the jobs in the repository.
I am trying to automate BODS deployment. I have a few more queries:
Can I update Central Repository from command line or get code from central repository?
Is it possible to import the tables that a particular job uses?
Is there any option to save the job after export and validation?
Please suggest
1. Here is the correct command to export a single job (you used -XpJ@; it should be -Xp@J@): al_engine -Urepo_user -Prepo_pass -Srepodb -Noracle -Qdbserver -Xp@J@atlfilename@jobname
2. I am not aware of commands for the central repository; I don't think you can access the central repository with al_engine. Instead, you can get the .atl contents from the repository metadata tables.
3. Use @DE or @D with the command mentioned in point #1 to export a job's dependents; see the Appendix for the explanation of these options.
4. I am not clear on the question; you would generally export a job/object only if it is valid and already saved to the repository.
Hi All
Any suggestion on how to import a BW Master Transaction Structure or Transfer Transaction Structure?
More precisely, how do we specify the System Name to the -I command?
Please advise.
Hi Kiran Bodla, can you please help me import and export ATL scripts on Windows? Thank you.