Java Mapping: Base64-Zipped CSV to XML Conversion
Introduction
In a recent client project I had a requirement where the incoming file was a CSV file that had been zipped and then Base64-encoded. The source system was Ariba and the target was ECC.
My aim in writing this blog is to provide a reusable Java Mapping for similar requirements.
Description
Ariba sends the file to PI via a web service. The encoded file is contained in a tag named “HeaderExportFile”.
PI must perform the following actions before sending the file to ECC for further processing:
1. Decode the Base64 content
2. Unzip the content
3. Convert the unzipped CSV file into XML
To achieve the above, I created a Java Mapping.
Incoming Payload
The encoded file is within the “HeaderExportFile” tag.
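The original screenshot is not reproduced here; for illustration only, the payload has roughly this shape (the outer element name and the Base64 value are invented — the mapping only reads the “HeaderExportFile” tag):

```xml
<ExportRequest>
   <HeaderExportFile>UEsDBBQAAAAIA...AAAAAA==</HeaderExportFile>
</ExportRequest>
```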
CSV file after unzipping
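In place of the screenshot, an invented example of the unzipped CSV. Note the layout the mapping expects: the first line is a title row (skipped), column names are on the second line, and data follows:

```
Contract Workspace Export
Contract Id,Contract Name,Supplier
CW1001,Annual Maintenance,ACME Corp
CW1002,Hardware Supply,Initech
```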
Source Code: the original attachment is no longer accessible, so the full source is included below.
Output of the Mapping
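In place of the screenshot, an invented illustration of the mapping output: the root element is <Record>, each CSV data line becomes a <row>, and the column names (with spaces removed) become the element names:

```xml
<Record>
   <row>
      <ContractId>CW1001</ContractId>
      <ContractName>Annual Maintenance</ContractName>
      <Supplier>ACME Corp</Supplier>
   </row>
   <row>
      <ContractId>CW1002</ContractId>
      <ContractName>Hardware Supply</ContractName>
      <Supplier>Initech</Supplier>
   </row>
</Record>
```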
package com.map;

import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.StringTokenizer;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

import javax.xml.bind.DatatypeConverter;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Result;
import javax.xml.transform.Source;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

import com.sap.aii.mapping.api.AbstractTransformation;
import com.sap.aii.mapping.api.StreamTransformationException;
import com.sap.aii.mapping.api.TransformationInput;
import com.sap.aii.mapping.api.TransformationOutput;
public class Base64DecodeAndUnzip extends AbstractTransformation {

    @Override
    public void transform(TransformationInput input, TransformationOutput output)
            throws StreamTransformationException {
        try {
            // Parse the input payload and read the Base64 string from the HeaderExportFile tag
            InputStream is = input.getInputPayload().getInputStream();
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            DocumentBuilder builder = factory.newDocumentBuilder();
            Document docIn = builder.parse(is);
            NodeList details = docIn.getElementsByTagName("HeaderExportFile");
            String b64 = details.item(0).getFirstChild().getNodeValue();

            // Step 1: decode the Base64 string
            byte[] decoded = DatatypeConverter.parseBase64Binary(b64);

            // Step 2: unzip the content (assumption: only one entry in the zip file)
            ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(decoded));
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            ZipEntry ze = zis.getNextEntry();
            if (ze != null) {
                byte[] buffer = new byte[1024];
                int read;
                while ((read = zis.read(buffer, 0, buffer.length)) != -1) {
                    baos.write(buffer, 0, read);
                }
                baos.flush();
                zis.closeEntry();
            }
            zis.close();

            // Step 3: convert the unzipped CSV content into XML
            String csvSplitBy = ",";
            DocumentBuilder domBuilder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document newDoc = domBuilder.newDocument();
            Element rootElement = newDoc.createElement("Record");
            newDoc.appendChild(rootElement);

            BufferedReader br = new BufferedReader(
                    new InputStreamReader(new ByteArrayInputStream(baos.toByteArray()), "UTF-8"));

            // Skip the first line; the column names are on the second line
            br.readLine();

            // Read the column names -- they become the XML element names (spaces removed)
            String line;
            String[] csvFields = new String[0];
            if ((line = br.readLine()) != null) {
                StringTokenizer tokenizer = new StringTokenizer(line, csvSplitBy);
                csvFields = new String[tokenizer.countTokens()];
                int i = 0;
                while (tokenizer.hasMoreElements()) {
                    csvFields[i++] = String.valueOf(tokenizer.nextElement()).replaceAll(" ", "");
                }
            }

            // The columns are now known; read the data line by line
            while ((line = br.readLine()) != null) {
                StringTokenizer tokenizer = new StringTokenizer(line, csvSplitBy);
                if (tokenizer.countTokens() > 0) {
                    Element rowElement = newDoc.createElement("row");
                    int i = 0;
                    while (tokenizer.hasMoreElements()) {
                        String curValue = String.valueOf(tokenizer.nextElement()).replaceAll("\"", "");
                        // Ignore any extra fields beyond the header count instead of failing
                        if (i < csvFields.length) {
                            Element curElement = newDoc.createElement(csvFields[i++]);
                            curElement.appendChild(newDoc.createTextNode(curValue));
                            rowElement.appendChild(curElement);
                        }
                    }
                    rootElement.appendChild(rowElement);
                }
            }
            br.close();

            // Serialize the DOM tree into the output payload
            Transformer transformer = TransformerFactory.newInstance().newTransformer();
            Source source = new DOMSource(newDoc);
            Result outPayload = new StreamResult(output.getOutputPayload().getOutputStream());
            transformer.transform(source, outPayload);
        } catch (Exception e) {
            throw new StreamTransformationException("Exception: " + e.getMessage(), e);
        }
    }
}
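For readers without a PI system at hand, the decode-and-unzip portion (steps 1 and 2) can be exercised standalone. This is a minimal sketch of mine, not part of the original mapping: the class and method names are invented, it assumes a single-entry zip, and it uses java.util.Base64 (Java 8+) in place of DatatypeConverter:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.Base64;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class DecodeUnzipSketch {

    /** Decode a Base64 string and return the first zip entry's content as UTF-8 text. */
    public static String decodeAndUnzip(String b64) throws Exception {
        // Step 1: Base64 decode (java.util.Base64 here; the mapping uses DatatypeConverter)
        byte[] decoded = Base64.getDecoder().decode(b64);

        // Step 2: unzip, reading only the first entry of the archive
        ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(decoded));
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ZipEntry ze = zis.getNextEntry();
        if (ze != null) {
            byte[] buffer = new byte[1024];
            int read;
            while ((read = zis.read(buffer, 0, buffer.length)) != -1) {
                baos.write(buffer, 0, read);
            }
        }
        zis.close();
        return baos.toString("UTF-8");
    }
}
```

Zipping and encoding a small CSV in memory, passing it through this helper, and comparing the round trip is enough to validate the logic before wiring it into the mapping.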
Hi Anumeha,
We have a similar requirement and I was looking for the source code attachment in the blog, but it seems to be missing. Can you please post it again or share it via hussain.xi@gmail.com?
Thanks.
Nice blog!
Hi Anumeha.
I have a similar requirement in my current project, and I can't see the attachment. Could you please share it at rios_juancarlos@hotmail.com?
Thanks in advance.
Hi Anumeha,
I have to do a similar development in my current project.
Can you please send me the code via email? martins.sap@gmail.com
Thanks
Since the attachment was not accessible, I am adding the code to the blog as per fellow colleagues' request.
Hello,
Could you please send the source code? I could not see any attachment. It's urgent.
mail id - satya23487@gmail.com
Hi Anumeha!
Thanks for sharing this. One thought: why not to build the output xml by concatenating row values into simple string using StringBuilder instead of building another DOM tree in memory for output document?
Regards, Evgeniy.