Produce/consume messages in KAFKA with SAP Netweaver using Java Connector – Part 2/3
This is the second part of a series demonstrating how to get SAP Netweaver to produce/consume messages in KAFKA.
Link to Part 1.
If you are just keen to see the results, go to Part 3.
This part will cover the following:
- Connect to your SAP server using SAPGUI
- Install license
- Developer Key
- Setting up tools
- Java Development Kit
- Nano
- Setting up KAFKA
- Setting up SAP Java Connector (JCo)
Just to reiterate: I didn’t use the Windows EC2 instance that has all the developer tools. I only used the SUSE Linux instance with the SAP Netweaver backend installed.
Source code
I’ve put my code on GitHub; you might find it useful.
Setup your SAP system
You can either use the SAPGUI on your Windows EC2 instance, which is already set up with a system to connect to, or use a SAPGUI installed on your own machine. In the latter case, the application server should point to the Elastic IP of your EC2 instance, or else the address may change after every EC2 restart.
First log in
Client 001
User: DEVELOPER
Password: What you entered when you created your CAL instance
Install trial license
See here, or google how to get a minisap license.
Go to https://go.support.sap.com/minisap/#/minisap and generate a license for A4H, enter your hardware key.
Developer key
The user DEVELOPER should already have a developer key assigned, so in case you are wondering: you don’t need one. Just don’t go and create your own user and expect to be able to get a developer key for it (I found out the hard way).
Setting up tools
First you need to make sure you can SSH into your EC2 instance.
JDK
Both KAFKA and the SAP Java Connector require Java, so it makes sense to install that first.
From memory, Java 1.6 comes with the EC2 instance, but it is too old for KAFKA, so we need a newer version.
To install
Run Command:
sudo zypper --non-interactive install java-1_8_0-openjdk-devel
To test:
Run Command:
java -version
Run Command:
javac
If neither command returns an error, you are good to go.
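If you want to verify the version from a script rather than by eye, the banner printed by `java -version` can be parsed. A minimal sketch, assuming the usual banner layout (the version strings below are just examples; exact wording varies by JDK vendor):

```shell
# Extract the major Java version from a "java -version" banner line.
# Pre-9 JDKs report "1.x.y_z", later ones report "x.y.z".
parse_java_major() {
  local ver=${1#*\"}                     # drop everything up to the first quote
  ver=${ver%%\"*}                        # drop the closing quote and the rest
  case $ver in 1.*) ver=${ver#1.};; esac # 1.8.0_282 -> 8.0_282
  echo "${ver%%.*}"                      # keep only the major number
}

parse_java_major 'openjdk version "1.8.0_282"'   # prints 8
parse_java_major 'openjdk version "11.0.2"'      # prints 11
```

In practice you would feed it `java -version 2>&1 | head -n 1` and check the result is at least 8.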
Nano
This step is entirely optional. I need an editor in Linux, and I like to use nano. You can use whatever you like, such as vi.
To install
Run Command:
wget https://download.opensuse.org/repositories/openSUSE:/Leap:/15.1/standard/x86_64/nano-2.9.6-lp151.2.3.x86_64.rpm
sudo rpm -i nano-2.9.6-lp151.2.3.x86_64.rpm
If this link does not work (I have no control over it), search for another source that has a nano RPM distribution.
To test:
nano <any file>
To exit:
Ctrl + X
Setting up KAFKA
Install
Run Command:
curl -O http://apache.mirror.amaze.com.au/kafka/2.4.1/kafka_2.12-2.4.1.tgz
tar -xvf kafka_2.12-2.4.1.tgz
rm -rf kafka_2.12-2.4.1.tgz
sudo ln -s ~/kafka_2.12-2.4.1 /opt/kafka
Let’s explain
These commands download the KAFKA tgz file from an Apache mirror (if the link above no longer works, change it to the latest release from Apache), untar it, remove the archive, and create a symbolic link /opt/kafka to the extracted Kafka directory.
That’s it… now KAFKA is installed!
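Since mirror links go stale, it may help to parameterise the download. A sketch, assuming the usual Apache naming scheme (archive.apache.org keeps old releases after mirrors drop them; the version numbers are just the ones used above):

```shell
# Build the download URL from the Scala and Kafka versions, so upgrading
# later is a one-line change.
SCALA_VERSION=2.12
KAFKA_VERSION=2.4.1
KAFKA_TGZ="kafka_${SCALA_VERSION}-${KAFKA_VERSION}.tgz"
KAFKA_URL="https://archive.apache.org/dist/kafka/${KAFKA_VERSION}/${KAFKA_TGZ}"
echo "$KAFKA_URL"
```

You can then `curl -O "$KAFKA_URL"` instead of hard-coding the mirror path.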
Configure
We are going with the most basic setup here: a single ZooKeeper and a single KAFKA server. It’s up to you to explore more complex setups.
Now I’m going to refer to the Apache quick start guide, since I can’t explain it any better and there’s no point in me repeating it either.
In short, we are going to set up one ZooKeeper instance (on port 2181) and one KAFKA server (on port 9092). Remember these ports, as you will see code referring to them later.
I run the following script to start the zookeeper and KAFKA server. You need to repeat this after each restart.
#!/bin/bash
export KAFKA_HOME="/opt/kafka"
export KAFKA_HEAP_OPTS="-Xmx512M -Xms256M"
nohup sudo $KAFKA_HOME/bin/zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties > /dev/null 2>&1 &
sleep 2
nohup $KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties > /dev/null 2>&1 &
sleep 2
Note that I set the KAFKA heap to be smaller as the EC2 instance struggled with the default memory settings.
Remember to chmod the script so it is executable.
chmod +x kafka_start.sh
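To confirm that both processes actually came up after running the start script, a quick check can be done with bash’s built-in /dev/tcp pseudo-device (assuming the default ports mentioned above):

```shell
# Report whether something is listening on host:port, using bash's
# /dev/tcp redirection (no netcat required).
port_open() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo "$1:$2 up"
  else
    echo "$1:$2 down"
  fi
}

port_open localhost 2181   # ZooKeeper
port_open localhost 9092   # KAFKA broker
```

If either reports down, check the nohup output by removing the `> /dev/null` redirections in the start script.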
Create a topic:
I’m going to create a topic called “my-kafka-topic”. You can call it whatever you want. You only need to do this once. If you get an error, it means your ZooKeeper or KAFKA server did not start properly.
/opt/kafka/bin/kafka-topics.sh --create --topic my-kafka-topic --zookeeper localhost:2181 --partitions 1 --replication-factor 1
Test it
Produce a message
/opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-kafka-topic
Consume the message
In another SSH session, run this:
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-kafka-topic --from-beginning
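The same round trip can be scripted instead of using two SSH sessions. A sketch, assuming the paths and topic from above; `--max-messages 1` makes the console consumer exit after reading one record instead of blocking:

```shell
# Pipe one message through the console producer, then read it back with a
# bounded consumer. Guard first so the function fails cleanly when KAFKA
# is not installed where this blog put it.
kafka_smoke_test() {
  local bin=/opt/kafka/bin topic=my-kafka-topic
  if [ ! -x "$bin/kafka-console-producer.sh" ]; then
    echo "kafka not found at $bin"
    return 1
  fi
  echo "hello from the smoke test" | "$bin/kafka-console-producer.sh" \
      --broker-list localhost:9092 --topic "$topic"
  "$bin/kafka-console-consumer.sh" --bootstrap-server localhost:9092 \
      --topic "$topic" --from-beginning --max-messages 1
}

kafka_smoke_test || true
```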
Setting up SAP JCo
Make sure you get JCo 3.0 and not 3.1.
I paid a hefty price trying to get version 3.1 to work, only to waste hours trying to fix an ARPCRFC fast serialization error. So I’m going to use version 3.0 here.
Where to download
https://support.sap.com/en/product/connectors/jco.html
Install
Transfer and unzip the file to the EC2 instance
Download the “sapjco30P_20-10005328.zip” file to your desktop first, then get it onto the EC2 instance: either put it somewhere public on the internet and download it with “wget” or “curl” in the SSH terminal, or do what I did and put it in an S3 bucket and retrieve it with AWS CLI commands – whatever works for you.
After the file is on the server, unzip it into a folder called sapjco30.
mkdir sapjco30
cd sapjco30
aws s3 cp s3://<my-s3-bucket>/sapjco30P_20-10005328.zip sapjco30P_20-10005328.zip
unzip sapjco30P_20-10005328.zip
tar -xvf sapjco30P_20-10005328.tgz
If you are not using an S3 bucket for the transfer, skip the s3 line.
That’s it. SAP Java Connector is ready to be used!
Your directory should look like this.
Now let’s get the simple JCo server example to work. The JCo server is basically the receiver of RFC calls from any NW system. You first need to register this “JCo Server” with the SAP system (kind of like a handshake) so that the NW system knows where to call.
Create the RFC Destination (transaction SM59)
Name | ZMP_JCO_SERVER |
Type | T (TCP/IP connection) |
Program ID | ZMP_JCO_SERVER |
Activation type | Registered Server Program |
Test the RFC Destination
The test will fail at this point; the error just means it can’t connect to the JCo server, which we haven’t started yet.
Configure the JCo Server
I’m going to use the stock-standard JCo server example provided by SAP. You can make this a lot more complex, scalable and performant.
Create the JCo Destination file
This bit took me hours to figure out, as I couldn’t find any documentation that explains it. Perhaps it is straightforward, but the standard HTML documentation that comes with the zip file shows you the code and assumes you know how to run it.
This tells the JCo Server code where our SAP system is. The JCo Server will use this to call SAP system to register a program ID.
- EXT_SERVER.jcoServer
#A sample configuration for a standard registered server
#Replace the dummy values and put this file in the working directory.
#In eclipse environment it is the project root directory.
jco.server.connection_count=2
jco.server.gwhost=localhost
jco.server.progid=ZMP_JCO_SERVER
jco.server.gwserv=sapgw00
jco.server.repository_destination=ABAP_AS1
Make sure you enter the same Program ID as in the RFC destination.
The ABAP_AS1 destination is required for the registration, as the JCo server makes a call to the SAP system (jco.server.gwserv is the gateway service sapgw<NN>, where NN is the instance number – 00 here).
- ABAP_AS1.jcoDestination
#Sample destination configuration when using an application server over CPIC
#Replace the dummy values and put this file in the working directory.
#In eclipse environment it is the project root directory.
jco.client.ashost=vhcalnplci
jco.client.sysnr=00
jco.client.client=<sap client>
jco.client.user=<sap user>
jco.client.passwd=<sap password>
jco.client.lang=en
Change jco.client.ashost to your server’s hostname (localhost might work; I haven’t tried).
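If the machine running the JCo server cannot resolve the hostname vhcalnplci, one common fix is a hosts-file entry (the IP below is a placeholder; use your instance’s actual address):

```
# /etc/hosts
10.0.0.10   vhcalnplci
```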
Compile the JCo server example:
Edit the StepByStepServer.java file that comes with the JCo zip file.
Set SERVER_NAME1 to the name of your .jcoServer file (without the extension), and DESTINATION_NAME1 to the Program ID of the RFC destination.
static String SERVER_NAME1 = "EXT_SERVER";
static String DESTINATION_NAME1 = "ZMP_JCO_SERVER";
In main, comment out all steps except step1SimpleServer:
public static void main(String[] a)
{
step1SimpleServer();
// step2SimpleServer();
// step3SimpleTRfcServer();
// step4StaticRepository();
}
Run
javac -cp ~/sapjco30/sapjco3.jar:/opt/kafka/libs/* StepByStepServer.java
java -cp ~/sapjco30/sapjco3.jar:/opt/kafka/libs/*:. StepByStepServer
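Since you will recompile and restart the server often while experimenting, the two commands can be wrapped in a small script. A sketch, assuming the paths used in this blog:

```shell
#!/bin/bash
# Compile and run the example JCo server. Adjust jco_jar and the classpath
# if you unpacked things somewhere else.
run_jco_server() {
  local jco_jar="$HOME/sapjco30/sapjco3.jar"
  local cp="$jco_jar:/opt/kafka/libs/*:."
  if [ ! -f "$jco_jar" ]; then
    echo "sapjco3.jar not found at $jco_jar"
    return 1
  fi
  # Quoting the classpath stops the shell from expanding the libs/* glob;
  # java expands it itself as a classpath wildcard.
  javac -cp "$cp" StepByStepServer.java && java -cp "$cp" StepByStepServer
}

run_jco_server || true
```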
Test it
Test the RFC destination
It’s working!
Test the function module
Go to transaction SE37 and call function STFC_CONNECTION with the RFC destination.
The call is successful: the Java code responds with “Hello World”.
The call is successfully received by the JCo server.
Finally, all the bits and bobs are set up. Now, back to why we are doing all this: we want ABAP to call a Java program using the Java Connector and produce a message on KAFKA. See Part 3.
Hi Michael,
nice blog and a nice idea. Have you reported an incident about that fast serialization issue? It should be fixed, no matter where it occurs. JCo 3.0 is about to phase out, so we should make sure that JCo 3.1 is working properly. Further, you can still configure the system to always use classic serialization, even if transfer is then slower.
Best regards,
Markus
Thanks Markus.
I got a fast serialization APCRFC not supported short dump error when I tried with the 3.1 library. Hours of investigation suggested I needed to update the kernel, which I didn't want to entertain, and there is no option in the SM59 settings to turn it off and go back to classic.
I didn't get it to work in the end so it would be good to know how to solve this for future reference.
Hi Michael,
If you had read the JavaDoc you would quickly have found how to turn off fast serialization, as mentioned by Stefan below, and in the release notes the prerequisites for fast serialization support (note https://launchpad.support.sap.com/#/notes/2372888), which could have spared you hours of investigation. And you can certainly turn off fast serialization in the destination configuration as well (advanced options tab).
Best regards,
Markus
Hi Michael,
you write that you were not able to find any documentation about the *.jcoDestination and *.jcoServer property files. If you are still looking for this, please see the JCo Documentation on pages 7 & 8, which is available as a PDF file from https://support.sap.com/jco. And of course also see the contained JavaDoc from the downloaded JCo SDK archive, here especially the package description for package com.sap.conn.jco.ext and the JavaDoc for classes com.sap.conn.jco.ext.DestinationDataProvider and com.sap.conn.jco.ext.ServerDataProvider.
Kind regards,
Stefan
Hi Stefan
You are right, I missed the PDF documentation on the download page, thanks for pointing that out.
I was relying on the documentation in the zip file, and unfortunately I didn't think of going deep into the JavaDocs to find something so crucial that I would personally expect to be in either the examples documentation or the configuration section. Nonetheless, I arrived at the outcome eventually 🙂
I can see example jcoDestination and jcoServer files are included in the examples of version 3.1 which is a great hint for me.
I'll read these PDF's to see how I can improve this solution.
Thanks again for the feedback. Much appreciated.
Thanks for doing this tutorial. I have setup the instances as you mentioned, but can't login to the Windows AMI. I decrypted the Administrator password in the AWS console by passing in the downloaded pem file from the SAP portal. It successfully decrypts it, but I can't login with that username to the Front End using the Administrator user. I am a systems integrator that simply wants to test connectivity to Kafka for a PoC, so I don't have an S User to download the GUI to my laptop (which is acceptable to me if I can get that to work).
Thanks again.
Hi Steve
Cheers
Michael
Hi Michael,
After following the blog, I can get an RFC connection test to work but the STFC test fails with this error message in the trace. There must be something else I need to setup within jco but I'm not sure what.
Exception thrown [Tue Mar 02 10:00:00,516]: Exception thrown by application running in JCoServer
com.sap.conn.jco.JCoRuntimeException: (105) JCO_ERROR_APPLICATION_EXCEPTION: handler for STFC_CONNECTION was not installed
at com.sap.conn.jco.server.DefaultServerHandlerFactory.getCallHandler(DefaultServerHandlerFactory.java:66)
at com.sap.conn.jco.server.DefaultServerHandlerFactory$FunctionHandlerFactory.getCallHandler(DefaultServerHandlerFactory.java:130)
at com.sap.conn.jco.rt.DefaultServerWorker$FunctionDispatcher.handleRequest(DefaultServerWorker.java:1007)
at com.sap.conn.jco.rt.DefaultServerWorker$FunctionDispatcher.handleRequest(DefaultServerWorker.java:983)
at com.sap.conn.jco.rt.DefaultServerWorker.dispatchRequest(DefaultServerWorker.java:147)
at com.sap.conn.jco.rt.MiddlewareJavaRfc$JavaRfcServer.dispatchRequest(MiddlewareJavaRfc.java:3644)
at com.sap.conn.jco.rt.MiddlewareJavaRfc$JavaRfcServer.listen(MiddlewareJavaRfc.java:2681)
at com.sap.conn.jco.rt.DefaultServerWorker.dispatch(DefaultServerWorker.java:274)
at com.sap.conn.jco.rt.DefaultServerWorker.loop(DefaultServerWorker.java:362)
at com.sap.conn.jco.rt.DefaultServerWorker.run(DefaultServerWorker.java:231)
at java.lang.Thread.run(Thread.java:748)
Would you be able to help with this?
Hi Wendy
Never seen this error, but googling shows other people have the same problem.