Produce/consume messages in Kafka with SAP NetWeaver using the Java Connector – Part 3/3
This is the final part of my blog series; now we are going to join the dots together.
Doing a proof of concept on AWS is a bit like taking a taxi ride… every minute counts… 🙂
We now have an EC2 instance running in AWS with:
- SAP NW backend
- RFC destination setup
- Kafka setup, and we can produce and consume messages from a topic “my-kafka-topic”
- SAP JCo Server setup and connection established with the SAP NW
Now, back to the fun part.
Produce a message from SAP to Kafka
I copied the Java code from “Tutorialspoint.com” and put it inside StepByStepServer.java provided by SAP. See SimpleProducer.java from https://www.tutorialspoint.com/apache_kafka/apache_kafka_simple_producer_example.htm
(I will not take credit for this code.)
Compile and run the code
You need the following files:
Now this StepByStepServer.java file is a combination of the SAP example “StepByStepServer.java” and the producer code from the Tutorialspoint example above. I didn’t spend much time making the code pretty and neat; I just did what was necessary to make it work, so please don’t judge.
What I changed in StepByStepServer.java:
static String SERVER_NAME1 = "EXT_SERVER";
static String DESTINATION_NAME1 = "ZMP_JCO_SERVER";
In the handleRequest method, I added the following:
String message = function.getImportParameterList().getString("REQUTEXT");

// Assign the topic name to a string variable
String topicName = "my-kafka-topic";

// Create a Properties instance to hold the producer configs
Properties props = new Properties();

// Broker address
props.put("bootstrap.servers", "localhost:9092");

// Set acknowledgements for producer requests
props.put("acks", "all");

// If the request fails, the producer can automatically retry
props.put("retries", 0);

// Specify the batch size in the config
props.put("batch.size", 16384);

// Wait up to 1 ms so sends can be batched into fewer requests
props.put("linger.ms", 1);

// buffer.memory controls the total amount of memory available to the
// producer for buffering
props.put("buffer.memory", 33554432);

props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

producer = new KafkaProducer<String, String>(props);
producer.send(new ProducerRecord<String, String>(topicName, message, message));

function.getExportParameterList().setValue("RESPTEXT", "Message sent successfully");
System.out.println("Message sent successfully");
// producer.close();
The code takes the text passed from the STFC_CONNECTION function module and calls the Apache Kafka producer API with it. It’s that simple.
Compile and run. Notice that both the JCo and Kafka libraries now need to be on the classpath.
export KAFKA_HEAP_OPTS="-Xmx512M -Xms256M"
javac -cp ~/sapjco30/sapjco3.jar:/opt/kafka/libs/* StepByStepServer.java
Run the JCo Server
java -cp ~/sapjco30/sapjco3.jar:/opt/kafka/libs/*:. StepByStepServer
Start the Kafka consumer
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-kafka-topic
Call RFC function module
Here’s a link to the YouTube video showing the result
Consume a message using the Java connector client and call RFC
Now let’s try the other direction: someone changes a transaction in an outside system and publishes it to a Kafka topic, and SAP wants to know about it and do something with it.
The flow is:
- A message is produced to a Kafka topic.
- The Java client (with Java Connector) consumes the message.
- The Java client calls an SAP RFC function.
- The RFC function module does something with the message.
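This flow can be sketched end-to-end in plain Java. The Kafka consumer and the RFC call are hypothetical stand-ins here (consume and callStfcConnection), so the sketch runs without kafka-clients or sapjco3.jar on the classpath; in the real client they are a KafkaConsumer poll loop and a JCo function call.

```java
import java.util.List;

public class FlowSketch {

    // Stand-in for the Kafka consumer: in the real client this is a
    // KafkaConsumer polling the topic "my-kafka-topic".
    static List<String> consume(String topic) {
        return List.of("Hello from " + topic);
    }

    // Stand-in for the RFC call: STFC_CONNECTION simply echoes the
    // REQUTEXT it receives back in ECHOTEXT.
    static String callStfcConnection(String requText) {
        return requText;
    }

    public static void main(String[] args) {
        for (String message : consume("my-kafka-topic")) {
            // Steps 3 and 4: hand the consumed message to SAP, print the echo
            System.out.println("SAP echoed: " + callStfcConnection(message));
        }
    }
}
```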
I copied the Java code from “Tutorialspoint.com” and combined it with StepByStepClient.java provided by SAP. See SimpleConsumer.java from https://www.tutorialspoint.com/apache_kafka/
(I will not take credit for this code.)
Setup and run the JCo Client
You need the following files:
Now let’s explain.
SapKafkaConsumer.java is a copy of the SimpleConsumer.java from the Tutorialspoint example mentioned above, combined with code from StepByStepClient.java from the SAP example.
The code can already consume a message from the Kafka topic “my-kafka-topic”; I take that message and call function STFC_CONNECTION in SAP with it.
The function will echo back the text showing it has successfully received it.
In the doWork method, which is called when a message is received, I’ve added the code to call function STFC_CONNECTION. What the code does should be straightforward.
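For readers without the file at hand, the call inside doWork follows the standard JCo 3 client pattern. This is a sketch, not the exact code from the blog: "ABAP_AS" is a hypothetical destination name, the real calls are shown as comments because they require sapjco3.jar and a *.jcoDestination file, and the runnable part only mimics STFC_CONNECTION’s echo behaviour.

```java
public class DoWorkSketch {

    // With sapjco3.jar on the classpath, the doWork body looks roughly like:
    //
    //   JCoDestination dest = JCoDestinationManager.getDestination("ABAP_AS");
    //   JCoFunction fn = dest.getRepository().getFunction("STFC_CONNECTION");
    //   fn.getImportParameterList().setValue("REQUTEXT", message);
    //   fn.execute(dest);
    //   String echo = fn.getExportParameterList().getString("ECHOTEXT");
    //
    // STFC_CONNECTION echoes REQUTEXT back in ECHOTEXT, so a stand-in for
    // testing the surrounding logic can simply return its input:
    static String callStfcConnection(String message) {
        return message;
    }

    public static void main(String[] args) {
        System.out.println("ECHOTEXT: " + callStfcConnection("Hello from my-kafka-topic"));
    }
}
```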
Compile and run the Java client
javac -cp ~/sapjco30/sapjco3.jar:/opt/kafka/libs/* -Xlint:deprecation *.java
java -cp ~/sapjco30/sapjco3.jar:/opt/kafka/libs/*:. SapKafkaConsumeDemo
Produce a message to the Kafka topic
/opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-kafka-topic
Here’s a YouTube video.
In summary: it is possible, and not too difficult either.
What we now need to explore is how to productionize this solution: how to make it highly available, how to handle disaster recovery, and so on. I still have some unanswered questions about how this can handle massive volume in an enterprise environment, and whether the JCo server and client should run on a separate instance.
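On the volume question, one Kafka-native lever is worth noting: give the topic several partitions and run multiple copies of the JCo client under one consumer group, and Kafka spreads the partitions across the instances (and reassigns them if one dies). A hypothetical consumer configuration sketch, using standard Kafka consumer settings:

```properties
# All instances sharing this group.id split the topic's partitions among them
group.id=sap-jco-consumers
bootstrap.servers=localhost:9092
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
# Commit offsets manually, only after the RFC call succeeds,
# so a crashed instance does not lose in-flight messages
enable.auto.commit=false
```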
Anyway, thanks for your time; I hope you find this interesting. Leave me some comments below.
Your blog gave me an idea on how to “productionize” this solution: SAP already has an enterprise-ready JCo client/server, the SAP Business Connector (SAP BC). It can be downloaded for free, like JCo, and then installed as a permanently running daemon (Unix/Linux) or Windows service.
Granted, the installation is a bit more involved than plain JCo, but you get a number of benefits in return:
Perhaps add a little UI page to that package, where users can enter the necessary logon data for their Kafka system, and they are ready to go.
It takes a few days to read the documentation and get familiar with it, but once you know it, it’s a very powerful tool.
Wow, thanks for that. Let me take a look!
We chose a similar approach during a project in 2018.
Essentially, we used JCo server for getting data from SAP, published it into Kafka, processed it in Spark Streaming and finally pushed it into a SAP BA system using JCo client, JCo context and the transactional COMMIT/ROLLBACK BAPIs.
The customer requested Avro as the serialization format for data in Kafka, so we implemented an Avro Serde for JCo objects.
I am not sure whether the Business Connector will reach end of life, as I found some posts discussing this.
Instead, we implemented our own server using Akka to process requests in parallel and to include a scheduling and recovery process. Working with the JCo API became easier by wrapping it in Scala.
You might have a look at https://github.com/embeddedkafka/embedded-kafka for setting up a local Kafka instance. There is no need to install anything, because ZooKeeper, the Kafka brokers and the needed topics are started on the fly. We used it to implement fully automated, local end-to-end integration tests.
From my point of view, writing Kafka Connect source connectors is one step further towards a tight integration with Kafka. We have already implemented Kafka Connect type converters and tested the source connector locally using Debezium. I would be happy to find a customer to test it in a real-world Confluent Platform installation.