
SAP Data Hub, developer edition 2.4

At the end of 2017, we delivered SAP Data Hub, developer edition. This week we updated it to our newest release: SAP Data Hub, developer edition 2.4.

SAP Data Hub is a data sharing, pipelining, and orchestration solution that helps companies accelerate and expand the flow of data across their modern, diverse data landscapes (for more details, take a look at Marc’s excellent FAQ blog post).

The architecture of SAP Data Hub leverages modern container technology. Simply put, the main (technical) components of SAP Data Hub are:

  • SAP Data Hub Foundation (mandatory component, installed on Kubernetes)
  • SAP Data Hub Spark Extensions (optional component, installed on Hadoop)

SAP Data Hub, developer edition

For the developer edition, we were looking for a way to run SAP Data Hub on your local computer. We took the parts of SAP Data Hub which are, in our opinion, of most interest to developers and packaged them together with HDFS, Spark and Livy into a single Docker container image. This container image can be used with different start options. Depending on the start option, it either runs the SAP Vora Database, SAP Vora Tools and SAP Data Hub Modeler, or HDFS, Spark and Livy (the latter being required for some example pipelines and tutorials).
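
To make the start options concrete, here is a rough sketch of how such containers are started. The image name "datahub" and the Docker network "dev-net" appear in commands quoted in the comments below; the option names ("run", "hdfs") and the published port are illustrative, so please check the README shipped with the developer edition for the exact commands:

    docker network create dev-net                                        # shared network for the containers
    docker run --net dev-net --publish 127.0.0.1:8090:8090 datahub run   # SAP Vora, SAP Vora Tools, SAP Data Hub Modeler
    docker run --net dev-net datahub hdfs                                # HDFS, Spark and Livy for the example pipelines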


Now, what are the advantages of this approach?

  • You can easily run SAP Data Hub, developer edition on your local computer (be it Windows, Linux or macOS).
  • Building the container image locally typically takes a few minutes, during which you need a stable internet connection. Once the image is built, you can start a container from it in less than a minute and without network connectivity (see the sketch after this list).
  • You can build powerful data pipelines (and they can interact with all kinds of other technologies, e.g. SAP HANA, SAP API Business Hub, Kafka, any web service).
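
A minimal sketch of this build-once, start-often flow (assuming the image is built from the developer edition's Dockerfile in the current directory, with the tag "datahub" used in the commands above):

    # One-time build; downloads packages, so it needs a stable internet connection.
    docker build --tag datahub .
    # Afterwards, containers start from the local image in well under a minute, even offline.
    docker run --net dev-net datahub run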

Of course, there are also some drawbacks:

  • The SAP Data Hub, developer edition currently does not allow you to use the data governance and workflow features of SAP Data Hub.
  • Unfortunately, you cannot observe how SAP Data Hub usually containerizes and deploys data-driven applications onto Kubernetes.
  • Some of the data pipeline operators (i.e., the reusable and configurable components which you combine to build data pipelines) will not work inside the container. Most notably, the operators related to machine learning (leveraging TensorFlow) and image processing (leveraging OpenCV) currently cannot be used, at least not “out of the box”.

How to get started?

To give the SAP Data Hub, developer edition a try, visit our Tutorial Navigator, where the currently available tutorials are listed.

The tutorials give you a first idea of how to build data-driven applications with SAP Data Hub. You will learn how to create your first pipeline. You will use a message broker, HDFS as well as SAP Vora.

If you have questions, problems or proposals in the meantime, feel free to post them as comments to this blog, or to the SAP Community. We will try to answer them in a timely manner and collect frequently asked questions here.

Comments
  • Hello Thorsten,

    The SAP Data Hub, developer edition is brilliant. I got my first Data Pipeline running in about an hour.

    However, when adding port forwarding for port 5050 and trying to connect the SAP Data Hub, developer edition to my SAP Data Hub Cockpit, it validates fine, but then throws an internal error: Cannot connect to agent.

    Is this because the SAP Data Hub Adapter is not installed on the SAP Data Hub, developer edition, or does it listen on a different port? If it has not been included in the current SAP Data Hub, developer edition, could it be added?

    Best regards and many thanks in advance

    Frank

    • Hi Frank,

      the adapter is currently not installed. We decided against installing it, since it is only useful when you have the XSA part of SAP Data Hub running (which is not available as part of the developer edition at the moment). But let me try to install it and connect to it. If there are no (unsolvable) problems, I will try to get it added.

      Cheers

      Thorsten

      PS: Congratulations on your blog post series around SAP Data Hub. A very nice read!

      • Many thanks in advance, Thorsten.

        By the way, I am currently stuck connecting my SAP Data Hub Cockpit to my VORA Data Pipeline with the following error:

        I already discussed this with Axel Schuller, and he seems to remember a similar problem when he verified the SAP Data Hub installation, but suggests that this would need SAP development to look into.

        If this is in fact a known issue, could someone give me a hint on how to overcome it? The trace does not show much more detail either.

        Very best regards

        Frank

        • Hi Frank,

          does this happen when you connect to the VORA Pipeline inside the container, or to one outside it? I assume the latter.

          Indeed, this message also seems familiar to me, but I don’t recall exactly what the problem was. Can you mail me a screenshot of the connection (firstname dot last name at sap dot com)?

          Thanks

          Thorsten

  • Many thanks, Thorsten! Excellent blog and set of tutorials. The Data Hub Dev Edition provides a great environment for experimentation with SDH and Vora 2.0. Thank you!

    The only issues I ran into were seemingly related to using Docker Toolbox rather than the more current Docker. I have a 2008 Mac Pro with 64GB RAM, but the pre-2010 Xeon CPUs don’t have the VT-x instruction set, so I must use the older Docker Toolbox. Even with Docker Toolbox, I was able to get Data Hub running, using the VirtualBox VM’s IP address rather than localhost, but without Zeppelin (it wouldn’t start and get to the status loop, so I just removed --zeppelin when starting). I also couldn’t start the spotify/kafka container (error waiting for the container, timeout) to go through the later exercises, but Data Hub and Vora both worked fine in Docker Toolbox. I just wanted to post for others that may have the same issues using Docker Toolbox.
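
    For other Docker Toolbox users: the VM address that replaces localhost can be read with docker-machine (the machine name "default" is the Toolbox default and may differ on your setup):

        docker-machine ip default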

    I’m impatiently awaiting a new Mac Pro in 2018, so I “borrowed” my wife’s slightly more current 2011 MacBook Pro w/ 16GB RAM, installed Docker for Mac, and everything in your SDH Dev Edition tutorials worked flawlessly.

    I’m looking forward to future dev editions (incl Vora 2.1) covering the data governance and workflow use cases, and plan to connect it with HXE using Frank’s blogs. Thanks to you both for all your insight!

    Doug


    • Hi Doug,

      thanks for the feedback and happy new year. Hard to say why you had trouble with Zeppelin / Kafka. We tested both successfully with Docker Toolbox (on Windows though). The only immediate thought I have: did you give enough resources to the VM running Docker (see our FAQs)? We observed that the initial sizing of the VM caused us trouble. I believe on the 16GB Windows system we used for testing, it was set to 1GB only.
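
      In case it is useful to others: the Toolbox VM can be recreated with more resources using docker-machine (the machine name "default" and the sizes below are just examples; removing the VM discards its state):

          docker-machine rm default
          docker-machine create --driver virtualbox --virtualbox-memory 8192 --virtualbox-cpu-count 2 default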

      Cheers

      Thorsten

  • The SAP Data Hub, developer edition 1.2 is available. Same procedure as before… to get it, follow the tutorials.

    There are not too many changes. SAP Vora tables no longer have to be recreated after restarting the Docker container.

  • Hi Thorsten,

    I have Data Hub Dev Edition 1.4 running on Ubuntu Linux with Docker 18.06.0-ce in a VMware environment. I can access the Data Hub Pipeline Modeler and Vora Tools UIs just fine within the Ubuntu system, but port forwarding outside the VM does not appear to be happening.

    I can ping the IP of the VM from another system on the same network; the VM is set to bridge to the network. But I cannot telnet to any of the ports exposed through the docker run command’s --publish parameters.

    Is there another step I’m missing?

    Also, is there a timeline for adding some of the other features discussed, like the workflow capabilities and other operators currently not supported? I would really like to be able to tie this in to Data Services, for example, and demonstrate using ML and Data Services together.

    Thanks,

    Tony

    • Hi Thorsten,

      I played with the ‘docker run’ command and removed the 127.0.0.1 prefix from the publish statements, and now I’m able to access the ports outside the Ubuntu VM. Further testing to go, but that’s a good start!
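
      For readers hitting the same symptom: the difference is only the bind address in the --publish mapping (the image name, start option and port below are illustrative):

          docker run --publish 127.0.0.1:8090:8090 datahub run   # loopback only: reachable just from inside the Ubuntu VM
          docker run --publish 8090:8090 datahub run             # all interfaces: reachable from other machines on the network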

      Would still love to hear more info on future plans for the Data Hub Dev Edition if you can share them.

      Tony


    • Hi Tony,

      we are currently discussing which features to support for the next version of the developer edition, but we have not yet reached a final conclusion.

      One thing you can already do today is use the trial edition – https://blogs.sap.com/2018/04/26/sap-data-hub-trial-edition/. You need an account on Google Cloud Platform to give it a try. It includes all features of SAP Data Hub.

      Best regards
      Thorsten

  • Hi Thorsten,

    do you have an updated readme file available? I followed the ‘Adding Apache Zeppelin’ instructions but received the message ‘livy interpreter does not exist at zeppelin server. terminating’ after running

    docker run --net dev-net datahub zeppelin [ZEPPELIN_URL]

    with the Zeppelin URL pointing to the Apache Livy server on port 8998. What am I missing?

    Regards,

    Thorsten

  • Hi,

    I tested the Data Hub and everything worked fine (examples etc.).

    At the moment we are gathering use cases. Is Data Hub able to extract data directly from an SAP R/3 system, transform it, and store it in R/3 again?

    Thanks in advance!


    Regards


    Mitch

    • Hi Mitch,

      to be honest, I don’t quite understand what you would like to do in detail. You are rather flexible in the modeling environment, but what you describe does not sound like a good match for SAP Data Hub at the moment. I also find “transforming” data in a transactional system a bit dangerous.

      I do understand that one might want to extract data from R/3, then analyze / use it and “in return” write some “other” data back. But “transforming” the “original” data… I am not so sure about the exact requirement you would like to solve.

      Cheers
      Thorsten

  • Hello Thorsten,


    Thanks for the blog!

    I’m facing an issue while starting a separate HDFS container. Below is the log:


    2019-01-31T06:13:22+00:00 TLS is set to false
    2019-01-31T06:13:22+00:00 -------- executing status_loop --------
    LIVY is down. restart triggered
    2019-01-31T06:13:29+00:00 -------- executing LIVY_start --------
    livy-server running as process 944.  Stop it first.
    2019-01-31T06:13:40+00:00 TLS is set to false
    2019-01-31T06:13:40+00:00 HDFS Namenode:                 tcp://172.21.0.3:9000 http://172.21.0.3:50070
    2019-01-31T06:13:40+00:00 HDFS Datanode:                 tcp://172.21.0.3:50010
    2019-01-31T06:13:40+00:00 HTTPFS:                        webhdfs://172.21.0.3:14000
    2019-01-31T06:13:40+00:00 Apache livy:                   http://172.21.0.3:8998


    Also, the Data Hub tools and VORA URLs are not accessible from the browser.


    Regards,

    Anil