Frank Legler

SAP HANA Vora – Troubleshooting

This blog is deprecated. As of Vora 1.3, a Troubleshooting Guide has been added to the official documentation at http://help.sap.com/hana_vora

(last update January 2017)

This blog describes issues that can occur when installing, configuring, or running SAP HANA Vora. It explains the root causes of those issues and, where possible, provides solutions or workarounds. The current Vora version is 1.2 (released on March 31, 2016).

In case you run into issues or questions that cannot be resolved with this troubleshooting page and the documentation:

  • If you are an SAP customer, please open a customer message in component HAN-VO for Vora-related issues.
  • It is also possible to ask questions on Stack Overflow. Please tag your questions with ‘vora’.
    This community-based help is primarily intended for non-SAP customers using the Vora Dev Edition in AWS.
  • For issues where the SAP HANA Spark Controller does not connect to Vora, please open a customer message in component HAN-DB-SDA.

How to?

  • How to find log files of Vora services?
    • Vora services save their log files to /var/log/vora-* directories: /var/log/vora-discovery, /var/log/vora-dlog, /var/log/vora-catalog, /var/log/vora-v2server, /var/log/vora-thriftserver, /var/log/vora-tools
  • How to check the status of the Vora discovery service?
    • The Vora Discovery service is based on HashiCorp’s Consul. It is used to register and query service endpoints and to perform health checks. The current status of Consul can be checked via the Web UI at http://<host>:8500/ui (see also the command-line sketch after this list).
    • The log files are written to /var/log/vora-discovery.
    • More information is documented in the SAP HANA Vora Installation and Administration Guide.
  • How to delete a Vora datasource table from the Vora catalog?
    • Make sure the table is registered in the Spark Context, e.g. by using the REGISTER TABLE command
      • REGISTER ALL TABLES USING com.sap.spark.vora
      • REGISTER TABLE <tablename> USING com.sap.spark.vora
      • Note: If the table in the catalog is incorrectly defined, the REGISTER command might fail. In that case you can add an option that avoids loading the table.
        • REGISTER ALL TABLES USING com.sap.spark.vora OPTIONS (eagerLoad "false")
        • REGISTER TABLE <tableName> USING com.sap.spark.vora OPTIONS (eagerLoad "false")
    • Drop the table
      • DROP TABLE <tablename>
    • To verify if the table is deleted from the Vora catalog you can run this command
      • SHOW TABLES USING com.sap.spark.vora
  • How to start the Vora Thriftserver in yarn-client mode?
    • Add ‘--master yarn-client’ to the Thriftserver setting ‘vora_thriftserver_extra_arguments’ (e.g. via Ambari -> Vora Thriftserver -> Configs -> Advanced vora-thriftserver-config -> vora_thriftserver_extra_arguments -> add --master yarn-client)
    • Also possible: change the global setting for all Spark applications, ‘spark.master yarn-client’, in spark-defaults.conf (e.g. via Ambari -> Spark -> Config -> Custom spark-defaults -> Add Property -> key spark.master with value yarn-client)
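
For the Discovery and log-file checks above, here is a minimal shell sketch; it assumes Consul’s standard HTTP status API on port 8500 and the log directories listed above:

># Check the Vora Discovery service (Consul) from the command line.
# The elected cluster leader; an empty result means no leader:
curl -s http://localhost:8500/v1/status/leader
# The Consul servers participating in the quorum:
curl -s http://localhost:8500/v1/status/peers
# Show the newest log file of each Vora service:
for d in /var/log/vora-*; do
  echo "== $d =="
  ls -t "$d" | head -1
done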

Issues Overview

Issues related to Vora 1.2

[13] Loading a table from S3 fails with VoraException

Affected Vora versions: 1.1, 1.2
Symptoms: In the spark-shell you see an error similar to the following.
...
Caused by: sap.hanavora.jdbc.VoraException: HL(9): Runtime error. (v2 URL/S3 Plugin: Exception at opening http://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx:
v2url plugin: Exception: *ERROR_TYPE*
...

Root cause: HTTP/HTTPS connection problems.
Solution: Depending on *ERROR_TYPE*:

  • Could not connect to server: This is a connection problem; the Vora engine cannot reach the internet, which might indicate a proxy error. Make sure your internet connection is working and that the proxy environment settings (http_proxy, https_proxy) are passed correctly to Vora. The proxy setup is described in detail in the SAP HANA Vora Installation and Administration Guide in Chapter “3.1 Configure Proxy Settings”.
  • Peer certificate cannot be authenticated with given CA certificates: This is a problem with your certificates or your OpenSSL installation. The connection to the server has been established but cannot be verified with an SSL certificate. This can mean: a) the server you are communicating with is not the server it pretends to be (a wrongly configured connection, but possibly also a man-in-the-middle attack), or b) Vora cannot find the necessary certificates on your system. A certificate bundle is searched for at /etc/pki/tls/certs/ca-bundle.crt. If your bundle is in another place, copy or link it to /etc/ssl/certs/vora-ca-bundle.crt. (If you are not using a bundle, certificates are searched for in /etc/ssl/certs/ as OpenSSL 0.x x509 hashes; this does not work with OpenSSL 1.x installed, in which case you probably have the matching certificates anyway, so this method is not recommended.)
  • Error Code 403 when accessing URL / Access denied / S3 Expired: The connection to the server was established and SSL was validated (if HTTPS), but the server denied access. This mostly means you do not have permission to access the resource. With S3 it can also mean that the query ran out of time; increase the expiry time to fix this.
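
To narrow down which *ERROR_TYPE* applies, the same HTTP/HTTPS access can be reproduced from the Vora host with curl; a hedged sketch with proxy, bucket, and object as placeholders:

># Use the same proxy settings the v2server should receive:
export http_proxy=http://<proxy_host>:<proxy_port>
export https_proxy=$http_proxy
# Verbose output shows the connection, the SSL handshake, and the HTTP status (e.g. 403):
curl -v https://<bucket>.s3.amazonaws.com/<object> -o /dev/null
# Check where the CA bundle is expected:
ls -l /etc/pki/tls/certs/ca-bundle.crt /etc/ssl/certs/vora-ca-bundle.crt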

[16] Reading from HDFS fails with error “SIMPLE authentication is not enabled.  Available:[TOKEN, KERBEROS]”

Affected Vora versions: 1.0, 1.1, 1.2
Symptoms: The following error can be seen when loading from a file in HDFS (e.g. during a CREATE TABLE statement).
>...
15/12/10 17:29:51 ERROR client.VoraClient: [Velocity[<host>:2202]}] Unexpected JDBC exception
com.sap.spark.vora.client.jdbc.VoraJdbcException: [Velocity[<host>:2202]] Unknown error when loading partition: VoraTablePartition(VoraFileInfo(<file>,;,NULL,",<format>,None),VoraFilePartition(0,36))
at com.sap.spark.vora.client.jdbc.VoraJdbcClient.loadPartitionToTable(VoraJdbcClient.scala:240)
...
Caused by: sap.hanavora.jdbc.VoraException: HL(9): Runtime error. (v2 HDFS Plugin: Exception at opening hdfs://<host>:8020//<file>:
SIMPLE authentication is not enabled.  Available:[TOKEN, KERBEROS]
(c++ exception))
at sap.hanavora.jdbc.driver.HLMessage.buildException(HLMessage.java:88)
...

Root cause: The Hadoop cluster is enabled for Kerberos. However, the current version of Vora does not support Kerberos.
Workaround: Currently, the only workaround is to disable Kerberos in the cluster. We are working on supporting Kerberos in the future, but it is not yet determined when that will be available.

[19] Vora Tools UI shows error “org.apache.spark.sql.AnalysisException: no such table <table>; with error code 0, status TStatusCode_ERROR_STATUS”

Affected Vora versions: 1.2
Symptoms: The Vora Tools UI shows the following error when accessing a table:
>org.apache.spark.sql.AnalysisException: no such table <table>; with error code 0, status TStatusCode_ERROR_STATUS
Root cause: The table has been created outside of the Vora Tools (e.g. in the spark-shell) and needs to be manually registered in the Vora Thriftserver that is used by the Vora Tools.
Solution: Register the tables in the Vora Thriftserver by running the following command in the Vora Tools SQL UI:
>REGISTER ALL TABLES USING com.sap.spark.vora

[20] Vora services fail after initial deployment and the log files are empty

Affected Vora versions: 1.2
Symptoms: The Vora services Discovery Server, DLOG, and Catalog were initially deployed and fail. The log file directory contains a single empty file, e.g.
>[root@someserver.somedomain.corp vora-dlog]# ls -ltr
total 0
-rw-r--r-- 1 vora hadoop 0 Apr  5 14:19 vora-dlog-someserver.somedomain.corp.log

Root cause: This issue has been observed if the default network interface ‘eth0’ does not exist or cannot (or should not) be used.
Solution: Determine the network interface name on your system, e.g. using the command ‘ifconfig’. The example below shows an interface named ‘someif’:
>somehost:~ # ifconfig
someif    Link encap:Ethernet  HWaddr B4:B5:2F:5F:C6:50
inet addr:10.12.13.14  Bcast:10.12.13.255  Mask:255.255.254.0
...
Update the parameters using the cluster manager and set them to the name of the interface you want to use. Afterwards restart the services. For more information see SAP HANA Vora Installation and Administration Guide.

  • Vora Discovery Server -> vora_discovery_bind_interface = someif
  • Vora DLOG -> vora_dlog_bind_interface = someif
  • Vora Catalog -> vora_catalog_bind_interface = someif

[21] Vora services fail with “ImportError: No module named voralib”

Affected Vora versions: 1.2
Symptoms: Vora services, such as the Discovery Service, Catalog, DLOG, and v2server, do not start. The log file shows this error:
>Fail: Execution of '/var/lib/ambari-agent/cache/stacks/HDP/2.2/services/vora-discovery/package/scripts/control.py stop_vora_discovery_server' returned 1. Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/2.2/services/vora-discovery/package/scripts/control.py", line 32, in <module>
from voralib import *
ImportError: No module named voralib

Root cause: The Vora base libraries are missing. They need to be installed on each node that runs a Vora service.
Solution: Install the Vora base library as per instructions in the SAP HANA Vora Installation and Administration Guide.

[22] DLOG service fails with error “error while loading shared libraries: libaio.so.1”

Affected Vora versions: 1.2
Symptoms: The Vora DLOG does not start and the log file in /var/log/vora-dlog contains the following error:
>In an Ambari deployment
[2016-04-05 14:35:16.374357]Creating SAP Hana Vora Distributed Log store ...
[2016-04-05 14:35:16.377059]Error while creating dlog store:
[2016-04-05 14:35:16.377125]
[2016-04-05 14:35:16.377172]/var/lib/ambari-agent/cache/stacks/HDP/2.3/services/vora-dlog/package/../../vora-base/package/lib/vora-dlog/bin/v2dlog_format: error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory
In a Cloudera deployment
[2016-04-07 21:24:55.544047]Creating SAP Hana Vora Distributed Log store ...
[2016-04-07 21:24:55.547848]Error while creating dlog store:
[2016-04-07 21:24:55.547947]
[2016-04-07 21:24:55.548015]/opt/cloudera/parcels/SAPHanaVora-1.2.48.113/lib/vora-dlog/bin/v2dlog_format: error while loading shared libraries: libaio.so.1: cannot open shared object file: No such file or directory

Root cause: The libaio library that is needed by the DLOG is not installed.
Solution: Install libaio as documented in the SAP HANA Vora Installation and Administration Guide in chapter 2.3.7 DLog Server Requirements.
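
As a quick sketch, checking for and installing the library; the exact package names depend on your distribution (libaio on RHEL and libaio1 on SLES are assumptions to verify):

># Is the shared object already known to the dynamic linker?
ldconfig -p | grep libaio
# Install it if missing:
sudo yum install libaio       # RHEL
sudo zypper install libaio1   # SLES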

[23] Vora SQL fails with “Could not connect to Consul Agent on localhost:8500” or “No cluster leader”

Affected Vora versions: 1.2
Symptoms: When executing SQL queries the following errors are shown:
>scala> vc.sql(testsql)
com.sap.spark.vora.discovery.DiscoveryException: Could not connect to Consul Agent on localhost:8500 : null
at com.sap.spark.vora.discovery.ConsulDiscoveryClient$ConsulDiscoveryClient.(ConsulDiscoveryClient.scala:38)
...
OR
scala> vc.sql(testsql)
OperationException{statusCode=500, statusMessage='Internal Server Error', statusContent='No cluster leader'}
at com.ecwid.consul.v1.health.HealthConsulClient.getHealthServices(HealthConsulClient.java:96)
...

Root cause: No Vora Discovery Server or Agent is running on the host, or the Discovery Server is not running correctly. The Vora Discovery Service uses Consul (from HashiCorp) to register services. Each host needs to run either a Consul server or a Consul agent (mutually exclusive, as both listen on port 8500); at least 3 Consul servers are needed to elect a leader, and hosts without a Discovery Server should run only a Discovery Agent. Vora 1.2 has a different architecture than Vora 1.1, with many new services; please see What’s New in Vora 1.2 and the SAP HANA Vora Installation and Administration Guide.
Solution: Install the Vora Discovery Servers and Agents according to the installation instructions in the SAP HANA Vora Installation and Administration Guide.
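
Whether a Consul server or agent is actually running on a host can be verified from the shell; a small sketch using Consul’s standard HTTP status endpoints:

># Is anything listening on the Consul HTTP port 8500?
netstat -tupln | grep 8500
# Has a cluster leader been elected? An empty string corresponds to 'No cluster leader'.
curl -s http://localhost:8500/v1/status/leader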

[26] Vora Discovery Service does not run in an HDFS NameNode HA environment

Affected Vora versions: 1.2
Symptoms: At least one of the 3 Vora Discovery Servers is not running. In the log file in /var/log/vora-discovery the following can be seen (MYNAMESERVICE is the name of your NameService ID):
>[2016-04-19 21:58:35.045916]Service registry initial state for namenode:
[2016-04-19 21:58:35.052422][]
[2016-04-19 21:58:35.052548]Trying 0
[2016-04-19 21:58:35.056475]Result: (True, [])
[2016-04-19 21:58:36.761022]Registering namenode MYNAMESERVICE
...
[2016-04-19 22:08:17.447824]Result: (False, '')
[2016-04-19 22:08:27.458234]Trying 59
[2016-04-19 22:08:27.458478]Register namenode.MYNAMESERVICE
.39
[2016-04-19 22:08:27.461176]Result: (False, '')
...

Root cause: Vora 1.2 does not yet fully support a NameNode HA environment.
Workaround: To run the Vora Discovery Service, perform the following workaround before deploying the Vora Discovery Server: add the hostname and port of your active NameNode to the script vora-discovery/package/scripts/control.py
># cat vora-discovery/package/scripts/control.py
...
def registerNamenodeService():
    ...
    (nn_host, _, nn_port) = nn_machine.partition(':')
    ## manually define active namenode host and port ##
    nn_host = "<active_namenode_url>"
    nn_port = "<active_namenode_port_default_is_8020>"
    ## end of manual change ##
    log('Registering namenode ' + nn_host + ':' + nn_port)
...

Notes:

  • When changing the script
    • Enclose both host and port in double quotes.
    • Indent the additional lines by 4 spaces, identical to the indentation of the surrounding lines.
  • This workaround does not support automatic detection of a NameNode failover. If your active NameNode changes, the script needs to be adapted to point to the new active NameNode (a sketch for finding the active NameNode follows these notes).
  • It is planned to automatically detect the NameNode HA environment in the next Vora version.
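
To find the currently active NameNode for the manual change above, the standard HDFS HA admin commands can be used; in this sketch MYNAMESERVICE, nn1, and nn2 are placeholders for your NameService and NameNode IDs:

># List the NameNode IDs configured for the nameservice:
hdfs getconf -confKey dfs.ha.namenodes.MYNAMESERVICE
# Query each ID; the active NameNode reports 'active', the other 'standby':
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2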

[29] Vora Modeler UI shows error “Object does not exist” for existing table

Affected Vora versions: 1.2
Symptoms: The Modeler UI shows “Object does not exist” for an existing table.
Root cause: A program error prevents correct handling of tables whose names are not uppercase.
Workaround: Drop and recreate the tables with uppercase names (in the CREATE TABLE statement as well as in the ‘tablename’ OPTION).
Solution: It is planned to address this issue in the next Vora version.

[31] SQL commands cause error “The current Vora version does not support parallel loading of partitioned tables”

Affected Vora versions: 1.2
Symptoms: A SQL query ends in the following error:
>java.lang.RuntimeException: The current Vora version does not support parallel loading of partitioned tables. Please wait until the previous partitioned tables are loaded, then issue your query again.
at com.sap.spark.vora.client.VoraClient$$anonfun$5.apply(VoraClient.scala:105)
at com.sap.spark.vora.client.VoraClient$$anonfun$5.apply(VoraClient.scala:101)
...

Root cause: The error is due to a program error in Vora 1.2. It can occur when partitioned tables are loaded in parallel. It has been observed after the v2servers were restarted, the partitioned tables had not yet been reloaded into the v2servers, and a query across multiple partitioned tables was executed. Once the situation occurs, the Vora catalog stays in an inconsistent state and needs to be cleaned up.
Workaround: The situation can be avoided by reloading partitioned tables after the v2servers have been restarted:
>com.sap.spark.vora.client.ClusterUtils.markAllHostsAsFailed()
REGISTER TABLE <tablename> USING com.sap.spark.vora;
(repeat for all partitioned tables)

Cleaning up the Vora catalog once the situation has occurred:

  • Option 1: Delete partitioned tables

>REGISTER ALL TABLES USING com.sap.spark.vora OPTIONS (eagerLoad "false")
DROP TABLE <tablename>  (repeat for all partitioned tables)

  • Option 2: Delete the Vora persistence
    • Note: This workaround will delete all tables stored in the Vora Catalog

>1. shut down DLOG, Discovery, Catalog, v2servers
2. delete Vora persistence on each node
sudo rm -rf /var/local/vora*
3. Start up DLOG, Discovery, Catalog, v2servers

Solution: It is planned to address this issue in the next Vora version.

[32] Vora Catalog produces many v2catalog_client* files

Affected Vora versions: 1.2
Symptoms: On the node that hosts the Vora Catalog, many files named v2catalog_client*.glf are produced. Depending on the Ambari version and configuration, those files can for example be found in the folders /var/lib/ambari-agent/tmp/log/, /root/log/, or /log. If too many files are produced, the node could run out of inodes. To check the number of available inodes, run ‘df -i’.
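
A short sketch of both checks; the Ambari tmp directory is just one of the possible locations listed above:

># Count the leaked log files:
ls /var/lib/ambari-agent/tmp/log/v2catalog_client*.glf 2>/dev/null | wc -l
# Check the inode usage of the affected filesystem:
df -i /var/lib/ambari-agent/tmp
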
Root cause: Due to a program error the files are produced and not cleaned up.
Workaround: See SAP Note 2322686 – Vora Catalog produces many v2catalog_client* files
Solution: It is planned to address this issue in the next Vora version.

[33] Thriftserver fails in CDH with NoClassDefFoundError: HiveThriftServer2

Affected Vora versions: 1.2
Symptoms: The Vora Thriftserver fails and the log shows the following error:
>Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/spark/sql/hive/thriftserver/HiveThriftServer2
at org.apache.spark.sql.hive.thriftserver.SapThriftServer.main(SapThriftServer.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
...
Caused by: java.lang.ClassNotFoundException: org.apache.spark.sql.hive.thriftserver.HiveThriftServer2
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
...

Root cause: The Vora Thriftserver is based on the Spark Thriftserver. Cloudera does not ship the Spark Thriftserver, so starting the Vora Thriftserver fails.
Workaround: See SAP Note 2332867 – Thriftserver fails in CDH with NoClassDefFoundError: HiveThriftServer2 and SAP Note 2284507 – SAP HANA Vora 1.2 Release Note
Solution: It is planned to address this issue in the next Vora version.

Issues related to the SAP HANA Spark Controller

[14] Spark Controller raises ClassNotPersistableException: The class “org.apache.hadoop.hive.metastore.model.MVersionTable” is not persistable.

Symptoms: The following error can be seen when starting the Spark Controller.
>...
Caused by: org.datanucleus.exceptions.ClassNotPersistableException: The class "org.apache.hadoop.hive.metastore.model.MVersionTable" is not persistable. This means that it either hasnt been enhanced, or that the enhanced version of the file is not in the CLASSPATH (or is hidden by an unenhanced version), or the Meta-Data/annotations for the class are not found.
at org.datanucleus.ExecutionContextImpl.assertClassPersistable(ExecutionContextImpl.java:5499)
...

Root cause: Hive is configured to use a Derby-based metastore.
Solution: The hive-site.xml for Vora should only contain the property for the Hive metastore (hive.metastore.uris). Make sure that the Hive metastore is running.
><configuration>
...
<property>
<name>hive.metastore.uris</name>
<value>thrift://<host>:9083</value>
</property>
...
</configuration>

[15] Yarn NodeManager with Spark Controller fails with java.lang.ClassNotFoundException: Class org.apache.spark.network.yarn.YarnShuffleService

Symptoms: The following error can be seen in the Yarn NodeManager log file (/var/log/hadoop-yarn/yarn/yarn-yarn-nodemanager-<host>.log) when starting the Spark Controller
>...
2015-12-09 16:20:51,012 FATAL containermanager.AuxServices (AuxServices.java:serviceInit(145)) - Failed to initialize spark_shuffle
java.lang.RuntimeException: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.spark.network.yarn.YarnShuffleService not found
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2227)
...

Root cause: When dynamic allocation is enabled in Spark (spark.dynamicAllocation.enabled=true), Spark needs access to the shuffle service, which is implemented in spark-<version>-yarn-shuffle.jar.
Solution: Add the spark-<version>-yarn-shuffle.jar to the classpath of all NodeManagers in your cluster, as shown in the sketch below.
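
A hedged sketch of one way to do this on an HDP node; the jar location and the NodeManager library directory are assumptions that must be adjusted to your distribution:

># Locate the shuffle service jar shipped with Spark:
find /usr/hdp -name 'spark-*-yarn-shuffle.jar' 2>/dev/null
# Copy it into a directory on the NodeManager classpath, then restart the NodeManager.
# (yarn.nodemanager.aux-services must map spark_shuffle to
# org.apache.spark.network.yarn.YarnShuffleService, the class from the error above.)
sudo cp /usr/hdp/current/spark-client/lib/spark-*-yarn-shuffle.jar /usr/hdp/current/hadoop-yarn-nodemanager/lib/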

[18] Spark Controller does not show Vora tables

Affected Vora versions: 1.2
Symptoms: The Spark Controller is running, but /var/log/hanaes/hana_controller.log does not show the “Picked up HanaESVoraContext” message and in HANA you do not see any Vora tables in the remote source.
Root cause: Either the wrong version of the Spark Controller is used or the Spark Controller is not correctly configured to connect to Vora.
Solution: (1) Run the correct Spark Controller version. With Vora 1.2 you need at least Spark Controller 1.5 Patch 5 (HANASPARKCTRL00P_5-70001262.RPM, which includes controller-1.5.8.jar). Earlier versions of the Spark Controller are not compatible with Vora 1.2. This version of the Spark Controller also requires a HANA system on SPS11 or later. (2) Correctly configure the Spark Controller. Follow the instructions in the SAP HANA Vora Installation and Administration Guide in Chapter “2.9 Connect SAP HANA Spark Controller to SAP HANA Vora”. Please be aware of the following additional step that is currently only documented in SAP Note 2220859 – SAP HANA Vora Documentation Corrections: in the Spark Controller configuration file hanaes-site.xml, which is normally located in the /usr/sap/spark/controller/conf directory, change the value of the property sap.hana.hadoop.datastore from ‘hive’ to ‘vora’. It should look like this:
><property>
<name>sap.hana.hadoop.datastore</name>
<value>vora</value>
<final>true</final>
</property>

[24] Spark Controller fails with “java.net.BindException: Address already in use”

Symptoms: The following error can be seen in Spark Controller log (/var/log/hanaes/hana_controller.log)
>...
Exception in thread "main" java.security.PrivilegedActionException: java.lang.reflect.InvocationTargetException
at java.security.AccessController.doPrivileged(Native Method)
...
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
...
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
...

Root cause: Another process might be using the Spark Controller port 7860 (possibly a previously started Spark Controller).
Solution: If a process is using port 7860, you can find its PID with ‘netstat -tupln | grep 7860’ and check what is running with ‘ps -efa | grep <PID>’. Kill the process and then try starting the Spark Controller again.
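
The complete check as a sketch (7860 is the Spark Controller port named above):

># Find the PID of the process bound to port 7860:
netstat -tupln | grep 7860
# Inspect what that process is:
ps -efa | grep <PID>
# If it is a stale Spark Controller, stop it before starting the controller again:
kill <PID>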

[25] Spark Controller fails with “Required executor/AM memory is above the max threshold of this cluster”

Symptoms: The following error can be seen in Spark Controller log (/var/log/hanaes/hana_controller.log)
>...
2015-12-15 11:15:38,716 [ERROR] Error initializing SparkContext.
java.lang.IllegalArgumentException: Required executor memory (1024+384 MB) is above the max threshold (512 MB) of this cluster!
at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:201)
... or
2015-12-15 11:30:09,020 [ERROR] Error initializing SparkContext.
java.lang.IllegalArgumentException: Required AM memory (512+384 MB) is above the max threshold (512 MB) of this cluster!
at org.apache.spark.deploy.yarn.Client.verifyClusterResources(Client.scala:206)
...

Root cause: The Spark Controller uses default values for the Spark executor memory (1024 MB) and the Spark application master memory (512 MB), which can exceed the maximum threshold configured in the cluster.
Solution: To change the values for the executor and AM memory, add them to the hanaes-site.xml file. The example below sets both values to 100 MB.
>...
<property>
<name>spark.executor.memory</name>
<value>100m</value>
<final>true</final>
</property>
<property>
<name>spark.yarn.am.memory</name>
<value>100m</value>
<final>true</final>
</property>
...

[27] Spark Controller fails with “Yarn application has already ended! It might have been killed or unable to launch application master.”

Symptoms: The Spark Controller does not start and the log file /var/log/hanaes/hana_controller.log shows an error similar to this:
>...
16/04/12 18:54:22 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:125)
...
16/04/12 18:54:22 ERROR Utils: Uncaught exception in thread SAPHanaSpark-akka.actor.default-dispatcher-4 java.lang.NullPointerException
at org.apache.spark.network.netty.NettyBlockTransferService.close(NettyBlockTransferService.scala:152)
...

Root cause: The Spark Controller runs as a Spark application and uses Yarn for resource management. The error indicates a problem in Yarn.
Solution: Check the Yarn logs for errors. You can use the Yarn ResourceManager UI at http://<yarn_ResourceMgr>:8088 to access the log files conveniently (Ambari provides a Quick Link via Ambari -> Yarn -> Quick Links -> Resource Manager UI). In the Resource Manager UI, find the line for the last Spark Controller run, click the ‘Application ID’ link in the ID column (it looks similar to application_1462809397144_0001), click the ‘Logs’ link on the next screen, and check the stderr and stdout logs. A typical reason for seeing this error during Spark Controller startup is issue [17] Yarn NodeManager fails with error “bad substitution”.
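
The same logs can also be fetched from the command line instead of the UI; a sketch using the standard Yarn CLI (the grep pattern and the application ID are examples, not fixed values):

># Find the application ID of the last Spark Controller run:
yarn application -list -appStates ALL | grep -i hanaspark
# Fetch its aggregated stderr/stdout logs:
yarn logs -applicationId application_1462809397144_0001 | less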

[30] Spark Controller does not start (log ends with “Server: Starting Spark Controller”)

Symptoms: The Spark Controller ends without an error message. The log file /var/log/hanaes/hana_controller.log looks similar to this:
>SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/sap/spark/controller/lib/external/spark-assembly-1.5.2.2.3.4.0-3485-hadoop2.7.1.2.3.4.0-3485.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.4.0-3485/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/05/24 17:32:53 INFO HanaESConfig: Loaded HANA Extended Store Configuration
Found Spark Libraries. Proceeding with Current Class Path
16/05/24 17:32:54 INFO Server: Starting Spark Controller

Root cause: This situation can happen if the Vora datasources JAR file is not available in the Spark Controller library folder.
Solution: Copy the Vora datasources JAR file spark-sap-datasources-<VERSION>-assembly.jar to the folder /usr/sap/spark/controller/lib/ (see Vora Installation and Administration Guide -> Chapter “2.9 Connect SAP HANA Spark Controller to SAP HANA Vora”). In an Ambari deployment with HDP 2.3 and Vora 1.2 the following command could be used:
>cp /var/lib/ambari-agent/cache/stacks/HDP/2.3/services/vora-base/package/lib/vora-spark/lib/spark-sap-datasources-1.2.33-assembly.jar /usr/sap/spark/controller/lib/
Alternatively, the file /usr/sap/spark/controller/conf/hana_hadoop-env.sh can be adapted to point to the Vora datasources JAR file, e.g.
>cat /usr/sap/spark/controller/conf/hana_hadoop-env.sh
...
#export HANA_SPARK_ASSEMBLY_JAR= <Location to spark assembly>
export HANA_SPARK_ADDITIONAL_JARS=/var/lib/ambari-agent/cache/stacks/HDP/2.3/services/vora-base/package/lib/vora-spark/lib/spark-sap-datasources-1.2.33-assembly.jar
...
Afterwards the Spark Controller log should look similar to this:
>SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/sap/spark/controller/lib/spark-sap-datasources-1.2.33-assembly.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/sap/spark/controller/lib/external/spark-assembly-1.5.2.2.3.4.0-3485-hadoop2.7.1.2.3.4.0-3485.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/hdp/2.3.4.0-3485/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/05/24 17:36:22 INFO HanaESConfig: Loaded HANA Extended Store Configuration
Found Spark Libraries. Proceeding with Current Class Path
16/05/24 17:36:22 INFO Server: Starting Spark Controller
16/05/24 17:36:42 INFO CommandRouter: Connecting to Vora Engine
16/05/24 17:36:42 INFO CommandRouter: Initialized Router
16/05/24 17:36:42 INFO CommandRouter: Server started

General issues

[17] Yarn NodeManager fails with error “bad substitution”

Symptoms: The following error can be seen in the Yarn NodeManager log file (/var/log/hadoop-yarn/yarn/yarn-yarn-nodemanager-<host>.log)
>...
2015-12-02 19:26:11,968 INFO  nodemanager.ContainerExecutor (ContainerExecutor.java:logOutput(286)) - Stack trace: ExitCodeException exitCode=1: /hadoop/yarn/local/usercache/vora/appcache/application_1448856797920_0013/container_e02_1448856797920_0013_01_000005/launch_container.sh: line 23: :<lots_of_library_paths>:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure: bad substitution
...

Root cause: The key mapreduce.application.classpath in /etc/hadoop/conf/mapred-site.xml contains a variable which is invalid in bash. This issue has only been observed with HDP (Hortonworks) and Ambari.
Solution: Either set a value for hdp.version or delete the library entry referencing hdp.version if it is not needed. Instructions on how to set a value for hdp.version in the custom yarn-site.xml in Ambari can be found in the SAP HANA Vora Installation and Administration Guide in Section “2.8 Install the SAP HANA Vora Zeppelin Interpreter”.
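
To confirm this root cause on a node, you can look for the unresolved variable and the value to substitute; a small sketch (hdp-select is assumed to be available on HDP nodes):

># Does the classpath still contain the bash-invalid variable?
grep -F '${hdp.version}' /etc/hadoop/conf/mapred-site.xml
# The concrete HDP version string to use instead:
hdp-select versions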

[28] Removing a Vora service in Ambari fails with ‘Cannot remove <SERVICE_NAME>’

Symptoms: When removing a service in Ambari the following error occurs:
># curl -u admin:<PASSWORD> -X DELETE -H 'X-Requested-By:admin' http://<AMBARI_HOST>:8080/api/v1/clusters/<CLUSTER_NAME>/services/<SERVICE_NAME>
{
"status" : 500,
"message" : "org.apache.ambari.server.controller.spi.SystemException: An internal system exception occurred: Cannot remove <SERVICE_NAME>. Desired state STARTED is not removable.  Service must be stopped or disabled."
}

Root cause: Either the service is not stopped, or Ambari considers the service not stopped even though it appears stopped in the Ambari UI.
Solution: First ensure all components of the service are stopped. If the error still occurs, follow the steps from the Ambari documentation to move the service into status stopped. Usually the first command in section “2. Ensure the service is stopped” should be sufficient, but others might be needed too.
>curl -u admin:<PASSWORD> -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"Stop Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://<AMBARI_HOST>:8080/api/v1/clusters/<CLUSTER_NAME>/services/<SERVICE_NAME>

Issues related to older Vora versions (Vora 1.0, Vora 1.1)

[1] Error VelocityJdbcBadStateException “Cannot acquire a connection”

Affected Vora versions: 1.0, 1.1
Symptoms: In the spark-shell you see the following error.
>15/09/09 03:50:31 WARN client.TableLoader: [Velocity[<host>:2202]] Error dropping schema from server: [Velocity[<host>:2202]] Cannot acquire a connection
com.sap.spark.velocity.client.VelocityJdbcBadStateException: [Velocity[<host>:2202]] Cannot acquire a connection
...
java.lang.RuntimeException: There are no Velocity servers up

Root cause: The HANA Vora Engine is not running.
Solution: An initial check to see if the services are running can be done in Ambari (Services -> SAP HANA Vora) or Cloudera Manager. Further checks can be done at OS level on the host shown in the error message. Check if the SAP HANA Vora process is running (the process name is v2server).
>$ ps -efa | grep v2server
vora    27859    1  0 Dec07 ?        00:00:00 /var/lib/ambari-agent/cache/stacks/HDP/2.2/services/VORA/package/scripts/../v2/bin/v2server --trace-level INFO --trace-directory /var/log/vora --api_server
The v2server listens on port 2202:
>$ netstat -tupln | grep 2202
tcp        0      0 0.0.0.0:2202            0.0.0.0:*              LISTEN      27859/v2server
The logs can be checked at /var/log/vora. If Vora experienced a crash you should find a crash dump in the v2 directory (e.g. in an Ambari environment crash dumps are written to /var/lib/ambari-agent/cache/stacks/HDP/2.3/services/VORA/package/v2/crashdump.<timestamp>.txt). If you find a crash dump, open an SAP customer message in component HAN-VO-ENG.

[2] Ambari shows VORA servers in status red

Affected Vora versions: 1.0, 1.1
Symptoms: Vora is successfully deployed but the service shows as red in Ambari. Restarting in Ambari does not help. The v2server log (/var/log/vora) shows errors similar to this:
>/var/lib/ambari-agent/cache/stacks/HDP/2.3/services/VORA/package/scripts/../v2/bin/v2server: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.14' not found (required by /var/lib/ambari-agent/cache/stacks/HDP/2.3/services/VORA/package/scripts/../v2/bin/v2server)
Root cause: Vora is installed on an unsupported operating system (e.g. CentOS 6.x, RHEL 6.7). This prevents the compatibility libraries from being loaded, and Vora will not start.
Solution: Currently, Vora only supports RHEL 6.6, RHEL 7.1, and SLES 11.3. For details see the SAP Product Availability Matrix (PAM) or the SAP HANA Vora Installation and Developer Guide. We strongly discourage using anything but the supported operating systems; we have only tested on the supported operating systems and will not provide support for issues caused by an unsupported operating system.
Workaround: If, despite this warning, you want to run Vora on CentOS 6.x or RHEL 6.7, you can change the start_vora.sh script by commenting out or deleting the two lines starting with ‘if’ and ‘fi’. Be aware that SAP will not provide support for such an environment.
>cat /var/lib/ambari-server/resources/stacks/HDP/2.3/services/VORA/package/scripts/start_vora.sh
...
#if [ "$OS" == "RedHatEnterpriseServer" -a "$VER" == "6.6" ]; then
test_set_compat
#fi
If Vora is not yet deployed:

  • On the Ambari-server host change file /var/lib/ambari-server/resources/stacks/HDP/2.3/services/VORA/package/scripts/start_vora.sh
  • Deploy Vora

If Vora is already deployed:

  • Option 1)
    • On each Vora host change file /var/lib/ambari-agent/cache/stacks/HDP/2.3/services/VORA/package/scripts/start_vora.sh
  • Option 2)
    • Uninstall Vora
    • On the Ambari-server host change file /var/lib/ambari-server/resources/stacks/HDP/2.3/services/VORA/package/scripts/start_vora.sh
    • Deploy Vora again

[3] The Zookeeper catalog contains invalid entries

Affected Vora versions: 1.0, 1.1
Symptoms: In the spark-shell you see the following errors.
>15/10/14 23:34:20 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 2.0 (TID 2, <IP_address>): com.sap.spark.vora.client.VelocityJdbcBadStateException: [Velocity[<host>:2202]] Unexpected database state (query SELECT ... FROM ...)
at com.sap.spark.vora.client.VelocityJdbcClient.executeSql(VelocityJdbcClient.scala:99)
...
Caused by: sap.hanavora.jdbc.VoraException: HL(9): Runtime error. (schema error: schema "spark_velocity" does not exist (c++ exception))
at sap.hanavora.jdbc.driver.HLMessage.buildException(HLMessage.java:88)
...
OR
...
[79901]{263298}[1924/-1] 2016-01-12 15:32:10.329054 e FedTrace SparkSQLAccess.cpp(03170) : SparkSQLAccess::BrowseMetadata: failed with error: com.sap.spark.vora.CatalogException$ModuleVersionException: module version mismatch between ({"groupId":"com.sap.spark","artifactId":"vcatalog-client-zk","version":"1.0.0"}) and ({"groupId":"com.sap.spark","artifactId":"vcatalog-client-zk","version":"1.1.40","major":1,"minor":1,"patch":40})
at com.sap.spark.vora.ZKCatalog.checkModuleVersions(ZKCatalog.java:477)
at com.sap.spark.vora.ZKCatalog.<init>(ZKCatalog.java:159)
at com.sap.spark.vora.ZKCatalog.<init>(ZKCatalog.java:95)
...

Root cause: Vora uses Zookeeper to store metadata about its tables. The Zookeeper catalog can contain inconsistent information, e.g. after an upgrade of the Vora engine or the Vora datasources.
Solution: Clear the Zookeeper catalog; in the spark-shell the following command can be used. Be aware that after the Zookeeper catalog is cleared, the tables in Vora need to be recreated and reloaded.
>import com.sap.spark.vora.client._
ClusterUtils.clearZooKeeperCatalog("<zookeeper_host>:2181")
Also, make sure to use the correct VORA namespace com.sap.spark.vora in the USING clause. Some older documentation might still point to the deprecated namespace com.sap.spark.velocity.
>CREATE TEMPORARY TABLE <table> (<columns>)
USING com.sap.spark.vora
OPTIONS (...)

[4] SQL fails with InvalidProtocolBufferException

Affected Vora versions: 1.0, 1.1
Symptoms: You see an error similar to this.
>com.sap.spark.vora.hdfs.HdfsException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.; Host Details : local host is: "ldbwhdp06/10.68.91.155"; destination host is: "ldbwhdp01.wdf.sap.corp":50070;
Root cause: An invalid port is specified for the NameNode (in this case 50070, which is the port of the NameNode web UI).
Solution: Correct the NameNode port, either in the nameNodeUrl option or in spark.vora.namenodeurl in spark-defaults.conf. The default NameNode port is 8020.
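
A sketch of the two places where the port can be corrected; the host is a placeholder, and 8020 is the default mentioned above. In spark-defaults.conf:

>spark.vora.namenodeurl <namenode_host>:8020

Or per table via the DDL option, as used elsewhere in this blog:

>CREATE TEMPORARY TABLE <table> (<columns>)
USING com.sap.spark.vora
OPTIONS (nameNodeUrl "<namenode_host>:8020", ...)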

[5] Instantiation of SapSQLContext fails with VerifyError

Affected Vora versions: 1.0, 1.1
Symptoms: In the spark-shell you see the following error.
>scala> val vc = new SapSQLContext(sc)
java.lang.VerifyError: class org.apache.spark.sql.ExtendableSQLContext$$anon$2 overrides final method registerFunction.(Ljava/lang/String;Lscala/Function1;)V

Root cause: We have seen this issue when using Vora 1.0 or 1.1 with Spark 1.5.x. Vora has a strong dependency on the Spark version, and Vora 1.0 and 1.1 will not work with Spark 1.5.x.
Solution: Use Spark 1.4.1 (if you use Vora 1.0 or Vora 1.1). As of Vora 1.1 Patch 1 (available as of Jan 25, 2016) we support Spark 1.5.2.

[6] SQL commands fail with NoSuchMethodError: org.apache.spark.sql.sources.SapDDLParser.initLexical()

Affected Vora versions: 1.0, 1.1
Symptoms: In the spark-shell you see the following error.
>scala> vc.sql("show tables").show
java.lang.NoSuchMethodError: org.apache.spark.sql.sources.SapDDLParser.initLexical()V
at org.apache.spark.sql.sources.SapDDLParser.parse(SapDDLParser.scala:177)

Root cause: We have seen this issue when using Vora 1.0 or 1.1 with Spark 1.4.0. Vora has a strong dependency on the Spark version and will not work with Spark 1.4.0.
Solution: Use Spark 1.4.1 (if you use Vora 1.0 or Vora 1.1). As of Vora 1.1 Patch 1 (available as of Jan 25, 2016) we support Spark 1.5.2.
Comment: To see which methods are in a jar file, you can use ‘javap’. E.g. for Spark 1.4.1 it shows the method initLexical() in class AbstractSparkSQLParser, which SapDDLParser relies on. For Spark 1.4.0 this method is missing.
>$ javap -cp /opt/spark/spark-1.4.1-bin-hadoop2.6/conf/:/opt/spark/spark-1.4.1-bin-hadoop2.6/lib/spark-assembly-1.4.1-hadoop2.6.0.jar:/opt/spark/spark-1.4.1-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/opt/spark/spark-1.4.1-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/opt/spark/spark-1.4.1-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/etc/hadoop/conf/ org.apache.spark.sql.catalyst.AbstractSparkSQLParser
Compiled from "AbstractSparkSQLParser.scala"
public abstract class org.apache.spark.sql.catalyst.AbstractSparkSQLParser extends scala.util.parsing.combinator.syntactical.StandardTokenParsers implements scala.util.parsing.combinator.PackratParsers {
public void initLexical();
$ javap -cp /root/spark-1.4.0-bin-hadoop2.6//conf/:/root/spark-1.4.0-bin-hadoop2.6/lib/spark-assembly-1.4.0-hadoop2.6.0.jar:/root/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-core-3.2.10.jar:/root/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-api-jdo-3.2.6.jar:/root/spark-1.4.0-bin-hadoop2.6/lib/datanucleus-rdbms-3.2.9.jar:/etc/hadoop/conf/ org.apache.spark.sql.catalyst.AbstractSparkSQLParser
Compiled from "AbstractSparkSQLParser.scala"
public abstract class org.apache.spark.sql.catalyst.AbstractSparkSQLParser extends scala.util.parsing.combinator.syntactical.StandardTokenParsers implements scala.util.parsing.combinator.PackratParsers {
// no initLexical() here

[7] Vora commands in Zeppelin fail with NoSuchMethodError: org.apache.spark.sql.sources.SapDDLParser.initLexical()

Affected Vora versions: 1.0, 1.1
Symptoms: The following error can be seen in the Zeppelin log file (/var/log/zeppelin/).
>ERROR [2015-11-19 15:04:39,053] ({pool-2-thread-5} Job.java[run]:183) - Job failed
java.lang.NoSuchMethodError: org.apache.spark.sql.sources.SapDDLParser.initLexical()V
at org.apache.spark.sql.sources.SapDDLParser.parse(SapDDLParser.scala:177)

Root cause: You are using Vora 1.0 or 1.1, and your Zeppelin version is not built for Spark 1.4.1 and your Hadoop version.
Solution: If you are using Vora 1.0 or Vora 1.1, build Zeppelin for Spark 1.4.1 and your Hadoop version. As of Vora 1.1 Patch 1 (available as of Jan 25, 2016) we support Spark 1.5.2. Instructions on how to build and set up Zeppelin for Vora 1.1 Patch 1 with Spark 1.5.2 are in the SAP HANA Vora Installation and Developer Guide.

[8] Vora commands in Zeppelin fail with java.lang.ClassNotFoundException: com.sap.spark.vora.VoraRDDPartition

Affected Vora versions: 1.0, 1.1
Symptoms: The following error can be seen in Zeppelin: java.lang.ClassNotFoundException: com.sap.spark.vora.VoraRDDPartition.
Root cause: Zeppelin is not (correctly) configured to use the Vora datasources.
Solution: Add the Vora datasources jar file to ADD_JARS in zeppelin-env.sh, e.g.
>export ADD_JARS=$ZEPPELIN_HOME/interpreter/spark/spark-sap-datasources-1.0.0-assembly.jar

[9] SQL commands fail with NoSuchMethodError: org.apache.curator.framework.CuratorFramework.blockUntilConnected()

Affected Vora versions: 1.0, 1.1
Symptoms:
>java.lang.NoSuchMethodError: org.apache.curator.framework.CuratorFramework.blockUntilConnected(ILjava/util/concurrent/TimeUnit;)Z
at com.sap.spark.vora.ZKCatalog.<init>(ZKCatalog.java:120)

Root cause: We have observed this issue when Spark 1.4.1 binaries for Hadoop 2.4.0 were used in a Hadoop 2.6.0 environment.
Solution: Make sure to run the correct Spark version for your Hadoop environment.

[10] REGISTER command does not register HANA tables in Vora

Affected Vora versions: 1.0, 1.1
Symptoms: While the REGISTER SQL command registers Vora tables (com.sap.spark.vora) in the local Spark catalog, it does not bring back HANA tables (com.sap.spark.hana).
Root cause: Vora tables are stored permanently in the Zookeeper catalog and can be registered back into the non-persistent local Spark catalog. In the current implementation HANA tables in Vora are not stored in the Zookeeper catalog.
Solution: Recreate the HANA tables.

[11] Start of Thriftserver fails with error “Could not create ServerSocket”

Affected Vora versions: 1.0, 1.1
Symptoms: The start of the Thriftserver fails with the following error message:
>15/11/03 14:33:00 ERROR ThriftCLIService: Error:
org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:10000.
at org.apache.thrift.transport.TServerSocket.<init>(TServerSocket.java:93)
...
15/11/03 14:33:00 INFO AbstractService: Service:ThriftBinaryCLIService is stopped.

Root cause: The port is already in use (most likely by Hive).
Solution: Set another port for the Thriftserver (export HIVE_SERVER2_THRIFT_PORT=<listening-port>). If you are using Hive in the cluster, it is likely configured with the default port 10000. In Ambari this can be checked e.g. via Hive -> Configs -> Advanced -> Hive server port. On OS level this command can be used to find the process using the port: netstat -tupln | grep 10000
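
As a sketch, the check and the fix together; 10000 is Hive’s default port and 10001 is an arbitrary free port:

># Which process is using the default port?
netstat -tupln | grep 10000
# Choose another port before starting the Thriftserver:
export HIVE_SERVER2_THRIFT_PORT=10001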

[12] SQL commands fail with VoraConfigurationException

Affected Vora versions: 1.0, 1.1
Symptoms: In the spark-shell you see an error similar to the following.
>...
com.sap.spark.vora.VoraConfigurationException: hosts expects host[:port], however an invalid value: ( hslave2) is provided
at com.sap.spark.vora.util.HostIpWithOptionalPortValidator$.apply(Validation.scala:65)
...

Root cause: This error can be raised if the options are incorrect, for example a space in the comma-separated list of hosts in the ‘hosts’ option.
Solution: Verify the options. Do not add spaces in the comma-separated host lists of the ‘hosts’ or ‘zkurls’ options.

Comments
      Former Member

      Hello Frank,

      Thanks for the troubleshooting guide. I am encountering an error which is causing the SAP HANA VORA service to fail on the secondary node of the developer edition.

      Error from /var/log/vora/v2server*

      ----------------------------

      [centos@secondary lib]$ cat /var/log/vora/v2server.2016-02-25.21-53-45.12870.glf|more

      FILE_TYPE:DAAA96DE-B0FB-4c6e-AF7B-A445F5BF9BE2

      ENCODING:UTF8

      RECORD_SEPARATOR:30

      COLUMN_SEPARATOR:124

      ESC_CHARACTER:27

      COLUMNS:Time|TZone|Severity|Text|Tracer|Component|Thread|Function|Location

      SEVERITY_MAP:DEBUG|Debug|INFO|Information|WARN|Warning|ERROR|Error|FATAL|Fatal

      HEADER_END

      2016-02-25 21:53:45.768937|+0000|INFO |accepting connections on port 2202|v2server|API Server|140271583508480|open_and_run|api_server.cpp(102)

      2016-02-25 21:53:45.769495|+0000|ERROR|sql server received exception: "bind: Address already in use"|v2server|API Server|140271583508480|open_and_run|api_server.cpp(131)

      ---------------------------------------------

      Output from netstat

      ---------------

      [centos@secondary lib]$ sudo netstat -tupln | grep 2202

      tcp6       0      0 :::2202                 :::*                    LISTEN      3119/v2server

      ---------------

      I will really appreciate if you can provide some direction in fixing this problem.

      TIA

      Gopal

      Frank Legler (Blog Post Author)

      Hi Gopal,

      Not exactly sure which situation led to this, but it seems there is already a v2server process (= Vora engine) running and listening on port 2202, and another v2server that is being started fails because the port is already in use.

      Could you please (1) stop Vora via the Ambari GUI, (2) kill all remaining v2server processes, and (3) restart Vora via the Ambari GUI.

      Afterwards, could you please send me all log files from /var/log/vora from both master and secondary node?

      Thanks,

      Frank

      Former Member

      Hello Frank,

      Thanks a lot for the prompt reply.

      These are the steps I followed:

      1. Stop all VORA services from Ambari

      2. Terminated all v2server processes:

           - Master node did not have any active processes running

           - The secondary node had a process running, which I manually terminated

      3. Restarted VORA on both nodes

      4. Processes started on both nodes, but those on the secondary node terminated.

      Unfortunately I am not able to find an option to upload files to this post, so I have uploaded the files to OneDrive:

      - Screenshots in file: VORAServices.docx

      - Logs from Master node in file: MasterVORALog.zip

      - Logs from Secondary node in file: SecondaryVORALog.zip

      https://onedrive.live.com/redir?resid=73CC906C49E36318%21106

      Last log entry from Master:

      -----------------------------------

      FILE_TYPE:DAAA96DE-B0FB-4c6e-AF7B-A445F5BF9BE2

      ENCODING:UTF8

      RECORD_SEPARATOR:30

      COLUMN_SEPARATOR:124

      ESC_CHARACTER:27

      COLUMNS:Time|TZone|Severity|Text|Tracer|Component|Thread|Function|Location

      SEVERITY_MAP:DEBUG|Debug|INFO|Information|WARN|Warning|ERROR|Error|FATAL|Fatal

      HEADER_END

      2016-02-26 00:38:27.447873|+0000|INFO |accepting connections on port 2202|v2server|API Server|140716138096640|open_and_run|api_server.cpp(102)

      -----------------------------------------

      Last log entry from Secondary:

      -----------------------------------------

      FILE_TYPE:DAAA96DE-B0FB-4c6e-AF7B-A445F5BF9BE2

      ENCODING:UTF8

      RECORD_SEPARATOR:30

      COLUMN_SEPARATOR:124

      ESC_CHARACTER:27

      COLUMNS:Time|TZone|Severity|Text|Tracer|Component|Thread|Function|Location

      SEVERITY_MAP:DEBUG|Debug|INFO|Information|WARN|Warning|ERROR|Error|FATAL|Fatal

      HEADER_END

      2016-02-26 00:30:23.280085|+0000|INFO |accepting connections on port 2202|v2server|API Server|139653325924352|open_and_run|api_server.cpp(102)

      2016-02-26 00:30:23.280630|+0000|ERROR|sql server received exception: "bind: Address already in use"|v2server|API Server|139653325924352|open_and_run|api_server.cpp(131)

      --------------------------------------------------

      Thanks,

      Gopal

      Frank Legler (Blog Post Author)

      Hi Gopal,

      The logs are not conclusive. I wrote you a private SCN message on 2/26 requesting a screen-sharing session to further investigate this issue. It would be great if we can get together to figure out what is going wrong.

      Thanks,

      Frank

      Frank Legler (Blog Post Author)

      Hi Gopal,

      Thanks for the screensharing session.

      To document the issue for others:

      After a start/stop of the AWS instances you need to run a 'reconfigure' step to propagate changes since the last start (e.g. changed IP addresses):

        1. In a web browser enter the Cluster Manager node's public IP and log on to the Cluster Manager UI

        2. Click the green 'CONFIGURE' button

      Regards,

      Frank

      Benedict Venmani Felix

      Thanks Frank. I had the same issue and this worked for me.

      Roman Bukarev

      Hi Frank,

      I installed Vora 1.1 Patch 1 on HDP 2.3 with Spark 1.5 on SLES 11 SP3. It's not precisely the configuration mentioned in Note 2213226, but the shell version of Vora seems to be working according to test 2.7 of the installation manual (the latter didn't prescribe HDP versions depending on the OS version, hence I went for HDP 2.3 under SLES).

      I have problems with Zeppelin, though. The GitHub installation of version 0.5.6 seems to be successful, and I can execute the "create table" statement in a Zeppelin notepad, but when executing the "show tables" statement I get this error:

      Error: Job aborted due to stage failure: Task 0 in stage 12.0 failed 4 times, most recent failure: Lost task 0.3 in stage 12.0 (TID 36, eba156.extendtec.com.au): java.io.InvalidClassException: org.apache.spark.unsafe.types.UTF8String; local class incompatible: stream classdesc serialVersionUID = 7459647620003804432, local class serialVersionUID = 7786395165093970948 at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:621) at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1623) at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351) at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1707) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1345) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351) at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1707) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1345) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000) at java.io.ObjectInputStream.defaultReadObject(ObjectInputStream.java:501) at org.apache.spark.rdd.ParallelCollectionPartition$$anonfun$readObject$1.apply$mcV$sp(ParallelCollectionRDD.scala:74) at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1160) at org.apache.spark.rdd.ParallelCollectionPartition.readObject(ParallelCollectionRDD.scala:70) at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1900) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351) at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000) at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924) at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801) at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351) at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371) at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:72) at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:98) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Driver stacktrace:

      Any idea what might have gone wrong? I'm tempted to try the further steps of the installation (2.9 of the installation manual and onwards). Thanks!

      Roman Bukarev

      Another thing I noticed is that the checkout of version 0.5.6 actually produced a Zeppelin build of version 0.5.7. I saw somewhere the advice to check out not branch-0.5.6 but v0.5.6, which is a detached HEAD. I wonder if there is any sense in doing that.

      Roman Bukarev

      OK, I believe I found the reason why.

      • The class UTF8String.class coming from the library spark-sap-datasources-1.2.10-assembly.jar (and then used by Zeppelin) is dated Jan 20 and has a size of 17919 bytes.
      • The class UTF8String.class contained in the Spark 1.5.2 library is dated Dec 16 and has a size of 18653 bytes.

      So I guess versions of these libraries do not match. How should I proceed?

      [UPDATE] So I overwrote the class in the spark-sap library with the one from Spark 1.5.2, recompiled the "combined" library, and since then my Zeppelin works fine. 😏

      Daniel Zheng

      Hi Roman,

      The root cause of this issue is the same as the one I just replied to you about. If you use Spark 1.5.2 from Apache and configure it as I mentioned, this issue should be resolved.

      Also, you won't have this issue in Vora 1.2 now.

      Thanks,

      Daniel

      Roman Bukarev

      Hi Frank,

      Do you have any timeline for when the HANA Spark Controller is going to support Spark 1.5? It's quite impractical currently...

      Former Member

      Hello Roman,

      HANA Spark controller 1.5.4 is available and supports Spark 1.5.2

      Release note: 2273047 - SAP HANA Spark Controller SPS 11 (Compatible with Spark 1.5.2)

      Regards,

      Gopal

      Roman Bukarev

      Hi Gopalakrishna, the Note you mentioned is not released yet. Any idea when it will be available to gen pop? Also, does it require HANA to be on SPS11?

      Former Member

      Hi Roman,

      A new version of the Spark Controller (Patch 5) was released yesterday; probably the note is offline for that reason. I have installed Patch 4 and it is working with Spark 1.5.2.

      Screenshot from SWDC is attached


      Gopal

      Roman Bukarev

I have trouble starting the sapthriftserver that comes with Vora 1.1 Patch 1. Here's the relevant piece of the log:

      ===========================

      16/03/24 15:57:06 INFO Utils: Successfully started service 'SparkUI' on port 4040.

      16/03/24 15:57:06 INFO SparkUI: Started SparkUI at <someURL>:4040

      16/03/24 15:57:13 INFO SparkContext: Added JAR file:/home/vora/lib/spark-sap-datasources-1.2.10-assembly.jar at http://<someURL>:32996/jars/spark-sap-datasources-1.2.10-assembly.jar with timestamp 1458795433310

      16/03/24 15:57:13 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.

      16/03/24 15:57:13 INFO Executor: Starting executor ID driver on host localhost

      16/03/24 15:57:13 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 52655.

      16/03/24 15:57:13 INFO NettyBlockTransferService: Server created on 52655

      16/03/24 15:57:13 INFO BlockManagerMaster: Trying to register BlockManager

      16/03/24 15:57:13 INFO BlockManagerMasterEndpoint: Registering block manager localhost:52655 with 530.0 MB RAM, BlockManagerId(driver, localhost, 52655)

      16/03/24 15:57:13 INFO BlockManagerMaster: Registered BlockManager

      Exception in thread "main" java.lang.NoSuchMethodError: org.apache.spark.sql.hive.HiveFunctionRegistry.<init>(Lorg/apache/spark/sql/catalyst/analysis/FunctionRegistry;)V

              at org.apache.spark.sql.hive.CompatHiveFunctionRegistry$.apply(CompatHiveFunctionRegistry.scala:9)

              at org.apache.spark.sql.hive.ExtendableHiveContext.functionRegistry$lzycompute(ExtendableHiveContext.scala:41)

              at org.apache.spark.sql.hive.ExtendableHiveContext.functionRegistry(ExtendableHiveContext.scala:40)

              at org.apache.spark.sql.hive.ExtendableHiveContext.functionRegistry(ExtendableHiveContext.scala:22)

              at org.apache.spark.sql.UDFRegistration.<init>(UDFRegistration.scala:40)

              at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:296)

              at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:74)

              at org.apache.spark.sql.hive.ExtendableHiveContext.<init>(ExtendableHiveContext.scala:22)

              at org.apache.spark.sql.hive.SapHiveContext.<init>(SapHiveContext.scala:12)

              at org.apache.spark.sql.hive.sap.thriftserver.SapSQLEnv$.init(SapSQLEnv.scala:40)

              at org.apache.spark.sql.hive.thriftserver.SapThriftServer$.main(SapThriftServer.scala:54)

              at org.apache.spark.sql.hive.thriftserver.SapThriftServer.main(SapThriftServer.scala)

              at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

              at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

              at java.lang.reflect.Method.invoke(Method.java:497)

              at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:685)

              at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)

              at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)

      <etc>

      ======================

Any idea what's causing the NoSuchMethodError above?

      Author's profile photo Daniel Zheng
      Daniel Zheng

      Hi Roman,

You need to use Spark 1.5.2 downloaded from Apache and extract it to /opt/:

      cd /opt/

      wget http://apache.arvixe.com/spark/spark-1.5.2/spark-1.5.2-bin-hadoop2.6.tgz

      tar -zxvf spark-1.5.2-bin-hadoop2.6.tgz

      Then configure /etc/bashrc as below:

      export HADOOP_CONF_DIR=/etc/hadoop/conf

      export SPARK_HOME=/opt/spark-1.5.2-bin-hadoop2.6

      export SPARK_CONF_DIR=$SPARK_HOME/conf

      export PATH=$PATH:$SPARK_HOME/bin
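
To verify that the Apache build is the one being picked up (a quick sanity check, assuming the paths above):

source /etc/bashrc

# should resolve under /opt/spark-1.5.2-bin-hadoop2.6/bin
which spark-submit

# prints the Spark version banner; expect 1.5.2
spark-submit --version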

The root cause of this error is that Vora 1.0 and Vora 1.1 do not support the Spark 1.5.2 that comes with HDP 2.3.4. You need to use the Spark 1.5.2 from Apache instead.

Please note that as of Vora 1.2, the Spark 1.5.2 from HDP 2.3.4 is supported.

      For more details, please check HANA VORA guide at:

      http://help.sap.com/Download/Multimedia/hana_vora/SAP_HANA_Vora_Installation_Admin_Guide_en.pdf

      Thanks,

      Daniel

      Author's profile photo Roman Bukarev
      Roman Bukarev

Thank you Daniel. I'm puzzled by the recommendation to download another copy of Spark, though. The Vora installation guide mentioned HDP 2.3 as a supported platform, so I assumed I could use the Spark that is part of HDP.

Any hint on when Vora 1.2 is going to be available, at least for ramp-up?

      Author's profile photo Roman Bukarev
      Roman Bukarev

I've installed Vora 1.2 on HDP 2.3.4 (SLES 11 SP3). All related services seem to be running fine in Ambari.

      When I try to do command-line based validation of Vora, as per section 2.7 (page 34) of the new Installation&Admin manual, I get a new error now:

      scala> vc.sql(testsql)
com.sap.spark.vora.discovery.DiscoveryException: Could not connect to Consul Agent on localhost:8500 : null
        at com.sap.spark.vora.discovery.ConsulDiscoveryClient$ConsulDiscoveryClient.<init>(ConsulDiscoveryClient.scala:38)
        at com.sap.spark.vora.discovery.ConsulDiscoveryClient$.getClient(ConsulDiscoveryClient.scala:21)
        at com.sap.spark.vora.discovery.DiscoveryClientFactory$.getClient(DiscoveryClientFactory.scala:9)
        at com.sap.spark.vora.config.VoraConfiguration$.apply(VoraConfiguration.scala:24)
        at com.sap.spark.vora.DefaultSource.buildConfiguration(DefaultSource.scala:403)
        at com.sap.spark.vora.DefaultSource.createRelation(DefaultSource.scala:149)
        at org.apache.spark.sql.execution.datasources.CreateTableUsingTemporaryAwareCommand.resolveDataSource(CreateTableUsingTemporaryAwareCommand.scala:73)
        at org.apache.spark.sql.execution.datasources.CreateTableUsingTemporaryAwareCommand.run(CreateTableUsingTemporaryAwareCommand.scala:31)
        at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
        at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
        at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:69)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:140)
        at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:138)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:138)
        at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:933)
        at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:933)
        at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
        at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:129)
        at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
        at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:725)

      I'm getting the same error in Zeppelin. Is that Consul thing missing on my server?

      Author's profile photo Frank Legler
      Frank Legler
      Blog Post Author

The Vora Discovery Service uses Consul (from HashiCorp) to register services. Each host needs to run either a Consul server or a Consul agent (mutually exclusive, as both listen on port 8500); at least 3 Consul servers are needed, and non-server hosts should run an agent. Your error message indicates that the host has neither a Consul server nor an agent running.
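
A quick way to confirm this from the affected host (assuming the default Consul HTTP port 8500):

# lists the cluster members known to the local Consul server/agent;
# the connection fails if neither is running on this host
curl http://localhost:8500/v1/agent/members

# shows the elected Consul leader; an empty reply means no quorum yet
curl http://localhost:8500/v1/status/leader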

      Author's profile photo Roman Bukarev
      Roman Bukarev

Thanks Frank! So, do I have to install it? It wasn't mentioned in the installation manual, nor was it required for Vora 1.1.

      Author's profile photo Frank Legler
      Frank Legler
      Blog Post Author

Yes, you do. Please check the Install Guide (search for 'discovery service'). Vora 1.2 has a very different architecture compared to Vora 1.1, with many new services (please also see the What's New in Vora 1.2).

      Author's profile photo Roman Bukarev
      Roman Bukarev

Ah, so that "Discovery Service" is actually Consul? Does the server and client sharing one port mean I can't use it in a single-server setup, i.e. must I have at least 2 machines in the HDP cluster?

      Author's profile photo Frank Legler
      Frank Legler
      Blog Post Author

You actually need at least 3 machines, as we require at least 3 Vora Discovery Servers (= Consul servers). Each server also acts as a client, so additional hosts with agents are optional (but if additional hosts exist, they need to have a Discovery agent running).

      Author's profile photo Vasi Venkatesan
      Vasi Venkatesan

I was able to create tables in Vora (and can see the data via the Scala command line). I can also see those tables fine when I refresh the Spark Controller connection under HANA provisioning.

But when I try to add those tables into the HANA catalog as virtual tables, I get the following error.

[Screenshot attached: Table_Add_Issue.JPG]

Earlier, with SPS 10, I was able to access them as virtual tables, but with SPS11 / Vora 1.1 / Spark 1.5.2 I have these issues. Any resolution, or is anything wrongly configured?

      Author's profile photo Frank Legler
      Frank Legler
      Blog Post Author

Which Spark Controller version are you using? An upgrade to HANA SPS11 also needs a new Spark Controller version. The current version for SPS11 is Spark Controller 1.5 Patch 5 (see SAP Note 2273047 - SAP HANA Spark Controller SPS 11 (Compatible with Spark 1.5.2)).

      Author's profile photo Vasi Venkatesan
      Vasi Venkatesan

sap.hana.spark.controller-1.5.7-1.noarch (rpm -qa output). The compatibility matrix asks for 1.5.5, which is lower than this.

It is complaining with this in hana_controller.log:

      --------------

      SLF4J: Found binding in [jar:file:/usr/sap/spark/controller/lib/spark-sap-datasources-1.2.10-assembly.jar!/org/slf4j/impl/StaticLoggerBinder.class]

      SLF4J: Found binding in [jar:file:/usr/sap/spark/controller/lib/external/spark-assembly-1.5.2.2.3.4.7-4-hadoop2.7.1.2.3.4.7-4.jar!/org/slf4j/impl/StaticLoggerBinder.class]

      SLF4J: Found binding in [jar:file:/usr/hdp/2.3.4.7-4/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]

      SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

      SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

      16/04/08 14:29:26 INFO HanaESConfig: Loaded HANA Extended Store Configuration

      Found Spark Libraries. Proceeding with Current Class Path

      16/04/08 14:29:27 INFO Server: Starting Spark Controller

      16/04/08 14:29:51 INFO HanaVoraContext: SapSQLContext [version: 1.2.10] created

      16/04/08 14:29:51 INFO CommandRouter: Connecting to Vora Engine

      16/04/08 14:29:51 INFO CommandRouter: Initialized Router

      16/04/08 14:29:51 INFO CommandRouter: Server started

      16/04/08 14:30:40 INFO CommandHandler: Getting BROWSE data/user/14389636838274891797-4030571392570231421_c7b24277-0040-0015-37ef-7753a0000a80

      16/04/08 14:30:42 INFO CommandHandler: Getting BROWSE data/user/14389636838274891797-4030571392570231427_c7b24277-0040-0015-37ef-7753a0000a86

      16/04/08 14:31:02 WARN DefaultSource: Creating a Vora Relation that is actually persistent with a temporary statement!

      16/04/08 14:31:02 WARN DefaultSource: Creating a Vora Relation that is actually persistent with a temporary statement!

      16/04/08 14:31:02 ERROR HanaVoraContext$$anon$1: Exception occured in Lookup Relation

      com.sap.spark.vora.VoraConfigurationException: namenodeurl expects a host:port, however an invalid host (MOCHDPPROD) is provided

              at com.sap.spark.vora.util.HostIpWithPortValidator$.apply(validations.scala:81)

              at com.sap.spark.vora.config.ParametersValidator$$anonfun$17$$anonfun$apply$17.apply(ParametersValidator.scala:35)

              at com.sap.spark.vora.config.ParametersValidator$$anonfun$17$$anonfun$apply$17.apply(ParametersValidator.scala:35)

              at com.sap.spark.vora.config.ParametersValidator$$anonfun$checkSyntax$1$$anonfun$apply$31.apply(ParametersValidator.scala:284)

              at com.sap.spark.vora.config.ParametersValidator$$anonfun$checkSyntax$1$$anonfun$apply$31.apply(ParametersValidator.scala:284)

              at scala.Option.foreach(Option.scala:236)

              at com.sap.spark.vora.config.ParametersValidator$$anonfun$checkSyntax$1.apply(ParametersValidator.scala:284)

              at com.sap.spark.vora.config.ParametersValidator$$anonfun$checkSyntax$1.apply(ParametersValidator.scala:283)

              at scala.collection.immutable.Map$Map4.foreach(Map.scala:181)

              at com.sap.spark.vora.config.ParametersValidator$.checkSyntax(ParametersValidator.scala:283)

              at com.sap.spark.vora.config.ParametersValidator$.apply(ParametersValidator.scala:98)

              at com.sap.spark.vora.DefaultSource.createRelation(DefaultSource.scala:108)

              at com.sap.spark.vora.DefaultSource.createRelation(DefaultSource.scala:49)

              at com.sap.spark.vora.DefaultSource.createRelation(DefaultSource.scala:40)

              at com.sap.spark.vora.DefaultSource.getTableRelation(DefaultSource.scala:203)

              at org.apache.spark.sql.hive.hana.vora.VoraAcceleratorCatalog$class.liftedTree1$1(VoraAcceleratorCatalog.scala:50)

              at org.apache.spark.sql.hive.hana.vora.VoraAcceleratorCatalog$class.lookupRelation(VoraAcceleratorCatalog.scala:47)

              at org.apache.spark.sql.hive.hana.vora.HanaVoraContext$$anon$1.lookupRelation(HanaVoraContext.scala:27)

              at org.apache.spark.sql.hive.hana.HanaSQLContext$class.getTableMetaNew(HanaSQLContext.scala:144)

              at org.apache.spark.sql.hive.hana.vora.HanaVoraContext.getTableMetaNew(HanaVoraContext.scala:20)

              at com.sap.hana.spark.network.CommandHandler$$anonfun$receive$2.applyOrElse(CommandRouter.scala:403)

              at akka.actor.Actor$class.aroundReceive(Actor.scala:467)

              at com.sap.hana.spark.network.CommandHandler.aroundReceive(CommandRouter.scala:202)

              at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)

              at akka.actor.ActorCell.invoke(ActorCell.scala:487)

              at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)

              at akka.actor.ActorCell.invoke(ActorCell.scala:487)

              at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)

              at akka.dispatch.Mailbox.run(Mailbox.scala:220)

              at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)

              at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)

              at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)

              at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)

              at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

      16/04/08 14:31:02 ERROR CommandHandler:

      java.lang.NullPointerException

              at org.apache.spark.sql.hive.hana.HanaSQLContext$class.getTableMetaNew(HanaSQLContext.scala:153)

              at org.apache.spark.sql.hive.hana.vora.HanaVoraContext.getTableMetaNew(HanaVoraContext.scala:20)

              at com.sap.hana.spark.network.CommandHandler$$anonfun$receive$2.applyOrElse(CommandRouter.scala:403)

              at akka.actor.Actor$class.aroundReceive(Actor.scala:467)

              at com.sap.hana.spark.network.CommandHandler.aroundReceive(CommandRouter.scala:202)

              at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)

              at akka.actor.ActorCell.invoke(ActorCell.scala:487)

              at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)

              at akka.dispatch.Mailbox.run(Mailbox.scala:220)

              at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)

              at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)

              at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)

              at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)

              at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

      16/04/08 14:31:02 ERROR RequestOrchestrator: java.lang.NullPointerException

              at org.apache.spark.sql.hive.hana.HanaSQLContext$class.getTableMetaNew(HanaSQLContext.scala:153)

              at org.apache.spark.sql.hive.hana.vora.HanaVoraContext.getTableMetaNew(HanaVoraContext.scala:20)

              at com.sap.hana.spark.network.CommandHandler$$anonfun$receive$2.applyOrElse(CommandRouter.scala:403)

              at akka.actor.Actor$class.aroundReceive(Actor.scala:467)

              at com.sap.hana.spark.network.CommandHandler.aroundReceive(CommandRouter.scala:202)

              at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)

              at akka.actor.ActorCell.invoke(ActorCell.scala:487)

              at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)

              at akka.dispatch.Mailbox.run(Mailbox.scala:220)

              at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)

              at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)

              at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)

              at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)

              at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

      Author's profile photo Ravi Adhav
      Ravi Adhav

Please check:

https://answers.sap.com/questions/242871/sap-vora-not-able-to-add-vora-table-as-virtual-tab.html
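
In short, the namenodeurl option must carry an explicit host:port rather than a bare hostname, however it is supplied. A minimal sketch (the hostname, path, and surrounding options are hypothetical Vora 1.x-style examples and may differ in your setup):

# pipe a table definition with a fully qualified namenodeurl into spark-shell
spark-shell --jars /home/vora/lib/spark-sap-datasources-1.2.10-assembly.jar <<'EOF'
val vc = new org.apache.spark.sql.SapSQLContext(sc)
vc.sql("""CREATE TABLE testtable (a1 DOUBLE, a2 INT)
USING com.sap.spark.vora
OPTIONS (
  tableName "testtable",
  paths "/user/vora/test.csv",
  namenodeurl "mochdpprod.example.com:8020"
)""")
EOF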

      Author's profile photo Colm Noonan
      Colm Noonan

      Hi Roman

      Great post on troubleshooting. I've installed the SPARK CONTROLLER 1.5.0 PATCH LEVEL 5 for SAP HANA 1.0 SPS11. I've followed the Installation & Configuration guide. However, I'm getting the following error message in the logs. Do you have any idea what the problem might be?

      Cheers

      Colm

[Screenshot attached: screenshot.PNG]

      Author's profile photo Roman Bukarev
      Roman Bukarev

      Hi Colm, actually it's not my post on troubleshooting, but Frank's.

      As for you error, I don't see anything sap/vora related in the log. The error seems to lie on Hadoop side. Did you check in Ambari that your Hadoop installation is healthy in general, as in Yarn and Spark applications running?

      Author's profile photo Roman Bukarev
      Roman Bukarev

Hi Frank, I finally got 3 machines in my HDP 2.3.4 cluster. On each of the two additional nodes I've installed these Vora components:

      • Vora Base
      • Vora Discovery Client
      • Vora V2Server Worker

I'm still observing the error when trying to execute the statement val vc = new SapSQLContext(sc):

16/05/09 16:40:24 INFO SapSQLContext: Auto-Registering tables from Datasource 'com.sap.spark.vora'
com.sap.spark.vora.discovery.DiscoveryException: Could not connect to Consul Agent on eba156.extendtec.com.au:8500 : null
        at com.sap.spark.vora.discovery.ConsulDiscoveryClient$ConsulDiscoveryClient.<init>(ConsulDiscoveryClient.scala:38)
        at com.sap.spark.vora.discovery.ConsulDiscoveryClient$.getClient(ConsulDiscoveryClient.scala:21)
        at com.sap.spark.vora.discovery.DiscoveryClientFactory$.getClient(DiscoveryClientFactory.scala:9)
        at com.sap.spark.vora.config.VoraConfiguration$.apply(VoraConfiguration.scala:24)

Another thing I noticed is that the Vora Thriftserver Master component goes down quite often. I can start it in Ambari without errors, but then it quietly stops. Here's the tail of its log:

[2016-05-09 16:37:09.821864]VORA_THRIFTSERVER_SERVICE_REGISTRY is localhost:8500
[2016-05-09 16:37:09.822125]VORA_THRIFTSERVER_LOG_LEVEL is WARNING
[2016-05-09 16:37:09.822234]VORA_PACKAGE_DIR is /var/lib/ambari-agent/cache/stacks/HDP/2.3/services/vora-thriftserver/package/../../vora-base/package
[2016-05-09 16:37:09.822323]SERVICE_NAME is vora-thriftserver
[2016-05-09 16:37:09.822408]VORA_THRIFTSERVER_EXTRA_ARGUMENTS is --conf vora.hadoop.distro=hdp
[2016-05-09 16:37:09.822488]VORA_THRIFTSERVER_SPARK_HOME is /usr/hdp/2.3.4.0-3485/spark
[2016-05-09 16:37:09.822567]PLATFORM is AMBARI
[2016-05-09 16:37:09.822648]SESSION is 20160509.163709
[2016-05-09 16:37:09.822729]LOG_DIR is /var/log/vora-thriftserver
[2016-05-09 16:37:09.822841]VORA_THRIFTSERVER_METASTORE_DIR is /tmp/vora-thriftserver
[2016-05-09 16:37:09.822986]VORA_THRIFTSERVER_JAVA_HOME is /usr/jdk64/jdk1.8.0_60
[2016-05-09 16:37:09.823088]SELF_HOST is XXXXX.com.au
[2016-05-09 16:37:09.823192]VORA_THRIFTSERVER_LOG_DIR is /var/log/vora-thriftserver
[2016-05-09 16:37:09.823277]SERVICE_PORT is 49155
[2016-05-09 16:37:09.823529]Starting SAP HANA Vora Thriftserver ...
[2016-05-09 16:37:09.823619]Logging to /var/log/vora-thriftserver/vora-thriftserver-server-XXXXX.com.au-20160509.163709.log
[2016-05-09 16:37:09.823731]Starting task: server
[2016-05-09 16:37:09.823812]Environment:
[2016-05-09 16:37:09.823952] export SPARK_HOME=/usr/hdp/2.3.4.0-3485/spark export HIVE_SERVER2_THRIFT_PORT=49155 export JAVA_HOME=/usr/jdk64/jdk1.8.0_60
[2016-05-09 16:37:09.824040]CWD:
[2016-05-09 16:37:09.824153]cd /tmp/vora-thriftserver
[2016-05-09 16:37:09.824242]ProcessArgs:
[2016-05-09 16:37:09.824337]/var/lib/ambari-agent/cache/stacks/HDP/2.3/services/vora-thriftserver/package/../../vora-base/package/lib/vora-spark/bin/start-sapthriftserver.sh --conf vora.hadoop.distro=hdp
[2016-05-09 16:37:09.824422]User: vora
[2016-05-09 16:37:09.824503]Outputfile: /var/log/vora-thriftserver/vora-thriftserver-server-XXXXX.com.au-20160509.163709.log
[2016-05-09 16:37:09.825785]Starting registration thread ...
[2016-05-09 16:37:09.826876]Discovery state:INIT
[2016-05-09 16:37:09.827066]Trying 0
[2016-05-09 16:37:09.827188]Creating voraSequenceId
[2016-05-09 16:37:09.830309]Result: (True, 'Completed')
[2016-05-09 16:37:09.830454]Discovery state:CREATED_ID_SEQ
[2016-05-09 16:37:09.830554]Trying 0
[2016-05-09 16:37:09.832303]Result: (False, -1)

      UPDATE: got this bit from another Vora Thriftserver's log:

16/05/09 16:57:14 WARN DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
16/05/09 16:57:14 INFO SessionState: Created local directory: /tmp/88f39d77-4d79-4b8b-b174-1472009d8b94_resources
16/05/09 16:57:14 INFO SessionState: Created HDFS directory: /tmp/hive/vora/88f39d77-4d79-4b8b-b174-1472009d8b94
16/05/09 16:57:14 INFO SessionState: Created local directory: /tmp/vora/88f39d77-4d79-4b8b-b174-1472009d8b94
16/05/09 16:57:14 INFO SessionState: Created HDFS directory: /tmp/hive/vora/88f39d77-4d79-4b8b-b174-1472009d8b94/_tmp_space.db
16/05/09 16:57:14 INFO SapHiveContext: SapSQLContext [version: 1.2.33] created
16/05/09 16:57:14 INFO SapHiveContext: Auto-Registering tables from Datasource 'com.sap.spark.vora'
Exception in thread "main" com.sap.spark.vora.discovery.DiscoveryException: Could not connect to Consul Agent on XXXXX.com.au:8500 : null
        at com.sap.spark.vora.discovery.ConsulDiscoveryClient$ConsulDiscoveryClient.<init>(ConsulDiscoveryClient.scala:38)
        at com.sap.spark.vora.discovery.ConsulDiscoveryClient$.getClient(ConsulDiscoveryClient.scala:21)
        at com.sap.spark.vora.discovery.DiscoveryClientFactory$.getClient(DiscoveryClientFactory.scala:9)

Any ideas? Did I do something wrong when setting up the new nodes with regard to Vora?

      Author's profile photo Vasi Venkatesan
      Vasi Venkatesan

It seems the issue is with the Discovery Server configuration. I had a similar issue, and Frank spotted it.

Can you check the interface parameter of the Discovery Server? (Can you post the Discovery Server's configuration from Ambari?)

If Consul is not connecting, that could be one issue. Is it a 3-node cluster? The Thriftserver also seems to have that issue of complaining about Consul. A quick way to see whether the agent is up, and which interface it is bound to, is shown below.
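
(Sketch; run on the affected node, assuming the default Consul HTTP port 8500.)

# is anything listening on the Consul HTTP port, and bound to which address?
ss -tlnp | grep 8500

# does the local agent answer, and which addresses does it advertise?
curl -s http://localhost:8500/v1/agent/self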

      thanks,

      Author's profile photo Frank Legler
      Frank Legler
      Blog Post Author

I am closing the comments on this blog.


      For product issues please open a customer message in component HAN-VO.

You can also use the community-based help on Stack Overflow (but it is primarily for non-SAP customers using the Vora Dev Edition in AWS).