ERROR transport.TSaslTransport: SASL negotiation failure

Hi, could you help me look into this issue? It causes our jobs to fail sometimes. Why is the connection sometimes closed? BR, Paul

Hi,

We work with a secure CDH 5.7 cluster.

We run a hive2 action with Oozie.

1. We sometimes find the following errors in the HiveServer2 log:

______________________

2016-08-06 00:09:06,778 ERROR org.apache.thrift.transport.TSaslTransport: [HiveServer2-Handler-Pool: Thread-52]: SASL negotiation failure
javax.security.sasl.SaslException: DIGEST-MD5: IO error acquiring password [Caused by org.apache.hadoop.security.token.SecretManager$InvalidToken: token expired or does not exist: owner=xxx, renewer=hive, realUser=hive/xxx.idc1.xx@XXX, issueDate=1470413336969, maxDate=1471018136969, sequenceNumber=9, masterKeyId=2]
at com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java:594)
at com.sun.security.sasl.digest.DigestMD5Server.evaluateResponse(DigestMD5Server.java:244)
at org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java:539)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:283)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:765)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:762)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1673)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:762)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.security.token.SecretManager$InvalidToken: token expired or does not exist: owner=xxx, renewer=hive, realUser=hive/xxx.idc1.xx@XXX, issueDate=1470413336969, maxDate=1471018136969, sequenceNumber=9, masterKeyId=2
at org.apache.hadoop.hive.thrift.TokenStoreDelegationTokenSecretManager.retrievePassword(TokenStoreDelegationTokenSecretManager.java:114)
at org.apache.hadoop.hive.thrift.TokenStoreDelegationTokenSecretManager.retrievePassword(TokenStoreDelegationTokenSecretManager.java:56)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$SaslDigestCallbackHandler.getPassword(HadoopThriftAuthBridge.java:588)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$SaslDigestCallbackHandler.handle(HadoopThriftAuthBridge.java:619)
at com.sun.security.sasl.digest.DigestMD5Server.validateClientResponse(DigestMD5Server.java:585)
… 15 more
2016-08-06 00:09:06,779 ERROR org.apache.thrift.server.TThreadPoolServer: [HiveServer2-Handler-Pool: Thread-52]: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: DIGEST-MD5: IO error acquiring password
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:765)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:762)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:356)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1673)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:762)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: DIGEST-MD5: IO error acquiring password
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
… 10 more

______________________________________
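As a quick check, the issueDate and maxDate fields in the InvalidToken message are epoch milliseconds; converting them (drop the last three digits) shows the delegation token's validity window, for example with GNU date:

$ date -u -d @1470413336        # issueDate
Fri Aug  5 16:08:56 UTC 2016
$ date -u -d @1471018136        # maxDate, exactly 7 days later
Fri Aug 12 16:08:56 UTC 2016

The failure at 2016-08-06 00:09 therefore falls well inside the issueDate/maxDate window, which suggests the token was removed or aged out of the HiveServer2 token store rather than reaching maxDate.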

2. The hive2 action task sometimes fails, though most of the time it succeeds.

    2.1 The log of a successful run is below:

           ________________________________________

Connecting to jdbc:hive2://xxxx.idc1.xxx:10000/
Error: Could not open client transport with JDBC Uri: jdbc:hive2://xxxx.idc1.xxx:10000/: Peer indicated failure: DIGEST-MD5: IO error acquiring password (state=08S01,code=0)
Connected to: Apache Hive (version 1.1.0-cdh5.7.1)
Driver: Hive JDBC (version 1.1.0-cdh5.7.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
No rows affected (0.078 seconds)
INFO : Compiling command(queryId=hive_20160808000909_06e0d60a-7dcd-485f-b177-f83aced6ee9b): use xxx

            _____________________________________

2.2 The log of a failed run is below; it happens occasionally:

Connecting to jdbc:hive2://xxxx.idc1.xxx:10000/
Error: Could not open client transport with JDBC Uri: jdbc:hive2://xxxx.idc1.xxx:10000/: Peer indicated failure: DIGEST-MD5: IO error acquiring password (state=08S01,code=0)
No current connection
Connected to: Apache Hive (version 1.1.0-cdh5.7.1)
Driver: Hive JDBC (version 1.1.0-cdh5.7.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Closing: 0: jdbc:hive2://xxxx.idc1.xxx:10000/
Intercepting System.exit(2)

__________________________________________________

Could you help me resolve this issue where the hive2 action sometimes fails?

Thanks in advance.

BR

Paul

Hello.

I am trying to write a Hive client from Windows 2012 R2.

I have set the HADOOP_HOME environment variable.

Then I prepared the following code:

import java.sql.Connection;
import java.sql.DriverManager;

import org.apache.hadoop.security.UserGroupInformation;

public static void main(String[] args) {
    try {
        // Switch the Hadoop client libraries to Kerberos authentication.
        org.apache.hadoop.conf.Configuration conf = new org.apache.hadoop.conf.Configuration();
        conf.set("hadoop.security.authentication", "Kerberos");
        UserGroupInformation.setConfiguration(conf);
        // Log in from the keytab (note the escaped backslashes in the Windows path).
        UserGroupInformation.loginUserFromKeytab("testuser@DOM.COM",
                "C:\\Servers\\Repository\\Templates\\HiveClient\\src\\main\\resources\\testuser.keytab");
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        System.out.println("getting connection");
        Connection con = DriverManager.getConnection(
                "jdbc:hive2://rnd-server02:10010/default;principal=testuser@DOM.COM");
        System.out.println("Connected");
        con.close();
    }
    catch (Exception e) {
        e.printStackTrace();
    }
}
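A note on the two principals involved (hedged, since the cluster's exact Kerberos setup is not shown): loginUserFromKeytab supplies the client identity, while the principal= part of a hive2 JDBC URL names the HiveServer2 service principal, normally of the form hive/<hiveserver2-host-fqdn>@DOM.COM rather than a user principal. A hypothetical corrected connection line, with the FQDN as a placeholder:

Connection con = DriverManager.getConnection(
        "jdbc:hive2://rnd-server02:10010/default;principal=hive/rnd-server02.dom.com@DOM.COM");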

The following error is returned:

12:46:40.955 [main] ERROR org.apache.thrift.transport.TSaslTransport - SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) ~[?:1.8.0_121]
	at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94) ~[libthrift-0.9.3.jar:0.9.3]
	at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) [libthrift-0.9.3.jar:0.9.3]
	at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) [libthrift-0.9.3.jar:0.9.3]
	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52) [hive-shims-common-2.0.0.jar:2.0.0]
	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49) [hive-shims-common-2.0.0.jar:2.0.0]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_121]
	at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_121]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) [hadoop-common-2.6.0.jar:?]
	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49) [hive-shims-common-2.0.0.jar:2.0.0]
	at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:181) [hive-jdbc-2.0.0.jar:2.0.0]
	at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:152) [hive-jdbc-2.0.0.jar:2.0.0]
	at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107) [hive-jdbc-2.0.0.jar:2.0.0]
	at java.sql.DriverManager.getConnection(DriverManager.java:664) [?:1.8.0_121]
	at java.sql.DriverManager.getConnection(DriverManager.java:270) [?:1.8.0_121]
	at RU.Templates.Hive.Client.main(Client.java:49) [classes/:?]
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7))
	at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:770) ~[?:1.8.0_121]
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248) ~[?:1.8.0_121]
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179) ~[?:1.8.0_121]
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192) ~[?:1.8.0_121]
	... 15 more
Caused by: sun.security.krb5.KrbException: Server not found in Kerberos database (7)
	at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:70) ~[?:1.8.0_121]
	at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:251) ~[?:1.8.0_121]
	at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:262) ~[?:1.8.0_121]
	at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:308) ~[?:1.8.0_121]
	at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:126) ~[?:1.8.0_121]
	at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:458) ~[?:1.8.0_121]
	at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:693) ~[?:1.8.0_121]
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248) ~[?:1.8.0_121]
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179) ~[?:1.8.0_121]
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192) ~[?:1.8.0_121]
	... 15 more
Caused by: sun.security.krb5.Asn1Exception: Identifier doesn't match expected value (906)
	at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140) ~[?:1.8.0_121]
	at sun.security.krb5.internal.TGSRep.init(TGSRep.java:65) ~[?:1.8.0_121]
	at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:60) ~[?:1.8.0_121]
	at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:55) ~[?:1.8.0_121]
	at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:251) ~[?:1.8.0_121]
	at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:262) ~[?:1.8.0_121]
	at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:308) ~[?:1.8.0_121]
	at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:126) ~[?:1.8.0_121]
	at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:458) ~[?:1.8.0_121]
	at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:693) ~[?:1.8.0_121]
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248) ~[?:1.8.0_121]
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179) ~[?:1.8.0_121]
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192) ~[?:1.8.0_121]
	... 15 more
Exception in thread "main" java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://rnd-server02:10010/default;principal=testuser@DOM.COM: GSS initiate failed
	at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:207)
	at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:152)
	at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
	at java.sql.DriverManager.getConnection(DriverManager.java:664)
	at java.sql.DriverManager.getConnection(DriverManager.java:270)
	at RU.Templates.Hive.Client.main(Client.java:49)
Caused by: org.apache.thrift.transport.TTransportException: GSS initiate failed
	at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
	at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
	at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
	at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:181)
	... 5 more

What else can I try to resolve the GSS initiate problem?

Problem

The following error is captured in hiveserver2.log:
ERROR transport.TSaslTransport (TSaslTransport.java:open(315)) — SASL negotiation failure
javax.security.sasl.SaslException: Error validating the login [Caused by javax.security.sasl.AuthenticationException: Error authenticating with the PAM service: system-auth]

Cause

The expired password of the hive OS user is the cause of the problem.

Diagnosing The Problem

Look at the /var/log/secure file; information like the following will be captured:

May 12 16:07:17 bdmgt001 java: pam_unix(system-auth:auth): check pass; user unknown
May 12 16:07:17 bdmgt001 java: pam_unix(system-auth:auth): authentication failure; logname= uid=603 euid=603 tty= ruser= rhost=
May 12 16:07:17 bdtmgt001 java: pam_succeed_if(system-auth:auth): error retrieving information about user anonymous
May 12 16:07:19 bdmgt001 java: pam_unix(system-auth:auth): check pass; user unknown
May 12 16:07:19 bdmgt001 java: pam_unix(system-auth:auth): authentication failure; logname= uid=603 euid=603 tty= ruser= rhost=

In this setup, uid=603 belongs to the hive user, so this indicates a problem with the hive user's password.

For example:
[root@bdsup006 ~]# chage -l hive    <- view the details of the user's password aging
Last password change : Dec, 08, 2015
Password expires : May, 08, 2015

Resolving The Problem

Looking at these dates, we can see that the hive password has expired. Change the password and restart Hive, and the error will no longer be reported.
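A minimal sequence for the fix, assuming hive is a local OS account and HiveServer2 is restarted through your usual cluster manager (adapt to your environment):

passwd hive            # set a new password for the hive service user
chage -M -1 hive       # optionally disable password aging for this service account
chage -l hive          # confirm the new expiry settings
# then restart HiveServer2 from your cluster manager (Ambari, Cloudera Manager, ...) or init scripts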


The region splitter job takes a long time to run.

The region splitter job uses random sampling to analyze the input data. The random sampling is based on the default values of the following parameters:

--sortmaxsamples

Maximum number of samples that the job uses to analyze the input data. Default is 100,000.

--sortmaxsplitssampled

Maximum number of splits that the job uses to extract the sample data. Default is 20.

--sortsampleprobability

Frequency to sample the input data in each split. Specify a value between 0.0 and 1.0. A higher value results in dense sampling of each split and a lower value results in sparse sampling of each split. Default is 1.0.

If the input data is uniformly distributed, you can use a small sample size, few splits, and a higher sampling frequency to reduce the running time of the job. If the input data is skewed, you can use a large sample size, more splits, and a lower sampling frequency to reduce the running time of the job.

For example, the following command runs the region splitter job with the additional parameters:

run_hbase_region_analysis.sh --config=/usr/local/conf/config_big.xml --input=/usr/hdfs/workingdir/MDMBDRMInitialBatch/MDMBDE0063_1602999447744334391/output/dir/pass-join --hdfsdir=/usr/hdfs/workingdir --rule=/usr/local/conf/matching_rules.xml --regions=14 --sortmaxsamples=200000 --sortmaxsplitssampled=30 --sortsampleprobability=0.5

When you rerun the Hive enabler job with the same Hive-related options, the job fails.

When you run the Hive enabler job for the first time, the job creates an output table and an internal table in Hive. When you rerun the Hive enabler job with the same Hive-related options, you get an error in the following format:

AlreadyExistsException(message:Table <Table Name>|<Table Name>_internal already exists)

where <Table Name> indicates the output table and <Table Name>_internal indicates the internal table. For example, mdmbdrm002_emp indicates the output table and mdmbdrm002_emp_internal indicates the internal table.

To rerun the Hive enabler job with the same Hive-related options, perform the following tasks (example cleanup commands follow the list):

  1. If you ran the Hive enabler job without the --linkHBase parameter, drop the output table as a view.

  2. If you ran the Hive enabler job with the --linkHBase parameter, drop the output table.

  3. If the <Table Name>_internal table exists, drop it.

  4. Rerun the Hive enabler job.
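For example, with the table names used above, the cleanup could look like the following beeline calls; the JDBC URL and authentication options are placeholders and must match your cluster:

beeline -u "jdbc:hive2://<hiveserver2-host>:10000/default" -e "DROP VIEW IF EXISTS mdmbdrm002_emp;"
beeline -u "jdbc:hive2://<hiveserver2-host>:10000/default" -e "DROP TABLE IF EXISTS mdmbdrm002_emp;"
beeline -u "jdbc:hive2://<hiveserver2-host>:10000/default" -e "DROP TABLE IF EXISTS mdmbdrm002_emp_internal;"

Use the DROP VIEW form when the job ran without --linkHBase and the DROP TABLE form when it ran with --linkHBase.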

In an encrypted environment, when you run the Hive enabler job, the job fails.

In an encrypted environment, when you run the Hive enabler job, you get the following error:

ERROR transport.TSaslTransport: SASL negotiation failure javax.security.sasl.SaslException: No common protection layer between client and server

When you do not specify the authentication type that your environment uses in the configuration file, you get this error.

To fix this issue, in the HiveConfiguration section of the configuration file, specify the sasl.qop parameter in the JDBCUrl parameter.

For more information about the JDBCUrl parameter, see the Informatica MDM - Relate 360 Installation and Configuration Guide.
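As an illustration of the general Hive JDBC form (the exact JDBCUrl syntax for Relate 360 is in the guide; the host, port, and principal below are placeholders), a QOP-aware URL looks like this, where the sasl.qop value must match hive.server2.thrift.sasl.qop on the server:

jdbc:hive2://<hiveserver2-host>:10000/default;principal=hive/_HOST@YOUR.REALM.COM;sasl.qop=auth-conf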

When you run the load clustering job, the job fails.

If the repository configuration in the configuration file is not in sync with the hbase-site.xml file, the job fails. You can find the hbase-site.xml file in the following location:

${HBASE_HOME}/conf/hbase-site.xml

Ensure that the values that you specify in the HBASEConfiguration section of the configuration file are in sync with the values in the hbase-site.xml file.

For more information about the repository configuration, see the Informatica MDM - Relate 360 Installation and Configuration Guide.
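A quick way to compare is to pull the client-side connection settings straight out of hbase-site.xml; the property names below are the standard HBase client settings (adjust if your deployment uses others):

grep -A1 -E "hbase.zookeeper.quorum|hbase.zookeeper.property.clientPort|zookeeper.znode.parent" ${HBASE_HOME}/conf/hbase-site.xml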

@ghost

I am using the following version of Hive 1.1.0-cdh5.4.3 with R 3.2.2. My typical jdbc string would look something like below if I were connecting via SQL Squirrel for instance:

jdbc:hive2://hive.server.com:10000/default;AuthMech=1;principal=hive/_HOST@SOME.DOMAIN.COM

I see the following error after connecting; note that I have a valid credential prior to invoking the R REPL:

rhive.connect(host = "hive.server.com", port = "10001", db = "default", user = "bayroot", password = "XXXXX", defaultFS="hdfs://nameservice1/rhive", properties="hive.principal=hive/_HOST@SOME.DOMAIN.COM")
Warning:
+----------------------------------------------------------+
+ / hiveServer2 argument has not been provided correctly. +
+ / RHive will use a default value: hiveServer2=TRUE. +
+----------------------------------------------------------+
15/10/21 11:17:03 INFO jdbc.Utils: Supplied authorities: hive.server.com:10001
15/10/21 11:17:03 INFO jdbc.Utils: Resolved authority: hive.server.com:10001
15/10/21 11:17:03 INFO jdbc.HiveConnection: Will try to open client transport with JDBC Uri: jdbc:hive2://hive.server.com:10001/default;principal=hive/_HOST@SOME.DOMAIN.COM
15/10/21 11:20:05 ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Fail to create credential. (63) - No service creds)]

Please note this code was working fine on CDH 5.3.2, which was Hive 0.13 I believe:

Package: RHive
Type: Package
Title: R and Hive
Version: 2.0-1.0

@Worvast

Hi @bayroot22, I can think of two options:

1. Check whether you have a Kerberos ticket with 'klist' (in the shell, not in R); if not, create one with the 'kinit' command.

2. Check whether HiveServer2 is configured with the RHive UDF file included. I only have this information on how to do that:

In hive-site.xml add this property and restart HiveServer2:

<property>
    <name>hive.aux.jars.path</name>
    <value> ----/path/to/rhive_udf.jar----, ----/other/aux/jars.jar----  </value>
</property>

If you use Cloudera CDH, there is a tutorial on including UDF files for use by HiveServer2:

http://www.cloudera.com/content/www/en-us/documentation/enterprise/latest/topics/cm_mc_hive_udf.html

Good luck

@ghost

Thanks for the response…

  1. Yes, a ticket was created.
  2. I added rhive_udf.jar to my aux path and restarted Hive, but I see the same issue.

@ghost

I added the property hive.keytab=/home/bayroot/hive.keytab to the connection string; now I get the following:

Exception in thread "Thread-6" java.lang.RuntimeException: java.sql.SQLException: Could not open client transport with JDBC Uri: jdbc:hive2://hive.server.com:10001/default: Peer indicated failure: Unsupported mechanism type PLAIN

A couple of thoughts/questions:

1 - I manage a multi-tenant environment, so exposing the hive keytab is a serious security concern; even if this worked, it wouldn't be a solution I could implement.
2 - In the loginUserFromKeytab method, how does the authentication type get set? I don't see a setAuthenticationMethod call; possibly it gets set inside loginUserFromKeytab? As indicated in the message above, it looks like RHive is sending an auth type of PLAIN, not KERBEROS.

@Worvast

As I understand it, if you have created the ticket, you should not pass the username and password flags; the 'plaintext password' message can appear when you try to use those parameters to log in. If a Kerberos ticket exists in the environment where R runs, R is responsible for picking it up automatically when creating the connection, so you should use only:

rhive.connect(host = "hive.server.com", port = "10001", db = "default",
defaultFS="hdfs://nameservice1/rhive", 
properties="hive.principal=hive/_HOST@SOME.DOMAIN.COM")

It is also true that I can't connect with that format; I used the following format for the connection (example):

rhive.connect(host="hive.server.com:10000/DEFAULTDB;principal=hive/KERBEROSPRINCIPAL;AuthMech=1;KrbHostFQDN=KERBEROSHOSTURL;KrbServiceName=hive;KrbRealm=KERBEROSREALM",
defaultFS="hdfs://nameservice1/rhive", 
hiveServer2=TRUE,
updateJar=FALSE)

@arundoss

Hi Team,

Could you please confirm whether RHive supports Kerberos or not? If it does, please tell me the correct format for rhive.connect.

@Prussia

I am using Spark 1.5.X to work with Hive 0.14.0.

a. spark-defaults.conf:

spark.sql.hive.metastore.version 0.14.0
spark.sql.hive.metastore.jars hadoop 2.6.0 jars:hive 0.14.0 jars
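(spark.sql.hive.metastore.jars takes either builtin, maven, or an actual colon-separated classpath, so the second line above is shorthand for something like the following; the paths are placeholders for wherever the Hadoop 2.6.0 and Hive 0.14.0 jars live:)

spark.sql.hive.metastore.jars /opt/hadoop-2.6.0/share/hadoop/common/lib/*:/opt/hive-0.14.0/lib/*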

b. hive-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <!-- <value>jdbc:mysql:</value> -->
    <value>jdbc:mysql:</value>
    <description>the URL of the MySQL database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
  </property>
  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>false</value>
  </property>
  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>true</value>
  </property>
  <property>
    <name>datanucleus.autoStartMechanism</name>
    <value>SchemaTable</value>
  </property>

  <property>
    <name>hive.exec.max.dynamic.partitions</name>
    <value>100000</value>
  </property>
  <property>
    <name>hive.exec.max.dynamic.partitions.pernode</name>
    <value>10000</value>
  </property>
  <!-- rename bug workaround https: -->
  <property>
    <name>fs.hdfs.impl.disable.cache</name>
    <value>false</value>
  </property>
  <property>
    <name>fs.file.impl.disable.cache</name>
    <value>false</value>
  </property>
  <!-- memory leak workaround https: -->
  <property>
    <name>hive.server2.thrift.http.max.worker.threads</name>
    <value>5000</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>hdfs:/user/hive/warehouse</value>
  </property>
  <property>
    <name>hive.exec.max.dynamic.partitions.pernode</name>
    <value>10000</value>
  </property>
  <property>
    <name>hive.exec.max.dynamic.partitions</name>
    <value>10000</value>
  </property>
  <property>
    <name>mapred.output.compress</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.exec.compress.output</name>
    <value>true</value>
  </property>
  <property>
    <name>mapred.output.compression.type</name>
    <value>BLOCK</value>
  </property>
  <property>
    <name>mapreduce.input.fileinputformat.split.minsize</name>
    <value>134217728</value>
  </property>
  <property>
    <name>mapreduce.input.fileinputformat.split.maxsize</name>
    <value>1000000000</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>1024</value>
  </property>
  <!--
  <property>
    <name>hive.mapred.map.tasks.speculative.execution</name>
    <value>false</value>
  </property>
  -->
  <property>
    <name>hive.mapred.reduce.tasks.speculative.execution</name>
    <value>false</value>
  </property>
  <property>
    <name>mapred.map.tasks.speculative.execution</name>
    <value>false</value>
  </property>
  <property>
    <name>mapred.reduce.tasks.speculative.execution</name>
    <value>false</value>
  </property>
  <property>
    <name>mapreduce.job.queuename</name>
    <value>mapreduce</value>
  </property>
  <property>
    <name>hive.metastore.client.socket.timeout</name>
    <value>600</value>
  </property>
  <property>
    <name>hive.auto.convert.join.noconditionaltask.size</name>
    <value>671088000</value>
  </property>
  
  <property>
    <name>hive.server2.authentication</name>
    <value>KERBEROS</value>
  </property>
  <property>
    <name>hive.server2.authentication.kerberos.principal</name>
    <value>hive/_HOST@HADOOP.HAP</value>
  </property>
  <property>
    <name>hive.server2.authentication.kerberos.keytab</name>
    <value>/tmp/hive.keytab</value>
  </property>

  <property>
    <name>hive.metastore.sasl.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.metastore.kerberos.keytab.file</name>
    <value>/export/keytabs_conf/hive.keytab</value>
  </property>
  <property>
    <name>hive.metastore.kerberos.principal</name>
    <value>hive/_HOST@HADOOP.HAP</value>
  </property>

  <property>
    <name>hive.metastore.uris</name>
    <value>thrift:</value>
  </property>

  <property>
    <name>hive.server2.support.dynamic.service.discovery</name>
    <value>true</value>
  </property>
 

  <!--hive security-->
  <property>
    <name>hive.security.authorization.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.security.authorization.createtable.owner.grants</name>
    <value>ALL</value>
  </property>
  <property>
    <name>hive.security.authorization.task.factory</name>
    <value>org.apache.hadoop.hive.ql.parse.authorization.HiveAuthorizationTaskFactoryImpl</value>
  </property>
  <property>
    <name>hive.server2.enable.doAs</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.warehouse.subdir.inherit.perms</name>
    <value>true</value>
  </property>

  <!-- hive Storage Based Authorization-->
  <!--
  <property>
    <name>hive.metastore.pre.event.listeners</name>
    <value>org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener</value>
  </property>
  <property>
    <name>hive.security.metastore.authorization.manager</name>
    <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value>
  </property>
  <property>
    <name>hive.security.metastore.authenticator.manager</name>
    <value>org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator</value>
  </property>
  <property>
    <name>hive.security.metastore.authorization.auth.reads</name>
    <value>true</value>
  </property>
  -->
  <!--  SQL Standard Based Hive Authorization-->
  <property>
    <name>hive.users.in.admin.role</name>
    <value>hive,test109</value>
  </property>
  <property>
    <name>hive.security.authorization.manager</name>
    <value>org.apache.hadoop.hive.ql.security.authorization.plugin.sqlstd.SQLStdConfOnlyAuthorizerFactory</value>
  </property>
  <property>
    <name>hive.security.authenticator.manager</name>
    <value>org.apache.hadoop.hive.ql.security.SessionStateUserAuthenticator</value>
  </property>

  <property>
    <name>hive.server2.map.fair.scheduler.queue</name>
    <value>false</value>
  </property>
  
  <!-- https: -->
  <property>
        <name>hive.exec.stagingdir</name>
        <value>/tmp/hive/spark-stagingdir</value>
    </property>
  
</configuration>

The steps to start up Spark 1.5.x:

1. Obtain a Kerberos ticket:

kinit -kt /tmp/xx.keytab hive/xxx

2. Start up the Spark Thrift Server:

sbin/start-thriftserver.sh --master yarn-client --num-executors 2

The following exception is thrown during startup:

15/11/19 15:39:59 ERROR TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
	at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
	at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
	at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
	at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:358)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:215)
	at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:73)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1447)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:63)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:73)
	at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:2661)
	at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:2680)
	at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:425)
	at org.apache.spark.sql.hive.client.ClientWrapper.<init>(ClientWrapper.scala:171)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at org.apache.spark.sql.hive.client.IsolatedClientLoader.liftedTree1$1(IsolatedClientLoader.scala:183)
	at org.apache.spark.sql.hive.client.IsolatedClientLoader.<init>(IsolatedClientLoader.scala:179)
	at org.apache.spark.sql.hive.HiveContext.metadataHive$lzycompute(HiveContext.scala:264)
	at org.apache.spark.sql.hive.HiveContext.metadataHive(HiveContext.scala:186)
	at org.apache.spark.sql.hive.HiveContext.setConf(HiveContext.scala:393)
	at org.apache.spark.sql.SQLContext$$anonfun$5.apply(SQLContext.scala:229)
	at org.apache.spark.sql.SQLContext$$anonfun$5.apply(SQLContext.scala:228)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
	at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
	at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:228)
	at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:72)
	at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:58)
	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:77)
	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
	at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
	at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
	at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
	at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
	... 52 more
15/11/19 15:39:59 WARN metastore: Failed to connect to the MetaStore Server...
15/11/19 15:39:59 INFO metastore: Waiting 1 seconds before next connection attempt.

Note: If I don't configure the spark.sql.hive.metastore settings in spark-defaults.conf, there is (not surprisingly) a version-incompatibility issue when I run DML, but I can start up the Spark Thrift Server and pass the Kerberos authentication.
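One quick check before launching is to confirm the ticket cache the server JVM will read (klist), and the HiveServer2 Kerberos settings can also be passed explicitly on the launch command, since start-thriftserver.sh accepts spark-submit options plus --hiveconf; the values below simply reuse the ones from the hive-site.xml above:

klist
sbin/start-thriftserver.sh --master yarn-client --num-executors 2 \
  --hiveconf hive.server2.authentication=KERBEROS \
  --hiveconf hive.server2.authentication.kerberos.principal=hive/_HOST@HADOOP.HAP \
  --hiveconf hive.server2.authentication.kerberos.keytab=/tmp/hive.keytab

This is only a sketch of the launch command; whether it helps depends on where the embedded metastore client picks up its credentials.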



Hive metastore synchronization fails (GSS initiate failed: Server not found in Kerberos database)


DSS 4.0

When trying to synchronize the metastore, I get this error:


[18:20:40] [ERROR] [org.apache.thrift.transport.TSaslTransport] running compute_sfpd_incidents_sample_prepared_NP - SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:193)
at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:155)
at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
at com.dataiku.dip.hive.HiveServer2ConnectionPoolService$1.makeObject(HiveServer2ConnectionPoolService.java:171)
at com.dataiku.dip.hive.HiveServer2ConnectionPoolService$1.makeObject(HiveServer2ConnectionPoolService.java:122)
at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1220)
at com.dataiku.dip.hive.HiveServer2ConnectionPoolService.take(HiveServer2ConnectionPoolService.java:293)
at com.dataiku.dip.hive.HiveServer2SchemaHandler.take(HiveServer2SchemaHandler.java:27)
at com.dataiku.dip.hive.HiveServer2SchemaHandler.takeForMetastore(HiveServer2SchemaHandler.java:31)
at com.dataiku.dip.hive.HiveServer2SchemaHandler.listHiveDatabase(HiveServer2SchemaHandler.java:40)
at com.dataiku.dip.hive.HiveServer2SchemaHandler.isHiveDatabase(HiveServer2SchemaHandler.java:72)
at com.dataiku.dip.hive.HiveMetastoreSynchronizer.isTableWriteSafe(HiveMetastoreSynchronizer.java:324)
at com.dataiku.dip.hive.HiveMetastoreSynchronizer.synchronizeOneDatasetPartition(HiveMetastoreSynchronizer.java:254)
at com.dataiku.dip.dataflow.jobrunner.ActivityRunner.waitForEnd(ActivityRunner.java:180)
at com.dataiku.dip.dataflow.jobrunner.ActivityRunner.runActivity(ActivityRunner.java:549)
at com.dataiku.dip.dataflow.jobrunner.JobRunner.runActivity(JobRunner.java:123)
at com.dataiku.dip.dataflow.jobrunner.JobRunner.access$900(JobRunner.java:35)
at com.dataiku.dip.dataflow.jobrunner.JobRunner$ActivityExecutorThread.run(JobRunner.java:312)
Caused by: GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:710)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
... 27 more
Caused by: KrbException: Server not found in Kerberos database (7) - LOOKING_UP_SERVER
at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:73)
at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:191)
at sun.security.krb5.KrbTgsReq.sendAndGetCreds(KrbTgsReq.java:202)
at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:292)
at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:101)
at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:456)
at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:641)
... 30 more
Caused by: KrbException: Identifier doesn't match expected value (906)
at sun.security.krb5.internal.KDCRep.init(KDCRep.java:140)
at sun.security.krb5.internal.TGSRep.init(TGSRep.java:65)
at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:60)
at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:55)
... 36 more

1 Solution

The most probable cause is that the Hiveserver2 principal which you had to enter in Administration > Settings > Hadoop is not correct.

It should generally be in the form hive/fully.qualified.host.name@YOUR.REALM.COM

Note that you need the full syntax ("hive" or "hive/fully.qualified.host.name" is not enough) and that the hostname can generally not be an IP or 127.0.0.1
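A quick way to check whether the principal string itself is resolvable is to request a service ticket for it by hand; if the principal is wrong, you get the same "Server not found in Kerberos database" error. The principals below are the placeholder forms from above:

kinit your_user@YOUR.REALM.COM
kvno hive/fully.qualified.host.name@YOUR.REALM.COM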



Symptom:

a. Hue could not show Hive tables after Hive enabled PAM authentication.

b. In /opt/mapr/hue/hue-<version>/logs/runcpserver.log, the following error messages show up:

[19/Sep/2017 15:55:07 -0700] dbms         DEBUG    Query Server: {'server_name': 'beeswax', 'transport_mode': 'socket', 'server_host': 's4.poc.com', 'server_port': 10000, 'auth_password_used': False, 'http_url': 'http://s4.poc.com:10001/cliservice', 'auth_username': 'hue', 'principal': None}
[19/Sep/2017 15:55:10 -0700] thrift_util  INFO     Thrift saw a transport exception: Bad status: 3 (Error validating the login)

c. In the HiveServer2 log /opt/mapr/hive/hive-<version>/logs/mapr/hive.log, the following stack trace shows up:

2017-09-19T15:57:11,046 ERROR [HiveServer2-Handler-Pool: Thread-60] transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: Error validating the login [Caused by javax.security.sasl.AuthenticationException: Error authenticating with the PAM service: login [Caused by javax.security.sasl.AuthenticationException: Error authenticating with the PAM service: login]]
 at org.apache.hive.service.auth.PlainSaslServer.evaluateResponse(PlainSaslServer.java:110)
 at org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java:539)
 at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:283)
 at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
 at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
 at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
Caused by: javax.security.sasl.AuthenticationException: Error authenticating with the PAM service: login [Caused by javax.security.sasl.AuthenticationException: Error authenticating with the PAM service: login]
 at org.apache.hive.service.auth.PamAuthenticationProviderImpl.Authenticate(PamAuthenticationProviderImpl.java:54)
 at org.apache.hive.service.auth.PlainSaslHelper$PlainServerCallbackHandler.handle(PlainSaslHelper.java:119)
 at org.apache.hive.service.auth.PlainSaslServer.evaluateResponse(PlainSaslServer.java:103)
 ... 8 more
Caused by: javax.security.sasl.AuthenticationException: Error authenticating with the PAM service: login
 at org.apache.hive.service.auth.PamAuthenticationProviderImpl.Authenticate(PamAuthenticationProviderImpl.java:48)
 ... 10 more
2017-09-19T15:57:11,046 ERROR [HiveServer2-Handler-Pool: Thread-60] server.TThreadPoolServer: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Error validating the login
 at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
 at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Error validating the login
 at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
 at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
 at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
 at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
 ... 4 more

Env:

Hue 3.10 or later

Solution:

Enable Hue PAM pass-through authentication with Hive. For the MapR platform, follow the MapR documentation:
1. Configure the [beeswax] section of hue.ini in the directory /opt/mapr/hue/hue-<version>/desktop/conf:

[beeswax]
...
# Security mechanism of authentication none/GSSAPI/MAPR-SECURITY
  mechanism=none
# Override the default desktop username and password of the hue user used for authentications with other services.
# e.g. Used for LDAP/PAM pass-through authentication.
  auth_username=mapr
  auth_password=password_for_mapr_user 

2. Restart Hue

maprcli node services -name hue -action restart -nodes <Hue node>
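To confirm that the same account authenticates against HiveServer2's PAM setup independently of Hue, a direct beeline login can be tested; the path, host, port, and credentials below are placeholders based on the values above:

/opt/mapr/hive/hive-<version>/bin/beeline -u "jdbc:hive2://s4.poc.com:10000/default" -n mapr -p 'password_for_mapr_user'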
