[hive catalog] Connection to HMS gets reset

[Background] After configuring a Hive catalog, Hive data cannot be queried; the logs show a connection-reset error.
[StarRocks version] 3.0.2
[Cluster size] 3 FE + 10 BE
[Machine info] 10 physical machines, 48 cores and 256 GB RAM each
[HMS version] 2.3.8
[Details]
The Hive catalog was configured following https://docs.starrocks.io/zh-cn/latest/data_source/catalog/hive_catalog#hive-catalog; the main steps were:

  1. Run kinit periodically on each physical machine (a crontab sketch follows after the log below)
  2. Add JAVA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf" to the BE and FE configuration files
  3. Add export HADOOP_USER_NAME="<user_name>" at the very top of hadoop_env.sh
  4. Copy core-site.xml and hdfs-site.xml into the conf directories of each BE and FE
  5. Restart all FEs and BEs
  6. Create the catalog:
    CREATE EXTERNAL CATALOG hive_catalog
    PROPERTIES
    (
    "type" = "hive",
    "hive.metastore.uris" = "thrift://#######:9083,thrift://#######:9083"
    );
  7. Query the Hive catalog:
    mysql> show databases from hive_catalog;
    ERROR 1064 (HY000): Failed to getAllDatabases, msg: null
    Checking the logs revealed the following error:
    2023-08-02 10:15:42,103 WARN (starrocks-mysql-nio-pool-80|728) [HiveMetaStoreThriftClient.open():556] set_ugi() not successful, Likely cause: new client talking to old server. Continuing without it.
    org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection reset
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127) ~[libthrift-0.13.0.jar:0.13.0]
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) ~[libthrift-0.13.0.jar:0.13.0]
    at org.apache.thrift.protocol.TBinaryProtocol.readStringBody(TBinaryProtocol.java:411) ~[libthrift-0.13.0.jar:0.13.0]
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:254) ~[libthrift-0.13.0.jar:0.13.0]
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77) ~[libthrift-0.13.0.jar:0.13.0]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_set_ugi(ThriftHiveMetastore.java:4931) ~[hive-apache-3.1.2-13.jar:?]
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.set_ugi(ThriftHiveMetastore.java:4917) ~[hive-apache-3.1.2-13.jar:?]
    at com.starrocks.connector.hive.HiveMetaStoreThriftClient.open(HiveMetaStoreThriftClient.java:548) ~[starrocks-fe.jar:?]
    at com.starrocks.connector.hive.HiveMetaStoreThriftClient.<init>(HiveMetaStoreThriftClient.java:302) ~[starrocks-fe.jar:?]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_131]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_131]
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_131]
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_131]
    at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84) ~[hive-apache-3.1.2-13.jar:?]
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95) ~[hive-apache-3.1.2-13.jar:?]
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148) ~[hive-apache-3.1.2-13.jar:?]
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119) ~[hive-apache-3.1.2-13.jar:?]
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:112) ~[hive-apache-3.1.2-13.jar:?]
    at com.starrocks.connector.hive.HiveMetaClient$RecyclableClient.<init>(HiveMetaClient.java:93) ~[starrocks-fe.jar:?]
    at com.starrocks.connector.hive.HiveMetaClient$RecyclableClient.<init>(HiveMetaClient.java:82) ~[starrocks-fe.jar:?]
    at com.starrocks.connector.hive.HiveMetaClient.getClient(HiveMetaClient.java:137) ~[starrocks-fe.jar:?]
    at com.starrocks.connector.hive.HiveMetaClient.callRPC(HiveMetaClient.java:153) ~[starrocks-fe.jar:?]
    at com.starrocks.connector.hive.HiveMetaClient.callRPC(HiveMetaClient.java:145) ~[starrocks-fe.jar:?]
    at com.starrocks.connector.hive.HiveMetaClient.getAllDatabaseNames(HiveMetaClient.java:176) ~[starrocks-fe.jar:?]
    at com.starrocks.connector.hive.HiveMetastore.getAllDatabaseNames(HiveMetastore.java:56) ~[starrocks-fe.jar:?]
    at com.starrocks.connector.hive.CachingHiveMetastore.loadAllDatabaseNames(CachingHiveMetastore.java:178) ~[starrocks-fe.jar:?]
    at com.google.common.cache.CacheLoader$SupplierToCacheLoader.load(CacheLoader.java:227) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.CacheLoader$1.load(CacheLoader.java:192) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3529) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2155) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2045) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.LocalCache.get(LocalCache.java:3962) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3985) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4946) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4952) ~[spark-dpp-1.0.0.jar:?]
    at com.starrocks.connector.hive.CachingHiveMetastore.get(CachingHiveMetastore.java:447) ~[starrocks-fe.jar:?]
    at com.starrocks.connector.hive.CachingHiveMetastore.getAllDatabaseNames(CachingHiveMetastore.java:174) ~[starrocks-fe.jar:?]
    at com.starrocks.connector.hive.CachingHiveMetastore.loadAllDatabaseNames(CachingHiveMetastore.java:178) ~[starrocks-fe.jar:?]
    at com.google.common.cache.CacheLoader$SupplierToCacheLoader.load(CacheLoader.java:227) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.CacheLoader$1.load(CacheLoader.java:192) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3529) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2278) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2155) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2045) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.LocalCache.get(LocalCache.java:3962) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3985) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4946) ~[spark-dpp-1.0.0.jar:?]
    at com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:4952) ~[spark-dpp-1.0.0.jar:?]
    at com.starrocks.connector.hive.CachingHiveMetastore.get(CachingHiveMetastore.java:447) ~[starrocks-fe.jar:?]
    at com.starrocks.connector.hive.CachingHiveMetastore.getAllDatabaseNames(CachingHiveMetastore.java:174) ~[starrocks-fe.jar:?]
    at com.starrocks.connector.hive.HiveMetastoreOperations.getAllDatabaseNames(HiveMetastoreOperations.java:42) ~[starrocks-fe.jar:?]
    at com.starrocks.connector.hive.HiveMetadata.listDbNames(HiveMetadata.java:73) ~[starrocks-fe.jar:?]
    at com.starrocks.server.MetadataMgr.listDbNames(MetadataMgr.java:105) ~[starrocks-fe.jar:?]
    at com.starrocks.qe.ShowExecutor.handleShowDb(ShowExecutor.java:770) ~[starrocks-fe.jar:?]
    at com.starrocks.qe.ShowExecutor.execute(ShowExecutor.java:264) ~[starrocks-fe.jar:?]
    at com.starrocks.qe.StmtExecutor.handleShow(StmtExecutor.java:1140) ~[starrocks-fe.jar:?]
    at com.starrocks.qe.StmtExecutor.execute(StmtExecutor.java:516) ~[starrocks-fe.jar:?]
    at com.starrocks.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:349) ~[starrocks-fe.jar:?]
    at com.starrocks.qe.ConnectProcessor.dispatch(ConnectProcessor.java:463) ~[starrocks-fe.jar:?]
    at com.starrocks.qe.ConnectProcessor.processOnce(ConnectProcessor.java:729) ~[starrocks-fe.jar:?]
    at com.starrocks.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:69) ~[starrocks-fe.jar:?]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[?:1.8.0_131]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[?:1.8.0_131]
    at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_131]
    Caused by: java.net.SocketException: Connection reset
    at java.net.SocketInputStream.read(SocketInputStream.java:210) ~[?:1.8.0_131]
    at java.net.SocketInputStream.read(SocketInputStream.java:141) ~[?:1.8.0_131]
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:284) ~[?:1.8.0_131]
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345) ~[?:1.8.0_131]
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:125) ~[libthrift-0.13.0.jar:0.13.0]
    ... 64 more
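
For reference, a minimal sketch of the periodic kinit from step 1, assuming a crontab entry on each FE/BE host; the keytab path, principal, and schedule are placeholders, not values from this thread:

    # Renew the Kerberos TGT every 6 hours; keytab path and principal are placeholders.
    0 */6 * * * kinit -kt /path/to/user.keytab user@EXAMPLE.COM

The crontab must belong to the same OS user that needs access to HMS and HDFS, and the renewal interval has to be shorter than the ticket lifetime configured in the KDC.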

1. On every FE and every BE, run the kinit -kt keytab_path principal command to obtain a Ticket Granting Ticket (TGT) from the Key Distribution Center (KDC). The user running the command must have permission to access HMS and HDFS. Note that tickets obtained this way expire, so the command needs to be re-run periodically via cron.
2. If queries fail because a hostname cannot be resolved (Unknown Host), add the hostname-to-IP mappings of the HDFS cluster nodes to /etc/hosts.
3. Confirm that the network and the ports are reachable.
Please double-check these points (a quick connectivity check is sketched below).
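
One way to verify points 2 and 3 from each FE/BE host; this is only a sketch, and the HMS hostnames are placeholders for the masked addresses above:

    # Hostname resolution for the HMS nodes (placeholder hostnames).
    getent hosts hms-host-1 hms-host-2
    # TCP reachability of the metastore thrift port.
    nc -vz hms-host-1 9083
    nc -vz hms-host-2 9083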

All three have been confirmed:

  1. kinit is already run periodically via crontab
  2. Judging from the error, the connection is being reset, not failing to be established; the hostnames do resolve
  3. The network is reachable

Installing OpenJDK fixed it.
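
For anyone hitting the same wall, a hedged sketch of switching the FE/BE JVMs to a freshly installed OpenJDK via JAVA_HOME and restarting, assuming the standard StarRocks start scripts; the install path and deployment paths are placeholders:

    # Placeholder path; adjust to the actual OpenJDK 8 install location.
    export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
    export PATH="$JAVA_HOME/bin:$PATH"
    # Restart so the FE and BE pick up the new JDK (deployment paths are placeholders).
    /path/to/fe/bin/stop_fe.sh && /path/to/fe/bin/start_fe.sh --daemon
    /path/to/be/bin/stop_be.sh && /path/to/be/bin/start_be.sh --daemon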

Hi, how did you solve it?
My error:
org.jkiss.dbeaver.model.sql.DBSQLException: SQL 错误 [1064] [42000]: Failed to getAllDatabases, msg: null
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:135)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeStatement(SQLQueryJob.java:505)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.lambda$0(SQLQueryJob.java:436)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:168)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.executeSingleQuery(SQLQueryJob.java:423)
at org.jkiss.dbeaver.ui.editors.sql.execute.SQLQueryJob.extractData(SQLQueryJob.java:804)
at org.jkiss.dbeaver.ui.editors.sql.SQLEditor$QueryResultsContainer.readData(SQLEditor.java:3008)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.lambda$0(ResultSetJobDataRead.java:121)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:168)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetJobDataRead.run(ResultSetJobDataRead.java:119)
at org.jkiss.dbeaver.ui.controls.resultset.ResultSetViewer$ResultSetDataPumpJob.run(ResultSetViewer.java:4425)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:105)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Failed to getAllDatabases, msg: null
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:425)
at com.mysql.jdbc.Util.getInstance(Util.java:408)
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:944)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3933)
at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3869)
at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2524)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2675)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2465)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2439)
at com.mysql.jdbc.StatementImpl.executeInternal(StatementImpl.java:829)
at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:729)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.execute(JDBCStatementImpl.java:342)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCStatementImpl.executeStatement(JDBCStatementImpl.java:132)

Copy hive-site.xml into the conf directories of the BEs and FEs as well.
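
A minimal sketch of that copy, with all paths as placeholders; as with the other *.xml files in step 4, a restart of the FEs and BEs is likely needed for it to take effect:

    # Placeholder paths; put the Hive client config next to core-site.xml / hdfs-site.xml.
    cp /path/to/hive/conf/hive-site.xml /path/to/fe/conf/
    cp /path/to/hive/conf/hive-site.xml /path/to/be/conf/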
