Hive catalog: Failed to get partitionKeys, metadata managed by Alibaba Cloud DLF

To help us locate your issue faster, please provide the following information. Thanks!
【Details】Detailed description of the problem
【Background】What operations were performed?
【Business impact】Yes
【Shared-data (decoupled storage/compute)?】Yes
【StarRocks version】3.2.8
【Cluster size】e.g. 1 FE (1 follower) + 2 BE (FE and BE co-located)
【Machine specs】16C/128G/10GbE

【Attachment】
2024-08-01 09:00:00.148+08:00 ERROR (starrocks-mysql-nio-pool-59757|3432774) [CachingHiveMetastore.get():625] Error occurred when loading cache
com.google.common.util.concurrent.UncheckedExecutionException: com.starrocks.connector.exception.StarRocksConnectorException: Failed to get partitionKeys on [mysql_prod_enlightent_daily.animation_total_heat], msg: null
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2085) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.get(LocalCache.java:4011) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:4034) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:5010) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:5017) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.get(CachingHiveMetastore.java:623) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionKeysByValue(CachingHiveMetastore.java:270) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetastoreOperations.getPartitionKeys(HiveMetastoreOperations.java:246) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetadata.listPartitionNames(HiveMetadata.java:203) ~[starrocks-fe.jar:?]
at com.starrocks.connector.CatalogConnectorMetadata.listPartitionNames(CatalogConnectorMetadata.java:105) ~[starrocks-fe.jar:?]
at com.starrocks.server.MetadataMgr.listPartitionNames(MetadataMgr.java:374) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.rewrite.OptExternalPartitionPruner.initPartitionInfo(OptExternalPartitionPruner.java:236) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.rewrite.OptExternalPartitionPruner.prunePartitionsImpl(OptExternalPartitionPruner.java:124) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.rewrite.OptExternalPartitionPruner.prunePartitions(OptExternalPartitionPruner.java:81) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.rule.transformation.ExternalScanPartitionPruneRule.transform(ExternalScanPartitionPruneRule.java:59) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.RewriteTreeTask.rewrite(RewriteTreeTask.java:81) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.RewriteTreeTask.rewrite(RewriteTreeTask.java:99) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.RewriteTreeTask.rewrite(RewriteTreeTask.java:99) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.RewriteTreeTask.rewrite(RewriteTreeTask.java:99) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.RewriteTreeTask.rewrite(RewriteTreeTask.java:99) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.RewriteTreeTask.rewrite(RewriteTreeTask.java:99) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.RewriteTreeTask.rewrite(RewriteTreeTask.java:99) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.RewriteTreeTask.rewrite(RewriteTreeTask.java:99) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.RewriteTreeTask.rewrite(RewriteTreeTask.java:99) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.RewriteTreeTask.rewrite(RewriteTreeTask.java:99) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.RewriteTreeTask.execute(RewriteTreeTask.java:59) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.task.SeriallyTaskScheduler.executeTasks(SeriallyTaskScheduler.java:65) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.ruleRewriteOnlyOnce(Optimizer.java:857) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.logicalRuleRewrite(Optimizer.java:504) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.rewriteAndValidatePlan(Optimizer.java:609) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.optimizeByCost(Optimizer.java:225) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.optimize(Optimizer.java:174) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.optimize(Optimizer.java:151) ~[starrocks-fe.jar:?]
at com.starrocks.sql.InsertPlanner.buildExecPlan(InsertPlanner.java:391) ~[starrocks-fe.jar:?]
at com.starrocks.sql.InsertPlanner.plan(InsertPlanner.java:203) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.planInsertStmt(StatementPlanner.java:161) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:140) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:92) ~[starrocks-fe.jar:?]
at com.starrocks.qe.StmtExecutor.execute(StmtExecutor.java:532) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:415) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.dispatch(ConnectProcessor.java:610) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.processOnce(ConnectProcessor.java:917) ~[starrocks-fe.jar:?]
at com.starrocks.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:69) ~[starrocks-fe.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:834) ~[?:?]
Caused by: com.starrocks.connector.exception.StarRocksConnectorException: Failed to get partitionKeys on [mysql_prod_enlightent_daily.animation_total_heat], msg: null
at com.starrocks.connector.hive.HiveMetaClient.callRPC(HiveMetaClient.java:164) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetaClient.callRPC(HiveMetaClient.java:150) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetaClient.getPartitionKeys(HiveMetaClient.java:238) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetastore.getPartitionKeysByValue(HiveMetastore.java:141) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionKeys(CachingHiveMetastore.java:279) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$FunctionToCacheLoader.load(CacheLoader.java:169) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.CacheLoader$1.load(CacheLoader.java:192) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3570) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2312) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2189) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2079) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.get(LocalCache.java:4011) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:4034) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:5010) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:5017) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.get(CachingHiveMetastore.java:623) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionKeysByValue(CachingHiveMetastore.java:270) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionKeys(CachingHiveMetastore.java:279) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$FunctionToCacheLoader.load(CacheLoader.java:169) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.CacheLoader$1.load(CacheLoader.java:192) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3570) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2312) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2189) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2079) ~[spark-dpp-1.0.0.jar:?]
... 45 more
Caused by: java.lang.reflect.InvocationTargetException
at jdk.internal.reflect.GeneratedMethodAccessor26.invoke(Unknown Source) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at com.starrocks.connector.hive.HiveMetaClient.callRPC(HiveMetaClient.java:161) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetaClient.callRPC(HiveMetaClient.java:150) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetaClient.getPartitionKeys(HiveMetaClient.java:238) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.HiveMetastore.getPartitionKeysByValue(HiveMetastore.java:141) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionKeys(CachingHiveMetastore.java:279) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$FunctionToCacheLoader.load(CacheLoader.java:169) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.CacheLoader$1.load(CacheLoader.java:192) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3570) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2312) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2189) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2079) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.get(LocalCache.java:4011) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:4034) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:5010) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.getUnchecked(LocalCache.java:5017) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.get(CachingHiveMetastore.java:623) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.getPartitionKeysByValue(CachingHiveMetastore.java:270) ~[starrocks-fe.jar:?]
at com.starrocks.connector.hive.CachingHiveMetastore.loadPartitionKeys(CachingHiveMetastore.java:279) ~[starrocks-fe.jar:?]
at com.google.common.cache.CacheLoader$FunctionToCacheLoader.load(CacheLoader.java:169) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.CacheLoader$1.load(CacheLoader.java:192) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3570) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2312) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2189) ~[spark-dpp-1.0.0.jar:?]
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2079) ~[spark-dpp-1.0.0.jar:?]
... 45 more
Caused by: java.lang.StackOverflowError
at java.util.Collections$UnmodifiableCollection.isEmpty(Collections.java:1033) ~[?:?]
at java.util.Collections$UnmodifiableCollection.isEmpty(Collections.java:1033) ~[?:?]
at java.util.Collections$UnmodifiableCollection.isEmpty(Collections.java:1033) ~[?:?]
at java.util.Collections$UnmodifiableCollection.isEmpty(Collections.java:1033) ~[?:?]
at java.util.Collections$UnmodifiableCollection.isEmpty(Collections.java:1033) ~[?:?]
at java.util.Collections$UnmodifiableCollection.isEmpty(Collections.java:1033) ~[?:?]
at java.util.Collections$UnmodifiableCollection.isEmpty(Collections.java:1033) ~[?:?]
at java.util.Collections$UnmodifiableCollection.isEmpty(Collections.java:1033) ~[?:?]
at java.util.Collections$UnmodifiableCollection.isEmpty(Collections.java:1033) ~[?:?]

Please describe the specifics: what operation does this log correspond to?

The error was reported while executing a SQL query.
I configured a Hive catalog, and the metastore is DLF.

select * from hive.mysql_prod_enlightent_daily.tv_total_heat where name='少年白马醉春风' order by day;

After restarting, it takes about a month before the problem recurs.

The scenario seems to be: the SQL query needs all of the table's partition information, and fetching it from DLF throws an exception. I don't know why it keeps failing; at the time, queries against non-partitioned tables worked fine.
@Override
public List<String> listPartitionNames(String dbName, String tblName, short max)
        throws MetaException, TException {
    return call(this.readWriteClient, client -> client.listPartitionNames(dbName, tblName, max),
            "listPartitionNames", dbName, tblName, max);
}

public List<String> getPartitionKeys(String dbName, String tableName) {
    try (Timer ignored = Tracers.watchScope(EXTERNAL, "HMS.listPartitionNames")) {
        return callRPC("listPartitionNames", String.format("Failed to get partitionKeys on [%s.%s]", dbName, tableName),
                dbName, tableName, (short) -1);
    }
}
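As context for how `callRPC` dispatches this call, note that the `(short) -1` argument is autoboxed into a `Short` in the varargs array, but the reflective lookup must use the primitive `short.class` to match `listPartitionNames(String, String, short)` -- this is presumably what a helper like `ClassUtils.getCompatibleParamClasses` accounts for. A sketch (not StarRocks source; `FakeClient` is invented for illustration):

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;

public class ReflectiveDispatchDemo {
    // Hypothetical stand-in with the same shape as IMetaStoreClient.listPartitionNames.
    public static class FakeClient {
        public List<String> listPartitionNames(String db, String tbl, short max) {
            return Arrays.asList("day=2024-08-01", "day=2024-08-02");
        }
    }

    public static void main(String[] args) throws Exception {
        Object[] rpcArgs = {"mysql_prod_enlightent_daily", "tv_total_heat", (short) -1};
        // getDeclaredMethod needs short.class, not Short.class: the autoboxed
        // argument classes must be unwrapped back to primitives, or the lookup
        // throws NoSuchMethodException.
        Class<?>[] argClasses = {String.class, String.class, short.class};
        Method m = FakeClient.class.getDeclaredMethod("listPartitionNames", argClasses);
        @SuppressWarnings("unchecked")
        List<String> names = (List<String>) m.invoke(new FakeClient(), rpcArgs);
        System.out.println(names);
    }
}
```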

The error originates in the callRPC function:

public <T> T callRPC(String methodName, String messageIfError, Class<?>[] argClasses, Object... args) {
    RecyclableClient client = null;
    StarRocksConnectorException connectionException = null;

    try {
        client = getClient();
        argClasses = argClasses == null ? ClassUtils.getCompatibleParamClasses(args) : argClasses;
        Method method = client.hiveClient.getClass().getDeclaredMethod(methodName, argClasses);
        return (T) method.invoke(client.hiveClient, args);
    } catch (Throwable e) {
        LOG.error(messageIfError, e);
        connectionException = new StarRocksConnectorException(messageIfError + ", msg: " + e.getMessage(), e);
        throw connectionException;
    } finally {
        if (client == null && connectionException != null) {
            LOG.error("Failed to get hive client. {}", connectionException.getMessage());
        } else if (connectionException != null) {
            LOG.error("An exception occurred when using the current long link " +
                    "to access metastore. msg: {}", messageIfError);
            client.close();
        } else if (client != null) {
            client.finish();
        }
    }
}
I don't understand why the exception message is null.
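One likely reason for the "msg: null" (an observation about the JDK's reflection behavior, not about StarRocks code): `Method.invoke` wraps whatever the target method throws in an `InvocationTargetException`, whose own `getMessage()` is `null`; the real error is only reachable via `getCause()`. A minimal sketch (class and method names are hypothetical):

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class NullMessageDemo {
    // Hypothetical stand-in for the DLF metastore client.
    public static class FailingClient {
        public void listPartitionNames() {
            // Simulate the failure mode seen in the FE log.
            throw new StackOverflowError();
        }
    }

    public static void main(String[] args) throws Exception {
        Method m = FailingClient.class.getDeclaredMethod("listPartitionNames");
        try {
            m.invoke(new FailingClient());
        } catch (InvocationTargetException e) {
            // The wrapper's own message is null -- this is what callRPC
            // concatenates into "... , msg: " + e.getMessage().
            System.out.println("message = " + e.getMessage()); // message = null
            // The underlying error only appears via getCause().
            System.out.println("cause = " + e.getCause());     // cause = java.lang.StackOverflowError
        }
    }
}
```

If `callRPC` unwrapped `e.getCause()` when `e` is an `InvocationTargetException`, the log line would show the underlying `StackOverflowError` instead of "msg: null".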


153: public <T> T callRPC(String methodName, String messageIfError, Class<?>[] argClasses, Object... args) {

The client class is DLFProxyMetaStoreClient.java.

Is this issue intermittent or consistently reproducible?

Once it starts happening, it is consistently reproducible: 3-4 consecutive retries all fail, and it only goes away after restarting the FE. Previously the symptom was that the latest partitions of the Hive table could not be fetched; now it throws an exception directly.

The log is incomplete. Please post more of the stack around the error position in the FE log. Also, roughly how many partitions does this table have, and how many partition columns?

There are many stacks before and after, but they all look like duplicates, and the logs have since been deleted by ops. The Hive table has 2,165 partitions and 5 columns.
CREATE TABLE mysql_prod_enlightent_daily.tv_total_heat(
  nameid bigint,
  name string,
  channel string,
  total_heat int)
PARTITIONED BY (
  day string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  'hdfs://emr-header-1.cluster-325063:9000/user/hive/warehouse/mysql_prod_enlightent_daily.db/tv_total_heat'
TBLPROPERTIES (
  'bucketing_version'='2',
  'last_modified_by'='hive',
  'last_modified_time'='1665479865',
  'transient_lastDdlTime'='1665479865')

If getPartitionKeys throws on a full-table query, the metastore side may be responding slowly or actively dropping the connection; we recommend adding a predicate when querying the table. If you can reproduce it, please post all the relevant logs, which will make troubleshooting easier.

If the server actively drops the connection, wouldn't non-partitioned tables also fail to query?

Non-partitioned tables don't call this interface, so they are unaffected.

at com.starrocks.sql.optimizer.Optimizer.rewriteAndValidatePlan(Optimizer.java:609) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.optimizeByCost(Optimizer.java:225) ~[starrocks-fe.jar:?]
at com.starrocks.sql.optimizer.Optimizer.optimize(Optimizer.java:174) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.createQueryPlan(StatementPlanner.java:216) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:132) ~[starrocks-fe.jar:?]
at com.starrocks.sql.StatementPlanner.plan(StatementPlanner.java:92) ~[starrocks-fe.jar:?]
at com.starrocks.qe.StmtExecutor.execute(StmtExecutor.java:532) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:415) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.dispatch(ConnectProcessor.java:610) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.processOnce(ConnectProcessor.java:917) ~[starrocks-fe.jar:?]
at com.starrocks.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:69) ~[starrocks-fe.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:834) ~[?:?]
Caused by: java.lang.reflect.InvocationTargetException
at jdk.internal.reflect.GeneratedMethodAccessor38.invoke(Unknown Source) ~[?:?]
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
at com.starrocks.connector.hive.HiveMetaClient.callRPC(HiveMetaClient.java:161) ~[starrocks-fe.jar:?]
... 58 more
Caused by: java.lang.StackOverflowError
at java.util.Collections$UnmodifiableCollection.isEmpty(Collections.java:1033) ~[?:?]

It reproduced again after about 3 weeks. Without restarting the FE the problem persists; even `show databases` errors out.

Located it: it's an exception from the DLF client.
Log: An exception occurred when using the current long link to access metastore. msg: Failed to get partitionKeys on [mysql_prod_enlightent_daily.tv_total_heat]

public <T> T callRPC(String methodName, String messageIfError, Class<?>[] argClasses, Object... args) {
    RecyclableClient client = null;
    StarRocksConnectorException connectionException = null;

    try {
        client = getClient();
        argClasses = argClasses == null ? ClassUtils.getCompatibleParamClasses(args) : argClasses;
        Method method = client.hiveClient.getClass().getDeclaredMethod(methodName, argClasses);
        return (T) method.invoke(client.hiveClient, args);
    } catch (Throwable e) {
        LOG.error(messageIfError, e);
        connectionException = new StarRocksConnectorException(messageIfError + ", msg: " + e.getMessage(), e);
        throw connectionException;
    } finally {
        if (client == null && connectionException != null) {
            LOG.error("Failed to get hive client. {}", connectionException.getMessage());
        } else if (connectionException != null) {
            LOG.error("An exception occurred when using the current long link " +
                    "to access metastore. msg: {}", messageIfError);
            client.close();
        } else if (client != null) {
            client.finish();
        }
    }
}

The DLF issue is low priority for us at the moment; you may want to look into it yourself first.

Is there any documentation on the catalog metadata cache? It's hard to work through on my own.
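Not official documentation, but the metadata cache in the stack traces above is a Guava `LoadingCache`: a cache miss invokes a loader that performs the metastore RPC, and a loader failure propagates to the query (hence "Error occurred when loading cache" and the `UncheckedExecutionException` wrapper). A minimal stdlib-only sketch of that load-on-miss pattern (class and method names here are invented, not StarRocks code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal sketch of the load-on-miss pattern that CachingHiveMetastore
// builds on top of Guava's LoadingCache.
public class MiniLoadingCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    public MiniLoadingCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V getUnchecked(K key) {
        // On a miss, call the loader (the metastore RPC in StarRocks).
        // If the loader throws, the exception propagates to the caller
        // and nothing is cached for this key.
        return cache.computeIfAbsent(key, loader);
    }

    public void invalidate(K key) {
        // Drop the entry so the next access reloads from the metastore.
        cache.remove(key);
    }
}
```

A manual `REFRESH EXTERNAL TABLE` or the cache's TTL expiry plays roughly the role of `invalidate` here, forcing the next access to reload from DLF.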

2024-09-06 09:10:15.840+08:00 ERROR (starrocks-mysql-nio-pool-7470|704276) [DlfMetaStoreClientDelegate.getPartitionsByValues():1888] unable to get partition by values:
java.util.concurrent.ExecutionException: java.lang.StackOverflowError
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
at com.aliyun.datalake.metastore.common.functional.FunctionalUtils.batchedCall(FunctionalUtils.java:109)
at com.aliyun.datalake.metastore.common.DefaultDataLakeMetaStore.getPartitionsByValues(DefaultDataLakeMetaStore.java:603)
at com.aliyun.datalake.metastore.hive2.DlfMetaStoreClientDelegate.getPartitionsByValues(DlfMetaStoreClientDelegate.java:1880)
at com.aliyun.datalake.metastore.hive2.DlfMetaStoreClientDelegate.getPartitionsByNames(DlfMetaStoreClientDelegate.java:1870)
at com.aliyun.datalake.metastore.hive2.DlfMetaStoreClient.getPartitionsByNames(DlfMetaStoreClient.java:1129)
at com.aliyun.datalake.metastore.hive2.DlfMetaStoreClient.getPartitionsByNames(DlfMetaStoreClient.java:1120)
at com.starrocks.connector.hive.DLFProxyMetaStoreClient.lambda$getPartitionsByNames$124(DLFProxyMetaStoreClient.java:1344)
at com.aliyun.datalake.metastore.common.functional.FunctionalUtils.functionWrapper(FunctionalUtils.java:47)
at com.aliyun.datalake.metastore.common.functional.FunctionalUtils.call(FunctionalUtils.java:35)
at com.starrocks.connector.hive.DLFProxyMetaStoreClient.call(DLFProxyMetaStoreClient.java:2502)
at com.starrocks.connector.hive.DLFProxyMetaStoreClient.getPartitionsByNames(DLFProxyMetaStoreClient.java:1344)

It's not related to the cache.