CN node error: RemoteException: File does not exist

Running StarRocks 3.2.4 in shared-data mode; cn.out keeps reporting the following error:
hdfsOpenFile(/srvolume/default/006f07e6-c044-4e3f-88c7-fe595625d46e/db28245/28260/28259/meta/0000000000006E66_000000000000000C.meta): FileSystem#open((Lorg/apache/hadoop/fs/Path;I)Lorg/apache/hadoop/fs/FSDataInputStream;) error:
RemoteException: File does not exist: /srvolume/default/006f07e6-c044-4e3f-88c7-fe595625d46e/db28245/28260/28259/meta/0000000000006E66_000000000000000C.meta

My remote Hadoop version is 2.6.1. How can I resolve this issue?

Try increasing the FE parameter lake_autovacuum_max_previous_versions to 10 and test again.
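
For example, it can be changed at runtime with something like the command below (ADMIN SET FRONTEND CONFIG only updates the in-memory value on the FE you run it against and is not persisted across FE restarts, so also add the key to fe.conf if you want it to survive a restart; the value 10 is just the suggested starting point):

-- updates the in-memory FE config only; add the same key to fe.conf to persist it
ADMIN SET FRONTEND CONFIG ("lake_autovacuum_max_previous_versions" = "10");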

I increased it, but the same error is still reported. Each time the cluster is restarted, the file reported as missing is different.

In what scenario does this error occur? Does it appear when you query immediately after loading data?

Yes. But it doesn't seem to affect the query results. The CN nodes just keep reporting this error.

admin set frontend config ("lake_autovacuum_grace_period_minutes" = "30");

Try adjusting this parameter to 30.
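
After applying it, you can confirm that the running FE actually picked up both values with something like:

-- show the effective values of the autovacuum-related FE parameters
ADMIN SHOW FRONTEND CONFIG LIKE "%lake_autovacuum%";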

Still getting the error. Here is my FE configuration:
run_mode = shared_data
cloud_native_storage_type = HDFS
enable_load_volume_from_conf = false
cloud_native_hdfs_url = hdfs://hdfsservice/startrocks
JAVA_OPTS="-Dlog4j2.formatMsgNoLookups=true -Xmx8192m -XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:${LOG_DIR}/fe.gc.log.$DATE -XX:+PrintConcurrentLocks -Djava.security.policy=${STARROCKS_HOME}/conf/udf_security.policy"
sys_log_level = INFO
meta_dir = /data/meta

http_port = 8030
rpc_port = 9020
query_port = 9030
edit_log_port = 9010
mysql_service_nio_enabled = true
priority_networks = 172.16.70.152

lake_autovacuum_max_previous_versions = 10
lake_autovacuum_grace_period_minutes = 30