Waiting too much time to get appId from handle. spark app state: UNKNOWN

Version: 2.3.8, with FE and BE deployed on separate nodes (not co-located).
Create the Spark resource:
CREATE EXTERNAL RESOURCE "spark_etl_cluster"
PROPERTIES
(
    "type" = "spark",
    "spark.master" = "yarn",
    "spark.submit.deployMode" = "cluster",
    "spark.hadoop.yarn.resourcemanager.ha.enabled" = "true",
    "spark.hadoop.yarn.resourcemanager.ha.rm-ids" = "rm64,rm77",
    "spark.hadoop.yarn.resourcemanager.address.rm64" = "mphd01:8032",
    "spark.hadoop.yarn.resourcemanager.address.rm77" = "mphd02:8032",
    "spark.hadoop.fs.defaultFS" = "hdfs://nameservice1",
    "working_dir" = "hdfs://nameservice1/tmp/starrocks",
    "broker" = "broker1"
);
Configure the Spark and YARN clients in fe.conf:
spark_home_default_dir = /home/starrocks/spark-2.4.6-bin-hadoop2.6
spark_resource_path = /opt/module/starrocks/fe/lib/spark2x/jars/spark-2x.zip
yarn_client_path = /opt/module/starrocks/fe/lib/yarn-client/hadoop/bin/yarn
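
As a quick sanity check, the YARN client configured above can be run by hand; if it cannot reach the ResourceManager, spark-submit will not either. The HADOOP_CONF_DIR value below is an assumption and should point to your cluster's Hadoop client configuration:

export HADOOP_CONF_DIR=/etc/hadoop/conf   # assumption: directory holding core-site.xml, hdfs-site.xml, yarn-site.xml
/opt/module/starrocks/fe/lib/yarn-client/hadoop/bin/yarn application -list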
Run the load:
LOAD LABEL das.spark_load_label_test_4
(
    DATA INFILE("hdfs://nameservice1/starrocks/user_info.txt")
    INTO TABLE user_info_test
    COLUMNS TERMINATED BY ","
    (name, gender, age)
)
WITH RESOURCE 'spark_etl_cluster'
(
    "spark.executor.memory" = "2g",
    "spark.shuffle.compress" = "true"
)
PROPERTIES
(
    "timeout" = "3600"
);
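
After submission, the job state can be tracked with SHOW LOAD, for example:

SHOW LOAD WHERE LABEL = "spark_load_label_test_4";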
Error:
type :ETL_SUBMIT_FAIL; msg: start spark app failed. error: Waiting too much time to get appId from handle. spark app state: UNKNOWN , loadJobId:93693049
What could be causing this?

Received. We are looking into it, please wait a moment.

Please send the detailed logs.

Please share the log at fe/log/spark_launcher_log/spark_launcher_xxx_yyy.log.
The Spark job never actually started on the Spark cluster.

Hi, I couldn't find any log files under fe/log/spark_launcher_log in the FE log directory:

Creating the resource this way doesn't work either:

I'm on version 2.3.8. The docs on the official site only go back to 2.5, so I could only follow the 2.5 Spark Load documentation. Could Spark Load differ between 2.3 and 2.5?

Check fe/log/spark_launcher_log/spark_launcher_xxx_yyy.log on the FE leader.
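
For example, on the leader FE (identify the leader with SHOW FRONTENDS), using the deployment path from this thread:

ls -lt /opt/module/starrocks/fe/log/spark_launcher_log/
tail -n 100 /opt/module/starrocks/fe/log/spark_launcher_log/spark_launcher_<job_id>_<label>.log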

Exception in thread "main" org.apache.spark.SparkException: When running with master 'yarn' either HADOOP_CONF_DIR or YARN_CONF_DIR must be set in the environment.
at org.apache.spark.deploy.SparkSubmitArguments.error(SparkSubmitArguments.scala:657)
at org.apache.spark.deploy.SparkSubmitArguments.validateSubmitArguments(SparkSubmitArguments.scala:290)
at org.apache.spark.deploy.SparkSubmitArguments.validateArguments(SparkSubmitArguments.scala:251)
at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:120)
at org.apache.spark.deploy.SparkSubmit$$anon$2$$anon$1.<init>(SparkSubmit.scala:907)
at org.apache.spark.deploy.SparkSubmit$$anon$2.parseArguments(SparkSubmit.scala:907)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:81)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)


The earlier docs never said HADOOP_CONF_DIR needed to be configured. Now that it does, should it point to the Hadoop config directory downloaded earlier, i.e. this one?
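
One common way to set it is in the Spark client's spark-env.sh under the spark_home_default_dir configured above; the /etc/hadoop/conf path is an assumption and should be replaced with your cluster's Hadoop client config directory:

# /home/starrocks/spark-2.4.6-bin-hadoop2.6/conf/spark-env.sh
export HADOOP_CONF_DIR=/etc/hadoop/conf   # assumption: your Hadoop client config directory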

But after configuring it, I get this error:
Exception in thread "main" java.lang.IllegalArgumentException: java.net.UnknownHostException: nameservice1
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:374)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:310)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:668)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:604)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:148)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2598)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2632)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2614)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
at org.apache.spark.deploy.yarn.Client$$anonfun$9.apply(Client.scala:139)
at org.apache.spark.deploy.yarn.Client$$anonfun$9.apply(Client.scala:139)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.deploy.yarn.Client.<init>(Client.scala:139)
at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1530)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.UnknownHostException: nameservice1
… 24 more
Sigh... your documentation is really a mess; the descriptions are all over the place.
And there's this part too:

Did you fill in nameservice1 correctly? Check whether the name actually resolves.
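
Note that nameservice1 is an HDFS HA logical name rather than a resolvable hostname, so the HDFS client has to be told how to map it to the NameNodes. One approach is to add the standard Hadoop HA client keys to the resource's PROPERTIES; the NameNode hosts and ports below are assumptions and must match the cluster's hdfs-site.xml:

"spark.hadoop.dfs.nameservices" = "nameservice1",
"spark.hadoop.dfs.ha.namenodes.nameservice1" = "nn1,nn2",
"spark.hadoop.dfs.namenode.rpc-address.nameservice1.nn1" = "mphd01:8020",
"spark.hadoop.dfs.namenode.rpc-address.nameservice1.nn2" = "mphd02:8020",
"spark.hadoop.dfs.client.failover.proxy.provider.nameservice1" = "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"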


I changed it to the host+port form and it still doesn't work....
type :ETL_SUBMIT_FAIL; msg: start spark app failed. error: spark app state: KILLED, loadJobId:93766230,
logPath:/opt/module/starrocks/fe/log/spark_launcher_log/spark_launcher_93766230_spark_load_label_test_18.log
There is a log file this time, but it's empty:
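
Since the app now reaches state KILLED, YARN apparently accepted the submission, so the kill reason should be visible on the YARN side, for example via the standard YARN CLI (substitute the real application ID):

yarn application -list -appStates KILLED
yarn logs -applicationId application_XXXXXXXXXXXXX_YYYY   # replace with the ID shown by the command above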