Spark Connector error when reading data

To help us locate your issue faster, please provide the following information. Thanks.
【Details】Querying StarRocks with starrocks-spark-connector-3.3_2.12-1.1.1.jar throws: Caused by: java.lang.IllegalStateException: Duplicate key (attempted merging values Field{name='', type='VARCHAR', comment='', precision=0, scale=0} and Field{name='', type='VARCHAR', comment='', precision=0, scale=0})

【Background】The query is run via spark.sql("sql text"). If the SQL is select a from table_name where b = 'value', the error above is thrown; if the column from the where clause is also put into the select list, there is no error. This looks like a bug (see the sketch after the error stack below).

Error stack:

Caused by: java.lang.IllegalStateException: Duplicate key (attempted merging values Field{name='', type='VARCHAR', comment='', precision=0, scale=0} and Field{name='', type='VARCHAR', comment='', precision=0, scale=0})
at java.base/java.util.stream.Collectors.duplicateKeyException(Unknown Source)
at java.base/java.util.stream.Collectors.lambda$uniqKeysMapAccumulator$1(Unknown Source)
at java.base/java.util.stream.ReduceOps$3ReducingSink.accept(Unknown Source)
at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.copyInto(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(Unknown Source)
at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(Unknown Source)
at java.base/java.util.stream.AbstractPipeline.evaluate(Unknown Source)
at java.base/java.util.stream.ReferencePipeline.collect(Unknown Source)
at com.starrocks.connector.spark.serialization.RowBatch.&lt;init&gt;(RowBatch.java:108)
at com.starrocks.connector.spark.rdd.ScalaValueReader.hasNext(ScalaValueReader.scala:201)
at com.starrocks.connector.spark.rdd.AbstractStarrocksRDDIterator.hasNext(AbstractStarrocksRDDIterator.scala:58)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:760)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at scala.collection.convert.Wrappers$IteratorWrapper.hasNext(Wrappers.scala:32)
at org.sparkproject.guava.collect.Ordering.leastOf(Ordering.java:628)
at org.apache.spark.util.collection.Utils$.takeOrdered(Utils.scala:37)
at org.apache.spark.rdd.RDD.$anonfun$takeOrdered$2(RDD.scala:1539)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2(RDD.scala:855)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitions$2$adapted(RDD.scala:855)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:365)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:329)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:136)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:548)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1504)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
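
For context, a minimal sketch of the read path described above. All connection values, the database name, and credentials are placeholders; only the connector artifact and the query shape come from this post.

```scala
import org.apache.spark.sql.SparkSession

object StarRocksReadRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("starrocks-read-repro")
      .getOrCreate()

    // Register the StarRocks table as a temp view through the connector.
    // Host names, ports, and credentials below are placeholders.
    spark.read
      .format("starrocks")
      .option("starrocks.table.identifier", "mydb.table_name")
      .option("starrocks.fe.http.url", "fe_host:8030")
      .option("starrocks.fe.jdbc.url", "jdbc:mysql://fe_host:9030")
      .option("starrocks.user", "root")
      .option("starrocks.password", "")
      .load()
      .createOrReplaceTempView("table_name")

    // Fails with the IllegalStateException above: the filtered column `b`
    // is absent from the select list.
    spark.sql("select a from table_name where b = 'x'").show()

    // Succeeds: the filtered column also appears in the select list.
    spark.sql("select a, b from table_name where b = 'x'").show()
  }
}
```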

Hi, which version is your StarRocks cluster? I'll try to reproduce this.

3.1.0

@U_1669452058044_6098 Could you share the StarRocks table DDL and the Spark SQL?

How do you construct the operation object in Java?

The cause has been confirmed: there is an issue on the StarRocks side when building the query plan. It will be fixed in a later release. For now, you can work around it by putting the columns from the where clause into the select list.
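
Continuing the sketch above, the workaround would look like this: select the filtered column too, then drop it on the Spark side so the result shape matches the original query (column names a and b are from the example in this thread).

```scala
// Workaround sketch: include the where-clause column in the select list,
// then drop it afterward.
val df = spark.sql("select a, b from table_name where b = 'x'")
val result = df.drop("b") // same rows as `select a from table_name where b = 'x'`
result.show()
```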

:+1:

I need to restrict the query to specific partitions. How do I work around this in that case?
Both the starrocks.filter.query option and a where pt = xxx clause hit this error.
Without that partition condition, filtering with id = xxx works fine.
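
For reference, the two approaches mentioned in this post would look roughly like this. This is a sketch: the partition column pt, the table identifier, and all connection values are placeholders, and per the reply above both forms currently hit the same plan issue.

```scala
// Approach 1: push the partition predicate down via the connector's
// starrocks.filter.query read option (same connection placeholders as above).
val viaOption = spark.read
  .format("starrocks")
  .option("starrocks.table.identifier", "mydb.table_name")
  .option("starrocks.fe.http.url", "fe_host:8030")
  .option("starrocks.fe.jdbc.url", "jdbc:mysql://fe_host:9030")
  .option("starrocks.user", "root")
  .option("starrocks.password", "")
  .option("starrocks.filter.query", "pt = '2024-01-01'")
  .load()

// Approach 2: an ordinary WHERE clause on the registered temp view.
val viaWhere = spark.sql("select id, pt from table_name where pt = '2024-01-01'")
```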