StarRocks 2.5.1: querying a Hudi MOR table and its _rt table through a Hudi catalog crashes the BE

[Details] On StarRocks 2.5.1, querying a Hudi MOR table and its _rt table through a Hudi catalog crashes the BE. Could someone take a look at what the problem is?

Queries:
select * from hudi_catalog.hudi.flink_sink_hudi_mor_001;
select * from hudi_catalog.hudi.flink_sink_hudi_mor_001_rt;

Error output (be.out):

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/starrocks/be/lib/jni-packages/starrocks-jdbc-bridge-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/starrocks/be/lib/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Feb 16, 2023 2:25:21 PM org.apache.hadoop.fs.FileSystem loadFileSystems
WARNING: Cannot load filesystem
java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.http.HttpFileSystem not a subtype
at java.util.ServiceLoader.fail(ServiceLoader.java:239)
at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2631)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2650)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:38)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:469)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:454)
at org.apache.hadoop.hive.ql.io.parquet.ParquetRecordReaderBase.getSplit(ParquetRecordReaderBase.java:79)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:75)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:60)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:92)
at org.apache.hudi.hadoop.HoodieParquetInputFormat.getRecordReaderInternal(HoodieParquetInputFormat.java:89)
at org.apache.hudi.hadoop.HoodieParquetInputFormat.getRecordReader(HoodieParquetInputFormat.java:83)
at org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat.getRecordReader(HoodieParquetRealtimeInputFormat.java:74)
at com.starrocks.hudi.reader.HudiSliceScanner.open(HudiSliceScanner.java:140)

Feb 16, 2023 2:25:21 PM org.apache.hadoop.fs.FileSystem loadFileSystems
WARNING: Cannot load filesystem
java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.http.HttpsFileSystem not a subtype
at java.util.ServiceLoader.fail(ServiceLoader.java:239)
at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2631)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2650)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:38)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:469)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:454)
at org.apache.hadoop.hive.ql.io.parquet.ParquetRecordReaderBase.getSplit(ParquetRecordReaderBase.java:79)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:75)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:60)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:92)
at org.apache.hudi.hadoop.HoodieParquetInputFormat.getRecordReaderInternal(HoodieParquetInputFormat.java:89)
at org.apache.hudi.hadoop.HoodieParquetInputFormat.getRecordReader(HoodieParquetInputFormat.java:83)
at org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat.getRecordReader(HoodieParquetRealtimeInputFormat.java:74)
at com.starrocks.hudi.reader.HudiSliceScanner.open(HudiSliceScanner.java:140)

Feb 16, 2023 2:25:21 PM org.apache.hadoop.fs.FileSystem loadFileSystems
WARNING: Cannot load filesystem
java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.s3native.NativeS3FileSystem not a subtype
at java.util.ServiceLoader.fail(ServiceLoader.java:239)
at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2631)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2650)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:38)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:469)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:454)
at org.apache.hadoop.hive.ql.io.parquet.ParquetRecordReaderBase.getSplit(ParquetRecordReaderBase.java:79)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:75)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:60)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:92)
at org.apache.hudi.hadoop.HoodieParquetInputFormat.getRecordReaderInternal(HoodieParquetInputFormat.java:89)
at org.apache.hudi.hadoop.HoodieParquetInputFormat.getRecordReader(HoodieParquetInputFormat.java:83)
at org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat.getRecordReader(HoodieParquetRealtimeInputFormat.java:74)
at com.starrocks.hudi.reader.HudiSliceScanner.open(HudiSliceScanner.java:140)

Feb 16, 2023 2:25:22 PM org.apache.hadoop.util.NativeCodeLoader
WARNING: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Feb 16, 2023 2:25:22 PM org.apache.hadoop.conf.Configuration warnOnceIfDeprecated
INFO: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
Feb 16, 2023 2:25:22 PM org.apache.hadoop.io.compress.CodecPool getDecompressor
INFO: Got brand-new decompressor [.gz]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/starrocks/be/lib/hudi-reader-lib/slf4j-log4j12-1.7.32.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/starrocks/be/lib/jni-packages/starrocks-jdbc-bridge-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/starrocks/be/lib/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Feb 16, 2023 2:25:23 PM org.apache.hadoop.conf.Configuration warnOnceIfDeprecated
INFO: mapred.job.map.memory.mb is deprecated. Instead, use mapreduce.map.memory.mb
terminate called after throwing an instance of 'std::length_error'
what(): basic_string::_M_create
query_id:afd647f9-adc2-11ed-8199-1866dafb1278, fragment_instance:afd647f9-adc2-11ed-8199-1866dafb1279
*** Aborted at 1676528723 (unix time) try "date -d @1676528723" if you are using GNU date ***
PC: @ 0x7f1394c87387 __GI_raise
*** SIGABRT (@0x3eb00023567) received by PID 144743 (TID 0x7f12f2deb700) from PID 144743; stack trace: ***
@ 0x56ec9c2 google::(anonymous namespace)::FailureSignalHandler()
@ 0x7f139573c630 (unknown)
@ 0x7f1394c87387 __GI_raise
@ 0x7f1394c88a78 __GI_abort
@ 0x2a14690 _ZN9__gnu_cxx27__verbose_terminate_handlerEv.cold
@ 0x7bb3676 __cxxabiv1::__terminate()
@ 0x7bb36e1 std::terminate()
@ 0x7bb3834 __cxa_throw
@ 0x2a163a5 std::__throw_length_error()
@ 0x7c2bcec std::__cxx11::basic_string<>::_M_create()
@ 0x4d51db4 starrocks::vectorized::JniScanner::_fill_chunk()
@ 0x4d52bd9 starrocks::vectorized::JniScanner::do_get_next()
@ 0x4d5376a starrocks::vectorized::HdfsScanner::get_next()
@ 0x4cd28bf starrocks::connector::HiveDataSource::get_next()
@ 0x2eeeb75 starrocks::pipeline::ConnectorChunkSource::_read_chunk()
@ 0x2ed888c starrocks::pipeline::ChunkSource::buffer_next_batch_chunks_blocking()
@ 0x2c5bce4 _ZZN9starrocks8pipeline12ScanOperator18_trigger_next_scanEPNS_12RuntimeStateEiENKUlvE_clEv
@ 0x2c6cd1d starrocks::workgroup::ScanExecutor::worker_thread()
@ 0x47a3a9d starrocks::ThreadPool::dispatch_thread()
@ 0x479e82a starrocks::thread::supervise_thread()
@ 0x7f1395734ea5 start_thread
@ 0x7f1394d4fb0d __clone
@ 0x0 (unknown)

[StarRocks version] 2.5.1
[Cluster size] 3 FEs (1 follower + 2 observers) + 3 BEs (FE and BE co-located)

This has been fixed and will be released in 2.5.2: https://github.com/StarRocks/starrocks/pull/17833

After upgrading to 2.5.2, querying the external table through hudi_catalog still crashes the BE.

The be.out log is as follows:

key = required_fields, value = _hoodie_commit_time,_hoodie_commit_seqno,_hoodie_record_key,_hoodie_partition_path,_hoodie_file_name,uuid,name,age,ts,partition
key = hive_column_types, value = string#string#string#string#string#string#string#int#bigint#string
key = hive_column_names, value = _hoodie_commit_time,_hoodie_commit_seqno,_hoodie_record_key,_hoodie_partition_path,_hoodie_file_name,uuid,name,age,ts,partition
key = data_file_length, value = 451014
key = delta_file_paths, value = hdfs://xxx.xxx.xxx.xxx:8020/tmp/hudi/flink_sink_hudi_mor_002/.0a6b751f-d548-43ff-8b8b-bb6d44eb7d67_20230221154918784.log.1_0-2-0
key = serde, value = org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
key = base_path, value = hdfs://xxx.xxx.xxx.xxx:8020/tmp/hudi/flink_sink_hudi_mor_002
key = data_file_path, value = hdfs://xxx.xxx.xxx.xxx:8020/tmp/hudi/flink_sink_hudi_mor_002/0a6b751f-d548-43ff-8b8b-bb6d44eb7d67_0-2-0_20230221154918784.parquet
key = input_format, value = org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat
key = instant_time, value = 20230221154939809
Feb 21, 2023 3:49:46 PM org.apache.hadoop.fs.FileSystem loadFileSystems
WARNING: Cannot load filesystem
java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.http.HttpFileSystem not a subtype
at java.util.ServiceLoader.fail(ServiceLoader.java:239)
at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2631)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2650)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:38)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:469)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:454)
at org.apache.hadoop.hive.ql.io.parquet.ParquetRecordReaderBase.getSplit(ParquetRecordReaderBase.java:79)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:75)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:60)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:92)
at org.apache.hudi.hadoop.HoodieParquetInputFormat.getRecordReaderInternal(HoodieParquetInputFormat.java:89)
at org.apache.hudi.hadoop.HoodieParquetInputFormat.getRecordReader(HoodieParquetInputFormat.java:83)
at org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat.getRecordReader(HoodieParquetRealtimeInputFormat.java:74)
at com.starrocks.hudi.reader.HudiSliceScanner.initReader(HudiSliceScanner.java:183)
at com.starrocks.hudi.reader.HudiSliceScanner.open(HudiSliceScanner.java:201)

Feb 21, 2023 3:49:46 PM org.apache.hadoop.fs.FileSystem loadFileSystems
WARNING: Cannot load filesystem
java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.http.HttpsFileSystem not a subtype
at java.util.ServiceLoader.fail(ServiceLoader.java:239)
at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2631)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2650)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:38)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:469)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:454)
at org.apache.hadoop.hive.ql.io.parquet.ParquetRecordReaderBase.getSplit(ParquetRecordReaderBase.java:79)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:75)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:60)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:92)
at org.apache.hudi.hadoop.HoodieParquetInputFormat.getRecordReaderInternal(HoodieParquetInputFormat.java:89)
at org.apache.hudi.hadoop.HoodieParquetInputFormat.getRecordReader(HoodieParquetInputFormat.java:83)
at org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat.getRecordReader(HoodieParquetRealtimeInputFormat.java:74)
at com.starrocks.hudi.reader.HudiSliceScanner.initReader(HudiSliceScanner.java:183)
at com.starrocks.hudi.reader.HudiSliceScanner.open(HudiSliceScanner.java:201)

Feb 21, 2023 3:49:46 PM org.apache.hadoop.fs.FileSystem loadFileSystems
WARNING: Cannot load filesystem
java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.s3native.NativeS3FileSystem not a subtype
at java.util.ServiceLoader.fail(ServiceLoader.java:239)
at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:376)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2631)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2650)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(HadoopInputFile.java:38)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:469)
at org.apache.parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:454)
at org.apache.hadoop.hive.ql.io.parquet.ParquetRecordReaderBase.getSplit(ParquetRecordReaderBase.java:79)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:75)
at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:60)
at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:92)
at org.apache.hudi.hadoop.HoodieParquetInputFormat.getRecordReaderInternal(HoodieParquetInputFormat.java:89)
at org.apache.hudi.hadoop.HoodieParquetInputFormat.getRecordReader(HoodieParquetInputFormat.java:83)
at org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat.getRecordReader(HoodieParquetRealtimeInputFormat.java:74)
at com.starrocks.hudi.reader.HudiSliceScanner.initReader(HudiSliceScanner.java:183)
at com.starrocks.hudi.reader.HudiSliceScanner.open(HudiSliceScanner.java:201)

Feb 21, 2023 3:49:46 PM org.apache.hadoop.util.NativeCodeLoader
WARNING: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Feb 21, 2023 3:49:47 PM org.apache.hadoop.conf.Configuration warnOnceIfDeprecated
INFO: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
Feb 21, 2023 3:49:47 PM org.apache.hadoop.io.compress.CodecPool getDecompressor
INFO: Got brand-new decompressor [.gz]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/starrocks/be/lib/hudi-reader-lib/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/starrocks/be/lib/jni-packages/starrocks-jdbc-bridge-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/starrocks/be/lib/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
Feb 21, 2023 3:49:48 PM org.apache.hadoop.conf.Configuration warnOnceIfDeprecated
INFO: mapred.job.map.memory.mb is deprecated. Instead, use mapreduce.map.memory.mb

WARNING: Unable to attach Serviceability Agent. Unable to attach even with module exceptions: [org.apache.hudi.org.openjdk.jol.vm.sa.SASupportException: Sense failed., org.apache.hudi.org.openjdk.jol.vm.sa.SASupportException: Sense failed., org.apache.hudi.org.openjdk.jol.vm.sa.SASupportException: Sense failed.]

terminate called after throwing an instance of 'std::length_error'
what(): basic_string::_M_create
query_id:4f03f248-b1bc-11ed-888d-1866dafb1278, fragment_instance:4f03f248-b1bc-11ed-888d-1866dafb1279
*** Aborted at 1676965789 (unix time) try "date -d @1676965789" if you are using GNU date ***
PC: @ 0x7fbac809d387 __GI_raise
*** SIGABRT (@0x3eb0002ee5a) received by PID 192090 (TID 0x7fba276ff700) from PID 192090; stack trace: ***
@ 0x5722822 google::(anonymous namespace)::FailureSignalHandler()
@ 0x7fbac8b52630 (unknown)
@ 0x7fbac809d387 __GI_raise
@ 0x7fbac809ea78 __GI_abort
@ 0x2a19d90 _ZN9__gnu_cxx27__verbose_terminate_handlerEv.cold
@ 0x7be9636 __cxxabiv1::__terminate()
@ 0x7be96a1 std::terminate()
@ 0x7be97f4 __cxa_throw
@ 0x2a1baa5 std::__throw_length_error()
@ 0x7c61cac std::__cxx11::basic_string<>::_M_create()
@ 0x4d839ac starrocks::vectorized::JniScanner::_append_datetime_data()
@ 0x4d83fa2 starrocks::vectorized::JniScanner::_fill_column()
@ 0x4d85082 starrocks::vectorized::JniScanner::_fill_chunk()
@ 0x4d8543c starrocks::vectorized::JniScanner::do_get_next()
@ 0x4d8843a starrocks::vectorized::HdfsScanner::get_next()
@ 0x4d06f4f starrocks::connector::HiveDataSource::get_next()
@ 0x2ef5fe5 starrocks::pipeline::ConnectorChunkSource::_read_chunk()
@ 0x2edff3c starrocks::pipeline::ChunkSource::buffer_next_batch_chunks_blocking()
@ 0x2c62984 _ZZN9starrocks8pipeline12ScanOperator18_trigger_next_scanEPNS_12RuntimeStateEiENKUlvE_clEv
@ 0x2c739ed starrocks::workgroup::ScanExecutor::worker_thread()
@ 0x47d2acd starrocks::ThreadPool::dispatch_thread()
@ 0x47cd85a starrocks::thread::supervise_thread()
@ 0x7fbac8b4aea5 start_thread
@ 0x7fbac8165b0d __clone
@ 0x0 (unknown)

Could you confirm again that this is really the query that caused it? From what I can see there doesn't seem to be any datetime-typed column here, yet the stack trace dies on a datetime column:

*** Aborted at 1676965789 (unix time) try "date -d @1676965789" if you are using GNU date ***
PC: @ 0x7fbac809d387 __GI_raise
*** SIGABRT (@0x3eb0002ee5a) received by PID 192090 (TID 0x7fba276ff700) from PID 192090; stack trace: ***
@ 0x5722822 google::(anonymous namespace)::FailureSignalHandler()
@ 0x7fbac8b52630 (unknown)
@ 0x7fbac809d387 __GI_raise
@ 0x7fbac809ea78 __GI_abort
@ 0x2a19d90 _ZN9__gnu_cxx27__verbose_terminate_handlerEv.cold
@ 0x7be9636 __cxxabiv1::__terminate()
@ 0x7be96a1 std::terminate()
@ 0x7be97f4 __cxa_throw
@ 0x2a1baa5 std::__throw_length_error()
@ 0x7c61cac std::__cxx11::basic_string<>::_M_create()
@ 0x4d839ac starrocks::vectorized::JniScanner::_append_datetime_data()
@ 0x4d83fa2 starrocks::vectorized::JniScanner::_fill_column()
@ 0x4d85082 starrocks::vectorized::JniScanner::_fill_chunk()
@ 0x4d8543c starrocks::vectorized::JniScanner::do_get_next()
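As a side check, a plain DESC (standard StarRocks syntax, shown here only as a suggestion) reveals which types StarRocks itself resolves for the external table:

desc hudi_catalog.hudi.flink_sink_hudi_mor_002;

If ts comes back as DATETIME while Hive reports bigint, that would be consistent with the trace dying in the datetime append path.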

To keep things concise, there is no need to post those Java warning exceptions; the warnings have little to do with the problem.

Hudi table columns: there is no datetime column.

StarRocks external-table query: a plain select *:
select * from hudi_catalog.hudi.flink_sink_hudi_mor_002;

The exception above was taken from be.out. It is the complete error output printed when I run

select * from hudi_catalog.hudi.flink_sink_hudi_mor_002;

or

select * from hudi_catalog.hudi.flink_sink_hudi_mor_002_rt;

Can you confirm whether the error occurred on a single BE machine?

When you say "all the error output", what exactly is the error output?

Our test cluster has 3 machines (3 FEs and 3 BEs co-located); I see error output in be.out on 2 of the BEs.

By "all the error output" I mean everything printed to be.out after I ran the command.

Could you try simplifying the selected columns to reproduce the problem, e.g. select just one or two columns? If that still reproduces it, we can look for a pattern in the raw values.
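For illustration, the narrowing could look like this (column names taken from the hive_column_names value logged above):

select uuid, name from hudi_catalog.hudi.flink_sink_hudi_mor_002;
select age from hudi_catalog.hudi.flink_sink_hudi_mor_002;
select ts from hudi_catalog.hudi.flink_sink_hudi_mor_002;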

You mean I should query just a single column and then capture the be.out output?

Here is the error output from querying a single column; in this case the BE did not crash:
Error output from the single-column query (attachment, 10.3 KB)

Below is the error from select *, where the BE did crash:
Error output from querying all columns, BE crash (attachment, 3.2 KB)

1. Flink code:
Hudi sink table DDL:
CREATE CATALOG hudi_catalog WITH (
'type' = 'hudi',
'mode' = 'hms',
'default-database' = 'default',
'hive.conf.dir' = '/etc/hive/conf.cloudera.hive',
'table.external' = 'true'
);

use catalog hudi_catalog;

use hudi;

drop table if exists flink_sink_hudi_mor_paramer_test001;
create table if not exists flink_sink_hudi_mor_paramer_test001(
uuid varchar(20),
name varchar(10),
age int,
ts timestamp(3),
`partition` varchar(20)
)
with (
'connector' = 'hudi',
'path' = 'hdfs://xxx.xxx.xxx.xxx:8020/tmp/hudi/flink_sink_hudi_mor_paramer_test001',
'table.type' = 'MERGE_ON_READ',
'write.operation' = 'bulk_insert',
'hoodie.datasource.write.recordkey.field' = 'uuid',
'write.tasks' = '2', -- parallelism of bulk_insert write tasks; increase under heavy write load
'write.bucket_assign.tasks' = '2', -- parallelism of the bucket assigner
'compaction.schedule.enabled' = 'true', -- periodically generate compaction plans; recommended even when compaction.async.enabled is off
'compaction.async.enabled' = 'true', -- enable async compaction
'compaction.tasks' = '6', -- compaction task parallelism, default 4
'compaction.trigger.strategy' = 'num_or_time', -- trigger strategy; one of num_commits, time_elapsed, num_and_time, num_or_time
'compaction.delta_commits' = '2', -- compact once every N delta commits (default 5)
'compaction.delta_seconds' = '120', -- time-elapsed trigger, in seconds
'compaction.max_memory' = '1024', -- memory (MB) for the compaction dedup hash map; 1 GB recommended if resources allow
'hive_sync.enable' = 'true', -- automatically sync the table to Hive
'hive_sync.metastore.uris' = 'thrift://10.40.1.201:9083', -- Hive metastore URI used to read Hudi data
'hoodie.datasource.write.keygenerator.class' = 'org.apache.hudi.keygen.ComplexAvroKeyGenerator',
'hive_sync.conf.dir' = '/etc/hive/conf.cloudera.hive'
);

2. Table schema as queried from Hive:
show create table hudi.flink_sink_hudi_mor_paramer_test001;

CREATE EXTERNAL TABLE hudi.flink_sink_hudi_mor_paramer_test001(
_hoodie_commit_time string,
_hoodie_commit_seqno string,
_hoodie_record_key string,
_hoodie_partition_path string,
_hoodie_file_name string,
uuid string,
name string,
age int,
ts bigint,
`partition` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
'hoodie.query.as.ro.table'='false',
'path'='hdfs://xxx.xxx.xxx.xxx:8020/tmp/hudi/flink_sink_hudi_mor_paramer_test001',
'primaryKey'='uuid',
'type'='mor')
STORED AS INPUTFORMAT
'org.apache.hudi.hadoop.realtime.HoodieParquetRealtimeInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION
'hdfs://xxx.xxx.xxx.xxx:8020/tmp/hudi/flink_sink_hudi_mor_paramer_test001'
TBLPROPERTIES (
'compaction.async.enabled'='true',
'compaction.delta_commits'='2',
'compaction.delta_seconds'='120',
'compaction.max_memory'='300',
'compaction.schedule.enabled'='true',
'compaction.trigger.strategy'='num_or_time',
'connector'='hudi',
'hive_sync.conf.dir'='/etc/hive/conf.cloudera.hive',
'hive_sync.enable'='true',
'hive_sync.metastore.uris'='thrift://xxx.xxx.xxx.xxx:9083',
'hive_sync.skip_ro_suffix'='true',
'hoodie.datasource.write.keygenerator.class'='org.apache.hudi.keygen.ComplexAvroKeyGenerator',
'hoodie.datasource.write.recordkey.field'='uuid',
'index.global.enabled'='true',
'last_commit_time_sync'='20230216192829848',
'path'='hdfs://xxx.xxx.xxx.xxx:8020/tmp/hudi/flink_sink_hudi_mor_paramer_test001',
'spark.sql.create.version'='spark2.4.4',
'spark.sql.sources.provider'='hudi',
'spark.sql.sources.schema.numParts'='1',
'spark.sql.sources.schema.part.0'='{"type":"struct","fields":[{"name":"uuid","type":"string","nullable":true,"metadata":{}},{"name":"name","type":"string","nullable":true,"metadata":{}},{"name":"age","type":"integer","nullable":true,"metadata":{}},{"name":"ts","type":"timestamp","nullable":true,"metadata":{}},{"name":"partition","type":"string","nullable":true,"metadata":{}}]}',
'table.type'='MERGE_ON_READ',
'transient_lastDdlTime'='1676546884',
'write.bucket_assign.tasks'='2',
'write.insert.drop.duplicates'='true',
'write.tasks'='2')

The core problem is that ts was created as timestamp on the Flink side but shows up as bigint on the Hive side; this type mismatch is what triggers the crash.
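Until a fixed StarRocks build is available, one hedged workaround sketch is to keep the ts type consistent end to end, e.g. by declaring it as bigint in the Flink sink DDL and writing the epoch value directly, so Flink, Hive, and StarRocks all see the same type. The table name flink_sink_hudi_mor_003 is hypothetical and the options are trimmed to the essentials from the DDL above:

create table if not exists flink_sink_hudi_mor_003(
uuid varchar(20),
name varchar(10),
age int,
ts bigint, -- epoch value stored as-is, matching the bigint type Hive reports
`partition` varchar(20)
)
with (
'connector' = 'hudi',
'path' = 'hdfs://xxx.xxx.xxx.xxx:8020/tmp/hudi/flink_sink_hudi_mor_003',
'table.type' = 'MERGE_ON_READ',
'hoodie.datasource.write.recordkey.field' = 'uuid',
'hive_sync.enable' = 'true',
'hive_sync.metastore.uris' = 'thrift://xxx.xxx.xxx.xxx:9083'
);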

Is there a plan to fix this issue?

You can try this PR first: https://github.com/StarRocks/starrocks/pull/18167

I need to reproduce it first; I haven't worked with Flink before.

This issue has been fixed. @U_1647419731669_9072, please confirm.