To help us locate your issue faster, please provide the following information. Thanks!
【Description】Writing data to StarRocks via the Spark connector
【Background】What operations were performed?
【Business impact】
【Shared-data (storage-compute separated) cluster】Yes
【StarRocks version】3.2.1
【Cluster size】1 FE + 1 BE (FE and BE co-located on the same node)
【Machine spec】8C/64G
【Attachments】
W0104 18:44:59.563812 6510 mem_hook.cpp:249] large memory alloc, query_id:00000000-0000-0000-0000-000000000000 instance: 00000000-0000-0000-0000-000000000000 acquire:1557085809 bytes, stack:
@ 0x2d1a85d malloc
@ 0x8985f55 operator new()
@ 0x2dc4df8 starrocks::faststring::GrowToAtLeast()
@ 0x527394e starrocks::faststring::append()
@ 0x5273b18 starrocks::SliceMutableIndex::append_wal()
@ 0x5239429 starrocks::ShardByLengthMutableIndex::append_wal()
@ 0x5245196 starrocks::PersistentIndex::try_replace()
@ 0x4f20101 starrocks::PrimaryIndex::_replace_persistent_index()
@ 0x4f2020e starrocks::PrimaryIndex::try_replace()
@ 0x51d4f83 starrocks::lake::UpdateManager::publish_primary_compaction()
@ 0x520a048 starrocks::lake::PrimaryKeyTxnLogApplier::apply()
@ 0x51c548d starrocks::lake::publish_version()
@ 0x2d323de _ZZN9starrocks15LakeServiceImpl15publish_versionEPN6google8protobuf13RpcControllerEPKNS_4lake21PublishVersionRequestEPNS5_22PublishVersionResponseEPNS2_7ClosureEENKUlvE_clEv
@ 0x2d2b76a _ZNSt17_Function_handlerIFvvEZN9starrocks12_GLOBAL__N_133ConcurrencyLimitedThreadPoolToken11submit_funcESt8functionIS0_ENSt6chrono10time_pointINS6_3_V212system_clockENS6_8durationIlSt5ratioILl1ELl1000000000EEEEEEEUlvE_E9_M_invokeERKSt9_Any_data
@ 0x2dcefca starrocks::ThreadPool::dispatch_thread()
@ 0x2dc9a3a starrocks::supervise_thread()
@ 0x7f55b4ca7ea5 start_thread
@ 0x7f55b40a8b0d __clone
@ (nil) (unknown)
Is this output from be.out? Please upload a complete be.out log.
Follow-up: on further inspection, the cause appears to be memory exhaustion. After capping the BE memory ratio, the crash no longer occurred.
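For reference, the BE process memory cap is set via the `mem_limit` parameter in `be.conf`. A minimal sketch, assuming the default config location; the 80% value here is only an example, not a tuned recommendation for this workload:

```
# be.conf — cap BE process memory as a percentage of physical RAM
# (80% of the 64 GB machine in this report is an example value)
mem_limit = 80%
```

The BE must be restarted for the change to take effect. On a co-located FE + BE node, leaving headroom below 100% also keeps memory available for the FE JVM and the OS page cache.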
This is a known issue. It is usually triggered when a single load into a primary key table is very large; we are working on an optimization.