【Metadata】FE metadata keeps growing, and replay on node restart takes a long time

After the upgrade, fe.out keeps reporting this error, the same one described in this earlier thread: 【2.5.8】升级后FE服务内存溢出 - StarRocks 用户问答 / 日常运维 - StarRocks中文社区论坛 (mirrorship.cn)
That was on another one of our clusters after its upgrade. Now we have also upgraded the cluster whose hosts have 128 GB of RAM, and the same problem shows up there.
fe.out.tar.gz (110.6 KB)

After the upgrade, this error appeared:
2023-07-25 13:31:10,150 ERROR (leaderCheckpointer|112) [Daemon.run():117] daemon thread got exception. name: leaderCheckpointer
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
at java.util.Arrays.copyOf(Arrays.java:3332) ~[?:1.8.0_291]
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124) ~[?:1.8.0_291]
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448) ~[?:1.8.0_291]
at java.lang.StringBuffer.append(StringBuffer.java:270) ~[?:1.8.0_291]
at java.io.StringWriter.write(StringWriter.java:112) ~[?:1.8.0_291]
at com.google.gson.stream.JsonWriter.string(JsonWriter.java:590) ~[spark-dpp-1.0.0.jar:?]
at com.google.gson.stream.JsonWriter.writeDeferredName(JsonWriter.java:401) ~[spark-dpp-1.0.0.jar:?]
at com.google.gson.stream.JsonWriter.value(JsonWriter.java:416) ~[spark-dpp-1.0.0.jar:?]
at com.google.gson.internal.bind.TypeAdapters$15.write(TypeAdapters.java:384) ~[spark-dpp-1.0.0.jar:?]
at com.google.gson.internal.bind.TypeAdapters$15.write(TypeAdapters.java:368) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.persist.gson.GsonUtils$ProcessHookTypeAdapterFactory$1.write(GsonUtils.java:509) ~[starrocks-fe.jar:?]
at com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.write(TypeAdapterRuntimeTypeWrapper.java:69) ~[spark-dpp-1.0.0.jar:?]
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.write(ReflectiveTypeAdapterFactory.java:127) ~[spark-dpp-1.0.0.jar:?]
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.write(ReflectiveTypeAdapterFactory.java:245) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.persist.gson.GsonUtils$ProcessHookTypeAdapterFactory$1.write(GsonUtils.java:509) ~[starrocks-fe.jar:?]
at com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.write(TypeAdapterRuntimeTypeWrapper.java:69) ~[spark-dpp-1.0.0.jar:?]
at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.write(CollectionTypeAdapterFactory.java:97) ~[spark-dpp-1.0.0.jar:?]
at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.write(CollectionTypeAdapterFactory.java:61) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.persist.gson.GsonUtils$ProcessHookTypeAdapterFactory$1.write(GsonUtils.java:509) ~[starrocks-fe.jar:?]
at com.google.gson.internal.bind.TypeAdapterRuntimeTypeWrapper.write(TypeAdapterRuntimeTypeWrapper.java:69) ~[spark-dpp-1.0.0.jar:?]
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.write(ReflectiveTypeAdapterFactory.java:127) ~[spark-dpp-1.0.0.jar:?]
at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.write(ReflectiveTypeAdapterFactory.java:245) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.persist.gson.GsonUtils$ProcessHookTypeAdapterFactory$1.write(GsonUtils.java:509) ~[starrocks-fe.jar:?]
at com.google.gson.Gson.toJson(Gson.java:735) ~[spark-dpp-1.0.0.jar:?]
at com.google.gson.Gson.toJson(Gson.java:714) ~[spark-dpp-1.0.0.jar:?]
at com.google.gson.Gson.toJson(Gson.java:669) ~[spark-dpp-1.0.0.jar:?]
at com.google.gson.Gson.toJson(Gson.java:649) ~[spark-dpp-1.0.0.jar:?]
at com.starrocks.scheduler.TaskManager.saveTasks(TaskManager.java:533) ~[starrocks-fe.jar:?]
at com.starrocks.server.GlobalStateMgr.saveImage(GlobalStateMgr.java:1639) ~[starrocks-fe.jar:?]
at com.starrocks.server.GlobalStateMgr.saveImage(GlobalStateMgr.java:1590) ~[starrocks-fe.jar:?]
at com.starrocks.leader.Checkpoint.replayAndGenerateGlobalStateMgrImage(Checkpoint.java:204) ~[starrocks-fe.jar:?]
at com.starrocks.leader.Checkpoint.runAfterCatalogReady(Checkpoint.java:93) ~[starrocks-fe.jar:?]
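
For context on where this limit comes from: the stack shows the checkpoint thread serializing the task list with Gson.toJson, which builds the whole JSON document inside a StringWriter, so the result has to fit into a single char[] (capped near Integer.MAX_VALUE characters no matter how large the heap is). A minimal generic illustration of that Gson behavior, not StarRocks code; hugeTaskList and image.json are placeholders:

import com.google.gson.Gson;

import java.io.IOException;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Collections;
import java.util.List;

public class GsonStringVsStream {
    public static void main(String[] args) throws IOException {
        Gson gson = new Gson();
        // Stand-in for the very large task list the checkpoint serializes.
        List<String> hugeTaskList = Collections.nCopies(1_000, "task");

        // What the stack above amounts to: toJson(Object) builds the entire
        // document inside a StringWriter, so the whole JSON must fit in one
        // char[] (capped near Integer.MAX_VALUE chars regardless of -Xmx).
        String json = gson.toJson(hugeTaskList);
        System.out.println(json.length());

        // Gson's streaming overload writes straight to a Writer instead,
        // so no single giant String is ever materialized.
        try (Writer out = Files.newBufferedWriter(Paths.get("image.json"), StandardCharsets.UTF_8)) {
            gson.toJson(hugeTaskList, out);
        }
    }
}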

This error was presumably caused by having turned off transparent hugepages with: echo 'madvise' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
After setting it back to always, that error no longer appears, but now it reports this instead:
2023-07-25 13:32:10,272 WARN (leaderCheckpointer|112) [ColocateTableIndex.cleanupInvalidDbOrTable():989] remove 0 invalid tableid: []
2023-07-25 13:32:18,716 WARN (leaderCheckpointer|112) [GlobalStateMgr.updateBaseTableRelatedMv():1384] Setting the materialized view mv_zt_saas_public_staff_department_data(50300) to invalid because the table null was not exist.
2023-07-25 13:32:18,716 WARN (leaderCheckpointer|112) [GlobalStateMgr.updateBaseTableRelatedMv():1384] Setting the materialized view mv_zt_saas_public_staff_department_data(50300) to invalid because the table null was not exist.
2023-07-25 13:32:21,999 WARN (leaderCheckpointer|112) [OlapTable.onDrop():2077] Ignore materialized view MvId{dbId=10616, id=158009} does not exists
2023-07-25 13:32:21,999 WARN (leaderCheckpointer|112) [OlapTable.onDrop():2077] Ignore materialized view MvId{dbId=10616, id=158112} does not exists
2023-07-25 13:32:39,565 WARN (leaderCheckpointer|112) [OlapTable.onDrop():2077] Ignore materialized view MvId{dbId=10616, id=158370} does not exists
2023-07-25 13:32:40,201 WARN (leaderCheckpointer|112) [OlapTable.onDrop():2077] Ignore materialized view MvId{dbId=10616, id=159770} does not exists
2023-07-25 13:32:40,250 WARN (leaderCheckpointer|112) [OlapTable.onDrop():2077] Ignore materialized view MvId{dbId=10616, id=160850} does not exists
Could you please take a look at what this problem is?

We then upgraded further to version 2.5.9 and set the FE memory to 110 GB (the host has 128 GB), but the following still occurs:
java.lang.OutOfMemoryError: null
How much memory does this need?
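
For what it's worth, this particular OutOfMemoryError does not look like plain heap exhaustion: a Java String/StringBuilder is backed by one char[], and array lengths are ints, so a single string can never hold more than about Integer.MAX_VALUE characters no matter how large -Xmx is. A rough back-of-the-envelope sketch, plain JDK arithmetic rather than StarRocks code:

public class StringLimitMath {
    public static void main(String[] args) {
        // A String/StringBuilder is backed by a single char[]; array lengths are
        // ints, so one string can never exceed Integer.MAX_VALUE characters.
        long maxChars = Integer.MAX_VALUE;            // ~2.15 billion chars
        long bytesForOneMaxBuffer = maxChars * 2L;    // UTF-16: 2 bytes per char

        System.out.println("max chars in one String : " + maxChars);
        System.out.printf("heap held by that buffer: %.1f GiB%n",
                bytesForOneMaxBuffer / (1024.0 * 1024 * 1024));
        // => roughly 4 GiB, far below a 110 GB heap, so raising -Xmx further
        //    cannot help once the serialized JSON needs more than 2^31 characters.
    }
}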

Is the error still happening in saveTasks?

The error is reported when the checkpoint daemon runs:
2023-07-26 13:31:48,478 ERROR (leaderCheckpointer|112) [Daemon.run():117] daemon thread got exception. name: leaderCheckpointer
java.lang.OutOfMemoryError: null
at java.lang.AbstractStringBuilder.hugeCapacity(AbstractStringBuilder.java:161) ~[?:1.8.0_291]
at java.lang.AbstractStringBuilder.newCapacity(AbstractStringBuilder.java:155) ~[?:1.8.0_291]
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:125) ~[?:1.8.0_291]
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:448) ~[?:1.8.0_291]
at java.lang.StringBuffer.append(StringBuffer.java:270) ~[?:1.8.0_291]

Is this OOM caused by there being too many BDB databases, so that the amount loaded exceeds the size defined by hugeCapacity?
Here is some related material I found: https://blog.51cto.com/u_15696939/5415367
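
On the hugeCapacity question: the "null" in the message comes from the JDK itself, not from BDB. In JDK 8, AbstractStringBuilder grows its backing char[] with logic roughly like the sketch below (paraphrased from the JDK 8 source for illustration, not StarRocks code). When the required capacity overflows int, i.e. the builder would need more than Integer.MAX_VALUE characters, hugeCapacity throws new OutOfMemoryError() with no message, which the log prints as "java.lang.OutOfMemoryError: null". So the trigger appears to be the total length of the JSON being built for the tasks image, rather than the number of BDB databases as such.

public class CapacityGrowthSketch {
    // Same soft cap the JDK library code uses (Integer.MAX_VALUE - 8).
    private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;

    // Rough paraphrase of JDK 8 AbstractStringBuilder.newCapacity / hugeCapacity.
    static int newCapacity(int currentLength, int minCapacity) {
        int newCapacity = (currentLength << 1) + 2;        // try to double
        if (newCapacity - minCapacity < 0) {
            newCapacity = minCapacity;
        }
        return (newCapacity <= 0 || MAX_ARRAY_SIZE - newCapacity < 0)
                ? hugeCapacity(minCapacity)
                : newCapacity;
    }

    static int hugeCapacity(int minCapacity) {
        if (Integer.MAX_VALUE - minCapacity < 0) {
            // minCapacity overflowed int: the builder needs more than 2^31-1 chars.
            // The error is constructed without a message, hence "OutOfMemoryError: null".
            throw new OutOfMemoryError();
        }
        return (minCapacity > MAX_ARRAY_SIZE) ? minCapacity : MAX_ARRAY_SIZE;
    }

    public static void main(String[] args) {
        // Builder already near the limit; one more append pushes the needed
        // capacity past 2^31, so the int wraps around to a negative value.
        int currentLength = Integer.MAX_VALUE - 100;
        int appended = 200;
        int minCapacity = currentLength + appended;        // overflows to a negative int
        try {
            newCapacity(currentLength, minCapacity);
        } catch (OutOfMemoryError e) {
            // Prints: java.lang.OutOfMemoryError: null  (getMessage() is null)
            System.out.println(e.getClass().getName() + ": " + e.getMessage());
        }
    }
}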