Cluster still connects to old cluster IPs after migration

【Details】We migrated the FE/BE services from the 10.2.32.0/24 subnet to the 10.2.29.0/24 subnet. After the migration we dropped all of the old nodes; neither show frontends nor show backends lists any old-node information. However, the logs still show requests going to the old nodes' BE/FE services.
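(For reference, the verification described above corresponds to statements like the following; a minimal sketch, run on an FE after the migration:

SHOW FRONTENDS;
SHOW BACKENDS;
)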
【Background】Cluster migration; the migration was completed a long time ago.
【Business impact】
【StarRocks version】2.5.4
【Cluster size】3 FE + 3 BE
【Machine info】CPU virtual cores / memory / NIC, e.g. 48C/64G/10 GbE
【Contact info】So that we can reach you for additional logs while troubleshooting, please add your contact details, e.g. community group 4 - Xiao Li, or an email address. Thanks.
【Attachments】
FE .out log:
Jun 14, 2023 6:08:04 AM com.github.benmanes.caffeine.cache.LocalAsyncCache$AsyncBulkCompleter accept
WARNING: Exception thrown during asynchronous load
java.util.concurrent.CompletionException: com.starrocks.sql.analyzer.SemanticException: rpc failed, host: 10.2.32.78
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1606)
at java.util.concurrent.CompletableFuture$AsyncSupply.exec(CompletableFuture.java:1596)
at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1067)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1703)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:172)
Caused by: com.starrocks.sql.analyzer.SemanticException: rpc failed, host: 10.2.32.78
at com.starrocks.statistic.StatisticExecutor.executeDQL(StatisticExecutor.java:243)
at com.starrocks.statistic.StatisticExecutor.queryStatisticSync(StatisticExecutor.java:83)
at com.starrocks.sql.optimizer.statistics.ColumnBasicStatsCacheLoader.queryStatisticsData(ColumnBasicStatsCacheLoader.java:111)
at com.starrocks.sql.optimizer.statistics.ColumnBasicStatsCacheLoader.lambda$asyncLoadAll$1(ColumnBasicStatsCacheLoader.java:77)
at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1604)
... 5 more

fe.log:
2023-06-21 14:05:48,974 INFO (stateChangeExecutor|63) [DatabaseTransactionMgr.replayUpsertTransactionState():1596] remove expired transaction: TransactionState. txn_id: 4025569, label: insert_6d4d7663-00ce-11ee-8d45-0242f1965298, db id: 10005, table id list: 45010, callback id: -1, coordinator: FE: 10.2.32.78, transaction status: VISIBLE, error replicas num: 0, replica ids: , prepare time: 1685659709214, commit time: 1685659709325, finish time: 1685659709343, write cost: 111ms, publish total cost: 18ms, total cost: 129ms, reason: attachment: com.starrocks.transaction.InsertTxnCommitAttachment@357d35e3
2023-06-21 14:05:48,975 INFO (stateChangeExecutor|63) [DatabaseTransactionMgr.replayUpsertTransactionState():1588] replay a committed transaction TransactionState. txn_id: 4025570, label: 302437a8-f156-463f-8b97-5c4d74e16ced, db id: 11002, table id list: 102307, callback id: -1, coordinator: BE: 10.2.32.79, transaction status: COMMITTED, error replicas num: 0, replica ids: , prepare time: 1685659709340, commit time: 1685659709634, finish time: -1, write cost: 294ms, reason: attachment: com.starrocks.load.loadv2.ManualLoadTxnCommitAttachment@3bd5a25
2023-06-21 14:05:48,975 INFO (stateChangeExecutor|63) [DatabaseTransactionMgr.replayUpsertTransactionState():1588] replay a committed transaction TransactionState. txn_id: 4025571, label: insert_6d636f64-00ce-11ee-8d45-0242f1965298, db id: 10005, table id list: 45010, callback id: -1, coordinator: FE: 10.2.32.78, transaction status: COMMITTED, error replicas num: 0, replica ids: , prepare time: 1685659709359, commit time: 1685659709639, finish time: -1, write cost: 280ms, reason: attachment: com.starrocks.transaction.InsertTxnCommitAttachment@22a5a5f6
2023-06-21 14:05:48,975 INFO (stateChangeExecutor|63) [DatabaseTransactionMgr.replayUpsertTransactionState():1591] replay a visible transaction TransactionState. txn_id: 4025570, label: 302437a8-f156-463f-8b97-5c4d74e16ced, db id: 11002, table id list: 102307, callback id: -1, coordinator: BE: 10.2.32.79, transaction status: VISIBLE, error replicas num: 0, replica ids: , prepare time: 1685659709340, commit time: 1685659709634, finish time: 1685659709649, write cost: 294ms, publish total cost: 15ms, total cost: 309ms, reason: attachment: com.starrocks.load.loadv2.ManualLoadTxnCommitAttachment@7e2b4e6c

One thing we observed: whenever an FE node restarts abnormally, operations referencing the old IPs get printed.
2023-06-26 18:14:09,549 WARN (leaderCheckpointer|483) [ComputeNode.setDecommissioned():205] Backend [id=10003, host=10.2.32.78, heartbeatPort=9050, alive=true] set decommission: true
2023-06-26 18:14:09,549 WARN (leaderCheckpointer|483) [ComputeNode.setDecommissioned():205] Backend [id=10003, host=10.2.32.78, heartbeatPort=9050, alive=true] set decommission: true
2023-06-26 18:14:10,455 WARN (leaderCheckpointer|483) [ComputeNode.setDecommissioned():205] Backend [id=10003, host=10.2.32.78, heartbeatPort=9050, alive=true] set decommission: true
2023-06-26 18:14:10,627 WARN (leaderCheckpointer|483) [ComputeNode.setDecommissioned():205] Backend [id=10004, host=10.2.32.79, heartbeatPort=9050, alive=true] set decommission: true
2023-06-26 18:14:10,627 WARN (leaderCheckpointer|483) [ComputeNode.setDecommissioned():205] Backend [id=10004, host=10.2.32.79, heartbeatPort=9050, alive=true] set decommission: true
2023-06-26 18:14:11,496 WARN (leaderCheckpointer|483) [ComputeNode.setDecommissioned():205] Backend [id=10004, host=10.2.32.79, heartbeatPort=9050, alive=true] set decommission: true
2023-06-26 18:14:11,592 WARN (leaderCheckpointer|483) [ComputeNode.setDecommissioned():205] Backend [id=10002, host=10.2.32.77, heartbeatPort=9050, alive=true] set decommission: true
2023-06-26 18:14:11,593 WARN (leaderCheckpointer|483) [ComputeNode.setDecommissioned():205] Backend [id=10002, host=10.2.32.77, heartbeatPort=9050, alive=true] set decommission: true
2023-06-26 18:14:12,516 WARN (leaderCheckpointer|483) [ComputeNode.setDecommissioned():205] Backend [id=10002, host=10.2.32.77, heartbeatPort=9050, alive=true] set decommission: true

During this period the master FE hit an out-of-memory error and restarted, and then these lines were printed.

After changing subnets, did you modify priority_networks in fe.conf and be.conf?
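(For context: priority_networks is a CIDR-format setting in fe.conf and be.conf that tells a node which local IP to use when the machine has multiple addresses. A hypothetical value matching the new subnet in this thread would be:

priority_networks = 10.2.29.0/24
)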

We started brand-new FE/BE nodes on the new subnet, with priority_networks set to each node's own IP. After starting, they were added to the existing cluster on the old subnet.
Once the new BE nodes had joined, we used the decommission command to migrate the tablets from the old-IP BEs to the new-IP BEs.
For the FEs, after the new-IP FEs had joined and their status was healthy, we dropped the old FE nodes one by one. After that, the old-IP FE/BE services were shut down completely and the new-IP FE/BE nodes continued to serve. But the FE log still shows requests going to the old BEs.
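A minimal sketch of the sequence described above (host and port values are examples, not taken from this cluster):

-- migrate tablets off an old BE; the BE is removed once migration completes
ALTER SYSTEM DECOMMISSION BACKEND "10.2.32.78:9050";
-- remove an old follower FE after the new FEs are healthy
ALTER SYSTEM DROP FOLLOWER "10.2.32.78:9010";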

Is there any way to fix this and get rid of the old cluster IPs?

Are the old BEs still listed in show backends?

Not anymore. The old BE servers have all been taken offline.