Does broker load hit the INSERT resource groups?
Our cluster has recently been hitting frequent SQL out-of-memory errors. Many of the failing statements are trivial, plain INSERT INTO ... SELECT with no transformation logic, yet they still run out of memory, and I don't know how to troubleshoot this.
Spill is enabled and resource groups are enabled.
I'd like to know whether BROKER LOAD is governed by resource groups, and whether it can interfere with other normal SQL.
Likewise, can STREAM LOAD affect other normal queries and cause out-of-memory errors?
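For reference, the dumps below were collected with roughly these statements (session scope; the exact LIKE patterns are my approximation):

```sql
-- Collect the spill / pipeline / query-queue session variables and the
-- resource group definitions shown below.
SHOW VARIABLES LIKE '%spill%';
SHOW VARIABLES LIKE '%pipeline%';
SHOW VARIABLES LIKE '%query_queue%';
SHOW RESOURCE GROUPS ALL;
```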
+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| enable_spill              | true  |
| spill_encode_level        | 7     |
| spill_mode                | auto  |
| spill_revocable_max_bytes | 0     |
+---------------------------+-------+
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| enable_pipeline_engine | true  |
| max_pipeline_dop       | 64    |
| pipeline_dop           | 2     |
| pipeline_profile_level | 1     |
| pipeline_sink_dop      | 2     |
+------------------------+-------+
+----------------------------------------------+-------+
| Variable_name                                | Value |
+----------------------------------------------+-------+
| enable_group_level_query_queue               | true  |
| enable_query_queue_load                      | true  |
| enable_query_queue_select                    | false |
| enable_query_queue_statistic                 | true  |
| query_queue_concurrency_limit                | 30    |
| query_queue_cpu_used_permille_limit          | 1000  |
| query_queue_driver_high_water                | -1    |
| query_queue_driver_low_water                 | -1    |
| query_queue_fresh_resource_usage_interval_ms | 1000  |
| query_queue_max_queued_queries               | 1024  |
| query_queue_mem_used_pct_limit               | 0.75  |
| query_queue_pending_timeout_second           | 3600  |
+----------------------------------------------+-------+
+-------------------------+---------+----------------+-----------+---------------+----------------------------+---------------------------+---------------------+-------------------+---------------------------+--------+--------------------------------------------------------------------------+
| name | id | cpu_core_limit | mem_limit | max_cpu_cores | big_query_cpu_second_limit | big_query_scan_rows_limit | big_query_mem_limit | concurrency_limit | spill_mem_limit_threshold | type | classifiers |
+-------------------------+---------+----------------+-----------+---------------+----------------------------+---------------------------+---------------------+-------------------+---------------------------+--------+--------------------------------------------------------------------------+
| dev_load_resource_group | 4662765 | 12 | 90.0% | null | 0 | 0 | 0 | 10 | 90.0% | NORMAL | (id=4662766, weight=12.1, user=fabric, query_type in (INSERT), db='dev') |
| load_resource_group | 4662744 | 12 | 80.0% | null | 0 | 0 | 0 | 6 | 10.0% | NORMAL | (id=4662745, weight=2.1, user=fabric, query_type in (INSERT)) |
| load_resource_group | 4662744 | 12 | 80.0% | null | 0 | 0 | 0 | 6 | 10.0% | NORMAL | (id=4662746, weight=2.1, user=cubeuser, query_type in (INSERT)) |
| ro_resource_group | 4322391 | 2 | 5.0% | null | 0 | 0 | 0 | 8 | 10.0% | NORMAL | (id=4322392, weight=2.05, user=ro, query_type in (INSERT, SELECT)) |
+-------------------------+---------+----------------+-----------+---------------+----------------------------+---------------------------+---------------------+-------------------+---------------------------+--------+--------------------------------------------------------------------------+
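One thing I noticed: in the FE audit log, the BROKER LOAD entry (pasted below) has an empty ResourceGroup field. A minimal sketch of the check I ran, with a sample line inlined so it is self-contained; the real log path (assumed here to be fe/log/fe.audit.log) may differ on your deployment:

```shell
# Pull the ResourceGroup field out of an audit-log line. On the cluster this
# would be: grep -o 'ResourceGroup=[^|]*' fe/log/fe.audit.log  (path assumed).
printf '%s\n' \
  '2024-08-02 00:30:11 [query] |User=cubeuser|ResourceGroup=|State=OK|Stmt=LOAD LABEL ...' \
| grep -o 'ResourceGroup=[^|]*'
# prints "ResourceGroup=" -- i.e. no resource group was matched for the load
```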
2024-08-02 00:30:11.357+08:00 [query] |Timestamp=1722529811353|Client=10.0.0.77:40370|User=cubeuser|AuthorizedUser='cubeuser'@'%'|ResourceGroup=|Catalog=default_catalog|Db=cubeappdata|State=OK|ErrorCode=|Time=4|ScanBytes=0|ScanRows=0|ReturnRows=0|StmtId=1517013|QueryId=52b6489d-5023-11ef-bdc2-02423c37a781|IsQuery=false|feIp=mdw|Stmt=LOAD LABEL ``.1722529811329_3FB__migrate_02qdy_ods_api_wdtqjqm_other_instock_business_info_f_1
(DATA INFILE ('s3://broker-load/1722529811329_3FB__migrate_02qdy_ods_api_wdtqjqm_other_instock_business_info_f_1.csv') INTO TABLE _migrate_02qdy_ods_api_wdtqjqm_other_instock_business_info_f COLUMNS TERMINATED BY '\x01' ROWS TERMINATED BY '\x02' (rec_id
, other_in_no
, status
, warehouse_no
, warehouse_name
, warehouse_id
, logistics_no
, logistics_name
, logistics_id
, remark
, employee_name
, note_count
, reason
, created
, modified
, detail_list
, origin_data
, dt
, yuce_cube_shop_id
)) WITH BROKER ("aws.s3.access_key" = "", "aws.s3.secret_key" = "", "aws.s3.enable_ssl" = "false", "aws.s3.endpoint" = "http://10.0.0.78:9000", "aws.s3.region" = "us-east-1") PROPERTIES ("timeout" = "3600")|Digest=|IsForwardToLeader=true