[Error: The arguments of blob store(S3/Azure) may be wrong. You can check the arguments like region, IAM, instance profile and so on.]

1. Broker load of files on OSS-HDFS into StarRocks
-- StarRocks table DDL
CREATE TABLE test.arch_account_type2
(
    id int(11) NOT NULL COMMENT "primary key",
    name varchar(65533) NULL COMMENT "source type name",
    url varchar(65533) NULL COMMENT "URL of the source type's image",
    create_time datetime NULL COMMENT "creation time",
    update_time datetime NULL COMMENT "update time",
    kudu_time datetime NULL COMMENT "Kudu storage time",
    kafka_time datetime NULL COMMENT "Kafka send time"
) ENGINE=OLAP
PRIMARY KEY(id)
DISTRIBUTED BY HASH(id) BUCKETS 1
PROPERTIES (
    "compression" = "LZ4",
    "enable_persistent_index" = "true",
    "fast_schema_evolution" = "true",
    "replicated_storage" = "true",
    "replication_num" = "1"
);

2. Load statement
-- Method 1:
use test;
LOAD LABEL arch_account_type_003
(
    DATA INFILE("oss://xxx/user/hive/warehouse/ods.db/arch_account_type_all_temp4/")
    INTO TABLE arch_account_type2
    FORMAT AS "parquet"
)
WITH BROKER
(
    "fs.oss.accessKeyId" = "****",
    "fs.oss.accessKeySecret" = "****",
    "fs.oss.endpoint" = "****"
);

show load where label='arch_account_type_003';

Error: The arguments of blob store(S3/Azure) may be wrong. You can check the arguments like region, IAM, instance profile and so on.

1. [Error: The arguments of blob store(S3/Azure) may be wrong. You can check the arguments like region, IAM, instance profile and so on.]
2. [Error: ERROR 1064 (HY000): Access storage error. Error message: No FileSystem for scheme "oss"]
3. [Error: Fail to get file status: listObjects() on oss://*/user/hive/warehouse/ods.db/arch_account_type_all_temp4: software.amazon.awssdk.services.s3.model.S3Exception: The specified bucket is not valid. :InvalidBucketName: The specified bucket is not valid.]
These three errors are the same underlying problem, hit through three different test approaches.

Notes:
1. The OSS bucket holding the files runs in OSS-HDFS mode.
2. The target StarRocks is the open-source edition with integrated storage and compute (shared-nothing), version:
version info
Version: 3.3.1
Git: 2b87854
Build Info: StarRocks@localhost (CentOS Linux 7 (Core))
Build Time: 2024-07-18 05:32:09

The fe log contains the full error trace for this; have a look there.
In my experience, the cause is that the xxx in oss://xxx/… is written as a full domain name, whereas this position expects just the short bucket name; or the xxx violates the lowercase-letters-plus-dash naming rule (dots are not allowed).
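The bucket-name rule described above (and quoted verbatim in the SDK exception further down the thread) can be sketched as a quick local check. This is a hypothetical helper for pre-validating the xxx part of the path before submitting the load; it is not part of StarRocks or the OSS SDK:

```python
import re

# Mirrors the rule in the OSS SDK error message: a bucket name must
# 1) consist of lowercase letters, digits, or dash(-),
# 2) start with a lowercase letter or digit,
# 3) be 3-63 characters long.  Note: dots are not allowed, so a full
# domain name can never pass.
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9-]{2,62}$")

def is_valid_oss_bucket_name(name: str) -> bool:
    """Return True if `name` satisfies the OSS bucket naming rule."""
    return BUCKET_RE.fullmatch(name) is not None

# A full domain like "my-bucket.cn-hangzhou.oss-dls.aliyuncs.com"
# (hypothetical example) fails because of the dots; only the short
# bucket name "my-bucket" is valid.
print(is_valid_oss_bucket_name("my-bucket"))                                   # True
print(is_valid_oss_bucket_name("my-bucket.cn-hangzhou.oss-dls.aliyuncs.com"))  # False
```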

Just tried that; it doesn't work.
We have two StarRocks deployments:
1. A shared-data (storage and compute separated) cluster on Alibaba Cloud; the same statement runs and succeeds there.
2. Community edition v3.3.1 with integrated storage and compute; the identical script fails.


fe.warn.log

2025-02-21 16:31:41.910+08:00 WARN (thrift-server-pool-445708|449341) [LeaderImpl.finishTask():198] finish task reports bad. request: TFinishTaskRequest(backend:TBackend(host:192.168.135.198, be_port:9060, http_port:8040), task_type:CLONE, signature:579469, task_status:TStatus(status_code:RUNTIME_ERROR, error_msgs:[local tablet migration failed. error message: storage migrate failed, tablet is in schemachange process. tablet_id: 579469]))
2025-02-21 16:31:41.910+08:00 WARN (thrift-server-pool-445708|449341) [LeaderImpl.finishTask():246] task type: CLONE, status_code: RUNTIME_ERROR, local tablet migration failed. error message: storage migrate failed, tablet is in schemachange process. tablet_id: 579469, backendId: 1289383, signature: 579469
2025-02-21 16:31:41.910+08:00 WARN (thrift-server-pool-445696|449329) [LeaderImpl.finishTask():246] task type: CLONE, status_code: RUNTIME_ERROR, local tablet migration failed. error message: storage migrate failed, tablet is in schemachange process. tablet_id: 587269, backendId: 1289383, signature: 587269
2025-02-21 16:31:41.910+08:00 WARN (thrift-server-pool-445691|449324) [LeaderImpl.finishTask():246] task type: CLONE, status_code: RUNTIME_ERROR, local tablet migration failed. error message: storage migrate failed, tablet is in schemachange process. tablet_id: 579533, backendId: 1289383, signature: 579533
2025-02-21 16:31:50.014+08:00 ERROR (pending_load_task_scheduler_pool-0|96051) [HdfsFsManager.listPath():1245] The arguments of blob store(S3/Azure) may be wrong. You can check the arguments like region, IAM, instance profile and so on.
2025-02-21 16:31:50.014+08:00 WARN (pending_load_task_scheduler_pool-0|96051) [LoadTask.exec():78] LOAD_JOB=9568896, error_msg={Failed to execute load task}
com.starrocks.common.UserException: The arguments of blob store(S3/Azure) may be wrong. You can check the arguments like region, IAM, instance profile and so on.
at com.starrocks.fs.hdfs.HdfsFsManager.listPath(HdfsFsManager.java:1247)
at com.starrocks.fs.hdfs.HdfsService.listPath(HdfsService.java:60)
at com.starrocks.fs.HdfsUtil.parseFile(HdfsUtil.java:83)
at com.starrocks.fs.HdfsUtil.parseFile(HdfsUtil.java:97)
at com.starrocks.load.loadv2.BrokerLoadPendingTask.getAllFileStatus(BrokerLoadPendingTask.java:96)
at com.starrocks.load.loadv2.BrokerLoadPendingTask.executeTask(BrokerLoadPendingTask.java:74)
at com.starrocks.load.loadv2.LoadTask.exec(LoadTask.java:72)
at com.starrocks.task.PriorityLeaderTask.run(PriorityLeaderTask.java:38)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.lang.IllegalArgumentException: The bucket name "xxxx" is invalid. A bucket name must: 1) be comprised of lower-case characters, numbers or dash(-); 2) start with lower case or numbers; 3) be between 3-63 characters long.
at com.aliyun.oss.internal.OSSUtils.ensureBucketNameValid(OSSUtils.java:90)
at com.aliyun.oss.internal.OSSObjectOperation.getObjectMetadata(OSSObjectOperation.java:466)
at com.aliyun.oss.OSSClient.getObjectMetadata(OSSClient.java:645)
at org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystemStore.getObjectMetadata(AliyunOSSFileSystemStore.java:273)
at org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.getFileStatus(AliyunOSSFileSystem.java:271)
at org.apache.hadoop.fs.aliyun.oss.AliyunOSSFileSystem.listStatus(AliyunOSSFileSystem.java:427)
at org.apache.hadoop.fs.Globber.listStatus(Globber.java:128)
at org.apache.hadoop.fs.Globber.doGlob(Globber.java:291)
at org.apache.hadoop.fs.Globber.glob(Globber.java:202)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:2316)
at com.starrocks.fs.hdfs.HdfsFsManager.listPath(HdfsFsManager.java:1216)
… 12 more
2025-02-21 16:31:50.014+08:00 WARN (pending_load_task_scheduler_pool-0|96051) [LoadJob.unprotectedExecuteCancel():666] LOAD_JOB=9568896, transaction_id={7040252}, error_msg={Failed to execute load with error: The arguments of blob store(S3/Azure) may be wrong. You can check the arguments like region, IAM, instance profile and so on.}
2025-02-21 16:31:51.990+08:00 WARN (thrift-server-pool-445745|449379) [LeaderImpl.finishTask():198] finish task reports bad. request: TFinishTaskRequest(backend:TBackend(host:192.168.135.198, be_port:9060, http_port:8040), task_type:CLONE, signature:589953, task_status:TStatus(status_code:RUNTIME_ERROR, error_msgs:[local tablet migration failed. error message: storage migrate failed, tablet is in schemachange process. tablet_id: 589953]))
2025-02-21 16:31:51.990+08:00 WARN (thrift-server-pool-445748|449383) [LeaderImpl.finishTask():198] finish task reports bad. request: TFinishTaskRequest(backend:TBackend(host:192.168.135.198, be_port:9060, http_port:8040), task_type:CLONE, signature:589997, task_status:TStatus(status_code:RUNTIME_ERROR, error_msgs:[local tablet migration failed. error message: storage migrate failed, tablet is in schemachange process. tablet_id: 589997]))