
Spark writeStream to IBM Object Storage fails with "Access KEY is empty. Please provide valid access key"

Stack Overflow user
Asked on 2018-12-17 16:29:40
2 answers · 1.2K views · 0 followers · 0 votes

I am currently using Apache Spark 2.3.2 and building a pipeline that reads streaming CSV files from the filesystem and then writes the stream out to IBM Object Storage.

For this I am using the Stocator connector. With the configuration below, regular (non-streaming) reads and writes against IBM COS work fine. Streaming reads and writes, however, throw the following error:

com.ibm.stocator.fs.common.exception.ConfigurationParseException: Configuration parse exception: Access KEY is empty. Please provide valid access key.

Stocator configuration

sc.hadoopConfiguration.set("fs.cos.impl","com.ibm.stocator.fs.ObjectStoreFileSystem")    
sc.hadoopConfiguration.set("fs.stocator.scheme.list","cos")    
sc.hadoopConfiguration.set("fs.stocator.cos.impl","com.ibm.stocator.fs.cos.COSAPIClient")    
sc.hadoopConfiguration.set("fs.stocator.cos.scheme", "cos")    
sc.hadoopConfiguration.set("fs.cos.Cloud Object Storage-POCDL.endpoint", "{url}")    
sc.hadoopConfiguration.set("fs.cos.Cloud Object Storage-POCDL.access.key", "{access_key}")    
sc.hadoopConfiguration.set("fs.cos.Cloud Object Storage-POCDL.secret.key", {secret_key})

readStream

val csvDF = sqlContext
.readStream
.option("sep", ",")
.schema(fschema)
.csv({path})
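
fschema is not defined in the question. Judging from the spark-shell output further down (columns EMP_NO and EMP_SALARY plus "2 more fields"), a hypothetical reconstruction might look like:

import org.apache.spark.sql.types.{StructField, StringType, StructType}

// Hypothetical sketch of fschema: only EMP_NO and EMP_SALARY are visible in
// the logs; the remaining two field names are placeholders.
val fschema = StructType(Seq(
  StructField("EMP_NO", StringType),
  StructField("EMP_SALARY", StringType),
  StructField("FIELD3", StringType),
  StructField("FIELD4", StringType)
))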

writeStream

import org.apache.spark.sql.streaming.OutputMode

val query = csvDF
  .writeStream
  .outputMode(OutputMode.Append())
  .format("parquet")
  .option("checkpointLocation", "cos://stream-csv.Cloud Object Storage-POCDL/")
  .option("path", "cos://stream-csv.Cloud Object Storage-POCDL/")
  .start()
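
start() returns a StreamingQuery; in a standalone application the driver would normally block on it so the stream keeps running:

// Block until the streaming query terminates or fails.
query.awaitTermination()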

Error log:

"2018-12-17 16:51:14 WARN FileStreamSinkLog:66 - Could not use FileContext API for managing metadata log files at path cos://stream-csv.Cloud Object Storage-POCDL/_spark_metadata. Using FileSystem API instead for managing log files. The log may be inconsistent under failures. 2018-12-17 16:51:14 INFO ObjectStoreVisitor:110 - Stocator registered as cos for cos://stream-csv.Cloud Object Storage-POCDL/_spark_metadata 2018-12-17 16:51:14 INFO COSAPIClient:251 - Init : cos://stream-csv.Cloud Object Storage-POCDL/_spark_metadata Exception in thread "main" com.ibm.stocator.fs.common.exception.ConfigurationParseException: Configuration parse exception: Access KEY is empty. Please provide valid access key"

Is there any way to resolve this error, or an alternative approach that would work?

Update: more logs

scala>  val csvDF = spark.readStream.option("sep", ",").schema(fschema).csv("C:\\Users\\abc\\Desktop\\stream")
csvDF: org.apache.spark.sql.DataFrame = [EMP_NO: string, EMP_SALARY: string ... 2 more fields]

scala>  val query = csvDF.writeStream.outputMode(OutputMode.Append()).format("csv").option("checkpointLocation", "cos://stream-csv.Cloud Object Storage-POCDL/").option("path", "cos://stream-csv.Cloud Object Storage-POCDL/").start()
18/12/18 10:47:40 WARN FileStreamSinkLog: Could not use FileContext API for managing metadata log files at path cos://stream-csv.Cloud%20Object%20Storage-POCDL/_spark_metadata. Using FileSystem API instead for managing log files. The log may be inconsistent under failures.
18/12/18 10:47:40 DEBUG ObjectStoreVisitor: Stocator schema space : cos, provided cos. Implementation com.ibm.stocator.fs.cos.COSAPIClient
18/12/18 10:47:40 INFO ObjectStoreVisitor: Stocator registered as cos for cos://stream-csv.Cloud%2520Object%2520Storage-POCDL/_spark_metadata
18/12/18 10:47:40 DEBUG ObjectStoreVisitor: Load implementation class com.ibm.stocator.fs.cos.COSAPIClient
18/12/18 10:47:40 DEBUG ObjectStoreVisitor: Load direct init for COSAPIClient. Overwrite com.ibm.stocator.fs.cos.COSAPIClient
18/12/18 10:47:40 INFO COSAPIClient: Init :  cos://stream-csv.Cloud%2520Object%2520Storage-POCDL/_spark_metadata
18/12/18 10:47:40 DEBUG ConfigurationHandler: COS driver: initialize start for cos://stream-csv.Cloud%2520Object%2520Storage-POCDL/_spark_metadata
18/12/18 10:47:40 DEBUG ConfigurationHandler: extracted host name from cos://stream-csv.Cloud%2520Object%2520Storage-POCDL/_spark_metadata is stream-csv.Cloud%20Object%20Storage-POCDL
18/12/18 10:47:40 DEBUG ConfigurationHandler: Initiaize for bucket: stream-csv, service: Cloud%20Object%20Storage-POCDL
18/12/18 10:47:40 DEBUG ConfigurationHandler: Filesystem cos://stream-csv.Cloud%2520Object%2520Storage-POCDL/_spark_metadata, using conf keys for fs.cos.Cloud%20Object%20Storage-POCDL. Alternative list [fs.s3a.Cloud%20Object%20Storage-POCDL, fs.s3d.Cloud%20Object%20Storage-POCDL]
18/12/18 10:47:40 DEBUG ConfigurationHandler: Initialize completed successfully for bucket stream-csv service Cloud%20Object%20Storage-POCDL
18/12/18 10:47:40 DEBUG MemoryCache: Guava initiated with size 2000 expiration 30 seconds
18/12/18 10:47:40 ERROR ObjectStoreVisitor: Configuration parse exception: Access KEY is empty. Please provide valid access key
com.ibm.stocator.fs.common.exception.ConfigurationParseException: Configuration parse exception: Access KEY is empty. Please provide valid access key
at com.ibm.stocator.fs.cos.COSAPIClient.initiate(COSAPIClient.java:276)
at com.ibm.stocator.fs.ObjectStoreVisitor.getStoreClient(ObjectStoreVisitor.java:130)
at com.ibm.stocator.fs.ObjectStoreFileSystem.initialize(ObjectStoreFileSystem.java:105)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog$FileSystemManager.<init>(HDFSMetadataLog.scala:409)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.createFileManager(HDFSMetadataLog.scala:292)
at org.apache.spark.sql.execution.streaming.HDFSMetadataLog.<init>(HDFSMetadataLog.scala:63)
at org.apache.spark.sql.execution.streaming.CompactibleFileStreamLog.<init>(CompactibleFileStreamLog.scala:46)
at org.apache.spark.sql.execution.streaming.FileStreamSinkLog.<init>(FileStreamSinkLog.scala:85)
at org.apache.spark.sql.execution.streaming.FileStreamSink.<init>(FileStreamSink.scala:98)
at org.apache.spark.sql.execution.datasources.DataSource.createSink(DataSource.scala:317)
at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:293)
... 49 elided
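
Reading the DEBUG output above: the host part of the URI is percent-encoded, so ConfigurationHandler reports "using conf keys for fs.cos.Cloud%20Object%20Storage-POCDL", whereas the configuration was set under fs.cos.Cloud Object Storage-POCDL with literal spaces. If that reading is correct, the encoded lookup finds nothing and Stocator reports the access key as empty. A quick sanity check, as a sketch:

// Sketch: compare the key that was set against the key the DEBUG log says
// ConfigurationHandler resolves. A null from the second lookup would point
// to the space / percent-encoding mismatch.
val cfg = spark.sparkContext.hadoopConfiguration
println(cfg.get("fs.cos.Cloud Object Storage-POCDL.access.key"))      // as set
println(cfg.get("fs.cos.Cloud%20Object%20Storage-POCDL.access.key"))  // as resolved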
2 Answers

Stack Overflow user

Posted on 2018-12-18 06:20:23

I just tested streaming and it seems to work for me; I tried some similar code:

val userSchema = spark.read.parquet("/mydata/test.parquet").schema
val streamDf = spark.readStream.schema(userSchema).parquet("/mydata/")
streamDf.writeStream.format("parquet")
  .option("checkpointLocation", "cos://bucket.my_service/")
  .option("path", "cos://bucket.my_service")
  .start()

What Stocator version are you using? You can see this in the logs, in the user-agent header.

Votes: 0

Stack Overflow user

Posted on 2019-01-14 13:41:34

The cause of the problem and the proposed solution are described here.

Votes: 0

Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/53819304
