
AWS S3:上传大型文件失败,而ResetException:未能重置请求输入流

Stack Overflow user
Asked on 2015-05-08 10:09:03
2 answers · 16.5K views · 0 following · Score: 6

Can anyone tell me what is wrong with the code below? Uploading a large file (>10 MB) always fails with ResetException: Failed to reset the request input stream.

The failure always happens after some time (roughly 15 minutes in), which means the upload runs for a while and then fails somewhere in the middle.

Here is what I have tried while debugging the issue:

  1. in.markSupported() == false // checking whether mark is supported on my FileInputStream. I strongly suspect this is the problem, because the S3 SDK apparently wants to perform a reset at some point during the upload, probably after the connection dropped or the transfer hit some error.
  2. Wrapped my FileInputStream in a BufferedInputStream to enable marking. in.markSupported() now returns true, so mark support is there. Strangely, the upload still fails with the same error.
  3. Added putRequest.getRequestClientOptions.setReadLimit(n) with n = 100000 (100 KB) and 800000000 (800 MB), but the same error is still thrown. I suspect that is because this parameter is used when resetting the stream, which, as noted above, is not supported on a FileInputStream.
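The mark/reset behavior behind points 1–3 can be reproduced with plain java.io streams, independent of the AWS SDK. A minimal sketch (the scratch file, 16-byte buffer, and method names are illustrative only): FileInputStream never supports mark, and BufferedInputStream invalidates its mark once more bytes than the mark limit have been consumed, which produces the same "Resetting to invalid mark" seen in the stack trace below when the SDK retries a part.

```java
import java.io.*;

public class MarkResetDemo {
    // Create a small scratch file to read from.
    static File scratchFile() throws IOException {
        File tmp = File.createTempFile("demo", ".bin");
        tmp.deleteOnExit();
        try (FileOutputStream out = new FileOutputStream(tmp)) {
            out.write(new byte[1024]); // 1 KB of zeros
        }
        return tmp;
    }

    // A raw FileInputStream does not support mark/reset.
    static boolean rawStreamSupportsMark() throws IOException {
        try (FileInputStream in = new FileInputStream(scratchFile())) {
            return in.markSupported();
        }
    }

    // BufferedInputStream supports mark, but reset() fails once more bytes
    // than the mark limit have been consumed.
    static String resetAfterOverrunningMark() throws IOException {
        try (BufferedInputStream in =
                 new BufferedInputStream(new FileInputStream(scratchFile()), 16)) {
            in.mark(16);             // mark is only valid for the next 16 bytes
            in.read(new byte[512]);  // consume far past the mark limit
            try {
                in.reset();          // what the SDK does before retrying a part
                return "reset ok";
            } catch (IOException e) {
                return e.getMessage();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("markSupported: " + rawStreamSupportsMark());
        System.out.println("reset: " + resetAfterOverrunningMark());
    }
}
```

This also suggests why a larger read limit alone may not help: the limit must cover everything the SDK might need to re-read for a retried part.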

Interestingly, the same problem does not occur on my AWS dev account. I think that is simply because the dev account is not under heavy load like my production account, so uploads there can run as smoothly as possible without any hiccups.

Please see my code below:

object S3TransferExample {
// in main class
def main(args: Array[String]): Unit = {
    ...
    val file = new File("/mnt/10gbfile.zip")
    val in = new FileInputStream(file)
    // val in = new BufferedInputStream(new FileInputStream(file)) --> tried wrapping file inputstream in a buffered input stream, but it didn't help..
    upload("mybucket", "mykey", in, file.length, "application/zip").waitForUploadResult
    ...
}

val awsCred = new BasicAWSCredentials("access_key", "secret_key")
val s3Client = new AmazonS3Client(awsCred)
val tx = new TransferManager(s3Client)

def upload(bucketName: String,
           keyName: String,
           inputStream: InputStream,
           contentLength: Long,
           contentType: String,
           serverSideEncryption: Boolean = true,
           storageClass: StorageClass = StorageClass.ReducedRedundancy): Upload = {
  val metaData = new ObjectMetadata
  metaData.setContentType(contentType)
  metaData.setContentLength(contentLength)

  if(serverSideEncryption) {
    metaData.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION)
  }

  val putRequest = new PutObjectRequest(bucketName, keyName, inputStream, metaData)
  putRequest.setStorageClass(storageClass)
  putRequest.getRequestClientOptions.setReadLimit(100000)

  tx.upload(putRequest)
 
}
}

Here is the full stack trace:

Unable to execute HTTP request: mybucket.s3.amazonaws.com failed to respond
org.apache.http.NoHttpResponseException: mybuckets3.amazonaws.com failed to respond
    at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:143) ~[httpclient-4.3.4.jar:4.3.4]
    at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57) ~[httpclient-4.3.4.jar:4.3.4]
    at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260) ~[httpcore-4.3.2.jar:4.3.2]
    at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283) ~[httpcore-4.3.2.jar:4.3.2]
    at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251) ~[httpclient-4.3.4.jar:4.3.4]
    at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197) ~[httpclient-4.3.4.jar:4.3.4]
    at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271) ~[httpcore-4.3.2.jar:4.3.2]
    at com.amazonaws.http.protocol.SdkHttpRequestExecutor.doReceiveResponse(SdkHttpRequestExecutor.java:66) ~[aws-java-sdk-core-1.9.13.jar:na]
    at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123) ~[httpcore-4.3.2.jar:4.3.2]
    at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685) ~[httpclient-4.3.4.jar:4.3.4]
    at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487) ~[httpclient-4.3.4.jar:4.3.4]
    at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863) ~[httpclient-4.3.4.jar:4.3.4]
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[httpclient-4.3.4.jar:4.3.4]
    at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57) ~[httpclient-4.3.4.jar:4.3.4]
    at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:685) [aws-java-sdk-core-1.9.13.jar:na]
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:460) [aws-java-sdk-core-1.9.13.jar:na]
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:295) [aws-java-sdk-core-1.9.13.jar:na]
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3710) [aws-java-sdk-s3-1.9.13.jar:na]
    at com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:2799) [aws-java-sdk-s3-1.9.13.jar:na]
    at com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:2784) [aws-java-sdk-s3-1.9.13.jar:na]
    at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadPartsInSeries(UploadCallable.java:259) [aws-java-sdk-s3-1.9.13.jar:na]
    at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInParts(UploadCallable.java:193) [aws-java-sdk-s3-1.9.13.jar:na]
    at com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:125) [aws-java-sdk-s3-1.9.13.jar:na]
    at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:129) [aws-java-sdk-s3-1.9.13.jar:na]
    at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:50) [aws-java-sdk-s3-1.9.13.jar:na]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_40]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_40]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_40]
    at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
com.amazonaws.ResetException: Failed to reset the request input stream;  If the request involves an input stream, the maximum stream buffer size can be configured via request.getRequestClientOptions().setReadLimit(int)
  at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:636)
  at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:460)
  at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:295)
  at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3710)
  at com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:2799)
  at com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:2784)
  at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadPartsInSeries(UploadCallable.java:259)
  at com.amazonaws.services.s3.transfer.internal.UploadCallable.uploadInParts(UploadCallable.java:193)
  at com.amazonaws.services.s3.transfer.internal.UploadCallable.call(UploadCallable.java:125)
  at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:129)
  at com.amazonaws.services.s3.transfer.internal.UploadMonitor.call(UploadMonitor.java:50)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Resetting to invalid mark
  at java.io.BufferedInputStream.reset(BufferedInputStream.java:448)
  at com.amazonaws.internal.SdkBufferedInputStream.reset(SdkBufferedInputStream.java:106)
  at com.amazonaws.internal.SdkFilterInputStream.reset(SdkFilterInputStream.java:103)
  at com.amazonaws.event.ProgressInputStream.reset(ProgressInputStream.java:139)
  at com.amazonaws.internal.SdkFilterInputStream.reset(SdkFilterInputStream.java:103)
  at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:634) 

2 Answers

Stack Overflow user

Posted on 2015-05-12 06:31:55

This definitely looks like a bug, and I have reported it. The workaround is to use the other constructor, which takes a File instead of an InputStream:

def upload(bucketName: String,
           keyName: String,
           file: File,
           contentLength: Long,
           contentType: String,
           serverSideEncryption: Boolean = true,
           storageClass: StorageClass = StorageClass.ReducedRedundancy): Upload = {
  val metaData = new ObjectMetadata
  metaData.setContentType(contentType)
  metaData.setContentLength(contentLength)

  if(serverSideEncryption) {
    metaData.setSSEAlgorithm(ObjectMetadata.AES_256_SERVER_SIDE_ENCRYPTION)
  }

  val putRequest = new PutObjectRequest(bucketName, keyName, file)
  putRequest.setStorageClass(storageClass)
  putRequest.getRequestClientOptions.setReadLimit(100000)
  putRequest.setMetadata(metaData)
  tx.upload(putRequest)

}
Score: 7

Stack Overflow user

Posted on 2018-01-26 05:18:07

I investigated this issue, and it is a long story.

The conclusion: pass a system property to java by adding the following option to the java command line:

-Dcom.amazonaws.sdk.s3.defaultStreamBufferSize=YOUR_MAX_PUT_SIZE

See https://github.com/aws/aws-sdk-java/blob/master/aws-java-sdk-s3/src/main/java/com/amazonaws/services/s3/AmazonS3Client.java#L1668

This tells AmazonS3Client to use an appropriately sized maximum buffer for the non-resettable stream, which is what the SDK re-reads from when it retries a part.
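A minimal sketch of the same idea set programmatically instead of via the -D flag (the property name comes from the linked source; the 100 MB upper bound on PUT size is an assumed value for illustration, and the property must be set before the AmazonS3Client class is first loaded):

```java
public class StreamBufferConfig {
    public static void main(String[] args) {
        // Equivalent to -Dcom.amazonaws.sdk.s3.defaultStreamBufferSize=104857600
        // on the java command line. Must run before AmazonS3Client is loaded,
        // since the SDK reads the property at class-initialization time.
        long maxPutBytes = 100L * 1024 * 1024; // assumed 100 MB upper bound per PUT
        System.setProperty("com.amazonaws.sdk.s3.defaultStreamBufferSize",
                           Long.toString(maxPutBytes));
        System.out.println(
            System.getProperty("com.amazonaws.sdk.s3.defaultStreamBufferSize"));
    }
}
```

Note the trade-off: a buffer this large is held in memory per upload, so the File-based constructor from the first answer is usually the cheaper fix when the data already lives on disk.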

Score: 5
Original content from Stack Overflow.
Original link: https://stackoverflow.com/questions/30121218
