I'm running CDH 4.5. I was trying to distcp to s3n, but I've had issues with it since upgrading to 4.5, so now I'm trying to get s3distcp up and running, and I'm hitting problems there too. I downloaded it and ran the following command:
hadoop jar /usr/lib/hadoop/lib/s3distcp.jar --src hdfs://NN:8020/path/to/destination/folder --dest s3n://accessKeyId:secretaccesskey@mybucket/destination/

but I get the following error:
INFO mapred.JobClient: map 100% reduce 0%
INFO mapred.JobClient: Task Id : attempt_201312042223_10889_r_000001_0, Status : FAILED
Error: java.lang.ClassNotFoundException: com.amazonaws.services.s3.AmazonS3
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
at com.amazon.external.elasticmapreduce.s3distcp.CopyFilesReducer.executeDownloads(CopyFilesReducer.java:209)
at com.amazon.external.elasticmapreduce.s3distcp.CopyFilesReducer.reduce(CopyFilesReducer.java:196)
at com.amazon.external.elasticmapreduce.s3distcp.CopyFilesReducer.reduce(CopyFilesReducer.java:30)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:506)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
at java.security.AccessController.doPrivileged(Native Method)
at javax
INFO mapred.JobClient: Job Failed: NA
13/12/12 13:55:25 INFO s3distcp.S3DistCp: Try to recursively delete hdfs:/tmp/985ffdb0-1bc8-4d00-aba6-fd9b18e905f1/tempspace
Exception in thread "main" java.lang.RuntimeException: Error running job
at com.amazon.external.elasticmapreduce.s3distcp.S3DistCp.run(S3DistCp.java:586)
at com.amazon.external.elasticmapreduce.s3distcp.S3DistCp.run(S3DistCp.java:216)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at com.amazon.external.elasticmapreduce.s3distcp.Main.main(Main.java:12)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1388)
at com.amazon.external.elasticmapreduce.s3distcp.S3DistCp.run(S3DistCp.java:568)

I then put the access key ID and secret key into my core-site.xml on all of the data nodes and name nodes:
<property>
<name>fs.s3.awsSecretAccessKey</name>
<value>bippitybopityboo</value>
</property>
<property>
<name>fs.s3.awsAccessKeyId</name>
<value>supercalifragilisticexpialadoscious</value>
</property>
<property>
<name>fs.s3n.awsSecretAccessKey</name>
<value>bippitybopityboo</value>
</property>
<property>
<name>fs.s3n.awsAccessKeyId</name>
<value>supercalifragilisticexpialadoscious</value>
</property>

I still get the same error when I try this:
hadoop jar /usr/lib/hadoop/lib/s3distcp.jar --src hdfs://NN:8020/path/to/destination/folder --dest s3n://mybucket/destination/

Is there some configuration I should be doing, am I missing a jar file, or am I running it incorrectly?
Thanks for any help.
Answered on 2014-02-27 10:44:21
You need the AWS SDK for Java on the classpath. Grab it from http://aws.amazon.com/de/sdkforjava/ and either put the jar into /usr/lib/hadoop/lib or pass it with the -libjars option. Assuming AWS SDK version 1.7.1, the command looks like this:
hadoop jar /usr/lib/hadoop/lib/s3distcp.jar \
-libjars aws-java-sdk-1.7.1/lib/aws-java-sdk-1.7.1.jar \
--src hdfs://NN:8020/path/to/destination/folder \
--dest s3n://accessKeyId:secretaccesskey@mybucket/destination/
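Alternatively, instead of -libjars you can drop the SDK jar into /usr/lib/hadoop/lib. A minimal sketch, assuming the SDK zip was unpacked to aws-java-sdk-1.7.1/ as in the command above and that /usr/lib/hadoop/lib is on the task classpath of every node (paths are illustrative, adjust to your layout):

# Copy the SDK jar onto the Hadoop classpath; the reduce tasks are the ones
# failing with ClassNotFoundException, so repeat this on every node in the cluster.
sudo cp aws-java-sdk-1.7.1/lib/aws-java-sdk-1.7.1.jar /usr/lib/hadoop/lib/

# With the credentials already in core-site.xml, the job can then run without -libjars:
hadoop jar /usr/lib/hadoop/lib/s3distcp.jar \
  --src hdfs://NN:8020/path/to/destination/folder \
  --dest s3n://mybucket/destination/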
References:
https://stackoverflow.com/questions/20552541