I did set up Hadoop, following all the required steps: 1. created the HDFS file system, 2. moved the text files into the input directory, 3. made sure I have privileges to access all the directories. But when I run the simple word count example, I get an error. Here is my code:
import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class wordcount {

    // Mapper: emits (word, 1) for every token in each input line.
    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sums the counts emitted for each word.
    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/HADOOP_HOME/conf/core-site.xml"));
        conf.addResource(new Path("/HADOOP_HOME/conf/hdfs-site.xml"));

        Job job = new Job(conf, "wordcount");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setJarByClass(wordcount.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        // FileInputFormat.addInputPath(job, new Path(args[0]));
        // FileOutputFormat.setOutputPath(job, new Path(args[1]));
        FileInputFormat.setInputPaths(job, new Path("/user/gabriele/input"));
        FileOutputFormat.setOutputPath(job, new Path("/user/gabriele/output"));

        job.waitForCompletion(true);
    }
}
However, the input path is valid (I checked it from the command line as well), and I can even browse the files at that path from within Eclipse itself, so please help me see where I am going wrong.
There is a suggested solution, namely adding the following 2 lines:
conf.addResource(new Path("/HADOOP_HOME/conf/core-site.xml"));
conf.addResource(new Path("/HADOOP_HOME/conf/hdfs-site.xml"));
But it still didn't work.
Here is the error (launched via Run As -> Run on Hadoop):
13/11/08 08:39:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/11/08 08:39:12 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/11/08 08:39:12 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
13/11/08 08:39:12 INFO mapred.JobClient: Cleaning up the staging area file:/tmp/hadoop-gabriele/mapred/staging/gabriele481581440/.staging/job_local481581440_0001
13/11/08 08:39:12 ERROR security.UserGroupInformation: PriviledgedActionException as:gabriele cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/user/gabriele/input
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/user/gabriele/input
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:235)
    at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:252)
    at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1054)
    at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1071)
    at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:179)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:983)
    at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:936)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:936)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:550)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:580)
    at wordcount.main(wordcount.java:74)
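Note that the error reports the path as file:/user/gabriele/input, i.e. it is being resolved against the local filesystem rather than HDFS. A minimal diagnostic sketch along these lines (hypothetical class name FsCheck, using the same suspect resource paths as in main() above) would show which filesystem the Configuration actually resolves to:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical diagnostic helper, not part of the job above.
public class FsCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same (suspect) resource paths as the job uses.
        conf.addResource(new Path("/HADOOP_HOME/conf/core-site.xml"));
        conf.addResource(new Path("/HADOOP_HOME/conf/hdfs-site.xml"));

        FileSystem fs = FileSystem.get(conf);
        // If the site files were not actually found, this prints file:///
        // (the local filesystem) instead of the hdfs:// namenode URI.
        System.out.println("Default filesystem: " + fs.getUri());
        System.out.println("Input exists there: " + fs.exists(new Path("/user/gabriele/input")));
    }
}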
Thanks
Posted on 2013-11-08 04:17:52
Unless your Hadoop installation really is rooted at /HADOOP_HOME, I would suggest changing the following lines so that HADOOP_HOME is replaced with wherever Hadoop is actually installed (/usr/lib/hadoop, /opt/hadoop, or wherever you installed it):
conf.addResource(new Path("/usr/lib/hadoop/conf/core-site.xml"));
conf.addResource(new Path("/usr/lib/hadoop/conf/hdfs-site.xml"));
Or, in Eclipse, add the /usr/lib/hadoop/conf folder (or wherever Hadoop is installed) to the build classpath.
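Either way, a quick sanity check can confirm the fix took effect. A minimal sketch, assuming a Hadoop 1.x install under /usr/lib/hadoop (the class name ConfCheck and the example URI are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

// Hypothetical sanity-check helper for the fix above.
public class ConfCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/usr/lib/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/usr/lib/hadoop/conf/hdfs-site.xml"));
        // With core-site.xml correctly loaded, this prints something like
        // hdfs://localhost:9000 (fs.default.name is the Hadoop 1.x property).
        // file:/// means the site files were still not found, and input
        // paths will keep resolving against the local filesystem.
        System.out.println(conf.get("fs.default.name"));
    }
}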
https://stackoverflow.com/questions/19854747