
NoSuchMethodException org.apache.hadoop.yarn.api.records.URL.fromURI

Stack Overflow user

Asked on 2018-08-01 13:31:40

1 answer · 557 views · 0 followers · Score: 1

I am trying to read data from one HBase table, process it a little, and then store it in another table using the following code:

package analysis;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class Author_ref {

    public static class MyMapper extends TableMapper<Text, Text> {

        @Override
        public void map(ImmutableBytesWritable row, Result value, Context context)
                throws IOException, InterruptedException {
            String key = new String(row.get());
            String values = new String(value.getValue(Bytes.toBytes("authors"), Bytes.toBytes("authors")));
            // replaceAll() takes a regex, so a bare "[" or "]" is an invalid
            // pattern; replace() does the literal substitution intended here.
            String cleanValues = values.replace("[", "").replace("]", "");
            String[] authors = cleanValues.trim().split(",");

            // Emit one (author, row key) pair per author in the cell.
            for (String author : authors) {
                context.write(new Text(author), new Text(key));
            }
        }
    }

    public static class MyReducer extends TableReducer<Text, Text, ImmutableBytesWritable> {

        @Override
        public void reduce(Text author, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            // Concatenate all paper keys for this author into one cell value.
            StringBuilder papers = new StringBuilder();
            for (Text x : values) {
                papers.append(",").append(x.toString());
            }
            Put p = new Put(author.getBytes());
            // addColumn() replaces the deprecated Put.add(byte[], byte[], byte[]).
            p.addColumn(Bytes.toBytes("papers_writen"), Bytes.toBytes("papers_writen"),
                    Bytes.toBytes(papers.toString()));
            context.write(null, p);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration config = HBaseConfiguration.create();
        Job job = Job.getInstance(config, "ExampleSummary"); // new Job(...) is deprecated
        Scan scan = new Scan();
        scan.setCaching(500);        // 1 is the default in Scan, which will be bad for MapReduce jobs
        scan.setCacheBlocks(false);  // don't pollute the block cache with a full scan
        job.setJarByClass(Author_ref.class);     // class that contains mapper and reducer
        TableMapReduceUtil.initTableMapperJob(
                "Dataset",          // input table
                scan,               // Scan instance to control CF and attribute selection
                MyMapper.class,     // mapper class
                Text.class,         // mapper output key
                Text.class,         // mapper output value
                job);
        TableMapReduceUtil.initTableReducerJob(
                "Author_paper",     // output table
                MyReducer.class,    // reducer class
                job);

        job.setNumReduceTasks(1);   // at least one, adjust as required

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

I am getting the following error:

线程"main“java.lang.NoSuchMethodError异常: org.apache.hadoop.yarn.api.records.URL.fromURI(Ljava/net/URI;)Lorg/apache/hadoop/yarn/api/records/URL;在org.apache.hadoop.mapreduce.v2.util.LocalResourceBuilder.createLocalResources(LocalResourceBuilder.java:144)在org.apache.hadoop.mapreduce.v2.util.MRApps.setupDistributedCache(MRApps.java:531)在org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:92)在org.apache.hadoop.mapred.LocalJobRunner$Job.(LocalJobRunner.java:171)在org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:760)在org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:253) at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570) at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889) at org.apache.hadoop.mapreduce。org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1588) at analysis.Author_ref.main的Job.submit(Job.java:1567) (Author_ref.java:111)

I am using Hadoop 2.9 and HBase 1.2.6.1.
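As background on this failure mode: a java.lang.NoSuchMethodError at runtime means the JVM found the class but not the method, i.e. the jar on the classpath is older than the one the code was compiled against. A minimal diagnostic sketch (the class name MethodCheck is hypothetical, and it assumes you run it with the same classpath as the job) that reproduces the failing lookup reflectively:

// Hypothetical diagnostic, not part of the original job code: checks whether
// the hadoop-yarn-api jar visible to this JVM actually contains URL.fromURI(URI).
public class MethodCheck {
    public static void main(String[] args) throws Exception {
        Class<?> url = Class.forName("org.apache.hadoop.yarn.api.records.URL");
        // Throws NoSuchMethodException if the jar predates the Hadoop
        // release that introduced fromURI, matching the error above.
        url.getMethod("fromURI", java.net.URI.class);
        System.out.println("URL.fromURI is present on the classpath");
    }
}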


1 Answer

Stack Overflow user

Answered on 2018-08-07 14:49:19

Hadoop 2.9 and HBase 1.2.x are not compatible. See:

http://hbase.apache.org/book.html#basic.prerequisites

You must use compatible versions.
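One quick way to confirm which Hadoop version actually ends up on the job's classpath (HBase's transitive dependencies can pull in an older release than the cluster's) is Hadoop's own VersionInfo utility; a minimal sketch, with the class name VersionCheck chosen for illustration:

import org.apache.hadoop.util.VersionInfo;

// Illustrative check: prints the Hadoop version on the current classpath.
// If this reports an older release than the Hadoop 2.9 cluster, the client
// jars are mismatched and errors like the NoSuchMethodError above can follow.
public class VersionCheck {
    public static void main(String[] args) {
        System.out.println("Hadoop version: " + VersionInfo.getVersion());
    }
}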

Score: 0
Original content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/51625899
