
HBase Java API 03: Integrating HBase with MapReduce


HBase version: 1.2.6

1. Case Description

There is an existing HBase table "student" with the following contents:

hbase(main):025:0> scan 'student'
ROW     COLUMN+CELL                                                                            
 0001   column=info:age, timestamp=1516139523768, value=15                                     
 0001   column=info:name, timestamp=1516139523388, value=Madeline                              
 0002   column=info:age, timestamp=1516139523820, value=16                                     
 0002   column=info:name, timestamp=1516139523469, value=Jed                                   
 0003   column=info:age, timestamp=1516139523862, value=17                                     
 0003   column=info:name, timestamp=1516139523607, value=Olivia                                
 0004   column=info:age, timestamp=1516139523908, value=18                                     
 0004   column=info:name, timestamp=1516139523680, value=Jed                                   
 0005   column=info:age, timestamp=1516139525527, value=19                                     
 0005   column=info:name, timestamp=1516139523725, value=Sarah
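
For reference, a table like this can be created and populated from the HBase shell. The commands below are a minimal sketch that reproduces the first two rows shown above; the remaining rows follow the same pattern, and the timestamps will of course differ:

create 'student', 'info'
put 'student', '0001', 'info:name', 'Madeline'
put 'student', '0001', 'info:age', '15'
put 'student', '0002', 'info:name', 'Jed'
put 'student', '0002', 'info:age', '16'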

Requirement:

Write a MapReduce program that extracts the "name" column of the "info" column family from the "student" table and stores it in a new HBase table, "student_extract". The "student_extract" table must contain only the "info" column family, and that family must contain only the "name" column.

2. Code Implementation

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class HBaseAndMapReduce {
    
    private static final String ZOOKEEPER_LIST = "repo:2181";
    private static final String INPUT_TABLE_NAME = "student";
    private static final String OUTPUT_TABLE_NAME = "student_extract";
    private static final byte[] FAMILY_NAME = Bytes.toBytes("info");
    private static final byte[] COLUMN_NAME = Bytes.toBytes("name");
    
    public static void main(String[] args) throws Exception {
        
        System.setProperty("HADOOP_USER_NAME", "root");
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", ZOOKEEPER_LIST);
        
        // Create the output table, replacing it if it already exists
        Connection conn = ConnectionFactory.createConnection(conf);
        Admin admin = conn.getAdmin();
        
        if(admin.tableExists(TableName.valueOf(OUTPUT_TABLE_NAME))) {
            admin.disableTable(TableName.valueOf(OUTPUT_TABLE_NAME));
            admin.deleteTable(TableName.valueOf(OUTPUT_TABLE_NAME));
        }
        
        HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf(OUTPUT_TABLE_NAME));
        HColumnDescriptor columnDesc = new HColumnDescriptor(FAMILY_NAME);
        tableDesc.addFamily(columnDesc);
        admin.createTable(tableDesc);
        // Release the admin connection; the job opens its own connections to HBase
        admin.close();
        conn.close();
        
        Job job = Job.getInstance(conf);
        // Needed for cluster submission so the jar containing this class is shipped with the job
        job.setJarByClass(HBaseAndMapReduce.class);
        
        // Initialize the job: configure the mapper, the reducer, and their input/output settings
        Scan scan = new Scan();
        scan.addColumn(FAMILY_NAME, COLUMN_NAME);
        // The last argument controls whether dependency jars are shipped with the job; set it to false in local mode
        TableMapReduceUtil.initTableMapperJob(INPUT_TABLE_NAME, scan, HBaseAndMapreduceMapper.class, Text.class, NullWritable.class, job, false);
        TableMapReduceUtil.initTableReducerJob(OUTPUT_TABLE_NAME, HBaseAndMapreduceReducer.class, job);
        
        /*
         * The two init calls above already apply the equivalent of the following settings:
         * job.setMapperClass(HBaseAndMapreduceMapper.class);
         * job.setReducerClass(HBaseAndMapreduceReducer.class);
         * job.setMapOutputKeyClass(Text.class);
         * job.setMapOutputValueClass(NullWritable.class);
         * job.setOutputKeyClass(ImmutableBytesWritable.class);
         * job.setOutputValueClass(Mutation.class);
         */
        
        boolean successful = job.waitForCompletion(true);
        System.exit(successful ? 0 : -1);
        
    }
    
    // public abstract class TableMapper<KEYOUT, VALUEOUT> extends Mapper<ImmutableBytesWritable, Result, KEYOUT, VALUEOUT> {...}
    private static class HBaseAndMapreduceMapper extends TableMapper<Text, NullWritable> {

        @Override
        protected void map(ImmutableBytesWritable key, Result value, Context context)
                throws IOException, InterruptedException {
            
            String rowkey = Bytes.toString(key.copyBytes());
            List<Cell> cells = value.listCells();
            Text keyOut = new Text();
            
            for (Cell cell : cells) {
                String family = Bytes.toString(CellUtil.cloneFamily(cell));
                String qualifier = Bytes.toString(CellUtil.cloneQualifier(cell));
                String cellValue = Bytes.toString(CellUtil.cloneValue(cell));
                // Emit "rowkey \t family \t qualifier \t value" as the map output key
                keyOut.set(rowkey + "\t" + family + "\t" + qualifier + "\t" + cellValue);
                context.write(keyOut, NullWritable.get());
            }
        }
        
    }
    
    // public abstract class TableReducer<KEYIN, VALUEIN, KEYOUT> extends Reducer<KEYIN, VALUEIN, KEYOUT, Mutation> {...}
    private static class HBaseAndMapreduceReducer extends TableReducer<Text, NullWritable, ImmutableBytesWritable> {

        @Override
        protected void reduce(Text key, Iterable<NullWritable> values, Context context) throws IOException, InterruptedException {
            
            // The incoming key has the form "rowkey \t family \t qualifier \t value"
            String[] fields = key.toString().split("\t");
            byte[] rowkey = Bytes.toBytes(fields[0]);
            byte[] family = Bytes.toBytes(fields[1]);
            byte[] qualifier = Bytes.toBytes(fields[2]);
            byte[] cellValue = Bytes.toBytes(fields[3]);
            
            ImmutableBytesWritable newRowKey = new ImmutableBytesWritable(rowkey);
            Put put = new Put(rowkey);
            put.addColumn(family, qualifier, cellValue);
            
            context.write(newRowKey, put);
        }
        
    }

}
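
To run the job on a cluster, the project can be packaged into a jar and submitted with the hadoop command; the jar name below is hypothetical. Note that because addDependencyJars was set to false above, the HBase jars must already be on the task classpath (or the flag changed to true). For a purely local run, the class can simply be launched from the IDE.

# Hypothetical jar name; build the project first, e.g. with mvn package
hadoop jar hbase-mr-demo.jar HBaseAndMapReduce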

After the job finishes, the data in the "student_extract" table is:

hbase(main):033:0> scan 'student_extract'
ROW     COLUMN+CELL                                                                            
 0001   column=info:name, timestamp=1516142234629, value=Madeline                              
 0002   column=info:name, timestamp=1516142234629, value=Jed                                   
 0003   column=info:name, timestamp=1516142234629, value=Olivia                                
 0004   column=info:name, timestamp=1516142234629, value=Jed                                   
 0005   column=info:name, timestamp=1516142234629, value=Sarah