
Hadoop Fundamentals Tutorial - Chapter 10 HBase: The Hadoop Database (10.6 HBase API)

Author: 程裕强 · Published 2018-01-02

Chapter 10 HBase: The Hadoop Database

10.6 HBase API (New Features)

All code in this section can be downloaded from https://github.com/ihadron/hbase.git.

10.6.1 Introduction to the HBase API

In the previous sections we operated HBase through the HBase Shell; under the hood, the shell itself is implemented on top of the Java API. Working with the Java API directly is therefore the most direct and most native way to program against HBase.

The full API documentation is available at https://hbase.apache.org/devapidocs/index.html.

(1) Connection

Return type   Method                          Description
Table         getTable(TableName tableName)   Get a Table object
Admin         getAdmin()                      Get an Admin object for managing the HBase cluster
void          close()                         Close the connection

(2) Admin: the administrative API for HBase. Obtain an instance from Connection.getAdmin() and call close() when finished. Admin can be used to create, drop, list, enable and disable tables, add and drop table column families, and perform other administrative operations.

Return type            Method                             Description
boolean                tableExists(TableName tableName)   Check whether a table exists
List<TableDescriptor>  listTableDescriptors()             List the descriptors of all user-space tables
TableName[]            listTableNames()                   List the names of all user-space tables
void                   createTable(TableDescriptor desc)  Create a new table
void                   deleteTable(TableName tableName)   Delete a table
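
As a quick illustration of the Admin methods above, here is a short sketch (not from the original; it assumes an open Connection created as in the examples later in this section) that prints the name of every user-space table:

// Sketch: list all user-space tables via Admin
// (assumes 'connection' is an open org.apache.hadoop.hbase.client.Connection)
Admin admin = connection.getAdmin();
for (TableName tn : admin.listTableNames()) {
    System.out.println(tn.getNameAsString());
}
admin.close();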

(3) HBase API programming steps

1. Create a Configuration object with HBaseConfiguration.create() and set the connection properties (ZooKeeper quorum, etc.).
2. Create a Connection with ConnectionFactory.createConnection(conf), then obtain a Table object via the Connection's getTable method.
3. Perform the desired put, get, delete, scan, etc. operations.
4. Release all resources (Table, Admin, Connection).
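
A minimal sketch of these four steps (assuming the same cluster settings and imports as the full examples below; try-with-resources releases the Table and Connection automatically):

// Step 1: configuration
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "node1,node2,node3");
// Steps 2-3: connect, obtain the table, operate
try (Connection connection = ConnectionFactory.createConnection(conf);
     Table table = connection.getTable(TableName.valueOf("test1"))) {
    Put put = new Put(Bytes.toBytes("001"));
    put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("hadron"));
    table.put(put);
} // Step 4: Table and Connection are closed here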

10.6.2 Windows+Eclipse+Maven+HBase

(1) Edit the Windows hosts file:

C:\Windows\System32\drivers\etc\hosts

# localhost name resolution is handled within DNS itself.
#   127.0.0.1       localhost
#   ::1             localhost
192.168.80.131  node1
192.168.80.132  node2
192.168.80.133  node3

(2) On Windows, the JDK and Maven should already be installed and configured; see Sections 4.1 and 4.2 for details.
(3) Open Eclipse and create a Maven project named hbase.
(4) Edit pom.xml:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>cn.hadron</groupId>
    <artifactId>hbase</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>hbase</name>
    <url>http://maven.apache.org</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>3.8.1</version>
            <scope>test</scope>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.hbase/hbase-client -->
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>1.2.6</version>
        </dependency>
        <dependency>
            <groupId>jdk.tools</groupId>
            <artifactId>jdk.tools</artifactId>
            <version>1.8</version>
            <scope>system</scope>
            <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
        </dependency>
    </dependencies>
</project>

10.6.3 Creating a Table

package cn.hadron.hbase.dao;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
public class CreateDemo {
    public static void main(String[] args)throws Exception{
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.rootdir", "hdfs://cetc/hbase");
        // Set the ZooKeeper quorum (hostnames resolved via the hosts file)
        conf.set("hbase.zookeeper.quorum", "node1,node2,node3");
        // Open an HBase connection
        Connection connection = ConnectionFactory.createConnection(conf);
        // Admin handles table management
        Admin admin = connection.getAdmin();
        // Define the table name
        String tablename="test1";
        TableName tableNameObj = TableName.valueOf(tablename);
        // Check whether the table already exists
        if (admin.tableExists(tableNameObj)) {
            System.out.println("Table exists!");
            System.exit(0);
        } else {
            // Define the table structure
            HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf(tablename));
            // Add a column family
            tableDesc.addFamily(new HColumnDescriptor("info"));
            // Create the table
            admin.createTable(tableDesc);
            System.out.println("create table success!");
        }
        admin.close();
        connection.close();
    }
}

Output in Eclipse:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.Groups).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
create table success!

Check the result with HBase Shell:

[root@node2 ~]# hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.6, rUnknown, Mon May 29 02:25:32 CDT 2017

hbase(main):001:0> list
TABLE                                                                                                                                                                                        
mydb:test                                                                                                                                                                                    
t1                                                                                                                                                                                           
test1                                                                                                                                                                                        
3 row(s) in 0.6070 seconds

=> ["mydb:test", "t1", "test1"]
hbase(main):002:0> desc 'test1'
Table test1 is ENABLED                                                                                                                                                                       
test1                                                                                                                                                                                        
COLUMN FAMILIES DESCRIPTION                                                                                                                                                                  
{NAME => 'info', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERS
IONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}                                                                                                           
1 row(s) in 0.2110 seconds

hbase(main):003:0> 

10.6.4 Inserting Data

package cn.hadron.hbase.dao;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Connection;
public class PutDemo {
    public static void main(String[] args)throws Exception{
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.rootdir", "hdfs://cetc/hbase");
        // Set the ZooKeeper quorum (hostnames resolved via the hosts file)
        conf.set("hbase.zookeeper.quorum", "node1,node2,node3");
        // Open the connection
        Connection connection = ConnectionFactory.createConnection(conf);
        // Get the table
        Table table = connection.getTable(TableName.valueOf("test1"));
        // Instantiate a Put with the rowKey
        Put put = new Put(Bytes.toBytes("001"));
        // Specify the column family, column name and value
        String family="info";
        String qualifier="name";
        String value="hadron";
        put.addColumn(Bytes.toBytes(family), Bytes.toBytes(qualifier), Bytes.toBytes(value));
        // Execute the Put
        table.put(put);
        // Close the table and connection
        table.close();
        connection.close();
        System.out.println("ok!");
    }
}

Output in Eclipse:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.Groups).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
ok!

Check the result with HBase Shell:

hbase(main):003:0> scan 'test1'
ROW                                              COLUMN+CELL                                                                                                                                 
 001                                             column=info:name, timestamp=1501421890863, value=hadron                                                                                     
1 row(s) in 0.2060 seconds

hbase(main):004:0>
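
Table.put also accepts a List<Put>, which sends several mutations to the server in one client call. A small sketch (not from the original; the rowKeys and values are illustrative), reusing the table from PutDemo:

// Sketch: batch several Puts into one call
// (requires java.util.List and java.util.ArrayList imports)
List<Put> puts = new ArrayList<>();
for (int i = 2; i <= 4; i++) {
    Put p = new Put(Bytes.toBytes("00" + i));
    p.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("user" + i));
    puts.add(p);
}
table.put(puts);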

10.6.5 Reading Data

package cn.hadron.hbase.dao;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Connection;
public class GetDemo {
    public static void main(String[] args)throws Exception{
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.rootdir", "hdfs://cetc/hbase");
        // Set the ZooKeeper quorum (hostnames resolved via the hosts file)
        conf.set("hbase.zookeeper.quorum", "node1,node2,node3");
        // Open the connection
        Connection connection = ConnectionFactory.createConnection(conf);
        Table table = connection.getTable(TableName.valueOf("test1"));
        // Instantiate a Get with the rowKey
        Get get = new Get(Bytes.toBytes("001"));
        // Restrict the Get to one column family and column
        String family="info";
        String qualifier="name";
        get.addColumn(family.getBytes(), qualifier.getBytes());
        // Execute the Get and obtain the result
        Result result=table.get(get);
        // Extract the value from the result
        String value=Bytes.toString(result.getValue(family.getBytes(), qualifier.getBytes()));
        System.out.println("value="+value);
        // Close the table and connection
        table.close();
        connection.close();
    }
}

Output in Eclipse:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.Groups).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
value=hadron
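
If no column is added to the Get, the entire row is returned, and its cells can be walked with CellUtil — the same pattern used by the scan(String tableName) method in section 10.6.9. A short sketch (assumes the table from GetDemo plus imports for org.apache.hadoop.hbase.Cell and CellUtil):

// Sketch: fetch a whole row and print every cell in it
Get get = new Get(Bytes.toBytes("001")); // no addColumn: all columns are returned
Result result = table.get(get);
for (Cell cell : result.listCells()) {
    System.out.println(Bytes.toString(CellUtil.cloneFamily(cell)) + ":"
            + Bytes.toString(CellUtil.cloneQualifier(cell)) + "="
            + Bytes.toString(CellUtil.cloneValue(cell)));
}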

10.6.6 Scanning

Prepare some test data:

hbase(main):004:0> put 'test1','002','info:name','abc'
0 row(s) in 0.2870 seconds

hbase(main):005:0> put 'test1','003','info:name','xyz'
0 row(s) in 0.0280 seconds

hbase(main):006:0> put 'test1','004','info:name','qiang'
0 row(s) in 0.0200 seconds

hbase(main):007:0> put 'test1','005','info:name','test'
0 row(s) in 0.0430 seconds

hbase(main):008:0> put 'test1','005','info:age','20'
0 row(s) in 0.0240 seconds

hbase(main):009:0> 

Java code:

package cn.hadron.hbase.dao;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Connection;
public class ScanDemo {
    public static void main(String[] args)throws Exception{
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.rootdir", "hdfs://cetc/hbase");
        // Set the ZooKeeper quorum (hostnames resolved via the hosts file)
        conf.set("hbase.zookeeper.quorum", "node1,node2,node3");
        // Open the connection
        Connection connection = ConnectionFactory.createConnection(conf);
        Table table = connection.getTable(TableName.valueOf("test1"));
        // Initialize the Scan
        Scan scan = new Scan();
/*        // Specify the start rowKey (inclusive)
        scan.setStartRow("001".getBytes());
        // Specify the stop rowKey (exclusive)
        scan.setStopRow("005".getBytes());*/
        // Restrict the scan to one column
        String family="info";
        String qualifier="name";
        scan.addColumn(family.getBytes(), qualifier.getBytes());
        // Execute the scan and obtain a scanner over the results
        ResultScanner result=table.getScanner(scan);
        // Iterate over the results
        String value="";
        for(Result r:result){
             value=Bytes.toString(r.getValue(family.getBytes(), qualifier.getBytes()));
             System.out.println("value="+value);
        }
        // Close the table and connection
        table.close();
        connection.close();
    }
}

Output in Eclipse:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.Groups).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
value=hadron
value=abc
value=xyz
value=qiang
value=test

Adding start and stop rowKey bounds restricts the scan to the range [start, stop); the stop row is exclusive.

        // Specify the start rowKey (inclusive)
        scan.setStartRow("001".getBytes());
        // Specify the stop rowKey (exclusive)
        scan.setStopRow("005".getBytes());

Save the code and run it again:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.Groups).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
value=hadron
value=abc
value=xyz
value=qiang
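
setStartRow and setStopRow are the standard calls in the hbase-client 1.2.6 used in this chapter; in HBase 2.x they are deprecated in favor of withStartRow/withStopRow. An equivalent sketch for newer clients (an assumption to verify against your client version):

// HBase 2.x style row-range scan
Scan scan = new Scan()
        .withStartRow(Bytes.toBytes("001"))  // inclusive
        .withStopRow(Bytes.toBytes("005"));  // exclusive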

10.6.7 Deleting Data

package cn.hadron.hbase.dao;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Connection;
public class DeleteDemo {
    public static void main(String[] args)throws Exception{
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.rootdir", "hdfs://cetc/hbase");
        // Set the ZooKeeper quorum (hostnames resolved via the hosts file)
        conf.set("hbase.zookeeper.quorum", "node1,node2,node3");
        // Open the connection
        Connection connection = ConnectionFactory.createConnection(conf);
        // Get the table
        Table table = connection.getTable(TableName.valueOf("test1"));
        // Instantiate a Delete with the rowKey
        Delete delete=new Delete(Bytes.toBytes("001"));
        // Specify the column family and column to delete
        String family="info";
        String qualifier="name";
        delete.addColumn(family.getBytes(), qualifier.getBytes());
        // Execute the Delete
        table.delete(delete);
        // Close the table and connection
        table.close();
        connection.close();
        System.out.println("ok!");
    }
}

Output in Eclipse:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.Groups).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
ok!
hbase(main):003:0> get 'test1','001'
COLUMN                                           CELL                                                                                                                                        
0 row(s) in 0.0730 seconds
hbase(main):004:0> scan 'test1'
ROW                                              COLUMN+CELL                                                                                                                                 
 002                                             column=info:name, timestamp=1501424329079, value=abc                                                                                        
 003                                             column=info:name, timestamp=1501424339893, value=xyz                                                                                        
 004                                             column=info:name, timestamp=1501424362260, value=qiang                                                                                      
 005                                             column=info:age, timestamp=1501424541777, value=20                                                                                          
 005                                             column=info:name, timestamp=1501424381141, value=test                                                                                       
4 row(s) in 0.0500 seconds

hbase(main):005:0> 
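
Delete supports more than single-cell removal: addColumn deletes only the most recent version of a cell, addColumns deletes all versions, and a Delete with no columns removes the entire row. A sketch (not from the original; the rowKeys are illustrative), reusing the table from DeleteDemo:

// Sketch: Delete variants
Delete wholeRow = new Delete(Bytes.toBytes("002")); // no columns: removes every cell in row 002
Delete allVersions = new Delete(Bytes.toBytes("003"));
allVersions.addColumns(Bytes.toBytes("info"), Bytes.toBytes("name")); // all versions of info:name
table.delete(wholeRow);
table.delete(allVersions);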

10.6.8 Dropping a Table

Write the Java class:

package cn.hadron.hbase.dao;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
public class DropDemo {
    public static void main(String[] args)throws Exception{
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.rootdir", "hdfs://cetc/hbase");
        // Set the ZooKeeper quorum (hostnames resolved via the hosts file)
        conf.set("hbase.zookeeper.quorum", "node1,node2,node3");
        // Open the connection
        Connection connection = ConnectionFactory.createConnection(conf);
        // Admin handles table management
        Admin admin = connection.getAdmin();
        // Define the table name
        TableName table = TableName.valueOf("test1");
        // Disable the table first
        admin.disableTable(table);
        // Then delete it
        admin.deleteTable(table);
        // Close resources
        admin.close();
        connection.close();
        System.out.println("Successfully deleted data table!");
    }
}

Output in Eclipse:

log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.Groups).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Successfully deleted data table!

Listing the tables in HBase Shell confirms that test1 has been deleted.

hbase(main):005:0> list
TABLE                                                                                                                                                                                        
mydb:test                                                                                                                                                                                    
t1                                                                                                                                                                                           
2 row(s) in 0.0440 seconds

=> ["mydb:test", "t1"]
hbase(main):006:0> 

10.6.9 A Wrapper Class

The operations above can be collected into a reusable DAO-style class:

package cn.hadron.hbase.dao;
import java.io.IOException;
import java.util.*;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

public class HBaseDao {

    private static Configuration conf = HBaseConfiguration.create();
    private static Connection connection =null;
    private static Admin admin=null;

    static {
        conf.set("hbase.rootdir", "hdfs://cc/hbase");
        // Set the ZooKeeper quorum, using IP addresses directly
        conf.set("hbase.zookeeper.quorum", "192.168.80.131,192.168.80.132,192.168.80.133");
        try {
            connection = ConnectionFactory.createConnection(conf);
        } catch (IOException e) {
            e.printStackTrace();
        }
        try {
            admin = connection.getAdmin();
        } catch (IOException e) {
            e.printStackTrace();
        } 
    }

    // Create a table
    public static void createTable(String tablename, String columnFamily) {
        TableName tableNameObj = TableName.valueOf(tablename);
        try {
            if (admin.tableExists(tableNameObj)) {
                System.out.println("Table exists!");
                System.exit(0);
            } else {
                HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf(tablename));
                tableDesc.addFamily(new HColumnDescriptor(columnFamily));
                admin.createTable(tableDesc);
                System.out.println("create table success!");
            }
        } catch (IOException e) {
            e.printStackTrace();
        }

    }

    // Drop a table
    public static void deleteTable(String tableName) {
        try {
            TableName table = TableName.valueOf(tableName);
            admin.disableTable(table);
            admin.deleteTable(table);
            System.out.println("delete table " + tableName + " ok.");
        } catch (IOException e) {
            System.out.println("删除表出现异常!");
        }
    }


    // Insert a single cell (row, family, qualifier, value)
    public static void put(String tableName, String rowKey, String family, String qualifier, String value){
        try {
            Table table = connection.getTable(TableName.valueOf(tableName));
            Put put = new Put(Bytes.toBytes(rowKey));
            put.addColumn(Bytes.toBytes(family), Bytes.toBytes(qualifier), Bytes.toBytes(value));
            table.put(put);
            table.close();
            //System.out.println("insert record " + rowKey + " to table " + tableName + " ok.");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Read a single value
    public static String get(String tableName, String rowKey, String family, String qualifier){
        try{
            Table table = connection.getTable(TableName.valueOf(tableName));
            // Instantiate a Get with the rowKey
            Get get = new Get(Bytes.toBytes(rowKey));
            // Restrict the Get to one column family and column
            get.addColumn(family.getBytes(), qualifier.getBytes());
            // Execute the Get and obtain the result
            Result result=table.get(get);
            // Return the extracted value
            return Bytes.toString(result.getValue(family.getBytes(), qualifier.getBytes()));
        }catch(IOException e){
            e.printStackTrace();
            return null;
        }
    }
    // Count the rows in a table
    public static long count(String tableName){
        try{
            final long[] rowCount = {0};
            Table table = connection.getTable(TableName.valueOf(tableName));
            Scan scan = new Scan();
            // FirstKeyOnlyFilter returns only the first cell of each row, so every
            // Result carries exactly one cell and the cell count equals the row count
            scan.setFilter(new FirstKeyOnlyFilter());
            ResultScanner resultScanner = table.getScanner(scan);
            resultScanner.forEach(result -> {
                rowCount[0] += result.size(); // result.size() is the (int) number of cells in this Result
            });
            return rowCount[0];
        }catch(IOException e){
            e.printStackTrace();
            return -1;
        }
    }

    // Scan one column over a rowKey range
    public static List<String> scan(String tableName, String startRow,String stopRow,String family, String qualifier){
        try {
            Table table = connection.getTable(TableName.valueOf(tableName));
            // Initialize the Scan
            Scan scan = new Scan();
            // Specify the start rowKey (inclusive)
            scan.setStartRow(startRow.getBytes());
            // Specify the stop rowKey (exclusive)
            scan.setStopRow(stopRow.getBytes());
            scan.addColumn(family.getBytes(), qualifier.getBytes());
            // Execute the scan and obtain a scanner over the results
            ResultScanner result=table.getScanner(scan);
            List<String> list=new ArrayList<>();
            String value=null;
            for(Result r:result){
                value=Bytes.toString(r.getValue(family.getBytes(), qualifier.getBytes()));  
                list.add(value); 
            } 
            return list;
        } catch (IOException e) {
            e.printStackTrace();
            return null;
        } 
    }
    // Scan one column of the entire table
    public static List<String> scan(String tableName,String family, String qualifier){
        try {
            Table table = connection.getTable(TableName.valueOf(tableName));
            // Initialize the Scan
            Scan scan = new Scan();
            scan.addColumn(family.getBytes(), qualifier.getBytes());
            // Execute the scan and obtain a scanner over the results
            ResultScanner result=table.getScanner(scan);
            List<String> list=new ArrayList<>();
            String value=null;
            for(Result r:result){
                value=Bytes.toString(r.getValue(family.getBytes(), qualifier.getBytes()));  
                list.add(value); 
            } 
            return list;
        } catch (IOException e) {
            e.printStackTrace();
            return null;
        } 
    }

    public static List<String> scan(String tableName){
        List<String> list=new ArrayList<>();
        try {
            // Get the table
            Table table = connection.getTable(TableName.valueOf(tableName));
            Scan scan = new Scan();
            ResultScanner resultScanner = table.getScanner(scan);
            StringBuffer sb=null;
            for (Result result : resultScanner) {
                List<Cell> cells = result.listCells();
                for (Cell cell : cells) {
                    sb=new StringBuffer();
                    sb.append("rowKey:").append(Bytes.toString(CellUtil.cloneRow(cell))).append("\t");
                    sb.append("family:").append(Bytes.toString(CellUtil.cloneFamily(cell))).append(",");
                    sb.append(Bytes.toString(CellUtil.cloneQualifier(cell))).append("=");
                    sb.append(Bytes.toString(CellUtil.cloneValue(cell)));
                    list.add(sb.toString());
                }
            }
            return list;
        } catch (IOException e) {
            e.printStackTrace();
            return null;
        }
    }

    public static void delete(String tableName,String rowKey,String family, String qualifier){
        try {
            // Get the table
            Table table = connection.getTable(TableName.valueOf(tableName));
            // Instantiate a Delete with the rowKey
            Delete delete=new Delete(Bytes.toBytes(rowKey));
            // Specify the column family and column to delete
            delete.addColumn(family.getBytes(), qualifier.getBytes());
            // Execute the Delete
            table.delete(delete);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }


    // Close shared resources
    public static void close(){
        try {
            admin.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        try {
            connection.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    // Test
    public static void main(String[] args) {
        HBaseDao.deleteTable("testA");
        HBaseDao.createTable("testA", "info");
        // Insert 10 rows in a loop
        for(int i=0;i<10;i++){
            HBaseDao.put("testA", "00"+i, "info", "name", "test"+i);
            HBaseDao.put("testA", "00"+i, "info", "age", i+"");
        }
        System.out.println("count="+HBaseDao.count("testA"));
        String value=HBaseDao.get("testA", "001","info", "name");
        System.out.println("value="+value);
        // Scan with a rowKey range
        System.out.println("------------------scan(testA,000,004,info,name)");
        List<String> list=HBaseDao.scan("testA", "000", "004", "info", "name");
        for(String s:list){
            System.out.println(s);
        }
        // Scan one column of the whole table
        System.out.println("------------------scan(testA,info,name)");
        list=HBaseDao.scan("testA", "info", "name");
        for(String s:list){
            System.out.println(s);
        }
        list.clear();
        // Scan every cell of the whole table
        System.out.println("------------------scan(testA)");
        list=HBaseDao.scan("testA");
        for(String s:list){
            System.out.println(s);
        }
        HBaseDao.close();
    }

}

Output in Eclipse (the first line is expected on a first run: testA does not exist yet, so deleteTable fails):

log4j:WARN No appenders could be found for logger (org.apache.hadoop.security.Groups).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception while deleting table!
create table success!
count=10
value=test1
------------------scan(testA,000,004,info,name)
test0
test1
test2
test3
------------------scan(testA,info,name)
test0
test1
test2
test3
test4
test5
test6
test7
test8
test9
------------------scan(testA)
rowKey:000  family:info,age=0
rowKey:000  family:info,name=test0
rowKey:001  family:info,age=1
rowKey:001  family:info,name=test1
rowKey:002  family:info,age=2
rowKey:002  family:info,name=test2
rowKey:003  family:info,age=3
rowKey:003  family:info,name=test3
rowKey:004  family:info,age=4
rowKey:004  family:info,name=test4
rowKey:005  family:info,age=5
rowKey:005  family:info,name=test5
rowKey:006  family:info,age=6
rowKey:006  family:info,name=test6
rowKey:007  family:info,age=7
rowKey:007  family:info,name=test7
rowKey:008  family:info,age=8
rowKey:008  family:info,name=test8
rowKey:009  family:info,age=9
rowKey:009  family:info,name=test9

Query the result with HBase Shell:

hbase(main):004:0> list
TABLE                                                                                                                  
mydb:test                                                                                                              
t1                                                                                                                     
testA                                                                                                                  
3 row(s) in 0.0570 seconds

=> ["mydb:test", "t1", "testA"]
hbase(main):005:0> scan 'testA'
ROW                            COLUMN+CELL                                                                             
 000                           column=info:age, timestamp=1501921506036, value=0                                       
 000                           column=info:name, timestamp=1501921505995, value=test0                                  
 001                           column=info:age, timestamp=1501921506053, value=1                                       
 001                           column=info:name, timestamp=1501921506046, value=test1                                  
 002                           column=info:age, timestamp=1501921506066, value=2                                       
 002                           column=info:name, timestamp=1501921506059, value=test2                                  
 003                           column=info:age, timestamp=1501921506078, value=3                                       
 003                           column=info:name, timestamp=1501921506072, value=test3                                  
 004                           column=info:age, timestamp=1501921506089, value=4                                       
 004                           column=info:name, timestamp=1501921506084, value=test4                                  
 005                           column=info:age, timestamp=1501921506101, value=5                                       
 005                           column=info:name, timestamp=1501921506095, value=test5                                  
 006                           column=info:age, timestamp=1501921506112, value=6                                       
 006                           column=info:name, timestamp=1501921506106, value=test6                                  
 007                           column=info:age, timestamp=1501921506144, value=7                                       
 007                           column=info:name, timestamp=1501921506117, value=test7                                  
 008                           column=info:age, timestamp=1501921506154, value=8                                       
 008                           column=info:name, timestamp=1501921506149, value=test8                                  
 009                           column=info:age, timestamp=1501921506164, value=9                                       
 009                           column=info:name, timestamp=1501921506159, value=test9                                  
10 row(s) in 0.3550 seconds

hbase(main):006:0> 