
Resolving an Error When Using Hive 2.0.0 with HBase 1.2.1


First, the error:

org.apache.hive.service.cli.HiveSQLException: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: tg is not allowed to impersonate hive
    at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:258)
    at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:249)
    at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:579)
    at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:167)
    at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107)
    at java.sql.DriverManager.getConnection(DriverManager.java:579)
    at java.sql.DriverManager.getConnection(DriverManager.java:221)
    at com.yc.hive.TestHive1.getConn(TestHive1.java:153)
    at com.yc.hive.TestHive1.main(TestHive1.java:33)

Hive 2.0.0 adds permission checks. Operating Hive over JDBC with older versions does not raise this error, but with the newer version opening a connection fails as shown above. The root cause is a permission problem: Hadoop does not allow the user tg (the user HiveServer2 runs as here) to impersonate the JDBC user hive.

The fix is to add the following to Hadoop's core-site.xml configuration file (replace tg with whichever user your own error message names):

<property>
    <name>hadoop.proxyuser.tg.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.tg.groups</name>
    <value>*</value>
</property>

That is all the configuration that is needed.

Restart the virtual machine and restart all of the services so the change takes effect.
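(On many clusters the proxy-user settings can also be reloaded with hdfs dfsadmin -refreshSuperUserGroupsConfiguration instead of a full restart, but restarting everything is the simplest route.) If you want to confirm the fix before wiring up the full helper class below, a minimal connectivity check along these lines should now open a session without the impersonation error. This is only a sketch: the class name ImpersonationCheck is made up here, while the driver, host, port, and hive/hive credentials are the same ones used later in this post.

import java.sql.Connection;
import java.sql.DriverManager;

/**
 * Minimal sanity check (sketch): opens a HiveServer2 session with the same
 * driver, URL, and credentials that the HiveAPI helper below uses.
 */
public class ImpersonationCheck {
	public static void main(String[] args) throws Exception {
		Class.forName("org.apache.hive.jdbc.HiveDriver");
		try (Connection conn = DriverManager.getConnection(
				"jdbc:hive2://master:10000/default", "hive", "hive")) {
			System.out.println("Session opened: " + !conn.isClosed());
		}
	}
}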

Now for the test.

The test helper class:

package com.tg.hadoop.hive;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
/**
 * 
 * @author 汤高
 *
 */
public class HiveAPI {
	// Many online examples use org.apache.hadoop.hive.jdbc.HiveDriver; that class name does not work with newer Hive versions
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";  
    
    // Note the "hive2" scheme; many examples online use "hive", which fails on newer versions
    private static String url = "jdbc:hive2://master:10000/default"; 
    private static String user = "hive";  
    private static String password = "hive";  
    private static String sql = "";  
    
	public static ResultSet countData(Statement stmt, String tableName)  {
		sql = "select count(1) from " + tableName;
		System.out.println("Running:" + sql);
		ResultSet res=null;
		try {
			res = stmt.executeQuery(sql);
			System.out.println("执行“regular hive query”运行结果:");
			while (res.next()) {
				System.out.println("count ------>" + res.getString(1));
			}
		} catch (SQLException e) {
			e.printStackTrace();
		}
		return res;
	}

	public static ResultSet selectData(Statement stmt, String tableName)  {
		sql = "select * from " + tableName;
		System.out.println("Running:" + sql);
		ResultSet res=null;
		try {
			res = stmt.executeQuery(sql);
			System.out.println("执行 select * query 运行结果:");
			while (res.next()) {
				System.out.println(res.getInt(1) + "\t" + res.getString(2));
			}
		} catch (SQLException e) {
			e.printStackTrace();
		}
		return res;
	}

	public static boolean loadData(Statement stmt, String tableName,String filepath) {
		// Local path on the machine where Hive runs (in my setup, the VM's home directory)
		sql = "load data local inpath '" + filepath + "' into table " + tableName;
		System.out.println("Running:" + sql);
		boolean result=false;
		try {
			result=stmt.execute(sql);
		} catch (SQLException e) {
			e.printStackTrace();
		}
		return result;
	}
	
	public static boolean loadDataFromHDFS(Statement stmt, String tableName,String filepath) {
		// HDFS path (no "local" keyword, so the file is read from HDFS)
		sql = "load data inpath '" + filepath + "' into table " + tableName;
		System.out.println("Running:" + sql);
		boolean result=false;
		try {
			result=stmt.execute(sql);
		} catch (SQLException e) {
			e.printStackTrace();
		}
		return result;
	}

	public static ResultSet describeTables(Statement stmt, String tableName)   {
		sql = "describe " + tableName;
		System.out.println("Running:" + sql);
		ResultSet res=null;
		try {
			res = stmt.executeQuery(sql);
			System.out.println("执行 describe table 运行结果:");
			while (res.next()) {
				System.out.println(res.getString(1) + "\t" + res.getString(2));
			}
		} catch (SQLException e) {
			e.printStackTrace();
		}
		return res;
	}

	public static ResultSet showTables(Statement stmt, String tableName)  {
		if (tableName == null || tableName.isEmpty()) {
			sql = "show tables";
		}else{
			sql = "show tables '" + tableName + "'";
		}
		ResultSet res=null;
		try {
			res = stmt.executeQuery(sql);
			System.out.println("执行 show tables 运行结果:");
			while (res.next()) {
				System.out.println(res.getString(1));
			}
		} catch (SQLException e) {
			e.printStackTrace();
		}
		return res;
	}

	public static boolean createTable(Statement stmt, String tableName)  {
		String reqsql = "drop table if exists " + tableName;
		sql = "create table " + tableName + " (key int, value string)  row format delimited fields terminated by '\t'";
		boolean result=false;
		try {
			stmt.execute(reqsql);
			result=stmt.execute(sql);
		} catch (SQLException e) {
			e.printStackTrace();
		}
		return result;
	}

	public static boolean dropTable(Statement stmt,String tableName) {
		// name of the table to drop
		//String tableName = "testHive";
		sql = "drop table  " + tableName;
		boolean result=false;
		try {
			result = stmt.execute(sql);
		} catch (SQLException e) {
			e.printStackTrace();
		}
		return result;
	}

	public static Connection getConn()  {
		Connection conn = null;
		try {
			Class.forName(driverName);
			conn = DriverManager.getConnection(url, user, password);
		} catch (ClassNotFoundException e) {
			e.printStackTrace();
		} catch (SQLException e) {
			e.printStackTrace();
		}
		return conn;
	}
	
	public static void  close(Connection conn,Statement stmt){
		  try {
			// close the Statement before the Connection
			if (stmt != null) {
				stmt.close();
				stmt = null;
			}
			if (conn != null) {
				conn.close();
				conn = null;
			}
		} catch (SQLException e) {
			e.printStackTrace();
		}  
	}
}
The JUnit test class:
import static org.junit.Assert.*;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import org.junit.Before;
import org.junit.Test;

import com.tg.hadoop.hive.HiveAPI;
/**
 * 
 * @author 汤高
 *
 */
public class TestHive {
	private  Statement stmt = null; 
	private  Connection conn=null;
	@Before
	public void setConAndStatement() {
		conn = HiveAPI.getConn();
		assertNotNull(conn);
		try {
			stmt = conn.createStatement();
		} catch (SQLException e) {
			e.printStackTrace();
		}
	}
	
	@Test
	public void testDropTable() {
		String tableName="testhive";
		assertNotNull(HiveAPI.dropTable(stmt, tableName));
	}
	
	@Test
	public void testCreateTable() {
		boolean result=HiveAPI.createTable(stmt,"testhive7");
		assertNotNull(result);
	}
	
	@Test
	public void testCreateTableHive() {
		String sql="CREATE  TABLE  hbase_table_tgg(key int, value string) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES (\"hbase.columns.mapping\" = \":key,cf1:val\")  TBLPROPERTIES (\"hbase.table.name\" = \"tanggaozhou\")";
		
		try {
			stmt.execute(sql);
		} catch (SQLException e) {
			e.printStackTrace();
		}
	}
	
	
	@Test
	public void testdescribeTables(){
		ResultSet res=HiveAPI.describeTables(stmt, "testhive");
		assertNotNull(res);
	}
	
	@Test
	public void testshowTables(){
		//ResultSet res=HiveAPI.showTables(stmt, "testhive");
		ResultSet res=HiveAPI.showTables(stmt, null);
		assertNotNull(res);
	}
	
	@Test
	public void testloadData(){
		boolean result=HiveAPI.loadData( stmt, "testhive","user.txt");
		assertNotNull(result);
	}
	
	
	@Test
	public  void testclose(){
		HiveAPI.close(conn,stmt);
	}
	
	@Test
	public  void testSelectData(){
		ResultSet res=HiveAPI.selectData(stmt, "testhive");
		assertNotNull(res);
	}
	
	@Test
	public  void testCountData(){
		ResultSet res=HiveAPI.countData(stmt, "testhive");
		assertNotNull(res);
	}
	
	
}

Result:

[28 21:48:40,381 INFO ] org.apache.hive.jdbc.Utils - Supplied authorities: master:10000
[28 21:48:40,382 INFO ] org.apache.hive.jdbc.Utils - Resolved authority: master:10000
[28 21:48:40,453 DEBUG] org.apache.thrift.transport.TSaslTransport - opening transport org.apache.thrift.transport.TSaslClientTransport@1505b41
[28 21:48:40,465 DEBUG] org.apache.thrift.transport.TSaslClientTransport - Sending mechanism name PLAIN and initial response of length 10
[28 21:48:40,468 DEBUG] org.apache.thrift.transport.TSaslTransport - CLIENT: Writing message with status START and payload length 5
[28 21:48:40,468 DEBUG] org.apache.thrift.transport.TSaslTransport - CLIENT: Writing message with status COMPLETE and payload length 10
[28 21:48:40,469 DEBUG] org.apache.thrift.transport.TSaslTransport - CLIENT: Start message handled
[28 21:48:40,469 DEBUG] org.apache.thrift.transport.TSaslTransport - CLIENT: Main negotiation loop complete
[28 21:48:40,469 DEBUG] org.apache.thrift.transport.TSaslTransport - CLIENT: SASL Client receiving last message
[28 21:48:40,470 DEBUG] org.apache.thrift.transport.TSaslTransport - CLIENT: Received message with status COMPLETE and payload length 0
[28 21:48:40,488 DEBUG] org.apache.thrift.transport.TSaslTransport - writing data length: 71
[28 21:48:40,519 DEBUG] org.apache.thrift.transport.TSaslTransport - CLIENT: reading data length: 109
[28 21:48:40,600 DEBUG] org.apache.thrift.transport.TSaslTransport - writing data length: 337
[28 21:48:40,622 DEBUG] org.apache.thrift.transport.TSaslTransport - CLIENT: reading data length: 109
[28 21:48:40,654 DEBUG] org.apache.thrift.transport.TSaslTransport - writing data length: 100
[28 21:48:44,116 DEBUG] org.apache.thrift.transport.TSaslTransport - CLIENT: reading data length: 53

Success.
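With the connection fixed, the HBase-backed table created in testCreateTableHive behaves like any other Hive table. The sketch below reuses the HiveAPI helper above; the class name HBaseTableDemo is made up, and the INSERT ... SELECT assumes the plain testhive table has already been created and loaded with data (see testCreateTable and testloadData). Rows inserted this way are physically stored in the HBase table tanggaozhou under column family cf1.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

import com.tg.hadoop.hive.HiveAPI;

public class HBaseTableDemo {
	public static void main(String[] args) throws Exception {
		Connection conn = HiveAPI.getConn();
		Statement stmt = conn.createStatement();
		// Copy rows from the plain Hive table into the HBase-backed table;
		// the HBaseStorageHandler writes them into the HBase table "tanggaozhou"
		stmt.execute("insert into table hbase_table_tgg select key, value from testhive");
		// Read the rows back through Hive (served from HBase by the storage handler)
		ResultSet res = stmt.executeQuery("select * from hbase_table_tgg");
		while (res.next()) {
			System.out.println(res.getInt(1) + "\t" + res.getString(2));
		}
		HiveAPI.close(conn, stmt);
	}
}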
