
Scala connection to HBase master fails

Stack Overflow user
Asked on 2016-12-13 08:45:40
1 answer · 483 views · 0 followers · score 2

I wrote the following Scala code:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, HBaseAdmin}

val config: Configuration = HBaseConfiguration.create()
config.set("hbase.zookeeper.property.clientPort", zooKeeperClientPort)
config.set("hbase.zookeeper.quorum", zooKeeperQuorum)
config.set("zookeeper.znode.parent", zooKeeperZNodeParent)
config.set("hbase.master", hbaseMaster)
config.addResource("hbase-site.xml")
config.addResource("hdfs-site.xml")
HBaseAdmin.checkHBaseAvailable(config)
val admin: HBaseAdmin = new HBaseAdmin(config)
// descriptor.addColumn(new HColumnDescriptor(Bytes.toBytes("cfbfeature")))
val conn = ConnectionFactory.createConnection(config)
table = conn.getTable(TableName.valueOf(outputTable))

Here is my full error log:

zooKeeperClientPort:2181, zooKeeperQuorum:zk1.hbase.busdev.usw2.cmcm.com,zk2.hbase.busdev.usw2.cmcm.com,zk3.hbase.busdev.usw2.cmcm.com, zooKeeperZNodeParent:/hbase, outputTable:RequestFeature, hbaseMaster:10.2.2.62:60000
16/12/13 08:25:56 WARN util.HeapMemorySizeUtil: hbase.regionserver.global.memstore.upperLimit is deprecated by hbase.regionserver.global.memstore.size
16/12/13 08:25:56 WARN util.HeapMemorySizeUtil: hbase.regionserver.global.memstore.upperLimit is deprecated by hbase.regionserver.global.memstore.size
16/12/13 08:25:56 WARN util.HeapMemorySizeUtil: hbase.regionserver.global.memstore.upperLimit is deprecated by hbase.regionserver.global.memstore.size
16/12/13 08:25:57 INFO zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6ae9e162 connecting to ZooKeeper ensemble=zk2.hbase.busdev.usw2.cmcm.com:2181,zk1.hbase.busdev.usw2.cmcm.com:2181,zk3.hbase.busdev.usw2.cmcm.com:2181
16/12/13 08:25:57 WARN util.HeapMemorySizeUtil: hbase.regionserver.global.memstore.upperLimit is deprecated by hbase.regionserver.global.memstore.size
16/12/13 08:25:57 WARN util.DynamicClassLoader: Failed to identify the fs of dir hdfs://mycluster/hbase/lib, ignored: Unknown host: mycluster
    at org.apache.hadoop.ipc.Client$Connection.<init>(Client.java:214)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1196)
    at org.apache.hadoop.ipc.Client.call(Client.java:1050)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at com.sun.proxy.…
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
    at org.apache.hadoop.hdfs.DFSClient.<init>(…)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
    at org.apache.hadoop.fs.FileSystem.get(…)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
    at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
    at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:229)
    at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
    at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:86)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:833)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:623)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:…)
    at org.apache.hadoop.hbase.client.HBaseAdmin.checkHBaseAvailable(HBaseAdmin.java:2508)
    at com.cmcm.datahero.streaming.actor.ToHBaseActor.preStart(ToHBaseActor.scala:51)
    at akka.actor.Actor$class.aroundPreStart(Actor.scala:472)
    at com.cmcm.datahero.streaming.actor.ToHBaseActor.aroundPreStart(ToHBaseActor.scala:16)
    at akka.actor.ActorCell.create(ActorCell.scala:…)
    at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:456)
    at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
    at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:263)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:…)
    at java.lang.Thread.run(Thread.java:745)
16/12/13 08:25:57 INFO client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x356c1ee7cac04c8


1 Answer

Stack Overflow user

Accepted answer

Posted on 2016-12-15 00:14:32

In the end, I put the HBase and HDFS configuration files into the subpath src/main/resources and then called addResource on the Hadoop Configuration. But that was not the core of my problem: the versions of the HBase jar dependencies must match the HBase version of the cluster. I fixed my build.sbt; the code is posted below. I hope it helps someone else who runs into the error I hit.

libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.0.0-cdh5.4.8"
libraryDependencies += "org.apache.hbase" % "hbase-common" % "1.0.0-cdh5.4.8"
libraryDependencies += "org.apache.hbase" % "hbase-server" % "1.0.0-cdh5.4.8"
libraryDependencies += "org.apache.hadoop" % "hadoop-core" % "2.6.0-mr1-cdh5.4.8"
libraryDependencies += "org.apache.hadoop" % "hadoop-hdfs" % "2.6.0-cdh5.4.8"
libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "2.6.0-cdh5.5.4"
// libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.0.0-CDH"
// libraryDependencies += "org.apache.hbase" % "hbase-common" % "1.0.0"
// libraryDependencies += "org.apache.hbase" % "hbase-server" % "1.0.0"

//scalaSource in Compile := baseDirectory.value / "src/main/scala"
//resourceDirectory in Compile := baseDirectory.value / "src/main/resources"
unmanagedBase := baseDirectory.value / "lib"
//unmanagedResourceDirectories in Compile += baseDirectory.value / "conf"
packAutoSettings
resolvers += Resolver.sonatypeRepo("snapshots")
resolvers += "cloudera repo" at "https://repository.cloudera.com/content/repositories/releases/"
resolvers += "cloudera repo1" at "https://repository.cloudera.com/artifactory/cloudera-repos/"
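A side note on the first part of the fix: Hadoop's `Configuration.addResource(String)` resolves the name against the classpath and silently skips resources it cannot find, so it is worth verifying that the files placed under src/main/resources really end up in the packaged jar. A minimal sketch in plain Scala (no HBase dependency needed; the object name is illustrative, and the resource names are the ones used in the question):

```scala
// Sanity check for classpath resources: Configuration.addResource("name.xml")
// looks the name up on the classpath and silently ignores it when missing,
// so confirming the files are visible at runtime avoids a silent misconfig.
object ClasspathResourceCheck {
  // True if `name` resolves on the current classpath.
  def onClasspath(name: String): Boolean =
    Thread.currentThread().getContextClassLoader.getResource(name) != null

  def main(args: Array[String]): Unit =
    Seq("hbase-site.xml", "hdfs-site.xml").foreach { name =>
      println(s"$name on classpath: ${onClasspath(name)}")
    }
}
```

Running this inside the assembled application (for example from the actor's preStart) prints whether each file was packaged; a `false` means addResource will quietly do nothing for that file.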
Score 1
The original page content is provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/41116823
