Zone error when trying to access Bigtable from a Jupyter notebook

Stack Overflow user
Asked on 2017-09-28 13:30:46
Answers: 1 · Views: 325 · Followers: 0 · Votes: 1

I'm trying to run parallel access to Bigtable from a Jupyter notebook running a PySpark kernel. I used http://ec2-54-66-129-240.ap-southeast-2.compute.amazonaws.com/httrack/docs/cloud.google.com/dataproc/examples/cloud-bigtable-example.html as an example, with my specific project/zone/cluster/table names. Authentication is done through service-account credentials broadcast in the Spark context.
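(The broadcast itself is not shown in the post; a minimal sketch of what it might look like, where the key-file path and variable names are assumptions, not taken from the original:)

# Hypothetical: read the service-account key file on the driver and
# broadcast its contents so executors can authenticate against Bigtable
with open("/path/to/service-account-key.json") as f:  # assumed path
    creds_json = f.read()
creds_broadcast = sc.broadcast(creds_json)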

Code language: python
jconf = {"hbase.client.connection.impl": "com.google.cloud.bigtable.hbase1_1.BigtableConnection",
        "google.bigtable.project.id": myProject,
        "google.bigtable.zone.name": myZone,
        "google.bigtable.cluster.name": myCluster,
        "hbase.mapreduce.inputtable": myTable}

keyConv = "org.apache.spark.examples.pythonconverters.ImmutableBytesWritableToStringConverter"
valueConv = "org.apache.spark.examples.pythonconverters.HBaseResultToStringConverter"

hbase_rdd = sc.newAPIHadoopRDD(
    "org.apache.hadoop.hbase.mapreduce.TableInputFormat",
    "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
    "org.apache.hadoop.hbase.client.Result",
    conf=jconf)

hbase_rdd = hbase_rdd.flatMapValues(lambda v: v.split("\n")).mapValues(json.loads)

print("Row count: %s" % hbase_rdd.count())

I get the following error:

Py4JJavaErrorTraceback (most recent call last)
<ipython-input-30-55b05ded0d2b> in <module>()
     21     #keyConverter=keyConv,
     22     #valueConverter=valueConv,
---> 23     conf=jconf)
     24 
     25 hbase_rdd = hbase_rdd.flatMapValues(lambda v: v.split("\n")).mapValues(json.loads)

/usr/lib/spark/python/pyspark/context.pyc in newAPIHadoopRDD(self, inputFormatClass, keyClass, valueClass, keyConverter, valueConverter, conf, batchSize)
    644         jrdd = self._jvm.PythonRDD.newAPIHadoopRDD(self._jsc, inputFormatClass, keyClass,
    645                                                    valueClass, keyConverter, valueConverter,
--> 646                                                    jconf, batchSize)
    647         return RDD(jrdd, self)
    648 

/usr/lib/spark/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1131         answer = self.gateway_client.send_command(command)
   1132         return_value = get_return_value(
-> 1133             answer, self.gateway_client, self.target_id, self.name)
   1134 
   1135         for temp_arg in temp_args:

/usr/lib/spark/python/pyspark/sql/utils.pyc in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/usr/lib/spark/python/lib/py4j-0.10.3-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    317                 raise Py4JJavaError(
    318                     "An error occurred while calling {0}{1}{2}.\n".
--> 319                     format(target_id, ".", name), value)
    320             else:
    321                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD.
: java.io.IOException: Error sampling rowkeys.
    at com.google.cloud.bigtable.hbase.BigtableRegionLocator.getRegions(BigtableRegionLocator.java:79)
    at com.google.cloud.bigtable.hbase.BigtableRegionLocator.getAllRegionLocations(BigtableRegionLocator.java:100)
    at org.apache.hadoop.hbase.util.RegionSizeCalculator.init(RegionSizeCalculator.java:94)
    at org.apache.hadoop.hbase.util.RegionSizeCalculator.<init>(RegionSizeCalculator.java:81)
    at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:256)
    at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:237)
    at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:121)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
    at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1303)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
    at org.apache.spark.rdd.RDD.take(RDD.scala:1298)
    at org.apache.spark.api.python.SerDeUtil$.pairRDDToPython(SerDeUtil.scala:203)
    at org.apache.spark.api.python.PythonRDD$.newAPIHadoopRDD(PythonRDD.scala:582)
    at org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD(PythonRDD.scala)
    at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)
Caused by: io.grpc.StatusRuntimeException: UNKNOWN
    at io.grpc.Status.asRuntimeException(Status.java:430)
    at io.grpc.stub.ClientCalls$BlockingResponseStream.hasNext(ClientCalls.java:369)
    at com.google.bigtable.repackaged.com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:268)
    at com.google.cloud.bigtable.grpc.BigtableDataGrpcClient.sampleRowKeys(BigtableDataGrpcClient.java:203)
    at com.google.cloud.bigtable.hbase.BigtableRegionLocator.getRegions(BigtableRegionLocator.java:73)
    ... 33 more
Caused by: java.lang.IllegalStateException: Channel is closed
    at com.google.cloud.bigtable.grpc.io.ReconnectingChannel$DelayingCall.start(ReconnectingChannel.java:88)
    at com.google.cloud.bigtable.grpc.io.ChannelPool$1.checkedStart(ChannelPool.java:97)
    at io.grpc.ClientInterceptors$CheckedForwardingClientCall.start(ClientInterceptors.java:164)
    at io.grpc.stub.ClientCalls.startCall(ClientCalls.java:193)
    at io.grpc.stub.ClientCalls.asyncUnaryRequestCall(ClientCalls.java:173)
    at io.grpc.stub.ClientCalls.blockingServerStreamingCall(ClientCalls.java:122)
    at com.google.cloud.bigtable.grpc.io.ClientCallService$1.blockingServerStreamingCall(ClientCallService.java:79)
    ... 35 more

However, from a terminal on the machine running the Jupyter notebook, I can access the Bigtable instance on GCloud without any problem. Also, the google.cloud.bigtable and google.cloud.happybase connectors work fine in the same Jupyter notebook (but they don't handle upfront parallelization of the calls to Bigtable).
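(For reference, the kind of direct, non-Spark access that works from the same notebook would look roughly like this; a sketch only, where myInstance is a placeholder for the actual instance id and the row key is made up:)

from google.cloud import bigtable

# Hypothetical direct access via the google.cloud.bigtable client
client = bigtable.Client(project=myProject)
instance = client.instance(myInstance)   # myInstance: placeholder instance id
table = instance.table(myTable)
row = table.read_row(b"some-row-key")    # made-up row key for illustration
print(row)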

Any idea what I'm doing wrong here?

FYI, on the cluster I'm using Spark 2.0.2, Hadoop 2.7.3, Python 2.7.12, Google Bigtable 0.26.0, and com.google.cloud.bigtable:bigtable-hbase-1.1:0.2.2.

Many thanks,

Georgios

Edit: after making the changes suggested by Igor Bernstein, I get a new error:

Py4JJavaErrorTraceback (most recent call last)
<ipython-input-5-4f0d8b1fb126> in <module>()
     23     #keyConverter=keyConv,
     24     #valueConverter=valueConv,
---> 25     conf=jconf)
     26 
     27 hbase_rdd = hbase_rdd.flatMapValues(lambda v: v.split("\n")).mapValues(json.loads)

/usr/lib/spark/python/pyspark/context.py in newAPIHadoopRDD(self, inputFormatClass, keyClass, valueClass, keyConverter, valueConverter, conf, batchSize)
    644         jrdd = self._jvm.PythonRDD.newAPIHadoopRDD(self._jsc, inputFormatClass, keyClass,
    645                                                    valueClass, keyConverter, valueConverter,
--> 646                                                    jconf, batchSize)
    647         return RDD(jrdd, self)
    648 

/usr/lib/spark/python/lib/py4j-0.10.3-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1131         answer = self.gateway_client.send_command(command)
   1132         return_value = get_return_value(
-> 1133             answer, self.gateway_client, self.target_id, self.name)
   1134 
   1135         for temp_arg in temp_args:

/usr/lib/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/usr/lib/spark/python/lib/py4j-0.10.3-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    317                 raise Py4JJavaError(
    318                     "An error occurred while calling {0}{1}{2}.\n".
--> 319                     format(target_id, ".", name), value)
    320             else:
    321                 raise Py4JError(

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD.
: java.io.IOException: Cannot create a record reader because of a previous error. Please look at the previous logs lines from the task's full log for more details.
    at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:252)
    at org.apache.hadoop.hbase.mapreduce.TableInputFormat.getSplits(TableInputFormat.java:237)
    at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:121)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
    at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1303)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
    at org.apache.spark.rdd.RDD.take(RDD.scala:1298)
    at org.apache.spark.api.python.SerDeUtil$.pairRDDToPython(SerDeUtil.scala:203)
    at org.apache.spark.api.python.PythonRDD$.newAPIHadoopRDD(PythonRDD.scala:582)
    at org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD(PythonRDD.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:280)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.IllegalStateException: The input format instance has not been properly initialized. Ensure you call initializeTable either in your constructor or initialize method
    at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getTable(TableInputFormatBase.java:585)
    at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase.getSplits(TableInputFormatBase.java:247)
    ... 30 more

1 Answer

Stack Overflow user

Accepted answer

Answered on 2017-09-28 17:37:43

What version of bigtable-hbase are you using? Can you try the latest version, bigtable-hbase-1.x-hadoop:1.0.0-pre3? Please also update your configuration as follows (a sketch combining these changes appears after the list):

  • "hbase.client.connection.impl": "com.google.cloud.bigtable.hbase1_x.BigtableConnection"
  • 删除"google.bigtable.zone.name""google.bigtable.cluster.name"
  • 添加"google.bigtable.instance.id":"“
  • 确保netty-tc本机-boringssl静态:1.1.33.Fork 26在类路径上
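Applied to the question's snippet, the updated configuration would look roughly like this (a sketch following the bullets above; myInstance is a placeholder for the actual instance id):

# Updated per the answer: hbase1_x connection class, and an instance id
# in place of the zone/cluster properties
jconf = {"hbase.client.connection.impl": "com.google.cloud.bigtable.hbase1_x.BigtableConnection",
         "google.bigtable.project.id": myProject,
         "google.bigtable.instance.id": myInstance,  # placeholder
         "hbase.mapreduce.inputtable": myTable}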

Also, I'm having a hard time finding the original source of http://ec2-54-66-129-240.ap-southeast-2.compute.amazonaws.com/httrack/docs/cloud.google.com/dataproc/examples/cloud-bigtable-example.html. Where does it come from?

Votes: 0
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/46470444
