java.net.ConnectException: Call From slaver1/192.168.19.131 to slaver1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

1. Although this is not a serious error, it is still worth posting. The error below appeared when I ran the test example run-example streaming.NetworkWordCount localhost 9999, and my first instinct was that it was caused by the cluster not having been started:

18/04/23 03:21:58 ERROR SparkContext: Error initializing SparkContext.
java.net.ConnectException: Call From slaver1/192.168.19.131 to slaver1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
    at org.apache.hadoop.ipc.Client.call(Client.java:1414)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:699)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1762)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
    at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:100)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:541)
    at org.apache.spark.streaming.StreamingContext$.createNewSparkContext(StreamingContext.scala:864)
    at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:81)
    at org.apache.spark.examples.streaming.NetworkWordCount$.main(NetworkWordCount.scala:47)
    at org.apache.spark.examples.streaming.NetworkWordCount.main(NetworkWordCount.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
    at org.apache.hadoop.ipc.Client.call(Client.java:1381)
    ... 31 more
18/04/23 03:21:59 INFO SparkUI: Stopped Spark web UI at http://192.168.19.131:4040
18/04/23 03:21:59 INFO DAGScheduler: Stopping DAGScheduler
18/04/23 03:21:59 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
18/04/23 03:21:59 INFO MemoryStore: MemoryStore cleared
18/04/23 03:21:59 INFO BlockManager: BlockManager stopped
18/04/23 03:21:59 INFO BlockManagerMaster: BlockManagerMaster stopped
18/04/23 03:21:59 INFO SparkContext: Successfully stopped SparkContext
Exception in thread "main" java.net.ConnectException: Call From slaver1/192.168.19.131 to slaver1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
    at org.apache.hadoop.ipc.Client.call(Client.java:1414)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:699)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1762)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1124)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1120)
    at org.apache.spark.scheduler.EventLoggingListener.start(EventLoggingListener.scala:100)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:541)
    at org.apache.spark.streaming.StreamingContext$.createNewSparkContext(StreamingContext.scala:864)
    at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:81)
    at org.apache.spark.examples.streaming.NetworkWordCount$.main(NetworkWordCount.scala:47)
    at org.apache.spark.examples.streaming.NetworkWordCount.main(NetworkWordCount.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:604)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:699)
    at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:367)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1462)
    at org.apache.hadoop.ipc.Client.call(Client.java:1381)
    ... 31 more
18/04/23 03:21:59 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
18/04/23 03:21:59 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
18/04/23 03:21:59 INFO ShutdownHookManager: Shutdown hook called
18/04/23 03:21:59 INFO ShutdownHookManager: Deleting directory /tmp/spark-7ef5c2da-0b57-4553-a9f9-6e215885c7ba
18/04/23 03:21:59 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
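
The key line is "Call From slaver1/192.168.19.131 to slaver1:9000 failed on connection exception": nothing was listening on slaver1:9000, which in a setup like this is typically the HDFS NameNode RPC address (fs.defaultFS in core-site.xml), and the call fails inside EventLoggingListener.start, i.e. while Spark tries to write its event log to HDFS. Before resubmitting, a quick sanity check (a sketch, assuming the hadoop user and the host/port shown above) can confirm which daemons are actually up:

[hadoop@slaver1 ~]$ jps                          # running JVM daemons; look for NameNode, Master, Worker
[hadoop@slaver1 ~]$ netstat -tlnp | grep 9000    # is anything listening on the NameNode port?
[hadoop@slaver1 ~]$ telnet slaver1 9000          # "Connection refused" here reproduces the error above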

2. The commands to start Spark and then rerun the example:

[hadoop@slaver1 spark-1.5.1-bin-hadoop2.4]$ sbin/start-all.sh
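
Note that Spark's sbin/start-all.sh only brings up the Spark Master and Workers, while the stack trace above failed talking to HDFS at slaver1:9000, so the HDFS daemons must be running as well. A minimal sketch, assuming a matching Hadoop 2.4 installation for the same user (the hadoop-2.4.1 directory name is illustrative):

[hadoop@slaver1 hadoop-2.4.1]$ sbin/start-dfs.sh    # starts the NameNode/DataNodes serving slaver1:9000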

[hadoop@slaver2 ~]$ run-example streaming.NetworkWordCount localhost 9999
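
NetworkWordCount reads text from a TCP socket, so a data server must be listening on port 9999 before the example connects, or the receiver will hit its own connection-refused errors. Per the Spark streaming examples, run netcat in a separate terminal and type words into it:

[hadoop@slaver2 ~]$ nc -lk 9999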
