
Hadoop Live Nodes Is 0, No DataNode Process

Original · By 迷乐 · Last modified 2020-06-01 11:32:14

Problem:

I set up a Hadoop cluster in VirtualBox; start-all.sh completed without reporting any errors:

Code language: shell
hadoop1@master:~$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
The authenticity of host 'master (192.168.56.100)' can't be established.
ECDSA key fingerprint is SHA256:5v0Pv8H46CIUWEJBviEE3+hdPhc7y4jMdy6Sotf6nSQ.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
master: Warning: Permanently added 'master,192.168.56.100' (ECDSA) to the list of known hosts.
master: starting namenode, logging to /usr/local/hadoop-2.10.0/logs/hadoop-hadoop1-namenode-master.out
datanode1: starting datanode, logging to /usr/local/hadoop-2.10.0/logs/hadoop-hadoop1-datanode-datanode1.out
datanode2: starting datanode, logging to /usr/local/hadoop-2.10.0/logs/hadoop-hadoop1-datanode-datanode2.out
datanode3: starting datanode, logging to /usr/local/hadoop-2.10.0/logs/hadoop-hadoop1-datanode-datanode3.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.10.0/logs/hadoop-hadoop1-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.10.0/logs/yarn-hadoop1-resourcemanager-master.out
datanode1: starting nodemanager, logging to /usr/local/hadoop-2.10.0/logs/yarn-hadoop1-nodemanager-datanode1.out
datanode2: starting nodemanager, logging to /usr/local/hadoop-2.10.0/logs/yarn-hadoop1-nodemanager-datanode2.out
datanode3: starting nodemanager, logging to /usr/local/hadoop-2.10.0/logs/yarn-hadoop1-nodemanager-datanode3.out
hadoop1@master:~$ jps
3252 SecondaryNameNode
3014 NameNode
3374 ResourceManager
3631 Jps

The YARN UI at http://master:8088 shows 3 nodes, but the HDFS UI at http://master:50070 shows Live Nodes as 0.
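This can also be cross-checked from the command line. A minimal sketch, assuming it is run as the hadoop1 user that started HDFS (dfsadmin is an admin command, and hadoop1 is the HDFS superuser here):

Code language: shell
# Ask the NameNode how many DataNodes it currently considers live;
# with the symptom above it would report 0 even though the hosts are up.
hdfs dfsadmin -report | grep -i 'live datanodes'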

Checking the processes on a worker node shows that there is no DataNode process:

Code language: shell
hadoop1@master:~$ ssh datanode1
hadoop1@datanode1:~$ jps
1915 Jps
1740 NodeManager
hadoop1@datanode1:~$ exit

Solution:

Open the log file /usr/local/hadoop-2.10.0/logs/hadoop-hadoop1-datanode-datanode3.log on the worker node and search for WARN messages; this turns up:

Code language: text
2020-05-22 12:12:17,690 WARN org.apache.hadoop.hdfs.server.datanode.checker.StorageLocationChecker: Exception checking StorageLocation [DISK]file:/usr/loacal/hadoop-2.10.0/hadoop_data/hdfs/datanode java.io.FileNotFoundException: File file:/usr/loacal/hadoop-2.10.0/hadoop_data/hdfs/datanode does not exist
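The warning above can be located quickly by grepping the DataNode log on the worker node, for example:

Code language: shell
# Show the most recent WARN/ERROR lines from the DataNode log on datanode3
# (the log path is the one used by this installation).
grep -nE 'WARN|ERROR' /usr/local/hadoop-2.10.0/logs/hadoop-hadoop1-datanode-datanode3.log | tail -n 20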

So the problem was a wrong path: the directory configured in hdfs-site.xml does not match the actual directory /usr/local/hadoop-2.10.0/hadoop_data/hdfs/datanode ('local' was mistyped as 'loacal'):

Code language: xml
<property>
        <name>dfs.datanode.data.dir</name>
        <value> file:/usr/loacal/hadoop-2.10.0/hadoop_data/hdfs/datanode</value>
</property>
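Before touching the config it is worth confirming that the intended directory really exists on every node. A sketch, assuming passwordless ssh from the master as hadoop1:

Code language: shell
# Verify the correctly spelled data directory exists on each DataNode host;
# the mistyped /usr/loacal/... path from hdfs-site.xml should not exist.
for host in datanode1 datanode2 datanode3; do
  echo "== $host =="
  ssh "$host" 'ls -ld /usr/local/hadoop-2.10.0/hadoop_data/hdfs/datanode'
done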

Correct the hdfs-site.xml file on each node:

Code language: xml
<property>
        <name>dfs.datanode.data.dir</name>
        <value> file:/usr/local/hadoop-2.10.0/hadoop_data/hdfs/datanode</value>
</property>

and save it on every node:

Code language: shell
hadoop1@datanode3:~$ sudo nano /usr/local/hadoop-2.10.0/etc/hadoop/hdfs-site.xml 
hadoop1@datanode3:~$ ssh datanode1
hadoop1@datanode1:~$ sudo nano /usr/local/hadoop-2.10.0/etc/hadoop/hdfs-site.xml 
hadoop1@datanode1:~$ exit
hadoop1@datanode3:~$ ssh datanode2
hadoop1@datanode2:~$ sudo nano /usr/local/hadoop-2.10.0/etc/hadoop/hdfs-site.xml
hadoop1@datanode2:~$ exit
hadoop1@master:~$ sudo nano /usr/local/hadoop-2.10.0/etc/hadoop/hdfs-site.xml
hadoop1@master:~$ exit
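Instead of editing each node by hand, the same one-character fix could be pushed out from the master in a single loop. This is only a sketch; it assumes passwordless ssh for hadoop1 and that sudo on each node does not prompt for a password (otherwise add -t to ssh):

Code language: shell
# Replace the mistyped 'loacal' with 'local' in hdfs-site.xml on every node.
for host in master datanode1 datanode2 datanode3; do
  ssh "$host" "sudo sed -i 's#/usr/loacal/#/usr/local/#g' /usr/local/hadoop-2.10.0/etc/hadoop/hdfs-site.xml"
done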

Run stop-all.sh and then start-all.sh in the master terminal; after that, http://master:50070 shows the DataNode status correctly.
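For completeness, the restart and a quick verification from the master terminal might look like this (a sketch; it uses the same hadoop1 user that started the cluster):

Code language: shell
# Restart all Hadoop daemons after fixing the configuration.
stop-all.sh
start-all.sh

# The DataNode process should now show up on the worker nodes...
ssh datanode1 jps

# ...and the NameNode should report three live DataNodes.
hdfs dfsadmin -report | grep -i 'live datanodes'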

Summary:

  1. When an unexplained problem shows up, check the log files, search for WARN and ERROR messages, and fix the problem based on what they say.
  2. Logging into each VM over ssh and then editing with nano is genuinely convenient.

Original statement: This article was published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission.

For infringement concerns, please contact cloudcommunity@tencent.com for removal.
