
DataNode Won't Start


Hello everyone, good to see you again. I'm your friend, 全栈君.

How to Fix NameNode and DataNode Startup Failures When Installing Hadoop (Pseudo-Distributed Mode)


After running ./start-all.sh, I saw no error messages at all, yet jps showed the following:

[hadoop@localhost sbin]$ ./start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/software/hadoop_install/hadoop/logs/hadoop-hadoop-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /usr/software/hadoop_install/hadoop/logs/hadoop-hadoop-datanode-localhost.localdomain.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/software/hadoop_install/hadoop/logs/hadoop-hadoop-secondarynamenode-localhost.localdomain.out
starting yarn daemons
resourcemanager running as process 21995. Stop it first.
localhost: nodemanager running as process 22133. Stop it first.
[hadoop@localhost sbin]$ jps
22133 NodeManager
23848 Jps
21995 ResourceManager

There's clearly no DataNode or NameNode. I searched online and tried plenty of fixes, but none of them worked.

Following the advice I found online, I went to look at the data/tmp/data folder, only to find that I didn't have that directory at all. I was completely baffled.
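Rather than guessing at paths from someone else's tutorial, you can ask Hadoop directly where the DataNode is configured to keep its blocks. A minimal sketch (hdfs getconf ships with Hadoop 2.x; the output simply echoes whatever dfs.datanode.data.dir is set to in your hdfs-site.xml, which in my case is the path you will see in the log below):

# Print the configured DataNode storage directory
[hadoop@localhost hadoop]$ bin/hdfs getconf -confKey dfs.datanode.data.dir
/usr/software/hadoop_install/hadoop/data/dfs/data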

My only option left was to dig into $HADOOP_HOME/logs and read the DataNode and NameNode log files.

I started with the DataNode log.

It's quite long, so jump straight to the end; the WARN and ERROR lines tell the whole story.
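If you don't feel like scrolling, grep gets you to the interesting lines faster. A small sketch, assuming the daemon's main log file carries the same name as the .out file from the startup output, just with a .log suffix (Hadoop's default naming):

cd /usr/software/hadoop_install/hadoop/logs
# Show the last 20 WARN/ERROR lines, with line numbers
grep -nE "WARN|ERROR" hadoop-hadoop-datanode-localhost.localdomain.log | tail -n 20

Either way, here is the tail of the log: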

2019-11-02 17:35:59,401 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2019-11-02 17:36:00,195 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid dfs.datanode.data.dir /usr/software/hadoop_install/hadoop/data/dfs/data :
java.io.FileNotFoundException: File file:/usr/software/hadoop_install/hadoop/data/dfs/data does not exist
    at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:635)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:861)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:625)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442)
    at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:233)
    at org.apache.hadoop.util.DiskChecker.checkDirInternal(DiskChecker.java:141)
    at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:116)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2580)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2622)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2604)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2544)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2729)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2753)
2019-11-02 17:36:00,207 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/usr/software/hadoop_install/hadoop/data/dfs/data"
    at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2631)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2604)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2544)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2729)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2753)
2019-11-02 17:36:00,208 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2019-11-02 17:36:00,216 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at localhost/127.0.0.1
************************************************************/

Then it suddenly dawned on me: it had to be a permissions problem keeping the DataNode from seeing data. I went straight back to the Hadoop installation directory to check the file permissions:

[hadoop@localhost hadoop]$ ls -l
total 128
drwxr-xr-x. 2 hadoop hadoop   194 Nov  2 17:50 bin
drwxr-xr-x. 2 root   root       6 Nov  2 16:58 data
drwxr-xr-x. 3 hadoop hadoop    20 Nov  2 16:57 etc
drwxr-xr-x. 2 hadoop hadoop   106 Sep 10  2018 include
drwxr-xr-x. 3 hadoop hadoop    20 Sep 10  2018 lib
drwxr-xr-x. 2 hadoop hadoop   239 Sep 10  2018 libexec
-rw-r--r--. 1 hadoop hadoop 99253 Sep 10  2018 LICENSE.txt
drwxrwxr-x. 3 hadoop hadoop  4096 Nov  2 17:36 logs
-rw-r--r--. 1 hadoop hadoop 15915 Sep 10  2018 NOTICE.txt
-rw-r--r--. 1 hadoop hadoop  1366 Sep 10  2018 README.txt
drwxr-xr-x. 2 hadoop hadoop  4096 Sep 10  2018 sbin
drwxr-xr-x. 4 hadoop hadoop    31 Sep 10  2018 share
drwxr-xr-x. 2 root   root      27 Nov  2 16:23 test

Sure enough, as the listing shows, the data directory is owned by root, so the hadoop user can't touch it at all. I must have carelessly used the root account when I created it during setup.
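To verify this beyond eyeballing ls, you can check the ownership and try a write as the hadoop user. An illustrative check (stat and touch are standard coreutils; the outputs below are what a root-owned 755 directory would produce):

# Owner:group and permission bits of the storage root
[hadoop@localhost hadoop]$ stat -c '%U:%G %a' data
root:root 755
# A write attempt as the hadoop user fails, which is exactly the DataNode's problem
[hadoop@localhost hadoop]$ touch data/.write_test
touch: cannot touch 'data/.write_test': Permission denied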

From here it's easy; two commands are all it takes:

# Change the directory's owner; hadoop is my username, data is the directory name
sudo chown -R hadoop data

# Change the directory's group
sudo chgrp -R hadoop data
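As a side note, chown can set the owner and group in one pass, so the two commands above collapse into one:

# Set owner and group together
sudo chown -R hadoop:hadoop data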

After the change, list the directory again to confirm; you can see the ownership was updated successfully:

[hadoop@localhost hadoop]$ ls -l
total 128
drwxr-xr-x. 2 hadoop hadoop   194 Nov  2 17:50 bin
drwxr-xr-x. 2 hadoop hadoop     6 Nov  2 16:58 data
drwxr-xr-x. 3 hadoop hadoop    20 Nov  2 16:57 etc
drwxr-xr-x. 2 hadoop hadoop   106 Sep 10  2018 include
drwxr-xr-x. 3 hadoop hadoop    20 Sep 10  2018 lib
drwxr-xr-x. 2 hadoop hadoop   239 Sep 10  2018 libexec
-rw-r--r--. 1 hadoop hadoop 99253 Sep 10  2018 LICENSE.txt
drwxrwxr-x. 3 hadoop hadoop  4096 Nov  2 17:36 logs
-rw-r--r--. 1 hadoop hadoop 15915 Sep 10  2018 NOTICE.txt
-rw-r--r--. 1 hadoop hadoop  1366 Sep 10  2018 README.txt
drwxr-xr-x. 2 hadoop hadoop  4096 Sep 10  2018 sbin
drwxr-xr-x. 4 hadoop hadoop    31 Sep 10  2018 share
drwxr-xr-x. 2 root   root      27 Nov  2 16:23 test

Then go back and stop all the daemons launched earlier:

[hadoop@localhost sbin]$ ./stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [localhost]
localhost: no namenode to stop
localhost: no datanode to stop
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: no secondarynamenode to stop
stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
localhost: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop

Finally, start all the daemons again:

[hadoop@localhost sbin]$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/software/hadoop_install/hadoop/logs/hadoop-hadoop-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /usr/software/hadoop_install/hadoop/logs/hadoop-hadoop-datanode-localhost.localdomain.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/software/hadoop_install/hadoop/logs/hadoop-hadoop-secondarynamenode-localhost.localdomain.out
starting yarn daemons
starting resourcemanager, logging to /usr/software/hadoop_install/hadoop/logs/yarn-hadoop-resourcemanager-localhost.localdomain.out
localhost: starting nodemanager, logging to /usr/software/hadoop_install/hadoop/logs/yarn-hadoop-nodemanager-localhost.localdomain.out

Run jps to check what came up:

[hadoop@localhost sbin]$ jps
36534 DataNode
36343 NameNode
37097 NodeManager
36762 SecondaryNameNode
36954 ResourceManager
37422 Jps

As you can see, the DataNode and NameNode have both started successfully this time.
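For a sanity check that goes beyond jps, you can ask HDFS itself whether the DataNode registered with the NameNode. A sketch, assuming Hadoop's bin directory is on your PATH:

# A healthy pseudo-distributed setup should report one live DataNode
hdfs dfsadmin -report | head -n 25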

I was thrilled to finally get it working, haha. If something doesn't match on your end, or you hit a different problem, leave a comment and ask; I've installed this thing quite a few times recently.

Published by 全栈程序员栈长. Please credit the source when republishing: https://javaforall.cn/129408.html
