Basically, when I start Hadoop with the ./start-all.sh command, I run into some problems.
I have already looked at "Hadoop cluster setup - java.net.ConnectException: Connection refused"
and "There are 0 datanode(s) running and no node(s) are excluded in this operation".
When I run ./start-all.sh, I get:
WARNING: Attempting to start all Apache Hadoop daemons as snagaraj in 10
seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
pdsh@greg: localhost: ssh exited with exit code 1
Starting datanodes
Starting secondary namenodes [greg.bcmdc.bcm.edu]
Starting resourcemanager
Starting nodemanagers

When I run a Python script that uses Hadoop/HDFS, I get the following error:
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
/user/data/...._COPYING_ could only be written to 0 of the
1 minReplication nodes. There are 0 datanode(s) running and 0 node(s)
are excluded in this operation.

I tried reformatting the namenode with hdfs namenode -format, but that didn't help.
The configuration in my XML files seems correct, and my JAVA_HOME path is correct as well. I'm happy to provide more information as needed.
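One common cause of "There are 0 datanode(s) running" after running hdfs namenode -format is a clusterID mismatch: reformatting gives the namenode a new clusterID, but the datanodes' storage directories still carry the old one, so they refuse to register. A hedged diagnostic sketch (the paths below are assumptions; the real locations come from dfs.namenode.name.dir and dfs.datanode.data.dir in hdfs-site.xml, and default under /tmp/hadoop-<user> if unset):

```shell
# Compare the clusterID recorded by the namenode and the datanode.
# Paths are assumptions -- check hdfs-site.xml for the actual dirs.
grep clusterID /tmp/hadoop-$(whoami)/dfs/name/current/VERSION
grep clusterID /tmp/hadoop-$(whoami)/dfs/data/current/VERSION

# If the two IDs differ, one fix is to stop HDFS, remove the datanode
# storage directory, and restart so the datanode re-registers under
# the new clusterID. WARNING: this erases any data stored in HDFS.
# stop-dfs.sh
# rm -rf /tmp/hadoop-$(whoami)/dfs/data
# start-dfs.sh
```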
Posted on 2019-08-09 21:45:01
On the master node (the namenode server), run: ssh localhost.
If it can connect, your problem above will be resolved.
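The "ssh exited with exit code 1" line in the startup output usually means the Hadoop start scripts could not SSH into localhost non-interactively. If ssh localhost prompts for a password or fails, a common fix is to set up passwordless SSH for the user that starts Hadoop; a minimal sketch, assuming the current user and no existing key:

```shell
# Generate an RSA key with an empty passphrase (skip if ~/.ssh/id_rsa exists)
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# Authorize the key for logins to this machine
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# This should now log in without prompting for a password
ssh localhost
```

After this succeeds, re-run ./start-all.sh and check with jps that the NameNode and DataNode processes are actually running.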
https://stackoverflow.com/questions/57417885