ipc.RemoteException when writing data from Java to Hadoop on a Tencent Cloud server?

  • Answers (5)
  • Follows (0)
  • Views (741)

I set up a pseudo-distributed Hadoop environment on CentOS 7 on a Tencent Cloud student instance. The configuration looks correct: the datanode and namenode processes start normally, the HTTP UI on port 50070 is reachable, and there is enough disk space. From my local Windows machine I use the Java API to operate on the server's HDFS. Creating directories on HDFS works fine, but writing a string to a txt file inside the created directory throws

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hdfsapi/test/b.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

When I check on the server with hdfs dfs -ls, the txt file has been created, but the string was never written into it. I have searched a lot online and tried things such as leaving HDFS safe mode, but nothing has solved the problem. Hoping someone can explain what is going on.


(Screenshot: the directory is created successfully)
(Screenshot: listing on the server)

The code that writes the string to the txt file is as follows:

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.junit.Test;

@Test
public void create() throws Exception {
    // fileSystem is an org.apache.hadoop.fs.FileSystem field initialized before the test runs
    FSDataOutputStream output = fileSystem.create(new Path("/hdfsapi/test/b.txt"));
    output.write("hello hadoop \n".getBytes());
    output.flush();
    output.close();
}
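The create() test above assumes that fileSystem has already been opened against the remote cluster. A minimal sketch of how it might be initialized from the Windows client is shown below; the placeholder NameNode address and the @Before/@After structure are assumptions, while the user name "ma" comes from the namenode log further down (auth:SIMPLE as ma):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.junit.After;
import org.junit.Before;

private FileSystem fileSystem;

@Before
public void setUp() throws Exception {
    Configuration configuration = new Configuration();
    // Optional on some cloud setups reached via a public IP (not from the original post):
    // configuration.set("dfs.client.use.datanode.hostname", "true");
    // Replace <server-ip> with the Tencent Cloud server's address; 8020 matches the RPC port in the log below.
    fileSystem = FileSystem.get(new URI("hdfs://<server-ip>:8020"), configuration, "ma");
}

@After
public void tearDown() throws Exception {
    fileSystem.close();
}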

The txt file is empty.

Below is the error from the hadoop-ma-namenode-spring.log file in Hadoop's logs folder:

2018-04-19 19:22:42,078 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2018-04-19 19:22:42,078 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=1, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2018-04-19 19:22:42,078 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2018-04-19 19:22:42,080 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:ma (auth:SIMPLE) cause:java.io.IOException: File /hdfsapi/test/b.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
2018-04-19 19:22:42,080 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 221.4.215.218:30690 Call#3 Retry#0
java.io.IOException: File /hdfsapi/test/b.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1595)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3287)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:677)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:213)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:485)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
Asked by 用户1549228
以往V answered:

This is probably because hadoop namenode -format was run more than once, so the namespaceID recorded by the NameNode and DataNode no longer match. Solution:

1. Stop the cluster (from the sbin directory): $ ./stop-all.sh

2. Delete all data under the data directory configured for HDFS (i.e. the directory that hadoop.tmp.dir points to in core-site.xml): $ rm -rf /home/hadoop/hdpdata/*

3. Reformat the namenode (from the bin directory under the hadoop directory): $ ./hadoop namenode -format

4. Restart the hadoop cluster (from the sbin directory under the hadoop directory): $ ./start-all.sh (a small Java sketch for checking the DataNode afterwards follows below)
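After step 4, it may help to confirm from the Java client that the DataNode has re-registered and is reporting usable capacity before retrying the write. A minimal sketch, assuming the same fileSystem handle as in the question (the helper method name and the printed fields are illustrative, not from the original answer):

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

// Hypothetical helper: print what the NameNode currently knows about storage.
public static void printClusterStatus(FileSystem fileSystem) throws Exception {
    // Aggregate capacity/usage as reported by the NameNode.
    FsStatus status = fileSystem.getStatus();
    System.out.println("capacity=" + status.getCapacity()
            + " used=" + status.getUsed()
            + " remaining=" + status.getRemaining());

    // Per-DataNode view; a healthy pseudo-distributed setup should list one live node
    // with non-zero remaining space.
    if (fileSystem instanceof DistributedFileSystem) {
        for (DatanodeInfo dn : ((DistributedFileSystem) fileSystem).getDataNodeStats()) {
            System.out.println(dn.getHostName() + " remaining=" + dn.getRemaining());
        }
    }
}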

用户2149916 answered:

I ran into the same problem. OP, did you ever solve it?

用户1511423 answered:
用户1732797 answered:

1. Turn off the firewall

2. Restart the hadoop services

3. When uploading again, use a different file name, e.g. b.txt

卖米的老白 answered:

Stop all the services completely, then start them again.

