
ipc.RemoteException when writing data from Java to Hadoop on a Tencent Cloud server?

Asked on 2018-04-19 20:30:32
5 answers · 0 following · 3.5K views

I set up a pseudo-distributed Hadoop environment on CentOS 7 on a Tencent Cloud student server. The configuration looks fine: the datanode, namenode and other processes all start normally, the HTTP UI on port 50070 is reachable, and there is plenty of disk space. From my local Windows machine I use the Java API to operate on the server's HDFS. Creating directories on HDFS works, but writing a string to a txt file under the newly created directory fails with the following exception:

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hdfsapi/test/b.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.

When I check on the server with hdfs dfs -ls, I see that the txt file was created, but the string was never written into it. I have searched around a lot and tried things such as turning off HDFS safe mode, but nothing has solved the problem. I hope someone can help.
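For context, the question does not show how the fileSystem handle used below is obtained; here is a minimal sketch of a typical setup. The server address placeholder, port 8020 and user "ma" are assumptions taken from the namenode log at the end of this post, not from the original code.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Before;

// Hypothetical setup (not from the question): how the fileSystem field used in
// the test below is typically initialized from a remote Windows client.
private FileSystem fileSystem;

@Before
public void setUp() throws Exception {
    Configuration conf = new Configuration();
    // "hdfs://<server-ip>:8020" and the user "ma" are assumptions; they only
    // mirror the port and user name that appear in the namenode log further down.
    fileSystem = FileSystem.get(new URI("hdfs://<server-ip>:8020"), conf, "ma");
    // Creating the target directory is a metadata-only RPC to the NameNode;
    // no DataNode is contacted for this call.
    fileSystem.mkdirs(new Path("/hdfsapi/test"));
}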


[Screenshot: the directory is created successfully]
[Screenshot: listing on the server]

The code that writes the string to the txt file is as follows:

@Test
public void create() throws Exception {
    // Create /hdfsapi/test/b.txt on HDFS and write one line into it
    FSDataOutputStream output = fileSystem.create(new Path("/hdfsapi/test/b.txt"));
    output.write("hello hadoop \n".getBytes());
    output.flush();
    output.close();
}

[Screenshot: the txt file is empty]
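The empty file can also be confirmed from the Java API. A small sketch, assuming the same fileSystem field as in create(); this check is an addition for illustration, not part of the original test:

@Test
public void checkFileLength() throws Exception {
    // The NameNode creates the file entry before any block is allocated, so
    // b.txt shows up under hdfs dfs -ls; because addBlock fails with the
    // RemoteException above, no bytes are ever streamed and the length stays 0.
    Path target = new Path("/hdfsapi/test/b.txt");
    System.out.println("exists = " + fileSystem.exists(target));
    System.out.println("length = " + fileSystem.getFileStatus(target).getLen());
}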

Below is the error from the hadoop-ma-namenode-spring.log file in Hadoop's logs folder:

2018-04-19 19:22:42,078 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy
2018-04-19 19:22:42,078 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=1, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2018-04-19 19:22:42,078 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 1 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2018-04-19 19:22:42,080 WARN org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:ma (auth:SIMPLE) cause:java.io.IOException: File /hdfsapi/test/b.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
2018-04-19 19:22:42,080 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 221.4.215.218:30690 Call#3 Retry#0
java.io.IOException: File /hdfsapi/test/b.txt could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1595)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3287)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:677)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:213)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:485)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)