
Unable to write data to the HDFS "datanode" - node added to the excluded list

Stack Overflow user
Asked on 2019-06-04 02:16:52
Answers: 1 · Views: 203 · Followers: 0 · Votes: 0

I am running the namenode and the datanode in the same JVM. When I try to write data, I get the following exception:

org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy$NotEnoughReplicasException:
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:836)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:724)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:631)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalStorage(BlockPlacementPolicyDefault.java:591)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:490)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:421)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:297)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:148)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:164)
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2127)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2771)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:876)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:567)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
    at java.base/java.security.AccessController.doPrivileged(Native Method)
    at java.base/javax.security.auth.Subject.doAs(Subject.java:423)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)

import java.io.File;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.server.datanode.DataNode;
import org.apache.hadoop.hdfs.server.namenode.NameNode;
import org.apache.log4j.BasicConfigurator;

        // Local directories backing the namenode and datanode storage
        final File file = new File("C:\\ManageEngine\\test\\data\\namenode");
        final File file1 = new File("C:\\ManageEngine\\test\\data\\datanode1");
        BasicConfigurator.configure();

        // Start the namenode in-process
        final HdfsConfiguration nameNodeConfiguration = new HdfsConfiguration();
        FileSystem.setDefaultUri(nameNodeConfiguration, "hdfs://localhost:5555");
        nameNodeConfiguration.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY, file.toURI().toString());
        nameNodeConfiguration.set(DFSConfigKeys.DFS_REPLICATION_KEY, "1");
        final NameNode nameNode = new NameNode(nameNodeConfiguration);

        // Start a single datanode in the same JVM
        final HdfsConfiguration dataNodeConfiguration1 = new HdfsConfiguration();
        dataNodeConfiguration1.set(DFSConfigKeys.DFS_DATANODE_DATA_DIR_KEY, file1.toURI().toString());
        dataNodeConfiguration1.set(DFSConfigKeys.DFS_DATANODE_ADDRESS_KEY, "localhost:5556");
        dataNodeConfiguration1.set(DFSConfigKeys.DFS_REPLICATION_KEY, "1");
        FileSystem.setDefaultUri(dataNodeConfiguration1, "hdfs://localhost:5555");
        final DataNode dataNode1 = DataNode.instantiateDataNode(new String[]{}, dataNodeConfiguration1);

        final FileSystem fs = FileSystem.get(dataNodeConfiguration1);

        // fileName and fileContent are defined elsewhere in the calling code
        Path hdfswritepath = new Path(fileName);
        if (!fs.exists(hdfswritepath)) {
            fs.create(hdfswritepath).close(); // close the stream opened by create
            System.out.println("Path " + hdfswritepath + " created.");
        }
        System.out.println("Begin Write file into hdfs");

        // Classical output stream usage
        FSDataOutputStream outputStream = fs.create(hdfswritepath);
        outputStream.writeBytes(fileContent);
        outputStream.close();
        System.out.println("End Write file into hdfs");

[Image: request data]


1 Answer

Stack Overflow user

Posted on 2019-06-04 03:26:19

The number of replicas cannot be greater than the number of datanodes.

If you want to run on a single node, set dfs.replication to 1 in hdfs-site.xml.
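As an illustration only (not part of the original answer), here is a minimal Java sketch of applying that setting from client code, assuming the same hdfs://localhost:5555 URI used in the question. dfs.replication is the standard HDFS key, and FileSystem#create(Path, short) is a public overload that pins the replication factor for a single file; the /tmp/example.txt path and class name are hypothetical:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.HdfsConfiguration;

    public class SingleNodeWriteSketch {
        public static void main(String[] args) throws Exception {
            // Programmatic equivalent of dfs.replication=1 in hdfs-site.xml;
            // the client configuration used for the write must carry it too.
            Configuration conf = new HdfsConfiguration();
            conf.set("fs.defaultFS", "hdfs://localhost:5555"); // URI from the question
            conf.set("dfs.replication", "1");

            FileSystem fs = FileSystem.get(conf);

            // Alternatively, request replication 1 explicitly for this file
            Path path = new Path("/tmp/example.txt"); // hypothetical test path
            try (FSDataOutputStream out = fs.create(path, (short) 1)) {
                out.writeBytes("hello hdfs");
            }
        }
    }

Setting the key on the configuration affects every file created through that client, while the create overload applies to one file; in both cases the requested replication must not exceed the number of live datanodes.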

Votes: 1
Original page content provided by Stack Overflow.
Source: https://stackoverflow.com/questions/56432568