I am having a problem appending to a file that I created myself. I do not have this problem when the file was uploaded to HDFS manually. What is the difference between uploading a file and creating one?
For both creating and appending I use the code below:
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
public class Test {
    public static final String hdfs = "hdfs://192.168.15.62:8020";
    public static final String hpath = "/user/horton/wko/test.log";
    public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", hdfs);
            conf.set("hadoop.job.ugi", "hdfs");
            FileSystem fs = FileSystem.get(conf);
            Path filenamePath = new Path(hpath);
            //FSDataOutputStream out = fs.create(filenamePath);
            FSDataOutputStream out = fs.append(filenamePath);
            out.writeUTF("TEST\n");
            out.close();
        }
}

In the append case I get the following exception:
Exception in thread "main" java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[192.168.15.62:50010], original=[192.168.15.62:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
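The message itself names the client-side setting it is complaining about: dfs.client.block.write.replace-datanode-on-failure.policy. With a single datanode there is never a spare node to swap into a degraded write pipeline, so the replacement attempt always fails. A minimal sketch (not part of the original post, and generally only sensible on clusters of one or two datanodes) of relaxing that policy in the same client Configuration before FileSystem.get is called:

Configuration conf = new Configuration();
conf.set("fs.defaultFS", hdfs);
// With a single datanode there is no replacement candidate when the write
// pipeline degrades, so tell the client not to look for one. (The companion
// property dfs.client.block.write.replace-datanode-on-failure.enable can
// instead be set to false to switch the replacement feature off entirely.)
conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");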
I ran into a similar problem, and adding conf.set("dfs.replication", "1") fixed it for me.
In my case there was only one node in the cluster, and even though dfs.replication was set to 1 in hdfs-site.xml, the client was still using the default value of 3.
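Applied to the Test class from the question, that boils down to adding the replication override to the client Configuration before FileSystem.get is called. A minimal sketch under the same assumptions as above (single-node cluster, the NameNode address and path from the question):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class Test {
    public static final String hdfs = "hdfs://192.168.15.62:8020";
    public static final String hpath = "/user/horton/wko/test.log";

    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", hdfs);
        conf.set("hadoop.job.ugi", "hdfs");
        // Client-side override: the client does not automatically pick up the
        // cluster's hdfs-site.xml, so without this it asks for the default
        // replication factor of 3, which a one-node cluster cannot satisfy.
        conf.set("dfs.replication", "1");

        FileSystem fs = FileSystem.get(conf);
        // try-with-resources closes the stream even if the write fails
        try (FSDataOutputStream out = fs.append(new Path(hpath))) {
            out.writeUTF("TEST\n");
        }
        fs.close();
    }
}

If test.log was originally created while the client still requested three replicas, its existing replication factor may also need to be lowered (for example with hdfs dfs -setrep -w 1 /user/horton/wko/test.log) before an append to it succeeds.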
https://stackoverflow.com/questions/25639616