CentOS 7, Docker 20, JDK 1.8, Hadoop 3.2
A hands-on, step-by-step walkthrough that anyone can follow.
You can use either a virtual machine or a server.
yum update # update packages (answer y when prompted)
yum install -y yum-utils device-mapper-persistent-data lvm2 # install dependencies
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce # install Docker
systemctl start docker # start Docker
docker -v # check the Docker version
docker pull centos # pull the base CentOS image
vim Dockerfile
# paste in the following content
FROM centos
MAINTAINER mwf
RUN yum install -y openssh-server sudo
RUN sed -i 's/UsePAM yes/UsePAM no/g' /etc/ssh/sshd_config
RUN yum install -y openssh-clients
RUN echo "root:123456" | chpasswd #ssh密码可自定义,这里就写123456了
RUN echo "root ALL=(ALL) ALL" >> /etc/sudoers
# generate host keys with an empty passphrase, so the non-interactive build does not prompt
RUN ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key -N ''
RUN ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ''
RUN mkdir /var/run/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
# then press Esc and type :wq to save and quit
docker build -t="centos7-ssh" . #镜像名可自定义
docker images #查看镜像
至此一个带ssh服务的centos镜像就安好了。
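Optionally, smoke-test the image before moving on. A minimal sketch, assuming an SSH client on the host (the container name ssh-test and host port 2222 are arbitrary choices here):
docker run -d --name ssh-test -p 2222:22 centos7-ssh
ssh -p 2222 root@localhost # password: 123456
docker rm -f ssh-test # clean up the test container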
Next, prepare the JDK and Hadoop tarballs (jdk-8u281-linux-x64.tar.gz and hadoop-3.2.2.tar.gz) in the same directory as the Dockerfile.
Baidu Netdisk: https://pan.baidu.com/s/1S9Sqwl3UN9cq2-dSdBGKRQ extraction code: ca8s
mv Dockerfile Dockerfile.centos_ssh # keep the previous Dockerfile under a different name
vim Dockerfile
# paste in the following content
FROM centos7-ssh
ADD jdk-8u281-linux-x64.tar.gz /usr/local/
RUN mv /usr/local/jdk1.8.0_281 /usr/local/jdk1.8
ENV JAVA_HOME /usr/local/jdk1.8
ENV PATH $JAVA_HOME/bin:$PATH
ADD hadoop-3.2.2.tar.gz /usr/local
RUN mv /usr/local/hadoop-3.2.2 /usr/local/hadoop
ENV HADOOP_HOME /usr/local/hadoop
ENV PATH $HADOOP_HOME/bin:$PATH
RUN yum install -y which sudo
# then press Esc and type :wq to save and quit
docker build -t="hadoop" .
Set up the network between the cluster nodes.
docker network create --driver bridge hadoop-br
docker run -itd --network hadoop-br --name hadoop1 -p 50070:50070 -p 8088:8088 -p 9000:9000 -p 16010:16010 -p 2181:2181 -p 8080:8080 -p 16000:16000 -p 9020:9020 -p 42239:42239 -p 60000:60000 hadoop # master node, with service/UI ports published
docker run -itd --network hadoop-br --name hadoop2 -p 16020:16020 hadoop
docker run -itd --network hadoop-br --name hadoop3 hadoop
# note: in Hadoop 3.x the NameNode web UI listens on port 9870 rather than 50070, so add -p 9870:9870 to hadoop1 if you want the HDFS UI from the host
3. Check the network:
docker network inspect hadoop-br
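If you prefer a compact listing to the full JSON, a small sketch using Docker's Go-template formatting:
docker network inspect hadoop-br --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}'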
docker exec -it hadoop1 bash # repeat for hadoop2 and hadoop3
vi /etc/hosts
# add the following IPs and hostnames, taken from the docker network inspect output above
172.18.0.2 hadoop1
172.18.0.3 hadoop2
172.18.0.4 hadoop3
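Editing /etc/hosts in three containers by hand is error-prone; the loop below, run from the Docker host, does the same thing (it assumes the IPs above match your docker network inspect output):
for h in hadoop1 hadoop2 hadoop3; do
  docker exec $h bash -c 'printf "172.18.0.2 hadoop1\n172.18.0.3 hadoop2\n172.18.0.4 hadoop3\n" >> /etc/hosts'
done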
ssh-keygen
# press Enter at every prompt
ssh-copy-id -i /root/.ssh/id_rsa -p 22 root@hadoop1
# the password is the 123456 set earlier; same for the next two commands
ssh-copy-id -i /root/.ssh/id_rsa -p 22 root@hadoop2
ssh-copy-id -i /root/.ssh/id_rsa -p 22 root@hadoop3
ping hadoop1
ping hadoop2
ping hadoop3
ssh hadoop1
ssh hadoop2
ssh hadoop3
# remember to exit after each ssh login
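To confirm passwordless SSH in one shot, run this inside each container (BatchMode makes ssh fail instead of prompting if key authentication is broken):
for h in hadoop1 hadoop2 hadoop3; do ssh -o BatchMode=yes $h hostname; done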
The following operations only need to be performed on the master node hadoop1:
docker exec -it hadoop1 bash # enter hadoop1
mkdir /home/hadoop # create the working directory
mkdir /home/hadoop/tmp /home/hadoop/hdfs_name /home/hadoop/hdfs_data
cd /usr/local/hadoop/etc/hadoop/
1. core-site.xml
#vi core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop1:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/hadoop/tmp</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131702</value>
</property>
2. hdfs-site.xml
#vi hdfs-site.xml
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hadoop/hdfs_name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hadoop/hdfs_data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop1:9001</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
3. mapred-site.xml
#vi mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop1:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop1:19888</value>
</property>
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
4. yarn-site.xml
#vi yarn-site.xml
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop1</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>
5. workers
#vi workers
hadoop2
hadoop3
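After editing these five files, you can sanity-check what Hadoop actually parsed; hdfs getconf reads the same configuration the daemons will use:
hdfs getconf -confKey fs.defaultFS # expect hdfs://hadoop1:9000
hdfs getconf -confKey dfs.replication # expect 2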
The following edits are likewise made on the master node hadoop1:
1. start-dfs.sh / stop-dfs.sh
cd /usr/local/hadoop/sbin
vi start-dfs.sh # add the following 4 lines starting at line 2
vi stop-dfs.sh # add the same 4 lines starting at line 2
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
2. start-yarn.sh / stop-yarn.sh
cd /usr/local/hadoop/sbin
vi start-yarn.sh # add the following 3 lines starting at line 2
vi stop-yarn.sh # add the same 3 lines starting at line 2
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
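If you would rather not edit four files in vi, a GNU sed sketch that inserts the lines at line 2 of each script (run from /usr/local/hadoop/sbin and double-check the result; it assumes line 1 is the shebang):
sed -i '2i HDFS_DATANODE_USER=root\nHADOOP_SECURE_DN_USER=hdfs\nHDFS_NAMENODE_USER=root\nHDFS_SECONDARYNAMENODE_USER=root' start-dfs.sh stop-dfs.sh
sed -i '2i YARN_RESOURCEMANAGER_USER=root\nHADOOP_SECURE_DN_USER=yarn\nYARN_NODEMANAGER_USER=root' start-yarn.sh stop-yarn.sh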
3. hadoop-env.sh
cd /usr/local/hadoop/etc/hadoop
vi hadoop-env.sh # add the line below
export JAVA_HOME=/usr/local/jdk1.8
4. Environment variables
vi ~/.bashrc # add the following lines
export JAVA_HOME=/usr/local/jdk1.8
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source ~/.bashrc # apply the changes
These can also be written to /etc/profile to take effect permanently.
5. Copy the configuration from the master node hadoop1 to the worker nodes
scp -r /usr/local/hadoop/ hadoop2:/usr/local/
scp -r /usr/local/hadoop/ hadoop3:/usr/local/
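A quick check that the copies match, comparing the md5 of one representative config file across nodes:
md5sum /usr/local/hadoop/etc/hadoop/core-site.xml
for h in hadoop2 hadoop3; do ssh $h md5sum /usr/local/hadoop/etc/hadoop/core-site.xml; done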
Run the following on hadoop1:
hdfs namenode -format # format the NameNode (needed only before the first startup)
start-all.sh
Troubleshooting: "cannot execute binary file: Exec format error" points to a JDK environment problem. Test with
java -version
If there is no output, it is likely a compatibility issue, such as running a 64-bit package on a 32-bit system or a similar architecture mismatch; run uname -m to check the machine architecture, then download and install the matching JDK. Next test javac; if that produces no output, your JAVA_HOME is probably misconfigured.
If you are using the system's bundled OpenJDK, pointing JAVA_HOME at it is enough, e.g. export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.292.b10-1.el7_9.aarch64 (check the actual directory name under your path). If that directory contains only a jre, you also need the corresponding devel package (find it with yum search java | grep jdk); for OpenJDK 1.8, for example, run yum install java-1.8.0-openjdk-devel.aarch64
Then remember to update the JAVA_HOME path in the hadoop-env.sh file from the configuration step above.
jps # check which daemons are running
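With this layout the daemons should be distributed roughly as follows (jps prints one process per line, preceded by a PID, and lists itself as Jps):
# on hadoop1 (master)
NameNode
SecondaryNameNode
ResourceManager
# on hadoop2 and hadoop3 (workers)
DataNode
NodeManager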
If needed, you can also start the JobHistory server (this script still works in Hadoop 3, though mapred --daemon start historyserver is the newer form):
mr-jobhistory-daemon.sh start historyserver
Open port 8088 in your firewall or security group, then visit public-ip:8088 in a browser.
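To verify YARN from the command line before opening the browser, the ResourceManager's standard REST endpoint can be queried from the Docker host (port 8088 is published on hadoop1):
curl -s http://localhost:8088/ws/v1/cluster/info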
Appendix: restart procedure
exit # leave the docker container
shutdown -r now # reboot the host
systemctl start docker # start the Docker service
docker start hadoop1 # start each container
docker start hadoop2
docker start hadoop3
docker exec -it hadoop1 bash # enter the master node
$HADOOP_HOME/sbin/start-all.sh # start the cluster
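The whole appendix can be rolled into one helper script run from the Docker host after a reboot (a sketch; the script name restart-cluster.sh is a hypothetical choice):
#!/usr/bin/env bash
# restart-cluster.sh: bring the containers and Hadoop daemons back up after a host reboot
set -e
systemctl start docker
docker start hadoop1 hadoop2 hadoop3
docker exec hadoop1 bash -c '$HADOOP_HOME/sbin/start-all.sh'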