
Setting Up a Hadoop 2.5.0 Pseudo-Distributed Environment

Author: 星哥玩云 · Published 2022-07-26 20:06:44 · From the 开源部署 (open-source deployment) column

This article walks through setting up a Hadoop 2.5.0 pseudo-distributed environment on Linux. Before installing Hadoop itself, a few prerequisites must be in place: creating a dedicated user, installing the JDK, and disabling the firewall.

I. Create the hadoop user

As the root account, create a hadoop user. To keep things simple in this test environment, grant it passwordless sudo. The commands are:

useradd hadoop    # add the hadoop user
passwd hadoop     # set its password
visudo            # then append the following line to the sudoers file:
hadoop ALL=(root) NOPASSWD:ALL

II. Hadoop Pseudo-Distributed Environment Setup

1. Disable the firewall and SELinux

Disable SELinux:

sudo vi /etc/sysconfig/selinux    # open the SELinux config file
SELINUX=disabled                  # set the SELINUX property to disabled

Stop the firewall:

sudo service iptables status      # check the firewall's status
sudo service iptables stop        # stop the firewall
sudo chkconfig iptables off       # disable the firewall at boot

2. Install the JDK

First, check whether the system ships with a preinstalled JDK; if it does, remove it:

rpm -qa | grep java               # check for an installed JDK
sudo rpm -e --nodeps java-1.6.0-openjdk-1.6.0.0-1.50.1.11.5.el6_3.x86_64 \
    tzdata-java-2012j-1.el6.noarch \
    java-1.7.0-openjdk-1.7.0.9-2.3.4.1.el6_3.x86_64   # remove the bundled JDK

Then install the JDK, as follows:

step1. Unpack the archive:

tar -zxf jdk-7u67-linux-x64.tar.gz -C /usr/local/

step2. Configure the environment variables and verify the installation:

sudo vi /etc/profile              # open the profile file

## JAVA_HOME
export JAVA_HOME=/usr/local/jdk1.7.0_67
export PATH=$PATH:$JAVA_HOME/bin

# reload the file (run as root)
source /etc/profile

# verify the configuration
java -version
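The `$` signs in the PATH line are easy to drop (`PATH=PATH:JAVA_HOME/bin` silently breaks the path). A quick sanity check that the exports expand as intended; this only verifies the string, not that the JDK actually exists at that path:

```shell
# Re-create the two exports from /etc/profile and confirm the JDK bin
# directory really landed in PATH.
export JAVA_HOME=/usr/local/jdk1.7.0_67
export PATH=$PATH:$JAVA_HOME/bin
echo "$PATH" | grep -o 'jdk1.7.0_67/bin'
# → jdk1.7.0_67/bin
```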

3. Install Hadoop

step1: Unpack the Hadoop archive

tar -zxvf /opt/software/hadoop-2.5.0.tar.gz -C /opt/software/

Tip: consider deleting the doc directory under /opt/software/hadoop-2.5.0/share.

step2: Set JAVA_HOME in the three files hadoop-env.sh, mapred-env.sh, and yarn-env.sh under etc/hadoop

export JAVA_HOME=/usr/local/jdk1.7.0_67

step3: Edit core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>name</name>
        <value>my-study-cluster</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bigdata01:8020</value>
    </property>
    <!-- Directory for temporary files generated by Hadoop -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/software/hadoop-2.5.0/data/tmp</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>hadoop</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>bigdata01</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
</configuration>
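fs.defaultFS points at the hostname bigdata01, so that name must resolve to this machine. If it does not already, add a mapping in /etc/hosts; the address below is only a placeholder, substitute your host's actual IP:

```
192.168.33.101   bigdata01
```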

step4: Edit hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/software/hadoop-2.5.0/data/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/software/hadoop-2.5.0/data/data</value>
    </property>
</configuration>
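The tmp, name, and data directories referenced in the two configs above do not strictly need to exist before formatting, but creating them up front (as the hadoop user) avoids permission surprises later. A sketch of that step; `$HOME` is used here as a stand-in prefix so it runs anywhere, on the real machine use /opt/software/hadoop-2.5.0:

```shell
# Stand-in prefix for illustration; substitute /opt/software/hadoop-2.5.0.
HADOOP_PREFIX="${HADOOP_PREFIX:-$HOME/hadoop-2.5.0}"
mkdir -p "$HADOOP_PREFIX/data/tmp" \
         "$HADOOP_PREFIX/data/name" \
         "$HADOOP_PREFIX/data/data"
ls "$HADOOP_PREFIX/data"
```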

step5: Edit mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>bigdata01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>bigdata01:19888</value>
    </property>
</configuration>

step6: Edit yarn-site.xml

<?xml version="1.0"?>
<configuration>

<!-- Site specific YARN configuration properties -->

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>bigdata01</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>106800</value>
    </property>
    <property>
        <name>yarn.log.server.url</name>
        <value>http://bigdata01:19888/jobhistory/job/</value>
    </property>
</configuration>

step7: Edit the slaves file

bigdata01

step8: Format the NameNode

bin/hdfs namenode -format

step9: Start the daemons

## Option 1: start each daemon individually
sbin/hadoop-daemon.sh start namenode              # NameNode
sbin/hadoop-daemon.sh start datanode              # DataNode
sbin/yarn-daemon.sh start resourcemanager         # ResourceManager
sbin/yarn-daemon.sh start nodemanager             # NodeManager
sbin/hadoop-daemon.sh start secondarynamenode     # SecondaryNameNode
sbin/mr-jobhistory-daemon.sh start historyserver  # JobHistory server

## Option 2: use the combined scripts
sbin/start-dfs.sh                                 # NameNode, DataNode, SecondaryNameNode
sbin/start-yarn.sh                                # ResourceManager, NodeManager
sbin/mr-jobhistory-daemon.sh start historyserver  # JobHistory server
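Once everything is up, running `jps` should list all six daemons (plus Jps itself). A small helper for checking a `jps` listing for the expected process names; this function is a sketch for illustration, not part of Hadoop, and the PIDs below are made up:

```shell
# Check a jps-style listing for the six expected daemon names (sketch).
check_daemons() {
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager JobHistoryServer; do
    echo "$1" | grep -qw "$d" || { echo "missing: $d"; return 1; }
  done
  echo "all daemons running"
}

# Example against a captured listing (on the real machine: check_daemons "$(jps)"):
check_daemons "2131 NameNode
2234 DataNode
2398 SecondaryNameNode
2456 ResourceManager
2560 NodeManager
2678 JobHistoryServer
2999 Jps"
# → all daemons running
```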

step10: Verify

1. Open the HDFS web UI in a browser on its external port, 50070:

  http://bigdata01:50070

2. Open the YARN web UI in a browser on its external port, 8088:

  http://bigdata01:8088

3. Run the WordCount example:

  bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount input output

  Note: the input and output paths are up to you; the output directory must not already exist.
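The job expects the input directory to already exist in HDFS; a typical preparation sequence (run on the Hadoop node, with illustrative paths) is shown in the comments below. WordCount's counting step itself is equivalent to a local pipeline over standard tools, which is handy for sanity-checking the expected output on a tiny sample:

```shell
# Typical HDFS preparation before submitting the job (paths are illustrative):
#   bin/hdfs dfs -mkdir -p /user/hadoop/input
#   bin/hdfs dfs -put etc/hadoop/core-site.xml /user/hadoop/input
#   bin/hdfs dfs -cat /user/hadoop/output/part-r-00000   # view results afterwards
# WordCount's count step, reproduced locally:
printf 'hello world\nhello hadoop\n' | tr ' ' '\n' | sort | uniq -c
```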

Done!

That completes the Hadoop 2.5.0 pseudo-distributed environment setup. If you spot any problems, please point them out. Thanks!
