Hadoop11/12/13 Cluster
Where this article does not explicitly state a stop command, stop the application or service with kill <PID>.
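A minimal sketch of that workflow: look up the service's PID in jps output, then kill it. The parser below operates on plain text, so it can be tried without a cluster; the process names are examples taken from transcripts later in this document.

```shell
# pid_of prints the PID for a given process name from jps-style "PID Name" lines.
pid_of() {
  printf '%s\n' "$1" | awk -v name="$2" '$2 == name { print $1 }'
}

# Example against captured output (real use: kill "$(pid_of "$(jps)" Dlink)").
jps_out="2400 SecondaryNameNode
49632 Dlink
2029 NameNode"
pid_of "$jps_out" Dlink   # prints 49632
```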
Default communication port: 2181
**Note:** The commands below were tested against Zookeeper-3.4.6.
[root@hadoop10 ~]# zkServer.sh start
[root@hadoop10 ~]# zkCli.sh
Connecting to localhost:2181
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.ZooKeeper).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Welcome to ZooKeeper!
JLine support is enabled
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[cluster, controller_epoch, controller, brokers, zookeeper, admin, isr_change_notification, dolphinscheduler, consumers, latest_producer_id_block, config, hbase]
[zk: localhost:2181(CONNECTED) 1]
[root@hadoop10 ~]# zkServer.sh status
JMX enabled by default
Using config: /opt/installs/zookeeper3.4.6/zoo.cfg
Mode: standalone
[root@hadoop10 ~]# zkServer.sh stop
JMX enabled by default
Using config: /opt/installs/zookeeper3.4.6/zoo.cfg
Stopping zookeeper ... STOPPED
Communication port: 9092
**Note:** The commands below target Kafka 0.11; commands vary slightly between versions, which affects usage.
[root@hadoop10 ~]# kafka-server-start.sh -daemon /opt/installs/kafka0.11/config/server.properties
Change into the Kafka directory and run the script from bin.
[root@hadoop10 ~]# cd /opt/installs/kafka0.11/
[root@hadoop10 kafka0.11]# bin/kafka-server-stop.sh stop
[root@hadoop10 kafka0.11]# kafka-topics.sh --create --zookeeper hadoop10:2181 --topic topic1 --partitions 1 --replication-factor 1
Created topic "topic1".
[root@hadoop10 kafka0.11]# kafka-topics.sh --delete --zookeeper hadoop10:2181 --topic topic1
Topic topic1 is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
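As the log line notes, deletion only takes effect when delete.topic.enable=true is set in the broker config. A hedged sketch for setting it idempotently (the server.properties path below is this document's install path; restart the broker afterwards):

```shell
# enable_topic_delete appends delete.topic.enable=true to the given
# server.properties if it is not already set (safe to run repeatedly).
enable_topic_delete() {
  local conf="$1"
  grep -q '^delete.topic.enable=true$' "$conf" || echo 'delete.topic.enable=true' >> "$conf"
}

# On this cluster: enable_topic_delete /opt/installs/kafka0.11/config/server.properties
```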
Change into the Kafka directory and run the script from bin.
[root@hadoop10 ~]# cd /opt/installs/kafka0.11/
[root@hadoop10 kafka0.11]# bin/kafka-console-producer.sh --broker-list hadoop10:9092 --topic topic-car
HDFS web UI port: 9870
Yarn ResourceManager web UI port: 8088
[root@hadoop10 ~]# start-all.sh
[root@hadoop10 dolphinscheduler2.0.6]# start-dfs.sh
[root@hadoop10 ~]# mr-jobhistory-daemon.sh start historyserver
On a successful start, jps shows:
[root@hadoop10 ~]# jps
2400 SecondaryNameNode
100481 RunJar
100625 RunJar
62627 JobHistoryServer # Hadoop job-history process
62691 Jps
2709 ResourceManager
2901 NodeManager
2172 DataNode
2029 NameNode
[root@hadoop11 ~]# hdfs haadmin -getServiceState nn2
standby
[root@hadoop11 ~]# hdfs haadmin -getServiceState nn1
standby
[root@hadoop11 ~]# hdfs haadmin -transitionToActive --forcemanual nn1
You have specified the --forcemanual flag. This flag is dangerous, as it can induce a split-brain scenario that WILL CORRUPT your HDFS namespace, possibly irrecoverably.
It is recommended not to use this flag, but instead to shut down the cluster and disable automatic failover if you prefer to manually manage your HA state.
You may abort safely by answering 'n' or hitting ^C now.
Are you sure you want to continue? (Y or N) y
2023-09-28 16:08:26,544 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for NameNode at hadoop12/192.168.200.12:8020
2023-09-28 16:08:26,787 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for NameNode at hadoop11/192.168.200.11:8020
[root@hadoop11 ~]# hdfs haadmin -getServiceState nn1
active
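The repeated state checks above can be wrapped in a small reporter. The state command is passed in as a parameter so the helper can be exercised anywhere; on the cluster it would be hdfs haadmin -getServiceState with the service IDs nn1 and nn2.

```shell
# ha_report prints "id: state" for each NameNode service ID; the state
# command is a parameter so the helper can be tried without a cluster.
ha_report() {
  local state_cmd="$1" id
  shift
  for id in "$@"; do
    printf '%s: %s\n' "$id" "$($state_cmd "$id")"
  done
}

# Demo with a stub; on the cluster: ha_report "hdfs haadmin -getServiceState" nn1 nn2
demo_state() { echo standby; }
ha_report demo_state nn1 nn2   # prints "nn1: standby" and "nn2: standby"
```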
The Spark commands below were tested in Standalone mode.
Communication port: 7077
Web UI: 8080
[root@hadoop10 ~]# cd /opt/installs/spark3.2.0/sbin/
[root@hadoop10 sbin]# ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /opt/installs/spark3.2.0/logs/spark-root-org.apache.spark.deploy.master.Master-1-hadoop10.out
hadoop10: starting org.apache.spark.deploy.worker.Worker, logging to /opt/installs/spark3.2.0/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-hadoop10.out
Because Spark's cluster start/stop scripts share names with Hadoop's (start-all.sh, stop-all.sh), invoke the Spark scripts by absolute path.
[root@hadoop11 ~]# /opt/installs/spark3.1.2/sbin/stop-all.sh
[root@hadoop10 sbin]# sh /opt/installs/spark3.2.0/sbin/stop-all.sh
hadoop10: stopping org.apache.spark.deploy.worker.Worker
stopping org.apache.spark.deploy.master.Master
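The absolute-path rule above can be captured in a tiny resolver so the Spark script can never be shadowed by Hadoop's script of the same name; SPARK_HOME below is this document's install path and the function name is illustrative.

```shell
# spark_sbin resolves a Spark sbin script name to its absolute path.
SPARK_HOME=/opt/installs/spark3.2.0
spark_sbin() { printf '%s/sbin/%s\n' "$SPARK_HOME" "$1"; }

spark_sbin stop-all.sh   # prints /opt/installs/spark3.2.0/sbin/stop-all.sh
# Real use: "$(spark_sbin stop-all.sh)"
```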
Run the Pi-calculation test example:
[root@hadoop10 installs]# spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME/examples/jars/spark-examples_2.12-3.2.0.jar
Output:
23/06/25 22:35:42 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Pi is roughly 3.142675713378567
Run the Pi-calculation test example on the abc queue in yarn mode:
[root@hadoop10 installs]# spark-submit --queue abc --master yarn --class org.apache.spark.examples.SparkPi $SPARK_HOME/examples/jars/spark-examples_2.12-3.2.0.jar
Output:
23/06/25 22:41:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
23/06/25 22:42:02 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
Pi is roughly 3.1404757023785117
[root@hadoop10 ~]# cd /opt/installs/spark3.2.0/
[root@hadoop10 spark3.2.0]# sbin/start-history-server.sh
History server web UI: 18080
Web UI: 8081
[root@hadoop10 ~]# start-cluster.sh
[root@hadoop10 ~]# stop-cluster.sh
Web UI port: 8888
[root@hadoop10 ~]# cd /opt/installs/dlink0.7.3/
[root@hadoop10 dlink0.7.3]# sh auto.sh start
FLINK VERSION : 1.14
........................................Start Dinky Successfully........................................
Watch whether CPU and memory usage rise, and check jps for the Dlink process.
[root@hadoop10 dlink0.7.3]# jps
49632 Dlink
1859 NameNode
64052 QuorumPeerMain
2006 DataNode
2214 SecondaryNameNode
2679 NodeManager
48775 StandaloneSessionClusterEntrypoint
49082 TaskManagerRunner
2523 ResourceManager
50653 Jps
[root@hadoop10 ~]# cd /opt/installs/dlink0.7.3/
[root@hadoop10 dlink0.7.3]# sh auto.sh stop
........................................Stop Dinky Successfully.....................................
**Note:** Before starting HBase, make sure the ZooKeeper connection has been established and HBase is registered in ZooKeeper; otherwise, go back to section 1.1 on starting Zookeeper.
[root@hadoop10 ~]# start-hbase.sh
...
[root@hadoop10 ~]# stop-hbase.sh
stopping hbase...............
After a successful start it loads for a moment, and then HBase operations can be run in the shell.
[root@hadoop10 ~]# hbase shell
...
hbase:001:0> list
[root@hadoop11 ~]# hive
which: no hbase in ...
Create a database and show the database names.
create database test_hive;
show databases;
[root@hadoop10 ~]# nohup hive --service metastore > /tmp/metastore.log 2>&1 &
[root@hadoop10 ~]# hiveserver2
2023-06-25 21:06:59: Starting HiveServer2
...
[root@hadoop11 ~]# nohup hive --service hiveserver2 > /tmp/hiveserver2.log 2>&1 &
[1] 7273
[root@hadoop11 ~]# tail -f /tmp/hiveserver2.log
nohup: ignoring input
set hive.exec.mode.local.auto=true;
alter table t_name add partition(dt='xxxxxxx');
alter table t_name drop partition(dt='xxxxxxx');
[root@hadoop11 ~]# beeline
Beeline version 2.3.7 by Apache Hive
beeline> !connect jdbc:hive2://hadoop11:10000
Connecting to jdbc:hive2://hadoop11:10000
Enter username for jdbc:hive2://hadoop11:10000: root
Enter password for jdbc:hive2://hadoop11:10000: ****
2023-10-09 14:18:39,146 INFO jdbc.Utils: Supplied authorities: hadoop11:10000
2023-10-09 14:18:39,149 INFO jdbc.Utils: Resolved authority: hadoop11:10000
Connected to: Apache Hive (version 3.1.2)
[root@hadoop10 ~]# yarn rmadmin -refreshQueues
[root@hadoop10 ~]# start-yarn.sh
[root@hadoop10 ~]# stop-yarn.sh
[root@hadoop10 ~]# mapred --daemon start historyserver
[root@hadoop10 ~]# jps
100022 JobHistoryServer
Starting DS requires Zookeeper, HDFS, and Yarn to be started first.
zkServer.sh start    # start Zookeeper
start-dfs.sh         # start HDFS
start-yarn.sh        # start Yarn
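The three prerequisite starts can be chained with a small runner that stops at the first failure. The commands are passed as arguments, so the sketch can be tried with any commands; the runner itself is an illustration, not part of DolphinScheduler.

```shell
# run_in_order executes each command in sequence and aborts on the first failure.
run_in_order() {
  local cmd
  for cmd in "$@"; do
    echo ">> $cmd"
    $cmd || { echo "failed: $cmd" >&2; return 1; }
  done
}

# On the cluster: run_in_order "zkServer.sh start" start-dfs.sh start-yarn.sh
run_in_order "echo ok"   # prints ">> echo ok" then "ok"
```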
The commands below have been tested; they work the same from the dolphinscheduler2.0.6 extraction directory and the installation directory.
[root@hadoop10 dolphinscheduler2.0.6]# bin/start-all.sh
Web UI:http://hadoop10:12345/dolphinscheduler
If the above is unreachable, use Web UI: http://hadoop10:12345/dolphinscheduler/ui/view/login/index.html
Processes after a normal, successful start:
[root@hadoop10 dolphinscheduler2.0.6]# jps
76706 MasterServer
74345 NodeManager
76937 PythonGatewayServer
73608 DataNode
77288 Jps
76843 AlertServer
73836 SecondaryNameNode
76748 WorkerServer
73455 NameNode
74193 ResourceManager
74833 QuorumPeerMain
76796 LoggerServer
76892 ApiApplicationServer
[root@hadoop10 dolphinscheduler2.0.6]# bin/stop-all.sh
[root@hadoop10 ~]# cd /opt/installs/dolphinscheduler2.0.6/
[root@hadoop10 dolphinscheduler2.0.6]# sh install.sh
Commands differ between versions.
[root@hadoop10 ~]# service mysqld status
set global validate_password_policy=0;
set global validate_password_length=4;
flush privileges;
[root@hadoop10 ~]# redis-server /opt/installs/redis-6.2.0/redis.conf
[root@hadoop10 ~]# ps -ef | grep redis
root 80277 1 0 23:33 ? 00:00:00 redis-server hadoop10:6379
root 80350 79829 0 23:33 pts/1 00:00:00 grep --color=auto redis
[root@hadoop10 ~]# redis-cli -h hadoop10 -p 6379
hadoop10:6379> auth 123
OK
hadoop10:6379> flushall
OK
First, on Windows it can be started manually from the Services panel:
http://localhost:27017/
It looks like you are trying to access MongoDB over HTTP on the native driver port.
PS C:\Users\Lenovo> mongo
Run db.shutdownServer() inside the admin database:
> db.shutdownServer()
shutdown command only works with the admin database; try 'use admin'
> use admin
switched to db admin
> db.shutdownServer()
server should be down...
This site can’t be reached
localhost refused to connect.
If this database does not exist, use creates it, but it will not appear in show dbs until data has been inserted.
> use TestDb2
switched to db TestDb2
> show dbs
TestDb1 0.000GB
admin 0.000GB
config 0.000GB
local 0.000GB
> db.testdemo.insert({name:"guoyachao",age:25})
WriteResult({ "nInserted" : 1 })
> db.testdemo.insertMany([{name:"guoyachao2",age:25},{name:"guoyachao3",age:"25"}])
{
"acknowledged" : true,
"insertedIds" : [
ObjectId("650d08a71163e5c30f7eb223"),
ObjectId("650d08a71163e5c30f7eb224")
]
}
> db.testdemo.insertOne({name:"guoyachao4",age:25})
{
"acknowledged" : true,
"insertedId" : ObjectId("650d09131163e5c30f7eb225")
}
First switch into the database to be deleted, then run the drop command.
> use TestDb2
switched to db TestDb2
> db.dropDatabase()
{ "ok" : 1 }
> show dbs
TestDb1 0.000GB
admin 0.000GB
config 0.000GB
local 0.000GB
free -h: show system memory usage in human-readable units (GB).
df -h: show disk usage and file-system information.
lscpu: show information about the CPU and the system architecture.
ls | wc -l: count the files in the current directory (wc -l * counts lines inside each file, not the number of files).
du -sh *: show the size of each entry in the current directory in human-readable units (GB).
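Combining du with sort gives a quick "largest entries" view, a common follow-up to the commands above. A minimal sketch, assuming GNU sort's -h (human-numeric) flag; the function name is illustrative:

```shell
# top5 lists the five largest entries in the given directory, largest first.
top5() ( cd "$1" && du -sh -- * 2>/dev/null | sort -rh | head -n 5 )

# Example: top5 /var/log
```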
Checking the time immediately after a sync will show a lag; wait a few seconds before checking the new time.
[root@hadoop10 ~]# date
Wed Oct  4 05:56:06 CST 2023
[root@hadoop10 ~]# systemctl restart chronyd
[root@hadoop10 ~]# date
Wed Oct  4 05:56:23 CST 2023
[root@hadoop10 ~]# date
Thu Oct  5 11:14:59 CST 2023
First-use notes: configure the upload and download directories under Options > Session Options > X/Y/Zmodem, and install the tools with yum -y install lrzsz.
sz: the s stands for send; from the server's point of view the server sends the file, i.e. a download.
[root@hadoop11 data]# sz aaaa
rz
Starting zmodem transfer. Press Ctrl+C to cancel.
Transferring aaaa...
100% 8 bytes 8 bytes/sec 00:00:01 0 Errors
rz: the r stands for receive; from the server's point of view the server receives the file, i.e. an upload.
After typing the command and pressing Enter, a dialog opens to choose the file to upload to the server.