2. After logging in, a user starts in their home directory, which can be checked with the pwd command; for a normal user it is /home/<username>, and for root it is /root.
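The point above can be checked directly (a minimal sketch; works for any user, normal or root):

```shell
# Print the current working directory right after login
pwd

# $HOME holds the home-directory path regardless of where we are
echo "$HOME"

# Confirm the two agree once we are in the home directory
cd "$HOME" && [ "$(pwd)" = "$HOME" ] && echo "in home directory"
```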
VIP linux1-vip is enabled
VIP linux1-vip is running on node: linux1
Network is enabled
Network is ... running on node: linux1
GSD is disabled
GSD is not running on node: linux1
ONS is enabled
ONS daemon ... is running on node: linux1
eONS is enabled
eONS daemon is running on node: linux1
# Next, check node linux2 ...
PRKO-2417 : ONS is already enabled on node(s): linux1,linux2
PRKO-2418 : eONS is already enabled on ... on node(s): linux1,linux2
PRKO-2422 : ONS is already started on node(s): linux1,linux2
PRKO-2423 :
2013-07-16 16:27:14.932 [ohasd(2748)]CRS-2765:Resource 'ora.crsd' has failed on server 'linux1'.
# ... /crsctl start res ora.crsd -init
CRS-2672: Attempting to start 'ora.crsd' on 'linux1'
CRS-2676: Start ... of 'ora.crsd' on 'linux1' succeeded
# CRS started successfully
[root@linux1 bin]# ... linux1 ora.evmd 1 ONLINE ONLINE linux1 ... 1 ONLINE ONLINE linux1
IP 1.2 Set the hostname and the hostname-to-IP mapping 1.3 Disable the firewall 1.4 Passwordless SSH login 1.5 Install the JDK and configure the environment variables 2 Cluster plan: node name NN JN DN ZKFC ZK RM NM linux1 ... <!-- ZooKeeper quorum for automatic ZKFC failover --> ha.zookeeper.quorum linux1:2181,linux2 ... <!-- RPC address of nn1 --> dfs.namenode.rpc-address.mycluster.nn1 linux1 ... <!-- HTTP address of nn1 --> dfs.namenode.http-address.mycluster.nn1 linux1 ... <!-- MapReduce history server host and port --> mapreduce.jobhistory.address linux1
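The flattened property names above can be sketched as a complete hdfs-site.xml excerpt (a minimal sketch; the nameservice mycluster, property names, and host linux1 come from the snippet, while ports 8020 and 9870 are assumed common Hadoop defaults):

```shell
# Write an hdfs-site.xml fragment for NameNode nn1 of nameservice "mycluster"
# (ports 8020/9870 are assumptions for illustration, not from the source)
cat > hdfs-site-fragment.xml <<'EOF'
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>linux1:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>linux1:9870</value>
</property>
EOF
grep -c '<property>' hdfs-site-fragment.xml   # prints 2 on a fresh file
```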
---------------
ora.ASM_DATA.dg          diskgroup      L  ONLINE  ONLINE  linux1 ...   0
ora.FRA_DATA.dg          diskgroup      L  ONLINE  ONLINE  linux1 ...   0
ora.LISTENER.lsnr        Listener       L  ONLINE  ONLINE  linux1 ...   0
ora.LISTENER_SCAN1.lsnr  SCAN Listener  C  ONLINE  ONLINE  linux1 ...)  0
ora.linux1.vip           Cluster VIP    C  ONLINE  ONLINE  linux1
Status of all instances and services:
$ srvctl status database -d orcl
Instance orcl1 is running on node linux1
Instance orcl2 ...
Service orcltest is running on instance(s) orcl2, orcl1
Status of node applications on a particular node:
$ srvctl status nodeapps -n linux1
... VIP is running on node: linux1
GSD is running on node: linux1
Listener is running on node: linux1
ONS ... daemon is running on node: linux1
Status of the ASM instance:
$ srvctl status asm -n linux1
ASM instance +ASM1 is running ...
List all configured databases:
$ srvctl config database
orcl
Show the configuration of the RAC database:
$ srvctl config database -d orcl
linux1 orcl1
mirror.bit.edu.cn/apache/zookeeper/ — download the binary package, not the source package; the source build sometimes fails with missing classes: apache-zookeeper-3.6.0-bin.tar.gz
1.1 Cluster plan: set up passwordless SSH-key login on linux1 ...
1. First configure the hosts file:
vim /etc/hosts
192.168.10.11 linux1
192.168.10.12 linux2
192.168.10.13 linux3
... ssh-copy-id linux3
On the second machine:
[hadoop@linux2 conf]$ ssh-keygen -t rsa
[hadoop@linux2 conf]$ ssh-copy-id linux1
... ssh-copy-id linux3
On the third machine:
[hadoop@linux3 conf]$ ssh-keygen -t rsa
[hadoop@linux3 conf]$ ssh-copy-id linux1
... apache-zookeeper-3.6.0/Data
dataLogDir=/opt/module/apache-zookeeper-3.6.0/Data/logs
Append at the end:
server.1=linux1
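After the server.N entries are added to zoo.cfg, each node also needs a myid file in the dataDir whose number matches its own server.N line (a minimal sketch; the Data path reuses the directory from the snippet above, and running the equivalent with 2 and 3 on linux2 and linux3 is assumed):

```shell
# On linux1: create the myid file matching server.1=linux1 in zoo.cfg
# (write 2 on linux2 and 3 on linux3)
DATADIR=${DATADIR:-/opt/module/apache-zookeeper-3.6.0/Data}
mkdir -p "$DATADIR"
echo 1 > "$DATADIR/myid"
cat "$DATADIR/myid"   # prints 1
```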
The owner has read and write permission, while Others and the Group have only read permission; the steps are as follows:
[linux1@localhost ~]$ ls -al mysqltuner.pl
-rw------- 1 linux1 linux1 38063 Oct 26 07:49 mysqltuner.pl
[linux1@localhost ~]$ chmod 644 mysqltuner.pl
[linux1@localhost ~]$ ls -al mysqltuner.pl
-rw-r--r-- 1 linux1 linux1 38063 Oct 26 07:49 mysqltuner.pl
Then continue adjusting mysqltuner.pl ...
[linux1@localhost ~]$ chmod 755 mysqltuner.pl
[linux1@localhost ~]$ ls -al mysqltuner.pl
-rwxr-xr-x 1 linux1 linux1 38063 Oct 26 07:49 mysqltuner.pl
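The octal modes used above map to the permission strings shown by ls (a minimal sketch on a throwaway file; the -c format of stat is the GNU coreutils form):

```shell
# Demonstrate what 644 and 755 mean on a scratch file
f=$(mktemp)
chmod 644 "$f"
stat -c '%a %A' "$f"   # prints: 644 -rw-r--r--
chmod 755 "$f"
stat -c '%a %A' "$f"   # prints: 755 -rwxr-xr-x
rm -f "$f"
```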
-- Start the linux1 image server correctly (5 points) -- Correctly open [Network & Internet settings] and, via [Change adapter options], enable the corresponding network service (5 points) -- Correctly connect to the linux1 image server through the Xshell tool; ip a (5
First bring the Win1 host online, then connect from the Linux1 host to the Linux2 host over SSH. ... Set up port-445 forwarding on the Linux1 host:
socat TCP4-LISTEN:445,fork SOCKS4:127.0.0.1:192.168.232.132:445 3.
Oracle Grid Infrastructure home /u01/app/11.2.0/grid The following nodes are part of this cluster: linux1...node(s) on which the Oracle home exists are: (Please input nodes seperated by ",", eg: node1,node2,...)linux1...Oracle install clean START Clean install operation removing temporary directory '/tmp/install' on node 'linux1...de-configured on node "linux2" Oracle Clusterware is stopped and successfully de-configured on node "linux1...Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'linux1,linux2' at the end of the session.
Configure jvmRoute in Tomcat's server.xml. On Linux1: <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1 ... On Linux2: configure a Manager inside the Context element of context.xml. Linux1 ...
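A fuller sketch of the jvmRoute setting on each node (assumed values; only jvmRoute differs per Tomcat instance, and it must match the worker name configured on the load balancer):

```xml
<!-- server.xml on Linux1: route name tomcat1, taken from the snippet above -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat1">
</Engine>

<!-- server.xml on Linux2: a different route name, e.g. tomcat2 (assumed) -->
<Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcat2">
</Engine>
```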
The whole installation uses the Linux1 image; both the account and the password are root.
cd /opt/soft
ls   # check that the MySQL installation package is present
systemctl stop firewalld
yum install net-tools
curl -X PUT -d '{"Datacenter": "dc1", "Node": "c2", "Address": "10.0.2.15", "Service": {"Service": "Linux1 ... http://127.0.0.1:8500/v1/catalog/register
Check the service nodes, formatting the output as JSON:
curl 127.0.0.1:8500/v1/catalog/service/Linux1
Check the status of one machine's instance:
$ srvctl status instance -d orcl -i orcl1
Status of node applications on a particular node:
$ srvctl status nodeapps -n linux1
... VIP is running on node: linux1
GSD is running on node: linux1
Listener is running on node: linux1
ONS ... daemon is running on node: linux1
3. Shut down the whole RAC DB:
$ srvctl stop database -d orcl
$ srvctl stop database
Jps
Submit an application in standalone mode:
bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://linux1 ...
/examples/jars/spark-examples_2.12-3.0.0.jar \
10
1) --class names the main class of the application to run
2) --master spark://linux1:7077 ...
Configure the history service: once spark-shell is stopped, the cluster-monitoring page linux1:4040 no longer shows completed jobs, so during development a history server is usually configured to record job runs. ...
sbin/stop-all.sh
Start ZooKeeper:
zkServer.sh start
Edit spark-env.sh and add the configuration below; comment out the following line:
#SPARK_MASTER_HOST=linux1 ...
/examples/jars/spark-examples_2.12-3.0.0.jar \
10
Configure the history service: once spark-shell is stopped, the monitoring page linux1:4040 no longer shows completed jobs
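The history-service settings mentioned above normally go into conf/spark-defaults.conf (a minimal sketch; the HDFS log path and port 18080 are assumed common defaults, not taken from the source):

```shell
# Append assumed history-server settings to a local spark-defaults.conf
# (hdfs://linux1:8020/spark-logs and port 18080 are illustrative values)
cat >> spark-defaults.conf <<'EOF'
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://linux1:8020/spark-logs
spark.history.fs.logDirectory    hdfs://linux1:8020/spark-logs
spark.history.ui.port            18080
EOF
grep -c '^spark\.' spark-defaults.conf   # prints 4 on a fresh file
```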
Responsible for sending commands to the server. SFTP function: responsible for transferring files to the server. Operating system: the most basic computer program, which manages and controls the computer's hardware and software resources; any application can run only with the support of an operating system. The three most common operating systems are Windows, macOS, and Linux.
Two examples:
L 5                // load ACCU1 with the pointer value
T MW 2             // transfer the pointer to MW 2
L T [MW 2]         // load ACCU1 with the current time value of T5
OPN DB [#DB_Temp]  // open the DB whose number comes from the interface temporary parameter named DB_Temp
The memory area identifiers I, Q, M, L, and DB use the double-word (32-bit) location of the POINTER data type.