
OpenShift 3.7 Complete Installation Guide + Containerized Harbor

魏新宇 · Published 2018-04-18

A note from 大魏: 燕华 is one of our partners and knows OpenShift very well. By following this document you can install OpenShift 3.7 step by step. It is intended for test environments only; use it with caution in production.

1 Environment Preparation

Install RHEL 7.3 on all hosts, using the minimal installation everywhere.

Hostname | IP | Role
ocp37master01.demo.com | 192.168.250.111 | Master node, yum repository, NTP server, Harbor registry (HTTPS)
ocp37node01.demo.com | 192.168.250.121 | Infrastructure node / compute node

1.1 Basic environment preparation

1.1.1 On all nodes

systemctl disable firewalld

systemctl stop firewalld

setenforce 0

sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

1.2 Configure the yum repositories on the master node

mkdir /opt/repos;

Upload the following files to the /opt/repos directory:

rhel-7-fast-datapath-rpms.zip

rhel-7-server-extras-rpms.zip

rhel-7-server-ose-3.7.zip

rhel-7-server-rpms.zip

Mount the installation DVD and install unzip.

Change to /opt/repos and unpack everything: cd /opt/repos; for i in `ls *.zip`; do unzip $i; done

cat << EOF > /etc/yum.repos.d/local.repo

[rhel-7-server-rpms]

name=rhel-7-server-rpms

baseurl=file:///opt/repos/rhel-7-server-rpms

enabled=1

gpgcheck=0

[rhel-7-server-extras-rpms]

name=rhel-7-server-extras-rpms

baseurl=file:///opt/repos/rhel-7-server-extras-rpms

enabled=1

gpgcheck=0

[rhel-7-fast-datapath-rpms]

name=rhel-7-fast-datapath-rpms

baseurl=file:///opt/repos/rhel-7-fast-datapath-rpms

enabled=1

gpgcheck=0

[rhel-7-server-ose-3.7-rpms]

name=rhel-7-server-ose-3.7-rpms

baseurl=file:///opt/repos/rhel-7-server-ose-3.7-rpms

enabled=1

gpgcheck=0

EOF

subscription-manager clean

yum clean all

yum makecache

yum -y install httpd;

cat << EOF > /etc/httpd/conf.d/yum.conf

Alias /repos "/opt/repos"

<Directory "/opt/repos">

Options +Indexes +FollowSymLinks

Require all granted

</Directory>

<Location /repos>

SetHandler None

</Location>

EOF

systemctl enable httpd;

systemctl restart httpd;
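As a quick sanity check not in the original text, you can confirm that httpd is serving the repositories by requesting the directory index from either node:

curl http://192.168.250.111/repos/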

1.3 Configure the yum repositories on the node

cat << EOF > /etc/yum.repos.d/ocp.repo

[rhel-7-server-rpms]

name=rhel-7-server-rpms

baseurl=http://192.168.250.111/repos/rhel-7-server-rpms

enabled=1

gpgcheck=0

[rhel-7-server-extras-rpms]

name=rhel-7-server-extras-rpms

baseurl=http://192.168.250.111/repos/rhel-7-server-extras-rpms

enabled=1

gpgcheck=0

[rhel-7-fast-datapath-rpms]

name=rhel-7-fast-datapath-rpms

baseurl=http://192.168.250.111/repos/rhel-7-fast-datapath-rpms

enabled=1

gpgcheck=0

[rhel-7-server-ose-3.7-rpms]

name=rhel-7-server-ose-3.7-rpms

baseurl=http://192.168.250.111/repos/rhel-7-server-ose-3.7-rpms

enabled=1

gpgcheck=0

EOF

subscription-manager clean

yum clean all

yum makecache

1.4 Set the hostnames

On master01:

hostnamectl set-hostname ocp37master01.demo.com

On node01:

hostnamectl set-hostname ocp37node01.demo.com

1.5 Configure /etc/hosts on all nodes

echo "192.168.250.111 ocp37master01.demo.com" >> /etc/hosts

echo "192.168.250.121 ocp37node01.demo.com" >> /etc/hosts

1.6 Configure NTP

1.6.1 Install ntp on all nodes

yum install -y ntp

1.6.2 Configure the master

vi /etc/ntp.conf

server 127.127.1.0 iburst

fudge 127.127.1.0 stratum 10

#server 0.rhel.pool.ntp.org iburst

#server 1.rhel.pool.ntp.org iburst

#server 2.rhel.pool.ntp.org iburst

#server 3.rhel.pool.ntp.org iburst

1.6.3 Configure the node

vi /etc/ntp.conf

#server 0.rhel.pool.ntp.org iburst

#server 1.rhel.pool.ntp.org iburst

#server 2.rhel.pool.ntp.org iburst

server 192.168.250.111 iburst
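The original text edits /etc/ntp.conf but does not show starting the service. Assuming the stock ntpd shipped in the ntp package, a minimal follow-up on every node would be:

systemctl enable ntpd
systemctl start ntpd
ntpq -p    # verify that the configured server is listed and reachable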

2 Install packages and Docker on both nodes

yum install -y wget git net-tools bind-utils iptables-services \
    bridge-utils bash-completion kexec-tools sos psacct vim lrzsz python-setuptools

yum update -y

reboot

systemctl enable iptables

systemctl start iptables

Open up iptables:

sed -i -e s/'-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT'/'-A INPUT -j ACCEPT'/g /etc/sysconfig/iptables

systemctl restart iptables

yum install -y docker-1.12.6

systemctl enable docker

systemctl start docker

3 Install the Harbor image registry

3.1 Check Python and Docker

python --version

docker -v

3.2 Generate the certificate needed for HTTPS

mkdir -p /data/cert && cd /data/cert

openssl req \
    -newkey rsa:2048 -nodes -keyout server.key \
    -x509 -days 3650 -out server.crt -subj \
    "/C=CN/ST=BJ/L=BJ/O=AB/OU=IT/CN=*.demo.com"

3.3 Install Harbor

Upload docker-compose to the /usr/local/bin directory and make it executable:

chmod +x /usr/local/bin/docker-compose

tar zxvf harbor-offline-installer-v1.2.2.tgz -C /opt/

sed -i "s/hostname = .*/hostname = ${HOSTNAME}/" /opt/harbor/harbor.cfg

sed -i "s/ui_url_protocol = http/ui_url_protocol = https/" /opt/harbor/harbor.cfg

sed -i "s/verify_remote_cert = on/verify_remote_cert = off/" /opt/harbor/harbor.cfg

/opt/harbor/prepare

Note: by default Harbor listens on ports 80 and 443 on the host, which conflicts with the httpd instance on the master that is already listening on port 80, so Harbor's default listening ports need to be changed. If Harbor is installed on a dedicated node, no change is needed.

How to change them (see the sketch after this list):

1. vi docker-compose.yml

2. vi common/templates/registry/config.yml
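The original text does not spell out the exact edits. A minimal sketch of the idea, assuming the stock Harbor 1.2 docker-compose.yml in which the proxy service publishes 80:80 and 443:443 (10443 matches the URL used later in this document; check your files before applying anything like this):

sed -i 's/- 80:80/- 10080:80/; s/- 443:443/- 10443:443/' /opt/harbor/docker-compose.yml
# In common/templates/registry/config.yml, update any URL that still references the old
# external port (for example the token service endpoint) so that it uses 10443 as well;
# install.sh regenerates the runtime config from the templates, so the change is picked up.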

Run the Harbor installation:

/opt/harbor/install.sh

Confirm that all of the Harbor containers are up:

docker-compose -f /opt/harbor/docker-compose.yml ps

Harbor is now accessed at:

https://ocp37master01.demo.com:10443/harbor/projects

Fix the problem of Harbor not fully starting after a system reboot:

cat << EOF > /etc/systemd/system/harbor.service

[Unit]

Description=harbor registry service

After=docker.service

[Service]

ExecStart=/usr/local/bin/docker-compose -f /opt/harbor/docker-compose.yml start

[Install]

WantedBy=multi-user.target

EOF

systemctl daemon-reload

systemctl enable harbor

Based on the image names, create the projects that OpenShift needs in Harbor (as public projects):

gogs

openshift

openshift3

rhel7

rhscl

sonatype

4 Configure Docker (on both nodes)

vi /etc/sysconfig/docker

Modify:

OPTIONS='--insecure-registry=172.30.0.0/16 --selinux-enabled --log-opt max-size=1M --log-opt max-file=3'

Add:

ADD_REGISTRY='--add-registry ocp37master01.demo.com'

Configure trust for the Harbor certificate:

scp ocp37master01.demo.com:/data/cert/server.crt /etc/pki/ca-trust/source/anchors/

update-ca-trust extract

systemctl restart docker
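As a quick check not in the original text, confirm that the system trust store (which curl on RHEL also uses) now accepts the Harbor certificate; this should succeed without -k:

curl -sI https://ocp37master01.demo.com:10443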

5 Prepare the images (on the master node)

docker login ocp37master01.demo.com:10443

Verify that the login succeeds.

Load the images:

Change to the directory where the image tarballs were uploaded and run:

for i in `ls *.tar.gz`;do docker load -i $i;done

Tag the images:

docker images | grep "redhat.com" | awk '{print "docker tag "$3" "$1":"$2}' | \
    sed -e s/registry.access.redhat.com/ocp37master01.demo.com:10443/ | \
    xargs -i bash -c "{}"
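Optionally, before pushing, list the re-tagged images to confirm the tag step worked (not part of the original procedure):

docker images | grep "ocp37master01.demo.com:10443"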

Push the images to Harbor:

docker images | grep "demo.com" | \
    awk '{print "docker push "$1":"$2}' | \
    xargs -i bash -c "{}"

After the images have been uploaded, delete the original redhat.com images:

for i in `docker images | grep "redhat.com"| awk '{print $1":"$2}'`;do docker rmi $i;done

Import the docker.io images in the same way:

docker images | grep "docker.io" | awk '{print "docker tag "$3" "$1":"$2}' | \
    sed -e s/docker.io/ocp37master01.demo.com:10443/ | \
    xargs -i bash -c "{}"

docker push ocp37master01.demo.com:10443/openshift/hello-openshift:latest

docker push ocp37master01.demo.com:10443/sonatype/nexus3:latest

docker push ocp37master01.demo.com:10443/gogs/gogs:latest

After the newly tagged images have been uploaded, delete the original images:

for i in `docker images | grep "docker.io"| awk '{print $1":"$2}'`;do docker rmi $i;done

6 Pre-installation configuration (on the master node)

6.1 Configure SSH trust

Generate the key:

ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa

ssh-copy-id ocp37master01.demo.com

ssh-copy-id ocp37node01.demo.com

Note: if the ssh-copy-id command is not available, you can establish trust by appending the contents of id_rsa.pub on master01 to /root/.ssh/authorized_keys on the other nodes, as sketched below.
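A minimal sketch of that manual alternative, assuming password-based root login to the node is still possible:

cat /root/.ssh/id_rsa.pub | ssh root@ocp37node01.demo.com "mkdir -p /root/.ssh && cat >> /root/.ssh/authorized_keys && chmod 700 /root/.ssh && chmod 600 /root/.ssh/authorized_keys"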

6.2 Install Ansible

yum install -y openshift-ansible

6.3 Prepare /etc/ansible/hosts

[OSEv3:children]

masters

nodes

etcd

nfs

[OSEv3:vars]

ansible_ssh_user=root

#ansible_become=true

openshift_deployment_type=openshift-enterprise

openshift_master_identity_providers=[{'name':'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind':'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

openshift_disable_check=docker_image_availability,docker_storage,memory_availability

oreg_url=ocp37master01.demo.com:10443/openshift3/ose-${component}:${version}

openshift_examples_modify_imagestreams=true

openshift_clock_enabled=true

openshift_service_catalog_image_version=v3.7

openshift_service_catalog_image_prefix=ocp37master01.demo.com:10443/openshift3/ose-

openshift_hosted_router_replicas=1

openshift_hosted_router_selector='router=yes'

openshift_master_default_subdomain=apps.demo.com

openshift_hosted_etcd_storage_kind=nfs

openshift_hosted_etcd_storage_nfs_options="*(rw,root_squash,sync,no_wdelay)"

openshift_hosted_etcd_storage_nfs_directory=/exports

openshift_hosted_etcd_storage_volume_name=etcd-vol1

openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]

openshift_hosted_etcd_storage_volume_size=1Gi

openshift_hosted_etcd_storage_labels={'storage':'etcd'}

ansible_service_broker_etcd_image_prefix=ocp37master01.demo.com:10443/rhel7/

ansible_service_broker_image_prefix=ocp37master01.demo.com:10443/openshift3/ose-

#ansible_service_broker_image_tag=v3.7

template_service_broker_prefix=ocp37master01.demo.com:10443/openshift3/

template_service_broker_version=v3.7

#template_service_broker_image_name=ose

openshift_metrics_install_metrics=true

openshift_metrics_hawkular_hostname=hawkular-metrics.apps.demo.com

#openshift_metrics_cassandra_storage_type=emptydir

openshift_metrics_cassandra_storage_type=nfs

openshift_metrics_storage_kind=nfs

openshift_metrics_storage_access_modes=['ReadWriteOnce']

openshift_metrics_storage_nfs_directory=/exports

openshift_metrics_storage_nfs_options='*(rw,root_squash)'

openshift_metrics_storage_volume_name=metrics

openshift_metrics_storage_volume_size=10Gi

openshift_metrics_image_prefix=ocp37master01.demo.com:10443/openshift3/

openshift_logging_install_logging=true

openshift_logging_storage_kind=nfs

openshift_logging_storage_access_modes=['ReadWriteOnce']

openshift_logging_storage_nfs_directory=/exports

openshift_logging_storage_nfs_options='*(rw,root_squash)'

openshift_logging_storage_volume_name=logging

openshift_logging_storage_volume_size=12Gi

openshift_logging_image_prefix=ocp37master01.demo.com:10443/openshift3/

# host group for masters

[masters]

ocp37master01.demo.com

# host group for etcd

[etcd]

ocp37master01.demo.com

# host group for nodes, includes region info

[nodes]

ocp37master01.demo.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}"

ocp37node01.demo.com openshift_node_labels="{'region': 'infra', 'router': 'yes', 'zone': 'default'}" openshift_schedulable=true

[nfs]

ocp37master01.demo.com

7 Install OCP

7.1 Pre-installation checks

ansible -m shell -a 'hostname' nodes

ansible -m shell -a 'docker pull ocp37master01.demo.com/openshift3/ose-pod:v3.7.9' nodes

ansible -m shell -a 'yum repolist' nodes
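A basic Ansible connectivity test of all inventory hosts can also be added here (not part of the original checklist):

ansible nodes -m ping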

7.2 Run the installation

ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml

Confirm after the installation completes:

oc get node

oc get pod --all-namespaces

8 Post-installation configuration

8.1 Create the admin user

touch /etc/origin/master/htpasswd;

htpasswd -b /etc/origin/master/htpasswd admin admin;

oadm policy add-cluster-role-to-user cluster-admin admin;

systemctl restart atomic-openshift-master-api.service

systemctl restart atomic-openshift-master-controllers.service
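To verify the new account, a quick check not shown in the original text (assuming the default master API port 8443):

oc login -u admin -p admin https://ocp37master01.demo.com:8443    # accept the certificate prompt if asked
oc whoami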

9 Appendix

9.1 Monitor Harbor with a script

When Harbor and the master are installed on the same node, you can hit a problem during the OpenShift installation: the installer restarts the Docker service on the master, some of the Harbor containers exit, and the registry goes down. A shell script can monitor the state of the Harbor containers and, whenever it finds an exited container, run docker-compose to start them again.

cat << EOF > /opt/harbor/check.sh
#!/bin/bash
set -v
while true; do
    if docker-compose -f /opt/harbor/docker-compose.yml ps | grep Exit; then
        echo "------------------execute script-----------------------"
        docker-compose -f /opt/harbor/docker-compose.yml start
    fi
    sleep 5
done
EOF

chmod +x /opt/harbor/check.sh

/opt/harbor/check.sh

From the output of this check script you can see that Docker is restarted twice during the OpenShift installation.

Note: stop this check once the installation has completed.
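Since the script blocks the terminal, one option (not in the original text) is to run it in the background for the duration of the installation and kill it afterwards:

nohup /opt/harbor/check.sh > /var/log/harbor-check.log 2>&1 &
# after the OpenShift installation completes:
pkill -f /opt/harbor/check.sh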

9.2 References

https://github.com/nichochen/openshift-docs/blob/master/openshift-3.7-1master-cn.md

https://docs.openshift.com/container-platform/3.7/install_config/install/advanced_install.html

https://github.com/openshift/openshift-ansible
