Kubernetes high availability means keeping the API Server on the Master nodes highly available. The API Server is the single entry point for creating, reading, updating and deleting every Kubernetes resource object, and acts as the data bus and data hub of the whole system. Putting a load balancer in front of the two Master nodes provides a stable container-cloud service.
| Hostname | IP address | OS | Main software |
| --- | --- | --- | --- |
| k8s-master01 | 192.168.200.111 | CentOS 7.x | Etcd + Kubernetes |
| k8s-master02 | 192.168.200.112 | CentOS 7.x | Etcd + Kubernetes |
| k8s-node01 | 192.168.200.113 | CentOS 7.x | Etcd + Kubernetes + Flannel + Docker |
| k8s-node02 | 192.168.200.114 | CentOS 7.x | Etcd + Kubernetes + Flannel + Docker |
| k8s-lb01 | 192.168.200.115 | CentOS 7.x | Nginx + Keepalived |
| k8s-lb02 | 192.168.200.116 | CentOS 7.x | Nginx + Keepalived |
The VIP of the LB cluster is 192.168.200.200.
Configure the IP address, gateway, DNS (Alibaba Cloud's 223.5.5.5 is recommended) and other basic network settings on every host. Use static IP addresses, otherwise an address change can leave the cluster unable to reach the API Server and make Kubernetes unavailable.
Set the hostname on every host and add name-resolution records. The steps below use the k8s-master01 host as the example.
[root@localhost ~]# hostnamectl set-hostname k8s-master01
[root@localhost ~]# bash
[root@k8s-master01 ~]# cat <<EOF>> /etc/hosts
192.168.200.111 k8s-master01
192.168.200.112 k8s-master02
192.168.200.113 k8s-node01
192.168.200.114 k8s-node02
192.168.200.115 k8s-lb01
192.168.200.116 k8s-lb02
EOF
[root@k8s-master01 ~]# iptables -F
[root@k8s-master01 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@k8s-master01 ~]# setenforce 0
[root@k8s-master01 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
Create the directory /k8s on k8s-master01 and upload the prepared etcd-cert.sh and etcd.sh scripts into it. etcd-cert.sh generates the etcd certificates; etcd.sh sets up the etcd service, i.e. it writes the configuration file and the startup (systemd) unit.
[root@k8s-master01 ~]# mkdir /k8s [root@k8s-master01 ~]# cd /k8s/ [root@k8s-master01 k8s]# ls etcd-cert.sh etcd.sh
Create the directory /k8s/etcd-cert and keep all certificates there, which makes them easier to manage.
[root@k8s-master01 k8s]# mkdir etcd-cert [root@k8s-master01 k8s]# mv etcd-cert.sh etcd-cert
Upload the cfssl, cfssl-certinfo and cfssljson binaries, place them in /usr/local/bin and make them executable.
[root@k8s-master01 k8s]# ls    # upload the cfssl, cfssl-certinfo and cfssljson binaries (certificate generation tools)
cfssl cfssl-certinfo cfssljson etcd-cert etcd.sh
[root@k8s-master01 k8s]# mv cfssl* /usr/local/bin/
[root@k8s-master01 k8s]# chmod +x /usr/local/bin/cfssl*
[root@k8s-master01 k8s]# ls -l /usr/local/bin/cfssl*
-rwxr-xr-x 1 root root 10376657 Jul 21 2020 /usr/local/bin/cfssl
-rwxr-xr-x 1 root root 6595195 Jul 21 2020 /usr/local/bin/cfssl-certinfo
-rwxr-xr-x 1 root root 2277873 Jul 21 2020 /usr/local/bin/cfssljson
Create the CA and server certificates.
[root@k8s-master01 ~]# cd /k8s/etcd-cert/
[root@k8s-master01 etcd-cert]# cat etcd-cert.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [          # these are the etcd node IP addresses (note: no comma after the last one)
    "192.168.200.111",
    "192.168.200.112",
    "192.168.200.113",
    "192.168.200.114"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
[root@k8s-master01 etcd-cert]# bash etcd-cert.sh
2021/01/28 15:20:19 [INFO] generating a new CA key and certificate from CSR
2021/01/28 15:20:19 [INFO] generate received request
2021/01/28 15:20:19 [INFO] received CSR
2021/01/28 15:20:19 [INFO] generating key: rsa-2048
2021/01/28 15:20:19 [INFO] encoded CSR
2021/01/28 15:20:19 [INFO] signed certificate with serial number 165215637414524108023506135876170750574821614462
2021/01/28 15:20:19 [INFO] generate received request
2021/01/28 15:20:19 [INFO] received CSR
2021/01/28 15:20:19 [INFO] generating key: rsa-2048
2021/01/28 15:20:19 [INFO] encoded CSR
2021/01/28 15:20:19 [INFO] signed certificate with serial number 423773750965483892371547928227340126131739080799
2021/01/28 15:20:19 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@k8s-master01 etcd-cert]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem etcd-cert.sh server.csr server-csr.json server-key.pem server.pem
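Before moving on, it can be worth confirming that the server certificate really carries all four etcd node IPs in its SAN list. A quick check with the cfssl-certinfo tool installed earlier (the grep range is just a convenience, adjust as needed):

[root@k8s-master01 etcd-cert]# cfssl-certinfo -cert server.pem | grep -A6 '"sans"'

The "sans" field in the JSON output should list 192.168.200.111 through 192.168.200.114, exactly as written in server-csr.json.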
[root@k8s-master01 ~]# cd /k8s/
Upload the etcd-v3.3.18-linux-amd64.tar.gz package.
[root@k8s-master01 k8s]# ls etcd etcd-cert/ etcd.sh etcd-v3.3.18-linux-amd64/ etcd-v3.3.18-linux-amd64.tar.gz [root@k8s-master01 k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p [root@k8s-master01 k8s]# cd etcd-v3.3.18-linux-amd64 [root@k8s-master01 etcd-v3.3.18-linux-amd64]# ls Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md [root@k8s-master01 etcd-v3.3.18-linux-amd64]# mv etcd etcdctl /opt/etcd/bin/ [root@k8s-master01 etcd-v3.3.18-linux-amd64]# cp /k8s/etcd-cert/*.pem /opt/etcd/ssl/ [root@k8s-master01 etcd-v3.3.18-linux-amd64]# ls /opt/etcd/ssl/ ca-key.pem ca.pem server-key.pem server.pem
[root@k8s-master01 etcd-v3.3.18-linux-amd64]# cd /k8s/ [root@k8s-master01 k8s]# bash etcd.sh etcd01 192.168.200.111 etcd02=https://192.168.200.112:2380,etcd03=https://192.168.200.113:2380,etcd04=https://192.168.200.114:2380 Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
The script appears to hang while starting the etcd service, because this first member keeps trying to reach the other members, which are not running yet. The service is in fact started (the process exists), so it is safe to interrupt with Ctrl+C and continue.
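The etcd.sh script itself is not reproduced in this article. Based on how it is invoked above (bash etcd.sh etcd01 192.168.200.111 etcd02=...,etcd03=...,etcd04=...) and on the configuration files shown below, a minimal sketch of what such a script typically does follows; the variable names and layout are assumptions, not necessarily the author's exact script:

#!/bin/bash
# Sketch only: generate /opt/etcd/cfg/etcd and a systemd unit from three positional arguments.
ETCD_NAME=$1        # e.g. etcd01
ETCD_IP=$2          # e.g. 192.168.200.111
ETCD_CLUSTER=$3     # e.g. etcd02=https://192.168.200.112:2380,etcd03=...,etcd04=...
WORK_DIR=/opt/etcd

cat <<EOF >${WORK_DIR}/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},https://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

The generated /opt/etcd/cfg/etcd matches the configuration files that are adjusted on the other nodes below.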
[root@k8s-master01 k8s]# scp -r /opt/etcd/ root@k8s-master02:/opt/ [root@k8s-master01 k8s]# scp -r /opt/etcd/ root@k8s-node01:/opt/ [root@k8s-master01 k8s]# scp -r /opt/etcd/ root@k8s-node02:/opt/ [root@k8s-master01 k8s]# scp /usr/lib/systemd/system/etcd.service root@k8s-master02:/usr/lib/systemd/system/ [root@k8s-master01 k8s]# scp /usr/lib/systemd/system/etcd.service root@k8s-node01:/usr/lib/systemd/system/ [root@k8s-master01 k8s]# scp /usr/lib/systemd/system/etcd.service root@k8s-node02:/usr/lib/systemd/system/
The other nodes need to adjust the copied configuration before starting etcd.
[root@k8s-master02 ~]# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"                                               # change to the local member name
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.200.112:2380"             # change to the local IP address
ETCD_LISTEN_CLIENT_URLS="https://192.168.200.112:2379"           # change to the local IP address
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.200.112:2380"  # change to the local IP address
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.200.112:2379"        # change to the local IP address
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.200.111:2380,etcd02=https://192.168.200.112:2380,etcd03=https://192.168.200.113:2380,etcd04=https://192.168.200.114:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@k8s-node01 ~]# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.200.113:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.200.113:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.200.113:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.200.113:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.200.111:2380,etcd02=https://192.168.200.112:2380,etcd03=https://192.168.200.113:2380,etcd04=https://192.168.200.114:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@k8s-node02 ~]# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd04"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.200.114:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.200.114:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.200.114:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.200.114:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.200.111:2380,etcd02=https://192.168.200.112:2380,etcd03=https://192.168.200.113:2380,etcd04=https://192.168.200.114:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Run the following command on all four hosts: master01, master02, node01 and node02.
[root@k8s-master01 k8s]# systemctl daemon-reload && systemctl restart etcd && systemctl enable etcd
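Once etcd is running on all four machines, the cluster state can be verified from any member with etcdctl, using the same certificate flags that are used later when writing the Flannel configuration:

[root@k8s-master01 k8s]# cd /opt/etcd/ssl
[root@k8s-master01 ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379" cluster-health

A healthy cluster reports each member as healthy and ends with "cluster is healthy".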
Upload and unpack the master.zip package, which yields three scripts: apiserver.sh, controller-manager.sh and scheduler.sh. Make them executable; each Master component below is started through one of these scripts.
[root@k8s-master01 ~]# cd /k8s/ [root@k8s-master01 k8s]# unzip master.zip Archive: master.zip inflating: apiserver.sh inflating: controller-manager.sh inflating: scheduler.sh [root@k8s-master01 k8s]# chmod +x *.sh
Create /k8s/k8s-cert as the working directory for the self-signed certificates, so that everything is generated in one place. In /k8s/k8s-cert create the certificate-generation script k8s-cert.sh with the content shown below. Running it produces the CA certificate, the server certificate and key, the admin certificate, and the kube-proxy client certificate.
[root@k8s-master01 k8s]# mkdir /k8s/k8s-cert
[root@k8s-master01 k8s]# cd /k8s/k8s-cert/
[root@k8s-master01 k8s-cert]# vim k8s-cert.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.200.111",
    "192.168.200.112",
    "192.168.200.113",
    "192.168.200.114",
    "192.168.200.200",  # the four cluster host IPs plus the VIP 192.168.200.200: after the HA setup the nodes reach the masters through the VIP, so the certificate must also be valid for it
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Running the k8s-cert.sh script produces eight certificate files.
[root@k8s-master01 k8s-cert]# bash k8s-cert.sh 2021/01/28 16:34:13 [INFO] generating a new CA key and certificate from CSR 2021/01/28 16:34:13 [INFO] generate received request 2021/01/28 16:34:13 [INFO] received CSR 2021/01/28 16:34:13 [INFO] generating key: rsa-2048 2021/01/28 16:34:13 [INFO] encoded CSR 2021/01/28 16:34:13 [INFO] signed certificate with serial number 308439344193766038756929834816982880388926996986 2021/01/28 16:34:13 [INFO] generate received request 2021/01/28 16:34:13 [INFO] received CSR 2021/01/28 16:34:13 [INFO] generating key: rsa-2048 2021/01/28 16:34:14 [INFO] encoded CSR 2021/01/28 16:34:14 [INFO] signed certificate with serial number 75368861589931302301330401750480744629388496397 2021/01/28 16:34:14 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements"). 2021/01/28 16:34:14 [INFO] generate received request 2021/01/28 16:34:14 [INFO] received CSR 2021/01/28 16:34:14 [INFO] generating key: rsa-2048 2021/01/28 16:34:14 [INFO] encoded CSR 2021/01/28 16:34:14 [INFO] signed certificate with serial number 108292524112693440628246698004254871159937905177 2021/01/28 16:34:14 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements"). 2021/01/28 16:34:14 [INFO] generate received request 2021/01/28 16:34:14 [INFO] received CSR 2021/01/28 16:34:14 [INFO] generating key: rsa-2048 2021/01/28 16:34:14 [INFO] encoded CSR 2021/01/28 16:34:14 [INFO] signed certificate with serial number 262399212790704249587468309931495790220005272357 2021/01/28 16:34:14 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements"). [root@k8s-master01 k8s-cert]# ls *.pem admin-key.pem admin.pem ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem server-key.pem server.pem [root@k8s-master01 k8s-cert]# ls *.pem | wc -l 8
After the certificates are generated, copy the CA and server certificates into the Kubernetes working directory. Create /opt/kubernetes/{cfg,bin,ssl} to hold the configuration files, binaries and certificates respectively.
[root@k8s-master01 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p [root@k8s-master01 ~]# cd /k8s/k8s-cert/ [root@k8s-master01 k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/ [root@k8s-master01 k8s-cert]# ls /opt/kubernetes/ssl/ ca-key.pem ca.pem server-key.pem server.pem
Upload and unpack the Kubernetes server tarball, then copy the kube-apiserver, kubectl, kube-controller-manager and kube-scheduler binaries into /opt/kubernetes/bin/.
[root@k8s-master01 ~]# cd /k8s/ [root@k8s-master01 k8s]# tar xf kubernetes-server-linux-amd64.tar.gz [root@k8s-master01 k8s]# cd kubernetes/server/bin/ [root@k8s-master01 bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/ [root@k8s-master01 bin]# ls /opt/kubernetes/bin/ kube-apiserver kube-controller-manager kubectl kube-scheduler
Create a token file named token.csv in /opt/kubernetes/cfg/. It essentially defines a bootstrap user, an administrative identity through which Node machines join the cluster. Before creating it, generate a random serial number with head to use as the token. The fields of token.csv are:
48be2e8be6cca6e349d3e932768f5d71 is the token;
kubelet-bootstrap is the user name;
10001 is the user ID;
"system:kubelet-bootstrap" is the group the user is bound to.
[root@k8s-master01 bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' ' 48be2e8be6cca6e349d3e932768f5d71 [root@k8s-master01 ~]# vim /opt/kubernetes/cfg/token.csv 48be2e8be6cca6e349d3e932768f5d71,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Copy everything under /opt/kubernetes/ on k8s-master01 to k8s-master02.
[root@k8s-master01 ~]# ls -R /opt/kubernetes/
/opt/kubernetes/:      # three directories
bin cfg ssl
/opt/kubernetes/bin:   # the binaries
kube-apiserver kube-controller-manager kubectl kube-scheduler
/opt/kubernetes/cfg:   # the token file
token.csv
/opt/kubernetes/ssl:   # the certificates
ca-key.pem ca.pem server-key.pem server.pem
[root@k8s-master01 ~]# scp -r /opt/kubernetes/ root@k8s-master02:/opt
Run the apiserver.sh script with two positional parameters: the first is the local IP address, the second is the list of etcd cluster endpoints.
[root@k8s-master01 ~]# cd /k8s/
[root@k8s-master01 k8s]# bash apiserver.sh 192.168.200.111 https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@k8s-master01 k8s]# ps aux | grep [k]ube
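apiserver.sh is also only shown being invoked. A rough sketch of what it has to do, inferred from the two positional parameters and from the generated configuration shown further below for k8s-master02, could look like this (an assumption, not the author's exact script):

#!/bin/bash
# Sketch: write /opt/kubernetes/cfg/kube-apiserver and a systemd unit, then start the service.
MASTER_ADDRESS=$1     # local API Server address
ETCD_SERVERS=$2       # comma-separated etcd endpoints

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

The generated configuration file matches the kube-apiserver file that is edited on k8s-master02 in the next steps.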
Check that the secure port 6443 and the local insecure HTTP port 8080 are listening on k8s-master01.
[root@k8s-master01 k8s]# netstat -anpt | grep -E "6443|8080" tcp 0 0 192.168.200.111:6443 0.0.0.0:* LISTEN 39105/kube-apiserve tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 39105/kube-apiserve tcp 0 0 192.168.200.111:46832 192.168.200.111:6443 ESTABLISHED 39105/kube-apiserve tcp 0 0 192.168.200.111:6443 192.168.200.111:46832 ESTABLISHED 39105/kube-apiserve
Copy the kube-apiserver configuration file and the token.csv file from the /opt/kubernetes/cfg/ working directory to k8s-master02. On k8s-master02, edit the kube-apiserver configuration and change bind-address and advertise-address to the local address.
[root@k8s-master01 k8s]# scp /opt/kubernetes/cfg/* root@k8s-master02:/opt/kubernetes/cfg/
On k8s-master02:
KUBE_APISERVER_OPTS="--logtostderr=true \ --v=4 \ --etcd-servers=https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379 \ --bind-address=192.168.200.112 \ # 修改为相应的IP地址 --secure-port=6443 \ --advertise-address=192.168.200.112 \ # 修改为相应的IP地址 --allow-privileged=true \ --service-cluster-ip-range=10.0.0.0/24 \ --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \ --authorization-mode=RBAC,Node \ --kubelet-https=true \ --enable-bootstrap-token-auth \ --token-auth-file=/opt/kubernetes/cfg/token.csv \ --service-node-port-range=30000-50000 \ --tls-cert-file=/opt/kubernetes/ssl/server.pem \ --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \ --client-ca-file=/opt/kubernetes/ssl/ca.pem \ --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \ --etcd-cafile=/opt/etcd/ssl/ca.pem \ --etcd-certfile=/opt/etcd/ssl/server.pem \ --etcd-keyfile=/opt/etcd/ssl/server-key.pem" [root@k8s-master02 ~]# vim /opt/kubernetes/cfg/kube-apiserver
Copy the kube-apiserver.service unit from k8s-master01 to /usr/lib/systemd/system on k8s-master02, then start the API Server on k8s-master02 and check the listening ports.
[root@k8s-master01 k8s]# scp /usr/lib/systemd/system/kube-apiserver.service root@k8s-master02:/usr/lib/systemd/system
On k8s-master02:
[root@k8s-master02 ~]# systemctl start kube-apiserver && systemctl enable kube-apiserver
[root@k8s-master02 ~]# netstat -anptu | grep -E "6443|8080"
tcp 0 0 192.168.200.112:6443 0.0.0.0:* LISTEN 544/kube-apiserver
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 544/kube-apiserver
[root@k8s-master01 ~]# cd /k8s/ [root@k8s-master01 k8s]# ./scheduler.sh 127.0.0.1 Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service. [root@k8s-master01 k8s]# ps aux | grep [k]ube
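scheduler.sh takes the API Server address as its only argument (127.0.0.1 here, because the scheduler runs next to the API Server and talks to the local insecure 8080 port). The configuration it writes is not shown in the article; a typical version looks roughly like this (an assumption, not necessarily the author's exact script):

#!/bin/bash
# Sketch: the scheduler only needs the local API Server endpoint and leader election.
MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

controller-manager.sh follows the same pattern for the Controller Manager.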
Copy the kube-scheduler configuration file and the kube-scheduler.service unit from k8s-master01 to k8s-master02, then start the Scheduler on k8s-master02.
[root@k8s-master01 k8s]# scp /opt/kubernetes/cfg/kube-scheduler root@k8s-master02:/opt/kubernetes/cfg/
[root@k8s-master01 k8s]# scp /usr/lib/systemd/system/kube-scheduler.service root@k8s-master02:/usr/lib/systemd/system
On k8s-master02:
[root@k8s-master02 ~]# systemctl start kube-scheduler [root@k8s-master02 ~]# systemctl enable kube-scheduler
Start the Controller Manager service on k8s-master01.
[root@k8s-master01 k8s]# ./controller-manager.sh 127.0.0.1 Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
Copy the kube-controller-manager configuration file and the kube-controller-manager.service unit from k8s-master01 to k8s-master02, then start the Controller Manager on k8s-master02.
[root@k8s-master01 k8s]# scp /opt/kubernetes/cfg/kube-controller-manager root@k8s-master02:/opt/kubernetes/cfg/ [root@k8s-master01 k8s]# scp /usr/lib/systemd/system/kube-controller-manager.service root@k8s-master02:/usr/lib/systemd/system
On k8s-master02:
[root@k8s-master02 ~]# systemctl start kube-controller-manager [root@k8s-master02 ~]# systemctl enable kube-controller-manager
Check the component status on both k8s-master01 and k8s-master02.
[root@k8s-master01 k8s]# /opt/kubernetes/bin/kubectl get cs NAME STATUS MESSAGE ERROR scheduler Healthy ok controller-manager Healthy ok etcd-0 Healthy {"health":"true"} etcd-1 Healthy {"health":"true"} etcd-2 Healthy {"health":"true"} etcd-3 Healthy {"health":"true"}
[root@k8s-master02 ~]# /opt/kubernetes/bin/kubectl get cs NAME STATUS MESSAGE ERROR scheduler Healthy ok controller-manager Healthy ok etcd-1 Healthy {"health":"true"} etcd-0 Healthy {"health":"true"} etcd-3 Healthy {"health":"true"} etcd-2 Healthy {"health":"true"}
The following must be done on both node hosts; k8s-node01 is used as the example.
Install docker-ce.
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo yum install -y yum-utils device-mapper-persistent-data lvm2 yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo yum makecache fast yum -y install docker-ce systemctl start docker && systemctl enable docker docker version
Configure the Alibaba Cloud registry mirror (image accelerator).
tee /etc/docker/daemon.json <<-'EOF' { "registry-mirrors": ["https://vbgix9o1.mirror.aliyuncs.com"] } EOF systemctl daemon-reload && systemctl restart docker docker info
Docker is now installed on both node hosts, but containers on different nodes still need the Flannel network component to reach each other.
First, write the subnet to be allocated into etcd so Flannel can use it; the routing and source/destination encapsulation information of the overlay is also stored in etcd.
The etcdctl command below lists the cluster endpoints separated by commas and uses set to store the network configuration as a key/value pair: the Pod network is 172.17.0.0/16 and the backend type is vxlan.
Afterwards, check the docker0 interface on both node hosts; the Docker bridge address should fall inside the 172.17.0.0/16 network.
[root@k8s-master01 ~]# cd /k8s/etcd-cert/
[root@k8s-master01 etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379" set /coreos.com/network/config '{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}'
{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}
[root@k8s-node01 ~]# ifconfig docker0 docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500 inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255 ether 02:42:d6:c7:05:8b txqueuelen 0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Check the network configuration that was written:
[root@k8s-master01 etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379" get /coreos.com/network/config
{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}
Upload the flannel-v0.12.0-linux-amd64.tar.gz package to both node hosts and unpack it.
Perform the following on both node hosts.
[root@k8s-node01 ~]# tar xf flannel-v0.12.0-linux-amd64.tar.gz
Create the Kubernetes working directory on the node hosts and move the flanneld binary and the mk-docker-opts.sh script into it.
[root@k8s-node01 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p [root@k8s-node01 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
Copy the prepared flannel.sh script to both node hosts. It starts the Flannel service and creates its configuration: the configuration file /opt/kubernetes/cfg/flanneld records the etcd endpoints and the certificate and key files needed for authentication, and the unit file /usr/lib/systemd/system/flanneld.service registers flanneld as a system service managed by systemd.
Using k8s-node01 as the example:
[root@k8s-node01 ~]# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.200.113:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.200.113:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.200.113:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.200.113:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.200.111:2380,etcd02=https://192.168.200.112:2380,etcd03=https://192.168.200.113:2380,etcd04=https://192.168.200.114:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@k8s-node01 ~]# cat flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
[root@k8s-node01 ~]# bash flannel.sh https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379
Both node hosts must configure Docker to use Flannel for networking. docker.service needs two changes: add EnvironmentFile=/run/flannel/subnet.env so Docker picks up the subnet Flannel allocated, and add the $DOCKER_NETWORK_OPTIONS variable to the dockerd command line. Both changes follow the upstream documentation. k8s-node01 is used as the example.
[root@k8s-node01 ~]# vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env                                                                    # add this line
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock   # add the $DOCKER_NETWORK_OPTIONS variable
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target
The two node hosts were assigned the subnets 172.17.11.1/24 and 172.17.100.1/24 respectively; bip is the bridge IP Docker is told to use at startup.
[root@k8s-node01 ~]# cat /run/flannel/subnet.env DOCKER_OPT_BIP="--bip=172.17.11.1/24" DOCKER_OPT_IPMASQ="--ip-masq=false" DOCKER_OPT_MTU="--mtu=1450" DOCKER_NETWORK_OPTIONS=" --bip=172.17.11.1/24 --ip-masq=false --mtu=1450"
[root@k8s-node02 ~]# cat /run/flannel/subnet.env DOCKER_OPT_BIP="--bip=172.17.100.1/24" DOCKER_OPT_IPMASQ="--ip-masq=false" DOCKER_OPT_MTU="--mtu=1450" DOCKER_NETWORK_OPTIONS=" --bip=172.17.100.1/24 --ip-masq=false --mtu=1450"
After editing the unit file on both node hosts, restart the Docker service and check the docker0 interface on each node.
[root@k8s-node01 ~]# systemctl daemon-reload && systemctl restart docker [root@k8s-node01 ~]# ip add s docker0 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:d6:c7:05:8b brd ff:ff:ff:ff:ff:ff inet 172.17.11.1/24 brd 172.17.11.255 scope global docker0 valid_lft forever preferred_lft forever
[root@k8s-node02 ~]# systemctl daemon-reload && systemctl restart docker [root@k8s-node02 ~]# ip add s docker0 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:b8:77:89:4a brd ff:ff:ff:ff:ff:ff inet 172.17.100.1/24 brd 172.17.100.255 scope global docker0 valid_lft forever preferred_lft forever
Run a busybox container on each node host (busybox bundles more than three hundred common Linux commands and tools, and is used here purely for testing).
Inside the containers, the one on k8s-node01 gets the address 172.17.11.2 and the one on k8s-node02 gets 172.17.100.2, matching the subnets recorded in /run/flannel/subnet.env.
Then test with ping: if the container on k8s-node02 can ping the IP address of the container on k8s-node01, two independent containers on different hosts can communicate and the Flannel overlay is working.
[root@k8s-node01 ~]# docker pull busybox [root@k8s-node01 ~]# docker run -it busybox /bin/sh / # ip addr show eth0 9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue link/ether 02:42:ac:11:0b:03 brd ff:ff:ff:ff:ff:ff inet 172.17.11.2/24 brd 172.17.11.255 scope global eth0 valid_lft forever preferred_lft forever
[root@k8s-node02 ~]# docker pull busybox [root@k8s-node02 ~]# docker run -it busybox /bin/sh / # ip a s eth0 7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue link/ether 02:42:ac:11:64:02 brd ff:ff:ff:ff:ff:ff inet 172.17.100.2/24 brd 172.17.100.255 scope global eth0 valid_lft forever preferred_lft forever / # ping -c 4 172.17.11.2 PING 172.17.11.2 (172.17.11.2): 56 data bytes 64 bytes from 172.17.11.2: seq=0 ttl=62 time=1.188 ms 64 bytes from 172.17.11.2: seq=1 ttl=62 time=0.598 ms 64 bytes from 172.17.11.2: seq=2 ttl=62 time=0.564 ms 64 bytes from 172.17.11.2: seq=3 ttl=62 time=0.372 ms --- 172.17.11.2 ping statistics --- 4 packets transmitted, 4 packets received, 0% packet loss round-trip min/avg/max = 0.372/0.680/1.188 ms
Copy the kubelet and kube-proxy binaries from k8s-master01 to both node hosts.
[root@k8s-master01 ~]# cd /k8s/kubernetes/server/bin/ [root@k8s-master01 bin]# scp kubelet kube-proxy root@k8s-node01:/opt/kubernetes/bin/ [root@k8s-master01 bin]# scp kubelet kube-proxy root@k8s-node02:/opt/kubernetes/bin/
Upload node.zip to both node hosts and unpack it to obtain the proxy.sh and kubelet.sh scripts.
[root@k8s-node01 ~]# unzip node.zip Archive: node.zip inflating: proxy.sh inflating: kubelet.sh [root@k8s-node02 ~]# unzip node.zip Archive: node.zip inflating: proxy.sh inflating: kubelet.sh
Create the kubeconfig working directory on k8s-master01 and upload the kubeconfig.sh script into /k8s/kubeconfig/. The script creates the TLS bootstrapping token, generates the kubelet bootstrapping kubeconfig, sets the cluster, client-authentication and context parameters, switches to the default context, and creates the kube-proxy kubeconfig file.
Look up the token generated earlier and update the BOOTSTRAP_TOKEN value in kubeconfig.sh with it.
[root@k8s-master01 ~]# mkdir /k8s/kubeconfig
[root@k8s-master01 ~]# cd /k8s/kubeconfig/
[root@k8s-master01 kubeconfig]# cat /opt/kubernetes/cfg/token.csv
48be2e8be6cca6e349d3e932768f5d71,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@k8s-master01 kubeconfig]# vim kubeconfig.sh
# create the TLS Bootstrapping Token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=48be2e8be6cca6e349d3e932768f5d71
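Only the token section of kubeconfig.sh is shown above. The rest of the script is a sequence of standard kubectl config commands; roughly, and with variable names assumed (the invocation below passes the API Server IP as $1 and the certificate directory as $2):

APISERVER=$1
SSL_DIR=$2
KUBE_APISERVER="https://${APISERVER}:6443"

# bootstrap.kubeconfig for kubelet TLS bootstrapping
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# kube-proxy.kubeconfig, authenticated with the kube-proxy client certificate
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

This matches the "Cluster set / User set / Context created / Switched to context" messages printed when the script is executed below.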
For convenience, add /opt/kubernetes/bin/ to the PATH environment variable on both k8s-master01 and k8s-master02.
[root@k8s-master01 ~]# echo "export PATH=$PATH:/opt/kubernetes/bin/" >> /etc/profile [root@k8s-master01 ~]# source /etc/profile [root@k8s-master02 ~]# echo "export PATH=$PATH:/opt/kubernetes/bin/" >> /etc/profile [root@k8s-master02 ~]# source /etc/profile
Rename kubeconfig.sh to kubeconfig and execute it with bash. The first argument is the current API Server IP, which is written into the generated configuration; the second argument is the directory containing the Kubernetes certificates. Execution produces the bootstrap.kubeconfig and kube-proxy.kubeconfig files.
[root@k8s-master01 ~]# cd /k8s/kubeconfig/ [root@k8s-master01 kubeconfig]# mv kubeconfig.sh kubeconfig [root@k8s-master01 kubeconfig]# bash kubeconfig 192.168.200.111 /k8s/k8s-cert/ Cluster "kubernetes" set. User "kubelet-bootstrap" set. Context "default" created. Switched to context "default". Cluster "kubernetes" set. User "kube-proxy" set. Context "default" created. Switched to context "default". [root@k8s-master01 kubeconfig]# ls bootstrap.kubeconfig kubeconfig kube-proxy.kubeconfig token.csv
Copy the bootstrap.kubeconfig and kube-proxy.kubeconfig files to both node hosts.
[root@k8s-master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@k8s-node01:/opt/kubernetes/cfg/ [root@k8s-master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@k8s-node02:/opt/kubernetes/cfg/
Create the bootstrap cluster role binding and grant it the required permission; this is what allows kubelets to ask the API Server to sign their certificates (a key step). Then check bootstrap.kubeconfig on k8s-node01: when kubelet starts and wants to join the cluster, it sends a certificate signing request to the API Server, and the kubeconfig tells it which address and port to use for that request.
[root@k8s-master01 kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
[root@k8s-node01 ~]# cat /opt/kubernetes/cfg/bootstrap.kubeconfig apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVTmdibUJxMkJpRkF5Z1lEVFpvb1p1a3V4QWZvd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEl4TURFeU9EQTRNamt3TUZvWERUSTJNREV5TnpBNE1qa3dNRm93WlRFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbGFXcHBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByZFdKbGNtNWxkR1Z6Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBMk1GRzYyVDdMTC9jbnpvNGx4V0gKZGNOVnVkblkzRTl0S2ZvbThNaVZKcFRtVUhlYUhoczY2M1loK1VWSklnWXkwVXJzWGRyc2VPWDg2Nm9PcEN1NQpUajRKbEUxbXQ5b1NlOEhLeFVhYkRqVjlwd05WQm1WSllCOEZIMnZVaTZVZEVpOVNnVXF2OTZIbThBSUlFbTFhCmpLREc2QXRJRWFZdFpJQ1MyeVg5ZStPVXVCUUtkcDBCcGdFdUxkMko5OEpzSjkrRzV6THc5bWdab0t5RHBEeHUKVHdGRC9HK2k5Vk9mbTh7ZzYzVzRKMUJWL0RLVXpTK1Q3NEs0S3I5ZmhDbHp4ZVo3bXR1eXVxUkM2c1lrcXpBdApEbklmNzB1QWtPRzRYMU52eUhjVmQ5Rzg4ZEM3NDNSbFZGZGNvbzFOM0hoZ1FtaG12ZXdnZ0tQVjZHWGwwTkJnCkx3SURBUUFCbzJZd1pEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0VnWURWUjBUQVFIL0JBZ3dCZ0VCL3dJQkFqQWQKQmdOVkhRNEVGZ1FVRVJ0cngrWHB4andVQWlKemJnUEQ2bGJOUlFFd0h4WURWUjBqQkJnd0ZvQVVFUnRyeCtYcAp4andVQWlKemJnUEQ2bGJOUlFFd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJWTBtdWxJK25BdE1KcWpSZXFnCmRuWk1Ya2U3ZGxIeHJKMmkvT3NaSXRoZUhYakMwMGdNWlRZSGV6WUxSKzl0MUNKV1lmUVdOR3V3aktvYitPaDUKMlE5SURUZmpJblhXcmU5VU5SNUdGNndNUDRlRzZreUVNbE9WcUc3L2tldERpNlRzRkZyZWJVY0FraEFnV0J1eApJWXJWb1ZhMFlCK3hhZk1KdTIzMnQ5VmtZZHovdm9jWGV1MHd1L096Z1dsUEJFNFBkSUVHRWprYW5yQTk5UCtGCjhSUkJudmVJcjR4S21iMlJIcEFYWENMRmdvNTc1c1hEQWNGbWswVm1KM2kzL3pPbmlsd3cwRmpFNFU2OVRmNWMKekhncE0vdmtLbG9aTjYySW44YUNtbUZTcmphcjJRem1Ra3FwWHRsQmdoZThwUjQ3UWhiZS93OW5DWGhsYnVySgpzTzQ9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K server: https://192.168.200.200:6443 name: kubernetes contexts: - context: cluster: kubernetes user: kubelet-bootstrap name: default current-context: default kind: Config preferences: {} users: - name: kubelet-bootstrap user: token: 48be2e8be6cca6e349d3e932768f5d71
Run the kubelet.sh script on both node hosts and confirm with ps that the service started. After starting, kubelet automatically contacts the API Server to request a certificate. On k8s-master01, kubectl get csr shows the incoming requests; a Pending state means the request is waiting for the cluster to issue the certificate.
[root@k8s-node01 ~]# bash kubelet.sh 192.168.200.113 Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service. [root@k8s-node01 ~]# ps aux | grep [k]ube
[root@k8s-node02 ~]# bash kubelet.sh 192.168.200.114 Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service. [root@k8s-node02 ~]# ps aux | grep [k]ube
[root@k8s-master01 kubeconfig]# kubectl get csr NAME AGE REQUESTOR CONDITION node-csr-4R-rZXsEI6Zf6HnklxVGZooXPwS11_8mTG5H5czDXTM 105s kubelet-bootstrap Pending node-csr-Ha9B_rJVSOJ2OSTlOSiXwFbnwyLw_x4qUVfQfX-ks_4 48s kubelet-bootstrap Pending
Issue the certificates to both node hosts from k8s-master01. get csr then shows the certificates as approved and issued, and get node shows that both nodes have joined the cluster.
[root@k8s-master01 kubeconfig]# kubectl certificate approve node-csr-4R-rZXsEI6Zf6HnklxVGZooXPwS11_8mTG5H5czDXTM certificatesigningrequest.certificates.k8s.io/node-csr-4R-rZXsEI6Zf6HnklxVGZooXPwS11_8mTG5H5czDXTM approved [root@k8s-master01 kubeconfig]# kubectl certificate approve node-csr-Ha9B_rJVSOJ2OSTlOSiXwFbnwyLw_x4qUVfQfX-ks_4 certificatesigningrequest.certificates.k8s.io/node-csr-Ha9B_rJVSOJ2OSTlOSiXwFbnwyLw_x4qUVfQfX-ks_4 approved [root@k8s-master01 kubeconfig]# kubectl get csr NAME AGE REQUESTOR CONDITION node-csr-4R-rZXsEI6Zf6HnklxVGZooXPwS11_8mTG5H5czDXTM 5m44s kubelet-bootstrap Approved,Issued node-csr-Ha9B_rJVSOJ2OSTlOSiXwFbnwyLw_x4qUVfQfX-ks_4 4m47s kubelet-bootstrap Approved,Issued
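With only two nodes, approving each CSR by name is fine; with more nodes, all pending requests can be approved in one go (a convenience one-liner, not part of the original walkthrough):

[root@k8s-master01 kubeconfig]# kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve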
Run the proxy.sh script on both node hosts.
[root@k8s-node01 ~]# bash proxy.sh 192.168.200.113 Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service. [root@k8s-node01 ~]# systemctl status kube-proxy.service
[root@k8s-node02 ~]# bash proxy.sh 192.168.200.114 Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service. [root@k8s-node02 ~]# systemctl status kube-proxy
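proxy.sh receives the node IP as its only argument. Like the other helper scripts it writes a configuration file and a systemd unit; a minimal sketch (flag set assumed, not the author's exact script) is:

#!/bin/bash
# Sketch: kube-proxy authenticates with the kube-proxy.kubeconfig copied from the master.
NODE_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--cluster-cidr=172.17.0.0/16 \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

Here --cluster-cidr is set to the Flannel Pod network written into etcd earlier.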
[root@k8s-master02 ~]# kubectl get node NAME STATUS ROLES AGE VERSION 192.168.200.113 Ready <none> 25h v1.12.3 192.168.200.114 Ready <none> 25h v1.12.3
Building on NodePort, Kubernetes can also ask the underlying cloud platform to create a load balancer that uses every Node as a backend for service traffic; that mode requires cloud-provider support (for example GCE). Here, the external load balancer in front of the masters is built manually with Nginx and Keepalived instead.
Install and configure the Nginx service on the lb01 and lb02 hosts; k8s-lb01 is shown as the example.
[root@k8s-lb01 ~]# rpm -ivh epel-release-latest-7.noarch.rpm
[root@k8s-lb01 ~]# yum -y install nginx
[root@k8s-lb01 ~]# vim /etc/nginx/nginx.conf
events {
    worker_connections 1024;
}

stream {                        # layer-4 proxying; the stream block is a sibling of http, so do not nest it inside http
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {    # the upstream k8s-apiserver points at port 6443 on both masters
        server 192.168.200.111:6443;
        server 192.168.200.112:6443;
    }
    server {                    # listen on 6443 and proxy_pass to that upstream
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '    # the rest of the default http block stays unchanged
[root@k8s-lb01 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@k8s-lb01 ~]# systemctl start nginx && systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
Change the default page on the two Nginx nodes so they can be told apart, then access both LB nodes from a browser to verify.
[root@k8s-lb01 ~]# echo "This is Master Server" > /usr/share/nginx/html/index.html [root@k8s-lb02 ~]# echo "This is Backup Server" > /usr/share/nginx/html/index.html
[root@k8s-lb01 ~]# yum -y install keepalived
[root@k8s-lb01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.200
    }
    track_script {
        check_nginx
    }
}
[root@k8s-lb01 ~]# scp /etc/keepalived/keepalived.conf 192.168.200.116:/etc/keepalived/
[root@k8s-lb02 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP          # changed: this node is the backup
    interface ens32
    virtual_router_id 51
    priority 90           # changed: lower priority than the master
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.200
    }
    track_script {
        check_nginx
    }
}
Create the health-check script on both LB nodes. It counts the running nginx processes; when the count is 0, it stops the Keepalived service so that the VIP can fail over.
Run the following on both lb01 and lb02.
[root@k8s-lb01 ~]# vim /etc/nginx/check_nginx.sh
count=$(ps -ef|grep nginx|egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
[root@k8s-lb01 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@k8s-lb01 ~]# systemctl start keepalived && systemctl enable keepalived
Check the interface addresses: the floating VIP 192.168.200.200 is bound on k8s-lb01, while k8s-lb02 has no VIP.
[root@k8s-lb01 ~]# ip a s ens32 2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 00:0c:29:3b:05:a3 brd ff:ff:ff:ff:ff:ff inet 192.168.200.115/24 brd 192.168.200.255 scope global noprefixroute ens32 valid_lft forever preferred_lft forever inet 192.168.200.200/32 scope global ens32 valid_lft forever preferred_lft forever inet6 fe80::e88c:df62:6a14:b1f3/64 scope link tentative noprefixroute dadfailed valid_lft forever preferred_lft forever inet6 fe80::5d90:6146:c376:1e0f/64 scope link tentative noprefixroute dadfailed valid_lft forever preferred_lft forever inet6 fe80::ea69:6f19:80be:abc3/64 scope link noprefixroute valid_lft forever preferred_lft forever
[root@k8s-lb02 ~]# ip a s ens32 2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 00:0c:29:74:cc:8a brd ff:ff:ff:ff:ff:ff inet 192.168.200.116/24 brd 192.168.200.255 scope global noprefixroute ens32 valid_lft forever preferred_lft forever inet6 fe80::5d90:6146:c376:1e0f/64 scope link noprefixroute valid_lft forever preferred_lft forever inet6 fe80::e88c:df62:6a14:b1f3/64 scope link noprefixroute valid_lft forever preferred_lft forever inet6 fe80::ea69:6f19:80be:abc3/64 scope link tentative noprefixroute dadfailed valid_lft forever preferred_lft forever
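At this point the VIP already forwards to the two API Servers through the Nginx stream proxy, which can be checked from any host in the 192.168.200.0/24 network before the nodes are reconfigured, for example:

[root@k8s-node01 ~]# curl -k https://192.168.200.200:6443/version

Depending on whether anonymous requests are allowed, the reply is either the version JSON or a 401/403 error from kube-apiserver; either way, getting a Kubernetes response proves the VIP → Nginx → kube-apiserver path works.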
Verify failover: first stop the Nginx service on k8s-lb01. The floating IP disappears from k8s-lb01 and its Keepalived service is stopped by the check script, while the floating IP is now bound on k8s-lb02. After starting Nginx and Keepalived on k8s-lb01 again, the floating IP moves back to k8s-lb01.
[root@k8s-lb01 ~]# systemctl stop nginx [root@k8s-lb01 ~]# ps aux | grep [k]eepalived [root@k8s-lb02 ~]# ip a s ens32 2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 00:0c:29:74:cc:8a brd ff:ff:ff:ff:ff:ff inet 192.168.200.116/24 brd 192.168.200.255 scope global noprefixroute ens32 valid_lft forever preferred_lft forever inet 192.168.200.200/32 scope global ens32 valid_lft forever preferred_lft forever inet6 fe80::5d90:6146:c376:1e0f/64 scope link noprefixroute valid_lft forever preferred_lft forever inet6 fe80::e88c:df62:6a14:b1f3/64 scope link noprefixroute valid_lft forever preferred_lft forever inet6 fe80::ea69:6f19:80be:abc3/64 scope link tentative noprefixroute dadfailed valid_lft forever preferred_lft forever
Failure recovery test
[root@k8s-lb01 ~]# systemctl start nginx [root@k8s-lb01 ~]# systemctl start keepalived [root@k8s-lb01 ~]# ip a s ens32 2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 00:0c:29:3b:05:a3 brd ff:ff:ff:ff:ff:ff inet 192.168.200.115/24 brd 192.168.200.255 scope global noprefixroute ens32 valid_lft forever preferred_lft forever inet 192.168.200.200/32 scope global ens32 valid_lft forever preferred_lft forever inet6 fe80::e88c:df62:6a14:b1f3/64 scope link tentative noprefixroute dadfailed valid_lft forever preferred_lft forever inet6 fe80::5d90:6146:c376:1e0f/64 scope link tentative noprefixroute dadfailed valid_lft forever preferred_lft forever inet6 fe80::ea69:6f19:80be:abc3/64 scope link noprefixroute valid_lft forever preferred_lft forever
[root@k8s-lb02 ~]# ip add s ens32 2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 00:0c:29:74:cc:8a brd ff:ff:ff:ff:ff:ff inet 192.168.200.116/24 brd 192.168.200.255 scope global noprefixroute ens32 valid_lft forever preferred_lft forever inet6 fe80::e88c:df62:6a14:b1f3/64 scope link tentative noprefixroute dadfailed valid_lft forever preferred_lft forever inet6 fe80::5d90:6146:c376:1e0f/64 scope link tentative noprefixroute dadfailed valid_lft forever preferred_lft forever inet6 fe80::ea69:6f19:80be:abc3/64 scope link tentative noprefixroute dadfailed valid_lft forever preferred_lft forever
On both node hosts, edit the bootstrap.kubeconfig, kubelet.kubeconfig and kube-proxy.kubeconfig files. All three point at the API Server address; change it to the VIP.
Run the following on both node01 and node02.
[root@k8s-node01 ~]# cd /opt/kubernetes/cfg/
[root@k8s-node01 cfg]# vim bootstrap.kubeconfig
…… // content omitted
    server: https://192.168.200.111:6443      # change to https://192.168.200.200:6443
…… // content omitted
[root@k8s-node01 cfg]# vim kubelet.kubeconfig
…… // content omitted
    server: https://192.168.200.111:6443      # change to https://192.168.200.200:6443
…… // content omitted
[root@k8s-node01 cfg]# vim kube-proxy.kubeconfig
…… // content omitted
    server: https://192.168.200.111:6443      # change to https://192.168.200.200:6443
…… // content omitted
[root@k8s-node01 cfg]# grep 200.200 *
bootstrap.kubeconfig: server: https://192.168.200.200:6443
kubelet.kubeconfig: server: https://192.168.200.200:6443
kube-proxy.kubeconfig: server: https://192.168.200.200:6443
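Editing the three files by hand works; the same change can also be applied in one command on each node (a convenience alternative, not in the original article):

[root@k8s-node01 cfg]# sed -i 's#https://192.168.200.111:6443#https://192.168.200.200:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig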
Restart the related services on both node hosts. Run the following on both node01 and node02.
[root@k8s-node01 cfg]# systemctl restart kubelet [root@k8s-node01 cfg]# systemctl restart kube-proxy
Tail the Nginx access log on k8s-lb01: the entries show requests from both nodes being distributed across the two masters, confirming that load balancing works.
[root@k8s-lb01 ~]# tail -fn 200 /var/log/nginx/k8s-access.log 192.168.200.113 192.168.200.111:6443 - [29/Jan/2021:20:29:41 +0800] 200 1120 192.168.200.113 192.168.200.112:6443 - [29/Jan/2021:20:29:41 +0800] 200 1120 192.168.200.114 192.168.200.112:6443 - [29/Jan/2021:20:30:12 +0800] 200 1121 192.168.200.114 192.168.200.111:6443 - [29/Jan/2021:20:30:12 +0800] 200 1121
Create a Pod on k8s-master01 using the nginx image.
[root@k8s-node01 ~]# docker pull nginx [root@k8s-node02 ~]# docker pull nginx
[root@k8s-master01 ~]# kubectl run nginx --image=nginx kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead. deployment.apps/nginx created [root@k8s-master01 ~]# kubectl get pod NAME READY STATUS RESTARTS AGE nginx-dbddb74b8-9f5m6 1/1 Running 1 21h
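To reach this Pod from outside the cluster, the deployment can additionally be exposed as a NodePort service, which ties back to the NodePort discussion earlier (an optional extra step, not part of the original walkthrough):

[root@k8s-master01 ~]# kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
[root@k8s-master01 ~]# kubectl get svc nginx

kubectl get svc shows the assigned NodePort (inside the 30000-50000 range configured on the API Servers); the Nginx welcome page is then reachable on any node IP at that port.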
Grant permission to view Pod logs.
[root@k8s-master01 ~]# kubectl create clusterrolebinding cluseter-system-anonymous --clusterrole=cluster-admin --user=system:anonymous clusterrolebinding.rbac.authorization.k8s.io/cluseter-system-anonymous created
With the -o wide parameter the output includes networking details: the container IP is 172.17.11.2 and the Pod was scheduled onto the node with IP address 192.168.200.113.
[root@k8s-master01 ~]# kubectl get pods -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE nginx-dbddb74b8-9f5m6 1/1 Running 0 4m27s 172.17.11.2 192.168.200.113 <none>
[root@k8s-node01 ~]# ip a s flannel.1 4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default link/ether a6:29:7d:74:2d:1a brd ff:ff:ff:ff:ff:ff inet 172.17.11.0/32 scope global flannel.1 valid_lft forever preferred_lft forever inet6 fe80::a429:7dff:fe74:2d1a/64 scope link valid_lft forever preferred_lft forever
curl the Pod address 172.17.11.2. Each access is written to the Pod's log, which can then be read back on k8s-master01. The Pod is reachable from the other node as well.
[root@k8s-node01 ~]# curl 172.17.11.2 <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h2>Welcome to nginx!</h2> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html>
View the log output:
[root@k8s-master01 ~]# kubectl logs nginx-dbddb74b8-9f5m6 /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/ /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh /docker-entrypoint.sh: Configuration complete; ready for start up 172.17.11.1 - - [29/Jan/2021:12:58:28 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-" [root@k8s-master01 ~]# kubectl logs nginx-dbddb74b8-9f5m6 /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/ /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh /docker-entrypoint.sh: Configuration complete; ready for start up 172.17.11.1 - - [29/Jan/2021:12:58:28 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-" 172.17.11.1 - - [29/Jan/2021:12:59:41 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"