This article walks through installing a highly available Kubernetes cluster. The approach has proven practical, so it is shared here as a reference; hopefully you will take something useful away from it.
System requirements: 64-bit CentOS 7.6
Disable the firewall and SELinux.
Disable the operating system's swap partition (running Kubernetes with swap enabled is not recommended).
Pre-configure a hostname on every node; they only need to be unique.
Configure the first master with key-based, passwordless SSH login to all nodes (including itself).
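The passwordless-login prerequisite can be scripted. The sketch below only prints the `ssh-copy-id` commands (the node IPs are placeholders) so it is safe to run anywhere; on the real first master, first generate a key with `ssh-keygen -t rsa -N ""` and then run the printed lines.

```shell
# Placeholder node list -- replace with your master and worker IPs.
NODES="172.16.10.101 172.16.10.102 172.16.10.103"
KEY="$HOME/.ssh/id_rsa"
# Print (rather than execute) the key-distribution commands.
for node in $NODES; do
  echo "ssh-copy-id -i ${KEY}.pub root@${node}"
done
```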
The installation method in this guide is aimed at small-scale deployments.
Multi-master mode (at least three masters); keepalived must be installed on every master node.
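The "at least three" requirement comes from etcd quorum: with the stacked-etcd topology kubeadm uses here, a cluster of n masters stays writable only while floor(n/2)+1 etcd members are up, so two masters tolerate no more failures than one. The arithmetic:

```shell
# etcd quorum: quorum = n/2 + 1 (integer division);
# tolerated failures = n - quorum. Note two masters are no safer than one.
for n in 1 2 3 5; do
  quorum=$(( n / 2 + 1 ))
  echo "masters=$n quorum=$quorum tolerated_failures=$(( n - quorum ))"
done
```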
```shell
# Switch to the repo configuration directory
cd /etc/yum.repos.d/
# Configure the Aliyun mirror for docker-ce
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Configure the Aliyun mirror for Kubernetes
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
```shell
# Enable forwarding and bridged-traffic filtering for Kubernetes networking.
# Any file name under /etc/sysctl.d/ works; this guide uses ceph.conf.
cat <<EOF > /etc/sysctl.d/ceph.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```
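Before moving on, it is worth sanity-checking that every key in the fragment is set to 1. The sketch below writes the same three lines to a temp file so it runs anywhere; on a real node, point `CONF` at `/etc/sysctl.d/ceph.conf` instead (or read the live values from `/proc/sys`).

```shell
# Write the fragment to a temp file so the check is self-contained.
CONF="$(mktemp)"
cat <<EOF > "$CONF"
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Split each line on " = " and flag any key whose value is not 1.
awk -F' *= *' '$2 != 1 { bad=1; print "not enabled:", $1 }
               END { if (!bad) print "all keys set to 1" }' "$CONF"
```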
```shell
# Install docker-ce from the repo configured above
yum install docker-ce -y
# Install kubeadm, kubectl, and kubelet
yum install kubeadm kubectl kubelet -y
# Enable kubelet and docker on boot
systemctl enable docker kubelet
# Start docker
systemctl start docker
```
```shell
# If you have an external LB, skip keepalived and use the LB address directly.
# Run this on the initializing master first so the VIP attaches to it;
# otherwise stop keepalived on the other masters for now.
# After installation, add health checks to suit your needs.
yum install keepalived -y
# Back up the original keepalived config
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
# Generate a new config; adjust the commented fields on each master
cat <<EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id k8s-master1          # hostname of this master
   vrrp_mcast_group4 224.26.1.1
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 66
    nopreempt
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        10.20.1.8                 # the VIP
    }
}
EOF
# Enable on boot and start keepalived
systemctl enable keepalived
systemctl start keepalived
```
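After starting keepalived on each master, confirm the VIP actually landed on the initializing master. A small sketch (the helper name, interface, and the sample `ip addr` line are illustrative; on a real node, pipe `ip addr show eth0` into the helper):

```shell
# Hypothetical helper: succeeds when the given VIP appears in `ip addr` output on stdin.
has_vip() {
  grep -q "inet $1/" -
}

# Sample line for illustration; on a master run: ip addr show eth0 | has_vip 10.20.1.8
sample='    inet 10.20.1.8/32 scope global eth0'
if printf '%s\n' "$sample" | has_vip 10.20.1.8; then
  echo "VIP present on this node"
else
  echo "VIP not on this node"
fi
```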
```shell
cd && cat <<EOF > kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "172.29.2.188"                          # change to your VIP
controlPlaneEndpoint: "172.29.2.188:6443"   # change to your VIP
imageRepository: registry.cn-hangzhou.aliyuncs.com/peter1009
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
EOF
```
```shell
# Use the kubeadm.yaml generated in the previous step
kubeadm init --config kubeadm.yaml
```
```shell
# The previous step prints output like the following:
root@k8s4:~# kubeadm init --config kubeadm.yaml
I0522 06:20:13.352644    2622 version.go:96] could not fetch a Kubernetes version from
.........  (output omitted)
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498 \
    --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72 \
    --experimental-control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498 \
    --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72
```
```shell
# Quoting 'EOF' matters here: it stops the shell from expanding
# ${CONTROL_PLANE_IPS}, ${USER}, and $host while writing the file,
# which would otherwise produce a broken script.
cat <<'EOF' > copy.sh
CONTROL_PLANE_IPS="172.16.10.101 172.16.10.102"   # change to the IPs of your second/third masters
for host in ${CONTROL_PLANE_IPS}; do
    ssh $host mkdir -p /etc/kubernetes/pki/etcd
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:/etc/kubernetes/pki/etcd/ca.key
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
EOF
# This step will fail if passwordless SSH login has not been configured
bash -x copy.sh
```
```shell
# On the current node, run the commands from the init output so kubectl can reach the cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On the other master nodes (only after copy.sh has completed successfully),
# run the control-plane join command from the init output
kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498 \
    --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72 \
    --experimental-control-plane
```
```shell
# On the worker (non-master) nodes, run the worker join command from the init output
kubeadm join 172.16.10.114:6443 --token v2lv3k.aysjlmg3ylcl3498 \
    --discovery-token-ca-cert-hash sha256:87b69e590e9d59055c5a9c6651e333044c402dba877beb29906eddfeb0998d72
```
```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
```shell
root@k8s4:~# kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
k8s4   Ready    master   20m   v1.14.2
root@k8s4:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                           READY   STATUS    RESTARTS   AGE
kube-system   coredns-8cc96f57d-cfr4j        1/1     Running   0          20m
kube-system   coredns-8cc96f57d-stcz6        1/1     Running   0          20m
kube-system   etcd-k8s4                      1/1     Running   0          19m
kube-system   kube-apiserver-k8s4            1/1     Running   0          19m
kube-system   kube-controller-manager-k8s4   1/1     Running   0          19m
kube-system   kube-flannel-ds-amd64-k4q6q    1/1     Running   0          50s
kube-system   kube-proxy-lhjsf               1/1     Running   0          20m
kube-system   kube-scheduler-k8s4            1/1     Running   0          19m
```
```shell
# Remove the master taint so the master can be scheduled; replace k8s4 with your own node name
kubectl taint node k8s4 node-role.kubernetes.io/master:NoSchedule-
# Create an nginx deployment
root@k8s4:~# kubectl create deploy nginx --image nginx
deployment.apps/nginx created
root@k8s4:~# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-65f88748fd-9sk6z   1/1     Running   0          2m44s
# Expose nginx outside the cluster
root@k8s4:~# kubectl expose deploy nginx --port=80 --type=NodePort
service/nginx exposed
root@k8s4:~# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        25m
nginx        NodePort    10.104.109.234   <none>        80:32129/TCP   5s
root@k8s4:~# curl 127.0.0.1:32129
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h2>Welcome to nginx!</h2>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
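When scripting this smoke test, the NodePort can be extracted from the `kubectl get svc` output instead of being read by eye. A sketch, parsing a captured sample line (on the cluster, substitute the live `kubectl get svc nginx | tail -1` output):

```shell
# Sample line from `kubectl get svc` for illustration.
svc_line='nginx   NodePort   10.104.109.234   <none>   80:32129/TCP   5s'
# Field 5 is "80:32129/TCP"; split on ':' and '/' to isolate the node port.
node_port=$(printf '%s\n' "$svc_line" | awk '{split($5, a, "[:/]"); print a[2]}')
echo "curl 127.0.0.1:${node_port}"
```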
That concludes this walkthrough of installing a highly available Kubernetes cluster. Hopefully the material above has been helpful; if you found the article worthwhile, please share it so more people can see it.