
Kubernetes High-Availability Cluster (Multi-Master, Official v1.15)

Published: 2020-08-01 17:27:11    Author: Rainbowhhy    Category: Cloud Computing
1. Introduction

    Kubernetes has been running in our production environment for nearly a year and is currently stable. From building the system to migrating our projects onto it, we ran into quite a few problems along the way. Production uses multiple master nodes to make Kubernetes highly available, with haproxy + keepalived load-balancing the masters. This article summarizes the build process so you can stand up your own k8s cluster quickly.

    Below are screenshots of my production environment:

[Screenshots: production cluster status]

    Kubernetes versions iterate very quickly. When I built our production cluster, the latest official release was v1.11; it is now v1.15, so this article is written against that version.


2. Kubernetes overview

    Kubernetes is Google's open-source container orchestration and scheduling engine, based on Borg: an open platform for automated deployment, scaling, and operation of container clusters. It offers comprehensive cluster management capabilities, including multi-level security and admission control, multi-tenant application support, transparent service registration and discovery, a built-in load balancer, failure detection and self-healing, rolling upgrades and online scaling, an extensible automated resource scheduler, and fine-grained resource quota management. It also ships with management tooling covering development, deployment, testing, and operations monitoring. As one of the most important projects in the CNCF (Cloud Native Computing Foundation), Kubernetes aims to be more than an orchestration system: it provides a specification that lets you describe your cluster's architecture and define the desired final state of your services, and it drives the system to that state and keeps it there.
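    As a small illustration of this declarative model, the sketch below declares a desired state of three nginx replicas and lets Kubernetes converge on it. The deployment name and image tag are illustrative, and kubectl must already point at a working cluster:

# Declare the desired state: a Deployment named "web" with 3 replicas
# (the name "web" and the image tag are hypothetical examples)
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.17
EOF
# Delete a pod and watch Kubernetes recreate it to restore the declared state
kubectl delete pod -l app=web --wait=false
kubectl get pods -l app=web -w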


3. Kubernetes architecture

[Kubernetes architecture diagram]


    In this architecture diagram, the services can be divided into those running on worker nodes and those that form the cluster-level control plane. Kubernetes nodes run the services required to host application containers, all under the master's control. Every node runs Docker, which handles image downloads and runs the containers.

    Kubernetes is composed of the following core components:

  • etcd stores the state of the entire cluster;

  • apiserver is the single entry point for all resource operations and provides authentication, authorization, access control, API registration, and discovery;

  • controller manager maintains the cluster's state, handling failure detection, auto-scaling, rolling updates, and so on;

  • scheduler handles resource scheduling, placing Pods onto appropriate machines according to the configured scheduling policies;

  • kubelet maintains the container lifecycle on each node and also manages volumes (CVI) and networking (CNI);

  • the container runtime manages images and actually runs Pods and containers (CRI);

  • kube-proxy provides in-cluster service discovery and load balancing for Services;

    Besides the core components, several add-ons are recommended (a quick way to check what is actually running is sketched after this list):

  • kube-dns provides DNS for the whole cluster

  • Ingress Controller provides an external entry point for services

  • Heapster provides resource monitoring

  • Dashboard provides a GUI

  • Federation provides clusters spanning availability zones

  • Fluentd-elasticsearch provides cluster log collection, storage, and querying
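    On a running cluster, the core components and add-ons show up as pods in the kube-system namespace, so listing them is an easy way to see what is deployed; in v1.15 the classic componentstatuses check also still works:

# Core components and add-ons run as pods in the kube-system namespace
kubectl get pods -n kube-system -o wide
# Health of the scheduler, controller manager, and etcd (deprecated in
# later releases, but still available in v1.15)
kubectl get componentstatuses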


4. Build process

    Now for the practical part: the cluster build.

4.1 Environment preparation

Hostname    Spec    OS            IP address      Role
haproxy1    8C16G   ubuntu16.04   192.168.10.1    haproxy+keepalived, VIP: 192.168.10.10
haproxy2    8C16G   ubuntu16.04   192.168.10.2    haproxy+keepalived, VIP: 192.168.10.10
master1     8C16G   ubuntu16.04   192.168.10.3    control-plane node 1
master2     8C16G   ubuntu16.04   192.168.10.4    control-plane node 2
master3     8C16G   ubuntu16.04   192.168.10.5    control-plane node 3
node1       8C16G   ubuntu16.04   192.168.10.6    worker node 1
node2       8C16G   ubuntu16.04   192.168.10.7    worker node 2
node3       8C16G   ubuntu16.04   192.168.10.8    worker node 3


4.2 Environment notes

    This article uses three masters and three nodes for the Kubernetes cluster, with two additional machines running haproxy + keepalived to load-balance the masters. That keeps the control plane highly available, and with it the whole cluster. The official requirements are at least 2 CPUs and 2 GB of RAM per machine, and Ubuntu 16.04 or later.
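    A minimal preflight sketch, run on each machine, to confirm it meets those requirements (the 1700 MB threshold mirrors kubeadm's own memory check):

# Warn if the machine is below the 2-CPU / 2-GB kubeadm minimum
[ "$(nproc)" -ge 2 ] || echo "WARN: fewer than 2 CPUs"
[ "$(free -m | awk '/^Mem:/{print $2}')" -ge 1700 ] || echo "WARN: less than ~2 GB RAM"
lsb_release -ds    # should report Ubuntu 16.04 or later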


4.3 Build steps

4.3.1 Basic setup

    Edit the hosts file; all 8 machines need this change.

root@haproxy1:~# cat /etc/hosts
192.168.10.1     haproxy1
192.168.10.2     haproxy2
192.168.10.3     master1
192.168.10.4     master2
192.168.10.5     master3
192.168.10.6     node1
192.168.10.7     node2
192.168.10.8     node3
192.168.10.10    kubernetes.haproxy.com
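    It is worth confirming the names resolve before going further; getent consults /etc/hosts the same way most programs do:

# Run on each machine; every name should print its expected address
getent hosts kubernetes.haproxy.com master1 node1
ping -c 1 kubernetes.haproxy.com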

4.3.2 Setting up haproxy + keepalived

    Install haproxy

root@haproxy1:/data# wget https://github.com/haproxy/haproxy/archive/v2.0.0.tar.gz
root@haproxy1:/data# tar -xf v2.0.0.tar.gz
root@haproxy1:/data# cd haproxy-2.0.0/
root@haproxy1:/data/haproxy-2.0.0# make TARGET=linux-glibc
root@haproxy1:/data/haproxy-2.0.0# make install PREFIX=/data/haproxy
root@haproxy1:/data/haproxy# mkdir conf
root@haproxy1:/data/haproxy# vim conf/haproxy.cfg

global
  log 127.0.0.1 local0 err
  maxconn 50000
  user haproxy
  group haproxy
  daemon
  nbproc 1
  pidfile haproxy.pid

defaults
  mode tcp
  log 127.0.0.1 local0 err
  maxconn 50000
  retries 3
  timeout connect 5s
  timeout client 30s
  timeout server 30s
  timeout check 2s

listen admin_stats
  mode http
  bind 0.0.0.0:1080
  log 127.0.0.1 local0 err
  stats refresh 30s
  stats uri     /haproxy-status
  stats realm   Haproxy\ Statistics
  stats auth    will:will
  stats hide-version
  stats admin if TRUE

frontend k8s
  bind 0.0.0.0:8443
  mode tcp
  default_backend k8s

backend k8s
  mode tcp
  balance roundrobin
  server master1 192.168.10.3:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
  server master2 192.168.10.4:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
  server master3 192.168.10.5:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3

root@haproxy1:/data/haproxy# id -u haproxy &> /dev/null || useradd -s /usr/sbin/nologin -r haproxy
root@haproxy1:/data/haproxy# mkdir /usr/share/doc/haproxy
root@haproxy1:/data/haproxy# wget -qO - https://raw.githubusercontent.com/haproxy/haproxy/master/doc/configuration.txt | gzip -c > /usr/share/doc/haproxy/configuration.txt.gz

root@haproxy1:/data/haproxy# vim /etc/default/haproxy
# Defaults file for HAProxy
#
# This is sourced by both, the initscript and the systemd unit file, so do not
# treat it as a shell script fragment.

# Change the config file location if needed
#CONFIG="/etc/haproxy/haproxy.cfg"

# Add extra flags here, see haproxy(1) for a few options
#EXTRAOPTS="-de -m 16"


root@haproxy1:/data# vim /lib/systemd/system/haproxy.service
[Unit]
Description=HAProxy Load Balancer
Documentation=man:haproxy(1)
Documentation=file:/usr/share/doc/haproxy/configuration.txt.gz
After=network.target syslog.service
Wants=syslog.service

[Service]
Environment=CONFIG=/data/haproxy/conf/haproxy.cfg
EnvironmentFile=-/etc/default/haproxy
ExecStartPre=/data/haproxy/sbin/haproxy -f ${CONFIG} -c -q
ExecStart=/data/haproxy/sbin/haproxy -W -f ${CONFIG} -p /data/haproxy/conf/haproxy.pid $EXTRAOPTS
ExecReload=/data/haproxy/sbin/haproxy -c -f ${CONFIG}
ExecReload=/bin/kill -USR2 $MAINPID
KillMode=mixed
Restart=always
Type=forking

[Install]
WantedBy=multi-user.target


root@haproxy1:/data/haproxy# systemctl daemon-reload
root@haproxy1:/data/haproxy# systemctl start haproxy
root@haproxy1:/data/haproxy# systemctl status haproxy
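    With haproxy up, the stats page configured in the listen admin_stats section above can confirm it is serving (the credentials are the will:will pair from haproxy.cfg):

# An HTTP response here means haproxy is alive; the k8s backend servers
# will show DOWN until the apiservers exist, which is expected at this stage
curl -s -u will:will http://192.168.10.1:1080/haproxy-status | head -n 5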



    Install keepalived

root@haproxy1:/data# wget https://www.keepalived.org/software/keepalived-2.0.16.tar.gz
root@haproxy1:/data# tar -xf keepalived-2.0.16.tar.gz
root@haproxy1:/data# cd keepalived-2.0.16/
root@haproxy1:/data/keepalived-2.0.16# ./configure --prefix=/data/keepalived
root@haproxy1:/data/keepalived-2.0.16# make && make install
root@haproxy1:/data/keepalived# mkdir conf
root@haproxy1:/data/keepalived# vim conf/keepalived.conf

! Configuration File for keepalived
global_defs {
  notification_email {
    root@localhost
  }
  notification_email_from keepalived@localhost
  smtp_server 127.0.0.1
  smtp_connect_timeout 30
  router_id haproxy1
}

vrrp_script chk_haproxy {                # haproxy health-check script
  script "/data/keepalived/check_haproxy.sh"
  interval 2
  weight 2
}

vrrp_instance VI_1 {
  state MASTER
  interface ens160
  virtual_router_id 1
  priority 100
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 1111
  }
  track_script {
    chk_haproxy
  }
  virtual_ipaddress {
    192.168.10.10/24
  }
}


root@haproxy1:/data/keepalived# vim /etc/default/keepalived
# Options to pass to keepalived

# DAEMON_ARGS are appended to the keepalived command-line
DAEMON_ARGS=""


root@haproxy1:/data/keepalived# vim /lib/systemd/system/keepalived.service
[Unit]
Description=Keepalive Daemon (LVS and VRRP)
After=network-online.target
Wants=network-online.target
# Only start if there is a configuration file
ConditionFileNotEmpty=/data/keepalived/conf/keepalived.conf

[Service]
Type=forking
KillMode=process
Environment=CONFIG=/data/keepalived/conf/keepalived.conf
# Read configuration variable file if it is present
EnvironmentFile=-/etc/default/keepalived
ExecStart=/data/keepalived/sbin/keepalived -f ${CONFIG} -p /data/keepalived/conf/keepalived.pid $DAEMON_ARGS
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target


    Create the health-check script that the chk_haproxy block above refers to, before starting keepalived:

root@haproxy1:/data/keepalived# vim /data/keepalived/check_haproxy.sh

#!/bin/bash
# If haproxy is down, try to restart it; if it still is not running after
# 3 seconds, stop keepalived so the VIP fails over to the standby node.
if [ "$(ps -C haproxy --no-header | wc -l)" -eq 0 ]; then
  systemctl start haproxy.service
  sleep 3
  if [ "$(ps -C haproxy --no-header | wc -l)" -eq 0 ]; then
    systemctl stop keepalived.service
  fi
fi

root@haproxy1:/data/keepalived# chmod +x /data/keepalived/check_haproxy.sh
root@haproxy1:/data/keepalived# systemctl daemon-reload
root@haproxy1:/data/keepalived# systemctl start keepalived.service

    Install haproxy and keepalived on haproxy2 the same way; in its keepalived.conf, set router_id to haproxy2, state to BACKUP, and a lower priority (90, for example) so that haproxy1 holds the VIP by default.
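    With both machines running, a quick failover test shows keepalived doing its job: check which node holds the VIP, then take the active node down and watch the VIP move.

# On haproxy1 (MASTER): the VIP should be bound to ens160
ip addr show ens160 | grep 192.168.10.10
# Simulate a failure on haproxy1: stop keepalived (and haproxy)
systemctl stop keepalived haproxy
# On haproxy2: the VIP should now be present
ip addr show ens160 | grep 192.168.10.10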


4.3.3 Building the Kubernetes cluster

    Basic setup

    Disable swap on all 6 Kubernetes cluster machines; by default the kubelet refuses to run with swap enabled.

root@master1:~# free -m
              total        used        free      shared  buff/cache   available
Mem:          16046         128       15727           8         190       15638
Swap:           979           0         979
root@master1:~# swapoff -a
root@master1:~# free -m
              total        used        free      shared  buff/cache   available
Mem:          16046         128       15726           8         191       15638
Swap:             0           0           0
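    Note that swapoff -a only lasts until the next reboot; to keep swap off permanently, the swap entry in /etc/fstab must also be disabled. A minimal sketch:

# Comment out any swap line in /etc/fstab so swap stays off across reboots
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
grep swap /etc/fstab    # the swap line should now start with #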

    Install Docker

    All 6 machines need it.

# Allow apt to use repositories over HTTPS
root@master1:~# apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
root@master1:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK
root@master1:~# apt-key fingerprint 0EBFCD88
pub   4096R/0EBFCD88 2017-02-22
      Key fingerprint = 9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid                  Docker Release (CE deb) <docker@docker.com>
sub   4096R/F273FCD8 2017-02-22

# Add the Docker apt repository
root@master1:~# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
# Install Docker
root@master1:~# apt-get update
root@master1:~# apt-get install -y docker-ce docker-ce-cli containerd.io
root@master1:~# docker --version
Docker version 18.09.6, build 481bc77
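    kubeadm recommends that Docker use the systemd cgroup driver so the kubelet and the container runtime manage cgroups consistently; without it, kubeadm init prints a cgroupfs warning. An optional sketch:

# Switch Docker to the systemd cgroup driver (kubeadm's recommendation)
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i cgroup    # should now report: Cgroup Driver: systemd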




    Install the Kubernetes components

# Install kubeadm, kubelet, and kubectl on all 6 machines
root@master1:~# apt-get update
root@master1:~# apt-get install -y apt-transport-https curl
root@master1:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
OK
root@master1:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
> deb https://apt.kubernetes.io/ kubernetes-xenial main
> EOF
root@master1:~# apt-get update
root@master1:~# apt-get install -y kubelet kubeadm kubectl
root@master1:~# apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
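    A quick sanity check that all three tools are installed and every machine is on the same version:

# All 6 machines should report the same versions
kubeadm version -o short
kubectl version --client --short
kubelet --version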

    

    Create the cluster

    Control-plane node 1

root@master1:~# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "kubernetes.haproxy.com:8443"
networking:
  podSubnet: "10.244.0.0/16"


root@master1:~# kubeadm init --config=kubeadm-config.yaml --upload-certs


    When it finishes, the output looks like this:

[Screenshot: kubeadm init completion output, including the join commands and certificate key]

root@master1:~# mkdir -p $HOME/.kube
root@master1:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master1:~# chown $(id -u):$(id -g) $HOME/.kube/config
# Install the network add-on; flannel is used here
root@master1:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml

    

    Check the result:

root@master1:~# kubectl get pod -n kube-system -w
[Screenshot: kube-system pods, all Running]


    Join the remaining control-plane nodes

    Back on v1.11, our production build required writing a master configuration file on every control-plane node and running a series of steps on each one to join the cluster; v1.15 supports joining directly with kubeadm join, which simplifies things considerably.
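    Note that the credentials printed by kubeadm init are short-lived: the uploaded certificates (and the certificate key) expire after two hours, and the default bootstrap token after 24 hours. If they have lapsed, fresh ones can be generated on master1; a sketch, assuming v1.15 kubeadm:

# Print a fresh worker join command (new token plus the CA cert hash)
kubeadm token create --print-join-command
# Re-upload the control-plane certificates and print a new certificate key,
# to be passed as --certificate-key when joining control-plane nodes
kubeadm init phase upload-certs --upload-certs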

    Control-plane node 2

root@master2:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1     --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5     --experimental-control-plane --certificate-key a2a84ebc181ba34a943e5003a702b71e2a1e7e236f8d1d687d9a19d2bf803a77
root@master2:~# mkdir -p $HOME/.kube
root@master2:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master2:~# chown $(id -u):$(id -g) $HOME/.kube/config


    Check the result:

root@master2:~# kubectl get nodes
[Screenshot: kubectl get nodes showing master1 and master2]

    

    Control-plane node 3

root@master3:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1     --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5     --experimental-control-plane --certificate-key a2a84ebc181ba34a943e5003a702b71e2a1e7e236f8d1d687d9a19d2bf803a77
root@master3:~# mkdir -p $HOME/.kube
root@master3:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master3:~# chown $(id -u):$(id -g) $HOME/.kube/config


    Check the result:

root@master3:~# kubectl get nodes
[Screenshot: kubectl get nodes showing all three masters]

    

    Add the worker nodes

root@node1:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1     --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5
root@node2:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1     --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5
root@node3:~# kubeadm join kubernetes.haproxy.com:8443 --token a3g3x0.zc6qxcdqu60jgtz1     --discovery-token-ca-cert-hash sha256:d48d8e4e7f8bc2c66a815a34b7e6a23809ad53bdae4a600e368a3ff28ad7a7d5


    The whole cluster is now built; check the results.

    Run the following on any master:

root@master1:~# kubectl get pods --all-namespaces
[Screenshot: all pods Running across all namespaces]
root@master1:~# kubectl get nodes
[Screenshot: three masters and three workers, all Ready]


    At this point, the entire high-availability cluster is complete.
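    As a final smoke test, it may be worth deploying a workload through the load-balanced endpoint and exercising failover (the deployment name here is illustrative):

# Deploy and scale a test workload via the VIP-fronted apiserver
kubectl create deployment hello --image=nginx
kubectl scale deployment hello --replicas=3
kubectl get pods -o wide    # replicas spread across node1..node3
# Control-plane HA check: shut down one master, then confirm kubectl
# still answers through kubernetes.haproxy.com:8443
kubectl get nodes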


5. References

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin

https://www.kubernetes.org.cn/docs

