This article covers the following topics: an introduction to the Kubernetes components, the core Kubernetes concepts, Kubernetes cluster deployment, and how to use Kubernetes. After reading it through, you should have a solid basic understanding of Kubernetes.
1. Master components
●kube-apiserver
The Kubernetes API server is the unified entry point of the cluster and the coordinator of all other components. It exposes its services as a RESTful API; every create, read, update, delete, and watch operation on resource objects goes through the API server, which then persists the data to etcd.
●kube-controller-manager
Handles routine background tasks in the cluster. Each resource type has a corresponding controller, and the controller manager is responsible for running these controllers.
●kube-scheduler
Selects a Node for each newly created Pod according to the scheduling algorithm; Pods can be placed freely, on the same node or on different nodes.
●etcd
A distributed key-value store that holds the cluster state, such as Pod and Service object data.
2. Node components
●kubelet
kubelet is the Master's agent on each Node. It manages the lifecycle of the containers running on the host, including creating containers, mounting volumes for Pods, downloading Secrets, and reporting container and node status. The kubelet turns each Pod into a set of containers.
●kube-proxy
Implements the Pod network proxy on each Node, maintaining network rules and performing layer-4 load balancing.
●docker or rkt (rocket)
The container engine that actually runs the containers (a quick health check of all of these components is sketched after this list).
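On a cluster that is already up and running, the components listed above can be checked quickly with kubectl. A minimal sketch, assuming kubectl has already been configured against the cluster (this is not part of the deployment steps later in this article):
kubectl get componentstatuses        # health of the scheduler, controller-manager and etcd
kubectl get nodes -o wide            # every Node whose kubelet has registered with the API server
kubectl get pods -n kube-system      # system Pods such as kube-proxy and the network add-on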
How it works:
1. Prepare a YAML file containing the application's Deployment, then send it to the API server with the kubectl client tool (a sketch of such a manifest follows this list).
2. The API server receives the client request and stores the resource in the database (etcd).
3. The controller components (scheduler, replication controller, endpoints controller) watch for resource changes and react to them.
4. The ReplicaSet controller sees the change and creates the desired number of Pod instances.
5. The scheduler, also watching the database, finds Pods that have not yet been assigned to a node, assigns them to nodes that can run them according to a set of scheduling rules, and updates the database to record the placement.
6. The kubelet watches the database and manages the ongoing lifecycle of Pods, looking for Pods assigned to its own node; when it finds a new Pod, it runs it on that node.
Note: kube-proxy runs on every host in the cluster and handles network traffic, including service discovery and load balancing. When traffic arrives at a host it routes it to the correct Pod or container; for outbound traffic it resolves the remote service from the request address and routes the data accordingly, in some cases using round-robin scheduling to spread requests across multiple instances in the cluster.
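As a concrete illustration of step 1, a minimal Deployment manifest might look like the following, written in the same heredoc style as the scripts later in this article; the name, image, and replica count are placeholder examples, not objects used in this deployment:
cat <<EOF > nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment      # hypothetical example name
spec:
  replicas: 2                 # desired Pod count, maintained by the ReplicaSet (step 4)
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14     # any container image will do
EOF
kubectl apply -f nginx-deployment.yaml   # kubectl sends the object to kube-apiserver (steps 1-2)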
Kubernetes core concepts
1. Pod
●The smallest deployable unit
●A group of one or more containers
●Containers in the same Pod share a network namespace (see the sketch after this list)
●Pods are ephemeral
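A minimal sketch of a Pod with two containers, to illustrate the shared network namespace; the names and images are illustrative only:
cat <<EOF > two-container-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar        # hypothetical example name
spec:
  containers:
  - name: web
    image: nginx:1.14           # serves on port 80
  - name: sidecar
    image: busybox
    # reaches the nginx container over localhost because both containers share the Pod's network namespace
    command: ["sh", "-c", "while true; do wget -qO- http://127.0.0.1:80 >/dev/null; sleep 5; done"]
EOF
kubectl apply -f two-container-pod.yaml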
2. Controllers
●ReplicaSet: ensures the expected number of Pod replicas
●Deployment: stateless application deployment
●StatefulSet: stateful application deployment
●DaemonSet: ensures every Node runs a copy of the same Pod
●Job: one-off tasks
●CronJob: scheduled tasks
These are higher-level objects that deploy and manage Pods (a one-off Job is sketched after this list).
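As one concrete example from the list above, a one-off Job might be sketched like this; the name, image, and command are illustrative only:
cat <<EOF > pi-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-once                 # hypothetical example name
spec:
  template:
    spec:
      restartPolicy: Never      # a Job runs its Pod to completion instead of keeping it alive
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
EOF
kubectl apply -f pi-job.yaml
kubectl get jobs                # shows completion status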
3. Service
●Keeps Pods reachable as they come and go
●Defines an access policy for a group of Pods (see the sketch after this list)
●Label: a key/value tag attached to a resource, used to associate, query, and filter objects
●Namespace: isolates groups of objects logically
●Annotation: free-form metadata attached to an object
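A sketch tying these concepts together, reusing the app=nginx label from the earlier Deployment sketch: a Service that selects those Pods by label, plus a namespace and a label query (names and ports are illustrative only):
kubectl create namespace demo      # Namespace: logically isolates a group of objects
cat <<EOF > nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service              # hypothetical example name
spec:
  selector:
    app: nginx                     # Label selector: the Service fronts all Pods carrying app=nginx
  ports:
  - port: 80                       # port exposed by the Service
    targetPort: 80                 # port on the backing Pods
EOF
kubectl apply -f nginx-service.yaml
kubectl get pods -l app=nginx      # Labels can also be used directly to query and filter objects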
Kubernetes cluster deployment
1. The three officially supported deployment methods
●minikube
Minikube is a tool that quickly runs a single-node Kubernetes cluster locally; it is intended only for trying out Kubernetes or for day-to-day development.
Deployment guide: https://kubernetes.io/docs/setup/minikube/
●kubeadm
Kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly bootstrapping a Kubernetes cluster (see the sketch after this list).
Deployment guide: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
●Binary packages
Recommended: download the official release binaries and deploy every component by hand to assemble a Kubernetes cluster.
Download: https://github.com/kubernetes/kubernetes/releases
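For comparison with the binary method used in the rest of this article, the first two options boil down to roughly the following commands (shown with placeholders; they are not used in this article's deployment):
minikube start                                    # single-node local cluster
kubeadm init                                      # bootstrap a control-plane node
kubeadm join <master-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash <hash>           # run on each worker, using the values printed by kubeadm init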
2. Self-signed SSL certificates
Kubernetes binary deployment
k8s software packages:
Link: https://pan.baidu.com/s/1oN2wkGZ_7parS8sMaaogGw
Extraction code: lbjx
k8s deployment plan:
master:192.168.35.128 kube-apiserver kube-controller-manager kube-scheduler etcd
node1:192.168.35.195 kubelet kube-proxy docker flannel etcd
node2:192.168.35.138 kubelet kube-proxy docker flannel etcd
On the master:
[root@localhost ~]# mkdir k8s
[root@localhost ~]# cd k8s/
[root@localhost k8s]# ls //copied over from the host machine
etcd-cert.sh etcd.sh
[root@localhost k8s]# mkdir etcd-cert
[root@localhost k8s]# mv etcd-cert.sh etcd-cert
The etcd-cert.sh and etcd.sh scripts mentioned above are as follows:
vim etcd.sh
#!/bin/bash
#example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd
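# NOTE: $WORK_DIR/cfg, bin and ssl must already exist; they are created later in this walkthrough (mkdir /opt/etcd/{cfg,bin,ssl} -p) before this script is run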
cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
vim etcd-cert.sh
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
#-----------------------
cat > server-csr.json <<EOF
{
"CN": "etcd",
"hosts": [
"10.206.240.188",
"10.206.240.189",
"10.206.240.111"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
Download the official cfssl packages
[root@localhost k8s]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@localhost k8s]# bash cfssl.sh //download the official cfssl binaries
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 9.8M 100 9.8M 0 0 77052 0 0:02:14 0:02:14 --:--:-- 94447
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2224k 100 2224k 0 0 66701 0 0:00:34 0:00:34 --:--:-- 71949
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 6440k 100 6440k 0 0 74368 0 0:01:28 0:01:28 --:--:-- 93942
[root@localhost k8s]# ls /usr/local/bin/
cfssl cfssl-certinfo cfssljson
//cfssl generates certificates, cfssljson writes out certificates from the JSON produced by cfssl, and cfssl-certinfo displays certificate information
Define the certificates
[root@localhost k8s]# cd etcd-cert/
//define the CA certificate configuration
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
Create the CA certificate signing request
//create the CA certificate signing request
cat > ca-csr.json <<EOF
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
EOF
Generate the CA certificate, producing ca-key.pem and ca.pem
//generate the CA certificate, producing ca-key.pem and ca.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/01/15 18:15:15 [INFO] generating a new CA key and certificate from CSR
2020/01/15 18:15:15 [INFO] generate received request
2020/01/15 18:15:15 [INFO] received CSR
2020/01/15 18:15:15 [INFO] generating key: rsa-2048
2020/01/15 18:15:15 [INFO] encoded CSR
2020/01/15 18:15:15 [INFO] signed certificate with serial number 661808851940283859099066838380794010566731982441
Specify the three etcd node addresses for mutual (peer) communication verification
//specify the three etcd node addresses for mutual (peer) communication verification
cat > server-csr.json <<EOF
{
"CN": "etcd",
"hosts": [
"192.168.35.128",
"192.168.35.195",
"192.168.35.138"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
Generate the etcd server certificate, producing server-key.pem and server.pem
//generate the etcd server certificate, producing server-key.pem and server.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2020/01/15 18:24:09 [INFO] generate received request
2020/01/15 18:24:09 [INFO] received CSR
2020/01/15 18:24:09 [INFO] generating key: rsa-2048
2020/01/15 18:24:09 [INFO] encoded CSR
2020/01/15 18:24:09 [INFO] signed certificate with serial number 613252568370198035643630635602034323043189506463
2020/01/15 18:24:09 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
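At this point the working directory should contain both key pairs; an optional check (not in the original steps) is to list them and inspect the server certificate with cfssl-certinfo:
ls *.pem
# expected: ca-key.pem  ca.pem  server-key.pem  server.pem
cfssl-certinfo -cert server.pem    # the SAN list should show the three node IPs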
Copy the software packages onto the CentOS 7 host
[root@localhost etcd-cert]# cd /root/k8s/
[root@localhost k8s]# ls //packages pulled straight into this directory
cfssl.sh etcd.sh flannel-v0.10.0-linux-amd64.tar.gz
etcd-cert etcd-v3.3.10-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
Unpack
[root@localhost k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
[root@localhost k8s]# ls etcd-v3.3.10-linux-amd64
Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md
[root@localhost k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p //config files, binaries, certificates
[root@localhost k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
Copy the certificates
//copy the certificates
[root@localhost k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/
//the script will sit and wait here until the other nodes join
[root@localhost k8s]# bash etcd.sh etcd01 192.168.35.128 etcd02=https://192.168.35.195:2380,etcd03=https://192.168.35.138:2380
Open another session and you will see that the etcd process has already started
[root@localhost ~]# ps aux | grep etcd
root 4653 0.3 0.6 10523616 12140 ? Ssl 19:49 0:00 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.35.128:2380 --listen-client-urls=https://192.168.35.128:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.35.128:2379 --initial-advertise-peer-urls=https://192.168.35.128:2380 --initial-cluster=etcd01=https://192.168.35.128:2380,etcd02=https://192.168.35.195:2380,etcd03=https://192.168.35.138:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root 4719 0.0 0.0 112676 984 pts/2 R+ 19:50 0:00 grep --color=auto etcd
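If the process does not show up, an optional troubleshooting step (not in the original walkthrough) is to follow the etcd logs from the second session while the script waits:
journalctl -u etcd -f      # follow the etcd unit's log output
systemctl status etcd      # current state of the systemd unit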
Turn off the firewall and SELinux enforcement
systemctl stop firewalld.service
setenforce 0
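These two commands only last until the next reboot; a commonly added companion step (not part of the original instructions) makes them persistent:
systemctl disable firewalld.service                                   # keep firewalld off after a reboot
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # make setenforce 0 permanent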
Copy the certificates to the other nodes
[root@localhost k8s]# scp -r /opt/etcd/ root@192.168.35.195:/opt/
[root@localhost k8s]# scp -r /opt/etcd/ root@192.168.35.138:/opt
Copy the systemd startup unit to the other nodes
[root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.35.195:/usr/lib/systemd/system/
[root@localhost k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.35.138:/usr/lib/systemd/system/
Edit the configuration on node01
[root@localhost ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.35.195:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.35.195:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.35.195:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.35.195:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.35.128:2380,etcd02=https://192.168.35.195:2380,etcd03=https://192.168.35.138:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Edit the configuration on node02
[root@localhost ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.35.138:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.35.138:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.35.138:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.35.138:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.35.128:2380,etcd02=https://192.168.35.195:2380,etcd03=https://192.168.35.138:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Start the service on the master first
[root@localhost system]# cd /root/k8s/
[root@localhost k8s]# bash etcd.sh etcd01 192.168.35.128 etcd02=https://192.168.35.195:2380,etcd03=https://192.168.35.138:2380
Then start the service on node1 and node2
[root@localhost ~]# systemctl start etcd
[root@localhost ~]# systemctl status etcd
Check again from the master; the other members have now joined and the cluster is in sync
[root@localhost k8s]# bash etcd.sh etcd01 192.168.35.128 etcd02=https://192.168.35.195:2380,etcd03=https://192.168.35.138:2380
Check the cluster health
[root@localhost k8s]# cd etcd-cert/
[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.35.128:2379,https://192.168.35.195:2379,https://192.168.35.138:2379" cluster-health
member 12a96220ac829a49 is healthy: got healthy result from https://192.168.35.195:2379
member 76797989afd0ecba is healthy: got healthy result from https://192.168.35.128:2379
member ff469df2baaba1da is healthy: got healthy result from https://192.168.35.138:2379
cluster is healthy
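Beyond cluster-health, an optional sanity test (not in the original steps) is a write/read round trip with the same v2 etcdctl flags, confirming that the members really share data:
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.35.128:2379" set /test "hello"
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
  --endpoints="https://192.168.35.195:2379" get /test     # should print "hello" when read from a different member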