This article walks through a multi-node (multi-master) Kubernetes deployment. The topology used throughout is shown below.
Role | Address | Components |
---|---|---|
master | 192.168.142.129 | kube-apiserver kube-controller-manager kube-scheduler etcd |
master2 | 192.168.142.120 | kube-apiserver kube-controller-manager kube-scheduler |
node1 | 192.168.142.130 | kubelet kube-proxy docker flannel etcd |
node2 | 192.168.142.131 | kubelet kube-proxy docker flannel etcd |
nginx1(lbm) | 192.168.142.140 | nginx keepalived |
nginx2(lbb) | 192.168.142.150 | nginx keepalived |
VIP | 192.168.142.20 | - |
Resource package link:
https://pan.baidu.com/s/183G9ZzBNdcUUFV7Y8-K4CQ
Extraction code: 6z0j
1. Copy the relevant master directories to master2
systemctl stop firewalld.service
setenforce 0
scp -r /opt/kubernetes/ root@192.168.142.120:/opt
scp -r /opt/etcd/ root@192.168.142.120:/opt
scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.142.120:/usr/lib/systemd/system/
2. Modify the kube-apiserver configuration file
vim /opt/kubernetes/cfg/kube-apiserver
#Change the IP addresses on lines 5 and 7 to master2's address
--bind-address=192.168.142.120 \
--advertise-address=192.168.142.120 \
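If you prefer a non-interactive edit, the same change can be made with a sed substitution. A minimal sketch, demonstrated on a temporary copy rather than the live /opt/kubernetes/cfg/kube-apiserver:

```shell
# Temp file standing in for the config copied over from master
cfg=$(mktemp)
cat > "$cfg" << 'EOF'
--bind-address=192.168.142.129 \
--advertise-address=192.168.142.129 \
EOF
# Swap the old master address for master2's on both flags
sed -i 's/192\.168\.142\.129/192.168.142.120/g' "$cfg"
grep -- '-address' "$cfg"
```

Run against the real file, this replaces every occurrence of master's IP, which covers both the bind and advertise flags in one pass.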
3. Start the services and enable them at boot
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service
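The six commands above can be collapsed into one loop. A sketch that prints the commands it would run (drop the echo to actually execute them; `enable --now` combines start and enable on systemd):

```shell
# Build the enable-and-start command for each control-plane service
cmds=$(for svc in kube-apiserver kube-controller-manager kube-scheduler; do
  echo "systemctl enable --now ${svc}.service"
done)
echo "$cmds"
```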
4. Append the environment variable and apply it
vim /etc/profile
#Append at the end of the file
export PATH=$PATH:/opt/kubernetes/bin/
source /etc/profile
5. Check the node list
kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.142.130 Ready <none> 10d12h v1.12.3
192.168.142.131 Ready <none> 10d11h v1.12.3
1. On the lbm & lbb hosts: install the nginx service
#nginx.sh
#quote the delimiter so the shell does not expand the yum variable $basearch
cat > /etc/yum.repos.d/nginx.repo << 'EOF'
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
EOF
#stream block to append to /etc/nginx/nginx.conf (sample addresses; use the real master IPs)
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 10.0.0.3:6443;
server 10.0.0.8:6443;
}
server {
listen 6443;
proxy_pass k8s-apiserver;
}
}
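One pitfall in the repo heredoc in nginx.sh above: with an unquoted delimiter the shell expands `$basearch` (usually to nothing) before the file is written, breaking the baseurl. Quoting the delimiter keeps it literal so yum can resolve it later. Demonstrated on a temp file:

```shell
# Quoted 'EOF' -> no variable expansion inside the heredoc body
f=$(mktemp)
cat > "$f" << 'EOF'
baseurl=http://nginx.org/packages/centos/7/$basearch/
EOF
grep -c '\$basearch' "$f"
```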
#keepalived.conf
! Configuration File for keepalived
global_defs {
# notification email recipients
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
# sender address for notification emails
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER
}
vrrp_script check_nginx {
script "/usr/local/nginx/sbin/check_nginx.sh"
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 51 # VRRP route ID; must be unique per instance
priority 100 # priority; set this to 90 on the backup server
advert_int 1 # VRRP advertisement interval, default 1 second
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
10.0.0.188/24
}
track_script {
check_nginx
}
}
mkdir /usr/local/nginx/sbin/ -p
vim /usr/local/nginx/sbin/check_nginx.sh
#!/bin/bash
#count nginx processes, excluding the grep itself and this script's own PID
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
/etc/init.d/keepalived stop
fi
chmod +x /usr/local/nginx/sbin/check_nginx.sh
vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
yum install nginx -y
vim /etc/nginx/nginx.conf
#Append the following after line 12
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 192.168.142.129:6443; #master's IP address
server 192.168.142.120:6443; #master2's IP address
}
server {
listen 6443;
proxy_pass k8s-apiserver;
}
}
2. Deploy the keepalived service
#Install keepalived
yum install keepalived -y
Copy the keepalived.conf prepared earlier over the default configuration file installed with the package
cp keepalived.conf /etc/keepalived/keepalived.conf
vim /etc/keepalived/keepalived.conf
script "/etc/nginx/check_nginx.sh" #line 18: change the directory to /etc/nginx/; the script itself is written below
interface ens33 #line 23: change eth0 to ens33 (check the NIC name with ifconfig)
virtual_router_id 51 #line 24: VRRP route ID; must be unique per instance
priority 100 #line 25: priority; set this to 90 on the backup server
virtual_ipaddress { #line 31
192.168.142.20/24 #line 32: change the VIP to the predefined 192.168.142.20
#delete everything from line 38 onward
vim /etc/nginx/check_nginx.sh
#!/bin/bash
#count nginx processes, excluding the grep itself and this script's own PID
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
#if the count is 0, stop the keepalived service
if [ "$count" -eq 0 ];then
systemctl stop keepalived
fi
chmod +x /etc/nginx/check_nginx.sh
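The counting expression in check_nginx.sh is worth unpacking: it keeps `ps` lines that mention nginx while discarding the grep process itself and (via `$$`) the script's own PID. A small simulation of that logic on canned input (placeholder fields instead of PIDs so the demo cannot collide with the shell's real `$$`):

```shell
# Simulated ps output: one real nginx process plus the grep that found it
ps_output='root P_A P_B P_C tty? nginx: master process
root P_D P_E P_F tty? grep nginx'
# -v inverts the match: drop lines containing "grep" or this shell's PID
count=$(printf '%s\n' "$ps_output" | grep nginx | egrep -cv "grep|$$")
echo "$count"   # 1 -> nginx is alive, so keepalived keeps running
```

With nginx stopped, the only surviving line would be the grep itself, the count drops to 0, and the script stops keepalived so the VIP fails over.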
#Start the service
systemctl start keepalived
ip a
# lbm address info
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:eb:11:2a brd ff:ff:ff:ff:ff:ff
inet 192.168.142.140/24 brd 192.168.142.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.142.20/24 scope global secondary ens33 //the floating VIP currently sits on lbm
valid_lft forever preferred_lft forever
inet6 fe80::53ba:daab:3e22:e711/64 scope link
valid_lft forever preferred_lft forever
#lbb address info
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:c9:9d:88 brd ff:ff:ff:ff:ff:ff
inet 192.168.142.150/24 brd 192.168.142.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::55c0:6788:9feb:550d/64 scope link
valid_lft forever preferred_lft forever
#Stop the nginx service on lbm
pkill nginx
#Check the service status
systemctl status nginx
systemctl status keepalived.service
#The check condition is now 0, so the keepalived service has been stopped
ps -ef |grep nginx |egrep -cv "grep|$$"
ip a
# lbm address info
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:eb:11:2a brd ff:ff:ff:ff:ff:ff
inet 192.168.142.140/24 brd 192.168.142.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::53ba:daab:3e22:e711/64 scope link
valid_lft forever preferred_lft forever
#lbb address info
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:c9:9d:88 brd ff:ff:ff:ff:ff:ff
inet 192.168.142.150/24 brd 192.168.142.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.142.20/24 scope global secondary ens33 //the VIP has floated over to lbb
valid_lft forever preferred_lft forever
inet6 fe80::55c0:6788:9feb:550d/64 scope link
valid_lft forever preferred_lft forever
#Start the nginx and keepalived services on lbm again
systemctl start nginx
systemctl start keepalived
ip a
# lbm address info
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:eb:11:2a brd ff:ff:ff:ff:ff:ff
inet 192.168.142.140/24 brd 192.168.142.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.142.20/24 scope global secondary ens33 //the VIP has moved back to lbm
valid_lft forever preferred_lft forever
inet6 fe80::53ba:daab:3e22:e711/64 scope link
valid_lft forever preferred_lft forever
cd /opt/kubernetes/cfg/
#Point every node configuration file at the VIP
vim /opt/kubernetes/cfg/bootstrap.kubeconfig
server: https://192.168.142.20:6443
#change the server address on line 5 to the VIP
vim /opt/kubernetes/cfg/kubelet.kubeconfig
server: https://192.168.142.20:6443
#change the server address on line 5 to the VIP
vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
server: https://192.168.142.20:6443
#change the server address on line 5 to the VIP
grep 20 *
bootstrap.kubeconfig: server: https://192.168.142.20:6443
kubelet.kubeconfig: server: https://192.168.142.20:6443
kube-proxy.kubeconfig: server: https://192.168.142.20:6443
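The three interactive edits above can also be scripted with sed. A hedged sketch, demonstrated on a temporary file rather than the live /opt/kubernetes/cfg files:

```shell
# Rewrite a kubeconfig server line to point at the VIP (temp copy for illustration)
kc=$(mktemp)
echo '    server: https://192.168.142.129:6443' > "$kc"
sed -i 's#\(server: https://\)[0-9.]*\(:6443\)#\1192.168.142.20\2#' "$kc"
cat "$kc"
```

On the real nodes you would loop the same substitution over bootstrap.kubeconfig, kubelet.kubeconfig, and kube-proxy.kubeconfig, then restart kubelet and kube-proxy.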
tail /var/log/nginx/k8s-access.log
192.168.142.140 192.168.142.129:6443 - [08/Feb/2020:19:20:40 +0800] 200 1119
192.168.142.140 192.168.142.120:6443 - [08/Feb/2020:19:20:40 +0800] 200 1119
192.168.142.150 192.168.142.129:6443 - [08/Feb/2020:19:20:44 +0800] 200 1120
192.168.142.150 192.168.142.120:6443 - [08/Feb/2020:19:20:44 +0800] 200 1120
kubectl run nginx --image=nginx
kubectl get pods
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
kubectl get pods -o wide
mkdir /k8s/dashboard
cd /k8s/dashboard
#Upload the official dashboard yaml files to this directory
#Authorize API access
kubectl create -f dashboard-rbac.yaml
#Create the dashboard secrets
kubectl create -f dashboard-secret.yaml
#Apply the configmap
kubectl create -f dashboard-configmap.yaml
#Create the deployment controller
kubectl create -f dashboard-controller.yaml
#Expose the service for external access
kubectl create -f dashboard-service.yaml
kubectl get pods -n kube-system
kubectl get pods,svc -n kube-system
1. On the master: write a script to self-sign the dashboard certificate
vim dashboard-cert.sh
cat > dashboard-csr.json <<EOF
{
"CN": "Dashboard",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "NanJing",
"ST": "NanJing"
}
]
}
EOF
K8S_CA=$1
cfssl gencert -ca=$K8S_CA/ca.pem -ca-key=$K8S_CA/ca-key.pem -config=$K8S_CA/ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare dashboard
kubectl delete secret kubernetes-dashboard-certs -n kube-system
kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system
2. Re-apply the new self-signed certificate
bash dashboard-cert.sh /root/k8s/apiserver/
3. Modify the yaml file
vim dashboard-controller.yaml
#Append the following after line 47
- --tls-key-file=dashboard-key.pem
- --tls-cert-file=dashboard.pem
4. Redeploy
kubectl apply -f dashboard-controller.yaml
5. Generate the login token
kubectl create -f k8s-admin.yaml
kubectl get secret -n kube-system
NAME TYPE DATA AGE
dashboard-admin-token-drs7c kubernetes.io/service-account-token 3 60s
default-token-mmvcg kubernetes.io/service-account-token 3 55m
kubernetes-dashboard-certs Opaque 10 10m
kubernetes-dashboard-key-holder Opaque 2 23m
kubernetes-dashboard-token-crqvs kubernetes.io/service-account-token 3 23m
kubectl describe secret dashboard-admin-token-drs7c -n kube-system
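`kubectl describe secret` prints the token already decoded; to capture it in a script you can instead pull the raw base64 field and decode it yourself. A sketch (the secret name is the one from the listing above; the demo value below stands in for a real token):

```shell
# Scripted form of the describe step, for a real cluster:
#   kubectl get secret dashboard-admin-token-drs7c -n kube-system \
#     -o jsonpath='{.data.token}' | base64 -d
# The decode step itself, shown with a stand-in value:
token=$(printf 'ZGVtby10b2tlbg==' | base64 -d)
echo "$token"   # demo-token
```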
6. Copy and paste the token to log in to the dashboard UI