Prerequisite summary:
1. Time is synchronized across all nodes.
2. Hostname resolution works on all nodes (via DNS or /etc/hosts).
3. The iptables and firewalld services are disabled on all nodes.
Kubernetes yum repo: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
Configure the yum repositories. First download docker-ce.repo for Docker:
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cd /etc/yum.repos.d/
vim k8s.repo #configure the kubernetes repo
[k8s]
name=K8s Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
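The repo stanza above can also be written in one step with a heredoc; a sketch, with the target directory as a parameter so it can be tried outside /etc (on a real node pass /etc/yum.repos.d and run as root):

```shell
# Sketch: write the k8s.repo stanza from the text into a target directory.
# write_k8s_repo is a hypothetical helper name, not part of any tool.
write_k8s_repo() {
  dir="$1"   # e.g. /etc/yum.repos.d on a real node
  cat > "$dir/k8s.repo" <<'EOF'
[k8s]
name=K8s Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
enabled=1
EOF
}
```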
yum repolist #verify that both repos are usable
yum install docker-ce kubelet kubeadm kubectl -y
If yum reports a GPG key error, download the key manually and import it:
wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
rpm --import yum-key.gpg
systemctl daemon-reload #reload systemd unit files
systemctl start docker.service #start docker
systemctl enable docker.service #start docker on boot
systemctl enable kubelet.service #start kubelet on boot
Docker needs to pull every required image, either automatically from an upstream registry or from a local/mirror registry.
1. Let Docker pull the dependency images through a proxy
vim /usr/lib/systemd/system/docker.service
Below the line "# for containers run by docker", add the environment variables:
Environment="HTTPS_PROXY=http://www.ik8s.io:10080" #fetch the required images through this proxy
Environment="NO_PROXY=127.0.0.0/8,172.20.0.0/16" #addresses that bypass the proxy
2. Pull the dependency images manually. List the images kubeadm needs:
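Package upgrades overwrite files under /usr/lib/systemd/system, so the same proxy variables can instead go into a systemd drop-in; a sketch, with the drop-in directory as a parameter (on a real node pass /etc/systemd/system/docker.service.d, then run systemctl daemon-reload && systemctl restart docker):

```shell
# Sketch: place the proxy settings in a systemd drop-in instead of editing
# docker.service directly. write_docker_proxy_dropin is a hypothetical helper name.
write_docker_proxy_dropin() {
  dir="$1"   # e.g. /etc/systemd/system/docker.service.d on a real node
  mkdir -p "$dir"
  cat > "$dir/http-proxy.conf" <<'EOF'
[Service]
Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
Environment="NO_PROXY=127.0.0.0/8,172.20.0.0/16"
EOF
}
```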
kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.13.1
k8s.gcr.io/kube-controller-manager:v1.13.1
k8s.gcr.io/kube-scheduler:v1.13.1
k8s.gcr.io/kube-proxy:v1.13.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6
Pull commands:
docker pull docker.io/mirrorgooglecontainers/kube-apiserver:v1.13.1 #pull the image
docker tag docker.io/mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1 #retag it under the name kubeadm expects
docker pull docker.io/mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker tag docker.io/mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker pull docker.io/mirrorgooglecontainers/kube-scheduler:v1.13.1
docker tag docker.io/mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker pull docker.io/mirrorgooglecontainers/kube-proxy:v1.13.1
docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker pull docker.io/mirrorgooglecontainers/pause:3.1
docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker pull docker.io/mirrorgooglecontainers/etcd:3.2.24
docker tag docker.io/mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker pull docker.io/coredns/coredns:1.2.6
docker tag docker.io/coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
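The repetitive pull/tag pairs above can be generated with a loop; a sketch (the function name is mine) that prints the same 14 commands, which can then be piped to sh:

```shell
# Sketch: emit the docker pull/tag command pairs for all images listed above,
# mapping mirrorgooglecontainers (and coredns from its own repo) to k8s.gcr.io.
gen_pull_cmds() {
  for img in kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 \
             kube-scheduler:v1.13.1 kube-proxy:v1.13.1 pause:3.1 etcd:3.2.24; do
    echo "docker pull docker.io/mirrorgooglecontainers/$img"
    echo "docker tag docker.io/mirrorgooglecontainers/$img k8s.gcr.io/$img"
  done
  # coredns lives under its own Docker Hub namespace
  echo "docker pull docker.io/coredns/coredns:1.2.6"
  echo "docker tag docker.io/coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6"
}
# Review, then execute:  gen_pull_cmds | sh
```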
-----------------------------------------------------------------------------
Edit the kubelet config file /etc/sysconfig/kubelet so kubelet ignores the error raised when swap is enabled:
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
Enable kubelet on boot:
systemctl enable kubelet.service
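Instead of only ignoring the swap error, swap can be disabled outright, which is what upstream recommends; a sketch (the fstab path is a parameter so the helper can be tried on a copy; on a real node pass /etc/fstab):

```shell
# Sketch: disable swap now and keep it off across reboots.
# disable_swap is a hypothetical helper name.
disable_swap() {
  fstab="$1"                        # e.g. /etc/fstab on a real node
  swapoff -a 2>/dev/null || true    # turn swap off now; no-op in unprivileged environments
  # comment out any uncommented swap mount entries so swap stays off after reboot
  sed -i.bak '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' "$fstab"
}
```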
Initialize the master:
kubeadm init --kubernetes-version=v1.13.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
--kubernetes-version=v1.13.1 #Kubernetes version to initialize
--pod-network-cidr=10.244.0.0/16 #network CIDR used by pods
--service-cidr=10.96.0.0/12 #network CIDR used by services
--ignore-preflight-errors=Swap #do not abort the preflight checks just because swap is enabled
If initialization fails with an error, read the message; a common cause is a missing or wrongly tagged image, which you can fix by retagging the image as described above.
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir ~/.kube #create the .kube directory (the output suggests a regular user with sudo; here I use root directly)
sudo cp /etc/kubernetes/admin.conf ~/.kube/config #copy admin.conf into ~/.kube as the file "config", which kubectl looks for
sudo chown $(id -u):$(id -g) $HOME/.kube/config #make the invoking user the owner of the kubeconfig
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
#Use the command below to join nodes to the cluster (it is a good idea to save it):
kubeadm join 192.168.1.179:6443 --token 5m7gg1.czd5td6itn9g2fhz --discovery-token-ca-cert-hash sha256:50c64cac88defae6beecf7bdde9b212094d7cc937b709b94f0baaeaaa4246e7e
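The token printed by kubeadm init expires after 24 hours by default; on the master a fresh join command can be regenerated at any time with `kubeadm token create --print-join-command`. For scripting, a small helper (the name is mine) that composes the join line from its parts:

```shell
# Sketch: build a kubeadm join command line from its three pieces.
# make_join_cmd is a hypothetical helper, not a kubeadm feature.
make_join_cmd() {
  endpoint="$1"   # API server host:port, e.g. 192.168.1.179:6443
  token="$2"      # bootstrap token from kubeadm init / kubeadm token create
  hash="$3"       # CA cert hash without the sha256: prefix
  echo "kubeadm join $endpoint --token $token --discovery-token-ca-cert-hash sha256:$hash"
}
```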
As root:
mkdir ~/.kube
cp /etc/kubernetes/admin.conf ~/.kube/config
Since root created the files itself, there is no need to change the owner and group.
kubectl get cs #check component health
kubectl get nodes #check the status of each node
NAME STATUS ROLES AGE VERSION
master NotReady master 19h v1.13.1 #NotReady because the flannel network add-on is not installed yet
Install the flannel add-on
https://github.com/coreos/flannel #flannel repo and documentation
#To install it, run the command below; it deploys flannel automatically:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
#After the deployment, check the nodes again:
NAME STATUS ROLES AGE VERSION
master Ready master 19h v1.13.1
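The Ready check above can be scripted; a minimal sketch (the helper name is mine) that parses the output of `kubectl get nodes --no-headers`, where the second column is the STATUS shown in the tables in this doc:

```shell
# Sketch: succeed only when every node's STATUS column reads "Ready".
# Feed it the output of: kubectl get nodes --no-headers
all_nodes_ready() {
  awk '$2 != "Ready" { exit 1 }'
}
# usage: kubectl get nodes --no-headers | all_nodes_ready && echo "all nodes Ready"
```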
#Check component status (all pods currently running in the cluster's system namespace):
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-86c58d9df4-4frfz 1/1 Running 0 19h
coredns-86c58d9df4-hlr72 1/1 Running 0 19h
etcd-master 1/1 Running 0 19h
kube-apiserver-master 1/1 Running 0 19h
kube-controller-manager-master 1/1 Running 0 19h
kube-flannel-ds-amd64-4c7jx 1/1 Running 0 19h
kube-flannel-ds-amd64-89m8l 1/1 Running 0 17h
kube-flannel-ds-amd64-rmxj9 1/1 Running 0 19h
kube-proxy-8pnqs 1/1 Running 0 17h
kube-proxy-b4hlj 1/1 Running 0 19h
kube-proxy-fzp2m 1/1 Running 0 19h
kube-scheduler-master 1/1 Running 0 19h
#Note: if you do not pass -n, kubectl uses the "default" namespace; the system pods above live in kube-system
kubectl get ns
NAME STATUS AGE
default Active 19h
kube-public Active 19h
kube-system Active 19h
#Master initialization complete
On the worker nodes: if you used the first method (Docker pulls the dependency images through the proxy), copy the edited docker.service file from the master to node01.
Copy k8s.repo from the master and set up the docker-ce yum repo the same way; if yum reports errors, apply the same fix as on the master.
yum install docker-ce kubelet kubeadm -y
1. Start the docker service
2. Enable docker and kubelet on boot
systemctl start docker.service #start docker
systemctl enable docker.service #start docker on boot
systemctl enable kubelet.service #start kubelet on boot
Manual image download:
Pull the kube-proxy and pause images and retag them, using the same pull/tag commands as on the master.
Join node01 to the cluster
using the kubeadm join command saved earlier:
kubeadm join 192.168.1.179:6443 --token 5m7gg1.czd5td6itn9g2fhz --discovery-token-ca-cert-hash sha256:50c64cac88defae6beecf7bdde9b212094d7cc937b709b94f0baaeaaa4246e7e --ignore-preflight-errors=Swap #note the extra --ignore-preflight-errors=Swap appended at the end
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 20h v1.13.1
node01 Ready <none> 19h v1.13.1 #node01 has joined the cluster
# node02: repeat the same steps as node01 to join the cluster