69. Kubernetes - Building a Highly Available K8S Cluster (4)
Published 2023-02-08 23:29:01 · Editor: 雪饮
First, confirm the kubeconfig on master01:
[root@k8s-master01 ~]# cat $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5REND QWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201 bGRHVnpNQjRYRFRJek1ESXdOekUwTkRjMU1Gb1hEVE16TURJd05ERTBORGMxTUZvd0ZURVRNQkVHQTFV RQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dn RUJBTHBYCnMrYUZQdHJwQ1VWMG5OZmdWYVkzM21VU1o3QzE5ZXhyK0w4Q1FVVXkrN1BXak1WbUM1U1Vp Rm1DQng3Sk8vS1cKT0NFZ2hmUGN3YU1EK1V3WDVBN0dnQVdnZC8wMXZvdy9JRUw3N1Q1UkhROWhFcnpy R29FOE1Pd1owcjdrUlpXaworbE4vM0NVak1UcmdVSlVQRzBOU1Z3cVpKYVBUNmg3b2NBS29FeXJJeGdi Z2x0Kzh4UFlmekJWM2p1bXZ5MlZ2CjV4d2VyR2UxZTAxZUVsSEdnOGFBNXRWKzNuNW95WkM2bGVEYkFS dkZmM1dVeC9XK3o2eTZpRURmTytEbkxDTmUKK3QwUjFEOFowN3ptalppWkJhVEpXbVpzRlE5TjQ1Y3d4 V0UybVFOMDZQWC8xbHh0R1VraTNQVVN3M2pVOXlXNgo1WUs5TU1qQ1pVWkd1OHVqQUhzQ0F3RUFBYU1q TUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZj TkFRRUxCUUFEZ2dFQkFGTm9RQlZJRjVkM2FhNE1OOForKzJHS0xjamIKNlpoKzZJaXJyaHg3dklZVzdl cFB6SklQVHVRL3MxVzZnY09oMUJxZUMvR2lrenZiektxU2MrVXA1QVo3SnVRTgo1ai9lU2pVd3VwaUxk aXpESC9Qb0owYlNiQzZTNkdJRGNxQVJUaU9mOVpwMldxNThsakN3eWhOR0krNlN2dWsrCnhNcWFWd2Vi SVFxUFJsNlJRU293ZTNHdzE2bmhmZFhodGZXOTdvbUpSU3BWYldEaVlXM21qZXJDZ01VYWVnWVgKTkdw WmtuaHhVZFAwY2wyN2pEWC82WDVTeStnN2tFN3ZxQmhSUnFmOUtSMWpOZVp0czZ2UGF4YzBDc0h5N2R2 MAp3U1dkQWp5NVNRM1lzODJCL1VWNFRXVDBYcEdjdHpxUWpSNEdleVY2bmVlN2RLL3hhWktzeVFMNXFk OD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.66.100:6444
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWR xZ0F3SUJBZ0lJVmQxYytPRG54cUF3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2E zVmlaWEp1WlhSbGN6QWVGdzB5TXpBeU1EY3hORFEzTlRCYUZ3MHlOREF5TURjeE5EUTNOVEphTURReAp GekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnp MV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXNwSFl4dUs 5YkxRNTF4aWgKazdvOHpqemU4dWtQQTVLR1Yyc2xTeTZkd3o0V0QySjZLUitKR1lrVk5UdWhmaHcwZmt SNDBqMVJUUXUzZHJ1VwpVSzloQlpQd2doQjdrbjllYWh4cm9aZzFxaVZLOW45OEtMdXMyeEdMaDBMZUp wOE94MnVnbTRPdVI2S3N2aUtTCmpXL1lSMENOcU56dVJUYm9PM254d2RpcmZMUjRjTlRiQW1YSU91Wnd qN1lPLzhxeVVJOS8rbkpTRkxwU3BOSjEKUHBpaHhzQyt0b0k4L3ErZnMwM2lEYkFQV3hablpPakhkVG9 vKzVCZjlRWExJZ1piVC9EMGxISkdDdi9mSERycgpoWmxBUERoY3A1cXdvNHdLYUwxd2hFSFZKTEI5cmp rZG1VLzJRUmVmbnl1TDl3d3lBYmZxbTFPMFRlTFlzNFZ5CjJDNndyd0lEQVFBQm95Y3dKVEFPQmdOVkh ROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUx CUUFEZ2dFQkFMa0RKeW4xZzhDbUlCR3RGcndoVGlraXdJd24xd1c2MkFlcQpVNjAvT3QxTENqaVZZMWJ 6Z2tIVGZzNFluTnIyYUF6dWoxV3N2TER0Rzd0elVpYzBkRnJlc3c5VHAzR1ROSmRQCmJMa1FoUXpWQmZ jeEJ0SXFoWld2a0RzbzRhTWxlN2hJMStwYm5zamF0eG9FeUlBWEFQMk92Q1pRZ0x6NE9DcWsKN2h5aE5 RVjZ0MmJpNmRua3NTNDVSVnpoR3VBdHNveE1XVjl2bGFqVVlJT1NGdThNN3RrdjhZNHhZVXUyOXNqagp TSURMN25xU1FFWlp4ZDg5aXZYTi95bGdxRGNvY3pucmkrb1RXblJ5OGlUTkk3SWF4TDZXYU91REE2WGl 4YnkvCmx1dGtuU0ZyTlZtK2RDN1ZKa0xqUktpWFA3SGNMSXQ2Y0V2c1FSdjlHV3dIR0ZmSWkzOD0KLS0 tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0N BUUVBc3BIWXh1SzliTFE1MXhpaGs3bzh6anplOHVrUEE1S0dWMnNsU3k2ZHd6NFdEMko2CktSK0pHWWt WTlR1aGZodzBma1I0MGoxUlRRdTNkcnVXVUs5aEJaUHdnaEI3a245ZWFoeHJvWmcxcWlWSzluOTgKS0x 1czJ4R0xoMExlSnA4T3gydWdtNE91UjZLc3ZpS1NqVy9ZUjBDTnFOenVSVGJvTzNueHdkaXJmTFI0Y05 UYgpBbVhJT3Vad2o3WU8vOHF5VUk5LytuSlNGTHBTcE5KMVBwaWh4c0MrdG9JOC9xK2ZzMDNpRGJBUFd 4Wm5aT2pICmRUb28rNUJmOVFYTElnWmJUL0QwbEhKR0N2L2ZIRHJyaFpsQVBEaGNwNXF3bzR3S2FMMXd oRUhWSkxCOXJqa2QKbVUvMlFSZWZueXVMOXd3eUFiZnFtMU8wVGVMWXM0VnkyQzZ3cndJREFRQUJBb0l CQUdHYTByQ3pVdkxOK0NlWQpNUWs4Yk93VjNZOU0wSVlWV3hVQlhkc2dXZDlVV2w2Q1oxOSsrME5YNko yMlFHbGNKVjAzTkF0R3ROKzJIY3ZxCmNSa2RJNTBXNWdsUjFSbVlRUlVpLzduT0p0Y2Zsei94SXY1b3h 1emZSRExrMitTa1lFR2tsSjhzZE9CM0RKREkKK080U1NsZDM4M1p2ZkZXYzA0ZGUra1FJbUlPS2YrUzY wQWJGNjMveWxIdllSOUpvM2ZRRTBGUk1OaTVWLzA5MApUOFNMRTMwalpaejh6dHJUNFoyNnVQN1ZsVnR MUmtEMGJ6WHBOLzJEMENIUVdha3c1M3Fac2UxaVlEWC9tcGpzCkJ2MDlRbjhTOVp5bXJQYVRYYWRIcnd TNXRRRHNWQlV5cGt3VDNlbmVTTFovRzhQVUlQbExIbGl4LzI4cmd6TGgKSEJWRFJvRUNnWUVBMUNMeFV yTncwRVRlVzB5MHlaZjlNemFid2Z4UUZGL2xPcU1sbmplM2pYVXZ4elB4b1Y3ZwptRGhpZGVWK1RLQ1l zeTRFTFJoNG9UZ055cnYxTXd1Y2I4YzFvdXpJVlNOZjhPLy9ua1pONzZXTkp6UWp4N2c4CmNTVS8rZlZ qOVUxWDNmWkNOUDZOa0VpeWkya2FRZDBLbDZ4Y2RQVGxrQ3FXcUQwaWs2SERQRThDZ1lFQTEzNGIKcEJ VZExDeUhvVGIzQXQzOFdKTmhDWlJ5djk0cWlnb0dZRTNTdHZ3SUFjaXJJdnNGUzlFM3VpQytYMjN5dFh UTwpEREErcmZwSUovVFhaYzcxcElDQlFla20xZEVzQXNBemxZOXJHcVVRekRyRE5VTkxya2lyT2hGUUQ rNnBwS09GCk9mOVNyMlVQOGtLUGtMZFFpV291Yk54L2IwUnIrVnhNcXc0ZVRhRUNnWUVBc2JGVFUyTGJ iSmxEYUZhb1dQVG4KTXE3YmFYSmY0YkV4NGh3bXRwRVZQM2ladk5MVjQ4WUZlM3cvZldIdW1XRXNoMnB VTlRINldaRUtmSGRVdkoxTgpQSlF4YVhmTmx3TTZxaWRlaHNWOUl2QVpmRzFBUzFzWHhlN2QyQktrMkN VaEpOdlNPWEhBUXN1aVF3U1c2ZlN0Cm1yN1Y4Mkh2cVFNRGo0a21IV095bGlFQ2dZRUF0aEdxc1B2VjV oakpqNEN0T3hMcnZycm01ZjB5NXNHREY1WlkKeE0xOEYzYmlIUCs2K0pjMlpsU2l6UFFWWlBPMGVYUHp FNEUvdENjZkNBTnFhbTV1UlVyOTZ2NWUvWkQ1cW1sUwpMQzg4d3dwc0l1SVRSTkZUQkRJSjJjbis1emN 
5eGhRUzRHbkZKc1F3c1BOajhWV3hDaWxZaUVuVXNlSVJpR0pmCnRMYjlDNEVDZ1lBbk1YOWpaamJpOHV uSmx6S21kakRteVVjZjl6dXRxNnhxL1VWTEpRN1lnTXpWRFJ1L2JFMDUKOE90eGs5ZnpGSmdBOFdVRmZ 5QTRIQU9UMjRqd29LWk4xcWFGZXg3UHN6WFdQeXU1b1Nqd2xUaDVEdjF6aWltbAo3WUlnbHVTNUd3VzN 2bFdLZnowWVNCTUFvOFJjRGlXeSt5ZnBlaXc4U2VQWk9XUWtraVRSTnc9PQotLS0tLUVORCBSU0EgUFJ JVkFURSBLRVktLS0tLQo=
The main thing to confirm is this line:
server: https://192.168.66.100:6444
The master01 node itself is also fine, although its status is NotReady (expected at this point, since no pod network add-on has been deployed yet):
[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady master 22h v1.15.1
Next, sync the haproxy.cfg configuration file to master02 and master03:
[root@k8s-master01 ~]# scp haproxy.cfg root@k8s-master02:/root/
root@k8s-master02's password:
haproxy.cfg 100% 972 452.3KB/s 00:00
[root@k8s-master01 ~]# scp haproxy.cfg root@k8s-master03:/root/
root@k8s-master03's password:
haproxy.cfg 100% 972 721.7KB/s 00:00
On master02, extract the corresponding bundle of scripts and config:
[root@k8s-master02 ~]# cd /usr/local/kubernetes/install
[root@k8s-master02 install]# tar -zxvf start.keep.tar.gz
data/
data/lb/
data/lb/start-keepalived.sh
data/lb/kubeadm-config.yaml
data/lb/etc/
data/lb/etc/haproxy.cfg
data/lb/start-haproxy.sh
Then sync the /data directory that was set up on master01 previously to master02 and master03:
[root@k8s-master01 ~]# scp -r /data root@k8s-master02:/
root@k8s-master02's password:
kubeadm-config.yaml 100% 832 356.9KB/s 00:00
haproxy.cfg 100% 896 1.3MB/s 00:00
start-haproxy.sh 100% 404 420.8KB/s 00:00
start-keepalived.sh 100% 481 517.7KB/s 00:00
[root@k8s-master01 ~]# scp -r /data root@k8s-master03:/
root@k8s-master03's password:
kubeadm-config.yaml 100% 832 842.9KB/s 00:00
haproxy.cfg 100% 896 1.1MB/s 00:00
start-haproxy.sh 100% 404 625.5KB/s 00:00
start-keepalived.sh 100% 481 727.1KB/s 00:00
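Since the same files go to every additional master, the two scp runs above can be folded into one loop. A minimal sketch (it only prints the commands so you can review them; drop the echo to actually run them, and note the hostnames are simply the ones used in this walkthrough):

```shell
# Print the scp commands that would sync /data to the other masters.
# Remove `echo` to execute for real (each run prompts for a password
# unless SSH keys have been distributed).
for host in k8s-master02 k8s-master03; do
  echo scp -r /data "root@${host}:/"
done
```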
Then load the images on master02:
[root@k8s-master02 install]# docker load -i haproxy.tar
Loaded image: wise2c/haproxy-k8s:latest
[root@k8s-master02 install]# docker load -i keepalived.tar
Loaded image: wise2c/keepalived-k8s:latest
Then start HAProxy and Keepalived on master02:
[root@k8s-master02 install]# cd /data/lb/
[root@k8s-master02 lb]# ls
etc kubeadm-config.yaml start-haproxy.sh start-keepalived.sh
[root@k8s-master02 lb]# ./start-haproxy.sh
a38ff8fe392040938411315f79faec9c752f419018e97081f9b544953ae5f662
[root@k8s-master02 lb]# ./start-keepalived.sh
9864a77712334623049acab1327c8b2edf2c036a375767662350033ef8148a23
Then, on master02, configure the Kubernetes yum repository and install the tools:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum remove -y kubelet kubeadm kubectl
yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
systemctl enable kubelet.service
Then join master02 to master01's cluster as a control-plane node:
kubeadm join 192.168.66.100:6444 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:43ccc6fbddb8f78383fb6985f54f1d455e0204f0ad6da6c01b3180960a2f42c5 \
--control-plane --certificate-key c03402f8c9dfc85aa3d7fd7087f3f80da984fc61f7e97c65a4dfe99bd5224c0f
An error you may hit:
error execution phase control-plane-prepare/download-certs: error downloading certs: error downloading the secret: Secret "kubeadm-certs" was not found in the "kube-system" Namespace.
If so, run this on the original master01 node:
kubeadm init phase upload-certs --upload-certs --config kubeadm-config.yaml
This generates a new key; substitute it for the value after --certificate-key in the join command above, because a certificate key is only valid for 2 hours.
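To avoid pasting a stale key by hand, the fresh key can be stitched onto the join command in one go. A sketch only: on a real master01 the key would be captured from the kubeadm output as shown in the comment, but it is hard-coded here (the token and hash are the ones from this walkthrough):

```shell
# On a real master01 you would capture the fresh key like this:
#   CERT_KEY=$(kubeadm init phase upload-certs --upload-certs \
#       --config kubeadm-config.yaml | tail -1)
# Hard-coded here for illustration only:
CERT_KEY="c03402f8c9dfc85aa3d7fd7087f3f80da984fc61f7e97c65a4dfe99bd5224c0f"
JOIN="kubeadm join 192.168.66.100:6444 --token abcdef.0123456789abcdef"
JOIN="$JOIN --discovery-token-ca-cert-hash sha256:43ccc6fbddb8f78383fb6985f54f1d455e0204f0ad6da6c01b3180960a2f42c5"
# Emit the full control-plane join command, ready to run on master02/03:
echo "$JOIN --control-plane --certificate-key $CERT_KEY"
```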
After master02 has joined, run these steps on it as well:
[root@k8s-master02 lb]# mkdir -p $HOME/.kube
[root@k8s-master02 lb]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master02 lb]# chown $(id -u):$(id -g) $HOME/.kube/config
Then master02 should see two nodes, even though both are still NotReady:
[root@k8s-master02 lb]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady master 23h v1.15.1
k8s-master02 NotReady master 3m24s v1.15.1
Here too, confirm that the config shows server: https://192.168.66.100:6444
[root@k8s-master02 lb]# cat $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJek1ESXdOekUwTkRjMU1Gb1hEVE16TURJd05ERTBORGMxTUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTHBYCnMrYUZQdHJwQ1VWMG5OZmdWYVkzM21VU1o3QzE5ZXhyK0w4Q1FVVXkrN1BXak1WbUM1U1VpRm1DQng3Sk8vS1cKT0NFZ2hmUGN3YU1EK1V3WDVBN0dnQVdnZC8wMXZvdy9JRUw3N1Q1UkhROWhFcnpyR29FOE1Pd1owcjdrUlpXaworbE4vM0NVak1UcmdVSlVQRzBOU1Z3cVpKYVBUNmg3b2NBS29FeXJJeGdiZ2x0Kzh4UFlmekJWM2p1bXZ5MlZ2CjV4d2VyR2UxZTAxZUVsSEdnOGFBNXRWKzNuNW95WkM2bGVEYkFSdkZmM1dVeC9XK3o2eTZpRURmTytEbkxDTmUKK3QwUjFEOFowN3ptalppWkJhVEpXbVpzRlE5TjQ1Y3d4V0UybVFOMDZQWC8xbHh0R1VraTNQVVN3M2pVOXlXNgo1WUs5TU1qQ1pVWkd1OHVqQUhzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGTm9RQlZJRjVkM2FhNE1OOForKzJHS0xjamIKNlpoKzZJaXJyaHg3dklZVzdlcFB6SklQVHVRL3MxVzZnY09oMUJxZUMvR2lrenZiektxU2MrVXA1QVo3SnVRTgo1ai9lU2pVd3VwaUxkaXpESC9Qb0owYlNiQzZTNkdJRGNxQVJUaU9mOVpwMldxNThsakN3eWhOR0krNlN2dWsrCnhNcWFWd2ViSVFxUFJsNlJRU293ZTNHdzE2bmhmZFhodGZXOTdvbUpSU3BWYldEaVlXM21qZXJDZ01VYWVnWVgKTkdwWmtuaHhVZFAwY2wyN2pEWC82WDVTeStnN2tFN3ZxQmhSUnFmOUtSMWpOZVp0czZ2UGF4YzBDc0h5N2R2MAp3U1dkQWp5NVNRM1lzODJCL1VWNFRXVDBYcEdjdHpxUWpSNEdleVY2bmVlN2RLL3hhWktzeVFMNXFkOD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.66.100:6444
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJS0FzbmxURHBLNjB3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBeU1EY3hORFEzTlRCYUZ3MHlOREF5TURneE5EQTFOVE5hTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXdSYkZJYUJFeFd5eS8vdngKRnlHTkNqNmhIYVEvakx0WmZVWG5CZVBLZGZydFhtVTVIV3BIdTNDV1N2dDIyaHNoN0FHZWVZV2duNHFUZGpyKwpCd2N3dHpUUE5FMHN4NHFXQnlwWUpLNXovbEY1T0JMSHBaTXFCQVNianFmcHp6K0tGUzdtMmt6OUw4NndFQk9rCkNNN2wyUmFaQ250NDlrMjdpMXBkM1kzSEdpdjR4SE8wcjE2dkhjbndYWG1PUTlKR01jQ21CenZLQXNEK2g2bWEKeHIzYTFkRDQ5dzgwczhGaHFUQnQ1Zm5RRU9SSEFLdUZzcDZxMkV2S2pkdndIZDNDaFdJTzJkQWlWV3dpMmpvZQpIbThDVUVnYUNld0R5cThBQmtISFBBVEZ3OXVtekd6MFlrb0xVeXBlZFdrV0tzYkp4aXpnd1p2MkhwQmpHZ1lJCnJGNGdOd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFCSUxNbHRuT3RiRkkyZmFOUVd1QTlCdjJ6QWIvcitLUzl3Vgp1SUR5eDVEUGVBb1MxemllWTdkcFlpUFNxTzV6MTB2NTJzc0x1Sm5PVkdKdVdrdkVua0lLQXMwRzFXdi9Bdi9sCjdaamRYeDV6WVJDYlp5MmtjRUsvM1hkM3RuV0JQME5STUZVT3RUcTBxUWhuWGZHNERUTW1ySUZyQm8wcnJpOXEKWit6Z093VWxMN3ZVcHgzVmFHbUxWaE1zM21WNDc2emlsdXNSZ3pKV2xLSVRtejVuQlc4R2hhczhXSFZZVFI3Nwp2dVFnMzNwOUViTXQrcmE0dFh1NDNCSUdSTDN0UHNZZzFGclUwRGlCZGhLZHA0RDdoUlMxOTRkUVhVem9XMkNCClNheE1xYjRNalhOZWlhVlNFWERORyt5MjYrcWdBUXh1aktWYm53b3ptbGN4ZHJWc1ZvYz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBd1JiRklhQkV4V3l5Ly92eEZ5R05DajZoSGFRL2pMdFpmVVhuQmVQS2RmcnRYbVU1CkhXcEh1M0NXU3Z0MjJoc2g3QUdlZVlXZ240cVRkanIrQndjd3R6VFBORTBzeDRxV0J5cFlKSzV6L2xGNU9CTEgKcFpNcUJBU2JqcWZwenorS0ZTN20ya3o5TDg2d0VCT2tDTTdsMlJhWkNudDQ5azI3aTFwZDNZM0hHaXY0eEhPMApyMTZ2SGNud1hYbU9ROUpHTWNDbUJ6dktBc0QraDZtYXhyM2ExZEQ0OXc4MHM4RmhxVEJ0NWZuUUVPUkhBS3VGCnNwNnEyRXZLamR2d0hkM0NoV0lPMmRBaVZXd2kyam9lSG04Q1VFZ2FDZXdEeXE4QUJrSEhQQVRGdzl1bXpHejAKWWtvTFV5cGVkV2tXS3NiSnhpemd3WnYySHBCakdnWUlyRjRnTndJREFRQUJBb0lCQUZCZStQa1JLKzc5V3RpZwpkdTdJNFZzbFRJejVCQmJCR1BQQzkvR0VxbzVIUHl4dWQ4S2RyWFFBM2g0aDQ0dlBoV1FtSEYrNjFtdnlFNFUvCjh1TGNCRlFONEQyRjdpQzB0OVFOdFJpM29NSjRDVHZrM1VNM0tXTDR4QU15TTJrM3FuTTh6WXlLUlV6ei9HY0UKdGQrUXR5MlFjVzFpamF3QUdSTmRMdnI4ZTZhUmN1VW51a0tEOVM2ZngwaW8wUzZKUkhVWmFWRWdxREY3YU9vWgpVUUNaU2Q1VDVKNmlFUTQxd1VSSUN4M1JLb3lZbjJTbkRkRkZwZjNNS3JKbWw2TDVaaHlaNGt6RmVyYWM1cXYvCkpsUTg3RXdBcHFMK2pXc0VvV292bCtQcmQ0ZVZ0T04zbkc1V1BQcklVQUNsSW5ac1dTM0ZHcjhxOWZEcmg5NnYKSC9HUnVERUNnWUVBOXVOQUlaNDhyNnJFNTFEN25tdndQWWQ0QXROT1hYbFBab1NsR0ZwckxFUVJCMnNzc2VKUQp1Q3VLS3MvMXpXTXpaRUh4TFM0eXdDejhtR3o5dGZRSmRNaEJ4NW5nWFBiK0JMV1lzSVg2blFkVlBFK1VyalFnCndTM1RoRVYrUm5CZHBsWndVdUVIZ2VnWlhEV1Qxc3pqN055Wnl6RHV3UllKWGxzbm9YU09LdzhDZ1lFQXlEY3gKL0E3N1VWZXFDTjNMTHFqdGs2eW94THp1bjBjU0lYL2JpU2NTYS9yLzBLT3YxbXdQbnkxSmVVT2FXeklGTWowVwpDL2xpdkhTbkNpQlVsL21hSEFuVnlQYzFiK0FJcUw1czhtYkZUY2x1dDRSN1Q0ZWRkQ2F5aXVQek9Oa1dmSG04ClozMkprSzZBTXM0aEJzTjNRNDhkVjJLeCs1eFIwZG56MTBXY1dGa0NnWUFLSEd4MzgvOFFRcklsdHc5WEFaeXAKS0c4bHpubWJJbWk2RGh5a3pxOHM5T3l0blJvTGZ2VkhWYUVtOTdWZFgvNnUwSFNNSVNRNjhweTFzV0VDbnFmMApmRzhWT1p3U3NwcmNub05PVjI1WUdBREpvNGkzU2JNOXRoNi9nQWtYNFdvMGNiM1A1eDlqbHBuVFNPNXhFWnNVCkRFVFFLWVRkcTRWZXMrVC9tOEpteVFLQmdGMldQMDBkQzZpb0c1anRZODQ1dEdPMDcyYVhFY3R1QXpHWmZGc04KNG5TSzdRenZsbi9hSHlzK2xmdVMrQkhzdmJVUURNQW9JRmtMQmhHYnJ5OGl3MENiOEV4eUVZNXI0R0JRTXNqVAo5U0k1S0FHc2NaOXBPdFpTU0Y3WDBwY2VFbjY0d0xKM1lkZzVXVDltVHRYRWhIa1Y2cGN3VVJYVnFnRTNxZDFVCmNwSnhBb0dBUW5TQklOdGZiRVljdVcrUnd
mNzRhSlBXL0M1eXgvSDQ0Y3d5aEtYM0thaDdHbjZpZlhoaFRFNjkKZnJ5UTZWTUNmT1BBem5pNEgyV1dGbzY3UTNhSENSZ3pLTmFIOWZ4VC8xTVROaW5MU1lOaDRJdmdLamRvK1lodAppK0VUQVJsdWJwcUpMdk1qSXZNaFZodzQ1WTRkb0twRnhSQTdvbUdPamdMMFZGQnpjLzA9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
Next up is master03:
[root@k8s-master03 ~]# cd /usr/local/kubernetes/install
[root@k8s-master03 install]# docker load -i haproxy.tar
Loaded image: wise2c/haproxy-k8s:latest
[root@k8s-master03 install]# docker load -i keepalived.tar
Loaded image: wise2c/keepalived-k8s:latest
[root@k8s-master03 install]# cd /data/lb/
[root@k8s-master03 lb]# ./start-haproxy.sh
65137f2e027259c60bbf6c4a7223f3a0f7c710c8cdde1817f89ac9a64f62cc89
[root@k8s-master03 lb]# ./start-keepalived.sh
862426883ba7bf267f642f116a68113fb9429ac7b9dacc589a48b3bce8ac133b
Still on master03:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum remove -y kubelet kubeadm kubectl && yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1 && systemctl enable kubelet.service
Back on master01, you can see two instances each of etcd, apiserver, controller-manager, proxy, and scheduler:
[root@k8s-master01 ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-76v7g 0/1 Pending 0 23h
coredns-5c98db65d4-9nkx9 0/1 Pending 0 23h
etcd-k8s-master01 1/1 Running 1 23h
etcd-k8s-master02 1/1 Running 0 18m
kube-apiserver-k8s-master01 1/1 Running 1 23h
kube-apiserver-k8s-master02 1/1 Running 0 18m
kube-controller-manager-k8s-master01 1/1 Running 2 23h
kube-controller-manager-k8s-master02 1/1 Running 0 18m
kube-proxy-br48m 1/1 Running 1 23h
kube-proxy-sp8jj 1/1 Running 0 18m
kube-scheduler-k8s-master01 1/1 Running 2 23h
kube-scheduler-k8s-master02 1/1 Running 0 18m
Then go back to master03 and join it to the cluster as well:
kubeadm join 192.168.66.100:6444 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:43ccc6fbddb8f78383fb6985f54f1d455e0204f0ad6da6c01b3180960a2f42c5 --control-plane --certificate-key 88d01b6f88d193d2fabc9c65de42b82229910553365afa6f3c15ea02a260a96c
Then return to master01 and fill in all 3 masters as server rancher entries in haproxy.cfg:
[root@k8s-master01 ~]# cat /data/lb/etc/haproxy.cfg
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
maxconn 4096
#chroot /usr/share/haproxy
#user haproxy
#group haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
timeout connect 5000
timeout client 50000
timeout server 50000
frontend stats-front
bind *:8081
mode http
default_backend stats-back
frontend fe_k8s_6444
bind *:6444
mode tcp
timeout client 1h
log global
option tcplog
default_backend be_k8s_6443
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr_beg(Host) -i ws
backend stats-back
mode http
balance roundrobin
stats uri /haproxy/stats
stats auth pxcstats:secret
backend be_k8s_6443
mode tcp
timeout queue 1h
timeout server 1h
timeout connect 1h
log global
balance roundrobin
server rancher01 192.168.66.10:6443
server rancher02 192.168.66.20:6443
server rancher03 192.168.66.21:6443
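One thing worth noting about the backend above: as written, the server lines carry no health checks, so HAProxy will keep routing connections to a dead apiserver until TCP times out. A variant with checks enabled might look like this (the check/inter/rise/fall values are illustrative additions, not part of the original setup):

```
backend be_k8s_6443
mode tcp
timeout queue 1h
timeout server 1h
timeout connect 1h
log global
balance roundrobin
server rancher01 192.168.66.10:6443 check inter 2000 rise 2 fall 3
server rancher02 192.168.66.20:6443 check inter 2000 rise 2 fall 3
server rancher03 192.168.66.21:6443 check inter 2000 rise 2 fall 3
```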
Then rebuild the HAProxy container on master01:
docker rm -f HAProxy-K8S && bash /data/lb/start-haproxy.sh
Also sync this updated HAProxy configuration from master01 to master02 and master03:
[root@k8s-master01 ~]# scp /data/lb/etc/haproxy.cfg root@k8s-master02:/data/lb/etc/
root@k8s-master02's password:
haproxy.cfg 100% 972 700.4KB/s 00:00
[root@k8s-master01 ~]# scp /data/lb/etc/haproxy.cfg root@k8s-master03:/data/lb/etc/
root@k8s-master03's password:
haproxy.cfg 100% 972 861.3KB/s 00:00
Then set up kubectl access on master03 as well:
[root@k8s-master03 lb]# mkdir -p $HOME/.kube
[root@k8s-master03 lb]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master03 lb]# chown $(id -u):$(id -g) $HOME/.kube/config
Then on master02 and master03, rebuild HAProxy too:
docker rm -f HAProxy-K8S && bash /data/lb/start-haproxy.sh
Back on master01, there are now three master nodes:
[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady master 23h v1.15.1
k8s-master02 NotReady master 30m v1.15.1
k8s-master03 NotReady master 10m v1.15.1
Then, on master01, install the same flannel we used for the earlier single-master cluster:
[root@k8s-master01 ~]# kubectl apply -f /root/kube-flannel2.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
Then wait until the flannel pods on master01 are all Running:
[root@k8s-master01 ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-76v7g 0/1 Error 1 23h
coredns-5c98db65d4-9nkx9 0/1 Error 1 23h
etcd-k8s-master01 1/1 Running 1 23h
etcd-k8s-master02 1/1 Running 0 34m
etcd-k8s-master03 1/1 Running 0 13m
kube-apiserver-k8s-master01 1/1 Running 1 23h
kube-apiserver-k8s-master02 1/1 Running 0 34m
kube-apiserver-k8s-master03 1/1 Running 0 13m
kube-controller-manager-k8s-master01 1/1 Running 2 23h
kube-controller-manager-k8s-master02 1/1 Running 0 34m
kube-controller-manager-k8s-master03 1/1 Running 0 13m
kube-flannel-ds-amd64-cn98g 1/1 Running 0 79s
kube-flannel-ds-amd64-m8ljt 1/1 Running 0 79s
kube-flannel-ds-amd64-snkdh 1/1 Running 0 79s
kube-proxy-br48m 1/1 Running 1 23h
kube-proxy-nsltp 1/1 Running 0 13m
kube-proxy-sp8jj 1/1 Running 0 34m
kube-scheduler-k8s-master01 1/1 Running 2 23h
kube-scheduler-k8s-master02 1/1 Running 0 34m
kube-scheduler-k8s-master03 1/1 Running 0 13m
Now, on master01, all three nodes show as Ready:
[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 23h v1.15.1
k8s-master02 Ready master 35m v1.15.1
k8s-master03 Ready master 14m v1.15.1
Possibly needed: run this on master01, master02, and master03:
rm -rf /etc/cni/net.d/*calico*
Then run this cleanup script on master01:
[root@k8s-master01 ~]# cat uninstall_calico.sh
#!/bin/bash
kubectl --namespace=kube-system delete ds calico-node
kubectl --namespace=kube-system delete deploy calico-policy-controller
kubectl --namespace=kube-system delete sa calico-node
kubectl --namespace=kube-system delete sa calico-policy-controller
kubectl --namespace=kube-system delete cm calico-config
kubectl --namespace=kube-system delete secret calico-etcd-secrets
rm -rf /etc/cni/net.d/* && rm -rf /var/lib/cni/calico
systemctl restart kubelet
ifconfig tunl0 down
kubectl delete -f /root/kube-flannel2.yml
After the flannel pods have all disappeared from the pod list, re-apply:
kubectl apply -f /root/kube-flannel2.yml
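The "wait until no flannel pods are listed" step can be scripted rather than eyeballed. A sketch of just the filtering logic as a small helper; the comment shows how it would be used in a polling loop on a real node, and the sample pod line is taken from this cluster's output:

```shell
# Succeeds (exit 0) while any kube-flannel pod is still listed on stdin.
flannel_pods_remaining() {
  grep -q '^kube-flannel'
}
# On a real node you would poll:
#   while kubectl get pod -n kube-system | flannel_pods_remaining; do sleep 5; done
# Demonstration against a captured pod listing:
printf 'kube-flannel-ds-amd64-cn98g 1/1 Terminating 0 79s\n' | flannel_pods_remaining && echo "still draining"
```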
We'll set the coredns errors aside for now and continue.
On master01, execute:
[root@k8s-master01 ~]# shutdown -h now
After that, kubectl get node becomes very slow on the other masters (the original text said master02, but note the prompts below show the transcript was captured on master03):
[root@k8s-master03 lb]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 24h v1.15.1
k8s-master02 Ready master 67m v1.15.1
k8s-master03 Ready master 47m v1.15.1
[root@k8s-master03 lb]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 24h v1.15.1
k8s-master02 Ready master 68m v1.15.1
k8s-master03 Ready master 47m v1.15.1
(several further invocations hang the same way before returning identical output)
Then all that needs to change is this:
[root@k8s-master02 lb]# cat $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJek1ESXdOekUwTkRjMU1Gb1hEVE16TURJd05ERTBORGMxTUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTHBYCnMrYUZQdHJwQ1VWMG5OZmdWYVkzM21VU1o3QzE5ZXhyK0w4Q1FVVXkrN1BXak1WbUM1U1VpRm1DQng3Sk8vS1cKT0NFZ2hmUGN3YU1EK1V3WDVBN0dnQVdnZC8wMXZvdy9JRUw3N1Q1UkhROWhFcnpyR29FOE1Pd1owcjdrUlpXaworbE4vM0NVak1UcmdVSlVQRzBOU1Z3cVpKYVBUNmg3b2NBS29FeXJJeGdiZ2x0Kzh4UFlmekJWM2p1bXZ5MlZ2CjV4d2VyR2UxZTAxZUVsSEdnOGFBNXRWKzNuNW95WkM2bGVEYkFSdkZmM1dVeC9XK3o2eTZpRURmTytEbkxDTmUKK3QwUjFEOFowN3ptalppWkJhVEpXbVpzRlE5TjQ1Y3d4V0UybVFOMDZQWC8xbHh0R1VraTNQVVN3M2pVOXlXNgo1WUs5TU1qQ1pVWkd1OHVqQUhzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGTm9RQlZJRjVkM2FhNE1OOForKzJHS0xjamIKNlpoKzZJaXJyaHg3dklZVzdlcFB6SklQVHVRL3MxVzZnY09oMUJxZUMvR2lrenZiektxU2MrVXA1QVo3SnVRTgo1ai9lU2pVd3VwaUxkaXpESC9Qb0owYlNiQzZTNkdJRGNxQVJUaU9mOVpwMldxNThsakN3eWhOR0krNlN2dWsrCnhNcWFWd2ViSVFxUFJsNlJRU293ZTNHdzE2bmhmZFhodGZXOTdvbUpSU3BWYldEaVlXM21qZXJDZ01VYWVnWVgKTkdwWmtuaHhVZFAwY2wyN2pEWC82WDVTeStnN2tFN3ZxQmhSUnFmOUtSMWpOZVp0czZ2UGF4YzBDc0h5N2R2MAp3U1dkQWp5NVNRM1lzODJCL1VWNFRXVDBYcEdjdHpxUWpSNEdleVY2bmVlN2RLL3hhWktzeVFMNXFkOD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.66.20:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJS0FzbmxURHBLNjB3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBeU1EY3hORFEzTlRCYUZ3MHlOREF5TURneE5EQTFOVE5hTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXdSYkZJYUJFeFd5eS8vdngKRnlHTkNqNmhIYVEvakx0WmZVWG5CZVBLZGZydFhtVTVIV3BIdTNDV1N2dDIyaHNoN0FHZWVZV2duNHFUZGpyKwpCd2N3dHpUUE5FMHN4NHFXQnlwWUpLNXovbEY1T0JMSHBaTXFCQVNianFmcHp6K0tGUzdtMmt6OUw4NndFQk9rCkNNN2wyUmFaQ250NDlrMjdpMXBkM1kzSEdpdjR4SE8wcjE2dkhjbndYWG1PUTlKR01jQ21CenZLQXNEK2g2bWEKeHIzYTFkRDQ5dzgwczhGaHFUQnQ1Zm5RRU9SSEFLdUZzcDZxMkV2S2pkdndIZDNDaFdJTzJkQWlWV3dpMmpvZQpIbThDVUVnYUNld0R5cThBQmtISFBBVEZ3OXVtekd6MFlrb0xVeXBlZFdrV0tzYkp4aXpnd1p2MkhwQmpHZ1lJCnJGNGdOd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFCSUxNbHRuT3RiRkkyZmFOUVd1QTlCdjJ6QWIvcitLUzl3Vgp1SUR5eDVEUGVBb1MxemllWTdkcFlpUFNxTzV6MTB2NTJzc0x1Sm5PVkdKdVdrdkVua0lLQXMwRzFXdi9Bdi9sCjdaamRYeDV6WVJDYlp5MmtjRUsvM1hkM3RuV0JQME5STUZVT3RUcTBxUWhuWGZHNERUTW1ySUZyQm8wcnJpOXEKWit6Z093VWxMN3ZVcHgzVmFHbUxWaE1zM21WNDc2emlsdXNSZ3pKV2xLSVRtejVuQlc4R2hhczhXSFZZVFI3Nwp2dVFnMzNwOUViTXQrcmE0dFh1NDNCSUdSTDN0UHNZZzFGclUwRGlCZGhLZHA0RDdoUlMxOTRkUVhVem9XMkNCClNheE1xYjRNalhOZWlhVlNFWERORyt5MjYrcWdBUXh1aktWYm53b3ptbGN4ZHJWc1ZvYz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBd1JiRklhQkV4V3l5Ly92eEZ5R05DajZoSGFRL2pMdFpmVVhuQmVQS2RmcnRYbVU1CkhXcEh1M0NXU3Z0MjJoc2g3QUdlZVlXZ240cVRkanIrQndjd3R6VFBORTBzeDRxV0J5cFlKSzV6L2xGNU9CTEgKcFpNcUJBU2JqcWZwenorS0ZTN20ya3o5TDg2d0VCT2tDTTdsMlJhWkNudDQ5azI3aTFwZDNZM0hHaXY0eEhPMApyMTZ2SGNud1hYbU9ROUpHTWNDbUJ6dktBc0QraDZtYXhyM2ExZEQ0OXc4MHM4RmhxVEJ0NWZuUUVPUkhBS3VGCnNwNnEyRXZLamR2d0hkM0NoV0lPMmRBaVZXd2kyam9lSG04Q1VFZ2FDZXdEeXE4QUJrSEhQQVRGdzl1bXpHejAKWWtvTFV5cGVkV2tXS3NiSnhpemd3WnYySHBCakdnWUlyRjRnTndJREFRQUJBb0lCQUZCZStQa1JLKzc5V3RpZwpkdTdJNFZzbFRJejVCQmJCR1BQQzkvR0VxbzVIUHl4dWQ4S2RyWFFBM2g0aDQ0dlBoV1FtSEYrNjFtdnlFNFUvCjh1TGNCRlFONEQyRjdpQzB0OVFOdFJpM29NSjRDVHZrM1VNM0tXTDR4QU15TTJrM3FuTTh6WXlLUlV6ei9HY0UKdGQrUXR5MlFjVzFpamF3QUdSTmRMdnI4ZTZhUmN1VW51a0tEOVM2ZngwaW8wUzZKUkhVWmFWRWdxREY3YU9vWgpVUUNaU2Q1VDVKNmlFUTQxd1VSSUN4M1JLb3lZbjJTbkRkRkZwZjNNS3JKbWw2TDVaaHlaNGt6RmVyYWM1cXYvCkpsUTg3RXdBcHFMK2pXc0VvV292bCtQcmQ0ZVZ0T04zbkc1V1BQcklVQUNsSW5ac1dTM0ZHcjhxOWZEcmg5NnYKSC9HUnVERUNnWUVBOXVOQUlaNDhyNnJFNTFEN25tdndQWWQ0QXROT1hYbFBab1NsR0ZwckxFUVJCMnNzc2VKUQp1Q3VLS3MvMXpXTXpaRUh4TFM0eXdDejhtR3o5dGZRSmRNaEJ4NW5nWFBiK0JMV1lzSVg2blFkVlBFK1VyalFnCndTM1RoRVYrUm5CZHBsWndVdUVIZ2VnWlhEV1Qxc3pqN055Wnl6RHV3UllKWGxzbm9YU09LdzhDZ1lFQXlEY3gKL0E3N1VWZXFDTjNMTHFqdGs2eW94THp1bjBjU0lYL2JpU2NTYS9yLzBLT3YxbXdQbnkxSmVVT2FXeklGTWowVwpDL2xpdkhTbkNpQlVsL21hSEFuVnlQYzFiK0FJcUw1czhtYkZUY2x1dDRSN1Q0ZWRkQ2F5aXVQek9Oa1dmSG04ClozMkprSzZBTXM0aEJzTjNRNDhkVjJLeCs1eFIwZG56MTBXY1dGa0NnWUFLSEd4MzgvOFFRcklsdHc5WEFaeXAKS0c4bHpubWJJbWk2RGh5a3pxOHM5T3l0blJvTGZ2VkhWYUVtOTdWZFgvNnUwSFNNSVNRNjhweTFzV0VDbnFmMApmRzhWT1p3U3NwcmNub05PVjI1WUdBREpvNGkzU2JNOXRoNi9nQWtYNFdvMGNiM1A1eDlqbHBuVFNPNXhFWnNVCkRFVFFLWVRkcTRWZXMrVC9tOEpteVFLQmdGMldQMDBkQzZpb0c1anRZODQ1dEdPMDcyYVhFY3R1QXpHWmZGc04KNG5TSzdRenZsbi9hSHlzK2xmdVMrQkhzdmJVUURNQW9JRmtMQmhHYnJ5OGl3MENiOEV4eUVZNXI0R0JRTXNqVAo5U0k1S0FHc2NaOXBPdFpTU0Y3WDBwY2VFbjY0d0xKM1lkZzVXVDltVHRYRWhIa1Y2cGN3VVJYVnFnRTNxZDFVCmNwSnhBb0dBUW5TQklOdGZiRVljdVcrUnd
mNzRhSlBXL0M1eXgvSDQ0Y3d5aEtYM0thaDdHbjZpZlhoaFRFNjkKZnJ5UTZWTUNmT1BBem5pNEgyV1dGbzY3UTNhSENSZ3pLTmFIOWZ4VC8xTVROaW5MU1lOaDRJdmdLamRvK1lodAppK0VUQVJsdWJwcUpMdk1qSXZNaFZodzQ1WTRkb0twRnhSQTdvbUdPamdMMFZGQnpjLzA9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
That is, change the server line to the node's own IP address, using the local apiserver port (6443) rather than the load balancer's 6444:
https://192.168.66.20:6443
After that, repeated kubectl get node calls on master02 return quickly again.
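The edit itself is a one-line sed. Here is a minimal runnable sketch against a throwaway file; on a real node you would target $HOME/.kube/config and substitute that node's own IP (e.g. 192.168.66.20 on master02):

```shell
# Create a cut-down kubeconfig stand-in, then rewrite its apiserver
# endpoint from the VIP on 6444 to the node's local apiserver on 6443.
cat > /tmp/kubeconfig.demo <<'EOF'
clusters:
- cluster:
    server: https://192.168.66.100:6444
  name: kubernetes
EOF
sed -i 's#https://192.168.66.100:6444#https://192.168.66.20:6443#' /tmp/kubeconfig.demo
grep 'server:' /tmp/kubeconfig.demo
```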
master03 gets the same treatment:
[root@k8s-master03 lb]# cat $HOME/.kube/config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJek1ESXdOekUwTkRjMU1Gb1hEVE16TURJd05ERTBORGMxTUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTHBYCnMrYUZQdHJwQ1VWMG5OZmdWYVkzM21VU1o3QzE5ZXhyK0w4Q1FVVXkrN1BXak1WbUM1U1VpRm1DQng3Sk8vS1cKT0NFZ2hmUGN3YU1EK1V3WDVBN0dnQVdnZC8wMXZvdy9JRUw3N1Q1UkhROWhFcnpyR29FOE1Pd1owcjdrUlpXaworbE4vM0NVak1UcmdVSlVQRzBOU1Z3cVpKYVBUNmg3b2NBS29FeXJJeGdiZ2x0Kzh4UFlmekJWM2p1bXZ5MlZ2CjV4d2VyR2UxZTAxZUVsSEdnOGFBNXRWKzNuNW95WkM2bGVEYkFSdkZmM1dVeC9XK3o2eTZpRURmTytEbkxDTmUKK3QwUjFEOFowN3ptalppWkJhVEpXbVpzRlE5TjQ1Y3d4V0UybVFOMDZQWC8xbHh0R1VraTNQVVN3M2pVOXlXNgo1WUs5TU1qQ1pVWkd1OHVqQUhzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGTm9RQlZJRjVkM2FhNE1OOForKzJHS0xjamIKNlpoKzZJaXJyaHg3dklZVzdlcFB6SklQVHVRL3MxVzZnY09oMUJxZUMvR2lrenZiektxU2MrVXA1QVo3SnVRTgo1ai9lU2pVd3VwaUxkaXpESC9Qb0owYlNiQzZTNkdJRGNxQVJUaU9mOVpwMldxNThsakN3eWhOR0krNlN2dWsrCnhNcWFWd2ViSVFxUFJsNlJRU293ZTNHdzE2bmhmZFhodGZXOTdvbUpSU3BWYldEaVlXM21qZXJDZ01VYWVnWVgKTkdwWmtuaHhVZFAwY2wyN2pEWC82WDVTeStnN2tFN3ZxQmhSUnFmOUtSMWpOZVp0czZ2UGF4YzBDc0h5N2R2MAp3U1dkQWp5NVNRM1lzODJCL1VWNFRXVDBYcEdjdHpxUWpSNEdleVY2bmVlN2RLL3hhWktzeVFMNXFkOD0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.66.21:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJYVRTMmRvYnlGaEl3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBeU1EY3hORFEzTlRCYUZ3MHlOREF5TURneE5ESTJNemxhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXRRd2IyemxmZUtlSXFONWYKbE9aY0ErclpYZmthYVd6M0FyeGdJRVRHRFFEc1ljZTU3NVNOR2NxLzU5WVZ2QUFkaDQ3N1I2YzVNZ3JiYkh6WgpFZEF5TnlDOHdmVXowUlFTQnlKUVhYbVRFUGdwNDEvN0xUeUFHYm80TTE3ZVk2bHZ2VUM5SEdnU1hQS2x6Zjc3CldLcmtqaVRTaTFVVjVOMjA2aitOSFV5TkpjMndKNVY0THZ0S2Voc2tRemxrRkgyK3RheUVramZlQURibVQrVGEKVXNXd25RYTdCVFBTMFpyaHJDNzNIbk9Qckx4UVRSSkgyZmk3b3VSQUdZYkVzSklkVVN5SnVvY1QwaUlRcHE5Ywo2enI5R1VacytEZitzY21Sc241Mi9vSUxkM1ZtWUMwbE1XQVRwMk15Z1laQXMrSHRpNXRMczBZSkxMa3RsWDVoClFJYk4vUUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFEZ0dTb1YvZ2dueXFBemtWaXp0ZWs2dUo1My9uUjV3S2lZWApuNmZLenpYMnZ1Wmg0TkVVeDhNbWtlcVhZTXNHU1p2aFFXaHkxSXZMcDdlck9PcytBanF0WmdLQ1JiMnAxdm1ZCnkwQjFDaGYrS0cxN3phcFlBd2FnYW52WnI4ZG5YbEZoZC95bHgyK2N6aFRjVkFuK1VVeTY5c2tlZFRuYXFkVkUKZFgzN2RWR1BKdGswUW1zV29yRHZuR0FycXJqS3p6L055SnlEZFpKSVdOYjAzR1NwYnBiR2Z0K1NWWFhseEVwagoxNGlIRXo4b0FyaUNlSllrQ2d4Yjd0OFhsaHh2OG0vaGpURVo1T21QTHovUlNybGpwZ3psekpNaVh0cUpPTjY0CnIvRUUrNk9ORWI0T1JsYXJYdngxRHpTdjE1SW1xZkZrbEV4RUM3eElqWTMyUUszcVI3ND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdFF3YjJ6bGZlS2VJcU41ZmxPWmNBK3JaWGZrYWFXejNBcnhnSUVUR0RRRHNZY2U1Cjc1U05HY3EvNTlZVnZBQWRoNDc3UjZjNU1ncmJiSHpaRWRBeU55Qzh3ZlV6MFJRU0J5SlFYWG1URVBncDQxLzcKTFR5QUdibzRNMTdlWTZsdnZVQzlIR2dTWFBLbHpmNzdXS3JramlUU2kxVVY1TjIwNmorTkhVeU5KYzJ3SjVWNApMdnRLZWhza1F6bGtGSDIrdGF5RWtqZmVBRGJtVCtUYVVzV3duUWE3QlRQUzBacmhyQzczSG5PUHJMeFFUUkpICjJmaTdvdVJBR1liRXNKSWRVU3lKdW9jVDBpSVFwcTljNnpyOUdVWnMrRGYrc2NtUnNuNTIvb0lMZDNWbVlDMGwKTVdBVHAyTXlnWVpBcytIdGk1dExzMFlKTExrdGxYNWhRSWJOL1FJREFRQUJBb0lCQUV2WnU5MzN4b0RsSjhNZwpEMGx4ellFeXBralJzWGdUMTlVRW1QMUw4dkhGdmtNbEMwaE5vMlAzNXJpNW43ZDVFT1lYU0QxMzJPV1hXT0MxCjJiZTEweVAzaWoxMGZuWU5BNVNMa3NIbXltK2ttT0FTK1VlMWZqSEpLL3lSdFhocHAwL1J6S2tYRFFKMkFuTXcKYlp0elZYZ2NBejJ1c3hLRXRHUWpwZnB0ZFFFdDIvMEg4bTNhVU40MTJHeWw2SUh6ZkZXeU5RWTAxS0lJSDlqcwo3aUxVcjQ5bmhRREN6UU1pTk52UVZ4QXppdzdNRnlJTmN0UGFXNStRa1JpbGtkZnhtS0ZESHRwTkhlQkJyT2t4CmJoSldpVzRuV205SUpKb2Z3QlpqbkdOWml2Rm1HaDlSb2FWSXYvOExOUUxBZld3SDduUVNLME1EdU15K1I1MHUKYVAyUHNZRUNnWUVBeTEvVkVnQWV0dmtObWNjdUtVRkZuMldjNnB6SE01LzZ2a2ZvSktXcm5xQ3huajlvUXJ5UgpmTHUxb25XUjJWTlFDYUVIR0NJVzBWNktZbjRvamczZnZGT1hlOU81Q1JVWHh6bWQrdSt3bnMrWlNnaDAxNGsrCnd6cmsra0h6SnVqWWZEQ0d4SVR6SkM1eU5manZ5WWJIVE1HZWRsazZkTko5RXVIOHJyazRScUVDZ1lFQTQrVkMKOXFvZEZIYldPaDZHUEVwS3VHbysvd2hsZjAxRjNRcGJnTHF5aVYxM09WR3dlR2owL29Kak5POStWcFV0VGYxagpKYUwzdnlleWw4Vkg4YVVaSTVZUVdYVG9QV1AwNmxnd1pBdVhGb1JlN3ZFZVFYQjJNd0RGcnZ0d0NiMk1RZ0xjCmZoY3JKUjlzUTNXR2g1NUgzdy9RN05ZYks1L3JUVk1ObEN2Q3RkMENnWUJnVkZvV0ZweDF5bTNZd3ZGb2RSUkgKTmRnbmdHOFNVdHB2dXB1SWtEaEVBSlZoQVdPZkNMWll3SWgrRlBZcVhEM3k4YVRzbDJqN2JxNVpqS3drN1FsbQpxS2w5NjRFZmZqQXZHMmxxN0pGYUI3Ynh6Q09iMjlRd29QcklWdWlYSzM4dkE4VXgzRTlXZWZGN0F4aUEraWY0CmdWVlBkV0FzNlc1NHZUWDBoS0xWUVFLQmdRQ2NBdm41cFBGdGJnRXdIbTlrM0xNVVZsK3o5Y3FPQUpkZ1A5UHUKWjJFTDJybGd1d1NsR2EwR2dycHBwYjZHaFc5VFliQzdOanFHV1NYUThwUlMzK1E2MFdOMTZpdUd3MlFKL2I5Ngo3ZGhMNk9pWWlPWmVoQi9Xd0tPVUs3dENYOG1oOHhXQkdGbEgrNkFBK25iVFpzN3E3SWZwYXBXRkl1QlJ1aGFrCnBlU1EzUUtCZ0RQMitlQjF2c0I0VmkwRGN
VM1llcWtYaDdua1dEVzVTMkFrYzNEbFdiQ2pyMDNoNUxrM1lQc3EKUzBaY0x3bmp0djJQMkZERzlWVEtUUExwcEROMG5scXVWYitjem0wdEZLZmFobFEvNnd0YU9pcEFtNlhrZHhXRApjSnhscWdEaHBNbEN4Q0sycmNnN2g5b2VlWjNINXZPSTdQYStuK3pIcThwaWoxYnpZUEUzCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
The port stays the same as the one used in the master02 fix (6443), but with master03's own IP:
server: https://192.168.66.21:6443
However, master03 now has this problem:
[root@k8s-master03 lb]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady master 24h v1.15.1
k8s-master02 Ready master 76m v1.15.1
k8s-master03 NotReady master 55m v1.15.1
You can see master03 is NotReady. Possibly, while editing $HOME/.kube/config, I got the order mixed up: the IP and port change meant for master02 should have been made on master02 first, but I ended up applying master02's values on master03...
Then start the master01 node back up.
Finally:
Thinking again about the coredns problem above, it may have been because the VIP server in my cluster was not running (the earlier harbor machine, 192.168.66.100...).
Then again, maybe not. Never mind; that's it for today.