# kubectl get node
NAME      STATUS   ROLES           AGE     VERSION
mater     Ready    control-plane   36m     v1.24.1
node1     Ready    <none>          34m     v1.24.1
node2     Ready    <none>          34m     v1.24.1
wulaoer   Ready    <none>          8m15s   v1.24.1
To verify that the cluster is unchanged after the upgrade, I first deploy a MySQL instance in the cluster and create a database in it. Once the upgrade is done, I will shut down the current master and check whether MySQL is still usable: if it is, the upgrade worked; if not, it failed.
MySQL is already installed here, so I will not go over the installation steps; there are plenty of guides online, and any other workload can be used instead.
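The exact manifest used here is not shown; for reference, a minimal Deployment along these lines would do (the image tag matches the server version seen below, and the root password is a placeholder that should normally come from a Secret):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: wulaoer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:8.0.19           # same server version as in the session below
        env:
        - name: MYSQL_ROOT_PASSWORD   # placeholder; use a Secret in practice
          value: "changeme"
        ports:
        - containerPort: 3306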
[root@Mater mysql]# kubectl get pod -n wulaoer -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP              NODE      NOMINATED NODE   READINESS GATES
mysql-6555b554cf-vjgsn   1/1     Running   0          2m6s   10.244.76.136   wulaoer   <none>           <none>
[root@Mater mysql]# kubectl exec -it -n wulaoer mysql-6555b554cf-vjgsn /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@mysql-6555b554cf-vjgsn:/# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 15
Server version: 8.0.19 MySQL Community Server - GPL

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

mysql> create database wulaoer
    -> ;
Query OK, 1 row affected (0.01 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| wulaoer            |
+--------------------+
5 rows in set (0.00 sec)
To avoid conflicts with the old files, I create a separate directory for the upgrade and keep everything used by it there. First export kubeadm-config, the configuration the cluster was originally initialized with; we will define the virtual IP (VIP) in this file and then re-run the relevant initialization phase so that the API server address can be replaced.
[root@Mater k8s]# mkdir update
[root@Mater k8s]# cd update/
[root@Mater update]# ls
[root@Mater update]# kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yaml
[root@Mater update]# vim kubeadm.yaml
apiServer:
  certSANs:
  - 10.211.55.245
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.24.1
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
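For clarity, the only change assumed to be made to the exported kubeadm.yaml at this stage is the certSANs entry for the VIP under apiServer; everything else is left exactly as exported:

apiServer:
  certSANs:
  - 10.211.55.245   # the address that kube-vip will advertise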
Here I pick an unused IP address as the future VIP. The existing kubeadm apiserver certificate and key have to be moved out of the pki directory first: if they are still there, the update will not create new ones but reuse the old files, and the VIP would never take effect. Since I am using kube-vip as the VIP and load balancer, I generate the kube-vip static pod first and only then regenerate the certificate.
[root@Mater update]# mv /etc/kubernetes/pki/apiserver.{crt,key} ~
[root@Mater update]# export VIP=10.211.55.245
[root@Mater update]# export INTERFACE=enp0s5
[root@Mater update]# ctr image pull docker.io/plndr/kube-vip:v0.6.2
docker.io/plndr/kube-vip:v0.6.2: resolved       |++++++++++++++++++++++++++++++++++++++|
index-sha256:d54f230f5e9cba46623eeb2e115c20e200221971e59fd6895601c93dce1fcded:    done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:4cac60a3ec0568a710e70dadce98802f27fcb1c3badce2f5c35bee773395fa54: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:a0e0a2f01850700b12f89dc705a998cb82446e9a9374685c2978c944a8b301b5:    done |++++++++++++++++++++++++++++++++++++++|
config-sha256:404ca3549f735c07e30e133518a04ebb8485c3c874dea7da0282d6375678c166:   done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:311a626d745f1811014fe4a3613da646d60ef96f2291a28efb5accf3d2b8dd2f:    done |++++++++++++++++++++++++++++++++++++++|
elapsed: 12.1s    total: 11.1 M (941.3 KiB/s)
unpacking linux/amd64 sha256:d54f230f5e9cba46623eeb2e115c20e200221971e59fd6895601c93dce1fcded...
done: 330.795221ms
[root@Mater update]# ctr run --rm --net-host docker.io/plndr/kube-vip:v0.6.2 vip \
> /kube-vip manifest pod \
> --interface $INTERFACE \
> --vip $VIP \
> --controlplane \
> --services \
> --arp \
> --leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "6443"
    - name: vip_interface
      value: enp0s5
    - name: vip_cidr
      value: "32"
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_ddns
      value: "false"
    - name: svc_enable
      value: "true"
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: vip_address
      value: 10.211.55.245
    - name: prometheus_server
      value: :2112
    image: ghcr.io/kube-vip/kube-vip:v0.6.2
    imagePullPolicy: Always
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/admin.conf
    name: kubeconfig
status: {}
After this, you will find a pod named kube-vip-mater created under kube-system; this is the kube-vip container, and the IP address we defined now appears on the specified network interface.
[root@Mater update]# kubeadm init phase certs apiserver --config kubeadm.yaml
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local mater] and IPs [10.96.0.1 10.211.55.11 10.211.55.245]
[root@Mater update]# kubectl get pod -n kube-system
NAME                                       READY   STATUS        RESTARTS      AGE
calico-kube-controllers-7fc4577899-85xk7   1/1     Running       0             17m
calico-kube-controllers-7fc4577899-rfdx4   1/1     Terminating   0             38h
calico-node-h5sfl                          1/1     Running       0             38h
calico-node-kgfh8                          1/1     Running       1 (24m ago)   38h
calico-node-q8x6r                          1/1     Running       1 (23m ago)   38h
calico-node-zkhjr                          1/1     Running       0             38h
coredns-74586cf9b6-vk9rd                   1/1     Running       1 (24m ago)   38h
coredns-74586cf9b6-wlpzw                   1/1     Running       1 (24m ago)   38h
etcd-mater                                 1/1     Running       1 (24m ago)   38h
kube-apiserver-mater                       1/1     Running       1 (24m ago)   38h
kube-controller-manager-mater              1/1     Running       1 (24m ago)   38h
kube-proxy-7l5q8                           1/1     Running       1 (23m ago)   38h
kube-proxy-g6hfx                           1/1     Running       1 (24m ago)   38h
kube-proxy-jtsg6                           1/1     Running       0             38h
kube-proxy-tn7sw                           1/1     Running       0             38h
kube-scheduler-mater                       1/1     Running       1 (24m ago)   38h
kube-vip-mater                             1/1     Running       0             41s
[root@Mater update]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:1c:42:03:c5:a9 brd ff:ff:ff:ff:ff:ff
    inet 10.211.55.11/24 brd 10.211.55.255 scope global dynamic noprefixroute enp0s5
       valid_lft 1143sec preferred_lft 1143sec
    inet 10.211.55.245/32 scope global enp0s5
       valid_lft forever preferred_lft forever
    inet6 fdb2:2c26:f4e4:0:21c:42ff:fe03:c5a9/64 scope global dynamic noprefixroute
       valid_lft 2591613sec preferred_lft 604413sec
    inet6 fe80::21c:42ff:fe03:c5a9/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
...............................................
This has to wait until the kube-vip-mater pod has come up successfully; at that point our load balancer is in place. Next, restart the apiserver so that it is published externally with the kube-vip IP address, then verify: if the certificate's Subject Alternative Name entries include the kube-vip IP address, the change was successful.
[root@Mater update]# kubectl delete pod -n kube-system kube-apiserver-mater
pod "kube-apiserver-mater" deleted
[root@Mater update]# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text
Certificate:
    Data:
...............................
            X509v3 Subject Alternative Name:
                DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:mater, IP Address:10.96.0.1, IP Address:10.211.55.11, IP Address:10.211.55.245
...............................
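If only the SAN list is of interest, the same check can be filtered down to a couple of lines; this variant is an optional extra and not part of the original transcript:

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"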
Next, configure the kubeadm-config ConfigMap. The changes above are already reflected in the apiserver, but the ConfigMap has not been updated yet.
[root@Mater update]# kubectl -n kube-system edit configmap kubeadm-config
# Please edit the object below. Lines beginning with a '#' will be ignored,
Edit cancelled, no changes made.
# The first edit attempt errored out with no changes saved; we can also modify the file directly as below.
[root@Mater update]# kubectl -n kube-system edit configmap kubeadm-config
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      certSANs:                                  # added
      - 10.211.55.245
      extraArgs:
        authorization-mode: Node,RBAC
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns: {}
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.aliyuncs.com/google_containers
    kind: ClusterConfiguration
    controlPlaneEndpoint: 10.211.55.245:6443     # added
    kubernetesVersion: v1.24.1
........................................
After adding these fields, verify that they were applied successfully:
[root@Mater update]# kubectl -n kube-system get configmap kubeadm-config -o yaml
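The full output is long; as an optional extra check (not shown above), a filtered query confirms just the two new fields, and should print the certSANs key and controlPlaneEndpoint: 10.211.55.245:6443:

kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' | grep -E "certSANs|controlPlaneEndpoint"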
Next, the control-plane components on the master node need to be updated as well, followed by a restart.
[root@Mater update]# sed -i "s/10.211.55.11/10.211.55.245/g" /etc/kubernetes/kubelet.conf
[root@Mater update]# systemctl restart kubelet
[root@Mater update]# sed -i "s/10.211.55.11/10.211.55.245/g" /etc/kubernetes/controller-manager.conf
[root@Mater update]# kubectl delete pod -n kube-system kube-controller-manager-mater --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "kube-controller-manager-mater" force deleted
[root@Mater update]# sed -i "s/10.211.55.11/10.211.55.245/g" /etc/kubernetes/scheduler.conf
[root@Mater update]# kubectl delete pod -n kube-system kube-scheduler-mater --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "kube-scheduler-mater" force deleted
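To make sure no kubeconfig on the master still points at the old address, a quick grep (an optional extra check, not in the original walkthrough) helps; no output means every file now uses the VIP:

grep -rn "10.211.55.11" /etc/kubernetes/*.conf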
Next, modify the kube-proxy configuration, and update the local kubeconfig as well.
[root@Mater update]# kubectl -n kube-system edit cm kube-proxy
.................................
    kubeconfig.conf: |-
      apiVersion: v1
      kind: Config
      clusters:
      - cluster:
          certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          server: https://10.211.55.245:6443
.................................
[root@Mater update]# vim /root/.kube/config
.................................
    server: https://10.211.55.245:6443
.................................
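The walkthrough does not restart kube-proxy after editing its ConfigMap; if the running kube-proxy pods should pick up the new server address right away, restarting the DaemonSet is one option (an extra step, not performed above):

kubectl -n kube-system rollout restart daemonset kube-proxy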
Finally, update the cluster's externally published endpoint and verify it. With that, the single-node endpoint has been successfully switched to the load-balanced VIP.
[root@Mater update]# kubectl -n kube-public edit cm cluster-info
.................................
        server: https://10.211.55.245:6443
.................................
[root@Mater update]# kubectl cluster-info
Kubernetes control plane is running at https://10.211.55.245:6443
CoreDNS is running at https://10.211.55.245:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Next, add a master node on another machine. Because of limited resources, I will remove one of my worker nodes and rejoin it to the cluster as a master. One node will not be upgraded to a master, so it only needs its kubelet.conf updated.
[root@wulaoer ~]# sed -i "s/10.211.55.11/10.211.55.245/g" /etc/kubernetes/kubelet.conf
[root@wulaoer ~]# systemctl restart kubelet
[root@Mater update]# kubectl get node
NAME      STATUS   ROLES           AGE    VERSION
mater     Ready    control-plane   143m   v1.24.1
node1     Ready    <none>          141m   v1.24.1
node2     Ready    <none>          140m   v1.24.1
wulaoer   Ready    <none>          134m   v1.24.1
Now I remove node1 from the cluster and then rejoin it as a master node.
[root@Node1 ~]# kubeadm reset
[root@Mater update]# kubectl delete node node1
node "node1" deleted
[root@Mater update]# kubectl get node
NAME      STATUS   ROLES           AGE    VERSION
mater     Ready    control-plane   145m   v1.24.1
node2     Ready    <none>          142m   v1.24.1
wulaoer   Ready    <none>          137m   v1.24.1
Then create the join credentials on the master node and run the resulting command on node1.
[root@Mater update]# kubeadm init phase upload-certs --upload-certs
I1214 17:38:12.001210   60307 version.go:255] remote version is much newer: v1.29.0; falling back to: stable-1.24
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
3d067130fbb46c6394e94fe8f1f5e648430c0bef9edf36b5b58d51fede70f889
[root@Mater update]# kubeadm token create --print-join-command --config kubeadm.yaml
kubeadm join 10.211.55.245:6443 --token 83wdhy.pc1xm9st3voqifr9 --discovery-token-ca-cert-hash sha256:c10060236cfb34582c6374f460e9ecdd321eebf9190b1c4f9c68de5c2fec5e70
Note that the token is valid for 24 hours. To add a master node, use the combined form:
kubeadm join 10.211.55.245:6443 --token 83wdhy.pc1xm9st3voqifr9 --discovery-token-ca-cert-hash sha256:c10060236cfb34582c6374f460e9ecdd321eebf9190b1c4f9c68de5c2fec5e70 --control-plane --certificate-key 3d067130fbb46c6394e94fe8f1f5e648430c0bef9edf36b5b58d51fede70f889
that is, with the --control-plane and --certificate-key parts appended. To join a worker node, the appended part is not needed; just run:
kubeadm join 10.211.55.245:6443 --token 83wdhy.pc1xm9st3voqifr9 --discovery-token-ca-cert-hash sha256:c10060236cfb34582c6374f460e9ecdd321eebf9190b1c4f9c68de5c2fec5e70
and that is enough. Because the master I am adding needs kube-vip to achieve high availability, the kube-vip pod manifest has to be generated on it before joining the cluster.
[root@Node1 ~]# export VIP=10.211.55.245
[root@Node1 ~]# export INTERFACE=enp0s5
[root@Node1 ~]# ctr image pull docker.io/plndr/kube-vip:v0.6.2
docker.io/plndr/kube-vip:v0.6.2: resolved       |++++++++++++++++++++++++++++++++++++++|
index-sha256:d54f230f5e9cba46623eeb2e115c20e200221971e59fd6895601c93dce1fcded:    done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:4cac60a3ec0568a710e70dadce98802f27fcb1c3badce2f5c35bee773395fa54: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:a0e0a2f01850700b12f89dc705a998cb82446e9a9374685c2978c944a8b301b5:    done |++++++++++++++++++++++++++++++++++++++|
config-sha256:404ca3549f735c07e30e133518a04ebb8485c3c874dea7da0282d6375678c166:   done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:311a626d745f1811014fe4a3613da646d60ef96f2291a28efb5accf3d2b8dd2f:    done |++++++++++++++++++++++++++++++++++++++|
elapsed: 12.1s    total: 12.1 M (1.0 MiB/s)
unpacking linux/amd64 sha256:d54f230f5e9cba46623eeb2e115c20e200221971e59fd6895601c93dce1fcded...
done: 499.486146ms
[root@Node1 ~]# ctr run --rm --net-host docker.io/plndr/kube-vip:v0.6.2 vip \
> /kube-vip manifest pod \
> --interface $INTERFACE \
> --vip $VIP \
> --controlplane \
> --services \
> --arp \
> --leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "6443"
    - name: vip_interface
      value: enp0s5
    - name: vip_cidr
      value: "32"
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_ddns
      value: "false"
    - name: svc_enable
      value: "true"
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: vip_address
      value: 10.211.55.245
    - name: prometheus_server
      value: :2112
    image: ghcr.io/kube-vip/kube-vip:v0.6.2
    imagePullPolicy: Always
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/admin.conf
    name: kubeconfig
status: {}
[root@Node1 ~]# kubeadm join 10.211.55.245:6443 --token 83wdhy.pc1xm9st3voqifr9 --discovery-token-ca-cert-hash sha256:c10060236cfb34582c6374f460e9ecdd321eebf9190b1c4f9c68de5c2fec5e70 --control-plane --certificate-key 3d067130fbb46c6394e94fe8f1f5e648430c0bef9edf36b5b58d51fede70f889
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node1] and IPs [10.211.55.12 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node1] and IPs [10.211.55.12 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node1] and IPs [10.96.0.1 10.211.55.12 10.211.55.245]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
The 'update-status' phase is deprecated and will be removed in a future release. Currently it performs no operation
[mark-control-plane] Marking the node node1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
node1 has joined the cluster; let's see what was created in the cluster.
[root@Node1 ~]# mkdir -p $HOME/.kube
[root@Node1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@Node1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@Node1 ~]# kubectl get node
NAME      STATUS   ROLES           AGE    VERSION
mater     Ready    control-plane   154m   v1.24.1
node1     Ready    control-plane   61s    v1.24.1
node2     Ready    <none>          151m   v1.24.1
wulaoer   Ready    <none>          146m   v1.24.1
[root@Node1 ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS       AGE
default       nfs-client-provisioner-645dcf6f9d-8fkpb    1/1     Running   1 (132m ago)   140m
kube-system   calico-kube-controllers-7fc4577899-fdp6k   1/1     Running   0              8m56s
kube-system   calico-node-f6v6j                          1/1     Running   0              146m
kube-system   calico-node-lcqpm                          1/1     Running   0              150m
kube-system   calico-node-njlpj                          1/1     Running   0              88s
kube-system   calico-node-pch76                          1/1     Running   0              150m
kube-system   coredns-74586cf9b6-4pczv                   1/1     Running   0              8m56s
kube-system   coredns-74586cf9b6-ck6zr                   1/1     Running   0              8m56s
kube-system   etcd-mater                                 1/1     Running   1              155m
kube-system   etcd-node1                                 1/1     Running   0              77s
kube-system   kube-apiserver-mater                       1/1     Running   1              53m
kube-system   kube-apiserver-node1                       1/1     Running   1 (79s ago)    77s
kube-system   kube-controller-manager-mater              1/1     Running   1              39m
kube-system   kube-controller-manager-node1              1/1     Running   0              9s
kube-system   kube-proxy-4jb6d                           1/1     Running   0              88s
kube-system   kube-proxy-b5fs8                           1/1     Running   0              154m
kube-system   kube-proxy-w2f99                           1/1     Running   0              146m
kube-system   kube-proxy-xb88f                           1/1     Running   0              152m
kube-system   kube-scheduler-mater                       1/1     Running   1              38m
kube-system   kube-scheduler-node1                       1/1     Running   0              10s
kube-system   kube-vip-mater                             1/1     Running   1 (79s ago)    54m
kube-system   kube-vip-node1                             1/1     Running   0              26s
wulaoer       mysql-6555b554cf-s8vpd                     1/1     Running   0              15m
The cluster now contains an etcd-node1 pod, as well as kube-scheduler-node1 and kube-vip-node1; the latter is the kube-vip instance used for load balancing. Since at least three master nodes are needed for real high availability, add one more master node in the same way.
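One possible way to confirm that the new etcd member really joined the existing cluster is to list the members from inside one of the etcd pods; the certificate paths below are the kubeadm defaults and this check is an assumption, not part of the original walkthrough:

kubectl -n kube-system exec etcd-node1 -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list

Each control-plane node should show up as a started member.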
[root@Node2 ~]# mkdir -p $HOME/.kube
[root@Node2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@Node2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@Node2 ~]# kubectl get node
NAME      STATUS   ROLES           AGE    VERSION
mater     Ready    control-plane   169m   v1.24.1
node1     Ready    control-plane   15m    v1.24.1
node2     Ready    control-plane   165m   v1.24.1
wulaoer   Ready    <none>          160m   v1.24.1
[root@Node2 ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS        AGE
default       nfs-client-provisioner-645dcf6f9d-8fkpb    1/1     Running   1 (146m ago)    154m
kube-system   calico-kube-controllers-7fc4577899-fdp6k   1/1     Running   2 (2m54s ago)   22m
kube-system   calico-node-f6v6j                          1/1     Running   0               160m
kube-system   calico-node-lcqpm                          1/1     Running   2               164m
kube-system   calico-node-njlpj                          1/1     Running   0               15m
kube-system   calico-node-pch76                          1/1     Running   0               164m
kube-system   coredns-74586cf9b6-4pczv                   1/1     Running   0               22m
kube-system   coredns-74586cf9b6-ck6zr                   1/1     Running   2 (2m52s ago)   22m
kube-system   etcd-mater                                 1/1     Running   1               169m
kube-system   etcd-node1                                 1/1     Running   0               15m
kube-system   etcd-node2                                 1/1     Running   0               87s
kube-system   kube-apiserver-mater                       1/1     Running   1               67m
kube-system   kube-apiserver-node1                       1/1     Running   1 (15m ago)     15m
kube-system   kube-apiserver-node2                       1/1     Running   1               3m4s
kube-system   kube-controller-manager-mater              1/1     Running   1               53m
kube-system   kube-controller-manager-node1              1/1     Running   0               14m
kube-system   kube-controller-manager-node2              1/1     Running   1               3m14s
kube-system   kube-proxy-4jb6d                           1/1     Running   0               15m
kube-system   kube-proxy-b5fs8                           1/1     Running   0               168m
kube-system   kube-proxy-w2f99                           1/1     Running   0               160m
kube-system   kube-proxy-xb88f                           1/1     Running   2               166m
kube-system   kube-scheduler-mater                       1/1     Running   1               52m
kube-system   kube-scheduler-node1                       1/1     Running   0               14m
kube-system   kube-scheduler-node2                       1/1     Running   1               3m16s
kube-system   kube-vip-mater                             1/1     Running   1 (15m ago)     68m
kube-system   kube-vip-node1                             1/1     Running   0               14m
kube-system   kube-vip-node2                             1/1     Running   0               17s
wulaoer       mysql-6555b554cf-s8vpd                     1/1     Running   0               29m
With that, the single-node cluster has been upgraded to a multi-master cluster, and all that remains is validation. One important point: the kube-vip YAML file must be generated on each node before it joins as a master, otherwise kube-vip will not be created there and high availability cannot be achieved. Now shut down the original master and look at the cluster state.
[root@Node1 ~]# kubectl get node
NAME      STATUS     ROLES           AGE     VERSION
mater     NotReady   control-plane   3h13m   v1.24.1
node1     Ready      control-plane   39m     v1.24.1
node2     Ready      control-plane   3h10m   v1.24.1
wulaoer   Ready      <none>          3h5m    v1.24.1
[root@Node1 ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS       AGE
default       nfs-client-provisioner-645dcf6f9d-8fkpb    1/1     Running   1 (170m ago)   179m
kube-system   calico-kube-controllers-7fc4577899-fdp6k   1/1     Running   2 (27m ago)    47m
kube-system   calico-node-f6v6j                          1/1     Running   0              3h5m
kube-system   calico-node-lcqpm                          1/1     Running   2              3h9m
kube-system   calico-node-njlpj                          1/1     Running   0              39m
kube-system   calico-node-pch76                          1/1     Running   0              3h9m
kube-system   coredns-74586cf9b6-4pczv                   1/1     Running   0              47m
kube-system   coredns-74586cf9b6-ck6zr                   1/1     Running   2 (27m ago)    47m
kube-system   etcd-mater                                 1/1     Running   1              3h13m
kube-system   etcd-node1                                 1/1     Running   0              39m
kube-system   etcd-node2                                 1/1     Running   0              25m
kube-system   kube-apiserver-mater                       1/1     Running   1              91m
kube-system   kube-apiserver-node1                       1/1     Running   1 (39m ago)    39m
kube-system   kube-apiserver-node2                       1/1     Running   1              27m
kube-system   kube-controller-manager-mater              1/1     Running   1              78m
kube-system   kube-controller-manager-node1              1/1     Running   0              38m
kube-system   kube-controller-manager-node2              1/1     Running   1              27m
kube-system   kube-proxy-4jb6d                           1/1     Running   0              39m
kube-system   kube-proxy-b5fs8                           1/1     Running   0              3h13m
kube-system   kube-proxy-w2f99                           1/1     Running   0              3h5m
kube-system   kube-proxy-xb88f                           1/1     Running   2              3h10m
kube-system   kube-scheduler-mater                       1/1     Running   1              77m
kube-system   kube-scheduler-node1                       1/1     Running   0              38m
kube-system   kube-scheduler-node2                       1/1     Running   1              27m
kube-system   kube-vip-mater                             1/1     Running   1 (39m ago)    92m
kube-system   kube-vip-node1                             1/1     Running   0              38m
kube-system   kube-vip-node2                             1/1     Running   0              24m
wulaoer       mysql-6555b554cf-s8vpd                     1/1     Running   0              54m
[root@Node1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:1c:42:b3:c4:fb brd ff:ff:ff:ff:ff:ff
    inet 10.211.55.12/24 brd 10.211.55.255 scope global dynamic noprefixroute enp0s5
       valid_lft 1002sec preferred_lft 1002sec
    inet 10.211.55.245/32 scope global enp0s5
       valid_lft forever preferred_lft forever
    inet6 fdb2:2c26:f4e4:0:21c:42ff:feb3:c4fb/64 scope global dynamic noprefixroute
       valid_lft 2591742sec preferred_lft 604542sec
    inet6 fe80::21c:42ff:feb3:c4fb/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.244.166.128/32 scope global tunl0
       valid_lft forever preferred_lft forever
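The ip addr output above already shows the VIP bound to node1. Because the generated manifest enables leader election with the lease name plndr-cp-lock, another way to see which node kube-vip currently treats as leader is to query that Lease; this check is a sketch and not part of the original transcript:

kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'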
The virtual IP that used to live on the master has moved over to node1, and the mater node shows as NotReady, which does not affect the use of the cluster. The pods that ran on the old master still show a normal status; this can be ignored, since those containers are no longer doing any work and can be deleted. If they are deleted, they will be recreated automatically once the master comes back up, so the cluster as a whole is unaffected. Now let's verify the database we created earlier.
[root@Node1 ~]# kubectl exec -it -n wulaoer mysql-6555b554cf-s8vpd /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@mysql-6555b554cf-s8vpd:/# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 706
Server version: 8.0.19 MySQL Community Server - GPL

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| wulaoer            |
+--------------------+
5 rows in set (0.01 sec)
The database is still there, which shows that this is still the original cluster and the data has been preserved; the cluster upgrade is complete. Note that a cluster with three master nodes can tolerate one of them being down, and five masters can tolerate two, because etcd stays available only while a majority of its members (⌊n/2⌋ + 1) are running.