Version - 2022.01
Last update: 2022/09/08 09:40
DOF303 - Managing Cluster Maintenance and Updates
Module Contents
- DOF303 - Managing Cluster Maintenance and Updates
- Module Contents
- LAB #1 - Managing Maintenance
- 1.1 - The drain Command
- 1.2 - The uncordon Command
- LAB #2 - Managing Updates
- 2.1 - Updating kubeadm
- 2.2 - Updating the Workers
LAB #1 - Managing Maintenance
In order to carry out maintenance on a node, it is often necessary to remove it from the cluster. This operation is called a drain.
1.1 - The drain Command
Check the state of the pods:
root@kubemaster:~# kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                                READY   STATUS    RESTARTS      AGE   IP               NODE                        NOMINATED NODE   READINESS GATES
default       myapp-deployment-57c6cb89d9-dh4cb                   1/1     Running   0             27m   192.168.150.2    kubenode2.ittraining.loc    <none>           <none>
default       myapp-deployment-57c6cb89d9-f69nk                   1/1     Running   0             27m   192.168.239.2    kubenode1.ittraining.loc    <none>           <none>
default       myapp-deployment-57c6cb89d9-q7d4p                   1/1     Running   0             27m   192.168.150.3    kubenode2.ittraining.loc    <none>           <none>
default       nginx                                               1/1     Running   0             32m   192.168.239.1    kubenode1.ittraining.loc    <none>           <none>
kube-system   calico-kube-controllers-6799f5f4b4-zk298            1/1     Running   0             60m   192.168.55.195   kubemaster.ittraining.loc   <none>           <none>
kube-system   calico-node-5htrc                                   1/1     Running   0             50m   192.168.56.3     kubenode1.ittraining.loc    <none>           <none>
kube-system   calico-node-dc7hd                                   1/1     Running   0             60m   10.0.2.65        kubemaster.ittraining.loc   <none>           <none>
kube-system   calico-node-qk5kt                                   1/1     Running   0             52m   192.168.56.4     kubenode2.ittraining.loc    <none>           <none>
kube-system   coredns-6d4b75cb6d-kxtqk                            1/1     Running   0             62m   192.168.55.194   kubemaster.ittraining.loc   <none>           <none>
kube-system   coredns-6d4b75cb6d-td7cf                            1/1     Running   0             62m   192.168.55.193   kubemaster.ittraining.loc   <none>           <none>
kube-system   etcd-kubemaster.ittraining.loc                      1/1     Running   1 (57m ago)   63m   10.0.2.65        kubemaster.ittraining.loc   <none>           <none>
kube-system   kube-apiserver-kubemaster.ittraining.loc            1/1     Running   2 (55m ago)   63m   10.0.2.65        kubemaster.ittraining.loc   <none>           <none>
kube-system   kube-controller-manager-kubemaster.ittraining.loc   1/1     Running   5 (50m ago)   63m   10.0.2.65        kubemaster.ittraining.loc   <none>           <none>
kube-system   kube-proxy-fpksg                                    1/1     Running   0             62m   10.0.2.65        kubemaster.ittraining.loc   <none>           <none>
kube-system   kube-proxy-sn26v                                    1/1     Running   0             50m   192.168.56.3     kubenode1.ittraining.loc    <none>           <none>
kube-system   kube-proxy-wxm4z                                    1/1     Running   0             52m   192.168.56.4     kubenode2.ittraining.loc    <none>           <none>
kube-system   kube-scheduler-kubemaster.ittraining.loc            1/1     Running   5 (51m ago)   63m   10.0.2.65        kubemaster.ittraining.loc   <none>           <none>
Important: Note that there are 4 pods on kubenode1.ittraining.loc, namely myapp-deployment-57c6cb89d9-f69nk, nginx, calico-node-5htrc and kube-proxy-sn26v.
Now drain kubenode1.ittraining.loc:
root@kubemaster:~# kubectl drain kubenode1.ittraining.loc
node/kubenode1.ittraining.loc cordoned
error: unable to drain node "kubenode1.ittraining.loc" due to error:[cannot delete Pods declare no controller (use --force to override): default/nginx, cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-5htrc, kube-system/kube-proxy-sn26v], continuing command...
There are pending nodes to be drained:
 kubenode1.ittraining.loc
cannot delete Pods declare no controller (use --force to override): default/nginx
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-5htrc, kube-system/kube-proxy-sn26v
Note that the command returns two errors:
- cannot delete Pods declare no controller (use --force to override): default/nginx
- cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-5htrc, kube-system/kube-proxy-sn26v
The first error is due to the fact that the operation cannot move an isolated pod, in other words a pod that is not managed by a Controller, from one node to another. In this case, the drain can only delete the nginx pod, and it refuses to do so without the --force option.
Important: The word Controller here covers a ReplicationController, a ReplicaSet, a Job, a DaemonSet and a StatefulSet.
The second error is due to the fact that the operation cannot handle DaemonSets.
Important: A DaemonSet contains pods that are tied to specific nodes.
Therefore run the command again, adding the two options --ignore-daemonsets and --force:
root@kubemaster:~# kubectl drain kubenode1.ittraining.loc --ignore-daemonsets --force
node/kubenode1.ittraining.loc already cordoned
WARNING: deleting Pods that declare no controller: default/nginx; ignoring DaemonSet-managed Pods: kube-system/calico-node-5htrc, kube-system/kube-proxy-sn26v
evicting pod default/nginx
evicting pod default/myapp-deployment-57c6cb89d9-f69nk
pod/nginx evicted
pod/myapp-deployment-57c6cb89d9-f69nk evicted
node/kubenode1.ittraining.loc drained
Important: Note that this time the command returned no errors.
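As an aside, you can predict which pods a plain drain will refuse to evict by checking whether each pod has an owning controller. A minimal sketch, using a pod name from the listings above:

# Sketch: print the kind of controller (if any) that owns a pod.
# A Deployment pod is owned by a ReplicaSet; an isolated pod such as
# the former nginx pod prints nothing.
kubectl get pod myapp-deployment-57c6cb89d9-dh4cb -o jsonpath='{.metadata.ownerReferences[*].kind}'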
Check the state of the pods again:
root@kubemaster:~# kubectl get pods -o wide --all-namespaces
NAMESPACE     NAME                                                READY   STATUS    RESTARTS      AGE     IP               NODE                        NOMINATED NODE   READINESS GATES
default       myapp-deployment-57c6cb89d9-dh4cb                   1/1     Running   0             45m     192.168.150.2    kubenode2.ittraining.loc    <none>           <none>
default       myapp-deployment-57c6cb89d9-q7d4p                   1/1     Running   0             45m     192.168.150.3    kubenode2.ittraining.loc    <none>           <none>
default       myapp-deployment-57c6cb89d9-l7lkd                   1/1     Running   0             6m22s   192.168.150.4    kubenode2.ittraining.loc    <none>           <none>
kube-system   calico-kube-controllers-6799f5f4b4-zk298            1/1     Running   0             77m     192.168.55.195   kubemaster.ittraining.loc   <none>           <none>
kube-system   calico-node-5htrc                                   1/1     Running   0             68m     192.168.56.3     kubenode1.ittraining.loc    <none>           <none>
kube-system   calico-node-dc7hd                                   1/1     Running   0             77m     10.0.2.65        kubemaster.ittraining.loc   <none>           <none>
kube-system   calico-node-qk5kt                                   1/1     Running   0             70m     192.168.56.4     kubenode2.ittraining.loc    <none>           <none>
kube-system   coredns-6d4b75cb6d-kxtqk                            1/1     Running   0             80m     192.168.55.194   kubemaster.ittraining.loc   <none>           <none>
kube-system   coredns-6d4b75cb6d-td7cf                            1/1     Running   0             80m     192.168.55.193   kubemaster.ittraining.loc   <none>           <none>
kube-system   etcd-kubemaster.ittraining.loc                      1/1     Running   1 (74m ago)   80m     10.0.2.65        kubemaster.ittraining.loc   <none>           <none>
kube-system   kube-apiserver-kubemaster.ittraining.loc            1/1     Running   2 (73m ago)   80m     10.0.2.65        kubemaster.ittraining.loc   <none>           <none>
kube-system   kube-controller-manager-kubemaster.ittraining.loc   1/1     Running   5 (67m ago)   80m     10.0.2.65        kubemaster.ittraining.loc   <none>           <none>
kube-system   kube-proxy-fpksg                                    1/1     Running   0             80m     10.0.2.65        kubemaster.ittraining.loc   <none>           <none>
kube-system   kube-proxy-sn26v                                    1/1     Running   0             68m     192.168.56.3     kubenode1.ittraining.loc    <none>           <none>
kube-system   kube-proxy-wxm4z                                    1/1     Running   0             70m     192.168.56.4     kubenode2.ittraining.loc    <none>           <none>
kube-system   kube-scheduler-kubemaster.ittraining.loc            1/1     Running   5 (68m ago)   80m     10.0.2.65        kubemaster.ittraining.loc   <none>           <none>
Important: Note that the nginx pod was destroyed while the myapp-deployment-57c6cb89d9-f69nk pod was evicted. A new pod named myapp-deployment-57c6cb89d9-l7lkd was created on kubenode2.ittraining.loc in order to keep the replica count at 3. The two pods calico-node-5htrc and kube-proxy-sn26v were ignored.
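You can confirm that the Deployment controller has restored the desired replica count, for example:

# Sketch: the Deployment should report 3/3 ready replicas, all of them
# now scheduled on kubenode2.ittraining.loc.
kubectl get deployment myapp-deployment
kubectl get pods -o wide | grep myapp-deployment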
Now check the state of the nodes:
root@kubemaster:~# kubectl get nodes
NAME                        STATUS                     ROLES           AGE   VERSION
kubemaster.ittraining.loc   Ready                      control-plane   91m   v1.24.2
kubenode1.ittraining.loc    Ready,SchedulingDisabled   <none>          80m   v1.24.2
kubenode2.ittraining.loc    Ready                      <none>          82m   v1.24.2
Important: Note that the STATUS of kubenode1.ittraining.loc is Ready,SchedulingDisabled, which means the node no longer accepts new pods. In this state the node is said to be cordoned.
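Note that a drain is effectively a cordon plus an eviction. If you only need to stop new pods from being scheduled on a node, without evicting the pods already running there, kubectl cordon alone is enough. A minimal sketch (the node name is a placeholder):

# Sketch: cordon marks the node unschedulable but leaves running pods in place.
kubectl cordon <node-name>
kubectl get node <node-name>   # STATUS shows Ready,SchedulingDisabled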
1.2 - The uncordon Command
To allow the node to receive pods again, use the following command:
root@kubemaster:~# kubectl uncordon kubenode1.ittraining.loc
node/kubenode1.ittraining.loc uncordoned
Check the state of the nodes again:
root@kubemaster:~# kubectl get nodes
NAME                        STATUS   ROLES           AGE    VERSION
kubemaster.ittraining.loc   Ready    control-plane   124m   v1.24.2
kubenode1.ittraining.loc    Ready    <none>          113m   v1.24.2
kubenode2.ittraining.loc    Ready    <none>          115m   v1.24.2
Finally, check the state of the pods again:
root@kubemaster:~# kubectl get pods -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP              NODE                       NOMINATED NODE   READINESS GATES
myapp-deployment-57c6cb89d9-dh4cb   1/1     Running   0          91m   192.168.150.2   kubenode2.ittraining.loc   <none>           <none>
myapp-deployment-57c6cb89d9-q7d4p   1/1     Running   0          91m   192.168.150.3   kubenode2.ittraining.loc   <none>           <none>
myapp-deployment-57c6cb89d9-l7lkd   1/1     Running   0          52m   192.168.150.4   kubenode2.ittraining.loc   <none>           <none>
Important: Note that using the uncordon command does not move the l7lkd pod back to the kubenode1.ittraining.loc node: uncordon only makes the node schedulable again for future pods; it does not rebalance existing ones.
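If you do want the Deployment's pods to spread across both nodes again, one possibility (a sketch, not part of the original lab) is to trigger a rolling restart so the scheduler re-places the pods:

# Sketch: re-create the Deployment's pods; the scheduler can now
# place some of them on the freshly uncordoned kubenode1.
kubectl rollout restart deployment myapp-deployment
kubectl get pods -o wide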
LAB #2 - Managing Updates
2.1 - Updating kubeadm
In order to update kubeadm, first drain the Controller (the control plane node):
root@kubemaster:~# kubectl drain kubemaster.ittraining.loc --ignore-daemonsets
node/kubemaster.ittraining.loc cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-dc7hd, kube-system/kube-proxy-fpksg
evicting pod kube-system/coredns-6d4b75cb6d-td7cf
evicting pod kube-system/calico-kube-controllers-6799f5f4b4-zk298
evicting pod kube-system/coredns-6d4b75cb6d-kxtqk
pod/calico-kube-controllers-6799f5f4b4-zk298 evicted
pod/coredns-6d4b75cb6d-td7cf evicted
pod/coredns-6d4b75cb6d-kxtqk evicted
node/kubemaster.ittraining.loc drained
To find out which versions newer than the installed one are available, use the following command:
root@kubemaster:~# apt-cache madison kubeadm | more
   kubeadm |  1.25.0-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.4-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.3-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.2-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.1-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.0-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm | 1.23.10-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.23.9-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.23.8-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.23.7-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.23.6-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.23.5-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.23.4-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.23.3-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.23.2-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.23.1-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.23.0-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm | 1.22.13-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm | 1.22.12-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm | 1.22.11-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm | 1.22.10-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.22.9-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.22.8-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.22.7-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
   kubeadm |  1.22.6-00 | https://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
--Plus-- [q]
Important: Note that the most recent version is 1.25.0-00.
Now upgrade kubeadm:
root@kubemaster:~# apt-get update && apt-get install -y --allow-change-held-packages kubeadm=1.25.0-00
Atteint:1 http://security.debian.org/debian-security stretch/updates InRelease
Ign:2 http://ftp.fr.debian.org/debian stretch InRelease
Atteint:3 http://ftp.fr.debian.org/debian stretch-updates InRelease
Atteint:4 http://ftp.fr.debian.org/debian stretch Release
Réception de:5 https://download.docker.com/linux/debian stretch InRelease [44,8 kB]
Atteint:6 https://packages.cloud.google.com/apt kubernetes-xenial InRelease
44,8 ko réceptionnés en 0s (80,5 ko/s)
Lecture des listes de paquets... Fait
Lecture des listes de paquets... Fait
Construction de l'arbre des dépendances
Lecture des informations d'état... Fait
Les paquets suivants ont été installés automatiquement et ne sont plus nécessaires :
  libjsoncpp1 linux-image-4.9.0-8-amd64
Veuillez utiliser « apt autoremove » pour les supprimer.
Les paquets retenus suivants seront changés :
  kubeadm
Les paquets suivants seront mis à jour :
  kubeadm
1 mis à jour, 0 nouvellement installés, 0 à enlever et 5 non mis à jour.
Il est nécessaire de prendre 9 213 ko dans les archives.
Après cette opération, 586 ko d'espace disque seront libérés.
Réception de:1 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 kubeadm amd64 1.25.0-00 [9 213 kB]
9 213 ko réceptionnés en 0s (15,3 Mo/s)
apt-listchanges : Lecture des fichiers de modifications (« changelog »)...
(Lecture de la base de données... 137041 fichiers et répertoires déjà installés.)
Préparation du dépaquetage de .../kubeadm_1.25.0-00_amd64.deb ...
Dépaquetage de kubeadm (1.25.0-00) sur (1.24.2-00) ...
Paramétrage de kubeadm (1.25.0-00) ...
Important: Note the use of the --allow-change-held-packages option.
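This option is needed because the kubeadm package is on hold (the apt output above lists kubeadm among the "paquets retenus"), which prevents routine upgrades from moving it. A sketch for inspecting and managing the holds:

# Sketch: list currently held packages, then re-apply the hold on kubeadm
# after a deliberate version change so a plain apt-get upgrade cannot move it.
apt-mark showhold
apt-mark hold kubeadm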
Verify that the desired version has been installed:
root@kubemaster:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.0", GitCommit:"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2", GitTreeState:"clean", BuildDate:"2022-08-23T17:43:25Z", GoVersion:"go1.19", Compiler:"gc", Platform:"linux/amd64"}
To find out which versions of the Control Plane components are compatible with kubeadm version 1.25.0, use the kubeadm upgrade plan command:
root@kubemaster:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.24.2
[upgrade/versions] kubeadm version: v1.25.0
[upgrade/versions] Target version: v1.25.0
[upgrade/versions] Latest version in the v1.24 series: v1.24.4

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     3 x v1.24.2   v1.24.4

Upgrade to the latest version in the v1.24 series:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.24.2   v1.24.4
kube-controller-manager   v1.24.2   v1.24.4
kube-scheduler            v1.24.2   v1.24.4
kube-proxy                v1.24.2   v1.24.4
CoreDNS                   v1.8.6    v1.9.3
etcd                      3.5.3-0   3.5.4-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.24.4

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     3 x v1.24.2   v1.25.0

Upgrade to the latest stable version:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.24.2   v1.25.0
kube-controller-manager   v1.24.2   v1.25.0
kube-scheduler            v1.24.2   v1.25.0
kube-proxy                v1.24.2   v1.25.0
CoreDNS                   v1.8.6    v1.9.3
etcd                      3.5.3-0   3.5.4-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.25.0

_____________________________________________________________________

The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
Now upgrade the cluster to version 1.25.0:
root@kubemaster:~# kubeadm upgrade apply v1.25.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.25.0"
[upgrade/versions] Cluster version: v1.24.2
[upgrade/versions] kubeadm version: v1.25.0
[upgrade] Are you sure you want to proceed? [y/N]: y
At the end of the process, you will see the following two lines:
...
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.25.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
root@kubemaster:~#
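To double-check that the static control-plane Pods are now running the new images, one possibility is to query their image tags; the pod name below is taken from the earlier listings:

# Sketch: the kube-apiserver static pod should now reference a v1.25.0 image.
kubectl -n kube-system get pod kube-apiserver-kubemaster.ittraining.loc \
  -o jsonpath='{.spec.containers[0].image}'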
Now update kubelet and kubectl:
root@kubemaster:~# apt-get update && apt-get install -y --allow-change-held-packages kubelet=1.25.0-00 kubectl=1.25.0-00
...
In case the kubelet service unit file has been modified, reload the systemd daemon configuration and restart the kubelet service:
root@kubemaster:~# systemctl daemon-reload
root@kubemaster:~# systemctl restart kubelet
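You can then verify that the kubelet service came back up and that the upgraded binary is in place, for example:

# Sketch: confirm the service is active and check the installed kubelet version.
systemctl is-active kubelet
kubelet --version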
Undo the drain of kubemaster:
root@kubemaster:~# kubectl uncordon kubemaster.ittraining.loc
node/kubemaster.ittraining.loc uncordoned
Now check the state of the nodes:
root@kubemaster:~# kubectl get nodes
NAME                        STATUS   ROLES           AGE     VERSION
kubemaster.ittraining.loc   Ready    control-plane   3h15m   v1.25.0
kubenode1.ittraining.loc    Ready    <none>          3h4m    v1.24.2
kubenode2.ittraining.loc    Ready    <none>          3h6m    v1.24.2
Important: Note that the Control Plane is at version 1.25.0 while the Workers are still at version 1.24.2.
2.2 - Updating the Workers
In order to update a Worker, first drain the Worker concerned:
root@kubemaster:~# kubectl drain kubenode1.ittraining.loc --ignore-daemonsets --force
node/kubenode1.ittraining.loc cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-5htrc, kube-system/kube-proxy-x5j2r
evicting pod kube-system/coredns-565d847f94-rh7vb
evicting pod kube-system/calico-kube-controllers-6799f5f4b4-6ng7z
pod/calico-kube-controllers-6799f5f4b4-6ng7z evicted
pod/coredns-565d847f94-rh7vb evicted
node/kubenode1.ittraining.loc drained
Connect to kubenode1:
root@kubemaster:~# ssh -l trainee kubenode1
trainee@kubenode1's password: trainee
Linux kubenode1.ittraining.loc 4.9.0-19-amd64 #1 SMP Debian 4.9.320-2 (2022-06-30) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Sep  4 09:40:17 2022 from 192.168.56.2
trainee@kubenode1:~$ su -
Mot de passe : fenestros
root@kubenode1:~#
Update the kubeadm package:
root@kubenode1:~# apt-get update && apt-get install -y --allow-change-held-packages kubeadm=1.25.0-00
...
Update the kubelet configuration:
root@kubenode1:~# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
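On a worker, kubeadm upgrade node only refreshes the local kubelet configuration; if you are curious, you can inspect the file it just wrote:

# Sketch: look at the kubelet configuration regenerated by 'kubeadm upgrade node'.
head -n 20 /var/lib/kubelet/config.yaml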
Now update kubelet and kubectl:
root@kubenode1:~# apt-get update && apt-get install -y --allow-change-held-packages kubelet=1.25.0-00 kubectl=1.25.0-00
...
In case the kubelet service unit file has been modified, reload the systemd daemon configuration and restart the kubelet service:
root@kubenode1:~# systemctl daemon-reload
root@kubenode1:~# systemctl restart kubelet
Return to the kubemaster machine:
root@kubenode1:~# exit
déconnexion
trainee@kubenode1:~$ exit
déconnexion
Connection to kubenode1 closed.
root@kubemaster:~#
Undo the drain of kubenode1:
root@kubemaster:~# kubectl uncordon kubenode1.ittraining.loc
node/kubenode1.ittraining.loc uncordoned
Now check the state of the nodes:
root@kubemaster:~# kubectl get nodes
NAME                        STATUS   ROLES           AGE     VERSION
kubemaster.ittraining.loc   Ready    control-plane   3h43m   v1.25.0
kubenode1.ittraining.loc    Ready    <none>          3h32m   v1.25.0
kubenode2.ittraining.loc    Ready    <none>          3h34m   v1.24.2
Important: Note that the Control Plane and kubenode1 are at version 1.25.0 while kubenode2 is still at version 1.24.2.
Drain kubenode2:
root@kubemaster:~# kubectl drain kubenode2.ittraining.loc --ignore-daemonsets --force
node/kubenode2.ittraining.loc cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-qk5kt, kube-system/kube-proxy-ggmt6
evicting pod kube-system/coredns-565d847f94-phx7b
evicting pod default/myapp-deployment-689f9d59-5446p
evicting pod default/myapp-deployment-689f9d59-9pkjz
evicting pod default/myapp-deployment-689f9d59-l7lkd
evicting pod kube-system/calico-kube-controllers-6799f5f4b4-pg6rm
pod/myapp-deployment-689f9d59-5446p evicted
pod/calico-kube-controllers-6799f5f4b4-pg6rm evicted
pod/myapp-deployment-689f9d59-9pkjz evicted
pod/myapp-deployment-689f9d59-l7lkd evicted
pod/coredns-565d847f94-phx7b evicted
node/kubenode2.ittraining.loc drained
Connect to kubenode2:
root@kubemaster:~# ssh -l trainee kubenode2
The authenticity of host 'kubenode2 (192.168.56.4)' can't be established.
ECDSA key fingerprint is SHA256:sEfHBv9azmK60cjqF/aJgUc9jg56slNaZQdAUcvBOvE.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'kubenode2,192.168.56.4' (ECDSA) to the list of known hosts.
trainee@kubenode2's password: trainee
Linux kubenode2.ittraining.loc 4.9.0-19-amd64 #1 SMP Debian 4.9.320-2 (2022-06-30) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Sep  4 09:42:58 2022 from 192.168.56.1
trainee@kubenode2:~$ su -
Mot de passe : fenestros
root@kubenode2:~#
Update the kubeadm package:
root@kubenode2:~# apt-get update && apt-get install -y --allow-change-held-packages kubeadm=1.25.0-00
...
Update the kubelet configuration:
root@kubenode2:~# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
Now update kubelet and kubectl:
root@kubenode2:~# apt-get update && apt-get install -y --allow-change-held-packages kubelet=1.25.0-00 kubectl=1.25.0-00
...
In case the kubelet service unit file has been modified, reload the systemd daemon configuration and restart the kubelet service:
root@kubenode2:~# systemctl daemon-reload
root@kubenode2:~# systemctl restart kubelet
Return to the kubemaster machine:
root@kubenode2:~# exit
déconnexion
trainee@kubenode2:~$ exit
déconnexion
Connection to kubenode2 closed.
root@kubemaster:~#
Undo the drain of kubenode2:
root@kubemaster:~# kubectl uncordon kubenode2.ittraining.loc
node/kubenode2.ittraining.loc uncordoned
Now check the state of the nodes:
root@kubemaster:~# kubectl get nodes
NAME                        STATUS   ROLES           AGE     VERSION
kubemaster.ittraining.loc   Ready    control-plane   3h56m   v1.25.0
kubenode1.ittraining.loc    Ready    <none>          3h45m   v1.25.0
kubenode2.ittraining.loc    Ready    <none>          3h47m   v1.25.0
Important: Note that everything has now been updated to version 1.25.0.
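As a final check, the kubelet version of every node can be printed in compact form, for example:

# Sketch: per-node kubelet version report; all three nodes should show v1.25.0.
kubectl get nodes -o custom-columns=NAME:.metadata.name,VERSION:.status.nodeInfo.kubeletVersion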
Copyright © 2022 Hugh Norris