Version - **2024.01**

Last update: ~~LASTMOD~~


======DOF307 - Troubleshooting K8s======

=====Module Content=====

  * **DOF307 - Troubleshooting K8s**
    * Module Content
    * LAB #1 - The API Server
      * 1.1 - Connection Refused
      * 1.2 - System Pod Logs
    * LAB #2 - Nodes
      * 2.1 - The NotReady Status
    * LAB #3 - Pods
      * 3.1 - The ImagePullBackOff Error
      * 3.2 - The CrashLoopBackOff Error
    * LAB #4 - Containers
      * 4.1 - The exec Command
    * LAB #5 - The Network
      * 5.1 - kube-proxy and DNS
      * 5.2 - The netshoot Container

=====LAB #1 - The API Server=====

====1.1 - Connection Refused====

When it is not possible to connect to the K8s API server, you will see an error such as:

<code>
trainee@kubemaster:~$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code>

As a general rule, this error is caused by one of the following three situations:

===The kubelet Service===

Check that the kubelet service is enabled and running:

<code>
trainee@kubemaster:~$ su -
Mot de passe : fenestros

root@kubemaster:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; ...)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: active (running) since ...
       Docs: https://kubernetes.io/docs/home/
   Main PID: 550 (kubelet)
      Tasks: 17 (limit: 4915)
     Memory: ...
        CPU: 4h 16min 54.676s
     CGroup: ...

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
lines 1-14/14 (END)
[q]
</code>
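
If the service turns out to be stopped or disabled, enabling and restarting it is the usual first fix; a minimal sketch, with the journal checked afterwards for the reason it went down:

<code>
root@kubemaster:~# systemctl enable kubelet
root@kubemaster:~# systemctl restart kubelet
root@kubemaster:~# journalctl -u kubelet --no-pager | tail -20
</code>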
+ | |||
+ | ===La Variable KUBECONFIG=== | ||
+ | |||
+ | Si vous utilisez le compte root pour interagir avec K8s, vérifiez que la variable **KUBECONFIG** est renseignée correctement : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | / | ||
+ | </ | ||
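
If the variable is empty, a minimal sketch of a fix, assuming the kubeadm-generated admin file is in its default location:

<code>
root@kubemaster:~# export KUBECONFIG=/etc/kubernetes/admin.conf
root@kubemaster:~# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
</code>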
+ | |||
+ | ===Le Fichier $HOME/ | ||
+ | |||
+ | Si vous utilisez un compte d'un utilisateur normal pour interagir avec K8s, vérifiez que le fichier **$HOME/ | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | déconnexion | ||
+ | trainee@kubemaster: | ||
+ | |||
+ | trainee@kubemaster: | ||
+ | apiVersion: v1 | ||
+ | clusters: | ||
+ | - cluster: | ||
+ | certificate-authority-data: | ||
+ | server: https:// | ||
+ | name: kubernetes | ||
+ | contexts: | ||
+ | - context: | ||
+ | cluster: kubernetes | ||
+ | user: kubernetes-admin | ||
+ | name: kubernetes-admin@kubernetes | ||
+ | current-context: | ||
+ | kind: Config | ||
+ | preferences: | ||
+ | users: | ||
+ | - name: kubernetes-admin | ||
+ | user: | ||
+ | client-certificate-data: | ||
+ | client-key-data: | ||
+ | </ | ||
+ | |||
+ | < | ||
+ | trainee@kubemaster: | ||
+ | -rw------- 1 trainee sudo 5636 sept. 28 12:56 / | ||
+ | |||
+ | trainee@kubemaster: | ||
+ | Mot de passe : | ||
+ | root@kubemaster: | ||
+ | </ | ||
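
If the file is missing, it can be recreated from the admin kubeconfig using the standard kubeadm procedure:

<code>
trainee@kubemaster:~$ mkdir -p $HOME/.kube
trainee@kubemaster:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
trainee@kubemaster:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code>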
+ | |||
+ | ====1.2 - Journaux des Pods Système==== | ||
+ | |||
+ | Si, à ce stade, vous n'avez pas trouvé d' | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | total 28 | ||
+ | drwxr-xr-x 6 root root 4096 sept. 4 09:44 kube-system_calico-node-dc7hd_3fe340ed-6df4-4252-9e4e-8c244453176a | ||
+ | drwxr-xr-x 3 root root 4096 sept. 4 13:00 kube-system_coredns-565d847f94-tqd8z_d96f42ed-ebd4-4eb9-8c89-2d80b81ef9cf | ||
+ | drwxr-xr-x 3 root root 4096 sept. 4 12:36 kube-system_etcd-kubemaster.ittraining.loc_ddbb10499877103d862e5ce637b18ab1 | ||
+ | drwxr-xr-x 3 root root 4096 sept. 4 12:36 kube-system_kube-apiserver-kubemaster.ittraining.loc_ec70600cac9ca8c8ea9545f1a42f82e5 | ||
+ | drwxr-xr-x 3 root root 4096 sept. 4 12:36 kube-system_kube-controller-manager-kubemaster.ittraining.loc_0e3dcf54223b4398765d21e9e6aaebc6 | ||
+ | drwxr-xr-x 3 root root 4096 sept. 4 12:31 kube-system_kube-proxy-x7fpc_80673937-ff21-4dba-a821-fb3b0b1541a4 | ||
+ | drwxr-xr-x 3 root root 4096 sept. 4 12:36 kube-system_kube-scheduler-kubemaster.ittraining.loc_c3485d2a42b90757729a745cd8ee5f7d | ||
+ | |||
+ | root@kubemaster: | ||
+ | total 4 | ||
+ | drwxr-xr-x 2 root root 4096 sept. 16 09:31 kube-apiserver | ||
+ | |||
+ | root@kubemaster: | ||
+ | total 2420 | ||
+ | -rw-r----- 1 root root 1009731 sept. 16 08:19 0.log | ||
+ | -rw-r----- 1 root root 1460156 sept. 28 12:22 1.log | ||
+ | |||
+ | root@kubemaster: | ||
+ | 2022-09-28T11: | ||
+ | 2022-09-28T11: | ||
+ | 2022-09-28T11: | ||
+ | 2022-09-28T11: | ||
+ | 2022-09-28T12: | ||
+ | 2022-09-28T12: | ||
+ | 2022-09-28T12: | ||
+ | 2022-09-28T12: | ||
+ | 2022-09-28T12: | ||
+ | 2022-09-28T12: | ||
+ | </ | ||
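
When the API server itself is down, **kubectl** is unavailable, but you can still query the container runtime directly on the node. A sketch, assuming a containerd runtime and its default socket path:

<code>
root@kubemaster:~# crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube-apiserver
root@kubemaster:~# crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINER_ID
</code>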
+ | |||
+ | A noter que quand le serveur API redevient fonctionnel, | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | NAME READY | ||
+ | calico-kube-controllers-6799f5f4b4-2tgpq | ||
+ | calico-node-5htrc | ||
+ | calico-node-dc7hd | ||
+ | calico-node-qk5kt | ||
+ | coredns-565d847f94-kkpbp | ||
+ | coredns-565d847f94-tqd8z | ||
+ | etcd-kubemaster.ittraining.loc | ||
+ | kube-apiserver-kubemaster.ittraining.loc | ||
+ | kube-controller-manager-kubemaster.ittraining.loc | ||
+ | kube-proxy-ggmt6 | ||
+ | kube-proxy-x5j2r | ||
+ | kube-proxy-x7fpc | ||
+ | kube-scheduler-kubemaster.ittraining.loc | ||
+ | metrics-server-5dbb5ff5bd-vh5fz | ||
+ | </ | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | Trace[1595276047]: | ||
+ | I0928 09: | ||
+ | Trace[1267846829]: | ||
+ | Trace[1267846829]: | ||
+ | I0928 10: | ||
+ | I0928 10: | ||
+ | Trace[338168453]: | ||
+ | I0928 10: | ||
+ | Trace[238339745]: | ||
+ | Trace[238339745]: | ||
+ | </ | ||
+ | |||
+ | =====LAB #2 - Les Nœuds===== | ||
+ | |||
+ | ====2.1 - Le Statut NotReady==== | ||
+ | |||
+ | Quand un nœud du cluster démontre un problème, il convient de regarder la section **Conditions** dans la sortie de la commande **kubectl describe node** du nœud concerné : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | ... | ||
+ | Conditions: | ||
+ | Type | ||
+ | ---- | ||
+ | NetworkUnavailable | ||
+ | MemoryPressure | ||
+ | DiskPressure | ||
+ | PIDPressure | ||
+ | Ready True Wed, 28 Sep 2022 09:17:21 +0200 Thu, 15 Sep 2022 17:57:04 +0200 | ||
+ | ... | ||
+ | </ | ||
+ | |||
+ | En règle générale, le statut de NotReady est créé par la panne du service **kubelet** sur le nœud, comme démontre l' | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | trainee@192.168.56.3' | ||
+ | Linux kubenode1.ittraining.loc 4.9.0-19-amd64 #1 SMP Debian 4.9.320-2 (2022-06-30) x86_64 | ||
+ | |||
+ | The programs included with the Debian GNU/Linux system are free software; | ||
+ | the exact distribution terms for each program are described in the | ||
+ | individual files in / | ||
+ | |||
+ | Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent | ||
+ | permitted by applicable law. | ||
+ | Last login: Fri Sep 16 18:07:39 2022 from 192.168.56.2 | ||
+ | trainee@kubenode1: | ||
+ | Mot de passe : fenestros | ||
+ | |||
+ | root@kubenode1: | ||
+ | |||
+ | root@kubenode1: | ||
+ | Removed / | ||
+ | |||
+ | root@kubenode1: | ||
+ | déconnexion | ||
+ | trainee@kubenode1: | ||
+ | déconnexion | ||
+ | Connection to 192.168.56.3 closed. | ||
+ | |||
+ | root@kubemaster: | ||
+ | NAME STATUS | ||
+ | kubemaster.ittraining.loc | ||
+ | kubenode1.ittraining.loc | ||
+ | kubenode2.ittraining.loc | ||
+ | </ | ||
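
Before restarting the service, the kubelet journal on the node usually points at the root cause (certificate problems, misconfiguration, and so on). A minimal sketch, assuming systemd:

<code>
root@kubenode1:~# journalctl -u kubelet --no-pager | tail -20
</code>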
+ | |||
+ | En activant et en démarrant le service, le nœud retrouve son statut de **Ready** : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | trainee@192.168.56.3' | ||
+ | Linux kubenode1.ittraining.loc 4.9.0-19-amd64 #1 SMP Debian 4.9.320-2 (2022-06-30) x86_64 | ||
+ | |||
+ | The programs included with the Debian GNU/Linux system are free software; | ||
+ | the exact distribution terms for each program are described in the | ||
+ | individual files in / | ||
+ | |||
+ | Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent | ||
+ | permitted by applicable law. | ||
+ | Last login: Wed Sep 28 09:20:14 2022 from 192.168.56.2 | ||
+ | trainee@kubenode1: | ||
+ | Mot de passe : fenestros | ||
+ | |||
+ | root@kubenode1: | ||
+ | Created symlink / | ||
+ | |||
+ | root@kubenode1: | ||
+ | |||
+ | root@kubenode1: | ||
+ | ● kubelet.service - kubelet: The Kubernetes Node Agent | ||
+ | | ||
+ | Drop-In: / | ||
+ | | ||
+ | | ||
+ | Docs: https:// | ||
+ | Main PID: 5996 (kubelet) | ||
+ | Tasks: 18 (limit: 4915) | ||
+ | | ||
+ | CPU: 555ms | ||
+ | | ||
+ | | ||
+ | |||
+ | sept. 28 09:54:51 kubenode1.ittraining.loc kubelet[5996]: | ||
+ | sept. 28 09:54:52 kubenode1.ittraining.loc kubelet[5996]: | ||
+ | sept. 28 09:54:52 kubenode1.ittraining.loc kubelet[5996]: | ||
+ | sept. 28 09:54:52 kubenode1.ittraining.loc kubelet[5996]: | ||
+ | sept. 28 09:54:52 kubenode1.ittraining.loc kubelet[5996]: | ||
+ | sept. 28 09:54:52 kubenode1.ittraining.loc kubelet[5996]: | ||
+ | sept. 28 09:54:54 kubenode1.ittraining.loc kubelet[5996]: | ||
+ | sept. 28 09:54:56 kubenode1.ittraining.loc kubelet[5996]: | ||
+ | sept. 28 09:54:56 kubenode1.ittraining.loc kubelet[5996]: | ||
+ | sept. 28 09:54:57 kubenode1.ittraining.loc kubelet[5996]: | ||
+ | root@kubenode1: | ||
+ | |||
+ | root@kubenode1: | ||
+ | déconnexion | ||
+ | trainee@kubenode1: | ||
+ | déconnexion | ||
+ | Connection to 192.168.56.3 closed. | ||
+ | |||
+ | root@kubemaster: | ||
+ | NAME STATUS | ||
+ | kubemaster.ittraining.loc | ||
+ | kubenode1.ittraining.loc | ||
+ | kubenode2.ittraining.loc | ||
+ | </ | ||
+ | |||
+ | =====LAB #3 - Les Pods===== | ||
+ | |||
+ | Quand un pod du cluster démontre un problème, il convient de regarder la section **Events** dans la sortie de la commande **kubectl describe pod** du pod concerné. | ||
+ | |||
+ | ====3.1 - L' | ||
+ | |||
+ | Commencez par créer le fichier **deployment-postgresql.yaml** : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | root@kubemaster: | ||
+ | apiVersion: apps/v1 | ||
+ | kind: Deployment | ||
+ | metadata: | ||
+ | name: postgresql | ||
+ | labels: | ||
+ | app: postgresql | ||
+ | spec: | ||
+ | replicas: 1 | ||
+ | selector: | ||
+ | matchLabels: | ||
+ | app: postgresql | ||
+ | template: | ||
+ | metadata: | ||
+ | labels: | ||
+ | app: postgresql | ||
+ | spec: | ||
+ | containers: | ||
+ | - image: bitnami/ | ||
+ | imagePullPolicy: | ||
+ | name: postgresql | ||
+ | </ | ||
+ | |||
+ | Déployez ensuite l' | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | deployment.apps/ | ||
+ | </ | ||
+ | |||
+ | En consultant le pod créé, vous verrez qu'il y a une erreur de type **ImagePullBackOff** : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | NAME READY | ||
+ | postgresql-6778f6569c-x84xd | ||
+ | sharedvolume | ||
+ | volumepod | ||
+ | </ | ||
+ | |||
+ | Consultez la section **Events** de la sortie de la commande **describe** pour voir ce que se passe : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | | ||
+ | Events: | ||
+ | Type | ||
+ | ---- | ||
+ | Normal | ||
+ | Normal | ||
+ | Warning | ||
+ | Warning | ||
+ | Normal | ||
+ | Warning | ||
+ | </ | ||
+ | |||
+ | Comme vous pouvez constater, il existe trois avertissements | ||
+ | |||
+ | < | ||
+ | Warning | ||
+ | |||
+ | Warning | ||
+ | |||
+ | Warning | ||
+ | </ | ||
+ | |||
+ | Le premier des trois avertissements nous dit clairement qu'il y a un problème au niveau du tag de l' | ||
+ | |||
+ | Modifiez donc le tag dans ce fichier à ** 10.13.0** : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | root@kubemaster: | ||
+ | apiVersion: apps/v1 | ||
+ | kind: Deployment | ||
+ | metadata: | ||
+ | name: postgresql | ||
+ | labels: | ||
+ | app: postgresql | ||
+ | spec: | ||
+ | replicas: 1 | ||
+ | selector: | ||
+ | matchLabels: | ||
+ | app: postgresql | ||
+ | template: | ||
+ | metadata: | ||
+ | labels: | ||
+ | app: postgresql | ||
+ | spec: | ||
+ | containers: | ||
+ | - image: bitnami/ | ||
+ | imagePullPolicy: | ||
+ | name: postgresql | ||
+ | </ | ||
+ | |||
+ | Appliquez maintenant le fichier : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | deployment.apps/ | ||
+ | </ | ||
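
The rollout can then be followed until the new ReplicaSet becomes available; a short sketch:

<code>
root@kubemaster:~# kubectl rollout status deployment/postgresql
</code>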
+ | |||
+ | ====3.2 - L' | ||
+ | |||
+ | En consultant le deuxième Pod créé, vous verrez qu'il y a une erreur de type **CrashLoopBackOff** : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | NAME READY | ||
+ | postgresql-6668d5d6b5-swr9g | ||
+ | postgresql-6778f6569c-x84xd | ||
+ | sharedvolume | ||
+ | volumepod | ||
+ | </ | ||
+ | |||
+ | Consultez la section **Events** de la sortie de la commande **describe** pour voir ce que se passe avec le deuxième pod : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | Events: | ||
+ | Type | ||
+ | ---- | ||
+ | Normal | ||
+ | Normal | ||
+ | Normal | ||
+ | Normal | ||
+ | Normal | ||
+ | Normal | ||
+ | Warning | ||
+ | </ | ||
+ | |||
+ | Cette fois-ci, la section **Events** nous donne aucune indication concernant le problème ! | ||
+ | |||
+ | Pour obtenir plus d' | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | postgresql 08: | ||
+ | postgresql 08:43:48.60 Welcome to the Bitnami postgresql container | ||
+ | postgresql 08:43:48.60 Subscribe to project updates by watching https:// | ||
+ | postgresql 08:43:48.60 Submit issues and feature requests at https:// | ||
+ | postgresql 08: | ||
+ | postgresql 08:43:48.62 INFO ==> ** Starting PostgreSQL setup ** | ||
+ | postgresql 08:43:48.63 INFO ==> Validating settings in POSTGRESQL_* env vars.. | ||
+ | postgresql 08:43:48.63 ERROR ==> The POSTGRESQL_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development. | ||
+ | postgresql 08:43:48.63 ERROR ==> The POSTGRESQL_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development. | ||
+ | </ | ||
+ | |||
+ | La sortie de la commande **logs** nous indique clairement que le problème est lié au contenu de la variable **POSTGRESQL_PASSWORD** qui est vide. Elle nous indique aussi que nous pourrions fixer la valeur de la variable **ALLOW_EMPTY_PASSWORD** à **yes** pour contourner ce problème : | ||
+ | |||
+ | < | ||
+ | ... | ||
+ | postgresql 08:43:48.63 ERROR ==> The POSTGRESQL_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development. | ||
+ | </ | ||
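
With a pod in CrashLoopBackOff, the running container may already have been restarted by the time you look at it; the logs of the previous instance can be retrieved with the **--previous** flag:

<code>
root@kubemaster:~# kubectl logs postgresql-6668d5d6b5-swr9g --previous
</code>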
+ | |||
+ | Mettez à jour donc le fichier **deployment-postgresql.yaml** : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | root@kubemaster: | ||
+ | apiVersion: apps/v1 | ||
+ | kind: Deployment | ||
+ | metadata: | ||
+ | name: postgresql | ||
+ | labels: | ||
+ | app: postgresql | ||
+ | spec: | ||
+ | replicas: 1 | ||
+ | selector: | ||
+ | matchLabels: | ||
+ | app: postgresql | ||
+ | template: | ||
+ | metadata: | ||
+ | labels: | ||
+ | app: postgresql | ||
+ | spec: | ||
+ | containers: | ||
+ | - image: bitnami/ | ||
+ | imagePullPolicy: | ||
+ | name: postgresql | ||
+ | env: | ||
+ | - name: POSTGRESQL_PASSWORD | ||
+ | value: " | ||
+ | </ | ||
+ | |||
+ | Appliquez la configuration : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | deployment.apps/ | ||
+ | </ | ||
+ | |||
+ | Constatez l' | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | NAME READY | ||
+ | postgresql-6f885d8957-tnlbb | ||
+ | sharedvolume | ||
+ | volumepod | ||
+ | |||
+ | root@kubemaster: | ||
+ | NAME | ||
+ | postgresql | ||
+ | </ | ||
+ | |||
+ | Utilisez maintenant l' | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | postgresql 08: | ||
+ | postgresql 08:48:35.14 Welcome to the Bitnami postgresql container | ||
+ | postgresql 08:48:35.14 Subscribe to project updates by watching https:// | ||
+ | postgresql 08:48:35.14 Submit issues and feature requests at https:// | ||
+ | postgresql 08: | ||
+ | postgresql 08:48:35.16 INFO ==> ** Starting PostgreSQL setup ** | ||
+ | postgresql 08:48:35.17 INFO ==> Validating settings in POSTGRESQL_* env vars.. | ||
+ | postgresql 08:48:35.18 INFO ==> Loading custom pre-init scripts... | ||
+ | postgresql 08:48:35.18 INFO ==> Initializing PostgreSQL database... | ||
+ | postgresql 08:48:35.20 INFO ==> pg_hba.conf file not detected. Generating it... | ||
+ | postgresql 08:48:35.20 INFO ==> Generating local authentication configuration | ||
+ | postgresql 08:48:47.94 INFO ==> Starting PostgreSQL in background... | ||
+ | postgresql 08:48:48.36 INFO ==> Changing password of postgres | ||
+ | postgresql 08:48:48.39 INFO ==> Configuring replication parameters | ||
+ | postgresql 08:48:48.46 INFO ==> Configuring fsync | ||
+ | postgresql 08:48:48.47 INFO ==> Loading custom scripts... | ||
+ | postgresql 08:48:48.47 INFO ==> Enabling remote connections | ||
+ | postgresql 08:48:48.48 INFO ==> Stopping PostgreSQL... | ||
+ | postgresql 08:48:49.49 INFO ==> ** PostgreSQL setup finished! ** | ||
+ | |||
+ | postgresql 08:48:49.50 INFO ==> ** Starting PostgreSQL ** | ||
+ | 2022-09-28 08: | ||
+ | 2022-09-28 08: | ||
+ | 2022-09-28 08: | ||
+ | 2022-09-28 08: | ||
+ | 2022-09-28 08: | ||
+ | ^C | ||
+ | </ | ||
+ | |||
<WRAP center round important 60%>
**Important**: Note the use of **^C** to quit the **logs -f** stream. This stops the **kubectl** command, not the pod itself.
</WRAP>
+ | |||
+ | =====LAB #4 - Les Conteneurs===== | ||
+ | |||
+ | ====4.1 - La Commande exec==== | ||
+ | |||
+ | La commande **exec** peut être utilisée pour exécuter une commande à l' | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | # ----------------------------- | ||
+ | # PostgreSQL configuration file | ||
+ | # ----------------------------- | ||
+ | # | ||
+ | # This file consists of lines of the form: | ||
+ | # | ||
+ | # name = value | ||
+ | # | ||
+ | # (The " | ||
+ | # "#" | ||
+ | # values can be found in the PostgreSQL documentation. | ||
+ | # | ||
+ | # The commented-out settings shown in this file represent the default values. | ||
+ | # Re-commenting a setting is NOT sufficient to revert it to the default value; | ||
+ | # you need to reload the server. | ||
+ | # | ||
+ | # This file is read on server startup and when the server receives a SIGHUP | ||
+ | # signal. | ||
+ | # server for the changes to take effect, run " | ||
+ | # " | ||
+ | # require a server shutdown and restart to take effect. | ||
+ | # | ||
+ | # Any parameter can also be given as a command-line option to the server, e.g., | ||
+ | # " | ||
+ | # with the " | ||
+ | # | ||
+ | # Memory units: | ||
+ | # MB = megabytes | ||
+ | # GB = gigabytes | ||
+ | # TB = terabytes | ||
+ | # | ||
+ | |||
+ | |||
+ | # | ||
+ | # FILE LOCATIONS | ||
+ | # | ||
+ | |||
+ | # The default values of these variables are driven from the -D command-line | ||
+ | # option or PGDATA environment variable, represented here as ConfigDir. | ||
+ | |||
+ | # | ||
+ | # (change requires restart) | ||
+ | #hba_file = ' | ||
+ | # (change requires restart) | ||
+ | #ident_file = ' | ||
+ | # (change requires restart) | ||
+ | |||
+ | # If external_pid_file is not explicitly set, no extra PID file is written. | ||
+ | # | ||
+ | # (change requires restart) | ||
+ | |||
+ | |||
+ | # | ||
+ | # CONNECTIONS AND AUTHENTICATION | ||
+ | # | ||
+ | |||
+ | --More-- | ||
+ | </ | ||
+ | |||
+ | Dernièrement, | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | I have no name!@postgresql-6f885d8957-tnlbb:/ | ||
+ | exit | ||
+ | root@kubemaster: | ||
+ | </ | ||
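
When a pod runs more than one container, name the target container with the **-c** option; **mypod** and **sidecar** below are hypothetical names:

<code>
root@kubemaster:~# kubectl exec -it mypod -c sidecar -- sh
</code>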
+ | |||
+ | =====LAB #5 - Le Réseau====== | ||
+ | |||
+ | ====5.1 - kube-proxy et le DNS==== | ||
+ | |||
+ | Utilisez la commande **kubectl get pods** pour obtenir les noms des pods **kube-proxy** et **coredns** : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | NAME READY | ||
+ | calico-kube-controllers-6799f5f4b4-2tgpq | ||
+ | calico-node-5htrc | ||
+ | calico-node-dc7hd | ||
+ | calico-node-qk5kt | ||
+ | coredns-565d847f94-kkpbp | ||
+ | coredns-565d847f94-tqd8z | ||
+ | etcd-kubemaster.ittraining.loc | ||
+ | kube-apiserver-kubemaster.ittraining.loc | ||
+ | kube-controller-manager-kubemaster.ittraining.loc | ||
+ | kube-proxy-ggmt6 | ||
+ | kube-proxy-x5j2r | ||
+ | kube-proxy-x7fpc | ||
+ | kube-scheduler-kubemaster.ittraining.loc | ||
+ | metrics-server-5dbb5ff5bd-vh5fz | ||
+ | </ | ||
+ | |||
+ | Recherchez des erreurs éventuelles dans les journaux de chaque pod : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | I0916 07: | ||
+ | I0916 07: | ||
+ | I0916 07: | ||
+ | I0916 07: | ||
+ | I0916 07: | ||
+ | I0916 07: | ||
+ | I0916 07: | ||
+ | I0916 07: | ||
+ | I0916 07: | ||
+ | Trace[210170851]: | ||
+ | </ | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | [INFO] plugin/ | ||
+ | [INFO] plugin/ | ||
+ | .:53 | ||
+ | [INFO] plugin/ | ||
+ | CoreDNS-1.9.3 | ||
+ | linux/ | ||
+ | </ | ||
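
To test cluster DNS end to end without writing a manifest, a throwaway pod can be used; a sketch, assuming the **busybox** image is reachable from the cluster:

<code>
root@kubemaster:~# kubectl run dns-test --image=busybox:1.36 --rm -it --restart=Never -- nslookup kubernetes.default
</code>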
+ | |||
+ | ====5.2 - Le Conteneur netshoot==== | ||
+ | |||
+ | Si, à ce stade, vous n'avez pas trouvé d' | ||
+ | |||
+ | {{ : | ||
+ | |||
Create the **nginx-netshoot.yaml** file:

<code>
root@kubemaster:~# vi nginx-netshoot.yaml
root@kubemaster:~# cat nginx-netshoot.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-netshoot
  labels:
    app: nginx-netshoot
spec:
  containers:
  - name: nginx
    image: nginx:...
---
apiVersion: v1
kind: Service
metadata:
  name: service-netshoot
spec:
  type: ClusterIP
  selector:
    app: nginx-netshoot
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
</code>
+ | |||
+ | Créez le pod et le service : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | pod/ | ||
+ | service/ | ||
+ | </ | ||
+ | |||
+ | Vérifiez que le service est en cours d' | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | NAME | ||
+ | kubernetes | ||
+ | service-netshoot | ||
+ | </ | ||
+ | |||
+ | Créez maintenant le fichier **netshoot.yaml** : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | root@kubemaster: | ||
+ | apiVersion: v1 | ||
+ | kind: Pod | ||
+ | metadata: | ||
+ | name: netshoot | ||
+ | spec: | ||
+ | containers: | ||
+ | - name: netshoot | ||
+ | image: nicolaka/ | ||
+ | command: [' | ||
+ | </ | ||
+ | |||
+ | Créez le pod : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | pod/ | ||
+ | </ | ||
+ | |||
+ | Vérifiez que le status du pod est **READY** : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | NAME READY | ||
+ | netshoot | ||
+ | nginx-netshoot | ||
+ | postgresql-6f885d8957-tnlbb | ||
+ | sharedvolume | ||
+ | troubleshooting | ||
+ | volumepod | ||
+ | </ | ||
+ | |||
+ | Entrez dans le conteneur **netshoot** : | ||
+ | |||
+ | < | ||
+ | root@kubemaster: | ||
+ | bash-5.1# | ||
+ | </ | ||
+ | |||
+ | Testez le bon fonctionnement du service **service-netshoot** : | ||
+ | |||
+ | < | ||
+ | bash-5.1# curl service-netshoot | ||
+ | < | ||
+ | < | ||
+ | < | ||
+ | < | ||
+ | < | ||
+ | body { | ||
+ | width: 35em; | ||
+ | margin: 0 auto; | ||
+ | font-family: | ||
+ | } | ||
+ | </ | ||
+ | </ | ||
+ | < | ||
+ | < | ||
+ | <p>If you see this page, the nginx web server is successfully installed and | ||
+ | working. Further configuration is required.</ | ||
+ | |||
+ | < | ||
+ | <a href=" | ||
+ | Commercial support is available at | ||
+ | <a href=" | ||
+ | |||
+ | < | ||
+ | </ | ||
+ | </ | ||
+ | </ | ||
+ | |||
+ | Dernièrement, | ||
+ | |||
+ | < | ||
+ | bash-5.1# nslookup service-netshoot | ||
+ | Server: | ||
+ | Address: | ||
+ | |||
+ | Name: | ||
+ | Address: 10.107.115.28 | ||
+ | </ | ||
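
The **netshoot** image ships many more diagnostic tools; for example, a short sketch using **dig** against the record returned above:

<code>
bash-5.1# dig +short service-netshoot.default.svc.cluster.local
10.107.115.28
</code>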
+ | |||
+ | <WRAP center round important 60%> | ||
+ | **Important** : Pour plus d' | ||
+ | </ | ||
+ | |||
+ | ---- | ||
+ | Copyright © 2024 Hugh Norris |