Table of Contents
Version - 2020.02
Last updated: 2020/12/13 14:45
DOF308 - Using the kubectl Command
Module Content
- DOF308 - Using the kubectl Command
- Module Content
- LAB #1 - Working with kubectl
- 1.1 - Equivalents between the docker Command and the kubectl Command
- 1.2 - Getting Help on kubectl Commands
- 1.3 - Getting Information about the Cluster
- The version Command
- The cluster-info Command
- The api-versions Command
- The api-resources Command
- 1.4 - Working with Nodes
- The describe Command
- The top Command
- The cordon and uncordon Commands
- The drain Command
- The delete Command
- 1.5 - Managing Applications
- The expose Command
- The get Command
- The set Command
- The rollout Command
- 1.6 - Debugging an Application
- The logs Command
- The exec Command
- 1.7 - Managing kubectl Plugins
- The krew Command
- 1.8 - Managing Patches
- The kustomize Command
- 1.9 - Useful Aliases
- The kg Alias
- The kd Alias
- The kga Alias
- The kp Alias
- The kap Alias
- The kei Alias
- The ke Alias
- The kpf Alias
- The kl Alias
Preparation
Introducing kind
kind is a tool for running a Kubernetes cluster locally, using Docker containers as nodes. kind was developed to test Kubernetes itself, but it can also be used for local development.
The kind website is https://kind.sigs.k8s.io/docs/user/quick-start/. The project's GitHub page is https://github.com/kubernetes-sigs/kind.
Installing Docker-CE in the Debian_10 VM
Start by increasing the RAM of the Debian_10 virtual machine:
desktop@serverXX:~$ VBoxManage modifyvm Debian_10 --memory 8192
Then configure port forwarding for the ssh service:
desktop@serverXX:~$ VBoxManage modifyvm "Debian_10" --natpf1 "Debian_10,tcp,,9022,,22"
Start the Debian_10 virtual machine:
desktop@serverXX:~$ VBoxManage startvm Debian_10 --type headless
Waiting for VM "Debian_10" to power on...
VM "Debian_10" has been successfully started.
Wait 2 minutes, then connect to the virtual machine:
desktop@serverXX:~$ ssh -l trainee localhost -p 9022
trainee@localhost's password:
Linux debian10 4.19.0-6-amd64 #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Nov 30 15:50:01 2020 from 10.0.2.2
Then install Docker-CE:
trainee@debian10:~$ su -
Password: fenestros
root@debian10:~# apt-get update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common gnupg2
...
root@debian10:~# curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
...
root@debian10:~# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
...
root@debian10:~# apt-get update && apt-get install -y containerd.io=1.2.13-2 docker-ce=5:19.03.11~3-0~debian-$(lsb_release -cs) docker-ce-cli=5:19.03.11~3-0~debian-$(lsb_release -cs)
...
root@debian10:~# vi /etc/docker/daemon.json
root@debian10:~# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
root@debian10:~# mkdir -p /etc/systemd/system/docker.service.d
root@debian10:~# systemctl daemon-reload
root@debian10:~# systemctl restart docker
root@debian10:~# docker version
Client: Docker Engine - Community
 Version:           19.03.11
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        42e35e61f3
 Built:             Mon Jun  1 09:12:44 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.11
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       42e35e61f3
  Built:            Mon Jun  1 09:11:17 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
Installing kubelet, kubeadm and kubectl
Add the GPG key for the Kubernetes repository:
root@debian10:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
Add the Kubernetes repository:
root@debian10:~# echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
Install kubeadm, kubelet and kubectl:
root@debian10:~# apt-get update && apt-get install -y kubeadm kubelet kubectl
Hold the kubeadm, kubelet and kubectl packages so that they are not upgraded:
root@debian10:~# apt-mark hold kubelet kubeadm kubectl
kubelet set on hold.
kubeadm set on hold.
kubectl set on hold.
Installing kind and Starting the Cluster
Install kind:
root@debian10:~# curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.9.0/kind-linux-amd64
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    97  100    97    0     0    339      0 --:--:-- --:--:-- --:--:--   337
100   642  100   642    0     0   1414      0 --:--:-- --:--:-- --:--:--  1414
100 7247k  100 7247k    0     0  3549k      0  0:00:02  0:00:02 --:--:-- 9522k
root@debian10:~# chmod +x ./kind
root@debian10:~# mv kind /usr/local/bin/
root@debian10:~# which kind
/usr/local/bin/kind
Restart the virtual machine:
root@debian10:~# shutdown -r now
Connect to the Debian_10 virtual machine:
desktop@serverXX:~$ ssh -l trainee localhost -p 9022
trainee@localhost's password: trainee
Linux debian10 4.19.0-6-amd64 #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Mon Nov 30 13:47:09 2020 from 10.0.2.2
Become root and create the config.yaml file:
trainee@debian10:~$ su -
Password: fenestros
root@debian10:~# vi config.yaml
root@debian10:~# cat config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
- |
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  evictionHard:
    nodefs.available: "0%"
kubeadmConfigPatchesJSON6902:
- group: kubeadm.k8s.io
  version: v1beta2
  kind: ClusterConfiguration
  patch: |
    - op: add
      path: /apiServer/certSANs/-
      value: my-hostname
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
Create a cluster with kind:
root@debian10:~# kind create cluster --config config.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.19.1) 🖼
 ✓ Preparing nodes 📦 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
LAB #1 - Working with kubectl
1.1 - Equivalents between the docker Command and the kubectl Command
A prerequisite for working with Kubernetes is a good understanding of the docker command. To ease the transition from the docker command to the kubectl command, here is a list of the most commonly used commands:
To get information about the cluster, the docker command:
# docker info
becomes:
# kubectl cluster-info
To get version information, the docker command:
# docker version
becomes:
# kubectl version
To run an nginx container exposing port 80, the docker command:
# docker run -d --restart=always --name nginx -p 80:80 nginx
becomes two commands:
# kubectl create deployment --image=nginx nginx
and:
# kubectl expose deployment nginx --port=80 --name=nginx
To follow the logs of the nginx container continuously, the docker command:
# docker logs -f nginx
where nginx is the name of the container,
becomes:
# kubectl logs -f nginx
where nginx is the name of the pod.
A Pod is an object that encapsulates a container. The container is an instance of an application. The relationship between a Pod and an application container is generally 1:1: when load increases, additional Pods are created, each containing one application container, rather than running several containers in the same Pod.
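This 1:1 pattern is visible in a minimal Pod manifest. The sketch below is purely illustrative (the name nginx-pod is an assumption, not something used in this lab); note that the containers field is a list, but in the usual pattern it holds a single application container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # illustrative name, not used elsewhere in this lab
spec:
  containers:            # a list, but in the usual 1:1 pattern it holds one container
  - name: nginx
    image: nginx
```

Such a manifest could be applied with kubectl apply -f; scaling up means creating more Pods like this one, not adding containers to it.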
To get a shell inside the nginx container, the docker command:
# docker exec -it nginx /bin/bash
where nginx is the name of the container,
becomes:
# kubectl exec -it nginx -- /bin/bash
where nginx is the name of the pod.
To list containers, the docker command:
# docker ps -a
becomes:
# kubectl get pods
To stop and remove the nginx container, the docker commands:
# docker stop nginx
# docker rm nginx
become:
# kubectl delete deployment nginx
where nginx is the name of the deployment, and:
# kubectl delete pod nginx
A Deployment in Kubernetes is an object hierarchically above a ReplicaSet. A ReplicaSet fulfils the same role as a Replication Controller; ReplicaSets are the more recent way of managing replication. A Replication Controller runs several instances of the same Pod in order to provide high availability in case the application crashes and the Pod fails. Even when there is only one Pod, the Replication Controller can automatically start another Pod containing the application.
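To make the Deployment → ReplicaSet → Pod hierarchy concrete, here is a hypothetical Deployment manifest (the name, labels and replica count are illustrative assumptions, not part of the lab): the Deployment creates a ReplicaSet, which in turn keeps three identical Pods running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3                  # the ReplicaSet restarts Pods to keep 3 alive
  selector:
    matchLabels:
      app: nginx
  template:                    # Pod template stamped out by the ReplicaSet
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```

After applying such a file with kubectl apply -f, the command kubectl get deployments,replicasets,pods shows all three levels of the hierarchy.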
1.2 - Getting Help on kubectl Commands
kubectl commands are grouped by category:
root@debian10:~# kubectl --help kubectl controls the Kubernetes cluster manager. Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/ Basic Commands (Beginner): create Create a resource from a file or from stdin. expose Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service run Run a particular image on the cluster set Set specific features on objects Basic Commands (Intermediate): explain Documentation of resources get Display one or many resources edit Edit a resource on the server delete Delete resources by filenames, stdin, resources and names, or by resources and label selector Deploy Commands: rollout Manage the rollout of a resource scale Set a new size for a Deployment, ReplicaSet or Replication Controller autoscale Auto-scale a Deployment, ReplicaSet, or ReplicationController Cluster Management Commands: certificate Modify certificate resources. cluster-info Display cluster info top Display Resource (CPU/Memory/Storage) usage. cordon Mark node as unschedulable uncordon Mark node as schedulable drain Drain node in preparation for maintenance taint Update the taints on one or more nodes Troubleshooting and Debugging Commands: describe Show details of a specific resource or group of resources logs Print the logs for a container in a pod attach Attach to a running container exec Execute a command in a container port-forward Forward one or more local ports to a pod proxy Run a proxy to the Kubernetes API server cp Copy files and directories to and from containers. auth Inspect authorization Advanced Commands: diff Diff live version against would-be applied version apply Apply a configuration to a resource by filename or stdin patch Update field(s) of a resource using strategic merge patch replace Replace a resource by filename or stdin wait Experimental: Wait for a specific condition on one or many resources. 
convert Convert config files between different API versions kustomize Build a kustomization target from a directory or a remote url. Settings Commands: label Update the labels on a resource annotate Update the annotations on a resource completion Output shell completion code for the specified shell (bash or zsh) Other Commands: alpha Commands for features in alpha api-resources Print the supported API resources on the server api-versions Print the supported API versions on the server, in the form of "group/version" config Modify kubeconfig files plugin Provides utilities for interacting with plugins. version Print the client and server version information Usage: kubectl [flags] [options] Use "kubectl <command> --help" for more information about a given command. Use "kubectl options" for a list of global command-line options (applies to all commands).
More information about each command can be obtained by passing the --help option, for example:
root@debian10:~# kubectl create --help Create a resource from a file or from stdin. JSON and YAML formats are accepted. Examples: # Create a pod using the data in pod.json. kubectl create -f ./pod.json # Create a pod based on the JSON passed into stdin. cat pod.json | kubectl create -f - # Edit the data in docker-registry.yaml in JSON then create the resource using the edited data. kubectl create -f docker-registry.yaml --edit -o json Available Commands: clusterrole Create a ClusterRole. clusterrolebinding Create a ClusterRoleBinding for a particular ClusterRole configmap Create a configmap from a local file, directory or literal value cronjob Create a cronjob with the specified name. deployment Create a deployment with the specified name. job Create a job with the specified name. namespace Create a namespace with the specified name poddisruptionbudget Create a pod disruption budget with the specified name. priorityclass Create a priorityclass with the specified name. quota Create a quota with the specified name. role Create a role with single rule. rolebinding Create a RoleBinding for a particular Role or ClusterRole secret Create a secret using specified subcommand service Create a service using specified subcommand. serviceaccount Create a service account with the specified name Options: --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in the template. Only applies to golang and jsonpath output formats. --dry-run='none': Must be "none", "server", or "client". If client strategy, only print the object that would be sent, without sending it. If server strategy, submit server-side request without persisting the resource. --edit=false: Edit the API resource before creating --field-manager='kubectl-create': Name of the manager used to track field ownership. -f, --filename=[]: Filename, directory, or URL to files to use to create the resource -k, --kustomize='': Process the kustomization directory. 
This flag can't be used together with -f or -R. -o, --output='': Output format. One of: json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-as-json|jsonpath-file. --raw='': Raw URI to POST to the server. Uses the transport specified by the kubeconfig file. --record=false: Record current kubectl command in the resource annotation. If set to false, do not record the command. If set to true, record the command. If not set, default to updating the existing annotation value only if one already exists. -R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage related manifests organized within the same directory. --save-config=false: If true, the configuration of current object will be saved in its annotation. Otherwise, the annotation will be unchanged. This flag is useful when you want to perform kubectl apply on this object in the future. -l, --selector='': Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2) --template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview]. --validate=true: If true, use a schema to validate the input before sending it --windows-line-endings=false: Only relevant if --edit=true. Defaults to the line ending native to your platform. Usage: kubectl create -f FILENAME [options] Use "kubectl <command> --help" for more information about a given command. Use "kubectl options" for a list of global command-line options (applies to all commands).
Finally, kubectl commands accept options. To see the options that can be passed to every kubectl command, run the following command:
root@debian10:~# kubectl options The following options can be passed to any command: --add-dir-header=false: If true, adds the file directory to the header of the log messages --alsologtostderr=false: log to standard error as well as files --as='': Username to impersonate for the operation --as-group=[]: Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --cache-dir='/root/.kube/cache': Default cache directory --certificate-authority='': Path to a cert file for the certificate authority --client-certificate='': Path to a client certificate file for TLS --client-key='': Path to a client key file for TLS --cluster='': The name of the kubeconfig cluster to use --context='': The name of the kubeconfig context to use --insecure-skip-tls-verify=false: If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig='': Path to the kubeconfig file to use for CLI requests. --log-backtrace-at=:0: when logging hits line file:N, emit a stack trace --log-dir='': If non-empty, write log files in this directory --log-file='': If non-empty, use this log file --log-file-max-size=1800: Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. --log-flush-frequency=5s: Maximum number of seconds between log flushes --logtostderr=true: log to standard error instead of files --match-server-version=false: Require server version to match client version -n, --namespace='': If present, the namespace scope for this CLI request --password='': Password for basic authentication to the API server --profile='none': Name of profile to capture. One of (none|cpu|heap|goroutine|threadcreate|block|mutex) --profile-output='profile.pprof': Name of the file to write the profile to --request-timeout='0': The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 
1s, 2m, 3h). A value of zero means don't timeout requests. -s, --server='': The address and port of the Kubernetes API server --skip-headers=false: If true, avoid header prefixes in the log messages --skip-log-headers=false: If true, avoid headers when opening log files --stderrthreshold=2: logs at or above this threshold go to stderr --tls-server-name='': Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used --token='': Bearer token for authentication to the API server --user='': The name of the kubeconfig user to use --username='': Username for basic authentication to the API server -v, --v=0: number for the log level verbosity --vmodule=: comma-separated list of pattern=N settings for file-filtered logging --warnings-as-errors=false: Treat warnings received from the server as errors and exit with a non-zero exit code
1.3 - Getting Information about the Cluster
The version Command
Start by getting the client and server version information:
root@debian10:~# kubectl version --short
Client Version: v1.20.0
Server Version: v1.19.1
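Because the output format above is one Key: value pair per line, it lends itself to scripting. The following is a minimal sketch that, instead of querying a live cluster, runs awk over a copy of the output shown above to extract just the server version:

```shell
# Sample output of `kubectl version --short`, copied from above.
output='Client Version: v1.20.0
Server Version: v1.19.1'

# Print the third field of the line starting with "Server Version:".
server_version=$(printf '%s\n' "$output" | awk '/^Server Version:/ {print $3}')
echo "$server_version"
```

On a live cluster the same pipeline would read from kubectl version --short directly.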
The cluster-info Command
Next, view information about the cluster:
root@debian10:~# kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:46537
KubeDNS is running at https://127.0.0.1:46537/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The api-versions Command
To find out which API versions are supported by the installed version of Kubernetes, run the api-versions command:
root@debian10:~# kubectl api-versions
admissionregistration.k8s.io/v1
admissionregistration.k8s.io/v1beta1
apiextensions.k8s.io/v1
apiextensions.k8s.io/v1beta1
apiregistration.k8s.io/v1
apiregistration.k8s.io/v1beta1
apps/v1
authentication.k8s.io/v1
authentication.k8s.io/v1beta1
authorization.k8s.io/v1
authorization.k8s.io/v1beta1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1
certificates.k8s.io/v1beta1
coordination.k8s.io/v1
coordination.k8s.io/v1beta1
discovery.k8s.io/v1beta1
events.k8s.io/v1
events.k8s.io/v1beta1
extensions/v1beta1
networking.k8s.io/v1
networking.k8s.io/v1beta1
node.k8s.io/v1beta1
policy/v1beta1
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1
scheduling.k8s.io/v1
scheduling.k8s.io/v1beta1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
The api-resources Command
The api-resources command lists the cluster's resources, namely:
- the name of the resource - NAME,
- the short name to use with kubectl - SHORTNAMES,
- the API group and version the resource belongs to - APIVERSION,
- whether or not the resource is namespaced - NAMESPACED,
- the KIND of the resource - KIND.
root@debian10:~# kubectl api-resources
NAME                              SHORTNAMES   APIVERSION                        NAMESPACED   KIND
bindings                                       v1                                true         Binding
componentstatuses                 cs           v1                                false        ComponentStatus
configmaps                        cm           v1                                true         ConfigMap
endpoints                         ep           v1                                true         Endpoints
events                            ev           v1                                true         Event
limitranges                       limits       v1                                true         LimitRange
namespaces                        ns           v1                                false        Namespace
nodes                             no           v1                                false        Node
persistentvolumeclaims            pvc          v1                                true         PersistentVolumeClaim
persistentvolumes                 pv           v1                                false        PersistentVolume
pods                              po           v1                                true         Pod
podtemplates                                   v1                                true         PodTemplate
replicationcontrollers            rc           v1                                true         ReplicationController
resourcequotas                    quota        v1                                true         ResourceQuota
secrets                                        v1                                true         Secret
serviceaccounts                   sa           v1                                true         ServiceAccount
services                          svc          v1                                true         Service
mutatingwebhookconfigurations                  admissionregistration.k8s.io/v1   false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io/v1   false        ValidatingWebhookConfiguration
customresourcedefinitions         crd,crds     apiextensions.k8s.io/v1           false        CustomResourceDefinition
apiservices                                    apiregistration.k8s.io/v1         false        APIService
controllerrevisions                            apps/v1                           true         ControllerRevision
daemonsets                        ds           apps/v1                           true         DaemonSet
deployments                       deploy       apps/v1                           true         Deployment
replicasets                       rs           apps/v1                           true         ReplicaSet
statefulsets                      sts          apps/v1                           true         StatefulSet
tokenreviews                                   authentication.k8s.io/v1          false        TokenReview
localsubjectaccessreviews                      authorization.k8s.io/v1           true         LocalSubjectAccessReview
selfsubjectaccessreviews                       authorization.k8s.io/v1           false        SelfSubjectAccessReview
selfsubjectrulesreviews                        authorization.k8s.io/v1           false        SelfSubjectRulesReview
subjectaccessreviews                           authorization.k8s.io/v1           false        SubjectAccessReview
horizontalpodautoscalers          hpa          autoscaling/v1                    true         HorizontalPodAutoscaler
cronjobs                          cj           batch/v1beta1                     true         CronJob
jobs                                           batch/v1                          true         Job
certificatesigningrequests        csr          certificates.k8s.io/v1            false        CertificateSigningRequest
leases                                         coordination.k8s.io/v1            true         Lease
endpointslices                                 discovery.k8s.io/v1beta1          true         EndpointSlice
events                            ev           events.k8s.io/v1                  true         Event
ingresses                         ing          extensions/v1beta1                true         Ingress
ingressclasses                                 networking.k8s.io/v1              false        IngressClass
ingresses                         ing          networking.k8s.io/v1              true         Ingress
networkpolicies                   netpol       networking.k8s.io/v1              true         NetworkPolicy
runtimeclasses                                 node.k8s.io/v1beta1               false        RuntimeClass
poddisruptionbudgets              pdb          policy/v1beta1                    true         PodDisruptionBudget
podsecuritypolicies               psp          policy/v1beta1                    false        PodSecurityPolicy
clusterrolebindings                            rbac.authorization.k8s.io/v1      false        ClusterRoleBinding
clusterroles                                   rbac.authorization.k8s.io/v1      false        ClusterRole
rolebindings                                   rbac.authorization.k8s.io/v1      true         RoleBinding
roles                                          rbac.authorization.k8s.io/v1      true         Role
priorityclasses                   pc           scheduling.k8s.io/v1              false        PriorityClass
csidrivers                                     storage.k8s.io/v1                 false        CSIDriver
csinodes                                       storage.k8s.io/v1                 false        CSINode
storageclasses                    sc           storage.k8s.io/v1                 false        StorageClass
volumeattachments                              storage.k8s.io/v1                 false        VolumeAttachment
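The NAMESPACED column is convenient for scripting. kubectl can filter server-side with kubectl api-resources --namespaced=true; the sketch below does the equivalent locally, on a few sample rows taken from the output above (only rows that have a SHORTNAMES value, so that the columns stay aligned):

```shell
# Sample rows (NAME SHORTNAMES APIVERSION NAMESPACED KIND) from the output above.
resources='pods po v1 true Pod
nodes no v1 false Node
deployments deploy apps/v1 true Deployment
namespaces ns v1 false Namespace'

# Keep only namespaced resources (4th column) and print name and short name.
printf '%s\n' "$resources" | awk '$4 == "true" {print $1, $2}'
```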
1.4 - Working with Nodes
To view the cluster's nodes, use the get nodes command:
root@debian10:~# kubectl get nodes
NAME                 STATUS   ROLES    AGE     VERSION
kind-control-plane   Ready    master   4m56s   v1.19.1
kind-worker          Ready    <none>   4m16s   v1.19.1
kind-worker2         Ready    <none>   4m17s   v1.19.1
kind-worker3         Ready    <none>   4m18s   v1.19.1
The describe node Command
Information about a node can be obtained with the describe node command. In the first part of the command's output, note:
- the Labels: section. Labels can be used to manage pod affinity, in other words on which node a pod may be scheduled, based on the labels attached to the pod,
- the Unschedulable: false line, which indicates that the node accepts pods.
root@debian10:~# kubectl describe node kind-control-plane
Name:               kind-control-plane
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=kind-control-plane
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 11 Dec 2020 12:08:17 +0100
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  kind-control-plane
  AcquireTime:     <unset>
  RenewTime:       Fri, 11 Dec 2020 12:13:33 +0100
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----             ------  -----------------                 ------------------                ------                      -------
  MemoryPressure   False   Fri, 11 Dec 2020 12:09:12 +0100   Fri, 11 Dec 2020 12:08:11 +0100   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure     False   Fri, 11 Dec 2020 12:09:12 +0100   Fri, 11 Dec 2020 12:08:11 +0100   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure      False   Fri, 11 Dec 2020 12:09:12 +0100   Fri, 11 Dec 2020 12:08:11 +0100   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready            True    Fri, 11 Dec 2020 12:09:12 +0100   Fri, 11 Dec 2020 12:09:12 +0100   KubeletReady                kubelet is posting ready status
...
In the second part of the output, note:
- the Addresses: section, containing the node's IP address and hostname.
...
Addresses:
  InternalIP:  172.18.0.3
  Hostname:    kind-control-plane
Capacity:
  cpu:                1
  ephemeral-storage:  19478160Ki
  hugepages-2Mi:      0
  memory:             8170464Ki
  pods:               110
Allocatable:
  cpu:                1
  ephemeral-storage:  19478160Ki
  hugepages-2Mi:      0
  memory:             8170464Ki
  pods:               110
...
In the third part of the output, note:
- the System Info: section, containing information about the operating system and the versions of Docker and Kubernetes,
- the Non-terminated Pods section, containing information about the CPU and memory limits of each running Pod.
...
System Info:
  Machine ID:                 5fd4d39b652b43ce9feba503c0c53d17
  System UUID:                734424d3-5513-4e2d-a7e3-7f4429493214
  Boot ID:                    05357148-f589-4478-b096-5b18c0ddc66f
  Kernel Version:             4.19.0-6-amd64
  OS Image:                   Ubuntu Groovy Gorilla (development branch)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.4.0
  Kubelet Version:            v1.19.1
  Kube-Proxy Version:         v1.19.1
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
ProviderID:                   kind://docker/kind/kind-control-plane
Non-terminated Pods:          (9 in total)
  Namespace            Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------            ----                                         ------------  ----------  ---------------  -------------  ---
  kube-system          coredns-f9fd979d6-b87p7                      100m (10%)    0 (0%)      70Mi (0%)        170Mi (2%)     5m3s
  kube-system          coredns-f9fd979d6-jwd68                      100m (10%)    0 (0%)      70Mi (0%)        170Mi (2%)     5m3s
  kube-system          etcd-kind-control-plane                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m9s
  kube-system          kindnet-vrqzw                                100m (10%)    100m (10%)  50Mi (0%)        50Mi (0%)      5m3s
  kube-system          kube-apiserver-kind-control-plane            250m (25%)    0 (0%)      0 (0%)           0 (0%)         5m9s
  kube-system          kube-controller-manager-kind-control-plane   200m (20%)    0 (0%)      0 (0%)           0 (0%)         5m9s
  kube-system          kube-proxy-5zpkb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
  kube-system          kube-scheduler-kind-control-plane            100m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
  local-path-storage   local-path-provisioner-78776bfc44-5rzmk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
...
In the last part of the output, note:
- the Allocated resources: section, which shows the resources allocated on the node.
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                850m (85%)  100m (10%)
  memory             190Mi (2%)  390Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                    From        Message
  ----    ------                   ----                   ----        -------
  Normal  NodeHasSufficientMemory  5m50s (x5 over 5m50s)  kubelet     Node kind-control-plane status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    5m50s (x5 over 5m50s)  kubelet     Node kind-control-plane status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     5m50s (x4 over 5m50s)  kubelet     Node kind-control-plane status is now: NodeHasSufficientPID
  Normal  Starting                 5m9s                   kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  5m9s                   kubelet     Node kind-control-plane status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    5m9s                   kubelet     Node kind-control-plane status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     5m9s                   kubelet     Node kind-control-plane status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  5m9s                   kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 4m44s                  kube-proxy  Starting kube-proxy.
  Normal  NodeReady                4m29s                  kubelet     Node kind-control-plane status is now: NodeReady
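The Taints: line in the describe output explains why no application pods run on the control plane. The following sketch extracts it from a copied fragment of the output rather than from a live cluster (on a real cluster the equivalent would be kubectl describe node kind-control-plane | grep Taints):

```shell
# Fragment of the `kubectl describe node` output shown above.
describe='Name: kind-control-plane
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false'

# Print the value of the "Taints:" line.
taints=$(printf '%s\n' "$describe" | awk '/^Taints:/ {print $2}')
echo "$taints"
```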
The top Command
The top command requires the Metrics API to be available in the cluster. To deploy the Metrics server, download the components.yaml file:
root@debian10:~# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml --2020-12-11 12:18:04-- https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml Resolving github.com (github.com)... 140.82.121.4 Connecting to github.com (github.com)|140.82.121.4|:443... connected. HTTP request sent, awaiting response... 302 Found Location: https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.1/components.yaml [following] --2020-12-11 12:18:05-- https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.1/components.yaml Reusing existing connection to github.com:443. HTTP request sent, awaiting response... 302 Found Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/92132038/febd5000-290f-11eb-9fcb-f4b297446db8?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20201211%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201211T111612Z&X-Amz-Expires=300&X-Amz-Signature=24dc37640df34082c3b89641b41443e32dbd400b013e8ec94848e5ff52483159&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=92132038&response-content-disposition=attachment%3B%20filename%3Dcomponents.yaml&response-content-type=application%2Foctet-stream [following] --2020-12-11 12:18:05-- https://github-production-release-asset-2e65be.s3.amazonaws.com/92132038/febd5000-290f-11eb-9fcb-f4b297446db8?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20201211%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201211T111612Z&X-Amz-Expires=300&X-Amz-Signature=24dc37640df34082c3b89641b41443e32dbd400b013e8ec94848e5ff52483159&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=92132038&response-content-disposition=attachment%3B%20filename%3Dcomponents.yaml&response-content-type=application%2Foctet-stream Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 
52.217.104.236 Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.217.104.236|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 3962 (3.9K) [application/octet-stream] Saving to: ‘components.yaml’ components.yaml 100%[===================>] 3.87K --.-KB/s in 0s 2020-12-11 12:18:05 (162 MB/s) - ‘components.yaml’ saved [3962/3962]
Modify the containers section of the components.yaml file:
root@debian10:~# vi components.yaml
root@debian10:~#
...
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
        - --kubelet-use-node-status-port
...
Deploy the Metrics Server:
root@debian10:~# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
Check the state of the deployment:
root@debian10:~# kubectl get deployments --all-namespaces
NAMESPACE            NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system          coredns                  2/2     2            2           14m
kube-system          metrics-server           1/1     1            1           41s
local-path-storage   local-path-provisioner   1/1     1            1           14m
To see resource usage per node, use the top nodes command:
root@debian10:~# kubectl top nodes
NAME                 CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
kind-control-plane   222m         22%    786Mi           9%
kind-worker          254m         25%    159Mi           1%
kind-worker2         22m          2%     146Mi           1%
kind-worker3         25m          2%     143Mi           1%
To watch node resource usage evolve over time, use the watch command:
root@debian10:~$ watch kubectl top nodes
Every 2.0s: kubectl top nodes                debian10: Fri Dec 11 12:24:21 2020

NAME                 CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
kind-control-plane   110m         11%    799Mi           10%
kind-worker          18m          1%     170Mi           2%
kind-worker2         13m          1%     146Mi           1%
kind-worker3         14m          1%     143Mi           1%
...
^C
Important: Note the use of ^C to exit the watch display.
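Because kubectl top emits plain text, its output is easy to post-process with standard shell tools. The sketch below picks out the busiest node by CPU; it works on a captured sample of the output above (the sample variable stands in for a live `kubectl top nodes` call, so no cluster is needed to try it):

```shell
# Sample output captured from `kubectl top nodes` above (stands in for a live call).
sample='NAME                 CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
kind-control-plane   222m         22%    786Mi           9%
kind-worker          254m         25%    159Mi           1%
kind-worker2         22m          2%     146Mi           1%
kind-worker3         25m          2%     143Mi           1%'

# Skip the header line, sort numerically (descending) on the CPU(cores) column
# (sort -n reads the leading digits of "254m" as 254), then keep the first name.
busiest=$(printf '%s\n' "$sample" | tail -n +2 | sort -k2 -nr | awk 'NR==1 {print $1}')
echo "Busiest node: $busiest"
```

On a live cluster the same pipeline would read `kubectl top nodes | tail -n +2 | sort -k2 -nr`.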
The cordon and uncordon Commands
To prevent a node from receiving new pods, use the cordon command:
root@debian10:~# kubectl cordon kind-worker
node/kind-worker cordoned
Check the Unschedulable: line in the output of the describe node command:
root@debian10:~# kubectl describe node kind-worker | grep Unschedulable
Unschedulable:      true
Important: In a multi-node cluster, if the node targeted by the kubectl cordon command reboots, all of its pods will be rescheduled onto the other nodes.
To allow a node to receive new pods again, use the uncordon command:
root@debian10:~# kubectl uncordon kind-worker
node/kind-worker uncordoned
root@debian10:~# kubectl describe node kind-worker | grep Unschedulable
Unschedulable:      false
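cordon acts on one node at a time. To cordon every worker at once you could generate the node list with awk. The sketch below extracts the worker names from a captured `kubectl get nodes` output (the nodes variable stands in for a live call; the filter on the ROLES column assumes the master is the only node to skip):

```shell
# Sample output captured from `kubectl get nodes` (stands in for a live call).
nodes='NAME                 STATUS   ROLES    AGE     VERSION
kind-control-plane   Ready    master   3h26m   v1.19.1
kind-worker          Ready    <none>   3h25m   v1.19.1
kind-worker2         Ready    <none>   3h25m   v1.19.1
kind-worker3         Ready    <none>   3h25m   v1.19.1'

# Keep every row after the header whose ROLES column is not "master".
workers=$(printf '%s\n' "$nodes" | awk 'NR > 1 && $3 != "master" {print $1}')
printf '%s\n' "$workers"

# On a live cluster you would then run:
#   for n in $workers; do kubectl cordon "$n"; done
```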
The drain Command
The drain command evicts a node's pods, stopping them so they can be rescheduled elsewhere.
Start by checking which pods are on the kind-worker node:
root@debian10:~# kubectl get pods --all-namespaces -o wide
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE     IP           NODE                 NOMINATED NODE   READINESS GATES
kube-system          coredns-f9fd979d6-b87p7                      1/1     Running   0          3h16m   10.244.0.2   kind-control-plane   <none>           <none>
kube-system          coredns-f9fd979d6-jwd68                      1/1     Running   0          3h16m   10.244.0.3   kind-control-plane   <none>           <none>
kube-system          etcd-kind-control-plane                      1/1     Running   0          3h16m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kindnet-5dk8c                                1/1     Running   0          3h15m   172.18.0.2   kind-worker          <none>           <none>
kube-system          kindnet-7jqqb                                1/1     Running   0          3h15m   172.18.0.5   kind-worker3         <none>           <none>
kube-system          kindnet-t8t9q                                1/1     Running   0          3h15m   172.18.0.4   kind-worker2         <none>           <none>
kube-system          kindnet-vrqzw                                1/1     Running   0          3h16m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          3h16m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          3h16m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kube-proxy-5zpkb                             1/1     Running   0          3h16m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kube-proxy-8pp5m                             1/1     Running   0          3h15m   172.18.0.4   kind-worker2         <none>           <none>
kube-system          kube-proxy-ltx6c                             1/1     Running   0          3h15m   172.18.0.5   kind-worker3         <none>           <none>
kube-system          kube-proxy-nrkql                             1/1     Running   0          3h15m   172.18.0.2   kind-worker          <none>           <none>
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          3h16m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          metrics-server-594b87569-9fcsq               1/1     Running   0          3h2m    10.244.3.2   kind-worker          <none>           <none>
local-path-storage   local-path-provisioner-78776bfc44-5rzmk      1/1     Running   0          3h16m   10.244.0.4   kind-control-plane   <none>           <none>
Important: In the case above, the pods kindnet-5dk8c, kube-proxy-nrkql and metrics-server-594b87569-9fcsq are located on the kind-worker node.
Use the drain command to evict the pods from the kind-worker node:
root@debian10:~# kubectl drain kind-worker --ignore-daemonsets --delete-emptydir-data --force
node/kind-worker cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kindnet-5dk8c, kube-system/kube-proxy-nrkql
evicting pod kube-system/metrics-server-594b87569-9fcsq
pod/metrics-server-594b87569-9fcsq evicted
node/kind-worker evicted
Check again which pods are on the kind-worker node:
root@debian10:~# kubectl get pods --all-namespaces -o wide
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE     IP           NODE                 NOMINATED NODE   READINESS GATES
kube-system          coredns-f9fd979d6-b87p7                      1/1     Running   0          3h24m   10.244.0.2   kind-control-plane   <none>           <none>
kube-system          coredns-f9fd979d6-jwd68                      1/1     Running   0          3h24m   10.244.0.3   kind-control-plane   <none>           <none>
kube-system          etcd-kind-control-plane                      1/1     Running   0          3h24m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kindnet-5dk8c                                1/1     Running   0          3h24m   172.18.0.2   kind-worker          <none>           <none>
kube-system          kindnet-7jqqb                                1/1     Running   0          3h24m   172.18.0.5   kind-worker3         <none>           <none>
kube-system          kindnet-t8t9q                                1/1     Running   0          3h24m   172.18.0.4   kind-worker2         <none>           <none>
kube-system          kindnet-vrqzw                                1/1     Running   0          3h24m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          3h24m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          3h24m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kube-proxy-5zpkb                             1/1     Running   0          3h24m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kube-proxy-8pp5m                             1/1     Running   0          3h24m   172.18.0.4   kind-worker2         <none>           <none>
kube-system          kube-proxy-ltx6c                             1/1     Running   0          3h24m   172.18.0.5   kind-worker3         <none>           <none>
kube-system          kube-proxy-nrkql                             1/1     Running   0          3h24m   172.18.0.2   kind-worker          <none>           <none>
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          3h24m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          metrics-server-594b87569-28r4s               1/1     Running   0          4m11s   10.244.1.2   kind-worker3         <none>           <none>
local-path-storage   local-path-provisioner-78776bfc44-5rzmk      1/1     Running   0          3h24m   10.244.0.4   kind-control-plane   <none>           <none>
Important: Note that only the metrics-server-594b87569-9fcsq pod was evicted. The two pods kube-proxy-nrkql and kindnet-5dk8c were created by DaemonSets, and DaemonSet-managed pods cannot be removed from a Kubernetes node, hence the --ignore-daemonsets option. The --force option allows drain to also delete pods that are not managed by a controller.
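Note also that drain performs graceful evictions, which honor PodDisruptionBudgets. As an illustrative sketch (the name and the app=nginx selector here are hypothetical, not part of this lab), a budget like the following would make drain wait rather than evict the last available replica of an application:

```yaml
# Hypothetical example: guarantees at least one app=nginx pod stays available.
apiVersion: policy/v1beta1   # policy/v1 from Kubernetes 1.21 onwards
kind: PodDisruptionBudget
metadata:
  name: nginx-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: nginx
```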
Now check the state of the nodes:
root@debian10:~# kubectl get nodes
NAME                 STATUS                     ROLES    AGE     VERSION
kind-control-plane   Ready                      master   3h26m   v1.19.1
kind-worker          Ready,SchedulingDisabled   <none>   3h25m   v1.19.1
kind-worker2         Ready                      <none>   3h25m   v1.19.1
kind-worker3         Ready                      <none>   3h25m   v1.19.1
Important: Note the SchedulingDisabled entry in the STATUS column.
The delete Command
To remove a node from the cluster, use the delete command:
root@debian10:~# kubectl delete node kind-worker
node "kind-worker" deleted
If you run the get nodes command, the node appears to have been removed:
root@debian10:~# kubectl get nodes
NAME                 STATUS   ROLES    AGE     VERSION
kind-control-plane   Ready    master   3h27m   v1.19.1
kind-worker2         Ready    <none>   3h26m   v1.19.1
kind-worker3         Ready    <none>   3h26m   v1.19.1
In reality, the node is not fully removed until all of its pods have been destroyed:
root@debian10:~# kubectl get pods --all-namespaces -o wide
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE     IP           NODE                 NOMINATED NODE   READINESS GATES
kube-system          coredns-f9fd979d6-b87p7                      1/1     Running   0          3h27m   10.244.0.2   kind-control-plane   <none>           <none>
kube-system          coredns-f9fd979d6-jwd68                      1/1     Running   0          3h27m   10.244.0.3   kind-control-plane   <none>           <none>
kube-system          etcd-kind-control-plane                      1/1     Running   0          3h27m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kindnet-5dk8c                                1/1     Running   0          3h27m   172.18.0.2   kind-worker          <none>           <none>
kube-system          kindnet-7jqqb                                1/1     Running   0          3h27m   172.18.0.5   kind-worker3         <none>           <none>
kube-system          kindnet-t8t9q                                1/1     Running   0          3h27m   172.18.0.4   kind-worker2         <none>           <none>
kube-system          kindnet-vrqzw                                1/1     Running   0          3h27m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          3h27m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          3h27m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kube-proxy-5zpkb                             1/1     Running   0          3h27m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kube-proxy-8pp5m                             1/1     Running   0          3h27m   172.18.0.4   kind-worker2         <none>           <none>
kube-system          kube-proxy-ltx6c                             1/1     Running   0          3h27m   172.18.0.5   kind-worker3         <none>           <none>
kube-system          kube-proxy-nrkql                             1/1     Running   0          3h27m   172.18.0.2   kind-worker          <none>           <none>
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          3h27m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          metrics-server-594b87569-28r4s               1/1     Running   0          7m6s    10.244.1.2   kind-worker3         <none>           <none>
local-path-storage   local-path-provisioner-78776bfc44-5rzmk      1/1     Running   0          3h27m   10.244.0.4   kind-control-plane   <none>           <none>
Once the pods have been destroyed, the node is effectively gone:
root@debian10:~# kubectl get pods --all-namespaces -o wide
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE     IP           NODE                 NOMINATED NODE   READINESS GATES
kube-system          coredns-f9fd979d6-b87p7                      1/1     Running   0          3h29m   10.244.0.2   kind-control-plane   <none>           <none>
kube-system          coredns-f9fd979d6-jwd68                      1/1     Running   0          3h29m   10.244.0.3   kind-control-plane   <none>           <none>
kube-system          etcd-kind-control-plane                      1/1     Running   0          3h29m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kindnet-7jqqb                                1/1     Running   0          3h29m   172.18.0.5   kind-worker3         <none>           <none>
kube-system          kindnet-t8t9q                                1/1     Running   0          3h29m   172.18.0.4   kind-worker2         <none>           <none>
kube-system          kindnet-vrqzw                                1/1     Running   0          3h29m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          3h29m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          3h29m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kube-proxy-5zpkb                             1/1     Running   0          3h29m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          kube-proxy-8pp5m                             1/1     Running   0          3h29m   172.18.0.4   kind-worker2         <none>           <none>
kube-system          kube-proxy-ltx6c                             1/1     Running   0          3h29m   172.18.0.5   kind-worker3         <none>           <none>
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          3h29m   172.18.0.3   kind-control-plane   <none>           <none>
kube-system          metrics-server-594b87569-28r4s               1/1     Running   0          9m31s   10.244.1.2   kind-worker3         <none>           <none>
local-path-storage   local-path-provisioner-78776bfc44-5rzmk      1/1     Running   0          3h29m   10.244.0.4   kind-control-plane   <none>           <none>
1.5 - Managing Applications
The expose Command
Create a deployment from the nginx image:
root@debian10:~# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
root@debian10:~# kubectl get deployment
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           22s
root@debian10:~# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-sgc8w   1/1     Running   0          30s
Now expose port 80 of the pod with the expose command:
root@debian10:~# kubectl expose deployment nginx --port=80 --target-port=80
service/nginx exposed
root@debian10:~# kubectl describe service nginx
Name:              nginx
Namespace:         default
Labels:            app=nginx
Annotations:       <none>
Selector:          app=nginx
Type:              ClusterIP
IP Families:       <none>
IP:                10.96.114.21
IPs:               <none>
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.2.2:80
Session Affinity:  None
Events:            <none>
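For reference, the expose command above creates a Service object server-side. A hand-written manifest that should be roughly equivalent (field values copied from the describe output above) would look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: ClusterIP
  selector:
    app: nginx          # routes traffic to the pods of the nginx deployment
  ports:
  - protocol: TCP
    port: 80            # port of the service (--port)
    targetPort: 80      # port of the container (--target-port)
```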
The get Command
View the service with the get command:
root@debian10:~# kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   3h47m
nginx        ClusterIP   10.96.114.21   <none>        80/TCP    12s
The get command can be used to produce a YAML file from which the service can later be recreated:
root@debian10:~# kubectl get service nginx -o yaml > service.yaml
root@debian10:~# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-12-11T14:55:41Z"
  labels:
    app: nginx
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: kubectl-expose
    operation: Update
    time: "2020-12-11T14:55:41Z"
  name: nginx
  namespace: default
  resourceVersion: "41428"
  selfLink: /api/v1/namespaces/default/services/nginx
  uid: 4ef7c806-d88b-43fb-b53c-2bf418583290
spec:
  clusterIP: 10.96.114.21
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
The get command can likewise produce a YAML file from which the deployment can later be recreated:
root@debian10:~# kubectl get deployment nginx -o yaml > deployment.yaml
root@debian10:~# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2020-12-11T14:54:24Z"
  generation: 1
  labels:
    app: nginx
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector:
          f:matchLabels:
            .: {}
            f:app: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
          f:spec:
            f:containers:
              k:{"name":"nginx"}:
                .: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: kubectl-create
    operation: Update
    time: "2020-12-11T14:54:24Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:availableReplicas: {}
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:readyReplicas: {}
        f:replicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-12-11T14:54:44Z"
  name: nginx
  namespace: default
  resourceVersion: "41257"
  selfLink: /apis/apps/v1/namespaces/default/deployments/nginx
  uid: ebd84992-0957-460e-bbdb-ab9a8e80f099
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-12-11T14:54:44Z"
    lastUpdateTime: "2020-12-11T14:54:44Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2020-12-11T14:54:24Z"
    lastUpdateTime: "2020-12-11T14:54:44Z"
    message: ReplicaSet "nginx-6799fc88d8" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Edit the deployment.yaml file to increase the number of replicas:
root@debian10:~$ vi deployment.yaml
root@debian10:~$ cat deployment.yaml
..
spec:
  progressDeadlineSeconds: 600
  replicas: 3
..
Apply the deployment.yaml file:
root@debian10:~# kubectl apply -f deployment.yaml
Warning: resource deployments/nginx is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
deployment.apps/nginx configured
Then check the status of the deployment:
root@debian10:~# kubectl get deployment nginx
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           8m10s
root@debian10:~# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-gg4gt   1/1     Running   0          68s
nginx-6799fc88d8-qs8bq   1/1     Running   0          68s
nginx-6799fc88d8-sgc8w   1/1     Running   0          8m27s
The set Command
Use the set command to update the application to nginx version 1.19.0:
root@debian10:~# kubectl set image deployment nginx nginx=nginx:1.19.0 --record
deployment.apps/nginx image updated
Verify that the nginx 1.19.0 image is in use with the describe command:
root@debian10:~# kubectl describe deployment nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Fri, 11 Dec 2020 15:54:24 +0100
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 2
                        kubernetes.io/change-cause: kubectl set image deployment nginx nginx=nginx:1.19.0 --record=true
Selector:               app=nginx
Replicas:               3 desired | 1 updated | 4 total | 3 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.19.0
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type         Status  Reason
  ----         ------  ------
  Available    True    MinimumReplicasAvailable
  Progressing  True    ReplicaSetUpdated
OldReplicaSets:  nginx-6799fc88d8 (3/3 replicas created)
NewReplicaSet:   nginx-6d5fb79b7f (1/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  14m    deployment-controller  Scaled up replica set nginx-6799fc88d8 to 1
  Normal  ScalingReplicaSet  7m32s  deployment-controller  Scaled up replica set nginx-6799fc88d8 to 3
  Normal  ScalingReplicaSet  52s    deployment-controller  Scaled up replica set nginx-6d5fb79b7f to 1
  Normal  ScalingReplicaSet  31s    deployment-controller  Scaled down replica set nginx-6799fc88d8 to 2
  Normal  ScalingReplicaSet  30s    deployment-controller  Scaled up replica set nginx-6d5fb79b7f to 2
  Normal  ScalingReplicaSet  12s    deployment-controller  Scaled down replica set nginx-6799fc88d8 to 1
  Normal  ScalingReplicaSet  12s    deployment-controller  Scaled up replica set nginx-6d5fb79b7f to 3
  Normal  ScalingReplicaSet  9s     deployment-controller  Scaled down replica set nginx-6799fc88d8 to 0
Important: Note the line Image: nginx:1.19.0.
As expected, the set command did not update the deployment.yaml file:
root@debian10:~$ cat deployment.yaml
...
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
...
So update the file:
root@debian10:~$ vi deployment.yaml
root@debian10:~$ cat deployment.yaml
...
    spec:
      containers:
      - image: nginx:1.19.0
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
...
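This kind of drift between the live object and the local file boils down to a string comparison. The sketch below illustrates it with inline sample values rather than a live cluster; on a real cluster the live value could be fetched with `kubectl get deployment nginx -o jsonpath='{.spec.template.spec.containers[0].image}'`, and `kubectl diff -f deployment.yaml` performs the full comparison server-side:

```shell
# Image as written in the local deployment.yaml before the edit (sample value).
file_image="nginx"
# Image reported by the live deployment after `kubectl set image` (sample value).
live_image="nginx:1.19.0"

if [ "$file_image" != "$live_image" ]; then
  echo "drift detected: file has '$file_image', cluster has '$live_image'"
else
  echo "file and cluster agree"
fi
```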
The rollout Command
Check the rollout status:
root@debian10:~# kubectl rollout status deployment nginx
deployment "nginx" successfully rolled out
Perform a rollback:
root@debian10:~# kubectl rollout undo deployment nginx
deployment.apps/nginx rolled back
Now check the status of the deployment with the describe command:
root@debian10:~# kubectl describe deployment nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Fri, 11 Dec 2020 15:54:24 +0100
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type         Status  Reason
  ----         ------  ------
  Available    True    MinimumReplicasAvailable
  Progressing  True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-6799fc88d8 (3/3 replicas created)
Events:
  Type    Reason             Age                 From                   Message
  ----    ------             ----                ----                   -------
  Normal  ScalingReplicaSet  56m                 deployment-controller  Scaled up replica set nginx-6d5fb79b7f to 1
  Normal  ScalingReplicaSet  56m                 deployment-controller  Scaled down replica set nginx-6799fc88d8 to 2
  Normal  ScalingReplicaSet  56m                 deployment-controller  Scaled up replica set nginx-6d5fb79b7f to 2
  Normal  ScalingReplicaSet  56m                 deployment-controller  Scaled up replica set nginx-6d5fb79b7f to 3
  Normal  ScalingReplicaSet  56m                 deployment-controller  Scaled down replica set nginx-6799fc88d8 to 1
  Normal  ScalingReplicaSet  56m                 deployment-controller  Scaled down replica set nginx-6799fc88d8 to 0
  Normal  ScalingReplicaSet  7m9s (x2 over 70m)  deployment-controller  Scaled up replica set nginx-6799fc88d8 to 1
  Normal  ScalingReplicaSet  7m6s                deployment-controller  Scaled down replica set nginx-6d5fb79b7f to 2
  Normal  ScalingReplicaSet  7m6s                deployment-controller  Scaled up replica set nginx-6799fc88d8 to 2
  Normal  ScalingReplicaSet  7m3s (x2 over 63m)  deployment-controller  Scaled up replica set nginx-6799fc88d8 to 3
  Normal  ScalingReplicaSet  7m3s                deployment-controller  Scaled down replica set nginx-6d5fb79b7f to 1
  Normal  ScalingReplicaSet  7m                  deployment-controller  Scaled down replica set nginx-6d5fb79b7f to 0
Important: Note the line Image: nginx.
View the revision history:
root@debian10:~# kubectl rollout history deployment nginx
deployment.apps/nginx
REVISION  CHANGE-CAUSE
2         kubectl set image deployment nginx nginx=nginx:1.19.0 --record=true
3         <none>
Return to revision number 2:
root@debian10:~# kubectl rollout undo deployment nginx --to-revision=2
deployment.apps/nginx rolled back
root@debian10:~# kubectl describe deployment nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Fri, 11 Dec 2020 15:54:24 +0100
Labels:                 app=nginx
Annotations:            deployment.kubernetes.io/revision: 4
                        kubernetes.io/change-cause: kubectl set image deployment nginx nginx=nginx:1.19.0 --record=true
Selector:               app=nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:1.19.0
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type         Status  Reason
  ----         ------  ------
  Available    True    MinimumReplicasAvailable
  Progressing  True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-6d5fb79b7f (3/3 replicas created)
Events:
  Type    Reason             Age                From                   Message
  ----    ------             ----               ----                   -------
  Normal  ScalingReplicaSet  15m (x2 over 78m)  deployment-controller  Scaled up replica set nginx-6799fc88d8 to 1
  Normal  ScalingReplicaSet  14m                deployment-controller  Scaled down replica set nginx-6d5fb79b7f to 2
  Normal  ScalingReplicaSet  14m                deployment-controller  Scaled up replica set nginx-6799fc88d8 to 2
  Normal  ScalingReplicaSet  14m (x2 over 71m)  deployment-controller  Scaled up replica set nginx-6799fc88d8 to 3
  Normal  ScalingReplicaSet  14m                deployment-controller  Scaled down replica set nginx-6d5fb79b7f to 1
  Normal  ScalingReplicaSet  14m                deployment-controller  Scaled down replica set nginx-6d5fb79b7f to 0
  Normal  ScalingReplicaSet  19s (x2 over 64m)  deployment-controller  Scaled up replica set nginx-6d5fb79b7f to 1
  Normal  ScalingReplicaSet  16s (x2 over 64m)  deployment-controller  Scaled down replica set nginx-6799fc88d8 to 2
  Normal  ScalingReplicaSet  16s (x2 over 64m)  deployment-controller  Scaled up replica set nginx-6d5fb79b7f to 2
  Normal  ScalingReplicaSet  14s (x2 over 64m)  deployment-controller  Scaled down replica set nginx-6799fc88d8 to 1
  Normal  ScalingReplicaSet  13s (x2 over 64m)  deployment-controller  Scaled up replica set nginx-6d5fb79b7f to 3
  Normal  ScalingReplicaSet  11s (x2 over 64m)  deployment-controller  Scaled down replica set nginx-6799fc88d8 to 0
Important: Note the line Image: nginx:1.19.0.
The delete Command
Now delete the nginx deployment:
root@debian10:~# kubectl delete deployment nginx
deployment.apps "nginx" deleted
root@debian10:~# kubectl get deployments
No resources found in default namespace.
1.6 - Debugging an Application
You have just deleted the nginx deployment. Now create another deployment based on the bitnami/postgresql image. Start by creating the deployment-postgresql.yaml file:
root@debian10:~# vi deployment-postgresql.yaml
root@debian10:~# cat deployment-postgresql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
  labels:
    app: postgresql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
      - image: bitnami/postgresql:10.12.10
        imagePullPolicy: IfNotPresent
        name: postgresql
Then deploy the application:
root@debian10:~# kubectl apply -f deployment-postgresql.yaml
deployment.apps/postgresql created
Looking at the pod that was created, you will see an ErrImagePull error:
root@debian10:~# kubectl get pods
NAME                          READY   STATUS         RESTARTS   AGE
postgresql-586d47479b-kf24b   0/1     ErrImagePull   0          40s
Check the Events section of the describe output to see what is happening:
root@debian10:~# kubectl describe pod postgresql-586d47479b-kf24b
...
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  81s                default-scheduler  Successfully assigned default/postgresql-586d47479b-kf24b to kind-worker2
  Normal   Pulling    35s (x3 over 81s)  kubelet            Pulling image "bitnami/postgresql:10.12.10"
  Warning  Failed     33s (x3 over 79s)  kubelet            Failed to pull image "bitnami/postgresql:10.12.10": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/bitnami/postgresql:10.12.10": failed to resolve reference "docker.io/bitnami/postgresql:10.12.10": docker.io/bitnami/postgresql:10.12.10: not found
  Warning  Failed     33s (x3 over 79s)  kubelet            Error: ErrImagePull
  Normal   BackOff    6s (x4 over 78s)   kubelet            Back-off pulling image "bitnami/postgresql:10.12.10"
  Warning  Failed     6s (x4 over 78s)   kubelet            Error: ImagePullBackOff
As you can see, there are three warnings:
Warning  Failed   33s (x3 over 79s)  kubelet  Failed to pull image "bitnami/postgresql:10.12.10": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/bitnami/postgresql:10.12.10": failed to resolve reference "docker.io/bitnami/postgresql:10.12.10": docker.io/bitnami/postgresql:10.12.10: not found
Warning  Failed   33s (x3 over 79s)  kubelet  Error: ErrImagePull
Normal   BackOff  6s (x4 over 78s)   kubelet  Back-off pulling image "bitnami/postgresql:10.12.10"
Warning  Failed   6s (x4 over 78s)   kubelet  Error: ImagePullBackOff
The first of the three warnings tells us clearly that there is a problem with the image tag specified in the deployment-postgresql.yaml file: docker.io/bitnami/postgresql:10.12.10: not found.
So change the tag in that file to 10.13.0:
root@debian10:~# vi deployment-postgresql.yaml
root@debian10:~# cat deployment-postgresql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
  labels:
    app: postgresql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
      - image: bitnami/postgresql:10.13.0
        imagePullPolicy: IfNotPresent
        name: postgresql
Now apply the file:
root@debian10:~# kubectl apply -f deployment-postgresql.yaml
deployment.apps/postgresql configured
Looking at the second pod that was created, you will see a CrashLoopBackOff error:
root@debian10:~# kubectl get pods
NAME                          READY   STATUS              RESTARTS   AGE
postgresql-586d47479b-kf24b   0/1     ImagePullBackOff    0          4m9s
postgresql-5cc57c477d-dr7nx   0/1     ContainerCreating   0          29s
root@debian10:~# kubectl get pods
NAME                          READY   STATUS             RESTARTS   AGE
postgresql-586d47479b-kf24b   0/1     ImagePullBackOff   0          4m13s
postgresql-5cc57c477d-dr7nx   0/1     Error              1          33s
root@debian10:~# kubectl get pods
NAME                          READY   STATUS             RESTARTS   AGE
postgresql-586d47479b-kf24b   0/1     ImagePullBackOff   0          4m15s
postgresql-5cc57c477d-dr7nx   0/1     CrashLoopBackOff   1          35s
Check the Events section of the describe output to see what is happening with the second pod:
root@debian10:~# kubectl describe pod postgresql-5cc57c477d-dr7nx
...
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  89s               default-scheduler  Successfully assigned default/postgresql-5cc57c477d-dr7nx to kind-worker2
  Normal   Pulling    89s               kubelet            Pulling image "bitnami/postgresql:10.13.0"
  Normal   Pulled     72s               kubelet            Successfully pulled image "bitnami/postgresql:10.13.0" in 16.7361756s
  Normal   Pulled     16s (x3 over 58s) kubelet            Container image "bitnami/postgresql:10.13.0" already present on machine
  Normal   Created    15s (x4 over 60s) kubelet            Created container postgresql
  Normal   Started    15s (x4 over 59s) kubelet            Started container postgresql
  Warning  BackOff    1s (x6 over 57s)  kubelet            Back-off restarting failed container
This time, the Events section gives us no clue about the problem!
The logs Command
To get more information about the problem, use the logs command:
root@debian10:~# kubectl logs postgresql-5cc57c477d-dr7nx
postgresql 14:56:53.17
postgresql 14:56:53.17 Welcome to the Bitnami postgresql container
postgresql 14:56:53.18 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql
postgresql 14:56:53.18 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues
postgresql 14:56:53.18
postgresql 14:56:53.21 INFO  ==> ** Starting PostgreSQL setup **
postgresql 14:56:53.23 INFO  ==> Validating settings in POSTGRESQL_* env vars..
postgresql 14:56:53.23 ERROR ==> The POSTGRESQL_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development.
postgresql 14:56:53.23 ERROR ==> The POSTGRESQL_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development.
The logs output clearly indicates that the problem is the POSTGRESQL_PASSWORD variable being empty. It also tells us that we could set the ALLOW_EMPTY_PASSWORD variable to yes to work around the problem:
... postgresql 14:56:53.23 ERROR ==> The POSTGRESQL_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development.
So update the deployment-postgresql.yaml file:
root@debian10:~# vi deployment-postgresql.yaml root@debian10:~# cat deployment-postgresql.yaml apiVersion: apps/v1 kind: Deployment metadata: name: postgresql labels: app: postgresql spec: replicas: 1 selector: matchLabels: app: postgresql template: metadata: labels: app: postgresql spec: containers: - image: bitnami/postgresql:10.13.0 imagePullPolicy: IfNotPresent name: postgresql env: - name: POSTGRESQL_PASSWORD value: "VerySecurePassword:-)"
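For a lab this is fine, but note that a literal password in a manifest ends up in plain text wherever the file is stored. In practice, the env entry would usually reference a Secret via secretKeyRef instead; a minimal sketch, assuming a Secret named postgresql-secret with a password key exists (neither is created in this lab):

```yaml
env:
- name: POSTGRESQL_PASSWORD
  valueFrom:
    secretKeyRef:
      name: postgresql-secret   # hypothetical Secret, not part of this lab
      key: password
```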
Apply the configuration:
root@debian10:~# kubectl apply -f deployment-postgresql.yaml deployment.apps/postgresql configured
Check the state of the Pod as well as the deployment:
root@debian10:~# kubectl get pods NAME READY STATUS RESTARTS AGE postgresql-6c99978556-kqkp4 1/1 Running 0 28s root@debian10:~# kubectl get deployments NAME READY UP-TO-DATE AVAILABLE AGE postgresql 1/1 1 1 38s
Now use the -f option of the logs command to stream the logs continuously:
root@debian10:~# kubectl logs postgresql-6c99978556-kqkp4 -f postgresql 14:58:48.79 postgresql 14:58:48.79 Welcome to the Bitnami postgresql container postgresql 14:58:48.79 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql postgresql 14:58:48.79 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues postgresql 14:58:48.79 postgresql 14:58:48.81 INFO ==> ** Starting PostgreSQL setup ** postgresql 14:58:48.83 INFO ==> Validating settings in POSTGRESQL_* env vars.. postgresql 14:58:48.84 INFO ==> Loading custom pre-init scripts... postgresql 14:58:48.85 INFO ==> Initializing PostgreSQL database... postgresql 14:58:48.87 INFO ==> pg_hba.conf file not detected. Generating it... postgresql 14:58:48.87 INFO ==> Generating local authentication configuration postgresql 14:58:53.51 INFO ==> Starting PostgreSQL in background... postgresql 14:58:53.64 INFO ==> Changing password of postgres postgresql 14:58:53.66 INFO ==> Configuring replication parameters postgresql 14:58:53.69 INFO ==> Configuring fsync postgresql 14:58:53.70 INFO ==> Loading custom scripts... postgresql 14:58:53.71 INFO ==> Enabling remote connections postgresql 14:58:53.73 INFO ==> Stopping PostgreSQL... postgresql 14:58:54.74 INFO ==> ** PostgreSQL setup finished! ** postgresql 14:58:54.78 INFO ==> ** Starting PostgreSQL ** 2020-12-12 14:58:54.819 GMT [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 2020-12-12 14:58:54.820 GMT [1] LOG: listening on IPv6 address "::", port 5432 2020-12-12 14:58:54.829 GMT [1] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432" 2020-12-12 14:58:54.843 GMT [106] LOG: database system was shut down at 2020-12-12 14:58:53 GMT 2020-12-12 14:58:54.850 GMT [1] LOG: database system is ready to accept connections ^C
Important: Note the use of ^C to stop the kubectl logs postgresql-6c99978556-kqkp4 -f command.
The exec Command
The exec command can be used to run a command inside a container in a pod. Imagine that you want to check the contents of the PostgreSQL configuration file, postgresql.conf:
root@debian10:~# kubectl exec postgresql-6c99978556-vcfmm -- cat /opt/bitnami/postgresql/conf/postgresql.conf | more # ----------------------------- # PostgreSQL configuration file # ----------------------------- # # This file consists of lines of the form: # # name = value # # (The "=" is optional.) Whitespace may be used. Comments are introduced with # "#" anywhere on a line. The complete list of parameter names and allowed # values can be found in the PostgreSQL documentation. # # The commented-out settings shown in this file represent the default values. # Re-commenting a setting is NOT sufficient to revert it to the default value; # you need to reload the server. # # This file is read on server startup and when the server receives a SIGHUP # signal. If you edit the file on a running system, you have to SIGHUP the # server for the changes to take effect, run "pg_ctl reload", or execute # "SELECT pg_reload_conf()". Some parameters, which are marked below, # require a server shutdown and restart to take effect. # # Any parameter can also be given as a command-line option to the server, e.g., # "postgres -c log_connections=on". Some parameters can be changed at run time # with the "SET" SQL command.
# # Memory units: kB = kilobytes Time units: ms = milliseconds # MB = megabytes s = seconds # GB = gigabytes min = minutes # TB = terabytes h = hours # d = days #------------------------------------------------------------------------------ # FILE LOCATIONS #------------------------------------------------------------------------------ # The default values of these variables are driven from the -D command-line # option or PGDATA environment variable, represented here as ConfigDir. #data_directory = 'ConfigDir' # use data in another directory # (change requires restart) --More--
The exec command can also be used to start bash in interactive mode:
root@debian10:~# kubectl exec -it postgresql-6c99978556-vcfmm -- bash I have no name!@postgresql-6c99978556-vcfmm:/$ ls bin bitnami boot dev docker-entrypoint-initdb.d docker-entrypoint-preinitdb.d etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var I have no name!@postgresql-6c99978556-vcfmm:/$ exit exit root@debian10:~#
1.7 - Managing kubectl Plugins
kubectl plugins extend its functionality. The krew plugin manager is available for macOS™, Windows™ and Linux. A plugin is simply an executable written, for example, in bash or in Go.
The krew Command
In order to install the krew command, first install git:
root@debian10:~# apt install git-all
Then install krew with the following command:
( set -x; cd "$(mktemp -d)" && curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/krew.tar.gz" && tar zxvf krew.tar.gz && KREW=./krew-"$(uname | tr '[:upper:]' '[:lower:]')_$(uname -m | sed -e 's/x86_64/amd64/' -e 's/arm.*$/arm/')" && "$KREW" install krew )
root@debian10:~# ( > set -x; cd "$(mktemp -d)" && > curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/krew.tar.gz" && > tar zxvf krew.tar.gz && > KREW=./krew-"$(uname | tr '[:upper:]' '[:lower:]')_$(uname -m | sed -e 's/x86_64/amd64/' -e 's/arm.*$/arm/')" && > "$KREW" install krew > )
At the end of the installation, you will see the following output:
... Installing plugin: krew Installed plugin: krew \ | Use this plugin: | kubectl krew | Documentation: | https://krew.sigs.k8s.io/ | Caveats: | \ | | krew is now installed! To start using kubectl plugins, you need to add | | krew's installation directory to your PATH: | | | | * macOS/Linux: | | - Add the following to your ~/.bashrc or ~/.zshrc: | | export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH" | | - Restart your shell. | | | | * Windows: Add %USERPROFILE%\.krew\bin to your PATH environment variable | | | | To list krew commands and to get help, run: | | $ kubectl krew | | For a full list of available plugins, run: | | $ kubectl krew search | | | | You can find documentation at | | https://krew.sigs.k8s.io/docs/user-guide/quickstart/. | / /
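As an aside, the KREW variable in the installation script builds a platform suffix from uname so that the right binary is executed. The following fragment, shown here outside of any installer just to illustrate the pipeline, reproduces it:

```shell
# Lowercase the OS name ("Linux" -> "linux", "Darwin" -> "darwin").
os="$(uname | tr '[:upper:]' '[:lower:]')"
# Normalize the architecture ("x86_64" -> "amd64", "armv7l" -> "arm").
arch="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/arm.*$/arm/')"
# The installer then runs the matching binary, e.g. krew-linux_amd64.
echo "krew-${os}_${arch}"
```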
Next, add $HOME/.krew/bin to your PATH:
root@debian10:~# export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
So that you do not have to redefine the PATH after each login, add the line to the end of the .bashrc file:
root@debian10:~# echo 'export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"' >> .bashrc root@debian10:~# tail .bashrc # eval "`dircolors`" # alias ls='ls $LS_OPTIONS' # alias ll='ls $LS_OPTIONS -l' # alias l='ls $LS_OPTIONS -lA' # # Some more alias to avoid making mistakes: # alias rm='rm -i' # alias cp='cp -i' # alias mv='mv -i' export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
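The ${KREW_ROOT:-$HOME/.krew} syntax is ordinary shell parameter expansion: it uses $KREW_ROOT if that variable is set, and falls back to $HOME/.krew otherwise. A quick illustration:

```shell
# KREW_ROOT unset: the default $HOME/.krew is used.
unset KREW_ROOT
echo "${KREW_ROOT:-$HOME/.krew}/bin"   # e.g. /root/.krew/bin when running as root

# KREW_ROOT set: its value wins.
KREW_ROOT=/opt/krew
echo "${KREW_ROOT:-$HOME/.krew}/bin"   # /opt/krew/bin
```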
Update the list of plugins:
root@debian10:~# kubectl krew update Updated the local copy of plugin index.
To view the list of available plugins, use the search command:
root@debian10:~# kubectl krew search NAME DESCRIPTION INSTALLED access-matrix Show an RBAC access matrix for server resources no advise-psp Suggests PodSecurityPolicies for cluster. no allctx Run commands on contexts in your kubeconfig no apparmor-manager Manage AppArmor profiles for cluster. no auth-proxy Authentication proxy to a pod or service no bd-xray Run Black Duck Image Scans no bulk-action Do bulk actions on Kubernetes resources. no ca-cert Print the PEM CA certificate of the current clu... no capture Triggers a Sysdig capture to troubleshoot the r... no cert-manager Manage cert-manager resources inside your cluster no change-ns View or change the current namespace via kubectl. no cilium Easily interact with Cilium agents. no cluster-group Exec commands across a group of contexts. no config-cleanup Automatically clean up your kubeconfig no config-registry Switch between registered kubeconfigs no creyaml Generate custom resource YAML manifest no cssh SSH into Kubernetes nodes no ctx Switch between contexts in your kubeconfig no custom-cols A "kubectl get" replacement with customizable c... no datadog Manage the Datadog Operator no debug Attach ephemeral debug container to running pod no debug-shell Create pod with interactive kube-shell. no deprecations Checks for deprecated objects in a cluster no df-pv Show disk usage (like unix df) for persistent v... no doctor Scans your cluster and reports anomalies. no duck List custom resources with ducktype support no edit-status Edit /status subresources of CRs no eksporter Export resources and removes a pre-defined set ... no emit-event Emit Kubernetes Events for the requested object no evict-pod Evicts the given pod no example Prints out example manifest YAMLs no exec-as Like kubectl exec, but offers a `user` flag to ... 
no exec-cronjob Run a CronJob immediately as Job no fields Grep resources hierarchy by field name no flame Generate CPU flame graphs from pods no fleet Shows config and resources of a fleet of clusters no fuzzy Fuzzy and partial string search for kubectl no gadget Gadgets for debugging and introspecting apps no get-all Like `kubectl get all` but _really_ everything no gke-credentials Fetch credentials for GKE clusters no gopass Imports secrets from gopass no graph Visualize Kubernetes resources and relationships. no grep Filter Kubernetes resources by matching their n... no gs Handle custom resources with Giant Swarm no hns Manage hierarchical namespaces (part of HNC) no iexec Interactive selection tool for `kubectl exec` no images Show container images used in the cluster. no ingress-nginx Interact with ingress-nginx no ipick A kubectl wrapper for interactive resource sele... no konfig Merge, split or import kubeconfig files no krew Package manager for kubectl plugins. yes kubesec-scan Scan Kubernetes resources with kubesec.io. no kudo Declaratively build, install, and run operators... no kuttl Declaratively run and test operators no kyverno Kyverno is a policy engine for kubernetes no match-name Match names of pods and other API objects no minio Deploy and manage MinIO Operator and Tenant(s) no modify-secret modify secret with implicit base64 translations no mtail Tail logs from multiple pods matching label sel... no neat Remove clutter from Kubernetes manifests to mak... no net-forward Proxy to arbitrary TCP services on a cluster ne... no node-admin List nodes and run privileged pod with chroot no node-restart Restart cluster nodes sequentially and gracefully no node-shell Spawn a root shell on a node via kubectl no np-viewer Network Policies rules viewer no ns Switch between Kubernetes namespaces no oidc-login Log in to the OpenID Connect provider no open-svc Open the Kubernetes URL(s) for the specified se... 
no operator Manage operators with Operator Lifecycle Manager no oulogin Login to a cluster via OpenUnison no outdated Finds outdated container images running in a cl... no passman Store kubeconfig credentials in keychains or pa... no pod-dive Shows a pod's workload tree and info inside a node no pod-logs Display a list of pods to get logs from no pod-shell Display a list of pods to execute a shell in no podevents Show events for pods no popeye Scans your clusters for potential resource issues no preflight Executes application preflight tests in a cluster no profefe Gather and manage pprof profiles from running pods no prompt Prompts for user confirmation when executing co... no prune-unused Prune unused resources no psp-util Manage Pod Security Policy(PSP) and the related... no rabbitmq Manage RabbitMQ clusters no rbac-lookup Reverse lookup for RBAC no rbac-view A tool to visualize your RBAC permissions. no reap Delete unused Kubernetes resources. no resource-capacity Provides an overview of resource requests, limi... no resource-snapshot Prints a snapshot of nodes, pods and HPAs resou... no restart Restarts a pod with the given name no rm-standalone-pods Remove all pods without owner references no rolesum Summarize RBAC roles for subjects no roll Rolling restart of all persistent pods in a nam... no schemahero Declarative database schema migrations via YAML no score Kubernetes static code analysis. no service-tree Status for ingresses, services, and their backends no shovel Gather diagnostics for .NET Core applications no sick-pods Find and debug Pods that are "Not Ready" no snap Delete half of the pods in a namespace or cluster no sniff Start a remote packet capture on pods using tcp... no sort-manifests Sort manifest files in a proper order by Kind no split-yaml Split YAML output into one file per resource. no spy pod debugging tool for kubernetes clusters with... 
no sql Query the cluster via pseudo-SQL no ssh-jump A kubectl plugin to SSH into Kubernetes nodes u... no sshd Run SSH server in a Pod no ssm-secret Import/export secrets from/to AWS SSM param store no starboard Toolkit for finding risks in kubernetes resources no status Show status details of a given resource. no sudo Run Kubernetes commands impersonated as group s... no support-bundle Creates support bundles for off-cluster analysis no tail Stream logs from multiple pods and containers u... no tap Interactively proxy Kubernetes Services with ease no tmux-exec An exec multiplexer using Tmux no topology Explore region topology for nodes or pods no trace bpftrace programs in a cluster no tree Show a tree of object hierarchies through owner... no unused-volumes List unused PVCs no view-allocations List allocations per resources, nodes, pods. no view-cert View certificate information stored in secrets no view-secret Decode Kubernetes secrets no view-serviceaccount-kubeconfig Show a kubeconfig setting to access the apiserv... no view-utilization Shows cluster cpu and memory utilization no view-webhook Visualize your webhook configurations no virt Control KubeVirt virtual machines using virtctl no warp Sync and execute local files in Pod no who-can Shows who has RBAC permissions to access Kubern... no whoami Show the subject that's currently authenticated... no
Install the ctx, ns, view-allocations and pod-logs plugins:
root@debian10:~# kubectl krew install ctx ns view-allocations pod-logs Updated the local copy of plugin index. Installing plugin: ctx Installed plugin: ctx \ | Use this plugin: | kubectl ctx | Documentation: | https://github.com/ahmetb/kubectx | Caveats: | \ | | If fzf is installed on your machine, you can interactively choose | | between the entries using the arrow keys, or by fuzzy searching | | as you type. | | See https://github.com/ahmetb/kubectx for customization and details. | / / WARNING: You installed plugin "ctx" from the krew-index plugin repository. These plugins are not audited for security by the Krew maintainers. Run them at your own risk. Installing plugin: ns Installed plugin: ns \ | Use this plugin: | kubectl ns | Documentation: | https://github.com/ahmetb/kubectx | Caveats: | \ | | If fzf is installed on your machine, you can interactively choose | | between the entries using the arrow keys, or by fuzzy searching | | as you type. | / / WARNING: You installed plugin "ns" from the krew-index plugin repository. These plugins are not audited for security by the Krew maintainers. Run them at your own risk. Installing plugin: view-allocations Installed plugin: view-allocations \ | Use this plugin: | kubectl view-allocations | Documentation: | https://github.com/davidB/kubectl-view-allocations / WARNING: You installed plugin "view-allocations" from the krew-index plugin repository. These plugins are not audited for security by the Krew maintainers. Run them at your own risk. Installing plugin: pod-logs Installed plugin: pod-logs \ | Use this plugin: | kubectl pod-logs | Documentation: | https://github.com/danisla/kubefunc / WARNING: You installed plugin "pod-logs" from the krew-index plugin repository. These plugins are not audited for security by the Krew maintainers. Run them at your own risk.
The ctx plugin makes it easy to switch between contexts. List the contexts in the cluster:
root@debian10:~# kubectl ctx kind-kind
A context groups a set of access parameters under a single name. There are three access parameters, namely the cluster, the namespace and the user. The kubectl command uses the parameters of the current context to communicate with the cluster.
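These three parameters can be seen in the kubeconfig file (by default ~/.kube/config); for the kind cluster, the context entry looks roughly like this (the namespace field is optional and defaults to default):

```yaml
contexts:
- name: kind-kind
  context:
    cluster: kind-kind     # which cluster to talk to
    user: kind-kind        # which credentials to use
    namespace: default     # optional; "default" when omitted
current-context: kind-kind
```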
The ns plugin makes it easy to switch between namespaces. List the namespaces in the cluster:
root@debian10:~# kubectl ns default kube-node-lease kube-public kube-system local-path-storage
Namespaces:
- can be thought of as virtual clusters,
- provide isolation and logical segmentation,
- allow grouping of users, roles and resources,
- are used for applications, clients, projects or teams.
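A namespace is itself an ordinary Kubernetes resource; a minimal manifest would look like this (the name team-dev and its label are hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-dev     # hypothetical name
  labels:
    team: dev        # hypothetical label
```

Such a manifest could be applied with kubectl apply -f, or the namespace created directly with kubectl create namespace team-dev.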
The view-allocations plugin shows resource allocations such as CPU, memory, storage, etc.:
root@debian10:~# kubectl view-allocations Resource Requested %Requested Limit %Limit Allocatable Free cpu 1.1 35% 300.0m 10% 3.0 1.9 ├─ kind-control-plane 850.0m 85% 100.0m 10% 1.0 150.0m │ ├─ coredns-f9fd979d6-b87p7 100.0m 0.0 │ ├─ coredns-f9fd979d6-jwd68 100.0m 0.0 │ ├─ kindnet-vrqzw 100.0m 100.0m │ ├─ kube-apiserver-kind-control-plane 250.0m 0.0 │ ├─ kube-controller-manager-kind-control-plane 200.0m 0.0 │ └─ kube-scheduler-kind-control-plane 100.0m 0.0 ├─ kind-worker2 100.0m 10% 100.0m 10% 1.0 900.0m │ └─ kindnet-t8t9q 100.0m 100.0m └─ kind-worker3 100.0m 10% 100.0m 10% 1.0 900.0m └─ kindnet-7jqqb 100.0m 100.0m ephemeral-storage 0.0 0% 0.0 0% 55.7Gi 55.7Gi ├─ kind-control-plane 0.0 0% 0.0 0% 18.6Gi 18.6Gi ├─ kind-worker2 0.0 0% 0.0 0% 18.6Gi 18.6Gi └─ kind-worker3 0.0 0% 0.0 0% 18.6Gi 18.6Gi memory 290.0Mi 1% 490.0Mi 2% 23.4Gi 22.9Gi ├─ kind-control-plane 190.0Mi 2% 390.0Mi 5% 7.8Gi 7.4Gi │ ├─ coredns-f9fd979d6-b87p7 70.0Mi 170.0Mi │ ├─ coredns-f9fd979d6-jwd68 70.0Mi 170.0Mi │ └─ kindnet-vrqzw 50.0Mi 50.0Mi ├─ kind-worker2 50.0Mi 1% 50.0Mi 1% 7.8Gi 7.7Gi │ └─ kindnet-t8t9q 50.0Mi 50.0Mi └─ kind-worker3 50.0Mi 1% 50.0Mi 1% 7.8Gi 7.7Gi └─ kindnet-7jqqb 50.0Mi 50.0Mi pods 0.0 0% 0.0 0% 330.0 330.0 ├─ kind-control-plane 0.0 0% 0.0 0% 110.0 110.0 ├─ kind-worker2 0.0 0% 0.0 0% 110.0 110.0 └─ kind-worker3 0.0 0% 0.0 0% 110.0 110.0
The pod-logs plugin presents you with a list of running pods and asks you to choose one:
root@debian10:~# kubectl pod-logs 1) postgresql-6c99978556-vcfmm default Running 2) coredns-f9fd979d6-b87p7 kube-system Running 3) coredns-f9fd979d6-jwd68 kube-system Running 4) etcd-kind-control-plane kube-system Running 5) kindnet-7jqqb kube-system Running 6) kindnet-t8t9q kube-system Running 7) kindnet-vrqzw kube-system Running 8) kube-apiserver-kind-control-plane kube-system Running 9) kube-controller-manager-kind-control-plane kube-system Running 10) kube-proxy-5zpkb kube-system Running 11) kube-proxy-8pp5m kube-system Running 12) kube-proxy-ltx6c kube-system Running 13) kube-scheduler-kind-control-plane kube-system Running 14) metrics-server-594b87569-28r4s kube-system Running 15) local-path-provisioner-78776bfc44-5rzmk local-path-storage Running Select a Pod:
Choose the postgresql pod. You will see the output of the logs command:
Select a Pod: 1 postgresql 14:58:48.79 postgresql 14:58:48.79 Welcome to the Bitnami postgresql container postgresql 14:58:48.79 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql postgresql 14:58:48.79 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues postgresql 14:58:48.79 postgresql 14:58:48.81 INFO ==> ** Starting PostgreSQL setup ** postgresql 14:58:48.83 INFO ==> Validating settings in POSTGRESQL_* env vars.. postgresql 14:58:48.84 INFO ==> Loading custom pre-init scripts... postgresql 14:58:48.85 INFO ==> Initializing PostgreSQL database... postgresql 14:58:48.87 INFO ==> pg_hba.conf file not detected. Generating it... postgresql 14:58:48.87 INFO ==> Generating local authentication configuration postgresql 14:58:53.51 INFO ==> Starting PostgreSQL in background... postgresql 14:58:53.64 INFO ==> Changing password of postgres postgresql 14:58:53.66 INFO ==> Configuring replication parameters postgresql 14:58:53.69 INFO ==> Configuring fsync postgresql 14:58:53.70 INFO ==> Loading custom scripts... postgresql 14:58:53.71 INFO ==> Enabling remote connections postgresql 14:58:53.73 INFO ==> Stopping PostgreSQL... postgresql 14:58:54.74 INFO ==> ** PostgreSQL setup finished! ** postgresql 14:58:54.78 INFO ==> ** Starting PostgreSQL ** 2020-12-12 14:58:54.819 GMT [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 2020-12-12 14:58:54.820 GMT [1] LOG: listening on IPv6 address "::", port 5432 2020-12-12 14:58:54.829 GMT [1] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432" 2020-12-12 14:58:54.843 GMT [106] LOG: database system was shut down at 2020-12-12 14:58:53 GMT 2020-12-12 14:58:54.850 GMT [1] LOG: database system is ready to accept connections
To list the installed plugins, use the list command:
root@debian10:~# kubectl krew list PLUGIN VERSION ctx v0.9.1 krew v0.4.0 ns v0.9.1 pod-logs v1.0.1 view-allocations v0.9.2
To upgrade the installed plugins, use the upgrade command:
root@debian10:~# kubectl krew upgrade Updated the local copy of plugin index. Upgrading plugin: ctx Skipping plugin ctx, it is already on the newest version Upgrading plugin: krew Skipping plugin krew, it is already on the newest version Upgrading plugin: ns Skipping plugin ns, it is already on the newest version Upgrading plugin: pod-logs Skipping plugin pod-logs, it is already on the newest version Upgrading plugin: view-allocations Skipping plugin view-allocations, it is already on the newest version
To remove a plugin, use the remove command:
root@debian10:~# kubectl krew remove pod-logs Uninstalled plugin pod-logs root@debian10:~# kubectl krew list PLUGIN VERSION ctx v0.9.1 krew v0.4.0 ns v0.9.1 view-allocations v0.9.2
1.8 - Managing Patches
The kustomize Command
Start by installing the tree executable, which you will use later to view the tree of directories and files you are about to create:
root@debian10:~# apt install tree
Next, create the kustomize directory containing a base directory, and change into the latter:
root@debian10:~# mkdir -p kustomize/base root@debian10:~# cd kustomize/base/ root@debian10:~/kustomize/base#
Create the deployment.yaml manifest:
root@debian10:~/kustomize/base# vi deployment.yaml root@debian10:~/kustomize/base# cat deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: nginx labels: app: nginx spec: replicas: 1 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - nginx topologyKey: "kubernetes.io/hostname" containers: - image: nginx:1.18.0 imagePullPolicy: IfNotPresent name: nginx
Important - this file creates a deployment of 1 replica of the nginx pod from the nginx:1.18.0 image.
Next, create the service.yaml manifest:
root@debian10:~/kustomize/base# vi service.yaml root@debian10:~/kustomize/base# cat service.yaml apiVersion: v1 kind: Service metadata: name: nginx labels: app: nginx spec: type: ClusterIP ports: - port: 80 protocol: TCP targetPort: 80 selector: app: nginx
Important - this file creates a Service of type ClusterIP on top of the previous deployment. A ClusterIP Service groups the pods offering the same service in order to simplify communication.
Lastly, create the kustomization.yaml manifest:
root@debian10:~/kustomize/base# vi kustomization.yaml root@debian10:~/kustomize/base# cat kustomization.yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization images: - name: nginx newTag: 1.19.1 resources: - deployment.yaml - service.yaml
Important - this file defines a patch for the nginx application created by the two previous files. Note the newTag field in the images section. The resources section lists the manifests covered by the patch. Note that only the deployment.yaml manifest references an image; however, service.yaml is included here because it will be needed later.
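Note that the images field can override the image name as well as the tag; a sketch with a hypothetical private registry:

```yaml
images:
- name: nginx                                # image name as written in deployment.yaml
  newName: registry.example.com/mirror/nginx # hypothetical registry mirror
  newTag: "1.19.1"
```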
Now look at the tree of the kustomize directory:
root@debian10:~/kustomize/base# cd .. root@debian10:~/kustomize# tree . └── base ├── deployment.yaml ├── kustomization.yaml └── service.yaml 1 directory, 3 files
Now run the kustomize command to generate the patched manifests from the files in the base directory:
root@debian10:~/kustomize# kubectl kustomize base apiVersion: v1 kind: Service metadata: labels: app: nginx name: nginx spec: ports: - port: 80 protocol: TCP targetPort: 80 selector: app: nginx type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: nginx name: nginx spec: replicas: 1 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - nginx topologyKey: kubernetes.io/hostname containers: - image: nginx:1.19.1 imagePullPolicy: IfNotPresent name: nginx
Important - note that the generated output contains the contents of both deployment.yaml and service.yaml, separated by the characters ---. The content of service.yaml is unchanged, while in the content of deployment.yaml the image has been changed from image: nginx:1.18.0 to image: nginx:1.19.1. Note that the two original files themselves have not been modified.
Now imagine that you want to deploy two different environments of the same application, one for production and one for development. The kustomize command makes this possible through overlays.
Create the kustomize/overlays/development and kustomize/overlays/production directories:
root@debian10:~/kustomize# mkdir -p overlays/development root@debian10:~/kustomize# mkdir overlays/production
Look at the tree of the kustomize directory:
root@debian10:~/kustomize# tree . ├── base │ ├── deployment.yaml │ ├── kustomization.yaml │ └── service.yaml └── overlays ├── development └── production 4 directories, 3 files
Create the overlays/development/kustomization.yaml file:
root@debian10:~/kustomize# vi overlays/development/kustomization.yaml root@debian10:~/kustomize# cat overlays/development/kustomization.yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization bases: - ../../base # <---------------------where the main manifests are stored nameSuffix: -development # <----------updates the name of the service/deployment commonLabels: environment: development # <--------adds an extra label namespace: nginx-dev # <--------------sets the name of the namespace
Render the result of these modifications:
root@debian10:~/kustomize# kubectl kustomize overlays/development/ apiVersion: v1 kind: Service metadata: labels: app: nginx environment: development # <-----------extra label name: nginx-development # <--------------updated service name namespace: nginx-dev # <-----------------namespace name spec: ports: - port: 80 protocol: TCP targetPort: 80 selector: app: nginx environment: development # <-----------extra label type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: nginx environment: development name: nginx-development namespace: nginx-dev spec: replicas: 1 selector: matchLabels: app: nginx environment: development template: metadata: labels: app: nginx environment: development spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - nginx topologyKey: kubernetes.io/hostname containers: - image: nginx:1.19.1 # <-------------uses the image specified in the /kustomize/base/kustomization.yaml file imagePullPolicy: IfNotPresent name: nginx
Now create the overlays/production/kustomization.yaml file:
root@debian10:~/kustomize# vi overlays/production/kustomization.yaml root@debian10:~/kustomize# cat overlays/production/kustomization.yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization bases: - ../../base # <---------------------where the main manifests are stored nameSuffix: -production # <----------updates the name of the service/deployment commonLabels: environment: production # <--------adds an extra label namespace: nginx-prod # <------------sets the name of the namespace images: - name: nginx newTag: 1.19.2 # <-----------------overrides the image specified in the /kustomize/base/kustomization.yaml file
Render the result of these modifications:
root@debian10:~/kustomize# kubectl kustomize overlays/production/ apiVersion: v1 kind: Service metadata: labels: app: nginx environment: production # <-----------extra label name: nginx-production # <--------------updated service name namespace: nginx-prod # <---------------namespace name spec: ports: - port: 80 protocol: TCP targetPort: 80 selector: app: nginx environment: production # <-----------extra label type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: nginx environment: production name: nginx-production namespace: nginx-prod spec: replicas: 1 selector: matchLabels: app: nginx environment: production template: metadata: labels: app: nginx environment: production spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - nginx topologyKey: kubernetes.io/hostname containers: - image: nginx:1.19.2 # <-------------uses the image specified in the overlays/production/kustomization.yaml file imagePullPolicy: IfNotPresent name: nginx
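An overlay can adjust more than names, labels and images. Depending on the kustomize version embedded in kubectl, the production overlay could, for example, also scale the deployment with the replicas field, without touching base (a sketch, not part of this lab):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
nameSuffix: -production
namespace: nginx-prod
replicas:
- name: nginx    # matches the deployment name in base
  count: 3
```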
Now create the nginx-prod namespace:
root@debian10:~/kustomize# kubectl create ns nginx-prod
namespace/nginx-prod created
Install the production application:
root@debian10:~/kustomize# kubectl apply -k overlays/production/
service/nginx-production created
deployment.apps/nginx-production created
Check the result of the installation:
root@debian10:~/kustomize# kubectl get pods -n nginx-prod
NAME                               READY   STATUS    RESTARTS   AGE
nginx-production-f456f9c8f-8hgss   1/1     Running   0          51s
root@debian10:~/kustomize# kubectl get deployments -n nginx-prod
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-production   1/1     1            1           67s
root@debian10:~/kustomize# kubectl get services -n nginx-prod
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
nginx-production   ClusterIP   10.96.55.123   <none>        80/TCP    77s
Delete the nginx-production deployment and service:
root@debian10:~/kustomize# kubectl delete deployments/nginx-production -n nginx-prod
deployment.apps "nginx-production" deleted
root@debian10:~/kustomize# kubectl get deployments -n nginx-prod
No resources found in nginx-prod namespace.
root@debian10:~/kustomize# kubectl get services -n nginx-prod
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
nginx-production   ClusterIP   10.96.55.123   <none>        80/TCP    4m13s
root@debian10:~/kustomize# kubectl get pods -n nginx-prod
No resources found in nginx-prod namespace.
root@debian10:~/kustomize# kubectl delete services/nginx-production -n nginx-prod
service "nginx-production" deleted
root@debian10:~/kustomize# kubectl get services -n nginx-prod
No resources found in nginx-prod namespace.
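Deleting the resources one by one works, but since the overlay was applied with -k, the whole set can also be removed in a single step with the matching delete command (a sketch; it requires a running cluster and assumes the overlay has not changed since it was applied):

```shell
# Remove every resource rendered by the overlay in one command,
# instead of deleting the deployment and the service separately.
kubectl delete -k overlays/production/
```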
Install the development application:
root@debian10:~/kustomize# kubectl create ns nginx-dev
namespace/nginx-dev created
root@debian10:~/kustomize# kubectl apply -k overlays/development/
service/nginx-development created
deployment.apps/nginx-development created
Check the result:
root@debian10:~/kustomize# kubectl get pods -n nginx-dev
NAME                                 READY   STATUS    RESTARTS   AGE
nginx-development-579c5cfcb6-w8dmq   1/1     Running   0          42s
root@debian10:~/kustomize# kubectl get deployments -n nginx-dev
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
nginx-development   1/1     1            1           52s
root@debian10:~/kustomize# kubectl get services -n nginx-dev
NAME                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
nginx-development   ClusterIP   10.96.51.27   <none>        80/TCP    62s
1.9 - Useful Aliases
To make working with the kubectl command easier, it is recommended to create aliases in bash or zsh:
root@debian10:~/kustomize# vi ~/.bash_aliases
root@debian10:~/kustomize# cat ~/.bash_aliases
alias k='kubectl'
alias kg='kubectl get'
alias kd='kubectl describe'
alias kga='kubectl get all'
alias kp='kubectl get pods -o wide'
alias kap='kubectl get pods -A -o wide'
alias ka='kubectl apply -f'
alias kei='kubectl exec -it'
alias ke='kubectl exec'
alias ktn='watch kubectl top nodes'
alias ktp='watch kubectl top pods'
alias kpf='kubectl port-forward'
alias kl='kubectl logs'
alias kz='kustomize'
Activate the aliases with the source command:
root@debian10:~/kustomize# source ~/.bash_aliases
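A quick, cluster-free way to confirm the aliases expand as intended (a typo such as kuebctl instead of kubectl shows up immediately here). This sketch writes a minimal copy of two of the aliases to a temporary file so it is self-contained; in practice you would source ~/.bash_aliases directly:

```shell
#!/usr/bin/env bash
# Write a minimal alias file (self-contained stand-in for ~/.bash_aliases).
cat > /tmp/check_aliases <<'EOF'
alias kg='kubectl get'
alias kei='kubectl exec -it'
EOF

# Non-interactive bash does not expand aliases unless this option is set.
shopt -s expand_aliases
source /tmp/check_aliases

# Print the definitions; each should start with 'kubectl', not a typo.
alias kg kei
```

Note that aliases are only expanded in interactive shells by default, which is why the sketch enables expand_aliases explicitly.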
The kg Alias
root@debian10:~/kustomize# kg nodes
NAME                 STATUS   ROLES    AGE   VERSION
kind-control-plane   Ready    master   82m   v1.19.1
kind-worker2         Ready    <none>   81m   v1.19.1
kind-worker3         Ready    <none>   81m   v1.19.1
root@debian10:~/kustomize# kg deployments
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
postgresql   1/1     1            1           80m
root@debian10:~/kustomize# kg services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   82m
root@debian10:~/kustomize# kg pods
NAME                          READY   STATUS    RESTARTS   AGE
postgresql-6c99978556-kqkp4   1/1     Running   0          81m
The kd Alias
root@debian10:~/kustomize# kd pod postgresql-6c99978556-kqkp4 | more
Name:         postgresql-6c99978556-kqkp4
Namespace:    default
Priority:     0
Node:         kind-worker2/172.18.0.4
Start Time:   Sun, 13 Dec 2020 13:37:14 +0100
Labels:       app=postgresql
              pod-template-hash=6c99978556
Annotations:  <none>
Status:       Running
IP:           10.244.1.2
IPs:
  IP:           10.244.1.2
Controlled By:  ReplicaSet/postgresql-6c99978556
Containers:
  postgresql:
    Container ID:   containerd://4faccd4f90e93528e6dddac1cc24dba7f93e36a4442e60093676e2ad1d4218aa
    Image:          bitnami/postgresql:10.13.0
    Image ID:       docker.io/bitnami/postgresql@sha256:00794b9129f9b60942d70d635a00398180e70b4759e570e38cfe7434ebe2ccdd
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sun, 13 Dec 2020 13:37:36 +0100
    Ready:          True
    Restart Count:  0
    Environment:
      POSTGRESQL_PASSWORD:  VerySecurePassword:-)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-z5ptn (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-z5ptn:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-z5ptn
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
--More--
The kga Alias
root@debian10:~/kustomize# kga
NAME                              READY   STATUS    RESTARTS   AGE
pod/postgresql-6c99978556-kqkp4   1/1     Running   0          84m

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   86m

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/postgresql   1/1     1            1           84m

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/postgresql-6c99978556   1         1         1       84m
The kp Alias
root@debian10:~/kustomize# kp
NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
postgresql-6c99978556-kqkp4   1/1     Running   0          85m   10.244.1.2   kind-worker2   <none>           <none>
The kap Alias
root@debian10:~/kustomize# kap
NAMESPACE            NAME                                         READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE   READINESS GATES
default              postgresql-6c99978556-kqkp4                  1/1     Running   0          86m   10.244.1.2   kind-worker2         <none>           <none>
kube-system          coredns-f9fd979d6-hd5sh                      1/1     Running   0          88m   10.244.0.3   kind-control-plane   <none>           <none>
kube-system          coredns-f9fd979d6-q7tcx                      1/1     Running   0          88m   10.244.0.2   kind-control-plane   <none>           <none>
kube-system          etcd-kind-control-plane                      1/1     Running   0          88m   172.18.0.5   kind-control-plane   <none>           <none>
kube-system          kindnet-2vgnb                                1/1     Running   0          87m   172.18.0.4   kind-worker2         <none>           <none>
kube-system          kindnet-6x6pk                                1/1     Running   0          88m   172.18.0.5   kind-control-plane   <none>           <none>
kube-system          kindnet-snk42                                1/1     Running   0          87m   172.18.0.3   kind-worker3         <none>           <none>
kube-system          kube-apiserver-kind-control-plane            1/1     Running   0          88m   172.18.0.5   kind-control-plane   <none>           <none>
kube-system          kube-controller-manager-kind-control-plane   1/1     Running   0          88m   172.18.0.5   kind-control-plane   <none>           <none>
kube-system          kube-proxy-lkljb                             1/1     Running   0          87m   172.18.0.3   kind-worker3         <none>           <none>
kube-system          kube-proxy-mfgcf                             1/1     Running   0          88m   172.18.0.5   kind-control-plane   <none>           <none>
kube-system          kube-proxy-wl4mk                             1/1     Running   0          87m   172.18.0.4   kind-worker2         <none>           <none>
kube-system          kube-scheduler-kind-control-plane            1/1     Running   0          88m   172.18.0.5   kind-control-plane   <none>           <none>
local-path-storage   local-path-provisioner-78776bfc44-bp7pb      1/1     Running   0          88m   10.244.0.4   kind-control-plane   <none>           <none>
nginx-dev            nginx-development-579c5cfcb6-w8dmq           1/1     Running   0          25m   10.244.3.3   kind-worker3         <none>           <none>
The kei Alias
root@debian10:~/kustomize# kei postgresql-6c99978556-kqkp4 -- bash
I have no name!@postgresql-6c99978556-kqkp4:/$ exit
exit
The ke Alias
root@debian10:~/kustomize# ke postgresql-6c99978556-kqkp4 -- ls -alh
total 88K
drwxr-xr-x   1 root root 4.0K Dec 13 12:37 .
drwxr-xr-x   1 root root 4.0K Dec 13 12:37 ..
drwxr-xr-x   1 root root 4.0K Aug 12 04:15 bin
drwxr-xr-x   3 root root 4.0K Aug 12 04:16 bitnami
drwxr-xr-x   2 root root 4.0K Jul 10 21:04 boot
drwxr-xr-x   5 root root  360 Dec 13 12:37 dev
drwxrwxr-x   2 root root 4.0K Aug 12 04:16 docker-entrypoint-initdb.d
drwxr-xr-x   2 root root 4.0K Dec 13 12:37 docker-entrypoint-preinitdb.d
drwxr-xr-x   1 root root 4.0K Dec 13 12:37 etc
drwxr-xr-x   2 root root 4.0K Jul 10 21:04 home
drwxr-xr-x   1 root root 4.0K Sep 25  2017 lib
drwxr-xr-x   2 root root 4.0K Jul 21 19:27 lib64
drwxr-xr-x   2 root root 4.0K Jul 21 19:27 media
drwxr-xr-x   2 root root 4.0K Jul 21 19:27 mnt
drwxrwxr-x   1 root root 4.0K Aug 12 04:15 opt
dr-xr-xr-x 177 root root    0 Dec 13 12:37 proc
drwx------   2 root root 4.0K Jul 21 19:27 root
drwxr-xr-x   1 root root 4.0K Dec 13 12:37 run
drwxr-xr-x   1 root root 4.0K Aug 12 04:15 sbin
drwxr-xr-x   2 root root 4.0K Jul 21 19:27 srv
dr-xr-xr-x  13 root root    0 Dec 13 12:37 sys
drwxrwxrwt   1 root root 4.0K Dec 13 12:37 tmp
drwxrwxr-x   1 root root 4.0K Aug 12 04:15 usr
drwxr-xr-x   1 root root 4.0K Jul 21 19:27 var
The kpf Alias
root@debian10:~/kustomize# kpf postgresql-6c99978556-kqkp4 8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
^Croot@debian10:~/kustomize#
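Note that a single port argument forwards the same local and remote port (8080 -> 8080 above), while this pod runs PostgreSQL, which listens on 5432. To actually reach the database, forward a local port to the container port using the LOCAL:REMOTE form (a sketch; it requires a running cluster and the pod name from your own session):

```shell
# Forward local port 15432 to PostgreSQL's port 5432 inside the pod.
kubectl port-forward postgresql-6c99978556-kqkp4 15432:5432 &
# The database is then reachable on the host, for example with:
#   psql -h 127.0.0.1 -p 15432 -U postgres
```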
The kl Alias
root@debian10:~/kustomize# k get deploy
NAME         READY   UP-TO-DATE   AVAILABLE   AGE
postgresql   1/1     1            1           92m
root@debian10:~/kustomize# kl deploy/postgresql --tail 10
postgresql 12:37:40.90 INFO  ==> Enabling remote connections
postgresql 12:37:40.92 INFO  ==> Stopping PostgreSQL...
postgresql 12:37:41.93 INFO  ==> ** PostgreSQL setup finished! **
postgresql 12:37:41.95 INFO  ==> ** Starting PostgreSQL **
2020-12-13 12:37:41.979 GMT [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2020-12-13 12:37:41.979 GMT [1] LOG:  listening on IPv6 address "::", port 5432
2020-12-13 12:37:41.983 GMT [1] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2020-12-13 12:37:42.001 GMT [105] LOG:  database system was shut down at 2020-12-13 12:37:40 GMT
2020-12-13 12:37:42.007 GMT [1] LOG:  database system is ready to accept connections
<html> <DIV ALIGN="CENTER"> Copyright © 2020 Hugh Norris </div> </html>