Version - **2024.01**

Last updated: ~~LASTMOD~~

======DOF308 - Introduction to Securing K8s======

=====Module Contents=====

  * **DOF308 - Introduction to Securing K8s**
    * Module Contents
    * LAB #1 - Role-Based Access Control and TLS Certificates
      * 1.1 - Overview
      * 1.2 - The /etc/kubernetes/manifests/kube-apiserver.yaml File
      * 1.3 - Creating a serviceAccount
      * 1.4 - Creating a User
      * 1.5 - TLS Certificates
    * LAB #2 - Implementing Security at the Pod Level
      * 2.1 - Overview
      * 2.2 - Kubernetes Security Context
        * ReadOnlyRootFilesystem
        * drop
      * 2.3 - Kubernetes Network Policies
      * 2.4 - Kubernetes Resource Allocation Management

=====Resources=====

====Lab #1====

  * https://
  * https://

====Lab #2====

  * https://
  * https://
  * https://
  * https://
  * https://

=====LAB #1 - Role-Based Access Control and TLS Certificates=====

====1.1 - Overview====

A Kubernetes object is either bound to a Namespace (namespaced) or not bound to any Namespace (cluster-scoped); the sketch below shows how to list each kind.
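
To see which objects fall into each category, kubectl can query the API directly (real commands; the exact output depends on your cluster and API versions):

<code>
# Cluster-scoped (non-namespaced) resources:
kubectl api-resources --namespaced=false

# Namespaced resources:
kubectl api-resources --namespaced=true
</code>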

Kubernetes uses the **rbac.authorization.k8s.io** API to manage authorizations. The actors playing a role in this API are:

  * **Namespaces**,
    * can be thought of as virtual clusters,
    * provide isolation and access control,
    * allow objects and resources to be grouped together,
    * are used to separate applications, teams or environments.

  * **Subjects**,
    * //Regular Users// - manage authorized access for humans connecting to the cluster from outside,
    * //ServiceAccounts// - manage authorized access for processes running inside pods,
    * //User Groups// - Kubernetes groups users by shared properties, such as the prefix of a serviceAccount or the Organization (O) field of a user certificate.

  * **Resources**,
    * the entities to which Subjects are granted access,
    * a resource is an entity such as a pod or a deployment, or a sub-resource such as a pod's logs,
    * the Pod Security Policy (PSP) is also considered a resource.

  * **Roles** and **ClusterRoles**,
    * //Roles// - define rules representing a set of permissions,
    * permissions are purely additive - there are no deny rules,
    * //ClusterRoles// - can be used to:
      * define permissions for namespaced resources in one given Namespace,
      * define permissions for namespaced resources across all Namespaces,
      * define permissions for cluster-scoped resources.

An example of a Role granting permissions in the default Namespace is:

<code>
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
</code>

<WRAP center round important>
**Important**: apiGroups: [""] designates the core API group.
</WRAP>

An example of a ClusterRole granting read access to secrets, either in a specific Namespace or across all Namespaces depending on how it is bound, is:

<code>
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
</code>

  * **RoleBindings** and **ClusterRoleBindings**,
    * bind a Role or a ClusterRole to a list of Subjects (see the sketch after this list),
    * **RoleBindings** are specific to one Namespace,
    * **ClusterRoleBindings** apply to all the Namespaces of the cluster.
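
As an illustration, a minimal ClusterRoleBinding sketch that grants the **secret-reader** ClusterRole defined above to a group; the group name **manager** is a hypothetical value chosen for this example:

<code>
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
subjects:
- kind: Group
  name: manager                        # hypothetical group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader                  # the ClusterRole defined above
  apiGroup: rbac.authorization.k8s.io
</code>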

====1.2 - The /etc/kubernetes/manifests/kube-apiserver.yaml File====

The API server runs as a static pod whose configuration is defined in the manifest **/etc/kubernetes/manifests/kube-apiserver.yaml**:

<code>
root@kubemaster:~# cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.56.2:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.56.2
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:...
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.56.2
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 192.168.56.2
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 192.168.56.2
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
</code>

====1.3 - Creating a serviceAccount====

It is preferable to create one serviceAccount per service. This allows a finer-grained security configuration for that service. If no serviceAccount is specified when pods are created, those pods are assigned the default serviceAccount of their Namespace.
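
For one-off cases, a serviceAccount can also be created imperatively rather than from a manifest. A quick sketch (the name **demo-sa** is illustrative and is not used elsewhere in this lab):

<code>
root@kubemaster:~# kubectl create serviceaccount demo-sa --namespace default
root@kubemaster:~# kubectl get serviceaccount demo-sa --namespace default -o yaml
</code>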

Imagine that you want your application to interact with the Kubernetes API in order to list the pods in a Namespace. Start by checking whether the default serviceAccount is allowed to do so:

<code>
root@kubemaster:~# kubectl auth can-i list pods --as=system:serviceaccount:default:default
no
</code>

<WRAP center round important>
**Important**: the format of the value of the **--as** option is **system:serviceaccount:<namespace>:<serviceaccountname>**.
</WRAP>

Now create the file **flask.yaml**:

<code>
root@kubemaster:~# vi flask.yaml
root@kubemaster:~# cat flask.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: flask
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flask-backend
  namespace: flask
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flask-backend-role
  namespace: flask
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flask-backend-role-binding
  namespace: flask
subjects:
- kind: ServiceAccount
  name: flask-backend
  namespace: flask
roleRef:
  kind: Role
  name: flask-backend-role
  apiGroup: rbac.authorization.k8s.io
</code>

This file creates:

  * a Namespace called **flask**,
  * a serviceAccount called **flask-backend** in the **flask** Namespace,
  * a Role called **flask-backend-role** that grants the **get**, **watch** and **list** permissions on pods in the **flask** Namespace,
  * a RoleBinding called **flask-backend-role-binding** that grants the permissions defined in the **flask-backend-role** Role to the Subject of type serviceAccount called **flask-backend**.

Apply the file:

<code>
root@kubemaster:~# kubectl apply -f flask.yaml
namespace/flask created
serviceaccount/flask-backend created
role.rbac.authorization.k8s.io/flask-backend-role created
rolebinding.rbac.authorization.k8s.io/flask-backend-role-binding created
</code>
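
To double-check what was just created, the objects can be listed and inspected (real commands; the output layout varies slightly between kubectl versions):

<code>
root@kubemaster:~# kubectl get serviceaccount,role,rolebinding -n flask
root@kubemaster:~# kubectl describe rolebinding flask-backend-role-binding -n flask
</code>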

Now create the file **deployment.yaml**, which creates pods that will use the serviceAccount called **flask-backend**:

<code>
root@kubemaster:~# vi deployment.yaml
root@kubemaster:~# cat deployment.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  namespace: flask
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      serviceAccount: flask-backend
      containers:
      - name: nginx-container
        image: nginx
  replicas: 3
  selector:
    matchLabels:
      type: front-end
</code>

Run kubectl:

<code>
root@kubemaster:~# kubectl apply -f deployment.yaml
deployment.apps/myapp-deployment created
</code>

Check that the deployment is present:

<code>
root@kubemaster:~# kubectl get deployments -n flask
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
myapp-deployment   3/3     3            3           18s
</code>

Now verify that the **flask-backend** serviceAccount can list the pods in the **flask** Namespace:

<code>
root@kubemaster:~# kubectl auth can-i list pods --as=system:serviceaccount:flask:flask-backend --namespace flask
yes
</code>

Note, however, that the **flask-backend** serviceAccount does not have the **create** permission in the **flask** Namespace:

<code>
root@kubemaster:~# kubectl auth can-i create pods --as=system:serviceaccount:flask:flask-backend --namespace flask
no
</code>

and that the **flask-backend** serviceAccount does not have the **list** permission in the **default** Namespace:

<code>
root@kubemaster:~# kubectl auth can-i list pods --as=system:serviceaccount:flask:flask-backend --namespace default
no
</code>
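
Inside one of these pods, an application would authenticate to the API with the serviceAccount token that Kubernetes mounts automatically. A minimal sketch of such a call, assuming the default token mount path and that **curl** is available in the image:

<code>
# Run inside a pod of myapp-deployment:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# List the pods of the flask Namespace through the API server:
curl --cacert ${CACERT} -H "Authorization: Bearer ${TOKEN}" \
     https://kubernetes.default.svc/api/v1/namespaces/flask/pods
</code>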

====1.4 - Creating a User====

Users are part of the configuration context, which defines the cluster name and the Namespace name:

<code>
root@kubemaster:~# kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
</code>

<WRAP center round important>
**Important**: A context is an element that groups the access parameters of a cluster under a single name.
</WRAP>

Looking at the current context, you can see that user authentication relies on two keys:

  * client-certificate-data,
  * client-key-data.

<code>
root@kubemaster:~# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.56.2:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
</code>

<WRAP center round important>
**Important**: The word **REDACTED** indicates that the values are hidden for security reasons.
</WRAP>
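
If the full values are needed, for example to build a kubeconfig file for another machine, kubectl can print them with the **--raw** flag:

<code>
root@kubemaster:~# kubectl config view --raw
</code>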

To create a new user, start by creating a private key for that user:

<code>
root@kubemaster:~# openssl genrsa -out trainee.key 2048
Generating RSA private key, 2048 bit long modulus
....................................+++
..............+++
e is 65537 (0x10001)
</code>

Now create a CSR:

<code>
root@kubemaster:~# openssl req -new -key trainee.key -out trainee.csr -subj "/CN=trainee/O=examplegroup"
</code>

<WRAP center round important>
**Important**: Note that Kubernetes will use the value of the **CN** key as the user name and the value of the **O** key as the group name.
</WRAP>

The CSR must be signed by the Kubernetes root CA:

<code>
root@kubemaster:~# ls -l /etc/kubernetes/pki/ca.*
-rw-r--r-- 1 root root 1099 juil. 12 13:23 /etc/kubernetes/pki/ca.crt
-rw------- 1 root root 1679 juil. 12 13:23 /etc/kubernetes/pki/ca.key
</code>

Sign the CSR:

<code>
root@kubemaster:~# openssl x509 -req -in trainee.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out trainee.crt -days 30
Signature ok
subject=/CN=trainee/O=examplegroup
Getting CA Private Key
</code>

View trainee's certificate:

<code>
root@kubemaster:~# openssl x509 -in trainee.crt -text
Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number:
            b6:f7:59:8f:75:19:bc:10
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN = kubernetes
        Validity
            Not Before: Jul 14 07:49:14 2022 GMT
            Not After : Aug 13 07:49:14 2022 GMT
        Subject: CN = trainee, O = examplegroup
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:9b:2d:e8:7d:ba:e9:9f:b3:da:8f:14:13:21:83:
                    64:c6:6e:7b:2c:ee:4f:e6:71:65:a7:e4:ca:6a:23:
                    ...
                    69:69
                Exponent: 65537 (0x10001)
    Signature Algorithm: sha256WithRSAEncryption
         ...
-----BEGIN CERTIFICATE-----
MIICujCCAaICCQC291mPdRm8EDANBgkqhkiG9w0BAQsFADAVMRMwEQYDVQQDEwpr
dWJlcm5ldGVzMB4XDTIyMDcxNDA3NDkxNFoXDTIyMDgxMzA3NDkxNFowKTEQMA4G
A1UEAwwHdHJhaW5lZTEVMBMGA1UECgwMZXhhbXBsZWdyb3VwMIIBIjANBgkqhkiG
9w0BAQEFAAOCAQ8AMIIBCgKCAQEAmy3ofbrpn7PajxQTIYNkxm57LO5P5nFlp+TK
aiPuz+FDGOCwH+//
9sJRfEKVFqxgDh1NCaoGKVF58UVwSLkc4gX8XDOC14JfojETtSNMEL+lik83KtbM
rMfArZdxlZ4mT2C1QYp7xXk4AiiwiIQjCxjSwvmf/
z1ZeqboG2IPCPB04zPr9aRdOw3553TQRmv9dMuRoqA/
n+Stv98GyCjHpHjyMbJsx56QuL9I1K79Zek4/
SIb3DQEBCwUAA4IBAQBtyA3NfDRcCGeYtq6AJuhz8RQ7AgndtG3xf7sSihaG1ta+
rZKZqCOh197U6QPsb7kZRi3Y9DBxjPBuQ63YEEYVq59GwVZMbIGrut1beGpXgtMa
1xpfY8pOD/
DM6HkZwl93KnRJ02QYdIYXExmiSuNk9AyPMIMvWxnfWKCnGA5nDZr+GWVYGfoZU5
U7Ub8zc+UNWha9FL0cZ1+2PwYwbOmfvDFcFRO+3ZyGhDZjzvkrqupQ0CSI1CGnAi
E3VHrWnVSBFrsSSAftYN95IMuyiRbtRMoRTJLUcs
-----END CERTIFICATE-----
</code>

Create a second user, here called **user2**, in the same Organization:

<code>
root@kubemaster:~# openssl genrsa -out user2.key 2048
Generating RSA private key, 2048 bit long modulus
................................................................................................................................+++
.................+++
e is 65537 (0x10001)

root@kubemaster:~# openssl req -new -key user2.key -out user2.csr -subj "/CN=user2/O=examplegroup"

root@kubemaster:~# openssl x509 -req -in user2.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out user2.crt -days 30
Signature ok
subject=/CN=user2/O=examplegroup
Getting CA Private Key
</code>

Now create the **trainee** context:

<code>
root@kubemaster:~# kubectl config set-credentials trainee --client-certificate=trainee.crt --client-key=trainee.key
User "trainee" set.

root@kubemaster:~# kubectl config set-context trainee@kubernetes --cluster=kubernetes --user=trainee
Context "trainee@kubernetes" created.
</code>

Check that the context is present:

<code>
root@kubemaster:~# kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
          trainee@kubernetes            kubernetes   trainee
</code>

Switch to trainee's context:

<code>
root@kubemaster:~# kubectl config use-context trainee@kubernetes
Switched to context "trainee@kubernetes".

root@kubemaster:~# kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
          kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
*         trainee@kubernetes            kubernetes   trainee

root@kubemaster:~# kubectl get pods
Error from server (Forbidden): pods is forbidden: User "trainee" cannot list resource "pods" in API group "" in the namespace "default"
</code>

<WRAP center round important>
**Important**: Note that trainee cannot list pods because no RBAC permissions have been defined yet.
</WRAP>

Return to the administrator's context:

<code>
root@kubemaster:~# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".

root@kubemaster:~# kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
          trainee@kubernetes            kubernetes   trainee
</code>

Now create a **clusterrolebinding** for the **examplegroup** group:

<code>
root@kubemaster:~# kubectl create clusterrolebinding examplegroup-admin-binding --clusterrole=cluster-admin --group=examplegroup
clusterrolebinding.rbac.authorization.k8s.io/examplegroup-admin-binding created
</code>

Switch to trainee's context again:

<code>
root@kubemaster:~# kubectl config use-context trainee@kubernetes
Switched to context "trainee@kubernetes".

root@kubemaster:~# kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
          kubernetes-admin@kubernetes   kubernetes   kubernetes-admin
*         trainee@kubernetes            kubernetes   trainee

root@kubemaster:~# kubectl get pods -n kube-system
NAME                                                READY   STATUS    RESTARTS   AGE
calico-kube-controllers-6766647d54-v4hrm            1/1     Running   0          2d
calico-node-5mrjl                                   1/1     Running   0          2d
calico-node-688lw                                   1/1     Running   0          2d
calico-node-j25xd                                   1/1     Running   0          2d
coredns-6d4b75cb6d-dw4ph                            1/1     Running   0          2d
coredns-6d4b75cb6d-ms2jm                            1/1     Running   0          2d
etcd-kubemaster.ittraining.loc                      1/1     Running   0          2d
kube-apiserver-kubemaster.ittraining.loc            1/1     Running   0          2d
kube-controller-manager-kubemaster.ittraining.loc   1/1     Running   0          2d
kube-proxy-bwctz                                    1/1     Running   0          2d
kube-proxy-j89vg                                    1/1     Running   0          2d
kube-proxy-jx76x                                    1/1     Running   0          2d
kube-scheduler-kubemaster.ittraining.loc            1/1     Running   0          2d
metrics-server-7cb867d5dc-g55k5                     1/1     Running   0          2d
</code>

====1.5 - TLS Certificates====

By default, the communication between kubectl and the Kubernetes API is encrypted. The kubelet's certificates are in the **/var/lib/kubelet/pki** directory:

<code>
root@kubemaster:~# ls -l /var/lib/kubelet/pki
total 12
-rw------- 1 root root 2851 juil. 12 13:23 kubelet-client-2022-07-12-13-23-12.pem
lrwxrwxrwx 1 root root   59 juil. 12 13:23 kubelet-client-current.pem -> /var/lib/kubelet/pki/kubelet-client-2022-07-12-13-23-12.pem
-rw-r--r-- 1 root root 2367 juil. 12 13:23 kubelet.crt
-rw------- 1 root root 1675 juil. 12 13:23 kubelet.key
</code>

<WRAP center round important>
**Important**: By default, kubelet certificates expire after one year.
</WRAP>
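
To check when a certificate actually expires, openssl can read the date directly; on kubeadm clusters, the control-plane certificates can also be listed in one go (both commands are real; output omitted here):

<code>
root@kubemaster:~# openssl x509 -noout -enddate -in /var/lib/kubelet/pki/kubelet.crt
root@kubemaster:~# kubeadm certs check-expiration
</code>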

=====LAB #2 - Implementing Security at the Pod Level=====

====2.1 - Overview====

An **Admission Controller** is a piece of code that intercepts requests to the Kubernetes API. The list of enabled Admission Controllers is defined by a kube-apiserver option, for example:

<code>
--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,...
</code>

The most important Admission Controllers in terms of security are:

  * **DenyEscalatingExec**,
    * denies **exec** and **attach** commands in privileged containers that allow host access,
  * **NodeRestriction**,
    * limits the Node and Pod objects that a kubelet is able to modify,
  * **PodSecurityPolicy**,
    * acts on pod creation or modification to decide whether the pod should be admitted to the cluster, based on its Security Context and the applicable policies,
  * **ValidatingAdmissionWebhooks**,
    * call external webhooks to validate requests and reject them if necessary.
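
On recent kubeadm clusters, the flag is **--enable-admission-plugins** rather than the legacy **--admission-control**. A sketch of enabling an additional plugin in the API server manifest seen in LAB #1 (the plugin list shown is illustrative):

<code>
# Excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml:
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,LimitRanger
</code>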

====2.2 - Kubernetes Security Context====

The Security Context is configured at the pod level or at the container level. A pod-level sketch and some container-level examples follow.
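
At the pod level, a minimal hypothetical sketch looks like this (the field names come from the PodSecurityContext API; the pod name and values are illustrative):

<code>
apiVersion: v1
kind: Pod
metadata:
  name: secured-pod          # hypothetical name
spec:
  securityContext:           # applies to every container in the pod
    runAsUser: 1000
    runAsGroup: 3000
    runAsNonRoot: true
    fsGroup: 2000
  containers:
  - name: app
    image: nginx
</code>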

===ReadOnlyRootFilesystem===

Create the file **readonly.yaml**:

<code>
root@kubemaster:~# vi readonly.yaml
root@kubemaster:~# cat readonly.yaml
apiVersion: v1
kind: Pod
metadata:
  name: flask-ro
  namespace: default
spec:
  containers:
  - image: mateobur/flask
    name: flask-ro
    securityContext:
      readOnlyRootFilesystem: true
</code>

Run kubectl:

<code>
root@kubemaster:~# kubectl create -f readonly.yaml
pod/flask-ro created
</code>

Check that the pod is in the **READY** state:

<code>
root@kubemaster:~# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
flask-ro                                 1/1     Running   0          45s
postgres-deployment-5b8bd66778-j99zz     1/1     Running   0          2d
redis-deployment-67d4c466c4-9wzfn        1/1     Running   0          2d
result-app-deployment-b8f9dc967-nzbgd    1/1     Running   0          2d
result-app-deployment-b8f9dc967-r84k6    1/1     Running   0          2d
result-app-deployment-b8f9dc967-zbsk2    1/1     Running   0          2d
voting-app-deployment-669dccccfb-jpn6h   1/1     Running   0          2d
voting-app-deployment-669dccccfb-ktd7d   1/1     Running   0          2d
voting-app-deployment-669dccccfb-x868p   1/1     Running   0          2d
worker-app-deployment-559f7749b6-jh86r   1/1     Running   0          2d
</code>

Connect to the container:

<code>
root@kubemaster:~# kubectl exec -it flask-ro -- bash
root@flask-ro:/#
</code>

Note that the filesystem is read-only:

<code>
root@flask-ro:/# mount | grep overlay
overlay on / type overlay (ro,relatime,...)

root@flask-ro:/# touch foo
touch: cannot touch 'foo': Read-only file system

root@flask-ro:/# exit
exit
command terminated with exit code 1
</code>

===drop===

Create the file **drop.yaml**:

<code>
root@kubemaster:~# vi drop.yaml
root@kubemaster:~# cat drop.yaml
apiVersion: v1
kind: Pod
metadata:
  name: flask-cap
  namespace: default
spec:
  containers:
  - image: mateobur/flask
    name: flask-cap
    securityContext:
      capabilities:
        drop:
          - NET_RAW
          - CHOWN
</code>

Run kubectl:

<code>
root@kubemaster:~# kubectl create -f drop.yaml
pod/flask-cap created
</code>

Check that the pod is in the **READY** state:

<code>
root@kubemaster:~# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
flask-cap                                1/1     Running   0          30s
flask-ro                                 1/1     Running   0          6m
postgres-deployment-5b8bd66778-j99zz     1/1     Running   0          2d
redis-deployment-67d4c466c4-9wzfn        1/1     Running   0          2d
result-app-deployment-b8f9dc967-nzbgd    1/1     Running   0          2d
result-app-deployment-b8f9dc967-r84k6    1/1     Running   0          2d
result-app-deployment-b8f9dc967-zbsk2    1/1     Running   0          2d
voting-app-deployment-669dccccfb-jpn6h   1/1     Running   0          2d
voting-app-deployment-669dccccfb-ktd7d   1/1     Running   0          2d
voting-app-deployment-669dccccfb-x868p   1/1     Running   0          2d
worker-app-deployment-559f7749b6-jh86r   1/1     Running   0          2d
</code>

Connect to the container:

<code>
root@kubemaster:~# kubectl exec -it flask-cap -- bash
root@flask-cap:/#
</code>

Note that the restrictions are in place:

<code>
root@flask-cap:/# ping 8.8.8.8
ping: Lacking privilege for raw socket.

root@flask-cap:/# chown daemon /tmp
chown: changing ownership of '/tmp': Operation not permitted

root@flask-cap:/# exit
exit
command terminated with exit code 1
</code>

====2.3 - Kubernetes Network Policies====

Create the file **guestbook-all-in-one.yaml**:

<code>
root@kubemaster:~# vi guestbook-all-in-one.yaml
root@kubemaster:~# cat guestbook-all-in-one.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    tier: backend
    role: master
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    tier: backend
    role: master
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  # these labels can be applied automatically
  # from the labels in the pod template if not set
  labels:
    app: redis
    role: master
    tier: backend
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 1
  # selector can be applied automatically
  # from the labels in the pod template if not set
  # selector:
  #   app: guestbook
  #   role: master
  #   tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/google_containers/redis:e2e
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    tier: backend
    role: slave
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
  selector:
    app: redis
    tier: backend
    role: slave
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-slave
  # these labels can be applied automatically
  # from the labels in the pod template if not set
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 2
  # selector can be applied automatically
  # from the labels in the pod template if not set
  # selector:
  #   app: guestbook
  #   role: slave
  #   tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below.
          # value: env
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
    # the port that this service should serve on
  - port: 80
  selector:
    app: guestbook
    tier: frontend
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: frontend
  # these labels can be applied automatically
  # from the labels in the pod template if not set
  labels:
    app: guestbook
    tier: frontend
spec:
  # this replicas value is default
  # modify it according to your case
  replicas: 3
  # selector can be applied automatically
  # from the labels in the pod template if not set
  # selector:
  #   app: guestbook
  #   tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: corelab/...
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and
          # uncomment the line below.
          # value: env
        ports:
        - containerPort: 80
</code>

Install the application:

<code>
root@kubemaster:~# kubectl create -f guestbook-all-in-one.yaml
</code>

Wait until all the pods are in the **READY** state:

<code>
root@kubemaster:~# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
flask-cap                                1/1     Running   0          12m
flask-ro                                 1/1     Running   0          18m
frontend-dhd4w                           1/1     Running   0          80s
frontend-dmbbf                           1/1     Running   0          80s
frontend-rqr6p                           1/1     Running   0          80s
redis-master-zrrr4                       1/1     Running   0          80s
redis-slave-jsrt6                        1/1     Running   0          80s
redis-slave-rrnx9                        1/1     Running   0          80s
...
</code>

This application creates //backend// and //frontend// pods, labeled **tier=backend** and **tier=frontend** respectively:

<code>
root@kubemaster:~# kubectl describe pod redis-master-zrrr4 | grep tier
              tier=backend

root@kubemaster:~# kubectl describe pod frontend-dhd4w | grep tier
              tier=frontend
</code>

Create the file **guestbook-network-policy.yaml**, which will prevent communication from a backend pod to a frontend pod:

<code>
root@kubemaster:~# vi guestbook-network-policy.yaml
root@kubemaster:~# cat guestbook-network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-backend-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      tier: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          tier: backend
</code>

Run kubectl:

<code>
root@kubemaster:~# kubectl create -f guestbook-network-policy.yaml
networkpolicy.networking.k8s.io/deny-backend-egress created
</code>

Connect to the **redis-master** pod:

<code>
root@kubemaster:~# kubectl exec -it redis-master-zrrr4 -- bash
[ root@redis-master-zrrr4:/data ]$
</code>

Try to reach a pod in the same **tier**:

<code>
[ root@redis-master-zrrr4:/data ]$ ping -c 4 192.168.150.15
PING 192.168.150.15 (192.168.150.15) 56(84) bytes of data.
64 bytes from 192.168.150.15: icmp_seq=1 ttl=62 time=0.481 ms
64 bytes from 192.168.150.15: icmp_seq=2 ttl=62 time=0.426 ms
64 bytes from 192.168.150.15: icmp_seq=3 ttl=62 time=0.365 ms
64 bytes from 192.168.150.15: icmp_seq=4 ttl=62 time=0.416 ms

--- 192.168.150.15 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3077ms
rtt min/avg/max/mdev = 0.365/0.422/0.481/0.041 ms
</code>

Now try to reach a pod in the frontend **tier**:

<code>
[ root@redis-master-zrrr4:/data ]$ ping -c 4 192.168.150.16
PING 192.168.150.16 (192.168.150.16) 56(84) bytes of data.

--- 192.168.150.16 ping statistics ---
4 packets transmitted, 0 received, 100% packet loss, time 3065ms
</code>

Disconnect from the **redis-master** pod and connect to a **frontend** pod:

<code>
[ root@redis-master-zrrr4:/data ]$ exit
exit
command terminated with exit code 1

root@kubemaster:~# kubectl exec -it frontend-dhd4w -- bash
root@frontend-dhd4w:/var/www/html#
</code>

Install the **iputils-ping** package:

<code>
root@frontend-dhd4w:/var/www/html# apt-get update
root@frontend-dhd4w:/var/www/html# apt-get install -y iputils-ping
</code>

Try to reach a pod in the same **tier**:

<code>
root@frontend-dhd4w:/var/www/html# ping -c 4 192.168.150.17
PING 192.168.150.17 (192.168.150.17): 56 data bytes
64 bytes from 192.168.150.17: icmp_seq=0 ttl=62 time=0.469 ms
64 bytes from 192.168.150.17: icmp_seq=1 ttl=62 time=0.407 ms
64 bytes from 192.168.150.17: icmp_seq=2 ttl=62 time=0.391 ms
64 bytes from 192.168.150.17: icmp_seq=3 ttl=62 time=0.438 ms
--- 192.168.150.17 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.391/0.426/0.469/0.030 ms
</code>

Now try to reach a pod in the backend **tier**. The policy only restricts egress from backend pods, so this succeeds:

<code>
root@frontend-dhd4w:/var/www/html# ping -c 4 192.168.239.27
PING 192.168.239.27 (192.168.239.27): 56 data bytes
64 bytes from 192.168.239.27: icmp_seq=0 ttl=62 time=0.520 ms
64 bytes from 192.168.239.27: icmp_seq=1 ttl=62 time=0.419 ms
64 bytes from 192.168.239.27: icmp_seq=2 ttl=62 time=0.444 ms
64 bytes from 192.168.239.27: icmp_seq=3 ttl=62 time=0.431 ms
--- 192.168.239.27 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.419/0.454/0.520/0.039 ms
</code>

Exit the frontend pod:

<code>
root@frontend-dhd4w:/var/www/html# exit
exit
root@kubemaster:~#
</code>
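
Note that NetworkPolicies are allow-lists: as soon as a pod is selected by a policy for a given direction, any traffic in that direction that is not explicitly allowed is denied. A common complementary sketch is a default deny-all ingress policy for a Namespace (illustrative; apply it only if it matches your needs):

<code>
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}        # selects every pod in the Namespace
  policyTypes:
  - Ingress              # no ingress rules are defined, so all ingress is denied
</code>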

====2.4 - Kubernetes Resource Allocation Management====

The resources that can be limited at the pod level are:

  * CPU,
  * memory,
  * local storage.

Create the file **flask-resources.yaml**:

<code>
root@kubemaster:~# vi flask-resources.yaml
root@kubemaster:~# cat flask-resources.yaml
apiVersion: v1
kind: Pod
metadata:
  name: flask-resources
  namespace: default
spec:
  containers:
  - image: mateobur/flask
    name: flask-resources
    resources:
      requests:
        memory: 512Mi
      limits:
        memory: 700Mi
</code>

Two resource allocations can be seen in this file:

  * **requests**,
    * the amount of memory that must be free at the time the pod is scheduled,
  * **limits**,
    * the memory limit for the pod concerned.

Run kubectl:

<code>
root@kubemaster:~# kubectl create -f flask-resources.yaml
pod/flask-resources created
</code>

Wait until the pod's status is **READY**:

<code>
root@kubemaster:~# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
flask-cap                                1/1     Running   0          25m
flask-resources                          1/1     Running   0          30s
flask-ro                                 1/1     Running   0          30m
...
</code>

Connect to the pod:

<code>
root@kubemaster:~# kubectl exec -it flask-resources -- bash
root@flask-resources:/#
</code>

Install the **stress** package:

<code>
root@flask-resources:/# vi /etc/apt/sources.list
root@flask-resources:/# cat /etc/apt/sources.list
deb http://archive.debian.org/debian jessie main
deb http://archive.debian.org/debian-security jessie/updates main
root@flask-resources:/# apt-get update
root@flask-resources:/# apt-get install -y stress
</code>

Test the limit that was put in place. Asking stress for two 800 MB workers exceeds the 700Mi limit, so the workers are killed by the OOM killer (signal 9):

<code>
root@flask-resources:/# stress --cpu 1 --io 1 --vm 2 --vm-bytes 800M
stress: info: [41] dispatching hogs: 1 cpu, 1 io, 2 vm, 0 hdd
stress: FAIL: [41] (416) <-- worker 45 got signal 9
stress: WARN: [41] (418) now reaping child worker processes
stress: FAIL: [41] (452) failed run completed in 1s
</code>

Exit the flask-resources pod:

<code>
root@flask-resources:/# exit
exit
root@kubemaster:~#
</code>
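
Requests and limits can also be enforced for a whole Namespace. A hypothetical ResourceQuota sketch (the name and values are illustrative) that caps the aggregate resources of all pods in the **flask** Namespace:

<code>
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mem-cpu-quota      # hypothetical name
  namespace: flask
spec:
  hard:
    requests.cpu: "1"      # sum of all CPU requests in the Namespace
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
</code>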

----
Copyright © 2024 Hugh Norris