====== SO303 - Storage Administration ======

=====Preparing your Solaris 11 VM=====

Before continuing further, shut down your Solaris 11 VM and use the **Storage** section of the **Oracle VM VirtualBox Manager** to create and attach the following disks:

^ Disk ^ Size ^ Name ^
| c7t2d0 | 200 Mb | Disk1.vmdk |
| c7t3d0 | 200 Mb | Disk2.vmdk |
| c7t4d0 | 200 Mb | Disk3.vmdk |
| c7t5d0 | 200 Mb | Disk4.vmdk |
| c7t6d0 | 200 Mb | Disk5.vmdk |
| c7t7d0 | 20 Gb | Mirror.vmdk |

Using the **System** section of the **Oracle VM VirtualBox Manager**, add a second processor to your Solaris 11 VM.

Finally, boot your Solaris 11 VM.
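If you prefer to script the VM preparation from the host, the same changes can be made with **VBoxManage**. The following is a minimal sketch; the VM name //Solaris 11//, the controller name //SATA// and the SATA port numbers are assumptions that you must adapt to your own setup:

<code>
# Create one of the five 200 Mb disks and the 20 Gb disk (sizes are in Mb)
VBoxManage createmedium disk --filename Disk1.vmdk --size 200 --format VMDK
VBoxManage createmedium disk --filename Mirror.vmdk --size 20480 --format VMDK

# Attach a disk to a free SATA port of the powered-off VM
VBoxManage storageattach "Solaris 11" --storagectl "SATA" --port 2 --device 0 --type hdd --medium Disk1.vmdk

# Add the second processor
VBoxManage modifyvm "Solaris 11" --cpus 2
</code>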
=====Introduction=====

All previous versions of Solaris, including Solaris 10, use the **Unix File System** ( [[wp>Unix_File_System|UFS]] ) as their default file system. Solaris 11 uses **ZFS** as its default file system.

====Solaris 11 and ZFS====
The Solaris 11 implementation of ZFS includes the following capabilities:

  * encryption,
  * compression,
  * de-duplication,
  * quotas,
  * reservations,
  * file system migration between pools,
  * snapshots.
====ZFS Vocabulary====

The introduction of ZFS was accompanied by a new vocabulary:

^ Term ^ Description ^
| pool | A storage element regrouping one or more disk partitions containing one or more file systems |
| file system | A dataset containing directories and files |
| clone | A writable copy of a snapshot |
| snapshot | A read-only copy of the state of a file system |
| compression | The reduction of storage space achieved by encoding data more compactly |
| de-duplication | The reduction of storage space achieved by the removal of duplicate data blocks |
| checksum | A 256-bit number used to validate data when read or written |
| encryption | The protection of data using a passphrase |
| quota | The maximum amount of disk space a user or group can use |
| reservation | A preallocated amount of disk space assigned to a user or file system |
| mirror | An exact duplicate of a disk or partition |
| RAID-Z | ZFS implementation of [[wp>Standard_RAID_levels|RAID 5]] ( single parity ) |
| RAID-Z2 | ZFS implementation of [[wp>Standard_RAID_levels|RAID 6]] ( double parity ) |
| RAID-Z3 | ZFS implementation of triple parity RAID |
====ZFS Commands====

The ZFS commands are as follows:

^ Command ^ Description ^
| zpool | Used to manage ZFS pools |
| zfs | Used to manage ZFS file systems |
===The zpool Command===

The **zpool** command uses a set of subcommands:

^ Command ^ Description ^
| create | Creates a storage pool and configures its mount point |
| destroy | Destroys a storage pool |
| list | Displays the health and storage usage of a pool |
| get | Displays a list of pool properties |
| set | Sets a property for a pool |
| status | Displays the health of a pool |
| history | Displays the commands issued for a pool since its creation |
| add | Adds a disk to an existing pool |
| remove | Removes a disk from an existing pool |
| replace | Replaces a disk in a pool by another disk |
| scrub | Verifies the checksums of a pool and repairs any damaged data blocks |
===The zfs Command===

The **zfs** command uses a set of subcommands:

^ Command ^ Description ^
| create | Creates a ZFS file system, sets its properties and automatically mounts it |
| destroy | Destroys a ZFS file system or snapshot |
| list | Displays the properties and storage usage of a ZFS file system |
| get | Displays a list of ZFS file system properties |
| set | Sets a property for a ZFS file system |
| snapshot | Creates a read-only copy of the state of a ZFS file system |
| rollback | Returns the file system to the state of the **last** snapshot |
| send | Creates a file from a snapshot in order to migrate it to another pool |
| receive | Retrieves a file created by the subcommand **send** |
| clone | Creates a writable copy of a snapshot |
| promote | Transforms a clone into a standalone ZFS file system |
| diff | Displays the file differences between two snapshots or between a snapshot and its parent file system |
| mount | Mounts a ZFS file system at a specific mount point |
| unmount | Unmounts a ZFS file system |

====Solaris Slices====

Those familiar with UFS on Solaris will remember having to manipulate Solaris **slices**. Those slices still exist:
<code>
root@solaris:~# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c7t0d0 <ATA-VBOX HARDDISK-1.0-20.00GB>
          /pci@0,0/pci8086,2829@d/disk@0,0
       1. c7t2d0 <ATA-VBOX HARDDISK-1.0-200.00MB>
          /pci@0,0/pci8086,2829@d/disk@2,0
       2. c7t3d0 <ATA-VBOX HARDDISK-1.0-200.00MB>
          /pci@0,0/pci8086,2829@d/disk@3,0
       3. c7t4d0 <ATA-VBOX HARDDISK-1.0-200.00MB>
          /pci@0,0/pci8086,2829@d/disk@4,0
       4. c7t5d0 <ATA-VBOX HARDDISK-1.0-200.00MB>
          /pci@0,0/pci8086,2829@d/disk@5,0
       5. c7t6d0 <ATA-VBOX HARDDISK-1.0-200.00MB>
          /pci@0,0/pci8086,2829@d/disk@6,0
       6. c7t7d0 <ATA-VBOX HARDDISK-1.0-20.00GB>
          /pci@0,0/pci8086,2829@d/disk@7,0
Specify disk (enter its number): 0
selecting c7t0d0
[disk formatted]
/dev/dsk/c7t0d0s1 is part of active ZFS pool rpool. Please see zpool(1M).


FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        save       - save new disk/partition definitions
        inquiry    - show disk ID
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> part


PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> print
Current partition table (original):
Total disk sectors available: 41926589 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector         Size         Last Sector
  0  BIOS_boot    wm               256      255.88MB          524287
  1        usr    wm            524288       19.74GB          41926589
  2 unassigned    wm                 0           0                0
  3 unassigned    wm                 0           0                0
  4 unassigned    wm                 0           0                0
  5 unassigned    wm                 0           0                0
  6 unassigned    wm                 0           0                0
  8   reserved    wm          41926590        8.00MB          41942973

partition> quit
</code>
<WRAP center round important 60%>
Note the following line in the above output:

**/dev/dsk/c7t0d0s1 is part of active ZFS pool rpool. Please see zpool(1M).**

Since you are using ZFS for storage management, you no longer need to worry about slices!
</WRAP>
====iSCSI Storage====

In Solaris 10, the configuration of iSCSI LUNs was accomplished using the **iscsitadm** command and the ZFS **shareiscsi** property. In Solaris 11 these have been replaced by the **Common Multiprotocol SCSI Target** ( COMSTAR ). COMSTAR is a **framework** that turns a Solaris host into a SCSI target.

COMSTAR includes the following features:

  * scalability,
  * compatibility with generic host adapters,
  * multipathing,
  * LUN masking and mapping functions.

An iSCSI target is an **endpoint** waiting for connections from clients called **initiators**. A target can provide multiple **Logical Units**, each of which provides classic read and write data operations.

Each logical unit is backed by a **storage device**. You can create a logical unit backed by any one of the following, as illustrated by the sketch after this list:

  * a file,
  * a thin-provisioned file,
  * a disk partition,
  * a ZFS volume.
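For example, the only difference between a fully allocated ZFS volume and a thin-provisioned one is the **-s** switch of **zfs create**. The following is a minimal sketch; the pool and volume names are illustrative:

<code>
# Fully allocated 100 Mb volume - the space is reserved immediately
zfs create -V 100M mypool/lun0

# Thin-provisioned ( sparse ) 100 Mb volume - space is only consumed as data is written
zfs create -s -V 100M mypool/lun1
</code>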

=====LAB #1 - Managing ZFS Storage=====

====Displaying Online Help====

Both the **zpool** and **zfs** commands include built-in online help:
<code>
root@solaris:~# zpool help
The following commands are supported:
add      attach   clear    create   destroy  detach   export   get
help     history  import   iostat   list     offline  online   remove
replace  scrub    set      split    status   upgrade
For more info, run: zpool help <command>
root@solaris:~# zfs help
The following commands are supported:
allow       clone       create      destroy     diff        get
groupspace  help        hold        holds       inherit     key
list        mount       promote     receive     release     rename
rollback    send        set         share       snapshot    unallow
unmount     unshare     upgrade     userspace
For more info, run: zfs help <command>
</code>
<WRAP center round important 60%>
Note that you can get help on subcommands by using either **zpool help <subcommand>** or **zfs help <subcommand>**.
</WRAP>

====Checking Pool Status====

Use the **zpool** command with the **list** subcommand to display the details of your pool:

<code>
root@solaris:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  19.6G  6.96G  12.7G  35%  1.00x  ONLINE  -
</code>
Now use the **status** subcommand:

<code>
root@solaris:~# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c7t0d0s1  ONLINE       0     0     0

errors: No known data errors
</code>
====Creating a Mirrored Pool====

Create a ZFS mirrored pool called **mypool** using the first two of the five disks you recently created:

<code>
root@solaris:~# zpool create mypool mirror c7t2d0 c7t3d0
</code>
Check that your pool has been created:

<code>
root@solaris:~# zpool list
NAME     SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
mypool   187M   346K   187M   0%  1.00x  ONLINE  -
rpool   19.6G  6.96G  12.7G  35%  1.00x  ONLINE  -
</code>
Display the file systems using the **zfs** command and the **list** subcommand:

<code>
root@solaris:~# zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
mypool                              346K   155M    31K  /mypool
rpool                              7.02G  12.3G  4.58M  /rpool
rpool/ROOT                         4.87G  12.3G    31K  legacy
rpool/ROOT/solaris                 4.87G  12.3G  3.92G  /
rpool/ROOT/solaris-backup-1         127K  12.3G  3.92G  /
rpool/ROOT/solaris-backup-1/var      46K  12.3G   862M  /var
rpool/ROOT/solaris/var              865M  12.3G   862M  /var
rpool/VARSHARE                     54.5K  12.3G  54.5K  /var/share
rpool/dump                          516M  12.4G   500M  -
rpool/export                       87.6M  12.3G    32K  /export
rpool/export/home                  87.5M  12.3G    32K  /export/home
rpool/export/home/trainee          87.5M  12.3G  87.5M  /export/home/trainee
rpool/swap                         1.03G  12.3G  1.00G  -
</code>
<WRAP center round important 60%>
Note that the zpool command automatically creates a file system on **mypool** and mounts it at **/mypool**.
</WRAP>

====Adding File Systems to an Existing Pool====

Now create two file systems in your pool called **home** and **home/user1** and then display the results:

<code>
root@solaris:~# zfs create mypool/home
root@solaris:~# zfs create mypool/home/user1
root@solaris:~# zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
mypool                              412K   155M    31K  /mypool
mypool/home                          63K   155M    32K  /mypool/home
mypool/home/user1                    31K   155M    31K  /mypool/home/user1
rpool                              7.02G  12.3G  4.58M  /rpool
rpool/ROOT                         4.87G  12.3G    31K  legacy
rpool/ROOT/solaris                 4.87G  12.3G  3.92G  /
rpool/ROOT/solaris-backup-1         127K  12.3G  3.92G  /
rpool/ROOT/solaris-backup-1/var      46K  12.3G   862M  /var
rpool/ROOT/solaris/var              865M  12.3G   862M  /var
rpool/VARSHARE                     54.5K  12.3G  54.5K  /var/share
rpool/dump                          516M  12.4G   500M  -
rpool/export                       87.6M  12.3G    32K  /export
rpool/export/home                  87.5M  12.3G    32K  /export/home
rpool/export/home/trainee          87.5M  12.3G  87.5M  /export/home/trainee
rpool/swap                         1.03G  12.3G  1.00G  -
</code>
<WRAP center round important 60%>
Note that the two file systems are automatically mounted under the pool's mount point: **/mypool/home** and **/mypool/home/user1**.
</WRAP>
====Changing the Pool Mount Point====

Suppose that you want the /home file system mounted elsewhere rather than under the /mypool mount point. With ZFS, this is very simple:

<code>
root@solaris:~# zfs set mountpoint=/users mypool/home
root@solaris:~# zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
mypool                              412K   155M    31K  /mypool
mypool/home                          63K   155M    32K  /users
mypool/home/user1                    31K   155M    31K  /users/user1
rpool                              7.02G  12.3G  4.58M  /rpool
rpool/ROOT                         4.87G  12.3G    31K  legacy
rpool/ROOT/solaris                 4.87G  12.3G  3.92G  /
rpool/ROOT/solaris-backup-1         127K  12.3G  3.92G  /
rpool/ROOT/solaris-backup-1/var      46K  12.3G   862M  /var
rpool/ROOT/solaris/var              865M  12.3G   862M  /var
rpool/VARSHARE                     54.5K  12.3G  54.5K  /var/share
rpool/dump                          516M  12.4G   500M  -
rpool/export                       87.6M  12.3G    32K  /export
rpool/export/home                  87.5M  12.3G    32K  /export/home
rpool/export/home/trainee          87.5M  12.3G  87.5M  /export/home/trainee
rpool/swap                         1.03G  12.3G  1.00G  -
</code>
<WRAP center round important 60%>
Note that ZFS has automatically and transparently unmounted **/mypool/home** and remounted it at **/users**.
</WRAP>
====Adding a Hot Spare====

To display all of the properties associated with **mypool**, use the **zpool** command and the **get** subcommand:

<code>
root@solaris:~# zpool get all mypool
NAME    PROPERTY       VALUE                 SOURCE
mypool  allocated      516K                  -
mypool  altroot        -                     default
mypool  autoexpand     off                   default
mypool  autoreplace    off                   default
mypool  bootfs         -                     default
mypool  cachefile      -                     default
mypool  capacity       0%                    -
mypool  dedupditto     0                     default
mypool  dedupratio     1.00x                 -
mypool  delegation     on                    default
mypool  failmode       wait                  default
mypool  free           186M                  -
mypool  guid           17264840452851699476  -
mypool  health         ONLINE                -
mypool  listshares     off                   default
mypool  listsnapshots  off                   default
mypool  readonly       off                   -
mypool  size           187M                  -
mypool  version        34                    -
</code>
<WRAP center round important 60%>
Note that the **autoreplace** property is set to **off**. In order to use a hot spare, this property needs to be set to **on**.
</WRAP>
Set the autoreplace property to on:

<code>
root@solaris:~# zpool set autoreplace=on mypool
root@solaris:~# zpool get autoreplace mypool
NAME    PROPERTY     VALUE  SOURCE
mypool  autoreplace  on     local
</code>
Add the fourth 200 Mb disk that you have created to **mypool** as a spare:

<code>
root@solaris:~# zpool add mypool spare c7t5d0
root@solaris:~# zpool status mypool
  pool: mypool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
        spares
          c7t5d0    AVAIL

errors: No known data errors
</code>
====Observing Pool Activity====

Create a random data file in **/users/user1**:

<code>
root@solaris:~# cat /dev/urandom > /users/user1/randomfile &
[1] 2617
</code>

<WRAP center round important 60%>
Write down the PID, you will need it in 2 minutes to kill the process you have just started.
</WRAP>

Now display the writes to the pool using the **iostat** subcommand of the **zpool** command:

<code>
root@solaris:~# zpool iostat -v 5
                capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
mypool       5.08M   182M      0      5    355  54.2K
  mirror     5.08M   182M      0      5    355  54.2K
    c7t2d0       -      -      0      5    653  54.7K
    c7t3d0       -      -      0      5    649  54.7K
-----------  -----  -----  -----  -----  -----  -----
rpool        6.96G  12.7G      0      1  9.90K  12.8K
  c7t0d0s1   6.96G  12.7G      0      1  9.90K  12.8K
-----------  -----  -----  -----  -----  -----  -----

                capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
mypool       21.8M   165M      0     67      0  6.41M
  mirror     21.8M   165M      0     67      0  6.41M
    c7t2d0       -      -      0     65      0  6.41M
    c7t3d0       -      -      0     65      0  6.41M
-----------  -----  -----  -----  -----  -----  -----
rpool        6.96G  12.7G      0      0      0    882
  c7t0d0s1   6.96G  12.7G      0      0      0    882
-----------  -----  -----  -----  -----  -----  -----

                capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
mypool       46.3M   141M      0     70      0  6.58M
  mirror     46.3M   141M      0     70      0  6.58M
    c7t2d0       -      -      0     68      0  6.58M
    c7t3d0       -      -      0     68      0  6.58M
-----------  -----  -----  -----  -----  -----  -----
rpool        6.96G  12.7G      0      0      0      0
  c7t0d0s1   6.96G  12.7G      0      0      0      0
-----------  -----  -----  -----  -----  -----  -----
^C
</code>
<WRAP center round todo 60%>
Is your mirror functioning?
</WRAP>

Now kill the process creating the file **randomfile**:

  # kill -9 PID [Enter]

Delete the file **/users/user1/randomfile**:
<code>
root@solaris:~# rm -f /users/user1/randomfile
[1]+  Killed                  cat /dev/urandom > /users/user1/randomfile
</code>
====Setting a User Quota====

To set a user quota, you need to use the **set** subcommand of the **zfs** command:

<code>
root@solaris:~# zfs set quota=50M mypool/home/user1
root@solaris:~# zfs get quota mypool/home/user1
NAME               PROPERTY  VALUE  SOURCE
mypool/home/user1  quota     50M    local
root@solaris:~# zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
mypool                              448K   155M    31K  /mypool
mypool/home                          95K   155M    32K  /users
mypool/home/user1                    31K  50.0M    31K  /users/user1
rpool                              7.03G  12.3G  4.58M  /rpool
rpool/ROOT                         4.87G  12.3G    31K  legacy
rpool/ROOT/solaris                 4.87G  12.3G  3.92G  /
rpool/ROOT/solaris-backup-1         127K  12.3G  3.92G  /
rpool/ROOT/solaris-backup-1/var      46K  12.3G   862M  /var
rpool/ROOT/solaris/var              865M  12.3G   862M  /var
rpool/VARSHARE                     54.5K  12.3G  54.5K  /var/share
rpool/dump                          516M  12.4G   500M  -
rpool/export                       87.6M  12.3G    32K  /export
rpool/export/home                  87.5M  12.3G    32K  /export/home
rpool/export/home/trainee          87.5M  12.3G  87.5M  /export/home/trainee
rpool/swap                         1.03G  12.3G  1.00G  -
</code>
<WRAP center round important 60%>
Note that the quota of 50 Mb has been set on **/users/user1**.
</WRAP>

Now create a random data file in **/users/user1**:

<code>
root@solaris:~# cat /dev/urandom > /users/user1/testfile
cat: output error (0/131072 characters written)
Disc quota exceeded
</code>
<WRAP center round important 60%>
After a few minutes, you will see the **Disc quota exceeded** message.
</WRAP>

Looking at the available disk space on **/users/user1**, you will see that it is full:

<code>
root@solaris:~# zfs list mypool/home/user1
NAME               USED  AVAIL  REFER  MOUNTPOINT
mypool/home/user1  50.0M     0  50.0M  /users/user1
</code>

Delete the **testfile** file:

<code>
root@solaris:~# rm -f /users/user1/testfile
</code>
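Note that the quota you have just set applies to the **mypool/home/user1** file system as a whole. ZFS can also enforce per-user or per-group limits on a single file system through the **userquota@** and **groupquota@** properties. A minimal sketch, assuming the local **trainee** account:

<code>
root@solaris:~# zfs set userquota@trainee=25M mypool/home/user1
root@solaris:~# zfs get userquota@trainee mypool/home/user1
</code>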
====Setting a User Reservation====

As with setting quotas, setting a reservation is very simple:

<code>
root@solaris:~# zfs set reservation=25M mypool/home/user1
root@solaris:~# zfs get reservation mypool/home/user1
NAME               PROPERTY     VALUE  SOURCE
mypool/home/user1  reservation  25M    local
</code>
====Using Snapshots====

Create a file in **/users/user1** called **snapshot1**:

<code>
root@solaris:~# echo "This is a test file for the first snapshot" > /users/user1/snapshot1
root@solaris:~# ls /users/user1
snapshot1
</code>

To create a snapshot of a ZFS file system, you need to use the **snapshot** subcommand of the **zfs** command:

<code>
root@solaris:~# zfs snapshot mypool/home/user1@Dec13
</code>

The snapshot is located in a hidden directory under **/users/user1** called **.zfs**:

<code>
root@solaris:~# ls -l /users/user1/.zfs/snapshot
total 3
drwxr-xr-x   2 root     root           3 Dec 13 14:20 Dec13
</code>

As you can see, the snapshot contains the **snapshot1** file:

<code>
root@solaris:~# ls -l /users/user1/.zfs/snapshot/Dec13
total 2
-rw-r--r--   1 root     root          43 Dec 13 14:20 snapshot1
</code>

It is important to note here that the .zfs directory is also hidden from the **ls** command, even when using the **-a** switch:

<code>
root@solaris:~# ls -la /users/user1
total 8
drwxr-xr-x   2 root     root           3 Dec 13 14:20 .
drwxr-xr-x   3 root     root           3 Dec 13 14:00 ..
-rw-r--r--   1 root     root          43 Dec 13 14:20 snapshot1
</code>
You can also create a recursive snapshot of all the file systems in a pool:

<code>
root@solaris:~# zfs snapshot -r mypool@Dec13-1
</code>

The snapshots are stored in their respective .zfs directories:

<code>
root@solaris:~# ls /mypool/.zfs/snapshot
Dec13-1
root@solaris:~# ls /users/user1/.zfs/snapshot
Dec13    Dec13-1
</code>
You can list all snapshots as follows:

<code>
root@solaris:~# zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
mypool@Dec13-1                 0      -    31K  -
mypool/home@Dec13-1            0      -    32K  -
mypool/home/user1@Dec13        0      -  31.5K  -
mypool/home/user1@Dec13-1      0      -  31.5K  -
</code>
Create another file in **/users/user1** called **snapshot2**:

<code>
root@solaris:~# echo "This is a test file for the second snapshot" > /users/user1/snapshot2
root@solaris:~# ls -l /users/user1
total 4
-rw-r--r--   1 root     root          43 Dec 13 14:20 snapshot1
-rw-r--r--   1 root     root          44 Dec 13 14:30 snapshot2
root@solaris:~# cat /users/user1/snapshot1
This is a test file for the first snapshot
root@solaris:~# cat /users/user1/snapshot2
This is a test file for the second snapshot
</code>
Now take a second recursive snapshot of **mypool**:

<code>
root@solaris:~# zfs snapshot -r mypool@Dec13-2
root@solaris:~# zfs list -t snapshot -r mypool
NAME                        USED  AVAIL  REFER  MOUNTPOINT
mypool@Dec13-1                 0      -    31K  -
mypool@Dec13-2                 0      -    31K  -
mypool/home@Dec13-1            0      -    32K  -
mypool/home@Dec13-2            0      -    32K  -
mypool/home/user1@Dec13       21K      -  31.5K  -
mypool/home/user1@Dec13-1      0      -  31.5K  -
mypool/home/user1@Dec13-2      0      -    32K  -
</code>
The **diff** subcommand of the **zfs** command displays the differences between two snapshots:

<code>
root@solaris:~# zfs diff mypool/home/user1@Dec13-1 mypool/home/user1@Dec13-2
M       /users/user1/
M       /users/user1/snapshot1
+       /users/user1/snapshot2
</code>
<WRAP center round important 60%>
The above output shows that the **/users/user1** directory and the **snapshot1** file have been modified and that the **snapshot2** file has been added.
</WRAP>

This output can contain the following characters:

^ Character ^ Description ^
| M | **M**odified |
| R | **R**enamed |
| + | Added |
| - | Deleted |
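If you would like to see the **R** character in action, you could rename a file and take a further snapshot before running **zfs diff** again. The sketch below is purely illustrative, with abridged output; do not run it if you intend to follow the rollback steps below to the letter:

<code>
root@solaris:~# mv /users/user1/snapshot1 /users/user1/renamed1
root@solaris:~# zfs snapshot mypool/home/user1@Dec13-3
root@solaris:~# zfs diff mypool/home/user1@Dec13-2 mypool/home/user1@Dec13-3
M       /users/user1/
R       /users/user1/snapshot1 -> /users/user1/renamed1
</code>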

Note that you cannot compare the snapshots in the reverse order:

<code>
root@solaris:~# zfs diff mypool/home/user1@Dec13-2 mypool/home/user1@Dec13-1
Unable to obtain diffs: mypool/home/user1@Dec13-1 is not a descendant dataset of mypool/home/user1@Dec13-2
</code>
====Rolling Back to a Snapshot====

If you wish to roll back to a specific snapshot, note that you can **only** roll back to the **last** snapshot, as shown by the output of **zfs list**:

<code>
root@solaris:~# zfs list -t snapshot -r mypool
NAME                        USED  AVAIL  REFER  MOUNTPOINT
mypool@Dec13-1                 0      -    31K  -
mypool@Dec13-2                 0      -    31K  -
mypool/home@Dec13-1            0      -    32K  -
mypool/home@Dec13-2            0      -    32K  -
mypool/home/user1@Dec13       21K      -  31.5K  -
mypool/home/user1@Dec13-1      0      -  31.5K  -
mypool/home/user1@Dec13-2      0      -    32K  -
root@solaris:~# zfs rollback mypool/home/user1@Dec13-1
cannot rollback to 'mypool/home/user1@Dec13-1': more recent snapshots exist
use '-r' to force deletion of the following snapshots:
mypool/home/user1@Dec13-2
</code>
Delete the **Dec13-2** snapshot as follows:

<code>
root@solaris:~# zfs destroy mypool/home/user1@Dec13-2
root@solaris:~# zfs list -t snapshot -r mypool
NAME                        USED  AVAIL  REFER  MOUNTPOINT
mypool@Dec13-1                 0      -    31K  -
mypool@Dec13-2                 0      -    31K  -
mypool/home@Dec13-1            0      -    32K  -
mypool/home@Dec13-2            0      -    32K  -
mypool/home/user1@Dec13       21K      -  31.5K  -
mypool/home/user1@Dec13-1      0      -  31.5K  -
</code>

Now roll back to **Dec13-1**:

<code>
root@solaris:~# zfs rollback mypool/home/user1@Dec13-1
root@solaris:~# ls -l /users/user1
total 2
-rw-r--r--   1 root     root          43 Dec 13 14:20 snapshot1
</code>

<WRAP center round important 60%>
Note that the **snapshot2** file has disappeared since it was not included in the **Dec13-1** snapshot.
</WRAP>

====Cloning a Snapshot====

Snapshots are read-only. To convert a snapshot to a writable file system, you can use the **clone** subcommand of the **zfs** command:

<code>
root@solaris:~# zfs clone mypool/home/user1@Dec13-1 mypool/home/user2
root@solaris:~# zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
mypool                             25.5M   129M    31K  /mypool
mypool/home                        25.1M   129M    34K  /users
mypool/home/user1                  31.5K  50.0M  31.5K  /users/user1
mypool/home/user2                    18K   129M  31.5K  /users/user2
rpool                              7.03G  12.3G  4.58M  /rpool
rpool/ROOT                         4.87G  12.3G    31K  legacy
rpool/ROOT/solaris                 4.87G  12.3G  3.92G  /
rpool/ROOT/solaris-backup-1         127K  12.3G  3.92G  /
rpool/ROOT/solaris-backup-1/var      46K  12.3G   862M  /var
rpool/ROOT/solaris/var              865M  12.3G   862M  /var
rpool/VARSHARE                     54.5K  12.3G  54.5K  /var/share
rpool/dump                          516M  12.4G   500M  -
rpool/export                       87.6M  12.3G    32K  /export
rpool/export/home                  87.5M  12.3G    32K  /export/home
rpool/export/home/trainee          87.5M  12.3G  87.5M  /export/home/trainee
rpool/swap                         1.03G  12.3G  1.00G  -
</code>

Display the contents of **/users/user2**:

<code>
root@solaris:~# ls -l /users/user2
total 2
-rw-r--r--   1 root     root          43 Dec 13 14:20 snapshot1
</code>
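A clone remains dependent on the snapshot it was created from: you cannot destroy **mypool/home/user1@Dec13-1** while **mypool/home/user2** exists. The **promote** subcommand reverses this dependency and turns the clone into a standalone file system. A minimal sketch:

<code>
root@solaris:~# zfs promote mypool/home/user2
</code>

After promotion, the snapshot history belongs to **mypool/home/user2** and the original file system becomes the dependent dataset.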

====Using Compression====

In order to minimize storage space, you can make a file system use compression. Compression can be activated either at or after creation time. However, compression only applies to new data: any data already in the file system when compression is activated remains uncompressed.

To activate compression on an existing file system, you need to set its **compression** property to **on**:

<code>
root@solaris:~# zfs set compression=on mypool/home/user1
root@solaris:~# zfs get compression mypool/home/user1
NAME               PROPERTY     VALUE  SOURCE
mypool/home/user1  compression  on     local
</code>
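To check how effective compression is, read the **compressratio** property after writing some new data to the file system. A minimal sketch; a value of 1.00x simply means that no compressible data has been written yet:

<code>
root@solaris:~# zfs get compressratio mypool/home/user1
NAME               PROPERTY       VALUE  SOURCE
mypool/home/user1  compressratio  1.00x  -
</code>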

====Using De-duplication====

Another space saving property of ZFS file systems is **de-duplication**:

<code>
root@solaris:~# zfs set dedup=on mypool/home/user1
root@solaris:~# zfs get dedup mypool/home/user1
NAME               PROPERTY  VALUE  SOURCE
mypool/home/user1  dedup     on     local
</code>
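The efficiency of de-duplication is reported at pool level by the **dedupratio** property, which is also shown in the **DEDUP** column of **zpool list**. A minimal sketch:

<code>
root@solaris:~# zpool get dedupratio mypool
NAME    PROPERTY    VALUE  SOURCE
mypool  dedupratio  1.00x  -
</code>

Keep in mind that de-duplication maintains a table of block checksums in memory, so enabling it on a system with little RAM can severely degrade performance.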

====Using Encryption====

Unlike **compression** and **de-duplication**, encryption can only be activated at file system creation time:

<code>
root@solaris:~# zfs create -o encryption=on mypool/home/user3
Enter passphrase for 'mypool/home/user3': fenestros
Enter again: fenestros
</code>

<WRAP center round important 60%>
Note that the passphrase is not shown in the real output of the command. It is in the above example only for the purposes of this lesson.
</WRAP>

To check if encryption is active on a file system, use the following command:

<code>
root@solaris:~# zfs get encryption mypool/home/user1
NAME               PROPERTY    VALUE  SOURCE
mypool/home/user1  encryption  off    -
root@solaris:~# zfs get encryption mypool/home/user3
NAME               PROPERTY    VALUE  SOURCE
mypool/home/user3  encryption  on     local
</code>
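In Solaris 11, the wrapping key of an encrypted file system is managed with the **key** subcommand of the **zfs** command. As a hedged sketch, changing the passphrase of the file system created above would look like this:

<code>
root@solaris:~# zfs key -c mypool/home/user3
Enter new passphrase for 'mypool/home/user3':
Enter again:
</code>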

====Replacing a Faulty Disk====

In the case of a faulty disk and no hot spares, replacing the disk is a one-line operation using the **replace** subcommand of the **zpool** command:

<code>
root@solaris:~# zpool status mypool
  pool: mypool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
        spares
          c7t5d0    AVAIL

errors: No known data errors
root@solaris:~# zpool replace mypool c7t2d0 c7t4d0
</code>

Use the **status** subcommand of the **zpool** command again to see what has happened:

<code>
root@solaris:~# zpool status mypool
  pool: mypool
 state: ONLINE
  scan: resilvered 601K in 0h0m with 0 errors on Thu Dec 13 11:45:49 2012
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
        spares
          c7t5d0    AVAIL

errors: No known data errors
</code>

<WRAP center round important 60%>
ZFS //resilvering// is the process of rebuilding the contents of the replaced disk onto its replacement, as shown by the **scan** line in the above output. It is the ZFS equivalent of re-synchronizing a mirror.
</WRAP>

====Destroying a Pool====

Destroying a pool is achieved by using the **destroy** subcommand of the **zpool** command:

<code>
root@solaris:~# zpool destroy mypool
</code>

As you can see from the following output, this operation has also destroyed all the associated snapshots:

<code>
root@solaris:~# zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
rpool                              7.03G  12.3G  4.58M  /rpool
rpool/ROOT                         4.87G  12.3G    31K  legacy
rpool/ROOT/solaris                 4.87G  12.3G  3.92G  /
rpool/ROOT/solaris-backup-1         127K  12.3G  3.92G  /
rpool/ROOT/solaris-backup-1/var      46K  12.3G   862M  /var
rpool/ROOT/solaris/var              865M  12.3G   862M  /var
rpool/VARSHARE                     54.5K  12.3G  54.5K  /var/share
rpool/dump                          516M  12.4G   500M  -
rpool/export                       87.6M  12.3G    32K  /export
rpool/export/home                  87.5M  12.3G    32K  /export/home
rpool/export/home/trainee          87.5M  12.3G  87.5M  /export/home/trainee
rpool/swap                         1.03G  12.3G  1.00G  -
root@solaris:~# zfs list -t snapshot -r mypool
cannot open 'mypool': dataset does not exist
root@solaris:~# ls -l /users
total 0
</code>

<WRAP center round important 60%>
As you have seen above, destroying a pool, **all** the data in it and **all** the associated snapshots is disconcertingly simple. You should therefore be very careful when using the **destroy** subcommand.
</WRAP>

====Creating a RAID-5 Pool====

You can create a RAID-5 pool using the RAID-Z algorithm:

<code>
root@solaris:~# zpool create mypool raidz c7t2d0 c7t3d0 c7t4d0 spare c7t5d0
root@solaris:~# zpool status mypool
  pool: mypool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0
        spares
          c7t5d0    AVAIL

errors: No known data errors
</code>

Destroy **mypool**:

<code>
root@solaris:~# zpool destroy mypool
</code>

====Creating a RAID-6 Pool====

You can create a RAID-6 pool using the RAID-Z2 algorithm:

<code>
root@solaris:~# zpool create mypool raidz2 c7t2d0 c7t3d0 c7t4d0 c7t5d0 spare c7t6d0
root@solaris:~# zpool status mypool
  pool: mypool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
            c7t4d0  ONLINE       0     0     0
            c7t5d0  ONLINE       0     0     0
        spares
          c7t6d0    AVAIL

errors: No known data errors
</code>

Destroy **mypool**:

<code>
root@solaris:~# zpool destroy mypool
</code>

<WRAP center round todo 60%>
Create a triple parity RAID **mypool** using your five 200MB disks. Do not delete it.
</WRAP>

====Displaying the Zpool History====

You can review everything that has been done to existing pools by using the **history** subcommand of the **zpool** command:

<code>
root@solaris:~# zpool history
History for 'mypool':
2012-12-13.14:43:54 zpool create mypool raidz3 c7t2d0 c7t3d0 c7t4d0 c7t5d0 c7t6d0

History for 'rpool':
2012-11-20.19:31:37 zpool create -f -B rpool c7t0d0s1
2012-11-20.19:31:38 zfs create -o mountpoint=legacy rpool/ROOT
2012-11-20.19:31:39 zfs create -o canmount=noauto -o mountpoint=/ rpool/ROOT/solaris
2012-11-20.19:31:40 zfs create -o canmount=noauto -o mountpoint=/var rpool/ROOT/solaris/var
2012-11-20.19:31:41 zfs create -o mountpoint=/export rpool/export
2012-11-20.19:31:42 zfs create rpool/export/home
2012-11-20.19:31:43 zfs create -b 4096 -V 1024m rpool/swap
2012-11-20.19:31:44 zfs create -b 131072 -V 516m rpool/dump
2012-11-20.19:31:45 zpool set bootfs=rpool/ROOT/solaris rpool
2012-11-20.19:32:01 zfs set canmount=noauto rpool/ROOT/solaris
2012-11-20.19:32:02 zfs set canmount=on rpool
2012-11-20.22:07:13 zfs create rpool/export/home/trainee
2012-12-01.14:12:09 zfs set primarycache=metadata rpool/swap
2012-12-03.13:45:31 zpool scrub rpool
2012-12-08.14:21:56 zfs snapshot -r rpool/ROOT/solaris@backup-1
2012-12-11.15:34:12 zfs clone rpool/ROOT/solaris@backup-1 rpool/ROOT/solaris-backup-1
2012-12-12.09:02:48 zfs set canmount=noauto rpool/ROOT/solaris-backup-1
</code>

<WRAP center round important 60%>
Note that the history related to destroyed pools has been deleted.
</WRAP>
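The **history** subcommand also accepts the **-i** and **-l** switches, which respectively add internally logged events and the user name, hostname and zone in which each command was issued:

<code>
root@solaris:~# zpool history -il mypool
</code>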

=====LAB #2 - Managing iSCSI Storage=====

====Installing the COMSTAR Server====

Start by installing the COMSTAR storage server software:

<code>
root@solaris:~# pkg install group/feature/storage-server
           Packages to install:  20
       Create boot environment:  No
Create backup boot environment: Yes
            Services to change:   1

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                              20/20     1023/1023    54.5/54.5  1.1M/s

PHASE                                          ITEMS
Installing new actions                     1634/1634
Updating package state database                 Done
Updating image state                            Done
Creating fast lookup database                   Done
</code>

The **COMSTAR target mode framework** runs as the **stmf** service. Check to see if it is enabled:

<code>
root@solaris:~# svcs stmf
STATE          STIME    FMRI
disabled       14:55:14 svc:/system/stmf:default
</code>
Enable the service:

<code>
root@solaris:~# svcadm enable stmf
root@solaris:~# svcs stmf
STATE          STIME    FMRI
online         15:01:36 svc:/system/stmf:default
</code>
You can check the status of the server using the **stmfadm** command:

<code>
root@solaris:~# stmfadm list-state
Operational Status: online
Config Status     : initialized
ALUA Status       : disabled
ALUA Node         : 0
</code>
====Creating SCSI Logical Units====

First you need to create your **Backing Storage Device** within your **mypool** pool:

<code>
root@solaris:~# zfs create -V 100M mypool/iscsi
root@solaris:~# zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
mypool                              103M  44.6M    36K  /mypool
mypool/iscsi                        103M   148M    16K  -
rpool                              7.40G  11.9G  4.58M  /rpool
rpool/ROOT                         5.23G  11.9G    31K  legacy
rpool/ROOT/solaris                 5.23G  11.9G  4.28G  /
rpool/ROOT/solaris-backup-1         127K  11.9G  3.92G  /
rpool/ROOT/solaris-backup-1/var      46K  11.9G   862M  /var
rpool/ROOT/solaris/var              865M  11.9G   862M  /var
rpool/VARSHARE                     54.5K  11.9G  54.5K  /var/share
rpool/dump                          516M  12.0G   500M  -
rpool/export                       87.6M  11.9G    32K  /export
rpool/export/home                  87.5M  11.9G    32K  /export/home
rpool/export/home/trainee          87.5M  11.9G  87.5M  /export/home/trainee
rpool/swap                         1.03G  11.9G  1.00G  -
</code>
You can see your raw device in the **/dev/zvol/rdsk/mypool** directory:

<code>
root@solaris:~# ls -l /dev/zvol/rdsk/mypool
total 0
lrwxrwxrwx   1 root     root           0 Dec 14 09:10 iscsi -> ../../../../devices/pseudo/zfs@0:2,raw
</code>

You can now create a logical unit using the **create-lu** subcommand of the **sbdadm** command:

<code>
root@solaris:~# sbdadm create-lu /dev/zvol/rdsk/mypool/iscsi
Created the following LU:

              GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f0e2a54e00000050cae6d80001  104857600            /dev/zvol/rdsk/mypool/iscsi
</code>
====Mapping the Logical Unit====

In order for the logical unit to be available to initiators, it has to be **mapped**. To map the logical unit you need its GUID, which either of the two following commands displays:

<code>
root@solaris:~# sbdadm list-lu

Found 1 LU(s)

              GUID                    DATA SIZE           SOURCE
--------------------------------  -------------------  ----------------
600144f0e2a54e00000050cae6d80001  104857600            /dev/zvol/rdsk/mypool/iscsi
</code>
<code>
root@solaris:~# stmfadm list-lu -v
LU Name: 600144F0E2A54E00000050CAE6D80001
    Operational Status     : Online
    Provider Name          : sbd
    Alias                  : /dev/zvol/rdsk/mypool/iscsi
    View Entry Count       : 0
    Data File              : /dev/zvol/rdsk/mypool/iscsi
    Meta File              : not set
    Size                   : 104857600
    Block Size             : 512
    Management URL         : not set
    Vendor ID              : SUN
    Product ID             : COMSTAR
    Serial Num             : not set
    Write Protect          : Disabled
    Write Cache Mode Select: Enabled
    Writeback Cache        : Enabled
    Access State           : Active
</code>

Create a simple mapping for this logical unit by using the **add-view** subcommand of the **stmfadm** command:

<code>
root@solaris:~# stmfadm add-view 600144F0E2A54E00000050CAE6D80001
</code>
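You can confirm the mapping with the **list-view** subcommand of the **stmfadm** command, passing the GUID with the **-l** switch; this should produce output similar to the following:

<code>
root@solaris:~# stmfadm list-view -l 600144F0E2A54E00000050CAE6D80001
View Entry: 0
    Host group   : All
    Target group : All
    LUN          : 0
</code>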
====Creating a Target====

In order to create a target, the **svc:/network/iscsi/target:default** service needs to be online. Check its status:

<code>
root@solaris:~# svcs -a | grep iscsi
STATE          STIME    FMRI
disabled       14:55:14 svc:/network/iscsi/target:default
online          9:32:21 svc:/network/iscsi/initiator:default
</code>
Start the service:

<code>
root@solaris:~# svcadm enable -r svc:/network/iscsi/target:default
root@solaris:~# svcs -a | grep iscsi
STATE          STIME    FMRI
online          9:32:21 svc:/network/iscsi/initiator:default
online         10:02:51 svc:/network/iscsi/target:default
</code>

Now create a target using the **create-target** subcommand of the **itadm** command:

<code>
root@solaris:~# itadm create-target
Target iqn.1986-03.com.sun:02:897f01e4-ad8a-6bb6-bfdb-cbe91a61d9fb successfully created
</code>

To list the target(s), use the **list-target** subcommand of the **itadm** command:

<code>
root@solaris:~# itadm list-target
TARGET NAME                                                  STATE    SESSIONS
iqn.1986-03.com.sun:02:897f01e4-ad8a-6bb6-bfdb-cbe91a61d9fb  online   0
</code>

====Configuring the Target for Discovery====

Finally, you need to configure discovery so that the target can be found by initiators. In this single-VM lab, the simplest approach is to point the local initiator at the loopback address using **SendTargets** discovery:

<code>
root@solaris:~# iscsiadm add discovery-address 127.0.0.1:3260
root@solaris:~# iscsiadm modify discovery --sendtargets enable
</code>
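As a final hedged check in this single-VM lab, you can verify that the local initiator sees the target and create the corresponding device nodes:

<code>
root@solaris:~# iscsiadm list target
root@solaris:~# devfsadm -i iscsi
</code>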
----
<html>
<div align="center">
Copyright © 2019 Hugh Norris.
</div>
</html>