======Managing ZFS Storage======
=====Preparing your Solaris 11 VM=====

Before continuing further, shutdown your Solaris 11 VM. Using the **Storage** section of the **Oracle VM VirtualBox Manager**, add the following **.vmdk** disks to the **existing** SATA controller of your Solaris 11 VM:

^ Disk ^ Size ^ Name ^
| c7t2d0 | 200 Mb | Disk1.vmdk |
| c7t3d0 | 200 Mb | Disk2.vmdk |
| c7t4d0 | 200 Mb | Disk3.vmdk |
| c7t5d0 | 200 Mb | Disk4.vmdk |
| c7t6d0 | 200 Mb | Disk5.vmdk |
| c7t7d0 | 20 Gb | Mirror.vmdk |
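If you prefer to prepare the disks from your host's command line rather than through the GUI, a **VBoxManage** sketch such as the following can create and attach the first disk. The VM name, controller name and port number below are assumptions that must be adapted to your own configuration:

<code>
# Create a 200 Mb .vmdk disk - repeat for the other disks, adjusting names and sizes
VBoxManage createmedium disk --filename Disk1.vmdk --size 200 --format VMDK
# Attach it to port 2 of the VM's existing SATA controller
VBoxManage storageattach "Solaris11" --storagectl "SATA" --port 2 --type hdd --medium Disk1.vmdk
</code>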
Using the **System** section of the **Oracle VM VirtualBox Manager**, check the rest of your VM's configuration.

Finally, boot your Solaris 11 VM.
=====Introduction=====

All previous versions of Solaris used UFS as their default file system. Solaris 11 now uses ZFS as its default file system.
====Solaris 11 and ZFS====

The Solaris 11 implementation of ZFS includes the following features:

  * 128-bit addressing,
  * data integrity assurance,
  * automated data corruption detection and repair,
  * encryption,
  * compression,
  * de-duplication,
  * quotas,
  * file system migration between pools,
  * snapshots.
====ZFS Vocabulary====

The introduction of ZFS was obviously accompanied by a new vocabulary:

^ Term ^ Description ^
| pool | A storage element regrouping one or more disk partitions containing one or more file systems |
| file system | A dataset containing directories and files |
| clone | A copy of a file system |
| snapshot | A read-only copy of the state of a file system |
| compression | The reduction of storage space achieved by the compact encoding of redundant data patterns |
| de-duplication | The reduction of storage space achieved by the removal of duplicate data blocks |
| checksum | A 256-bit number used to validate data when read or written |
| encryption | The protection of data using a password |
| quota | The maximum amount of disk space used by a group or user |
| reservation | A preallocated amount of disk space assigned to a user or file system |
| mirror | An exact duplicate of a disk or partition |
| RAID-Z | ZFS implementation of single-parity RAID, comparable to RAID-5 |
| RAID-Z2 | ZFS implementation of double-parity RAID, comparable to RAID-6 |
| RAID-Z3 | ZFS implementation of Triple Parity RAID |
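To illustrate the last three rows of this table, creating a RAID-Z pool is simply a matter of passing the **raidz**, **raidz2** or **raidz3** keyword to the **zpool create** command. The pool name below is purely illustrative — do not run this on the lab disks if you intend to follow the lab later in this module:

<code>
root@solaris:~# zpool create ztank raidz c7t2d0 c7t3d0 c7t4d0
root@solaris:~# zpool destroy ztank
</code>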
====ZFS Commands====

The ZFS commands are as follows:

^ Command ^ Description ^
| zpool | Used to manage ZFS pools |
| zfs | Used to manage ZFS file systems |
===The zpool Command===

The **zpool** command uses a set of subcommands:

^ Command ^ Description ^
| create | Creates a storage pool and configures its mount point |
| destroy | Destroys a storage pool |
| list | Displays the health and storage usage of a pool |
| get | Displays a list of pool properties |
| set | Sets a property for a pool |
| status | Displays the health of a pool |
| history | Displays the commands issued for a pool since its creation |
| add | Adds a disk to an existing pool |
| remove | Removes a disk from an existing pool |
| replace | Replaces a disk in a pool by another disk |
| scrub | Verifies the checksums of a pool and repairs any damaged data blocks |
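Two of these subcommands, **scrub** and **history**, are not demonstrated in the lab below. As a minimal sketch, assuming a pool named **mypool** already exists, they are used as follows:

<code>
root@solaris:~# zpool scrub mypool
root@solaris:~# zpool history mypool
</code>

The scrub runs in the background; its progress is reported by **zpool status**.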
===The zfs Command===

The **zfs** command uses a set of subcommands:

^ Command ^ Description ^
| create | Creates a ZFS file system, sets its properties and automatically mounts it |
| destroy | Destroys a ZFS file system or snapshot |
| list | Displays the properties and storage usage of a ZFS file system |
| get | Displays a list of ZFS file system properties |
| set | Sets a property for a ZFS file system |
| snapshot | Creates a read-only copy of the state of a ZFS file system |
| rollback | Returns the file system to the state of the **last** snapshot |
| send | Creates a file from a snapshot in order to migrate it to another pool |
| receive | Retrieves a file created by the subcommand **send** |
| clone | Creates a copy of a snapshot |
| promote | Transforms a clone into a ZFS file system |
| diff | Displays the file differences between two snapshots or a snapshot and its parent file system |
| mount | Mounts a ZFS file system at a specific mount point |
| unmount | Unmounts a ZFS file system |
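The **send** and **receive** subcommands are not covered in the lab below. A minimal sketch of migrating a snapshot through an intermediate file, assuming a snapshot named **mypool/home@backup** and a second pool named **otherpool**, would be:

<code>
root@solaris:~# zfs send mypool/home@backup > /tmp/home.snap
root@solaris:~# zfs receive otherpool/home < /tmp/home.snap
</code>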
====Solaris Slices====

Those familiar with UFS on Solaris will remember having to manipulate Solaris **slices**. Slices still exist, as shown by the output of the **format** command:
<code>
root@solaris:~# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c7t0d0 <ATA-VBOX HARDDISK-1.0-20.00GB>
          /pci@0,
       1. c7t2d0 <ATA-VBOX HARDDISK-1.0-200.00MB>
          /pci@0,0/pci8086,
       2. c7t3d0 <ATA-VBOX HARDDISK-1.0-200.00MB>
          /
       3. c7t4d0 <ATA-VBOX HARDDISK-1.0-200.00MB>
          /pci@0,
       4. c7t5d0 <ATA-VBOX HARDDISK-1.0-200.00MB>
          /
       5. c7t6d0 <ATA-VBOX HARDDISK-1.0-200.00MB>
          /
       6. c7t7d0 <ATA-VBOX HARDDISK-1.0-20.00GB>
          /
Specify disk (enter its number): 0
selecting c7t0d0
[disk formatted]
/dev/dsk/c7t0d0s1 is part of active ZFS pool rpool. Please see zpool(1M).


FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        inquiry    - show disk ID
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> part


PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> print
Current partition table (original):
Total disk sectors available: 41926589 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0  BIOS_boot    wm               256      255.87MB         524287
  1        usr    wm            524288       19.74GB         41926588
  2 unassigned    wm                 0           0                0
  3 unassigned    wm                 0           0                0
  4 unassigned    wm                 0           0                0
  5 unassigned    wm                 0           0                0
  6 unassigned    wm                 0           0                0
  8   reserved    wm          41926589        8.00MB         41942972

partition> quit
</code>
<WRAP center round important 60%>
Note the following line in the above output:

**/dev/dsk/c7t0d0s1 is part of active ZFS pool rpool. Please see zpool(1M).**

Since you are using ZFS for storage management, you no longer need to be concerned with slices.
</WRAP>
====iSCSI Storage====

In Solaris 10, the configuration of iSCSI LUNs was accomplished using the **iscsitadm** command. In Solaris 11, iSCSI targets are managed by **COMSTAR** (Common Multiprotocol SCSI Target).
COMSTAR includes the following features:

  * scalability,
  * compatibility with generic host adapters,
  * multipathing,
  * LUN masking and mapping functions.

An iSCSI target is an **endpoint** waiting for connections from clients called **initiators**. A target can provide multiple **Logical Units**, each of which provides classic read and write data operations.

Each logical unit is backed by a **storage device**. You can create a logical unit backed by any one of the following, as shown in the sketch after this list:

  * a file,
  * a thin-provisioned file,
  * a disk partition,
  * a ZFS volume.
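As a sketch of the last case, a ZFS volume is created with the **-V** switch of the **zfs create** command and can then be registered as a logical unit with the COMSTAR **stmfadm** command. The names and the size below are assumptions, and the COMSTAR packages must be installed:

<code>
root@solaris:~# zfs create -V 100M mypool/lun0
root@solaris:~# stmfadm create-lu /dev/zvol/rdsk/mypool/lun0
</code>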
=====LAB #1 - Managing ZFS Storage=====

====Displaying Online Help====

Both the **zpool** and **zfs** commands have built-in online help:
<code>
root@solaris:~# zpool help
The following commands are supported:
add      attach   clear    create   destroy  detach   export   get
help     history  import   iostat   list     offline  online   remove
replace  scrub    set      split    status   upgrade
For more info, run: zpool help <command>
root@solaris:~# zfs help
The following commands are supported:
allow       clone       create      destroy     diff        get
groupspace  help        hold        holds       inherit     key
list        mount       promote     receive     release     rename
rollback    send        set         share       snapshot    unallow
unmount     unshare     upgrade     userspace
For more info, run: zfs help <command>
</code>
<WRAP center round important 60%>
Note that you can get help on a specific subcommand by using either **zpool help <subcommand>** or **zfs help <subcommand>**.
</WRAP>
====Checking Pool Status====

Use the **zpool** command with the **list** subcommand to display the details of your pool:

<code>
root@solaris:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  19.6G  6.96G  12.7G  35%  1.00x  ONLINE  -
</code>
- | application-desktop-cache-docbook-style-xsl-update: | + | |
- | application-desktop-cache-gconf-cache: | + | |
- | application-desktop-cache-icon-cache: | + | |
- | application-desktop-cache-input-method-cache: | + | |
- | application-desktop-cache-mime-types-cache: | + | |
- | application-desktop-cache-pixbuf-loaders-installer: | + | |
- | application-font-fc-cache: | + | |
- | application-graphical-login-gdm: | + | |
- | application-man-index: | + | |
- | application-management-net-snmp: | + | |
- | application-opengl-ogl-select: | + | |
- | application-pkg-dynamic-mirror: | + | |
- | application-pkg-server: | + | |
- | application-pkg-system-repository: | + | |
- | application-pkg-update: | + | |
- | application-pkg-zones-proxyd: | + | |
- | application-security-tcsd: | + | |
- | application-stosreg: | + | |
- | application-texinfo-update: | + | |
- | application-time-slider-plugin: | + | |
- | application-time-slider-plugin: | + | |
- | application-time-slider: | + | |
- | application-upnp-coherence: | + | |
- | application-virtualbox-vboxmslnk: | + | |
- | application-virtualbox-vboxservice: | + | |
- | bootadm.lock | + | |
- | ConsoleKit | + | |
- | cronfifo | + | |
- | cups | + | |
- | cups-socket | + | |
- | daemon | + | |
- | dbus | + | |
- | dladm | + | |
- | filesystem-autofs.lock | + | |
- | gdm | + | |
- | gdm.pid | + | |
- | hald | + | |
- | in.ndpd_ipadm | + | |
- | in.ndpd_mib | + | |
- | in.ndpd.pid | + | |
- | inetd.uds | + | |
- | init.state | + | |
- | initpipe | + | |
- | ipadm | + | |
- | ipf | + | |
- | ipmon.pid | + | |
- | ipsecconf.lock | + | |
- | kcfd_door | + | |
- | milestone-config: | + | |
- | milestone-devices: | + | |
- | milestone-multi-user-server: | + | |
- | milestone-multi-user: | + | |
- | milestone-name-services: | + | |
- | milestone-network: | + | |
- | milestone-self-assembly-complete: | + | |
- | milestone-single-user: | + | |
- | milestone-unconfig: | + | |
- | name_service_door | + | |
- | netcfg | + | |
- | network-datalink-management: | + | |
- | network-dhcp-relay: | + | |
- | network-dhcp-relay: | + | |
- | network-dhcp-server: | + | |
- | network-dhcp-server: | + | |
- | network-dns-client: | + | |
- | network-dns-multicast: | + | |
- | network-dns-server: | + | |
- | network-ftp: | + | |
- | network-http: | + | |
- | network-ilomconfig-interconnect: | + | |
- | network-inetd-upgrade: | + | |
- | network-inetd: | + | |
- | network-initial: | + | |
- | network-install: | + | |
- | network-ip-interface-management: | + | |
- | network-ipfilter: | + | |
- | network-ipmievd: | + | |
- | network-ipmon: | + | |
- | network-ipmp: | + | |
- | network-ipsec-ike: | + | |
- | network-ipsec-ipsecalgs: | + | |
- | network-ipsec-manual-key: | + | |
- | network-ipsec-policy: | + | |
- | network-iptun: | + | |
- | network-iscsi-initiator: | + | |
- | network-ldap-client: | + | |
- | network-ldap-server: | + | |
- | network-lms: | + | |
- | network-loadbalancer-ilb: | + | |
- | network-location: | + | |
- | network-location: | + | |
- | network-loopback: | + | |
- | network-netcfg: | + | |
- | network-netmask: | + | |
- | network-nfs-cbd: | + | |
- | network-nfs-client: | + | |
- | network-nfs-fedfs-client: | + | |
- | network-nfs-mapid: | + | |
- | network-nfs-nlockmgr: | + | |
- | network-nfs-server: | + | |
- | network-nfs-status: | + | |
- | network-nis-client: | + | |
- | network-nis-domain: | + | |
- | network-npiv_config: | + | |
- | network-ntp: | + | |
- | network-physical: | + | |
- | network-physical: | + | |
- | network-routing-legacy-routing: | + | |
- | network-routing-legacy-routing: | + | |
- | network-routing-ndp: | + | |
- | network-routing-rdisc: | + | |
- | network-routing-ripng: | + | |
- | network-routing-route: | + | |
- | network-routing-setup: | + | |
- | network-rpc-bind: | + | |
- | network-rpc-keyserv: | + | |
- | network-sctp-congestion-control: | + | |
- | network-sctp-congestion-control: | + | |
- | network-sctp-congestion-control: | + | |
- | network-sctp-congestion-control: | + | |
- | network-security-kadmin: | + | |
- | network-security-krb5kdc: | + | |
- | network-sendmail-client: | + | |
- | network-service: | + | |
- | network-shares: | + | |
- | network-slp: | + | |
- | network-smb-client: | + | |
- | network-smb: | + | |
- | network-smtp: | + | |
- | network-socket-config: | + | |
- | network-socket-filter: | + | |
- | network-ssh: | + | |
- | network-tcp-congestion-control: | + | |
- | network-tcp-congestion-control: | + | |
- | network-tcp-congestion-control: | + | |
- | network-tcp-congestion-control: | + | |
- | network-uucp-lock-cleanup: | + | |
- | network-vpanels-http: | + | |
- | nfs-mapid.lock | + | |
- | nfs4_domain | + | |
- | opengl | + | |
- | picld_door | + | |
- | platform-i86pc-acpihpd: | + | |
- | rad | + | |
- | rcm_daemon_door | + | |
- | rcm_daemon_lock | + | |
- | rcm_daemon_state | + | |
- | repository_door | + | |
- | rpc_door | + | |
- | sendmail.pid | + | |
- | sshd.pid | + | |
- | svc_nonpersist.db | + | |
- | svc: | + | |
- | svc.startd.log | + | |
- | sysevent_channels | + | |
- | sysevent_door | + | |
- | syseventconf.lock | + | |
- | syseventconfd_door | + | |
- | syseventd.lock | + | |
- | syslog_door | + | |
- | syslog.pid | + | |
- | system-auditd: | + | |
- | system-auditset: | + | |
- | system-avahi-bridge-dsd: | + | |
- | system-boot-archive-update: | + | |
- | system-boot-archive: | + | |
- | system-boot-config: | + | |
- | system-boot-loader-update: | + | |
- | system-ca-certificates: | + | |
- | system-config-user: | + | |
- | system-consadm: | + | |
- | system-console-login: | + | |
- | system-console-login: | + | |
- | system-console-login: | + | |
- | system-console-login: | + | |
- | system-console-login: | + | |
- | system-console-login: | + | |
- | system-console-login: | + | |
- | system-console-login: | + | |
- | system-console-reset: | + | |
- | system-consolekit: | + | |
- | system-coreadm: | + | |
- | system-cron: | + | |
- | system-cryptosvc: | + | |
- | system-dbus: | + | |
- | system-devchassis: | + | |
- | system-devchassis: | + | |
- | system-devfsadm: | + | |
- | system-device-audio: | + | |
- | system-device-fc-fabric: | + | |
- | system-device-local: | + | |
- | system-device-mpxio-upgrade: | + | |
- | system-dumpadm: | + | |
- | system-early-manifest-import: | + | |
- | system-environment: | + | |
- | system-extended-accounting: | + | |
- | system-extended-accounting: | + | |
- | system-extended-accounting: | + | |
- | system-extended-accounting: | + | |
- | system-fcoe_initiator: | + | |
- | system-filesystem-autofs: | + | |
- | system-filesystem-local: | + | |
- | system-filesystem-minimal: | + | |
- | system-filesystem-reparse: | + | |
- | system-filesystem-rmvolmgr: | + | |
- | system-filesystem-root: | + | |
- | system-filesystem-ufs-quota: | + | |
- | system-filesystem-usr: | + | |
- | system-filesystem-zfs-auto-snapshot: | + | |
- | system-filesystem-zfs-auto-snapshot: | + | |
- | system-filesystem-zfs-auto-snapshot: | + | |
- | system-filesystem-zfs-auto-snapshot: | + | |
- | system-filesystem-zfs-auto-snapshot: | + | |
- | system-fm-asr-notify: | + | |
- | system-fm-notify-params: | + | |
- | system-fm-smtp-notify: | + | |
- | system-fm-snmp-notify: | + | |
- | system-fmd: | + | |
- | system-hal: | + | |
- | system-hostid: | + | |
- | system-hotplug: | + | |
- | system-identity: | + | |
- | system-identity: | + | |
- | system-idmap: | + | |
- | system-install-server: | + | |
- | system-intrd: | + | |
- | system-keymap: | + | |
- | system-logadm-upgrade: | + | |
- | system-manifest-import: | + | |
- | system-name-service-cache: | + | |
- | system-name-service-switch: | + | |
- | system-name-service-upgrade: | + | |
- | system-ocm: | + | |
- | system-pfexec: | + | |
- | system-picl: | + | |
- | system-pkgserv: | + | |
- | system-pools-dynamic: | + | |
- | system-pools: | + | |
- | system-postrun: | + | |
- | system-power: | + | |
- | system-rad: | + | |
- | system-rad: | + | |
- | system-rbac: | + | |
- | system-rcap: | + | |
- | system-rds: | + | |
- | system-resource-controls: | + | |
- | system-resource-mgmt: | + | |
- | system-rmtmpfiles: | + | |
- | system-sar: | + | |
- | system-scheduler: | + | |
- | system-security-security-extensions: | + | |
- | system-svc-global: | + | |
- | system-sysevent: | + | |
- | system-system-log: | + | |
- | system-timezone: | + | |
- | system-utmp: | + | |
- | system-vbiosd: | + | |
- | system-vtdaemon: | + | |
- | system-wusbd: | + | |
- | system-zones-install: | + | |
- | system-zones-monitoring: | + | |
- | system-zones: | + | |
- | tzsync | + | |
- | utmppipe | + | |
- | utmpx | + | |
- | vbiosd.door | + | |
- | vbiosd.lock | + | |
- | vt | + | |
- | xkb | + | |
- | zonestat_door | + | |
</ | </ | ||
Now use the **status** subcommand:

<code>
root@solaris:~# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c7t0d0s1  ONLINE       0     0     0

errors: No known data errors
</code>
- | Historically Solaris used the UNIX SVR4 boot sequence based upon **run levels**: | + | ====Creating a Mirrored Pool==== |
- | ^ Run level ^ Description ^ | + | Create a ZFS mirrored pool called **mypool** using the first two of the five disks you recently created: |
- | | 0 | Shut down for SPARC systems | | + | |
- | | S or s | Single-User mode with only root filesystem mounted (as read-only) | | + | |
- | | 1 | Single-User mode with all local filesystems mounted (read-write) | | + | |
- | | 2 | Multiple user mode without NFS export | | + | |
- | | 3 | Multiple user mode with NFS export | | + | |
- | | 4 | Unused / User definable | | + | |
- | | 5 | Shut down, power-off if hardware supports it | | + | |
- | | 6 | System reboot | | + | |
- | This boot sequence made use of scripts which either started or stopped services dependent upon the current run level. Solaris no longer uses this start up sequence. However Solaris 11 still has run levels as shown by the output of the following command: | + | < |
+ | root@solaris: | ||
+ | </ | ||
+ | |||
Check that your pool has been created:

<code>
root@solaris:~# zpool list
NAME     SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
mypool   187M   346K   187M   0%  1.00x  ONLINE  -
rpool   19.6G  6.96G  12.7G  35%  1.00x  ONLINE  -
</code>
Display the file systems using the **zfs** command and the **list** subcommand:

<code>
root@solaris:~# zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
mypool                             85.5K   155M    31K  /mypool
rpool                              7.02G  12.3G  4.58M  /rpool
rpool/ROOT                         5.05G  12.3G    31K  legacy
rpool/ROOT/solaris                 5.05G  12.3G  3.87G  /
rpool/ROOT/solaris-backup-1        80.5K  12.3G  3.85G  /
rpool/ROOT/solaris-backup-1/var       1K  12.3G   303M  /var
rpool/ROOT/solaris/var              865M  12.3G   303M  /var
rpool/VARSHARE                     54.5K  12.3G  54.5K  /var/share
rpool/dump                         1.03G  12.3G  1.00G  -
rpool/export                       97.5K  12.3G    32K  /export
rpool/export/home                  63.5K  12.3G    32K  /export/home
rpool/export/home/trainee          31.5K  12.3G  31.5K  /export/home/trainee
rpool/swap                         1.03G  12.3G  1.00G  -
</code>
<WRAP center round important 60%>
Note that the zpool command automatically creates a file system on **mypool** and mounts it at **/mypool**.
</WRAP>
====Adding File Systems to an Existing Pool====

Now create two file systems in your pool called **/home** and **/home/user1**:

<code>
root@solaris:~# zfs create mypool/home
root@solaris:~# zfs create mypool/home/user1
root@solaris:~# zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
mypool                              166K   155M    32K  /mypool
mypool/home                          63K   155M    32K  /mypool/home
mypool/home/user1                    31K   155M    31K  /mypool/home/user1
rpool                              7.02G  12.3G  4.58M  /rpool
rpool/ROOT                         5.05G  12.3G    31K  legacy
rpool/ROOT/solaris                 5.05G  12.3G  3.87G  /
rpool/ROOT/solaris-backup-1        80.5K  12.3G  3.85G  /
rpool/ROOT/solaris-backup-1/var       1K  12.3G   303M  /var
rpool/ROOT/solaris/var              865M  12.3G   303M  /var
rpool/VARSHARE                     54.5K  12.3G  54.5K  /var/share
rpool/dump                         1.03G  12.3G  1.00G  -
rpool/export                       97.5K  12.3G    32K  /export
rpool/export/home                  63.5K  12.3G    32K  /export/home
rpool/export/home/trainee          31.5K  12.3G  31.5K  /export/home/trainee
rpool/swap                         1.03G  12.3G  1.00G  -
</code>
<WRAP center round important 60%>
Note that the two file systems are automatically mounted under the pool's mount point.
</WRAP>
====Changing the Pool Mount Point====

Suppose that you want the /home file system mounted elsewhere rather than under the /mypool mount point. With ZFS, this is very simple:

<code>
root@solaris:~# mkdir /users
root@solaris:~# zfs set mountpoint=/users mypool/home
root@solaris:~# zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
mypool                              166K   155M    32K  /mypool
mypool/home                          63K   155M    32K  /users
mypool/home/user1                    31K   155M    31K  /users/user1
rpool                              7.02G  12.3G  4.58M  /rpool
rpool/ROOT                         5.05G  12.3G    31K  legacy
rpool/ROOT/solaris                 5.05G  12.3G  3.87G  /
rpool/ROOT/solaris-backup-1        80.5K  12.3G  3.85G  /
rpool/ROOT/solaris-backup-1/var       1K  12.3G   303M  /var
rpool/ROOT/solaris/var              865M  12.3G   303M  /var
rpool/VARSHARE                     54.5K  12.3G  54.5K  /var/share
rpool/dump                         1.03G  12.3G  1.00G  -
rpool/export                       97.5K  12.3G    32K  /export
rpool/export/home                  63.5K  12.3G    32K  /export/home
rpool/export/home/trainee          31.5K  12.3G  31.5K  /export/home/trainee
rpool/swap                         1.03G  12.3G  1.00G  -
</code>
<WRAP center round important 60%>
Note that ZFS has automatically and transparently remounted the two file systems on **/users** and **/users/user1**.
</WRAP>
====Adding a Hot Spare====

To display all of the properties associated with **mypool**, use the **zpool** command and the **get** subcommand:

<code>
root@solaris:~# zpool get all mypool
NAME    PROPERTY       VALUE                 SOURCE
mypool  allocated      516K                  -
mypool  altroot        -                     default
mypool  autoexpand     off                   default
mypool  autoreplace    off                   default
mypool  bootfs         -                     default
mypool  cachefile      -                     default
mypool  capacity       0%                    -
mypool  dedupditto     0                     default
mypool  dedupratio     1.00x                 -
mypool  delegation     on                    default
mypool  failmode       wait                  default
mypool  free           186M                  -
mypool  guid           11396922889967409239  -
mypool  health         ONLINE                -
mypool  listshares     off                   default
mypool  listsnapshots  off                   default
mypool  readonly       off                   -
mypool  size           187M                  -
mypool  version        34                    default
</code>
<WRAP center round important 60%>
Note that the **autoreplace** property is set to **off**. This property needs to be set to **on** for a hot spare to be used automatically.
</WRAP>
Set the autoreplace property to on:

<code>
root@solaris:~# zpool set autoreplace=on mypool
root@solaris:~# zpool get autoreplace mypool
NAME    PROPERTY     VALUE  SOURCE
mypool  autoreplace  on     local
</code>
Add the fourth 200 Mb disk that you have created to **mypool** as a spare:

<code>
root@solaris:~# zpool add mypool spare c7t5d0
root@solaris:~# zpool status mypool
  pool: mypool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c7t2d0  ONLINE       0     0     0
            c7t3d0  ONLINE       0     0     0
        spares
          c7t5d0    AVAIL

errors: No known data errors
</code>
====Observing Pool Activity====

Create a random data file in **/users/user1**:

<code>
root@solaris:~# cat /dev/urandom > /users/user1/randomfile &
[1] 2617
</code>
<WRAP center round important 60%>
Write down the PID, you will need it in 2 minutes to kill the process you have just started.
</WRAP>
Now display the writes to the pool using the **iostat** subcommand of the **zpool** command:

<code>
root@solaris:~# zpool iostat -v 3
               capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
mypool      2.57M   184M      0      4    912  30.5K
  mirror    2.57M   184M      0      4    912  30.5K
    c7t2d0      -      -      0      4  1.63K  32.1K
    c7t3d0      -      -      0      4  1.62K  32.1K
----------  -----  -----  -----  -----  -----  -----
rpool       6.96G  12.7G      0      2  9.00K  22.1K
  c7t0d0s1  6.96G  12.7G      0      2  9.00K  22.1K
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
mypool      12.9M   174M      0     94      0  3.66M
  mirror    12.9M   174M      0     94      0  3.66M
    c7t2d0      -      -      0     93      0  3.67M
    c7t3d0      -      -      0     93      0  3.67M
----------  -----  -----  -----  -----  -----  -----
rpool       6.96G  12.7G      0      0      0      0
  c7t0d0s1  6.96G  12.7G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
mypool      23.6M   163M      0     97      0  3.81M
  mirror    23.6M   163M      0     97      0  3.81M
    c7t2d0      -      -      0     96      0  3.82M
    c7t3d0      -      -      0     96      0  3.82M
----------  -----  -----  -----  -----  -----  -----
rpool       6.96G  12.7G      0      0      0      0
  c7t0d0s1  6.96G  12.7G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
mypool      34.4M   153M      0     98      0  3.85M
  mirror    34.4M   153M      0     98      0  3.85M
    c7t2d0      -      -      0     97      0  3.86M
    c7t3d0      -      -      0     97      0  3.86M
----------  -----  -----  -----  -----  -----  -----
rpool       6.96G  12.7G      0      0      0      0
  c7t0d0s1  6.96G  12.7G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
mypool      45.1M   142M      0     99      0  3.88M
  mirror    45.1M   142M      0     99      0  3.88M
    c7t2d0      -      -      0     98      0  3.89M
    c7t3d0      -      -      0     98      0  3.89M
----------  -----  -----  -----  -----  -----  -----
rpool       6.96G  12.7G      0      0      0      0
  c7t0d0s1  6.96G  12.7G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
^C
</code>
<WRAP center round todo 60%>
Is your mirror functioning?
</WRAP>
Now kill the process creating the file **randomfile**:

# kill -9 PID [Enter]

Delete the **randomfile** file:

<code>
root@solaris:~# rm /users/user1/randomfile
[1]+  Killed                  cat /dev/urandom > /users/user1/randomfile
</code>
====Setting a User Quota====

To set a user quota, you need to use the **set** subcommand of the **zfs** command:
<code>
root@solaris:~# zfs set quota=50M mypool/home/user1
root@solaris:~# zfs get quota mypool/home/user1
NAME               PROPERTY  VALUE  SOURCE
mypool/home/user1  quota     50M    local
root@solaris:~# zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
mypool                              166K   155M    32K  /mypool
mypool/home                          63K   155M    32K  /users
mypool/home/user1                    31K  50.0M    31K  /users/user1
rpool                              7.03G  12.3G  4.58M  /rpool
rpool/ROOT                         5.05G  12.3G    31K  legacy
rpool/ROOT/solaris                 5.05G  12.3G  3.87G  /
rpool/ROOT/solaris-backup-1        80.5K  12.3G  3.85G  /
rpool/ROOT/solaris-backup-1/var       1K  12.3G   303M  /var
rpool/ROOT/solaris/var              865M  12.3G   303M  /var
rpool/VARSHARE                     54.5K  12.3G  54.5K  /var/share
rpool/dump                         1.03G  12.3G  1.00G  -
rpool/export                       97.5K  12.3G    32K  /export
rpool/export/home                  63.5K  12.3G    32K  /export/home
rpool/export/home/trainee          31.5K  12.3G  31.5K  /export/home/trainee
rpool/swap                         1.03G  12.3G  1.00G  -
</code>
<WRAP center round important 60%>
Note that the quota of 50 Mb has been set on **/users/user1**.
</WRAP>
Now create a file bigger than the quota in **/users/user1**:

<code>
root@solaris:~# cat /dev/urandom > /users/user1/bigfile
cat: output error (0/131072 characters written)
Disc quota exceeded
</code>
<WRAP center round important 60%>
After a few minutes, you will see the **Disc quota exceeded** message.
</WRAP>
Looking at the available disk space on **/users/user1**, you can see that the quota has been reached:
<code>
root@solaris:~# zfs list mypool/home/user1
NAME                USED  AVAIL  REFER  MOUNTPOINT
mypool/home/user1  50.1M      0  50.1M  /users/user1
</code>
Delete the file you have just created:

<code>
root@solaris:~# rm /users/user1/bigfile
</code>
====Setting a User Reservation====

As with setting quotas, setting a reservation is very simple:

<code>
root@solaris:~# zfs set reservation=25M mypool/home/user1
root@solaris:~# zfs get reservation mypool/home/user1
NAME               PROPERTY     VALUE  SOURCE
mypool/home/user1  reservation  25M    local
</code>
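To see that the reservation is now deducted from the space available to the rest of the pool, you can compare the parent file system's AVAIL column before and after setting it. A quick check, whose exact values will differ on your system:

<code>
root@solaris:~# zfs list -o name,used,avail,reservation mypool/home mypool/home/user1
</code>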
====Using Snapshots====

Create a file in the **/users/user1** directory:

<code>
root@solaris:~# echo "This is a test file for the first snapshot" > /users/user1/snapshot1
root@solaris:~# ls /users/user1
snapshot1
</code>
To create a snapshot of a ZFS file system, you need to use the **snapshot** subcommand of the **zfs** command:
<code>
root@solaris:~# zfs snapshot mypool/home/user1@Dec13
</code>
The snapshot is located in a hidden directory under **/users/user1** called **.zfs/snapshot**:

<code>
root@solaris:~# ls -l /users/user1/.zfs/snapshot
total 3
drwxr-xr-x   2 root     root           3 Dec 13 14:31 Dec13
</code>
As you can see, the snapshot contains the **snapshot1** file:

<code>
root@solaris:~# ls -l /users/user1/.zfs/snapshot/Dec13/
total 2
-rw-r--r--   1 root     root          43 Dec 13 14:30 snapshot1
</code>
It is important to note here that the .zfs directory is also hidden from the **ls** command, even when using the **-a** switch:

<code>
root@solaris:~# ls -alR /users/user1
/users/user1:
total 8
drwxr-xr-x   2 root     root           3 Dec 13 14:30 .
drwxr-xr-x   3 root     root           3 Dec 13 14:25 ..
-rw-r--r--   1 root     root          43 Dec 13 14:30 snapshot1
</code>
You can also create a recursive snapshot of all file systems in a pool:

<code>
root@solaris:~# zfs snapshot -r mypool@Dec13-1
</code>
The snapshots are stored in their respective .zfs directories:

<code>
root@solaris:~# ls /mypool/.zfs/snapshot
Dec13-1
root@solaris:~# ls /users/user1/.zfs/snapshot
Dec13  Dec13-1
</code>
You can list all snapshots as follows:

<code>
root@solaris:~# zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
mypool@Dec13-1                 0      -    32K  -
mypool/home@Dec13-1            0      -    32K  -
mypool/home/user1@Dec13        0      -    31K  -
mypool/home/user1@Dec13-1      0      -    31K  -
</code>
Create another file in **/users/user1**:

<code>
root@solaris:~# echo "This is a test file for the second snapshot" > /users/user1/snapshot2
root@solaris:~# ls -l /users/user1
total 4
-rw-r--r--   1 root     root          43 Dec 13 14:30 snapshot1
-rw-r--r--   1 root     root          44 Dec 13 14:43 snapshot2
root@solaris:~# cat /users/user1/snapshot1
This is a test file for the first snapshot
root@solaris:~# cat /users/user1/snapshot2
This is a test file for the second snapshot
</code>
Now take a second recursive snapshot of the pool:

<code>
root@solaris:~# zfs snapshot -r mypool@Dec13-2
root@solaris:~# zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
mypool@Dec13-1                 0      -    32K  -
mypool@Dec13-2                 0      -    32K  -
mypool/home@Dec13-1            0      -    32K  -
mypool/home@Dec13-2            0      -    32K  -
mypool/home/user1@Dec13       17K      -    31K  -
mypool/home/user1@Dec13-1     17K      -    31K  -
mypool/home/user1@Dec13-2      0      -  31.5K  -
</code>
The **diff** subcommand of the **zfs** command displays the differences between two snapshots:

<code>
root@solaris:~# zfs diff mypool/home/user1@Dec13-1 mypool/home/user1@Dec13-2
M       /users/user1/
M       /users/user1/snapshot1
+       /users/user1/snapshot2
</code>
<WRAP center round important 60%>
The above output shows that **/users/user1** and **snapshot1** have been modified and that **snapshot2** has been added.
</WRAP>
This output can contain the following characters:

^ Character ^ Description ^
| M | **M**odification |
| R | **R**enamed |
| + | Added |
| - | Deleted |
Note that you cannot compare the snapshots of two different file systems:
<code>
root@solaris:~# zfs diff mypool/home/user1@Dec13-1 mypool/home@Dec13-2
Unable to obtain diffs: mypool/home/user1@Dec13-1 is not a descendant dataset of mypool/home/user1@Dec13-2
</code>
====Rolling Back to a Snapshot====

In the case that you wish to roll back to a specific snapshot, note that you can **only** roll back to the last snapshot, as shown by the output of **zfs list**:
<code>
root@solaris:~# zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
mypool@Dec13-1                 0      -    32K  -
mypool@Dec13-2                 0      -    32K  -
mypool/home@Dec13-1            0      -    32K  -
mypool/home@Dec13-2            0      -    32K  -
mypool/home/user1@Dec13       17K      -    31K  -
mypool/home/user1@Dec13-1     17K      -    31K  -
mypool/home/user1@Dec13-2      0      -  31.5K  -
root@solaris:~# zfs rollback mypool/home/user1@Dec13-1
cannot rollback to 'mypool/home/user1@Dec13-1': more recent snapshots exist
use '-r' to force deletion of the following snapshots:
mypool/home/user1@Dec13-2
</code>
Delete the **Dec13-2** snapshot of the **mypool/home/user1** file system:

<code>
root@solaris:~# zfs destroy mypool/home/user1@Dec13-2
root@solaris:~# zfs list -t snapshot
NAME                        USED  AVAIL  REFER  MOUNTPOINT
mypool@Dec13-1                 0      -    32K  -
mypool@Dec13-2                 0      -    32K  -
mypool/home@Dec13-1            0      -    32K  -
mypool/home@Dec13-2            0      -    32K  -
mypool/home/user1@Dec13       17K      -    31K  -
mypool/home/user1@Dec13-1     17K      -    31K  -
</code>
Now roll back to **Dec13-1**:

<code>
root@solaris:~# zfs rollback mypool/home/user1@Dec13-1
root@solaris:~# ls -l /users/user1
total 2
-rw-r--r--   1 root     root          43 Dec 13 14:30 snapshot1
</code>
<WRAP center round important 60%>
Note that the **snapshot2** file has obviously disappeared since it was not in the **Dec13-1** snapshot.
</WRAP>
====Cloning a Snapshot====

Snapshots are read-only. To convert a snapshot to a writable file system, you can use the **clone** subcommand of the **zfs** command:
<code>
root@solaris:~# zfs clone mypool/home/user1@Dec13-1 mypool/home/user2
root@solaris:~# zfs list
NAME                                USED  AVAIL  REFER  MOUNTPOINT
mypool                             25.4M   130M    32K  /mypool
mypool/home                        25.2M   130M    33K  /users
mypool/home/user1                    49K  50.0M  31.5K  /users/user1
mypool/home/user2                    18K   130M  31.5K  /users/user2
rpool                              7.03G  12.3G  4.58M  /rpool
rpool/ROOT                         5.05G  12.3G    31K  legacy
rpool/ROOT/solaris                 5.05G  12.3G  3.87G  /
rpool/ROOT/solaris-backup-1        80.5K  12.3G  3.85G  /
rpool/ROOT/solaris-backup-1/var       1K  12.3G   303M  /var
rpool/ROOT/solaris/var              865M  12.3G   303M  /var
rpool/VARSHARE                     54.5K  12.3G  54.5K  /var/share
rpool/dump                         1.03G  12.3G  1.00G  -
rpool/export                       97.5K  12.3G    32K  /export
rpool/export/home                  63.5K  12.3G    32K  /export/home
rpool/export/home/trainee          31.5K  12.3G  31.5K  /export/home/trainee
rpool/swap                         1.03G  12.3G  1.00G  -
</code>
- | Now set that same property for the cron service so that you are informed by email when that specific service goes offline: | + | Display the contents of the cloned file system: |
< | < | ||
- | root@solaris: | + | root@solaris: |
+ | total 2 | ||
+ | -rw-r--r-- | ||
</ | </ | ||
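+ | A clone remains dependent on the snapshot it was created from, so that snapshot cannot be destroyed while the clone exists. If you intend to keep a clone long-term, the **promote** subcommand of the **zfs** command reverses that dependency. A minimal sketch, assuming the clone was created under the hypothetical name **mypool/home/user3**: |
+ | <code> |
+ | root@solaris:~# zfs promote mypool/home/user3 |
+ | </code> |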
- | The **svccfg** command can also be used interactively: | + | ====Using Compression==== |
+ | |||
+ | In order to minimize storage space, you can make a file system use compression. Compression can be activated either at creation time or after creation. Compression only works for new data. Any existing data in the file system at the time of activating compression remains uncompressed. | ||
+ | |||
+ | To activate compression on an existing file system, you need to set the file system's **compression** property to **on**: |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | svc:> help | + | root@solaris:~# zfs get compression mypool/home/user1 |
- | General commands: | + | NAME |
- | Manifest commands: | + | mypool/home/user1 compression |
- | Profile commands: | + | |
- | Entity commands: | + | |
- | Snapshot commands: | + | |
- | Instance commands: | + | |
- | Property group commands: listpg addpg delpg | + | |
- | Property commands: | + | |
- | Customization commands: | + | |
- | Property value commands: addpropvalue delpropvalue setenv unsetenv | + | |
- | Notification parameters: listnotify setnotify delnotify | + | |
- | svc:> select system/cron | + | |
- | svc:/system/ | + | |
- | :properties | + | |
- | default | + | |
- | svc:/system/cron> select default | + | |
- | svc:/ | + | |
- | general | + | |
- | general/ | + | |
- | general/ | + | |
- | restarter | + | |
- | restarter/ | + | |
- | restarter/ | + | |
- | restarter/ | + | |
- | restarter/start_method_waitstatus integer | + | |
- | restarter/ | + | |
- | restarter/ | + | |
- | restarter/ | + | |
- | restarter/ | + | |
- | restarter/ | + | |
- | restarter_actions | + | |
- | restarter_actions/ | + | |
- | restarter_actions/ | + | |
- | restarter_actions/ | + | |
- | restarter_actions/auxiliary_fmri | + | |
- | restarter_actions/ | + | |
- | general_ovr | + | |
- | svc:/ | + | |
</ | </ | ||
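+ | As noted above, compression can also be activated at creation time. A minimal sketch using the **-o** switch of the **create** subcommand, assuming a hypothetical **mypool/home/user4** file system: |
+ | <code> |
+ | root@solaris:~# zfs create -o compression=on mypool/home/user4 |
+ | root@solaris:~# zfs get compression mypool/home/user4 |
+ | </code> |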
+ | ====Using De-duplication==== | ||
+ | Another space-saving property of ZFS file systems is **de-duplication**, controlled by the **dedup** property: |
+ | < | ||
+ | root@solaris: | ||
+ | root@solaris: | ||
+ | NAME | ||
+ | mypool/ | ||
+ | </ | ||
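+ | De-duplication is accounted for at the pool level. To gauge how effective it is, you can consult the **DEDUP** column of the **zpool list** output, which shows the pool-wide de-duplication ratio: |
+ | <code> |
+ | root@solaris:~# zpool list mypool |
+ | </code> |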
+ | ====Using Encryption==== | ||
- | ==== inetd ==== | + | Unlike compression and de-duplication, encryption cannot be activated on an existing file system. It must be enabled when the file system is created: |
- | + | ||
- | Historically under Unix, certain network servers were managed by **inetd**. The **inetd** daemon was capable of launching a specific server on demand when it detected an incoming connection on the port associated with that server, as detailed in the **/ | + |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | # | + | Enter passphrase |
- | # Copyright 2004 Sun Microsystems, | + | Enter again: fenestros |
- | # Use is subject to license terms. | + | |
- | # | + | |
- | # | + | |
- | # | + | |
- | # Legacy configuration file for inetd(1M). | + | |
- | # | + | |
- | # This file is no longer directly used to configure inetd. | + | |
- | # The Solaris services which were formerly configured using this file | + | |
- | # are now configured in the Service Management Facility (see smf(5)) | + | |
- | # using inetadm(1M). | + | |
- | # | + | |
- | # Any records remaining in this file after installation or upgrade, | + | |
- | # or later created by installing additional software, must be converted | + | |
- | # to smf(5) services and imported into the smf repository using | + | |
- | # inetconv(1M), | + | |
- | # a service has been converted using inetconv, further changes made to | + | |
- | # its entry here are not reflected in the service. | + | |
- | # | + | |
</ | </ | ||
<WRAP center round important 60%> | <WRAP center round important 60%> | ||
- | As you can see, use of this file is now deprecated. | + | Note that the passphrase is not shown in the real output |
</ | </ | ||
- | Lines in this file were of the following format: | + | To check whether encryption is active on a file system, use the following commands: |
- | <file> | + | <code> |
- | tftp | + | root@solaris: |
- | </file> | + | NAME |
+ | mypool/ | ||
+ | root@solaris:~# zfs get encryption mypool/home/user2 | ||
+ | NAME | ||
+ | mypool/home/user2 encryption | ||
+ | </code> | ||
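+ | Prompting for a passphrase is the default behaviour. As a sketch of an alternative, the **keysource** property can point to a key file instead. The file system name and key file below are hypothetical, and the key file must already exist: |
+ | <code> |
+ | root@solaris:~# zfs create -o encryption=on -o keysource=passphrase,file:///root/user5.key mypool/home/user5 |
+ | </code> |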
- | The first field on each line indicated the port associated with the server. Inetd consulted the **/ | ||
- | The second and third fields identified the protocol: | ||
- | * **stream tcp** for tcp | + | ====Replacing a Faulty Disk==== |
- | * **dgram udp** for udp | + | |
- | The fourth field took one of two values: | + | In the case of a faulty disk and no hot spares, replacing the disk is a one-line operation using the **replace** subcommand of the **zpool** command: |
- | * **nowait**, | + | < |
- | * a server was started for each connecting client, | + | root@solaris: |
- | | + | |
- | * a single unique server was started for all connecting clients. | + | state: ONLINE |
+ | scan: none requested | ||
+ | config: | ||
- | The fifth field indicated the user executing the server, in this case **root**. | + | NAME STATE READ WRITE CKSUM |
+ | mypool | ||
+ | mirror-0 | ||
+ | c7t2d0 | ||
+ | c7t3d0 | ||
+ | spares | ||
+ | c7t5d0 | ||
- | The sixth field indicated the program to be launched. In this case **/usr/ | + | errors: No known data errors |
+ | root@solaris:~# zpool replace mypool c7t2d0 c7t4d0 |
+ | </code> | ||
- | The seventh field identified the arguments passed to the server. | + | Use the **status** subcommand of the **zpool** command to check the result of the replacement: |
+ | < | ||
+ | root@solaris: | ||
+ | pool: mypool | ||
+ | | ||
+ | scan: resilvered 601K in 0h0m with 0 errors on Thu Dec 13 11:45:49 2012 | ||
+ | config: | ||
+ | NAME STATE READ WRITE CKSUM | ||
+ | mypool | ||
+ | mirror-0 | ||
+ | c7t4d0 | ||
+ | c7t3d0 | ||
+ | spares | ||
+ | c7t5d0 | ||
- | ==== TCP Wrapper ==== | + | errors: No known data errors |
+ | </ | ||
- | In order to improve security, **TCP Wrapper** was used to control access to the servers managed by the inetd daemon. Each line in the **/etc/inetd.conf** configuration file, such as: | + | <WRAP center round important 60%> |
+ | ZFS //Resilvering// is the process of rebuilding the replaced disk's data from the other disks in the pool. Note from the **scan** line above that the resilvering operation completed without errors. |
+ | </ | ||
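+ | Replacement can also be automated. If the **autoreplace** pool property is set to **on**, a new disk found in the same physical location as a failed disk is used automatically, without an explicit **zpool replace**. A minimal sketch: |
+ | <code> |
+ | root@solaris:~# zpool set autoreplace=on mypool |
+ | root@solaris:~# zpool get autoreplace mypool |
+ | </code> |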
- | < | + | ====Destroying a Pool==== |
- | tftp | + | |
- | </ | + | |
- | was replaced by a line of the following format: | + | Destroying a pool is achieved by using the **destroy** subcommand of the **zpool** command: |
- | <file> | + | <code> |
- | tftp | + | root@solaris:~# zpool destroy mypool |
- | </file> | + | </code> |
- | Subsequently, inetd executed the **tcpd** daemon rather than the server itself. | + | As you can see from the following output, this operation has also destroyed all the associated snapshots: |
- | The **tcpd** daemon then updated its logs and checked whether the client IP, FQDN or domain name was listed in one of the following two files: | + | < |
+ | root@solaris:~# zfs list | ||
+ | NAME USED AVAIL REFER MOUNTPOINT | ||
+ | rpool 7.03G 12.3G 4.58M /rpool | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | root@solaris: | ||
+ | cannot open ' | ||
+ | root@solaris: | ||
+ | total 0 | ||
+ | </ | ||
- | | + | <WRAP center round important 60%> |
- | | + | As you have seen above, destroying a pool, **all** the data in it and **all** the associated snapshots is disconcertingly simple. You should therefore be very careful when using the **destroy** subcommand. |
+ | </ | ||
- | The lines in the above two files were of the following format: | ||
- | < | ||
- | daemon : client list | ||
- | </ | ||
- | For example in the case of our **tftp**, if the **/ | + | ====Creating a RAID-5 Pool==== |
- | < | + | You can create a RAID-5 pool using the RAID-Z algorithm: |
- | in.tftpd | + | |
- | </ | + | |
- | then the client using the **192.168.1.10** IP address or any client whose domain name was **fenestros.com** could connect to the server. | + | < |
+ | root@solaris:~# zpool create mypool raidz c7t2d0 c7t3d0 c7t4d0 spare c7t5d0 |
+ | root@solaris:~# zpool status mypool |
+ | pool: mypool | ||
+ | | ||
+ | scan: none requested | ||
+ | config: | ||
- | A special keyword could also be used: **ALL**. If the **/ | + | NAME STATE READ WRITE CKSUM |
- | + | mypool | |
+ | raidz1-0 | ||
+ | c7t2d0 | ||
+ | c7t3d0 | ||
+ | c7t4d0 | ||
+ | spares | ||
+ | c7t5d0 | ||
+ | errors: No known data errors | ||
+ | </ | ||
- | + | Destroy **mypool**: |
- | + | ||
- | Since the introduction of Solaris 10, the inetd daemon is managed by SMF. By default, TCP Wrappers is disabled: | + | |
< | < | ||
- | root@solaris: | + | root@solaris:~# zpool destroy mypool |
- | defaults/ | + | |
- | defaults/ | + | |
- | defaults/ | + | |
- | defaults/ | + | |
- | defaults/ | + | |
- | defaults/ | + | |
- | defaults/ | + | |
- | defaults/ | + | |
- | defaults/ | + | |
- | defaults/ | + | |
- | defaults/ | + | |
- | defaults/ | + | |
- | defaults/ | + | |
- | defaults/ | + | |
- | defaults/ | + | |
</ | </ | ||
- | To activate TCP Wrappers, use the **svccfg** command as follows: | + | ====Creating a RAID-6 Pool==== |
+ | |||
+ | You can create a RAID-6 pool using the RAID-Z2 algorithm: | ||
< | < | ||
- | root@solaris: | + | root@solaris:~# zpool create mypool raidz2 c7t2d0 c7t3d0 c7t4d0 c7t5d0 spare c7t6d0 |
- | </ | + | root@solaris:~# zpool status mypool |
+ | pool: mypool | ||
+ | | ||
+ | scan: none requested | ||
+ | config: | ||
- | Refresh the inetd service using **svcadm** and check whether TCP Wrappers is now enabled: | + | NAME STATE READ WRITE CKSUM |
+ | mypool | ||
+ | raidz2-0 | ||
+ | c7t2d0 | ||
+ | c7t3d0 | ||
+ | c7t4d0 | ||
+ | c7t5d0 | ||
+ | spares | ||
+ | c7t6d0 | ||
- | < | + | errors: No known data errors |
- | root@solaris:~# svcadm refresh inetd | + | |
- | root@solaris: | + | |
- | defaults/ | + | |
</ | </ | ||
- | The **inetadm** command is used to list the servers managed by inetd: | + | Destroy **mypool** once again: |
< | < | ||
- | root@solaris: | + | root@solaris:~# zpool destroy mypool |
- | ENABLED | + | |
- | disabled | + | |
- | enabled | + | |
- | disabled | + | |
- | disabled | + | |
- | enabled | + | |
- | disabled | + | |
- | disabled | + | |
- | enabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
- | disabled | + | |
</ | </ | ||
- | Using this same command and the **-l** switch you can check to see if the **tftp** server is configured to use TCP Wrappers: | + | <WRAP center round todo 60%> |
+ | Create a triple parity RAID **mypool** using your five 200MB disks. Do not delete it. | ||
+ | </ | ||
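+ | If you get stuck, one possible solution is sketched below, assuming all five 200 Mb disks are free; check the disk names against your own configuration: |
+ | <code> |
+ | root@solaris:~# zpool create mypool raidz3 c7t2d0 c7t3d0 c7t4d0 c7t5d0 c7t6d0 |
+ | root@solaris:~# zpool status mypool |
+ | </code> |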
+ | |||
+ | ====Displaying the Zpool History==== | ||
+ | |||
+ | You can review everything that has been done to existing pools by using the **history** subcommand of the **zpool** command: | ||
< | < | ||
- | root@solaris: | + | root@solaris:~# zpool history |
- | default | + | History for ' |
+ | 2012-12-13.14: | ||
+ | |||
+ | History for ' | ||
+ | 2012-11-20.19: | ||
+ | 2012-11-20.19: | ||
+ | 2012-11-20.19: | ||
+ | 2012-11-20.19: | ||
+ | 2012-11-20.19: | ||
+ | 2012-11-20.19: | ||
+ | 2012-11-20.19: | ||
+ | 2012-11-20.19: | ||
+ | 2012-11-20.19: | ||
+ | 2012-11-20.19: | ||
+ | 2012-11-20.19: | ||
+ | 2012-11-20.22: | ||
+ | 2012-12-01.14: | ||
+ | 2012-12-03.13: | ||
+ | 2012-12-08.14: | ||
+ | 2012-12-11.15: | ||
+ | 2012-12-12.09: | ||
</ | </ | ||
- | To modify this property, you can use the **inetadm** command with the **-m** switch: | + | <WRAP center round important 60%> |
+ | Note that the history related to destroyed pools has been deleted. | ||
+ | </ | ||
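+ | For more detail, the **-l** switch of the **history** subcommand displays each record in long format, including the user name and the host on which each command was executed: |
+ | <code> |
+ | root@solaris:~# zpool history -l mypool |
+ | </code> |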
+ | |||
+ | =====LAB #2 - Managing iSCSI Storage===== | ||
+ | |||
+ | ====Installing the COMSTAR Server==== | ||
+ | |||
+ | Start by installing the COMSTAR storage server software: | ||
< | < | ||
- | root@solaris: | + | root@solaris:~# pkg install group/feature/storage-server |
- | root@solaris:~# inetadm -l /network/tftp/udp6 | grep tcp_wrappers | + | Packages to install: 20 |
- | tcp_wrappers=FALSE | + | |
+ | Create backup boot environment: | ||
+ | Services to change: | ||
+ | |||
+ | DOWNLOAD | ||
+ | Completed | ||
+ | |||
+ | PHASE ITEMS | ||
+ | Installing new actions | ||
+ | Updating package state database | ||
+ | Updating image state Done | ||
+ | Creating fast lookup database | ||
</ | </ | ||
- | Now change | + | The **COMSTAR target mode framework** runs as the **stmf** service. Check to see if it is enabled: |
< | < | ||
- | root@solaris: | + | root@solaris:~# svcs stmf |
+ | STATE STIME FMRI | ||
+ | disabled | ||
</ | </ | ||
- | Note however that the tftp daemon keeps its previously defined value for the same property: | + | Enable the service and check that it is now online: |
< | < | ||
- | root@solaris: | + | root@solaris:~# svcadm enable stmf |
- | tcp_wrappers=FALSE | + | root@solaris:~# svcs stmf |
+ | STATE STIME FMRI | ||
+ | online | ||
</ | </ | ||
- | Change that value back to TRUE: | + | You can check the status of the server using the **stmfadm** command: |
< | < | ||
- | root@solaris: | + | root@solaris:~# stmfadm list-state |
- | root@solaris:~# inetadm -l / | + | Operational Status: online |
- | tcp_wrappers=TRUE | + | Config Status |
+ | ALUA Status | ||
+ | ALUA Node | ||
</ | </ | ||
- | ====Boot Milestone Services==== | + | ====Creating SCSI Logical Units==== |
- | **Before** you start experimenting with milestones, write down the following command: | + | First you need to create your **Backing Storage Device** within your **mypool** pool: |
- | <file> | + | <code> |
- | svcadm milestone all | + | root@solaris: |
- | </file> | + | root@solaris: |
+ | NAME USED AVAIL REFER MOUNTPOINT | ||
+ | mypool | ||
+ | mypool/ | ||
+ | rpool 7.40G 11.9G 4.58M /rpool | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | </code> | ||
- | You will now put your system into single-user mode by using the following command: | + | You can see your raw device in the **/dev/zvol/rdsk/mypool** directory: |
< | < | ||
- | root@solaris: | + | root@solaris:~# ls -l /dev/zvol/rdsk/mypool |
+ | total 0 | ||
+ | lrwxrwxrwx | ||
</ | </ | ||
- | <WRAP center round important 60%> | + | You can now create a logical unit using the **create-lu** subcommand of the **sbdadm** command: |
- | When you are in single-user mode, use the command you noted above to return to normal multi-user operation. | + |
- | </ | + | |
+ | < | ||
+ | root@solaris: | ||
+ | Created the following LU: | ||
- | ====The shutdown command==== | + | GUID DATA SIZE |
+ | -------------------------------- | ||
+ | 600144f0e2a54e00000050cae6d80001 | ||
+ | </ | ||
- | The **shutdown** command is used to either halt, reboot or change the state of the system: | + | ====Mapping the Logical Unit==== |
- | shutdown [-y] [-g seconds] [-r | -i state] [message] | + | For the logical unit to be available to initiators, it must be **mapped**. To map the logical unit you need its GUID, which can be obtained with either of the two following commands: |
- | The switches are as follows | + | < |
+ | root@solaris:~# sbdadm list-lu | ||
- | ^ Switch ^ Description ^ | + | Found 1 LU(s) |
- | | -y | The command is non-interactive | | + | |
- | | -g seconds | Grace period in seconds. The default value is 60. | | + | |
- | | -i state| Destination state. The default value is S. | | + | |
- | | -r | Equivalent to -i6 | | + | |
- | Before starting to shutdown, the system sends out a standard message: | + | GUID DATA SIZE |
+ | -------------------------------- | ||
+ | 600144f0e2a54e00000050cae6d80001 | ||
+ | </ | ||
- | **The system will be shut down in ...** | + | < |
+ | root@solaris:~# stmfadm list-lu -v |
+ | LU Name: 600144F0E2A54E00000050CAE6D80001 | ||
+ | Operational Status | ||
+ | Provider Name : sbd | ||
+ | Alias : / | ||
+ | View Entry Count : 0 | ||
+ | Data File : / | ||
+ | Meta File : not set | ||
+ | Size : 104857600 | ||
+ | Block Size : 512 | ||
+ | Management URL : not set | ||
+ | Vendor ID : SUN | ||
+ | Product ID : COMSTAR | ||
+ | Serial Num : not set | ||
+ | Write Protect | ||
+ | Write Cache Mode Select: Enabled | ||
+ | Writeback Cache : Enabled | ||
+ | Access State : Active | ||
+ | </ | ||
- | This message is sent out 7200, 3600, 1800, 1200, 600, 300, 120, 60 and 30 seconds before shutdown begins. | + | Create a simple mapping for this logical unit by using the **add-view** subcommand of the **stmfadm** command: |
- | The system message can also be complemented by an administrator defined message, **[message]**. If the message is longer than one word it must be enclosed in single (') or double (") quotation marks. | + | < |
+ | root@solaris:~# stmfadm add-view 600144f0e2a54e00000050cae6d80001 |
+ | </ | ||
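+ | The **add-view** command above makes the logical unit visible to **all** initiators. As a sketch of more selective mapping, a **host group** can restrict visibility to named initiators; the group name and initiator IQN below are hypothetical: |
+ | <code> |
+ | root@solaris:~# stmfadm create-hg labservers |
+ | root@solaris:~# stmfadm add-hg-member -g labservers iqn.1986-03.com.sun:01:initiator01 |
+ | root@solaris:~# stmfadm add-view -h labservers 600144f0e2a54e00000050cae6d80001 |
+ | </code> |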
- | The switch **-i** can take one of 5 states: | + | ====Creating a Target==== |
- | ^ State ^ Description ^ | + | In order to create a target, the **svc:/network/iscsi/target:default** service must be online. First, check its current state: |
- | | 0 | System halt | | + | |
- | | 1 | Administrative state | | + | |
- | | s or S | Single User state | | + | |
- | | 5 | System halt and Powerdown | | + | |
- | | 6 | System reboot | | + | |
- | + | ||
- | Use the following command | + | |
< | < | ||
- | root@solaris: | + | root@solaris:~# svcs \*scsi\* |
+ | STATE STIME FMRI | ||
+ | disabled | ||
+ | online | ||
+ | </ | ||
- | Shutdown started. | + | Start the service: |
- | Broadcast Message from root (pts/1) on solaris.fenestros.loc Wed Dec 12 11:56:53... | + | < |
- | The system | + | root@solaris:~# svcadm enable -r svc:/network/iscsi/target:default |
+ | root@solaris:~# svcs \*scsi\* | ||
+ | STATE STIME FMRI | ||
+ | online | ||
+ | online | ||
+ | </ | ||
- | showmount: solaris.fenestros.loc: RPC: Program not registered | + | Now create a target using the **create-target** subcommand of the **itadm** command: |
+ | |||
+ | < | ||
+ | root@solaris:~# itadm create-target | ||
+ | Target iqn.1986-03.com.sun:02:897fd011-8b3d-cf2b-fc1d-a010bd97d035 successfully created | ||
</ | </ | ||
- | Open another terminal and use the following command to identify the PID of the shutdown process: | + | To list the target(s), use the **list-target** subcommand of the **itadm** command: |
< | < | ||
- | root@solaris: | + | root@solaris:~# itadm list-target |
- | | + | TARGET NAME STATE SESSIONS |
+ | iqn.1986-03.com.sun:02:897fd011-8b3d-cf2b-fc1d-a010bd97d035 | ||
</ | </ | ||
- | Now kill the shutdown process: | + | ====Configuring the Target for Discovery==== |
+ | |||
+ | Finally, you need to configure the target so it can be discovered by initiators: | ||
< | < | ||
- | root@solaris: | + | root@solaris: |
</ | </ | ||
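+ | On the initiator side, discovery might then be configured as follows. This is a sketch only, assuming the SendTargets discovery method and a hypothetical target address of 192.168.1.100: |
+ | <code> |
+ | root@solaris:~# iscsiadm modify discovery --sendtargets enable |
+ | root@solaris:~# iscsiadm add discovery-address 192.168.1.100:3260 |
+ | root@solaris:~# devfsadm -i iscsi |
+ | </code> |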
Ligne 1715: | Ligne 1216: | ||
< | < | ||
<div align=" | <div align=" | ||
- | Copyright © 2011-2015 | + | Copyright © 2019 Hugh Norris. |
- | <a rel=" | + | |
- | </ | + | |
</ | </ |