====== SO303 - Storage Administration ======
  
=====Preparing your Solaris 11 VM=====
  
Before continuing further, shut down your Solaris 11 VM. Using the **Storage** section of the **Oracle VM VirtualBox Manager**, add the following **.vmdk** disks to the **existing** SATA controller of your Solaris 11 VM:
  
^ Disk ^ Size ^ Name ^
| c7t2d0 | 200 MB | Disk1.vmdk |
| c7t3d0 | 200 MB | Disk2.vmdk |
| c7t4d0 | 200 MB | Disk3.vmdk |
| c7t5d0 | 200 MB | Disk4.vmdk |
| c7t6d0 | 200 MB | Disk5.vmdk |
| c7t7d0 | 20 GB | Mirror.vmdk |
  
Using the **System** section of the **Oracle VM VirtualBox Manager**, add a second processor to your Solaris 11 VM.
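
If you prefer to script these **Oracle VM VirtualBox Manager** operations ( the disk additions above and this processor change ), the same result can be obtained on the host with the **VBoxManage** command line tool. The following is only a sketch: the VM name //Solaris11// and the controller name //SATA// are assumed values to adapt to your own setup, and a recent VirtualBox release is assumed ( older releases use **createhd** instead of **createmedium** ):

<code>
# Create a 200 MB disk image and attach it to port 2 of the SATA controller
# ("Solaris11" and "SATA" are assumed names - adjust to your configuration)
VBoxManage createmedium disk --filename Disk1.vmdk --size 200 --format VMDK
VBoxManage storageattach "Solaris11" --storagectl "SATA" --port 2 --device 0 --type hdd --medium Disk1.vmdk
# Repeat for Disk2.vmdk to Disk5.vmdk (ports 3 to 6) and Mirror.vmdk (--size 20480, port 7)

# Add the second virtual CPU while the VM is powered off
VBoxManage modifyvm "Solaris11" --cpus 2
</code>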
  
Finally, boot your Solaris 11 VM.
  
=====Introduction=====
  
All previous versions of Solaris, including Solaris 10, use the **Unix File System** ( [[wp>Unix_File_System|UFS]] ) as their default file system. Solaris 11 uses the **Zettabyte File System** ( [[wp>Zfs|ZFS]] ), introduced with Solaris 10, as its default file system.
  
====Solaris 11 and ZFS====
  
The Solaris 11 implementation of ZFS includes the following capabilities:
  
  * 128-bit addressing,
  * data integrity assurance,
  * automated data corruption detection and repair,
  * encryption,
  * compression,
  * de-duplication,
  * quotas,
  * file system migration between pools,
  * snapshots.
  
====ZFS Vocabulary====
  
The introduction of ZFS was accompanied by a new vocabulary:
  
^ Term ^ Description ^
| pool | A storage element regrouping one or more disk partitions containing one or more file systems |
| file system | A dataset containing directories and files |
| clone | A copy of a file system |
| snapshot | A read-only copy of the state of a file system |
| compression | The reduction of storage space achieved by encoding data in a more compact form |
| de-duplication | The reduction of storage space achieved by the removal of duplicate data blocks |
| checksum | A 256-bit number used to validate data when read or written |
| encryption | The protection of data using a key or passphrase |
| quota | The maximum amount of disk space a group or user can consume |
| reservation | A preallocated amount of disk space assigned to a user or file system |
| mirror | An exact duplicate of a disk or partition |
| RAID-Z | ZFS implementation of [[wp>Raid_5#RAID_5|RAID-5]] |
| RAID-Z2 | ZFS implementation of [[wp>Raid_6#RAID_6|RAID-6]] |
| RAID-Z3 | ZFS implementation of Triple Parity RAID |
  
====ZFS Commands====
  
The ZFS commands are as follows:
  
^ Command ^ Description ^
| zpool | Used to manage ZFS pools |
| zfs | Used to manage ZFS file systems |
  
===The zpool Command===
  
The **zpool** command uses a set of subcommands:
  
-menuentry "Oracle Solaris 11.1" { +^ Command ^ Description ^ 
- search --no-floppy --file --set=root /.geranium-2012-09-19T13:14:16.621522 +| create | Creates a storage pool and configures its mount point | 
- set kern=/platform/i86pc/kernel/amd64/unix +| destroy | Destroys a storage pool | 
- echo -n "Loading ${root}$kern: " +| list | Displays the health and storage usage of a pool | 
- $multiboot $kern $kern  +| get | Displays a list of pool properties | 
- set gfxpayload="1024x768x32;1024x768x16;800x600x16;640x480x16;640x480x15;640x480x32" +set | Sets a property for a pool | 
- insmod gzio +| status | Displays the health of a pool | 
- echo -n "Loading ${root}/platform/i86pc/amd64/boot_archive: " +| history | Displays the commands issued for a pool since its creation | 
- $module /platform/i86pc/amd64/boot_archive +| add | Adds a disk to an existing pool | 
-}+| remove | Removes a disk from an existing pool | 
 +| replace | Replaces a disk in a pool by another disk | 
 +| scrub | Verifies the checksums of a pool and repairs any damaged data blocks |
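
Two of these subcommands can be tried safely on the root pool straight away; this is purely illustrative and the output on your system will differ:

<code>
root@solaris:~# zpool history rpool   # every zpool/zfs command run on the pool since its creation
root@solaris:~# zpool scrub rpool     # start a checksum verification of the pool in the background
root@solaris:~# zpool status rpool    # the scan: line reports the progress of the scrub
</code>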
  
-menuentry "Oracle Solaris 11.1 ttya" { +===The zfs Command===
- search --no-floppy --file --set=root /.geranium-2012-09-19T13:14:16.621522 +
- set kern=/platform/i86pc/kernel/amd64/unix +
- echo -n "Loading ${root}$kern:+
- $multiboot $kern $kern -B console=ttya +
- set gfxpayload="1024x768x32;1024x768x16;800x600x16;640x480x16;640x480x15;640x480x32" +
- insmod gzio +
- echo -n "Loading ${root}/platform/i86pc/amd64/boot_archive:+
- $module /platform/i86pc/amd64/boot_archive +
-}+
  
-menuentry "Oracle Solaris 11.1 ttyb" { +The **zfs** command use a set of subcommands:
- search --no-floppy --file --set=root /.geranium-2012-09-19T13:14:16.621522 +
- set kern=/platform/i86pc/kernel/amd64/unix +
- echo -n "Loading ${root}$kern:+
- $multiboot $kern $kern -B console=ttyb +
- set gfxpayload="1024x768x32;1024x768x16;800x600x16;640x480x16;640x480x15;640x480x32" +
- insmod gzio +
- echo -n "Loading ${root}/platform/i86pc/amd64/boot_archive:+
- $module /platform/i86pc/amd64/boot_archive +
-}+
  
-if [ "$target" = "i386_pc" ]; then +^ Command ^ Description ^ 
- menuentry "Boot from Hard Disk" { +| create | Creates a ZFS file system, sets its properties and automatically mounts it | 
- set root=(hd0) +| destroy | Destroys a ZFS file system or snapshot | 
- chainloader --force +1 +| list | Displays the properties and storage usage of a ZFS file system 
- } +| get | Displays a list of ZFS file system properties | 
-else +set | Sets a property for a ZFS file system | 
- menuentry "Entry [Boot from Hard Disk] not supported on this firmware" { +| snapshot | Creates a read-only copy of the state of a ZFS file system | 
- echo "Not supported" +| rollback | Returns the file system to the state of the **last** snapshot | 
- } +| send | Creates a file from a snapshot in order to migrate it to another pool | 
-fi+| receive | Retrieves a file created by the subcommand **send** | 
 +| clone | Creates a copy of a snapshot | 
 +| promote | Transforms a clone into a ZFS file system | 
 +| diff | Displays the file differences between two snapshots or a snapshot and its parent file system | 
 +| mount | Mounts a ZFS file system at a specific mount point | 
 +| unmount | Unmounts a ZFS file system |
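
To see how the snapshot-related subcommands chain together, consider the following sketch; the file system **mypool/home** and the snapshot name **@monday** are assumed ( you will create **mypool/home** in the lab below ):

<code>
root@solaris:~# zfs snapshot mypool/home@monday           # take a read-only snapshot
root@solaris:~# zfs diff mypool/home@monday mypool/home   # display what has changed since the snapshot
root@solaris:~# zfs rollback mypool/home@monday           # return the file system to that state
</code>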
  
====Solaris Slices====
  
Those familiar with UFS on Solaris will remember having to manipulate Solaris **slices**. Those slices still exist:
  
<code>
root@solaris:~# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c7t0d0 <ATA-VBOX HARDDISK-1.0-20.00GB>
          /pci@0,0/pci8086,2829@d/disk@0,0
       1. c7t2d0 <ATA-VBOX HARDDISK-1.0-200.00MB>
          /pci@0,0/pci8086,2829@d/disk@2,0
       2. c7t3d0 <ATA-VBOX HARDDISK-1.0-200.00MB>
          /pci@0,0/pci8086,2829@d/disk@3,0
       3. c7t4d0 <ATA-VBOX HARDDISK-1.0-200.00MB>
          /pci@0,0/pci8086,2829@d/disk@4,0
       4. c7t5d0 <ATA-VBOX HARDDISK-1.0-200.00MB>
          /pci@0,0/pci8086,2829@d/disk@5,0
       5. c7t6d0 <ATA-VBOX HARDDISK-1.0-200.00MB>
          /pci@0,0/pci8086,2829@d/disk@6,0
       6. c7t7d0 <ATA-VBOX HARDDISK-1.0 cyl 2608 alt 2 hd 255 sec 63>
          /pci@0,0/pci8086,2829@d/disk@7,0
Specify disk (enter its number): 0
selecting c7t0d0
[disk formatted]
/dev/dsk/c7t0d0s1 is part of active ZFS pool rpool. Please see zpool(1M).


FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        label      - write label to the disk
        analyze    - surface analysis
        defect     - defect list management
        backup     - search for backup labels
        verify     - read and display labels
        inquiry    - show disk ID
        volname    - set 8-character volume name
        !<cmd>     - execute <cmd>, then return
        quit
format> part


PARTITION MENU:
        0      - change `0' partition
        1      - change `1' partition
        2      - change `2' partition
        3      - change `3' partition
        4      - change `4' partition
        5      - change `5' partition
        6      - change `6' partition
        select - select a predefined table
        modify - modify a predefined partition table
        name   - name the current table
        print  - display the current table
        label  - write partition map and label to the disk
        !<cmd> - execute <cmd>, then return
        quit
partition> pri
Current partition table (original):
Total disk sectors available: 41926589 + 16384 (reserved sectors)

Part      Tag    Flag     First Sector        Size        Last Sector
  0  BIOS_boot    wm               256     255.88MB         524287
  1        usr    wm            524288      19.74GB         41913087
  2 unassigned    wm                 0          0              0
  3 unassigned    wm                 0          0              0
  4 unassigned    wm                 0          0              0
  5 unassigned    wm                 0          0              0
  6 unassigned    wm                 0          0              0
  8   reserved    wm          41913088       8.00MB         41929471

partition>
</code>
  
<WRAP center round important 60%>
Note the following line in the above output:

**/dev/dsk/c7t0d0s1 is part of active ZFS pool rpool. Please see zpool(1M).**

Since you are using ZFS for storage management, you no longer need to worry about slices!
</WRAP>
  
====iSCSI Storage====
  
In Solaris 10, the configuration of iSCSI LUNs was accomplished using the **iscsitadm** command and the ZFS **shareiscsi** property. In Solaris 11 these have been replaced by the use of the **Common Multiprotocol SCSI Target** ( COMSTAR ). COMSTAR is a **framework** that turns a Solaris host into a SCSI target.
  
COMSTAR includes the following features:
  
  * scalability,
  * compatibility with generic host adapters,
  * multipathing,
  * LUN masking and mapping functions.
  
An iSCSI target is an **endpoint** waiting for connections from clients called **initiators**. A target can provide multiple **Logical Units**, each of which provides classic read and write data operations.
  
Each logical unit is backed by a **storage device**. You can create a logical unit backed by any one of the following ( see the sketch after this list ):
  
  * a file,
  * a thin-provisioned file,
  * a disk partition,
  * a ZFS volume.
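
The last case is the most interesting with ZFS. The following is only a sketch of how a ZFS volume might be exposed as a logical unit with COMSTAR; it assumes the storage-server packages are not yet installed, that a pool named **mypool** exists, and that the GUID passed to **add-view** is the one printed by **create-lu**:

<code>
root@solaris:~# pkg install group/feature/storage-server      # install the COMSTAR packages
root@solaris:~# svcadm enable stmf                            # enable the SCSI target mode framework
root@solaris:~# zfs create -V 100M mypool/lun0                # a 100 MB ZFS volume as backing store
root@solaris:~# stmfadm create-lu /dev/zvol/rdsk/mypool/lun0  # create the logical unit
root@solaris:~# stmfadm add-view 600144F0...                  # expose the LU ( GUID from create-lu )
root@solaris:~# itadm create-target                           # create the iSCSI target endpoint
</code>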
  
=====LAB #1 - Managing ZFS Storage=====

====Displaying Online Help====

Both the **zpool** and **zfs** commands have built-in online help:
  
<code>
root@solaris:~# zpool help
The following commands are supported:
add      attach   clear    create   destroy  detach   export   get      
help     history  import   iostat   list     offline  online   remove   
replace  scrub    set      split    status   upgrade  
For more info, run: zpool help <command>
root@solaris:~# zfs help
The following commands are supported:
allow       clone       create      destroy     diff        get         
groupspace  help        hold        holds       inherit     key         
list        mount       promote     receive     release     rename      
rollback    send        set         share       snapshot    unallow     
unmount     unshare     upgrade     userspace   
For more info, run: zfs help <command>
</code>
  
<WRAP center round important 60%>
Note that you can get help on a subcommand by using either **zpool help <subcommand>** or **zfs help <subcommand>**.
</WRAP>
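
For example, to display the usage synopsis of the **create** subcommand ( output not reproduced here ):

<code>
root@solaris:~# zpool help create
</code>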
 + 
====Checking Pool Status====
 + 
Use the **zpool** command with the **list** subcommand to display the details of your pool:
  
<code>
root@solaris:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  19.6G  6.96G  12.7G  35%  1.00x  ONLINE  -
</code>
  
Now use the **status** subcommand:
  
<code>
root@solaris:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	rpool       ONLINE       0     0     0
	  c7t0d0s1  ONLINE       0     0     0

errors: No known data errors
</code>
  
====Creating a Mirrored Pool====
  
Create a ZFS mirrored pool called **mypool** using the first two of the five disks you recently created:
  
<code>
root@solaris:~# zpool create mypool mirror c7t2d0 c7t3d0
</code>

Check that your pool has been created:
  
<code>
root@solaris:~# zpool list
NAME     SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
mypool   187M   346K   187M   0%  1.00x  ONLINE  -
rpool   19.6G  6.97G  12.7G  35%  1.00x  ONLINE  -
</code>
  
Display the file systems using the **zfs** command and the **list** subcommand:
  
<code>
root@solaris:~# zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
mypool                            272K   155M    31K  /mypool
rpool                            7.02G  12.3G  4.58M  /rpool
rpool/ROOT                       4.87G  12.3G    31K  legacy
rpool/ROOT/solaris               4.86G  12.3G  3.92G  /
rpool/ROOT/solaris-backup-1      2.47M  12.3G  1.98G  /
rpool/ROOT/solaris-backup-1/var    46K  12.3G   758M  /var
rpool/ROOT/solaris/var            865M  12.3G   207M  /var
rpool/VARSHARE                    100K  12.3G   100K  /var/share
rpool/dump                       1.03G  12.3G  1.00G  -
rpool/export                     87.6M  12.3G    32K  /export
rpool/export/home                87.5M  12.3G    32K  /export/home
rpool/export/home/trainee        87.5M  12.3G  87.5M  /export/home/trainee
rpool/swap                       1.03G  12.3G  1.00G  -
</code>
  
<WRAP center round important 60%>
Note that the zpool command automatically creates a file system on **mypool** and mounts it at **/mypool**.
</WRAP>
  
====Adding File Systems to an Existing Pool====
  
Now create two file systems in your pool called **home** and **home/user1** and then display the results:

<code>
root@solaris:~# zfs create mypool/home
root@solaris:~# zfs create mypool/home/user1
root@solaris:~# zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
mypool                            184K   155M    32K  /mypool
mypool/home                        63K   155M    32K  /mypool/home
mypool/home/user1                  31K   155M    31K  /mypool/home/user1
rpool                            7.02G  12.3G  4.58M  /rpool
rpool/ROOT                       4.87G  12.3G    31K  legacy
rpool/ROOT/solaris               4.86G  12.3G  3.92G  /
rpool/ROOT/solaris-backup-1      2.47M  12.3G  1.98G  /
rpool/ROOT/solaris-backup-1/var    46K  12.3G   758M  /var
rpool/ROOT/solaris/var            865M  12.3G   207M  /var
rpool/VARSHARE                    100K  12.3G   100K  /var/share
rpool/dump                       1.03G  12.3G  1.00G  -
rpool/export                     87.6M  12.3G    32K  /export
rpool/export/home                87.5M  12.3G    32K  /export/home
rpool/export/home/trainee        87.5M  12.3G  87.5M  /export/home/trainee
rpool/swap                       1.03G  12.3G  1.00G  -
</code>
  
<WRAP center round important 60%>
Note that the two file systems **share** the same disk space as the parent pool.
</WRAP>
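
Because the file systems draw on the same shared space, one of them can consume everything available to the others. The **quota** and **reservation** properties seen in the vocabulary table address this; a brief sketch using the file systems just created:

<code>
root@solaris:~# zfs set quota=50m mypool/home/user1   # user1 may use at most 50 MB
root@solaris:~# zfs set reservation=20m mypool/home   # guarantee 20 MB to home
root@solaris:~# zfs get quota,reservation mypool/home mypool/home/user1
</code>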
  
====Changing the Pool Mount Point====
  
Suppose that you want the **home** file system mounted elsewhere rather than under the **/mypool** mount point. With ZFS, this is very simple:
  
<code>
root@solaris:~# mkdir /users
root@solaris:~# zfs set mountpoint=/users mypool/home
root@solaris:~# zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
mypool                            196K   155M    31K  /mypool
mypool/home                        63K   155M    32K  /users
mypool/home/user1                  31K   155M    31K  /users/user1
rpool                            7.02G  12.3G  4.58M  /rpool
rpool/ROOT                       4.87G  12.3G    31K  legacy
rpool/ROOT/solaris               4.86G  12.3G  3.92G  /
rpool/ROOT/solaris-backup-1      2.47M  12.3G  1.98G  /
rpool/ROOT/solaris-backup-1/var    46K  12.3G   758M  /var
rpool/ROOT/solaris/var            865M  12.3G   207M  /var
rpool/VARSHARE                    100K  12.3G   100K  /var/share
rpool/dump                       1.03G  12.3G  1.00G  -
rpool/export                     87.6M  12.3G    32K  /export
rpool/export/home                87.5M  12.3G    32K  /export/home
rpool/export/home/trainee        87.5M  12.3G  87.5M  /export/home/trainee
rpool/swap                       1.03G  12.3G  1.00G  -
</code>
  
<WRAP center round important 60%>
Note that ZFS has automatically and transparently unmounted **/mypool/home** and re-mounted it at **/users**.
</WRAP>
  
====Adding a Hot Spare====
  
To display all of the properties associated with **mypool**, use the **zpool** command and the **get** subcommand:
  
<code>
root@solaris:~# zpool get all mypool
NAME    PROPERTY       VALUE                SOURCE
mypool  allocated      673K                 -
mypool  altroot        -                    default
mypool  autoexpand     off                  default
mypool  autoreplace    off                  default
mypool  bootfs         -                    default
mypool  cachefile      -                    default
mypool  capacity       0%                   -
mypool  dedupditto     0                    default
mypool  dedupratio     1.00x                -
mypool  delegation     on                   default
mypool  failmode       wait                 default
mypool  free           186M                 -
mypool  guid           6502877439742337134  -
mypool  health         ONLINE               -
mypool  listshares     off                  default
mypool  listsnapshots  off                  default
mypool  readonly       off                  -
mypool  size           187M                 -
mypool  version        34                   default
</code>
  
<WRAP center round important 60%>
Note that the **autoreplace** property is set to **off**. In order for a hot spare to be used automatically, this property needs to be set to **on**.
</WRAP>
  
Set the autoreplace property to on:
  
<code>
root@solaris:~# zpool set autoreplace=on mypool
root@solaris:~# zpool get autoreplace mypool
NAME    PROPERTY     VALUE  SOURCE
mypool  autoreplace  on     local
</code>
  
Add the fourth 200 MB disk that you have created to **mypool** as a spare:
  
<code>
root@solaris:~# zpool add mypool spare c7t5d0
root@solaris:~# zpool status mypool
  pool: mypool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	mypool      ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    c7t2d0  ONLINE       0     0     0
	    c7t3d0  ONLINE       0     0     0
	spares
	  c7t5d0    AVAIL

errors: No known data errors
</code>
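
Should you wish to see the spare take over without waiting for a real failure, one of the mirror members can be replaced by the spare manually; this step is optional and purely illustrative:

<code>
root@solaris:~# zpool replace mypool c7t2d0 c7t5d0   # replace a mirror member with the spare
root@solaris:~# zpool status mypool                  # the spare is now reported as in use
</code>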
  
====Observing Pool Activity====
  
Create a random data file in **/users/user1**:
  
<code>
root@solaris:~# cat /dev/urandom > /users/user1/randomfile &
[1] 2617
</code>
  
<WRAP center round important 60%>
Write down the PID; you will need it in 2 minutes to kill the process you have just started.
</WRAP>
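
As an aside, the shell stores the PID of the most recently started background job in the special parameter **$!**, so you could also capture it instead of noting it by hand; **randpid** below is just an illustrative variable name:

<code>
root@solaris:~# randpid=$!    # PID of the cat command started above
root@solaris:~# echo $randpid
2617
</code>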
  
Now display the writes to the pool using the **iostat** subcommand of the **zpool** command:
  
-/etc/rc3.d+<code> 
-README+root@solaris:~# zpool iostat -v 3 
 +               capacity     operations    bandwidth 
 +pool        alloc   free   read  write   read  write 
 +----------  -----  -----  -----  -----  -----  ----- 
 +mypool      29.6M   157M      0      5    184  50.6K 
 +  mirror    29.6M   157M      0      5    184  50.6K 
 +    c7t2d0      -      -      0      5  1.46K  59.8K 
 +    c7t3d0      -      -      0      5  1.33K  56.9K 
 +----------  -----  -----  -----  -----  -----  ----- 
 +rpool       6.96G  12.7G      0     14  27.3K   124K 
 +  c7t0d0s1  6.96G  12.7G      0     14  27.3K   124K 
 +----------  -----  -----  -----  -----  -----  -----
  
-/etc/rcm: +               capacity     operations    bandwidth 
-scripts+pool        alloc   free   read  write   read  write 
 +----------  -----  -----  -----  -----  -----  ----- 
 +mypool      39.9M   147M      0    127      0  2.89M 
 +  mirror    39.9M   147M      0    127      0  2.89M 
 +    c7t2d0      -      -      0    113      0  2.78M 
 +    c7t3d0      -      -      0    110      0  2.89M 
 +----------  -----  -----  -----  -----  -----  ----- 
 +rpool       6.96G  12.7G      0     13      0  29.5K 
 +  c7t0d0s1  6.96G  12.7G      0     13      0  29.5K 
 +----------  -----  -----  -----  -----  -----  -----
  
-/etc/rcS.d: +               capacity     operations    bandwidth 
-K50pppd  README +pool        alloc   free   read  write   read  write 
-</code>+----------  -----  -----  -----  -----  -----  ----- 
 +mypool      53.3M   134M      0    128      0  2.78M 
 +  mirror    53.3M   134M      0    128      0  2.78M 
 +    c7t2d0      -      -      0    112      0  2.44M 
 +    c7t3d0      -      -      0    113      0  2.78M 
 +----------  -----  -----  -----  -----  -----  ----- 
 +rpool       6.96G  12.7G      0     77      0   500K 
 +  c7t0d0s1  6.96G  12.7G      0     77      0   500K 
 +----------  -----  -----  -----  -----  -----  -----
  
-====SMF Commands====+               capacity     operations    bandwidth 
 +pool        alloc   free   read  write   read  write 
 +----------  -----  -----  -----  -----  -----  ----- 
 +mypool      65.5M   121M      0    171      0  3.08M 
 +  mirror    65.5M   121M      0    171      0  3.08M 
 +    c7t2d0      -      -      0    153      0  3.08M 
 +    c7t3d0      -      -      0    153      0  3.08M 
 +----------  -----  -----  -----  -----  -----  ----- 
 +rpool       6.96G  12.7G      0     21      0  45.1K 
 +  c7t0d0s1  6.96G  12.7G      0     21      0  45.1K 
 +----------  -----  -----  -----  -----  -----  -----
  
-The commands used to manage SMF services are shown in the following table:+               capacity     operations    bandwidth 
 +pool        alloc   free   read  write   read  write 
 +----------  -----  -----  -----  -----  -----  ----- 
 +mypool      75.8M   111M      0    172      0  2.88M 
 +  mirror    75.8M   111M      0    172      0  2.88M 
 +    c7t2d0      -      -      0    149      0  2.85M 
 +    c7t3d0      -      -      0    149      0  2.88M 
 +----------  -----  -----  -----  -----  -----  ----- 
 +rpool       6.96G  12.7G      0      0      0    882 
 +  c7t0d0s1  6.96G  12.7G      0      0      0    882 
 +----------  -----  -----  -----  -----  -----  -----
  
-Command ^ Description ^ +^C 
-| svcs | Displays information about a service | +</code>
  
<WRAP center round todo 60%>
Is your mirror functioning?
</WRAP>
  
Now kill the process creating the file **randomfile**:

  # kill -9 PID [Enter]
  
Delete the file **/users/user1/randomfile**:
  
<code>
-root@solaris:~# svcs -+root@solaris:~# rm -rf /users/user1/randomfile 
-STATE          STIME    FMRI +[1]+  Killed                  cat /dev/urandom > /users/user1/randomfile
-online         Dec_03   svc:/milestone/single-user:default +
-online         Dec_03   svc:/system/filesystem/local:default +
-online         Dec_03   svc:/system/boot-loader-update:default +
-online         Dec_03   svc:/system/filesystem/ufs/quota:default +
-online         Dec_03   svc:/system/power:default +
-online         Dec_03   svc:/application/virtualbox/vboxmslnk:default +
-online         Dec_03   svc:/application/virtualbox/vboxservice:default +
-online         Dec_03   svc:/system/consolekit:default +
-online         Dec_03   svc:/network/shares:default +
-online         Dec_03   svc:/system/boot-archive-update:default +
-online         Dec_03   svc:/system/auditset:default +
-online         Dec_03   svc:/network/service:default +
-online         Dec_03   svc:/system/cron:default +
-online         Dec_03   svc:/network/iscsi/initiator:default +
-online         Dec_03   svc:/system/hal:default +
-online         Dec_03   svc:/system/filesystem/rmvolmgr:default +
-online         Dec_03   svc:/network/rpc/bind:default +
-online         Dec_03   svc:/network/inetd:default +
-online         Dec_03   svc:/system/dumpadm:default +
-online         Dec_03   svc:/network/ssh:default +
-online         Dec_03   svc:/network/rpc/gss:default +
-online         Dec_03   svc:/milestone/self-assembly-complete:default +
-online         Dec_03   svc:/network/rpc/smserver:default +
-online         Dec_03   svc:/system/system-log:default +
-online         Dec_03   svc:/network/security/ktkt_warn:default +
-online         Dec_03   svc:/system/auditd:default +
-online         Dec_03   svc:/system/console-login:default +
-online         Dec_03   svc:/system/vtdaemon:default +
-online         Dec_03   svc:/system/console-login:vt6 +
-online         Dec_03   svc:/system/console-login:vt4 +
-online         Dec_03   svc:/milestone/multi-user:default +
-online         Dec_03   svc:/application/cups/scheduler:default +
-online         Dec_03   svc:/system/console-login:vt5 +
-online         Dec_03   svc:/system/fmd:default +
-online         Dec_03   svc:/system/console-login:vt3 +
-online         Dec_03   svc:/system/console-login:vt2 +
-online         Dec_03   svc:/milestone/multi-user-server:default +
-online         Dec_03   svc:/system/fm/asr-notify:default +
-online         Dec_03   svc:/application/man-index:default +
-online         Dec_03   svc:/system/fm/smtp-notify:default +
-online         Dec_03   svc:/system/devchassis:daemon +
-online         Dec_03   svc:/system/boot-config:default +
-online         Dec_03   svc:/system/intrd:default +
-online         Dec_03   svc:/application/graphical-login/gdm:default +
-online         Dec_03   svc:/application/stosreg:default +
-online         Dec_03   svc:/system/zones-install:default +
-online         Dec_03   svc:/system/zones:default +
-online         Dec_03   svc:/network/routing/ndp:default +
-online         Dec_03   svc:/system/ocm:default +
-online         Dec_03   svc:/system/console-reset:default +
-online         Dec_03   svc:/application/texinfo-update:default +
-online         Dec_07   svc:/application/pkg/update:default +
-online         15:14:39 svc:/network/routing-setup:default +
-online         15:14:40 svc:/network/iptun:default +
-online         15:14:46 svc:/milestone/network:default +
-online         15:14:51 svc:/system/name-service/switch:default +
-online         15:14:51 svc:/milestone/name-services:default +
-online         15:14:51 svc:/network/nfs/mapid:default +
-online         15:14:52 svc:/network/sendmail-client:default +
-online         15:14:52 svc:/network/dns/client:default +
-online         15:14:52 svc:/network/smtp:sendmail +
-online         15:14:55 svc:/network/location:default +
-online         15:14:56 svc:/system/name-service/cache:default +
-online         15:14:56 svc:/system/filesystem/autofs:default+
 </code> </code>
  
-To view the processes associated with a specific service, use the svcs command with the **-p** switch:+====Setting a User Quota====
 + 
 +To set a user quota, you need to use the **set** subcommand of the **zfs** command:
  
 <code> <code>
-root@solaris:~# svcs -p svc:/system/cron:default +root@solaris:~# zfs set quota=50M mypool/home/user1 
-STATE          STIME    FMRI +root@solaris:~# zfs get quota mypool 
-online         Dec_03   svc:/system/cron:default +NAME    PROPERTY  VALUE  SOURCE 
-               Dec_03        856 cron+mypool  quota     none   default 
 +root@solaris:~# zfs list 
 +NAME                              USED  AVAIL  REFER  MOUNTPOINT 
 +mypool                            380K   155M    31K  /mypool 
 +mypool/home                        63K   155M    32K  /users 
 +mypool/home/user1                  31K  50.0M    31K  /users/user1 
 +rpool                            7.03G  12.3G  4.58M  /rpool 
 +rpool/ROOT                       4.87G  12.3G    31K  legacy 
 +rpool/ROOT/solaris               4.86G  12.3G  3.92G  / 
 +rpool/ROOT/solaris-backup-1      2.47M  12.3G  1.98G  / 
 +rpool/ROOT/solaris-backup-1/var    46K  12.3G   758M  /var 
 +rpool/ROOT/solaris/var            865M  12.3G   207M  /var 
 +rpool/VARSHARE                    101K  12.3G   101K  /var/share 
 +rpool/dump                       1.03G  12.3G  1.00G  - 
 +rpool/export                     89.3M  12.3G    32K  /export 
 +rpool/export/home                89.3M  12.3G    32K  /export/home 
 +rpool/export/home/trainee        89.3M  12.3G  89.3M  /export/home/trainee 
 +rpool/swap                       1.03G  12.3G  1.00G  -
 </code> </code>
  
-To see a detailed output of the properties of a service, use the **-l** switch:+<WRAP center round important 60%> 
 +Note that the quota of 50 Mb has been set on the **mypool/home/user1** dataset and not on the pool itself.
 +</WRAP> 
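 +
 +To confirm exactly where the quota applies, you could also read the property recursively; a minimal check, using the **-r** switch of **zfs get** to recurse through all child datasets:
 +
 +<code>
 +root@solaris:~# zfs get -r quota mypool
 +</code>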
 + 
 +Now create a random data file in /users/user1:
  
 <code> <code>
-root@solaris:~# svcs -l svc:/system/cron:default +root@solaris:~# cat /dev/urandom > /users/user1/testfile 
-fmri         svc:/system/cron:default +catoutput error (0/131072 characters written
-name         clock daemon (cron) +Disc quota exceeded
-enabled      true +
-state        online +
-next_state   none +
-state_time   December  3, 2012 01:16:23 PM CET +
-logfile      /var/svc/log/system-cron:default.log +
-restarter    svc:/system/svc/restarter:default +
-contract_id  98  +
-manifest     /etc/svc/profile/generic.xml +
-manifest     /lib/svc/manifest/system/cron.xml +
-dependency   require_all/none svc:/system/filesystem/local (online+
-dependency   require_all/none svc:/milestone/name-services (online)+
 </code> </code>
  
-The properties are as follows:+<WRAP center round important 60%> 
 +After a few minutes, you will see the **Disc quota exceeded** message. 
 +</WRAP>
  
-^ Property ^ Description ^ +Looking at the available disk space on /users/user1, you will notice that the value is now 0:
-| fmri | The Fault Management Resource Identifier of the service instance | +
-| name | An abbreviated name for the service | +
-| state | The current state of the service | +
-| next_state | When initialising, shows the next state that the service will go to | +
-| state_time | The service startup time stamp | +
-| logfile | Full path to the service log file | +
-| restarter | The service responsible for restarting the current service | +
-| contract_id | The Process ID of the restarter | +
-| manifest | The Start and Stop Manifest(s) of the service | +
-| dependency | The service's dependencies | +
- +
-List the service(s) that cron depends on by using the **-d** switch:+
  
 <code> <code>
-root@solaris:~# svcs -d svc:/system/cron:default +root@solaris:~# zfs list mypool/home/user1 
-STATE          STIME    FMRI +NAME                USED  AVAIL  REFER  MOUNTPOINT 
-online         Dec_03   svc:/system/filesystem/local:default +mypool/home/user1  50.1M      0  50.1M  /users/user1
-online         11:36:13 svc:/milestone/name-services:default+
 </code> </code>
  
-Now list the service(s) that depend on cron by using the **-D** switch:+Delete the **testfile** file:
  
 <code> <code>
-root@solaris:~# svcs -D svc:/system/cron:default +root@solaris:~# rm -/users/user1/testfile
-STATE          STIME    FMRI +
-online         Dec_03   svc:/milestone/multi-user:default+
 </code> </code>
  
-===svcs switches===+====Setting a User Reservation====
  
-The available switches for this command are:+As with setting quotas, setting a reservation is very simple:
  
 <code> <code>
-root@solaris:~# svcs -? +root@solaris:~# zfs set reservation=25M mypool/home/user1 
-Usagesvcs [-aHpv] [-o col[,col ... ]] [-R restarter] [-sS col] [<service> ...] +root@solaris:~# zfs get reservation mypool/home/user1 
-       svcs -d | -D [-Hpv] [-o col[,col ... ]] [-sS col] [<service> ...] +NAME               PROPERTY     VALUE  SOURCE 
-       svcs -l <service> ... +mypool/home/user1  reservation  25M    local 
-       svcs -x [-v] [<service...] +</code
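 +
 +Should you later need to remove the reservation, you can set the property back to **none**; a minimal sketch:
 +
 +<code>
 +root@solaris:~# zfs set reservation=none mypool/home/user1
 +</code>
 +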
-       svcs -?+====Using Snapshots====
  
- - list all service instances rather than only those that are enabled +Create file in **/users/user1**:
- -d  list dependencies of the specified service(s) +
- -D  list dependents of the specified service(s) +
- -H  omit header line from output +
- -l  list detailed information about the specified service(s) +
- -o  list only the specified columns in the output +
- -p  list process IDs and names associated with each service +
- -R  list only those services with the specified restarter +
- -s  sort output in ascending order by the specified column(s) +
- -S  sort output in descending order by the specified column(s) +
- -v  list verbose information appropriate to the type of output +
- -x  explain the status of services that might require maintenance, +
-     or explain the status of the specified service(s)+
  
- Services can be specified using an FMRI, abbreviation, or fnmatch(5) +<code> 
- pattern, as shown in these examples for svc:/network/smtp:sendmail+root@solaris:~# echo "This is a test file for the first snapshot"/users/user1/snapshot1  
 +root@solaris:~# ls /users/user1 
 +snapshot1 
 +</code>
  
- svcs [opts] svc:/network/smtp:sendmail +To create a snapshot of a ZFS file system, you need to use the **snapshot** subcommand of the **zfs** command:
- svcs [opts] network/smtp:sendmail +
- svcs [opts] network/*mail +
- svcs [opts] network/smtp +
- svcs [opts] smtp:sendmail +
- svcs [opts] smtp +
- svcs [opts] sendmail+
  
- Columns for output or sorting can be specified using these names: +<code> 
- +root@solaris:~# zfs snapshot mypool/home/user1@Dec13
- CTID    contract ID for service (see contract(4)) +
- DESC    human-readable description of the service +
- FMRI    Fault Managed Resource Identifier for service +
- INST    portion of the FMRI indicating service instance +
- N       abbreviation for next state (if in transition) +
- NSTA    abbreviation for next state (if in transition) +
- NSTATE  name for next state (if in transition) +
- S       abbreviation for current state +
- SCOPE   name for scope associated with service +
- SN      abbreviation for current state and next state +
- SVC     portion of the FMRI representing service name +
- STA     abbreviation for current state +
- STATE   name for current state +
- STIME   time of last state change+
 </code> </code>
  
-====Using the svcadm command==== +The snapshot is located in a hidden directory under **/users/user1** called **.zfs/snapshot**:
- +
-The **svcadm** command uses subcommandsThe following shows a full list of subcommands and their switches:+
  
 <code> <code>
-Usagesvcadm [-v] [cmd [args ... ]]+root@solaris:~# ls -l /users/user1/.zfs/snapshot 
 +total 3 
 +drwxr-xr-x   2 root     root           3 Dec 13 10:24 Dec13 
 +</code>
  
- svcadm enable [-rst] <service> ... - enable and online service(s) +As you can see, the snapshot contains the **snapshot1** file:
- svcadm disable [-st] <service> ... - disable and offline service(s) +
- svcadm restart <service> ... - restart specified service(s) +
- svcadm refresh <service> ... - re-read service configuration +
- svcadm mark [-It] <state> <service> ... - set maintenance state +
- svcadm clear <service> ... - clear maintenance state +
- svcadm milestone [-d] <milestone> - advance to a service milestone +
- svcadm delegate [-s] <restarter> <svc> ... - delegate service to a restarter+
  
- Services can be specified using an FMRI, abbreviation, or fnmatch(5) +<code> 
- pattern, as shown in these examples for svc:/network/smtp:sendmail +root@solaris:~# ls -l /users/user1/.zfs/snapshot/Dec13
- +total 2 
- svcadm <cmd> svc:/network/smtp:sendmail +-rw-r--r--   1 root     root          43 Dec 13 10:24 snapshot1
- svcadm <cmd> network/smtp:sendmail +
- svcadm <cmd> network/*mail +
- svcadm <cmd> network/smtp +
- svcadm <cmd> smtp:sendmail +
- svcadm <cmd> smtp +
- svcadm <cmd> sendmail+
 </code> </code>
  
-Using the **svcadm** command, stop the **cron** service by using the **disable** subcommand and the **-t** switch. The **-t** switch is specified to **t**emporarily stop the service. Should the **-t** switch not be used, the service will not automatically restart at the next reboot:+It is important to note here that the .zfs directory is also hidden from the **ls** command, even when using the **-a** switch:
  
 <code> <code>
-root@solaris:~# svcadm disable -t svc:/system/cron:default +root@solaris:~# ls -laR /users/user1 
-root@solaris:~# svcs svc:/system/cron:default +/users/user1
-STATE          STIME    FMRI +total 8 
-disabled       12:13:34 svc:/system/cron:default+drwxr-xr-x   2 root     root           3 Dec 13 10:24 . 
 +drwxr-xr-x   3 root     root           3 Dec 12 16:09 .. 
 +-rw-r--r--   1 root     root          43 Dec 13 10:24 snapshot1
 </code> </code>
  
-Now start the service again using the **enable** subcommand and the **-r** switch to specify that all dependencies should also be started:+You can also create a recursive snapshot of all file systems in a pool:
  
 <code> <code>
-root@solaris:~# svcadm enable -r svc:/system/cron:default +root@solaris:~# zfs snapshot -r mypool@Dec13-1
-root@solaris:~# svcs svc:/system/cron:default +
-STATE          STIME    FMRI +
-online         12:16:49 svc:/system/cron:default+
 </code> </code>
  
-====Using the svcprop command==== +The snapshots are stored in their respective .zfs directories:
- +
-The **svcprop** command is used to display the properties of a particular service:+
  
 <code> <code>
-root@solaris:~# svcprop svc:/system/cron:default +root@solaris:~# ls /users/.zfs/snapshot 
-general/complete astring  +Dec13-1 
-general/enabled boolean true +root@solaris:~# ls /users/user1/.zfs/snapshot 
-general/action_authorization astring solaris.smf.manage.cron +Dec13    Dec13-1
-general/entity_stability astring Unstable +
-general/single_instance boolean true +
-usr/entities fmri svc:/system/filesystem/local +
-usr/grouping astring require_all +
-usr/restart_on astring none +
-usr/type astring service +
-ns/entities fmri svc:/milestone/name-services +
-ns/grouping astring require_all +
-ns/restart_on astring none +
-ns/type astring service +
-manifestfiles/etc_svc_profile_generic_xml astring /etc/svc/profile/generic.xml +
-manifestfiles/lib_svc_manifest_system_cron_xml astring /lib/svc/manifest/system/cron.xml +
-dependents/cron_multi-user fmri svc:/milestone/multi-user +
-startd/ignore_error astring core,signal +
-start/exec astring /lib/svc/method/svc-cron +
-start/group astring root +
-start/timeout_seconds count 60 +
-start/type astring method +
-start/use_profile boolean false +
-start/user astring root +
-stop/exec astring :kill +
-stop/timeout_seconds count 60 +
-stop/type astring method +
-refresh/exec astring :kill\ -THAW +
-refresh/timeout_seconds count 60 +
-refresh/type astring method +
-tm_common_name/C ustring clock\ daemon\ \(cron\) +
-tm_man_cron1M/manpath astring /usr/share/man +
-tm_man_cron1M/section astring 1M +
-tm_man_cron1M/title astring cron +
-tm_man_crontab1/manpath astring /usr/share/man +
-tm_man_crontab1/section astring 1 +
-tm_man_crontab1/title astring crontab +
-restarter/logfile astring /var/svc/log/system-cron:default.log +
-restarter/start_pid count 7978 +
-restarter/start_method_timestamp time 1355226207.490539000 +
-restarter/start_method_waitstatus integer 32512 +
-restarter/contract count  +
-restarter/auxiliary_state astring fault_threshold_reached +
-restarter/next_state astring none +
-restarter/state astring maintenance +
-restarter/state_timestamp time 1355226207.498412000 +
-restarter_actions/auxiliary_tty boolean true +
-restarter_actions/auxiliary_fmri astring svc:/application/graphical-login/gdm:default +
-restarter_actions/restart integer +
 </code> </code>
  
-In order to identify the **method** associated with a service, you need to use the **svcprop** command with the **-p** switch:+You can list all snapshots as follows:
  
 <code> <code>
-root@solaris:~# svcprop -p start/exec system/cron +root@solaris:~# zfs list -t snapshot -r mypool 
-/lib/svc/method/svc-cron+NAME                       USED  AVAIL  REFER  MOUNTPOINT 
 +mypool@Dec13-1                0      -    31K  - 
 +mypool/home@Dec13-1                -    32K  - 
 +mypool/home/user1@Dec13            -  31.5K  - 
 +mypool/home/user1@Dec13-1          -  31.5K  -
 </code> </code>
  
-====Maintenance mode==== +Create another file in **/users/user1**:
- +
-You are now going to break the cron service by renaming the cron Start and Stop method to **svc-cron.old**:+
  
 <code> <code>
-root@solaris:~# mv /lib/svc/method/svc-cron /lib/svc/method/svc-cron.old+root@solaris:~# echo "This is a test file for the second snapshot"/users/user1/snapshot2 
 +root@solaris:~# ls -l /users/user1 
 +total 4 
 +-rw-r--r--   1 root     root          43 Dec 13 10:44 snapshot1 
 +-rw-r--r--   1 root     root          44 Dec 13 10:45 snapshot2 
 +root@solaris:~# cat /users/user1/snapshot1 
 +This is a test file for the first snapshot 
 +root@solaris:~# cat /users/user1/snapshot2 
 +This is a test file for the second snapshot
 </code> </code>
  
-Restart the service using the **svcadm** command and check the current state of the cron service using the **svcs** command:+Now take a second recursive snapshot of **mypool**:
  
 <code> <code>
-root@solaris:~# svcadm restart cron +root@solaris:~# zfs snapshot -r mypool@Dec13-2 
-root@solaris:~# svcs cron +root@solaris:~# zfs list -t snapshot -r mypool 
-STATE          STIME    FMRI +NAME                       USED  AVAIL  REFER  MOUNTPOINT 
-maintenance    12:43:27 svc:/system/cron:default+mypool@Dec13-1                0      -    31K  - 
 +mypool@Dec13-2                0      -    31K  - 
 +mypool/home@Dec13-1                -    32K  - 
 +mypool/home@Dec13-2                -    32K  - 
 +mypool/home/user1@Dec13            -  31.5K  - 
 +mypool/home/user1@Dec13-1          -  31.5K  - 
 +mypool/home/user1@Dec13-2          -    33K  -
 </code> </code>
  
-<WRAP center round important 60%> +The **diff** subcommand of the **zfs** command displays the differences between two snapshots:
-Note that the cron service has now gone into Maintenance mode ! +
-</WRAP> +
- +
-Use the **svcs** command and the **-x** switch to explain why the cron service requires maintenance:+
  
 <code> <code>
-root@solaris:~# svcs -x cron +root@solaris:~# zfs diff mypool/home/user1@Dec13-1 mypool/home/user1@Dec13-2 
-svc:/system/cron:default (clock daemon (cron)) +M /users/user1
- State: maintenance since December 11, 2012 12:43:27 PM CET +M /users/user1/snapshot1 
-Reason: Start method failed repeatedly, last exited with status 127. ++ /users/user1/snapshot2
-   See: http://support.oracle.com/msg/SMF-8000-KS +
-   See: cron(1M) +
-   See: crontab(1) +
-   See: /var/svc/log/system-cron:default.log +
-Impact: This service is not running.+
 </code> </code>
  
 <WRAP center round important 60%> <WRAP center round important 60%>
-Note that in the above output, the system suggests that you should consult a web page at **support.oracle.com**, consult the **cron(1M)** and **crontab(1)** manuals and finally consult the service log file **/var/svc/log/system-cron:default.log** in order to find out why the service is in Maintenance mode. In actual fact this list should be read bottoms-up.+The above output shows that **/users/user1/snapshot2** has been added and is only present in the **second** snapshot given on the command line.
 </WRAP> </WRAP>
  
-So before doing anything else, open the service log file:+This output can contain the following characters:
  
-<code> +^ Character ^ Description ^ 
-root@solaris:~# cat /var/svc/log/system-cron:default.log +| M | **M**odification | 
-[ Nov 20 18:25:55 Enabled. ] +| R | **R**enamed | 
-[ Nov 20 18:25:55 Rereading configuration. ] +| + | Added | 
-[ Nov 20 18:27:48 Executing start method ("/lib/svc/method/svc-cron"). ] +| Deleted |
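 +
 +You can also pass **zfs diff** a single snapshot argument, in which case it displays the changes between that snapshot and the live file system; a minimal sketch:
 +
 +<code>
 +root@solaris:~# zfs diff mypool/home/user1@Dec13-1
 +</code>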
-[ Nov 20 18:27:49 Method "start" exited with status 0. ] +
-[ Nov 20 22:46:22 Executing start method ("/lib/svc/method/svc-cron"). ] +
-[ Nov 20 22:46:22 Method "start" exited with status 0. ] +
-[ Dec  1 14:33:18 Executing start method ("/lib/svc/method/svc-cron"). ] +
-[ Dec  1 14:33:18 Method "start" exited with status 0. ] +
-[ Dec  3 13:14:29 Stopping because service disabled. ] +
-[ Dec  3 13:14:29 Executing stop method (:kill). ] +
-[ Dec  3 13:16:22 Executing start method ("/lib/svc/method/svc-cron"). ] +
-[ Dec  3 13:16:23 Method "start" exited with status 0. ] +
-[ Dec 11 12:13:34 Stopping because service disabled. ] +
-[ Dec 11 12:13:34 Executing stop method (:kill). ] +
-[ Dec 11 12:16:49 Enabled. ] +
-[ Dec 11 12:16:49 Executing start method ("/lib/svc/method/svc-cron"). ] +
-[ Dec 11 12:16:49 Method "start" exited with status 0. ] +
-[ Dec 11 12:43:27 Stopping because service restarting. ] +
-[ Dec 11 12:43:27 Executing stop method (:kill). ] +
-[ Dec 11 12:43:27 Executing start method ("/lib/svc/method/svc-cron"). ] +
-/usr/sbin/sh[1]: exec: /lib/svc/method/svc-cron: not found +
-[ Dec 11 12:43:27 Method "start" exited with status 127. ] +
-[ Dec 11 12:43:27 Executing start method ("/lib/svc/method/svc-cron"). ] +
-/usr/sbin/sh[1]: exec: /lib/svc/method/svc-cron: not found +
-[ Dec 11 12:43:27 Method "start" exited with status 127. ] +
-[ Dec 11 12:43:27 Executing start method ("/lib/svc/method/svc-cron"). ] +
-/usr/sbin/sh[1]: exec: /lib/svc/method/svc-cron: not found +
-[ Dec 11 12:43:27 Method "start" exited with status 127. ] +
-</code>+
  
-Look for lines that give you an indication of what is going wrong, such as: +Note that you cannot compare the snapshots in the reverse order:
- +
-<file> +
-/usr/sbin/sh[1]: exec: /lib/svc/method/svc-cron: not found +
-</file> +
- +
-Open and examine the **/lib/svc/method/svc-cron.old** file: +
- +
-<file> +
-#!/usr/sbin/sh +
-+
-# Copyright (c) 2004, 2011, Oracle and/or its affiliates. All rights reserved. +
-+
-# Start method script for the cron service. +
-+
- +
-. /lib/svc/share/smf_include.sh +
- +
-if [ -p $SMF_SYSVOL_FS/cronfifo ]; then +
- if /usr/bin/pgrep -x -u 0 -z `smf_zonename` cron >/dev/null 2>&1; then +
- echo "$0: cron is already running" +
- exit $SMF_EXIT_ERR_NOSMF +
- fi +
-fi +
- +
-if [ -x /usr/sbin/cron ]; then +
- /usr/bin/rm -f $SMF_SYSVOL_FS/cronfifo +
- /usr/sbin/cron & +
-else +
- exit 1 +
-fi +
-exit $SMF_EXIT_OK +
-</file> +
- +
-As you can see, without the method, the cron service cannot be started. +
- +
-Now that you are aware of the root cause of the problem, repair the cron service by renaming **/lib/svc/method/svc-cron.old** back to **/lib/svc/method/svc-cron**:+
  
 <code> <code>
-root@solaris:~# mv /lib/svc/method/svc-cron.old /lib/svc/method/svc-cron+root@solaris:~# zfs diff mypool/home/user1@Dec13-2 mypool/home/user1@Dec13-
 +Unable to obtain diffs: mypool/home/user1@Dec13-1 is not a descendant dataset of mypool/home/user1@Dec13-2
 </code> </code>
  
-Clear the maintenance status of the cron service:+====Rolling Back to a Snapshot==== 
 + 
 +If you wish to roll back to a specific snapshot, note that you can **only** roll back to the most recent snapshot, as shown by the output of **zfs list**:
  
 <code> <code>
-root@solaris:~# svcadm clear cron+root@solaris:~# zfs list -t snapshot -r mypool 
 +NAME                       USED  AVAIL  REFER  MOUNTPOINT 
 +mypool@Dec13-1                0      -    31K  - 
 +mypool@Dec13-2                0      -    31K  - 
 +mypool/home@Dec13-1                -    32K  - 
 +mypool/home@Dec13-2                -    32K  - 
 +mypool/home/user1@Dec13            -  31.5K  - 
 +mypool/home/user1@Dec13-1          -  31.5K  - 
 +mypool/home/user1@Dec13-2          -    33K  - 
 +root@solaris:~# zfs rollback mypool/home/user1@Dec13-1 
 +cannot rollback to 'mypool/home/user1@Dec13-1': more recent snapshots exist 
 +use '-r' to force deletion of the following snapshots: 
 +mypool/home/user1@Dec13-2
 </code> </code>
  
-and restart the service:+Delete the **Dec13-2** snapshot as follows:
  
 <code> <code>
-root@solaris:~# svcadm enable -r cron+root@solaris:~# zfs destroy mypool/home/user1@Dec13-2 
 +root@solaris:~# zfs list -t snapshot -r mypool 
 +NAME                       USED  AVAIL  REFER  MOUNTPOINT 
 +mypool@Dec13-1                0      -    31K  - 
 +mypool@Dec13-2                0      -    31K  - 
 +mypool/home@Dec13-1                -    32K  - 
 +mypool/home@Dec13-2                -    32K  - 
 +mypool/home/user1@Dec13            -  31.5K  - 
 +mypool/home/user1@Dec13-1          -  31.5K  -
 </code> </code>
  
-Finally, check that the cron service has come out of maintenance mode:+Now roll back to **Dec13-1**:
  
 <code> <code>
-root@solaris:~# svcs cron +root@solaris:~# zfs rollback mypool/home/user1@Dec13-1 
-STATE          STIME    FMRI +root@solaris:~# ls -l /users/user1 
-online         13:12:56 svc:/system/cron:default+total 2 
 +-rw-r--r--   1 root     root          43 Dec 13 10:24 snapshot1
 </code> </code>
  
-====Using the svccfg command====+<WRAP center round important 60%> 
 +Note that the **snapshot2** file has obviously disappeared since it was not in the **Dec13-1** snapshot. 
 +</WRAP> 
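 +
 +Had you wanted to roll back directly without first destroying the intervening snapshot, the **-r** switch of **zfs rollback** destroys the more recent snapshots for you; a sketch, to be used with care:
 +
 +<code>
 +root@solaris:~# zfs rollback -r mypool/home/user1@Dec13-1
 +</code>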
  
-As you now know, the **svcprop** command is used to display the **SMF repository** data, in other words, the properties defined in the generic.xml and service specific manifest files. The **svccfg** command is used to configure those properties. +====Cloning a Snapshot====
  
-The **svccfg** command can be used to set both **global** properties for all services by using the **-g** switch and service specific properties by using the **-s** switch followed by the <FMRI>. +Snapshots are read-only. To create a writable file system from a snapshot, you can use the **clone** subcommand of the **zfs** command:
- +
-Set the **set-notify** property globally so that you are informed by email every time a service goes into maintenance mode:+
  
 <code> <code>
-root@solaris:~# svccfg setnotify -g to-maintenance mailto:infos@fenestros.com+root@solaris:~# zfs clone mypool/home/user1@Dec13-1 mypool/home/user3 
 +root@solaris:~# zfs list 
 +NAME                              USED  AVAIL  REFER  MOUNTPOINT 
 +mypool                           25.5M   129M    31K  /mypool 
 +mypool/home                      25.1M   129M    34K  /users 
 +mypool/home/user1                50.5K  50.0M  31.5K  /users/user1 
 +mypool/home/user3                  18K   129M  31.5K  /users/user3 
 +rpool                            7.03G  12.3G  4.58M  /rpool 
 +rpool/ROOT                       4.87G  12.3G    31K  legacy 
 +rpool/ROOT/solaris               4.86G  12.3G  3.92G  / 
 +rpool/ROOT/solaris-backup-1      2.47M  12.3G  1.98G  / 
 +rpool/ROOT/solaris-backup-1/var    46K  12.3G   758M  /var 
 +rpool/ROOT/solaris/var            865M  12.3G   207M  /var 
 +rpool/VARSHARE                    102K  12.3G   102K  /var/share 
 +rpool/dump                       1.03G  12.3G  1.00G  - 
 +rpool/export                     90.5M  12.3G    32K  /export 
 +rpool/export/home                90.5M  12.3G    32K  /export/home 
 +rpool/export/home/trainee        90.4M  12.3G  90.4M  /export/home/trainee 
 +rpool/swap                       1.03G  12.3G  1.00G  -
 </code> </code>
  
-Now set that same property for the cron service so that you are informed by email when that specific service goes offline:+Display the contents of **/users/user3**:
  
 <code> <code>
-root@solaris:~# svccfg -s cron setnotify to-offline mailto:infos@fenestros.com+root@solaris:~# ls -l /users/user3 
 +total 2 
 +-rw-r--r--   1 root     root          43 Dec 13 10:24 snapshot1
 </code> </code>
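 +
 +A clone remains dependent on the snapshot it was created from, so that snapshot cannot be destroyed while the clone exists. Should you ever need to reverse the dependency, for example to keep **user3** and dispose of the original, you could use the **promote** subcommand; a minimal sketch:
 +
 +<code>
 +root@solaris:~# zfs promote mypool/home/user3
 +</code>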
  
-The **svccfg** command can also be used interactively:+====Using Compression==== 
 + 
 +In order to minimize storage space consumption, you can make a file system use compression. Compression can be activated either at creation time or afterwards. Compression only applies to new data: any data already in the file system when compression is activated remains uncompressed.
 + 
 +To activate compression on an existing file system, you need to change the file system's **compression** property from **off** to **on**:
  
 <code> <code>
-root@solaris:~# svccfg +root@solaris:~# zfs set compression=on mypool/home/user1 
-svc:> help +root@solaris:~# zfs get compression mypool/home/user1 
-General commands:        help set repository end +NAME               PROPERTY     VALUE  SOURCE 
-Manifest commands:       inventory validate import export +mypool/home/user1  compression  on     local
-Profile commands:        apply extract +
-Entity commands:         list select unselect add delete describe +
-Snapshot commands:       listsnap selectsnap revert +
-Instance commands:       refresh +
-Property group commands: listpg addpg delpg +
-Property commands:       listprop setprop delprop editprop +
-Customization commands:  listcust delcust +
-Property value commands: addpropvalue delpropvalue setenv unsetenv +
-Notification parameters: listnotify setnotify delnotify +
-svc:> select system/cron +
-svc:/system/cron> list +
-:properties +
-default +
-svc:/system/cron> select default +
-svc:/system/cron:default> listprop +
-general                            framework           +
-general/complete                  astring     +
-general/enabled                   boolean     true +
-restarter                          framework          NONPERSISTENT +
-restarter/contract                count       506 +
-restarter/start_pid               count       8008 +
-restarter/start_method_timestamp  time        1355227976.889809000 +
-restarter/start_method_waitstatus integer     0 +
-restarter/logfile                 astring     /var/svc/log/system-cron:default.log +
-restarter/auxiliary_state         astring     none +
-restarter/next_state              astring     none +
-restarter/state                   astring     online +
-restarter/state_timestamp         time        1355230779.987887000 +
-restarter_actions                  framework          NONPERSISTENT +
-restarter_actions/restart         integer     +
-restarter_actions/maint_off       integer     +
-restarter_actions/auxiliary_tty   boolean     true +
-restarter_actions/auxiliary_fmri  astring     svc:/application/graphical-login/gdm:default +
-restarter_actions/refresh         integer     +
-general_ovr                        framework          NONPERSISTENT +
-svc:/system/cron:default> exit+
 </code> </code>
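 +
 +To gauge how effective compression is on a dataset, you can read its read-only **compressratio** property; a minimal check:
 +
 +<code>
 +root@solaris:~# zfs get compressratio mypool/home/user1
 +</code>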
  
 +====Using De-duplication====
  
 +Another space-saving property of ZFS file systems is **De-duplication**:
  
 +<code>
 +root@solaris:~# zfs set dedup=on mypool/home/user1
 +root@solaris:~# zfs get dedup mypool/home/user1
 +NAME               PROPERTY  VALUE  SOURCE
 +mypool/home/user1  dedup     on     local
 +</code>
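 +
 +De-duplication is tracked at the pool level, so you can follow the overall space saving through the pool's read-only **dedupratio** property; a minimal check:
 +
 +<code>
 +root@solaris:~# zpool get dedupratio mypool
 +</code>
 +
 +Bear in mind that the de-duplication table has to be kept in memory, so this property should be enabled with care on systems with little RAM.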
  
 +====Using Encryption====
  
-==== inetd ==== +Unlike **Compression** and **De-duplication**, **Encryption** can only be set on a file system at the time of creation:
- +
-Historically under Unix, certain network servers were managed by **inetd**. The **inetd** daemon was capable of launching a specific server on an on-demand basis when it detected an incoming connection on the port associated with that serveras detailed in the **/etc/services** file. The **inetd** daemon was configured by the **/etc/inetd.conf** file:+
  
 <code> <code>
-root@solaris:~# cat /etc/inetd.conf +root@solaris:~# zfs create -o encryption=on mypool/home/user2 
-+Enter passphrase for 'mypool/home/user2': fenestros 
-# Copyright 2004 Sun Microsystems, Inc.  All rights reserved. +Enter again: fenestros
-# Use is subject to license terms. +
-+
-#ident "%Z%%M% %I% %E% SMI" +
-+
-# Legacy configuration file for inetd(1M).  See inetd.conf(4). +
-+
-# This file is no longer directly used to configure inetd. +
-# The Solaris services which were formerly configured using this file +
-# are now configured in the Service Management Facility (see smf(5)) +
-# using inetadm(1M). +
-+
-# Any records remaining in this file after installation or upgrade, +
-# or later created by installing additional software, must be converted +
-# to smf(5) services and imported into the smf repository using +
-# inetconv(1M), otherwise the service will not be available.  Once +
-# a service has been converted using inetconv, further changes made to +
-# its entry here are not reflected in the service. +
-#+
 </code> </code>
  
 <WRAP center round important 60%> <WRAP center round important 60%>
-As you can see, use of this file is now deprecated.+Note that the passphrase is not shown in the real output of the command. It is in the above example only for the purposes of this lesson.
 </WRAP> </WRAP>
  
-Lines in this file were of the following format:+To check if encryption is active on a file system, use the following command:
  
-<file+<code
-tftp   dgram   udp6    wait    root    /usr/sbin/in.tftpd      in.tftpd -s /tftpboot +root@solaris:~# zfs get encryption mypool/home/user1 
-</file>+NAME               PROPERTY    VALUE  SOURCE 
 +mypool/home/user1  encryption  off    
 +root@solaris:~# zfs get encryption mypool/home/user2 
 +NAME               PROPERTY    VALUE  SOURCE 
 +mypool/home/user2  encryption  on     local 
 +</code>
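 +
 +The wrapping key must be made available every time the encrypted file system is mounted. Assuming the Solaris 11 **key** subcommand of the **zfs** command, you could unload the key, which takes the file system offline, then reload it later; a sketch:
 +
 +<code>
 +root@solaris:~# zfs key -u mypool/home/user2
 +root@solaris:~# zfs key -l mypool/home/user2
 +</code>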
  
-The first field on each line indicated the port associated with the server. Inetd consulted the **/etc/services** file in order to identify the port number to listen on. 
  
-The second and third fields identified the protocol: 
  
-  * **stream tcp** for tcp +====Replacing a Faulty Disk====
-  * **dgram udp** for udp+
  
-The fourth field took one of two values:+In the case of a faulty disk that has not been taken over by a hot spare, replacing it is a one-line operation using the **replace** subcommand of the **zpool** command:
  
-  * **nowait**, +<code> 
-    * a server was started for each connecting client, +root@solaris:~# zpool status mypool 
-  * **wait**, +  pool: mypool 
-    * a single unique server was started for all connecting clients.+ state: ONLINE 
 +  scan: none requested 
 +config:
  
-The fifth field indicated the user executing the server, in this case **root**.+ NAME        STATE     READ WRITE CKSUM 
 + mypool      ONLINE               0 
 +   mirror-0  ONLINE               0 
 +     c7t2d0  ONLINE               0 
 +     c7t3d0  ONLINE               0 
 + spares 
 +   c7t5d0    AVAIL   
  
-The sixth field indicated the program to be launched. In this case **/usr/sbin/in.tftpd**.+errors: No known data errors 
 +root@solaris:~# zpool replace mypool c7t2d0 c7t4d0 
 +</code>
  
-The seventh field identified the arguments and switches given to the program. Argument **0** was always the name of the program.+Use the **status** subcommand of the **zpool** command again to see what has happened:
  
 +<code>
 +root@solaris:~# zpool status mypool
 +  pool: mypool
 + state: ONLINE
 +  scan: resilvered 601K in 0h0m with 0 errors on Thu Dec 13 11:45:49 2012
 +config:
  
 + NAME        STATE     READ WRITE CKSUM
 + mypool      ONLINE               0
 +   mirror-0  ONLINE               0
 +     c7t4d0  ONLINE               0
 +     c7t3d0  ONLINE               0
 + spares
 +   c7t5d0    AVAIL   
  
-==== TCP Wrapper ====+errors: No known data errors 
 +</code>
  
-In order to improve security, **TCP Wrapper** was used to control access to the servers managed by the inetd daemon. Each line in the **/etc/inetd.conf** configuration file, such as:+<WRAP center round important 60%> 
 +ZFS //Resilvering// is the equivalent of UFS re-synchronization. 
 +</WRAP> 
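 +
 +Rather than waiting for a failure, you can also ask ZFS to read and verify every block in a pool with the **scrub** subcommand, then follow its progress with **status**; a minimal sketch:
 +
 +<code>
 +root@solaris:~# zpool scrub mypool
 +root@solaris:~# zpool status mypool
 +</code>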
  
-<file> +====Destroying a Pool====
-tftp   dgram   udp6    wait    root    /usr/sbin/in.tftpd      in.tftpd -s /tftpboot +
-</file>+
  
-was replaced by a line of the following format:+Destroying a pool is achieved by using the **destroy** subcommand of the **zpool** command:
  
-<file+<code
-tftp   dgram   udp6    wait    root    /usr/sbin/tcpd      in.tftpd -s /tftpboot +root@solaris:~# zpool destroy mypool 
-</file>+</code>
  
-Subsequently, when a connection was detected, **inetd** launched the **/usr/sbin/tcpd** program as opposed to the **/usr/sbin/in.tftpd** program. +As you can see from the following output, this operation has also destroyed all the associated snapshots:
  
-The **tcpd** daemon then updated its logs and checked whether the client IP, FQDN or domain name was listed in one of the following two files:+<code> 
 +root@solaris:~# zfs list 
 +NAME                              USED  AVAIL  REFER  MOUNTPOINT 
 +rpool                            7.03G  12.3G  4.58M  /rpool 
 +rpool/ROOT                       4.87G  12.3G    31K  legacy 
 +rpool/ROOT/solaris               4.86G  12.3G  3.92G  / 
 +rpool/ROOT/solaris-backup-1      2.47M  12.3G  1.98G  / 
 +rpool/ROOT/solaris-backup-1/var    46K  12.3G   758M  /var 
 +rpool/ROOT/solaris/var            865M  12.3G   207M  /var 
 +rpool/VARSHARE                    102K  12.3G   102K  /var/share 
 +rpool/dump                       1.03G  12.3G  1.00G  - 
 +rpool/export                     90.5M  12.3G    32K  /export 
 +rpool/export/home                90.5M  12.3G    32K  /export/home 
 +rpool/export/home/trainee        90.4M  12.3G  90.4M  /export/home/trainee 
 +rpool/swap                       1.03G  12.3G  1.00G  - 
 +root@solaris:~# zfs list -t snapshot -r mypool 
 +cannot open 'mypool': filesystem does not exist 
 +root@solaris:~# ls -l /users 
 +total 0 
 +</code>
  
-  * **/etc/hosts.allow** +<WRAP center round important 60%> 
-  * **/etc/hosts.deny**+As you have seen above, destroying a pool, **all** the data in it and **all** the associated snapshots is disconcertingly simple. You should therefore be very careful when using the **destroy** subcommand.
 +</WRAP>
  
-The lines in the above two files were of the following format: 
  
-<file> 
-daemon : client list 
-</file> 
  
-For example in the case of our **tftp**, if the **/etc/hosts.allow** contained the following line:+====Creating a RAID-5 Pool====
  
-<file> +You can create a RAID-5 pool using the RAID-Z algorithm:
-in.tftpd 192.168.1.10, .fenestros.com +
-</file>+
  
-then the client using the **192.168.1.10** IP address or any client whose domain name was **fenestros.com** could connect to the server.+<code> 
 +root@solaris:~# zpool create mypool raidz c7t2d0 c7t3d0 c7t4d0 spare c7t5d0 
 +root@solaris:~# zpool status mypool 
 +  pool: mypool 
 + state: ONLINE 
 +  scan: none requested 
 +config:
  
-A special keyword could also be used: **ALL**. If the **/etc/hosts.deny** file contained the line **ALL:ALL**, the system was hermetically sealed against all connections. + NAME        STATE     READ WRITE CKSUM 
- + mypool      ONLINE               0 
 +   raidz1-0  ONLINE               0 
 +     c7t2d0  ONLINE               0 
 +     c7t3d0  ONLINE               0 
 +     c7t4d0  ONLINE               0 
 + spares 
 +   c7t5d0    AVAIL   
  
 +errors: No known data errors
 +</code>
  
- +Destroy **mypool**:
- +
-Since the introduction of Solaris 10, the inetd daemon is managed by SMF. By default, TCP Wrappers is disabled:+
  
 <code> <code>
-root@solaris:~# svcprop -p defaults inetd +root@solaris:~# zpool destroy mypool
-defaults/bind_addr astring "" +
-defaults/bind_fail_interval integer -1 +
-defaults/bind_fail_max integer -1 +
-defaults/con_rate_offline integer -1 +
-defaults/connection_backlog integer 10 +
-defaults/failrate_cnt integer 40 +
-defaults/failrate_interval integer 60 +
-defaults/inherit_env boolean true +
-defaults/max_con_rate integer -1 +
-defaults/max_copies integer -1 +
-defaults/stability astring Evolving +
-defaults/tcp_keepalive boolean false +
-defaults/tcp_trace boolean false +
-defaults/tcp_wrappers boolean false +
-defaults/value_authorization astring solaris.smf.value.inetd+
 </code> </code>
  
-To activate TCP Wrappers, use the **svccfg** command as follows:+====Creating a RAID-6 Pool==== 
 + 
 +You can create a RAID-6 pool using the RAID-Z2 algorithm:
  
 <code> <code>
-root@solaris:~# svccfg -s inetd setprop defaults/tcp_wrappers=true +root@solaris:~# zpool create mypool raidz2 c7t2d0 c7t3d0 c7t4d0 c7t5d0 spare c7t6d0 
-</code>+root@solaris:~# zpool status mypool 
 +  pool: mypool 
 + state: ONLINE 
 +  scan: none requested 
 +config:
  
-Refresh the inetd service using **svcadm** and check whether TCP Wrappers is now enabled:+ NAME        STATE     READ WRITE CKSUM 
 + mypool      ONLINE               0 
 +   raidz2-0  ONLINE               0 
 +     c7t2d0  ONLINE               0 
 +     c7t3d0  ONLINE               0 
 +     c7t4d0  ONLINE               0 
 +     c7t5d0  ONLINE               0 
 + spares 
 +   c7t6d0    AVAIL   
  
-<code> +errorsNo known data errors
-root@solaris:~# svcadm refresh inetd +
-root@solaris:~# svcprop -p defaults inetd | grep tcp_wrappers +
-defaults/tcp_wrappers boolean true+
 </code> </code>
  
-The **inetadm** command is used to list the servers managed by inetd:+Destroy **mypool**:
  
 <code> <code>
-root@solaris:~# inetadm +root@solaris:~# zpool destroy mypool
-ENABLED   STATE          FMRI +
-disabled  disabled       svc:/application/cups/in-lpd:default +
-enabled   online         svc:/network/security/ktkt_warn:default +
-disabled  disabled       svc:/network/rpc/rusers:default +
-disabled  disabled       svc:/network/rpc/spray:default +
-enabled   online         svc:/network/rpc/smserver:default +
-disabled  disabled       svc:/network/rpc/wall:default +
-disabled  disabled       svc:/network/rpc/rstat:default +
-enabled   online         svc:/network/rpc/gss:default +
-disabled  disabled       svc:/network/rpc/rex:default +
-disabled  disabled       svc:/network/echo:dgram +
-disabled  disabled       svc:/network/echo:stream +
-disabled  disabled       svc:/network/time:dgram +
-disabled  disabled       svc:/network/time:stream +
-disabled  disabled       svc:/network/shell:default +
-disabled  disabled       svc:/network/shell:kshell +
-disabled  disabled       svc:/network/stlisten:default +
-disabled  disabled       svc:/network/finger:default +
-disabled  disabled       svc:/network/discard:dgram +
-disabled  disabled       svc:/network/discard:stream +
-disabled  disabled       svc:/network/nfs/rquota:default +
-disabled  disabled       svc:/network/telnet:default +
-disabled  disabled       svc:/network/chargen:dgram +
-disabled  disabled       svc:/network/chargen:stream +
-disabled  disabled       svc:/network/rexec:default +
-disabled  disabled       svc:/network/daytime:dgram +
-disabled  disabled       svc:/network/daytime:stream +
-disabled  disabled       svc:/network/comsat:default +
-disabled  disabled       svc:/network/login:eklogin +
-disabled  disabled       svc:/network/login:klogin +
-disabled  disabled       svc:/network/login:rlogin +
-disabled  disabled       svc:/network/talk:default +
-disabled  disabled       svc:/network/tftp/udp6:default +
-disabled  disabled       svc:/network/stdiscover:default +
-disabled  disabled       svc:/application/x11/xfs:default +
-disabled  disabled       svc:/application/x11/xvnc-inetd:default+
 </code> </code>
  
-Using this same command and the **-l** switch you can check to see if the **tftp** server is configured to use TCP Wrappers:+<WRAP center round todo 60%> 
 +Create a triple-parity RAID-Z (**raidz3**) **mypool** using your five 200 Mb disks. Do not delete it.
 +</WRAP> 
 + 
 +====Displaying the Zpool History==== 
 + 
 +You can review everything that has been done to existing pools by using the **history** subcommand of the **zpool** command:
  
 <code> <code>
-root@solaris:~# inetadm -/network/tftp/udp6 | grep tcp_wrappers +root@solaris:~# zpool history 
-default  tcp_wrappers=TRUE+History for 'mypool': 
 +2012-12-13.14:02:17 zpool create mypool raidz3 c7t2d0 c7t3d0 c7t4d0 c7t5d0 spare c7t6d0 
 + 
 +History for 'rpool': 
 +2012-11-20.19:08:05 zpool create -f -B rpool c7t0d0s1 
 +2012-11-20.19:08:05 zfs create -p -o mountpoint=/export rpool/export 
 +2012-11-20.19:08:05 zfs set mountpoint=/export rpool/export 
 +2012-11-20.19:08:05 zfs create -p rpool/export/home 
 +2012-11-20.19:08:06 zfs create -p -o canmount=noauto -o mountpoint=/var/share rpool/VARSHARE 
 +2012-11-20.19:08:06 zfs set mountpoint=/var/share rpool/VARSHARE 
 +2012-11-20.19:08:07 zfs create -p -V 1024.0m rpool/dump 
 +2012-11-20.19:08:12 zfs create -p -V 1024.0m rpool/swap 
 +2012-11-20.19:08:20 zfs set primarycache=metadata rpool/swap 
 +2012-11-20.19:25:51 zfs set primarycache=metadata rpool/swap 
 +2012-11-20.19:26:25 zfs create rpool/export/home/trainee 
 +2012-11-20.22:45:56 zfs set primarycache=metadata rpool/swap 
 +2012-12-01.14:32:36 zfs set primarycache=metadata rpool/swap 
 +2012-12-03.13:15:45 zfs set primarycache=metadata rpool/swap 
 +2012-12-08.14:33:41 zfs /tmp/be 
 +2012-12-11.15:33:50 zfs set primarycache=metadata rpool/swap 
 +2012-12-12.09:57:00 zfs set primarycache=metadata rpool/swap
 </code> </code>
  
-To modify this property, you can use the **inetadm** command with the **-m** switch:+<WRAP center round important 60%> 
 +Note that the history related to destroyed pools has been deleted. 
 +</WRAP> 
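 +
 +The **history** subcommand also accepts switches: **-l** adds the user name, hostname and zone to each record, while **-i** displays internally logged events; a minimal sketch:
 +
 +<code>
 +root@solaris:~# zpool history -l mypool
 +</code>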
 + 
 +=====LAB #2 Managing iSCSI Storage===== 
 + 
 +====Installing the COMSTAR Server==== 
 + 
 +Start by installing the COMSTAR storage server software:
  
 <code> <code>
-root@solaris:~# inetadm -m tftp/udp6 tcp_wrappers=FALSE +root@solaris:~# pkg install group/feature/storage-server 
-root@solaris:~# inetadm -l /network/tftp/udp6 | grep tcp_wrappers +           Packages to install 20 
-         tcp_wrappers=FALSE+       Create boot environment:  No 
 +Create backup boot environment: Yes 
 +            Services to change:   1 
 + 
 +DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED 
 +Completed                              20/20     1023/1023    54.5/54.5  636k/s 
 + 
 +PHASE                                          ITEMS 
 +Installing new actions                     1863/1863 
 +Updating package state database                 Done  
 +Updating image state                            Done  
 +Creating fast lookup database                   Done
 </code> </code>
  
-Now change the global system default property of TCP Wrappers:+The **COMSTAR target mode framework** runs as the **stmf** service. Check to see if it is enabled:
  
 <code> <code>
-root@solaris:~# inetadm -M tcp_wrappers=TRUE+root@solaris:~# svcs \*stmf\* 
 +STATE          STIME    FMRI 
 +disabled       15:43:16 svc:/system/stmf:default
 </code> </code>
  
-Note however that the tftp daemon keeps its previously defined value for the same property:+Enable the service:
  
 <code> <code>
-root@solaris:~# inetadm -l /network/tftp/udp6 | grep tcp_wrappers +root@solaris:~# svcadm enable stmf 
-         tcp_wrappers=FALSE+root@solaris:~# svcs \*stmf\* 
 +STATE          STIME    FMRI 
 +online         16:01:56 svc:/system/stmf:default
 </code> </code>
  
-Change that value back to TRUE:+You can check the status of the server using the **stmfadm** command:
  
 <code> <code>
-root@solaris:~# inetadm -m tftp/udp6 tcp_wrappers=TRUE +root@solaris:~# stmfadm list-state 
-root@solaris:~# inetadm -l /network/tftp/udp6 | grep tcp_wrappers +Operational Statusonline 
-         tcp_wrappers=TRUE+Config Status     : initialized 
 +ALUA Status       : disabled 
 +ALUA Node         : 0
 </code> </code>
  
-====Boot Milestone Services====+====Creating SCSI Logical Units====
  
-**Before** you start experimenting with milestones, write down the following command:+First you need to create your **Backing Storage Device** within your **mypool** pool:
  
-<file+<code
-svcadm milestone all +root@solaris:~# zfs create -V 100M mypool/iscsi 
-</file>+root@solaris:~# zfs list 
 +NAME                              USED  AVAIL  REFER  MOUNTPOINT 
 +mypool                            103M  51.6M    31K  /mypool 
 +mypool/iscsi                      103M   155M    16K  - 
 +rpool                            7.40G  11.9G  4.58M  /rpool 
 +rpool/ROOT                       5.22G  11.9G    31K  legacy 
 +rpool/ROOT/solaris               5.22G  11.9G  4.08G  / 
 +rpool/ROOT/solaris-backup-1      2.47M  11.9G  1.98G  / 
 +rpool/ROOT/solaris-backup-1/var    46K  11.9G   758M  /var 
 +rpool/ROOT/solaris-backup-2       127K  11.9G  3.92G  / 
 +rpool/ROOT/solaris-backup-2/var    58K  11.9G   266M  /var 
 +rpool/ROOT/solaris/var            980M  11.9G   209M  /var 
 +rpool/VARSHARE                    102K  11.9G   102K  /var/share 
 +rpool/dump                       1.03G  12.0G  1.00G  - 
 +rpool/export                      108M  11.9G    32K  /export 
 +rpool/export/home                 108M  11.9G    32K  /export/home 
 +rpool/export/home/trainee         108M  11.9G   108M  /export/home/trainee 
 +rpool/swap                       1.03G  12.0G  1.00G  - 
 +</code>
  
-You will now put your system into single-user mode by using the following command:+You can see your raw device in the **/dev/zvol/rdsk/mypool/** directory:
  
 <code> <code>
-root@solaris:~# svcadm milestone svc:/milestone/single-user+root@solaris:~# ls -l /dev/zvol/rdsk/mypool 
 +total 0 
 +lrwxrwxrwx   1 root     root           0 Dec 14 09:42 iscsi -> ../../../..//devices/pseudo/zfs@0:6,raw
 </code> </code>
  
-<WRAP center round important 60%> +You can now create a logical unit using the **create-lu** subcommand of the **sbdadm** command:
-When you are in single-user mode, use the command you just wrote down to get back here. +
-</WRAP>+
  
 +<code>
 +root@solaris:~# sbdadm create-lu /dev/zvol/rdsk/mypool/iscsi
 +Created the following LU:
  
-====The shutdown command====+       GUID                    DATA SIZE           SOURCE 
 +--------------------------------  -------------------  ---------------- 
 +600144f0e2a54e00000050cae6d80001  104857600            /dev/zvol/rdsk/mypool/iscsi 
 +</code>
  
-The **shutdown** command is used to either halt, reboot or change the **state** of the system. The command has the following syntax:+====Mapping the Logical Unit====
  
-  shutdown [-y] [-g seconds] [-r | -i state] [message]+In order for the logical unit to be available to initiators, it has to be **mapped**. To map the logical unit you need its GUID, which you can retrieve with either of the two following commands:
  
-The switches are as follows :+<code> 
 +root@solaris:~# sbdadm list-lu
  
-^ Switch ^ Description ^ +Found 1 LU(s)
-| -y | The command is non-interactive | +
-| -g seconds | Grace period in seconds. The default value is 60. | +
-| -i state| Destination state. The default value is S. | +
-| -r | Equivalent to -i6 |+
  
-Before starting to shutdown, the system sends out a standard message:+       GUID                    DATA SIZE           SOURCE 
 +--------------------------------  -------------------  ---------------- 
 +600144f0e2a54e00000050cae6d80001  104857600            /dev/zvol/rdsk/mypool/iscsi 
 +</code>
  
-**The system will be shut down in ...** +<code> 
 +root@solaris:~# stmfadm list-lu -v 
 +LU Name: 600144F0E2A54E00000050CAE6D80001 
 +    Operational Status     : Online 
 +    Provider Name          : sbd 
 +    Alias                  : /dev/zvol/rdsk/mypool/iscsi 
 +    View Entry Count       : 0 
 +    Data File              : /dev/zvol/rdsk/mypool/iscsi 
 +    Meta File              : not set 
 +    Size                   : 104857600 
 +    Block Size             : 512 
 +    Management URL         : not set 
 +    Vendor ID              : SUN      
 +    Product ID             : COMSTAR          
 +    Serial Num             : not set 
 +    Write Protect          : Disabled 
 +    Write Cache Mode Select: Enabled 
 +    Writeback Cache        : Enabled 
 +    Access State           : Active 
 +</code>
  
-This message is sent out 7200, 3600, 1800, 1200, 600, 300, 120, 60 and 30 seconds before shutdown begins.+Create a simple mapping for this logical unit by using the **add-view** subcommand of the **stmfadm** command:
  
-The system message can also be complemented by an administrator defined message, **[message]**. If the message is longer than one word it must be enclosed in single (') or double (") quotation marks.+<code> 
 +root@solaris:~# stmfadm add-view 600144F0E2A54E00000050CAE6D80001 
 +</code>
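 +
 +You can verify the mapping with the **list-view** subcommand of the **stmfadm** command, passing the GUID of the logical unit to the **-l** switch; a minimal check:
 +
 +<code>
 +root@solaris:~# stmfadm list-view -l 600144F0E2A54E00000050CAE6D80001
 +</code>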
  
-The switch **-i** can take one of 5 states:+====Creating a Target====
  
-^ State ^ Description ^ +In order to create a target, the **svc:/network/iscsi/target:default** service must be online. Check if it is:
-| 0 | System halt | +
-| 1 | Administrative state | +
-| s or S | Single User state | +
-| 5 | System halt and Powerdown | +
-| 6 | System reboot | +
- +
-Use the following command to shutdown your system:+
  
 <code> <code>
-root@solaris:~# shutdown -y -g360 -i0+root@solaris:~# svcs \*scsi\* 
 +STATE          STIME    FMRI 
 +disabled       15:42:56 svc:/network/iscsi/target:default 
 +online         Dec_12   svc:/network/iscsi/initiator:default 
 +</code>
  
-Shutdown started.    Wednesday, December 12, 2012 11:56:53 AM CET+Start the service:
  
-Broadcast Message from root (pts/1) on solaris.fenestros.loc Wed Dec 12 11:56:53... +<code> 
-The system solaris.fenestros.loc will be shut down in 6 minutes +root@solaris:~# svcadm enable -r svc:/network/iscsi/target:default 
 +root@solaris:~# svcs \*scsi\* 
 +STATE          STIME    FMRI 
 +online         Dec_12   svc:/network/iscsi/initiator:default 
 +online         10:06:54 svc:/network/iscsi/target:default 
 +</code>
  
-showmount: solaris.fenestros.locRPCProgram not registered+Now create a target using the **create-target** subcommand of the **itadm** command: 
 + 
 +<code> 
 +root@solaris:~# itadm create-target 
 +Target iqn.1986-03.com.sun:02:897fd011-8b3d-cf2b-fc1d-a010bd97d035 successfully created
 </code> </code>
  
-Open another terminal and use the following command to identify the PID of the shutdown process:+To list the target(s), use the **list-target** subcommand of the **itadm** command:
  
 <code> <code>
-root@solaris:~# ps -ef | grep shutdown +root@solaris:~# itadm list-target 
-    root  2462  1914   0 11:56:53 pts/1       0:00 /usr/sbin/sh /usr/sbin/shutdown --g360 -i0+TARGET NAME                                                  STATE    SESSIONS  
 +iqn.1986-03.com.sun:02:897fd011-8b3d-cf2b-fc1d-a010bd97d035  online          
 </code> </code>
  
-Now kill the shutdown process:+====Configuring the Target for Discovery==== 
 + 
 +Finally, you need to configure the target so it can be discovered by initiators:
  
 <code> <code>
-root@solaris:~# kill -9 2462+root@solaris:~# devfsadm -i iscsi
 </code> </code>
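 +
 +On the initiator side, you would then typically declare the target's address and enable SendTargets discovery with the **iscsiadm** command; a sketch, assuming a hypothetical target address of 192.168.1.100:
 +
 +<code>
 +root@solaris:~# iscsiadm add discovery-address 192.168.1.100
 +root@solaris:~# iscsiadm modify discovery --sendtargets enable
 +</code>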
  
Ligne 1715: Ligne 1216:
 <html> <html>
 <div align="center"> <div align="center">
-Copyright © 2011-2015 Hugh Norris.<br><br> +Copyright © 2019 Hugh Norris.
-<a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/3.0/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-nd/3.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/3.0/">Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License</a>+
-</div>+
 </html> </html>