====== SO304 - Zone Administration ======
  
=====Solaris Containers=====
  
The term Solaris **Containers** is often confused with that of Solaris **Zones**. In fact, there is a slight difference. The definition of a container is:
  
<WRAP center round important 60%>
Solaris container = Solaris Zone + Solaris Resource Manager ( SRM )
</WRAP>
  
The **SRM** is responsible for workload and resource management.
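To give an idea of what the SRM side of a container looks like in practice, the sketch below attaches a **capped-cpu** resource control to a hypothetical zone called //myzone// ( the zone name and the 1.5 CPU value are examples only; dedicated CPUs, CPU shares and memory caps are covered in LAB #2 below ):

<code>
root@solaris:~# zonecfg -z myzone
zonecfg:myzone> add capped-cpu
zonecfg:myzone:capped-cpu> set ncpus=1.5
zonecfg:myzone:capped-cpu> end
zonecfg:myzone> exit
</code>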
  
Solaris Zones are **not** full guest operating system kernels and as such they are similar in concept to FreeBSD **[[wp>FreeBSD_jail|Jails]]**. Zones share the same kernel and therefore cannot be live migrated to another host.
  
There are two types of zones:
  
  * a single **Global** zone,
  * one or more **Non-global** or **local** zones.
  
Each local zone requires about 400 MB of disk space and 15 MB of RAM.
  
====The Global Zone====
  
The global zone:
  
  * has a zone **ID** of **0**,
  * provides the single instance of the Solaris kernel shared by all zones,
  * contains all packages installed by IPS,
  * can contain other software not installed by IPS,
  * contains a database of all applications installed in the global zone,
  * contains all configuration data concerning the global zone such as its host name,
  * knows about all //devices// and all //file systems//,
  * is aware of all local zones as well as their configuration,
  * is the zone in which local zones can be created, installed, configured, managed, uninstalled and deleted.
  
====Non-global or Local Zones====
  
A local zone:
  
  * is given a zone ID when it is booted,
  * shares the kernel with the global zone,
  * contains a subset of the installed packages,
  * shares packages with the global zone,
  * can contain other software and files not present in the global zone,
  * contains a database of all locally installed applications as well as all applications shared by the global zone,
  * has no knowledge of the other local zones,
  * cannot be used to manage or to uninstall local zones, including itself,
  * contains all configuration data concerning the local zone such as its host name and IP address.
  
For those familiar with Solaris 10 zones, there were two types of local zones:
  
  * **//Small zones//** or //Sparse Root zones// where the zone shared the following global zone directories:
    * /usr
    * /lib
    * /platform
    * /sbin
  * **//Big zones//** or //Whole Root zones// that contained a complete Solaris installation.
  
In Solaris 11, only Whole Root Zones remain.
  
  
=====LAB #1 - Installing a Non-global Zone=====
  
In this lab, you will install a **Local Zone** into a ZFS file system. Start by looking at where you can create the directory that will contain future zones:
  
<code>
root@solaris:~# zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
mypool                            103M  51.5M    31K  /mypool
mypool/iscsi                      103M   155M    16K  -
rpool                            7.40G  11.9G  4.58M  /rpool
rpool/ROOT                       5.22G  11.9G    31K  legacy
rpool/ROOT/solaris               5.22G  11.9G  4.08G  /
rpool/ROOT/solaris-backup-1      2.47M  11.9G  1.98G  /
rpool/ROOT/solaris-backup-1/var    46K  11.9G   758M  /var
rpool/ROOT/solaris-backup-2       127K  11.9G  3.92G  /
rpool/ROOT/solaris-backup-2/var    58K  11.9G   266M  /var
rpool/ROOT/solaris/var            980M  11.9G   209M  /var
rpool/VARSHARE                    102K  11.9G   102K  /var/share
rpool/dump                       1.03G  12.0G  1.00G  -
rpool/export                      110M  11.9G    32K  /export
rpool/export/home                 110M  11.9G    32K  /export/home
rpool/export/home/trainee         110M  11.9G   110M  /export/home/trainee
rpool/swap                       1.03G  12.0G  1.00G  -
</code>
  
<WRAP center round important 60%>
You **cannot** create zone datasets under the **rpool/ROOT** dataset.
</WRAP>
  
====Configuring the Zone's Dataset====
  
The best option is therefore to create a new file system just for zones:
  
<code>
root@solaris:~# zfs create -o mountpoint=/zones rpool/zones
root@solaris:~# zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
mypool                            103M  51.5M    31K  /mypool
mypool/iscsi                      103M   155M    16K  -
rpool                            7.40G  11.9G  4.58M  /rpool
rpool/ROOT                       5.22G  11.9G    31K  legacy
rpool/ROOT/solaris               5.22G  11.9G  4.08G  /
rpool/ROOT/solaris-backup-1      2.47M  11.9G  1.98G  /
rpool/ROOT/solaris-backup-1/var    46K  11.9G   758M  /var
rpool/ROOT/solaris-backup-2       127K  11.9G  3.92G  /
rpool/ROOT/solaris-backup-2/var    58K  11.9G   266M  /var
rpool/ROOT/solaris/var            980M  11.9G   209M  /var
rpool/VARSHARE                    102K  11.9G   102K  /var/share
rpool/dump                       1.03G  12.0G  1.00G  -
rpool/export                      110M  11.9G    32K  /export
rpool/export/home                 110M  11.9G    32K  /export/home
rpool/export/home/trainee         110M  11.9G   110M  /export/home/trainee
rpool/swap                       1.03G  12.0G  1.00G  -
rpool/zones                        31K  11.9G    31K  /zones
</code>
  
Now create your zone using the **zonecfg** command:
  
<code>
root@solaris:~# zonecfg -z myzone
Use 'create' to begin configuring a new zone.
zonecfg:myzone> create
create: Using system default template 'SYSdefault'
zonecfg:myzone> set zonepath=/zones/myzone
zonecfg:myzone> set autoboot=true
zonecfg:myzone>
</code>
  
<WRAP center round important 60%>
The **-z** switch specifies the **zonename**.
</WRAP>
  
Zones are represented by **XML** files. As you can see above, zonecfg uses the default template **/etc/zones/SYSdefault.xml** when creating a new zone:
  
<code>
root@solaris:~# cat /etc/zones/SYSdefault.xml
<?xml version="1.0"?>

<!--
 Copyright (c) 2010, 2011, Oracle and/or its affiliates. All rights reserved.

    DO NOT EDIT THIS FILE.  Use zonecfg(1M) instead.
-->

<!DOCTYPE zone PUBLIC "-//Sun Microsystems Inc//DTD Zones//EN" "file:///usr/share/lib/xml/dtd/zonecfg.dtd.1">

<zone name="default" zonepath="" autoboot="false" brand="solaris"
      ip-type="exclusive">
  <automatic-network lower-link="auto" linkname="net0"
      link-protection="mac-nospoof" mac-address="random"/>
</zone>
</code>
  
Note that you have also set the **autoboot** property to **true** so that the zone starts on system boot.
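Should you change your mind later, the property can be modified at any time; a quick sketch using the one-line form of **zonecfg** ( the **false** value is just an example ):

<code>
root@solaris:~# zonecfg -z myzone "set autoboot=false"
root@solaris:~# zonecfg -z myzone info autoboot
autoboot: false
</code>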
  
To show the configuration that zonecfg has already filled in for you from the **/etc/zones/SYSdefault.xml** template, use the **info** command:
  
<code>
zonecfg:myzone> info
zonename: myzone
zonepath: /zones/myzone
brand: solaris
autoboot: true
bootargs: 
file-mac-profile: 
pool: 
limitpriv: 
scheduling-class: 
ip-type: exclusive
hostid: 
fs-allowed: 
anet:
 linkname: net0
 lower-link: auto
 allowed-address not specified
 configure-allowed-address: true
 defrouter not specified
 allowed-dhcp-cids not specified
 link-protection: mac-nospoof
 mac-address: random
 mac-prefix not specified
 mac-slot not specified
 vlan-id not specified
 priority not specified
 rxrings not specified
 txrings not specified
 mtu not specified
 maxbw not specified
 rxfanout not specified
 vsi-typeid not specified
 vsi-vers not specified
 vsi-mgrid not specified
 etsbw-lcl not specified
 cos not specified
 pkey not specified
 linkmode not specified
zonecfg:myzone>
</code>
  
Finally, **commit** the configuration, **verify** it and **exit**:
  
<code>
zonecfg:myzone> commit
zonecfg:myzone> verify
zonecfg:myzone> exit
root@solaris:~#
</code>
  
Your zone's configuration is now in its own **XML** file:

<code>
root@solaris:~# cat /etc/zones/myzone.xml 
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE zone PUBLIC "-//Sun Microsystems Inc//DTD Zones//EN" "file:///usr/share/lib/xml/dtd/zonecfg.dtd.1">
<!--
    DO NOT EDIT THIS FILE.  Use zonecfg(1M) instead.
-->
<zone name="myzone" zonepath="/zones/myzone" autoboot="true" brand="solaris" ip-type="exclusive">
  <automatic-network lower-link="auto" linkname="net0" link-protection="mac-nospoof" mac-address="random"/>
</zone>
</code>

Display the zones on your system by using the **list** subcommand of the **zoneadm** command:
  
<code>
root@solaris:~# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP     
   0 global           running    /                              solaris  shared
   - myzone           configured /zones/myzone                  solaris  excl
</code>
  
The switches used with the **list** subcommand are:

^ Switch ^ Description ^
| -c | Display all configured zones, whether running or not |
| -v | Display verbose output |
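For comparison, without any switches the **list** subcommand only shows running zones, while **-c** adds zones that are merely configured or installed ( indicative output at this stage of the lab ):

<code>
root@solaris:~# zoneadm list
global
root@solaris:~# zoneadm list -c
global
myzone
</code>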
  
====Installing a Zone====
  
Now you are ready to install the zone with the following command:
  
<code>
root@solaris:~# zoneadm -z myzone install
</code>
  
<WRAP center round important 60%>
Go grab a cup of coffee or juice! The installation process can take up to 20 minutes.
</WRAP>
  
When the installation is complete, you will see something similar to the following:
  
<code>
root@solaris:~# zoneadm -z myzone install
The following ZFS file system(s) have been created:
    rpool/zones/myzone
Progress being logged to /var/log/zones/zoneadm.20121214T123059Z.myzone.install
       Image: Preparing at /zones/myzone/root.

 AI Manifest: /tmp/manifest.xml.UWaiVk
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: myzone
Installation: Starting ...

              Creating IPS image
Startup linked: 1/1 done
              Installing packages from:
                  solaris
                      origin:  http://pkg.oracle.com/solaris/release/
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            183/183   33556/33556  222.2/222.2  154k/s

PHASE                                          ITEMS
Installing new actions                   46825/46825
Updating package state database                 Done 
Updating image state                            Done 
Creating fast lookup database                   Done 
Installation: Succeeded

        Note: Man pages can be obtained by installing pkg:/system/manual

 done.

        Done: Installation completed in 3389.965 seconds.


  Next Steps: Boot the zone, then log into the zone console (zlogin -C)

              to complete the configuration process.

Log saved in non-global zone as /zones/myzone/root/var/log/zones/zoneadm.20121214T123059Z.myzone.install
</code>
  
Now use zoneadm's **list** subcommand to display the zones:
  
<code>
root@solaris:~# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP     
   0 global           running    /                              solaris  shared
   - myzone           installed  /zones/myzone                  solaris  excl  
</code>
  
<WRAP center round important 60%>
Note that the myzone **STATUS** is now **installed** as opposed to **configured**.
</WRAP>
  
====A Zone's First Boot====

Verify myzone and then boot it:
  
<code>
root@solaris:~# zoneadm -z myzone verify
root@solaris:~# zoneadm -z myzone boot
</code>
  
Check if the zone status is now **running**:
  
<code>
root@solaris:~# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP     
   0 global           running    /                              solaris  shared
   1 myzone           running    /zones/myzone                  solaris  excl   
</code>
  
Now you can log into the zone using the **zlogin** command:
  
<code>
root@solaris:~# zlogin -C myzone
[Connected to zone 'myzone' console]
</code>
  
Hit <key>Enter</key> and configure the zone:
  
  * host name = myzone.fenestros.loc
  * time zone = Europe/Paris
  * root password = Wind0w$
  * user name and login = myzone
  * user password = fenestr0$
  
Once configured, you will see messages similar to the following:
  
<code>
SC profile successfully generated.
Exiting System Configuration Tool. Log is available at:
/system/volatile/sysconfig/sysconfig.log.7316
</code>
  
Use the tilde-dot ( ~. ) shortcut to leave the zone and return to your global zone:
  
<code>
~.
[Connection to zone 'myzone' console closed]
root@solaris:~#
</code>
  
====Logging into a Zone Directly as Root====
  
Log back into the zone as root using the **-S** switch of the **zlogin** command:
  
<code>
root@solaris:~# zlogin -S myzone
[Connected to zone 'myzone' pts/4]
@myzone:~$ whoami
root
@myzone:~$ ~.
[Connection to zone 'myzone' pts/4 closed]
root@solaris:~#
</code>
  
====Logging into a Zone as a Specific User====
  
To log into the zone as the **myzone** user that you previously created, use the following command:
  
<code>
root@solaris:~# zlogin -l myzone myzone
[Connected to zone 'myzone' pts/4]
No directory! Logging in with home=/
Oracle Corporation SunOS 5.11 11.1 September 2012
-bash-4.1$ whoami
myzone
-bash-4.1$ ~.
[Connection to zone 'myzone' pts/4 closed]
root@solaris:~#
</code>
  
=====LAB #2 - Administering Zones=====
  
====Sharing Files between the Global and Local Zones====
  
To share files between the two zones, you need to configure a LOFS mount.
  
In the global zone, create a directory for sharing files:
  
<code>
root@solaris:~# mkdir -p /root/zones/myzone
</code>
  
Now use the **zonecfg** command to configure the share:
  
<code>
root@solaris:~# zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/root/share
zonecfg:myzone:fs> set special=/root/zones/myzone
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> add options [rw,nodevices]
zonecfg:myzone:fs> end
zonecfg:myzone> exit
root@solaris:~#
</code>
  
<WRAP center round important 60%>
Note that **dir** indicates the mount point in the local zone whilst **special** indicates the share in the global zone.
</WRAP>
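Once the zone has been rebooted with this configuration, as is done below, you can check the mount without opening an interactive session; for example, the **Filesystem** column reported by **df** inside the zone should show the global zone directory /root/zones/myzone:

<code>
root@solaris:~# zlogin myzone df -h /root/share
</code>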

Now create a file in **/root/zones/myzone**:
  
<code>
root@solaris:~# touch /root/zones/myzone/testshare
</code>
  
Reboot myzone, check it is up and running, log into myzone as root and check that you can see the share. Finally, create a file in **/root/share**:
  
<code>
root@solaris:~# zoneadm -z myzone reboot
root@solaris:~# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP     
   0 global           running    /                              solaris  shared
   2 myzone           running    /zones/myzone                  solaris  excl   
root@solaris:~# zlogin -S myzone
[Connected to zone 'myzone' pts/4]
@myzone:~$ cd /root
@myzone:~root$ ls
share
@myzone:~root$ ls share
testshare
@myzone:~root$ touch share/shareback
@myzone:~root$ ls share
shareback  testshare
</code>
  
Leave myzone and check if you can see the **shareback** file from the global zone:
  
<code>
@myzone:~root$ ~.
[Connection to zone 'myzone' pts/4 closed]
root@solaris:~# ls /root/zones/myzone
shareback  testshare
</code>
  
You can also share the global zone's DVD-ROM drive. However, do **not** use the process explained above, since it creates a permanent LOFS mount which would stop you from ejecting the DVD from the global zone's DVD-ROM drive whilst the local zone is running:
  
<code>
root@solaris:~# mkdir /zones/myzone/root/globaldvdrom
root@solaris:~# ls /cdrom/cdrom0
32Bit                           autorun.sh                      runasroot.sh                    VBoxWindowsAdditions-amd64.exe
64Bit                           cert                            VBoxLinuxAdditions.run          VBoxWindowsAdditions-x86.exe
AUTORUN.INF                     OS2                             VBoxSolarisAdditions.pkg        VBoxWindowsAdditions.exe
root@solaris:~# mount -F lofs /cdrom/cdrom0 /zones/myzone/root/globaldvdrom
</code>
  
Now check that you can see the contents of the DVD from within the local zone:
  
<code>
root@solaris:~# zlogin myzone ls /globaldvdrom
32Bit
64Bit
AUTORUN.INF
autorun.sh
cert
OS2
runasroot.sh
VBoxLinuxAdditions.run
VBoxSolarisAdditions.pkg
VBoxWindowsAdditions-amd64.exe
VBoxWindowsAdditions-x86.exe
VBoxWindowsAdditions.exe
</code>
  
Finally, unmount the DVD-ROM and eject it:
  
<code>
root@solaris:~# umount /zones/myzone/root/globaldvdrom
root@solaris:~# eject cdrom
cdrom /dev/dsk/c7t1d0s2 ejected
</code>
  
====Removing the Share====

In order to remove the LOFS share, proceed as follows:
  
<code>
root@solaris:~# zonecfg -z myzone
zonecfg:myzone> info fs
fs:
 dir: /root/share
 special: /root/zones/myzone
 raw not specified
 type: lofs
 options: [rw,nodevices]
zonecfg:myzone> remove fs dir=/root/share
zonecfg:myzone> exit
root@solaris:~# zoneadm -z myzone reboot
</code>
  
====Allocating CPU Resources====
  
First, let's see what the non-global zone currently sees as available processors:
  
<code>
root@solaris:~# zlogin myzone psrinfo -v
Status of virtual processor 0 as of: 12/15/2012 06:54:05
  on-line since 12/15/2012 07:25:33.
  The i386 processor operates at 2271 MHz,
 and has an i387 compatible floating point processor.
Status of virtual processor 1 as of: 12/15/2012 06:54:05
  on-line since 12/15/2012 07:25:34.
  The i386 processor operates at 2271 MHz,
 and has an i387 compatible floating point processor.
</code>
  
As you can see, the zone has //grabbed// both of the processors available in the global zone. In order to limit the availability to just 1 processor, you need to change the zone's configuration:
  
<code>
root@solaris:~# zonecfg -z myzone
zonecfg:myzone> add dedicated-cpu
zonecfg:myzone:dedicated-cpu> set ncpus=1
zonecfg:myzone:dedicated-cpu> end
zonecfg:myzone> exit
root@solaris:~# zoneadm -z myzone reboot
root@solaris:~# zlogin myzone psrinfo -v
Status of virtual processor 0 as of: 12/15/2012 07:12:29
  on-line since 12/15/2012 07:25:33.
  The i386 processor operates at 2271 MHz,
 and has an i387 compatible floating point processor.
</code>
  
<WRAP center round important 60%>
The dedicated CPU is now invisible to all other non-global zones. You can also define a range of CPUs, such as **1-3**, in which case, when the non-global zone boots, the system will allocate 1 CPU as a minimum and 2 or 3 CPUs if they are available.
</WRAP>
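As a sketch of the range syntax ( not to be run as-is at this point, since myzone already has a dedicated-cpu resource, which is removed just below ):

<code>
root@solaris:~# zonecfg -z myzone
zonecfg:myzone> add dedicated-cpu
zonecfg:myzone:dedicated-cpu> set ncpus=1-3
zonecfg:myzone:dedicated-cpu> end
zonecfg:myzone> exit
</code>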
  
Before proceeding further, remove the dedicated CPU:
  
<code>
root@solaris:~# zonecfg -z myzone "remove dedicated-cpu"
root@solaris:~# zoneadm -z myzone reboot
</code>
  
====Fair Share Scheduler====
  
Another way of sharing resources is to use the Fair Share Scheduler. Firstly, you need to set that scheduler as the default for the system:
  
<code>
root@solaris:~# dispadmin -d FSS
root@solaris:~# dispadmin -l
CONFIGURED CLASSES
==================

SYS (System Class)
TS (Time Sharing)
SDC (System Duty-Cycle Class)
FX (Fixed Priority)
IA (Interactive)
RT (Real Time)
FSS (Fair Share)
</code>
  
Next, set the FSS scheduler as the default for your zone:
  
<code>
root@solaris:~# zonecfg -z myzone "set scheduling-class=FSS"
</code>
  
Now you can give your global zone 75% of the processor resources, leaving 25% for your zone:
  
<code>
root@solaris:~# zonecfg -z global
zonecfg:global> set cpu-shares=75
zonecfg:global> exit
root@solaris:~# zonecfg -z myzone
zonecfg:myzone> set cpu-shares=25
zonecfg:myzone> exit
</code>
  
Finally, use the **prstat** command to display the CPU resource balancing:
  
<code>
   PID USERNAME  SIZE   RSS STATE   PRI NICE      TIME  CPU PROCESS/NLWP       
  3725 trainee   518M  180M sleep    49    0   0:31:48 0.4% firefox/21
  3738 trainee   129M   18M sleep    59    0   0:00:37 0.4% gnome-terminal/
 11208 daemon     14M 3380K sleep    59    0   0:00:04 0.3% rcapd/1
  1159 trainee   169M  152M sleep    59    0   0:04:51 0.1% Xorg/3
  3661 trainee   227M  149M sleep    59    0   0:03:32 0.1% java/23
     5 root        0K    0K sleep    99  -20   0:01:11 0.1% zpool-rpool/136
  3683 trainee    13M  724K sleep    59    0   0:01:25 0.0% VBoxClient/
 12971 root       11M 3744K cpu1     59    0   0:00:00 0.0% prstat/1
 12134 netadm   5340K 2808K sleep    59    0   0:00:00 0.0% nwamd/7
 12308 root     5880K 2296K sleep    59    0   0:00:00 0.0% nscd/25
  3645 trainee   150M   35M sleep    59    0   0:00:06 0.0% gnome-panel/2
  3644 trainee   128M   15M sleep    59    0   0:00:12 0.0% metacity/
 12400 root     3964K 1008K sleep    59    0   0:00:00 0.0% syslogd/11
  3658 trainee    61M   23M sleep    12   19   0:00:03 0.0% updatemanagerno/
   814 root       16M 6288K sleep    59    0   0:00:05 0.0% nscd/37
   932 root       11M  600K sleep    59    0   0:00:05 0.0% VBoxService/
   957 root       11M 1144K sleep    59    0   0:00:00 0.0% syslogd/11
   818 root        0K    0K sleep    99  -20   0:00:00 0.0% zpool-mypool/136
   637 root     9408K 1036K sleep    59    0   0:00:00 0.0% dhcpagent/
   881 daemon   3356K    4K sleep    59    0   0:00:00 0.0% rpcbind/1
    95 netadm   4296K  680K sleep    59    0   0:00:00 0.0% ipmgmtd/6
   104 root     9692K  388K sleep    59    0   0:00:00 0.0% in.mpathd/1
    85 daemon     16M 2940K sleep    59    0   0:00:00 0.0% kcfd/
    50 root       16M 1560K sleep    59    0   0:00:00 0.0% dlmgmtd/7
ZONEID    NPROC  SWAP   RSS MEMORY      TIME  CPU ZONE                         
          124 4095M  973M    48%   0:53:57 1.4% global                       
           28  137M   46M   2.2%   0:00:08 0.0% myzone                      

Total: 152 processes, 956 lwps, load averages: 0.08, 0.12, 0.12
</code>
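The per-zone summary lines at the bottom of the output above are produced when prstat aggregates by zone; if you do not see them, run prstat with the **-Z** switch ( a sketch, with a 5 second sampling interval ):

<code>
root@solaris:~# prstat -Z 5
</code>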
====Allocating Memory====
  
Three types of memory capping are possible within a zone:
  
^ Cap ^ Description ^
| Physical | Total amount of physical memory available to the zone. Once past the cap, memory pages are paged out |
| Locked | Amount of memory that the zone can lock down, i.e. memory that cannot be paged out |
| Swap | Amount of swap space that can be used by a zone |
  
To cap the physical memory of a zone, you need to add and correctly configure the **capped-memory** property:
  
 <code> <code>
-root@solaris:~# zfs get encryption mypool/home/user1 +root@solaris:~# zonecfg -z myzone 
-NAME               PROPERTY    VALUE  SOURCE +zonecfg:myzone> add capped-memory 
-mypool/home/user1  encryption  off    +zonecfg:myzone:capped-memory> set physical=50m 
-root@solaris:~# zfs get encryption mypool/home/user2 +zonecfg:myzone:capped-memory> end 
-NAME               PROPERTY    VALUE  SOURCE +zonecfg:myzone> exit 
-mypool/home/user2  encryption  on     local+root@solaris:~# zonecfg -z myzone info capped-memory 
 +capped-memory: 
 + physical: 50M
 </code> </code>
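The same **capped-memory** resource can also carry the **swap** and **locked** caps listed in the table above. A minimal sketch, reusing the existing **myzone** configuration with arbitrary 100m/20m values; note that the physical cap is enforced by the resource capping daemon, which can be enabled in the global zone with **rcapadm -E**:

<code>
root@solaris:~# zonecfg -z myzone
zonecfg:myzone> select capped-memory
zonecfg:myzone:capped-memory> set swap=100m
zonecfg:myzone:capped-memory> set locked=20m
zonecfg:myzone:capped-memory> end
zonecfg:myzone> exit
root@solaris:~# rcapadm -E
</code>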
  
 +====Zone Statistics====
  
- +Zone statistics can be displayed by using the **zonestat** command:
-====Replacing a Faulty Disk==== +
- +
-In the case of a faulty disk and no hot spares, replacing the disk is a one-line operation using the **replace** subcommand of the **zpool** command:+
  
 <code> <code>
-root@solaris:~# zpool status mypool +root@solaris:~# zonestat 5 3 
-  pool: mypool +Collecting data for first interval... 
- stateONLINE +Interval1, Duration: 0:00:05 
-  scannone requested +SUMMARY                   Cpus/Online: 2/2   PhysMem: 2047M  VirtMem3071M 
-config:+                    ---CPU----  --PhysMem-- --VirtMem-- --PhysNet-- 
 +               ZONE  USED %PART  USED %USED  USED %USED PBYTE %PUSE 
 +            [total]  0.23 11.9% 1495M 73.0% 1967M 64.0%     0 0.00% 
 +           [system]  0.19 9.68%  309M 15.1%  891M 29.0%         - 
 +             global  0.04 4.52% 1134M 55.4% 1024M 33.3%     0 0.00% 
 +             myzone  0.00 0.10% 51.3M 2.50% 51.5M 1.67%     0 0.00%
  
- NAME        STATE     READ WRITE CKSUM +Interval: 2, Duration: 0:00:10 
- mypool      ONLINE           0     0 +SUMMARY                   Cpus/Online: 2/2   PhysMem: 2047M  VirtMem: 3071M 
-   mirror-0  ONLINE       0         0 +                    ---CPU----  --PhysMem-- --VirtMem-- --PhysNet-- 
-     c7t2d0  ONLINE       0         +               ZONE  USED %PART  USED %USED  USED %USED PBYTE %PUSE 
-     c7t3d0  ONLINE           0     0 +            [total]  0.06 3.47% 1495M 73.0% 1967M 64.0%     0 0.00% 
- spares +           [system]  0.02 1.00%  310M 15.1%  891M 29.0        - 
-   c7t5d0    AVAIL   +             global  0.04 4.80% 1134M 55.4% 1024M 33.3%     0 0.00% 
 +             myzone  0.00 0.14% 51.3M 2.50% 51.5M 1.67%     0 0.00%
  
-errorsNo known data errors +Interval3, Duration: 0:00:15 
-root@solaris:~# zpool replace mypool c7t2d0 c7t4d0+SUMMARY                   Cpus/Online2/2   PhysMem: 2047M  VirtMem: 3071M 
 +                    ---CPU----  --PhysMem-- --VirtMem-- --PhysNet-- 
 +               ZONE  USED %PART  USED %USED  USED %USED PBYTE %PUSE 
 +            [total]  0.07 3.83% 1494M 72.9% 1963M 63.9%     0 0.00% 
 +           [system]  0.02 1.10%  308M 15.0%  891M 29.0%         - 
 +             global  0.05 5.34% 1134M 55.4% 1020M 33.2%     0 0.00% 
 +             myzone  0.00 0.12% 51.3M 2.50% 51.5M 1.67%     0 0.00%
 </code> </code>
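zonestat can also be narrowed down to particular zones or resources with its **-z** and **-r** options (see the zonestat(1) man page). A sketch reusing the zone name from above, with arbitrary interval and duration values:

<code>
root@solaris:~# zonestat -z myzone -r physical-memory 5 3
root@solaris:~# zonestat -z global,myzone 10 6
</code>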
  
-Use the **status** subcommand of the **zpool** command again to see what has happened: 
  
-<code> +====Non-global Zone Privileges====
-root@solaris:~# zpool status mypool +
-  pool: mypool +
- state: ONLINE +
-  scan: resilvered 601K in 0h0m with 0 errors on Thu Dec 13 11:45:49 2012 +
-config:+
  
- NAME        STATE     READ WRITE CKSUM +Certain things cannot be done from within a non-global zone. The list of privileges assigned to a zone can be displayed as follows:
- mypool      ONLINE               0 +
-   mirror-0  ONLINE               0 +
-     c7t4d0  ONLINE               0 +
-     c7t3d0  ONLINE               0 +
- spares +
-   c7t5d0    AVAIL   +
  
-errorsNo known data errors +<code> 
-</code>+root@solaris:~# zlogin myzone ppriv -l 
 +contract_event 
 +contract_identity 
 +contract_observer 
 +cpc_cpu 
 +dtrace_kernel 
 +dtrace_proc 
 +dtrace_user 
 +file_chown 
 +file_chown_self 
 +file_dac_execute 
 +file_dac_read 
 +file_dac_search 
 +file_dac_write 
 +file_downgrade_sl 
 +file_flag_set 
 +file_link_any 
 +file_owner 
 +file_read 
 +file_setid 
 +file_upgrade_sl 
 +file_write 
 +graphics_access 
 +graphics_map 
 +ipc_dac_read 
 +ipc_dac_write 
 +ipc_owner 
 +net_access 
 +net_bindmlp 
 +net_icmpaccess 
 +net_mac_aware 
 +net_mac_implicit 
 +net_observability 
 +net_privaddr 
 +net_rawaccess 
 +proc_audit 
 +proc_chroot 
 +proc_clock_highres 
 +proc_exec 
 +proc_fork 
 +proc_info 
 +proc_lock_memory 
 +proc_owner 
 +proc_priocntl 
 +proc_session 
 +proc_setid 
 +proc_taskid 
 +proc_zone 
 +sys_acct 
 +sys_admin 
 +sys_audit 
 +sys_config 
 +sys_devices 
 +sys_ipc_config 
 +sys_linkdir 
 +sys_mount 
 +sys_iptun_config 
 +sys_flow_config 
 +sys_dl_config 
 +sys_ip_config 
 +sys_net_config 
 +sys_nfs 
 +sys_ppp_config 
 +sys_res_bind 
 +sys_res_config 
 +sys_resource 
 +sys_share 
 +sys_smb 
 +sys_suser_compat 
 +sys_time 
 +sys_trans_label 
 +win_colormap 
 +win_config 
 +win_dac_read 
 +win_dac_write 
 +win_devices 
 +win_dga 
 +win_downgrade_sl 
 +win_fontpath 
 +win_mac_read 
 +win_mac_write 
 +win_selection 
 +win_upgrade_sl 
 +</code> 
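If a local zone needs a privilege that is not in its default set, the zone's **limitpriv** property can be extended from the global zone. The following is only a sketch, using the userland DTrace privileges as a common example; the zone has to be rebooted for the new privilege set to take effect:

<code>
root@solaris:~# zonecfg -z myzone "set limitpriv=default,dtrace_proc,dtrace_user"
root@solaris:~# zoneadm -z myzone reboot
root@solaris:~# zonecfg -z myzone info limitpriv
</code>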
  
-<WRAP center round important 60%> 
-ZFS //Resilvering// is the equivalent of UFS re-synchronization. 
-</WRAP>  
  
-====Destroying Pool====+====Changing Zone's Name====
  
-Destroying a pool is achieved by using the **destroy** subcommand of the **zpool** command:+To change the name of a zone, it first has to be shut down:
  
 <code> <code>
-root@solaris:~# zpool destroy mypool+root@solaris:~# zoneadm -z myzone halt
 </code> </code>
  
-As you can see by the following output, this operation has also destroyed all the associated snapshots:+Now you can change the zone name:
  
 <code> <code>
-root@solaris:~# zfs list +root@solaris:~# zonecfg -z myzone "set zonename=myzone1" 
-NAME                              USED  AVAIL  REFER  MOUNTPOINT +root@solaris:~# zoneadm list -cv 
-rpool                            7.03G  12.3G  4.58M  /rpool +  ID NAME             STATUS     PATH                           BRAND    IP     
-rpool/ROOT                       4.87G  12.3G    31K  legacy +   0 global           running    /                              solaris  shared 
-rpool/ROOT/solaris               4.86G  12.3G  3.92G  / +   myzone1          installed  /zones/myzone                  solaris  excl  
-rpool/ROOT/solaris-backup-1      2.47M  12.3G  1.98G  / +
-rpool/ROOT/solaris-backup-1/var    46K  12.3G   758M  /var +
-rpool/ROOT/solaris/var            865M  12.3G   207M  /var +
-rpool/VARSHARE                    102K  12.3G   102K  /var/share +
-rpool/dump                       1.03G  12.3G  1.00G  - +
-rpool/export                     90.5M  12.3G    32K  /export +
-rpool/export/home                90.5M  12.3G    32K  /export/home +
-rpool/export/home/trainee        90.4M  12.3G  90.4M  /export/home/trainee +
-rpool/swap                       1.03G  12.3G  1.00G  - +
-root@solaris:~# zfs list -t snapshot -r mypool +
-cannot open 'mypool': filesystem does not exist +
-root@solaris:~# ls -l /users +
-total 0+
 </code> </code>
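Note in the listing above that the renamed zone still uses the old zone path /zones/myzone; only the name has changed (the path is moved in the next section). As a quick check, the following one-liner should simply report the new name:

<code>
root@solaris:~# zonecfg -z myzone1 info zonename
</code>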
  
-<WRAP center round important 60%> +====Changing Zone's Root Dataset====
-As you have seen above, destroying a pool, **all** the data in it and **all** the associated snapshots is disconcertingly simple. You should therefore be very careful when using the **destroy** subcommand. +
-</WRAP> +
- +
- +
- +
-====Creating RAID-5 Pool====+
  
-You can create a RAID-5 pool using the RAID-Z algorithm:+To change the underlying root dataset of your **myzone1** zone, use the following command:
  
 <code> <code>
-root@solaris:~# zpool create mypool raidz c7t2d0 c7t3d0 c7t4d0 spare c7t5d0 +root@solaris:~# zoneadm -z myzone1 move /zones/myzone1 
-root@solaris:~# zpool status mypool +root@solaris:~# zoneadm list -cv 
-  pool: mypool +  ID NAME             STATUS     PATH                           BRAND    IP     
- state: ONLINE +   global           running    /                              solaris  shared 
-  scan: none requested +   myzone1          installed  /zones/myzone1                 solaris  excl  
-config: +
- +
- NAME        STATE     READ WRITE CKSUM +
- mypool      ONLINE               +
-   raidz1- ONLINE               0 +
-     c7t2d0  ONLINE               0 +
-     c7t3d0  ONLINE               0 +
-     c7t4d0  ONLINE               0 +
- spares +
-   c7t5d0    AVAIL    +
- +
-errors: No known data errors +
-</code> +
- +
-Destroy **mypool** : +
- +
-<code> +
-root@solaris:~# zpool destroy mypool+
 </code> </code>
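The move relocates the zone's files, and on ZFS its underlying dataset, to the new path. A sketch of how the result can be verified, assuming the zone datasets live under rpool/zones as in the installation output shown later in this lesson:

<code>
root@solaris:~# zonecfg -z myzone1 info zonepath
root@solaris:~# zfs list -r rpool/zones
</code>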
  
-====Creating RAID-6 Pool====+====Backing Up Zone====
  
-You can create a RAID-6 pool using the RAID-Z2 algorithm:+Backing up a zone includes backing up the zone configuration **and** the application data in it. You can use any kind of backup software to back up data within the zone and then export it so that it can be re-injected after a zone restore. The zone configuration is backed up as follows:
  
 <code> <code>
-root@solaris:~# zpool create mypool raidz2 c7t2d0 c7t3d0 c7t4d0 c7t5d0 spare c7t6d0 +root@solaris:~# zonecfg -z myzone1 export -f myzone1.config 
-root@solaris:~# zpool status mypool +root@solaris:~# cat myzone1.config  
-  pool: mypool +create -b 
- state: ONLINE +set brand=solaris 
-  scan: none requested +set zonepath=/zones/myzone1 
-config: +set autoboot=true 
- +set scheduling-class=FSS 
- NAME        STATE     READ WRITE CKSUM +set ip-type=exclusive 
- mypool      ONLINE               0 +add anet 
-   raidz2-0  ONLINE               0 +set linkname=net0 
-     c7t2d0  ONLINE               0 +set lower-link=auto 
-     c7t3d0  ONLINE               0 +set configure-allowed-address=true 
-     c7t4d0  ONLINE               0 +set link-protection=mac-nospoof 
-     c7t5d0  ONLINE               0 +set mac-address=random 
- spares +end 
-   c7t6d0    AVAIL    +add capped-memory 
- +set physical=50M 
-errors: No known data errors+end 
 +add rctl 
 +set name=zone.cpu-shares 
 +add value (priv=privileged,limit=25,action=none) 
 +end
 </code> </code>
  
-Destroy **mypool** :+Now back up the zone's XML file:
  
 <code> <code>
-root@solaris:~# zpool destroy mypool+root@solaris:~# cp /zones/myzone1/root/etc/svc/profile/site/scit_profile.xml /root/myzone1.xml
 </code> </code>
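The two files above only capture the zone's configuration and system profile, not the application data inside the zone. One possibility, sketched below, is a recursive ZFS snapshot of the zone's dataset archived with zfs send; the dataset name rpool/zones/myzone1 matches the one used for this zone, while the snapshot and archive names are arbitrary:

<code>
root@solaris:~# zfs snapshot -r rpool/zones/myzone1@backup
root@solaris:~# zfs send -R rpool/zones/myzone1@backup > /root/myzone1.zfs
</code>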
  
-<WRAP center round todo 60%> 
-Create a triple parity RAID **mypool** using your five 200MB disks. Do not delete it. 
-</WRAP> 
  
-====Displaying the Zpool History====+====Restoring a Zone====
  
-You can review everything that has been done to existing pools by using the **history** subcommand of the **zpool** command:+Disaster has struck! Uninstall and delete **myzone1**:
  
 <code> <code>
-root@solaris:~# zpool history +root@solaris:~# zoneadm -z myzone1 uninstall 
-History for 'mypool': +Are you sure you want to uninstall zone myzone1 (y/[n])? y 
-2012-12-13.14:02:17 zpool create mypool raidz3 c7t2d0 c7t3d0 c7t4d0 c7t5d0 spare c7t6d0 +Progress being logged to /var/log/zones/zoneadm.20121218T170820Z.myzone1.uninstall 
- +root@solaris:~#  
-History for 'rpool': +root@solaris:~# zonecfg -z myzone1 delete 
-2012-11-20.19:08:05 zpool create -f -B rpool c7t0d0s1 +Are you sure you want to delete zone myzone1 (y/[n])? y
-2012-11-20.19:08:05 zfs create -p -o mountpoint=/export rpool/export +
-2012-11-20.19:08:05 zfs set mountpoint=/export rpool/export +
-2012-11-20.19:08:05 zfs create -p rpool/export/home +
-2012-11-20.19:08:06 zfs create -p -o canmount=noauto -o mountpoint=/var/share rpool/VARSHARE +
-2012-11-20.19:08:06 zfs set mountpoint=/var/share rpool/VARSHARE +
-2012-11-20.19:08:07 zfs create -p -V 1024.0m rpool/dump +
-2012-11-20.19:08:12 zfs create -p -V 1024.0m rpool/swap +
-2012-11-20.19:08:20 zfs set primarycache=metadata rpool/swap +
-2012-11-20.19:25:51 zfs set primarycache=metadata rpool/swap +
-2012-11-20.19:26:25 zfs create rpool/export/home/trainee +
-2012-11-20.22:45:56 zfs set primarycache=metadata rpool/swap +
-2012-12-01.14:32:36 zfs set primarycache=metadata rpool/swap +
-2012-12-03.13:15:45 zfs set primarycache=metadata rpool/swap +
-2012-12-08.14:33:41 zfs /tmp/be +
-2012-12-11.15:33:50 zfs set primarycache=metadata rpool/swap +
-2012-12-12.09:57:00 zfs set primarycache=metadata rpool/swap+
 </code> </code>
  
-<WRAP center round important 60%> +Now restore **myzone1** as follows:
-Note that the history related to destroyed pools has been deleted. +
-</WRAP>+
  
-=====LAB #Managing iSCSI Storage=====+<code> 
 +root@solaris:~zonecfg -z myzone1 -f myzone1.config 
 +root@solaris:~# zoneadm -z myzone1 install -c /root/myzone1.xml 
 +The following ZFS file system(s) have been created: 
 +    rpool/zones/myzone1 
 +Progress being logged to /var/log/zones/zoneadm.20121218T171621Z.myzone1.install 
 +       Image: Preparing at /zones/myzone1/root.
  
-====Installing the COMSTAR Server==== + AI Manifest: /tmp/manifest.xml.9BaOQP 
- +  SC Profile/root/myzone1.xml 
-Start by installing the COMSTAR storage server software: +    Zonenamemyzone1 
- +InstallationStarting ...
-<code> +
-root@solaris:~# pkg install group/feature/storage-server +
-           Packages to install:  20 +
-       Create boot environment No +
-Create backup boot environmentYes +
-            Services to change  1+
  
 +              Creating IPS image
 +Startup linked: 1/1 done
 +              Installing packages from:
 +                  solaris
 +                      origin:  http://pkg.oracle.com/solaris/release/
 DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
-Completed                              20/20     1023/1023    54.5/54. 636k/s+Completed                            183/183   33556/33556  222.2/222. 674k/s
  
 PHASE                                          ITEMS PHASE                                          ITEMS
-Installing new actions                     1863/1863+Installing new actions                   46825/46825
 Updating package state database                 Done  Updating package state database                 Done 
 Updating image state                            Done  Updating image state                            Done 
-Creating fast lookup database                   Done +Creating fast lookup database                   Done  
-</code>+Installation: Succeeded
  
-The **COMSTAR target mode framework** runs as the **stmf** service. Check to see if it is enabled:+        NoteMan pages can be obtained by installing pkg:/system/manual
  
-<code> + done.
-root@solaris:~# svcs \*stmf\* +
-STATE          STIME    FMRI +
-disabled       15:43:16 svc:/system/stmf:default +
-</code>+
  
-Enable the service:+        DoneInstallation completed in 678.453 seconds.
  
-<code> 
-root@solaris:~# svcadm enable stmf 
-root@solaris:~# svcs \*stmf\* 
-STATE          STIME    FMRI 
-online         16:01:56 svc:/system/stmf:default 
-</code> 
  
-You can check the status of the server using the **stmfadm** command:+  Next Steps: Boot the zone, then log into the zone console (zlogin -C)
  
-<code> +              to complete the configuration process. 
-root@solaris:~# stmfadm list-state + 
-Operational Status: online +Log saved in non-global zone as /zones/myzone1/root/var/log/zones/zoneadm.20121218T171621Z.myzone1.install
-Config Status     : initialized +
-ALUA Status       : disabled +
-ALUA Node         : 0+
 </code> </code>
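If the restored zone is not already running, boot it first; zoneadm install leaves the zone in the installed state, and autoboot=true only applies at the next system boot:

<code>
root@solaris:~# zoneadm -z myzone1 boot
</code>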
  
-====Creating SCSI Logical Units==== +Log in as root and check that the zone is running correctly:
- +
-First you need to create your **Backing Storage Device** within your **mypool** pool:+
  
 <code> <code>
-root@solaris:~# zfs create -V 100M mypool/iscsi +root@solaris:~# zlogin -S myzone1 
-root@solaris:~# zfs list +[Connected to zone 'myzone1' pts/3] 
-NAME                              USED  AVAIL  REFER  MOUNTPOINT +@myzone.solaris.loc:~$ ls 
-mypool                            103M  51.6M    31K  /mypool +bin     dev     etc     export  home    lib     mnt     net     nfs4    opt     proc    root    rpool   sbin    system  tmp     usr     var
-mypool/iscsi                      103M   155M    16K  - +
-rpool                            7.40G  11.9G  4.58M  /rpool +
-rpool/ROOT                       5.22G  11.9G    31K  legacy +
-rpool/ROOT/solaris               5.22G  11.9G  4.08G  / +
-rpool/ROOT/solaris-backup-1      2.47M  11.9G  1.98G  / +
-rpool/ROOT/solaris-backup-1/var    46K  11.9G   758M  /var +
-rpool/ROOT/solaris-backup-2       127K  11.9G  3.92G  / +
-rpool/ROOT/solaris-backup-2/var    58K  11.9G   266M  /var +
-rpool/ROOT/solaris/var            980M  11.9G   209M  /var +
-rpool/VARSHARE                    102K  11.9G   102K  /var/share +
-rpool/dump                       1.03G  12.0G  1.00G  - +
-rpool/export                      108M  11.9G    32K  /export +
-rpool/export/home                 108M  11.9G    32K  /export/home +
-rpool/export/home/trainee         108M  11.9G   108M  /export/home/trainee +
-rpool/swap                       1.03G  12.0G  1.00G  -+
 </code> </code>
  
-You can see your raw device in the **/dev/zvol/rdsk/mypool/** directory:+====Cloning a Local Zone==== 
 + 
+In this section you are going to create a template zone that you can clone whenever you need to install a new zone. Start by creating a zone called **cleanzone**:
  
 <code> <code>
-root@solaris:~# ls -l /dev/zvol/rdsk/mypool +root@solaris:~# zonecfg -z cleanzone 
-total 0 +Use 'create' to begin configuring a new zone. 
-lrwxrwxrwx   1 root     root           0 Dec 14 09:42 iscsi -../../../..//devices/pseudo/zfs@0:6,raw+zonecfg:cleanzonecreate 
 +create: Using system default template 'SYSdefault' 
 +zonecfg:cleanzone> set zonepath=/zones/cleanzone 
 +zonecfg:cleanzone> set autoboot=true 
 +zonecfg:cleanzone> verify 
 +zonecfg:cleanzone> commit 
 +zonecfg:cleanzone> exit
 </code> </code>
  
-You can now create a logical unit using the **create-lu** subcommand of the **sbdadm** command:+Install the zone:
  
 <code> <code>
-root@solaris:~# sbdadm create-lu /dev/zvol/rdsk/mypool/iscsi +root@solaris:~# zoneadm -z cleanzone install 
-Created the following LU:+The following ZFS file system(s) have been created: 
 +    rpool/zones/cleanzone 
 +Progress being logged to /var/log/zones/zoneadm.20121218T143129Z.cleanzone.install 
 +       ImagePreparing at /zones/cleanzone/root.
  
-       GUID                    DATA SIZE           SOURCE + AI Manifest: /tmp/manifest.xml.vAaqcB 
---------------------------------  -------------------  ---------------- +  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml 
-600144f0e2a54e00000050cae6d80001  104857600            /dev/zvol/rdsk/mypool/iscsi +    Zonename: cleanzone 
-</code>+Installation: Starting ...
  
-====Mapping the Logical Unit====+              Creating IPS image 
 +Startup linked: 1/1 done 
 +              Installing packages from: 
 +                  solaris 
 +                      origin:  http://pkg.oracle.com/solaris/release/ 
 +DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED 
 +Completed                            183/183   33556/33556  222.2/222.2  552k/s
  
-In order for the logical unit to be available to initiators, it has to be **mapped**. In order to map the logical device you need its GUID. You can use either one of the two following commands to get that information:+PHASE                                          ITEMS 
 +Installing new actions                   46825/46825 
 +Updating package state database                 Done  
 +Updating image state                            Done  
 +Creating fast lookup database                   Done  
 +InstallationSucceeded
  
-<code> +        NoteMan pages can be obtained by installing pkg:/system/manual
-root@solaris:~# sbdadm list-lu+
  
-Found 1 LU(s)+ done.
  
-       GUID                    DATA SIZE           SOURCE +        Done: Installation completed in 797.979 seconds.
---------------------------------  -------------------  ---------------- +
-600144f0e2a54e00000050cae6d80001  104857600            /dev/zvol/rdsk/mypool/iscsi +
-</code>+
  
-<code> 
-root@solaris:~# stmfadm list-lu -v 
-LU Name: 600144F0E2A54E00000050CAE6D80001 
-    Operational Status     : Online 
-    Provider Name          : sbd 
-    Alias                  : /dev/zvol/rdsk/mypool/iscsi 
-    View Entry Count       : 0 
-    Data File              : /dev/zvol/rdsk/mypool/iscsi 
-    Meta File              : not set 
-    Size                   : 104857600 
-    Block Size             : 512 
-    Management URL         : not set 
-    Vendor ID              : SUN      
-    Product ID             : COMSTAR          
-    Serial Num             : not set 
-    Write Protect          : Disabled 
-    Write Cache Mode Select: Enabled 
-    Writeback Cache        : Enabled 
-    Access State           : Active 
-</code> 
  
-Create simple mapping for this logical unit by using the **add-view** subcommand of the **stmfadm** command:+  Next Steps: Boot the zone, then log into the zone console (zlogin -C)
  
-<code> +              to complete the configuration process.
-root@solaris:~# stmfadm add-view 600144F0E2A54E00000050CAE6D80001 +
-</code>+
  
-====Creating a Target==== +Log saved in non-global zone as /zones/cleanzone/root/var/log/zones/zoneadm.20121218T143129Z.cleanzone.install
- +
-In order to create a target the **svc:/network/iscsi/target:default** service must be online. Check if it is: +
- +
-<code> +
-root@solaris:~# svcs \*scsi\* +
-STATE          STIME    FMRI +
-disabled       15:42:56 svc:/network/iscsi/target:default +
-online         Dec_12   svc:/network/iscsi/initiator:default+
 </code> </code>
  
-Start the service:+Boot the zone to import the zone's manifest:
  
 <code> <code>
-root@solaris:~# svcadm enable -r svc:/network/iscsi/target:default +root@solaris:~# zoneadm -z cleanzone boot
-root@solaris:~# svcs \*scsi\* +
-STATE          STIME    FMRI +
-online         Dec_12   svc:/network/iscsi/initiator:default +
-online         10:06:54 svc:/network/iscsi/target:default+
 </code> </code>
  
-Now create a target using the **create-target** subcommand of the **itadm** command:+Log in to the zone, hit <key>Enter</key>, then immediately leave the zone by using the **~.** shortcut:
  
 <code> <code>
-root@solaris:~# itadm create-target +root@solaris:~# zlogin -C cleanzone
-Target iqn.1986-03.com.sun:02:897fd011-8b3d-cf2b-fc1d-a010bd97d035 successfully created+
 </code> </code>
  
-To list the target(s), use the **list-target** subcommand of the **itadm** command:+To clone a zone, it first needs to be shut down:
  
 <code> <code>
-root@solaris:~# itadm list-target +root@solaris:~# zoneadm -z cleanzone halt 
-TARGET NAME                                                  STATE    SESSIONS  +root@solaris:~# zoneadm list -cv 
-iqn.1986-03.com.sun:02:897fd011-8b3d-cf2b-fc1d-a010bd97d035  online   0        +  ID NAME             STATUS     PATH                           BRAND    IP     
 +   global           running    /                              solaris  shared 
 +   3 myzone1          running    /zones/myzone1                 solaris  excl   
 +   - cleanzone        installed  /zones/cleanzone               solaris  excl  
 </code> </code>
  
-====Configuring the Target for Discovery==== +Now create a clone of **cleanzone**:
- +
-Finally, you need to configure the target so it can be discovered by initiators:+
  
 <code> <code>
-root@solaris:~# devfsadm -i iscsi+root@solaris:~# zonecfg -z myzone2 "create -t cleanzone" 
 +root@solaris:~# zonecfg -z myzone2 "set zonepath=/zones/myzone2" 
 +root@solaris:~# zoneadm -z myzone2 clone cleanzone 
 +The following ZFS file system(s) have been created: 
 +    rpool/zones/myzone2 
 +Progress being logged to /var/log/zones/zoneadm.20121218T174936Z.myzone2.clone 
 +Log saved in non-global zone as /zones/myzone2/root/var/log/zones/zoneadm.20121218T174936Z.myzone2.clone
 </code> </code>
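The clone is created in the installed state, so the remaining steps mirror a fresh installation: boot the new zone and complete its system configuration from the console (alternatively, a saved SC profile can be supplied at clone time; see the zoneadm man page):

<code>
root@solaris:~# zoneadm -z myzone2 boot
root@solaris:~# zlogin -C myzone2
</code>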
- 
-=====References===== 
- 
-  * **[[http://www.oracle.com/technetwork/documentation/solaris-11-192991.html|The Oracle Technology Network]]** 
  
 ----- -----
 <html> <html>
 <div align="center"> <div align="center">
-Copyright © 2011-2015 Hugh Norris.<br><br> +Copyright © 2019 Hugh Norris.
-<a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/3.0/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-nd/3.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-nd/3.0/">Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License</a>+
-</div>+
 </html> </html>