Differences
This shows the differences between two revisions of the page.
Next revision | Previous revision | ||
elearning:workbooks:solaris:11:junior:l125 [2016/10/22 05:25] – external edit 127.0.0.1 | elearning:workbooks:solaris:11:junior:l125 [2020/01/30 03:28] (current version) – external edit 127.0.0.1 | ||
---|---|---|---|
Line 1: | Line 1: | ||
- | ====== | + | ====== |
- | =====Preparing your Solaris | + | =====Solaris |
- | Before continuing further, shutdown your Solaris | + | The term Solaris **Containers** is often confused with that of Solaris **Zones**. |
- | ^ Disk ^ Size ^ Name ^ | + | <WRAP center round important 60%> |
- | | c7t2d0 | 200 Mb | Disk1.vmdk | | + | Solaris container = Solaris Zone + Solaris Resource Manager ( SRM ) |
- | | c7t3d0 | 200 Mb | Disk2.vmdk | | + | </ |
- | | c7t4d0 | 200 Mb | Disk3.vmdk | | + | |
- | | c7t5d0 | 200 Mb | Disk4.vmdk | | + | |
- | | c7t6d0 | 200 Mb | Disk5.vmdk | | + | |
- | | c7t7d0 | 20 Gb | Mirror.vmdk | | + | |
- | Using the **System** section of the **Oracle VM VirtualBox Manager**, add a second processor to your Solaris 11 VM. | + | The **SRM** is responsible for workload and resource management. |
- | Finally, boot your Solaris | + | Solaris |
- | =====Introduction===== | + | There are two types of zones: |
- | All previous versions of Solaris, including Solaris 10 use the **Unix File System** ( [[wp> | + | * a single **Global** zone, |
+ | | ||
- | ====Solaris 11 and ZFS==== | + | Each local zone requires about 400 Mb of disk space and 15 Mb of RAM. |
- | The Solaris 11 implementation of ZFS includes the following capabilities: | + | ====The Global Zone==== |
- | * 128-bit addressing, | + | The global zone: |
- | * data integrity assurance, | + | |
- | * automated data corruption detection and repair, | + | |
- | * encryption, | + | |
- | * compression, | + | |
- | * de-duplication, | + | |
- | * quotas, | + | |
- | * file system migration between pools, | + | |
- | * snapshots. | + | |
- | ====ZFS Vocabulary==== | + | * has a zone **ID** of **0**, |
+ | * provides a unique instance of the Solaris kernel, | ||
+ | * contains all packages installed by IPS, | ||
+ | * can contain other software not installed by IPS, | ||
+ | * contains a database of all applications installed in the global zone, | ||
+ | * contains all configuration data concerning the global zone such as its host name, | ||
+ | * knows about all //devices// and all //file systems//, | ||
+ | * is aware of all local zones as well as their configuration, | ||
+ | * is the zone in which local zones can be created, installed, configured, managed, un-installed and deleted. | ||
- | The introduction of ZFS was obviously accompanied by a new vocabulary: | + | ====Non-global or Local Zones==== |
- | ^ Term ^ Description ^ | + | A local zone: |
- | | pool | A storage element regrouping one or more disk partitions containing one or more file systems | | + | |
- | | file system | A dataset containing directories and files | | + | |
- | | clone | A copy of a file system | | + | |
- | | snapshot | A read-only copy of the state of a file system | | + | |
- | | compression | The reduction of storage space achieved by the removal of duplicate data blocks | | + | |
- | | de-duplication | The reduction of storage space achieved by the removal of redundant data patterns | | + | |
- | | checksum | A 256-bit number used to validate data when read or written | | + | |
- | | encryption | The protection of data using a password | | + | |
- | | quota | The maximum amount of disk space used by a group or user | | + | |
- | | reservation | A preallocated amount of disk space assigned to a user or file system | | + | |
- | | mirror | An exact duplicate of a disk or partition | | + | |
- | | RAID-Z | ZFS implementation of [[wp> | + | |
- | | RAID-Z2 | ZFS implementation of [[wp> | + | |
- | | RAID-Z3 | ZFS implementation of Triple Parity RAID | | + | |
- | ====ZFS Commands=== | + | * is given a zone ID when it is booted, |
+ | * shares the kernel with the global zone, | ||
+ | * contains some of the installed packages, | ||
+ | * shares packages with the global zone, | ||
+ | * can contain other software and files not present in the global zone, | ||
+ | * contains a database of all locally installed applications as well as all applications shared by the global zone, | ||
+ | * has no knowledge of the other local zones, | ||
+ | * cannot be used to manage or to un-install local zones, including itself, | ||
+ | * contains all configuration data concerning the local zone such as its host name and IP address, | ||
- | The ZFS commands are as follows: | + | For those familiar with Solaris 10 zones, there were two types of local zones: |
- | ^ Command ^ Description ^ | + | * **//Small zones//** or //Sparse Root zones// where the zone shared the following global zone directories: |
- | | zpool | Used to manage ZFS pools | | + | * /usr |
- | | zfs | Used to manage ZFS file systems | | + | * /lib |
+ | * /platform | ||
+ | * /sbin | ||
+ | * **//Big zones// | ||
- | ===The zpool Command=== | + | In Solaris 11 only Whole Root Zones remain. |
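Before creating any zones, it is worth confirming that you are actually working in the global zone. A minimal check, shown here as a sketch, uses the standard **zonename** command, which simply prints //global// when run in the global zone:

<code>
root@solaris:~# zonename
global
</code>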
- | The **zpool** command uses a set of subcommands: | ||
- | ^ Command ^ Description ^ | + | =====Lab #1 - Installing |
- | | create | Creates | + | |
- | | destroy | Destroys a storage pool | | + | |
- | | list | Displays the health and storage usage of a pool | | + | |
- | | get | Displays a list of pool properties | | + | |
- | | set | Sets a property for a pool | | + | |
- | | status | Displays the health of a pool | | + | |
- | | history | Displays the commands issued for a pool since its creation | | + | |
- | | add | Adds a disk to an existing pool | | + | |
- | | remove | Removes a disk from an existing pool | | + | |
- | | replace | Replaces a disk in a pool by another disk | | + | |
- | | scrub | Verifies the checksums of a pool and repairs any damaged data blocks | | + | |
- | ===The zfs Command=== | + | In this lab you will be installing |
- | + | ||
- | The **zfs** command use a set of subcommands: | + | |
- | + | ||
- | ^ Command ^ Description ^ | + | |
- | | create | Creates a ZFS file system, sets its properties and automatically mounts it | | + | |
- | | destroy | Destroys a ZFS file system or snapshot | | + | |
- | | list | Displays the properties and storage usage of a ZFS file system | + | |
- | | get | Displays a list of ZFS file system properties | | + | |
- | | set | Sets a property for a ZFS file system | | + | |
- | | snapshot | Creates a read-only copy of the state of a ZFS file system | | + | |
- | | rollback | Returns the file system to the state of the **last** snapshot | | + | |
- | | send | Creates a file from a snapshot in order to migrate it to another pool | | + | |
- | | receive | Retrieves a file created by the subcommand **send** | | + | |
- | | clone | Creates a copy of a snapshot | | + | |
- | | promote | Transforms a clone into a ZFS file system | + | |
- | | diff | Displays | + | |
- | | mount | Mounts a ZFS file system at a specific mount point | | + | |
- | | unmount | Unmounts a ZFS file system | | + | |
- | + | ||
- | ====Solaris Slices==== | + | |
- | + | ||
- | Those familiar with UFS on Solaris | + | |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | Searching for disks...done | + | NAME USED AVAIL REFER MOUNTPOINT |
- | + | mypool | |
- | + | mypool/ | |
- | AVAILABLE DISK SELECTIONS: | + | rpool 7.40G 11.9G 4.58M /rpool |
- | 0. c7t0d0 < | + | rpool/ROOT 5.22G 11.9G 31K legacy |
- | /pci@0, | + | rpool/ROOT/solaris |
- | 1. c7t2d0 < | + | rpool/ROOT/solaris-backup-1 2.47M 11.9G 1.98G / |
- | /pci@0,0/pci8086, | + | rpool/ROOT/solaris-backup-1/var 46K 11.9G |
- | 2. c7t3d0 < | + | rpool/ROOT/solaris-backup-2 |
- | /pci@0,0/pci8086, | + | rpool/ROOT/solaris-backup-2/var 58K 11.9G |
- | 3. c7t4d0 <ATA-VBOX HARDDISK-1.0-200.00MB> | + | rpool/ROOT/solaris/var 980M 11.9G |
- | /pci@0,0/pci8086, | + | rpool/VARSHARE |
- | 4. c7t5d0 <ATA-VBOX HARDDISK-1.0-200.00MB> | + | rpool/dump 1.03G 12.0G 1.00G - |
- | /pci@0,0/pci8086, | + | rpool/ |
- | 5. c7t6d0 <ATA-VBOX HARDDISK-1.0-200.00MB> | + | rpool/ |
- | /pci@0,0/pci8086, | + | rpool/ |
- | 6. c7t7d0 <ATA-VBOX HARDDISK-1.0 cyl 2608 alt 2 hd 255 sec 63> | + | rpool/ |
- | /pci@0,0/pci8086, | + | |
- | Specify disk (enter its number): 0 | + | |
- | selecting c7t0d0 | + | |
- | [disk formatted] | + | |
- | /dev/dsk/c7t0d0s1 is part of active ZFS pool rpool. | + | |
- | + | ||
- | + | ||
- | FORMAT MENU: | + | |
- | disk - select a disk | + | |
- | type - select (define) a disk type | + | |
- | partition | + | |
- | current | + | |
- | format | + | |
- | fdisk - run the fdisk program | + | |
- | repair | + | |
- | label - write label to the disk | + | |
- | analyze | + | |
- | defect | + | |
- | backup | + | |
- | verify | + | |
- | inquiry | + | |
- | volname | + | |
- | !< | + | |
- | quit | + | |
- | format> part | + | |
- | + | ||
- | + | ||
- | PARTITION MENU: | + | |
- | 0 - change `0' partition | + | |
- | | + | |
- | 2 - change `2' partition | + | |
- | 3 - change `3' partition | + | |
- | 4 - change `4' partition | + | |
- | 5 - change `5' partition | + | |
- | 6 - change `6' partition | + | |
- | select - select a predefined table | + | |
- | modify - modify a predefined partition table | + | |
- | name - name the current table | + | |
- | print | + | |
- | | + | |
- | !< | + | |
- | quit | + | |
- | partition> | + | |
- | Current partition table (original): | + | |
- | Total disk sectors available: 41926589 + 16384 (reserved sectors) | + | |
- | + | ||
- | Part Tag Flag First Sector | + | |
- | 0 BIOS_boot | + | |
- | | + | |
- | 2 unassigned | + | |
- | | + | |
- | 4 unassigned | + | |
- | | + | |
- | 6 unassigned | + | |
- | 8 | + | |
- | + | ||
- | partition> | + | |
</ | </ | ||
<WRAP center round important 60%> | <WRAP center round important 60%> | ||
- | Note the following line in the above output: | + | You **cannot** create zone datasets under the **rpool/ROOT** dataset. |
- | + | ||
- | **/ | + | |
- | + | ||
- | Since you are using ZFS for storage management, you no longer need to bother about slices ! | + | |
</ | </ | ||
- | ====iSCSI Storage==== | + | ====Configuring the Zone's Dataset==== |
- | In Solaris 10 the configuration of iSCSI LUNs was accomplished using the **iscsitadm** command and the ZFS **shareiscsi** property. In Solaris 11 these have been replaced by by the use of **Common Multiprotocol SCSI Target** ( COMSTAR ). COMSTAR | + | It seems that the best option |
- | COMSTAR includes the following features: | + | < |
- | + | root@solaris:~# zfs create -o mountpoint=/ | |
- | | + | root@solaris: |
- | | + | NAME USED AVAIL REFER MOUNTPOINT |
- | | + | mypool |
- | | + | mypool/ |
- | + | rpool 7.40G 11.9G 4.58M /rpool | |
- | An iSCSI target is an **endpoint** waiting for connections from clients called **initiators**. A target can provide multiple **Logical Units** which provides classic read and write data operations. | + | rpool/ |
- | + | rpool/ | |
- | Each logical unit is backed by a **storage device**. You can create a logical unit backed by any one of the following: | + | rpool/ |
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | rpool/ | ||
+ | </ | ||
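As a quick sanity check (a sketch using standard ZFS properties), you can confirm that the new dataset is mounted where you expect before placing zones under it:

<code>
root@solaris:~# zfs get mountpoint,mounted rpool/zones
NAME         PROPERTY    VALUE    SOURCE
rpool/zones  mountpoint  /zones   local
rpool/zones  mounted     yes      -
</code>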
- | * a file, | + | Now create your zone using the **zonecfg** command: |
- | * a thin-provisioned file, | + | |
- | * a disk partition, | + | |
- | * a ZFS volume. | + | |
- | + | ||
- | =====LAB #1 - Managing ZFS Storage===== | + | |
- | + | ||
- | ====Displaying Online Help==== | + | |
- | + | ||
- | Both the **zpool**and **zfs** commands have built-in online help: | + | |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | The following commands are supported: | + | Use 'create' to begin configuring a new zone. |
- | add attach | + | zonecfg:myzone> create |
- | help | + | create: Using system default template ' |
- | replace | + | zonecfg:myzone> set zonepath=/ |
- | For more info, run: zpool help <command> | + | zonecfg: |
- | root@solaris:~# zfs help | + | zonecfg:myzone> |
- | The following commands are supported: | + | |
- | allow | + | |
- | groupspace | + | |
- | list mount | + | |
- | rollback | + | |
- | unmount | + | |
- | For more info, run: zfs help <command> | + | |
</ | </ | ||
<WRAP center round important 60%> | <WRAP center round important 60%> | ||
- | Note that you can get help on subcommands by either using **zpool help < | + | The **-z** switch stands for the **zonename**. |
</ | </ | ||
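zonecfg can also be driven non-interactively from a command file with the **-f** switch, which is handy when several similar zones are needed. A minimal sketch, using a hypothetical second zone called **myzone2** and a hypothetical file path:

<code>
root@solaris:~# cat > /root/myzone2.cfg << EOF
create
set zonepath=/zones/myzone2
set autoboot=true
commit
EOF
root@solaris:~# zonecfg -z myzone2 -f /root/myzone2.cfg
</code>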
- | ====Checking Pool Status==== | + | Zones are represented by **XML** files. As you can see above, when created, |
- | + | ||
- | Use the **zpool** command with the **list** subcommand to display the details of your pool: | + | |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | NAME SIZE ALLOC | + | <?xml version=" |
- | rpool 19.6G 6.96G 12.7G 35% | + | |
- | </code> | + | |
- | Now use the **status** subcommand: | + | <!-- |
+ | | ||
- | < | + | DO NOT EDIT THIS FILE. Use zonecfg(1M) instead. |
- | root@solaris: | + | --> |
- | pool: rpool | + | |
- | state: ONLINE | + | |
- | scan: none requested | + | |
- | config: | + | |
- | NAME STATE READ WRITE CKSUM | + | < |
- | rpool | + | |
- | c7t0d0s1 | + | |
- | errors: No known data errors | + | <zone name=" |
+ | ip-type=" | ||
+ | < | ||
+ | | ||
+ | </ | ||
</ | </ | ||
- | ====Creating a Mirrored Pool==== | + | Note that you have also set the **autoboot** property to **true** so that the zone starts on system boot. |
- | Create a ZFS mirrored pool called | + | To show the configuration that zonecfg has already filled in for you by using the **/ |
< | < | ||
- | root@solaris:~# zpool create mypool mirror c7t2d0 c7t3d0 | + | zonecfg: |
+ | zonename: myzone | ||
+ | zonepath: / | ||
+ | brand: | ||
+ | autoboot: true | ||
+ | bootargs: | ||
+ | file-mac-profile: | ||
+ | pool: | ||
+ | limitpriv: | ||
+ | scheduling-class: | ||
+ | ip-type: exclusive | ||
+ | hostid: | ||
+ | fs-allowed: | ||
+ | anet: | ||
+ | linkname: net0 | ||
+ | lower-link: | ||
+ | allowed-address not specified | ||
+ | configure-allowed-address: | ||
+ | defrouter not specified | ||
+ | allowed-dhcp-cids not specified | ||
+ | link-protection: | ||
+ | mac-address: | ||
+ | mac-prefix not specified | ||
+ | mac-slot not specified | ||
+ | vlan-id not specified | ||
+ | priority not specified | ||
+ | rxrings not specified | ||
+ | txrings not specified | ||
+ | mtu not specified | ||
+ | maxbw not specified | ||
+ | rxfanout not specified | ||
+ | vsi-typeid not specified | ||
+ | vsi-vers not specified | ||
+ | vsi-mgrid not specified | ||
+ | etsbw-lcl not specified | ||
+ | cos not specified | ||
+ | pkey not specified | ||
+ | linkmode not specified | ||
+ | zonecfg:myzone> | ||
</ | </ | ||
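You can also ask **info** for a single property or resource instead of the full configuration; a small example, assuming the property names shown in the output above:

<code>
zonecfg:myzone> info autoboot
autoboot: true
</code>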
- | Check that your pool has been created: | + | Finally, **commit** the configuration, |
< | < | ||
- | root@solaris: | + | zonecfg: |
- | NAME | + | zonecfg: |
- | mypool | + | zonecfg: |
- | rpool | + | root@solaris: |
</ | </ | ||
- | Display the file systems | + | Your zone's configuration is now in its own **XML** file: |
+ | |||
+ | < | ||
+ | root@solaris: | ||
+ | <?xml version=" | ||
+ | < | ||
+ | <!-- | ||
+ | DO NOT EDIT THIS FILE. Use zonecfg(1M) instead. | ||
+ | --> | ||
+ | <zone name=" | ||
+ | < | ||
+ | </ | ||
+ | </ | ||
+ | |||
+ | Display the zones on your system by using the **list** subcommand of the **zoneadm** command: | ||
< | < | ||
- | root@solaris: | + | root@solaris: |
- | NAME | + | |
- | mypool | + | 0 global |
- | rpool 7.02G 12.3G 4.58M /rpool | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ROOT/solaris/var 865M 12.3G | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/export | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
</ | </ | ||
- | <WRAP center round important 60%> | + | The switches used with the **list** subcommand are: |
- | Note that the zpool command automatically creates a file system on **mypool** and mounts it at **/ | + | |
- | </ | + | ^ Switch ^ Description ^ |
+ | | -c | Display all zones | | ||
+ | | -v | Display verbose output | | ||
- | ====Adding File Systems to an Existing Pool==== | + | ====Installing a Zone==== |
- | Now create two file systems in your pool called **/home** and **/ | + | Now you are ready to install |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | root@solaris: | + | |
- | root@solaris: | + | |
- | NAME USED AVAIL REFER MOUNTPOINT | + | |
- | mypool | + | |
- | mypool/ | + | |
- | mypool/ | + | |
- | rpool 7.02G 12.3G 4.58M /rpool | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
</ | </ | ||
<WRAP center round important 60%> | <WRAP center round important 60%> | ||
- | Note that the two file systems **share** the same disk space as the parent pool. | + | Go grab a cup of coffee or juice ! The installation process can take up to 20 minutes. |
</ | </ | ||
- | ====Changing the Pool Mount Point==== | + | When installation is complete, |
- | + | ||
- | Suppose that you want the /home file system mounted elsewhere rather than under the /mypool mount point. With ZFS, this is very simple: | + | |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | root@solaris:~# zfs set mountpoint=/ | + | The following ZFS file system(s) have been created: |
- | root@solaris: | + | rpool/zones/myzone |
- | NAME USED AVAIL REFER MOUNTPOINT | + | Progress being logged to /var/log/zones/zoneadm.20121214T123059Z.myzone.install |
- | mypool | + | Image: Preparing at /zones/myzone/root. |
- | mypool/ | + | |
- | mypool/ | + | |
- | rpool | + | |
- | rpool/ROOT | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ROOT/solaris/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/export | + | |
- | rpool/export/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | </ | + | |
- | <WRAP center round important 60%> | + | AI Manifest: / |
- | Note that ZFS has automatically and transparently unmounted **/mypool/home** and re-mounted it at **/users**. | + | SC Profile: |
- | </ | + | |
+ | Installation: | ||
- | ====Adding a Hot Spare==== | + | Creating IPS image |
+ | Startup linked: 1/1 done | ||
+ | Installing packages from: | ||
+ | solaris | ||
+ | origin: | ||
+ | DOWNLOAD | ||
+ | Completed | ||
- | To display all of the properties associated with **mypool**, use the **zpool** command and the **get** subcommand: | + | PHASE ITEMS |
+ | Installing new actions | ||
+ | Updating package state database | ||
+ | Updating image state Done | ||
+ | Creating fast lookup database | ||
+ | Installation: Succeeded | ||
- | < | + | |
- | root@solaris: | + | |
- | NAME PROPERTY | + | |
- | mypool | + | |
- | mypool | + | |
- | mypool | + | |
- | mypool | + | |
- | mypool | + | |
- | mypool | + | |
- | mypool | + | |
- | mypool | + | |
- | mypool | + | |
- | mypool | + | |
- | mypool | + | |
- | mypool | + | |
- | mypool | + | |
- | mypool | + | |
- | mypool | + | |
- | mypool | + | |
- | mypool | + | |
- | mypool | + | |
- | mypool | + | |
- | </code> | + | |
- | <WRAP center round important 60%> | + | done. |
- | Note that the **autoreplace** property is set to **off**. In order to use a hot spare, this property needs to be set to **on**. | + | |
- | </ | + | |
- | Set the autoreplace property to on: | + | Done: Installation completed in 3389.965 seconds. |
- | < | ||
- | root@solaris: | ||
- | root@solaris: | ||
- | NAME PROPERTY | ||
- | mypool | ||
- | </ | ||
- | Add the fourth 200 Mb disk that you have created to **mypool** as a spare: | + | Next Steps: Boot the zone, then log into the zone console (zlogin -C) |
- | < | + | to complete the configuration process. |
- | root@solaris: | + | |
- | root@solaris: | + | |
- | pool: mypool | + | |
- | | + | |
- | scan: none requested | + | |
- | config: | + | |
- | + | ||
- | NAME STATE READ WRITE CKSUM | + | |
- | mypool | + | |
- | mirror-0 | + | |
- | c7t2d0 | + | |
- | c7t3d0 | + | |
- | spares | + | |
- | c7t5d0 | + | |
- | errors: No known data errors | + | Log saved in non-global zone as / |
</ | </ | ||
- | ====Observing Pool Activity==== | + | Now use zoneadm' |
- | + | ||
- | Create a random data file in **/ | + | |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | [1] 2617 | + | ID NAME |
+ | 0 global | ||
+ | - myzone | ||
</ | </ | ||
<WRAP center round important 60%> | <WRAP center round important 60%> | ||
- | Write down the PID, you will need it in 2 minutes | + | Note that the myzone **STATUS** is now **installed** as opposed to **configured**. |
</ | </ | ||
- | Now display the writes to the pool using the **iostat** subcommand of the **zpool** command: | + | ====A Zone's First Boot==== |
+ | |||
+ | Verify myzone and then boot it: | ||
< | < | ||
- | root@solaris: | + | root@solaris: |
- | | + | root@solaris: |
- | pool alloc | + | </ |
- | ---------- | + | |
- | mypool | + | |
- | mirror | + | |
- | c7t2d0 | + | |
- | c7t3d0 | + | |
- | ---------- | + | |
- | rpool | + | |
- | c7t0d0s1 | + | |
- | ---------- | + | |
- | | + | Check if the zone status is now **running**: |
- | pool alloc | + | |
- | ---------- | + | |
- | mypool | + | |
- | mirror | + | |
- | c7t2d0 | + | |
- | c7t3d0 | + | |
- | ---------- | + | |
- | rpool | + | |
- | c7t0d0s1 | + | |
- | ---------- | + | |
- | | + | < |
- | pool alloc | + | root@solaris: |
- | ---------- | + | |
- | mypool | + | |
- | mirror | + | 1 myzone |
- | | + | </ |
- | | + | |
- | ---------- | + | |
- | rpool | + | |
- | c7t0d0s1 | + | |
- | ---------- | + | |
- | | + | Now you can login to the zone using the **zlogin** command: |
- | pool alloc | + | |
- | ---------- | + | |
- | mypool | + | |
- | mirror | + | |
- | c7t2d0 | + | |
- | c7t3d0 | + | |
- | ---------- | + | |
- | rpool | + | |
- | c7t0d0s1 | + | |
- | ---------- | + | |
- | | + | < |
- | pool alloc | + | root@solaris: |
- | ---------- | + | [Connected to zone ' |
- | mypool | + | |
- | mirror | + | |
- | c7t2d0 | + | |
- | c7t3d0 | + | |
- | ---------- | + | |
- | rpool | + | |
- | c7t0d0s1 | + | |
- | ---------- | + | |
- | + | ||
- | ^C | + | |
</ | </ | ||
- | <WRAP center round todo 60%> | + | Hit <key>Enter</key> and configure the zone: |
- | Is your mirror functioning ? | + | |
- | </WRAP> | + | |
- | Now kill the process creating the file **randomfile** : | + | |
+ | | ||
+ | | ||
+ | | ||
+ | * user password = fenestr0$ | ||
- | # kill -9 PID [Entrée] | + | Once configured you will see messages similar to the following: |
- | + | ||
- | Delete | + | |
< | < | ||
- | root@solaris:~# rm -rf / | + | SC profile successfully generated. |
- | [1]+ Killed | + | Exiting System Configuration Tool. Log is available at: |
+ | /system/volatile/sysconfig/sysconfig.log.7316 | ||
</ | </ | ||
- | ====Setting a User Quota==== | + | Use the tilde-dot ( ~. ) shortcut |
- | + | ||
- | To set a user quota, you need to use the **set** subcommand of **zpool**: | + | |
< | < | ||
- | root@solaris: | + | ~. |
- | root@solaris: | + | [Connection to zone ' |
- | NAME PROPERTY | + | root@solaris: |
- | mypool | + | |
- | root@solaris: | + | |
- | NAME USED AVAIL REFER MOUNTPOINT | + | |
- | mypool | + | |
- | mypool/ | + | |
- | mypool/ | + | |
- | rpool 7.03G 12.3G 4.58M /rpool | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
</ | </ | ||
- | <WRAP center round important 60%> | + | ====Logging into a Zone Directly as Root==== |
- | Note that the quota of 50 Mb has been set on / | + | |
- | </ | + | |
- | Now create a random data file in / | + | Log back into the zone as root using the **-S** switch of the **zlogin** command: |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | cat: output error (0/131072 characters written) | + | [Connected to zone ' |
- | Disc quota exceeded | + | @myzone:~$ whoami |
+ | root | ||
+ | @myzone:~$ ~. | ||
+ | [Connection to zone ' | ||
+ | root@solaris: | ||
</ | </ | ||
- | <WRAP center round important 60%> | + | ====Logging into a Zone as a Specific User==== |
- | After a few minutes, you will see the **Disc quota exceeded** message. | + | |
- | </ | + | |
- | Looking at the available disk space on / | + | To log into the zone as the **myzone** user that you previously created, use the following command: |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | NAME USED AVAIL REFER MOUNTPOINT | + | [Connected to zone ' |
- | mypool/ | + | No directory! Logging in with home=/ |
+ | Oracle Corporation SunOS 5.11 11.1 September 2012 | ||
+ | -bash-4.1$ whoami | ||
+ | myzone | ||
+ | -bash-4.1$ ~. | ||
+ | [Connection to zone ' | ||
+ | root@solaris: | ||
</ | </ | ||
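Besides interactive logins, **zlogin** can also run a single command in the zone and return immediately, which is convenient for quick checks from the global zone. A couple of illustrative examples (the commands themselves are just examples):

<code>
root@solaris:~# zlogin myzone uname -a
root@solaris:~# zlogin myzone svcs -xv
</code>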
- | Delete the **testfile** file: | + | =====LAB #2 - Administering Zones===== |
- | < | + | ====Sharing Files between the Global and Local Zones==== |
- | root@solaris: | + | |
- | </ | + | |
- | ====Setting | + | To share files between the two zones, you need to configure |
- | As with setting quotas, setting | + | In the global zone, create |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | root@solaris:~# zfs get reservation mypool/ | + | |
- | NAME | + | |
- | mypool/home/user1 reservation | + | |
</ | </ | ||
- | ====Using Snapshots==== | ||
- | Create a file in **/ | + | Now use the **zonecfg** command to configure the share: |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | root@solaris: | + | zonecfg: |
- | snapshot1 | + | zonecfg: |
+ | zonecfg: | ||
+ | zonecfg: | ||
+ | zonecfg: | ||
+ | zonecfg: | ||
+ | zonecfg: | ||
+ | root@solaris: | ||
</ | </ | ||
- | To create a snapshot of a ZFS file system, you need to use the **snapshot** subcommand of the **zfs** command: | + | <WRAP center round important 60%> |
+ | Note that **dir** indicates the mount point inside the local zone, while **special** indicates the directory in the global zone that is being shared. | ||
+ | </ | ||
+ | |||
+ | Now create a file in **/ | ||
< | < | ||
- | root@solaris: | + | root@solaris: |
</ | </ | ||
- | The snapshot | + | Reboot myzone, check it is up and running, log into myzone as root and check you can see the share. Finally, create a file in **/root/share**: |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | total 3 | + | root@solaris: |
- | drwxr-xr-x | + | ID NAME |
+ | 0 global | ||
+ | 2 myzone | ||
+ | root@solaris: | ||
+ | [Connected to zone ' | ||
+ | @myzone:~$ cd /root | ||
+ | @myzone:~root$ ls | ||
+ | share | ||
+ | @myzone:~root$ ls share | ||
+ | testshare | ||
+ | @myzone:~root$ touch share/ | ||
+ | @myzone: | ||
+ | shareback | ||
</ | </ | ||
- | As you can see, the snapshot contains | + | Leave myzone and check if you can see the **shareback** file from the global zone: |
< | < | ||
- | root@solaris: | + | @myzone: |
- | total 2 | + | [Connection to zone ' |
- | -rw-r--r-- | + | root@solaris: |
+ | shareback | ||
</ | </ | ||
- | It is important to note here that the .zfs directory is also hidden from the **ls** command, even when using the **-a** switch: | + | You can also share the global zone's DVD-ROM drive. However, do **not** use the process explained above since it creates a permanent LOFS mount, which will prevent you from ejecting the DVD from the global zone's DVD-ROM drive whilst the local zone is running: |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | /users/user1: | + | root@solaris:~# ls / |
- | total 8 | + | 32Bit |
- | drwxr-xr-x | + | 64Bit |
- | drwxr-xr-x 3 root | + | AUTORUN.INF |
- | -rw-r--r-- | + | root@solaris:~# mount -F lofs / |
</ | </ | ||
- | You can also create a recursive snapshot | + | Now check you can see the contents |
< | < | ||
- | root@solaris: | + | root@solaris: |
+ | 32Bit | ||
+ | 64Bit | ||
+ | AUTORUN.INF | ||
+ | autorun.sh | ||
+ | cert | ||
+ | OS2 | ||
+ | runasroot.sh | ||
+ | VBoxLinuxAdditions.run | ||
+ | VBoxSolarisAdditions.pkg | ||
+ | VBoxWindowsAdditions-amd64.exe | ||
+ | VBoxWindowsAdditions-x86.exe | ||
+ | VBoxWindowsAdditions.exe | ||
</ | </ | ||
- | The snapshots are stored in their respective .zfs directories: | + | Finally, unmount the DVD-ROM and eject it: |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | Dec13-1 | + | root@solaris: |
- | root@solaris: | + | cdrom /dev/dsk/c7t1d0s2 ejected |
- | Dec13 Dec13-1 | + | |
</ | </ | ||
- | You can list all snapshots | + | ====Removing the Share==== |
+ | |||
+ | In order to remove the LOFS share, proceed | ||
< | < | ||
- | root@solaris: | + | root@solaris: |
- | NAME | + | zonecfg: |
- | mypool@Dec13-1 | + | fs: |
- | mypool/home@Dec13-1 | + | dir: /root/share |
- | mypool/home/user1@Dec13 | + | special: |
- | mypool/home/user1@Dec13-1 | + | raw not specified |
+ | type: lofs | ||
+ | options: [rw, | ||
+ | zonecfg: | ||
+ | zonecfg: | ||
+ | root@solaris:~# zoneadm | ||
</ | </ | ||
- | Create another file in **/ | + | ====Allocating CPU Resources==== |
- | < | + | First, lets see what the non-global zone currently sees as available processors: |
- | root@solaris: | + | |
- | root@solaris: | + | |
- | total 4 | + | |
- | -rw-r--r-- | + | |
- | -rw-r--r-- | + | |
- | root@solaris: | + | |
- | This is a test file for the first snapshot | + | |
- | root@solaris: | + | |
- | This is a test file for the second snapshot | + | |
- | </ | + | |
- | + | ||
- | Now take a second recursive snapshot of **mypool**: | + | |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | root@solaris:~# zfs list -t snapshot -r mypool | + | Status of virtual processor 0 as of: 12/15/2012 06:54:05 |
- | NAME | + | |
- | mypool@Dec13-1 | + | |
- | mypool@Dec13-2 | + | and has an i387 compatible floating point processor. |
- | mypool/home@Dec13-1 | + | Status of virtual processor 1 as of: 12/15/2012 06:54:05 |
- | mypool/ | + | |
- | mypool/home/user1@Dec13 | + | The i386 processor operates at 2271 MHz, |
- | mypool/ | + | and has an i387 compatible floating point processor. |
- | mypool/ | + | |
</ | </ | ||
- | The **diff** subcommand | + | As you can see, the zone has //grabbed// both of the processors available in the global zone. In order to limit the availability to just 1 processor, you need to change |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | M / | + | zonecfg: |
- | M /users/user1/ | + | zonecfg: |
- | + /users/user1/ | + | zonecfg: |
+ | zonecfg: | ||
+ | root@solaris:~# zoneadm | ||
+ | root@solaris: | ||
+ | Status of virtual processor 0 as of: 12/15/2012 07:12:29 | ||
+ | | ||
+ | The i386 processor operates at 2271 MHz, | ||
+ | and has an i387 compatible floating point processor. | ||
</ | </ | ||
<WRAP center round important 60%> | <WRAP center round important 60%> | ||
- | The above out put shows that **/ | + | The dedicated cpu is now invisible to all other non-global zones. You can also define a range of CPUs, such as **1-3**, in which case, when the non-global zone boots, the system will allocate 1 CPU as a minimum and 2 or 3 CPUs if they are available. |
</ | </ | ||
- | This output can contain | + | Before proceeding further, remove |
- | + | ||
- | ^ Character ^ Description ^ | + | |
- | | M | **M**odification | | + | |
- | | R | **R**enamed | | + | |
- | | + | Added | | + | |
- | | - | Deleted | | + | |
- | + | ||
- | Note that you cannot compare the snapshots in the reverse order: | + | |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | Unable to obtain diffs: mypool/ | + | root@solaris:~# zoneadm |
</ | </ | ||
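A minimal sketch of what removing the **dedicated-cpu** resource can look like, assuming the zone is still called **myzone** and is rebooted afterwards so that the change takes effect:

<code>
root@solaris:~# zonecfg -z myzone
zonecfg:myzone> remove dedicated-cpu
zonecfg:myzone> commit
zonecfg:myzone> exit
root@solaris:~# zoneadm -z myzone reboot
</code>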
- | ====Rolling Back to a Snapshot==== | + | ====Fair Share Scheduler==== |
- | In the case that you wish to rollback to a specific snapshot, note that you can **only** roll back to the last snapshot | + | Another way of sharing resources is to use the Fair Share Scheduler. Firstly, |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | NAME | + | root@solaris: |
- | mypool@Dec13-1 | + | CONFIGURED CLASSES |
- | mypool@Dec13-2 | + | ================== |
- | mypool/ | + | |
- | mypool/ | + | |
- | mypool/ | + | |
- | mypool/ | + | |
- | mypool/ | + | |
- | root@solaris: | + | |
- | cannot rollback to ' | + | |
- | use ' | + | |
- | mypool/ | + | |
- | </ | + | |
- | Delete the **Dec13-2** snapshot as follows: | + | SYS (System Class) |
- | + | TS (Time Sharing) | |
- | < | + | SDC (System Duty-Cycle Class) |
- | root@solaris: | + | FX (Fixed Priority) |
- | root@solaris: | + | IA (Interactive) |
- | NAME | + | RT (Real Time) |
- | mypool@Dec13-1 | + | FSS (Fair Share) |
- | mypool@Dec13-2 | + | |
- | mypool/ | + | |
- | mypool/ | + | |
- | mypool/ | + | |
- | mypool/ | + | |
</ | </ | ||
- | Now roll back to **Dec13-1**: | + | Next set the FSS scheduler as the default for your zone: |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | root@solaris: | + | |
- | total 2 | + | |
- | -rw-r--r-- | + | |
</ | </ | ||
- | <WRAP center round important 60%> | + | Now you can give your global zone 75% of your processors, leaving 25% for your zone: |
- | Note that the **snapshot2** file has obviously disappeared since it was not in the **Dec13-1** snapshot. | + | |
- | </ | + | |
- | + | ||
- | ====Cloning a Snapshot==== | + | |
- | + | ||
- | Snapshots are read-only. To convert a snapshot to a writable file system, | + | |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | root@solaris: | + | zonecfg: |
- | NAME USED AVAIL REFER MOUNTPOINT | + | zonecfg: |
- | mypool | + | root@solaris: |
- | mypool/ | + | zonecfg: |
- | mypool/ | + | zonecfg: |
- | mypool/ | + | |
- | rpool 7.03G 12.3G 4.58M /rpool | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
</ | </ | ||
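A minimal sketch of one way to express such a split, assuming three shares for the global zone and one share for **myzone** ( shares are relative weights rather than percentages, so 3:1 gives roughly 75%/25% ):

<code>
root@solaris:~# prctl -n zone.cpu-shares -v 3 -r -i zone global
root@solaris:~# zonecfg -z myzone
zonecfg:myzone> set cpu-shares=1
zonecfg:myzone> commit
zonecfg:myzone> exit
root@solaris:~# zoneadm -z myzone reboot
</code>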
- | Display | + | Finally use the **prstat** command to display the CPU resource balancing: |
< | < | ||
- | root@solaris:~# ls -l /users/user3 | + | PID USERNAME |
- | total 2 | + | 3725 trainee |
- | -rw-r--r-- 1 root | + | 3738 trainee |
- | </code> | + | 11208 daemon |
+ | 1159 trainee | ||
+ | 3661 trainee | ||
+ | 5 root 0K 0K sleep 99 | ||
+ | 3683 trainee | ||
+ | 12971 root 11M 3744K cpu1 | ||
+ | 12134 netadm | ||
+ | 12308 root 5880K 2296K sleep 59 0 | ||
+ | 3645 trainee | ||
+ | 3644 trainee | ||
+ | 12400 root 3964K 1008K sleep 59 0 | ||
+ | 3658 trainee | ||
+ | 814 root 16M 6288K sleep 59 0 | ||
+ | 932 root | ||
+ | 957 root 11M 1144K sleep 59 0 | ||
+ | 818 root 0K 0K sleep 99 | ||
+ | | ||
+ | 881 daemon | ||
+ | 95 netadm | ||
+ | | ||
+ | 85 daemon | ||
+ | 50 root 16M 1560K sleep 59 0 | ||
+ | ZONEID | ||
+ | | ||
+ | | ||
- | ====Using Compression==== | ||
- | In order to minimize storage space, you can make a file system use compression. Compression can be activated either at creation time or after creation. Compression only works for new data. Any existing data in the file system at the time of activating compression remains uncompressed. | ||
- | To activate compression on an existing file system, you need to change the file system' | ||
- | < | ||
- | root@solaris: | ||
- | root@solaris: | ||
- | NAME | ||
- | mypool/ | ||
- | </ | ||
- | ====Using De-duplication==== | ||
- | Another space saving property of ZFS file systems is **De-duplication**: | + | Total: 152 processes, 956 lwps, load averages: 0.08, 0.12, 0.12 |
- | + | ||
- | < | + | |
- | root@solaris:~# zfs set dedup=on mypool/ | + | |
- | root@solaris: | + | |
- | NAME | + | |
- | mypool/ | + | |
</ | </ | ||
+ | ====Allocating Memory==== | ||
- | ====Using Encryption==== | + | Three types of memory capping are possible within a zone: |
- | Unlike **Compression** and **De-duplication**, **Encryption** | + | ^ Cap ^ Description ^ |
+ | | Physical | Total amount of physical memory available to the zone. Once past the cap, memory pages are paged out | | ||
+ | | Locked | Amount of physical memory that the zone is allowed to lock down ( i.e. prevent from being paged out ) | | ||
+ | | Swap | Amount of swap space that the zone is allowed to consume | | ||
- | < | + | To cap the physical memory |
- | root@solaris: | + | |
- | Enter passphrase for ' | + | |
- | Enter again: fenestros | + | |
- | </ | + | |
- | + | ||
- | <WRAP center round important 60%> | + | |
- | Note that the passphrase is not shown in the real output | + | |
- | </ | + | |
- | + | ||
- | To check if encryption is active on a file system, use the following command: | + | |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | NAME | + | zonecfg: |
- | mypool/ | + | zonecfg: |
- | root@solaris: | + | zonecfg: |
- | NAME | + | zonecfg: |
- | mypool/ | + | root@solaris: |
+ | capped-memory: | ||
+ | physical: 50M | ||
</ | </ | ||
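The **capped-memory** resource can also cap swap and locked memory in the same block; a sketch with purely illustrative values:

<code>
root@solaris:~# zonecfg -z myzone
zonecfg:myzone> select capped-memory
zonecfg:myzone:capped-memory> set swap=100m
zonecfg:myzone:capped-memory> set locked=20m
zonecfg:myzone:capped-memory> end
zonecfg:myzone> commit
zonecfg:myzone> exit
</code>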
+ | ====Zone Statistics==== | ||
- | + | Zone statistics can be displayed by using the **zonestat** command: | |
- | ====Replacing a Faulty Disk==== | + | |
- | + | ||
- | In the case of a faulty disk and no hot spares, replacing the disk is a one-line operation | + | |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | pool: mypool | + | Collecting data for first interval... |
- | state: ONLINE | + | Interval: 1, Duration: 0:00:05 |
- | | + | SUMMARY |
- | config: | + | |
+ | | ||
+ | [total] | ||
+ | | ||
+ | | ||
+ | | ||
- | NAME STATE READ WRITE CKSUM | + | Interval: 2, Duration: 0:00:10 |
- | mypool | + | SUMMARY |
- | mirror-0 | + | ---CPU---- |
- | c7t2d0 | + | |
- | c7t3d0 | + | [total] |
- | spares | + | [system] |
- | c7t5d0 | + | global |
+ | myzone | ||
- | errors: No known data errors | + | Interval: 3, Duration: 0:00:15 |
- | root@solaris:~# zpool replace mypool c7t2d0 c7t4d0 | + | SUMMARY |
+ | ---CPU---- | ||
+ | | ||
+ | [total] | ||
+ | | ||
+ | | ||
+ | | ||
</ | </ | ||
- | Use the **status** subcommand of the **zpool** command again to see what has happened: | ||
- | < | + | ====Non-global Zone Privileges ==== |
- | root@solaris: | + | |
- | pool: mypool | + | |
- | | + | |
- | scan: resilvered 601K in 0h0m with 0 errors on Thu Dec 13 11:45:49 2012 | + | |
- | config: | + | |
- | NAME STATE READ WRITE CKSUM | + | Certain things cannot be done from within a non-global zone. The list of privileges assigned to a zone can be displayed as follows: |
- | mypool | + | |
- | mirror-0 ONLINE | + | |
- | c7t4d0 | + | |
- | c7t3d0 | + | |
- | spares | + | |
- | c7t5d0 | + | |
- | errors: No known data errors | + | < |
- | </ | + | root@solaris:~# zlogin myzone ppriv -l |
+ | contract_event | ||
+ | contract_identity | ||
+ | contract_observer | ||
+ | cpc_cpu | ||
+ | dtrace_kernel | ||
+ | dtrace_proc | ||
+ | dtrace_user | ||
+ | file_chown | ||
+ | file_chown_self | ||
+ | file_dac_execute | ||
+ | file_dac_read | ||
+ | file_dac_search | ||
+ | file_dac_write | ||
+ | file_downgrade_sl | ||
+ | file_flag_set | ||
+ | file_link_any | ||
+ | file_owner | ||
+ | file_read | ||
+ | file_setid | ||
+ | file_upgrade_sl | ||
+ | file_write | ||
+ | graphics_access | ||
+ | graphics_map | ||
+ | ipc_dac_read | ||
+ | ipc_dac_write | ||
+ | ipc_owner | ||
+ | net_access | ||
+ | net_bindmlp | ||
+ | net_icmpaccess | ||
+ | net_mac_aware | ||
+ | net_mac_implicit | ||
+ | net_observability | ||
+ | net_privaddr | ||
+ | net_rawaccess | ||
+ | proc_audit | ||
+ | proc_chroot | ||
+ | proc_clock_highres | ||
+ | proc_exec | ||
+ | proc_fork | ||
+ | proc_info | ||
+ | proc_lock_memory | ||
+ | proc_owner | ||
+ | proc_priocntl | ||
+ | proc_session | ||
+ | proc_setid | ||
+ | proc_taskid | ||
+ | proc_zone | ||
+ | sys_acct | ||
+ | sys_admin | ||
+ | sys_audit | ||
+ | sys_config | ||
+ | sys_devices | ||
+ | sys_ipc_config | ||
+ | sys_linkdir | ||
+ | sys_mount | ||
+ | sys_iptun_config | ||
+ | sys_flow_config | ||
+ | sys_dl_config | ||
+ | sys_ip_config | ||
+ | sys_net_config | ||
+ | sys_nfs | ||
+ | sys_ppp_config | ||
+ | sys_res_bind | ||
+ | sys_res_config | ||
+ | sys_resource | ||
+ | sys_share | ||
+ | sys_smb | ||
+ | sys_suser_compat | ||
+ | sys_time | ||
+ | sys_trans_label | ||
+ | win_colormap | ||
+ | win_config | ||
+ | win_dac_read | ||
+ | win_dac_write | ||
+ | win_devices | ||
+ | win_dga | ||
+ | win_downgrade_sl | ||
+ | win_fontpath | ||
+ | win_mac_read | ||
+ | win_mac_write | ||
+ | win_selection | ||
+ | win_upgrade_sl | ||
+ | </ | ||
- | <WRAP center round important 60%> | ||
- | ZFS // | ||
- | </ | ||
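If a zone genuinely needs a privilege that is not in its default set, the **limitpriv** property seen earlier in the zonecfg info output can be extended. A sketch, assuming you want to allow DTrace inside the zone (the privilege names are the standard DTrace ones):

<code>
root@solaris:~# zonecfg -z myzone
zonecfg:myzone> set limitpriv="default,dtrace_proc,dtrace_user"
zonecfg:myzone> commit
zonecfg:myzone> exit
root@solaris:~# zoneadm -z myzone reboot
</code>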
- | ====Destroying | + | ====Changing |
- | Destroying a pool is achieved by using the **destroy** subcommand | + | To change |
< | < | ||
- | root@solaris: | + | root@solaris: |
</ | </ | ||
- | As you can see by the following output, this operation has also destroyed all the associated snapshots: | + | Now you can change |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | NAME | + | root@solaris: |
- | rpool 7.03G 12.3G 4.58M /rpool | + | |
- | rpool/ | + | 0 global |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ROOT/solaris/var 865M 12.3G | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/export/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | root@solaris: | + | |
- | cannot open ' | + | |
- | root@solaris: | + | |
- | total 0 | + | |
</ | </ | ||
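The following sections refer to the zone as **myzone1**. As a sketch, renaming a zone is done from zonecfg while the zone is halted (the new name myzone1 is assumed here):

<code>
root@solaris:~# zoneadm -z myzone halt
root@solaris:~# zonecfg -z myzone "set zonename=myzone1"
root@solaris:~# zoneadm -z myzone1 boot
</code>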
- | <WRAP center round important 60%> | + | ====Changing |
- | As you have seen above, destroying a pool, **all** the data in it and **all** the associated snapshots is disconcertingly simple. You should therefore be very careful when using the **destroy** subcommand. | + | |
- | </ | + | |
- | + | ||
- | + | ||
- | + | ||
- | ====Creating | + | |
- | You can create a RAID-5 pool using the RAID-Z algorithm: | + | To change |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | root@solaris: | + | root@solaris: |
- | | + | |
- | | + | |
- | scan: none requested | + | |
- | config: | + | |
- | + | ||
- | NAME | + | |
- | mypool | + | |
- | | + | |
- | c7t2d0 | + | |
- | c7t3d0 | + | |
- | c7t4d0 | + | |
- | spares | + | |
- | c7t5d0 | + | |
- | + | ||
- | errors: No known data errors | + | |
- | </ | + | |
- | + | ||
- | Destroy **mypool** : | + | |
- | + | ||
- | < | + | |
- | root@solaris: | + | |
</ | </ | ||
- | ====Creating | + | ====Backing Up a Zone==== |
- | You can create a RAID-6 pool using the RAID-Z2 algorithm: | + | Backing up a zone includes backing up the zone configuration **and** the application data in it. You can use any kind of backup software to back up data within |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | root@solaris: | + | root@solaris: |
- | pool: mypool | + | create -b |
- | state: ONLINE | + | set brand=solaris |
- | scan: none requested | + | set zonepath=/ |
- | config: | + | set autoboot=true |
- | + | set scheduling-class=FSS | |
- | NAME STATE READ WRITE CKSUM | + | set ip-type=exclusive |
- | mypool | + | add anet |
- | raidz2-0 ONLINE | + | set linkname=net0 |
- | c7t2d0 | + | set lower-link=auto |
- | c7t3d0 | + | set configure-allowed-address=true |
- | c7t4d0 | + | set link-protection=mac-nospoof |
- | c7t5d0 | + | set mac-address=random |
- | spares | + | end |
- | c7t6d0 | + | add capped-memory |
- | + | set physical=50M | |
- | errors: No known data errors | + | end |
+ | add rctl | ||
+ | set name=zone.cpu-shares | ||
+ | add value (priv=privileged, | ||
+ | end | ||
</ | </ | ||
- | Destroy **mypool** | + | Now back up the zone's **XML** file: |
< | < | ||
- | root@solaris: | + | root@solaris: |
</ | </ | ||
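A sketch of one way to keep both the exported command file and the XML file, assuming a hypothetical backup directory of **/root/backup**:

<code>
root@solaris:~# mkdir -p /root/backup
root@solaris:~# zonecfg -z myzone1 export > /root/backup/myzone1.cfg
root@solaris:~# cp /etc/zones/myzone1.xml /root/backup/
</code>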
- | <WRAP center round todo 60%> | ||
- | Create a triple parity RAID **mypool** using your five 200MB disks. Do not delete it. | ||
- | </ | ||
- | ====Displaying the Zpool History==== | + | ====Restoring a Zone==== |
- | You can review everything that has been done to existing pools by using the **history** subcommand of the **zpool** command: | + | Disaster |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | History for ' | + | Are you sure you want to uninstall zone myzone1 (y/[n])? y |
- | 2012-12-13.14: | + | Progress being logged to /var/log/zones/zoneadm.20121218T170820Z.myzone1.uninstall |
- | + | root@solaris:~# | |
- | History for ' | + | root@solaris:~# zonecfg |
- | 2012-11-20.19: | + | Are you sure you want to delete zone myzone1 (y/[n])? y |
- | 2012-11-20.19: | + | |
- | 2012-11-20.19: | + | |
- | 2012-11-20.19: | + | |
- | 2012-11-20.19: | + | |
- | 2012-11-20.19: | + | |
- | 2012-11-20.19:08:07 zfs create -p -V 1024.0m rpool/ | + | |
- | 2012-11-20.19:08:12 zfs create -p -V 1024.0m rpool/swap | + | |
- | 2012-11-20.19:08:20 zfs set primarycache=metadata rpool/swap | + | |
- | 2012-11-20.19:25:51 zfs set primarycache=metadata rpool/ | + | |
- | 2012-11-20.19: | + | |
- | 2012-11-20.22: | + | |
- | 2012-12-01.14: | + | |
- | 2012-12-03.13: | + | |
- | 2012-12-08.14: | + | |
- | 2012-12-11.15: | + | |
- | 2012-12-12.09: | + | |
</ | </ | ||
- | <WRAP center round important 60%> | + | Now restore myzone1 as follows: |
- | Note that the history related to destroyed pools has been deleted. | + | |
- | </ | + | |
- | =====LAB | + | < |
+ | root@solaris: | ||
+ | root@solaris: | ||
+ | The following ZFS file system(s) have been created: | ||
+ | rpool/ | ||
+ | Progress being logged to / | ||
+ | | ||
- | ====Installing the COMSTAR Server==== | + | AI Manifest: /tmp/manifest.xml.9BaOQP |
- | + | | |
- | Start by installing the COMSTAR storage server software: | + | |
- | + | Installation: Starting ... | |
- | < | + | |
- | root@solaris: | + | |
- | Packages to install: | + | |
- | | + | |
- | Create backup boot environment: Yes | + | |
- | | + | |
+ | Creating IPS image | ||
+ | Startup linked: 1/1 done | ||
+ | Installing packages from: | ||
+ | solaris | ||
+ | origin: | ||
DOWNLOAD | DOWNLOAD | ||
- | Completed | + | Completed |
PHASE ITEMS | PHASE ITEMS | ||
- | Installing new actions | + | Installing new actions |
Updating package state database | Updating package state database | ||
Updating image state Done | Updating image state Done | ||
- | Creating fast lookup database | + | Creating fast lookup database |
- | </ | + | Installation: |
- | The **COMSTAR target mode framework** runs as the **stmf** service. Check to see if it is enabled: | + | Note: Man pages can be obtained by installing pkg:/ |
- | < | + | done. |
- | root@solaris: | + | |
- | STATE STIME FMRI | + | |
- | disabled | + | |
- | </ | + | |
- | Enable the service: | + | Done: Installation completed in 678.453 seconds. |
- | < | ||
- | root@solaris: | ||
- | root@solaris: | ||
- | STATE STIME FMRI | ||
- | online | ||
- | </ | ||
- | You can check the status of the server using the **stmfadm** command: | + | Next Steps: Boot the zone, then log into the zone console (zlogin -C) |
- | < | + | to complete the configuration process. |
- | root@solaris:~# stmfadm list-state | + | |
- | Operational Status: online | + | Log saved in non-global zone as / |
- | Config Status | + | |
- | ALUA Status | + | |
- | ALUA Node : 0 | + | |
</ | </ | ||
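In short, the restore re-creates the configuration from the saved command file and then re-installs the zone; a sketch assuming the backup taken in the previous section:

<code>
root@solaris:~# zonecfg -z myzone1 -f /root/backup/myzone1.cfg
root@solaris:~# zoneadm -z myzone1 install
root@solaris:~# zoneadm -z myzone1 boot
</code>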
- | ====Creating SCSI Logical Units==== | + | Log in as root and check the zone is running correctly: |
- | + | ||
- | First you need to create your **Backing Storage Device** within your **mypool** pool: | + | |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | root@solaris:~# zfs list | + | [Connected to zone ' |
- | NAME USED AVAIL REFER MOUNTPOINT | + | @myzone.solaris.loc:~$ ls |
- | mypool | + | bin |
- | mypool/ | + | |
- | rpool 7.40G 11.9G 4.58M /rpool | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
- | rpool/ | + | |
</ | </ | ||
- | You can see your raw device in the **/ | + | ====Cloning a Local Zone==== |
+ | |||
+ | In this section you are going to create a template zone that you can clone as necessary every time you need to install a new zone. Start by creating a zone called | ||
< | < | ||
- | root@solaris: | + | root@solaris: |
- | total 0 | + | Use ' |
- | lrwxrwxrwx | + | zonecfg:cleanzone> create |
+ | create: Using system default template ' | ||
+ | zonecfg: | ||
+ | zonecfg:cleanzone> | ||
+ | zonecfg: | ||
+ | zonecfg: | ||
+ | zonecfg: | ||
</ | </ | ||
- | You can now create a logical unit using the **create-lu** subcommand of the **sbdadm** command: | + | Install |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | Created the following LU: | + | The following ZFS file system(s) have been created: |
+ | rpool/zones/cleanzone | ||
+ | Progress being logged to /var/log/zones/ | ||
+ | Image: Preparing at / | ||
- | GUID DATA SIZE | + | AI Manifest: / |
- | -------------------------------- | + | |
- | 600144f0e2a54e00000050cae6d80001 | + | |
- | </ | + | Installation: |
- | ====Mapping the Logical Unit==== | + | Creating IPS image |
+ | Startup linked: 1/1 done | ||
+ | Installing packages from: | ||
+ | solaris | ||
+ | origin: | ||
+ | DOWNLOAD | ||
+ | Completed | ||
- | In order for the logical unit to be available to initiators, it has to be **mapped**. In order to map the logical device you need its GUID. You can use either one of the two following commands to get that information: | + | PHASE ITEMS |
+ | Installing new actions | ||
+ | Updating package state database | ||
+ | Updating image state Done | ||
+ | Creating fast lookup database | ||
+ | Installation: Succeeded | ||
- | < | + | Note: Man pages can be obtained by installing pkg:/ |
- | root@solaris:~# sbdadm list-lu | + | |
- | Found 1 LU(s) | + | done. |
- | GUID DATA SIZE | + | Done: Installation completed in 797.979 seconds. |
- | -------------------------------- | + | |
- | 600144f0e2a54e00000050cae6d80001 | + | |
- | </ | + | |
- | < | ||
- | root@solaris: | ||
- | LU Name: 600144F0E2A54E00000050CAE6D80001 | ||
- | Operational Status | ||
- | Provider Name : sbd | ||
- | Alias : / | ||
- | View Entry Count : 0 | ||
- | Data File : / | ||
- | Meta File : not set | ||
- | Size : 104857600 | ||
- | Block Size : 512 | ||
- | Management URL : not set | ||
- | Vendor ID : SUN | ||
- | Product ID : COMSTAR | ||
- | Serial Num : not set | ||
- | Write Protect | ||
- | Write Cache Mode Select: Enabled | ||
- | Writeback Cache : Enabled | ||
- | Access State : Active | ||
- | </ | ||
- | Create simple mapping for this logical unit by using the **add-view** subcommand of the **stmfadm** command: | + | Next Steps: Boot the zone, then log into the zone console (zlogin -C) |
- | < | + | to complete the configuration process. |
- | root@solaris: | + | |
- | </ | + | |
- | ====Creating a Target==== | + | Log saved in non-global zone as /zones/cleanzone/root/var/log/zones/zoneadm.20121218T143129Z.cleanzone.install |
- | + | ||
- | In order to create a target the **svc:/network/iscsi/target: | + | |
- | + | ||
- | < | + | |
- | root@solaris:~# svcs \*scsi\* | + | |
- | STATE STIME FMRI | + | |
- | disabled | + | |
- | online | + | |
</ | </ | ||
- | Start the service: | + | Boot the zone to import the zone's manifest: |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | root@solaris: | + | |
- | STATE STIME FMRI | + | |
- | online | + | |
- | online | + | |
</ | </ | ||
- | Now create a target | + | Login to the zone and hit < |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | Target iqn.1986-03.com.sun: | + | |
</ | </ | ||
- | To list the target(s), use the **list-target** subcommand of the **itadm** command: | + | To clone a zone, it first needs to be shut down: |
< | < | ||
- | root@solaris: | + | root@solaris: |
- | TARGET | + | root@solaris: |
- | iqn.1986-03.com.sun: | + | |
+ | | ||
+ | 3 myzone1 | ||
+ | - cleanzone | ||
</ | </ | ||
- | ====Configuring the Target for Discovery==== | + | Now create a clone of **cleanzone**: |
- | + | ||
- | Finally, you need to configure the target so it can be discovered by initiators: | + | |
< | < | ||
- | root@solaris: | + | root@solaris: |
+ | root@solaris: | ||
+ | root@solaris: | ||
+ | The following ZFS file system(s) have been created: | ||
+ | rpool/ | ||
+ | Progress being logged to / | ||
+ | Log saved in non-global zone as / | ||
</ | </ | ||
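A sketch of the overall cloning procedure, assuming a hypothetical new zone called **myzone2** whose configuration is copied from **cleanzone** before the clone is taken:

<code>
root@solaris:~# zonecfg -z cleanzone export -f /root/cleanzone.cfg
root@solaris:~# vi /root/cleanzone.cfg                  # change the zonepath to /zones/myzone2
root@solaris:~# zonecfg -z myzone2 -f /root/cleanzone.cfg
root@solaris:~# zoneadm -z myzone2 clone cleanzone
root@solaris:~# zoneadm -z myzone2 boot
</code>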
- | |||
- | =====References===== | ||
- | |||
- | * **[[http:// | ||
----- | ----- | ||
< | < | ||
<div align=" | <div align=" | ||
- | Copyright © 2011-2015 | + | Copyright © 2019 Hugh Norris. |
- | <a rel=" | + | |
- | </ | + | |
</ | </ |