Solaris 11 Zones

==zoneclone.sh==

<syntaxhighlight lang=bash>
#!/bin/bash
SRC_ZONE=$1
DST_ZONE=$2
DST_DIR=$3
DST_DATASET=$4

if [ $# -lt 3 ] ; then
  echo "Not enough arguments!"
  echo "Usage: $0 <src_zone> <dst_zone> <dst_dir> [dst_dataset]"
  exit 1
fi

zonecfg -z ${DST_ZONE} info >/dev/null 2>&1 && {
  echo "Destination zone exists!"
  exit 1
}

zonecfg -z ${SRC_ZONE} info >/dev/null 2>&1 || {
  echo "Source zone does not exist!"
  exit 1
}

# The source zone has to be halted ("installed") before it can be cloned.
SRC_ZONE_STATUS="$(zoneadm list -cs | nawk -v zone=${SRC_ZONE} '$1==zone {print $2;}')"
if [ "_${SRC_ZONE_STATUS}_" != "_installed_" ] ; then
  echo "Zone ${SRC_ZONE} must be in the status \"installed\" and not \"${SRC_ZONE_STATUS}\"!"
  exit 1
fi

if [ -n "${DST_DATASET}" ] ; then
  if [ -d ${DST_DIR} ] ; then
    rmdir ${DST_DIR} || {
      echo "${DST_DIR} must be empty!"
      exit 1
    }
  fi
  # Is the parent dataset there?
  zfs list -Ho name ${DST_DATASET%/*} >/dev/null 2>&1 || {
    echo "Destination dataset does not exist!"
    exit 1
  }
  zfs create -o mountpoint=${DST_DIR} ${DST_DATASET}
fi

[ -d ${DST_DIR} ] || {
  echo "Destination dir must exist!"
  exit 1
}

# Export the source zone configuration, rewrite the zonepath and
# feed the result into zonecfg to create the destination zone.
zonecfg -z ${SRC_ZONE} export \
| nawk -v zonepath=${DST_DIR} '
  BEGIN {
    FS="=";
    OFS="=";
  }
  /set zonepath/{$2=zonepath}
  { print; }
  ' \
| zonecfg -z ${DST_ZONE} -f -

zoneadm -z ${DST_ZONE} clone ${SRC_ZONE}
</syntaxhighlight>
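A hypothetical invocation (zone names, directory and dataset are examples only): the script creates the dataset mounted at the given directory, copies the configuration of zone01 with a rewritten zonepath and clones it into zone02.

<syntaxhighlight lang=bash>
# ./zoneclone.sh zone01 zone02 /zones/zone02 rpool/zones/zone02
</syntaxhighlight>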

==Way that works with Solaris Cluster and immutable zones==

The problem was that some steps of the update (indexing man pages etc.) could not be performed inside the immutable zone after the Solaris update, so one <i>boot -w</i> (a writable boot) is necessary before the zone can come up in the cluster again. Without it the attach fails:

<syntaxhighlight lang=bash>
Apr  1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Installing: Using existing zone boot environment
Apr  1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Zone BE root dataset: zone01/zone/rpool/ROOT/solaris-6
Apr  1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Cache: Using /var/pkg/publisher.
Apr  1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Updating non-global zone: Linking to image /.
Apr  1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Finished processing linked images.
Apr  1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Result: Attach Failed.
</syntaxhighlight>
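Whether a zone is running immutable can be checked from the global zone via its file-mac-profile; a quick check (zone name as in the log above, the profile value shown is just an example):

<syntaxhighlight lang=bash>
# zonecfg -z zone01 info file-mac-profile
file-mac-profile: fixed-configuration
</syntaxhighlight>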

===Move all RGs from node first===

<syntaxhighlight lang=bash>
# clrg evacuate -n $(hostname) +
</syntaxhighlight>
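Afterwards it is worth verifying that no resource group is left online on this node, e.g. with:

<syntaxhighlight lang=bash>
# clrg status
</syntaxhighlight>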

===Update Solaris===

<syntaxhighlight lang=bash>
# pkg update --be-name $(pkg info -r system/kernel | nawk '/Build Release:/{split($NF,release,".");}/Branch:/{split($NF,versions,".");print "Solaris_"release[2]"."versions[3]"_SRU"versions[4];}') --accept -v
# init 6
</syntaxhighlight>
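The nawk one-liner derives the new BE name from the Build Release and Branch of the kernel package. A quick sanity check of the expression with a made-up Branch value (0.175.3.24.0.4.0 is only an example):

<syntaxhighlight lang=bash>
# printf 'Build Release: 5.11\nBranch: 0.175.3.24.0.4.0\n' | nawk '/Build Release:/{split($NF,release,".");}/Branch:/{split($NF,versions,".");print "Solaris_"release[2]"."versions[3]"_SRU"versions[4];}'
Solaris_11.3_SRU24
</syntaxhighlight>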

===Disable zone on other node and move to self===

But leave the HAStoragePlus resource online.
<syntaxhighlight lang=bash>
# clrs disable zone01-zone-rs
# clrg switch -n $(hostname) zone01-rg
</syntaxhighlight>
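The resource states can be checked before going on (the zone resource should now be offline, the HAStoragePlus resource still online):

<syntaxhighlight lang=bash>
# clrs status -g zone01-rg
</syntaxhighlight>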

===Attach, boot -w, detach without cluster===

<syntaxhighlight lang=bash>
# zoneadm -z zone01 attach -u
# zoneadm -z zone01 boot -w
# zlogin zone01 svcs -xv # <- wait for all services to be ready
# zlogin zone01 svcs -xv # <- wait for all services to be ready
...
# zlogin zone01 svcs -xv # <- wait for all services to be ready
# zoneadm -z zone01 halt
# zoneadm -z zone01 detach
</syntaxhighlight>
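Instead of re-running svcs -xv by hand, a minimal polling sketch (svcs -x prints nothing once no service has problems anymore):

<syntaxhighlight lang=bash>
# while [ -n "$(zlogin zone01 svcs -xv)" ] ; do sleep 10 ; done
</syntaxhighlight>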

===Enable zone in cluster===

<syntaxhighlight lang=bash>
# clrs enable zone01-zone-rs
</syntaxhighlight>

==Some other things==

<syntaxhighlight lang=bash>
# zoneadm -z zone01 attach -x deny-zbe-clone -z solaris-7
# clrs enable zone01-rs
</syntaxhighlight>
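To find out which zone boot environment to pass with -z, the ZBEs can be listed below the zone's root dataset (dataset name as in the log above):

<syntaxhighlight lang=bash>
# zfs list -r -o name,mountpoint zone01/zone/rpool/ROOT
</syntaxhighlight>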

<syntaxhighlight lang=bash>
# /usr/lib/brand/solaris/attach:

Brand-specific usage:

        attach [-uv] [-a archive | -d directory | -z zbe]
               [-c profile.xml | dir] [-x attach-last-booted-zbe|
                       force-zbe-clone|deny-zbe-clone|destroy-orphan-zbes]
       -u      Update the software in the attached zone boot environment to
               match the software in the global zone boot environment.
       -v      Verbose.
       -c      Update the zone configuration with the sysconfig profile
               specified in the given file or directory.
       -a      Extract the specified archive into the zone then attach the
               active boot environment found in the archive.  The archive
               may be a zfs, cpio, or tar archive.  It may be compressed with
               gzip or bzip2.
       -d      Copy the specified directory into a new zone boot environment
               then attach the zone boot environment.
       -z      Attach the specified zone boot environment.
       -x      attach-last-booted-zbe  : Attach the last booted zone boot
                                          environment.
               force-zbe-clone         : Clone zone boot environment
                                          on attach.
               deny-zbe-clone          : Do not clone zone boot environment
                                          on attach.
               destroy-orphan-zbes     : Destroy all orphan zone boot
                                          environments (not associated with
                                          any global BE).
</syntaxhighlight>