[[Category:Solaris11|Zones]]


==zoneclone.sh==
<syntaxhighlight lang=bash>
#!/bin/bash
SRC_ZONE=$1
DST_ZONE=$2
DST_DIR=$3
DST_DATASET=$4

if [ $# -lt 3 ] ; then
  echo "Not enough arguments!"
  echo "Usage: $0 <src_zone> <dst_zone> <dst_dir> [dst_dataset]"
  exit 1
fi

# The destination zone must not be configured yet
zonecfg -z ${DST_ZONE} info >/dev/null 2>&1 && {
  echo "Destination zone exists!"
  exit 1
}

# The source zone must exist
zonecfg -z ${SRC_ZONE} info >/dev/null 2>&1 || {
  echo "Source zone does not exist!"
  exit 1
}

# Cloning requires a halted source zone, i.e. status "installed"
SRC_ZONE_STATUS="$(zoneadm list -cs | nawk -v zone=${SRC_ZONE} '$1==zone {print $2;}')"
if [ "_${SRC_ZONE_STATUS}_" != "_installed_" ] ; then
  echo "Zone ${SRC_ZONE} must be in the status \"installed\" and not \"${SRC_ZONE_STATUS}\"!"
  exit 1
fi

# If a dataset was given, create it with the zonepath as mountpoint
if [ -n "${DST_DATASET}" ] ; then
  if [ -d ${DST_DIR} ] ; then
    rmdir ${DST_DIR} || {
      echo "${DST_DIR} must be empty!"
      exit 1
    }
  fi
  # Is the parent dataset there?
  zfs list -Ho name ${DST_DATASET%/*} >/dev/null 2>&1 || {
    echo "Destination dataset does not exist!"
    exit 1
  }
  zfs create -o mountpoint=${DST_DIR} ${DST_DATASET}
fi

[ -d ${DST_DIR} ] || {
  echo "Destination dir must exist!"
  exit 1
}

# Export the source zone's configuration, rewrite the zonepath,
# and create the destination zone from it
zonecfg -z ${SRC_ZONE} export \
 | nawk -v zonepath=${DST_DIR} '
  BEGIN {
    FS="=";
    OFS="=";
  }
  /set zonepath/{$2=zonepath}
  { print; }
  ' \
 | zonecfg -z ${DST_ZONE} -f -

zoneadm -z ${DST_ZONE} clone ${SRC_ZONE}
</syntaxhighlight>
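
A hypothetical invocation; the zone names, zonepath and dataset below are made-up examples:
<syntaxhighlight lang=bash>
# Clone zone01 to a new zone zone02, putting its zonepath on a freshly created dataset
./zoneclone.sh zone01 zone02 /zones/zone02 rpool/zones/zone02
</syntaxhighlight>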


==Way that works with Solaris Cluster and immutable zones==
The problem was that some update steps (indexing man pages etc.) could not be performed in the immutable zone after a Solaris update, so the attach under cluster control failed. One manual <i>boot -w</i> (a transient writable boot) is necessary before the zone can come up in the cluster again. The cluster agent logged the failed attach like this:
<syntaxhighlight lang=bash>
Apr  1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Installing: Using existing zone boot environment
Apr  1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Zone BE root dataset: zone01/zone/rpool/ROOT/solaris-6
Apr  1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Cache: Using /var/pkg/publisher.
Apr  1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Updating non-global zone: Linking to image /.
Apr  1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Finished processing linked images.
Apr  1 02:31:52 node01 SC[SUNWsczone.start_sczbt]:zone01-rg:zone01-zone-rs: [ID 567783 daemon.error] start_sczbt rc<1> - Result: Attach Failed.
</syntaxhighlight>
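
Whether a zone is running immutable can be checked via its <i>file-mac-profile</i> property; the zone name and the profile value shown here are examples:
<syntaxhighlight lang=bash>
# zonecfg -z zone01 info file-mac-profile
file-mac-profile: fixed-configuration
</syntaxhighlight>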


===Move all RGs from node first===
<syntaxhighlight lang=bash>
# clrg evacuate -n $(hostname) +
</syntaxhighlight>
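
Afterwards no resource group should be left online on this node; a quick (example) check before updating:
<syntaxhighlight lang=bash>
# clrg status | grep $(hostname)
</syntaxhighlight>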


===Update Solaris===
<syntaxhighlight lang=bash>
# pkg update --be-name $(pkg info -r system/kernel | nawk '/Build Release:/{split($NF,release,".");}/Branch:/{split($NF,versions,".");print "Solaris_"release[2]"."versions[3]"_SRU"versions[4];}') --accept -v

# init 6
</syntaxhighlight>
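
The embedded nawk script derives the new BE name from the kernel package metadata. With example values such as these it would print <i>Solaris_11.3_SRU36</i>:
<syntaxhighlight lang=bash>
# pkg info -r system/kernel | egrep 'Build Release:|Branch:'
   Build Release: 5.11
          Branch: 0.175.3.36.0.8.0
</syntaxhighlight>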


===Disable zone on other node and move to self===
But leave the HAStoragePlus resource online.
<syntaxhighlight lang=bash>
# clrs disable zone01-zone-rs
# clrg switch -n $(hostname) zone01-rg
</syntaxhighlight>
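
Before touching the zone manually it is worth verifying that only the zone resource is offline while the storage resource is still online (resource group name as above):
<syntaxhighlight lang=bash>
# clrs status -g zone01-rg
</syntaxhighlight>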


===Attach, boot -w, detach without cluster===
<syntaxhighlight lang=bash>
# zoneadm -z zone01 attach -u
# zoneadm -z zone01 boot -w
# zlogin    zone01 svcs -xv        # <- wait for all services to be ready
# zlogin    zone01 svcs -xv        # <- wait for all services to be ready
...
# zlogin    zone01 svcs -xv        # <- wait for all services to be ready
# zoneadm -z zone01 halt
# zoneadm -z zone01 detach
</syntaxhighlight>
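
This writable boot lets the pending self-assembly steps (man page indexing etc.) run once. If anything seems stuck, the boot can also be watched on the zone console:
<syntaxhighlight lang=bash>
# zlogin -C zone01        # leave the console again with ~.
</syntaxhighlight>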


===Enable zone in cluster===
<syntaxhighlight lang=bash>
# clrs enable zone01-zone-rs
</syntaxhighlight>


==Some other things==
Attach a specific zone boot environment (here <i>solaris-7</i>) without cloning it, then bring the zone resource back online:
<syntaxhighlight lang=bash>
# zoneadm -z zone01 attach -x deny-zbe-clone -z solaris-7
# clrs enable zone01-rs
</syntaxhighlight>
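
Which zone boot environments are available for <i>-z</i> can be seen from the ROOT datasets of the zone's zpool; the dataset name below is taken from the log excerpt above:
<syntaxhighlight lang=bash>
# zfs list -r -o name zone01/zone/rpool/ROOT
</syntaxhighlight>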


The brand-specific attach options, as printed in the usage text of the solaris brand's attach script:
<syntaxhighlight lang=bash>
# /usr/lib/brand/solaris/attach:

Brand specific options:
brand-specific usage: 
Usage:
         attach [-uv] [-a archive | -d directory | -z zbe]
                [-c profile.xml | dir] [-x attach-last-booted-zbe|
                        force-zbe-clone|deny-zbe-clone|destroy-orphan-zbes]

        -u      Update the software in the attached zone boot environment to
                match the software in the global zone boot environment.
        -v      Verbose.
        -c      Update the zone configuration with the sysconfig profile
                specified in the given file or directory.
        -a      Extract the specified archive into the zone then attach the
                active boot environment found in the archive.  The archive
                may be a zfs, cpio, or tar archive.  It may be compressed with
                gzip or bzip2.
        -d      Copy the specified directory into a new zone boot environment
                then attach the zone boot environment.
        -z      Attach the specified zone boot environment.
        -x      attach-last-booted-zbe  : Attach the last booted zone boot
                                           environment.
                force-zbe-clone         : Clone zone boot environment
                                           on attach.
                deny-zbe-clone          : Do not clone zone boot environment
                                           on attach.
                destroy-orphan-zbes     : Destroy all orphan zone boot
                                           environments. (not associated with
                                           any global BE)
</syntaxhighlight>
