Solaris 11 Zones


==Zones==

A <i>pkg update</i> in the global zone also updates the zones attached to it as linked images and creates a new zone BE for each of them:
<source lang=bash>
# pkg update --be-name solaris_11.3.3_sc_4.2.5.1-1 -v --accept
..
Planning linked: 0/1 done; 1 working: zone:zone01
Linked image 'zone:zone01' output:
...
# zlogin zone01 beadm list | tail -1
solaris-7 !RO   -          18.09M  static 2015-12-21 17:57
# clrs disable zone01-rs
</source>
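The new zone BE <i>solaris-7</i> created by the linked-image update is listed last and is not the currently active one (no <i>N</i> flag). A minimal sketch for picking up that name for the later <i>attach -z</i>, assuming the newest zone BE is the one the update just created:
<source lang=bash>
# beadm list -H prints parseable output; fields are ";"-separated and
# the first field is the BE name (assumption: the newest BE sorts last).
ZBE=$(zlogin zone01 beadm list -H | tail -1 | cut -d';' -f1)
echo ${ZBE}   # -> solaris-7
</source>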

Switch over to the already patched node and attach the zone there, using the zone BE created by the update, without cloning it:

<source lang=bash>
# zoneadm -z zone01 attach -x deny-zbe-clone -z solaris-7
# clrs enable zone01-rs
</source>
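After enabling the resource, the cluster boots the zone. A quick sanity check (a sketch, using the resource and zone names from above):
<source lang=bash>
# clrs status zone01-rs          # the resource should report Online
# zlogin zone01 beadm list       # solaris-7 should now carry the N flag
</source>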
The brand-specific options of <i>attach</i> are documented by /usr/lib/brand/solaris/attach:
<source lang=bash>
Usage:
         attach [-uv] [-a archive | -d directory | -z zbe]
                [-c profile.xml | dir] [-x attach-last-booted-zbe|
                        force-zbe-clone|deny-zbe-clone|destroy-orphan-zbes]

        -u      Update the software in the attached zone boot environment to
                match the software in the global zone boot environment.
        -v      Verbose.
        -c      Update the zone configuration with the sysconfig profile
                specified in the given file or directory.
        -a      Extract the specified archive into the zone then attach the
                active boot environment found in the archive.  The archive
                may be a zfs, cpio, or tar archive.  It may be compressed with
                gzip or bzip2.
        -d      Copy the specified directory into a new zone boot environment
                then attach the zone boot environment.
        -z      Attach the specified zone boot environment.
        -x      attach-last-booted-zbe  : Attach the last booted zone boot
                                           environment.
                force-zbe-clone         : Clone zone boot environment
                                           on attach.
                deny-zbe-clone          : Do not clone zone boot environment
                                           on attach.
                destroy-orphan-zbes     : Destroy all orphan zone boot
                                           environments. (not associated with
                                           any global BE)
</source>
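Repeated updates can leave zone BEs behind that belong to no global BE anymore. These orphans can be destroyed on the next attach; a sketch, assuming the zone is currently detached:
<source lang=bash>
# zoneadm -z zone01 attach -x destroy-orphan-zbes
</source>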

==zoneclone.sh==

<source lang=bash>
#!/bin/bash
SRC_ZONE=$1
DST_ZONE=$2
DST_DIR=$3
DST_DATASET=$4

if [ $# -lt 3 ] ; then
  echo "Not enough arguments!"
  echo "Usage: $0 <src_zone> <dst_zone> <dst_dir> [dst_dataset]"
  exit 1
fi

# The destination zone must not be configured yet.
zonecfg -z ${DST_ZONE} info >/dev/null 2>&1 && {
  echo "Destination zone exists!"
  exit 1
}

zonecfg -z ${SRC_ZONE} info >/dev/null 2>&1 || {
  echo "Source zone does not exist!"
  exit 1
}

# A zone can only be cloned while it is in the "installed" state.
SRC_ZONE_STATUS="$(zoneadm list -cs | nawk -v zone=${SRC_ZONE} '$1==zone {print $2;}')"
if [ "_${SRC_ZONE_STATUS}_" != "_installed_" ] ; then
  echo "Zone ${SRC_ZONE} must be in the status \"installed\" and not \"${SRC_ZONE_STATUS}\"!"
  exit 1
fi

# Optionally create a dedicated dataset for the new zonepath.
if [ -n "${DST_DATASET}" ] ; then
  if [ -d "${DST_DIR}" ] ; then
    rmdir "${DST_DIR}" || {
      echo "${DST_DIR} must be empty!"
      exit 1
    }
  fi
  # Is the parent dataset there?
  zfs list -Ho name "${DST_DATASET%/*}" >/dev/null 2>&1 || {
    echo "Parent of destination dataset does not exist!"
    exit 1
  }
  zfs create -o mountpoint="${DST_DIR}" "${DST_DATASET}" || exit 1
fi

[ -d "${DST_DIR}" ] || {
  echo "Destination dir must exist!"
  exit 1
}

# Export the source zone's configuration, rewrite the zonepath and
# feed the result into zonecfg for the destination zone.
zonecfg -z ${SRC_ZONE} export \
 | nawk -v zonepath=${DST_DIR} '
  BEGIN {
    FS="=";
    OFS="=";
  }
  /set zonepath/{$2=zonepath}
  { print; }
  ' \
 | zonecfg -z ${DST_ZONE} -f -

zoneadm -z ${DST_ZONE} clone ${SRC_ZONE}
</source>
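A usage example for the script; all names are made-up placeholders:
<source lang=bash>
# ./zoneclone.sh zone01 zone02 /zones/zone02 rpool/zones/zone02
</source>
The optional fourth argument makes the script create a dedicated ZFS dataset mounted at the new zonepath before cloning.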

==Way that works with Solaris Cluster and immutable zones==

The problem was that some update steps (indexing man pages etc.) could not be done in the immutable zone after the Solaris update, so one <i>boot -w</i> (a writable boot) is necessary before the zone can come up in the cluster.
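Whether a zone is immutable can be checked via its <i>file-mac-profile</i> property (a quick check; the profile value shown is just an example):
<source lang=bash>
# zonecfg -z zone01 info file-mac-profile
file-mac-profile: fixed-configuration
</source>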

===Move all RGs from node first===

<source lang=bash>
# clrg evacuate -n $(hostname) +
</source>
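Before updating, it is worth verifying that no resource groups are left on this node (a sketch with the standard status commands):
<source lang=bash>
# clrg status
# clrs status
</source>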

===Update Solaris===

<source lang=bash>
# pkg update --be-name solaris_11.3.3_sc_4.2.5.1-1 -v --accept
# init 6
</source>
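After the reboot, the node should be running on the new BE and have rejoined the cluster (a quick check):
<source lang=bash>
# beadm list      # solaris_11.3.3_sc_4.2.5.1-1 should be flagged NR
# clnode status   # this node should be Online again
</source>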

===Disable zone on other node and move to self===

But leave the HAStoragePlus resource online, so the zonepath stays mounted:

<source lang=bash>
# clrs disable zone01-zone-rs
# clrg switch -n $(hostname) zone01-rg
</source>
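The zone resource is now disabled everywhere, but the resource group with the HAStoragePlus resource should be online on this node (a check sketch):
<source lang=bash>
# clrg status zone01-rg
# clrs status -g zone01-rg
</source>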

===Attach, boot -w, detach without cluster===

<source lang=bash>
# zoneadm -z zone01 attach -u
# zoneadm -z zone01 boot -w
# zlogin zone01 svcs -xv        # <- wait for services to start
# zoneadm -z zone01 halt
# zoneadm -z zone01 detach
</source>
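<i>svcs -xv</i> only lists services that are not running cleanly, so empty output means the writable boot has finished its work. A small wait-loop sketch:
<source lang=bash>
# Loop until zlogin succeeds and svcs -xv reports nothing.
until out="$(zlogin zone01 svcs -xv 2>/dev/null)" && [ -z "${out}" ] ; do
  sleep 10
done
</source>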

===Enable zone in cluster===

<source lang=bash>
# clrs enable zone01-zone-rs
</source>
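The zone should now come online again under cluster control (final check, a sketch):
<source lang=bash>
# clrs status zone01-zone-rs   # should report Online
# zoneadm list -cv             # zone01 should be running on this node
</source>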