ZFS cheatsheet

[[Category:ZFS|cheatsheet]]
== Links ==
* [[ZFS_Recovery|Repairing a broken ZFS]]
* ZFS Best Practices Guide http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
* ZFS FAQ at Opensolaris.org http://www.opensolaris.org/os/community/zfs/faq/
== Deleting snapshots that cannot be deleted ==
Here after an aborted ZFS send/recv:
<syntaxhighlight lang=bash>
# zfs destroy MYSQL-LOG/binlog@copy_20130403
cannot destroy 'MYSQL-LOG/binlog@copy_20130403': dataset is busy
# zfs holds -r MYSQL-LOG@copy_20130403
NAME                     TAG            TIMESTAMP
MYSQL-LOG@copy_20130403  .send-22887-0  Wed Apr  3 09:03:32 2013
# zfs release .send-22887-0 MYSQL-LOG@copy_20130403
# zfs destroy MYSQL-LOG/binlog@copy_20130403
</syntaxhighlight>
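
If a snapshot carries several holds, they can all be released in one pass. A minimal sketch, assuming the ''zfs holds'' output format shown above (tag in the second column); the snapshot name is just the one from this example:
<syntaxhighlight lang=bash>
# Release every user hold on a snapshot, then destroy it
SNAP=MYSQL-LOG/binlog@copy_20130403
zfs holds "${SNAP}" | awk 'NR>1 {print $2}' | while read -r TAG; do
  zfs release "${TAG}" "${SNAP}"
done
zfs destroy "${SNAP}"
</syntaxhighlight>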


== ZFS Tuning ==
Perceived slowness on systems with ZFS comes from its very large appetite for cache. This can be reined in.
First, take a look at what is going on:
<syntaxhighlight lang=bash>
lollypop@wirefall:~# echo "::kmastat ! grep Total" |mdb -k
Total [hat_memload]                             13508608B 309323764     0
Total [kmem_msb]                                24010752B   1509706     0
Total [kmem_va]                                660340736B    140448     0
Total [kmem_default]                           690409472B 1416078794     0
Total [kmem_io_64G]                             34619392B      8456     0
Total [kmem_io_4G]                                 16384B        92     0
Total [kmem_io_2G]                                 24576B        62     0
Total [bp_map]                                   1048576B    234488     0
Total [umem_np]                                   786432B       976     0
Total [id32]                                        4096B      2620     0
Total [zfs_file_data_buf]                      1471275008B   1326646     0
Total [segkp]                                     589824B    192886     0
Total [ip_minor_arena_sa]                             64B     13332     0
Total [ip_minor_arena_la]                            192B     45183     0
Total [spdsock]                                       64B         1     0
Total [namefs_inodes]                                 64B        24     0
lollypop@wirefall:~#  echo "::memstat" | mdb -k
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Kernel                     255013               996   24%
ZFS File Data              359196              1403   34%
Anon                       346538              1353   33%
Exec and libs               33948               132    3%
Page cache                   4836                18    0%
Free (cachelist)            22086                86    2%
Free (freelist)             23420                91    2%

Total                     1045037              4082
Physical                  1045036              4082
</syntaxhighlight>
Or ZFS only:
<syntaxhighlight lang=bash>
echo "::memstat ! egrep '(Page Summary|-----|ZFS)'"| mdb -k
</syntaxhighlight>
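
To watch the ZFS cache shrink or grow over time, a small polling loop helps. A sketch; ''mdb -k'' generally requires root:
<syntaxhighlight lang=bash>
# Print the "ZFS File Data" line of ::memstat every 10 seconds
while true; do
  echo "::memstat ! grep 'ZFS File Data'" | mdb -k
  sleep 10
done
</syntaxhighlight>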


Printing all ARC parameters:
<syntaxhighlight lang=bash>
lollypop@wirefall:~#  echo "::arc -m" | mdb -k
hits                      =  80839319
misses                    =   3717788
demand_data_hits          =   4127150
demand_data_misses        =     51589
demand_metadata_hits      =   9467792
demand_metadata_misses    =   2125852
prefetch_data_hits        =    127941
prefetch_data_misses      =    596238
prefetch_metadata_hits    =  67116436
prefetch_metadata_misses  =    944109
mru_hits                  =   2031248
mru_ghost_hits            =   1906199
mfu_hits                  =  78514880
mfu_ghost_hits            =    993236
deleted                   =    880714
recycle_miss              =   1381210
mutex_miss                =       197
evict_skip                =  38573528
evict_l2_cached           =         0
evict_l2_eligible         = 94658370048
evict_l2_ineligible       = 8946457600
hash_elements             =     79571
hash_elements_max         =     82328
hash_collisions           =   3005774
hash_chains               =     22460
hash_chain_max            =         8
p                         =        64 MB
c                         =       512 MB
c_min                     =       127 MB
c_max                     =       512 MB
size                      =       512 MB
hdr_size                  =  14825736
data_size                 = 468982784
other_size                =  53480992
l2_hits                   =         0
l2_misses                 =         0
l2_feeds                  =         0
l2_rw_clash               =         0
l2_read_bytes             =         0
l2_write_bytes            =         0
l2_writes_sent            =         0
l2_writes_done            =         0
l2_writes_error           =         0
l2_writes_hdr_miss        =         0
l2_evict_lock_retry       =         0
l2_evict_reading          =         0
l2_free_on_write          =         0
l2_abort_lowmem           =         0
l2_cksum_bad              =         0
l2_io_error               =         0
l2_size                   =         0
l2_hdr_size               =         0
memory_throttle_count     =         0
arc_no_grow               =         0
arc_tempreserve           =         0 MB
arc_meta_used             =       150 MB
arc_meta_limit            =       128 MB
arc_meta_max              =       313 MB
</syntaxhighlight>
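
The overall ARC hit rate can be computed from the first two counters. A sketch, assuming the ''::arc'' output format shown above:
<syntaxhighlight lang=bash>
# hit rate = hits / (hits + misses), taken from the ::arc counters
echo "::arc -m" | mdb -k | awk '
  /^hits /   {h=$3}
  /^misses / {m=$3}
  END        {printf "ARC hit rate: %.1f%%\n", 100*h/(h+m)}'
</syntaxhighlight>
With the numbers above this yields about 95.6%.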


You can also print all parameters that are set for ZFS with:
<syntaxhighlight lang=bash>
# echo ::zfs_params | mdb -k
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x100000000
zfs_arc_min = 0x0
arc_shrink_shift = 0x5
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
...
# echo "::arc -a" | mdb -k
hits                      =    592730
misses                    =      5095
demand_data_hits          =         0
demand_data_misses        =         0
demand_metadata_hits      =    592719
demand_metadata_misses    =      4866
prefetch_data_hits        =         0
prefetch_data_misses      =         0
...
</syntaxhighlight>


Kernel parameters can also be set online with:
<syntaxhighlight lang=bash>
# echo zfs_arc_max/Z100000000 | mdb -kw
zfs_arc_max:    <old value>             =       0x100000000
</syntaxhighlight>
This sets zfs_arc_max to 4 GB = 0x100000000 (''/Z'' writes the value as an 8-byte integer).


== Limiting the ARC Cache ==
In /etc/system, simply set:
 set zfs:zfs_arc_max = <Number of bytes>
Easy calculation:
<syntaxhighlight lang=bash>
# NUMGB=32
# printf "set zfs:zfs_arc_max = 0x%x\n" $[ ${NUMGB} * 1024 ** 3 ]
set zfs:zfs_arc_max = 0x800000000
</syntaxhighlight>


See also [http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Limiting_the_ARC_Cache Limiting the ARC Cache]
But !!!! NEVER DO THIS !!!! Never use ''mdb -kw'' to set these values!
On a '''test system''', however, you can look up the positions in the kernel with:
<syntaxhighlight lang=bash>
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
</syntaxhighlight>
Calculate, for example, 8 GB:
<syntaxhighlight lang=bash>
# printf "0x%x\n" $[ 8 * 1024 ** 3 ] 
0x200000000
</syntaxhighlight>
And raise the values like this:
 arc.c = arc.c_max
 arc.p = arc.c / 2
<syntaxhighlight lang=bash>
# mdb -kw
Loading modules: [ unix krtld genunix dtrace specfs uppc pcplusmp cpu.generic zfs mpt_sas sockfs ip hook neti dls sctp arp usba uhci fcp fctl qlc nca md lofs sata cpc fcip random crypto logindmux ptm ufs sppp nfs ipc ]
> arc_stats::print -a arcstat_p.value.ui64 arcstat_c.value.ui64 arcstat_c_max.value.ui64
fffffffffbcfaf90 arcstat_p.value.ui64 = 0x4000000
fffffffffbcfafc0 arcstat_c.value.ui64 = 0x40000000
fffffffffbcfb020 arcstat_c_max.value.ui64 = 0x40000000
> fffffffffbcfb020/Z 0x200000000
arc_stats+0x4a0:0x40000000              =      0x200000000
> fffffffffbcfafc0/Z 0x200000000
arc_stats+0x440:0x44a42480              =      0x200000000
> fffffffffbcfaf90/Z 0x100000000
arc_stats+0x410:0x4000000              =      0x100000000
</syntaxhighlight>
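
Afterwards it is worth verifying that the new values actually arrived. A sketch; this just filters the ''::arc'' output shown earlier:
<syntaxhighlight lang=bash>
# p, c and c_max should now reflect the values written above
echo "::arc -m" | mdb -k | egrep '^(p|c|c_max) '
</syntaxhighlight>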


== Displaying ZFS space usage in more detail ==
<syntaxhighlight lang=bash>
$ zfs list -o space
NAME               AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool              25.4G  7.79G         0     64K              0      7.79G
rpool/ROOT         25.4G  6.29G         0     18K              0      6.29G
rpool/ROOT/snv_98  25.4G  6.29G         0   6.29G              0          0
rpool/dump         25.4G  1.00G         0   1.00G              0          0
rpool/export       25.4G    38K         0     20K              0        18K
rpool/export/home  25.4G    18K         0     18K              0          0
rpool/swap         25.8G   512M         0    111M           401M          0
</syntaxhighlight>
If ''zfs list -o space'' is not yet available as a shortcut, this usually works:
<syntaxhighlight lang=bash>
$ zfs list -o name,avail,used,usedsnap,usedds,usedrefreserv,usedchild -t filesystem,volume
</syntaxhighlight>
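
To see which datasets lose the most space to snapshots, sort by the property behind the USEDSNAP column. A sketch; ''usedbysnapshots'' is the full property name:
<syntaxhighlight lang=bash>
# Top datasets by snapshot usage, largest first
zfs list -o name,usedbysnapshots -S usedbysnapshots -t filesystem,volume | head
</syntaxhighlight>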
== Migration UFS root -> ZFS root via Live Upgrade ==
First, create the ZFS root pool:
 # zpool create rpool /dev/dsk/<zfs-disk>
If you want to stay out of trouble, keep the name rpool.


Create the boot environment (BE) with lucreate:
 # lucreate -c ufsBE -n zfsBE -p rpool
This copies the files into the ZFS environment.

Check that bootfs has been set correctly:
<syntaxhighlight lang=bash>
# zpool get bootfs rpool
NAME   PROPERTY  VALUE             SOURCE
rpool  bootfs    rpool/ROOT/zfsBE  local
</syntaxhighlight>

Comment out any rootdev entries that may be left over in /etc/system:
<syntaxhighlight lang=bash>
# zpool export rpool
# mkdir /tmp/rpool
# zpool import -R /tmp/rpool rpool
# zfs unmount rpool
# rmdir /tmp/rpool/rpool
# zfs mount rpool/ROOT/zfsBE
# perl -pi.orig -e 's#^(rootdev.*)$#* \1#g' /tmp/rpool/etc/system
# zpool export rpool
</syntaxhighlight>


Install the ZFS boot block on the ZFS disk (SPARC):
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<zfs-disk>
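On x86 the boot loader is installed with ''installgrub'' instead. A sketch with the stock Solaris 10 stage paths; <zfs-disk-slice> is a placeholder:
 # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/<zfs-disk-slice>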
Activate the new BE:
 # luactivate zfsBE
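Before rebooting, check the BE status, and then boot with ''init 6'': ''luactivate'' warns that a plain ''reboot'' or ''halt'' skips the Live Upgrade switch scripts.
 # lustatus
 # init 6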


 
== cannot destroy 'snapshot': dataset is busy ==
<syntaxhighlight lang=bash>
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
cannot destroy 'zpool1/raiddisk0@send_1': dataset is busy
root@sun1 # zfs holds zpool1/raiddisk0@send_1
NAME                     TAG            TIMESTAMP
zpool1/raiddisk0@send_1  .send-14952-0  Mon Jun 15 15:29:09 2015
zpool1/raiddisk0@send_1  .send-16117-0  Mon Jun 15 15:29:28 2015
zpool1/raiddisk0@send_1  .send-26208-0  Tue Jun 16 10:14:47 2015
zpool1/raiddisk0@send_1  .send-8129-0   Mon Jun 15 15:26:54 2015
root@sun1 # zfs release .send-14952-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-16117-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-26208-0 zpool1/raiddisk0@send_1
root@sun1 # zfs release .send-8129-0 zpool1/raiddisk0@send_1
root@sun1 # zfs holds zpool1/raiddisk0@send_1
root@sun1 #
root@sun1 # zfs destroy zpool1/raiddisk0@send_1
root@sun1 #
</syntaxhighlight>
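
Alternatively, ''zfs destroy -d'' marks the snapshot for deferred destruction, so it is removed automatically as soon as the last hold is released:
<syntaxhighlight lang=bash>
# Defer the destroy instead of releasing each hold by hand
zfs destroy -d zpool1/raiddisk0@send_1
</syntaxhighlight>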
 
== Fragmentation ==
<syntaxhighlight lang=bash>
# zdb -mm <pool> | nawk '/fragmentation/{count++;frag+=$NF}END{printf "Overall fragmentation %.2d\n",(frag/count);}'
</syntaxhighlight>
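
On newer pools (those whose feature set includes spacemap histograms, i.e. OpenZFS-era pools), fragmentation is also exposed directly as a pool property, without walking the metaslabs via zdb:
<syntaxhighlight lang=bash>
# zpool get fragmentation <pool>
</syntaxhighlight>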
