ZFS nice commands

[[Category:ZFS]]




=Some ZFS commands I use often (on Linux)=

==zpool==

===Get zpool status===

<syntaxhighlight lang=bash>
# zpool status -P
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0h41m with 0 errors on Tue Nov 27 11:49:30 2018
config:

	NAME                                                           STATE     READ WRITE CKSUM
	rpool                                                          ONLINE       0     0     0
	  /dev/disk/by-id/ata-SanDisk_SDSSDHII960G_151740411091-part4  ONLINE       0     0     0
</syntaxhighlight>
* -P : Display real paths for vdevs instead of only the last component of the path.
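The scan: line in the output above shows the result of the last scrub. A scrub can be started by hand and watched with the same status command; a minimal sketch (pool name taken from the output above):
<syntaxhighlight lang=bash>
# zpool scrub rpool
# zpool status rpool
</syntaxhighlight>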
<syntaxhighlight lang=bash>
# zpool status -PL
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0h41m with 0 errors on Tue Nov 27 11:49:30 2018
config:

	NAME         STATE     READ WRITE CKSUM
	rpool        ONLINE       0     0     0
	  /dev/sda4  ONLINE       0     0     0

errors: No known data errors
</syntaxhighlight>
* -P : Display real paths for vdevs instead of only the last component of the path.
* -L : Display real paths for vdevs resolving all symbolic links (see the example below).
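The difference between the two outputs is just a symlink: the /dev/disk/by-id path printed by -P is a link that resolves to the plain device node printed by -PL. This can be checked directly (path taken from the status output above):
<syntaxhighlight lang=bash>
# readlink -f /dev/disk/by-id/ata-SanDisk_SDSSDHII960G_151740411091-part4
/dev/sda4
</syntaxhighlight>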

===Get zpool size===

<syntaxhighlight lang=bash>
# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool   788G   609G   179G         -    53%    77%  1.00x  ONLINE  -
</syntaxhighlight>

Ooooh... bad fragmentation! So what? It's an SSD!
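If the rounded numbers are not enough, zpool list can also print exact byte values, per-vdev usage, or a custom set of properties; a short sketch (same pool, standard zpool list flags and property names):
<syntaxhighlight lang=bash>
# zpool list -p rpool
# zpool list -v rpool
# zpool list -o name,size,allocated,free,fragmentation,capacity rpool
</syntaxhighlight>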

===Get the ashift value===

<syntaxhighlight lang=bash>
# zpool list -o name,ashift
NAME   ASHIFT
rpool       9
</syntaxhighlight>

which means 2^9 = 512, i.e. 512-byte blocks in the backend... that is uncool for SSDs.
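The same value can also be read as a pool property, or straight from the vdev configuration; a quick sketch (command forms as in OpenZFS, output omitted):
<syntaxhighlight lang=bash>
# zpool get ashift rpool
# zdb -C rpool | grep ashift
</syntaxhighlight>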

<syntaxhighlight lang=bash>
# echo $[ 2 ** 12 ]
4096

# zpool set ashift=12 rpool
</syntaxhighlight>

<syntaxhighlight lang=bash>
# zpool list -o name,ashift
NAME   ASHIFT
rpool       12
</syntaxhighlight>

which means 2^12 = 4096, i.e. 4k blocks in the backend. Perfect!
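One caveat: the ashift pool property only acts as the default for vdevs added later; vdevs that already exist keep the ashift they were created with, so data on them is still allocated in 512-byte sectors. To get 4k blocks from the start, pass the value at creation time; a minimal sketch (pool name and device are placeholders):
<syntaxhighlight lang=bash>
# zpool create -o ashift=12 tank /dev/disk/by-id/ata-EXAMPLE_DISK
</syntaxhighlight>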

==zfs==

==zdb==

===Traverse all blocks===

<syntaxhighlight lang=bash>
# zdb -b rpool

Traversing all blocks to verify nothing leaked ...

loading space map for vdev 0 of 1, metaslab 196 of 197 ...
 609G completed (4928MB/s) estimated time remaining: 0hr 00min 00sec
	No leaks (block sum matches space maps exactly)

	bp count:        32920989
	ganged count:           0
	bp logical:    760060348928      avg:  23087
	bp physical:   650570102784      avg:  19761     compression:   1.17
	bp allocated:  654308115456      avg:  19875     compression:   1.16
	bp deduped:             0    ref>1:      0   deduplication:   1.00
	SPA allocated: 654308115456     used: 77.33%

	additional, non-pointer bps of type 0:     237576
	Dittoed blocks on same vdev: 1230844
</syntaxhighlight>
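The compression factor zdb reports here (1.17) can also be read per dataset without traversing all blocks; a quick sketch using standard zfs properties:
<syntaxhighlight lang=bash>
# zfs get compressratio rpool
# zfs get -r compressratio,used,referenced rpool
</syntaxhighlight>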