ZFS Recovery
Revision as of 15:22, 25 November 2021
==Panic at boot time==
See SunAlert 233602: Solaris 10 Assertion Failure in ZFS May Cause a System Panic:
The best recovery for this is to do the following:
# Set the following in /etc/system: set zfs:zfs_recover=1 and set aok=1
# Import the pool using 'zpool import'
# Run a full scrub on the pool using 'zpool scrub'
# Use 'zdb -d' and make sure that there is no on-disk corruption reported
# Once the pool is back in a clean state, comment out or remove the added entries in /etc/system.
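The /etc/system entries from step 1 would look like this (a sketch; comment lines in /etc/system start with <code>*</code>, and both settings should be removed again once the pool is clean):

```
* temporary ZFS recovery settings -- remove after the pool is clean
set zfs:zfs_recover=1
set aok=1
```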
==Rolling back to an earlier uberblock==
<syntaxhighlight lang="bash">
# zpool import defect_pool
cannot import 'defect_pool': I/O error
Destroy and re-create the pool from a backup source.
</syntaxhighlight>
Under /etc/zfs:
<syntaxhighlight lang="bash">
# cd /etc/zfs
# strings zpool.cache | nawk '/c[0-9]+t/'
... /dev/dsk/c7t0d0s0 ...
# zdb -l /dev/dsk/c7t0d0s0 | nawk '$1=="name:"{print;exit;}'
name: 'defect_pool'
</syntaxhighlight>
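The <code>$1=="name:"</code> filter above can be tried without a real device; a minimal sketch with a made-up label dump (using plain awk, which is what nawk is on Solaris):

```shell
# Stand-in for: zdb -l /dev/dsk/c7t0d0s0 -- the label text is illustrative.
zdb_label() {
cat <<'EOF'
    version: 22
    name: 'defect_pool'
    state: 0
EOF
}
# Print only the pool-name line and stop at the first match.
zdb_label | awk '$1=="name:" {print; exit}'
```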
For a ZPool in a Solaris Cluster:
<syntaxhighlight lang="bash">
# cd /var/cluster/run/HAStoragePlus/zfs/
# strings defect_pool.cachefile | nawk '/c[0-9]+t/'
0/dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0
0/dev/dsk/c8t600A0B80006E10E40000D47D4E51CF9Ed0s0
</syntaxhighlight>
or
<syntaxhighlight lang="bash">
# zpool import -o readonly=on -c defect_pool.cachefile
</syntaxhighlight>
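The <code>/c[0-9]+t/</code> pattern simply keeps the lines that look like cXtY device paths; a runnable sketch on made-up <code>strings</code> output:

```shell
# Stand-in for: strings defect_pool.cachefile -- sample lines only.
cache_strings() {
cat <<'EOF'
version
defect_pool
/dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0
hostid
EOF
}
# Keep only lines containing a cXtY-style device name.
cache_strings | awk '/c[0-9]+t/'
```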
<syntaxhighlight lang="bash">
# zdb -lu /dev/dsk/c8t600A0B80006E103C000008164E51CDD2d0s0 | nawk '/txg =/{txg=$NF}/timestamp =/{printf "txg %d\t%s\n",txg,$0}' | sort -n -k 2n,2n | uniq | tail -10
txg 40353851 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353852 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353853 timestamp = 1352184849 UTC = Tue Nov 6 07:54:09 2012
txg 40353870 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353871 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353872 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353873 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353874 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353875 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
txg 40353879 timestamp = 1352185334 UTC = Tue Nov 6 08:02:14 2012
# zpool import -o readonly=on -T <txg> defect_pool
</syntaxhighlight>
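The awk part of the pipeline remembers each <code>txg</code> line and emits it together with the <code>timestamp</code> line that follows; a sketch on a two-uberblock sample (values copied from the listing above, not from a live device):

```shell
# Stand-in for: zdb -lu <device> -- two sample uberblock entries.
zdb_uberblocks() {
cat <<'EOF'
        txg = 40353851
        timestamp = 1352184849 UTC = Tue Nov  6 07:54:09 2012
        txg = 40353852
        timestamp = 1352184849 UTC = Tue Nov  6 07:54:09 2012
EOF
}
# Remember the last txg seen, pair it with each timestamp line,
# then sort numerically by the txg field and drop duplicates.
zdb_uberblocks \
  | awk '/txg =/{txg=$NF} /timestamp =/{printf "txg %d\t%s\n", txg, $0}' \
  | sort -n -k 2,2 | uniq
```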
So, for example, for Tue Nov 6 07:54:09 2012 -> txg 40353853:
<syntaxhighlight lang="bash">
# zpool import -T 40353853 defect_pool
Pool defect_pool returned to its state as of Tue Nov 06 07:32:33 2012.
Discarded approximately 22 minutes of transactions.
</syntaxhighlight>
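For orientation, the "approximately 22 minutes" is just the gap between the rolled-back state's timestamp and the newer one; a sketch in epoch seconds (only 1352184849 appears in the zdb output above, the 07:32:33 epoch value is inferred here for illustration):

```shell
# Epoch seconds; 07:54:09 comes from the zdb -lu listing above,
# the 07:32:33 value is an assumed/illustrative counterpart.
newest=1352184849   # Tue Nov 6 07:54:09 2012
state=1352183553    # Tue Nov 6 07:32:33 2012 (assumed epoch value)
echo "$(( newest - state )) seconds discarded"   # 1296 s, roughly 22 minutes
```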
==PANIC, NOTICE: spa_import_rootpool: error 19==
The solution is to specify the pool and the device explicitly. So if the following appears at boot:
<syntaxhighlight lang="bash">
NOTICE: spa_import_rootpool: error 19
Cannot mount root on /pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a
panic[cpu0]/thread=fffffffffbc28820: vfs_mountroot: cannot mount root
</syntaxhighlight>
booting into failsafe mode and editing /a/rpool/boot/grub/menu.lst helps, or entering the parameters on the GRUB command line:
<syntaxhighlight lang="bash">
title s10x_u8wos_08a
findroot (s10x_u8wos_08a,0,a)
bootfs rpool/ROOT/s10x_u8wos_08a
kernel$ /platform/i86pc/multiboot -B zfs-bootfs=rpool/ROOT/s10x_u8wos_08a,bootpath="/pci@0,0/pci8086,340a@3/pci1000,3150@0/sd@1,0:a"
module /platform/i86pc/boot_archive
</syntaxhighlight>