I have tried exporting and re-importing the pool, to no avail; attempting the import gives me this:
- root@kyou:/home/matt# zpool import -fFX -d /dev/disk/by-id/
- pool: storage
- id: 15855792916570596778
- state: UNAVAIL
- status: One or more devices contains corrupted data.
- action: The pool cannot be imported due to damaged devices or data.
- see: http://zfsonlinux.org/msg/ZFS-8000-5E
- config:
- storage                                     UNAVAIL  insufficient replicas
-   raidz1-0                                  UNAVAIL  insufficient replicas
-     ata-SAMSUNG_HD103SJ_S246J90B134910      UNAVAIL
-     ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523 UNAVAIL
-     ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969 UNAVAIL
The symlinks in /dev/disk/by-id are also present:
- root@kyou:/home/matt# ls -l /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910* /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51*
- lrwxrwxrwx 1 root root 9 May 27 19:31 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910 -> ../../sdb
- lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1 -> ../../sdb1
- lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part9 -> ../../sdb9
- lrwxrwxrwx 1 root root 9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523 -> ../../sdd
- lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1 -> ../../sdd1
- lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part9 -> ../../sdd9
- lrwxrwxrwx 1 root root 9 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969 -> ../../sde
- lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1 -> ../../sde1
- lrwxrwxrwx 1 root root 10 May 27 19:15 /dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part9 -> ../../sde9
Checking the various /dev/sd* devices listed, they look correct (the 3 1TB drives that make up the raidz array).
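A quick way to sanity-check what the kernel sees on those devices (a sketch; lsblk ships with Ubuntu, and the device names are the ones the symlinks above point at):

- # list name, size, model and serial for the three suspect drives
- lsblk -o NAME,SIZE,MODEL,SERIAL /dev/sdb /dev/sdd /dev/sde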
I ran zdb -l on each drive, dumped the output to a file, and diffed the files. The only differences among the 3 are the guid fields (which I assume is expected). All 3 labels on each drive are essentially identical and look like the following (a sketch of the dump-and-diff loop appears after the label):
- version: 5000
- name: 'storage'
- state: 0
- txg: 4
- pool_guid: 15855792916570596778
- hostname: 'kyou'
- top_guid: 1683909657511667860
- guid: 8815283814047599968
- vdev_children: 1
- vdev_tree:
-     type: 'raidz'
-     id: 0
-     guid: 1683909657511667860
-     nparity: 1
-     metaslab_array: 33
-     metaslab_shift: 34
-     ashift: 9
-     asize: 3000569954304
-     is_log: 0
-     create_txg: 4
-     children[0]:
-         type: 'disk'
-         id: 0
-         guid: 8815283814047599968
-         path: '/dev/disk/by-id/ata-SAMSUNG_HD103SJ_S246J90B134910-part1'
-         whole_disk: 1
-         create_txg: 4
-     children[1]:
-         type: 'disk'
-         id: 1
-         guid: 18036424618735999728
-         path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51422523-part1'
-         whole_disk: 1
-         create_txg: 4
-     children[2]:
-         type: 'disk'
-         id: 2
-         guid: 10307555127976192266
-         path: '/dev/disk/by-id/ata-WDC_WD10EARS-00Y5B1_WD-WMAV51535969-part1'
-         whole_disk: 1
-         create_txg: 4
- features_for_read:
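For reference, the dump-and-diff step can be scripted roughly like this (a sketch; the loop, the temp-file names, and the use of the -part1 device nodes are my own illustration, not the original commands):

- # dump the ZFS labels of each member partition to a file
- for d in sdb sdd sde; do zdb -l /dev/${d}1 > /tmp/label-$d.txt; done
- # compare them pairwise; only the guid lines should differ
- diff /tmp/label-sdb.txt /tmp/label-sdd.txt
- diff /tmp/label-sdb.txt /tmp/label-sde.txt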
Stupidly, I do not have a recent backup of this pool. However, the pool was fine before the reboot, and Linux sees the disks fine (I have smartctl running now to double-check).
So, to summarize:
> I upgraded Ubuntu and lost access to one of my two zpools.
> The difference between the pools: one is a JBOD, the other is zraid.
> All the drives in the unmountable zpool are marked UNAVAIL, with no notes about corrupted data.
> Both pools were created with disks referenced from /dev/disk/by-id/.
> The symlinks from /dev/disk/by-id to the various /dev/sd devices appear to be correct.
> zdb can read the labels from the drives.
> An export/import of the pool has already been attempted, and it fails to import again.
Is there some sort of black magic I can invoke via zpool/zfs to bring these disks back into a sensible array? Can I run zpool create zraid … without losing my data? Is my data gone either way?
One clue: a plain forced import fails with a different error:
- root@kyou:/home/matt# zpool import -f storage
- cannot import 'storage': one or more devices are already in use
This thread describes the same symptom, with mdraid grabbing the partitions before ZFS could:
https://groups.google.com/a/zfsonlinux.org/forum/#!topic/zfs-discuss/VVEwd1VFDmc
"It was using the same partitions and was adding them to mdraid during any boot before ZFS was loaded."
I remembered seeing some mdadm lines in dmesg, and sure enough:
- root@kyou:/home/matt# cat /proc/mdstat
- Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
- md126 : active raid5 sdd[2] sdb[0] sde[1]
- 1953524992 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
These drives had once been part of a software RAID5 array. For some reason, during the upgrade, the system rescanned the drives, found that they used to belong to an md array, and decided to recreate it. This was confirmed with:
- root@kyou:/storage# mdadm --examine /dev/sd[a-z]
All three drives showed a pile of information. Next, stopping the array:
- root@kyou:/home/matt# mdadm --stop /dev/md126
- mdadm: stopped /dev/md126
And re-running the import:
- root@kyou:/home/matt# zpool import -f storage
That brought the array back online.
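At this point it is worth confirming the pool's health before anything else (a sketch; these are standard ZFS commands rather than ones from the original post):

- # all three members should show ONLINE under raidz1-0
- zpool status storage
- # optionally walk every block to flush out latent damage
- zpool scrub storage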
Now I am making snapshots of the pool for a backup, and running mdadm --zero-superblock on the drives.
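A rough outline of that cleanup (a sketch: the snapshot name and backup path are illustrative, and the initramfs refresh is my assumption about keeping mdadm from reassembling the array on the next boot):

- # snapshot everything and send a copy off the pool first
- zfs snapshot -r storage@pre-cleanup
- zfs send -R storage@pre-cleanup > /backup/storage-pre-cleanup.zfs
- # with md126 stopped, wipe the stale md metadata from each member
- mdadm --zero-superblock /dev/sdb /dev/sdd /dev/sde
- # rebuild the initramfs so the old array is not assembled at boot (assumption)
- update-initramfs -u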