zfs/cmd/zpool
LOLi a8fa31b50b Fix 'zpool add' handling of nested interior VDEVs
When replacing a faulted device that was previously handled by a spare,
multiple levels of nested interior VDEVs will be present in the pool
configuration; the following example illustrates one of the possible
situations:

   NAME                          STATE     READ WRITE CKSUM
   testpool                      DEGRADED     0     0     0
     raidz1-0                    DEGRADED     0     0     0
       spare-0                   DEGRADED     0     0     0
         replacing-0             DEGRADED     0     0     0
           /var/tmp/fault-dev    UNAVAIL      0     0     0  cannot open
           /var/tmp/replace-dev  ONLINE       0     0     0
         /var/tmp/spare-dev1     ONLINE       0     0     0
       /var/tmp/safe-dev         ONLINE       0     0     0
   spares
     /var/tmp/spare-dev1         INUSE     currently in use

This is safe and allowed, but get_replication() needs to handle this
situation gracefully so that 'zpool add' can still add new devices to
the pool.
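
As a rough illustration of the kind of traversal involved (a minimal
sketch, not the actual zpool_vdev.c change; the leaf_vdev_type() helper
is hypothetical): to classify a raidz or mirror child correctly, the
code has to look through any interior "spare" and "replacing" vdevs in
the nvlist config until it reaches a leaf vdev.

    #include <string.h>
    #include <libnvpair.h>
    #include <sys/fs/zfs.h>

    /*
     * Descend through nested interior "spare"/"replacing" vdevs (as in
     * the status output above) and return the type string of the first
     * leaf vdev underneath, or NULL if the config looks malformed.
     */
    static char *
    leaf_vdev_type(nvlist_t *nv)
    {
        char *type;
        nvlist_t **child;
        uint_t children;

        if (nvlist_lookup_string(nv, ZPOOL_CONFIG_TYPE, &type) != 0)
            return (NULL);

        while (strcmp(type, VDEV_TYPE_SPARE) == 0 ||
            strcmp(type, VDEV_TYPE_REPLACING) == 0) {
            /* Interior vdev: its first child leads toward the leaf. */
            if (nvlist_lookup_nvlist_array(nv, ZPOOL_CONFIG_CHILDREN,
                &child, &children) != 0 || children == 0)
                return (NULL);
            nv = child[0];
            if (nvlist_lookup_string(nv, ZPOOL_CONFIG_TYPE, &type) != 0)
                return (NULL);
        }

        return (type);
    }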

Reviewed-by: George Melikov <mail@gmelikov.ru>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: loli10K <ezomori.nozomu@gmail.com>
Closes #6678
Closes #6996
2018-01-30 10:27:31 -06:00
zpool.d        zpool iostat/status -c improvements                 2017-06-05 10:52:15 -07:00
.gitignore     Add .gitignore files to exclude build products      2010-01-08 11:35:17 -08:00
Makefile.am    zpool iostat/status -c improvements                 2017-06-05 10:52:15 -07:00
zpool_iter.c   Restrict zpool iostat/status -c to search path      2017-07-24 11:53:59 -07:00
zpool_main.c   Fix column alignment with long zpool names          2017-12-04 17:21:38 -08:00
zpool_util.c   codebase style improvements for OpenZFS 6459 port   2017-01-22 13:25:40 -08:00
zpool_util.h   zpool iostat/status -c improvements                 2017-06-05 10:52:15 -07:00
zpool_vdev.c   Fix 'zpool add' handling of nested interior VDEVs   2018-01-30 10:27:31 -06:00