zfs/cmd
Cameron Harr dfece78a42 Add 'zpool status -e' flag to see unhealthy vdevs
When very large pools are present, it can be laborious to determine
why a pool is degraded and/or where an unhealthy vdev is. This option
filters out vdevs that are ONLINE and have no errors, making it easier
to see where the issues are. The root vdev and parents of unhealthy
vdevs are always printed.
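
For reference, listings like the ones below can be produced with an
invocation along these lines (the pool name is taken from the test
system shown in the samples):

    # show only unhealthy vdevs and their parents
    zpool status -e iron5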

Testing:
ZFS errors and drive failures for multiple vdevs were simulated with
zinject.
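
A rough sketch of the kind of injection used (vdev names and error
types here are illustrative, not the exact test sequence):

    # inject read errors on a leaf vdev
    zinject -d L23 -e io -T read iron5
    # fault a leaf vdev outright
    zinject -d L67 -A fault iron5
    # clear all injection handlers afterwards
    zinject -c all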

Sample vdev listings with the '-e' option:
- All vdevs healthy
    NAME        STATE     READ WRITE CKSUM
    iron5       ONLINE       0     0     0

- ZFS errors
    NAME        STATE     READ WRITE CKSUM
    iron5       ONLINE       0     0     0
      raidz2-5  ONLINE       1     0     0
        L23     ONLINE       1     0     0
        L24     ONLINE       1     0     0
        L37     ONLINE       1     0     0

- Vdev faulted
    NAME        STATE     READ WRITE CKSUM
    iron5       DEGRADED     0     0     0
      raidz2-6  DEGRADED     0     0     0
        L67     FAULTED      0     0     0  too many errors

- Vdev faults and data errors
    NAME        STATE     READ WRITE CKSUM
    iron5       DEGRADED     0     0     0
      raidz2-1  DEGRADED     0     0     0
        L2      FAULTED      0     0     0  too many errors
      raidz2-5  ONLINE       1     0     0
        L23     ONLINE       1     0     0
        L24     ONLINE       1     0     0
        L37     ONLINE       1     0     0
      raidz2-6  DEGRADED     0     0     0
        L67     FAULTED      0     0     0  too many errors

- Vdev missing
    NAME        STATE     READ WRITE CKSUM
    iron5       DEGRADED     0     0     0
      raidz2-6  DEGRADED     0     0     0
        L67     UNAVAIL      3     1     0

- Slow devices when -s provided with -e
    NAME        STATE     READ WRITE CKSUM  SLOW
    iron5       DEGRADED     0     0     0     -
      raidz2-5  DEGRADED     0     0     0     -
        L10     FAULTED      0     0     0     0  external device fault
        L51     ONLINE       0     0     0    14
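
The new option composes with existing flags; the listing above is the
sort of output produced by combining it with -s, e.g.:

    zpool status -e -s iron5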

Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Cameron Harr <harr1@llnl.gov>
Closes #15769
2024-02-13 14:22:48 -08:00
arc_summary Removed Python 2 and Python 3.5- support 2023-04-13 15:59:45 -07:00
arcstat Removed Python 2 and Python 3.5- support 2023-04-13 15:59:45 -07:00
dbufstat Removed Python 2 and Python 3.5- support 2023-04-13 15:59:45 -07:00
fsck_zfs Turn shellcheck into a normal make target. Fix new files it caught 2021-06-09 13:05:34 -07:00
mount_zfs `mount.zfs -o zfsutil` leverages `zfs_mount_at()` 2022-02-16 17:58:55 -08:00
raidz_test Removed duplicated includes 2021-03-22 12:34:58 -07:00
vdev_id Remove basename(1). Clean up/shorten some coreutils pipelines 2022-02-16 17:58:55 -08:00
zdb Report ashift of L2ARC devices in zdb 2023-12-21 11:16:30 -08:00
zed zed: fix typo in variable ZED_POWER_OFF_ENCLO*US*RE_SLOT_ON_FAULT 2024-02-13 14:22:48 -08:00
zfs Fix "Add colored output to zfs list" 2023-04-13 09:39:53 -07:00
zfs_ids_to_path zfs_ids_to_path: print correct wrong values 2021-04-14 13:19:50 -07:00
zgenhostid zgenhostid: use argument path directly 2021-06-08 14:47:05 -07:00
zhack cppcheck: integrate cppcheck 2021-01-26 16:12:26 -08:00
zinject cppcheck: integrate cppcheck 2021-01-26 16:12:26 -08:00
zpool Add 'zpool status -e' flag to see unhealthy vdevs 2024-02-13 14:22:48 -08:00
zpool_influxdb Use fallthrough macro 2021-11-02 09:50:30 -07:00
zstream Fix unreachable code in zstreamdump 2022-12-01 12:39:41 -08:00
ztest zed: mark disks as REMOVED when they are removed 2023-03-27 11:32:09 -07:00
zvol_id Use substantially more robust program exit status logic in zvol_id 2021-09-14 12:23:38 -07:00
zvol_wait zvol_wait logic may terminate prematurely 2022-11-01 12:35:36 -07:00
Makefile.am Turn shellcheck into a normal make target. Fix new files it caught 2021-06-09 13:05:34 -07:00