zfs/cmd
Cameron Harr 0823388752
Add 'zpool status -e' flag to see unhealthy vdevs
On very large pools, it can be laborious to determine why a pool
is degraded or where an unhealthy vdev is. This option filters out
vdevs that are ONLINE and have no errors, making it easier to see
where the issues are. The root vdev and the parents of unhealthy
vdevs are always printed.
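
A minimal usage sketch (pool name 'iron5' as in the samples below):

    # show only vdevs that are not healthy, plus the root and parents
    zpool status -e iron5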

Testing:
ZFS errors and drive failures for multiple vdevs were simulated with
zinject.
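
For reference, this kind of injection can be reproduced with commands
along these lines (device and pool names are illustrative, taken from
the samples below):

    # inject read I/O errors on a leaf vdev of pool 'iron5'
    zinject -d L23 -e io -T read iron5
    # fault a leaf vdev outright
    zinject -d L67 -A fault iron5
    # clear all injection handlers when done
    zinject -c all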

Sample vdev listings with the '-e' option:
- All vdevs healthy
    NAME        STATE     READ WRITE CKSUM
    iron5       ONLINE       0     0     0

- ZFS errors
    NAME        STATE     READ WRITE CKSUM
    iron5       ONLINE       0     0     0
      raidz2-5  ONLINE       1     0     0
        L23     ONLINE       1     0     0
        L24     ONLINE       1     0     0
        L37     ONLINE       1     0     0

- Vdev faulted
    NAME        STATE     READ WRITE CKSUM
    iron5       DEGRADED     0     0     0
      raidz2-6  DEGRADED     0     0     0
        L67     FAULTED      0     0     0  too many errors

- Vdev faults and data errors
    NAME        STATE     READ WRITE CKSUM
    iron5       DEGRADED     0     0     0
      raidz2-1  DEGRADED     0     0     0
        L2      FAULTED      0     0     0  too many errors
      raidz2-5  ONLINE       1     0     0
        L23     ONLINE       1     0     0
        L24     ONLINE       1     0     0
        L37     ONLINE       1     0     0
      raidz2-6  DEGRADED     0     0     0
        L67     FAULTED      0     0     0  too many errors

- Vdev missing
    NAME        STATE     READ WRITE CKSUM
    iron5       DEGRADED     0     0     0
      raidz2-6  DEGRADED     0     0     0
        L67     UNAVAIL      3     1     0

- Slow devices when '-s' is provided with '-e' (invocation shown below)
    NAME        STATE     READ WRITE CKSUM  SLOW
    iron5       DEGRADED     0     0     0     -
      raidz2-5  DEGRADED     0     0     0     -
        L10     FAULTED      0     0     0     0  external device fault
        L51     ONLINE       0     0     0    14
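
For the slow-device listing above, '-e' is combined with the existing
'-s' flag, which adds the SLOW (slow I/O count) column:

    # filter to unhealthy vdevs and include slow-I/O counts
    zpool status -es iron5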

Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Cameron Harr <harr1@llnl.gov>
Closes #15769
2024-02-07 09:12:12 -08:00
raidz_test RAID-Z expansion feature 2023-11-08 10:19:41 -08:00
zdb libzdb: Initial breakout of libzdb 2024-02-05 10:00:41 -08:00
zed Add Gotify notification support to ZED 2024-01-09 09:49:30 -08:00
zfs fix(mount): do not truncate shares not zfs mount 2024-01-12 12:05:11 -08:00
zinject Fix unsafe string operations 2022-09-27 16:47:24 -07:00
zpool Add 'zpool status -e' flag to see unhealthy vdevs 2024-02-07 09:12:12 -08:00
zpool_influxdb Do not report bytes skipped by scan as issued. 2023-06-30 08:47:13 -07:00
zstream Reject streams that set ->drr_payloadlen to unreasonably large values 2023-01-23 13:16:22 -08:00
Makefile.am Add zilstat script to report zil kstats in a user friendly manner 2022-09-02 13:24:07 -07:00
arc_summary "ARC prefetch metadata accesses:" appears twice in the output. 2023-10-23 13:41:29 -07:00
arcstat.in Update arc_summary and arcstat outputs 2023-01-05 09:29:13 -08:00
dbufstat.in Consider `dnode_t` allocations in dbuf cache size accounting 2023-11-17 13:25:53 -08:00
fsck.zfs.in cmd: move single-file binaries up, extract udev programs to udev/ 2022-05-10 10:20:34 -07:00
mount_zfs.c nvpair: Constify string functions 2023-03-14 15:25:50 -07:00
zfs_ids_to_path.c Replace dead opensolaris.org license link 2022-07-11 14:16:13 -07:00
zgenhostid.c Replace dead opensolaris.org license link 2022-07-11 14:16:13 -07:00
zhack.c Allow zhack label repair to restore detached devices. 2023-05-03 09:03:57 -07:00
zilstat.in zil: Add some more statistics. 2023-05-25 13:51:53 -07:00
ztest.c RAID-Z expansion feature 2023-11-08 10:19:41 -08:00
zvol_wait zvol_wait logic may terminate prematurely 2022-10-11 12:12:04 -07:00