677cdf7865
Within the sas_handler() function, userspace commands such as '/usr/sbin/multipath' have been replaced with device details sourced directly from sysfs, which removes a significant amount of overhead and processing time. Multiple JBOD enclosures and their ordering are sourced from the bsg driver (/sys/class/enclosure) to isolate the chassis top-level expanders, which are then dynamically indexed based on the host channel of the multipath subordinate disk member device being processed (a rough sketch of this sysfs enumeration follows the commit message). Additionally, a "mixed" mode was added for slot identification, covering environments where a ZFS server contains SAS disk slots with no expander (direct connect to the HBA) while an attached external JBOD with an expander uses a different slot identification method.

How Has This Been Tested?
~~~~~~~~~~~~~~~~~~~~~~~~~
Testing was performed on an AMD EPYC based dual-server high-availability multipath environment with multiple HBAs per ZFS server and four SAS JBODs. The two primary JBODs were multipath/cross-connected between the two ZFS-HA servers. The secondary JBODs were daisy-chained off of the primary JBODs using aligned SAS expander channels (JBOD-0 expanderA ---> JBOD-1 expanderA, JBOD-0 expanderB ---> JBOD-1 expanderB, etc.). Pools were created, exported, and re-imported, including a global import with 'zpool import -a -d /dev/disk/by-vdev'. Low-level udev debug output was traced to isolate and resolve errors.

Result:
~~~~~~~
Initial testing of a previous version of this change showed how reliance on userspace utilities such as '/usr/sbin/multipath' and '/usr/bin/lsscsi' was exacerbated by increasing numbers of disks and JBODs. With four 60-disk SAS JBODs (240 disks), processing a udevadm trigger took 3 minutes 30 seconds, during which nearly all CPU cores were above 80% utilization. By switching from userspace utilities to sysfs in this version, the udevadm trigger processing time was reduced to 12.2 seconds with negligible CPU load.

This patch also fixes a few shellcheck complaints.

Reviewed-by: Gabriel A. Devenyi <gdevenyi@gmail.com>
Reviewed-by: Tony Hutter <hutter2@llnl.gov>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Co-authored-by: Jeff Johnson <jeff.johnson@aeoncomputing.com>
Signed-off-by: Jeff Johnson <jeff.johnson@aeoncomputing.com>
Signed-off-by: Arshad Hussain <arshad.hussain@aeoncomputing.com>
Closes #11526
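As a rough illustration of the sysfs-based approach the commit describes, the sketch below walks /sys/class/enclosure and reports per-slot disk occupancy. It is not the patch's implementation (the patch modifies the vdev_id shell script); the attribute layout assumed here ('components', per-slot 'slot' and 'device' entries) is typical of the SES enclosure class but may differ across kernels and drivers, so verify it on a given system.

```python
#!/usr/bin/env python3
"""Sketch: enumerate SAS enclosures and slot occupancy from sysfs,
instead of shelling out to multipath/lsscsi. Paths and attribute
names are assumptions to verify per kernel/driver."""
from pathlib import Path

# Enclosure class devices (per the commit, populated via the bsg/ses drivers).
ENCLOSURE_ROOT = Path("/sys/class/enclosure")


def read(path: Path) -> str:
    """Read a sysfs attribute, returning an empty string if unavailable."""
    try:
        return path.read_text().strip()
    except OSError:
        return ""


def main() -> None:
    if not ENCLOSURE_ROOT.is_dir():
        print("no enclosure class devices found")
        return

    for encl in sorted(ENCLOSURE_ROOT.iterdir()):
        # Directory name is the H:C:T:L of the enclosure services device;
        # the host/channel portion is what allows grouping and ordering of
        # top-level expanders per HBA channel.
        components = read(encl / "components")
        print(f"enclosure {encl.name}: {components} components")

        # Each slot is a subdirectory carrying a 'slot' attribute; a
        # 'device' symlink exists only when a disk occupies the slot.
        for comp in sorted(p for p in encl.iterdir() if (p / "slot").is_file()):
            slot = read(comp / "slot")
            block = comp / "device" / "block"
            disks = [d.name for d in block.iterdir()] if block.is_dir() else []
            print(f"  slot {slot}: {', '.join(disks) or '(empty)'}")


if __name__ == "__main__":
    main()
```

Run on a host with SES-capable JBODs, this should print one block per enclosure with its slot-to-disk mapping; on a host with only direct-attached (expanderless) slots it prints nothing, which is the case the commit's "mixed" mode is meant to handle alongside expander-attached JBODs.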
README.md
OpenZFS is an advanced file system and volume manager which was originally developed for Solaris and is now maintained by the OpenZFS community. This repository contains the code for running OpenZFS on Linux and FreeBSD.
Official Resources
- Documentation - for using and developing this repo
- ZoL Site - Linux release info & links
- Mailing lists
- OpenZFS site - for conference videos and info on other platforms (illumos, OSX, Windows, etc)
Installation
Full documentation for installing OpenZFS on your favorite operating system can be found at the Getting Started Page.
Contribute & Develop
We have a separate document with contribution guidelines.
We have a Code of Conduct.
Release
OpenZFS is released under a CDDL license.
For more details see the NOTICE, LICENSE and COPYRIGHT files; UCRL-CODE-235197
Supported Kernels
- The META file contains the officially recognized supported Linux kernel versions.
- Supported FreeBSD versions are 12-STABLE and 13-CURRENT.