To simplify creation and management of test configurations the
dragon and x4550 configs have been integrated with udev. Our
current best guess as to how we'll actually manage the disks in
these systems is with a udev mapping scheme. The current leading
scheme is to map each drive to a simple <CHANNEL><RANK> id. In
this mapping each CHANNEL is represented by the letters a-z, and
the RANK is represented by the numbers 1-n. A CHANNEL identifies
a group of RANKS which are all attached to a single controller,
and each RANK represents a disk. For example, drive "b3" is the
third disk on the second controller. This provides a nice
mechanism to locate a specific drive given a known hardware
configuration. Various hardware vendors use a similar scheme.
A nice side effect of these changes is that they allowed me to
make the raid0/raid10/raidz/raidz2 setup functions generic. This
makes adding new test configs easy: you just need to create
a udev rules file for your test config which conforms to the
naming scheme.
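For illustration, a rules file conforming to the scheme might look
like the following. The PCI/SAS paths and the /dev/disk/zpool
symlink location are assumptions for the sketch, not the shipped
rules; the real matches depend on your controller topology.

    # 99-zpool.rules -- illustrative only.
    # Map the first two disks on the first controller to a1, a2.
    KERNEL=="sd*[!0-9]", ENV{ID_PATH}=="pci-0000:04:00.0-sas-phy0*", \
        SYMLINK+="disk/zpool/a1"
    KERNEL=="sd*[!0-9]", ENV{ID_PATH}=="pci-0000:04:00.0-sas-phy1*", \
        SYMLINK+="disk/zpool/a2"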
Pass an alternate location via a module option for the zpool.cache file
used by the kernel. This allows us to write in-tree tests which do
not modify any out-of-tree files we do not own. This is just standard
good behavior for any test suite.
Additionally, refine the existing test case to explicitly use the
cache file when looking for pools to import, and add a second test
case which is forced to probe the disks for available pools to
import.
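As a sketch, the two behaviors might be exercised like this. The
spa_config_path parameter name, the cache location, and the pool
name are assumptions based on the description above; verify the
parameter against your module.

    # Point the kernel module at an alternate cache file.
    modprobe zfs spa_config_path=/tmp/zpool.cache

    # Import explicitly from the cache file ...
    zpool import -c /tmp/zpool.cache tank

    # ... and again, forcing a probe of the disks instead.
    zpool import -d /dev tank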
This is an initial script for validation of zfs/zpool configuration.
For now there is only one test here to ensure that /etc/zfs/zpool.cache
is being updated properly by the kernel module. Additional tests
should be added; I believe Richardo said there was an existing test
suite out there which validated the behavior of many zpool/zfs
commands. It would be nice to add that as appropriate.
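A minimal sketch of that one check, assuming the alternate cache
location from the previous change and illustrative device names:

    # Assumes the module was loaded with the alternate cache path.
    CACHE=/tmp/zpool.cache
    zpool create tank /dev/disk/zpool/a1
    if [ -s "$CACHE" ]; then
        echo "PASS: zpool.cache updated by the kernel module"
    else
        echo "FAIL: zpool.cache missing or empty"
    fi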
The current test rig consists of two 60-disk dragon drawers
configured in 4-x15 mode. Each drawer has 4 SAS connections to my
node for a total of 8 SAS connections spread over 4 dual-port LSI
SAS adapters. The configurations are as follows:
- raid0: All 120 drives in a single pool.
- raidz: 15 RAIDZ groups of 7+1.
- raidz2: 15 RAIDZ2 groups of 6+2.
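For example, the raidz (7+1) layout might be assembled from the
udev names like this. The /dev/disk/zpool prefix and the channel
letters a-h (one per SAS connection) are assumptions of the sketch:

    # 15 raidz groups of 7+1: one disk per channel (a-h) per rank.
    VDEVS=""
    for r in $(seq 1 15); do
        VDEVS="$VDEVS raidz"
        for c in a b c d e f g h; do
            VDEVS="$VDEVS /dev/disk/zpool/${c}${r}"
        done
    done
    zpool create tank $VDEVS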
This change extends the existing in-tree test infrastructure such
that it can also be run as part of the installed package. This
simplifies testing on multiple systems and is generally all around
useful. The scripts may still be run in-tree and will use the
in-tree build products as long as .script-config exists.
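A sketch of how a script might pick between the two modes; the
installed fallback paths are illustrative:

    # Use in-tree build products when .script-config exists,
    # otherwise fall back to the installed locations.
    basedir="$(dirname "$0")"
    if [ -f "${basedir}/../.script-config" ]; then
        . "${basedir}/../.script-config"
    else
        ZPOOL=/usr/sbin/zpool    # illustrative installed paths
        ZFS=/usr/sbin/zfs
    fi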
Modern kernel build systems, at least post-2.6.16, will set this
properly so we should not set it ourselves. In fact, post-2.6.28
the include headers have moved under arch so the guess we make
here is completely wrong. Letting the kernel build system set this
ensures it will be correct. Also drop the ulimit from the Makefile
which, not surprisingly, turns out to be very non-portable. If
you're expecting failures, set the ulimit in your shell before
kicking off the test suite.
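For example (the core-dump limit and the entry-point script name
are both illustrative; set whichever limit your tests need):

    # Raise the limit for this shell, then run the suite.
    ulimit -c unlimited
    ./run-zfs-tests.sh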
SLES10 ships util-linux-2.12r-35.30 which does not support the -f option
to losetup. To avoid this problem the unused_loop_device() function was
added which attempts to find an unused loop device by checking each
/dev/loop* device with losetup to see if it is configured.
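A minimal sketch of that helper, assuming this shape; it relies on
plain "losetup <device>" exiting non-zero when the device is not
configured:

    # Find the first /dev/loop* device not currently configured.
    unused_loop_device() {
        local dev
        for dev in /dev/loop*; do
            if ! losetup "$dev" >/dev/null 2>&1; then
                echo "$dev"
                return 0
            fi
        done
        return 1
    }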