This script was added to provide a simple way to check that zpool
layers correctly on all the standard Linux block device types.
It's still a little fragile if there's a hiccup in, say, the md or
lvm tool chain, but aside from that it works well.
The 'make check' target now also calls this script in a safe mode
which only operates on files and loopback devices. To check other
block device types it must be explicitly run by hand, because it
will overwrite the block devices it is given.
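For example (the script name and device arguments below are
illustrative, not the actual interface), the two modes look like:

    # Safe mode, as driven by 'make check': files and loopback only.
    make check

    # Hypothetical explicit run against real block devices; this WILL
    # overwrite the named devices, so it never happens automatically.
    ./zpool-layout-test.sh /dev/sdb /dev/sdc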
For the sake of completeness we need to validate that everything
works well not just on IDE or SCSI drives, but also with a zpool
configured on top of the Linux virtual block devices. These scripts
simplify that testing process, and have shown that while everything
is good on top of a ram disk, right now the code base panics the
kernel when layered on top of either an md or dm style device. For
the moment, don't do that.
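As a concrete illustration of the layering being exercised (standard
commands; the backing file, loop device, and pool name are arbitrary),
the loopback case looks like this:

    # Back a loopback device with a file and layer a zpool on it.
    dd if=/dev/zero of=/tmp/zpool-vdev0 bs=1M count=128
    losetup /dev/loop0 /tmp/zpool-vdev0
    zpool create tank /dev/loop0

    # The same stack on an md or dm device currently panics the
    # kernel, so the scripts steer clear of it for now.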
To simplify the creation and management of test configurations, the
dragon and x4550 configs have been integrated with udev. Our
current best guess as to how we'll actually manage the disks in
these systems is with a udev mapping scheme. The current leading
scheme is to map each drive to a simple <CHANNEL><RANK> id. In
this mapping each CHANNEL is represented by the letters a-z, and
the RANK is represented by the numbers 1-n. A CHANNEL should
identify a group of RANKS which are all attached to a single
controller, each RANK represents a disk. This provides a nice
mechanism to locate a specific drive given a known hardware
configuration. Various hardware vendors use a similar scheme.
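As a rough sketch (the match keys below are assumptions; the real
rules must key off the actual controller/port topology of the dragon
and x4550 systems), a rule mapping one disk to its id looks like:

    # Hypothetical udev rules fragment: symlink the disk hanging off a
    # known controller port to its <CHANNEL><RANK> name, giving a
    # stable /dev/disk/zpool/a1 regardless of probe order.
    KERNEL=="sd*", SUBSYSTEM=="block", KERNELS=="0:0:0:0", SYMLINK+="disk/zpool/a1"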
A nice side effect of these changes is that they allowed me to make
the raid0/raid10/raidz/raidz2 setup functions generic. This makes
adding new test configs easy: you just need to create a udev rules
file for your test config which conforms to the naming scheme.
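A minimal sketch of what such a generic function can look like,
assuming the <CHANNEL><RANK> links land under /dev/disk/zpool (the
function name and that path are assumptions, not the scripts'
actual interface):

    # Build one raidz vdev per rank, spanning all channels, so losing
    # a single controller costs each vdev at most one disk.
    zpool_create_raidz() {
        local pool=$1 channels=$2 ranks=$3
        local chan=( {a..z} )
        local c r vdevs=""

        for ((r = 1; r <= ranks; r++)); do
            vdevs="${vdevs} raidz"
            for ((c = 0; c < channels; c++)); do
                vdevs="${vdevs} /dev/disk/zpool/${chan[$c]}${r}"
            done
        done
        zpool create "${pool}" ${vdevs}
    }

    # Example: 4 channels x 6 ranks -> 6 raidz vdevs of 4 disks each.
    zpool_create_raidz tank 4 6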
Pass an alternate location for the zpool.cache file used by the
kernel via a module option. This allows us to write in-tree tests
which do not modify any out-of-tree files we do not own. This is
just standard good behavior for any test suite.
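With that in place a test can point the module at a scratch cache
file at load time; a minimal sketch, assuming the option is exposed
as the spa_config_path module parameter:

    # Load the module with an in-tree cache file instead of the
    # system-owned /etc/zfs/zpool.cache.
    modprobe zfs spa_config_path=/tmp/zpool.cache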
Additionally, refine the existing test case to explicitly use the
cache file when looking for pools to import, and add a second test
case which is forced to probe the disks for available pools to import.
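Both behaviors map onto standard zpool import modes; for example
(pool name and paths are illustrative):

    # First case: import using an explicit cache file.
    zpool import -c /tmp/zpool.cache tank

    # Second case: ignore any cache and probe the devices in the
    # given directory for importable pools.
    zpool import -d /dev tank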
This is an initial script for validation of zfs/zpool configuration.
For now there is only one test here to ensure that /etc/zfs/zpool.cache
is being updated properly from the kernel module. Additional tests
should be added; I believe Ricardo said there was an existing test
suite out there which validated the behavior of many zpool/zfs
commands. It would be nice to add that as appropriate.
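For reference, the single existing check amounts to something like
the following sketch (pool name and vdev are illustrative, and
zpool.cache is a binary nvlist, hence the strings pass):

    # After creating a pool the kernel module should rewrite the
    # cache file to include it; after destroying it, the entry goes.
    zpool create tank /dev/loop0
    strings /etc/zfs/zpool.cache | grep -q tank || echo "FAIL: missing"
    zpool destroy tank
    strings /etc/zfs/zpool.cache 2>/dev/null | grep -q tank \
        && echo "FAIL: stale entry"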