While I completely agree udev is the lesser of many possible
evils when solving the device naming issue... it is still evil.
After attempting to craft a single rule which will work with
various versions of udev in various distros, I've come to the
conclusion that the only maintainable way to solve this issue is
to split the rule from any particular configuration.
This commit provides a generic 60-zpool.rules file which uses a
small helper util 'zpool_id' to parse a configuration file,
located by default at /etc/zfs/zdev.conf. The helper maps a
by-path udev name to a friendlier name of <channel><rank>
for large configurations.
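For illustration only, a zdev.conf might pair each <channel><rank>
name with a by-path device id along these lines (the by-path names
below are made up; use the ones your system actually reports under
/dev/disk/by-path):

    # name   by-path id (illustrative entries only)
    a1       pci-0000:00:07.0-scsi-0:0:0:0
    a2       pci-0000:00:07.0-scsi-0:0:1:0
    b1       pci-0000:00:08.0-scsi-0:0:0:0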
As part of this change all of the support scripts which rely on
this udev naming convention have been updated as needed. Example
zdev.conf files have also been added for 3 different systems, but
you will always need to add one for your exact hardware.
Finally, included in these changes are the proper tweaks to the
build system to ensure everything still gets packaged properly
in the RPMs and can run in or out of tree.
Moving forward, udevadm {trigger,settle} has replaced
udevtrigger/udevsettle as the correct interface to use. However,
since we need to work in both environments for testing, check
whether udevadm is available. If it is, use it; if it is not,
fall back to the legacy interface.
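A minimal sketch of that fallback in shell (detection via
'command -v' is my assumption; the actual scripts may test for
the binary differently):

    # Prefer the modern udevadm interface when it exists,
    # otherwise fall back to the legacy helper binaries.
    if command -v udevadm >/dev/null 2>&1; then
        udevadm trigger
        udevadm settle
    else
        udevtrigger
        udevsettle
    fi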
The script has been updated to download the latest documentation
packages for Solaris and extract the needed ZFS man pages. These
will still need a little markup to handle changes between the
Solaris and Linux versions of ZFS. However, the changes should be
pretty minor since I've tried hard to keep the interface the same.
In addition to the script update, the zdb, zfs, and zpool man
pages have been added to the repo.
This script was added to provide a simple way to check that zpool
layers correctly on all the standard Linux block device types.
It's still a little fragile if there's a hiccup in, say, the md or
lvm tool chain, but aside from that it works well.
The 'make check' target now also calls this script in a safe mode
which only operates on files and loopback devices. To check other
block device types it must be explicitly run by hand, because it
will overwrite various block devices.
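Roughly, the safe mode exercises the kind of layering sketched
below (commands illustrative only; the script handles its own
setup and teardown, and file/device names will differ):

    # Back a loopback device with a scratch file, layer a
    # pool on it, then tear everything down again.
    dd if=/dev/zero of=/tmp/zpool-vdev bs=1M count=256
    losetup /dev/loop0 /tmp/zpool-vdev
    zpool create tank /dev/loop0
    zpool destroy tank
    losetup -d /dev/loop0
    rm /tmp/zpool-vdev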
For the sake of completeness we need to validate that everything
works well not just on IDE or SCSI drives, but also on a zpool
configured on top of the Linux virtual block devices. These
scripts simplify that testing process, and they have shown that
while everything is good on top of a ram disk, right now the
code base panics the kernel when layered on top of either an
md or dm style device. For the moment, don't do that.
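For reference, the ram disk case which does work looks roughly
like this (module name and parameters are an assumption about a
typical setup, not the exact commands the scripts run):

    # Load the brd ramdisk driver and layer a pool on /dev/ram0.
    # Do NOT substitute an md or dm device here; that currently
    # panics the kernel.
    modprobe brd rd_size=65536
    zpool create ramtank /dev/ram0
    zpool destroy ramtank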
To simplify creation and management of test configurations, the
dragon and x4550 configs have been integrated with udev. Our
current best guess as to how we'll actually manage the disks in
these systems is with a udev mapping scheme. The current leading
scheme is to map each drive to a simple <CHANNEL><RANK> id. In
this mapping each CHANNEL is represented by the letters a-z, and
the RANK is represented by the numbers 1-n. A CHANNEL should
identify a group of RANKs which are all attached to a single
controller, and each RANK represents a disk; for example, b3 is
the third disk on the second controller. This provides a nice
mechanism to locate a specific drive given a known hardware
configuration. Various hardware vendors use a similar scheme.
A nice side effect of these changes is that it allowed me to make
the raid0/raid10/raidz/raidz2 setup functions generic. This
makes adding new test configs easy; you just need to create
a udev rules file for your test config which conforms to the
naming scheme.
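Purely as a hypothetical sketch of such a rules file (the real
configs go through the zpool_id helper and zdev.conf; the ID_PATH
value here is invented, and the rule assumes ID_PATH has already
been imported by the usual persistent-storage rules):

    # Hypothetical rule: expose the first disk on the first
    # controller as /dev/disk/zpool/a1.
    KERNEL=="sd*[!0-9]", SUBSYSTEM=="block", ENV{ID_PATH}=="pci-0000:04:00.0-scsi-0:0:0:0", SYMLINK+="disk/zpool/a1"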