After spending considerable time thinking about this I've come to the conclusion that on Linux systems we don't need Solaris style devid support. Instead we can simply use udev if we are careful, and there are even some advantages. The Solaris style devids are designed to provide a mechanism by which a device can be opened reliably regardless of its location in the system. This is exactly what udev provides us on Linux: a flexible mechanism for consistently identifying the same devices regardless of probing order. We just need to be careful to always open the device by the path provided at creation time, and that path must be stored in ZPOOL_CONFIG_PATH.

This in fact has certain advantages. For example, if you always want the zpool to be able to locate its disks regardless of their physical location, you can create the pool using /dev/disk/by-id/. This is perhaps what you'd want on a desktop system where the exact location is not that important; it's more critical that all the disks can be found.

However, in an enterprise setup there's a good chance that the physical location of each drive is important. You have likely set things up such that your raid groups span multiple host adapters, so that you can lose an adapter without downtime. In this case you would want to use the /dev/disk/by-path/ path to ensure the path information is preserved and you always open the disks at the right physical locations. This would ensure your system never gets accidentally misconfigured yet still 'just works' because the zpool was able to locate the disks anyway.

Finally, if you want to get really fancy you can always create your own udev rules. This way you could implement whatever lookup scheme you wanted in user space for your drives. That includes nice cosmetic things like being able to control the device names shown in tools like zpool status, since those names are just based off the device names.

I've yet to come up with a good reason to implement devid support on Linux since we have udev, but I've still just commented it out for now because somebody might come up with a really good reason I forgot.
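To make the two policies concrete, here is a minimal sketch of what pool creation could look like under each one. The disk serials, PCI paths, rule file name, and symlink name below are made-up examples for illustration only:

    # By-id: the pool locates its disks no matter which port they are
    # plugged into, which is convenient for a desktop system.
    zpool create tank mirror \
        /dev/disk/by-id/ata-EXAMPLE_SERIAL_A \
        /dev/disk/by-id/ata-EXAMPLE_SERIAL_B

    # By-path: the pool expects its disks at fixed physical locations, so
    # a drive cabled to the wrong adapter or slot gets noticed instead of
    # being silently accepted.
    zpool create tank mirror \
        /dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:0 \
        /dev/disk/by-path/pci-0000:04:00.0-scsi-0:0:0:0

    # Custom rule (e.g. in a hypothetical /etc/udev/rules.d/61-zpool.rules):
    # alias a drive by its serial so the friendly name is what you create
    # the pool with and what 'zpool status' reports.
    KERNEL=="sd*", SUBSYSTEM=="block", ENV{ID_SERIAL}=="EXAMPLE_SERIAL_A", SYMLINK+="zpool/data0"

Whichever of these paths is passed to 'zpool create' is what ends up stored in ZPOOL_CONFIG_PATH and is what gets reopened later.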
README
============================ ZFS KERNEL BUILD ============================

1) Build the SPL (Solaris Porting Layer) module, which is designed to
provide many Solaris APIs in the Linux kernel that are needed by ZFS.
To build the SPL:

    tar -xzf spl-x.y.z.tgz
    cd spl-x.y.z
    ./configure --with-linux=<kernel src>
    make
    make check <as root>

2) Build ZFS; this port is based on the build specified by the ZFS.RELEASE
file. You will need to have both the kernel and SPL source available.
To build ZFS for use as a Linux kernel module:

    tar -xzf zfs-x.y.z.tgz
    cd zfs-x.y.z
    ./configure --with-linux=<kernel src> \
                --with-spl=<spl src>
    make
    make check <as root>

============================ ZPIOS TEST SUITE ============================

3) Provided is an in-kernel test application called zpios which can be
used to simulate a parallel IO load. It may be used as a stress or
performance test for your configuration. To simplify testing, scripts
are provided in the scripts/ directory which offer a few pre-built
zpool configurations and zpios test cases. By default 'make check' as
root will run a simple test against several small loopback devices
created in /tmp/.

    cd scripts
    ./zfs.sh                              # Load the ZFS/SPL modules
    ./zpios.sh -c lo-raid0.sh -t tiny -v  # Tiny zpios loopback test
    ./zfs.sh -u                           # Unload the ZFS/SPL modules

Enjoy,
Brian Behlendorf <behlendorf1@llnl.gov>