Twice now I've been bitten by building against a kernel which is
configured such that it is incompatible with the CDDL license. These
build failures don't occur until the linking phase, at which point
they simply call out the offending symbol. No location information
can be provided at that point, so it is often unclear what the
problem is, particularly when building against a new kernel for the
first time.
To help address this I've added a configure check which can be
extended over time to detect known kernel config options which, if
set, will break the ZFS build. Currently I have just added
CONFIG_DEBUG_LOCK_ALLOC, which makes mutexes GPL-only and is on by
default in the RHEL6 alpha builds. I know for a fact there are other
similar options which can be added as they are encountered.
While I completely agree that udev is the lesser of many possible
evils when solving the device issue... it is still evil. After
attempting to craft a single rule which will work for various
versions of udev in various distros, I've come to the conclusion
that the only maintainable way to solve this issue is to split the
rule from any particular configuration.
This commit provides a generic 60-zpool.rules file which uses a
small helper utility, 'zpool_id', to parse a configuration file
located by default in /etc/zfs/zdev.conf. The helper maps a by-path
udev name to a friendlier <channel><rank> name for large
configurations.
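For illustration, a zdev.conf entry simply pairs the friendly name
with the udev by-path id of that slot; the paths below are made up
placeholders and must be replaced with the ids for your hardware:

    # <name>   <by-path id>
    a1         pci-0000:04:00.0-sas-phy0-...
    a2         pci-0000:04:00.0-sas-phy1-...
    b1         pci-0000:05:00.0-sas-phy0-...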
As part of this change all of the support scripts which rely on
this udev naming convention have been updated as needed. Example
zdev.conf files have also been added for 3 different systems, but
you will always need to add one for your exact hardware.
Finally, included in these changes are the proper tweaks to the
build system to ensure everything still gets packaged properly
in the rpms and can run in or out of tree.
To simplify creation and management of test configurations the
dragon and x4550 configs have been integrated with udev. Our
current best guess as to how we'll actually manage the disks in
these systems is with a udev mapping scheme. The current leading
scheme is to map each drive to a simple <CHANNEL><RANK> id. In
this mapping each CHANNEL is represented by the letters a-z, and
the RANK is represented by the numbers 1-n. A CHANNEL should
identify a group of RANKS which are all attached to a single
controller, and each RANK represents a disk. This provides a nice
mechanism to locate a specific drive given a known hardware
configuration. Various hardware vendors use a similar scheme.
A nice side effect of these changes is that it allowed me to make
the raid0/raid10/raidz/raidz2 setup functions generic. This
makes adding new test configs easy; you just need to create
a udev rules file for your test config which conforms to the
naming scheme.
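For example, with the naming scheme in place a setup function only
needs to iterate over channels and ranks; the device directory and
pool layout below are assumptions used purely for illustration:

    # Hypothetical sketch: build a raidz pool from channels a-d, ranks 1-4.
    DEVDIR=/dev/disk/zpool
    DEVICES=""
    for channel in a b c d; do
        for rank in 1 2 3 4; do
            DEVICES="${DEVICES} ${DEVDIR}/${channel}${rank}"
        done
    done
    zpool create tank raidz ${DEVICES}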
This change extends the existing in-tree test infrastructure such
that it can also be run as part of the installed package. This
simplifies testing on multiple systems and is generally all around
useful. The scripts may still be run in-tree and will use the
in-tree build products as long as .script-config exists.
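The in-tree versus installed decision amounts to a simple check at
the top of each script; a sketch, where the installed config path is
an assumption:

    # Use in-tree build products when .script-config is present,
    # otherwise fall back to the installed defaults.
    basedir="$(dirname "$0")"
    if [ -f "${basedir}/../.script-config" ]; then
        . "${basedir}/../.script-config"
    else
        . /etc/zfs/zfs-script-config.sh   # hypothetical installed location
    fi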
These changes bring the zfs-0.4.4 tree into compliance with
the spl-0.4.4 packaging changes. The bottom line is that 2 source
rpms and 4 binary rpms will now be generated when creating
packages:
zfs-<version>.src.rpm
- Fully rebuildable source rpm for libzfs and utils.
zfs-modules-<version>.src.rpm
- Fully rebuildable source rpm for kernel modules.
zfs-<version>.<arch>.rpm
- Binary rpm for libzfs and utils. The utils in this package are
compatible with all zfs-module rpms of the same version.
zfs-devel-<version>.<arch>.rpm
- Binary rpm containing headers for building against libzfs libraries.
zfs-modules-<version>-<kernel>.<arch>.rpm
- Binary rpm containing the kernel modules for a specific kernel build.
The package name contains the kernel version and you should have one
of these packages installed to match every kernel on your system.
zfs-modules-devel-<version>-<kernel>.<arch>.rpm
- Binary rpm containing development headers and module symbols needed
for building additional kernel modules which are dependent on the
zfs module stack.
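For example, binary rpms matching your system can be regenerated
from the two source rpms with:

    rpmbuild --rebuild zfs-<version>.src.rpm
    rpmbuild --rebuild zfs-modules-<version>.src.rpm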
Expect minor iterations on these changes as I validate they work
properly on CHAOS, RHEL, Fedora, and SLES style distros.
- ZFS_AC_KERNEL updated to exclude -obj entries in /usr/src/ when
attempting to automatically detect your kernel source.
- ZFS_AC_KERNEL updated to check for a *-obj directory when
attempting to detect the objects for your kernel source.
- ZFS_AC_SPL updated to additionally check for the Modules.symvers
build product. This seems to be specific to SLES systems; for
Vanilla, Fedora, RHEL, and Chaos kernels the symbol file is just
called Module.symvers (see the sketch after this list).
- ZFS_CHECK_SYMBOL_EXPORT should also check the exported SPL
symbols in addition to the exported core kernel symbols.
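A sketch of the symbol file detection described above; the real
logic lives in the autoconf macros and the variable name used here
is an assumption:

    # Accept either the common Module.symvers or the SLES Modules.symvers.
    if [ -f "${SPL_OBJ}/Module.symvers" ]; then
        SPL_SYMVERS="${SPL_OBJ}/Module.symvers"
    elif [ -f "${SPL_OBJ}/Modules.symvers" ]; then
        SPL_SYMVERS="${SPL_OBJ}/Modules.symvers"
    else
        echo "Unable to locate an SPL symbol file"
        exit 1
    fi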
This is used when you need to configure the project but you don't
actually intend to build it. Thus you don't really need access to
either the kernel or spl headers and symbols. At Livermore I use
this when I only intend to use the 'make dist' or 'make srpm' target.
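The workflow looks roughly like the following; the configure option
is left as a placeholder rather than guessing its exact spelling,
check ./configure --help for the real name:

    ./configure <option-to-skip-kernel-and-spl-checks>
    make srpm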
An update to the build system to properly support all commonly
used Makefile targets. These include:
make all # Build everything
make install # Install everything
make clean # Clean up build products
make distclean # Clean up everything
make dist # Create package tarball
make srpm # Create package source RPM
make rpm # Create package binary RPMs
make tags # Create ctags and etags for everything
Extra care was taken to ensure that the source RPMs are fully
rebuildable against Fedora/RHEL/Chaos kernels. To build binary
RPMs from the source RPM for your system simply run:
rpmbuild --rebuild zfs-x.y.z-1.src.rpm
This will produce two binary RPMs with correct 'requires'
dependencies for your kernel. One will contain all zfs modules
and support utilities, the other is a devel package for compiling
additional kernel modules which are dependent on the zfs module stack.
zfs-x.y.z-1_<kernel version>.x86_64.rpm
zfs-devel-x.y.z-1_<kernel version>.x86_64.rpm
The kernel specific build info has been moved into config/kernel;
likewise, any user specific build flags should go in config/user.
This seems like a reasonable way to go.