While in theory I like the idea of compiler warnings always being
fatal, in practice this causes problems when small harmless warnings
break the build for end users. To handle this I've updated the build
system such that -Werror is only used when --enable-debug is passed
to configure. This is how I always build when developing, so I'll
catch all build warnings, while end users will not get stuck by
minor issues.
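For example (illustrative; --enable-debug may enable other debug
flags as well, but -Werror is the relevant one here):

./configure --enable-debug   # developer build, warnings are fatal
./configure                  # end-user build, warnings do not fail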
For all module/library functions, ensure no stack frame exceeds 1024
bytes. Ideally this limit should be set lower, to say 512 bytes, but
there are still numerous functions which exceed even that. For now
the limit is 1024 bytes to ensure we catch the worst offenders.
Additionally, set the limit for ztest to 1024 bytes, since the idea
here is to catch stack issues in user space before we find them by
overrunning a kernel stack. This too should be reduced to 512 bytes
as soon as all the troublemakers are fixed.
Finally, add -fstack-check to the gcc build options when
--enable-debug is specified at configure time. This ensures that
each page on the stack will be touched and we will generate a
segfault on stack overflow.
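For reference, the checks described above map to gcc options along
these lines (a sketch of the intent; the exact flags wired into
--enable-debug are an assumption, but these are the standard
spellings):

-Wframe-larger-than=1024   # warn when a stack frame exceeds 1024 bytes
-fstack-check              # touch each stack page, fault on overflow

With -Werror also enabled in debug builds, the frame-size warning
becomes a hard build failure.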
Over time we can gradually fix the following functions, listed with
their current stack frame size in bytes:
536 zfs:dsl_deadlist_regenerate
536 zfs:dsl_load_sets
536 zfs:zil_parse
544 zfs:zfs_ioc_recv
552 zfs:dsl_deadlist_insert_bpobj
552 zfs:vdev_dtl_sync
584 zfs:copy_create_perms
608 zfs:ddt_class_contains
608 zfs:ddt_prefetch
608 zfs:__dprintf
616 zfs:ddt_lookup
648 zfs:dsl_scan_ddt
696 zfs:dsl_deadlist_merge
736 zfs:ddt_zap_walk
744 zfs:dsl_prop_get_all_impl
872 zfs:dnode_evict_dbufs
After much contemplation I can't see a clean way to use udev entirely
in-tree for testing. This patch removes a horrible, horrible hack
which would copy the needed udev bits in to place on your system to
make it work. That is simply not acceptable; nothing run in-tree
should ever, ever install anything on your system.
Since I could not come up with a clean way to use udev in-tree, the
fix is to simply parse the zdev config file and create the needed
symlinks in a subdirectory of your working tree. This is not as clean
as using udev but it works perfectly well for in-tree testing.
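A minimal sketch of the idea, assuming a zdev.conf made up of
'<name> <by-path id>' pairs and a ./dev subdirectory in the working
tree (the format and paths here are illustrative, not the exact
implementation):

mkdir -p ./dev
while read -r name path; do
    case "$name" in ""|"#"*) continue ;; esac  # skip blanks/comments
    ln -sf "/dev/disk/by-path/${path}" "./dev/${name}"
done < /etc/zfs/zdev.conf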
Twice now I've been bitten by building against a kernel which is
configured such that it is incompatible with the CDDL license. These
build failures don't occur until the linking phase, at which point
they simply call out the offending symbol. No location information
can be provided at that point, so it is often confusing what the
problem is, particularly when building against a new kernel for the
first time.
To help address this I've added a configure check which can be
extended over time to detect known kernel config options which, if
set, will break the ZFS build. Currently I have just added
CONFIG_DEBUG_LOCK_ALLOC, which makes mutexes GPL-only and is on by
default in the RHEL6 alpha builds. I know for a fact there are other
similar options which can be added as they are encountered.
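In spirit the check boils down to something like this (a sketch
only, not the actual autoconf macro; ${LINUX} stands in for the
detected kernel source directory):

if grep -q '^CONFIG_DEBUG_LOCK_ALLOC=y' "${LINUX}/.config"; then
    echo "error: CONFIG_DEBUG_LOCK_ALLOC makes mutexes GPL-only," >&2
    echo "error: the CDDL zfs modules cannot link against them" >&2
    exit 1
fi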
While I completely agree that udev is the lesser of many possible
evils when solving the device naming issue... it is still evil. After
attempting to craft a single rule which will work for various
versions of udev in various distros, I've come to the conclusion that
the only maintainable way to solve this issue is to split the rule
from any particular configuration.
This commit provides a generic 60-zpool.rules file which uses a
small helper util, 'zpool_id', to parse a configuration file located
by default at /etc/zfs/zdev.conf. The helper maps a by-path udev
name to a more friendly name of <channel><rank> for large
configurations.
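As an illustration, a zdev.conf might pair friendly names with
by-path ids along these lines (the by-path ids are invented for the
example and will differ on real hardware):

# <name>   <udev by-path id>
a1         pci-0000:03:00.0-sas-phy0-lun-0
a2         pci-0000:03:00.0-sas-phy1-lun-0
b1         pci-0000:04:00.0-sas-phy0-lun-0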
As part of this change all of the support scripts which rely on this
udev naming convention have been updated as needed. Example
zdev.conf files have also been added for 3 different systems, but
you will always need to add one for your exact hardware.
Finally, included in these changes are the proper tweaks to the
build system to ensure everything still gets packaged properly in
the rpms and can run in or out of tree.
To simplify creation and management of test configurations, the
dragon and x4550 configs have been integrated with udev. Our
current best guess as to how we'll actually manage the disks in
these systems is with a udev mapping scheme. The current leading
scheme is to map each drive to a simple <CHANNEL><RANK> id. In
this mapping each CHANNEL is represented by the letters a-z, and
the RANK is represented by the numbers 1-n. A CHANNEL should
identify a group of RANKs which are all attached to a single
controller, and each RANK represents a disk. This provides a nice
mechanism to locate a specific drive given a known hardware
configuration. Various hardware vendors use a similar scheme.
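For example, a hypothetical system with two controllers and four
disks per controller would expose the following names:

a1 a2 a3 a4    # four disks behind controller 'a'
b1 b2 b3 b4    # four disks behind controller 'b'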
A nice side effect of these changes is that they allowed me to make
the raid0/raid10/raidz/raidz2 setup functions generic. This makes
adding new test configs easy; you just need to create a udev rules
file for your test config which conforms to the naming scheme.
This change extends the existing in-tree test infrastructure such
that it can also be run as part of the installed package. This
simplifies testing on multiple systems and is generally all around
useful. The scripts may still be run in-tree and will use the
in-tree build products as long as .script-config exists.
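In practice that convention amounts to a few lines at the top of
each script; this is a sketch of the idea, not the exact code in
the scripts:

basedir="$(dirname "$0")"
if [ -f "${basedir}/../.script-config" ]; then
    . "${basedir}/../.script-config"  # in-tree: use local products
fi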
These changes bring the zfs-0.4.4 tree into compliance with the
spl-0.4.4 packaging changes. The bottom line is that 2 source rpms
and 4 binary rpms will now be generated when creating packages:
zfs-<version>.src.rpm
- Fully rebuildable source rpm for libzfs and utils.
zfs-modules-<version>.src.rpm
- Fully rebuildable source rpm for kernel modules.
zfs-<version>.<arch>.rpm
- Binary rpm for libzfs and utils. The utils in this package are
compatible with all zfs-modules rpms of the same version.
zfs-devel-<version>.<arch>.rpm
- Binary rpm containing headers for building against libzfs libraries.
zfs-modules-<version>-<kernel>.<arch>.rpm
- Binary rpm containing the kernel modules for a specific kernel build.
The package name contains the kernel version and you should have one
of these packages installed to match every kernel on your system.
zfs-modules-devel-<version>-<kernel>.<arch>.rpm
- Binary rpm containing development headers and module symbols needed
for building additional kernel modules which are dependent on the
zfs module stack.
Expect minor iterations on these changes as I validate that they
work properly on CHAOS, RHEL, Fedora, and SLES style distros.
- ZFS_AC_KERNEL updated to exclude -obj entries in /usr/src/ when
attempting to automatically detect your kernel source.
- ZFS_AC_KERNEL updated to check for a *-obj directory when
attempting to detect the objects for your kernel source.
- ZFS_AC_SPL updated to additionally check for the Modules.symvers
build product. This seems to be specific to SLES systems; for
Vanilla, Fedora, RHEL, and Chaos kernels the symbol file is just
called Module.symvers.
- ZFS_CHECK_SYMBOL_EXPORT should also check the exported SPL
symbols in addition to the exported core kernel symbols.
This is used when you need to configure the project but don't
actually intend to build it, and thus don't really need access to
either the kernel or spl headers and symbols. At Livermore I use
this when I only intend to run the 'make dist' or 'make srpm'
targets.
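Assuming the option added here is the configure-time config
selector (the exact option name and value are my assumption), usage
would look something like:

./configure --with-config=srpm   # no kernel or spl headers needed
make srpm                        # build just the source rpm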
An update to the build system to properly support all commonly
used Makefile targets, which include:
make all # Build everything
make install # Install everything
make clean # Clean up build products
make distclean # Clean up everything
make dist # Create package tarball
make srpm # Create package source RPM
make rpm # Create package binary RPMs
make tags # Create ctags and etags for everything
Extra care was taken to ensure that the source RPMs are fully
rebuildable against Fedora/RHEL/Chaos kernels. To build binary
RPMs from the source RPM for your system simply run:
rpmbuild --rebuild zfs-x.y.z-1.src.rpm
This will produce two binary RPMs with correct 'requires'
dependencies for your kernel. One will contain all zfs modules
and support utilities, the other is a devel package for compiling
additional kernel modules which are dependent on zfs.
zfs-x.y.z-1_<kernel version>.x86_64.rpm
zfs-devel-x.y.z-1_<kernel version>.x86_64.rpm
Move all of the kernel specific build info in to config/kernel;
likewise, any user specific build flags should go in config/user.
This seems like a reasonable way to go.
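Roughly speaking the resulting layout is (the file names are an
assumption; the kernel/user split is the point):

config/kernel.m4   # kernel specific autoconf checks
config/user.m4     # user space build flags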