It appears that in earlier kernels the maximum name length of a
kobject was KOBJ_NAME_LEN (20) bytes. This was later extended to
dynamically allocate enough memory if it was over KOBJ_NAME_LEN,
and finally it was always made dynamic. Unfortunately, until this
last step happened it doesn't look like it was always safe to use
names larger than KOBJ_NAME_LEN. For example, under the RHEL5
2.6.18 kernel a NULL dereference is tripped if the kobject name
length exceeds KOBJ_NAME_LEN.
To avoid this issue the build system has been updated to check
whether KOBJ_NAME_LEN is defined. If it is, we have to assume
the maximum kobject name length is only 20 bytes. This 20-byte
name must minimally include the following components:
<zpool>/<dataset>[@snapshot[partition]]
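As a rough sketch (the function name and layout here are hypothetical,
not the shipped configure test), the compile-time probe boils down to a
kernel conftest that only builds while KOBJ_NAME_LEN is still defined:

    /* conftest.c sketch: compiles only while KOBJ_NAME_LEN exists */
    #include <linux/kobject.h>

    void
    kobj_name_len_conftest(void)
    {
            char name[KOBJ_NAME_LEN];  /* breaks once the macro is removed */

            (void) name;
    }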
The splat module is only needed for the spl regression tests.
But if we add it to MODULES then 'zfs.sh -u' will be able to
unload it if needed. The downside is that 'zfs.sh' will always
load it, but its overhead is minimal and in a production
setting you'll always be doing a 'modprobe zfs' anyway, so
this is really just for testing.
The long-term fix for Debian and Slackware style packaging is
to add native support for building these packages. Unfortunately,
that is a large chunk of work I don't have time for right now.
That said, it would be nice to have at least basic packages for
these distributions.
As a quick short/medium term solution I've settled on using alien
to convert the RPM packages to DEB or TGZ style packages. The
build system has been updated with the following build targets
which will first build RPM packages and then convert them as
needed to the target package type:
make rpm: Create .rpm packages
make deb: Create .deb packages
make tgz: Create .tgz packages
make pkg: Create the right package type for your distribution
The solution comes with a lot of caveats and your mileage may vary.
But basically the big limitations are that the resulting packages:
1) Will not have the correct dependency information.
2) Will not include the kernel version in the release.
3) Will not handle all differences between distributions.
But the resulting packages should be easy to install and remove
from your system and take care of running 'depmod -a' and such.
As I said at the top, this is not the right long-term solution.
If any of the upstream distribution maintainers want to jump in
and help do this right for their distribution I'd love the help.
Some buggy NPTL threading implementations include the guard area within
the stack size allocations. In this case we need to allocate an extra
page to account for the guard area since we only have two pages of usable
stack on Linux. Added an autoconf test that detects such implementations
by running a test program designed to segfault if the bug is present.
Set a flag NPTL_GUARD_WITHIN_STACK that is tested to decide if extra
stack space must be allocated for the guard area.
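For reference, this is roughly the shape of such a probe. It is a hedged
sketch, not necessarily the exact conftest program added to configure:
it requests a fixed stack and guard size, then touches stack-minus-guard
bytes, which only walks into the guard page when the guard is carved out
of the requested stack size.

    /* nptl_guard_probe.c: exits 0 on correct NPTL, segfaults on buggy ones */
    #include <alloca.h>
    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>

    #define STACK_SIZE      (64 * 1024)
    #define GUARD_SIZE      (16 * 1024)

    static void *
    touch_stack(void *arg)
    {
            /* Use (stack - guard) bytes of stack via alloca(). */
            char *buf = alloca(STACK_SIZE - GUARD_SIZE);

            memset(buf, 0xab, STACK_SIZE - GUARD_SIZE);
            return (NULL);
    }

    int
    main(void)
    {
            pthread_t tid;
            pthread_attr_t attr;

            pthread_attr_init(&attr);
            pthread_attr_setstacksize(&attr, STACK_SIZE);
            pthread_attr_setguardsize(&attr, GUARD_SIZE);

            if (pthread_create(&tid, &attr, touch_stack, NULL) != 0)
                    return (EXIT_FAILURE);

            (void) pthread_join(tid, NULL);
            return (EXIT_SUCCESS);  /* only reached when the guard is extra */
    }

A SIGSEGV from the probe indicates the buggy behavior, in which case
NPTL_GUARD_WITHIN_STACK is defined and the extra page is allocated.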
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
The stack check implementation in older versions of gcc has
a fairly low default limit on STACK_CHECK_MAX_FRAME_SIZE of
roughly 4096 bytes. This results in numerous warnings when it is
used with code which was designed to run in user space and
thus may be relatively stack heavy. To avoid these warnings,
which are fatal with -Werror, this patch restricts the use of
-fstack-check to libraries which are compiled in both user
space and kernel space. The only utility which uses this
flag is ztest which is designed to simulate running in the
kernel and must meet the -fstack-check requirements. All
other user space utilities do not use -fstack-check.
warning: frame size too large for reliable stack checking
warning: try reducing the number of local variables
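As a purely hypothetical illustration (this function is not from the
tree), a routine like the following is perfectly reasonable in user
space, yet its frame alone is several times STACK_CHECK_MAX_FRAME_SIZE,
so older gcc emits exactly the warnings quoted above when it is built
with -fstack-check:

    /* example.c: user-space style helper with a deliberately large frame */
    #include <stdio.h>
    #include <string.h>

    void
    format_report(char *out, size_t outlen)
    {
            char scratch[16384];    /* ~4x the ~4096 byte frame limit */

            memset(scratch, 0, sizeof (scratch));
            (void) snprintf(out, outlen, "used a %zu byte scratch buffer",
                sizeof (scratch));
    }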
While in theory I like the idea of compiler warnings always being
fatal, in practice this causes problems when small harmless errors
cause build failures for end users. To handle this I've updated
the build system such that -Werror is only used when --enable-debug
is passed to configure. This is how I always build when developing
so I'll catch all build warnings and end users will not get stuck
by minor issues.
As of autoconf-2.65 the AC_LANG_SOURCE macro no longer
includes the confdefs.h results when expanded. To handle this
simply explicitly include confdefs.h in conftest.c. This will
cause two copies of confdefs.h to be added to the test for
earlier autoconf versions but this is not harmful.
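As a rough sketch of the result (the body shown is hypothetical; the
actual change lives in the autoconf macros), a generated conftest.c now
simply begins by pulling confdefs.h in explicitly:

    /* conftest.c sketch: confdefs.h carries results of earlier checks */
    #include "confdefs.h"
    #include <linux/module.h>       /* hypothetical body of the test */

    void
    conftest_body(void)
    {
    }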
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
It seems the upstream community moved the definition of UTS_RELEASE
yet again as of linux-2.6.33. Update the build system to check in
all three possible locations where your kernel version may be defined.
$kernelbuild/include/linux/version.h
$kernelbuild/include/linux/utsrelease.h
$kernelbuild/include/generated/utsrelease.h
It turns out the gcc option -Wframe-larger-than=<size> which I recently
added to the build system is not supported in older versions of gcc.
Since this is just a flag to ensure I keep stack usage under control
I've added a configure check to detect if gcc supports it. If it's
available we use it in the proper places; if it's not, we don't.
For all module/library functions ensure no stack frame exceeds 1024
bytes. Ideally this should be set lower, say to 512 bytes, but there
are still numerous functions which exceed even this limit. For now
this is set to 1024 to ensure we catch the worst offenders.
Additionally, set the limit for ztest to 1024 bytes since the idea
here is to catch stack issues in user space before we find them by
overrunning a kernel stack. This should also be reduced to 512
bytes as soon as all the troublemakers are fixed.
Finally, add -fstack-check to gcc build options when --enable-debug
is specified at configure time. This ensures that each page on the
stack will be touched and we will generate a segfault on stack
overflow.
Over time we can gradually fix the following functions:
536 zfs:dsl_deadlist_regenerate
536 zfs:dsl_load_sets
536 zfs:zil_parse
544 zfs:zfs_ioc_recv
552 zfs:dsl_deadlist_insert_bpobj
552 zfs:vdev_dtl_sync
584 zfs:copy_create_perms
608 zfs:ddt_class_contains
608 zfs:ddt_prefetch
608 zfs:__dprintf
616 zfs:ddt_lookup
648 zfs:dsl_scan_ddt
696 zfs:dsl_deadlist_merge
736 zfs:ddt_zap_walk
744 zfs:dsl_prop_get_all_impl
872 zfs:dnode_evict_dbufs
It used to be the case that libuuid was part of e2fsprogs. However,
since other projects unrelated to e2fsprogs make use of that library
it was getting awkward to have that dependency. The package was
split from e2fsprogs and moved to its own devel package in the
util-linux-ng project. Anyway, I'm just updating the error message
to reflect the new upstream reality.
The cleanest way to do this is to set AM_LIBTOOLFLAGS = --silent. However,
AM_LIBTOOLFLAGS is not honored by automake-1.9.6-2.1 which is what I have
been using. To cleanly handle this I am updating to automake-1.11-3 which
is why it looks like there is a lot of churn in the Makefiles.
We need dependent packages to be able to include zfs_config.h to
build properly. This was partially solved previously by using
AH_BOTTOM to #undef common #defines (PACKAGE, VERSION, etc) which
autoconf always adds and cannot be easily removed. This solution
works as long as zfs_config.h is included before your project's
config.h. That turns out to be easier said than done. In particular,
this is a problem when your package includes its config.h using the
-include gcc option which ensures the first thing included is your
config.h.
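To make the ordering trap concrete, consider a hypothetical consumer
source file (the consumer's config.h and identifiers below are made up
for illustration); with the AH_BOTTOM approach this only compiled
cleanly if zfs_config.h came first, which -include makes impossible:

    /* consumer.c sketch: config.h is the dependent project's own header */
    #include "config.h"        /* defines PACKAGE, VERSION, ...           */
    #include "zfs_config.h"    /* old header: redefined then #undef'd     */
                               /* them via AH_BOTTOM; new header: they    */
                               /* are stripped out entirely               */

    const char *consumer_version = VERSION;   /* broke before, builds now */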
To handle all cases cleanly I have removed the AH_BOTTOM hack and
replaced it with an AC_CONFIG_HEADERS command. This command runs
immediately after zfs_config.h is written and with a little awk-foo
it strips the offending #defines from the file. This eliminates
the problem entirely and makes the header safe for inclusion.
After much contemplation I can't see a clean way to use udev entirely
in-tree for testing. This patch removes a horrible, horrible hack which
would copy the needed udev bits into place on your system to make it
work. That, however, is simply not acceptable; nothing you run in-tree
should ever, ever, ever install something on your system.
Since I could not come up with a clean way to use udev in-tree, the
fix is to simply parse the zdev config file and create the needed
symlinks in a subdirectory of your working tree. This is not as clean
as using udev but it does work perfectly well for in-tree testing.
Previously the ZFS configure was dependent on a correct Module{s}.symvers
file which is generated as one of the last steps of the full SPL build.
This meant you could not do a recursive configure, because that
configures all sub-packages before building any of them.
To resolve this issue the ZFS code has been updated to make a very
educated guess as to this file name at configure time. This means
SPL_SYMBOLS may still be used in various places in the build system
such as modules/Makefile.in. But we do give up the ability to
seamlessly detect symbols exported by the SPL at ZFS configure time.
At the moment this is not an issue; hopefully it will stay that way.
/lib/modules/$(uname -r)/source. This will likely fail when building
under a mock (http://fedoraproject.org/wiki/Projects/Mock) chroot
environment since `uname -r` will report the running kernel which
likely is not the kernel in your chroot. To cleanly handle this
we fall back to using the first kernel in your chroot.
The kernel-devel package which contains all the kernel headers and
a few build products such as Module.symver{s} is all that is required.
Full source is not needed.
Twice now I've been bitten by building against a kernel which is
configured such that it is incompatible with the CDDL license. These
build failures don't occur until the linking phase, at which point they
simply call out the offending symbol. No location information can be
provided at that point, so it can often be confusing what the problem is,
particularly when building against a new kernel for the first time.
To help address this I've added a configure check which can be extended
over time to detect known kernel config options which if set will break
the ZFS build. Currently I have just added CONFIG_DEBUG_LOCK_ALLOC which
makes mutexes GPL-only and is on by default in the RHEL6 alpha builds.
I know for a fact there are other similar options which can be added
as they are encountered.
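The shape of such a check, sketched here with hypothetical names rather
than the exact conftest shipped in configure, is a tiny CDDL-licensed
module that takes a mutex; with CONFIG_DEBUG_LOCK_ALLOC enabled the
mutex symbols become GPL-only and the module fails at the modpost/link
stage, which configure can turn into a clear error message up front:

    /* conftest.c sketch: fails to link when mutexes are GPL-only */
    #include <linux/module.h>
    #include <linux/mutex.h>

    static DEFINE_MUTEX(conftest_lock);

    static int __init
    conftest_init(void)
    {
            /* With CONFIG_DEBUG_LOCK_ALLOC this expands to
             * mutex_lock_nested(), which is exported GPL-only and
             * therefore unavailable to a CDDL module. */
            mutex_lock(&conftest_lock);
            mutex_unlock(&conftest_lock);
            return (0);
    }

    static void __exit
    conftest_exit(void)
    {
    }

    module_init(conftest_init);
    module_exit(conftest_exit);
    MODULE_LICENSE("CDDL");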
While I completely agree that udev is the lesser of many possible
evils when solving the device issue... it is still evil. After
attempting to craft a single rule which will work for various
versions of udev in various distros, I've come to the conclusion that
the only maintainable way to solve this issue is to split the rule
from any particular configuration.
This commit provides a generic 60-zpool.rules file which uses a
small helper util 'zpool_id' to parse a configuration file located
by default in /etc/zfs/zdev.conf. The helper script maps
a by-path udev name to a more friendly name of <channel><rank>
for large configurations.
As part of this change all of the support scripts which rely on
this udev naming convention have been updated as needed. Example
zdev.conf files have also been added for 3 different systems, but
you will always need to add one for your exact hardware.
Finally, included in these changes are the proper tweaks to the
build system to ensure everything still gets packaged properly
in the rpms and can run in or out of tree.