Commit Graph

761 Commits

Brian Behlendorf d1142fbffe Remove custom install-data-local for headers
Rather than use a custom install target it is cleaner to define
a 'kerneldir' and set 'kernel_HEADERS' appropriately.  This
allows us to leverage the existing configure install support.

Additionally, I took this opportunity to add the missing
makefiles to the include subdirectories.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2013-03-01 16:55:06 -08:00
Brian Behlendorf 0298f3d67f Add KMODDIR to install target
Provide a mechanism to control the directory name the modules
are installed in.  The kernel provides INSTALL_MOD_DIR for
this but it was hardcoded to be 'addon/spl'.

Add a KMODDIR variable which can be passed to 'make install'
to override the default directory name.  While we're here
change the default from 'addon/spl' to 'extra' which is the
kernel.org default.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2013-03-01 16:55:06 -08:00
Brian Behlendorf fea77534f0 Fix spl_config.h install permissions
The default permissions used by install are 755.  Since this
file isn't executable 644 is more appropriate.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2013-03-01 16:55:06 -08:00
Brian Behlendorf 8adf71e9b0 Remove INSTALL
The generic INSTALL instructions can be safely dropped.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2013-03-01 16:55:00 -08:00
Brian Behlendorf 4bf3909e51 Disable automatic log dumping
Long ago infrastructure was added to the SPL to keep an internal
debug log of the last few seconds of activity.  This was helpful
during the early development, but these days it is no longer
needed.  I haven't had to resort to this debug buffer to resolve
an issue for several years now.

Today, better and more generic tools like systemtap and ftrace
have evolved to the point where they can be used for this purpose.
Along with the stack trace dumped to the system console, and in
rare cases a crash dump, we almost always have the debug we need.

Therefore, I'm disabling the code which automatically dumps
this log to disk during an assertion except for the case where
spl_debug_panic_on_bug is set (disabled by default).

This should be viewed as a first step towards either:

  a) Retiring this infrastructure and complexity entirely, or
  b) Integrating this logging more properly with ftrace.

As part of this change I'm also removing from the packages the
undocumented spl utility which is used to decode the binary logs.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2013-02-05 16:13:27 -08:00
Richard Yao a0625691b3 Fix HAVE_MUTEX_OWNER_TASK_STRUCT autotools check on PPC64
The HAVE_MUTEX_OWNER_TASK_STRUCT check fails on PPC64 with the
following error:

error: 'current' undeclared (first use in this function)

We include linux/sched.h to ensure that current is available.
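
A hedged reconstruction of the kind of test program this configure
check compiles (not the verbatim m4 body); without linux/sched.h the
reference to 'current' fails to resolve on PPC64:

  #include <linux/mutex.h>
  #include <linux/sched.h>        /* makes 'current' available */

  void test_mutex_owner_task_struct(void)
  {
          struct mutex mtx;

          /* The check only cares whether 'owner' is a task_struct pointer. */
          mtx.owner = current;
  }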

Signed-off-by: Richard Yao <ryao@cs.stonybrook.edu>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2013-02-05 15:36:03 -08:00
Brian Behlendorf dd3678fc29 Fix atomic64_* autoconf checks
The SPL_AC_ATOMIC_SPINLOCK, SPL_AC_TYPE_ATOMIC64_CMPXCHG, and
SPL_AC_TYPE_ATOMIC64_XCHG checks were all directly including the
'asm/atomic.h' header.  As of Linux 3.4 this header was removed,
which results in a build failure.

The right thing to do is include 'linux/atomic.h' however we
can't safely do this because it doesn't exist in 2.6.26 kernels.
Therefore, we include 'linux/fs.h' which in turn includes the
correct atomic header regardless of the kernel version.

When these incorrect APIs are used in ZFS the following build
failure results.

  arc.c:791:80: warning: '__ret' may be used uninitialized
  in this function [-Wuninitialized]
  arc.c:791:1875: error: call to '__cmpxchg_wrong_size'
  declared with attribute error: Bad argument size for cmpxchg

Since this is all Linux 2.6.24 compatibility code there's
an argument to be made that it should be removed because
kernels this old are not supported.  However, because we're
so close to a release I'm going to leave it in place for now.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes zfsonlinux/zfs#814
Closes zfsonlinux/zfs#1254
2013-02-05 10:05:46 -08:00
Brian Behlendorf 869f30f1ae SPL 0.6.0-rc14 2013-02-01 11:24:54 -08:00
Brian Behlendorf 6ef94aa67a Fix tsd_get/set() race with tsd_exit/destroy()
The tsd_exit() and tsd_destroy() functions remove entries from
hash bins without taking the hash bin lock.  They do take the
table lock, but tsd_get() and tsd_set() only take the hash bin
lock to allow for maximum concurrency.

The result is that while tsd_get() and tsd_set() are traversing
the hash bin list it can be modified by another thread which
happens to hash to the same value.  To avoid this, add the needed
locking to tsd_exit() and tsd_destroy().

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #174
2013-01-31 13:54:59 -08:00
Brian Behlendorf de081a2ab4 Check for KALLSYMS
Check at ./configure time that the kernel was built with kallsyms
support.  If the kernel doesn't have CONFIG_KALLSYMS defined the
modules will still compile cleanly but will not be loadable.  So
we really want to catch this early during ./configure.  Note that
we do not require CONFIG_KALLSYMS_ALL but it may be safely defined.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #6
2013-01-29 16:35:23 -08:00
Eric Dillmann 3cbfd259b7 Define BE_IN16 & BE_IN32 for lz4 compression
The new lz4 compression algorithm, zfsonlinux/zfs@9759c60, requires
the generic BE_IN16 and BE_IN32 functions.  These are added to the SPL
for other consumers to take advantage of.
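
For illustration, a hedged sketch of how a consumer might use these
helpers (the buffer contents are made up):

  /* Read big-endian 16- and 32-bit values from a byte buffer. */
  const uint8_t buf[6] = { 0x12, 0x34, 0xde, 0xad, 0xbe, 0xef };
  uint16_t magic   = BE_IN16(buf);        /* 0x1234 */
  uint32_t version = BE_IN32(buf + 2);    /* 0xdeadbeef */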

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2013-01-29 09:30:23 -08:00
Brian Behlendorf 0936c3449f Add spl_kmem_cache_expire module option
Cache aging was implemented because it was part of the default Solaris
kmem_cache behavior.  The idea is that per-cpu objects which haven't been
accessed in several seconds should be returned to the cache.  On the other
hand Linux slabs never move objects back to the slabs unless there is
memory pressure on the system.

This behavior is now configurable through the 'spl_kmem_cache_expire'
module option.  The value is a bit mask with the following meaning.

  0x1 - Solaris style cache aging eviction is enabled.
  0x2 - Linux style low memory eviction is enabled.

Both methods may be safely enabled simultaneously, but by default
both are disabled.  It has never been clear if the kmem cache aging
(which has been around from day one) actually does any good.  It has
however been the source of numerous bugs so I wouldn't mind retiring
it entirely.
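
For reference, a minimal sketch of how a bitmask module option like
'spl_kmem_cache_expire' is typically declared (the description string
is illustrative, not the exact in-tree wording):

  unsigned int spl_kmem_cache_expire = 0;  /* 0x1 = age, 0x2 = low memory */
  module_param(spl_kmem_cache_expire, uint, 0644);
  MODULE_PARM_DESC(spl_kmem_cache_expire,
      "By age (0x1) or low memory (0x2), both may be enabled");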

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes zfsonlinux/zfs#1227
Closes #210
2013-01-28 09:34:12 -08:00
Brian Behlendorf 84dd1f4f15 Remove spl_invalidate_inodes()
This functionality is no longer required by ZFS, see commit
zfsonlinux/zfs@7b3e34ba5a.
Since there are no other consumers, and because it adds
additional autoconf complexity which must be maintained, the
spl_invalidate_inodes() function has been removed.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue zfsonlinux/zfs#795
2013-01-17 11:40:47 -08:00
Brian Behlendorf d4899f4747 kmem-cache: Fix slab ageing soft lockup
Commit a10287e00d slightly reworked
the slab ageing code such that it is no longer dependent on the
Linux delayed work queue interfaces.

This was good for portability and performance, but it requires us
to use the on_each_cpu() function to execute the spl_magazine_age()
function.  That means that the function is now executing in interrupt
context whereas before it was scheduled in normal process context.
And that means we need to be slightly more careful about the locking
in the interrupt handler.

With the reworked code it's possible that we'll be holding the
skc->skc_lock and be interrupted to handle the spl_magazine_age()
IRQ.  This will result in a deadlock and soft lockup errors unless
we're careful to detect the contention and avoid taking the lock in
the interrupt handler.  So that's what this patch does.
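
In sketch form the contention-avoidance pattern looks like the
following; the body of the ageing work is elided and the exact
fields are approximations:

  static void
  spl_magazine_age(void *data)
  {
          spl_kmem_cache_t *skc = data;

          /* Running in interrupt context via on_each_cpu(); never spin
           * on a lock the interrupted code path may already hold. */
          if (!spin_trylock(&skc->skc_lock))
                  return;         /* contended: skip ageing, no harm done */

          /* ... return idle per-cpu magazine objects to the slab ... */

          spin_unlock(&skc->skc_lock);
  }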

Alternately (and slightly more conventionally), we could have used
spin_lock_irqsave() to prevent this race entirely, but I'd prefer to
avoid disabling interrupts as much as possible due to performance
concerns.  There is absolutely no penalty for us not aging objects
out of the magazine due to contention.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Prakash Surya <surya1@llnl.gov>
Closes zfsonlinux/zfs#1193
2013-01-14 10:07:58 -08:00
Ned Bass 8842263bd0 call_usermodehelper() should wait for process
As of Linux 3.4 the UMH_WAIT_* constants were renumbered.  In
particular, the meaning of "1" changed from UMH_WAIT_PROC (wait for
process to complete), to UMH_WAIT_EXEC (wait for the exec, but not the
process).  A number of call sites used the number 1 instead of the
constant name, so the behavior was not as expected on kernels with
this change.
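
A hedged example of the corrected call-site pattern (the path and
environment are illustrative):

  char *argv[] = { "/bin/true", NULL };
  char *envp[] = { "HOME=/", "PATH=/sbin:/usr/sbin:/bin:/usr/bin", NULL };
  int rc;

  /* Use the named constant: waits for the helper process to exit on
   * both pre- and post-3.4 kernels, unlike the literal '1'. */
  rc = call_usermodehelper(argv[0], argv, envp, UMH_WAIT_PROC);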

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2013-01-09 16:54:19 -08:00
Brian Behlendorf 42b3ce622f Check for ZLIB_INFLATE and ZLIB_DEFLATE
Check at ./configure time that the kernel was built with zlib
support enabled.  This support may either be configured as a
module or builtin to the kernel.  But if it's missing the build
will fail so it's best to catch this early.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes zfsonlinux/zfs#582
2013-01-09 16:40:25 -08:00
Brian Behlendorf 050cd84e62 Linux compat 3.7.1, on_each_cpu()
Some kernels require that we include the 'linux/irqflags.h'
header for the SPL_AC_3ARGS_ON_EACH_CPU check.  Otherwise,
the functions local_irq_enable()/local_irq_disable() will not
be defined and the prototype will be misdetected as the four
argument version.

This change actually includes 'linux/interrupt.h' which in turn
includes 'linux/irqflags.h' to be as generic as possible.

Additionally, passing NULL as the function can result in a
gcc error because the on_each_cpu() macro executes it
unconditionally.  To make the test more robust we pass the
dummy function on_each_cpu_func().
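
The probe therefore looks roughly like this (a reconstruction for
illustration, not the literal m4 source):

  #include <linux/interrupt.h>    /* pulls in linux/irqflags.h */

  static void on_each_cpu_func(void *data) { }

  void test_on_each_cpu(void)
  {
          /* Compiles only when the three argument form is available. */
          on_each_cpu(on_each_cpu_func, NULL, 0);
  }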

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #204
2013-01-09 10:28:28 -08:00
Brian Behlendorf 1c7b3eaf87 RHEL 6.4 compat, fallocate()
In the upstream kernel the FALLOC_FL_PUNCH_HOLE #define was
introduced after the fallocate() function was moved from the
inode_operations to the file_operations structure.  Therefore,
the SPL code assumed that if FALLOC_FL_PUNCH_HOLE was defined
it was safe to use f_ops->fallocate().

Unfortunately, the RHEL6.4 kernel has only backported the
FALLOC_FL_PUNCH_HOLE #define and not the fallocate() change.

To address this compatibility issue the spl_filp_fallocate()
helper function was added to properly detect which interface
is available.
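
In sketch form the helper looks something like the following; the
HAVE_FILE_FALLOCATE macro name and field accesses are approximations
of the real compatibility code:

  static inline int
  spl_filp_fallocate(struct file *fp, int mode, loff_t offset, loff_t len)
  {
          int error = -EOPNOTSUPP;

  #ifdef HAVE_FILE_FALLOCATE
          /* Upstream kernels: fallocate() lives in file_operations. */
          if (fp->f_op && fp->f_op->fallocate)
                  error = fp->f_op->fallocate(fp, mode, offset, len);
  #else
          /* RHEL 6.4 style kernels: only the inode_operations hook exists. */
          if (fp->f_dentry && fp->f_dentry->d_inode &&
              fp->f_dentry->d_inode->i_op->fallocate)
                  error = fp->f_dentry->d_inode->i_op->fallocate(
                      fp->f_dentry->d_inode, mode, offset, len);
  #endif

          return (error);
  }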

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2013-01-08 09:53:13 -08:00
Matt Johnston 46a75aadb7 Add cv_wait_io() to account I/O time
Under Linux when a task is waiting on I/O it should call the
io_schedule() function for proper accounting.  The Solaris
cv_wait() function provides no way to specify what the cv
is waiting on; therefore, cv_wait_io() is introduced.
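
Usage is the same as cv_wait(); a hedged example of a caller waiting
on a hypothetical io_done flag:

  mutex_enter(&mtx);
  while (!io_done)
          cv_wait_io(&cv, &mtx);  /* wait time is accounted as I/O wait */
  mutex_exit(&mtx);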

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #206
2013-01-07 10:29:26 -08:00
Brian Behlendorf 02d25048d2 SPL 0.6.0-rc13 2012-12-20 11:01:47 -08:00
Brian Behlendorf 5b2fdbb69c Refresh AUTHORS
The AUTHORS file was getting stale.  Refresh its contents
using the authors listed in the git commit logs.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-19 09:40:18 -08:00
Brian Behlendorf dd5b6d96f1 Remove the ChangeLog
The ChangeLog was retired long ago; the git commit logs are
authoritative.  To avoid any confusion, remove the ChangeLog.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-19 09:28:18 -08:00
Brian Behlendorf 034f1b331e Fix spl_kmem_init_kallsyms_lookup() panic
Due to I/O buffering the helper may return successfully before
the proc handler has a chance to execute.  To catch this case,
wait up to 1 second to verify spl_kallsyms_lookup_name_fn was
updated to a non-SYMBOL_POISON value.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes zfsonlinux/zfs#699
Closes zfsonlinux/zfs#859
2012-12-19 09:06:35 -08:00
Richard Yao 30196bfd42 Do not use KERNEL_DIR env var in Makefile.am
A Gentoo user reported an issue where the build system would
attempt to recurse into the kernel source tree if KERNEL_DIR
is set in the environment. KERNEL_DIR is an environment variable
that is used when the kernel sources are in a non-standard
location, so it is necessary to stop relying on it to prevent
this issue.

https://bugs.gentoo.org/show_bug.cgi?id=433946

Signed-off-by: Richard Yao <ryao@cs.stonybrook.edu>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-17 10:59:12 -08:00
Brian Behlendorf 18e0c500a7 Merge branch 'taskq'
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #199
2012-12-12 10:45:48 -08:00
Brian Behlendorf eb0be2ed46 Removed SPL_AC_3ARGS_INIT_WORK check
All consumers of the kernel delayed work queues have been shifted
over to rely on the taskq implementation.  This compatibility code
can now be removed.  Any new callers which need this functionality
should use the taskq interfaces for delayed work items.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:57:10 -08:00
Brian Behlendorf 33e94ef1dd kmem-cache: Use a taskq for async allocations
Shift the asynchronous allocations over to use the taskq interfaces.
This allows us to abandon the kernel's delayed work queue interface
and all the compatibility code it requires.

This code never actually used the delay functionality; it was just
done this way to leverage the existing compatibility code.  All that
is required is a thread context to perform the allocation in.  The
only thing clever in this change is that we take advantage of the
preallocated task queue entries to avoid a memory allocation.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:56:54 -08:00
Brian Behlendorf a10287e00d kmem-cache: Use taskqs for ageing
Shift the cache and magazine ageing functionality over to the new
delayed taskq interfaces.  This allows us to abandon the kernel's
delayed work queue interface and all the compatibility code it
requires.

However, the delayed taskq interface does not allow us to schedule
a task for a specific cpu, so the ageing code was slightly reworked.
The magazine ageing delay has been directly linked to the cache
ageing function.  The spl_cache_age() function invokes on_each_cpu()
in order to run spl_magazine_age() on each cpu.  It then blocks
waiting for them to complete and promptly reclaims any free slabs.

While restructuring the code wasn't the primary goal, I think the
new code is far more understandable and maintainable.  It should also
help minimize magazine thrashing because free slabs are immediately
released after the magazine is aged.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:56:54 -08:00
Brian Behlendorf 296a8e596d kmem-cache: spl_kmem_cache_create() may always sleep
When this code was originally written I went overboard and allowed
for the possibility of creating a cache in an atomic context.  In
practice there are no callers which ever do this.  This makes sense
since a cache is by design a long lived data structure.

To prevent abuse of this function going forward I'm removing the
code which is supposed to handle an atomic context.  All allocators
have been updated to use KM_SLEEP and the might_sleep() debug macro
has been added to immediately detect atomic callers.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:56:54 -08:00
Brian Behlendorf a5a98e7260 splat taskq:front: Reduce stack frame
The slightly increased size of the taskq_ent_t when debugging is
enabled has pushed the taskq:front splat test over the frame size
limit.  To resolve this, dynamically allocate the taskq_ent_t
structures so they are part of the heap instead of the stack.

  In function 'splat_taskq_test6_impl'
  error: the frame size of 1648 bytes is larger than 1024 bytes

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:56:54 -08:00
Brian Behlendorf 94ff5d38e3 splat taskq:order: Reduce stack frame
The slightly increased size of the taskq_ent_t when debugging is
enabled has pushed the taskq:order splat test over the frame size
limit.  To resolve this, dynamically allocate the taskq_ent_t
structures so they are part of the heap instead of the stack.

  In function 'splat_taskq_test5_impl'
  error: the frame size of 1680 bytes is larger than 1024 bytes

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:56:54 -08:00
Brian Behlendorf 3238e71763 splat taskq:cancel: Add test case
Add a test case for taskq_cancel_id() to verify it is working
properly.  Just like taskq:delay we start by dispatching 100
tasks.  However this time 1/3 of the tasks use taskq_dispatch()
and will be run immediately, and 2/3 use taskq_dispatch_delay().
The idea is to create a busy taskq with active, pending,
and delayed tasks.

After all the items have been successfully dispatched the test
begins randomly canceling known task ids.  It will do this for
5 seconds, randomly canceling a task id and then sleeping for a
few milliseconds.  The task being canceled may have already run,
still be on the pending list, or may be currently being executed
by a worker thread.  The idea is to ensure we catch any subtle
race conditions.

Once all the non-canceled tasks have completed we cross check
the number of tasks which ran with the number of tasks which
were successfully canceled.  Additionally, we verify that the
taskq_cancel_id() function never blocks longer than needed.
This time is bounded by the longest run time of the task which
was dispatched.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:56:49 -08:00
Brian Behlendorf 2f35782620 splat taskq:delay: Add test case
Add a test case for taskq_dispatch_delay() to verify it is working
properly.  The test dispatches 100 tasks to a taskq with random
expiration times spread over 5 seconds.  As each task expires and
gets executed by a worker thread it verifies that it was run at
the correct time.  Once all the delayed tasks have been executed
we double check that all the dispatched tasks were successful.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:54:07 -08:00
Brian Behlendorf d9acd930b5 taskq delay/cancel functionality
Add the ability to dispatch a delayed task to a taskq.  The desired
behavior is for the task to be queued but not executed by a worker
thread until the expiration time is reached.  To achieve this two
new functions were added.

* taskq_dispatch_delay() -

  This function behaves exactly like taskq_dispatch() however it
takes a third 'expire_time' argument.  The caller should pass the
desired time the task should be executed as an absolute value in
jiffies.  The task is guaranteed not to run before this time; it
may run slightly later if all the worker threads are busy.

* taskq_cancel_id() -

  Given a task id attempt to cancel the task before it gets executed.
This is primarily useful for canceling delay tasks but can be used for
canceling any previously dispatched task.  There are three possible
return values.

  0      - The task was found and canceled before it was executed.
  ENOENT - The task was not found, either it was already run or an
           invalid task id was supplied by the caller.
  EBUSY  - The task is currently executing and may not be canceled.
           This function will block until the task has been completed.

* taskq_wait_all() -

  The taskq_wait_id() function was renamed taskq_wait_all() to more
clearly reflect its actual behavior.  It is currently only used by
the splat taskq regression tests.

* taskq_wait_id() -

  Historically, the only difference between this function and
taskq_wait() was that you passed the task id.  In both functions you
would block until ALL lower task ids had executed.  This was
semantically correct but could be very slow, particularly if there
were delayed tasks submitted.

  To better accommodate the delayed tasks this function was reimplemented.
It will now only block until the passed task id has been completed.

This is actually a fairly low risk change for a few reasons.

* Only new ZFS callers will make use of the new interfaces and
  very little common code was changed to support the new functions.

* The existing taskq_wait() implementation was not changed, just
  slightly refactored.

* The newly optimized taskq_wait_id() implementation was never
  used by ZFS, so we can't accidentally introduce a new bug there.

NOTE: This functionality does not exist in the Illumos taskqs.
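
Putting the new interfaces together, a hedged usage sketch (the taskq
handle, callback, and delay are illustrative):

  taskqid_t id;

  /* Queue my_func(arg); it will not run for at least 5 seconds. */
  id = taskq_dispatch_delay(tq, my_func, arg, TQ_SLEEP,
      ddi_get_lbolt() + 5 * HZ);

  /* Later, attempt to cancel it before it executes. */
  switch (taskq_cancel_id(tq, id)) {
  case 0:       /* found and canceled before it ran */
          break;
  case ENOENT:  /* already ran, or the id was invalid */
          break;
  case EBUSY:   /* was executing; the call blocked until it completed */
          break;
  }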

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:54:07 -08:00
Brian Behlendorf aed8671cb0 taskq style, remove #define wrappers
When the taskq implementation was originally written I wrapped all
the API functions in #define's.  This was done as a preventative
measure to ensure that a taskq symbol never conflicted with an
existing kernel symbol.

However, in practice the taskq symbols never conflicted.  The only
major conflicts occurred with the kmem cache API.  Since this added
layer of obfuscation never bought us anything for the taskqs, I'm
removing it.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:54:07 -08:00
Brian Behlendorf 472a34caff taskq style, convert spaces to soft tabs
Update the taskq implementation to conform with the style used
throughout the rest of the code.  There are no functional
changes in this commit.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:54:07 -08:00
Steven Johnson 794f145bf9 splat linux:shrinker: Fix fail-safe
Ensure the fail-safe is reset between successive tests.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:04:29 -08:00
Steven Johnson ca072ee70f splat linux:shrinker: Fix race condition
Ensure the test thread blocks until the shrinker has completed its
work.  This is done by putting the test thread to sleep and waking
it each time the shrinker callback runs.  Once the shrinker size
drops to zero or we time out, the test is allowed to proceed.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #96
Closes #125
Closes #182
2012-12-12 09:04:11 -08:00
Brian Behlendorf 576ec6aac4 splat command verbose behavior
The splat command takes a verbose option which when set prints
the internal debug log for every test.  This is helpful when
tracking down a common failure, but for a rare failure the
volume of log data is distracting.

Therefore, the verbose option has been adjusted to allow only
printing the debug log on failure.  The legacy behavior is still
available by specifying the verbose option twice.  For example:

$ splat -t all:all     # Never print the debug log
$ splat -v -t all:all  # Only print debug log on failure
$ splat -vv -t all:all # Always print the debug log

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-11 15:08:19 -08:00
Steven Johnson 9b88fa165f splat taskq:front: Fix race
The taskq:front test has a race condition where tasks 4 and 8
race to complete, due to an incorrectly calculated set of delay
"factors" (T). If task 4 wins and actually finishes first, the
verification of the order of completion will fail.

The delays calculated to order task completion do not take into
account the terminal line in the table, and so are all off by
a factor of 1. This causes all the tasks in all queues to finish
sooner than expected, and the accumulated error is the root cause
of tasks 4 and 8 racing to complete first.  Before the change the
"actual" table looks like I commented in #130.

I changed:

* the table in the comment to correctly reflect the test and the
  factor timings needed.
* the individual task delay factors of T so that ONLY 1 task will
  complete every 2T (on average).
* 1T was reduced from 100ms to 50ms. This halves the duration of
  the test and makes any remaining raciness more likely to cause
  failures, but it did not cause the test to fail.
* simplified the delay factor logic by using a table look-up
  instead of a switch.
* Added a "task started" message so that with -v it is possible
  to see the order tasks are started.
* Moved the "task completed" message inside the spinlock so that
  with -v the message truly reflects the absolute order of
  completion as guaranteed by the spinlock.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #130
2012-12-05 12:23:40 -08:00
Brian Behlendorf 053678f3b0 Handle errors from spl_kern_path_locked()
When the Linux 3.6 KERN_PATH_LOCKED compatibility code was added
by commit bcb1589 an entirely new vn_remove() implementation was
added.  That function did not properly handle an error from
spl_kern_path_locked() which would result in a panic.  This
patch addresses the issue by returning the error to the caller.
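
The fix amounts to a check of roughly this shape (the return type and
sign convention of the compat helper are assumptions):

  struct dentry *dentry;

  dentry = spl_kern_path_locked(path, &parent_path);
  if (IS_ERR(dentry))
          return (-PTR_ERR(dentry));      /* propagate instead of panicking */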

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #187
2012-12-03 12:06:25 -08:00
Brian Behlendorf b84412a6e8 Linux compat 3.7, kernel_thread()
The preferred kernel interface for creating threads has been
kthread_create() for a long time now.  However, several of the
SPLAT tests still use the legacy kernel_thread() function which
has finally been dropped (mostly).

Update the condvar and rwlock SPLAT tests to use the modern
interface.  Frankly this is something we should have done a
long time ago.
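
For reference, a minimal sketch of the modern interface the tests
were moved to (the thread function and name are placeholders):

  struct task_struct *tsk;

  tsk = kthread_create(my_thread_func, my_data, "splat-thread");
  if (IS_ERR(tsk))
          return (PTR_ERR(tsk));

  wake_up_process(tsk);   /* or use kthread_run() to create and start */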

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #194
2012-12-03 09:36:21 -08:00
Brian Behlendorf 251677e98f Verify --with-linux source directory exists
Previously this check was only performed when ./configure was
attempting to autodetect your kernel source directory.  But we
should also handle the case where --with-linux was provided
and is obviously wrong.  This way we catch the error before
invoking make and compiling the source with incorrect autoconf
results.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #162
2012-11-29 15:05:54 -08:00
Brian Behlendorf 043f9b5724 Disable FS reclaim when allocating new slabs
Allowing the spl_cache_grow_work() function to reclaim inodes
allows for two unlikely deadlocks.  Therefore, we clear __GFP_FS
for these allocations.  The two deadlocks are:

* While holding the ZFS_OBJ_HOLD_ENTER(zsb, obj1) lock a function
  calls kmem_cache_alloc() which happens to need to allocate a
  new slab.  To allocate the new slab we enter FS level reclaim
  and attempt to evict several inodes.  To evict these inodes we
  need to take the ZFS_OBJ_HOLD_ENTER(zsb, obj2) lock and it
  just happens that obj1 and obj2 use the same hashed lock.

* Similar to the first case however instead of getting blocked
  on the hash lock we block in txg_wait_open() which is waiting
  for the next txg which isn't coming because the txg_sync
  thread is blocked in kmem_cache_alloc().

Note this isn't a 100% fix because vmalloc() won't strictly
honor __GFP_FS.  However, in practice this is sufficient because
several very unlikely things must all occur concurrently.
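
The core of the change is simply masking the flag for the slab grow,
roughly as follows (the exact flag derivation is an approximation):

  /* Never re-enter filesystem reclaim while growing a slab. */
  gfp_t lflags = (GFP_KERNEL | __GFP_NORETRY) & ~__GFP_FS;

  /* ... pass 'lflags' to the slab allocation performed by
   * spl_cache_grow_work() ... */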

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue zfsonlinux/zfs#1101
2012-11-27 13:43:27 -08:00
Brian Behlendorf e71a4534b3 SPL 0.6.0-rc12 2012-11-13 14:28:25 -08:00
Brian Behlendorf 366346c565 Merge branch 'kmem-cache-optimization'
This branch contains kmem cache optimizations designed to resolve
the lockups reported in zfsonlinux/zfs#922.  The lockups were
largely the result of spin lock contention in the slab under low
memory conditions.  Fundamentally, these changes are all designed
to minimize that contention through a variety of methods.

  * Improved vmem cached deadlock detection
  * Track emergency objects in rbtree
  * Optimize spl_kmem_cache_free()
  * Never spin in kmem_cache_alloc()

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
zfsonlinux/zfs#922
2012-11-08 11:09:17 -08:00
Brian Behlendorf dc1b30224f Never spin in kmem_cache_alloc()
If we are reaping from the cache and a concurrent allocation
occurs then the caller must block until the reaping is complete.
This is signaled by the clearing of the KMC_BIT_REAPING bit.

Otherwise the caller will be in a tight loop which takes and
releases the skc->skc_cache lock.  When there are multiple
concurrent callers the system will thrash on the lock and
appear to lock up.
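
In sketch form the allocation path now sleeps rather than spins (the
wait queue and flag field names are assumptions based on the bit
named above):

  /* Block until the reaper clears KMC_BIT_REAPING instead of
   * repeatedly taking and releasing the cache lock in a tight loop. */
  wait_event(skc->skc_waitq,
      !test_bit(KMC_BIT_REAPING, &skc->skc_flags));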

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 15:48:39 -08:00
Brian Behlendorf a1af8fb1ea Optimize spl_kmem_cache_free()
Because only virtual slabs may have emergency objects, and these
objects are guaranteed to have physical addresses, it can be easily
determined whether the passed object is a virtual slab object or an
emergency object.  This allows us to completely optimize the
emergency object free case out of the common free path.
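
The resulting fast-path test is roughly the following (the helper
names and KMC_VMEM flag check are approximations):

  /* Emergency objects only exist for virtual-address caches, and they
   * are always physically addressed, so a vmalloc address can never
   * be an emergency object. */
  if (unlikely((skc->skc_flags & KMC_VMEM) && !is_vmalloc_addr(obj)))
          spl_emergency_free(skc, obj);   /* rare slow path */
  else
          spl_cache_free(skc, obj);       /* common fast path */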

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 14:54:19 -08:00
Brian Behlendorf ed3163484d Track emergency object in rbtree
In the initial implementation emergency objects were tracked on a
per-cache list.  The assumption was that under normal operation we
would never allocate more than a handful of these objects.  So the
cost of walking the list during free was expected to be negligible.

However real world usage has shown that emergency objects tend to
be allocated in batches.  A deadlock will be detected and several
thousand emergency objects will be allocated before the original
blocked slab allocation can complete.

Therefore, the original list has been replaced by a red-black tree
which is sorted by the memory address of each allocated object.
This bounds the worst case insertion and removal time to O(log n),
which minimizes contention on the associated spin lock.
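
A hedged sketch of the address-keyed insertion using the kernel
rbtree API (the structure and field names are approximations):

  static int
  spl_emergency_insert(struct rb_root *root, spl_kmem_emergency_t *ske)
  {
          struct rb_node **new = &root->rb_node, *parent = NULL;

          while (*new) {
                  spl_kmem_emergency_t *this =
                      rb_entry(*new, spl_kmem_emergency_t, ske_node);

                  parent = *new;
                  if (ske->ske_obj < this->ske_obj)
                          new = &((*new)->rb_left);
                  else if (ske->ske_obj > this->ske_obj)
                          new = &((*new)->rb_right);
                  else
                          return (0);     /* duplicate address, unexpected */
          }

          rb_link_node(&ske->ske_node, parent, new);
          rb_insert_color(&ske->ske_node, root);

          return (1);
  }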

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 14:54:19 -08:00
Brian Behlendorf 165f13c33a Improved vmem cached deadlock detection
The entire goal of performing the slab allocations asynchronously
is to be able to detect when a vmalloc() deadlocks.  In this case,
and only this case, do we want to start allocating emergency objects.
The trick here is to minimize false positives because the overhead
of tracking emergency objects is far higher than normal slab objects.

With that goal in mind the code was reworked to be less sensitive
to slow allocations by increasing the wait time.  Once a cache is
marked deadlocked all subsequent allocations which cannot be
satisfied with existing cache objects will immediately allocate new
emergency objects.  This behavior persists until the asynchronous
allocation completes and clears the deadlocked flag.

The result of these tweaks is that far fewer emergency objects
get created, which is important because this minimizes the cost of
releasing them later in kmem_cache_free().

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 14:54:15 -08:00