Commit Graph

8176 Commits

Author SHA1 Message Date
Brian Behlendorf 18e0c500a7 Merge branch 'taskq'
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #199
2012-12-12 10:45:48 -08:00
Brian Behlendorf eb0be2ed46 Removed SPL_AC_3ARGS_INIT_WORK check
All consumers of the kernel delayed work queues have been shifted
over to rely on the taskq implementation.  This compatibility code
can now be removed.  Any new callers which need this functionality
should use the taskq interfaces for delayed work items.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:57:10 -08:00
Brian Behlendorf 33e94ef1dd kmem-cache: Use a taskq for async allocations
Shift the asynchronous allocations over to use the taskq interfaces.
This allows us to abandon the kernel's delayed work queue interface
and all the compatibility code it requires.

This code never actually used the delay functionality; it was just
done this way to leverage the existing compatibility code.  All that
is required is a thread context to perform the allocation in.  The
only thing clever in this change is that we take advantage of the
preallocated task queue entries to avoid a memory allocation.
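
One way to avoid the allocation, sketched with the SPL's
taskq_init_ent()/taskq_dispatch_ent() interfaces (the request
structure and function names here are hypothetical, not the
actual implementation):

  #include <sys/taskq.h>

  /* Embed the taskq entry in the request itself so the dispatch
   * path performs no memory allocation of its own. */
  typedef struct my_async_req {
          taskq_ent_t     ar_tqe;         /* preallocated entry */
          void            *ar_data;       /* hypothetical payload */
  } my_async_req_t;

  static void
  my_async_func(void *arg)
  {
          /* thread context: a safe place to perform the allocation */
  }

  static void
  my_dispatch(taskq_t *tq, my_async_req_t *req)
  {
          taskq_init_ent(&req->ar_tqe);
          taskq_dispatch_ent(tq, my_async_func, req, TQ_SLEEP,
              &req->ar_tqe);
  }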

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:56:54 -08:00
Brian Behlendorf a10287e00d kmem-cache: Use taskqs for ageing
Shift the cache and magazine ageing functionality over to the new
delayed taskq interfaces.  This allows us to abandon the kernel's
delayed work queue interface and all the compatibility code it
requires.

However, the delayed taskq interface does not allow us to schedule
a task for a specific cpu, so the ageing code was slightly reworked.
The magazine ageing delay has been directly linked to the cache
ageing function.  The spl_cache_age() function invokes on_each_cpu()
in order to run spl_magazine_age() on each cpu.  It then blocks
waiting for them to complete and promptly reclaims any free slabs.

While restructuring the code wasn't the primary goal, I think the
new code is far more understandable and maintainable.  It should also
help minimize magazine thrashing because free slabs are immediately
released after the magazine is aged.
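
Sketched with Linux's on_each_cpu(), the shape of the reworked
ageing path looks roughly like this (bodies and locking elided;
a sketch, not the actual implementation):

  #include <linux/smp.h>

  static void
  spl_magazine_age(void *data)
  {
          /* runs on each cpu: age that cpu's magazine */
  }

  static void
  spl_cache_age(void *data)
  {
          /* wait == 1 blocks until every cpu has finished */
          on_each_cpu(spl_magazine_age, data, 1);

          /* ... then promptly reclaim any free slabs ... */
  }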

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:56:54 -08:00
Brian Behlendorf 296a8e596d kmem-cache: spl_kmem_cache_create() may always sleep
When this code was originally written I went overboard and allowed
for the possibility of creating a cache in an atomic context.  In
practice there are no callers which ever do this.  This makes sense
since a cache is by design a long lived data structure.

To prevent abuse of this function going forward I'm removing the
code which was supposed to handle an atomic context.  All allocators
have been updated to use KM_SLEEP and the might_sleep() debug macro
has been added to immediately detect atomic callers.
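
A minimal sketch of the guard (the wrapper and its signature are
hypothetical; might_sleep() is the standard kernel debug macro):

  #include <linux/kernel.h>
  #include <sys/kmem.h>

  static void *
  my_create_buf(size_t size)
  {
          /* Under CONFIG_DEBUG_ATOMIC_SLEEP this immediately
           * complains about any caller in atomic context. */
          might_sleep();

          /* Safe to unconditionally sleep for memory now. */
          return (kmem_alloc(size, KM_SLEEP));
  }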

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:56:54 -08:00
Brian Behlendorf a5a98e7260 splat taskq:front: Reduce stack frame
The slightly increased size of the taskq_ent_t when debugging is
enabled has pushed the taskq:front splat test over the frame size
limit.  To resolve this, dynamically allocate the taskq_ent_t
structures so they are part of the heap instead of the stack.

  In function 'splat_taskq_test6_impl'
  error: the frame size of 1648 bytes is larger than 1024 bytes
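
The remedy follows the usual pattern for an oversized kernel stack
frame: move the large array to the heap.  A sketch, with a
hypothetical entry count:

  #include <sys/kmem.h>
  #include <sys/taskq.h>

  #define TEST_ENTRIES    8       /* hypothetical count */

  static int
  my_taskq_test(void)
  {
          taskq_ent_t *tqes;

          /* previously: taskq_ent_t tqes[TEST_ENTRIES]; on the stack */
          tqes = kmem_alloc(sizeof (taskq_ent_t) * TEST_ENTRIES,
              KM_SLEEP);
          if (tqes == NULL)
                  return (-ENOMEM);

          /* ... dispatch the test tasks using &tqes[i] ... */

          kmem_free(tqes, sizeof (taskq_ent_t) * TEST_ENTRIES);
          return (0);
  }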

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:56:54 -08:00
Brian Behlendorf 94ff5d38e3 splat taskq:order: Reduce stack frame
The slightly increased size of the taskq_ent_t when debugging is
enabled has pushed the taskq:order splat test over the frame size
limit.  To resolve this, dynamically allocate the taskq_ent_t
structures so they are part of the heap instead of the stack.

  In function 'splat_taskq_test5_impl'
  error: the frame size of 1680 bytes is larger than 1024 bytes

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:56:54 -08:00
Brian Behlendorf 3238e71763 splat taskq:cancel: Add test case
Add a test case for taskq_cancel_id() to verify it is working
properly.  Just like taskq:delay we start by dispatching 100
tasks.  However, this time 1/3 of the tasks use taskq_dispatch()
and will be run immediately, and 2/3 use taskq_dispatch_delay().
The idea is to create a busy taskq with a mix of active, pending,
and delayed tasks.

After all the items have been successfully dispatched the test
begins randomly canceling known task ids.  It will do this for
5 seconds, randomly canceling a task id and then sleeping for a
few milliseconds.  The task being canceled may have already run,
still be on the pending list, or may be currently being executed
by a worker thread.  The idea is to ensure we catch any subtle
race conditions.

Once all the non-canceled tasks have completed we cross check
the number of tasks which ran with the number of tasks which
were successfully canceled.  Additionally, we verify that the
taskq_cancel_id() function never blocks longer than needed.
This time is bounded by the longest run time of the task which
was dispatched.
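
Checking a cancellation result then looks roughly like the sketch
below (the wrapper is illustrative; the return values are those
documented in the taskq delay/cancel commit further down):

  #include <sys/taskq.h>

  static void
  my_try_cancel(taskq_t *tq, taskqid_t id)
  {
          int rc = taskq_cancel_id(tq, id);

          if (rc == 0) {
                  /* canceled before the task ever ran */
          } else if (rc == ENOENT) {
                  /* the task already ran, or the id was invalid */
          } else if (rc == EBUSY) {
                  /* the task was executing; we blocked until done */
          }
  }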

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:56:49 -08:00
Brian Behlendorf 2f35782620 splat taskq:delay: Add test case
Add a test case for taskq_dispatch_delay() to verify it is working
properly.  The test dispatches 100 tasks to a taskq with random
expiration times spread over 5 seconds.  As each task expires and
gets executed by a worker thread it verifies that it was run at
the correct time.  Once all the delayed tasks have been executed
we double check that all the dispatched tasks were successful.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:54:07 -08:00
Brian Behlendorf d9acd930b5 taskq delay/cancel functionality
Add the ability to dispatch a delayed task to a taskq.  The desired
behavior is for the task to be queued but not executed by a worker
thread until the expiration time is reached.  To achieve this two
new functions were added.

* taskq_dispatch_delay() -

  This function behaves exactly like taskq_dispatch(); however, it
takes a third 'expire_time' argument.  The caller should pass the
desired time the task should be executed as an absolute value in
jiffies.  The task is guaranteed not to run before this time; it
may run slightly later if all the worker threads are busy.  (Usage
is sketched below, after this list.)

* taskq_cancel_id() -

  Given a task id attempt to cancel the task before it gets executed.
This is primarily useful for canceling delayed tasks but can be used for
canceling any previously dispatched task.  There are three possible
return values.

  0      - The task was found and canceled before it was executed.
  ENOENT - The task was not found, either it was already run or an
           invalid task id was supplied by the caller.
  EBUSY  - The task is currently executing and may not be canceled.
           This function will block until the task has been completed.

* taskq_wait_all() -

  The taskq_wait_id() function was renamed taskq_wait_all() to more
clearly reflect its actual behavior.  It is currently only used by
the splat taskq regression tests.

* taskq_wait_id() -

  Historically, the only difference between this function and
taskq_wait() was that you passed the task id.  In both functions you
would block until ALL lower task ids had executed.  This was
semantically correct but could be very slow, particularly if there
were delayed tasks submitted.

  To better accommodate the delayed tasks this function was reimplemented.
It will now only block until the passed task id has been completed.
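
Taken together, usage of the new interfaces might look like the
following sketch (the 5 second delay, callback, and argument are
illustrative only):

  #include <sys/taskq.h>

  static void
  my_task_func(void *arg)
  {
          /* executed by a worker thread once the delay expires */
  }

  static void
  my_example(taskq_t *tq, void *arg)
  {
          taskqid_t id;

          /* 'expire_time' is absolute, in jiffies */
          id = taskq_dispatch_delay(tq, my_task_func, arg, TQ_SLEEP,
              ddi_get_lbolt() + 5 * HZ);

          /* attempt to cancel the task before it runs */
          if (taskq_cancel_id(tq, id) == 0) {
                  /* the task never executed */
          }
  }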

This is actually a fairly low risk change for a few reasons.

* Only new ZFS callers will make use of the new interfaces and
  very little common code was changed to support the new functions.

* The existing taskq_wait() implementation was not changed, just
  slightly refactored.

* The newly optimized taskq_wait_id() implementation was never
  used by ZFS, so we can't accidentally introduce a new bug there.

NOTE: This functionality does not exist in the Illumos taskqs.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:54:07 -08:00
Brian Behlendorf aed8671cb0 taskq style, remove #define wrappers
When the taskq implementation was originally written I wrapped all
the API functions in #define's.  This was done as a preventative
measure to ensure that a taskq symbol never conflicted with an
existing kernel symbol.

However, in practice the taskq symbols never conflicted.  The only
major conflicts occurred with the kmem cache API.  Since this added
layer of obfuscation never bought us anything for the taskq's I'm
removing it.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:54:07 -08:00
Brian Behlendorf 472a34caff taskq style, convert spaces to soft tabs
Update the taskq implementation to conform with the style used
throughout the rest of the code.  There are no functional
changes in this commit.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:54:07 -08:00
Steven Johnson 794f145bf9 splat linux:shrinker: Fix fail-safe
Ensure the fail-safe is reset between successive tests.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-12 09:04:29 -08:00
Steven Johnson ca072ee70f splat linux:shrinker: Fix race condition
Ensure the test thread blocks until the shrinker has completed its
work.  This is done by putting the test thread to sleep and waking
it each time the shrinker callback runs.  Once the shrinker size
drops to zero, or we time out, the test is allowed to proceed.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #96
Closes #125
Closes #182
2012-12-12 09:04:11 -08:00
Brian Behlendorf 576ec6aac4 splat command verbose behavior
The splat command takes a verbose option which when set prints
the internal debug log for every test.  This is helpful when
tracking down a common failure, but for a rare failure the
volume of log data is distracting.

Therefore, the verbose option has been adjusted to allow only
printing the debug log on failure.  The legacy behavior is still
available by specifying the verbose option twice.  For example:

$ splat -t all:all     # Never print the debug log
$ splat -v -t all:all  # Only print debug log on failure
$ splat -vv -t all:all # Always print the debug log

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-12-11 15:08:19 -08:00
Richard Yao e4d89e9cfc Switch KM_SLEEP to KM_PUSHPAGE
When writes to zvols invoke ZIL, zfs_range_new_proxy() is called,
which allocates memory using KM_SLEEP, triggering a warning.
Switch to KM_PUSHPAGE to silence that warning.  See commit
b8d06fca08 for additional details.
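
Schematically the change is a one-flag substitution (the allocation
shown is illustrative, not the actual zfs_range_new_proxy() code):

  /* before: may enter FS reclaim and deadlock/warn in ZIL context */
  proxy = kmem_alloc(sizeof (*proxy), KM_SLEEP);

  /* after: may write out dirty pages, but won't re-enter the FS */
  proxy = kmem_alloc(sizeof (*proxy), KM_PUSHPAGE);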

Signed-off-by: Richard Yao <ryao@cs.stonybrook.edu>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1138
2012-12-10 09:44:45 -08:00
Brian Behlendorf 53c7411919 Revert "Fix unlink/xattr deadlock"
This reverts commit b00131d43c which
is no longer needed due to e89260a1c8.

This change forces all xattr znodes to hold a reference on their
parent which ensures prune_icache() will never attempt to evict
both the parent and child concurrently.  This effectively prevents
the deadlock condition from ever occurring.

Therefore we can safely revert back to the upstream synchronous
cleanup code.  This is nice because it keeps our code base closer
to upstream and resolves the performance issues introduced by the
original deadlock fix.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #457
2012-12-05 13:41:30 -08:00
Brian Behlendorf d3aa3ea96e Preserve inode mtime/ctime in .writepage()
When updating a file via mmap()'ed I/O preserve the mtime/ctime
which were updated when the page was made writable by the generic
callback filemap_page_mkwrite().

But more importantly than preserving the exact time, add the missing
call to sa_bulk_update().  This ensures that the znode modifications
are written to disk as part of the transaction.  Without this the
inode may mistakenly roll back to the previous on-disk znode state.

Additionally, for mmap()'ed znodes explicitly set the atime, mtime,
and ctime on close using the up-to-date values in the inode.  This
is critical because writepage() may occur after close and on close
we need to ensure the values are correct.

Original-patch-by: Richard Yao <ryao@cs.stonybrook.edu>
Signed-off-by: Richard Yao <ryao@cs.stonybrook.edu>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #764
2012-12-05 13:00:25 -08:00
Steven Johnson 9b88fa165f splat taskq:front: Fix race
The taskq:front test has a race condition where task 4 and 8
race to complete, due to an incorrectly calculated set of delay
"factors" (T). If task 4 wins and actually finishes first, the
verification of the order of completion will fail.

The delays calculated to order task completion do not take into
account the terminal line in the table, and so are all off by
a factor of 1. This causes all the tasks in all queues to finish
sooner than expected and the accumulated error is the root cause
of tasks 4 and 8 racing to complete first. Before the change the
"actual" table looks like I commented in #130.

I changed:

* the table in the comment to correctly reflect the test and the
  factor timings needed.
* the individual task delay factors of T so that ONLY 1 task will
  complete every 2T (on average).
* 1T was reduced from 100ms to 50ms. This halves the duration of
  the test and makes any remaining raciness more likely to cause
  failures, but it did not cause the test to fail.
* simplified the delay factor logic by using a table look-up
  instead of a switch.
* Added a "task started" message so that with -v it is possible
  to see the order tasks are started.
* Moved the "task completed" message inside the spinlock so that
  with -v the message truly reflects the absolute order of
  completion as guaranteed by the spinlock.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #130
2012-12-05 12:23:40 -08:00
Jorgen Lundman 53c2ec1d1b Fix 'zpool create' segfault due to bad syntax
Incorrect syntax should never cause a segfault.  In this case
listing multiple comma-delimited options after '-o' triggered
the problem.  For example:

  zpool create -o ashift=12,listsnaps=on

This patch resolves the issue by wrapping the calls which use
hdr with a NULL test.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1118
2012-12-04 11:15:25 -08:00
Ned Bass 2957f38d78 vdev_id support for device link aliases
Add a vdev_id feature to map device names based on already defined
udev device links.  To increase the odds that vdev_id will run after
the rules it depends on, increase the vdev.rules rule number from 60
to 69.  With this change, vdev_id now provides functionality analogous
to zpool_id and zpool_layout, paving the way to retire those tools.

A defined alias takes precedence over a topology-derived name, but the
two naming methods can otherwise coexist. For example, one might name
drives in a JBOD with the sas_direct topology while naming an internal
L2ARC device with an alias.

For example, the following lines in vdev_id.conf will result in the
creation of links /dev/disk/by-vdev/{d1,d2}, each pointing to the same
target as the device link specified in the third field.

  #     by-vdev
  #     name     fully qualified or base name of device link
  alias d1       /dev/disk/by-id/wwn-0x5000c5002de3b9ca
  alias d2       wwn-0x5000c5002def789e

Also perform some minor vdev_id cleanup, such as removal of the unused
-s command line option.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #981
2012-12-03 14:04:47 -08:00
Brian Behlendorf e89260a1c8 Directory xattr znodes hold a reference on their parent
Unlike normal file or directory znodes, an xattr znode is
guaranteed to only have a single parent.  Therefore, we can
take a reference on that parent if it is provided at create
time and cache it.  Additionally, we take care to cache it
on any subsequent zfs_zaccess() where the parent is provided
as an optimization.

This allows us to avoid needing to do a zfs_zget() when
setting up the SELinux security xattr in the create path.
This is critical because a hash lookup on the directory
will deadlock since it is locked.

The zpl_xattr_security_init() call has also been moved up
to the zpl layer to ensure TXs to create the required
xattrs are performed after the create TX.  Otherwise we
run the risk of deadlocking on the open create TX.

Ideally the security xattr should be fully constructed
before the new inode is unlocked.  However, doing so would
require far more extensive changes to ZFS.

This change may also have the beneficial side effect of
ensuring xattr directory znodes are evicted from the cache
before normal file or directory znodes due to the extra
reference.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #671
2012-12-03 12:10:46 -08:00
Brian Behlendorf 053678f3b0 Handle errors from spl_kern_path_locked()
When the Linux 3.6 KERN_PATH_LOCKED compatibility code was added
by commit bcb1589, an entirely new vn_remove() implementation was
introduced.  That function did not properly handle an error from
spl_kern_path_locked(), which would result in a panic.  This
patch addresses the issue by returning the error to the caller.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #187
2012-12-03 12:06:25 -08:00
Turbo Fredriksson 645fb9cc21 Implemented sharing datasets via SMB using libshare
Add the initial support for the 'smbshare' option using the
existing libshare infrastructure.  Because this implementation
relies on usershares, Samba version 3.0.23 is required.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #493
2012-12-03 09:42:15 -08:00
Brian Behlendorf b84412a6e8 Linux compat 3.7, kernel_thread()
The preferred kernel interface for creating threads has been
kthread_create() for a long time now.  However, several of the
SPLAT tests still use the legacy kernel_thread() function which
has finally been dropped (mostly).

Update the condvar and rwlock SPLAT tests to use the modern
interface.  Frankly this is something we should have done a
long time ago.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #194
2012-12-03 09:36:21 -08:00
Cyril Plisko 4588bf5701 Make zpool attach -o ashift=... actually work
Commit df83110856 missed updating the
getopt() call while delivering all the rest.  This commit adds
"o" to getopt().

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #566
2012-11-30 13:50:26 -08:00
Brian Behlendorf c3275b56a1 Add load_nvlist() error handling
Add the missing error handling to load_nvlist().  There's no good
reason this needs to be fatal.  All callers of load_nvlist() do
correctly handle an error condition and it is preferable that an
error be returned.  This will allow 'zpool import -FX' to safely
attempt to rollback through previous txgs looking for a good one.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1120
2012-11-30 13:48:17 -08:00
Brian Behlendorf c372b36e3e Allow GPT+EFI vdevs for root pools
Commit 57a4edd allows the bootfs property to be set on any pool.
However, many of the zpool commands still prevent you from using
EFI labeled devices for the root pool.  For example:

    # zpool attach rpool /dev/sda /dev/sdb
    cannot label 'sdb': EFI labeled devices are not supported on
    root pools.

For non-Solaris builds such as Linux disable this error.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1077
2012-11-30 13:45:14 -08:00
Brian Behlendorf 004324ecc6 Disable page allocation warnings for super block
Due to the slightly increased size of the ZFS super block
caused by 30315d2 there are now allocation warnings.  The
allocation size is still small (just over 8k) and super
blocks are rarely allocated so we suppress the warning.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue #1101
2012-11-30 11:04:44 -08:00
Brian Behlendorf 56a517ae3a Verify --with-linux source directory exists
Previously this check was only performed when ./configure was
attempting to autodetect your kernel source directory.  But we
should also handle the case where --with-linux was provided
and is obviously wrong.  This way we catch the error before
invoking make and compiling the source with incorrect
autoconf results.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes zfsonlinux/spl#162
2012-11-29 15:08:35 -08:00
Brian Behlendorf 251677e98f Verify --with-linux source directory exists
Previously this check was only performed when ./configure was
attempting to autodetect your kernel source directory.  But we
should also handle the case where --with-linux was provided
and is obviously wrong.  This way we catch the error before
invoking make and compiling the source with incorrect
autoconf results.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #162
2012-11-29 15:05:54 -08:00
Cyril Plisko 38b344d22a vdev_id fails to handle complex device topologies
When expanding positional parameters the shell requires indices
beyond a single digit to be enclosed in braces; for example, ${10}
must be written with braces because $10 expands as ${1} followed
by a literal '0'.  When the SAS topology is non-trivial the number
of positional parameters generated internally by the vdev_id
script (using set -- ...) easily crosses the single digit limit
and vdev_id fails to generate links.

Signed-off-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1119
2012-11-29 13:07:47 -08:00
Ned Bass a6ef9522ea Make vdev_id POSIX sh compatible
Full bash may not be available in all environments where udev helpers
run, such as in an initial ramdisk.  To avoid breakage in this case,
remove use of bash-specific features such as variable arrays and the
`declare' keyword from the vdev_id script.

Signed-off-by: Ned Bass <bass6@llnl.gov>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #870
2012-11-27 14:23:22 -08:00
Brian Behlendorf f74a147c02 Fix NULL deref when zvol_alloc() fails
If zvol_alloc() fails, zv will be set to NULL and dereferenced
in out_dmu_objset_disown.  To avoid this entirely the zv->objset
line is moved up into the success block.
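
The shape of the fix, sketched (identifiers abbreviated; the error
value is illustrative):

  zv = zvol_alloc(dev, name);
  if (zv == NULL) {
          error = EAGAIN;              /* illustrative error value */
          goto out_dmu_objset_disown;  /* zv never dereferenced */
  }

  /* moved into the success block, so it only runs when zv != NULL */
  zv->objset = os;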

Original-patch-by: Jorgen Lundman <lundman@lundman.net>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1109
2012-11-27 14:10:31 -08:00
Brian Behlendorf 30315d237b Increase ZFS_OBJ_MTX_SZ to 256
Increasing this limit costs us 6144 bytes of memory per mounted
filesystem, but this is a small price to pay for accomplishing
the following:

* Allows for up to 256-way concurrency when performing lookups
  which helps performance when there are a large number of
  processes.

* Minimizes the likelihood of encountering the deadlock
  described in issue #1101.  Because vmalloc() won't strictly
  honor __GFP_FS there is still a very remote chance of a
  deadlock.  See the zfsonlinux/spl@043f9b57 commit.

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1101
2012-11-27 13:46:32 -08:00
Brian Behlendorf 043f9b5724 Disable FS reclaim when allocating new slabs
Allowing the spl_cache_grow_work() function to reclaim inodes
opens up two unlikely deadlocks.  Therefore, we clear __GFP_FS
for these allocations.  The two deadlocks are:

* While holding the ZFS_OBJ_HOLD_ENTER(zsb, obj1) lock a function
  calls kmem_cache_alloc() which happens to need to allocate a
  new slab.  To allocate the new slab we enter FS level reclaim
  and attempt to evict several inodes.  To evict these inodes we
  need to take the ZFS_OBJ_HOLD_ENTER(zsb, obj2) lock and it
  just happens that obj1 and obj2 use the same hashed lock.

* Similar to the first case however instead of getting blocked
  on the hash lock we block in txg_wait_open() which is waiting
  for the next txg which isn't coming because the txg_sync
  thread is blocked in kmem_cache_alloc().

Note this isn't a 100% fix because vmalloc() won't strictly
honor __GFP_FS.  However, in practice this is sufficient because
several very unlikely things must all occur concurrently.
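
Clearing the flag is a small change at the allocation site; roughly
(the surrounding slab-growth code is elided):

  #include <linux/gfp.h>

  /* mask out FS-level reclaim so slab growth can never recurse
   * into inode eviction while ZFS locks are held */
  gfp_t lflags = GFP_KERNEL & ~(__GFP_FS);

  ptr = (void *)__get_free_pages(lflags, order);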

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Issue zfsonlinux/zfs#1101
2012-11-27 13:43:27 -08:00
Brian Behlendorf 0e20a31b4b Recreate minors when renaming zvols
When a zvol with snapshots is renamed the device files under
/dev/zvol/ are not renamed.  This patch resolves the problem
by destroying and recreating the minors with the new name so
the links can be recreated by udev.

Original-patch-by: Suman Chakravartula <schakrava@gmail.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #408
2012-11-19 16:59:44 -08:00
nordaux 33364b15d3 mount.zfs: canonicalize mount point for mtab
Canonicalize the mount point passed to the mount.zfs helper.
This way a clean path is always added to mtab which ensures
the umount can properly locate and remove the entry.

Test case:
$ mkdir /mnt/foo
$ mount -t zfs zpool/foo /mnt/../mnt/foo////
$ umount /mnt/foo
$ cat /etc/mtab | grep zpool/foo
zpool/foo /mnt/../mnt/foo//// zfs rw 0 0
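
In userspace the canonicalization amounts to a realpath(3) call
before the path is recorded; a minimal sketch, where 'mntpoint'
stands in for the mount point argument:

  #include <limits.h>
  #include <stdio.h>
  #include <stdlib.h>

  char resolved[PATH_MAX];

  /* "/mnt/../mnt/foo////" resolves to "/mnt/foo" */
  if (realpath(mntpoint, resolved) == NULL) {
          perror("realpath");
          exit(EXIT_FAILURE);
  }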

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #573
2012-11-15 14:28:52 -08:00
Brian Behlendorf 54602c3771 Merge branch 'ashift'
This branch adds some overdue ashift improvements.

  * Add '-o ashift' to 'zpool add' and 'zpool attach'
  * Improve AF hard disk detection
  * Allow 'zpool import' to handle increases in ashift

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-15 11:51:23 -08:00
Cyril Plisko df83110856 Add "-o ashift" to zpool add and zpool attach
When adding devices to an existing pool "ashift" property is
auto-detected.  However, if this property was overridden at
the pool creation time (i.e. zpool create -o ashift=12 tank ...)
this may not be what the user wants.  This commit lets the user
specify the value of the "ashift" property to be used with newly
added drives. For example,

    zpool add -o ashift=12 tank disk1
    zpool attach -o ashift=12 tank disk1 disk2

Signed-off-by: Cyril Plisko <cyril.plisko@mountall.com>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #566
2012-11-15 11:06:14 -08:00
Brian Behlendorf 2404b01499 Improve AF hard disk detection
Use the bdev_physical_block_size() interface to determine the
minimize write size which can be issued without incurring a
read-modify-write operation.  This is used to set the ashift
correctly to prevent a performance penalty when using AF hard
disks.

Unfortunately, this interface isn't entirely reliable because
it's not uncommon for disks to misreport this value.  For this
reason you may still need to manually set your ashift with:

  zpool create -o ashift=12 ...

The solution to this in the upstream Illumos source was to add
a white list of known offending drives.  Maintaining such a list
will be a burden, but it still may be worth doing if we can
detect a large number of these drives.  This should be considered
as future work.
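
Conceptually the detection reduces to taking log2 of the reported
physical block size (a sketch only; the real logic lives in the
vdev disk layer):

  #include <linux/blkdev.h>
  #include <linux/log2.h>

  /* e.g. a 4096-byte physical sector yields ashift = 12 */
  static uint64_t
  my_vdev_ashift(struct block_device *bdev)
  {
          return (ilog2(bdev_physical_block_size(bdev)));
  }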

Reported-by: Richard Yao <ryao@cs.stonybrook.edu>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #916
2012-11-15 11:06:14 -08:00
George Wilson 32a9872bba Illumos #2671: zpool import should not fail if vdev ashift has increased
Reviewed by: Adam Leventhal <ahl@delphix.com>
Reviewed by: Eric Schrock <eric.schrock@delphix.com>
Reviewed by: Richard Elling <richard.elling@richardelling.com>
Reviewed by: Gordon Ross <gwr@nexenta.com>
Reviewed by: Garrett D'Amore <garrett@damore.org>
Approved by: Richard Lowe <richlowe@richlowe.net>

References to the Illumos issue:
      https://www.illumos.org/issues/2671

This patch has been slightly modified from the upstream Illumos
version.  In the upstream implementation a warning message is
logged to the console.  To prevent pointless console noise this
notification is now posted as an "ereport.fs.zfs.vdev.bad_ashift"
event.

The event indicates a non-optimal (but entirely safe) ashift
value was used to create the pool.  Depending on your workload
this may impact pool performance.  Unfortunately, the only way
to correct the issue is to recreate the pool with a new ashift.

NOTE: The unrelated fix to the comment in zpool_main.c appears
in the upstream commit and was preserved for consistency.

Ported-by: Cyril Plisko <cyril.plisko@mountall.com>
Reworked-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #955
2012-11-15 11:05:59 -08:00
Brian Behlendorf 3997bc7435 zfs-0.6.0-rc12 2012-11-13 14:35:44 -08:00
Brian Behlendorf e71a4534b3 SPL 0.6.0-rc12 2012-11-13 14:28:25 -08:00
Richard Yao 2af96a5df5 Fix hard coded path in 60-vdev.rules.in
The udev data directory was hard coded in 60-vdev.rules.in. That causes
a problem when a distribution changes the location of the directory.
This was not an issue in the past because virtually all distributions
used the same path, but that is beginning to change following a decision
by the systemd developers to change the directory location to reflect
their take-over of udev maintainership. The testing branch of Gentoo
Linux adopted this change, which enabled the hardcoded directory
location to trigger a regression.

Signed-off-by: Richard Yao <ryao@cs.stonybrook.edu>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #1085
2012-11-13 12:04:15 -08:00
Brian Behlendorf 4c837f0d93 Fix "allocating allocated segment" panic
Gunnar Beutner did all the hard work on this one by correctly
identifying that this issue is a race between dmu_sync() and
dbuf_dirty().

Now in all cases the caller is responsible for preventing this
race by making sure the zfs_range_lock() is held when dirtying
a buffer which may be referenced in a log record.  The mmap
case which relies on zfs_putpage() was not taking the range
lock.  This code was accidentally dropped when the function
was rewritten for the Linux VFS.

This patch adds the required range locking to zfs_putpage().

It also adds the missing ZFS_ENTER()/ZFS_EXIT() macros which
aren't strictly required due to the VFS holding a reference.
However, this makes the code more consistent with the upstream
code and there's no harm in being extra careful here.

Original-patch-by: Gunnar Beutner <gunnar@beutner.name>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #541
2012-11-09 19:01:09 -08:00
Brian Behlendorf e26ade5101 Fix zvol+btrfs hang
When using a zvol to back a btrfs filesystem the btrfs mount
would hang.  This was due to the bio completion callback used
in btrfs assuming that lower level drivers would never modify
the bio->bi_io_vecs after they were submitted via bio_submit().
If they are modified btrfs will miscalculate which pages need
to be unlocked resulting in a hang.

It's worth mentioning that other file systems such as ext[234]
and xfs work fine because they do not make the same assumption
in the bio completion callback.

The most straightforward way to fix the issue is to present
the semantics expected by btrfs.  This is done by cloning the
bios attached to each request and then using the clones' bvecs
to perform the required accounting.  The clones are freed after
each read/write and the original unmodified bios are linked back
into the request.

Signed-off-by: Chris Wedgwood <cw@f00f.org>
Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
Closes #469
2012-11-09 12:24:51 -08:00
Brian Behlendorf 366346c565 Merge branch 'kmem-cache-optimization'
This branch contains kmem cache optimizations designed to resolve
the lockups reported in zfsonlinux/zfs#922.  The lockups were
largely the result of spin lock contention in the slab under low
memory conditions.  Fundamentally, these changes are all designed
to minimize that contention through a variety of methods.

  * Improved vmem cached deadlock detection
  * Track emergency objects in rbtree
  * Optimize spl_kmem_cache_free()
  * Never spin in kmem_cache_alloc()

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
zfsonlinux/zfs#922
2012-11-08 11:09:17 -08:00
Brian Behlendorf dc1b30224f Never spin in kmem_cache_alloc()
If we are reaping from the cache and a concurrent allocation
occurs then the caller must block until the reaping is complete.
This is signaled by the clearing of the KMC_BIT_REAPING bit.

Otherwise the caller will be in a tight loop which takes and
releases the skc->skc_cache lock.  When there are multiple
concurrent callers the system will thrash on the lock and
appear to lock up.
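
One simple way to block rather than spin, sketched as a polling
sleep (the actual implementation can use the kernel's bit-waitqueue
machinery; this only illustrates the intent):

  #include <linux/bitops.h>
  #include <linux/sched.h>

  /* sleep until the reaper clears KMC_BIT_REAPING instead of
   * hammering the cache lock in a tight loop */
  while (test_bit(KMC_BIT_REAPING, &skc->skc_flags))
          schedule_timeout_uninterruptible(1);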

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 15:48:39 -08:00
Brian Behlendorf a1af8fb1ea Optimize spl_kmem_cache_free()
Because only virtual slabs may have emergency objects, and these
objects are guaranteed to have physical addresses, it can be
easily determined whether a passed object is a virtual slab object
or an emergency object.  This allows us to completely optimize
the emergency object free case out of the common free path.
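
The resulting fast-path test is just an address check; roughly
(the KMC_VMEM flag test and helper name are illustrative):

  #include <linux/mm.h>

  /* emergency objects exist only in virtual-address caches and
   * are always physically addressed, so this check suffices */
  static int
  my_is_emergency_obj(spl_kmem_cache_t *skc, void *obj)
  {
          return ((skc->skc_flags & KMC_VMEM) &&
              !is_vmalloc_addr(obj));
  }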

Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov>
2012-11-06 14:54:19 -08:00