/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */

/*
 * Copyright 2009 Sun Microsystems, Inc. All rights reserved.
 * Use is subject to license terms.
 */
Linux ZVOL implementation; kernel-side changes

At last a useful user space interface for the Linux ZFS port arrives.
With the addition of the ZVOL, real ZFS-based block devices are available
and can be compared head to head with Linux's MD and LVM block drivers.
The Linux ZVOL has not yet had any performance work done, but from a user
perspective it should be functionally complete and behave like any other
Linux block device.

The ZVOL has so far been tested using zconfig.sh on the following x86_64
based platforms: FC11, CHAOS4, RHEL5, RHEL6, and SLES11. However, more
testing is required to ensure everything is working as designed.

What follows is a somewhat detailed list of the changes included in this
commit to make ZVOLs possible. A few other issues addressed in the
context of these changes are also mentioned.

* Added module/zfs/zvol.c, which is based on the original Solaris ZVOL
implementation but rewritten to integrate with the Linux block device
APIs. The basic design remains similar on Linux, with the major change
being request processing. Request processing is handled by registering
a request function which the elevator calls once all request merges are
finished and the elevator unplugs. This function is called under a spin
lock and the request structure is passed to the block driver to be
queued for IO. The elevator must be notified asynchronously once the
request completes or fails with an error. This gives the block driver a
chance to handle many requests concurrently. For the ZVOL we maintain a
taskq with a service thread per core. As requests are delivered by the
elevator, each request is dispatched to the taskq. The taskq handles
each request with a write or read helper function which basically
copies the request data into or out of the DMU object. Writes signal
completion as soon as the DMU has the data, unless they are marked
sync. Reads are all handled synchronously; however, the elevator will
merge many small reads into a larger read before submitting the
request. (A sketch of this dispatch scheme follows the list.)

* Caching is worth specifically mentioning. Because the Linux VFS and
the ZFS ARC both want to fully manage the cache, we unfortunately end
up with two caches. This means our memory footprint is larger than
otherwise expected, and it means we have an extra copy between the
caches, but it does not impact correctness. All syncs are barrier
requests, which I believe are handled correctly. Longer term there is
lots of room for improvement here, but it will require fairly extensive
changes to either the Linux VFS and VM layer, or additional DMU
interfaces to handle managing buffers not directly allocated by the ARC.

* Added module/zfs/include/sys/blkdev.h, which contains all the Linux
compatibility foo required to handle changes in the Linux block APIs
from 2.6.18 through 2.6.31 based kernels. (An example compat shim
follows the list.)

* The dmu_{read,write}_uio interfaces, which don't make sense on Linux,
have been modified into dmu_{read,write}_req functions which consume
the standard Linux IO request structure. Their function fundamentally
remains the same, so this happily worked out pretty cleanly.

* The /dev/zfs character device is no longer created through the
half-implemented Solaris driver DDI interfaces. It is now simply
created as a Linux misc device, which greatly simplifies everything.
It is only capable of handling ioctls(), but this fits nicely because
that's all it ever has to do. The ZVOL devices, unlike in Solaris, do
not share a major number with /dev/zfs but instead register their own
major. Because only one major is allocated and space is reserved for
16 partitions per device, there is a limit of 16384 concurrent ZVOL
devices. By using multiple majors, as the scsi driver does, this limit
could be raised if it becomes a problem. (A registration sketch follows
the list.)

* The {spa,zfs,zvol}_busy() functions have all been removed because
they are not required on a Linux system. Under Linux the registered
module exit function will not be called while there are still
references to the module. Once the exit function is called, however,
it must succeed or block; it may not fail, so returning an error on
module unload makes no sense under Linux.

* With the addition of ZVOL support all the HAVE_ZVOL defines were
removed, for obvious reasons. However, the HAVE_ZPL defines have been
relocated into the linux-{kernel,user}-disk topic branches and must
remain until the ZPL is implemented.
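A minimal sketch of the dispatch scheme from the first bullet above,
assuming the SPL taskq API (taskq_create/taskq_dispatch) and a
post-2.6.31 block API; the zvol_example_* names are illustrative, not
the actual zvol.c symbols, and error paths are elided.

#include <linux/blkdev.h>
#include <sys/taskq.h>

static taskq_t *zvol_example_taskq;	/* one service thread per core */

/*
 * Runs in taskq context: copy the request data into or out of the DMU
 * (this is where dmu_{read,write}_req would be called), then complete
 * the request asynchronously so the elevator learns the final status.
 */
static void
zvol_example_do_request(void *arg)
{
	struct request *req = arg;
	int error = 0;

	/* ... dmu_{read,write}_req(...) ... */

	blk_end_request(req, error, blk_rq_bytes(req));
}

/*
 * Registered request function, called by the elevator under the queue
 * spin lock once merging is finished; sleeping is not allowed here, so
 * each request is handed off to the taskq with TQ_NOSLEEP.  A robust
 * version must also handle a failed dispatch.
 */
static void
zvol_example_request(struct request_queue *q)
{
	struct request *req;

	while ((req = blk_fetch_request(q)) != NULL)
		(void) taskq_dispatch(zvol_example_taskq,
		    zvol_example_do_request, req, TQ_NOSLEEP);
}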
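And one example of the kind of compatibility shim blkdev.h provides,
assuming an autoconf check defines HAVE_BLK_FETCH_REQUEST on kernels
that already have the function: 2.6.31 folded the older
elv_next_request()/blkdev_dequeue_request() pair into
blk_fetch_request(), so older kernels get an equivalent inline.

#ifndef HAVE_BLK_FETCH_REQUEST
static inline struct request *
blk_fetch_request(struct request_queue *q)
{
	struct request *req;

	req = elv_next_request(q);
	if (req)
		blkdev_dequeue_request(req);

	return (req);
}
#endif /* HAVE_BLK_FETCH_REQUEST */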
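Finally, a hypothetical sketch of the registration pattern from the
/dev/zfs bullet; the example_* names are illustrative, not the actual
code, and the real ioctl handler naturally does more than this stub.

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>

/* Illustrative ioctl handler; the real one dispatches the zfs ioctls. */
static long
example_zfs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	return (-ENOTTY);
}

static const struct file_operations example_zfs_fops = {
	.owner		= THIS_MODULE,
	.unlocked_ioctl	= example_zfs_ioctl,
};

/* /dev/zfs as a misc device: the kernel picks a free misc minor. */
static struct miscdevice example_zfs_misc = {
	.minor	= MISC_DYNAMIC_MINOR,
	.name	= "zfs",
	.fops	= &example_zfs_fops,
};

static int example_zvol_major;

static int __init
example_init(void)
{
	int error;

	error = misc_register(&example_zfs_misc);
	if (error)
		return (error);

	/*
	 * One dynamically assigned block major shared by all zvols;
	 * each zvol's gendisk then reserves 16 minors for partitions,
	 * which is where the concurrent-device limit above comes from.
	 */
	example_zvol_major = register_blkdev(0, "zvol");
	if (example_zvol_major < 0) {
		misc_deregister(&example_zfs_misc);
		return (example_zvol_major);
	}

	return (0);
}

static void __exit
example_exit(void)
{
	unregister_blkdev(example_zvol_major, "zvol");
	misc_deregister(&example_zfs_misc);
}

module_init(example_init);
module_exit(example_exit);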
#ifdef HAVE_ZPL

#include <sys/refcount.h>
#include <sys/rrwlock.h>
/*
 * This file contains the implementation of a re-entrant read
 * reader/writer lock (aka "rrwlock").
 *
 * This is a normal reader/writer lock with the additional feature
 * of allowing threads who have already obtained a read lock to
 * re-enter another read lock (re-entrant read) - even if there are
 * waiting writers.
 *
 * Callers who have not obtained a read lock give waiting writers priority.
 *
 * The rrwlock_t lock does not allow re-entrant writers, nor does it
 * allow a re-entrant mix of reads and writes (that is, it does not
 * allow a caller who has already obtained a read lock to be able to
 * then grab a write lock without first dropping all read locks, and
 * vice versa).
 *
 * The rrwlock_t uses tsd (thread specific data) to keep a list of
 * nodes (rrw_node_t), where each node keeps track of which specific
 * lock (rrw_node_t::rn_rrl) the thread has grabbed.  Since re-entering
 * should be rare, a thread that grabs multiple reads on the same rrwlock_t
 * will store multiple rrw_node_ts with the same 'rn_rrl'.  Nodes on the
 * tsd list can represent a different rrwlock_t.  This allows a thread
 * to enter multiple and unique rrwlock_ts for read locks at the same time.
 *
 * Since using tsd exposes some overhead, the rrwlock_t only needs to
 * keep tsd data when writers are waiting.  If no writers are waiting, then
 * a reader just bumps the anonymous read count (rr_anon_rcount) - no tsd
 * is needed.  Once a writer attempts to grab the lock, readers then
 * keep tsd data and bump the linked readers count (rr_linked_rcount).
 *
 * If there are waiting writers and there are anonymous readers, then a
 * reader doesn't know if it is a re-entrant lock.  But since it may be one,
 * we allow the read to proceed (otherwise it could deadlock).  Since once
 * waiting writers are active, readers no longer bump the anonymous count,
 * the anonymous readers will eventually flush themselves out.  At this point,
 * readers will be able to tell if they are a re-entrant lock (have a
 * rrw_node_t entry for the lock) or not.  If they are a re-entrant lock, then
 * we must let them proceed.  If they are not, then the reader blocks for the
 * waiting writers.  Hence, we do not starve writers.
 */

/* global key for TSD */
uint_t rrw_tsd_key;

typedef struct rrw_node {
	struct rrw_node *rn_next;
	rrwlock_t *rn_rrl;
} rrw_node_t;

static rrw_node_t *
rrn_find(rrwlock_t *rrl)
{
	rrw_node_t *rn;

	if (refcount_count(&rrl->rr_linked_rcount) == 0)
		return (NULL);

	for (rn = tsd_get(rrw_tsd_key); rn != NULL; rn = rn->rn_next) {
		if (rn->rn_rrl == rrl)
			return (rn);
	}
	return (NULL);
}

/*
 * Add a node to the head of the singly linked list.
 */
static void
rrn_add(rrwlock_t *rrl)
{
	rrw_node_t *rn;

	rn = kmem_alloc(sizeof (*rn), KM_SLEEP);
	rn->rn_rrl = rrl;
	rn->rn_next = tsd_get(rrw_tsd_key);
	VERIFY(tsd_set(rrw_tsd_key, rn) == 0);
}

/*
 * If a node is found for 'rrl', then remove the node from this
 * thread's list and return TRUE; otherwise return FALSE.
 */
static boolean_t
rrn_find_and_remove(rrwlock_t *rrl)
{
	rrw_node_t *rn;
	rrw_node_t *prev = NULL;

	if (refcount_count(&rrl->rr_linked_rcount) == 0)
		return (B_FALSE);

	for (rn = tsd_get(rrw_tsd_key); rn != NULL; rn = rn->rn_next) {
		if (rn->rn_rrl == rrl) {
			if (prev)
				prev->rn_next = rn->rn_next;
			else
				VERIFY(tsd_set(rrw_tsd_key, rn->rn_next) == 0);
			kmem_free(rn, sizeof (*rn));
			return (B_TRUE);
		}
		prev = rn;
	}
	return (B_FALSE);
}

void
rrw_init(rrwlock_t *rrl)
{
	mutex_init(&rrl->rr_lock, NULL, MUTEX_DEFAULT, NULL);
	cv_init(&rrl->rr_cv, NULL, CV_DEFAULT, NULL);
	rrl->rr_writer = NULL;
	refcount_create(&rrl->rr_anon_rcount);
	refcount_create(&rrl->rr_linked_rcount);
	rrl->rr_writer_wanted = B_FALSE;
}

void
rrw_destroy(rrwlock_t *rrl)
{
	mutex_destroy(&rrl->rr_lock);
	cv_destroy(&rrl->rr_cv);
	ASSERT(rrl->rr_writer == NULL);
	refcount_destroy(&rrl->rr_anon_rcount);
	refcount_destroy(&rrl->rr_linked_rcount);
}

static void
rrw_enter_read(rrwlock_t *rrl, void *tag)
{
	mutex_enter(&rrl->rr_lock);
#if !defined(DEBUG) && defined(_KERNEL)
	if (!rrl->rr_writer && !rrl->rr_writer_wanted) {
		rrl->rr_anon_rcount.rc_count++;
		mutex_exit(&rrl->rr_lock);
		return;
	}
	DTRACE_PROBE(zfs__rrwfastpath__rdmiss);
#endif
	ASSERT(rrl->rr_writer != curthread);
	ASSERT(refcount_count(&rrl->rr_anon_rcount) >= 0);

	while (rrl->rr_writer || (rrl->rr_writer_wanted &&
	    refcount_is_zero(&rrl->rr_anon_rcount) &&
	    rrn_find(rrl) == NULL))
		cv_wait(&rrl->rr_cv, &rrl->rr_lock);

	if (rrl->rr_writer_wanted) {
		/* may or may not be a re-entrant enter */
		rrn_add(rrl);
		(void) refcount_add(&rrl->rr_linked_rcount, tag);
	} else {
		(void) refcount_add(&rrl->rr_anon_rcount, tag);
	}
	ASSERT(rrl->rr_writer == NULL);
	mutex_exit(&rrl->rr_lock);
}

static void
rrw_enter_write(rrwlock_t *rrl)
{
	mutex_enter(&rrl->rr_lock);
	ASSERT(rrl->rr_writer != curthread);

	while (refcount_count(&rrl->rr_anon_rcount) > 0 ||
	    refcount_count(&rrl->rr_linked_rcount) > 0 ||
	    rrl->rr_writer != NULL) {
		rrl->rr_writer_wanted = B_TRUE;
		cv_wait(&rrl->rr_cv, &rrl->rr_lock);
	}
	rrl->rr_writer_wanted = B_FALSE;
	rrl->rr_writer = curthread;
	mutex_exit(&rrl->rr_lock);
}

void
rrw_enter(rrwlock_t *rrl, krw_t rw, void *tag)
{
	if (rw == RW_READER)
		rrw_enter_read(rrl, tag);
	else
		rrw_enter_write(rrl);
}

void
rrw_exit(rrwlock_t *rrl, void *tag)
{
	mutex_enter(&rrl->rr_lock);
#if !defined(DEBUG) && defined(_KERNEL)
	if (!rrl->rr_writer && rrl->rr_linked_rcount.rc_count == 0) {
		rrl->rr_anon_rcount.rc_count--;
		if (rrl->rr_anon_rcount.rc_count == 0)
			cv_broadcast(&rrl->rr_cv);
		mutex_exit(&rrl->rr_lock);
		return;
	}
	DTRACE_PROBE(zfs__rrwfastpath__exitmiss);
#endif
	ASSERT(!refcount_is_zero(&rrl->rr_anon_rcount) ||
	    !refcount_is_zero(&rrl->rr_linked_rcount) ||
	    rrl->rr_writer != NULL);

	if (rrl->rr_writer == NULL) {
		int64_t count;
		if (rrn_find_and_remove(rrl))
			count = refcount_remove(&rrl->rr_linked_rcount, tag);
		else
			count = refcount_remove(&rrl->rr_anon_rcount, tag);
		if (count == 0)
			cv_broadcast(&rrl->rr_cv);
	} else {
		ASSERT(rrl->rr_writer == curthread);
		ASSERT(refcount_is_zero(&rrl->rr_anon_rcount) &&
		    refcount_is_zero(&rrl->rr_linked_rcount));
		rrl->rr_writer = NULL;
		cv_broadcast(&rrl->rr_cv);
	}
	mutex_exit(&rrl->rr_lock);
}

boolean_t
rrw_held(rrwlock_t *rrl, krw_t rw)
{
	boolean_t held;

	mutex_enter(&rrl->rr_lock);
	if (rw == RW_WRITER) {
		held = (rrl->rr_writer == curthread);
	} else {
		held = (!refcount_is_zero(&rrl->rr_anon_rcount) ||
		    !refcount_is_zero(&rrl->rr_linked_rcount));
	}
	mutex_exit(&rrl->rr_lock);

	return (held);
}
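A brief usage sketch (hypothetical caller, not part of this file)
showing the behavior the comment at the top describes: a thread that
already holds the read side may take it again even if a writer is
waiting, which is exactly what the TSD bookkeeping enables.

static rrwlock_t example_lock;

static void
example_usage(void)
{
	rrw_init(&example_lock);

	rrw_enter(&example_lock, RW_READER, FTAG);
	/* Re-entrant read: legal even if a writer is now waiting. */
	rrw_enter(&example_lock, RW_READER, FTAG);
	ASSERT(rrw_held(&example_lock, RW_READER));
	rrw_exit(&example_lock, FTAG);
	rrw_exit(&example_lock, FTAG);

	rrw_enter(&example_lock, RW_WRITER, FTAG);
	ASSERT(rrw_held(&example_lock, RW_WRITER));
	rrw_exit(&example_lock, FTAG);

	rrw_destroy(&example_lock);
}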
#endif /* HAVE_ZPL */