9fe45dc1ac
Thread specific data is implemented using a hash table. This avoids the need to add a member to the task structure and allows maximum portability between kernels. The implementation has been optimized to keep the tsd_set() and tsd_get() times as small as possible.

The majority of the entries in the hash table are for specific tsd entries. These entries are hashed by the product of their key and pid because by design the key and pid are guaranteed to be unique. Their product also has the desirable property that it will be uniformly distributed over the hash bins, provided neither the pid nor the key is zero. Under Linux the zero pid is always the init process and thus won't be used, and this implementation is careful never to assign a zero key. By default the hash table is sized to 512 bins, which is expected to be sufficient for light to moderate usage of thread specific data.

The hash table contains two additional types of entries. The first type is called a 'key' entry and is added to the hash during tsd_create(). It stores the address of the destructor function and serves as an anchor point: all tsd entries which use the same key are linked to this entry. This is used during tsd_destroy() to quickly call the destructor function for all tsd associated with the key. The 'key' entry may be looked up with tsd_hash_search() by passing the key you wish to look up and the DTOR_PID constant as the pid.

The second type is called a 'pid' entry and is added to the hash the first time a process sets a key. The 'pid' entry is also used as an anchor, and all tsd for the process are linked to it. This list is used during tsd_exit() to ensure all registered destructors are run for the process. The 'pid' entry may be looked up with tsd_hash_search() by passing the PID_KEY constant as the key and the process pid.

Note that tsd_exit() is called by thread_exit(), so if you're using the Solaris thread API you should not need to call tsd_exit() directly.
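A minimal consumer-side sketch of the Solaris-style TSD API described above (tsd_create(), tsd_set(), tsd_get(), tsd_destroy()). The helper names, header paths, and the use of kmem_alloc()/kmem_free() for the per-thread value are illustrative assumptions, not code taken from this commit.

    #include <sys/tsd.h>
    #include <sys/kmem.h>

    /* Shared key registered once for all threads (hypothetical example). */
    static uint_t my_tsd_key = 0;

    /*
     * Destructor run for each thread's value when the key is destroyed
     * with tsd_destroy(), or when a thread exits via tsd_exit().
     */
    static void
    my_tsd_dtor(void *value)
    {
            kmem_free(value, sizeof (int));
    }

    /* Called once, e.g. at module load, to create the key ('key' entry). */
    static void
    my_tsd_init(void)
    {
            tsd_create(&my_tsd_key, my_tsd_dtor);
    }

    /* Called by each thread to stash and update its private value. */
    static void
    my_tsd_use(int v)
    {
            int *data = tsd_get(my_tsd_key);

            if (data == NULL) {
                    data = kmem_alloc(sizeof (int), KM_SLEEP);
                    (void) tsd_set(my_tsd_key, data);
            }

            *data = v;
    }

    /*
     * Called once, e.g. at module unload; walks the entries anchored to
     * the 'key' entry, runs the destructor for each, and frees the key.
     */
    static void
    my_tsd_fini(void)
    {
            tsd_destroy(&my_tsd_key);
    }

Because tsd_exit() is invoked from thread_exit(), a thread created through the Solaris thread API that simply exits should have its registered destructors run automatically, without the consumer calling tsd_exit() itself.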
- cmd
- config
- include
- lib
- module
- patches
- scripts
- .gitignore
- AUTHORS
- COPYING
- ChangeLog
- DISCLAIMER
- INSTALL
- META
- Makefile.am
- Makefile.in
- README.markdown
- autogen.sh
- configure
- configure.ac
- spl-modules.spec.in
- spl.spec.in
- spl_config.h.in
README.markdown
The Solaris Porting Layer (SPL) is a Linux kernel module which provides many of the Solaris kernel APIs. This shim layer makes it possible to run Solaris kernel code in the Linux kernel with relatively minimal modification. This can be particularly useful when you want to track upstream Solaris development closely and don’t want the overhead of maintaining a large patch which converts Solaris primitives to Linux primitives.
To build packages for your distribution:

    $ ./configure
    $ make pkg
Full documentation for building, configuring, and using the SPL can be found at: http://zfsonlinux.org