|
This patch adds two lock functions used by CTDB to perform asynchronous
locking. These functions do not actually perform any fcntl operations,
but only increment internal counters.
- tdb_transaction_write_lock_mark()
- tdb_transaction_write_lock_unmark()
It also exposes two internal functions
- tdb_lock_nonblock()
- tdb_unlock()
These functions are NOT exposed in include/tdb.h, to prevent any further
use of them. If you ever need these functions, consider using tdb2 instead.
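As a minimal sketch (not part of the patch: the wrapper is hypothetical, and
the prototypes are assumed to take only the tdb context and return 0 on
success, like the other lock helpers), a cooperating process might bracket
work with the mark calls along these lines:

#include "tdb_private.h"  /* the mark/unmark calls are deliberately not in tdb.h */

/* Hypothetical wrapper: the real fcntl lock is assumed to be held elsewhere
 * (e.g. by a CTDB lock helper), so we only tell this tdb context to account
 * for it, do the work, then drop the accounting again. */
static int with_marked_write_lock(struct tdb_context *tdb,
                                  int (*work)(struct tdb_context *))
{
	int ret;

	if (tdb_transaction_write_lock_mark(tdb) != 0) {
		return -1;        /* only a counter update, but check anyway */
	}
	ret = work(tdb);          /* tdb now believes the lock is held */
	tdb_transaction_write_lock_unmark(tdb);
	return ret;
}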
Signed-off-by: Amitay Isaacs <amitay@gmail.com>
|
|
We unmap the tdb on expand, then remap. But when we have INCOHERENT_MMAP
(i.e. OpenBSD) and we're inside a transaction, doing the expand can mean
we need to read from the database to partially fill a transaction block.
This fails, because if mmap is incoherent we never allow accessing the
database via read/write.
The solution is not to unmap and remap until we've actually written the
padding at the end of the file.
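In sketch form (the helper names below are placeholders, not the real io.c
functions), the fixed ordering looks like this:

#include <sys/types.h>

struct tdb_context;

int write_padding(struct tdb_context *tdb, off_t old_size, off_t addition);
void remap_file(struct tdb_context *tdb, off_t new_size);

static int expand_file_sketch(struct tdb_context *tdb,
                              off_t old_size, off_t addition)
{
	/* 1. Grow the file and zero the new tail while still mapped, so any
	 *    reads needed to fill partial transaction blocks keep working
	 *    even when mmap and read/write are incoherent. */
	if (write_padding(tdb, old_size, addition) == -1) {
		return -1;
	}
	/* 2. Only now drop the old mapping and map the enlarged file. */
	remap_file(tdb, old_size + addition);
	return 0;
}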
Reported-by: Amitay Isaacs <amitay@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date: Fri Mar 23 02:53:15 CET 2012 on sn-devel-104
|
|
fde694274e1e5a11d1473695e7ec7a97f95d39e4 made tdb_mmap return an int,
but didn't add the return 0 in the "internal db" case.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
This comment appears in two places in the code (commit
4c6a8273c6dd3e2aeda5a63c4a62aa55bc133099 from 2001):
/*
* We must ensure the file is unmapped before doing this
* to ensure consistency with systems like OpenBSD where
* writes and mmaps are not consistent.
*/
But this doesn't help, because if one process is using mmap and another is
using pwrite, we get incoherent results, as demonstrated by OpenBSD's
failure in the tdb unit tests.
Rather than disable mmap on OpenBSD, we test for this issue and force mmap
to be enabled. This means that we will fail on very large TDBs on 32-bit
systems, but it's better than the horrendous performance penalty on every
OpenBSD system.
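For illustration only (this is not the test in the tree), incoherence of
this kind can be probed by storing through a mapping and reading back
through the fd:

#include <fcntl.h>
#include <stdbool.h>
#include <sys/mman.h>
#include <unistd.h>

static bool mmap_is_coherent(const char *path)
{
	int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
	char buf = 0;
	char *map;
	bool ok = false;

	if (fd < 0)
		return false;
	if (write(fd, &buf, 1) != 1)
		goto out;

	map = mmap(NULL, 1, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (map == MAP_FAILED)
		goto out;

	*map = 'x';                        /* store via the mapping ...  */
	ok = (pread(fd, &buf, 1, 0) == 1   /* ... read back via the fd   */
	      && buf == 'x');
	munmap(map, 1);
out:
	close(fd);
	unlink(path);
	return ok;
}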
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
The most convenient way to write unit tests in C is to directly
#include the C files (CCAN uses this, for example). That works quite
well, but it means that tdb_private.h now needs to be protected
against multiple inclusions.
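For reference, a guard of the usual form is all that is needed (the macro
name here is illustrative):

#ifndef TDB_PRIVATE_H
#define TDB_PRIVATE_H

/* ... private declarations ... */

#endif /* TDB_PRIVATE_H */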
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Autobuild-User: Jeremy Allison <jra@samba.org>
Autobuild-Date: Fri Jan 6 04:16:41 CET 2012 on sn-devel-104
|
|
This avoids a tdb_fetch, thus a malloc/memcpy/free in the tdb_store path
|
|
We allocate a new recovery area by expanding the file. But if the
recovery area is already at the end of file (as shown in at least one
client case), we can simply expand the record, rather than freeing it
and creating a new one.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date: Wed Dec 21 06:25:40 CET 2011 on sn-devel-104
|
|
If we're expanding because the current recovery area is too small, we
expand only the amount we need. This can quickly lead to exponential
growth when we have a slowly-expanding record (hence a
slowly-expanding transaction size).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
|
|
I came across a tdb which had wrapped to 4G + 4K, and the contents had been
destroyed by processes which thought it was only 4k long. Fix this by checking
on open, and making tdb_oob() check for wrap itself.
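A sketch of the kind of check involved (not the in-tree tdb_oob() itself;
the typedefs mirror the 32-bit types in tdb_private.h):

#include <stdint.h>

typedef uint32_t tdb_off_t;   /* tdb1 offsets and lengths are 32 bit */
typedef uint32_t tdb_len_t;

static int oob_check(tdb_off_t off, tdb_len_t len, tdb_off_t map_size)
{
	if (off + len < off) {
		return -1;        /* unsigned arithmetic wrapped past 4G */
	}
	if (off + len > map_size) {
		return -1;        /* genuinely beyond the end of the file */
	}
	return 0;
}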
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date: Mon Dec 19 07:52:01 CET 2011 on sn-devel-104
|
|
TDB2 testing revealed that tdb1 doesn't do this. It's minor, but fix it.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date: Tue Aug 16 10:47:41 CEST 2011 on sn-devel-104
|
|
Andrew Bartlett noted that valgrind needs --partial-loads-ok=yes, otherwise
the Jenkins hash makes it complain.
My benchmarking here revealed that, at least with a modern gcc (4.5) and CPU
(Intel i5, 32-bit), there's no measurable performance penalty for the
"correct" code, so rip out the optimized one.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date: Wed Jun 8 11:05:47 CEST 2011 on sn-devel-104
|
|
If it's really the recovery area, we can trust the rec_len field, and
don't have to go groping for bitpatterns.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date: Tue Apr 19 14:15:22 CEST 2011 on sn-devel-104
|
|
ldb can create huge records when saving indexes.
Limit the tdb expansion to avoid consuming a lot of memory for
no good reason if the record being saved is huge.
|
|
tdb_repack() is expensive and consumes memory, so we can spend some
effort to see if it's worthwhile. In particular, tdbbackup doesn't
need to repack: it started with an empty database!
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
This is why macros are dangerous; these were converting the pointers, not the
things pointed to!
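Illustrative only (this is not the exact tdb macro): a by-name conversion
macro swaps whatever expression it is handed, so passing a pointer swaps the
pointer's own bytes rather than the record it points to.

#include <stddef.h>

void byteswap_inplace(void *p, size_t len);     /* stand-in converter */

#define CONVERT(x) byteswap_inplace(&(x), sizeof(x))

struct rec { unsigned int magic; unsigned int len; };

void example(struct rec *rp)
{
	CONVERT(*rp);    /* intended: converts the record rp points to */
	CONVERT(rp);     /* the bug: converts the pointer value itself */
}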
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
|
|
(ret < 0) can never be true
|
|
Autobuild-User: Volker Lendecke <vlendec@samba.org>
Autobuild-Date: Sat Feb 12 19:50:55 CET 2011 on sn-devel-104
|
|
Autobuild-User: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date: Wed Dec 29 10:12:05 CET 2010 on sn-devel-104
|
|
In order to suppress compiler warnings.
|
|
tdb_name() might be used within the given log function,
which might be called from within tdb_open_ex().
metze
Autobuild-User: Stefan Metzmacher <metze@samba.org>
Autobuild-Date: Fri Nov 12 11:22:21 UTC 2010 on sn-devel-104
|
|
Autobuild-User: Jelmer Vernooij <jelmer@samba.org>
Autobuild-Date: Thu Oct 21 11:47:22 UTC 2010 on sn-devel-104
|
|
This flag to tdb_open/tdb_open_ex affects the creation of a new database:
1) Uses the Jenkins lookup3 hash instead of the old gdbm hash if none is
specified,
2) Places a non-zero field in header->rwlocks, so older versions of TDB will
refuse to open it.
This means that the caller (i.e. Samba) can set this flag to safely
change the hash function. Versions of TDB from this one on will either
use the correct hash or refuse to open (if a different hash is specified).
Older TDB versions will see the nonzero rwlocks field and refuse to open
it under any conditions.
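For illustration (the wrapper name is mine), a caller creating a database
with the new behaviour might do:

#include <fcntl.h>
#include "tdb.h"

struct tdb_context *open_with_safe_hash(const char *path)
{
	return tdb_open_ex(path, 0 /* default hash size */,
	                   TDB_DEFAULT | TDB_INCOMPATIBLE_HASH,
	                   O_RDWR | O_CREAT, 0600,
	                   NULL /* default logging */, NULL /* default hash */);
}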
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
If the caller of tdb_open_ex() doesn't specify a hash and tdb_old_hash
doesn't match, try tdb_jenkins_hash.
This was Metze's idea: it makes life simpler, especially with the upcoming
TDB_INCOMPATIBLE_HASH flag.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
This is a better hash than the default: shipping it with tdb makes it easy
for callers to use it as the hash by passing it to tdb_open_ex().
This version was taken from CCAN and modified; CCAN in turn took it from
http://www.burtleburtle.net/bob/c/lookup3.c.
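For example (the wrapper name is illustrative), a caller opts in by passing
it as the hash_fn argument:

#include <fcntl.h>
#include "tdb.h"

struct tdb_context *open_with_jenkins(const char *path)
{
	return tdb_open_ex(path, 0 /* default hash size */, TDB_DEFAULT,
	                   O_RDWR | O_CREAT, 0600,
	                   NULL /* default logging */, tdb_jenkins_hash);
}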
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Guenther
|
|
this might help reduce test times and load on test machines
|
|
This is Stefan Metzmacher <metze@samba.org>'s patch with minor changes:
1) Use the TDB_MAGIC constant so both hashes aren't of strings.
2) Check the hash in tdb_check (paranoia, really).
3) Additional check in the (unlikely!) case where both examples hash to 0.
4) Cosmetic changes to var names and complaint message.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
We must not endian-convert the magic string, just the rest.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Commit bc1c82ea137 "Fix tdb_check() to work with read-only tdb databases."
claimed to do this, but tdb_lockall_read() fails on read-only databases.
Also make sure we can still do tdb_check() inside a transaction (weird,
but we previously allowed it so don't break the API).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
We can end up with dead areas when we die during transaction commit;
tdb_check() fails on such a (valid) database.
This is particularly noticeable now that we no longer truncate on recovery;
if the recovery area was at the end of the file we used to remove it
that way.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
We saw tdb_lockall() take 71 seconds under heavy load; this is because Linux
(at least) doesn't prevent new small locks being obtained while we're waiting
for a big lock.
The workaround is to do divide and conquer using non-blocking chainlocks: if
we get down to a single chain we block. Using a simple test program where
children did "hold lock for 100ms, sleep for 1 second" the time to do
tdb_lockall() dropped significantly. There are ln(hashsize) locks taken in
the contended case, but that's slow anyway.
More analysis is given in my blog at http://rusty.ozlabs.org/?p=120
This may also help transactions, though in that case it's the initial
read lock which uses this gradual locking routine; the update-to-write-lock
code is separate and still tries to update in one go.
Even though the ABI doesn't change, the minor version is bumped so the
behaviour change can easily be detected.
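A condensed sketch of the idea (the helper names are placeholders for the
real brlock-based primitives): try to take a whole range of chain locks
without blocking; on contention split the range in half and recurse,
blocking only once a single chain is left.

struct tdb_context;

int chainlock_block(struct tdb_context *tdb, unsigned int chain);
int chainlock_range_nonblock(struct tdb_context *tdb,
                             unsigned int start, unsigned int count);
void chainunlock_range(struct tdb_context *tdb,
                       unsigned int start, unsigned int count);

static int lock_gradual(struct tdb_context *tdb,
                        unsigned int start, unsigned int count)
{
	if (count == 1) {
		return chainlock_block(tdb, start);   /* last chain: wait */
	}
	if (chainlock_range_nonblock(tdb, start, count) == 0) {
		return 0;                             /* got the lot at once */
	}
	/* Contended: divide and conquer. */
	if (lock_gradual(tdb, start, count / 2) != 0) {
		return -1;
	}
	if (lock_gradual(tdb, start + count / 2, count - count / 2) != 0) {
		chainunlock_range(tdb, start, count / 2);
		return -1;
	}
	return 0;
}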
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
tdb_lockall() uses F_WRLCK internally, which doesn't work on a fd opened with O_RDONLY. Use tdb_lockall_read() instead.
Jeremy.
|
|
Guenther
|
|
Guenther
|
|
Commit 207a213c/24fed55d purported to fix the problem of signals during
tdb_new_database (which could cause a spurious short write, hence a failure).
However, the code is wrong: newdb+written is not correct.
Fix this by introducing a general tdb_write_all() and using it here and in
the tracing code.
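The shape of such a helper, as a sketch: loop until everything is written,
treating a short write (e.g. after a signal) as "keep going" rather than as
an error.

#include <stdbool.h>
#include <stddef.h>
#include <unistd.h>

static bool write_all(int fd, const void *buf, size_t count)
{
	while (count > 0) {
		ssize_t ret = write(fd, buf, count);
		if (ret <= 0) {
			return false;   /* real failure, or nothing written */
		}
		buf = (const char *)buf + ret;
		count -= (size_t)ret;
	}
	return true;
}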
Cc: Stefan Metzmacher <metze@samba.org>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
We now use -fvisibility=hidden to hide symbols from outside the tdb
shared library.
This also moved tdb_transaction_recover() into the tdb_private.h
header, as it should never have been a public API. For that reason we
are changing the version number. We're only doing a minor version
increment as it is extremely unlikely that anyone was actually using
tdb_transaction_recover() as its locking requirements were rather
unusual.
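The usual shape of the change, for illustration (the export macro is the
common Samba-style one; the real tree may spell it differently): build with
-fvisibility=hidden and mark only the intended public entry points.

#include <sys/types.h>   /* mode_t */

#define _PUBLIC_ __attribute__((visibility("default")))

struct tdb_context;

_PUBLIC_ struct tdb_context *tdb_open(const char *name, int hash_size,
                                      int tdb_flags, int open_flags,
                                      mode_t mode);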
Pair-Programmed-With: Rusty Russell <rusty@samba.org>
|
|
|
|
|
|
|
|
|
|
tdb transactions were designed to be robust against the machine
powering off, but interestingly were never designed to handle the case
where an administrator kill -9's a process during commit. Because
recovery is only done on tdb_open, processes with the tdb already
mapped will simply use it despite it being corrupt and needing
recovery.
The solution to this is to check for recovery every time we grab a
data lock: we could have gained the lock because a process just died.
This has no measurable cost: here is the time for tdbtorture -s 0 -n 1
-l 10000:
Before:
2.75 2.50 2.81 3.19 2.91 2.53 2.72 2.50 2.78 2.77 = Avg 2.75
After:
2.81 2.57 3.42 2.49 3.02 2.49 2.84 2.48 2.80 2.43 = Avg 2.74
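Sketch of the fix's shape (the helper names below are placeholders for the
real internals): once the first data lock is obtained, look for a leftover
recovery marker and recover before touching the data.

#include <stdbool.h>

struct tdb_context;

bool holds_any_data_lock(struct tdb_context *tdb);      /* placeholder */
bool recovery_marker_present(struct tdb_context *tdb);  /* placeholder */
int run_recovery(struct tdb_context *tdb);              /* placeholder */
int chainlock(struct tdb_context *tdb, int list);       /* placeholder */

static int chainlock_checked(struct tdb_context *tdb, int list)
{
	bool first = !holds_any_data_lock(tdb);

	if (chainlock(tdb, list) != 0) {
		return -1;
	}
	/* We may have been granted this lock because its holder died
	 * mid-commit; if so, recover before using the database. */
	if (first && recovery_marker_present(tdb)) {
		return run_recovery(tdb);
	}
	return 0;
}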
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
|
|
The current recovery code truncates the tdb file on recovery. This is
fine if recovery is only done on first open, but is a really bad idea
as we move to allowing recovery on "live" databases.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Now that the transaction code uses the standard allrecord lock, which stops
us from trying to grab any per-record locks anyway, we don't need to have
special noop lock ops for transactions.
This is a nice simplification: if you see brlock, you know it's really
going to grab a lock.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
tdb_release_extra_locks() is too general: it carefully skips over the
transaction lock, even though the only caller then drops it. Change
this, and rename it to show it's clearly transaction-specific.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Now that the transaction allrecord lock is the standard one, and thus is
cleaned up in tdb_release_extra_locks(), _tdb_transaction_cancel() doesn't
need to know what type it is.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Centralize locking of all chains of the tdb; rename _tdb_lockall to
tdb_allrecord_lock and _tdb_unlockall to tdb_allrecord_unlock, and
tdb_brlock_upgrade to tdb_allrecord_upgrade.
Then we use this in the transaction code. Unfortunately, if the transaction
code records that it has grabbed the allrecord lock read-only, write locks
will fail, so we treat this upgradable lock as a write lock, and mark it
as upgradable using the otherwise-unused offset field.
One subtlety: now the transaction code is using the allrecord_lock, the
tdb_release_extra_locks() function drops it for us, so we no longer need
to do it manually in _tdb_transaction_cancel.
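Purely illustrative (these are not tdb's actual structures): the trick is to
record the allrecord lock as a write lock so later write-lock checks pass,
and to remember "really upgradable" in the otherwise-unused offset field.

#include <fcntl.h>

struct allrecord_state {
	unsigned int off;    /* otherwise unused for the allrecord lock */
	int ltype;           /* F_RDLCK or F_WRLCK */
};

#define ALLRECORD_UPGRADABLE 1

void note_allrecord_lock(struct allrecord_state *s, int upgradable)
{
	s->ltype = F_WRLCK;                           /* always claim "write" */
	s->off = upgradable ? ALLRECORD_UPGRADABLE : 0;
}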
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|