|
Records themselves get (read) locked by the traversal code against delete.
Interestingly, this locking isn't done when the allrecord lock has been
taken, though the allrecord lock until recently didn't cover the actual
records (it now goes to end of file).
The write record lock, grabbed by the delete code, is not suppressed
by the allrecord lock. This is now bad: it causes us to punch a hole
in the allrecord lock when we release the write record lock. Make this
consistent: *no* record locks of any kind when the allrecord lock is
taken.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
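A minimal sketch of the rule this enforces (struct and helper names are illustrative, not tdb's actual internals): once the allrecord lock is held it already covers every record, so the per-record lock paths should become no-ops.
struct allrecord_lock { int count; int ltype; };
struct db_ctx { struct allrecord_lock allrecord_lock; };
/* Skip per-record fcntl locks entirely while the allrecord lock is held:
 * taking one adds nothing, and releasing it later would punch a hole in
 * the allrecord range. */
static int record_lock_needed(const struct db_ctx *db)
{
        return db->allrecord_lock.count == 0;
}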
|
|
We were previously inconsistent with our "global" lock: the
transaction code grabbed it from FREELIST_TOP to end of file, and the
rest of the code grabbed it from FREELIST_TOP to end of the hash
chains. Change it to always grab to end of file for simplicity and
so we can merge the two.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
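For reference, a sketch of how "to end of file" is expressed with a POSIX byte-range lock; FREELIST_TOP stands in for tdb's real header offset and the helper name is made up.
#include <sys/types.h>
#include <fcntl.h>
static int lock_to_eof(int fd, off_t freelist_top)
{
        struct flock fl = {
                .l_type   = F_WRLCK,
                .l_whence = SEEK_SET,
                .l_start  = freelist_top,
                .l_len    = 0,  /* 0 means "from l_start to end of file" */
        };
        /* One range covers the freelist, the hash chains and all records,
         * no matter how much the file grows afterwards. */
        return fcntl(fd, F_SETLKW, &fl);
}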
|
|
This was redundant before this patch series: it mirrored num_lockrecs
exactly. It still does.
Also, skip useless branch when locks == 1: unconditional assignment is
cheaper anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
This is pure overhead, but it centralizes the locking. Realloc (esp. as
most implementations are lazy) is fast compared to the fcntl anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Use our newly-generic nested lock tracking for the active lock.
Note that the tdb_have_extra_locks() and tdb_release_extra_locks()
functions have to skip over this lock now that it is tracked.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
This never nests, so it's overkill, but it centralizes the locking into
lock.c and removes the ugly flag in the transaction code to track whether
we have the lock or not.
Note that we have a temporary hack so this places a real lock, despite
the fact that we are in a transaction.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Rather than a boutique lock and a separate nest count, use our
newly-generic nested lock tracking for the transaction lock.
Note that the tdb_have_extra_locks() and tdb_release_extra_locks()
functions have to skip over this lock now that it is tracked.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Factor out two loops which find locks; we are going to introduce a couple
more so a helper makes sense.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
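A sketch of the kind of helper being factored out, with hypothetical names (the real field and function names in lock.c may differ):
struct lock_entry { unsigned int off; int count; int ltype; };
/* Scan the per-context array of held locks for one at a given offset;
 * returns NULL if no lock is held there. */
static struct lock_entry *find_held_lock(struct lock_entry *locks,
                                         int num_locks, unsigned int off)
{
        int i;
        for (i = 0; i < num_locks; i++) {
                if (locks[i].off == off)
                        return &locks[i];
        }
        return NULL;
}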
|
|
Move locking intelligence back into lock.c, rather than open-coding the
lock release in transaction.c.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
In many places we check whether locks are held: add a helper to do this.
The _tdb_lockall() case has already checked for the allrecord lock, so
the extra work done by tdb_have_extra_locks() is merely redundant.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
tdb_transaction_lock() and tdb_transaction_unlock() do nothing if we
hold the allrecord lock. However, the two locks don't overlap, so
this is wrong.
This simplification makes the transaction lock a straight-forward nested
lock.
There are two callers for these functions:
1) The transaction code, which already makes sure the allrecord_lock
isn't held.
2) The traverse code, which wants to stop transactions whether it has the
allrecord lock or not. There have been deadlocks here before, however
this should not bring them back (I hope!)
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Because fcntl locks don't nest, we track them in the tdb->lockrecs array
and only place/release them when the count goes to 1/0. We only do this
for record locks, so we simply place the list number (or -1 for the free
list) in the structure.
To generalize this:
1) Put the offset rather than list number in struct tdb_lock_type.
2) Rename _tdb_lock() to tdb_nest_lock, make it non-static and move the
allrecord check out to the callers (except the mark case which doesn't
care).
3) Rename _tdb_unlock() to tdb_nest_unlock(), make it non-static and
move the allrecord check out to the callers (except mark again).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
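Roughly what step 1 amounts to (an illustrative definition, not necessarily the exact one in tdb_private.h):
#include <stdint.h>
struct tdb_lock_type {
        uint32_t off;    /* file offset of the lock (previously a list number) */
        uint32_t count;  /* nesting depth: fcntl lock placed at 1, dropped at 0 */
        uint32_t ltype;  /* F_RDLCK or F_WRLCK */
};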
|
|
The word global is overloaded in tdb. The global_lock inside struct
tdb_context is used to indicate we hold a lock across all the chains.
Rename it to allrecord_lock.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
The word global is overloaded in tdb. The GLOBAL_LOCK offset is used at
open time to serialize initialization (and by the transaction code to block
open).
Rename it to OPEN_LOCK.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Now that tdb_open() calls tdb_transaction_cancel() instead of
_tdb_transaction_cancel(), we can make the latter static.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
This is taken from the CCAN code base: rather than using tdb_brlock for
locking and unlocking, we split it into brlock and brunlock functions.
For extra debugging information, brunlock says what kind of lock it is
unlocking (even though fcntl locks don't need this). This requires an
extra argument to tdb_transaction_unlock() so we know whether the
lock was upgraded to a write lock or not.
We also use a "flags" argument to tdb_brlock:
1) TDB_LOCK_NOWAIT replaces lck_type = F_SETLK (vs F_SETLKW).
2) TDB_LOCK_MARK_ONLY replaces setting TDB_MARK_LOCK bit in ltype.
3) TDB_LOCK_PROBE replaces the "probe" argument.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
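A sketch of the resulting internal interface; the flag values and exact prototypes here follow the CCAN version from memory and may differ slightly from lock.c.
enum tdb_lock_flags {
        TDB_LOCK_NOWAIT    = 0,  /* use F_SETLK instead of F_SETLKW */
        TDB_LOCK_WAIT      = 1,
        TDB_LOCK_MARK_ONLY = 2,  /* record the lock, don't call fcntl */
        TDB_LOCK_PROBE     = 4,  /* failure is expected, don't log it */
};
int tdb_brlock(struct tdb_context *tdb, int rw_type,
               tdb_off_t offset, size_t len, enum tdb_lock_flags flags);
int tdb_brunlock(struct tdb_context *tdb, int rw_type,
                 tdb_off_t offset, size_t len);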
|
|
Signed-off-by: Matthias Dieter Wallnöfer <mwallnoefer@yahoo.de>
|
|
This might help on some filesystems
|
|
|
|
If a process (or the machine) dies just after writing the
recovery head (pointing at the end of file), the recovery record will be filled
with 0x42. This will not invoke a recovery on open, since rec.magic
!= TDB_RECOVERY_MAGIC.
Unfortunately, the first transaction commit will happily reuse that
area: tdb_recovery_allocate() doesn't check the magic. The recovery
record has length 0x42424242, and it writes that back into the
now-valid-looking transaction header for the next comer (which
happens to be tdb_wipe_all in my tests).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
There was a bug in tdb where the
tdb_brlock(tdb, GLOBAL_LOCK, F_UNLCK, F_SETLKW, 0, 1);
(ending the transaction-"mutex") was done before the
/* remove the recovery marker */
This means that when a transaction is committed there is a window where another
opener of the file sees the transaction marker while the transaction committer
is still fully functional and working on it. This led to the transaction being
rolled back by that second opener of the file while transaction_commit() gave
no error to the caller.
This patch moves the F_UNLCK to after the recovery marker was removed, closing
this window.
|
|
We need to keep TDB_ALLOW_NESTING as default behavior,
so that existing code continues to work.
However we may change the default together with a major version
number change in future.
metze
|
|
Make the default be that transaction nesting is not allowed and any attempt to create a nested transaction will fail with TDB_ERR_NESTING.
If an application can cope with transaction nesting and the implicit
semantics of tdb_transaction_commit(), it can enable transaction nesting
by using the TDB_ALLOW_NESTING flag.
(cherry picked from ctdb commit 3e49e41c21eb8c53084aa8cc7fd3557bdd8eb7b6)
Signed-off-by: Stefan Metzmacher <metze@samba.org>
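A usage sketch of the new default (file name and error handling are illustrative):
#include <fcntl.h>
#include "tdb.h"
int nesting_demo(void)
{
        /* Opened without TDB_ALLOW_NESTING, so nesting is now refused. */
        struct tdb_context *tdb = tdb_open("nest.tdb", 0, 0,
                                           O_RDWR | O_CREAT, 0600);
        if (tdb == NULL)
                return -1;
        tdb_transaction_start(tdb);                /* outer transaction */
        if (tdb_transaction_start(tdb) != 0 &&
            tdb_error(tdb) == TDB_ERR_NESTING) {
                /* expected; pass TDB_ALLOW_NESTING at open time to get
                 * the old nesting behaviour back */
        }
        tdb_transaction_commit(tdb);
        return tdb_close(tdb);
}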
|
|
metze
|
|
|
|
So that erroneous double tdb_close() calls do not try to close() the same
fd again. This is like SAFE_FREE() but for fd.
Signed-off-by: Kirill Smelkov <kirr@mns.spb.ru>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
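The idiom, sketched as a macro (SAFE_CLOSE is a hypothetical name, by analogy with SAFE_FREE):
#include <unistd.h>
/* Close the descriptor and immediately invalidate it, so a second
 * erroneous tdb_close() sees fd == -1 and does not close an fd that may
 * meanwhile belong to something else. */
#define SAFE_CLOSE(fd) do { if ((fd) != -1) { close(fd); (fd) = -1; } } while (0)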
|
|
This can help with ldb where we rewrite the index records
|
|
metze
|
|
ctdb wants a quick way to detect corrupt tdbs; particularly, tdbs with
loops in their hash chains. tdb_check() provides this.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
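A usage sketch, assuming the per-record callback may be passed as NULL when only structural checks are wanted:
#include <stdio.h>
#include "tdb.h"
int verify_db(struct tdb_context *tdb)
{
        /* Walks the free list and hash chains, detecting loops and other
         * structural corruption; NULL skips per-record checks. */
        if (tdb_check(tdb, NULL, NULL) != 0) {
                fprintf(stderr, "tdb corrupt: %s\n", tdb_errorstr(tdb));
                return -1;
        }
        return 0;
}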
|
|
It was a regrettable hack which I used to reduce line count in tdb; in fact it caused confusion as can be seen in this patch.
In particular, ecode now needs to be set before TDB_LOG anyway, and having it exposed in
the header is useless (the struct tdb_context isn't defined, so it's doubly useless).
Also, we should never set errno, as io.c was doing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
When TDB_TRACE is defined (in tdb_private.h), verbose tracing of tdb operations is enabled.
This can be replayed using "replay_trace" from http://ccan.ozlabs.org/info/tdb.
The majority of this patch comes from moving internal functions to _<funcname> to
avoid double-tracing. There should be no additional overhead for the normal (!TDB_TRACE)
case.
Note that the verbose traces compress really well with rzip.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
We previously only allowed a commit to happen after a prepare
commit. It is in fact safe to allow reads between a prepare and a
commit, and the s4 replication code can make use of that, so allow it.
|
|
Guenther
|
|
In future, this may happen, and we don't want to clobber them.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Jeremy.
|
|
over the 2G offset on systems which support 64 bit file offsets. This fixes
that case.
On systems with 32 bit offsets, expansion and fcntl locking on these records
will fail anyway. SAMBA already does '#define _FILE_OFFSET_BITS 64' in
config.h (on my 32-bit x86 Linux system at least) to get 64 bit file offsets.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
The flags are user-visible, via tdb_get_flags/add_flags/remove_flags.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Stefan Metzmacher <metze@samba.org>
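A usage sketch of those accessors (toggling TDB_NOSYNC here is just an example):
#include "tdb.h"
void set_nosync(struct tdb_context *tdb, int on)
{
        if (on)
                tdb_add_flags(tdb, TDB_NOSYNC);
        else
                tdb_remove_flags(tdb, TDB_NOSYNC);
        if (tdb_get_flags(tdb) & TDB_NOSYNC) {
                /* commits will now skip fsync/msync */
        }
}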
|
|
thread/fork safe tdb_reopen_all() should be a noop".
This version just wraps the reopen code, so we still re-grab the lock and do
the normal sanity checks.
The reason we do this at all is to avoid global fd limits, see:
http://forums.fedoraforum.org/showthread.php?t=210393
Note also that this whole reopen concept is fundamentally racy: if the parent
goes away before the child calls tdb_reopen_all, the database can be left
without an active lock and another TDB_CLEAR_IF_FIRST opener will clear it.
A fork_with_tdbs() wrapper could use a pipe to solve this, but it's hardly
elegant (what if there are other independent things which have similar needs?).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Stefan Metzmacher <metze@samba.org>
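A usage sketch of the fork pattern involved (worker function and error handling are illustrative):
#include <sys/types.h>
#include <unistd.h>
#include "tdb.h"
void spawn_worker(void)
{
        pid_t pid = fork();
        if (pid == 0) {
                /* Child: re-open every tdb handle inherited from the
                 * parent before using it; the argument tells tdb whether
                 * the parent stays alive and so keeps holding the
                 * TDB_CLEAR_IF_FIRST active lock. */
                if (tdb_reopen_all(1) != 0)
                        _exit(1);
                /* ... child work on the reopened tdbs ... */
                _exit(0);
        }
}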
|
|
tdb_reopen_all() should be a noop"
This reverts commit e17df483fbedb81aededdef5fbb6ae1d034bc2dd.
tdb_reopen_all also restores the active lock, required for TDB_CLEAR_IF_FIRST.
Signed-off-by: Stefan Metzmacher <metze@samba.org>
|
|
current code does a free of the old record in this case, then fails.
|
|
might take us out-of-bounds. Only pretend to be length 1 for the malloc.
|
|
54a51839ea65aa788b18fce8de0ae4f9ba63e4e7 "Make tdb transaction lock
recursive (samba version)" was broken: I "cleaned it up" and prevented
it from ever unlocking.
To see the problem:
$ bin/tdbtorture -s 1248142523
tdb_brlock failed (fd=3) at offset 8 rw_type=1 lck_type=14 len=1
tdb_transaction_lock: failed to get transaction lock
tdb_transaction_start failed: Resource deadlock avoided
My testcase relied on the *count* being correct, which it was. Fixing that
now.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Michael Adam <obnox@samba.org>
|
|
This patch replaces 6ed27edbcd3ba1893636a8072c8d7a621437daf7 and
1a416ff13ca7786f2e8d24c66addf00883e9cb12, which fixed the bug where traversals
inside transactions would release the transaction lock early.
This solution is more general, and solves the more minor symptom that nested
traversals would also release the transaction lock early. (It was also suggested in
Volker's comment in 6ed27ed).
This patch also applies to ctdb, if the traverse.c part is removed (ctdb's tdb
code never received the previous two fixes).
Tested using the testsuite from ccan (adapted to the samba code). Thanks to
Michael Adam for feedback.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Michael Adam <obnox@samba.org>
|
|
This greatly reduces the fragmentation of databases where records
tend to grow slowly by a small amount each time. The case where this
is most seen is the ldb index records. Adding this overallocation
reduced the size of the resulting database by more than 20x when
running a test that adds 10k users.
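A purely illustrative sketch of the idea; the actual growth factor and where tdb applies it are not taken from this commit:
/* Give a growing record some headroom beyond what it needs right now,
 * so a record that grows a little on every store does not force a new,
 * increasingly fragmented allocation each time.  The 25% figure is an
 * assumption for the example. */
static unsigned int with_headroom(unsigned int needed)
{
        return needed + needed / 4;
}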
|
|
The idea behind this is to recover from badly fragmented free
lists. Choosing the point where the file expands is fairly arbitrary,
but seems to work well.
|
|
During a transaction commit tdb normally uses fsync/msync calls to
make it crash safe. This can be disabled using the TDB_NOSYNC flag,
but it wasn't disabling all the code paths that caused a fsync/msync.
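A usage sketch of the flag in question (path and helper name are illustrative):
#include <fcntl.h>
#include "tdb.h"
/* Trade commit crash-safety for speed: with this fix, TDB_NOSYNC now
 * suppresses every fsync/msync path taken during a transaction commit. */
struct tdb_context *open_unsafe_fast(const char *path)
{
        return tdb_open(path, 0, TDB_NOSYNC, O_RDWR | O_CREAT, 0600);
}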
|
|
|
|
|
|
Using tdb_transaction_prepare_commit() gives us 2-phase commits. This
allows us to safely commit across multiple tdb databases at once, with
reasonable transaction semantics.
Signed-off-by: tridge@samba.org
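A usage sketch of a two-database commit built on the new call (names and error handling are illustrative):
#include "tdb.h"
int commit_both(struct tdb_context *a, struct tdb_context *b)
{
        if (tdb_transaction_start(a) != 0)
                return -1;
        if (tdb_transaction_start(b) != 0) {
                tdb_transaction_cancel(a);
                return -1;
        }
        /* ... tdb_store() calls on a and b ... */
        /* Phase 1: prepare both; nothing is visible yet and either side
         * can still back out. */
        if (tdb_transaction_prepare_commit(a) != 0 ||
            tdb_transaction_prepare_commit(b) != 0) {
                tdb_transaction_cancel(a);
                tdb_transaction_cancel(b);
                return -1;
        }
        /* Phase 2: commit both. */
        tdb_transaction_commit(a);
        tdb_transaction_commit(b);
        return 0;
}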
|