|
We unmap the tdb on expand, then remap. But when we have INCOHERENT_MMAP
(i.e. OpenBSD) and we're inside a transaction, doing the expand can mean
we need to read from the database to partially fill a transaction block.
This fails, because if mmap is incoherent we never allow accessing the
database via read/write.
The solution is not to unmap and remap until we've actually written the
padding at the end of the file.
Reported-by: Amitay Isaacs <amitay@gmail.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date: Fri Mar 23 02:53:15 CET 2012 on sn-devel-104
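A minimal sketch of the reordering, written against tdb's internal types;
tdb_write_padding() here is a hypothetical stand-in for the real zero-fill
write, not an actual tdb function:

    /* Sketch of the fixed ordering in the expand path. */
    static int tdb_expand_file_sketch(struct tdb_context *tdb,
                                      tdb_off_t size, tdb_off_t addition)
    {
        /* Write the padding at the new end of file while the old
         * mapping is still in place: with incoherent mmap we must
         * not fall back to read() on a mmap-accessed database. */
        if (tdb_write_padding(tdb, size, addition) == -1)
            return -1;

        /* Only now drop the old mapping and map the enlarged file. */
        tdb_munmap(tdb);
        tdb->map_size = size + addition;
        return tdb_mmap(tdb);
    }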
|
|
fde694274e1e5a11d1473695e7ec7a97f95d39e4 made tdb_mmap return an int,
but didn't add the return 0 for the "internal db" case.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
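The shape of the fix, paraphrased rather than quoted (the mmap call details
are illustrative):

    int tdb_mmap(struct tdb_context *tdb)
    {
        /* Internal (in-memory) databases have nothing to map; this
         * early return is the one the conversion to int missed. */
        if (tdb->flags & TDB_INTERNAL)
            return 0;

        tdb->map_ptr = mmap(NULL, tdb->map_size,
                            PROT_READ | (tdb->read_only ? 0 : PROT_WRITE),
                            MAP_SHARED, tdb->fd, 0);
        if (tdb->map_ptr == MAP_FAILED) {
            tdb->map_ptr = NULL;
            return -1;
        }
        return 0;
    }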
|
|
This comment appears in two places in the code (commit
4c6a8273c6dd3e2aeda5a63c4a62aa55bc133099 from 2001):
/*
* We must ensure the file is unmapped before doing this
* to ensure consistency with systems like OpenBSD where
* writes and mmaps are not consistent.
*/
But this doesn't help, because if one process is using mmap and another
using pwrite, we get incoherent results, as demonstrated by the tdb unit
test failures on OpenBSD.
Rather than disable mmap on OpenBSD, we test for this issue and force mmap
to be enabled. This means that we will fail on very large TDBs on 32-bit
systems, but it's better than the horrendous performance penalty on every
OpenBSD system.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
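A self-contained sketch of what such a coherence probe can look like (this
is illustrative, not the actual configure-time test):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Returns 1 if a mapping observes a pwrite() to the same file,
     * 0 if not (incoherent mmap), -1 on setup failure. */
    static int mmap_is_coherent(const char *path)
    {
        int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
        char *map;
        int coherent;

        if (fd < 0 || ftruncate(fd, 4096) != 0)
            return -1;
        map = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED)
            return -1;
        /* Modify through the file descriptor, not the mapping. */
        if (pwrite(fd, "x", 1, 0) != 1)
            return -1;
        coherent = (map[0] == 'x');
        munmap(map, 4096);
        close(fd);
        return coherent;
    }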
|
|
If we're expanding because the current recovery area is too small, we
expand only the amount we need. This can quickly lead to exponential
growth when we have a slowly-expanding record (hence a
slowly-expanding transaction size).
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
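A sketch of the remedy under one plausible policy; the 25% headroom figure
is an assumption for illustration, not necessarily the committed constant:

    /* Illustrative only: expand the recovery area with headroom so a
     * slowly growing transaction doesn't force an expand every commit. */
    static tdb_len_t recovery_size_sketch(tdb_len_t needed)
    {
        return needed + needed / 4;
    }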
|
|
I came across a tdb which had wrapped to 4G + 4K, and the contents had been
destroyed by processes which thought it was only 4k long. Fix this by
checking on open, and by making tdb_oob() check for wrap itself.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-User: Rusty Russell <rusty@rustcorp.com.au>
Autobuild-Date: Mon Dec 19 07:52:01 CET 2011 on sn-devel-104
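A sketch of the wrap guard; with a 32-bit tdb_off_t, off + len can wrap
past 4G and then compare as in-bounds, so the wrap must be checked first:

    static int tdb_oob_sketch(struct tdb_context *tdb,
                              tdb_off_t off, tdb_off_t len)
    {
        /* Unsigned overflow: a wrapped sum is smaller than either
         * operand, which is exactly the corruption case above. */
        if (off + len < off) {
            tdb->ecode = TDB_ERR_IO;
            return -1;
        }
        if (off + len <= tdb->map_size)
            return 0;
        /* ... otherwise re-stat the file and expand or fail ... */
        return -1;
    }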
|
|
ldb can create huge records when saving indexes.
Limit the tdb expansion to avoid consuming a lot of memory for
no good reason if the record being saved is huge.
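A sketch of such a clamp (see the separator below); the cap value and names
are assumptions for illustration:

    #define MAX_EXTRA_SKETCH (100 * 1024)   /* hypothetical cap */

    static tdb_len_t expand_size_sketch(tdb_len_t needed)
    {
        tdb_len_t extra = needed / 4;       /* usual headroom policy */
        if (extra > MAX_EXTRA_SKETCH)       /* huge record: cap the slack */
            extra = MAX_EXTRA_SKETCH;
        return needed + extra;
    }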
|
|
Now that the transaction code uses the standard allrecord lock, which
stops us from trying to grab any per-record locks anyway, we don't need
special no-op lock ops for transactions.
This is a nice simplification: if you see brlock, you know it's really
going to grab a lock.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
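A sketch of why the no-op ops became unnecessary, loosely following tdb's
lock.c; the field and helper names here are approximations, not a quote:

    static int lock_list_sketch(struct tdb_context *tdb, int list, int ltype)
    {
        /* The allrecord lock already covers every chain, so a
         * per-record request under it returns without touching
         * fcntl at all: no special transaction lock ops needed. */
        if (tdb->allrecord_lock.count &&
            (ltype == F_RDLCK || tdb->allrecord_lock.ltype == F_WRLCK))
            return 0;
        return tdb_brlock(tdb, ltype, lock_offset(list), 1, TDB_LOCK_WAIT);
    }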
|
|
This is taken from the CCAN code base: rather than using tdb_brlock for
locking and unlocking, we split it into brlock and brunlock functions.
For extra debugging information, brunlock says what kind of lock it is
unlocking (even though fcntl locks don't need this). This requires an
extra argument to tdb_transaction_unlock() so we know whether the
lock was upgraded to a write lock or not.
We also use a "flags" argument to tdb_brlock:
1) TDB_LOCK_NOWAIT replaces lck_type = F_SETLK (vs F_SETLKW).
2) TDB_LOCK_MARK_ONLY replaces setting TDB_MARK_LOCK bit in ltype.
3) TDB_LOCK_PROBE replaces the "probe" argument.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
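The resulting shape of the API, as a sketch (the numeric flag values are
illustrative):

    enum tdb_lock_flags {
        TDB_LOCK_NOWAIT    = 0,  /* use F_SETLK rather than F_SETLKW */
        TDB_LOCK_WAIT      = 1,
        TDB_LOCK_MARK_ONLY = 2,  /* record the lock, don't take it */
        TDB_LOCK_PROBE     = 4,  /* don't log a failure to acquire */
    };

    int tdb_brlock(struct tdb_context *tdb, int rw_type,
                   tdb_off_t offset, size_t len,
                   enum tdb_lock_flags flags);
    /* brunlock repeats the lock type purely for debugging clarity. */
    int tdb_brunlock(struct tdb_context *tdb, int rw_type,
                     tdb_off_t offset, size_t len);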
|
|
metze
|
|
It was a regrettable hack which I used to reduce line count in tdb; in fact it caused confusion as can be seen in this patch.
In particular, ecode now needs to be set before TDB_LOG anyway, and having it exposed in
the header is useless (the struct tdb_context isn't defined, so it's doubly useless).
Also, we should never set errno, as io.c was doing.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
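A sketch of the pattern the patch standardizes on; the offsets and message
text are illustrative:

    if (pread(tdb->fd, buf, len, off) != (ssize_t)len) {
        tdb->ecode = TDB_ERR_IO;    /* set explicitly, before TDB_LOG */
        TDB_LOG((tdb, TDB_DEBUG_FATAL,
                 "tdb_read failed at %u len=%u\n", off, len));
        return -1;                  /* and errno is left alone */
    }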
|
|
might take us out-of-bounds. Only pretend to be length 1 for the malloc.
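A sketch of the idea, with hypothetical variable names:

    /* Keep the real length for the bounds check, but never ask
     * malloc for zero bytes. */
    unsigned char *buf = malloc(len ? len : 1);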
|