for the case that another local process has started a transaction
between releasing the transaction_lock record and starting the
transaction.
Michael
|
|
in db_ctdb_transaction_fetch_start() for error conditions when re-fetching
the transaction_lock record inside the transaction
Michael
|
|
node.
In ctdb_transaction_commit(), when the trans2_commit control fails, there
is a race condition in the 1-second sleep between the local transaction_cancel
and the call to ctdb_replay_transaction(): the database is not locked, and
neither is the transaction_lock record. So another client can start and possibly
complete a new transaction in this gap, but only on the same node: locking
the transaction_lock record from a different node, which involves migrating
the record to that node, has been disabled by the introduction of the
transaction_active flag on the db, which closes precisely this gap from the start
of the commit until the call to TRANS2_FINISH or TRANS2_ERROR.
But this mechanism does not cover the case where a process on the same node
tries to start a transaction: There is no obstacle to locking the transaction_lock
record because the record does not need to be migrated.
This commit closes this race condition in ctdb_transaction_fetch_start()
by using the new ctdb_ctrl_transaction_active() call to ask the local
ctdb daemon whether it has a transaction running on the database.
If so, the check is repeated until the running transaction is done.
This does introduce an additional call to the local ctdbd when starting
transactions, but it does close the (hopefully) last race condition.
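A minimal sketch of that check, assuming ctdb_ctrl_transaction_active() simply reports whether the local daemon has a transaction open on the given database; the real control also needs a connection handle and can fail, and wait_for_local_transaction() is a hypothetical helper name used only for illustration:

```c
#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>

/* Assumed prototype: ask the local ctdbd whether it currently has a
 * transaction running on the database with this id. */
bool ctdb_ctrl_transaction_active(uint32_t db_id);

/* Sketch of the check added to db_ctdb_transaction_fetch_start():
 * before taking the transaction_lock record, wait until no other
 * local transaction is active on this database. */
static void wait_for_local_transaction(uint32_t db_id)
{
	while (ctdb_ctrl_transaction_active(db_id)) {
		/* another local client holds a transaction; back off
		 * briefly and repeat the check */
		usleep(1000);
	}
}
```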
Michael
|
|
CTDB_CONTROL_TRANS2_COMMIT
Michael
|
|
There are two races in concurrent transactions on a single node.
One is in starting a transaction and one is in the replay during commit.
This commit closes the first race by storing the client pid in the
transaction_lock record and comparing the stored pid against its own
pid after releasing the lock and refetching the record inside the
transaction.
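A rough illustration of the pid check; the record layout and helper name below are invented for the example, since the real transaction_lock record carries more state than just the pid:

```c
#include <stdbool.h>
#include <sys/types.h>
#include <unistd.h>

/* Invented, simplified layout of the transaction_lock record. */
struct transaction_lock_rec {
	pid_t holder_pid;	/* pid of the client that took the lock */
};

/* After releasing the lock and re-fetching the record inside the
 * transaction, the stored pid must still be our own.  If it is not,
 * another local process started a transaction in the gap and we have
 * to start over. */
static bool transaction_lock_still_ours(const struct transaction_lock_rec *rec)
{
	return rec->holder_pid == getpid();
}
```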
Michael
|
|
Michael
|
|
Michael
|
|
This fetches a record from the db and splits out the ctdb header.
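For illustration, a record in a ctdb-managed tdb carries a ctdb header in front of the actual value, so splitting it out amounts to something like the sketch below; the header struct is reduced to a stand-in and the helper name is made up:

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <tdb.h>

/* Reduced stand-in for the real ctdb record header. */
struct ctdb_header_sketch {
	uint64_t rsn;		/* record sequence number */
	uint32_t dmaster;	/* node that owns the record */
	uint32_t flags;
};

/* Split a fetched record into header and value: the header sits at
 * the start of the blob, the value is whatever follows it. */
static bool split_ctdb_header(TDB_DATA rec,
			      struct ctdb_header_sketch *hdr,
			      TDB_DATA *value)
{
	if (rec.dsize < sizeof(*hdr)) {
		return false;	/* too short to contain a header */
	}
	memcpy(hdr, rec.dptr, sizeof(*hdr));
	value->dptr = rec.dptr + sizeof(*hdr);
	value->dsize = rec.dsize - sizeof(*hdr);
	return true;
}
```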
Michael
|
|
and use it in db_ctdb_store() and db_ctdb_transaction_store().
Michael
|
|
Michael
|
|
an existing record
not only when creating a record.
This matches commit e9194a130327d6b05a8ab90bd976475b0e93b06d from ctdb-master.
Michael
|
|
Michael
|
|
Michael
|
|
metze
|
|
|
|
|
|
|
|
(This used to be commit 010c7101e59477f0d5f3bf11c17f474ec6f79cc1)
|
|
(This used to be commit dd9e4e6db04acf20f6ef7705955358c7ca442bbd)
|
|
allowed for tdb. This is needed for the registry db backend.
(This used to be commit 4b04ec29c76df837a7909725bbbf4c79d5abdb4d)
|
|
(This used to be commit a2f70fc175b748ef160a998d0563c28381ea3466)
|
|
out of sync
(This used to be commit 571ec7893c8b40959c005d510c039e3f231ffc67)
|
|
thinking it was a failure of a transaction cancel
(This used to be commit 22dbe158ed62ae47bbcb41bba3db345294f75437)
|
|
(This used to be commit fe6a03e7b11cd859fddae5ba924ea5e071b8ccea)
|
|
1) when all nodes write the same value to the record, or when writing
a value that is already there, we can skip the write and save
ourselves a network transaction (see the sketch after this list)
2) when all remote nodes fail an update, and we then fail a replay, we
don't need to trigger a recovery. This solves a corner case where
we could get into a recovery loop
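A sketch of the comparison behind point (1), using plain TDB_DATA blobs; the helper name is made up for the example:

```c
#include <stdbool.h>
#include <string.h>
#include <tdb.h>

/* Point (1) above: if the value we are about to write is identical to
 * the value already stored, the networked write can be skipped. */
static bool store_can_be_skipped(TDB_DATA oldval, TDB_DATA newval)
{
	if (oldval.dsize != newval.dsize) {
		return false;
	}
	return oldval.dsize == 0 ||
	       memcmp(oldval.dptr, newval.dptr, oldval.dsize) == 0;
}
```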
(This used to be commit 2481bfce4307274806584b0d8e295cc7f638e184)
|
|
could lead to it blocking forever
(This used to be commit a633390d3a7cb04a7c4e14cba9c533621793287e)
|
|
(This used to be commit ba64a757f86fb60994e12e81416083ac0fa11c21)
|
|
(This used to be commit 76fbe56e827193d939676da23a580aa0f9394dd1)
|
|
(This used to be commit 037516f1362c8d64da1d47a0cdaf83198d3eaeaf)
|
|
(This used to be commit 2e85cbe88b3d1674b915f62e02be7d005fddaa39)
|
|
(This used to be commit f91a3e0f7b7737c1d0667cd961ea950e2b93e592)
|
|
Michael
(This used to be commit d776d8df262e1753fb428450140df94e63035af5)
|
|
Only retry when ctdbd_persistent_update() failed.
Michael
(This used to be commit ff413a4614c8b272a34b2a9e56a329a8e8749a34)
|
|
store.
Michael
(This used to be commit eaf76c751f9bde2843174b400c109304831df83e)
|
|
as delete_rec operation from fetch_locked()
Michael
(This used to be commit f4aab595a0219305fbedf8890e787b690660a55a)
|
|
to reduce code duplication.
Michael
(This used to be commit 09a197e756459877cab7b4d09f534c6a41cfdd71)
|
|
This is because ctdbd can fail in performing the persistent_store
due to race conditions, and this does not mean it can't succeed
the next time.
To not loop infinitely, this makes use of a new parametric option:
"dbwrap ctdb:max store retries" (integer) which defaults to 5
and sets the upper limit for the number or repeats of the
fetch/store cycle.
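A minimal sketch of the bounded retry described here; fetch_and_store_once() is a made-up stand-in for one fetch/store cycle, and the limit would come from the new parametric option:

```c
#include <stdbool.h>

/* Made-up stand-in for one fetch/store cycle against ctdbd; in the
 * real code this can fail transiently due to races with other writers. */
bool fetch_and_store_once(void);

/* Retry the cycle until it succeeds, but at most max_retries times
 * ("dbwrap ctdb:max store retries", default 5), so a persistent
 * failure cannot turn into an endless loop. */
static bool store_with_retries(int max_retries)
{
	int i;

	for (i = 0; i < max_retries; i++) {
		if (fetch_and_store_once()) {
			return true;
		}
	}
	return false;
}
```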
Michael
(This used to be commit 2bcc9e6ecef876030e552a607d92597f60203db2)
|
|
in the persistent db_ctdb_store operation.
This is to prevent deadlocks in db_ctdb_persistent_store().
There is a tradeoff: usually, the record is still locked
after the db->store operation. This lock is usually released
via the talloc destructor when the record is TALLOC_FREEd.
So we have two choices:
- Either re-lock the record after the call to persistent_store
  or cancel_persistent_update, and this way not change any
  assumptions callers may have about the state, but possibly
  introduce new race conditions.
- Or don't lock the record again but just remove the
  talloc destructor. This is less racy but assumes that
  the lock is always released via TALLOC_FREE of the record.
I choose the first variant for now since it preserves the
assumptions callers may have about the state.
We can't guarantee that we succeed in getting the lock
anyway. The only real danger here is that a caller
performs multiple store operations after a fetch_locked(),
which is currently not the case.
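A very rough sketch of the first variant, with invented names for the record and its chain lock helpers; the only point it illustrates is dropping the lock around the call into ctdbd and re-taking it afterwards so the caller's TALLOC_FREE still releases a held lock:

```c
#include <stdbool.h>

/* Invented stand-ins for the record and its chain lock. */
struct record_sketch { int dummy; };
void record_chainlock(struct record_sketch *rec);
void record_chainunlock(struct record_sketch *rec);
bool do_persistent_store(struct record_sketch *rec);

/* First variant from the message above: drop the lock around the call
 * into ctdbd to avoid deadlocks, then re-take it so that the caller's
 * eventual TALLOC_FREE of the record still behaves as it expects. */
static bool store_unlocked_then_relock(struct record_sketch *rec)
{
	bool ok;

	record_chainunlock(rec);
	ok = do_persistent_store(rec);	/* or cancel_persistent_update */
	record_chainlock(rec);		/* restore the caller's assumptions */

	return ok;
}
```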
Michael
(This used to be commit d004c9a7281d2577c3ba2012c8f790cc198ea700)
|
|
Michael
(This used to be commit c939c55e5182258092faceefa58a7f328f18619e)
|
|
database in an inconsistent state if we crash during the operation
Signed-off-by: Ronnie Sahlberg <ronniesahlberg@gmail.com>
(This used to be commit 09329f1f9114af44fc4e5e4f29a7315912313125)
|
|
(This used to be commit 123fc3980a83d956bffaa689f3af81bbf81ce1c1)
|
|
Only filled in for tdb so far; for rbt it's pointless, and ctdb itself needs
to be extended.
(This used to be commit 0a55e018dd68af06d84332d54148bbfb0b510b22)
|
|
(http://samba.org/~tridge/3_0-ctdb)
Signed-off-by: Alexander Bokovoy <ab@samba.org>
(This used to be commit 0c8e23afbbb2d081fc23908bafcad04650bfacea)
|
|
(This used to be commit 9f9c933c16abacb2d0aa7bc7faa5b1ddac61b0e5)
|
|
The lockup could happen when packet_read_sync() gets two packets in a row, the
first one being an async message, and the second one being the response to a
ctdb request.
Also add some debug messages to ctdbd_conn.c, and cut off the "locking key"
messages to dump only 20 hex chars at debug level 10; levels above 10 will dump
everything.
(This used to be commit 0a55880a240b619810371a19144dd0a75208adfe)
|
|
when using "clustering = yes" and ctdbd isn't running
metze
(This used to be commit c5f020ba1fdefe0422dd466b9c68ff67c74ceddd)
|
|
(This used to be commit b0132e94fc5fef936aa766fb99a306b3628e9f07)
|
|
Jeremy.
(This used to be commit 407e6e695b8366369b7c76af1ff76869b45347b3)
|
|
I'm 100% certain I've forgotten to merge something, but the main code
should be in. It's mainly in dbwrap_ctdb.c, ctdbd_conn.c and
messages_ctdbd.c.
There should be no changes to the non-cluster case; it does survive make
test on my laptop.
It survives some very basic tests with ctdbd enabled; I did not do the
full test suite for clusters yet.
Phew...
Volker
(This used to be commit 15553d6327a3aecdd2b0b94a3656d04bf4106323)
|