path: root/source3/lib/g_lock.c
Age | Commit message | Author | Files | Lines (-/+)
2010-02-12 | Fix warning messages on compile in g_lock.c. Volker & Michael please check. | Jeremy Allison | 1 | -14/+4
Jeremy.
2010-02-12 | s3:g_lock: remove a nested event loop, replacing the inner loop by select | Michael Adam | 1 | -38/+101
The nested event loop made smbd crash in g_lock_lock() when trying to start a transaction on a db with an already started transaction, e.g. in a tcon_and_X where share_info.tdb was not yet initialized but was already locked by another process, or on write access to the winreg rpc pipe where the registry tdb was already locked by another process.

What we really _want_ to do here by design is to react to MSG_DBWRAP_G_LOCK_RETRY messages that are either sent by a client doing g_lock_unlock, or by ourselves when we receive a CTDB_SRVID_SAMBA_NOTIFY or CTDB_SRVID_RECONFIGURE message from ctdbd, i.e. when either a client holding a lock or a complete node has died.

Doing this properly involves calling tevent_loop_once(), but doing that here with the main ctdbd messaging context creates a nested event loop when g_lock_lock() is called from the main event loop.

So as a quick fix, we act a little coarsely here: we do a select on the ctdb connection fd, and when it is readable or we get EINTR, we retry without actually parsing any ctdb packets or dispatching messages. This means that we retry more often than necessary and intended by design, but this does no harm and is unobtrusive. When we have finished, the main loop will pick up all the messages and ctdb packets. The only extra twist is that we cannot use timed events here but have to handcode a timeout for select.

Michael
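The quick fix described above boils down to waiting on a single fd with a handcoded select() timeout instead of running a nested tevent loop. Below is a minimal sketch of that idea; it is not the code from this commit, and the helper name wait_for_lock_retry() as well as the caller-side bookkeeping are assumptions.

    /*
     * Hypothetical helper (not the actual Samba code): block on the ctdb
     * connection fd with a handcoded select() timeout and report whether
     * the caller should retry its lock attempt.  No packets are parsed and
     * no messages are dispatched here; the main event loop picks them up
     * later.
     */
    #include <sys/select.h>
    #include <sys/time.h>
    #include <errno.h>
    #include <stdbool.h>

    /*
     * Returns true if the caller should retry the lock attempt, false if
     * the timeout expired or select() failed with a real error.  Note that
     * select() may modify *timeout, so the caller has to track the
     * remaining time itself across retries.
     */
    static bool wait_for_lock_retry(int conn_fd, struct timeval *timeout)
    {
        fd_set rfds;
        int ret;

        FD_ZERO(&rfds);
        FD_SET(conn_fd, &rfds);

        ret = select(conn_fd + 1, &rfds, NULL, NULL, timeout);
        if (ret == -1) {
            /* EINTR counts as "retry now". */
            return (errno == EINTR);
        }
        if (ret == 0) {
            /* Timeout expired: give up on this attempt. */
            return false;
        }
        /* The fd is readable: something happened on the ctdb
         * connection, so retry the lock attempt. */
        return true;
    }

The price of this coarseness, as the commit message notes, is retrying more often than strictly needed, which is harmless because the retry itself is cheap and the real message dispatch still happens in the main loop.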
2010-02-12 | s3:g_lock: remove an unreached code path. | Michael Adam | 1 | -4/+0
Michael
2010-02-12 | s3: Implement global locks in a g_lock tdb | Volker Lendecke | 1 | -0/+594
This is the basis for implementing global locks in ctdb without depending on a shared file system. The initial goal is to make ctdb persistent transactions deterministic without too many timeouts.
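For illustration, a minimal usage sketch of the g_lock API follows. It assumes the interface shape from around this commit (g_lock_ctx_init / g_lock_lock / g_lock_unlock taking a string lock name, a lock type and a timeout); the exact prototypes and the lock name used here are assumptions, not taken from the source.

    /*
     * Minimal usage sketch under assumed g_lock.h prototypes; the lock
     * name "transaction: mydb.tdb" is purely illustrative.
     */
    #include "includes.h"
    #include "g_lock.h"

    static NTSTATUS do_work_under_global_lock(struct messaging_context *msg)
    {
        struct g_lock_ctx *ctx;
        NTSTATUS status;

        ctx = g_lock_ctx_init(talloc_tos(), msg);
        if (ctx == NULL) {
            return NT_STATUS_NO_MEMORY;
        }

        /* Take a cluster-wide exclusive lock, waiting up to 60 seconds.
         * The lock state lives in a g_lock tdb record, so no shared
         * file system is needed. */
        status = g_lock_lock(ctx, "transaction: mydb.tdb", G_LOCK_WRITE,
                             timeval_set(60, 0));
        if (!NT_STATUS_IS_OK(status)) {
            TALLOC_FREE(ctx);
            return status;
        }

        /* ... critical section: at most one process cluster-wide ... */

        g_lock_unlock(ctx, "transaction: mydb.tdb");
        TALLOC_FREE(ctx);
        return NT_STATUS_OK;
    }

Keeping the lock state in a tdb record rather than in a lock file on disk is what removes the shared file system dependency mentioned above.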