path: root/source3/lib
Age | Commit message | Author | Files | Lines
2010-03-26 | s3-util_sock: Raise debug level for "getpeername failed" messages. | Karolin Seeger | 1 | -4/+6
Don't show all "getpeername failed" messages at debug levels 0 and 1. Karolin. Signed-off-by: Volker Lendecke <vl@samba.org>
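For context, raising the level of such a log line just means bumping the first argument of Samba's DEBUG() macro. A minimal sketch of the kind of change this describes; the exact message text and the new level chosen here are assumptions, not copied from the patch:

    /* Fragment from util_sock.c-style code; assumes Samba's DEBUG()
     * macro and errno/strerror are available in the including file. */

    /* Before: the failure shows up even at the default log level 0. */
    DEBUG(0, ("getpeername failed. Error was %s\n", strerror(errno)));

    /* After: demoted so it is only printed when more verbose
     * logging is requested. */
    DEBUG(2, ("getpeername failed. Error was %s\n", strerror(errno)));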
2010-03-26 | s3-event: switch s3 to using tevent_re_initialise() | Andrew Tridgell | 2 | -8/+2
This correctly initialises the event backend, and checks for errors (thanks to Metze for suggesting this)
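A rough sketch of the pattern this refers to: reinitialising a tevent context in a forked child and checking the result. The surrounding function and its error handling are illustrative, not the actual smbd code:

    #include <stdbool.h>
    #include <tevent.h>

    /* After fork(), the child must not keep using the parent's
     * event backend state; re-initialise it and check for errors. */
    static bool reinit_events_after_fork(struct tevent_context *ev)
    {
        if (tevent_re_initialise(ev) != 0) {
            /* Backend re-initialisation failed; the child cannot
             * safely continue with this context. */
            return false;
        }
        return true;
    }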
2010-03-25 | s3: Add a comment to serverid_parent_init, this is pretty confusing | Volker Lendecke | 1 | -0/+6
2010-03-25 | s3: Add a comment to messaging_tdb_parent_init, this is pretty confusing | Volker Lendecke | 1 | -0/+6
2010-03-25 | s3: Make sure our CLEAR_IF_FIRST optimization works for serverid.tdb | Volker Lendecke | 1 | -0/+16
In the child, we fully re-open serverid.tdb, which leads to one fcntl lock for CLEAR_IF_FIRST detection per smbd. This opens the tdb in the parent and holds it, so that tdb_reopen_all correctly catches the CLEAR_IF_FIRST bit.
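The idea, roughly: the parent keeps its own long-lived handle opened with TDB_CLEAR_IF_FIRST, and forked children re-establish their handles with tdb_reopen_all(). A minimal sketch using the plain tdb API; the path and error handling are illustrative:

    #include <fcntl.h>
    #include <tdb.h>

    /* Parent: open and hold serverid.tdb so the CLEAR_IF_FIRST state
     * is anchored by a long-lived process. */
    static struct tdb_context *hold_serverid_tdb(void)
    {
        return tdb_open("serverid.tdb", 0, TDB_CLEAR_IF_FIRST,
                        O_RDWR | O_CREAT, 0644);
    }

    /* Child, right after fork(): re-open all tdb handles. Passing 1
     * means "the parent is long-lived", so the expensive per-child
     * CLEAR_IF_FIRST fcntl lock can be avoided. */
    static int child_reopen_tdbs(void)
    {
        return tdb_reopen_all(1);
    }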
2010-03-25 | s3: Make sure our CLEAR_IF_FIRST optimization works for messaging.tdb | Volker Lendecke | 1 | -0/+16
In the child, we fully re-open messaging.tdb, which leads to one fcntl lock for CLEAR_IF_FIRST detection per smbd. This opens the tdb in the parent and holds it, so that tdb_reopen_all correctly catches the CLEAR_IF_FIRST bit.
2010-03-25 | s3: Fix some nonempty blank lines | Volker Lendecke | 1 | -6/+6
2010-03-24 | s3: Optimize gencache for smbd exit | Volker Lendecke | 1 | -14/+75
If thousands of smbds try to gencache_stabilize at the same time because the network died, all of them might be sitting in transaction_start. Don't do the stabilize transaction if nothing has changed in gencache_notrans.tdb. Volker
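The optimization amounts to an early-out before transaction_start when the non-transactional cache has nothing to flush. A simplified sketch at the plain tdb level; the real gencache code goes through Samba's dbwrap layer, and the function names here are illustrative:

    #include <stdbool.h>
    #include <tdb.h>

    /* True if gencache_notrans.tdb has anything worth flushing.
     * A NULL traverse function makes tdb_traverse just count records. */
    static bool notrans_cache_is_dirty(struct tdb_context *notrans_tdb)
    {
        return (tdb_traverse(notrans_tdb, NULL, NULL) > 0);
    }

    /* Sketch of the early-out: thousands of exiting smbds can skip the
     * transaction (and the contention in transaction_start) entirely
     * when nothing was written since the last stabilize. */
    static bool gencache_stabilize_sketch(struct tdb_context *notrans_tdb)
    {
        if (!notrans_cache_is_dirty(notrans_tdb)) {
            return true;
        }
        /* ... transaction_start(), move records, commit ... */
        return true;
    }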
2010-03-22 | share_info.tdb could use non-canonicalized sharenames. | Jeremy Allison | 1 | -14/+143
Fix this by moving canonicalization into lib/sharesec.c and updating the db version to 3. This ensures we always find share names with security descriptors attached. Jeremy.
2010-03-22 | s3: Add the "ctdb locktime warn threshold" parameter | Volker Lendecke | 1 | -0/+12
This is mainly a debugging aid for post-mortem analysis in case a cluster file system is slow.
2010-03-21 | s3: Open winbindd_cache.tdb with read/write access. | Bo Yang | 1 | -1/+1
Open winbindd_cache.tdb with read/write access when validating the cache; otherwise validation fails to get a lock in tdb_check, which results in validation failure even when the cache is good. Signed-off-by: Bo Yang <boyang@samba.org>
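The underlying issue: tdb_check() takes locks, and taking locks fails on a handle opened read-only, so the cache must be opened read/write before validation. A hedged sketch with the plain tdb API; apart from the file involved, the details are illustrative:

    #include <fcntl.h>
    #include <stdbool.h>
    #include <tdb.h>

    static bool validate_cache_sketch(const char *path)
    {
        /* Open read/write: tdb_check() needs to take locks, which
         * fails on a handle opened with O_RDONLY. */
        struct tdb_context *tdb =
            tdb_open(path, 0, TDB_DEFAULT, O_RDWR, 0600);
        if (tdb == NULL) {
            return false;
        }

        /* NULL check callback: only verify the tdb's integrity. */
        bool ok = (tdb_check(tdb, NULL, NULL) == 0);
        tdb_close(tdb);
        return ok;
    }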
2010-03-17 | s3-eventlog: fix elog_tdbname(), we were always lower-casing entire lockdir path... | Günther Deschner | 1 | -5/+19
Found by RPC-EVENTLOG torture test. Guenther
2010-03-14 | s3: Use a switch to implement map_nt_error_from_tdb | Volker Lendecke | 1 | -30/+47
First, this immediately gave me the warning that TDB_ERR_NESTING was not covered and second, this saved 48 bytes in the .o :-)
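For illustration, the general shape of such a mapping function. The cases and status codes below are a plausible sketch, not the exact table from this commit:

    /* Sketch only: assumes Samba's internal headers for NTSTATUS
     * and tdb.h for enum TDB_ERROR. */
    static NTSTATUS map_nt_error_from_tdb_sketch(enum TDB_ERROR err)
    {
        switch (err) {
        case TDB_SUCCESS:
            return NT_STATUS_OK;
        case TDB_ERR_OOM:
            return NT_STATUS_NO_MEMORY;
        case TDB_ERR_CORRUPT:
            return NT_STATUS_INTERNAL_DB_CORRUPTION;
        case TDB_ERR_LOCK:
        case TDB_ERR_NOLOCK:
        case TDB_ERR_LOCK_TIMEOUT:
            return NT_STATUS_FILE_LOCK_CONFLICT;
        case TDB_ERR_NOEXIST:
            return NT_STATUS_NOT_FOUND;
        case TDB_ERR_EXISTS:
            return NT_STATUS_OBJECT_NAME_COLLISION;
        case TDB_ERR_RDONLY:
            return NT_STATUS_ACCESS_DENIED;
        case TDB_ERR_NESTING:
            /* The case the compiler flagged as uncovered in the
             * original change. */
            return NT_STATUS_INTERNAL_ERROR;
        default:
            return NT_STATUS_INTERNAL_ERROR;
        }
    }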
2010-03-14 | s3: Remove some unused code | Volker Lendecke | 1 | -20/+0
2010-03-13 | s3: Make tdb_wrap_open more robust | Volker Lendecke | 1 | -41/+87
This hides the use of talloc_reference from the caller, making it impossible to wrongly call talloc_free() on the result.
2010-03-12 | s3: Add "g_lock_do" as a convenience wrapper function | Volker Lendecke | 1 | -0/+64
2010-03-10 | s3: Fix a long-standing problem with recycled PIDs | Volker Lendecke | 3 | -14/+309
When a samba server process dies hard, it has no chance to clean up its entries in locking.tdb, brlock.tdb, connections.tdb and sessionid.tdb.

For locking.tdb and brlock.tdb Samba is robust by checking, every time we read an entry from the database, whether the corresponding process still exists. If it does not exist anymore, the entry is deleted. This is not 100% failsafe though: on systems with a limited PID space there is a non-zero chance that between the smbd's death and the fresh access, the PID is recycled by another long-running process. This renders all files that had been locked by the killed smbd potentially unusable until the new process also dies.

This patch is supposed to fix the problem the following way: every process ID in every database is augmented by a random 64-bit number that is stored in serverid.tdb. Whenever we need to check if a process still exists we know its PID and the 64-bit number. We look up the PID in serverid.tdb and compare the 64-bit number. If it is the same, the process is still a valid smbd holding the lock. If it is different, a new smbd has taken over.

I believe this is safe against an smbd that has died hard and whose PID has been taken over by a non-samba process. This process would not have registered itself with a fresh 64-bit number in serverid.tdb, so the old one still exists in serverid.tdb. We protect against this case by the parent smbd taking care of deregistering PIDs from serverid.tdb and by the fact that serverid.tdb is CLEAR_IF_FIRST.

CLEAR_IF_FIRST does not work in a cluster, so the automatic cleanup does not work when all smbds are restarted. For this, "net serverid wipe" has to be run before smbd starts up. As a convenience, "net serverid wipedbs" also cleans up sessionid.tdb and connections.tdb.

While there, this also cleans up overloading connections.tdb with all the process entries just for messaging_send_all(). Volker
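In outline, the scheme pairs each PID with a random 64-bit token and treats a server as alive only if both match what serverid.tdb has on record. A simplified, hypothetical sketch of that check; the struct and the lookup function are made up for illustration, while the real code lives in serverid.c and uses dbwrap:

    #include <stdbool.h>
    #include <stdint.h>
    #include <sys/types.h>

    /* What every database entry now records about its owner. */
    struct server_identity {
        pid_t    pid;
        uint64_t unique_id;     /* random token chosen at startup */
    };

    /* Hypothetical lookup into serverid.tdb. */
    extern bool serverid_lookup(pid_t pid, uint64_t *unique_id_out);

    /* A lock entry is only valid if the PID is still registered *and*
     * the token matches; a recycled PID carries a different token. */
    static bool server_still_exists(const struct server_identity *id)
    {
        uint64_t current;

        if (!serverid_lookup(id->pid, &current)) {
            return false;       /* not registered: owner is gone */
        }
        return (current == id->unique_id);
    }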
2010-03-10 | s3: Make TLDAP_IS_ALPHA and TLDAP_IS_ADH static functions | Volker Lendecke | 1 | -5/+12
2010-03-09 | Fix typo | Simo Sorce | 1 | -2/+2
2010-03-09 | s3:tldap add own filter parsing | Simo Sorce | 1 | -114/+600
Also add a torture test to check filter parsing.
2010-03-08 | Revert "Fix bug #7067 - Linux asynchronous IO (aio) can cause smbd to fail to respond to a read or write." | Karolin Seeger | 1 | -61/+4
This reverts commit a6ae7a552f851a399991262377cc0e062e40ac20. This fixes bug #7222 (All users have full rights on all shares) (CVE-2010-0728). (cherry picked from commit 1c9494c76cc9686c61e0966f38528d3318f3176f)
2010-03-05 | s3: Remove the unused parameter "persistent" from fetch_locked_internal | Volker Lendecke | 1 | -8/+2
2010-03-05 | s3: db->persistent==true was handled earlier, make this more obvious | Volker Lendecke | 1 | -1/+1
2010-03-01 | s3: Abstract access to sessionid.tdb, similar to conn_tdb.c | Volker Lendecke | 1 | -0/+138
2010-03-01 | s3: Add connections_forall_read() | Volker Lendecke | 1 | -0/+42
In a cluster, this makes a large difference: for a r/w traverse, we have to do a fetch_locked on every record, which for most users of connections_forall is just overkill.
2010-03-01 | s3: Make the difference between r/o and r/w in connections_db_ctx more obvious | Volker Lendecke | 1 | -9/+4
2010-03-01 | s3: Make connections_forall open connections.tdb r/w | Volker Lendecke | 1 | -1/+7
connections_forall is called from count_current_connections(), which potentially deletes dead records. This needs r/w access to connections.tdb, which connections_traverse says it does not provide. This does not really matter in the smbd case, because we have already opened connections.tdb r/w before, so this is "just" cleanup.
2010-02-25 | s3-nltest: fix uninitialized query level. | Günther Deschner | 1 | -1/+1
Guenther
2010-02-24 | s3: Make connections_fetch_record() static | Volker Lendecke | 1 | -2/+2
2010-02-23 | s3: Consolidate server_id_self into the equivalent procid_self() | Volker Lendecke | 1 | -5/+0
2010-02-23 | s3-lib: Remove obsolete signal type cast. | Andreas Schneider | 3 | -12/+12
2010-02-20 | s3: Make string_to_sid survive the LOCAL-string_to_sid test | Volker Lendecke | 1 | -13/+40
2010-02-18 | s3: optimize strict allocate for XFS on IRIX | Björn Jacke | 1 | -0/+25
2010-02-16 | s3: Fix timeout calculation if g_lock_lock is given a timeout < 60s | Volker Lendecke | 1 | -1/+6
Detected while showing this code to obnox :-)
2010-02-16 | s3: Slightly increase parallelism in g_lock | Volker Lendecke | 1 | -1/+7
There's no need to still hold the g_lock tdb-level lock while telling the waiters to retry.
2010-02-16 | s3: Avoid starving locks when many processes die at the same time | Volker Lendecke | 1 | -6/+4
In g_lock_unlock we have a little race between the process_exists and messaging_send calls: we only send to 5 waiters now, and they all might have died between us checking their existence and sending the message. This change makes g_lock_lock retry at least once every minute.
2010-02-16 | s3: Avoid a thundering herd in g_lock_unlock | Volker Lendecke | 1 | -1/+16
Only notify the first 5 pending lock waiters. This avoids a thundering herd problem that is really nasty in a cluster. It also makes acquiring a lock a bit more FIFO, since lock waiters are added to the end of the array.
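Schematically, the unlock path now wakes only the head of the queue instead of broadcasting. Something like the following, using Samba's messaging_send(); the surrounding types are Samba internals and the helper itself is a simplified sketch, not the committed code:

    /* Sketch only: assumes Samba's internal messaging headers
     * (struct messaging_context, struct server_id, data_blob_null,
     * MSG_DBWRAP_G_LOCK_RETRY). */

    #define G_LOCK_MAX_WAKE 5   /* wake at most this many waiters */

    static void wake_some_waiters_sketch(struct messaging_context *msg,
                                         const struct server_id *waiters,
                                         unsigned int num_waiters)
    {
        unsigned int i, n = num_waiters;

        if (n > G_LOCK_MAX_WAKE) {
            n = G_LOCK_MAX_WAKE;
        }
        /* Waiters were appended to the end of the array, so the first
         * entries are the oldest: roughly FIFO wake-up order. */
        for (i = 0; i < n; i++) {
            messaging_send(msg, waiters[i],
                           MSG_DBWRAP_G_LOCK_RETRY, &data_blob_null);
        }
    }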
2010-02-16 | s3: Optimize g_lock_lock for a heavily contended case | Volker Lendecke | 1 | -3/+36
In g_lock_parse, only check the existence of the lock owner; check the rest of the records only once we have got the lock successfully. This reduces the load on process_exists, which can involve a network roundtrip in the clustered case.
2010-02-16 | s3: Fix handling of processes that died in g_lock | Volker Lendecke | 1 | -3/+5
g_lock_parse might have thrown away entries from the locks array because the processes were not around anymore. Don't store the orphaned entries.
2010-02-15 | s3: Fix a typo | Volker Lendecke | 1 | -1/+1
2010-02-14 | s3: Fix initgroups return check | Peter Watkins | 1 | -1/+1
A return code of 1 from initgroups() is OK since apparently it means the gid has already been set. The man page doesn't mention this.
2010-02-14 | s3-lib: use TYPESAFE_QSORT() in s3 interfaces code | Andrew Tridgell | 1 | -1/+1
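TYPESAFE_QSORT() is Samba's wrapper around qsort() whose comparator takes typed element pointers rather than void pointers, so mismatched types are caught at compile time. A small usage sketch; the element struct here is made up for illustration, while the real change sorts the interfaces list:

    #include <stddef.h>

    /* Sketch only: assumes Samba's TYPESAFE_QSORT macro from
     * lib/util (tsort.h) is available. */

    /* Hypothetical element type, just to show the calling convention. */
    struct iface_entry {
        int speed;
    };

    /* Comparator takes typed pointers, not const void *. */
    static int compare_iface_entry(struct iface_entry *a,
                                   struct iface_entry *b)
    {
        return a->speed - b->speed;
    }

    static void sort_entries(struct iface_entry *entries, size_t num)
    {
        TYPESAFE_QSORT(entries, num, compare_iface_entry);
    }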
2010-02-13 | s3: Remove unused comparison fn from "struct sorted_tree" | Volker Lendecke | 1 | -8/+2
2010-02-13 | s3: Make adt_tree data definitions private to adt_tree.c | Volker Lendecke | 1 | -0/+14
2010-02-13 | s3: SORTED_TREE -> struct sorted_tree | Volker Lendecke | 1 | -6/+8
2010-02-13 | s3: TREE_NODE -> struct tree_node | Volker Lendecke | 1 | -12/+18
2010-02-13 | s3: Fix some nonempty blank lines | Volker Lendecke | 1 | -69/+68
2010-02-12 | Use sec_initial_uid() in the places where being root doesn't matter, and 0 in the places where it does. | Jeremy Allison | 3 | -3/+3
Jeremy
2010-02-12 | Fix warning messages on compile in g_lock.c. Volker & Michael please check. | Jeremy Allison | 1 | -14/+4
Jeremy.
2010-02-12 | s3:g_lock: remove a nested event loop, replacing the inner loop by select | Michael Adam | 1 | -38/+101
The nested event loop made smbd crash in g_lock_lock() when trying to start a transaction on a db that already had a transaction started, e.g. in a tcon_and_X where share_info.tdb was not yet initialized but was already locked by another process, or on write access to the winreg rpc pipe where the registry tdb was already locked by another process.

What we really _want_ to do here by design is to react to MSG_DBWRAP_G_LOCK_RETRY messages that are either sent by a client doing g_lock_unlock, or by ourselves when we receive a CTDB_SRVID_SAMBA_NOTIFY or CTDB_SRVID_RECONFIGURE message from ctdbd, i.e. when either a client holding a lock or a complete node has died.

Doing this properly involves calling tevent_loop_once(), but doing that here with the main ctdbd messaging context creates a nested event loop when g_lock_lock() is called from the main event loop.

So as a quick fix, we act a little coarsely here: we do a select on the ctdb connection fd, and when it is readable or we get EINTR, we retry without actually parsing any ctdb packets or dispatching messages. This means that we retry more often than necessary and intended by design, but this does no harm and is unobtrusive. When we have finished, the main loop will pick up all the messages and ctdb packets. The only extra twist is that we cannot use timed events here but have to handcode a timeout for select.

Michael
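The "quick fix" described above boils down to a plain select() on the ctdb connection fd with a handcoded timeout, retrying on readability or EINTR without dispatching anything. Roughly as follows; the fd handling and timeout plumbing are illustrative, not the committed code:

    #include <errno.h>
    #include <sys/select.h>

    /* Wait until the ctdb fd looks readable, a signal interrupts us,
     * or the timeout expires; the caller then simply retries the lock.
     * No ctdb packets are parsed and no messages are dispatched here,
     * so no nested event loop is entered. */
    static void wait_for_retry_hint(int ctdb_fd, long timeout_usec)
    {
        fd_set rfds;
        struct timeval tv;

        FD_ZERO(&rfds);
        FD_SET(ctdb_fd, &rfds);

        tv.tv_sec = timeout_usec / 1000000;
        tv.tv_usec = timeout_usec % 1000000;

        if (select(ctdb_fd + 1, &rfds, NULL, NULL, &tv) == -1 &&
            errno != EINTR) {
            /* A real error: leave it to the caller to decide. */
            return;
        }
        /* Readable, EINTR, or timeout: in all cases, just retry. */
    }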