Age | Commit message | Author | Files | Lines |
|
Much as I dislike macros, this one is there. So why not use it...
|
|
Guenther
|
|
This avoids the thundering herd problem when 5000 smbds exit simultaneously
because the network went down.
|
|
This reverts commit 5ca63676dc59e83ffd9560fdcfa26063f267f283.
That commit did not fully fix the problem; tdb_transaction_start_nonblock is
added instead to fix it.
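A minimal sketch of how the non-blocking variant avoids the pile-up, using the
public tdb API (the surrounding function is illustrative, not the actual patch):

    #include <tdb.h>

    /* Try to start a transaction without blocking; if another process
     * already holds the transaction lock, skip this round instead of
     * queueing up behind it. */
    static int try_stabilize(struct tdb_context *tdb)
    {
        if (tdb_transaction_start_nonblock(tdb) != 0) {
            return 0;   /* somebody else is stabilizing right now */
        }
        /* ... copy records into the stable tdb here ... */
        return tdb_transaction_commit(tdb);
    }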
|
|
Don't show all "getpeername failed" messages at debug levels 0 and 1.
Karolin
Signed-off-by: Volker Lendecke <vl@samba.org>
|
|
This correctly initialises the event backend, and checks for errors
(thanks to Metze for suggesting this)
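A hedged sketch of what "initialise the backend and check for errors" can look
like with the tevent API (the wrapper itself is illustrative):

    #include <stdio.h>
    #include <talloc.h>
    #include <tevent.h>

    /* Initialise a tevent context and fail loudly instead of carrying
     * on with a NULL backend. */
    static struct tevent_context *init_event_backend(TALLOC_CTX *mem_ctx)
    {
        struct tevent_context *ev = tevent_context_init(mem_ctx);
        if (ev == NULL) {
            fprintf(stderr, "tevent_context_init failed\n");
        }
        return ev;
    }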
|
|
In the child, we fully re-open serverid.tdb, which leads to one fcntl lock for
CLEAR_IF_FIRST detection per smbd. This opens the tdb in the parent and holds
it, so that tdb_reopen_all correctly catches the CLEAR_IF_FIRST bit.
|
|
In the child, we fully re-open messaging.tdb, which leads to one fcntl lock for
CLEAR_IF_FIRST detection per smbd. This opens the tdb in the parent and holds
it, so that tdb_reopen_all correctly catches the CLEAR_IF_FIRST bit.
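Both messages describe the same pattern. A sketch with the public tdb API
(flags and error handling are illustrative):

    #include <fcntl.h>
    #include <tdb.h>

    /* Parent: open the tdb once with CLEAR_IF_FIRST and keep the handle
     * (and thus the fcntl lock) for the lifetime of the daemon. */
    static struct tdb_context *open_in_parent(const char *path)
    {
        return tdb_open(path, 0, TDB_CLEAR_IF_FIRST,
                        O_RDWR | O_CREAT, 0644);
    }

    /* Child, after fork(): re-establish per-process locks on all open
     * tdbs instead of re-opening them from scratch.  Passing 1 means
     * the parent is long-lived, so the CLEAR_IF_FIRST lock stays put. */
    static int child_after_fork(void)
    {
        return tdb_reopen_all(1);
    }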
|
|
If thousands of smbds try to gencache_stabilize at the same time because the
network died, all of them might be sitting in transaction_start. Don't do the
stabilize transaction if nothing has changed in gencache_notrans.tdb.
Volker
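One way to express "only if something changed" with the public tdb API is a
sequence-number check; this sketch assumes that mechanism (the actual commit
may track changes differently) and requires the tdb to be opened with
TDB_SEQNUM:

    #include <tdb.h>

    static int last_seqnum = -1;

    static void maybe_stabilize(struct tdb_context *notrans_tdb)
    {
        int seq = tdb_get_seqnum(notrans_tdb);
        if (seq == last_seqnum) {
            return;    /* nothing changed, skip the transaction */
        }
        /* ... run the stabilize transaction ... */
        last_seqnum = seq;
    }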
|
|
Fix this by moving canonicalization into lib/sharesec.c and updating the db
version to 3. This ensures we always find share names with security
descriptors attached.
Jeremy.
|
|
This is mainly a debugging aid for post-mortem analysis in case a cluster file
system is slow.
|
|
Open winbindd_cache.tdb with read/write access when validating the cache;
otherwise validation fails to get the lock in tdb_check, which results in a
validation failure even when the cache is good.
Signed-off-by: Bo Yang <boyang@samba.org>
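The underlying constraint, sketched with the public tdb API (the surrounding
code is illustrative): tdb_check takes locks, so the handle must be able to
take them.

    #include <fcntl.h>
    #include <stdbool.h>
    #include <tdb.h>

    static bool validate_cache(const char *path)
    {
        /* Open read/write: tdb_check needs locks, which can fail on a
         * handle opened O_RDONLY even when the database is fine. */
        struct tdb_context *tdb = tdb_open(path, 0, 0, O_RDWR, 0600);
        bool ok;

        if (tdb == NULL) {
            return false;
        }
        ok = (tdb_check(tdb, NULL, NULL) == 0);
        tdb_close(tdb);
        return ok;
    }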
|
|
path...
Found by RPC-EVENTLOG torture test.
Guenther
|
|
First, this immediately gave me the warning that TDB_ERR_NESTING was not
covered, and second, it saved 48 bytes in the .o :-)
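The warning comes from switching over the error enum instead of chaining
if/else: without a default case, -Wswitch flags any unhandled value such as
TDB_ERR_NESTING. A sketch (the string mapping is illustrative):

    #include <tdb.h>

    static const char *map_tdb_error(enum TDB_ERROR err)
    {
        /* No default case: the compiler warns if a value such as
         * TDB_ERR_NESTING is not covered. */
        switch (err) {
        case TDB_SUCCESS:
            return "success";
        case TDB_ERR_LOCK:
            return "locking error";
        case TDB_ERR_NESTING:
            return "transaction nesting not allowed";
        /* ... the remaining enum TDB_ERROR values ... */
        }
        return "unknown error";
    }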
|
|
This hides the use of talloc_reference from the caller, making it impossible to
wrongly call talloc_free() on the result.
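The general shape of such a wrapper around the talloc API (the names are
hypothetical, not the function the commit adds):

    #include <talloc.h>

    struct cache_entry {
        int value;   /* stand-in payload */
    };

    /* Hypothetical accessor: hand out a reference hanging off the
     * caller's context.  The caller frees mem_ctx as usual and never
     * gets to (wrongly) talloc_free() the shared object itself. */
    static struct cache_entry *cache_entry_get(TALLOC_CTX *mem_ctx,
                                               struct cache_entry *shared)
    {
        return talloc_reference(mem_ctx, shared);
    }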
|
|
When a samba server process dies hard, it has no chance to clean up its entries
in locking.tdb, brlock.tdb, connections.tdb and sessionid.tdb.
For locking.tdb and brlock.tdb Samba is robust by checking every time we read
an entry from the database if the corresponding process still exists. If it
does not exist anymore, the entry is deleted. This is not 100% failsafe though:
On systems with a limited PID space there is a non-zero chance that between the
smbd's death and the fresh access, the PID is recycled by another long-running
process. This renders all files that had been locked by the killed smbd
potentially unusable until the new process also dies.
This patch is supposed to fix the problem the following way: Every process ID
in every database is augmented by a random 64-bit number that is stored in
serverid.tdb. Whenever we need to check if a process still exists we know its
PID and the 64-bit number. We look up the PID in serverid.tdb and compare the
64-bit number. If it's the same, the process still is a valid smbd holding the
lock. If it is different, a new smbd has taken over.
I believe this is safe against an smbd that has died hard and the PID has been
taken over by a non-samba process. This process would not have registered
itself with a fresh 64-bit number in serverid.tdb, so the old one still exists
in serverid.tdb. We protect against this case by the parent smbd taking care of
deregistering PIDs from serverid.tdb and the fact that serverid.tdb is
CLEAR_IF_FIRST.
CLEAR_IF_FIRST does not work in a cluster, so the automatic cleanup does not
work when all smbds are restarted. For this, "net serverid wipe" has to be run
before smbd starts up. As a convenience, "net serverid wipedbs" also cleans up
sessionid.tdb and connections.tdb.
While there, this also cleans up overloading connections.tdb with all the
process entries just for messaging_send_all().
Volker
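A sketch of the existence check described above (struct and helper names are
illustrative; the real code is Samba's serverid module):

    #include <stdbool.h>
    #include <stdint.h>
    #include <sys/types.h>

    /* What serverid.tdb conceptually stores per process. */
    struct server_id_rec {
        pid_t pid;
        uint64_t unique_id;   /* random 64-bit number from startup */
    };

    /* Hypothetical lookup into serverid.tdb. */
    extern bool serverid_fetch(pid_t pid, struct server_id_rec *rec);

    /* A PID alone can be recycled; PID plus unique_id identifies one
     * particular smbd incarnation. */
    static bool smbd_exists(pid_t pid, uint64_t unique_id)
    {
        struct server_id_rec rec;

        if (!serverid_fetch(pid, &rec)) {
            return false;   /* never registered or already cleaned up */
        }
        return rec.unique_id == unique_id;
    }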
|
|
Also add torture test to check filter parsing.
|
|
to respond to a read or write."
This reverts commit a6ae7a552f851a399991262377cc0e062e40ac20.
This fixes bug #7222 (All users have full rights on all shares) (CVE-2010-0728).
(cherry picked from commit 1c9494c76cc9686c61e0966f38528d3318f3176f)
|
|
In a cluster, this makes a large difference: For r/w traverse, we have to do a
fetch_locked on every record which for most users of connections_forall is just
overkill.
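The read-only variant in the public tdb API; the counting callback is an
illustrative stand-in for what connections_forall does per record:

    #include <tdb.h>

    static int count_fn(struct tdb_context *tdb, TDB_DATA key,
                        TDB_DATA data, void *private_data)
    {
        int *count = (int *)private_data;
        (*count)++;
        return 0;   /* keep traversing */
    }

    /* tdb_traverse_read takes only read locks -- no fetch_locked per
     * record, which is what makes the difference in a cluster. */
    static int count_records(struct tdb_context *tdb)
    {
        int count = 0;
        tdb_traverse_read(tdb, count_fn, &count);
        return count;
    }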
|
|
connections_forall is called from count_current_connections() which
potentially deletes dead records. This needs r/w access to connections.tdb,
which connections_traverse says it does not provide. That does not really
matter in the smbd case, because we have already opened it r/w, so this is
"just" cleanup.
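Deleting dead records is what forces the writing traverse; a sketch of that
side (the liveness check is a hypothetical placeholder):

    #include <stdbool.h>
    #include <tdb.h>

    extern bool record_is_dead(TDB_DATA key, TDB_DATA data); /* hypothetical */

    static int cleanup_fn(struct tdb_context *tdb, TDB_DATA key,
                          TDB_DATA data, void *private_data)
    {
        if (record_is_dead(key, data)) {
            /* Deleting the current record from within the callback is
             * allowed in a writing traverse -- hence the r/w access. */
            tdb_delete(tdb, key);
        }
        return 0;
    }

    /* tdb_traverse, as opposed to tdb_traverse_read, permits this. */
    static void cleanup_connections(struct tdb_context *tdb)
    {
        tdb_traverse(tdb, cleanup_fn, NULL);
    }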
|
|
Guenther
|
|
Detected while showing this code to obnox :-)
|
|
There's no need to still hold the g_lock tdb-level lock while telling the
waiters to retry.
|
|
In g_lock_unlock we have a little race between the process_exists and
messaging_send calls: we only send to 5 waiters now, and they all might have
died between us checking their existence and sending the message. This change
makes g_lock_lock retry at least once every minute.
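The once-a-minute retry amounts to capping the wait for the wakeup message; a
self-contained sketch of that clamp (the surrounding wait loop is illustrative):

    #include <sys/time.h>

    /* Never sleep longer than 60s waiting for a retry message: if the
     * wakeup was lost because a waiter died at the wrong moment, we
     * retry on our own at least once every minute. */
    static struct timeval clamp_wait(struct timeval remaining)
    {
        const struct timeval max_wait = { 60, 0 };

        if (remaining.tv_sec > max_wait.tv_sec ||
            (remaining.tv_sec == max_wait.tv_sec &&
             remaining.tv_usec > max_wait.tv_usec)) {
            return max_wait;
        }
        return remaining;
    }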
|
|
Only notify the first 5 pending lock waiters. This avoids a thundering herd
problem that is really nasty in a cluster. It also makes acquiring a lock a
bit more FIFO: lock waiters are added to the end of the array.
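A sketch of both points, with an illustrative waiter array and a hypothetical
wakeup helper:

    #include <stddef.h>
    #include <sys/types.h>

    struct lock_rec {
        pid_t pid;
        /* ... */
    };

    extern void send_retry(pid_t pid);   /* hypothetical wakeup message */

    /* New waiters are appended to the END of the array, so wakeups are
     * roughly FIFO.  On unlock, only poke the first few waiters instead
     * of all of them -- otherwise thousands stampede the record at once. */
    static void notify_waiters(const struct lock_rec *locks, size_t num)
    {
        size_t i, n = (num < 5) ? num : 5;

        for (i = 0; i < n; i++) {
            send_retry(locks[i].pid);
        }
    }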
|
|
Only check the existence of the lock owner in g_lock_parse; check the rest of
the records only once we have successfully acquired the lock. This reduces the
load on process_exists, which can involve a network roundtrip in the clustered
case.
|
|
g_lock_parse might have thrown away entries from the locks array because the
processes were not around anymore. Don't store the orphaned entries.
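A sketch of dropping the orphaned entries before the write-back (the liveness
check stands in for the process_exists call mentioned above):

    #include <stdbool.h>
    #include <stddef.h>
    #include <sys/types.h>

    struct lock_entry {
        pid_t pid;
        /* ... */
    };

    extern bool process_exists(pid_t pid);   /* simplified signature */

    /* Compact the parsed array in place: entries whose process is gone
     * are dropped and therefore never written back to the record. */
    static size_t prune_dead(struct lock_entry *locks, size_t num)
    {
        size_t i, kept = 0;

        for (i = 0; i < num; i++) {
            if (process_exists(locks[i].pid)) {
                locks[kept++] = locks[i];
            }
        }
        return kept;   /* store only the first 'kept' entries */
    }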
|