|
|
|
Guenther
Autobuild-User: Günther Deschner <gd@samba.org>
Autobuild-Date: Wed Feb 2 15:44:21 CET 2011 on sn-devel-104
|
|
main loop"
This reverts commit 455fccf86b6544cd17a2571c63a88f8aebff3f74.
I'll add a more generic fix for this problem.
metze
|
|
nmbd --port didn't work
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Autobuild-User: Stefan Metzmacher <metze@samba.org>
Autobuild-Date: Fri Jan 7 17:44:08 CET 2011 on sn-devel-104
|
|
Guenther
|
|
|
|
DoS protection like the max winbind clients limit. Settable via the
nmbd:unexpected_clients parametric option.
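A minimal sketch of how such a parametric option is typically read in the
source3 code; the default of 200, the global snum of -1 and the counter name
below are assumptions for illustration, not taken from this commit:

    /* Hedged sketch: look up the nmbd:unexpected_clients limit. */
    int max_clients = lp_parm_int(-1, "nmbd", "unexpected_clients", 200);

    if (num_unexpected_clients >= max_clients) {
            /* hypothetical counter: refuse further clients to bound the
             * resources a flood of requests can consume */
            return;
    }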
|
|
|
|
Autobuild-User: Volker Lendecke <vlendec@samba.org>
Autobuild-Date: Wed Jan 5 16:03:24 CET 2011 on sn-devel-104
|
|
pass this in as the &now parameter. Push this call inside of
event_add_to_select_args() to the correct point so it doesn't
get called unless needed.
Jeremy.
Autobuild-User: Jeremy Allison <jra@samba.org>
Autobuild-Date: Thu Dec 23 01:08:11 CET 2010 on sn-devel-104
|
|
transaction id of packets it was requested to send via a client, and
only store replies that match these ids. On the client side, change
clients to always ask nmbd first for name_query and node_status
calls, and then fall back to doing socket calls if we can't talk to
nmbd (either nmbd is not running, or we're not root and cannot open
the messaging tdbs). Fix readers of unexpected.tdb to delete packets
they've successfully read.
This should fix a long-standing problem of unexpected.tdb growing out
of control in noisy NetBIOS environments with lots of broadcasts, yet
still allow unprivileged client apps (nmblookup, for example) to work
mostly as well as they already did in an environment where nmbd isn't
running.
Jeremy.
Autobuild-User: Jeremy Allison <jra@samba.org>
Autobuild-Date: Sun Nov 14 05:22:45 UTC 2010 on sn-devel-104
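A hedged sketch of the client-side fallback described above; the helper
names are hypothetical, not the real libsmb functions:

    /* Ask nmbd to do the lookup via the messaging tdbs first. */
    status = ask_nmbd_name_query(msg_ctx, name, &addrs);
    if (!NT_STATUS_IS_OK(status)) {
            /* nmbd isn't running, or we're not root and can't open the
             * messaging tdbs: do the NetBIOS query over a socket ourselves. */
            status = socket_name_query(name, &addrs);
    }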
|
|
the daemons themselves. Allows client utilities to silently
fail to create a messaging context due to access denied on the
messaging tdb (which I need for the following patch).
Jeremy.
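A minimal sketch of what this allows; the messaging_init() arguments here
are simplified assumptions, not taken from this commit:

    /* Hedged sketch: a client utility treats a NULL messaging context as
     * non-fatal instead of aborting. */
    msg_ctx = messaging_init(talloc_tos(), procid_self(), ev_ctx);
    if (msg_ctx == NULL) {
            /* access denied on the messaging tdb: carry on without it */
    }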
|
|
X_FILE does not gain us anything in this use case: we want our log
messages on disk, not in a buffer, and we don't gain anything from the
X_FILE API. I discussed the matter with tridge, who feels that using
FILE in the first place was a mistake, and that X_FILE isn't any
better, but was a stop-gap to avoid issues on Solaris.
Andrew Bartlett
|
|
This change improves the setup_logging() API so that callers which
wish to set up logging to stderr can simply ask for it, rather than
directly modifying the dbf global variable.
Andrew Bartlett
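A hedged example of the improved call; the DEBUG_STDERR log-type value is
an assumption for illustration:

    /* Request stderr logging directly instead of poking the dbf global. */
    setup_logging("nmblookup", DEBUG_STDERR);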
|
|
This will reduce the noise from merges of the rest of the
libcli/security code, without changing what code is actually used.
This includes (along with other security headers) dom_sid.h and
security_token.h.
Andrew Bartlett
Autobuild-User: Andrew Bartlett <abartlet@samba.org>
Autobuild-Date: Tue Oct 12 05:54:10 UTC 2010 on sn-devel-104
|
|
|
|
Previously, only one fd handler was being called per main message loop
in all smbd child processes.
In the case where multiple fds are available for reading, the handler
for the fd corresponding to the event closest to the beginning of the
event list would be run. Obviously this is arbitrary and could cause
unfairness.
Usually, the first event fd is the network socket, meaning heavy load
of client requests can starve out other fd events such as oplock
or notify upcalls from the kernel.
In this patch, I have changed the behavior of run_events() to unset
any fd for which it has already called a handler function, as well as
to decrement the count of ready fds that was returned from select().
This allows the caller of run_events() to iterate it, until all
available fds have been handled.
I then changed the main loop in smbd child processes to iterate
run_events(). This way, all available fds are handled on each wake
of select, while still checking for timed or signalled events between
each handler function call. I also added an explicit check for
EINTR from select(), which previously was masked by the fact that
run_events() would handle any signal event before the return code
was checked.
This required a signature change to run_events() but all other callers
should have no change in their behavior. I also fixed a bug in
run_events() where it could be called with a selrtn value of -1,
doing unnecessary looping through the fd_event list when no fds were
available.
Also, remove the temporary echo handler hack, as all fds should be
treated fairly now.
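A generic sketch of the resulting main-loop pattern; this is not the actual
run_events() signature, and the helper names below are hypothetical:

    int ready = select(maxfd + 1, &r_fds, &w_fds, NULL, &to);
    if (ready == -1 && errno == EINTR) {
            return;         /* interrupted by a signal; retry on the next wake */
    }
    /* Handle every fd reported ready by this select() call, checking for
     * timed and signal events between each handler invocation. */
    while (ready > 0 && dispatch_one_ready_fd(&r_fds, &w_fds, &ready)) {
            check_timed_events();
    }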
|
|
Guenther
|
|
TDB_CLEAR_IF_FIRST tdbs. For tdbs like gencache, where we open
without CLEAR_IF_FIRST and then with CLEAR_IF_FIRST if corrupt, this
is still safe to use: when opening an existing tdb the new hash will
be ignored - it's only used when creating a new tdb, not when opening
an old one.
Jeremy.
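A hedged sketch of passing a hash function at open time via the
tdb_open_ex() API; my_hash_fn is a placeholder, not the hash this commit
introduces:

    /* The hash argument only takes effect when the tdb is first created;
     * opening an existing tdb ignores it. */
    struct tdb_context *tdb = tdb_open_ex("gencache.tdb", 0, TDB_DEFAULT,
                                          O_RDWR | O_CREAT, 0600,
                                          NULL /* log ctx */, my_hash_fn);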
|
|
Guenther
|
|
Guenther
|
|
Guenther
|
|
Couldn't find any reproducer for a short request, so removing it for now.
Guenther
|
|
Guenther
|
|
Guenther
|
|
Guenther
|
|
purely cosmetic, no code change.
Guenther
|
|
Guenther
|
|
|
|
Guenther
|
|
Guenther
|
|
|
|
Guenther
|
|
|
|
|
|
This removes some deep references to procid_self()
|
|
Found by clang-analyzer.
|
|
Guenther
|
|
Guenther
|
|
|
|
This shrinks include/includes.h.gch by 7 MB and reduces build time
as follows:
ccache build w/o patch:  real 4m21.529s
ccache build with patch: real 3m6.402s
pch build w/o patch:     real 4m26.318s
pch build with patch:    real 3m6.932s
Guenther
|
|
|
|
|
|
When a Samba server process dies hard, it has no chance to clean up its
entries in locking.tdb, brlock.tdb, connections.tdb and sessionid.tdb.
For locking.tdb and brlock.tdb Samba is robust: every time we read an entry
from the database we check whether the corresponding process still exists,
and if it does not, the entry is deleted. This is not 100% failsafe though:
on systems with a limited PID space there is a non-zero chance that, between
the smbd's death and the fresh access, the PID is recycled by another
long-running process. This renders all files that had been locked by the
killed smbd potentially unusable until the new process also dies.
This patch is supposed to fix the problem in the following way: every
process ID in every database is augmented by a random 64-bit number that is
stored in serverid.tdb. Whenever we need to check if a process still exists
we know its PID and the 64-bit number. We look up the PID in serverid.tdb
and compare the 64-bit number. If it is the same, the process is still a
valid smbd holding the lock. If it is different, a new smbd has taken over.
I believe this is safe against an smbd that has died hard and whose PID has
been taken over by a non-Samba process. That process would not have
registered itself with a fresh 64-bit number in serverid.tdb, so the old one
still exists there. We protect against this case by having the parent smbd
take care of deregistering PIDs from serverid.tdb, and by the fact that
serverid.tdb is CLEAR_IF_FIRST.
CLEAR_IF_FIRST does not work in a cluster, so the automatic cleanup does not
work when all smbds are restarted. For this, "net serverid wipe" has to be run
before smbd starts up. As a convenience, "net serverid wipedbs" also cleans up
sessionid.tdb and connections.tdb.
While there, this also removes the overloading of connections.tdb with all
the process entries just for messaging_send_all().
Volker
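A conceptual sketch of the check described above; the struct and lookup
helper names are illustrative, not the actual serverid API:

    #include <stdbool.h>
    #include <stdint.h>
    #include <sys/types.h>

    struct server_entry {
            pid_t pid;
            uint64_t unique_id;     /* random cookie registered at startup */
    };

    /* Hypothetical lookup into serverid.tdb, keyed by PID. */
    bool lookup_registered_unique_id(pid_t pid, uint64_t *unique_id);

    static bool server_still_exists(const struct server_entry *e)
    {
            uint64_t registered;

            if (!lookup_registered_unique_id(e->pid, &registered)) {
                    return false;   /* PID not registered: the smbd is gone */
            }
            /* Same cookie: the same smbd still holds the lock.
             * Different cookie: the PID was recycled by a new smbd. */
            return registered == e->unique_id;
    }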
|
|
|
|
|
|
This uses another char* cast hack. Left alone for now.
|
|
Jeremy.
|
|
(cherry picked from commit 4d23d777bc6d4fad20d0f3084fe658635812bee9)
|
|
Add a simple "processed packet queue" cache to stop nmbd responding
twice to packets received on both the broadcast and non-broadcast
sockets (which it has opened when "nmbd bind explicit broadcast = yes"
is set).
This is a very simple packet queue - it only keeps the packets
processed during a single call to listen_for_packets() (i.e. one
select call). This means that if the delivery notification for a
packet received on both broadcast and non-broadcast addresses
is done in two different select calls, the packet will still be
processed twice. This is a very rare occurrence and we can just
live with it when it does happen, as the protocol is stateless. If
this is ever flagged as a repeatable problem then we can add a
longer-lived cache, using timeout processing to clear old entries,
etc. But without storing all packets processed we can never be *sure*
we've eliminated the race condition, so I'm going to go with this
simple solution until someone proves a more complex one is needed :-).
Jeremy.
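An illustrative sketch of such a per-select-call cache; the struct and
function names are hypothetical, not necessarily what this commit adds:

    #include <stdbool.h>
    #include <netinet/in.h>

    struct processed_packet {
            struct processed_packet *next;
            int packet_type;
            struct in_addr ip;
            int packet_id;          /* NetBIOS transaction id */
    };

    /* True if an equivalent packet was already handled in this
     * listen_for_packets() pass; the list is discarded after each pass. */
    static bool already_processed(const struct processed_packet *queue,
                                  int type, struct in_addr ip, int trn_id)
    {
            for (; queue != NULL; queue = queue->next) {
                    if (queue->packet_type == type &&
                        queue->ip.s_addr == ip.s_addr &&
                        queue->packet_id == trn_id) {
                            return true;
                    }
            }
            return false;
    }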
|