Age | Commit message | Author | Files | Lines |
|
Cope with a wider range of auth padding in dcerpc bind_ack and
alter_context packets. We now use a helper function that calculates
the right auth padding.
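For illustration, such padding is usually computed by rounding the offset of the auth trailer up to the next alignment boundary. The helper name and the 16-byte alignment below are assumptions, not the actual Samba code:

    /* Hypothetical sketch, not the actual Samba helper: pad the offset
     * at which the auth trailer starts up to the next alignment
     * boundary.  The 16-byte alignment is an assumption. */
    #include <stddef.h>

    #define AUTH_PAD_ALIGNMENT 16

    static size_t auth_pad_length(size_t stub_and_verifier_offset)
    {
            size_t misalign = stub_and_verifier_offset % AUTH_PAD_ALIGNMENT;
            return misalign ? (AUTH_PAD_ALIGNMENT - misalign) : 0;
    }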
|
|
|
|
Like TYPESAFE_QSORT() but for the ldb_qsort() function
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This might help on some filesystems
|
|
|
|
This makes it much harder to get the type of a qsort comparison
function wrong.
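A minimal sketch of the idea behind such a macro (hypothetical, not the actual TYPESAFE_QSORT() definition): calling the comparison function once with typed element pointers makes the compiler reject a mismatched signature, while the sort itself still goes through the classic qsort() cast.

    #include <stdlib.h>

    /* Hypothetical type-safe qsort wrapper.  The extra call to cmp with
     * typed element pointers forces the compiler to check cmp against
     * the array's element type; its result is discarded. */
    #define MY_TYPESAFE_QSORT(base, numel, cmp) \
    do { \
            if ((numel) > 1) { \
                    (void)(cmp)(&(base)[0], &(base)[1]); \
            } \
            qsort((base), (numel), sizeof((base)[0]), \
                  (int (*)(const void *, const void *))(cmp)); \
    } while (0)

    /* Example usage: the comparison function takes typed pointers. */
    static int cmp_int(const int *a, const int *b)
    {
            return (*a > *b) - (*a < *b);
    }
    /* int nums[4] = {3, 1, 4, 1};  MY_TYPESAFE_QSORT(nums, 4, cmp_int); */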
|
|
|
|
and 0 in the places where it does.
Jeremy
|
|
behavior.
Cause all exit paths to go through one place, where all cleanup is
done. Use change_to_root_user() for pathname operations that should succeed if
the path exists, even if the connecting user has no access.
For example, a share can now be defined with a path of /root/only/access
(where /root/only/access is a directory path with all components only
accessible to root e.g. root owned, permissions 700 on every component).
Non-root users will now correctly connect, but get ACCESS_DENIED on
all activities (which matches Windows behavior). Previously, non-root
users would get NT_STATUS_BAD_NETWORK_NAME on doing a TConX to this
share, even though it's a perfectly valid share path (just not accessible
to them).
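For illustration, such a share might look like this in smb.conf (the share name is made up; the path and permissions are the ones described above):

    # Every component of the path is root-owned with mode 0700,
    # so only root can traverse it.
    [rootonly]
            path = /root/only/access

    # Non-root clients can now connect to the share, but every
    # operation inside it fails with ACCESS_DENIED, matching Windows.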
This change was inspired by the research I did for bug #7126, which
was reported by bepi@adria.it.
As this is a change in a core function, I'm proposing to leave
this only in master for 3.6.0, not back-port to any existing releases.
This should give us enough time to decide if this is the way we want this to
behave (as Windows) or if we prefer the previous behavior.
Jeremy.
|
|
Jeremy.
|
|
|
|
The "lock spin time" parameter mimics the following Windows
setting which by default is 250ms in Windows and 200ms in Samba.
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\LockViolationDelay
When a client sends repeated, non-blocking, contending BRL requests
to a Windows server, after the first one Windows starts treating these
requests as timed blocking locks with the above timeout.
As an efficiency, I've changed the behavior when this setting is 0,
to skip this logic and treat all requests as non-blocking locks.
This gives the smbd server behavior similar to the 3.0 release with
the do_spin_lock() implementation.
I've also changed the blocking lock parameter in the call from
push_blocking_lock_request() to true as all requests made in this
path are blocking by definition.
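A rough sketch of the decision described above (hypothetical names and types, not the smbd implementation):

    #include <stdbool.h>
    #include <stdint.h>

    /* lock_spin_time_ms is the configured "lock spin time";
     * contended_before says whether this request already hit the
     * same lock conflict once. */
    struct lock_decision {
            bool treat_as_blocking;   /* retry internally as a timed blocking lock */
            uint32_t timeout_ms;
    };

    static struct lock_decision handle_contending_lock(uint32_t lock_spin_time_ms,
                                                       bool contended_before)
    {
            struct lock_decision d = { false, 0 };

            if (lock_spin_time_ms == 0) {
                    /* "lock spin time = 0": fail contending requests
                     * immediately, like the old do_spin_lock() era. */
                    return d;
            }
            if (contended_before) {
                    /* mimic Windows: keep retrying for the spin time
                     * before returning the conflict to the client. */
                    d.treat_as_blocking = true;
                    d.timeout_ms = lock_spin_time_ms;
            }
            return d;
    }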
|
|
This hasn't been turned on or been capable of doing so for
many years now. Makes this jumbo function smaller...
Jeremy.
|
|
NT_STATUS_OBJECT_PATH_INVALID error"
This reverts commit 2fdd8b10c6abadd27c579e772c0482214d2363a5.
This fix is incorrect. The original code works as desired,
I made a mistake here.
Jeremy.
|
|
NT_STATUS_OBJECT_PATH_INVALID error
As tridge's comment says, we should be ignoring ACCESS_DENIED
on the share path in a TconX call, instead allowing the mount
and having individual SMB calls fail (as Windows does). The
original code erroneously caught SMB_VFS_STAT != 0 and errored
out on that.
Jeremy.
|
|
Michael
|
|
|
|
Called from key_exists, scan_sorted_subkeys re-creates the sorted
subkeys record of the given key and then searches through it.
The race is that between creation and parsing of the sorted subkey
record, another process that stores some other subkey of the same
parent key can delete the sorted subkey record, resulting in a
WERR_BADFILE for an operation that should actually succeed.
This patch fixes the issue by wrapping the creation and parsing
into a transaction.
Michael
|
|
Michael
|
|
This made smbd crash in g_lock_lock() when trying to start a
transaction on a db with an already started transaction,
e.g. in a tcon_and_X where share_info.tdb was not yet
initialized but was already locked by another process, or on
write access to the winreg rpc pipe where the registry tdb
was already locked by another process.
What we really _want_ to do here by design is to react to
MSG_DBWRAP_G_LOCK_RETRY messages that are either sent
by a client doing g_lock_unlock or by ourselves when
we receive a CTDB_SRVID_SAMBA_NOTIFY or
CTDB_SRVID_RECONFIGURE message from ctdbd, i.e. when
either a client holding a lock or a complete node
has died.
Doing this properly involves calling tevent_loop_once(),
but doing this here with the main ctdbd messaging context
creates a nested event loop when g_lock_lock() is called
from the main event loop.
So as a quick fix, we act a little coarsely here: we do
a select on the ctdb connection fd and when it is readable
or we get EINTR, then we retry without actually parsing
any ctdb packets or dispatching messages. This means that
we retry more often than necessary and intended by design,
but this does no harm and is unobtrusive. When we have
finished, the main loop will pick up all the messages and
ctdb packets. The only extra twist is that we cannot use
timed events here but have to handcode a timeout for select.
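A self-contained sketch of that select-based wait (hypothetical helper, not the dbwrap_ctdb code):

    #include <errno.h>
    #include <stdbool.h>
    #include <sys/select.h>

    /* Wait until the ctdb connection fd becomes readable, or until the
     * hand-coded timeout expires.  Returns true if the caller should
     * retry the lock attempt (fd readable or EINTR), false on timeout
     * or hard error.  Nothing is read or parsed here; the main event
     * loop picks up the pending messages and packets later. */
    static bool wait_for_retry(int ctdb_fd, int timeout_ms)
    {
            fd_set rfds;
            struct timeval tv;
            int ret;

            FD_ZERO(&rfds);
            FD_SET(ctdb_fd, &rfds);
            tv.tv_sec = timeout_ms / 1000;
            tv.tv_usec = (timeout_ms % 1000) * 1000;

            ret = select(ctdb_fd + 1, &rfds, NULL, NULL, &tv);
            if (ret == -1 && errno == EINTR) {
                    return true;
            }
            if (ret > 0 && FD_ISSET(ctdb_fd, &rfds)) {
                    return true;
            }
            return false;
    }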
Michael
|
|
Michael
|
|
Michael
|
|
The key for reading and writing was inconsistent due to an
off-by-one data length.
Michael
|
|
This skips update of the __db_sequence_number__ record when nothing else has
been written. There are transactions that are just opened and then nothing
is written until transaction_commit is called. This is for instance the case
with registry initialization routines: They start a transaction and only
write something when the registry has not been initialized yet.
So this change will skip many db_seqnum bumps and TRANS3_COMMIT roundtrips.
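Roughly, the commit path now does something like this (hypothetical names, not the actual dbwrap_ctdb code):

    #include <stddef.h>

    struct tx_state {
            size_t num_buffered_writes;  /* stores queued since transaction_start */
    };

    static int transaction_commit_sketch(struct tx_state *tx)
    {
            if (tx->num_buffered_writes == 0) {
                    /* Nothing was written: leave __db_sequence_number__
                     * alone and succeed without talking to ctdbd. */
                    return 0;
            }
            /* ... bump __db_sequence_number__, send TRANS3_COMMIT ... */
            return 0;
    }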
Michael
|
|
I carefully prepared the return value only to "return 0;" at the bottom. :-(
This may well have hit us for instance in the nested cancel case
and produced random errors.
Michael
|
|
The logic bug was that if a record was found in the marshall buffer,
the ctdb header of the last record in the marshall buffer was always
returned, and not the ctdb header of the last occurrence of the
requested record.
This is fixed by introducing an additional temporary variable.
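A minimal sketch of the corrected lookup (hypothetical types, not the marshall buffer code itself):

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    struct rec { const char *key; int header; };

    /* Return the header of the last occurrence of the requested key,
     * tracked in a temporary that is only updated on a match. */
    static bool find_last_occurrence(const struct rec *buf, size_t n,
                                     const char *key, int *header_out)
    {
            bool found = false;
            int last_match_header = 0;
            size_t i;

            for (i = 0; i < n; i++) {
                    if (strcmp(buf[i].key, key) == 0) {
                            last_match_header = buf[i].header;
                            found = true;
                    }
            }
            if (found) {
                    *header_out = last_match_header;
            }
            return found;
    }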
Michael
|
|
Michael
|
|
Michael
|
|
Don't treat this as an error but return seqnum 0 instead.
Michael
|
|
|
|
Michael
|
|
For persistent databases, a 64-bit integer is kept in a special record
__db_sequence_number__. This record is incremented with each completed
transaction.
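For illustration, handling such a sequence-number record could look like this (the helper names and the little-endian byte order are assumptions, not the actual on-disk format):

    #include <stdint.h>

    /* Decode the 8-byte __db_sequence_number__ value (little-endian). */
    static uint64_t seqnum_parse(const uint8_t buf[8])
    {
            uint64_t v = 0;
            int i;
            for (i = 7; i >= 0; i--) {
                    v = (v << 8) | buf[i];
            }
            return v;
    }

    /* Encode the bumped value for writing back. */
    static void seqnum_store(uint8_t buf[8], uint64_t v)
    {
            int i;
            for (i = 0; i < 8; i++) {
                    buf[i] = (uint8_t)(v >> (8 * i));
            }
    }

    /* Per committed transaction (a missing record counts as 0):
     * uint64_t seq = seqnum_parse(old); seqnum_store(new, seq + 1); */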
The retry mechanism for failing TRANS3_COMMIT controls inside the
db_ctdb_transaction_commit() function now relies on a modified
behaviour of ctdbd's treatment of persistent databases in recoveries.
Recently, a special treatment for persistent databases had been
introduced in ctdb (1.0.108) to work around the problems with the
original design of persistent transactions.
Now with the rewrite we need to revert to the old behaviour that
ctdb always takes the newest copies of all records.
This change also paves the way for a next step, which will make
recovery use the db seqnum to tell which node has the newest copy
of a persistent db and use that node's copy. This will greatly
reduce the amount of data transferred with each recovery.
Michael
|
|
The return values calculated by the callers were wrong anyway, since
the new marshalling code does not set the local tdb's error code.
Michael
|
|
Michael
|
|
This is the new implementation of ctdb transactions using the
global lock feature. It is needed by the current dbwrap_ctdb code.
Michael
|
|
|
|
|
|
This simplifies the transaction code a lot:
* transaction_start essentially consists of acquiring a global lock.
* No write operations at all are performed on the local database
until the transaction is committed: Every store operation is just
going into the marshall buffer.
* The commit operation calls a new simplified TRANS3_COMMIT control
in ctdb which rolls out the changes to all nodes including the
node that is performing the transaction.
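A simplified, self-contained sketch of that flow (made-up names, not the dbwrap_ctdb API):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdlib.h>

    struct pending_write {
            const char *key;              /* caller-owned in this sketch */
            const char *value;
    };

    struct transaction {
            bool have_global_lock;
            struct pending_write *writes; /* the marshall buffer */
            size_t num_writes;
    };

    static bool tx_start(struct transaction *tx)
    {
            /* transaction_start boils down to taking the global lock */
            tx->have_global_lock = true;
            tx->writes = NULL;
            tx->num_writes = 0;
            return true;
    }

    static bool tx_store(struct transaction *tx, const char *key, const char *value)
    {
            struct pending_write *tmp;

            tmp = realloc(tx->writes, (tx->num_writes + 1) * sizeof(*tmp));
            if (tmp == NULL) {
                    return false;
            }
            tx->writes = tmp;
            tx->writes[tx->num_writes].key = key;
            tx->writes[tx->num_writes].value = value;
            tx->num_writes++;
            return true;                  /* the local database is untouched */
    }

    static bool tx_commit(struct transaction *tx)
    {
            /* Hand the whole buffer to ctdbd (TRANS3_COMMIT), which
             * applies it on every node including this one, then drop
             * the global lock and free the buffer. */
            free(tx->writes);
            tx->writes = NULL;
            tx->num_writes = 0;
            tx->have_global_lock = false;
            return true;
    }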
Michael
|
|
|
|
|
|
This is the basis to implement global locks in ctdb without depending on a
shared file system. The initial goal is to make ctdb persistent transactions
deterministic without too many timeouts.
|
|
|
|
In samba_kdc_trust_message2entry() on error, hdb_free_entry()
may end up trying to access uninitialized memory or double
free the hdb_entry.
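The general pattern for avoiding both problems looks roughly like this (generic C, not the hdb or samba_kdc code):

    #include <stdlib.h>
    #include <string.h>

    struct entry {
            char *principal;
            char *keys;
    };

    /* Free members and clear the pointers, so calling this twice
     * cannot double free. */
    static void entry_free(struct entry *e)
    {
            free(e->principal);
            e->principal = NULL;
            free(e->keys);
            e->keys = NULL;
    }

    static int entry_fill(struct entry *e)
    {
            /* Zero everything up front so the error path never sees
             * uninitialized fields. */
            memset(e, 0, sizeof(*e));

            e->principal = strdup("example@REALM");
            if (e->principal == NULL) {
                    goto fail;
            }
            e->keys = strdup("key material");
            if (e->keys == NULL) {
                    goto fail;
            }
            return 0;
    fail:
            entry_free(e);  /* single cleanup path */
            return -1;
    }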
|