Age | Commit message | Author | Files | Lines |
|
Jeremy.
|
|
Metze, you'll probably be happier with this work as it
doesn't abuse tevent in the way you dislike. This is a
first cut at the code, which will need lots of testing
but I'm hoping this will give people an idea of where I'm
going with this.
Jeremy.
|
|
Jeremy.
|
|
Jeremy
|
|
This will allow us to share logic much more easily between SMB1 and SMB2
servers.
Jeremy
|
|
Rename functions to be internally consistent. Next step is
to cope with queueing single (non-compounded) SMB2 requests to
put some code inside the stubs.
Jeremy.
|
|
Allocate a uint16_t internal SMB1 mid for an SMB2 request.
Add a back pointer from the faked up smb_request struct
to the smb2 request.
Getting ready to add restart code for blocking locks,
share mode violations and oplocks in SMB2.
Jeremy.
|
|
The "lock spin time" parameter mimics the following Windows
setting which by default is 250ms in Windows and 200ms in Samba.
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\LockViolationDelay
When a client sends repeated, non-blocking, contending BRL requests
to a Windows server, after the first Windows starts treating these
requests as timed blocking locks with the above timeout.
As an efficiency, I've changed the behavior when this setting is 0,
to skip this logic and treat all requests as non-blocking locks.
This gives the smbd server behavior similar to the 3.0 release with
the do_spin_lock() implementation.
I've also changed the blocking lock parameter in the call from
push_blocking_lock_request() to true as all requests made in this
path are blocking by definition.
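A minimal sketch of that behavior, with hypothetical names (lock_spin_time_ms,
queue_timed_blocking_lock() and friends are illustrative, not the real smbd
symbols): a contending non-blocking request is re-queued as a short timed
blocking lock unless "lock spin time" is 0, in which case it fails at once.

#include <stdbool.h>

/* Assumed to exist elsewhere; illustrative declarations only. */
extern int lock_spin_time_ms;                 /* "lock spin time" in ms */
bool try_brl(void *req, void *lck);           /* attempt the byte range lock */
void queue_timed_blocking_lock(void *req, void *lck, int timeout_ms);
void reply_lock_conflict(void *req);

void handle_nonblocking_brl(void *req, void *lck)
{
        if (try_brl(req, lck)) {
                return;                       /* granted immediately */
        }
        if (lock_spin_time_ms == 0) {
                /* Optimization described above: no spinning, fail at once. */
                reply_lock_conflict(req);
                return;
        }
        /* Mimic Windows: treat the repeated contending request as a
         * blocking lock with the short LockViolationDelay-style timeout. */
        queue_timed_blocking_lock(req, lck, lock_spin_time_ms);
}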
|
|
Jeremy.
|
|
When we are waiting on a pending byte range lock, another smbd might
exit uncleanly, and therefore not notify us of the removal of the
lock, and thus not trigger the lock to be retried.
We coped with this up to now by adding a message_send_all() in the
SIGCHLD and cluster reconfigure handlers to send a MSG_SMB_UNLOCK to
all smbd processes. That would generate O(N^2) work when a large
number of clients disconnected at once (such as on a network outage),
which could leave the whole system unusable for a very long time (many
minutes, or even longer).
By adding a minimum re-check time for pending byte range locks we
avoid this problem by ensuring that pending locks are retried at a
more regular interval.
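A sketch of the resulting timeout calculation (BRL_RECHECK_MS and
next_brl_wakeup_ms() are made-up names for illustration): the wait for a
pending lock is capped at a minimum re-check interval, so a missed
MSG_SMB_UNLOCK only delays the retry instead of stalling it forever.

#include <stdint.h>

#define BRL_RECHECK_MS 5000     /* minimum re-check interval, illustrative value */

/* Milliseconds until we next wake up to service a pending byte range lock:
 * either when the lock request itself times out or at the periodic
 * re-check interval, whichever comes first. */
static uint64_t next_brl_wakeup_ms(uint64_t lock_timeout_remaining_ms)
{
        if (lock_timeout_remaining_ms < BRL_RECHECK_MS) {
                return lock_timeout_remaining_ms;
        }
        return BRL_RECHECK_MS;
}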
|
|
held outside of samba.
Fixes the case where a connection with a pending lock can be marked "idle", and ensures
that the lock queue timeout is always recalculated.
Jeremy.
|
|
Jeremy.
|
|
|
|
We keep the seqnum/mid mapping in the smb_request structure.
This also moves one global variable into the
smbd_server_connection struct.
metze
|
|
This patch adds 3 new VFS OPs for Windows byte range locking: BRL_LOCK_WINDOWS,
BRL_UNLOCK_WINDOWS and BRL_CANCEL_WINDOWS. Specifically:
* I renamed brl_lock_windows, brl_unlock_windows and brl_lock_cancel to
*_default as the default implementations of the VFS ops.
* The blocking_lock_record (BLR) is now passed into the brl_lock_windows and
brl_cancel_windows paths. The OneFS implementation uses it - future
implementations may find it useful too.
* Created brl_lock_cancel to do what brl_lock/brl_unlock do: set up a
lock_struct and call either the Posix or Windows lock function. These happen
to be the same for the default implementation.
* Added helper functions: increment_current_lock_count() and
decrement_current_lock_count().
* Minor spelling correction in brl_timeout_fn: brl -> blr.
* Changed blocking_lock_cancel() to return the BLR that it has cancelled. This
allows us to assert it's the lock that we wanted to cancel. If this assert ever
fires, this path will need to take in the BLR to cancel, rather than choosing
on its own.
* Adds a small helper function: find_blocking_lock_record_by_id(). Used by the
OneFS implementation, but could be useful for others.
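The dispatch pattern this describes, sketched with simplified stand-in types
and signatures (the real Samba VFS op table and prototypes differ): a module
such as the OneFS one can install its own Windows lock/unlock/cancel
operations, while the renamed *_default functions remain the fallback.

#include <stdbool.h>

/* Simplified stand-ins for the real brl types; illustrative only. */
struct byte_range_lock;
struct lock_struct;
struct blocking_lock_record;

struct brl_windows_ops {
        bool (*brl_lock_windows)(struct byte_range_lock *br_lck,
                                 struct lock_struct *plock,
                                 struct blocking_lock_record *blr);
        bool (*brl_unlock_windows)(struct byte_range_lock *br_lck,
                                   const struct lock_struct *plock);
        bool (*brl_cancel_windows)(struct byte_range_lock *br_lck,
                                   struct lock_struct *plock,
                                   struct blocking_lock_record *blr);
};

/* Default implementations (the renamed *_default functions). */
bool brl_lock_windows_default(struct byte_range_lock *br_lck,
                              struct lock_struct *plock,
                              struct blocking_lock_record *blr);
bool brl_unlock_windows_default(struct byte_range_lock *br_lck,
                                const struct lock_struct *plock);
bool brl_lock_cancel_default(struct byte_range_lock *br_lck,
                             struct lock_struct *plock,
                             struct blocking_lock_record *blr);

static const struct brl_windows_ops default_brl_ops = {
        .brl_lock_windows   = brl_lock_windows_default,
        .brl_unlock_windows = brl_unlock_windows_default,
        .brl_cancel_windows = brl_lock_cancel_default,
};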
|
|
blocking_lock_record.
|
|
This changelist allows for the addition of custom performance
monitoring modules through smb.conf. Entrypoints in the main message
processing code have been added to capture the command, subop, ioctl,
identity and message size statistics.
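One plausible shape for such entry points, as a purely hypothetical sketch
(struct perfmon_ops and its members are invented here, not the actual Samba
interface): a table of callbacks the message-processing loop invokes with the
command, subop/ioctl and message sizes for each request.

#include <stdint.h>

/* Hypothetical hook table for illustration only. */
struct perfmon_ops {
        void (*start)(void *counter);
        void (*set_op)(void *counter, int op);            /* SMB command */
        void (*set_subop)(void *counter, int subop);      /* trans2/nttrans subop */
        void (*set_ioctl)(void *counter, uint32_t ioctl);
        void (*set_msglen_in)(void *counter, uint64_t bytes_in);
        void (*set_msglen_out)(void *counter, uint64_t bytes_out);
        void (*end)(void *counter);
};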
|
|
|
|
This removes the global variable "orig_inbuf" in the old chain_reply code. This global
variable was one of the reasons why we had the silly restriction to not allow
async requests within a request chain.
|
|
The goal is to move all these variables into a big context structure.
metze
|
|
metze
|
|
Instead, fix up the outbuf in send_xx_reply. In those routines, we know
what we are returning.
|
|
|
|
|
|
The only caller of this function is locking_close_file(), which itself checks
whether brl_lock != NULL. The additional check is not necessary here.
|
|
The "else" is pointless here, we did a "return True" in the if branch.
|
|
Use "continue" for (SVAL(blr->inbuf,smb_mid) != mid)
|
|
Use a "continue" for (blr->fsp->fnum != fsp->fnum)
|
|
|
|
This is necessary if we want to keep the whole smb_request for deferred ops.
The explicit settings of req->inbuf will be removed once all those deferring
operations are converted to store the whole request and not just the inbuf.
|
|
|
|
|
|
|
|
|
|
|
|
(This used to be commit 444e35e7df1f13fc285183da8fb41b30ad99a3fa)
|
|
Jeremy.
(This used to be commit 3e3205309b75edf7d29633525adfdceb5f8856eb)
|
|
a lock due to file closure make sure we null out the fsp pointer
so it isn't dangling. This is an old bug (not related to the new
changes).
Jeremy.
(This used to be commit b5ee972b0c04b4d119573d95ac458a3b6be30c5c)
|
|
with Volker. Mostly making sure we have data on the incoming
packet type, not stored in the smb header.
Jeremy.
(This used to be commit c4e5a505043965eec77b5bb9bc60957e8f3b97c8)
|
|
to zero). If non-zero, writeX calls greater than this
value will be left in the socket buffer for later handling
with recvfile (or userspace equivalent). Definition of
recvfile for your system is left as an exercise for
the reader (I'm working on getting splice working :-).
Jeremy.
(This used to be commit 11c03b75ddbcb6e36b231bb40a1773d1c550621c)
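For the "userspace equivalent" mentioned above, a hedged sketch of a
recvfile-style copy using Linux splice(2) through a pipe (error handling is
trimmed and this is not the code the commit refers to):

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>

/* Move "count" bytes still sitting in the socket buffer into a file at
 * "offset" without copying them through userspace. Returns bytes moved
 * or -1 on setup failure. */
static ssize_t recvfile_splice(int sockfd, int filefd, loff_t offset, size_t count)
{
        int pipefd[2];
        size_t remaining = count;

        if (pipe(pipefd) == -1) {
                return -1;
        }
        while (remaining > 0) {
                /* socket -> pipe */
                ssize_t n = splice(sockfd, NULL, pipefd[1], NULL,
                                   remaining, SPLICE_F_MOVE);
                if (n <= 0) {
                        break;
                }
                /* pipe -> file at the requested offset */
                while (n > 0) {
                        ssize_t w = splice(pipefd[0], NULL, filefd, &offset,
                                           (size_t)n, SPLICE_F_MOVE);
                        if (w <= 0) {
                                goto out;
                        }
                        n -= w;
                        remaining -= (size_t)w;
                }
        }
out:
        close(pipefd[0]);
        close(pipefd[1]);
        return (ssize_t)(count - remaining);
}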
|
|
bugs in various places whilst doing this (places that assumed
BOOL == int). I also need to fix the Samba4 pidl generation
(next checkin).
Jeremy.
(This used to be commit f35a266b3cbb3e5fa6a86be60f34fe340a3ca71f)
|
|
This adds the two functions talloc_stackframe() and talloc_tos().
When a new talloc stackframe is allocated with talloc_stackframe(), the
TALLOC_CTX returned by talloc_tos() is reset to that new frame. Whenever that
stack frame is TALLOC_FREE()'ed, the reverse happens: the previous
talloc_tos() is restored.
This API is designed to be robust in the sense that if someone forgets to
TALLOC_FREE() a stackframe, then the next outer one correctly cleans up and
resets the talloc_tos().
The original motivation for this patch was to get rid of the
sid_string_static & friends buffers. Explicitly passing talloc context
everywhere clutters code too much for my taste, so an implicit
talloc_tos() is introduced here. Many of these static buffers are
replaced by a single static pointer.
The intended use would thus be that low-level functions can rather
freely push stuff to talloc_tos, the upper layers clean up by freeing
the stackframe. The more of these stackframes are used and correctly
freed, the more exact the memory cleanup becomes.
This patch removes the main_loop_talloc_ctx, tmp_talloc_ctx and
lp_talloc_ctx (did I forget any?)
So, never do a
tmp_ctx = talloc_init("foo");
anymore, instead, use
tmp_ctx = talloc_stackframe()
:-)
Volker
(This used to be commit 6585ea2cb7f417e14540495b9c7380fe9c8c717b)
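A minimal usage sketch of the API described above (assuming Samba's
talloc_stack.h header is available; the string is just a placeholder):

#include <talloc.h>
#include "talloc_stack.h"       /* talloc_stackframe(), talloc_tos() */

static void do_something(void)
{
        /* Push a new stackframe; talloc_tos() now points at it. */
        TALLOC_CTX *frame = talloc_stackframe();

        /* Low-level helpers can allocate temporaries on talloc_tos()
         * without being handed an explicit context. */
        char *tmp = talloc_strdup(talloc_tos(), "some temporary string");
        if (tmp == NULL) {
                TALLOC_FREE(frame);
                return;
        }

        /* ... use tmp ... */

        /* Freeing the frame releases everything hung off talloc_tos()
         * and restores the previous frame as the new talloc_tos(). */
        TALLOC_FREE(frame);
}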
|
|
Ronnie. If a lock timeout expires, we must check we can get the
lock before responding with failure. Volker is writing a torture test.
Jeremy.
(This used to be commit 45380f356b99d575645873b05af17c504b091dc8)
|
|
(This used to be commit 17df313db42199e26d7d2044f6a1d845aacd1a90)
|
|
(This used to be commit cb8fab5663db2cb408e1b85a7287d3670b09d503)
|
|
This changes send_trans2_replies to not depend on large buffers anymore
and finishes the trans2 conversion.
(This used to be commit b1d133e4ffa8c9b8219ba6e7b83e23ca4bdd1616)
|
|
Fake a struct smb_request here.
Volker
(This used to be commit f712d1c92bee024a165b5facabdac1e2c866d9b1)
|
|
(This used to be commit e6d592dcb8ff9f986b531435d0d03df20880880b)
|
|
InBuffer/OutBuffer
The complete history of this patch can be found under
http://www.samba.org/~vlendec/inbuf-checkin/.
Jeremy, Jerry: If possible I would like to see this in 3.2.0. I'm only
checking into 3_2 at the moment, as it will currently slow down all
non-converted operations (i.e. all of them at this point), because it copies
the talloc'ed inbuf over the global InBuffer. It will need quite a bit of effort
to convert everything necessary for the normal operations an XP box does.
I have patches for negprot, session setup, tcon_and_X, open_and_X, close. More
to come, but I would appreciate some help here.
Volker
(This used to be commit 5594af2b208c860d3f4b453af6a649d9e4295d1c)
|
|
(This used to be commit b0132e94fc5fef936aa766fb99a306b3628e9f07)
|