Age | Commit message | Author | Files | Lines
---|---|---|---|---
2011-03-30 | s3-vfs: include smbd/smbd.h in vfs modules. | Günther Deschner | 1 | -0/+1
Guenther
2011-01-26 | s3-modules: Fixed the for-loop code block. | Andreas Schneider | 1 | -0/+1
2010-06-12 | s3: Explicitly pass sconn to process_blocking_lock_queue | Volker Lendecke | 1 | -2/+2
2010-06-09 | Rename "allow_smb2" -> "using_smb2" and make the usage clearer. | Jeremy Allison | 1 | -2/+2
2010-04-30 | Plumb the SMB2 front end into the blocking lock backend. | Jeremy Allison | 1 | -13/+41
Metze, you'll probably be happier with this work as it doesn't abuse tevent in the way you dislike. This is a first cut at the code, which will need lots of testing, but I'm hoping this will give people an idea of where I'm going with this. Jeremy.
2010-04-29 | Move the global blocking lock records into the smb1 state. | Jeremy Allison | 1 | -4/+4
Jeremy
2009-05-12 | s3 onefs: Self-contend level2 oplocks on BRL | Zack Kirsch | 1 | -1/+14
2009-03-31 | s3 onefs: Add missing newlines to debug statements in the onefs module | Tim Prouty | 1 | -3/+3
2009-03-31 | s3 onefs: Async failures are resulting in SMB_ASSERT->smb_panic while running many of the LOCK torture tests. | Zack Kirsch | 1 | -2/+2
Return true from the onefs cancel function if we've errored, which can happen when the CBRL domain is configured to only give out 1 lock. :)
2009-03-13 | s3 OneFS: Add kernel strict locking support | Dave Richards | 1 | -10/+83
2009-03-01 | s3 OneFS: Refactor config code and cleanup includes | Tim Prouty | 1 | -0/+1
2009-02-24 | S3: Add in profile counters for new vfs and syscall entries. | todd stecher | 1 | -2/+24
2009-02-18 | s3: OneFS: Pass in the client's fnum to the ifs_cbrl syscall. | Zack Kirsch | 1 | -3/+4
2009-02-13 | OneFS implementation of BRL VFS ops: | Zack Kirsch | 1 | -0/+453
* Much of the beginning should look familiar, as I re-used the OneFS oplock callback record concept. This was necessary to keep our own state around - it really only consists of a lock state, per asynchronous lock that is currently unsatisfied. The onefs_cbrl_callback_records map to BLRs by the id.
* There are 4 states an async lock can be in. NONE means there is no async currently out for the lock, as opposed to ASYNC. DONE means we've locked *every* lock (keep in mind a request can ask for multiple locks at a time.) ERROR is an error.
* onefs_cbrl_async_success: The lock_num is incremented, and the state changed, so that when process_blocking_lock_queue is run, we will try the *next* lock, rather than the same one again.
* onefs_brl_lock_windows() has some complicated logic:
  * We do a no-op if we're passed a BLR and the matching state is ASYNC -- this means Samba is trying to get the same lock twice, and we just need to wait longer, so we return an error.
  * PENDING lock calls happen when the lock is being queued on the BLQ -- we do async in this case.
  * We also do async in the case that we're passed a BLR, but the lock is not pending. This is an async lock being probed by process_blocking_lock_queue.
  * We do a sync lock for any normal first request of a lock.
  * Failure is returned, but it doesn't go to the client unless the lock has actually timed out.
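The 2009-02-13 commit message above describes the per-request state machine in prose. The following is only an illustrative sketch of that description, not the actual onefs module source: every identifier here (cbrl_lock_state, cbrl_callback_record, cbrl_async_success, cbrl_choose_mode, CBRL_MODE_*) is a hypothetical name chosen for the example.

```c
/*
 * Illustrative sketch only: names and types are hypothetical and do not
 * reproduce the onefs module code. They model the state machine described
 * in the commit message above.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* One of the four states an async lock request can be in. */
enum cbrl_lock_state {
	CBRL_NONE,	/* no async call currently out for the lock */
	CBRL_ASYNC,	/* an async call is outstanding for the current lock */
	CBRL_DONE,	/* every lock in the request has been granted */
	CBRL_ERROR	/* the async call failed */
};

/* Per-request record, mapped to a blocking lock record (BLR) by id. */
struct cbrl_callback_record {
	uint64_t id;			/* matches the BLR's id */
	enum cbrl_lock_state state;
	int lock_num;			/* index of the lock being worked on */
};

/*
 * Success callback: advance lock_num and reset the state so that the next
 * pass of process_blocking_lock_queue() tries the *next* lock in the
 * request rather than re-issuing the one that just succeeded.
 */
static void cbrl_async_success(struct cbrl_callback_record *rec,
			       int total_locks)
{
	rec->lock_num++;
	rec->state = (rec->lock_num >= total_locks) ? CBRL_DONE : CBRL_NONE;
}

/* How the lock routine should proceed for a given request. */
enum cbrl_lock_mode { CBRL_MODE_NOOP, CBRL_MODE_ASYNC, CBRL_MODE_SYNC };

/*
 * Decision logic paraphrased from the commit message for the Windows-lock
 * entry point; rec is NULL when no BLR is associated with the request.
 */
static enum cbrl_lock_mode
cbrl_choose_mode(const struct cbrl_callback_record *rec, bool lock_is_pending)
{
	if (rec != NULL && rec->state == CBRL_ASYNC) {
		/* The same lock is being requested again while a call is
		 * already outstanding: do nothing and report failure so the
		 * request keeps waiting on the queue. */
		return CBRL_MODE_NOOP;
	}
	if (lock_is_pending) {
		/* The request is being queued on the blocking lock queue. */
		return CBRL_MODE_ASYNC;
	}
	if (rec != NULL) {
		/* An existing async lock being probed by
		 * process_blocking_lock_queue(). */
		return CBRL_MODE_ASYNC;
	}
	/* Normal first request for a lock: try it synchronously. */
	return CBRL_MODE_SYNC;
}
```

The key design point, as described in the commit message, is that the success callback advances lock_num before the queue is reprocessed, so each pass over the blocking lock queue works on the next unsatisfied lock instead of retrying one that has already been granted.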