Age | Commit message | Author | Files | Lines |
|
|
|
|
|
- Revert Tim's changes for the moment. I need to see what the issue is and
arrange to use "struct statvfs" if at all possible.
Derrell
|
|
|
|
|
|
|
|
|
|
|
|
|
|
names to the one given by anonymize_prefix, without generating a hash number. This setting is optional and is compatible with the module configuration format of Samba 3.3.
|
|
Guenther
|
|
Guenther
|
|
Guenther
|
|
Guenther
|
|
Guenther
|
|
Guenther
|
|
Guenther
|
|
spoolss_SetPrinterInfo0.
Guenther
|
|
Guenther
|
|
Guenther
|
|
Guenther
|
|
Guenther
|
|
|
|
that "offered" read from the rpc packet in spoolss is under
that size. Tidyup from analysis from Veracode.
Jeremy.
|
|
|
|
Jeremy.
|
|
This tests for mis-behaved case-insensitive get_real_filename
implementations.
|
|
The statvfs struct isn't guaranteed to be portable across operating
systems. Since libsmbclient isn't actually calling statvfs and just
using the statvfs struct to store similar information, this patch adds
a new portable smbc_statvfs struct. This fixes a few of the failures
in the build farm introduced by:
ae259575c447e61665c8e7070c476914161b953f
Derrell, please check.
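
For reference, a rough sketch of what a portable, fixed-width statvfs-like
structure can look like. The field list below is an assumption modeled on
POSIX statvfs(3); it is not the actual smbc_statvfs definition in
libsmbclient.h:

#include <stdint.h>

/* Illustrative only: fixed-width fields avoid the per-OS layout
 * differences of the native struct statvfs. */
struct smbc_statvfs_sketch {
    uint64_t f_bsize;   /* filesystem block size */
    uint64_t f_frsize;  /* fragment size */
    uint64_t f_blocks;  /* size of fs in f_frsize units */
    uint64_t f_bfree;   /* free blocks */
    uint64_t f_bavail;  /* free blocks for unprivileged users */
    uint64_t f_files;   /* total inodes */
    uint64_t f_ffree;   /* free inodes */
    uint64_t f_favail;  /* free inodes for unprivileged users */
    uint64_t f_fsid;    /* filesystem id */
    uint64_t f_flag;    /* mount flags */
    uint64_t f_namemax; /* maximum filename length */
};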
|
|
|
|
Add 'perfcount module = pc_test' to exercise this module. Results are
logged into smb.log every 50 operations (configurable via smb.conf).
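
As a toy illustration of the "log a summary every N operations" behaviour
described above (this is not the pc_test module; the names and the logging
call are made up):

#include <stdio.h>

#define LOG_INTERVAL 50   /* the real interval is configurable via smb.conf */

static unsigned long op_count;

/* Called once per completed SMB operation in this sketch. */
static void perfcount_op_finished(const char *opname)
{
    op_count++;
    if ((op_count % LOG_INTERVAL) == 0) {
        fprintf(stderr, "perfcount: %lu ops so far, last was %s\n",
                op_count, opname);
    }
}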
|
|
|
|
and it still doesn't build, you know it's messed up.
Jeremy.
|
|
* Much of the beginning should look familiar, as I re-used the OneFS oplock
callback record concept. This was necessary to keep our own state around - it
really only consists of a lock state for each asynchronous lock that is
currently unsatisfied. The onefs_cbrl_callback_records map to BLRs by their id.
* There are 4 states an async lock can be in (see the sketch after this list).
NONE means there is no async request currently out for the lock, as opposed to
ASYNC. DONE means we've locked *every* lock (keep in mind a request can ask for
multiple locks at a time). ERROR means the async lock attempt failed.
* onefs_cbrl_async_success: The lock_num is incremented, and the state changed,
so that when process_blocking_lock_queue is run, we will try the *next* lock,
rather than the same one again.
* onefs_brl_lock_windows() has some complicated logic:
* We do a no-op if we're passed a BLR and the matching state is ASYNC --
this means Samba is trying to get the same lock twice, and we just need
to wait longer, so we return an error.
* PENDING lock calls happen when the lock is being queued on the BLQ -- we
do async in this case.
* We also do async in the case that we're passed a BLR, but the lock is not
pending. This is an async lock being probed by process_blocking_lock_queue.
* We do a sync lock for any normal first request of a lock.
* Failure is returned, but it doesn't go to the client unless the lock has
actually timed out.
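
As a rough sketch of the per-lock state described above (names are taken from
this message where possible, but the definitions below are illustrative, not
the actual OneFS module code):

#include <stdint.h>

/* One state per async lock that is currently unsatisfied. */
enum onefs_cbrl_lock_state_sketch {
    ONEFS_CBRL_NONE,   /* no async request out for this lock */
    ONEFS_CBRL_ASYNC,  /* an async request is in flight */
    ONEFS_CBRL_DONE,   /* every lock in the request is held */
    ONEFS_CBRL_ERROR   /* the async attempt failed */
};

/* Maps to its blocking_lock_record (BLR) by id. */
struct onefs_cbrl_callback_record_sketch {
    uint64_t id;
    enum onefs_cbrl_lock_state_sketch state;
    int lock_num;      /* next lock to try; the real code may keep this on the BLR */
};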
|
|
This patch adds 3 new VFS OPs for Windows byte range locking: BRL_LOCK_WINDOWS,
BRL_UNLOCK_WINDOWS and BRL_CANCEL_WINDOWS. Specifically:
* I renamed brl_lock_windows, brl_unlock_windows and brl_lock_cancel to
*_default as the default implementations of the VFS ops.
* The blocking_lock_record (BLR) is now passed into the brl_lock_windows and
brl_cancel_windows paths. The OneFS implementation uses it - future
implementations may find it useful too.
* Created brl_lock_cancel to do what brl_lock/brl_unlock do: set up a
lock_struct and call either the Posix or Windows lock function. These happen
to be the same for the default implementation.
* Added helper functions: increment_current_lock_count() and
decrement_current_lock_count().
* Minor spelling correction in brl_timeout_fn: brl -> blr.
* Changed blocking_lock_cancel() to return the BLR that it has cancelled. This
allows us to assert it's the lock that we wanted to cancel. If this assert ever
fires, this path will need to take in the BLR to cancel, rather than choosing
on its own.
* Added a small helper function: find_blocking_lock_record_by_id(). Used by the
OneFS implementation, but could be useful for others (sketched below).
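
A minimal sketch of what find_blocking_lock_record_by_id() can look like,
assuming the pending records sit on a singly linked queue and carry a numeric
id (the real struct layout and list handling differ):

#include <stddef.h>
#include <stdint.h>

struct blocking_lock_record_sketch {
    struct blocking_lock_record_sketch *next;
    uint64_t id;
    /* lock details elided */
};

static struct blocking_lock_record_sketch *
find_blocking_lock_record_by_id_sketch(struct blocking_lock_record_sketch *queue,
                                       uint64_t id)
{
    struct blocking_lock_record_sketch *blr;

    for (blr = queue; blr != NULL; blr = blr->next) {
        if (blr->id == id) {
            return blr;
        }
    }
    return NULL;
}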
|
|
blocking_lock_record.
|
|
Until we reach 1.0.0, we better require the exact same version.
metze
|
|
- aio events were removed
- tevent_req infrastructure was added (see the sketch below)
metze
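
For readers new to it, the tevent_req infrastructure follows a _send()/_recv()
pattern; a minimal sketch with a made-up "echo" computation (the tevent calls
are the standard ones, the rest is illustrative):

#include <stdint.h>
#include <talloc.h>
#include <tevent.h>

struct echo_state {
    int result;
};

/* _send(): create the request; real code would kick off async work and
 * call tevent_req_done() later, from a callback. */
static struct tevent_req *echo_send(TALLOC_CTX *mem_ctx,
                                    struct tevent_context *ev,
                                    int value)
{
    struct tevent_req *req;
    struct echo_state *state;

    req = tevent_req_create(mem_ctx, &state, struct echo_state);
    if (req == NULL) {
        return NULL;
    }
    state->result = value;

    tevent_req_done(req);
    return tevent_req_post(req, ev); /* deliver the result via the event loop */
}

/* _recv(): collect the result after the caller's callback has fired. */
static int echo_recv(struct tevent_req *req, int *presult)
{
    struct echo_state *state = tevent_req_data(req, struct echo_state);
    enum tevent_req_state req_state;
    uint64_t error;

    if (tevent_req_is_error(req, &req_state, &error)) {
        return -1;
    }
    *presult = state->result;
    return 0;
}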
|
|
This is almost a copy of the async_req code,
which will be removed later.
metze
|
|
metze
|
|
metze
|
|
It makes no sense to support aio events, because
the current implementation was based on IOCB_CMD_EPOLL_WAIT,
which never made it into the mainline kernel tree.
Native Linux aio can be used with select/epoll
via eventfd(), which means we can implement aio
with fd events and implement it outside of tevent.
metze
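
A minimal sketch of that eventfd() route, assuming libaio is available: the
completion of a native aio read raises an eventfd, which any ordinary fd-based
event loop (epoll here) can watch. Error handling is omitted and the file path
is arbitrary:

/* Build with: cc sketch.c -laio */
#include <libaio.h>
#include <sys/eventfd.h>
#include <sys/epoll.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    io_context_t ctx = 0;
    struct iocb cb, *cbs[1] = { &cb };
    struct io_event done;
    struct epoll_event eev = { .events = EPOLLIN };
    char buf[4096];
    uint64_t ncompleted;

    int fd = open("/etc/hostname", O_RDONLY);   /* arbitrary example file */
    int efd = eventfd(0, 0);
    int epfd = epoll_create(1);

    io_setup(8, &ctx);
    io_prep_pread(&cb, fd, buf, sizeof(buf), 0);
    io_set_eventfd(&cb, efd);                   /* completion bumps the eventfd */
    io_submit(ctx, 1, cbs);

    epoll_ctl(epfd, EPOLL_CTL_ADD, efd, &eev);
    epoll_wait(epfd, &eev, 1, -1);              /* plain fd readiness, no aio events */

    read(efd, &ncompleted, sizeof(ncompleted)); /* how many aio requests finished */
    io_getevents(ctx, 1, 1, &done, NULL);       /* reap the finished read */
    printf("read %ld bytes\n", (long)done.res);

    io_destroy(ctx);
    close(epfd);
    close(efd);
    close(fd);
    return 0;
}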
|
|
metze
|
|
metze
|
|
metze
|
|
metze
|
|
metze
|
|
|
|
|
|
|
|
metze
|