This reverts commit 69d5cea2e59162f19460e7ce4b6382fc5fdd6ca0.
This commit causes issues with the RPC server, revert it until we find the
exact issue and possibly have a torture test to avoid it happening again.
Found playing with w2k8r2 and forest trusts.
|
|
We need to install named.conf.update for provision to succeed from the
installed setup file.
Andrew Bartlett
|
|
By doing the unmount, we can avoid double-mounting st and bin
|
|
Signed-off-by: Andrew Bartlett <abartlet@samba.org>
|
|
Signed-off-by: Andrew Bartlett <abartlet@samba.org>
|
|
Signed-off-by: Andrew Bartlett <abartlet@samba.org>
|
|
Signed-off-by: Andrew Bartlett <abartlet@samba.org>
|
|
I implemented this referral test in C since the LDB Python API isn't capable
of extracting referrals from search result sets (there the result sets are simple
lists which contain only the matching entries).
First I enhanced the RootDSE test to return all partition base DNs in a new
NULL-terminated list "partitions". Then I used this in my referrals test, which
I've implemented with the LDB C API since I needed certain DN functions.
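A minimal stand-alone sketch (not the actual test; the URL and base DN are placeholders) of how the LDB C API exposes referrals: a search fills a struct ldb_result whose NULL-terminated refs array carries the referral URIs separately from the matching entries in msgs.

    #include <stdio.h>
    #include <talloc.h>
    #include <ldb.h>

    /* Sketch only: connect to a DC, search, and print any referrals.
     * "ldap://dc.example.com" and the base DN are placeholder values. */
    static int print_referrals(void)
    {
            struct ldb_context *ldb = ldb_init(NULL, NULL);
            struct ldb_result *res = NULL;
            unsigned int i;

            if (ldb == NULL ||
                ldb_connect(ldb, "ldap://dc.example.com", 0, NULL) != LDB_SUCCESS) {
                    talloc_free(ldb);
                    return -1;
            }

            if (ldb_search(ldb, ldb, &res,
                           ldb_dn_new(ldb, ldb, "DC=example,DC=com"),
                           LDB_SCOPE_SUBTREE, NULL,
                           "(objectClass=*)") != LDB_SUCCESS) {
                    talloc_free(ldb);
                    return -1;
            }

            /* res->refs is a NULL-terminated list of referral URIs, kept
             * apart from the matching entries in res->msgs. */
            for (i = 0; res->refs != NULL && res->refs[i] != NULL; i++) {
                    printf("referral: %s\n", res->refs[i]);
            }

            talloc_free(ldb);
            return 0;
    }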
|
|
This is a first, very basic implementation of the referrals (more information
in MS-ADTS 3.1.1.4.6 and 3.1.1.3.4.1.12).
To have full referral support (and to always point to the right host) a full
implementation using DNS will be needed; at the moment we always point to
the main DC, which is reachable through the DNS domain name.
Signed-off-by: Andrew Bartlett <abartlet@samba.org>
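The referral generated for an out-of-partition DN therefore has roughly the shape below (illustrative only; build_referral_uri is a hypothetical helper, not Samba code). Because the plain DNS domain name resolves to the main DC, such a URL is always chaseable even before per-partition DNS records are handled.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical helper, for illustration only: compose an LDAP referral
     * of the form "ldap://<dns domain>/<DN>". */
    static char *build_referral_uri(const char *dns_domain, const char *dn)
    {
            size_t len = strlen("ldap://") + strlen(dns_domain) + 1 + strlen(dn) + 1;
            char *uri = malloc(len);
            if (uri == NULL) {
                    return NULL;
            }
            snprintf(uri, len, "ldap://%s/%s", dns_domain, dn);
            return uri;
    }

    /* e.g. build_referral_uri("samba.example.com", "DC=foreign,DC=example,DC=com")
     *      -> "ldap://samba.example.com/DC=foreign,DC=example,DC=com" */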
|
|
The domain scope control is always removed; from the search options control
only the two interesting flags (which are handled) are kept, and it is marked
as non-critical.
Signed-off-by: Andrew Bartlett <abartlet@samba.org>
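In outline the control handling is something like this sketch (assumed to mirror the described behaviour, not copied from the module); the OIDs, the LDB_SEARCH_OPTION_* flags, and struct ldb_search_options_control come from ldb.h.

    #include <string.h>
    #include <ldb.h>

    /* Sketch (assumed behaviour): drop the domain scope control, and reduce
     * the search options control to the two flags we handle, marking it
     * non-critical so later modules may safely ignore it. */
    static void filter_controls(struct ldb_control **controls)
    {
            unsigned int i, j;

            if (controls == NULL) {
                    return;
            }
            for (i = j = 0; controls[i] != NULL; i++) {
                    if (strcmp(controls[i]->oid, LDB_CONTROL_DOMAIN_SCOPE_OID) == 0) {
                            continue;       /* always removed */
                    }
                    if (strcmp(controls[i]->oid, LDB_CONTROL_SEARCH_OPTIONS_OID) == 0) {
                            struct ldb_search_options_control *opts = controls[i]->data;
                            opts->search_options &= (LDB_SEARCH_OPTION_DOMAIN_SCOPE |
                                                     LDB_SEARCH_OPTION_PHANTOM_ROOT);
                            controls[i]->critical = 0;
                    }
                    controls[j++] = controls[i];
            }
            controls[j] = NULL;
    }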
|
|
This is needed for my work on referrals when the domain scope control
isn't specified.
Signed-off-by: Andrew Bartlett <abartlet@samba.org>
|
|
They don't cause any harm to our functionality, so ignore them where they are not needed.
Signed-off-by: Andrew Bartlett <abartlet@samba.org>
|
|
This was causing marshalling faults when we returned errors.
|
|
|
|
Signed-off-by: Matthias Dieter Wallnöfer <mwallnoefer@yahoo.de>
|
|
Signed-off-by: Matthias Dieter Wallnöfer <mwallnoefer@yahoo.de>
|
|
drsuapi_DsReplicaGetInfoRequest description
|
|
|
|
|
|
connections
|
|
We need this so we can create independent DRS connections to
different DCs.
|
|
- Function should accept a pointer to drsuapi_DsReplicaSyncRequest.
While this doesn't generate essentially different code for the
NDR parser, using a pointer makes the drsuapi_DsReplicaSync
description consistent with the rest of the functions in the DRSUAPI interface.
Another benefit is that this way we could create a Wireshark
dissector directly from Samba's version of drsuapi.idl.
- 'level' and thus the switch_type() should be uint32
|
|
- pointer to naming_context should be [ref] pointer
(i.e. not NULL pointer)
- other_info is actually the DNS name for Source DSA and is used
if DRSUAPI_DRS_SYNC_BYNAME is passed
ref: [MS-DRSR] 5.39
|
|
metze
|
|
metze
|
|
metze
|
|
Inspired by bug #7159.
metze
|
|
calculated buffer size in RPC-SPOOLSS.
Guenther
|
|
tdb transactions were designed to be robust against the machine
powering off, but interestingly were never designed to handle the case
where an administrator kill -9's a process during commit. Because
recovery is only done on tdb_open, processes with the tdb already
mapped will simply use it despite it being corrupt and needing
recovery.
The solution to this is to check for recovery every time we grab a
data lock: we could have gained the lock because a process just died.
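Roughly, the idea looks like the sketch below; tdb_lock_list(), tdb_unlock_list(), tdb_needs_recovery() and tdb_lock_and_recover() are assumed names here, not necessarily the exact internal API.

    #include <fcntl.h>

    /* Sketch only: struct tdb_context and the helpers below stand in for
     * tdb's private internals. */
    static int chainlock_checking_recovery(struct tdb_context *tdb, int list)
    {
            if (tdb_lock_list(tdb, list, F_WRLCK) != 0) {
                    return -1;
            }
            /* We may have been granted this lock because its previous
             * holder died mid-commit: check for a pending recovery area. */
            if (tdb_needs_recovery(tdb)) {
                    tdb_unlock_list(tdb, list, F_WRLCK);
                    if (tdb_lock_and_recover(tdb) != 0) {
                            return -1;
                    }
                    /* Retry now that the file is consistent again. */
                    return tdb_lock_list(tdb, list, F_WRLCK);
            }
            return 0;
    }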
This has no measurable cost: here is the time for tdbtorture -s 0 -n 1
-l 10000:
Before:
2.75 2.50 2.81 3.19 2.91 2.53 2.72 2.50 2.78 2.77 = Avg 2.75
After:
2.81 2.57 3.42 2.49 3.02 2.49 2.84 2.48 2.80 2.43 = Avg 2.74
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
|
|
To test the case of death of a process during transaction commit, add
a -k (kill random) option to tdbtorture. The easiest way to do this
is to make every worker a child (unless there's only one child), which
is why this patch is bigger than you might expect.
Using -k without -t (always transactions) you expect corruption, though
it doesn't happen every time. With -t, we currently get corruption but
the next patch fixes that.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
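The mechanics of -k are roughly the following stand-alone sketch (illustrative, not the tdbtorture code; run_worker() is a placeholder for the per-child workload): fork the workers, have the parent SIGKILL one of them at a random moment, and then check whether the database survived.

    #include <signal.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NWORKERS 4

    extern void run_worker(void);   /* placeholder: hammers the tdb, e.g. in transactions */

    static void torture_with_random_kill(void)
    {
            pid_t pids[NWORKERS];
            int i;

            for (i = 0; i < NWORKERS; i++) {
                    pids[i] = fork();
                    if (pids[i] == 0) {
                            run_worker();       /* child: never returns */
                            _exit(0);
                    }
            }

            /* Let the workers run a little, then kill one of them without
             * warning, simulating "kill -9 during commit". */
            usleep(rand() % 500000);
            kill(pids[rand() % NWORKERS], SIGKILL);

            for (i = 0; i < NWORKERS; i++) {
                    waitpid(pids[i], NULL, 0);
            }
            /* The parent would then reopen the tdb and run tdb_check()-style
             * consistency checks to detect corruption. */
    }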
|
|
The current recovery code truncates the tdb file on recovery. This is
fine if recovery is only done on first open, but is a really bad idea
as we move to allowing recovery on "live" databases.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Now that the transaction code uses the standard allrecord lock, that already
stops us from trying to grab any per-record locks, so we don't need special
no-op lock ops for transactions.
This is a nice simplification: if you see brlock, you know it's really
going to grab a lock.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
tdb_release_extra_locks() is too general: it carefully skips over the
transaction lock, even though the only caller then drops it. Change
this, and rename it to show it's clearly transaction-specific.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Now that the transaction allrecord lock is the standard one, and thus is cleaned
up in tdb_release_extra_locks(), _tdb_transaction_cancel() doesn't need to
know what type it is.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Centralize locking of all chains of the tdb; rename _tdb_lockall to
tdb_allrecord_lock and _tdb_unlockall to tdb_allrecord_unlock, and
tdb_brlock_upgrade to tdb_allrecord_upgrade.
Then we use this in the transaction code. Unfortunately, if the transaction
code records that it has grabbed the allrecord lock read-only, write locks
will fail, so we treat this upgradable lock as a write lock, and mark it
as upgradable using the otherwise-unused offset field.
One subtlety: now the transaction code is using the allrecord_lock, the
tdb_release_extra_locks() function drops it for us, so we no longer need
to do it manually in _tdb_transaction_cancel.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
Records themselves get (read) locked by the traversal code against delete.
Interestingly, this locking isn't done when the allrecord lock has been
taken, though the allrecord lock until recently didn't cover the actual
records (it now goes to end of file).
The write record lock, grabbed by the delete code, is not suppressed
by the allrecord lock. This is now bad: it causes us to punch a hole
in the allrecord lock when we release the write record lock. Make this
consistent: *no* record locks of any kind when the allrecord lock is
taken.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
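The consistent rule can be sketched as below; the allrecord_lock.count field, tdb_off_t, and the tdb_brlock_record() helper are assumptions about the internals rather than quoted source.

    /* Sketch only: every record-lock entry point returns early when the
     * allrecord lock is held, for read and write locks alike. */
    static int lock_record(struct tdb_context *tdb, tdb_off_t off, int ltype)
    {
            if (tdb->allrecord_lock.count != 0) {
                    /* The whole file is already covered; taking (and later
                     * releasing) a per-record lock would punch a hole in
                     * the allrecord lock, so do nothing at all. */
                    return 0;
            }
            return tdb_brlock_record(tdb, off, ltype);  /* assumed helper */
    }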
|
|
We were previously inconsistent with our "global" lock: the
transaction code grabbed it from FREELIST_TOP to end of file, and the
rest of the code grabbed it from FREELIST_TOP to end of the hash
chains. Change it to always grab to end of file for simplicity and
so we can merge the two.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
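With fcntl locks a length of zero means "to end of file", so grabbing the merged lock from FREELIST_TOP through EOF can be expressed roughly as in this sketch (the real code goes through tdb's brlock wrapper rather than raw fcntl):

    #include <fcntl.h>
    #include <string.h>

    /* Sketch: lock everything from FREELIST_TOP (just past the header) to
     * the end of the file.  l_len == 0 means "through EOF", so the lock
     * automatically covers records appended later as well. */
    static int lock_header_to_eof(int fd, off_t freelist_top, int rw_type)
    {
            struct flock fl;

            memset(&fl, 0, sizeof(fl));
            fl.l_type = rw_type;          /* F_RDLCK or F_WRLCK */
            fl.l_whence = SEEK_SET;
            fl.l_start = freelist_top;
            fl.l_len = 0;                 /* zero length: lock to end of file */

            return fcntl(fd, F_SETLKW, &fl);
    }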
|
|
This was redundant before this patch series: it mirrored num_lockrecs
exactly. It still does.
Also, skip useless branch when locks == 1: unconditional assignment is
cheaper anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
|
|
This is pure overhead, but it centralizes the locking. Realloc (esp. as
most implementations are lazy) is fast compared to the fcntl anyway.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
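The bookkeeping can be sketched as below (struct and field names are illustrative, not the exact tdb ones): every acquired lock is appended to a realloc'd array, so unlock, nesting, and end-of-transaction release are all handled in one place.

    #include <stdlib.h>

    /* Illustrative only: field and struct names do not match tdb exactly. */
    struct lockrec {
            int list;           /* which hash chain (or freelist) */
            int ltype;          /* F_RDLCK or F_WRLCK */
            int count;          /* nesting count */
    };

    struct lock_state {
            struct lockrec *lockrecs;
            int num_lockrecs;
    };

    static int remember_lock(struct lock_state *s, int list, int ltype)
    {
            struct lockrec *new_recs;

            /* realloc is cheap next to the fcntl() we just paid for,
             * especially as most implementations over-allocate lazily. */
            new_recs = realloc(s->lockrecs,
                               sizeof(*new_recs) * (s->num_lockrecs + 1));
            if (new_recs == NULL) {
                    return -1;
            }
            s->lockrecs = new_recs;
            s->lockrecs[s->num_lockrecs].list = list;
            s->lockrecs[s->num_lockrecs].ltype = ltype;
            s->lockrecs[s->num_lockrecs].count = 1;
            s->num_lockrecs++;
            return 0;
    }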
|