|
|
|
This is needed in all of the library, not only in the dbwrap_open part.
|
|
sys_acl_to_text()
This makes it possible to print the entire string again.
Andrew Bartlett
Autobuild-User: Andrew Bartlett <abartlet@samba.org>
Autobuild-Date: Wed May 9 06:07:06 CEST 2012 on sn-devel-104
|
|
|
|
|
|
metze
|
|
Signed-off-by: Michael Adam <obnox@samba.org>
|
|
This required that the lower-level cache store a UID/GID and a type, and that
we operate on struct unixid rather than just a uid/gid.
ID_TYPE_BOTH is then handled as a positive mapping for both
a UID and a GID value. Wrapper functions are provided so that callers are not
changed in this patch.
Andrew Bartlett
Signed-off-by: Michael Adam <obnox@samba.org>
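For illustration only, a minimal sketch of the idea rather than the Samba sources:
struct unixid pairs the numeric id with its type, and a wrapper keeps the old
uid-only interface working by also accepting ID_TYPE_BOTH. find_unixid_cached()
and find_uid_cached() are hypothetical stand-ins for the real cache functions.

#include <stdbool.h>
#include <stdint.h>

enum id_type {
        ID_TYPE_NOT_SPECIFIED,
        ID_TYPE_UID,
        ID_TYPE_GID,
        ID_TYPE_BOTH
};

struct unixid {
        uint32_t id;            /* the numeric UID or GID */
        enum id_type type;      /* what kind of id this is */
};

/* hypothetical lower-level cache lookup returning a typed id */
bool find_unixid_cached(const char *sid, struct unixid *xid);

/* wrapper preserving the old uid-only interface for existing callers */
static bool find_uid_cached(const char *sid, uint32_t *uid)
{
        struct unixid xid;

        if (!find_unixid_cached(sid, &xid)) {
                return false;
        }
        /* ID_TYPE_BOTH counts as a positive mapping for a UID as well */
        if ((xid.type != ID_TYPE_UID) && (xid.type != ID_TYPE_BOTH)) {
                return false;
        }
        *uid = xid.id;
        return true;
}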
|
|
|
|
This safely allocates the task_id so that when we have multiple event
contexts, they can each have their own messaging context, particularly
for the imessaging subsystem under source4.
Andrew Bartlett
|
|
|
|
Only non-gcc compilers seem to notice this as an error.
Andrew Bartlett
Autobuild-User: Andrew Bartlett <abartlet@samba.org>
Autobuild-Date: Mon Apr 23 05:58:52 CEST 2012 on sn-devel-104
|
|
Autobuild-User: Volker Lendecke <vl@samba.org>
Autobuild-Date: Sat Apr 21 13:46:00 CEST 2012 on sn-devel-104
|
|
This should fix one of the recent flaky tests
|
|
This also removes the ID_CACHE_FLUSH message.
|
|
Autobuild-User: Volker Lendecke <vl@samba.org>
Autobuild-Date: Fri Apr 20 17:05:52 CEST 2012 on sn-devel-104
|
|
This simplifies the g_lock implementation. The new implementation tries to
acquire a lock. If that fails due to a lock conflict, wait for the g_lock
record to change. Upon change, just try again. The old logic had to cope with
pending records and an ugly hack into ctdb itself. As a bonus, we now get a
really clean async g_lock_lock_send/recv that can asynchronously wait for a
global lock. This would have been almost impossible to do without the
dbwrap_record_watch infrastructure.
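A compressed sketch of the retry pattern described above, not the real g_lock
code: try_g_lock() and wait_for_record_change() are hypothetical helpers, and
the real implementation waits asynchronously via dbwrap_record_watch and tevent
rather than blocking.

#include <stdbool.h>
#include <errno.h>

bool try_g_lock(const char *name);            /* hypothetical: one lock attempt */
int wait_for_record_change(const char *name); /* hypothetical: block until modified */

static int g_lock_blocking(const char *name)
{
        for (;;) {
                if (try_g_lock(name)) {
                        return 0;       /* lock acquired */
                }
                /* conflict: wait for the g_lock record to change, then retry */
                if (wait_for_record_change(name) != 0) {
                        return EIO;
                }
        }
}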
|
|
With this API you can asynchronously wait for a record to be modified
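A rough caller sketch of the send/recv pattern this enables. The
dbwrap_record_watch_send/recv prototypes used here are assumptions from memory
and the usual Samba headers are taken as given, so treat this as an outline
rather than verified code.

static void wait_for_change_done(struct tevent_req *subreq);

static void wait_for_change(TALLOC_CTX *mem_ctx,
                            struct tevent_context *ev,
                            struct messaging_context *msg,
                            struct db_record *rec)
{
        struct tevent_req *subreq;

        /* assumed signature: watch a locked record for modification */
        subreq = dbwrap_record_watch_send(mem_ctx, ev, rec, msg);
        if (subreq == NULL) {
                return;                 /* out of memory */
        }
        tevent_req_set_callback(subreq, wait_for_change_done, NULL);
}

static void wait_for_change_done(struct tevent_req *subreq)
{
        struct db_record *rec = NULL;
        NTSTATUS status;

        /* assumed signature: returns once somebody modified the record */
        status = dbwrap_record_watch_recv(subreq, NULL, &rec);
        TALLOC_FREE(subreq);
        if (!NT_STATUS_IS_OK(status)) {
                return;
        }
        /* the record changed: re-evaluate whatever we were waiting for */
}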
|
|
This is a per-db function that is called whenever some record is modified
|
|
|
|
This returns a blob uniquely identifying the database
|
|
|
|
Autobuild-User: Volker Lendecke <vl@samba.org>
Autobuild-Date: Thu Apr 19 19:13:45 CEST 2012 on sn-devel-104
|
|
|
|
|
|
We can assume that the rbt dbs are around
|
|
Not sure this will actually please Coverity, but it fixes a severe bug
|
|
Autobuild-User: Andrew Bartlett <abartlet@samba.org>
Autobuild-Date: Thu Apr 19 14:15:42 CEST 2012 on sn-devel-104
|
|
|
|
|
|
|
|
In this case, the blob is already in memory, so it is easier to return the full
blob to the caller, and let the caller decide if some interface restriction
stops the full blob from being passed all the way up the stack.
This allows us to quickly write a python wrapper for this xattr storage
mechanism.
Andrew Bartlett
|
|
|
|
|
|
|
|
This will help with making dbwrap available as a top level library.
Andrew Bartlett
|
|
This will allow db_open_tdb() to be called from common code, which may
already have a loadparm context loaded.
It also slowly moves the lp_ctx up the stack, as required to remove
the library loop between smbconf and the registry.
Andrew Bartlett
|
|
This is in preparation for calling dbwrap from common code, where we may not
have a stackframe set up.
Andrew Bartlett
|
|
This will avoid the need for some #ifdefs
|
|
This might make some #ifdef CLUSTER_SUPPORT unnecessary in the future
|
|
This will help the notify torture tests: a tevent barrier can be waited on with
tevent_barrier_wait_send/recv. The barrier is initialized with the number of
requests that it will accept waiting. When that number is reached, all of those
requests are released and their callbacks are called. The barrier is then
free for re-use.
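A usage sketch only, with the tevent_barrier prototypes assumed from the
description above (create a barrier for N waiters, each waiter does a
wait_send, all callbacks fire once the Nth arrives); the real prototypes may
differ.

static void barrier_released(struct tevent_req *subreq);

static void start_waiter(TALLOC_CTX *mem_ctx,
                         struct tevent_context *ev,
                         struct tevent_barrier *b)
{
        struct tevent_req *subreq;

        /* assumed prototype: queue this request on the barrier */
        subreq = tevent_barrier_wait_send(mem_ctx, ev, b);
        if (subreq == NULL) {
                return;
        }
        tevent_req_set_callback(subreq, barrier_released, NULL);
}

static void barrier_released(struct tevent_req *subreq)
{
        /* assumed prototype: completes once all expected waiters have arrived */
        int ret = tevent_barrier_wait_recv(subreq);

        TALLOC_FREE(subreq);
        if (ret != 0) {
                return;
        }
        /* all waiters were released; the barrier can be re-used */
}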
|
|
Send a raw blob without the messaging.idl wrap
|
|
This is a tevent_based variant of messaging_register
|
|
This is a void* that represents a signal handler attached to some
custom tevent_context. This is necessary to make the tdb-based
messaging infrastructure do its work when we are sitting in
tevent_loop_once for an event context that is not the main one in the
messaging context.
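To illustrate the mechanism, a sketch using the real tevent_add_signal() API:
the tdb messaging backend wakes receivers with a signal (classically SIGUSR1),
so an extra event context needs its own signal handler to notice pending
messages while it is the one being looped. The handler body is illustrative
only.

#include <signal.h>

static void msg_wakeup_handler(struct tevent_context *ev,
                               struct tevent_signal *se,
                               int signum, int count,
                               void *siginfo, void *private_data)
{
        /* illustrative only: check the message tdb for pending messages here */
}

static struct tevent_signal *register_msg_wakeup(struct tevent_context *ev,
                                                 TALLOC_CTX *mem_ctx,
                                                 void *private_data)
{
        return tevent_add_signal(ev, mem_ctx, SIGUSR1, 0,
                                 msg_wakeup_handler, private_data);
}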
|
|
The existing one is not async at all.
|
|
This is designed to spread the load on individual ctdb records and to allow
upper layers to implement backoff. In the ctdb case, do not fetch the record if
a local lock is already taken. If we are not dmaster, make at most one migration
attempt.
For the tdb case, this is a nonblocking fetch_locked: if someone else holds the
lock, give up.
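A sketch of how a caller might use this, assuming a helper shaped roughly like
dbwrap_try_fetch_locked(db, mem_ctx, key): the same as fetch_locked, except
that it returns NULL instead of blocking when the lock is contended, so the
caller can back off.

static bool store_if_uncontended(struct db_context *db, TDB_DATA key,
                                 TDB_DATA new_value)
{
        struct db_record *rec;
        NTSTATUS status;

        rec = dbwrap_try_fetch_locked(db, talloc_tos(), key);
        if (rec == NULL) {
                /* somebody else holds the lock: let the upper layer back off */
                return false;
        }
        status = dbwrap_record_store(rec, new_value, 0);
        TALLOC_FREE(rec);
        return NT_STATUS_IS_OK(status);
}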
|
|
This is a caching layer for the notify database and potentially for the brlock
database. It caches the parse_record operation as long as the underlying seqnum
does not change.
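A minimal sketch of the seqnum-based invalidation described here, not the real
caching code: the last parse result is reused until the database sequence
number changes. parse_and_store() is a hypothetical stand-in for the caller's
parse_record callback, and dbwrap_get_seqnum() is assumed to expose the
underlying tdb sequence number.

struct parse_cache {
        int seqnum;             /* seqnum at the time of the last parse */
        bool valid;
        void *parsed;           /* whatever the parse function produced */
};

void *parse_and_store(struct db_context *db, TDB_DATA key); /* hypothetical */

static void *cached_parse(struct db_context *db, TDB_DATA key,
                          struct parse_cache *cache)
{
        int seqnum = dbwrap_get_seqnum(db);

        if (cache->valid && (seqnum == cache->seqnum)) {
                return cache->parsed;   /* database unchanged since last parse */
        }
        cache->parsed = parse_and_store(db, key);
        cache->seqnum = seqnum;
        cache->valid = (cache->parsed != NULL);
        return cache->parsed;
}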
|
|
|
|
|
|
All callers had that fallback
|