|
py_talloc_steal() was implemented as a macro which evaluated its 2nd
argument twice. It was often called via a macro with a 2nd argument
that was a function call, for example an allocation in
py_talloc_new(). This meant it allocated memory twice and leaked one
of the allocations.
This re-implements py_talloc_steal() as a function, so that it only
does the allocation once.
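A minimal self-contained sketch of the hazard (steal_macro/steal_func are illustrative names, not the real pytalloc code): the macro expands its second argument twice, so an allocating call leaks, while the function evaluates it exactly once.

#include <stdio.h>
#include <talloc.h>

/* Hypothetical macro version: "ptr" appears twice in the expansion. */
#define steal_macro(ctx, ptr) \
        (printf("steals %p\n", (void *)(ptr)), talloc_steal((ctx), (ptr)))

/* Function version: the argument is evaluated exactly once. */
static void *steal_func(TALLOC_CTX *ctx, void *ptr)
{
        printf("steals %p\n", ptr);
        return talloc_steal(ctx, ptr);
}

int main(void)
{
        TALLOC_CTX *ctx = talloc_new(NULL);

        /* talloc_zero() runs twice here: one chunk is stolen, one leaks. */
        steal_macro(ctx, talloc_zero(NULL, int));

        /* talloc_zero() runs once; nothing leaks. */
        steal_func(ctx, talloc_zero(NULL, int));

        talloc_free(ctx);
        return 0;
}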
|
|
|
|
Thanks to Michael Brown for pointing this out.
|
|
Thanks to Jelmer for pointing this out.
|
|
used in the tdb manpages.
|
|
If the test setup fails, we still need to format the test result for the
UI. At least in the subunit case, the format doesn't specify what to do
here, so we fail every test manually with the setup failure message.
|
|
|
|
|
|
pointers
(talloc.c)
...
> static inline int _talloc_free_internal(void *ptr, const char *location)
> {
>         struct talloc_chunk *tc;
>
>         if (unlikely(ptr == NULL)) {
>                 return -1;
>         }
>
>         tc = talloc_chunk_from_ptr(ptr);
...
Obviously this had never been documented before.
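For callers, the behaviour being documented boils down to this (a small usage sketch, not part of the commit):

#include <assert.h>
#include <talloc.h>

int main(void)
{
        int *p = talloc_zero(NULL, int);

        /* Freeing a valid pointer succeeds... */
        assert(talloc_free(p) == 0);

        /* ...while freeing NULL is a no-op that returns -1. */
        assert(talloc_free(NULL) == -1);
        return 0;
}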
|
|
We saw tdb_lockall() take 71 seconds under heavy load; this is because Linux
(at least) doesn't prevent new small locks being obtained while we're waiting
for a big lock.
The workaround is to do divide and conquer using non-blocking chainlocks: if
we get down to a single chain we block. Using a simple test program where
children did "hold lock for 100ms, sleep for 1 second" the time to do
tdb_lockall() dropped significantly. There are ln(hashsize) locks taken in
the contended case, but that's slow anyway.
More analysis is given in my blog at http://rusty.ozlabs.org/?p=120
This may also help transactions, though in that case it's the initial
read lock which uses this gradual locking routine; the update-to-write-lock
code is separate and still tries to update in one go.
Even though the ABI doesn't change, the minor version is bumped so the
behavior change can be easily detected.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
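A rough sketch of the divide-and-conquer shape (the helpers here are stand-ins; the real code in lib/tdb works with fcntl byte-range locks on hash chains):

#include <stdbool.h>
#include <stdio.h>

/* Stand-in: try to lock chains [start, start+len) without blocking. */
static bool lock_range_nonblock(unsigned int start, unsigned int len)
{
        return false;   /* pretend the range is always contended */
}

/* Stand-in: block until a single chain is locked. */
static void lock_chain_blocking(unsigned int chain)
{
        printf("blocking on chain %u\n", chain);
}

/*
 * Try the whole range without blocking; on contention split it in half
 * and recurse, and only block once we are down to a single chain.
 */
static void gradual_lock(unsigned int start, unsigned int len)
{
        if (len == 0 || lock_range_nonblock(start, len)) {
                return;
        }
        if (len == 1) {
                lock_chain_blocking(start);
                return;
        }
        gradual_lock(start, len / 2);
        gradual_lock(start + len / 2, len - len / 2);
}

int main(void)
{
        gradual_lock(0, 8);     /* e.g. a tiny table with 8 chains */
        return 0;
}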
|
|
This is required for Solaris, which needs to link in librt to make use of
fdatasync().
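As a minimal illustration (file name arbitrary), a program like the following calls fdatasync() and therefore needs -lrt on the link line on Solaris, while on Linux the symbol comes from libc:

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
        int fd = open("example.dat", O_RDWR | O_CREAT, 0600);

        if (fd == -1) {
                return 1;
        }
        /* On Solaris this symbol lives in librt. */
        if (fdatasync(fd) != 0) {
                close(fd);
                return 1;
        }
        close(fd);
        return 0;
}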
|
|
tdb_lockall() uses F_WRLCK internally, which doesn't work on an fd opened with O_RDONLY. Use tdb_lockall_read() instead.
Jeremy.
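A usage sketch of the read-only pattern described above (the file name is arbitrary):

#include <fcntl.h>
#include <tdb.h>

int main(void)
{
        /* Opened O_RDONLY, so F_WRLCK-based tdb_lockall() cannot work. */
        struct tdb_context *tdb =
                tdb_open("example.tdb", 0, TDB_DEFAULT, O_RDONLY, 0);

        if (tdb == NULL) {
                return 1;
        }
        if (tdb_lockall_read(tdb) == 0) {
                /* ... read or traverse records under the shared lock ... */
                tdb_unlockall_read(tdb);
        }
        tdb_close(tdb);
        return 0;
}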
|
|
Thanks to Brad Hards for this patch.
|
|
Although the build was OK on my workstation, it appears that on the build
server it was not, because the include path was not correct.
|
|
This is something that was not picked up during the migration to waf.
|
|
This converts all callers that use the Samba4 loadparm lp_ calling
convention to use the lpcfg_ prefix.
Signed-off-by: Andrew Bartlett <abartlet@samba.org>
|
|
|
|
Guenther
|
|
|
|
|
|
|
|
Using "#!/usr/bin/env python" is more portable. It still isn't ideal
though, as we should really use the python path found at configure
time. We do that in many places already, but some don't.
Signed-off-by: Andrew Bartlett <abartlet@samba.org>
|
|
|
|
|
|
|
|
|
|
appears to cause unresolved symbols at the moment.
|
|
Guenther
|
|
|
|
But such numbers can be forced with idr_get_new_above(), and that
reveals two bugs:
1) Crash in sub_remove() caused by pa array being too short.
2) Shift by more than 32 in _idr_find(), which is undefined, causing
the "outside the current tree" optimization to misfire and return NULL.
|
|
When doing
fd1 = tevent_add_fd(ev, ev, 2, 0, NULL, NULL);
fd2 = tevent_add_fd(ev, ev, 3, 0, NULL, NULL);
TALLOC_FREE(fd2);
fd2 = tevent_add_fd(ev, ev, 1, 0, NULL, NULL);
we end up with select_ev->maxfd==1. This is wrong.
An alternative fix might be to make select_ev->maxfd an unsigned int and make
EVENT_INVALID_MAXFD==UINT_MAX. But in theory we might end up with an fd of
UINT_MAX.
std_event_add_fd() contains exactly the same piece of code, so I'm directly
pushing it.
Volker
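A generic illustration of the underlying issue (the struct is a stand-in, not tevent's real select backend): once the largest fd goes away, the cached maximum has to be recomputed from the remaining events rather than taken from whatever fd happens to be added next.

#include <stdio.h>

struct fd_event {
        struct fd_event *next;
        int fd;
};

/* Recompute the maximum fd over all remaining events. */
static int recalc_maxfd(const struct fd_event *list)
{
        int maxfd = -1;         /* stand-in for "invalid maxfd" */

        for (; list != NULL; list = list->next) {
                if (list->fd > maxfd) {
                        maxfd = list->fd;
                }
        }
        return maxfd;
}

int main(void)
{
        /* fds 2 and 1 remain after fd 3 was freed: maxfd must be 2, not 1. */
        struct fd_event e1 = { NULL, 2 };
        struct fd_event e2 = { &e1, 1 };

        printf("maxfd=%d\n", recalc_maxfd(&e2));
        return 0;
}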
|
|
libedit on Mac OS X 10.5 does not have the rl_completion_t typedef,
but uses an internal typedef named CPPFunction.
Signed-off-by: Günther Deschner <gd@samba.org>
|
|
Guenther
|
|
|
|
|
|
|
|
|
|
-samba4 suffix for libraries that are bundled.
|
|
Thanks to Joachim Schmitz <schmitz@hp.com>. This fixes #7460.
|
|
This should fix bugs #7319 and #7320.
|
|
|
|
This needs to be with the krb5.h check.
|
|
|
|
|
|
reverse (as it is now).
It makes no sense to talloc off the null context, then talloc steal
into the required context - just talloc off the correct context, and
change data_blob() to pass in the null context to data_blob_talloc().
Jeremy.
Signed-off-by: Günther Deschner <gd@samba.org>
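The same principle shown with plain talloc calls (data_blob_talloc() itself is Samba-internal, so this sketch uses talloc_memdup() to contrast allocate-then-steal with allocating on the right context directly):

#include <talloc.h>

int main(void)
{
        TALLOC_CTX *mem_ctx = talloc_new(NULL);
        const char src[] = "payload";

        /* Roundabout: allocate off the NULL context, then steal it. */
        char *blob1 = talloc_memdup(NULL, src, sizeof(src));
        talloc_steal(mem_ctx, blob1);

        /* Direct: allocate off the context that should own the data. */
        char *blob2 = talloc_memdup(mem_ctx, src, sizeof(src));

        (void)blob2;
        talloc_free(mem_ctx);
        return 0;
}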
|
|
|
|
Guenther
|
|
metze
|
|
Signed-off-by: Stefan Metzmacher <metze@samba.org>
|
|
metze
|