| Summary: | daemon can crash if --disable-userdb-cache is used | | |
|---|---|---|---|
| Product: | dbus | Reporter: | Per Inge Mathisen <per.mathisen> |
| Component: | core | Assignee: | Havoc Pennington <hp> |
| Status: | RESOLVED FIXED | QA Contact: | John (J5) Palmieri <johnp> |
| Severity: | major | | |
| Priority: | low | CC: | chengwei.yang.cn, scott, walters |
| Version: | unspecified | Keywords: | patch |
| Hardware: | x86-64 (AMD64) | | |
| OS: | Linux (All) | | |
| Whiteboard: | review? | | |
Attachments:
- [PATCH] Do not update user db cache if build with "--disable-userdb-cache"
- [PATCH v2] Do not update user db cache if build with "--disable-userdb-cache"
Description
Per Inge Mathisen
2008-08-14 03:04:01 UTC
Hm, I think SIGPIPE is a normal thing to get here. You should do in gdb: handle SIGPIPE nostop pass

I finally managed to get the crash while gdb was tracing the dbus process. This time it was not a bogus SIGPIPE. Again, this happened while shutting down a host of processes running on dbus. I have saved the core file, so if you need anything else, let me know.

Backtrace:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 140407117903744 (LWP 11324)]
0x0000000000e1a012 in strcmp () from /lib64/libc.so.6
(gdb) bt full
#0  0x0000000000e1a012 in strcmp () from /lib64/libc.so.6
    mallstream = (FILE *) 0x0
    tr_old_memalign_hook = (void *(*)(size_t, size_t, const void *)) 0
    tr_old_malloc_hook = (void *(*)(size_t, const void *)) 0
    tr_old_realloc_hook = (void *(*)(void *, size_t, const void *)) 0
    lock = 0
    mallenv = "MALLOC_TRACE"
    malloc_trace_buffer = 0x0
    tr_old_free_hook = (void (*)(void *, const void *)) 0
    mallwatch = (void *) 0x0
#1  0x00007fb31467abee in find_generic_function (table=0x7fb3155893f0, key=0x7fb3155973b0, idx=10, compare_func=0xe1a010 <strcmp>, create_if_not_found=1, bucket=0x0, preallocated=0x7fb315607828) at dbus-hash.c:918
    entry = (DBusHashEntry *) 0x7fb3156077f8
#2  0x00007fb31467aea6 in find_string_function (table=0x7fb3155893f0, key=0x7fb3155973b0, create_if_not_found=1, bucket=0x0, preallocated=0x7fb315607828) at dbus-hash.c:952
    No locals.
#3  0x00007fb31467a8b1 in _dbus_hash_table_insert_string_preallocated (table=0x7fb3155973b0, preallocated=0x7fb315607828, key=0x7fb3155973b0 "developer", value=0x7fb3155ab350) at dbus-hash.c:1680
    entry = <value optimized out>
#4  0x00007fb31467aad9 in _dbus_hash_table_insert_string (table=0x7fb3155893f0, key=0x7fb3155973b0 "developer", value=0x7fb3155ab350) at dbus-hash.c:1443
    preallocated = (DBusPreallocatedHash *) 0xffffff64
#5  0x00007fb31468171e in _dbus_user_database_lookup (db=0x7fb315589260, uid=500, username=0x7fb3155e9fb0, error=0x0) at dbus-userdb.c:208
    info = (DBusUserInfo *) 0x7fb3155ab350
#6  0x00007fb3146817e3 in _dbus_user_database_get_username (db=0x7fb3155973b0, username=0xa, info=0x7fff1c69b610, error=0xffffffff) at dbus-userdb.c:661
    No locals.
#7  0x00007fb314681a2d in _dbus_credentials_add_from_user (credentials=0x7fb3155eb8d0, username=0x7fb3155e9fb0) at dbus-userdb.c:507
    db = (DBusUserDatabase *) 0xffffff64
    info = (const DBusUserInfo *) 0x7fb3155e9f60
#8  0x00007fb3146875fe in handle_server_data_external_mech (auth=0x7fb3155e9f60, data=0x7fff1c69b650) at dbus-auth.c:1065
    No locals.
#9  0x00007fb3146867cb in process_data (auth=0x7fb3155e9f60, args=0x7fff1c69b6a0, data_func=0x7fb3146874f0 <handle_server_data_external_mech>) at dbus-auth.c:1606
    end = 6
    decoded = {dummy1 = 0x7fb315595160, dummy2 = 3, dummy3 = 16, dummy4 = 2147483639, dummy5 = 0, dummy6 = 0, dummy7 = 0, dummy8 = 0}
#10 0x00007fb3146869eb in handle_server_state_waiting_for_auth (auth=0x7fb3155e9f60, command=<value optimized out>, args=0x7fff1c69b720) at dbus-auth.c:1658
    i = 9
    mech = {dummy1 = 0x7fb3155e4750, dummy2 = 8, dummy3 = 16, dummy4 = 2147483639, dummy5 = 0, dummy6 = 0, dummy7 = 0, dummy8 = 0}
    hex_response = {dummy1 = 0x7fb3155abc20, dummy2 = 6, dummy3 = 16, dummy4 = 2147483639, dummy5 = 0, dummy6 = 0, dummy7 = 0, dummy8 = 0}
#11 0x00007fb314685f82 in _dbus_auth_do_work (auth=0x7fb3155e9f60) at dbus-auth.c:2100
    No locals.
#12 0x00007fb31467896f in _dbus_transport_get_is_authenticated (transport=0x7fb3155ecb60) at dbus-transport.c:688
    allow = <value optimized out>
    auth_identity = <value optimized out>
#13 0x00007fb314679318 in do_authentication (transport=0x7fb3155ecb60, do_reading=1, do_writing=0, auth_completed=0x7fff1c69b85c) at dbus-transport-socket.c:422
    oom = <value optimized out>
    orig_auth_state = <value optimized out>
#14 0x00007fb314679bbb in socket_handle_watch (transport=0x7fb3155973b0, watch=<value optimized out>, flags=<value optimized out>) at dbus-transport-socket.c:837
    auth_finished = <value optimized out>
#15 0x00007fb31467888d in _dbus_transport_handle_watch (transport=0x7fb3155ecb60, watch=0x7fb3155ec130, condition=1) at dbus-transport.c:851
    retval = 1
#16 0x00007fb31466ed88 in _dbus_connection_handle_watch (watch=0x7fb3155ec130, condition=1, data=0x7fb3155ec920) at dbus-connection.c:1420
    connection = <value optimized out>
    retval = <value optimized out>
    status = <value optimized out>
#17 0x00007fb314679f71 in dbus_watch_handle (watch=0x7fb3155ec130, flags=1) at dbus-watch.c:663
    __FUNCTION__ = "dbus_watch_handle"
#18 0x00007fb314682382 in _dbus_loop_iterate (loop=0x7fb3155856b0, block=1) at dbus-mainloop.c:810
    wcb = (WatchCallback *) 0x7fb31559a8c0
    condition = 1
    retval = 0
    fds = (DBusPollFD *) 0x7fff1c69bb10
    stack_fds = {{fd = 3, events = 1, revents = 0}, {fd = 9, events = 1, revents = 0}, {fd = 11, events = 1, revents = 0}, {fd = 13, events = 1, revents = 0}, {fd = 12, events = 1, revents = 0}, {fd = 14, events = 1, revents = 0}, {fd = 17, events = 1, revents = 0}, {fd = 19, events = 1, revents = 0}, {fd = 16, events = 1, revents = 0}, {fd = 18, events = 1, revents = 0}, {fd = 22, events = 1, revents = 0}, {fd = 21, events = 1, revents = 0}, {fd = 20, events = 1, revents = 0}, {fd = 26, events = 1, revents = 0}, {fd = 15, events = 1, revents = 1}, {fd = 25, events = 1, revents = 0}, {fd = 29, events = 1, revents = 0}, {fd = 31, events = 1, revents = 25}, {fd = 31, events = 1, revents = 0}, {fd = 32, events = 1, revents = 17}, {fd = 32, events = 1, revents = 0}, {fd = 32, events = 1, revents = 0}, {fd = 32, events = 1, revents = 0}, {fd = 32, events = 1, revents = 0}, {fd = 33, events = 1, revents = 17}, {fd = 33, events = 1, revents = 0}, {fd = 40, events = 1, revents = 0}, {fd = 40, events = 1, revents = 0}, {fd = 40, events = 1, revents = 0}, {fd = 40, events = 1, revents = 0}, {fd = 40, events = 1, revents = 0}, {fd = 40, events = 1, revents = 0}, {fd = 40, events = 1, revents = 0}, {fd = 41, events = 1, revents = 17}, {fd = 0, events = 0, revents = 0}, {fd = 0, events = 0, revents = 0}, {fd = 0, events = 0, revents = 0}, {fd = 14494885, events = 0, revents = 0}, {fd = 342275856, events = 32691, revents = 0}, {fd = 342377564, events = 32691, revents = 0}, {fd = 342275856, events = 32691, revents = 0}, {fd = 0, events = 0, revents = 0} <repeats 16 times>, {fd = 0, events = 32691, revents = 0}, {fd = 344591216, events = 32691, revents = 0}, {fd = 342342956, events = 32691, revents = 0}, {fd = 0, events = 0, revents = 0}, {fd = 0, events = 0, revents = 0}, {fd = 0, events = 0, revents = 0}, {fd = 0, events = 0, revents = 0}}
    n_fds = 15
    watches_for_fds = (WatchCallback **) 0x7fff1c69b910
    stack_watches_for_fds = {0x7fb31558bc50, 0x7fb315590070, 0x7fb315590cf0, 0x7fb315594860, 0x7fb315591670, 0x7fb3155951a0, 0x7fb31559d440, 0x7fb31559f7c0, 0x7fb31559eb20, 0x7fb3155a5ea0, 0x7fb3155aa950, 0x7fb3155a9c10, 0x7fb3155a86e0, 0x7fb315598ff0, 0x7fb31559a8c0, 0x7fb3155a74e0, 0x7fb3155ebaa0, 0x7fb3155eab30, 0x7fb3155eab30, 0x7fb3155bdeb0, 0x7fb3155bdeb0, 0x7fb3155bdeb0, 0x7fb3155bdeb0, 0x7fb3155bdeb0, 0x7fb315644120, 0x7fb315644120, 0x7fb3155f6910, 0x7fb3155f6910, 0x7fb3155f6910, 0x7fb3155f6910, 0x7fb3155f6910, 0x7fb3155f6910, 0x7fb3155f6910, 0x7fb31560ca80, 0xf375846, 0x7fff1c69bb78, 0x7fb314631bc0, 0x1193ef, 0x0, 0x7fb314631bc0, 0x5, 0x0, 0x7fff00000001, 0x118deb, 0x0, 0x7fb314631b30, 0x6, 0x9, 0x0, 0x1001191f1, 0x7fb314655358,
0x7fff1c69bbe0, 0x7fb314655000, 0x7fb314657885, 0x0, 0x7fb314631b78, 0x14631000, 0x7fb314657971, 0xdb1000, 0x7fb314656ad8, 0x500000000, 0x1000001f1, 0x7fb314655000, 0x7fb314655358}
    i = 14
    link = (DBusList *) 0x1
    n_ready = <value optimized out>
    initial_serial = <value optimized out>
    timeout = <value optimized out>
    oom_watch_pending = 0
    orig_depth = 1
#19 0x00007fb3146824cd in _dbus_loop_run (loop=0x7fb3155856b0) at dbus-mainloop.c:874
    our_exit_depth = 0
#20 0x00007fb31466bda1 in main (argc=7, argv=<value optimized out>) at main.c:464
    val = 4
    end = 1
    error = {name = 0x0, message = 0x0, dummy1 = 1, dummy2 = 0, dummy3 = 0, dummy4 = 0, dummy5 = 0, padding1 = 0x100562f70}
    config_file = {dummy1 = 0x7fb3155853b0, dummy2 = 24, dummy3 = 32, dummy4 = 2147483639, dummy5 = 0, dummy6 = 0, dummy7 = 1, dummy8 = 0}
    addr_fd = {dummy1 = 0x7fb3155854f0, dummy2 = 1, dummy3 = 16, dummy4 = 2147483639, dummy5 = 0, dummy6 = 0, dummy7 = 1, dummy8 = 0}
    pid_fd = {dummy1 = 0x7fb315585510, dummy2 = 1, dummy3 = 16, dummy4 = 2147483639, dummy5 = 0, dummy6 = 0, dummy7 = 1, dummy8 = 0}
    prev_arg = 0x7fff1c69dbe5 "--session"
    print_addr_pipe = {fd_or_handle = -1}
    print_pid_pipe = {fd_or_handle = -1}
    i = <value optimized out>
    print_address = 1
    print_pid = 1
    force_fork = 1
(gdb) info threads
  2 Thread 1096554832 (LWP 11326)  0x000000000077776c in recvfrom () from /lib64/libpthread.so.0
* 1 Thread 140407117903744 (LWP 11324)  0x0000000000e1a012 in strcmp () from /lib64/libc.so.6

Looks like a valid crash. Note this is against 1.2.2 (Fedora 8). Quick analysis:

* We had some changes in this function quite recently. There was a bug where the userdb cache was mistakenly disabled.
* There were also important fixes to dbus-sysdeps-unix.c:fill_user_info, which this depends on.

Per, do you have a minimized test case you can post for this bug?
If not - is the bug strongly related to the parent-child relationship, or should I be able to make a test case of just multiple processes connecting to the bus and getting killed?

I hit this problem in the version of dbus in Ubuntu hardy (1.1.20-1ubuntu3.2). I can reproduce it using valgrind or electric-fence.

With valgrind (note that the version in hardy doesn't work; I used the version from Ubuntu jaunty):

$ valgrind dbus-1.1.20/bus/dbus-daemon --session --print-address=10 10>/tmp/dbus-address

(and in another terminal:)

$ DBUS_SESSION_BUS_ADDRESS=$(cat /tmp/dbus-address) python -c "import dbus; dbus.SessionBus()"

valgrind produces the following warning twice:

==9193== Invalid read of size 1
==9193==    at 0x56A09EE: strcmp (mc_replace_strmem.c:337)
==9193==    by 0x135BCC: find_generic_function (dbus-hash.c:918)
==9193==    by 0x1357FE: _dbus_hash_table_insert_string_preallocated (dbus-hash.c:1680)
==9193==    by 0x135AD1: _dbus_hash_table_insert_string (dbus-hash.c:1443)
==9193==    by 0x13E6ED: _dbus_user_database_lookup (dbus-userdb.c:208)
==9193==    by 0x13E822: _dbus_user_database_get_username (dbus-userdb.c:659)
==9193==    by 0x13EB2F: _dbus_credentials_add_from_user (dbus-userdb.c:505)
==9193==    by 0x145570: handle_server_data_external_mech (dbus-auth.c:1066)
==9193==    by 0x14454C: process_data (dbus-auth.c:1607)
==9193==    by 0x1447AA: handle_server_state_waiting_for_auth (dbus-auth.c:1659)
==9193==    by 0x143BF4: _dbus_auth_do_work (dbus-auth.c:2101)
==9193==    by 0x132B45: _dbus_transport_get_is_authenticated (dbus-transport.c:687)
==9193== Address 0x77e6030 is 0 bytes inside a block of size 9 free'd
==9193==    at 0x569EDFA: free (vg_replace_malloc.c:323)
==9193==    by 0x138360: dbus_free (dbus-memory.c:644)
==9193==    by 0x13E31F: _dbus_user_info_free (dbus-userdb.c:78)
==9193==    by 0x13E3A5: _dbus_user_info_free_allocated (dbus-userdb.c:49)
==9193==    by 0x1357AF: _dbus_hash_table_insert_ulong (dbus-hash.c:1610)
==9193==    by 0x13E6CC: _dbus_user_database_lookup (dbus-userdb.c:201)
==9193==    by 0x13E822: _dbus_user_database_get_username (dbus-userdb.c:659)
==9193==    by 0x13EB2F: _dbus_credentials_add_from_user (dbus-userdb.c:505)
==9193==    by 0x145570: handle_server_data_external_mech (dbus-auth.c:1066)
==9193==    by 0x14454C: process_data (dbus-auth.c:1607)
==9193==    by 0x1447AA: handle_server_state_waiting_for_auth (dbus-auth.c:1659)
==9193==    by 0x143BF4: _dbus_auth_do_work (dbus-auth.c:2101)

With gdb:

$ gdb --args dbus-1.1.20/bus/dbus-daemon --session --print-address=12 12>/tmp/dbus-address
(gdb) set environment LD_PRELOAD /usr/lib/libefence.so.0.0
(gdb) run

(and in another terminal:)

$ DBUS_SESSION_BUS_ADDRESS=$(cat /tmp/dbus-address) python -c "import dbus; dbus.SessionBus()"

To fix, apply the patch in Bug 15588 and use --enable-userdb-cache.

*** Bug 15589 has been marked as a duplicate of this bug. ***

On Bug #15589, Scott wrote this useful-looking summary of the bug:

> D-Bus relies on the userdb cache being enabled to be able to hold on to user
> info structures (which don't have refcounting).
>
> Test case:
> 1) disable the userdb cache
> 2) start a minimal dbus server
> 3) connect to it _from the same username_
>
> The server will have already looked up its own username, and will be holding on
> to the info for that (to compare it against users coming in, I suspect).
>
> When the new connection comes in, it will look up the username of *that*, which
> will invalidate the existing entry in the hash table. Then when it compares
> the new info with the info of its own user, you'll be reading from free'd
> memory.
>
> This could be partially fixed by not putting new info entries into the hash
> table, but then there'd be a memory leak for every time you looked one up,
> since it won't be clear who owns it.

Dropping priority and severity since this now only happens in a non-default compile-time configuration (you have to disable the userdb cache explicitly).
#0  0x0000000000e1a012 in strcmp () from /lib64/libc.so.6
    mallstream = (FILE *) 0x0
    tr_old_memalign_hook = (void *(*)(size_t, size_t, const void *)) 0
    tr_old_malloc_hook = (void *(*)(size_t, const void *)) 0
    tr_old_realloc_hook = (void *(*)(void *, size_t, const void *)) 0
    lock = 0
    mallenv = "MALLOC_TRACE"
    malloc_trace_buffer = 0x0
    tr_old_free_hook = (void (*)(void *, const void *)) 0
    mallwatch = (void *) 0x0
#1  0x00007fb31467abee in find_generic_function (table=0x7fb3155893f0, key=0x7fb3155973b0, idx=10, compare_func=0xe1a010 <strcmp>, create_if_not_found=1, bucket=0x0, preallocated=0x7fb315607828) at dbus-hash.c:918
    entry = (DBusHashEntry *) 0x7fb3156077f8
#2  0x00007fb31467aea6 in find_string_function (table=0x7fb3155893f0, key=0x7fb3155973b0, create_if_not_found=1, bucket=0x0, preallocated=0x7fb315607828) at dbus-hash.c:952
    No locals.
#3  0x00007fb31467a8b1 in _dbus_hash_table_insert_string_preallocated (table=0x7fb3155973b0, preallocated=0x7fb315607828, key=0x7fb3155973b0 "developer", value=0x7fb3155ab350) at dbus-hash.c:1680
    entry = <value optimized out>
#4  0x00007fb31467aad9 in _dbus_hash_table_insert_string (table=0x7fb3155893f0, key=0x7fb3155973b0 "developer", value=0x7fb3155ab350) at dbus-hash.c:1443
    preallocated = (DBusPreallocatedHash *) 0xffffff64
#5  0x00007fb31468171e in _dbus_user_database_lookup (db=0x7fb315589260, uid=500, username=0x7fb3155e9fb0,

This seems different from bug #15588. From the backtrace above, the crash happens while updating the userdb hash table. After bug #15588 was fixed, this can only happen when building with "--disable-userdb-cache": the lookup still tries to update the userdb hash table, which is unnecessary because the user has disabled the userdb cache. So here is a patch to fix it.
Created attachment 82467 [details] [review]
[PATCH] Do not update user db cache if build with "--disable-userdb-cache"

(In reply to comment #9)
> Created attachment 82467 [details] [review] [review]
> [PATCH] Do not update user db cache if build with "--disable-userdb-cache"

Verified with valgrind before/after applying this patch; it works fine. BTW, I'd like to open another bug for the userdb-cache option: there is still a lot of code that should be disabled at compile time when building with '--disable-userdb-cache'.

Created attachment 82468 [details] [review]
[PATCH v2] Do not update user db cache if build with "--disable-userdb-cache"

Fixed coding style.

Since 1.7.6 it is impossible to disable the userdb cache. There, solved :-)