[C] Multi-threaded deadlock — gdb log attached, please help me analyze it

Posted 2016-01-09 10:46 by jd808 (last edited by jd808, 2016-01-09 10:48):
(gdb) info threads
  Id   Target Id         Frame
  7    Thread 0x7f82e25eb700 (LWP 24196) "Xgateway" __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
  6    Thread 0x7f82e1dea700 (LWP 24197) "Xgateway" __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
  5    Thread 0x7f82e15e9700 (LWP 24198) "Xgateway" __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
  4    Thread 0x7f82e0de8700 (LWP 24199) "Xgateway" __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
  3    Thread 0x7f82e05e7700 (LWP 24200) "Xgateway" __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
  2    Thread 0x7f82dfde6700 (LWP 24201) "Xgateway" 0x00007f82e26a948d in nanosleep () at ../sysdeps/unix/syscall-template.S:81
* 1    Thread 0x7f82e3ea7740 (LWP 24195) "Xgateway" __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
(gdb) thread 1
[Switching to thread 1 (Thread 0x7f82e3ea7740 (LWP 24195))]
#0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
135        2:        movl        %edx, %eax
(gdb) bt
#0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x00007f82e3a83d32 in _L_lock_791 () from /lib64/libpthread.so.0
#2  0x00007f82e3a83c38 in __GI___pthread_mutex_lock (mutex=0x1cb04f8) at pthread_mutex_lock.c:64
#3  0x0000000000415709 in TServer::CloseEvent (this=<optimized out>, conn=0x1cb0370, events=<optimized out>)
    at gateway/src/TServer.cpp:340
#4  0x00007f82e36001ee in bufferevent_readcb (fd=20, event=<optimized out>, arg=0x1ca4500) at bufferevent_sock.c:196
#5  0x00007f82e3605849 in event_persist_closure (ev=<optimized out>, base=0x1ca15c0) at event.c:1531
#6  event_process_active_single_queue (base=base@entry=0x1ca15c0, activeq=0x1badc90, max_to_process=max_to_process@entry=2147483647,
    endtime=endtime@entry=0x0) at event.c:1590
#7  0x00007f82e36060ff in event_process_active (base=0x1ca15c0) at event.c:1689
#8  event_base_loop (base=0x1ca15c0, flags=0) at event.c:1912
#9  0x0000000000414e01 in MultiServer::StartRun (this=0x1c53880) at gateway/src/MultiServer.cpp:329
#10 0x000000000041ce52 in TControl::Run (this=this@entry=0x7fff7eaf4ab0) at gateway/src/TControl.cpp:30
#11 0x00000000004060e0 in main (argc=1, argv=0x7fff7eaf4bc8) at gateway/src/node_gateway.cpp:64
(gdb) thread 3
[Switching to thread 3 (Thread 0x7f82e05e7700 (LWP 24200))]
#0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
135        2:        movl        %edx, %eax
(gdb) bt
#0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x00007f82e3a83d32 in _L_lock_791 () from /lib64/libpthread.so.0
#2  0x00007f82e3a83c38 in __GI___pthread_mutex_lock (mutex=0x1cb04f8) at pthread_mutex_lock.c:64
#3  0x0000000000406869 in TEngine::Base_Client_Lock (this=0x1cad670, send_conn=0x1cb0370, command=0x7f82e05e6bf0 "\031\260!",
    size=52, is_zip=1) at gateway/src/TEngine.cpp:113
#4  0x000000000040766c in TUserEngine::send_map_xy_client (this=<optimized out>, pcon=<optimized out>, Build_conn=<optimized out>,
    command=<optimized out>, cmdlen=<optimized out>, send_count=<optimized out>) at gateway/src/TUserEngine.cpp:794
#5  0x00000000004131ec in operator() (__args#4=0x7f82e05e6bac, __args#3=52, __args#2=0x7f82e05e6bf0 "\031\260!", __args#1=0x1cb0370,
    __args#0=<optimized out>, this=0x7f82e05e6bd0) at /usr/include/c++/4.8.2/tr1/functional:2146
#6  BaseTEngine::CallConnList(std::tr1::function<int (Conn*, Conn*, char const*, int, void*)>, Conn*, char const*, int, void*) (
    this=this@entry=0x1cad670, f=..., ActConn=ActConn@entry=0x1cb0370, command=command@entry=0x7f82e05e6bf0 "\031\260!",
    cmdlen=cmdlen@entry=52, n=n@entry=0x7f82e05e6bac) at gateway/src/BaseTEngine.cpp:166
#7  0x0000000000409380 in TUserEngine::_MoveXY (this=this@entry=0x1cad670, x=x@entry=211, y=y@entry=2414, parameter=1,
    hero_user_id=-1) at gateway/src/TUserEngine.cpp:643
#8  0x00000000004093f7 in TUserEngine::moveXY (this=this@entry=0x1cad670,
    command=command@entry=0x7f82c8006104 <incomplete sequence \323>, DataLen=DataLen@entry=16) at gateway/src/TUserEngine.cpp:592
#9  0x000000000040aa76 in TUserEngine::ExecCmd (this=this@entry=0x1cad670, command=0x7f82c8006104 <incomplete sequence \323>,
    DataLen=DataLen@entry=16) at gateway/src/TUserEngine.cpp:174
#10 0x000000000040af82 in TUserEngine::Process (this=0x1cad670, arg=<optimized out>, pconn=<optimized out>, CMD=<optimized out>,
    CMDlen=<optimized out>) at gateway/src/TUserEngine.cpp:42
#11 0x0000000000415fe6 in TServer::ThreadProcessD (this=<optimized out>, pcon=0x1cb0370, CMD=<optimized out>, CMDlen=43,
    arg=0x1ca1c90) at gateway/src/TServer.cpp:123
#12 0x0000000000416279 in TServer::ThreadProcess (this=<optimized out>, arg=<optimized out>) at gateway/src/TServer.cpp:186
#13 0x0000000000414520 in MultiServer::WorkerLibevent (arg=<optimized out>) at gateway/src/MultiServer.cpp:361
#14 0x00007f82e3a81df5 in start_thread (arg=0x7f82e05e7700) at pthread_create.c:308
#15 0x00007f82e26e21ad in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb) thread 4
[Switching to thread 4 (Thread 0x7f82e0de8700 (LWP 24199))]
#0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
135        2:        movl        %edx, %eax
(gdb) bt
#0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x00007f82e3a83d32 in _L_lock_791 () from /lib64/libpthread.so.0
#2  0x00007f82e3a83c38 in __GI___pthread_mutex_lock (mutex=0x1cb04f8) at pthread_mutex_lock.c:64
#3  0x0000000000406869 in TEngine::Base_Client_Lock (this=0x1caa750, send_conn=0x1cb0370, command=0x7f82e0de7bf0 "\031\260!",
    size=52, is_zip=1) at gateway/src/TEngine.cpp:113
#4  0x000000000040766c in TUserEngine::send_map_xy_client (this=<optimized out>, pcon=<optimized out>, Build_conn=<optimized out>,
    command=<optimized out>, cmdlen=<optimized out>, send_count=<optimized out>) at gateway/src/TUserEngine.cpp:794
#5  0x00000000004131ec in operator() (__args#4=0x7f82e0de7bac, __args#3=52, __args#2=0x7f82e0de7bf0 "\031\260!", __args#1=0x1cb0370,
    __args#0=<optimized out>, this=0x7f82e0de7bd0) at /usr/include/c++/4.8.2/tr1/functional:2146
#6  BaseTEngine::CallConnList(std::tr1::function<int (Conn*, Conn*, char const*, int, void*)>, Conn*, char const*, int, void*) (
    this=this@entry=0x1caa750, f=..., ActConn=ActConn@entry=0x1cb0370, command=command@entry=0x7f82e0de7bf0 "\031\260!",
    cmdlen=cmdlen@entry=52, n=n@entry=0x7f82e0de7bac) at gateway/src/BaseTEngine.cpp:166
#7  0x0000000000409380 in TUserEngine::_MoveXY (this=this@entry=0x1caa750, x=x@entry=336, y=y@entry=2251, parameter=1,
    hero_user_id=-1) at gateway/src/TUserEngine.cpp:643
#8  0x00000000004093f7 in TUserEngine::moveXY (this=this@entry=0x1caa750, command=command@entry=0x7f82c0007384 "P\001",
    DataLen=DataLen@entry=16) at gateway/src/TUserEngine.cpp:592
#9  0x000000000040aa76 in TUserEngine::ExecCmd (this=this@entry=0x1caa750, command=0x7f82c0007384 "P\001", DataLen=DataLen@entry=16)
    at gateway/src/TUserEngine.cpp:174
#10 0x000000000040af82 in TUserEngine::Process (this=0x1caa750, arg=<optimized out>, pconn=<optimized out>, CMD=<optimized out>,
    CMDlen=<optimized out>) at gateway/src/TUserEngine.cpp:42
#11 0x0000000000415fe6 in TServer::ThreadProcessD (this=<optimized out>, pcon=0x1cb0370, CMD=<optimized out>, CMDlen=43,
    arg=0x1ca1c40) at gateway/src/TServer.cpp:123
#12 0x0000000000416279 in TServer::ThreadProcess (this=<optimized out>, arg=<optimized out>) at gateway/src/TServer.cpp:186
#13 0x0000000000414520 in MultiServer::WorkerLibevent (arg=<optimized out>) at gateway/src/MultiServer.cpp:361
#14 0x00007f82e3a81df5 in start_thread (arg=0x7f82e0de8700) at pthread_create.c:308
#15 0x00007f82e26e21ad in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb) thread 5
[Switching to thread 5 (Thread 0x7f82e15e9700 (LWP 24198))]
#0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
135        2:        movl        %edx, %eax
(gdb) bt
#0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x00007f82e3a83d4d in _L_lock_840 () from /lib64/libpthread.so.0
#2  0x00007f82e3a83c6a in __GI___pthread_mutex_lock (mutex=0x1ca4820) at pthread_mutex_lock.c:85
#3  0x00007f82e3609852 in debug_lock_lock (mode=<optimized out>, lock_=0x1cb0750) at evthread.c:227
#4  0x00007f82e35f85ac in evbuffer_add (buf=0x1ca4790, data_in=0x1cb1ac0, datlen=79) at buffer.c:1725
#5  0x00000000004182d7 in AddToWriteBuffer (this=0x1cb0370, this=0x1cb0370, len=<optimized out>, buffer=<optimized out>)
    at gateway/src/../inc/MultiServer.h:232
#6  socket_send (conn=conn@entry=0x1cb0370, buffer=<optimized out>, size=<optimized out>) at gateway/src/function.cpp:91
#7  0x00000000004068b3 in TEngine::Base_Client_Lock (this=0x1ca7830, send_conn=0x1cb0370, command=0x7f82e15e8bf0 "\031\260!",
    size=<optimized out>, is_zip=1) at gateway/src/TEngine.cpp:128
#8  0x000000000040766c in TUserEngine::send_map_xy_client (this=<optimized out>, pcon=<optimized out>, Build_conn=<optimized out>,
    command=<optimized out>, cmdlen=<optimized out>, send_count=<optimized out>) at gateway/src/TUserEngine.cpp:794
#9  0x00000000004131ec in operator() (__args#4=0x7f82e15e8bac, __args#3=52, __args#2=0x7f82e15e8bf0 "\031\260!", __args#1=0x1cb0370,
    __args#0=<optimized out>, this=0x7f82e15e8bd0) at /usr/include/c++/4.8.2/tr1/functional:2146
#10 BaseTEngine::CallConnList(std::tr1::function<int (Conn*, Conn*, char const*, int, void*)>, Conn*, char const*, int, void*) (
    this=this@entry=0x1ca7830, f=..., ActConn=ActConn@entry=0x1cb0370, command=command@entry=0x7f82e15e8bf0 "\031\260!",
    cmdlen=cmdlen@entry=52, n=n@entry=0x7f82e15e8bac) at gateway/src/BaseTEngine.cpp:166
#11 0x0000000000409380 in TUserEngine::_MoveXY (this=this@entry=0x1ca7830, x=x@entry=266, y=y@entry=2315, parameter=1,
    hero_user_id=-1) at gateway/src/TUserEngine.cpp:643
#12 0x00000000004093f7 in TUserEngine::moveXY (this=this@entry=0x1ca7830, command=command@entry=0x7f82d8003594 "\n\001",
    DataLen=DataLen@entry=16) at gateway/src/TUserEngine.cpp:592
#13 0x000000000040aa76 in TUserEngine::ExecCmd (this=this@entry=0x1ca7830, command=0x7f82d8003594 "\n\001", DataLen=DataLen@entry=16)
    at gateway/src/TUserEngine.cpp:174
#14 0x000000000040af82 in TUserEngine::Process (this=0x1ca7830, arg=<optimized out>, pconn=<optimized out>, CMD=<optimized out>,
    CMDlen=<optimized out>) at gateway/src/TUserEngine.cpp:42
#15 0x0000000000415fe6 in TServer::ThreadProcessD (this=<optimized out>, pcon=0x1cb0370, CMD=<optimized out>, CMDlen=43,
    arg=0x1ca1bf0) at gateway/src/TServer.cpp:123
#16 0x0000000000416279 in TServer::ThreadProcess (this=<optimized out>, arg=<optimized out>) at gateway/src/TServer.cpp:186
#17 0x0000000000414520 in MultiServer::WorkerLibevent (arg=<optimized out>) at gateway/src/MultiServer.cpp:361
#18 0x00007f82e3a81df5 in start_thread (arg=0x7f82e15e9700) at pthread_create.c:308
#19 0x00007f82e26e21ad in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb) thread 6
[Switching to thread 6 (Thread 0x7f82e1dea700 (LWP 24197))]
#0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
135        2:        movl        %edx, %eax
(gdb) bt
#0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x00007f82e3a83d32 in _L_lock_791 () from /lib64/libpthread.so.0
#2  0x00007f82e3a83c38 in __GI___pthread_mutex_lock (mutex=0x1cb04f8) at pthread_mutex_lock.c:64
#3  0x0000000000406869 in TEngine::Base_Client_Lock (this=0x1ca4910, send_conn=0x1cb0370, command=0x7f82e1de9bf0 "\031\260!",
    size=52, is_zip=1) at gateway/src/TEngine.cpp:113
#4  0x000000000040766c in TUserEngine::send_map_xy_client (this=<optimized out>, pcon=<optimized out>, Build_conn=<optimized out>,
    command=<optimized out>, cmdlen=<optimized out>, send_count=<optimized out>) at gateway/src/TUserEngine.cpp:794
#5  0x00000000004131ec in operator() (__args#4=0x7f82e1de9bac, __args#3=52, __args#2=0x7f82e1de9bf0 "\031\260!", __args#1=0x1cb0370,
    __args#0=<optimized out>, this=0x7f82e1de9bd0) at /usr/include/c++/4.8.2/tr1/functional:2146
#6  BaseTEngine::CallConnList(std::tr1::function<int (Conn*, Conn*, char const*, int, void*)>, Conn*, char const*, int, void*) (
    this=this@entry=0x1ca4910, f=..., ActConn=ActConn@entry=0x1cb0370, command=command@entry=0x7f82e1de9bf0 "\031\260!",
    cmdlen=cmdlen@entry=52, n=n@entry=0x7f82e1de9bac) at gateway/src/BaseTEngine.cpp:166
#7  0x0000000000409380 in TUserEngine::_MoveXY (this=this@entry=0x1ca4910, x=x@entry=405, y=y@entry=2373, parameter=1,
    hero_user_id=-1) at gateway/src/TUserEngine.cpp:643
#8  0x00000000004093f7 in TUserEngine::moveXY (this=this@entry=0x1ca4910, command=command@entry=0x7f82d0005de4 "\225\001",
    DataLen=DataLen@entry=16) at gateway/src/TUserEngine.cpp:592
#9  0x000000000040aa76 in TUserEngine::ExecCmd (this=this@entry=0x1ca4910, command=0x7f82d0005de4 "\225\001",
    DataLen=DataLen@entry=16) at gateway/src/TUserEngine.cpp:174
#10 0x000000000040af82 in TUserEngine::Process (this=0x1ca4910, arg=<optimized out>, pconn=<optimized out>, CMD=<optimized out>,
    CMDlen=<optimized out>) at gateway/src/TUserEngine.cpp:42
#11 0x0000000000415fe6 in TServer::ThreadProcessD (this=<optimized out>, pcon=0x1cb0370, CMD=<optimized out>, CMDlen=43,
    arg=0x1ca1ba0) at gateway/src/TServer.cpp:123
#12 0x0000000000416279 in TServer::ThreadProcess (this=<optimized out>, arg=<optimized out>) at gateway/src/TServer.cpp:186
#13 0x0000000000414520 in MultiServer::WorkerLibevent (arg=<optimized out>) at gateway/src/MultiServer.cpp:361
#14 0x00007f82e3a81df5 in start_thread (arg=0x7f82e1dea700) at pthread_create.c:308
#15 0x00007f82e26e21ad in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb) thread 7
[Switching to thread 7 (Thread 0x7f82e25eb700 (LWP 24196))]
#0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
135        2:        movl        %edx, %eax
(gdb) bt
#0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x00007f82e3a83d32 in _L_lock_791 () from /lib64/libpthread.so.0
#2  0x00007f82e3a83c38 in __GI___pthread_mutex_lock (mutex=0x1cb04f8) at pthread_mutex_lock.c:64
#3  0x00000000004158a1 in TServer::AutoCloseEvent (this=<optimized out>, conn=0x1cb0370) at gateway/src/TServer.cpp:525
#4  0x0000000000416693 in TServer::TimeOutCb (this=this@entry=0x1c53880) at gateway/src/TServer.cpp:500
#5  0x00000000004167a2 in TServer::TimerCb (this=0x1c53880, arg=0x1cae4f0) at gateway/src/TServer.cpp:443
#6  0x00000000004144e1 in MultiServer::WorkerTimerEvent (arg=<optimized out>) at gateway/src/MultiServer.cpp:342
#7  0x00007f82e3a81df5 in start_thread (arg=0x7f82e25eb700) at pthread_create.c:308
#8  0x00007f82e26e21ad in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
The code is roughly:

int TEngine::Base_Client_Lock(Conn *send_conn, const char *command, int size, int is_zip)
{
        if (send_conn == NULL) return -1;
        pthread_mutex_lock(&send_conn->UserCMDLock);
        if (send_conn == NULL)
                return -1;

        if (send_conn->is_down == 1) {
                pthread_mutex_unlock(&send_conn->UserCMDLock);
                return -1;
        }

        int mid = proto->MODLUE_ID < 0 ? proto->MODLUE_ID : 9999999;
        if (Client_BuildPact(mid, m_sbuff, command, size, is_zip) == -1)
        {
                pthread_mutex_unlock(&send_conn->UserCMDLock);
                return -1;
        }
        socket_send(send_conn, m_sbuff->buff, m_sbuff->data_size);
        pthread_mutex_unlock(&send_conn->UserCMDLock);
        return 0;
}

int socket_send(Conn *conn, const char *buffer, int size)
{
//      pthread_t pid = pthread_self();
//      printf("%p|%ld, adding to send buffer 1\n", conn, pid);
        //pthread_mutex_lock(&conn->GetThread()->send_mutex);   //ok
        //pthread_mutex_lock(&conn->send_mutex);
        //printf("%p|%ld, adding to send buffer 2\n", conn, pid);
        if (conn != NULL)
                conn->AddToWriteBuffer((void *)buffer, size);

//      printf("%p|%ld, added, returning 1\n", conn, pid);
        //pthread_mutex_unlock(&conn->send_mutex);
        //pthread_mutex_unlock(&conn->GetThread()->send_mutex);
        //printf("%p|%ld, added, returning 2\n", conn, pid);

        return 0;
}

int AddToWriteBuffer(const void *buffer, int len)
{
//      m_WriteBuf = bufferevent_get_output(m_bev);
        //printf("m_WriteBuf[%p]\n", m_WriteBuf);
        if (fd_type > 0 && m_WriteBuf != NULL) {
                int res = evbuffer_add(m_WriteBuf, buffer, len);
//              int res = bufferevent_write(m_bev, buffer, len); // same as evbuffer_add above
                return res;
        }
        else
                return -1;
}
This uses the libevent framework. evbuffer_add takes a lock that libevent itself holds internally. If that lock is held and never released, the caller never reaches pthread_mutex_unlock(&send_conn->UserCMDLock), so all the other threads end up stuck at pthread_mutex_lock(&send_conn->UserCMDLock). The problem is that the log doesn't tell me why the lock inside evbuffer_add isn't being released. It shouldn't be able to cause a cross-lock: that lock is fully encapsulated inside the function, and anything that wants to call evbuffer_add has to go through pthread_mutex_lock(&send_conn->UserCMDLock) first. How do I get out of this?

Reply #2 (happy_fish100, 2016-01-10 09:14):
There is a place where the function returns without unlocking. Quoting it:

        pthread_mutex_lock(&send_conn->UserCMDLock);
        if(send_conn==NULL)
                return -1;

Reply #3 (jd808, 2016-01-11 09:57):
happy_fish100 wrote (2016-01-10 09:14): "There is a place where the function returns without unlocking. Quoting it: ..."

That part is fine: when send_conn == NULL there is nothing to unlock, and by then the lock itself has already disappeared anyway.

Reply #4 (lxyscls, 2016-01-11 10:51; last edited 2016-01-11 10:53):

Reply to #3 (jd808):

        pthread_mutex_lock(&send_conn->UserCMDLock);
        if(send_conn==NULL)
            return -1;

Implemented this way, it is wrong in itself: if send_conn == NULL, the lock call will simply crash. What you need is to check the return status of the lock call.

Reply #5 (jd808, 2016-01-11 11:07):
lxyscls wrote (2016-01-11 10:51): "Implemented this way, it is wrong in itself: if send_conn == NULL, the lock call will simply crash. What you need is to check the return ..."

I've never heard that you can check the status of a mutex.

Reply #6 (jd808, 2016-01-11 11:10):
lxyscls wrote (2016-01-11 10:51): "Implemented this way, it is wrong in itself: if send_conn == NULL, the lock call will simply crash. What you need is to check the return ..."

Besides, this isn't wrong either. Thread A locks it, intending to delete the handle, while thread B waits at the lock. Once thread A finishes deleting, the lock no longer exists and pconn is set to NULL. If thread B doesn't error out and manages to take the lock anyway, pcon may be NULL at that point, so it has to check whether pcon is NULL. If it is, the handle object itself is gone, there is no point continuing, and the lock will no longer exist either (the lock lives inside pcon).

Reply #7 (lxyscls, 2016-01-11 11:11):
Reply to #5 (jd808):

Then you need to read man pthread_mutex_lock.

Reply #8 (lxyscls, 2016-01-11 11:13):
Reply to #6 (jd808):

If the mutex has been destroyed and you can still lock it, that really would be spooky.
And if you never destroy it and only free the enclosing struct, haven't you just leaked the resource?

Reply #9 (happy_fish100, 2016-01-11 11:40; last edited 2016-01-11 11:41):

Locking first and then checking whether the owning pointer is NULL never feels right (you already checked it before locking)!

If other threads may free the owning object, you can use these steps:
1. Lock, for mutual exclusion.
2. Tell the other threads that use the pointer that the owner is about to be freed; you can add a member flag to the owner struct for this.
3. Unlock.
4. Free the pointer after a delay, with the delay chosen to fit your situation.

Of course there are other ways to do it, for example smart pointers (the memory is only really freed once the reference count reaches 0).
I'd recommend the smart-pointer approach.

Reply #10 (jd808, 2016-01-11 15:23):
lxyscls wrote (2016-01-11 11:13): "Reply to #6 jd808 ..."

They are racing for the lock; the deleting thread wins and the worker thread loses, so the worker queues up waiting. After the deleting thread is done, that memory the worker is waiting on no longer exists either. I don't know what then happens to its place in the queue, whether the lock count drops to zero or the behavior is simply unpredictable, but one thing is certain: when its turn comes, it will manage to take the lock no matter what.