zghover posted on 2013-12-02 11:42

A question about twemproxy (nutcracker-0.2.4) failing to automatically remove a dead node


I want to build a Redis cluster with twemproxy (nutcracker-0.2.4), but I found that it cannot automatically remove a dead node.

Version:
nutcracker-0.2.4

Configuration file:
dds:
  listen: 0.0.0.0:10000         # port twemproxy listens on
  redis: true                   # whether this is a proxy for Redis
  hash: fnv1a_64                # hash function to use
  distribution: ketama          # key distribution algorithm (consistent hashing)
  auto_eject_hosts: true        # temporarily eject a node when it stops responding
  timeout: 400                  # timeout in milliseconds
  server_retry_timeout: 1000    # retry interval in milliseconds
  server_failure_limit: 1       # number of failures before a node is ejected
  servers:                      # all Redis nodes (IP:port:weight)
   - 192.168.98.91:6379:10
   - 192.168.98.93:6379:20
   - 192.168.98.99:6379:60
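
Aside: the server_ejects / server_err / server_eof counters that show up in the logs below can also be watched through twemproxy's stats interface, which dumps its counters as JSON once per connection on a separate port (22222 by default, settable with --stats-port; treat the port and the exact field layout as assumptions for 0.2.4). A minimal sketch:

import json, pprint, socket

def fetch_stats(host="127.0.0.1", port=22222):
    # Connect to the nutcracker stats port (host is a placeholder for wherever
    # nutcracker runs); the daemon writes one JSON document and closes.
    sock = socket.create_connection((host, port))
    buf = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        buf += chunk
    sock.close()
    return json.loads(buf.decode())

# Print the counters for the 'dds' pool defined above.
pprint.pprint(fetch_stats().get("dds"))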

I started the server with -v 11 and watched the log as it ran.

With a client that sends one SET request every 0.1 seconds, each with a randomly generated key, everything worked fine at first: the keys were distributed across all three machines.
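
The client code itself isn't shown here; a minimal sketch of an equivalent test client (assuming Python with the redis-py package, and assuming nutcracker is reachable on 192.168.98.91 through its listen port 10000 from the config above) would look like this:

import random
import time

import redis  # assumption: redis-py; any Redis client pointed at the proxy works

# Connect to the twemproxy listen port, not to any Redis node directly.
r = redis.Redis(host="192.168.98.91", port=10000)

while True:
    key = "key-%d" % random.randint(0, 1000000)  # randomly generated key
    try:
        r.set(key, "value")
    except redis.RedisError as e:
        print("set %s failed: %s" % (key, e))
    time.sleep(0.1)  # one SET every 0.1 seconds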
Then I killed the redis service on 192.168.98.99.
The log showed the following:


31694 nc_response.c:120 s 10 active 0 is done
31695 nc_core.c:207 close s 10 '192.168.98.99:6379' on event 0001 eof 1 done 1 rb 244 sb 1291
31696 nc_stats.c:1125 metric 'server_connections' in pool 0 server 2
31697 nc_stats.c:1157 decr field 'server_connections' to 0
31698 nc_stats.c:1125 metric 'server_eof' in pool 0 server 2
31699 nc_stats.c:1142 incr field 'server_eof' to 1
31700 nc_server.c:266 server '192.168.98.99:6379:60' failure count 1 limit 1
31701 nc_server.c:281 update pool 0 'dds' to delete server '192.168.98.99:6379:60' for next 1 secs
31702 nc_stats.c:1039 metric 'server_ejects' in pool 0
31703 nc_stats.c:1056 incr field 'server_ejects' to 1
31704 nc_ketama.c:122 2 of 3 servers are live for pool 0 'dds'
31705 nc_ketama.c:168 192.168.98.91:6379:6379 weight 10 of 30 pct 0.33333 points per server 104
31706 nc_ketama.c:168 192.168.98.93:6379:6379 weight 20 of 30 pct 0.66667 points per server 212
31707 nc_ketama.c:210 updated pool 0 'dds' with 2 of 3 servers live in 13 slots and 316 active points in 3680 slots
31708 nc_server.c:63 unref conn 0x8c8b4e8 owner 0x8c7ecb0 from '192.168.98.99:6379:60'
31709 nc_connection.c:267 put conn 0x8c8b4e8
31710 nc_stats.c:996 skip swap of current 0x8c830f8 shadow 0x8c8f700 as aggregator is busy
31711 nc_core.c:285 event 0001 on c 8
31712 nc_message.c:300 get msg 0x8c87248 id 1010 request 1 owner sd 8
31713 nc_mbuf.c:99 get mbuf 0x8c8b320
31714 nc_mbuf.c:182 insert mbuf 0x8c8b320 len 0
31715 nc_connection.c:307 recv on sd 8 35 of 16360

This shows that twemproxy has updated the ketama (consistent hashing) ring and dropped the node.
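
The "points per server" and "active points" numbers in these lines can be reproduced by hand, assuming the usual ketama allocation in nc_ketama.c: roughly 160 continuum points per live server, scaled by each server's share of the total live weight and rounded down to a multiple of 4 (the small epsilon guards against floating-point rounding):

import math

def ketama_points(weights, live):
    # weights: server -> configured weight; live: servers currently in the ring
    total = sum(weights[s] for s in live)
    points = {}
    for s in live:
        pct = float(weights[s]) / total
        points[s] = int(math.floor(pct * 160 * len(live) + 1e-10)) // 4 * 4
    return points

weights = {"192.168.98.91": 10, "192.168.98.93": 20, "192.168.98.99": 60}

# 2 of 3 servers live (192.168.98.99 ejected): 104 and 212 points, 316 in total
print(ketama_points(weights, ["192.168.98.91", "192.168.98.93"]))

# 3 of 3 servers live: 52, 104 and 320 points, 476 in total,
# matching the "3 of 3 servers are live" lines further down
print(ketama_points(weights, list(weights)))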

The client kept sending requests but from time to time got Connection refused errors. Looking at the log again, I found that 192.168.98.99 had been added back into the ketama ring:

33598 nc_ketama.c:122 3 of 3 servers are live for pool 0 'dds'
33599 nc_ketama.c:168 192.168.98.91:6379:6379 weight 10 of 90 pct 0.11111 points per server 52
33600 nc_ketama.c:168 192.168.98.93:6379:6379 weight 20 of 90 pct 0.22222 points per server 104
33601 nc_ketama.c:168 192.168.98.99:6379:6379 weight 60 of 90 pct 0.66667 points per server 320
33602 nc_ketama.c:210 updated pool 0 'dds' with 3 of 3 servers live in 13 slots and 476 active points in 3680 slots


So errors like the following appear:


33978 nc_server.c:642 key '测试9' on dist 0 maps to server '192.168.98.99:6379:60'
33979 nc_util.c:225 malloc(176) at 0x8c8b5a0 @ nc_connection.c:100
33980 nc_server.c:44 ref conn 0x8c8b5a0 owner 0x8c7ecb0 into '192.168.98.99:6379:60
33981 nc_connection.c:208 get conn 0x8c8b5a0 client 0
33982 nc_server.c:452 connect to server '192.168.98.99:6379:60'
33983 nc_server.c:494 connecting on s 11 to server '192.168.98.99:6379:60'
33984 nc_message.c:165 insert msg 1082 into tmo rbt with expiry of 400 msec
33985 nc_stats.c:1125 metric 'in_queue' in pool 0 server 2
33986 nc_stats.c:1142 incr field 'in_queue' to 1
33987 nc_stats.c:1125 metric 'in_queue_bytes' in pool 0 server 2
33988 nc_stats.c:1172 incr by field 'in_queue_bytes' to 33
33989 nc_stats.c:1125 metric 'requests' in pool 0 server 2
33990 nc_stats.c:1142 incr field 'requests' to 46
33991 nc_stats.c:1125 metric 'request_bytes' in pool 0 server 2
33992 nc_stats.c:1172 incr by field 'request_bytes' to 1324
33993 nc_request.c:498 forward from c 8 to s 11 req 1082 len 33 type 49 with key '测试9'
33994 nc_stats.c:996 skip swap of current 0x8c830f8 shadow 0x8c8f700 as aggregator is busy
33995 nc_core.c:285 event 0019 on s 11
33996 nc_core.c:207 close s 11 '192.168.98.99:6379' on event 0019 eof 0 done 0 rb 0 sb 0: Connection refused
33997 nc_stats.c:1125 metric 'server_err' in pool 0 server 2
33998 nc_stats.c:1142 incr field 'server_err' to 1
33999 nc_stats.c:1125 metric 'in_queue' in pool 0 server 2
34000 nc_stats.c:1157 decr field 'in_queue' to 0
34001 nc_stats.c:1125 metric 'in_queue_bytes' in pool 0 server 2
34002 nc_stats.c:1187 decr by field 'in_queue_bytes' to 0
34003 nc_server.c:376 close s 11 schedule error for req 1082 len 33 type 49 from c 8: Connection refused
34004 nc_server.c:266 server '192.168.98.99:6379:60' failure count 1 limit 1
34005 nc_server.c:281 update pool 0 'dds' to delete server '192.168.98.99:6379:60' for next 1 secs
34006 nc_stats.c:1039 metric 'server_ejects' in pool 0
34007 nc_stats.c:1056 incr field 'server_ejects' to 2
34008 nc_ketama.c:122 2 of 3 servers are live for pool 0 'dds'
34009 nc_ketama.c:168 192.168.98.91:6379:6379 weight 10 of 30 pct 0.33333 points per server 104
34010 nc_ketama.c:168 192.168.98.93:6379:6379 weight 20 of 30 pct 0.66667 points per server 212
34011 nc_ketama.c:210 updated pool 0 'dds' with 2 of 3 servers live in 13 slots and 316 active points in 3680 slots
34012 nc_server.c:63 unref conn 0x8c8b5a0 owner 0x8c7ecb0 from '192.168.98.99:6379:60'

At this point the failure on 192.168.98.99 is detected again and the node is removed from the ketama ring once more, but later it gets added back again, and this cycle repeats indefinitely.

Can anyone tell me what is going on here? Thanks.

zghover posted on 2013-12-06 10:51

I made some modifications to the software: when a node dies, the consistent hashing ring (ketama) is rebuilt, which gives us automatic node removal.
How to add the node back into the hash ring once it has recovered is still being worked out.
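
This is not the actual patch (which would be against twemproxy's C code in nc_ketama.c / nc_server.c); just a toy sketch of the idea in Python: keep the node list and weights around, rebuild the continuum when a node dies, and rebuild it again with the node once it is considered healthy. The node names and weights mirror the pool above; everything else is illustrative.

import bisect, hashlib

class Ring(object):
    def __init__(self, nodes, points_per_weight=16):
        # nodes: dict of "ip:port" -> weight
        self.nodes = dict(nodes)
        self.points_per_weight = points_per_weight
        self._build()

    def _build(self):
        # Rebuild the full continuum from the current node set.
        self._ring = []
        for node, weight in self.nodes.items():
            for i in range(weight * self.points_per_weight):
                h = int(hashlib.md5(("%s-%d" % (node, i)).encode()).hexdigest(), 16)
                self._ring.append((h, node))
        self._ring.sort()
        self._hashes = [h for h, _ in self._ring]

    def lookup(self, key):
        # Map a key to the first continuum point at or after its hash.
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        i = bisect.bisect(self._hashes, h) % len(self._ring)
        return self._ring[i][1]

    def eject(self, node):
        # Node died: rebuild the ring without it.
        self.nodes.pop(node, None)
        self._build()

    def restore(self, node, weight):
        # Node recovered: rebuild the ring with it again.
        self.nodes[node] = weight
        self._build()

ring = Ring({"192.168.98.91:6379": 10,
             "192.168.98.93:6379": 20,
             "192.168.98.99:6379": 60})
ring.eject("192.168.98.99:6379")        # what the modification does on failure
ring.restore("192.168.98.99:6379", 60)  # the open question: when to do this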