ChinaUnix Forum
Views: 7429 | Replies: 10
[FastDFS] Strange FastDFS behavior, I simply cannot find the answer, please help!!!

#1 | Posted 2016-03-30 20:05
I set up a new FastDFS system.
Very simple: one tracker and one group with two storages. Tracker: 192.168.200.199, storage1: 192.168.200.220, storage2: 192.168.200.240.
Both the tracker and the storages start normally, with no errors in any of the logs. Uploads work, and the file lands on both storages.
fdfs_download_file can download the file as well.

The problem:
Every time I download a file with fdfs_download_file, the storage the tracker hands back is always storage1 (I print the storage IP returned by the tracker inside fdfs_download_file). It does not matter how many times I download; I also tried running several fdfs_download_file processes at once, and downloading from several client machines at the same time, always with the same result. I am sure storage2 is working, because if I manually stop storage1 and download again, the tracker returns storage2 instead. I have gone over my configuration repeatedly; the only parameter related to this is download_server=0 in tracker.conf (# 0: round robin, the file can be downloaded from any storage server that holds it).

I am completely stuck. Any help would be much appreciated!!!

#2 | Posted 2016-03-30 20:18
With download_server=0 the tracker should load-balance downloads round robin, yet it keeps returning only storage1.

#3 | Posted 2016-03-30 20:28
Please don't just read and move on, folks, throw out some ideas!

#4 | Posted 2016-03-31 09:08
tracker.conf contents:

# is this config file disabled
# false for enabled
# true for disabled
disabled=false

# bind an address of this host
# empty for bind all addresses of this host
bind_addr=

# the tracker server port
port=22122

# connect timeout in seconds
# default value is 30s
connect_timeout=30

# network timeout in seconds
# default value is 30s
network_timeout=60

# the base path to store data and log files
base_path=/home/fastdfs_data

# max concurrent connections this server supported
max_connections=256

# accept thread count
# default value is 1
# since V4.07
accept_threads=1

# work thread count, should <= max_connections
# default value is 4
# since V2.00
work_threads=4

# the method of selecting group to upload files
# 0: round robin
# 1: specify group
# 2: load balance, select the max free space group to upload file
store_lookup=2

# which group to upload file
# when store_lookup set to 1, must set store_group to the group name
store_group=group2

# which storage server to upload file
# 0: round robin (default)
# 1: the first server order by ip address
# 2: the first server order by priority (the minimal)
store_server=0

# which path(means disk or mount point) of the storage server to upload file
# 0: round robin
# 2: load balance, select the max free space path to upload file
store_path=0

# which storage server to download file
# 0: round robin (default)
# 1: the source storage server which the current file uploaded to
download_server=0

# reserved storage space for system or other applications.
# if the free(available) space of any storage server in
# a group <= reserved_storage_space,
# no file can be uploaded to this group.
# bytes unit can be one of follows:
### G or g for gigabyte(GB)
### M or m for megabyte(MB)
### K or k for kilobyte(KB)
### no unit for byte(B)
### XX.XX% as ratio such as reserved_storage_space = 10%
reserved_storage_space = 10%

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

#unix group name to run this program,
#not set (empty) means run by the group of current user
run_by_group=

#unix username to run this program,
#not set (empty) means run by current user
run_by_user=

# allow_hosts can occur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
# we can use CIDR ips like 192.168.5.64/26
# and also use range like these: 10.0.1.[0-254] and host[01-08,20-25].domain.com
# for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
allow_hosts=*

# sync log buff to disk every interval seconds
# default value is 10 seconds
sync_log_buff_interval = 10

# check storage server alive interval seconds
check_active_interval = 120

# thread stack size, should >= 64KB
# default value is 64KB
thread_stack_size = 64KB

# auto adjust when the ip address of the storage server changed
# default value is true
storage_ip_changed_auto_adjust = true

# storage sync file max delay seconds
# default value is 86400 seconds (one day)
# since V2.00
storage_sync_file_max_delay = 86400

# the max time of storage sync a file
# default value is 300 seconds
# since V2.00
storage_sync_file_max_time = 300

# if use a trunk file to store several small files
# default value is false
# since V3.00
use_trunk_file = false

# the min slot size, should <= 4KB
# default value is 256 bytes
# since V3.00
slot_min_size = 256

# the max slot size, should > slot_min_size
# store the upload file to trunk file when its size <= this value
# default value is 16MB
# since V3.00
slot_max_size = 16MB

# the trunk file size, should >= 4MB
# default value is 64MB
# since V3.00
trunk_file_size = 64MB

# if create trunk file in advance
# default value is false
# since V3.06
trunk_create_file_advance = false

# the time base to create trunk file
# the time format: HH:MM
# default value is 02:00
# since V3.06
trunk_create_file_time_base = 02:00

# the interval of create trunk file, unit: second
# default value is 86400 (one day)
# since V3.06
trunk_create_file_interval = 86400

# the threshold to create trunk file
# when the free trunk file size less than the threshold, will create
# the trunk files
# default value is 0
# since V3.06
trunk_create_file_space_threshold = 20G

# if check trunk space occupying when loading trunk free spaces
# the occupied spaces will be ignored
# default value is false
# since V3.09
# NOTICE: setting this parameter to true will slow the loading of trunk spaces
# at startup. only set this parameter to true when necessary.
trunk_init_check_occupying = false

# if ignore storage_trunk.dat, reload from trunk binlog
# default value is false
# since V3.10
# set to true once for version upgrade when your version less than V3.10
trunk_init_reload_from_binlog = false

# the min interval for compressing the trunk binlog file
# unit: second
# default value is 0, 0 means never compress
# FastDFS compress the trunk binlog when trunk init and trunk destroy
# recommended to set this parameter to 86400 (one day)
# since V5.01
trunk_compress_binlog_min_interval = 0

# if use storage ID instead of IP address
# default value is false
# since V4.00
use_storage_id = false

# specify storage ids filename, can use relative or absolute path
# since V4.00
storage_ids_filename = storage_ids.conf

# id type of the storage server in the filename, values are:
## ip: the ip address of the storage server
## id: the server id of the storage server
# this parameter is valid only when use_storage_id set to true
# default value is ip
# since V4.03
id_type_in_filename = ip

# if store slave file use symbol link
# default value is false
# since V4.01
store_slave_file_use_link = false

# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time=00:00

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# HTTP port on this tracker server
http.server_port=8080

# check storage HTTP server alive interval seconds
# <= 0 for never check
# default value is 30
http.check_alive_interval=30

# check storage HTTP server alive type, values are:
#   tcp : connect to the storage server with HTTP port only,
#        do not request and get response
#   http: storage check alive url must return http status 200
# default value is tcp
http.check_alive_type=tcp

# check storage HTTP server alive uri/url
# NOTE: storage embed HTTP server support uri: /status.html
http.check_alive_uri=/status.html
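Since these dumps are long, a quick way to double-check which value is actually in effect for a given key (a minimal sketch; `conf_value` is a hypothetical helper, not part of FastDFS) is to pull it out of the file while skipping comment lines:

```python
def conf_value(path, key):
    """Return the last uncommented value assigned to `key` in a
    FastDFS-style `key=value` config file, or None if absent."""
    value = None
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("#") or "=" not in line:
                continue  # skip comments and non-assignment lines
            k, v = line.split("=", 1)
            if k.strip() == key:
                value = v.strip()  # last assignment wins
    return value

# e.g. conf_value("/etc/fdfs/tracker.conf", "download_server") should give "0"
# for the tracker.conf quoted above.
```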

#5 | Posted 2016-03-31 09:08
storage.conf contents:

# is this config file disabled
# false for enabled
# true for disabled
disabled=false

# the name of the group this storage server belongs to
#
# comment out or remove this item to fetch it from the tracker server;
# in that case, use_storage_id must be set to true in tracker.conf,
# and storage_ids.conf must be configured correctly.
group_name=group1

# bind an address of this host
# empty for bind all addresses of this host
bind_addr=

# if bind an address of this host when connect to other servers
# (this storage server as a client)
# true for binding the address configured by the above parameter: "bind_addr"
# false for binding any address of this host
client_bind=true

# the storage server port
port=23000

# connect timeout in seconds
# default value is 30s
connect_timeout=30

# network timeout in seconds
# default value is 30s
network_timeout=60

# heart beat interval in seconds
heart_beat_interval=30

# disk usage report interval in seconds
stat_report_interval=60

# the base path to store data and log files
base_path=/home/fastdfs/fastdfs_log

# max concurrent connections the server supported
# default value is 256
# more max_connections means more memory will be used
max_connections=256

# the buff size to recv / send data
# this parameter must be more than 8KB
# default value is 64KB
# since V2.00
buff_size = 256KB

# accept thread count
# default value is 1
# since V4.07
accept_threads=1

# work thread count, should <= max_connections
# work thread deal network io
# default value is 4
# since V2.00
work_threads=4

# if disk read / write separated
##  false for mixed read and write
##  true for separated read and write
# default value is true
# since V2.00
disk_rw_separated = true

# disk reader thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_reader_threads = 1

# disk writer thread count per store base path
# for mixed read / write, this parameter can be 0
# default value is 1
# since V2.00
disk_writer_threads = 1

# when no entry to sync, try read binlog again after X milliseconds
# must > 0, default value is 200ms
sync_wait_msec=50

# after sync a file, usleep milliseconds
# 0 for sync successively (never call usleep)
sync_interval=0

# storage sync start time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_start_time=00:00

# storage sync end time of a day, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
sync_end_time=23:59

# write to the mark file after sync N files
# default value is 500
write_mark_file_freq=500

# path(disk or mount point) count, default value is 1
store_path_count=1

# store_path#, based on 0; if store_path0 not exists, its value is base_path
# the paths must be exist
store_path0=/home/fastdfs/fastdfs_data
#store_path1=/home/yuqing/fastdfs2

# subdir_count  * subdir_count directories will be auto created under each
# store_path (disk), value can be 1 to 256, default value is 256
subdir_count_per_path=256

# tracker_server can occur more than once, and tracker_server format is
#  "host:port", host can be hostname or ip address
tracker_server=192.168.200.199:22122

#standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

#unix group name to run this program,
#not set (empty) means run by the group of current user
run_by_group=

#unix username to run this program,
#not set (empty) means run by current user
run_by_user=

# allow_hosts can occur more than once, host can be hostname or ip address,
# "*" (only one asterisk) means match all ip addresses
# we can use CIDR ips like 192.168.5.64/26
# and also use range like these: 10.0.1.[0-254] and host[01-08,20-25].domain.com
# for example:
# allow_hosts=10.0.1.[1-15,20]
# allow_hosts=host[01-08,20-25].domain.com
# allow_hosts=192.168.5.64/26
allow_hosts=*

# the mode of the files distributed to the data path
# 0: round robin(default)
# 1: random, distributed by hash code
file_distribute_path_mode=0

# valid when file_distribute_path_mode is set to 0 (round robin),
# when the written file count reaches this number, then rotate to next path
# default value is 100
file_distribute_rotate_count=100

# call fsync to disk when write big file
# 0: never call fsync
# other: call fsync when written bytes >= this bytes
# default value is 0 (never call fsync)
fsync_after_written_bytes=0

# sync log buff to disk every interval seconds
# must > 0, default value is 10 seconds
sync_log_buff_interval=10

# sync binlog buff / cache to disk every interval seconds
# default value is 60 seconds
sync_binlog_buff_interval=10

# sync storage stat info to disk every interval seconds
# default value is 300 seconds
sync_stat_file_interval=300

# thread stack size, should >= 512KB
# default value is 512KB
thread_stack_size=512KB

# the priority as a source server for uploading file.
# the lower this value, the higher its uploading priority.
# default value is 10
upload_priority=10

# the NIC alias prefix, such as eth in Linux, you can see it by ifconfig -a
# multi aliases split by comma. empty value means auto set by OS type
# default values is empty
if_alias_prefix=

# if check file duplicate, when set to true, use FastDHT to store file indexes
# 1 or yes: need check
# 0 or no: do not check
# default value is 0
check_file_duplicate=0

# file signature method for check file duplicate
## hash: four 32 bits hash code
## md5: MD5 signature
# default value is hash
# since V4.01
file_signature_method=hash

# namespace for storing file indexes (key-value pairs)
# this item must be set when check_file_duplicate is true / on
key_namespace=FastDFS

# set keep_alive to 1 to enable persistent connection with FastDHT servers
# default value is 0 (short connection)
keep_alive=0

# you can use "#include filename" (not include double quotes) directive to
# load FastDHT server list, when the filename is a relative path such as
# pure filename, the base path is the base path of current/this config file.
# must set FastDHT server list when check_file_duplicate is true / on
# please see INSTALL of FastDHT for detail
##include /home/yuqing/fastdht/conf/fdht_servers.conf

# if log to access log
# default value is false
# since V4.00
use_access_log = false

# if rotate the access log every day
# default value is false
# since V4.00
rotate_access_log = false

# rotate access log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.00
access_log_rotate_time=00:00

# if rotate the error log every day
# default value is false
# since V4.02
rotate_error_log = false

# rotate error log time base, time format: Hour:Minute
# Hour from 0 to 23, Minute from 0 to 59
# default value is 00:00
# since V4.02
error_log_rotate_time=00:00

# rotate access log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_access_log_size = 0

# rotate error log when the log file exceeds this size
# 0 means never rotates log file by log file size
# default value is 0
# since V4.02
rotate_error_log_size = 0

# keep days of the log files
# 0 means do not delete old log files
# default value is 0
log_file_keep_days = 0

# if skip the invalid record when sync file
# default value is false
# since V4.02
file_sync_skip_invalid_record=false

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# use the ip address of this storage server if domain_name is empty,
# else this domain name will occur in the url redirected by the tracker server
http.domain_name=

# the port of the web server on this storage server
http.server_port=8888

#6 | Posted 2016-04-01 10:35
Brother, I haven't run into your problem yet, because my uploads don't even succeed... I can't figure it out; could you take a look at mine?
Software versions:
libfastcommon-master
FastDFS V5.05
fastfdfs_nginx_module_master
nginx V1.9.1 (later switched to ngx_openresty-1.7.10.1)

[root@localhost fdfs]# ls
a.txt        http.conf   mod_fastdfs.conf  tracker.conf
client.conf  mime.types  storage.conf

[root@localhost fdfs]# fdfs_upload_file client.conf a.txt
[2016-04-01 10:11:33] ERROR - file: tracker_proto.c, line: 37, server: 192.168.1.9:23000, recv data fail, errno: 107, error info: Transport endpoint is not connected
upload file fail, error no: 107, error info: Transport endpoint is not connected
[root@localhost fdfs]#

The config files should be fine; I've gone over them several times. The first time I installed it the upload succeeded, but the next day uploads started failing, and the error even differs depending on the contents of a.txt:

[root@localhost fdfs]# fdfs_upload_file client.conf a.txt
[2016-04-01 10:10:59] ERROR - file: tracker_proto.c, line: 48, server: 192.168.1.9:23000, response status 22 != 0
upload file fail, error no: 22, error info: Invalid argument

The storage log shows:
[2016-04-01 09:51:46] ERROR - file: ../common/sockopt.c, line: 616, bind port 23000 failed, errno: 98, error info: Address already in use.
[2016-04-01 10:10:59] ERROR - file: storage_service.c, line: 1425, client ip: 192.168.1.9, pkg length is not correct, invalid file bytes: 8392585448953743978
[2016-04-01 10:11:33] ERROR - file: storage_service.c, line: 1425, client ip: 192.168.1.9, pkg length is not correct, invalid file bytes: 8392585448953743978

I never shut the machine down yesterday, yet it reports the port as already in use.
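On the "Address already in use" line: that usually means a previous fdfs_storaged (or some other process) is still holding port 23000. A quick, generic way to check from Python before reaching for `lsof -i :23000` or `ss -lntp` (this sketch is not FastDFS-specific, just a TCP connect probe):

```python
import socket

def port_in_use(host, port):
    """True if something is already listening and accepting on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on a successful connect, an errno otherwise
        return s.connect_ex((host, port)) == 0

print(port_in_use("127.0.0.1", 23000))
```

If this prints True before you start fdfs_storaged, find and stop the stale process first; otherwise the new daemon fails to bind and the client errors follow.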
This is the error log for uploading an empty file:
[2016-03-31 19:33:48] ERROR - file: storage_service.c, line: 1373, cmd=11, client ip: 192.168.1.9, package size 15 is not correct, expect length >= 23
[2016-03-31 19:36:09] ERROR - file: storage_service.c, line: 1425, client ip: 192.168.1.9, pkg length is not correct, invalid file bytes: 8392585448953743978

I suspect this is related to the nginx install. I built it as fdfs + nginx_module + nginx; the installation itself reported no errors (or rather, I resolved them all), but nginx refuses to start no matter what. The error is below.
Just confirmed the following:
[root@localhost logs]#  /usr/local/openresty/nginx/sbin/nginx
/usr/local/openresty/nginx/sbin/nginx: symbol lookup error: /usr/local/openresty/nginx/sbin/nginx: undefined symbol: g_fdfs_base_path

The above is from the openresty nginx + lua bundle; before that I built nginx from the plain source tarball and got the same symbol error at startup. Googling that symbol turns up nothing.
Could it be related to the nginx I previously installed with yum? I have uninstalled it, but a startup file was left behind at /etc/init.d/nginx, and running
systemctl start nginx directly still succeeds... I'm thoroughly confused. It didn't work yesterday.

#7 | Posted 2016-04-01 16:53
Reply to #6 duoduoluo_z


[2016-04-01 10:11:33] ERROR - file: tracker_proto.c, line: 37, server: 192.168.1.9:23000, recv data fail, errno: 107, error info: Transport endpoint is not connected
upload file fail, error no: 107, error info: Transport endpoint is not connected

This means the connection to the storage failed!
Check the tracker and storage logs and you should find the cause.

#8 | Posted 2016-04-01 17:00
Found the cause! I read through the FDFS load-balancing code; it turns out I had dug the hole for myself.
I had modified the fdfs_download_file.c code to query the file info once before each download.
The query operation went to storage1 and the download to storage2; since selection is round robin and my group has only two storages,
every download cycle brought the load balancing right back to its starting point. This mistake showed me a drawback of this round-robin scheme: every operation (such as a query)
also consumes one turn of the rotation.
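The pitfall described in this post is easy to reproduce with a toy model (illustrative pseudo-logic only, not the actual FastDFS tracker code): with two storages and two tracker requests per download cycle (file-info query + download), the rotation advances a full cycle each time, so the download step always lands on the same server.

```python
STORAGES = ["storage1", "storage2"]
_counter = 0

def next_storage():
    """Toy round-robin selector: every request advances the pointer."""
    global _counter
    s = STORAGES[_counter % len(STORAGES)]
    _counter += 1
    return s

for i in range(4):
    q = next_storage()  # the extra file-info query added to fdfs_download_file.c
    d = next_storage()  # the download itself
    print(f"cycle {i}: query -> {q}, download -> {d}")
```

Every cycle prints the same pair, because 2 requests per cycle modulo 2 storages is a no-op on the rotation; with 3 storages (or an odd number of requests per cycle) the assignments would rotate as expected.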

#9 | Posted 2016-04-05 10:27
This is the storage.log output:
[2016-04-05 09:56:42] INFO - file: storage_param_getter.c, line: 225, storage_ip_changed_auto_adjust=1
[2016-04-05 09:56:42] INFO - file: tracker_client_thread.c, line: 257, successfully connect to tracker server 192.168.1.9:22122, as a tracker client, my ip is 192.168.1.9
[2016-04-05 10:00:14] ERROR - file: storage_service.c, line: 1425, client ip: 192.168.1.9, pkg length is not correct, invalid file bytes: 7669743415109681112

The second line already shows a successful connection, but is that only the connection to the tracker? What about the connection to the client? That's all the information there is. Could it be a storage.conf configuration problem?

One more oddity: tracker.log reports INFO - FastDFS v1.27, yet I definitely installed 5.08. Strange.
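A side note on the huge "invalid file bytes" values in the storage logs above: if you decode them as the raw 8 bytes the server actually read (a quick illustrative check, assuming big-endian network byte order), they are clearly not a length at all; they look like a 6-byte file-extension field ("txt"/"jpg" padded with NULs) followed by the first bytes of the file content (FF D8 is the JPEG start-of-image marker). That would mean the server is reading the length field from the wrong offset in the request, which points the same way as the v1.27-versus-5.08 observation: the client tools and the server may not be speaking the same protocol layout.

```python
import struct

# The two bogus "invalid file bytes" values from the storage logs,
# re-interpreted as the raw 8 bytes on the wire (big-endian int64).
for n in (8392585448953743978, 7669743415109681112):
    print(struct.pack(">q", n))
# b'txt\x00\x00\x00\nj'       <- "txt" extension field + 2 bytes of a.txt
# b'jpg\x00\x00\x00\xff\xd8'  <- "jpg" extension field + JPEG magic FF D8
```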

#10 | Posted 2016-04-05 11:13
Uploading with fdfs_test client.conf upload 仓鼠1.jpg appears to succeed:

[root@localhost fdfs]# fdfs_test client.conf upload 仓鼠1.jpg
This is FastDFS client test program v1.27

Copyright (C) 2008, Happy Fish / YuQing

FastDFS may be copied only under the terms of the GNU General
Public License V3, which may be found in the FastDFS source kit.
Please visit the FastDFS Home Page
for more detail.

base_path=/data/fdfs, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0
group_name=group1, ip_addr=192.168.1.9, port=23000
storage_upload_by_filename
group_name=group1, remote_filename=M00/00/00/wKgBCVcDK-kAAAAAAAAcyhXliqg768.jpg
source ip address: 192.168.1.9
file timestamp=2016-04-05 11:07:21
file size=7370
file url: http://192.168.1.9/group1/M00/00/00/wKgBCVcDK-kAAAAAAAAcyhXliqg768.jpg
storage_upload_slave_by_filename
group_name=group1, remote_filename=M00/00/00/wKgBCVcDK-kAAAAAAAAcyhXliqg768_big.jpg
source ip address: 192.168.1.9
file timestamp=2016-04-05 11:07:21
file size=7370
file url: http://192.168.1.9/group1/M00/00/00/wKgBCVcDK-kAAAAAAAAcyhXliqg768_big.jpg

But entering that address in a browser gives 400 Bad Request.
What is going on here? Did the upload actually succeed or not?
Also, does V5.08 ship with a PHP extension? And would the java-client work instead?
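One thing worth checking for the 400: the printed URL has no port, so the browser hits port 80, while storage.conf above sets http.server_port=8888. The files are normally served over HTTP by nginx with the fastdfs-nginx-module (the built-in storage web server was dropped in later FastDFS versions), so the URL must point at whatever HTTP server is actually listening. A sketch of how the access URL is assembled from the fdfs_test output (host and port taken from the configs quoted in this thread; whether 8888 is really listening depends on your setup):

```python
# Values from the upload output and storage.conf quoted above.
storage_ip = "192.168.1.9"
http_port = 8888  # http.server_port in storage.conf
file_id = "group1/M00/00/00/wKgBCVcDK-kAAAAAAAAcyhXliqg768.jpg"

url = f"http://{storage_ip}:{http_port}/{file_id}"
print(url)
# http://192.168.1.9:8888/group1/M00/00/00/wKgBCVcDK-kAAAAAAAAcyhXliqg768.jpg
```

If nothing answers on that port either, the nginx + fastdfs-nginx-module problem from post #6 has to be fixed first; the upload itself (the file exists under store_path0) is a separate question from serving it over HTTP.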