taowu posted on 2016-08-30 17:14

A question for Yu about disk capacity expansion in a FastDFS file system

Hi Yu: we currently have two servers, one running tracker server + storage and one running storage only, on FastDFS_v4.06 with fastdfs-nginx-module and nginx-1.2.6. Both servers' disks are now full, and we want to expand capacity by replacing the disks wholesale with larger ones.

Here is my expansion plan. Could you take a look and tell me whether there are any problems, or suggest a better approach? Thanks!


Step 1: bring in a brand-new large-capacity server as an additional storage server for the existing group, and let it sync the data over by itself. The cluster temporarily grows to three machines.
Step 2: take one of the original storage servers out of the group, upgrade its disks, then add it back into the group and let the data sync over again.
Step 3: retire the last small-disk server. Reconfigure the two upgraded servers so that one runs tracker server + storage and the other runs storage only.
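Step 1 boils down to starting fdfs_storaged on the new machine with a storage.conf that names the same group and tracker as the existing nodes; the tracker then adds it to group1 and the old storages sync their data over. A sketch of the relevant lines (values taken from the log output later in this thread; everything else keeps the sample file's defaults):

```
# storage.conf on the new large-capacity server (sketch)
group_name=group1                  # join the existing group
tracker_server=10.0.120.55:22122   # same tracker as the current nodes
port=23000
base_path=/home/fastdfs
store_path0=/home/fastdfs          # mount the new, larger volume here
```

Sync progress can be watched with fdfs_monitor; wait for the new node to reach ACTIVE status before starting step 2, and again before removing the last small server in step 3.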

duoduoluo_z posted on 2016-08-31 10:01

Can a cloud server be expanded on the fly? What about just adding disks? (newbie question)

happy_fish100 posted on 2016-08-31 10:22

Reply to #1 taowu

Yes, you can expand capacity the way you described.

taowu posted on 2016-08-31 21:23

[root@storage3.mpay.com logs]# cat storaged.log
INFO - FastDFS v4.06, base_path=/home/fastdfs, store_path_count=1, subdir_count_per_path=256, group_name=group1, run_by_group=, run_by_user=, connect_timeout=30s, network_timeout=60s, port=23000, bind_addr=10.0.120.57, client_bind=1, max_connections=10000, work_threads=4, disk_rw_separated=1, disk_reader_threads=1, disk_writer_threads=1, buff_size=256KB, heart_beat_interval=30s, stat_report_interval=60s, tracker_server_count=1, sync_wait_msec=50ms, sync_interval=0ms, sync_start_time=00:00, sync_end_time=23:59, write_mark_file_freq=500, allow_ip_count=-1, file_distribute_path_mode=0, file_distribute_rotate_count=100, fsync_after_written_bytes=0, sync_log_buff_interval=10s, sync_binlog_buff_interval=10s, sync_stat_file_interval=300s, thread_stack_size=512 KB, upload_priority=10, if_alias_prefix=, check_file_duplicate=0, file_signature_method=hash, FDHT group count=0, FDHT server count=0, FDHT key_namespace=, FDHT keep_alive=0, HTTP server port=8888, domain name=, use_access_log=0, rotate_access_log=0, access_log_rotate_time=00:00, rotate_error_log=0, error_log_rotate_time=00:00, rotate_access_log_size=0, rotate_error_log_size=0, file_sync_skip_invalid_record=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s
INFO - file: storage_param_getter.c, line: 187, use_storage_id=0, id_type_in_filename=ip, storage_ip_changed_auto_adjust=1, store_path=0, reserved_storage_space=10.00%, use_trunk_file=0, slot_min_size=256, slot_max_size=16 MB, trunk_file_size=64 MB, trunk_create_file_advance=0, trunk_create_file_time_base=02:00, trunk_create_file_interval=86400, trunk_create_file_space_threshold=20 GB, trunk_init_check_occupying=0, trunk_init_reload_from_binlog=0, store_slave_file_use_link=0
INFO - file: storage_func.c, line: 174, tracker_client_ip: 10.0.120.57, my_server_id_str: 10.0.120.57, g_server_id_in_filename: 964165642
INFO - local_host_ip_count: 2,127.0.0.110.0.120.57
INFO - file: tracker_client_thread.c, line: 308, successfully connect to tracker server 10.0.120.55:22122, as a tracker client, my ip is 10.0.120.57
INFO - file: tracker_client_thread.c, line: 1124, tracker server 10.0.120.55:22122, set tracker leader: 10.0.120.55:22122
INFO - file: storage_sync.c, line: 2698, successfully connect to storage server 10.0.120.55:23000
INFO - file: storage_sync.c, line: 2698, successfully connect to storage server 10.0.120.56:23000
ERROR - file: storage_nio.c, line: 404, client ip: 10.0.120s10.0.120.55, recv failed, errno: 88, error info: Socket operation on non-socket
INFO - FastDFS v4.06, base_path=/home/fastdfs, store_path_count=1, subdir_count_per_path=256, group_name=group1, run_by_group=, run_by_user=, connect_timeout=30s, network_timeout=60s, port=23000, bind_addr=10.0.120.57, client_bind=1, max_connections=10000, work_threads=4, disk_rw_separated=1, disk_reader_threads=1, disk_writer_threads=1, buff_size=256KB, heart_beat_interval=30s, stat_report_interval=60s, tracker_server_count=1, sync_wait_msec=50ms, sync_interval=0ms, sync_start_time=00:00, sync_end_time=23:59, write_mark_file_freq=500, allow_ip_count=-1, file_distribute_path_mode=0, file_distribute_rotate_count=100, fsync_after_written_bytes=0, sync_log_buff_interval=10s, sync_binlog_buff_interval=10s, sync_stat_file_interval=300s, thread_stack_size=512 KB, upload_priority=10, if_alias_prefix=, check_file_duplicate=0, file_signature_method=hash, FDHT group count=0, FDHT server count=0, FDHT key_namespace=, FDHT keep_alive=0, HTTP server port=8888, domain name=, use_access_log=0, rotate_access_log=0, access_log_rotate_time=00:00, rotate_error_log=0, error_log_rotate_time=00:00, rotate_access_log_size=0, rotate_error_log_size=0, file_sync_skip_invalid_record=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s
INFO - file: storage_param_getter.c, line: 187, use_storage_id=0, id_type_in_filename=ip, storage_ip_changed_auto_adjust=1, store_path=0, reserved_storage_space=10.00%, use_trunk_file=0, slot_min_size=256, slot_max_size=16 MB, trunk_file_size=64 MB, trunk_create_file_advance=0, trunk_create_file_time_base=02:00, trunk_create_file_interval=86400, trunk_create_file_space_threshold=20 GB, trunk_init_check_occupying=0, trunk_init_reload_from_binlog=0, store_slave_file_use_link=0
INFO - file: storage_func.c, line: 174, tracker_client_ip: 10.0.120.57, my_server_id_str: 10.0.120.57, g_server_id_in_filename: 964165642
INFO - local_host_ip_count: 2,127.0.0.110.0.120.57
ERROR - file: storage_nio.c, line: 404, client ip: 10.0.120s10.0.120.55, recv failed, errno: 88, error info: Socket operation on non-socket
INFO - file: tracker_client_thread.c, line: 308, successfully connect to tracker server 10.0.120.55:22122, as a tracker client, my ip is 10.0.120.57
INFO - file: tracker_client_thread.c, line: 1124, tracker server 10.0.120.55:22122, set tracker leader: 10.0.120.55:22122

Startup seems to have hit an error and the storaged process exited. Yu, could you help me see where it went wrong?

taowu posted on 2016-08-31 21:44

I reinstalled and compiled libevent-1.4.14b-stable, but then compiling FastDFS seems to have failed:
[root@storage3.mpay.com FastDFS]# ./make.sh install
mkdir -p /usr/local/bin
mkdir -p /etc/fdfs
cp -f fdfs_trackerd /usr/local/bin
cp: cannot stat `fdfs_trackerd': No such file or directory
make: *** Error 1
mkdir -p /usr/local/bin
mkdir -p /etc/fdfs
cp -f fdfs_storaged /usr/local/bin
cp: cannot stat `fdfs_storaged': No such file or directory
make: *** Error 1
mkdir -p /usr/local/bin
mkdir -p /etc/fdfs
mkdir -p /usr/local/lib

taowu posted on 2016-08-31 21:45

make.sh reported these errors:

/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/libevent.so when searching for -levent
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../lib64/libevent.a when searching for -levent
/usr/bin/ld: skipping incompatible /lib/../lib64/libevent.so when searching for -levent
/usr/bin/ld: skipping incompatible /lib/../lib64/libevent.a when searching for -levent
/usr/bin/ld: skipping incompatible /usr/lib/../lib64/libevent.so when searching for -levent
/usr/bin/ld: skipping incompatible /usr/lib/../lib64/libevent.a when searching for -levent
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../libevent.so when searching for -levent
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../libevent.a when searching for -levent
/usr/bin/ld: skipping incompatible /lib64/libevent.so when searching for -levent
/usr/bin/ld: skipping incompatible /lib64/libevent.a when searching for -levent
/usr/bin/ld: skipping incompatible /usr/lib64/libevent.so when searching for -levent
/usr/bin/ld: skipping incompatible /usr/lib64/libevent.a when searching for -levent
/usr/bin/ld: skipping incompatible /usr/lib64/libevent.so when searching for -levent
/usr/bin/ld: skipping incompatible /usr/lib64/libevent.a when searching for -levent
/usr/bin/ld: skipping incompatible /lib/libevent.so when searching for -levent
/usr/bin/ld: skipping incompatible /lib/libevent.a when searching for -levent
/usr/bin/ld: skipping incompatible /usr/lib/libevent.so when searching for -levent
/usr/bin/ld: skipping incompatible /usr/lib/libevent.a when searching for -levent
/usr/bin/ld: cannot find -levent
collect2: ld returned 1 exit status
make: *** Error 1

taowu posted on 2016-09-01 09:21

The libevent build problem from last night inexplicably compiled successfully this morning, and data sync is now underway.

duoduoluo_z posted on 2016-09-01 16:19

Reply to #3 happy_fish100

Yu, I would like to customize the upload directory in FastDFS: instead of cycling through auto-generated directories, create a directory from some parameter (e.g. a userid) and upload files under it. Does this require extensive source-code changes? Or is there some way to keep the fdfs server out of directory creation entirely, so it only distributes uploads across groups while still returning the correct path?
I see that storage.conf can configure the number of auto-generated directories, with a minimum of 1x1. Can this feature simply be skipped?
Hoping for some pointers. Thanks!
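For context, the behaviour being asked about is governed by two storage.conf options (their current values are visible in the log output earlier in this thread):

```
# storage.conf options that decide where an uploaded file lands (sketch)
file_distribute_path_mode=0    # 0: round-robin across subdirs, 1: random
subdir_count_per_path=256      # an N x N grid of subdirectories per store path
```

Setting subdir_count_per_path=1 collapses the grid to a single fixed directory, but the storage server still chooses the path itself; having it derive the directory from a caller-supplied userid is not a configuration option and would mean modifying the storage server source.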