
FastDFS Distributed File System: Installation and Configuration


FastDFS is a distributed file system. A file uploaded to one of its storage servers is replicated to the other storage nodes in the same group, giving the cluster high availability.

How FastDFS handles a file upload:

1. The client asks the tracker for a storage server to upload to; no extra parameters are needed.
2. The tracker returns an available storage server.
3. The client communicates directly with that storage server to complete the upload.

The client initiates a transfer by connecting to the designated port of a Tracker Server. Based on the state information it currently holds, the Tracker Server selects a Storage Server and returns that server's address and related details to the client. The client then uses this information to connect to the Storage Server and send it the file being uploaded.

How FastDFS handles a file download:

1. The client asks the tracker for the storage server to download from, passing the file ID (group name and file name) as the parameter.
2. The tracker returns an available storage server.
3. The client communicates directly with that storage server to complete the download.
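Both round trips can be exercised with the command-line tools FastDFS installs under /usr/bin (set up later in this guide). A minimal sketch, assuming a client configuration already exists at /etc/fdfs/client.conf (created in the upload-test section below); the file ID shown is a placeholder:

# fdfs_upload_file /etc/fdfs/client.conf /path/to/local.file
group1/M00/00/00/xxxxxxxxxxxx.file
# fdfs_download_file /etc/fdfs/client.conf group1/M00/00/00/xxxxxxxxxxxx.file /tmp/local.copy
# fdfs_file_info /etc/fdfs/client.conf group1/M00/00/00/xxxxxxxxxxxx.file

Each command first asks a tracker listed in client.conf, then talks directly to the storage server the tracker returns, which is exactly the interaction sequence described above.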

Installing FastDFS

Packages used in this guide:

FastDFS v5.05
libfastcommon-master.zip (a common C library extracted from FastDFS and FastDHT)
fastdfs-nginx-module_v1.16.tar.gz
nginx-1.11.5.tar.gz
fastdfs_client_java._v1.25.tar.gz
ngx_cache_purge-2.3.tar.gz

1. Install the build dependencies first:

yum install make cmake gcc gcc-c++

2. Install libfastcommon:
(1) Upload or download libfastcommon-master.zip to any directory on the server.
(2) Unpack it:
# unzip libfastcommon-master.zip
# cd libfastcommon-master

(3) Compile and install:
# ./make.sh
# ./make.sh install
By default, libfastcommon installs to /usr/lib64/libfastcommon.so and /usr/lib64/libfdfsclient.so.

(4) The FastDFS main program expects its libraries under /usr/local/lib, so create symbolic links:
# ln -s /usr/lib64/libfastcommon.so /usr/local/lib/libfastcommon.so
# ln -s /usr/lib64/libfastcommon.so /usr/lib/libfastcommon.so
# ln -s /usr/lib64/libfdfsclient.so /usr/local/lib/libfdfsclient.so
# ln -s /usr/lib64/libfdfsclient.so /usr/lib/libfdfsclient.so
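An optional sanity check that the links resolve to the installed libraries:

# ls -l /usr/local/lib/libfastcommon.so /usr/local/lib/libfdfsclient.so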

3. Install FastDFS:
(1) Upload or download the FastDFS source package (FastDFS_v5.05.tar.gz) to any directory on the server.
(2) Unpack it:
# tar -zxvf FastDFS_v5.05.tar.gz
# cd FastDFS

(3) Compile and install (make sure libfastcommon was installed successfully first):
# ./make.sh
# ./make.sh install
With the default installation, the relevant files and directories are:
A. Service scripts:
/etc/init.d/fdfs_storaged
/etc/init.d/fdfs_trackerd
B. Sample configuration files:
/etc/fdfs/client.conf.sample
/etc/fdfs/storage.conf.sample
/etc/fdfs/tracker.conf.sample
C. Command-line tools, under /usr/bin/:
fdfs_appender_test fdfs_appender_test1 fdfs_append_file fdfs_crc32 fdfs_delete_file fdfs_download_file fdfs_file_info fdfs_monitor fdfs_storaged fdfs_test fdfs_test1 fdfs_trackerd fdfs_upload_appender fdfs_upload_file stop.sh restart.sh

(4) The FastDFS service scripts assume the binaries live in /usr/local/bin, but the commands are actually installed in /usr/bin. You can list the fdfs commands with:
# cd /usr/bin/
# ls | grep fdfs

which prints:

fdfs_appender_test fdfs_appender_test1 fdfs_append_file fdfs_crc32 fdfs_delete_file fdfs_download_file fdfs_file_info fdfs_monitor fdfs_storaged fdfs_test fdfs_test1 fdfs_trackerd fdfs_upload_appender fdfs_upload_file

So the command paths in the FastDFS service scripts need to be corrected: in both /etc/init.d/fdfs_storaged and /etc/init.d/fdfs_trackerd, change /usr/local/bin to /usr/bin:

cd /etc/init.d/

vim fdfs_trackerd

Apply the change across the whole file with vim's search-and-replace command: :%s+/usr/local/bin+/usr/bin

vim fdfs_storaged

Apply the same search-and-replace here: :%s+/usr/local/bin+/usr/bin

Note: everything above is required whether the node will run a tracker or a storage server; the two roles only diverge in the configuration steps that follow.
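The same substitution can also be applied non-interactively with sed, which is handy when preparing several nodes (equivalent to the vim replacements above):

# sed -i 's+/usr/local/bin+/usr/bin+g' /etc/init.d/fdfs_trackerd /etc/init.d/fdfs_storaged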

Configuring the FastDFS Tracker

1. Copy the sample tracker configuration file and rename it:
# cd /etc/fdfs/

# cp tracker.conf.sample tracker.conf

2. Edit the tracker configuration file:
# vim tracker.conf
Change the following values:
disabled=false             # enable this configuration file
port=22122                 # tracker port; the default 22122 is usually kept
base_path=/home/fastdfs/tracker  # tracker data and log directory; on a cloud host (e.g. Alibaba Cloud), put this on the mounted data disk, such as /mnt/fastdfs/tracker

3. Create the base data directory (matching the base_path setting):
# mkdir -p /home/fastdfs/tracker

4. Start the tracker:
[root@host1 fdfs]# cd /etc/init.d/

[root@host1 init.d]# ./fdfs_trackerd start
Starting FastDFS tracker server:
(On the first successful start, the data and logs directories are created under /home/fastdfs/tracker.) You can check that the tracker started successfully in either of two ways:

(1) Check that port 22122 is listening:

[root@host1 init.d]# netstat -anpl | grep 22122
tcp 0 0 0.0.0.0:22122 0.0.0.0:* LISTEN 25142/fdfs_trackerd

(2) Check the tracker startup log for errors:

[root@host1 /]# cd /home/fastdfs/tracker/logs/

[root@host1 logs]# tail -100f trackerd.log
[2018-11-13 11:16:54] INFO - FastDFS v5.05, base_path=/home/fastdfs/tracker, run_by_group=, run_by_user=, connect_timeout=30s, network_timeout=60s, port=22122, bind_addr=, max_connections=256, accept_threads=1, work_threads=4, store_lookup=2, store_group=, store_server=0, store_path=0, reserved_storage_space=10.00%, download_server=0, allow_ip_count=-1, sync_log_buff_interval=10s, check_active_interval=120s, thread_stack_size=64 KB, storage_ip_changed_auto_adjust=1, storage_sync_file_max_delay=86400s, storage_sync_file_max_time=300s, use_trunk_file=0, slot_min_size=256, slot_max_size=16 MB, trunk_file_size=64 MB, trunk_create_file_advance=0, trunk_create_file_time_base=02:00, trunk_create_file_interval=86400, trunk_create_file_space_threshold=20 GB, trunk_init_check_occupying=0, trunk_init_reload_from_binlog=0, trunk_compress_binlog_min_interval=0, use_storage_id=0, id_type_in_filename=ip, storage_id_count=0, rotate_error_log=0, error_log_rotate_time=00:00, rotate_error_log_size=0, log_file_keep_days=0, store_slave_file_use_link=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s
[2018-11-13 11:16:54] INFO - local_host_ip_count: 5, 127.0.0.1 192.168.5.129 192.168.122.1 172.17.0.1 172.18.0.1

5. Stop the tracker:

[root@host1 logs]# cd /etc/init.d/

[root@host1 init.d]# ./fdfs_trackerd stop
stopping fdfs_trackerd ...

6. Configure the tracker to start on boot:

[root@host1 /]# cd /etc/rc.d

[root@host1 rc.d]# vim rc.local

Add the following:
## FastDFS Tracker
/etc/init.d/fdfs_trackerd start
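On distributions where systemd manages boot (CentOS 7, for example), rc.local only runs at startup if it is executable, so it is worth making sure of that (harmless on other setups):

# chmod +x /etc/rc.d/rc.local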

Configuring FastDFS Storage

1. Copy the sample storage configuration file and rename it:
# cd /etc/fdfs/

# cp storage.conf.sample storage.conf

2. Edit the storage configuration file (using the storage.conf of a group1 node as the example):
# vi /etc/fdfs/storage.conf
Change the following values:
disabled=false             # enable this configuration file
group_name=group1          # group name (the first group is group1, the second group2, and so on)
port=23000                 # storage port; every storage server in the same group must use the same port
base_path=/home/fastdfs/storage  # storage log directory; same cloud-disk note as for the tracker
store_path0=/home/fastdfs/storage  # storage path
store_path_count=1         # number of storage paths; must match the number of store_path entries
tracker_server=192.168.1.131:22122  # tracker server IP address and port
tracker_server=192.168.1.132:22122  # add one line per additional tracker
http.server_port=8888      # HTTP port
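A quick way to review just the values you changed (a plain grep over the file; the key names match the settings above):

# grep -E '^(disabled|group_name|port|base_path|store_path0|store_path_count|tracker_server|http.server_port)' /etc/fdfs/storage.conf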

3. Create the base data directory (matching the base_path setting):
# mkdir -p /home/fastdfs/storage

4. Start the storage server:

[root@host1 fdfs]# cd /etc/init.d/

[root@host1 init.d]# ./fdfs_storaged start
Starting FastDFS storage server:

The tracker must be running before a storage node is started, otherwise the storage node will not start. On the first successful start, a data directory and a logs directory are created under /home/fastdfs/storage. After each node starts, watch the storage log with tail -f /home/fastdfs/storage/logs/storaged.log: you will see the storage node connect to the trackers and report which tracker is the leader, and you will also see log entries as other nodes in the same group join.

[root@host1 logs]# tail -100f storaged.log
mkdir data path: A4 ...
mkdir data path: A5 ...
(... one "mkdir data path" line per two-level subdirectory, continuing through ...)
mkdir data path: FF ...
data path: /home/fastdfs/storage/data, mkdir sub dir done.
[2018-11-13 13:41:29] INFO - file: storage_param_getter.c, line: 191, use_storage_id=0, id_type_in_filename=ip, storage_ip_changed_auto_adjust=1, store_path=0, reserved_storage_space=10.00%, use_trunk_file=0, slot_min_size=256, slot_max_size=16 MB, trunk_file_size=64 MB, trunk_create_file_advance=0, trunk_create_file_time_base=02:00, trunk_create_file_interval=86400, trunk_create_file_space_threshold=20 GB, trunk_init_check_occupying=0, trunk_init_reload_from_binlog=0, trunk_compress_binlog_min_interval=0, store_slave_file_use_link=0
[2018-11-13 13:41:29] INFO - file: storage_func.c, line: 254, tracker_client_ip: 192.168.5.129, my_server_id_str: 192.168.5.129, g_server_id_in_filename: -2130335552
[2018-11-13 13:41:29] INFO - local_host_ip_count: 5, 127.0.0.1 192.168.5.129 192.168.122.1 172.17.0.1 172.18.0.1
[2018-11-13 13:41:30] INFO - file: tracker_client_thread.c, line: 310, successfully connect to tracker server 192.168.5.129:22122, as a tracker client, my ip is 192.168.5.129
[2018-11-13 13:42:00] INFO - file: tracker_client_thread.c, line: 1235, tracker server 192.168.5.129:22122, set tracker leader: 192.168.5.129:22122
[2018-11-13 13:42:30] INFO - file: storage_sync.c, line: 2698, successfully connect to storage server 192.168.5.182:23000
[2018-11-13 13:43:00] INFO - file: storage_sync.c, line: 2698, successfully connect to storage server 192.168.5.182:23000

Check that port 23000 is listening:

[root@host1 init.d]# netstat -anpl | grep 23000
tcp 0 0 0.0.0.0:23000 0.0.0.0:* LISTEN 29945/fdfs_storaged

Once all Storage nodes are up, you can view the cluster state from any Storage node with:

[root@host1 logs]# fdfs_monitor /etc/fdfs/storage.conf
[2018-11-13 13:48:47] DEBUG - base_path=/home/fastdfs/storage, connect_timeout=30, network_timeout=60, tracker_server_count=1, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=0, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0

server_count=1, server_index=0

tracker server is 192.168.5.129:22122

group count: 1

Group 1:
group name = group1
disk total space = 408048 MB
disk free space = 393235 MB
trunk free space = 0 MB
storage server count = 2
active server count = 2
storage server port = 23000
storage HTTP port = 8889
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0

Storage 1:
id = 192.168.5.129
ip_addr = 192.168.5.129 (host1) ACTIVE
http domain =
version = 5.05
join time = 2018-11-13 13:40:28
up time = 2018-11-13 13:40:28
total storage = 408048 MB
free storage = 398402 MB
upload priority = 10
store_path_count = 1
subdir_count_per_path = 256
storage_port = 23000
storage_http_port = 8889
current_write_path = 0
source storage id =
if_trunk_server = 0
connection.alloc_count = 256
connection.current_count = 1
connection.max_count = 1
total_upload_count = 0, success_upload_count = 0
total_append_count = 0, success_append_count = 0
total_modify_count = 0, success_modify_count = 0
total_truncate_count = 0, success_truncate_count = 0
total_set_meta_count = 0, success_set_meta_count = 0
total_delete_count = 0, success_delete_count = 0
total_download_count = 0, success_download_count = 0
total_get_meta_count = 0, success_get_meta_count = 0
total_create_link_count = 0, success_create_link_count = 0
total_delete_link_count = 0, success_delete_link_count = 0
total_upload_bytes = 0, success_upload_bytes = 0
total_append_bytes = 0, success_append_bytes = 0
total_modify_bytes = 0, success_modify_bytes = 0
stotal_download_bytes = 0, success_download_bytes = 0
total_sync_in_bytes = 0, success_sync_in_bytes = 0
total_sync_out_bytes = 0, success_sync_out_bytes = 0
total_file_open_count = 0, success_file_open_count = 0
total_file_read_count = 0, success_file_read_count = 0
total_file_write_count = 0, success_file_write_count = 0
last_heart_beat_time = 2018-11-13 13:48:29
last_source_update = 1970-01-01 08:00:00
last_sync_update = 1970-01-01 08:00:00
last_synced_timestamp = 1970-01-01 08:00:00

Storage 2:
id = 192.168.5.182
ip_addr = 192.168.5.182 (host2) ACTIVE
http domain =
version = 5.05
join time = 2018-11-13 13:42:20
up time = 2018-11-13 13:42:20
total storage = 408048 MB
free storage = 393235 MB
upload priority = 10
store_path_count = 1
subdir_count_per_path = 256
storage_port = 23000
storage_http_port = 8889
current_write_path = 0
source storage id = 192.168.5.129
if_trunk_server = 0
connection.alloc_count = 256
connection.current_count = 1
connection.max_count = 1
(all upload/download/sync counters are 0, exactly as for Storage 1)
last_heart_beat_time = 2018-11-13 13:48:22
last_source_update = 1970-01-01 08:00:00
last_sync_update = 1970-01-01 08:00:00
last_synced_timestamp = 1970-01-01 08:00:00

The fdfs_monitor command is in /usr/bin/, so it can be invoked from anywhere. A storage node whose state shows ACTIVE has successfully joined the cluster.

5. Stop the storage server:
# /etc/init.d/fdfs_storaged stop

6. Configure the storage server to start on boot:
# vim /etc/rc.d/rc.local
Add:
## FastDFS Storage
/etc/init.d/fdfs_storaged start

Testing a File Upload

1. Edit the client configuration file on the Tracker server:
# cp /etc/fdfs/client.conf.sample /etc/fdfs/client.conf
# vim /etc/fdfs/client.conf
base_path=/home/fastdfs/tracker
tracker_server=192.168.1.131:22122
tracker_server=192.168.1.132:22122
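fdfs_monitor also accepts this client configuration, which makes a convenient pre-flight check that the tracker_server entries are reachable before uploading (assuming the trackers are already running):

# fdfs_monitor /etc/fdfs/client.conf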

2. Run the upload command:

[root@host1 soft]# fdfs_upload_file /etc/fdfs/client.conf ./jdk-8u45-linux-x64.tar.gz
group1/M00/00/00/wKgFgVvqbfWARXiRClPqSv5X14U.tar.gz

(If a file ID like the one above is returned, the upload succeeded.) The ID is made up of the group name (group1), a virtual disk path (M00, which maps to store_path0), two levels of data subdirectories (00/00), and the generated file name.

The fdfs_upload_file command is in /usr/bin/, so it can be run from anywhere.

We can now look for the file on both storage servers:

[root@host2 data]# cd 00/00
[root@host2 00]# ll
total 169212
-rw-r--r-- 1 root root 173271626 Nov 13 14:23 wKgFgVvqbfWARXiRClPqSv5X14U.tar.gz
[root@host2 00]# pwd
/home/fastdfs/storage/data/00/00

That is the file on host2. And on host1:
[root@host1 data]# cd 00/00
[root@host1 00]# ll
total 169212
-rw-r--r-- 1 root root 173271626 Nov 13 14:23 wKgFgVvqbfWARXiRClPqSv5X14U.tar.gz
[root@host1 00]# pwd
/home/fastdfs/storage/data/00/00

So a single upload leaves the file on both storage servers, under the same file ID: replication is working.

Installing Nginx on Each Storage Node

1. What fastdfs-nginx-module is for. FastDFS places files on Storage servers via the Tracker, but storage servers in the same group have to replicate files to one another, which introduces a synchronization delay. Suppose the Tracker sends an upload to 192.168.1.135; as soon as the upload finishes, the file ID is returned to the client. If the client immediately uses that ID to fetch the file from 192.168.1.136, before replication to that node has completed, the access fails. fastdfs-nginx-module fixes this by redirecting such requests to the source storage server, so clients never hit "file not found" errors caused by replication lag. (The unpacked fastdfs-nginx-module is used when building Nginx below.)
2. Upload fastdfs-nginx-module_v1.16.tar.gz to any directory and unpack it:
# tar -zxvf fastdfs-nginx-module_v1.16.tar.gz
3. Edit the fastdfs-nginx-module config file:
# vim fastdfs-nginx-module/src/config
Change
CORE_INCS="$CORE_INCS /usr/local/include/fastdfs /usr/local/include/fastcommon/"
to
CORE_INCS="$CORE_INCS /usr/include/fastdfs /usr/include/fastcommon/"
(This path change matters: if you skip it, the Nginx build will fail.)
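If you prefer to script that edit, the same substitution can be applied with sed (equivalent to the manual change above; inspect the file afterwards to be sure):

# sed -i 's|/usr/local/include|/usr/include|g' fastdfs-nginx-module/src/config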

4. Upload the current stable Nginx release (nginx-1.11.5.tar.gz) to any directory on the server (here, /home/soft).

5. Install the packages needed to build Nginx:
# yum install gcc gcc-c++ make automake autoconf libtool pcre pcre-devel zlib zlib-devel openssl openssl-devel

6. Compile and install Nginx (adding the fastdfs-nginx-module module):

If Nginx is already installed on this machine, you must remove it first. Save a copy of your nginx.conf, then:

ps -ef | grep nginx

kill <nginx PID>

cd /usr/local

rm -rf nginx

If this machine runs both the tracker and a storage server, skip this for now and do step 7 first; if it is a storage-only server, continue:

# tar -zxvf nginx-1.11.5.tar.gz
# cd nginx-1.11.5
# ./configure --prefix=/usr/local/nginx --add-module=/home/soft/fastdfs-nginx-module/src
# make && make install
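One way to confirm the module made it into the binary is to print the configure arguments Nginx was built with (nginx -V reports them):

# /usr/local/nginx/sbin/nginx -V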

7. Copy the configuration file from the fastdfs-nginx-module sources to /etc/fdfs and edit it:
# cp /home/soft/fastdfs-nginx-module/src/mod_fastdfs.conf /etc/fdfs/
# vi /etc/fdfs/mod_fastdfs.conf
(1) For the Storage nodes of the first group, mod_fastdfs.conf is configured as follows:
connect_timeout=10
base_path=/tmp
tracker_server=192.168.1.131:22122
tracker_server=192.168.1.132:22122
storage_server_port=23000
group_name=group1
url_have_group_name = true
store_path0=/home/fastdfs/storage
group_count = 1
[group1]
group_name=group1
storage_server_port=23000
store_path_count=1
store_path0=/home/fastdfs/storage

8. Copy two of the FastDFS configuration files to /etc/fdfs:
# cd /home/soft/FastDFS/conf
# cp http.conf mime.types /etc/fdfs/

9. In the storage data directory /home/fastdfs/storage, create a symbolic link M00 pointing at the actual data directory:
# ln -s /home/fastdfs/storage/data/ /home/fastdfs/storage/data/M00

10. Configure Nginx

If you had Nginx installed before and reinstalled it here, copy your saved nginx.conf back into /usr/local/nginx/conf first.

# vim /usr/local/nginx/conf/nginx.conf

user root;
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 8888;
        server_name localhost;
        location ~/group([0-9])/M00 {
            #alias /fastdfs/storage/data;
            ngx_fastdfs_module;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

Notes:
A. The listen port 8888 must match http.server_port=8888 in /etc/fdfs/storage.conf; http.server_port defaults to 8888, and if you want to use 80 instead, change both places accordingly.
B. When a Storage node belongs to multiple groups, the access path carries the group name, e.g. /group1/M00/00/00/xxx, and the corresponding Nginx location is:
location ~/group([0-9])/M00 {
    ngx_fastdfs_module;
}

11. Start Nginx:
# /usr/local/nginx/sbin/nginx
ngx_http_fastdfs_set pid=xxx

(To restart Nginx: /usr/local/nginx/sbin/nginx -s reload.)
To start Nginx on boot:
# vim /etc/rc.local
Add:
/usr/local/nginx/sbin/nginx

12. Fetch the file uploaded earlier through a browser:

http://192.168.5.182:8888/group1/M00/00/00/wKgFgVvqbfWARXiRClPqSv5X14U.tar.gz

The file uploaded earlier can now be downloaded over this URL.
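The same check can be scripted with curl (fetching only the response headers; an HTTP 200 confirms the storage Nginx serves the file):

# curl -I http://192.168.5.182:8888/group1/M00/00/00/wKgFgVvqbfWARXiRClPqSv5X14U.tar.gz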

Installing Nginx on the Tracker Nodes

1. The Nginx installed on the tracker nodes mainly provides an HTTP reverse proxy, load balancing, and caching for file access.

2. Install the packages needed to build Nginx:
# yum install gcc gcc-c++ make automake autoconf libtool pcre pcre-devel zlib zlib-devel openssl openssl-devel

3. Upload ngx_cache_purge-2.3.tar.gz to the server and unpack it:
# tar -zxvf ngx_cache_purge-2.3.tar.gz

4. Upload the current stable Nginx release (nginx-1.11.5.tar.gz) to any directory on the server.

If Nginx is already installed on this machine, you must remove it first. Save a copy of your nginx.conf, then:

ps -ef | grep nginx

kill <nginx PID>

cd /usr/local

rm -rf nginx

5. Compile and install Nginx (adding the ngx_cache_purge module):
# tar -zxvf nginx-1.11.5.tar.gz
# cd nginx-1.11.5

Note: if the tracker and a storage server share this machine, configure with both modules:

./configure --prefix=/usr/local/nginx --add-module=/home/soft/fastdfs-nginx-module/src --add-module=/home/soft/ngx_cache_purge-2.3

If it is a tracker-only server, configure with just the cache-purge module:
# ./configure --prefix=/usr/local/nginx --add-module=/home/soft/ngx_cache_purge-2.3
Then, in either case:
# make && make install

6. Configure Nginx for load balancing and caching:

# vim /usr/local/nginx/conf/nginx.conf

user root;
worker_processes 1;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
events {
    worker_connections 1024;
    use epoll;
}
http {
    include mime.types;
    default_type application/octet-stream;
    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';
    #access_log logs/access.log main;
    sendfile on;
    tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    #gzip on;
    # cache settings
    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 300m;
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffer_size 16k;
    proxy_buffers 4 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;
    # cache storage path, directory layout, memory zone size, max disk usage, expiry
    proxy_cache_path /home/fastdfs/cache/nginx/proxy_cache levels=1:2 keys_zone=http-cache:200m max_size=1g inactive=30d;
    proxy_temp_path /home/fastdfs/cache/nginx/proxy_cache/tmp;
    # the servers of group1
    upstream fdfs_group1 {
        server 192.168.1.135:8888 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.1.136:8888 weight=1 max_fails=2 fail_timeout=30s;
    }
    server {
        listen 8000;
        server_name localhost;
        #charset koi8-r;
        #access_log logs/host.access.log main;
        # load-balancing parameters for the group
        location /group1/M00 {
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache http-cache;
            proxy_cache_valid 200 304 12h;
            proxy_cache_key $uri$is_args$args;
            proxy_pass http://fdfs_group1;
            expires 30d;
        }
        # access control for cache purging
        location ~/purge(/.*) {
            allow 127.0.0.1;
            allow 192.168.1.0/24;
            deny all;
            proxy_cache_purge http-cache $1$is_args$args;
        }
        #error_page 404 /404.html;
        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

If the tracker and a storage server share this machine, you also need to merge in the storage-side Nginx configuration from the previous section.

Create the cache directories the configuration above expects:
# mkdir -p /home/fastdfs/cache/nginx/proxy_cache
# mkdir -p /home/fastdfs/cache/nginx/proxy_cache/tmp

7. Start Nginx:
# /usr/local/nginx/sbin/nginx
To restart Nginx:
# /usr/local/nginx/sbin/nginx -s reload
To start Nginx on boot:
# vi /etc/rc.local
Add:
/usr/local/nginx/sbin/nginx

8. Testing file access

Earlier, we fetched the file directly through the Nginx on a Storage node:

http://xxx.xxx.xxx.xxx:8888/group1/M00/00/00/wKgFgVvqbfWARXiRClPqSv5X14U.tar.gz

The file can be downloaded from any storage server's IP address.

Now the file can also be fetched through the Nginx on a Tracker node:
http://xxx.xxx.xxx.xxx:8000/group1/M00/00/00/wKgFgVvqbfWARXiRClPqSv5X14U.tar.gz

Again, it can be downloaded from any tracker server's IP address.

As these access tests show, the Nginx on each Tracker independently load-balances requests across the back-end Storage group.
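With the /purge location configured above, a cached copy can also be evicted on demand by requesting the same path under /purge, from an address the allow rules permit (a sketch; substitute one of your tracker IPs):

# curl http://xxx.xxx.xxx.xxx:8000/purge/group1/M00/00/00/wKgFgVvqbfWARXiRClPqSv5X14U.tar.gz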

Note: never use kill -9 to force-kill the FastDFS processes; doing so can lose binlog data.
