
Small File Merging Solutions

用户1260683 · Published 2020-07-14 (originally published 2020-07-09 on the WeChat account "Ceph对象存储方案")

Existing Problems

  1. Resource utilization & cost: constrained by disk performance and hardware cost, we need to store massive numbers of small files while keeping hardware cost under control and raising resource utilization. When a single cluster stores a large number of small files (240 SATA disks, 600 million files in total, roughly 100 KB per file), average disk capacity utilization is only 22% (a rough calculation follows the reference links below).
  2. Read/write performance: as the number of files in the cluster grows, overall read/write performance drops sharply. There are two main causes: first, FileStore uses XFS as its underlying file system, and XFS is poorly suited to storing this many small files; second, we use SMR SATA disks, which are also not suitable for Ceph. See the references below for details.
  • https://blog.widodh.nl/2017/02/do-not-use-smr-disks-with-ceph/
  • https://copyfuture.com/blogs-details/201911061902186294pksqoqhzwcm79x Lessons from ten years of Ceph evolution: disk file systems are not suitable as a distributed storage backend
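
To put the 22% figure in perspective: the raw data volume implied by the numbers in point 1 is small compared with the cluster's total capacity, so the file-count (and IOPS) ceiling is reached long before the disks fill up. A rough back-of-the-envelope check, assuming 4 TB disks and 3-replica storage (neither is stated above), is sketched below.

```python
# Rough estimate only: the 4 TB disk size and the 3x replication factor
# are assumptions, not numbers taken from this post.
files = 600_000_000            # ~600 million files
avg_size = 100 * 1000          # ~100 KB per file
replicas = 3                   # assumed 3-replica pool
disks = 240
disk_size = 4 * 10**12         # assumed 4 TB SATA disks

logical = files * avg_size                    # ~60 TB of user data
raw_used = logical * replicas                 # ~180 TB actually written to disk
utilization = raw_used / (disks * disk_size)
print(f"{utilization:.0%}")                   # ~19%, the same order as the 22% above
```

The exact number depends on the real disk size and on FileStore/XFS allocation overhead, but the conclusion is the same: the cluster runs out of file-count and IOPS headroom while most of its raw capacity is still empty.
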
Haystack

Facebook's Haystack design paper. https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Beaver.pdf

SeaweedFS

SeaweedFS is optimized for small files. Small files are stored as one continuous block of content, with at most 8 unused bytes between files. Small file access is O(1) disk read.

https://github.com/chrislusf/seaweedfs#compared-to-glusterfs-ceph
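
As a rough illustration of why that access pattern is O(1) (a toy sketch, not SeaweedFS code): each small file is appended to one large volume file, and an in-memory map from file id to (offset, size) turns every read into a single seek plus a single read.

```python
import os

# Toy sketch of the "many small files packed into one big volume file" idea.
class Volume:
    def __init__(self, path):
        self.path = path
        self.index = {}              # file_id -> (offset, size), kept in memory
        open(path, "ab").close()     # create the volume file if it does not exist

    def put(self, file_id, data):
        offset = os.path.getsize(self.path)    # new data goes at the current end
        with open(self.path, "ab") as f:
            f.write(data)                       # append-only, sequential write
        self.index[file_id] = (offset, len(data))

    def get(self, file_id):
        offset, size = self.index[file_id]      # O(1) in-memory lookup
        with open(self.path, "rb") as f:
            f.seek(offset)
            return f.read(size)                 # exactly one disk read

vol = Volume("volume_001.dat")
vol.put("photo-1", b"...jpeg bytes...")
print(vol.get("photo-1"))
```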

Ambry

https://github.com/linkedin/ambry/wiki/Store

The data node maintains a file per replicated store. We call this file the on-disk log. The on-disk log is a pre-allocated file in a standard Linux file system (ext4/xfs). In Ambry, we pre-allocate a file for each on-disk log. The basic idea for the replicated store is the following: on put, append blobs to the end of the pre-allocated file so as to encourage a sequential write workload. Any gets that are serviced by the replicated store may incur a random disk IO, but we expect good locality in the page cache. Deletes, like puts, are appended as a record at the end of the file.
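
A minimal sketch of that append-only log idea (my own illustration, not Ambry code): both puts and deletes become records appended at the current end of a pre-allocated file, so the disk only ever sees sequential writes.

```python
import json, os

# Toy append-only log: the file is pre-allocated up front, and every
# operation (put or delete) is appended as a record at the current end.
class OnDiskLog:
    def __init__(self, path, preallocate_bytes=1 << 30):
        self.path = path
        if not os.path.exists(path):
            with open(path, "wb") as f:
                f.truncate(preallocate_bytes)   # pre-allocate the on-disk log
        self.end = 0                            # append offset (log recovery omitted)

    def _append(self, record, payload=b""):
        header = json.dumps(record).encode() + b"\n"
        with open(self.path, "r+b") as f:
            f.seek(self.end)
            offset = self.end
            f.write(header + payload)           # sequential write at the tail
            self.end = f.tell()
        return offset

    def put(self, blob_id, data):
        return self._append({"op": "put", "id": blob_id, "size": len(data)}, data)

    def delete(self, blob_id):
        # deletes are not in-place updates; they are just another appended record
        return self._append({"op": "delete", "id": blob_id})
```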

To be able to service random reads of either user metadata or blobs, the replicated store must maintain an index that maps blob IDs to specific offsets in the on-disk log. We store other attributes as well in this index such as delete flags and ttl values for each blob. The index is designed as a set of sorted files. The most recent index segment is in memory. The older segments are memory mapped and an entry is located by doing a binary search on them. The search moves from the most recent to the oldest. This makes it easy to identify the deleted entry before the put entry.
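
And a correspondingly small sketch of the lookup order (again an illustration, not Ambry's implementation): each index segment is a sorted list searched by binary search, and segments are consulted from newest to oldest, so a later delete record shadows the original put.

```python
import bisect

# Toy index segment: a sorted list of (blob_id, entry) pairs.
class IndexSegment:
    def __init__(self, entries):
        entries = sorted(entries)
        self.keys = [k for k, _ in entries]
        self.values = [v for _, v in entries]

    def find(self, blob_id):
        i = bisect.bisect_left(self.keys, blob_id)     # binary search
        if i < len(self.keys) and self.keys[i] == blob_id:
            return self.values[i]
        return None

def lookup(segments_newest_first, blob_id):
    # search from the most recent segment to the oldest; the first hit wins,
    # so a delete marker written after a put is always found first
    for seg in segments_newest_first:
        entry = seg.find(blob_id)
        if entry is not None:
            return entry
    return None

old = IndexSegment([("blob-1", {"offset": 0, "deleted": False})])
new = IndexSegment([("blob-1", {"offset": 4096, "deleted": True})])
print(lookup([new, old], "blob-1"))   # {'offset': 4096, 'deleted': True}
```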

Single-Pool Scheme

  1. Before data can be written, a large file block has to be pre-allocated, and the scheduling algorithm is relatively complex (it must handle read/write contention on a single large file); see the sketch after this list.
  2. When a large file undergoes GC (reclaiming hole space), reads and writes of the small files inside it are affected at the same time.
  3. Low cost, but constrained by the EC mode and the underlying hardware performance, so read/write performance degrades to some extent.
  4. Cluster expansion causes performance fluctuation and affects both reads and writes.
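
A toy sketch of the allocation step implied by point 1 (block size and naming are illustrative assumptions, not the author's design): every small write must first be placed at an offset inside some pre-allocated large block, which is where the extra scheduling logic and the contention on a single large file come from.

```python
# Toy allocator: map each small write to (large block name, offset in block).
BLOCK_SIZE = 64 * 1024 * 1024          # assumed size of one pre-allocated block

class BlockAllocator:
    def __init__(self):
        self.blocks = []               # each entry: {"name": ..., "used": ...}

    def allocate(self, size):
        # reuse a block with enough free space, otherwise pre-allocate a new one
        for blk in self.blocks:
            if blk["used"] + size <= BLOCK_SIZE:
                offset = blk["used"]
                blk["used"] += size
                return blk["name"], offset
        name = f"block-{len(self.blocks):08d}"
        self.blocks.append({"name": name, "used": size})
        return name, 0

alloc = BlockAllocator()
print(alloc.allocate(100 * 1024))      # a ~100 KB small file -> ('block-00000000', 0)
```

When such a block is garbage-collected to reclaim holes it has to be rewritten, so readers and writers of the small files packed inside it are hit at the same time, which is exactly point 2 above.
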

Multi-Pool Scheme

  1. Data is written in the default way; the write path does not need to care about the later merging into large files, so the implementation is relatively simple (see the sketch after this list).
  2. When a large file undergoes GC (reclaiming hole space), only reads of some small files are affected (reads and writes are separated).
  3. Moderate cost, balancing performance (multi-replica SSD) and EC (low-cost mode).
  4. Cluster expansion (only the EC pool is expanded) affects only reads of part of the data; the impact on writes is essentially negligible.
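
A hedged sketch of how the multi-pool flow could look with the python-rados bindings (the pool names, the 4 MiB size cap, and the merge policy are assumptions for illustration, not the author's implementation): new small objects land in a replicated SSD pool as-is, and a background job later packs batches of them into large objects in an EC pool.

```python
import rados

# Assumed pool names; replace with the pools of your own cluster.
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ssd = cluster.open_ioctx("small-replica-ssd")   # multi-replica SSD pool (write path)
ec = cluster.open_ioctx("merged-ec-hdd")        # EC pool holding merged large objects

def put_small(name, data):
    # default write path: nothing special, no merging logic on the hot path
    ssd.write_full(name, data)

def merge_batch(names, block_name):
    # background job: pack a batch of small objects into one large EC object
    offset, index = 0, {}
    for n in names:
        data = ssd.read(n, 4 * 1024 * 1024)     # assumes each object is < 4 MiB
        ec.write(block_name, data, offset)      # append into the large object
        index[n] = (offset, len(data))
        offset += len(data)
    for n in names:
        ssd.remove_object(n)                    # free SSD space once merged
    return index                                # the index must be persisted elsewhere
```

Because the merge runs outside the write path and only touches the EC pool, garbage collection and EC-pool expansion mainly affect reads of already-merged data, which matches points 2 and 4 above.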