
fio Basics 6

franket · 2022-04-24

Appendix:

The detailed usage of fio and an explanation of every parameter can be
found in the HOWTO document shipped with the source package, reproduced
below:

[root@iZ116haf49sZ fio]# cat HOWTO 
Table of contents
-----------------

1. Overview
2. How fio works
3. Running fio
4. Job file format
5. Detailed list of parameters
6. Normal output
7. Terse output
8. Trace file format
9. CPU idleness profiling

1.0 Overview and history
------------------------
fio was originally written to save me the hassle of writing special test
case programs when I wanted to test a specific workload, either for
performance reasons or to find/reproduce a bug. The process of writing
such a test app can be tiresome, especially if you have to do it often.
Hence I needed a tool that would be able to simulate a given io workload
without resorting to writing a tailored test case again and again.

A test workload is difficult to define, though. There can be any number
of processes or threads involved, and they can each be using their own
way of generating io. You could have someone dirtying large amounts of
memory in a memory mapped file, or maybe several threads issuing
reads using asynchronous io. fio needed to be flexible enough to
simulate both of these cases, and many more.

2.0 How fio works
-----------------
The first step in getting fio to simulate a desired io workload is
writing a job file describing that specific setup. A job file may contain
any number of threads and/or files - the typical job file contains a
global section defining shared parameters, and one or more job
sections describing the jobs involved. When run, fio parses this file
and sets everything up as described. If we break down a job from top to
bottom, it contains the following basic parameters:

	IO type		Defines the io pattern issued to the file(s).
			We may only be reading sequentially from this
			file(s), or we may be writing randomly. Or even
			mixing reads and writes, sequentially or randomly.

	Block size	In how large chunks are we issuing io? This may be
			a single value, or it may describe a range of
			block sizes.

	IO size		How much data are we going to be reading/writing.

	IO engine	How do we issue io? We could be memory mapping the
			file, we could be using regular read/write, we
			could be using splice, async io, syslet, or even
			SG (SCSI generic sg).

	IO depth	If the io engine is async, how large a queuing
			depth do we want to maintain?

	IO type		Should we be doing buffered io, or direct/raw io?

	Num files	How many files are we spreading the workload over.

	Num threads	How many threads or processes should we spread
			this workload over.

The above are the basic parameters defined for a workload; in addition,
there is a multitude of parameters that modify other aspects of how this
job behaves.
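
As a rough sketch (this example is not taken from the HOWTO itself; the
job name, the /tmp/fio.test path and all values below are arbitrary
choices), a minimal job file touching each of these basic parameters
could look like this:

; -- start example job file --
[basic-example]
; IO type: mixed random reads and writes
rw=randrw
; block size of each io unit
bs=4k
; total amount of io to issue
size=256m
; IO engine: Linux native asynchronous io
ioengine=libaio
; queuing depth maintained by the async engine
iodepth=8
; buffered vs direct io: bypass the page cache
direct=1
; file to operate on (placeholder path)
filename=/tmp/fio.test
; number of processes cloned from this job
numjobs=2
; -- end example job file --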


3.0 Running fio
---------------
See the README file for command line parameters; there are only a few
of them.

Running fio is normally the easiest part - you just give it the job file
(or job files) as parameters:

$ fio job_file

and it will start doing what the job_file tells it to do. You can give
more than one job file on the command line; fio will serialize the running
of those files. Internally that is the same as using the 'stonewall'
parameter described in the parameter section.
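
For example (the two file names here are placeholders), the following runs
the jobs in seq-read.fio to completion before starting those in
rand-write.fio:

$ fio seq-read.fio rand-write.fio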

If the job file contains only one job, you may as well just give the
parameters on the command line. The command line parameters are identical
to the job parameters, with a few extra that control global parameters
(see README). For example, for the job file parameter iodepth=2, the
mirror command line option would be --iodepth 2 or --iodepth=2. You can
also use the command line for giving more than one job entry. For each
--name option that fio sees, it will start a new job with that name.
Command line entries following a --name entry will apply to that job,
until there are no more entries or a new --name entry is seen. This is
similar to the job file options, where each option applies to the current
job until a new [] job entry is seen.
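
As an illustrative sketch (the job names and values here are chosen
freely), two jobs defined entirely on the command line could be given as:

$ fio --name=readers --rw=randread --size=128m \
      --name=writers --rw=randwrite --size=128m --iodepth=2

The --iodepth=2 entry follows the second --name, so it applies only to the
'writers' job, just as it would in a job file.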

fio does not need to run as root, except if the files or devices specified
in the job section require that. Some other options may also be restricted,
such as memory locking, io scheduler switching, and decreasing the nice value.
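
For instance (the device path below is only a placeholder), issuing direct
reads against a raw block device is a typical case that does require root:

# fio --name=rawread --filename=/dev/sdb --rw=read --bs=128k --direct=1 --size=1g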


4.0 Job file format
-------------------
As previously described, fio accepts one or more job files describing
what it is supposed to do. The job file format is the classic ini file,
where the names enclosed in [] brackets define the job name. You are free
to use any ascii name you want, except 'global' which has special meaning.
A global section sets defaults for the jobs described in that file. A job
may override a global section parameter, and a job file may even have
several global sections if so desired. A job is only affected by a global
section residing above it. If the first character in a line is a ';' or a
'#', the entire line is discarded as a comment.

So let's look at a really simple job file that defines two processes, each
randomly reading from a 128MB file.

; -- start job file --
[global]
rw=randread
size=128m

[job1]

[job2]

; -- end job file --

As you can see, the job file sections themselves are empty as all the
described parameters are shared. As no filename= option is given, fio
makes up a filename for each of the jobs as it sees fit. On the command
line, this job would look as follows:

$ fio --name=global --rw=randread --size=128m --name=job1 --name=job2


Let's look at an example that has a number of processes writing randomly
to files.

; -- start job file --
[random-writers]
ioengine=libaio
iodepth=4
rw=randwrite
bs=32k
direct=0
size=64m
numjobs=4
; -- end job file --
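
On the command line, this single job could equally be given as:

$ fio --name=random-writers --ioengine=libaio --iodepth=4 --rw=randwrite --bs=32k --direct=0 --size=64m --numjobs=4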
