(adapted from www.coker.com.au/bonnie++/readme.html)
The tests run by Bonnie attempt to keep optimizers from noticing that it's all bogus. The idea is to make sure that these are real transfers between user space and the physical disk.
These are the types of filesystem activity that have been observed to be bottlenecks in I/O-intensive applications, in particular the text database work done in connection with the New Oxford English Dictionary Project at the University of Waterloo.
Bonnie initially performs a series of tests on a file (or files) of known size. For each test, Bonnie reports the number of kilobytes processed per elapsed second and the percentage of CPU usage (the sum of user and system time). If a size greater than 1 GB is specified, Bonnie uses several files of 1 GB or less each.
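As a rough illustration of how such figures could be derived (this is not Bonnie's actual code; the helper names and the use of wall-clock plus CPU seconds from something like getrusage(2) are assumptions), the arithmetic is simply:

```c
#include <assert.h>

/* Hypothetical helpers (not Bonnie's source): derive a K/s figure and a
 * CPU percentage from raw measurements.  `bytes` is the total amount
 * transferred, `elapsed` the wall-clock seconds, and `user`/`sys` the CPU
 * seconds as reported by e.g. getrusage(2). */
static double throughput_kps(double bytes, double elapsed)
{
    return (bytes / 1024.0) / elapsed;      /* kilobytes per elapsed second */
}

static double cpu_percent(double user, double sys, double elapsed)
{
    return 100.0 * (user + sys) / elapsed;  /* user + system, as a percentage */
}
```

Note that the percentage is taken against elapsed time, which is why a fully I/O-bound test can show a low CPU figure.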
- put_c (K/s): The file is written using the putc() stdio macro. The loop that does the writing should be small enough to fit into any reasonable I-cache. The CPU overhead here is that required to do the stdio code plus the OS file space allocation.
- put_block (K/s): The file is created using write(2). The CPU overhead should be just the OS file space allocation.
- rewrite (K/s): Each BUFSIZ of the file is read with read(2), dirtied, and rewritten with write(2), requiring an lseek(2). Since no space allocation is done, and the I/O is well-localized, this should test the effectiveness of the filesystem cache and the speed of data transfer.
- getc (K/s): The file is read using the getc() stdio macro. Once again, the inner loop is small. This should exercise only stdio and sequential input.
- get_block (K/s): The file is read using read(2). This should be a very pure test of sequential input performance.
- seeks (K/s): This test runs SeekProcCount processes (default 3) in parallel, doing a total of 8000 lseek()s to locations in the file chosen by random() on BSD systems or drand48() on SysV systems. In each case, the block is read with read(2). In 10% of cases, it is dirtied and written back with write(2).
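The read-dirty-seek-rewrite cycle of the rewrite test can be sketched as follows (a simplified illustration, not Bonnie's source; the chunk size and single-pass structure are assumptions):

```c
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define CHUNK 8192   /* stands in for BUFSIZ; an assumption */

/* Simplified sketch of the rewrite pass: read each chunk, dirty a byte,
 * seek back to the start of the chunk, and rewrite it in place. */
static int rewrite_pass(const char *path)
{
    char buf[CHUNK];
    ssize_t n;
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return -1;
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        buf[0] ^= 0xff;                         /* dirty the chunk */
        if (lseek(fd, -n, SEEK_CUR) < 0 ||      /* back to the chunk start */
            write(fd, buf, (size_t)n) != n) {   /* rewrite in place */
            close(fd);
            return -1;
        }
    }
    close(fd);
    return n == 0 ? 0 : -1;
}

/* Self-check: build a small file, run one pass, verify a byte was flipped. */
static int rewrite_demo(void)
{
    const char *path = "rewrite_demo.tmp";
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    for (int i = 0; i < CHUNK * 2; i++)
        fputc('a', f);
    fclose(f);
    if (rewrite_pass(path) != 0)
        return -1;
    f = fopen(path, "rb");
    int first = fgetc(f);
    fclose(f);
    remove(path);
    return first == ('a' ^ 0xff) ? 0 : -1;
}
```

Because every chunk is rewritten at the offset it was read from, no new space is allocated, which is what makes this a cache-effectiveness test rather than an allocation test.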
This set of tests uses file create/stat/unlink to simulate operations that are common bottlenecks on large Squid and INN servers, and on machines with tens of thousands of mail files in /var/spool/mail.
The file creation tests use file names consisting of a 7-digit number and a random number (from 0 to 12) of random alphanumeric characters. For the sequential tests the random characters follow the number; for the random tests the random characters come first.
Each file is either zero bytes long or a random size within a specified range (we use a range of 0K to 15K). Files larger than zero bytes have random data written to them.
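The naming scheme might be sketched like this (a hypothetical illustration, not Bonnie's source; the function name, buffer sizes, and use of rand() are assumptions):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the naming scheme: a 7-digit number plus `nrand`
 * random alphanumeric characters (0..12), with the number first for the
 * sequential tests and last for the random tests. */
static void make_name(char *buf, long n, int nrand, int number_first)
{
    static const char alnum[] =
        "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";
    char digits[8], extra[13];
    snprintf(digits, sizeof digits, "%07ld", n);
    for (int i = 0; i < nrand; i++)
        extra[i] = alnum[rand() % (int)(sizeof alnum - 1)];
    extra[nrand] = '\0';
    if (number_first)
        snprintf(buf, 21, "%s%s", digits, extra);   /* sequential tests */
    else
        snprintf(buf, 21, "%s%s", extra, digits);   /* random tests */
}

/* Self-check: names have the expected length and digit placement. */
static int name_demo(void)
{
    char buf[21];
    make_name(buf, 42, 5, 1);
    if (strlen(buf) != 12 || strncmp(buf, "0000042", 7) != 0)
        return -1;
    make_name(buf, 42, 5, 0);
    if (strcmp(buf + 5, "0000042") != 0)
        return -1;
    return 0;
}
```

Putting the random characters first defeats any name-ordering the file system might exploit, which is the point of the random variants below.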
- seq_create (files/second): files are created in numeric order.
- seq_stat (files/second): files are stat()ed in readdir() order (i.e. the order in which they are stored in the directory, which is very likely to be the same order in which they were created). The data of files larger than 0 bytes is also read.
- seq_del (files/second): files are then deleted in the same order.
- ran_create (files/second): files are created in an order that will appear random to the file system (the last seven characters of the file names are in numeric order).
- ran_stat (files/second): random files are stat()ed (this will return very good results on file systems with sorted directories, because not every file will be stat()ed and the cache will be more effective). The data of files larger than 0 bytes is also read.
- ran_del (files/second): we then delete all the files in random order.
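A random visiting order like the one these tests need can be produced by shuffling the file indices, for example with a Fisher-Yates shuffle (an illustrative sketch, not Bonnie's code):

```c
#include <stdlib.h>

/* Illustrative sketch (not Bonnie's source): visit n files in random order
 * by shuffling the index array 0..n-1 with a Fisher-Yates shuffle, then
 * stat()ing or unlink()ing file order[i] for each i in turn. */
static void shuffle_order(int *order, int n)
{
    for (int i = 0; i < n; i++)
        order[i] = i;
    for (int i = n - 1; i > 0; i--) {
        int j = rand() % (i + 1);   /* pick from the unshuffled prefix */
        int tmp = order[i];
        order[i] = order[j];
        order[j] = tmp;
    }
}

/* Self-check: the result is a permutation of 0..n-1. */
static int shuffle_demo(void)
{
    enum { N = 100 };
    int order[N], seen[N] = {0};
    shuffle_order(order, N);
    for (int i = 0; i < N; i++) {
        if (order[i] < 0 || order[i] >= N || seen[order[i]])
            return -1;
        seen[order[i]] = 1;
    }
    return 0;
}
```

Deleting in a shuffled order like this prevents the file system from benefiting from any correlation between directory order and deletion order.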