Linux provides several I/O schedulers that allow block I/O behaviour to be tuned to a particular workload. Traditionally, disk scheduling algorithms have been designed for rotational drives, where rotational latency and drive head seek latency need to be taken into account, hence the need for relatively complex I/O schedulers. An SSD, however, does not suffer from these mechanical latencies, so does this mean the default choice of I/O scheduler needs to be re-thought?
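For reference, the active scheduler can be inspected (and changed) per-device through sysfs. This is a minimal sketch; the device name sda in the comment is an illustrative assumption, so substitute your own:

```shell
#!/bin/sh
# List the available I/O schedulers for every block device;
# the currently active scheduler is shown in [brackets].
for f in /sys/block/*/queue/scheduler; do
    [ -r "$f" ] || continue
    printf '%s: %s\n' "$f" "$(cat "$f")"
done

# To switch a device (here sda) to the noop scheduler at
# runtime, as root:
#   echo noop > /sys/block/sda/queue/scheduler
```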
I decided to benchmark a SanDisk pSSD to see how the different I/O schedulers behave, using the following tests:
a) untar kernel source
b) copy kernel source tree
c) copy tar file
d) rm -rf kernel source tree
e) rm -rf copy of source tree
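The sequence above can be sketched as a small script, run once per scheduler. The function name run_bench and the tarball handling are my own illustrative assumptions, not the original test harness:

```shell
#!/bin/sh
# run_bench takes the path to a source tarball ($1), then runs the
# untar / copy / remove sequence in a scratch directory.
run_bench() {
    tarball=$(realpath "$1")
    workdir=$(mktemp -d)
    (
        cd "$workdir"
        tar xzf "$tarball"                  # untar kernel source
        src=$(tar tzf "$tarball" | head -1) # top-level directory in the tarball
        cp -a "$src" copy-of-src            # copy kernel source tree
        cp "$tarball" copy.tar              # copy tar file
        rm -rf "$src"                       # rm -rf kernel source tree
        rm -rf copy-of-src                  # rm -rf copy of source tree
        sync                                # flush writes so the timing is honest
    )
    rm -rf "$workdir"
}

# Usage (timed), with a hypothetical tarball name:
#   time run_bench linux-2.6.30.tar.gz
```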
For each I/O scheduler I ran the above sequence of tests three times and took the average total run time. My results are as follows:
So it appears that the default cfq (Completely Fair Queuing) scheduler is the least optimal and the noop scheduler performs best. According to the Wikipedia entry for the noop scheduler, noop is the best choice for solid state drives, so maybe my test results can be trusted after all :-)
SSDs gain nothing from having multiple I/O requests re-ordered, since they do not suffer from the random-access latencies of rotational media; hence the simple FIFO behaviour of the noop scheduler is optimal.
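For anyone wanting to make the choice stick across reboots, kernels of this (pre-multiqueue) era accept an elevator= boot parameter, and a udev rule can apply noop only to non-rotational devices. Both of the following are sketches, and the rule file name is an assumption:

```
# Kernel boot parameter (appended to the kernel command line in grub):
#   elevator=noop

# /etc/udev/rules.d/60-ssd-scheduler.rules (file name is illustrative):
ACTION=="add|change", KERNEL=="sd[a-z]", \
    ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
```

The udev rule keys off queue/rotational, which the kernel reports as 0 for SSDs, so rotational drives keep the default scheduler.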