Re: Blogbench RAID benchmarks
On Thu, Jul 21, 2011 at 3:35 PM, Freddie Cash <firstname.lastname@example.org> wrote:
Ok, well this is interesting. Basically it comes down to whether we
want to starve read operations or whether we want to starve write
operations.
The FreeBSD results starve read operations, while the DragonFly results
starve write operations. That's the entirety of the difference between
the two tests.
Would using the disk schedulers in FBSD/DFly help with this at all?
FreeBSD includes a geom_sched class for enabling pluggable disk schedulers (currently only a round-robin algorithm is implemented): http://info.iet.unipi.it/~luigi/geom_sched/
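If someone wants to try it against blogbench, enabling it should be roughly along these lines (a rough sketch from memory of the geom_sched docs; the device name is a placeholder and module/flag details may differ on your system):

    # load the framework and the round-robin policy
    kldload geom_sched
    kldload gsched_rr
    # transparently insert the scheduler on top of the disk provider
    geom sched insert -a rr ada0

The insert is meant to be transparent, so existing consumers of the provider start going through the scheduler without remounting or reconfiguring the filesystem.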
Page 39 of the presentation on GEOM_SCHED shows the following, indicating that it should make a big difference in the blogbench results (note the second result with greedy read and write):
Some preliminary results on the scheduler's performance in some easy
cases (the focus here is on the framework). Measurements use multiple
dd instances on a filesystem; all speeds in MiB/s.

- two greedy readers, throughput improvement:
  NORMAL: 6.8 + 6.8; GSCHED RR: 27.0 + 27.0
- one greedy reader, one greedy writer, capture effect:
  NORMAL: R: 0.234 W: 72.3; GSCHED RR: R: 12.0 W: 40.0
- multiple greedy writers, only small loss of throughput:
  NORMAL: 16 + 16; RR: 15.5 + 15.5
- one sequential reader, one random reader (fio):
  NORMAL: Seq: 4.2 Rand: 4.2; RR: Seq: 30 Rand: 4.4
ZFS includes its own disk scheduler, so geom_sched wouldn't help in that case. It would be interesting to see a comparison of HAMMER+swapcache and ZFS+L2ARC, though.
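The setup on each side is small. Something like the following should do it (sketch only; pool and device names are made up, and swapcache(8) on DragonFly documents the full set of vm.swapcache.* knobs):

    # FreeBSD/ZFS: add the SSD as an L2ARC cache device
    zpool add tank cache ada2

    # DragonFly/HAMMER: put swap on the SSD and enable swapcache
    swapon /dev/da1s1b
    sysctl vm.swapcache.read_enable=1
    sysctl vm.swapcache.meta_enable=1
    sysctl vm.swapcache.data_enable=1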
And I believe DFly has dsched?
This is all with swapcache turned off. The only way to test in a
fair manner with swapcache turned on (with an SSD) is if the FreeBSD
test used a similar setup w/ZFS.