Saturday 18 November 2017

io scheduler, jfs, ssd

I didn't have much to do today and came across some articles about jfs and io schedulers and thought I'd run a few tests while polishing off a bottle of red (Noons Twelve Bells 2011). Actually I'd been waiting for a so-called "mate" to show up but he decided he was too busy to even let me know until after the day was over. Not like I've been suicidally depressed this week or anything.

The test I chose was to compile OpenJDK 9, which is a pretty i/o intensive and complex build.

My hardware is a Kaveri A10-7850K APU, "4G" of memory (sigh) on an ITX motherboard, and a Samsung SSD 840 EVO 250GB drive. For each test I blew away the build directory, re-ran configure, set the scheduler ("echo foo > /sys/block/sda/queue/scheduler"), and flushed the buffer caches ("echo 3 > /proc/sys/vm/drop_caches") before running "time make images". I forced all cpu cores to a fixed 3.0GHz using the performance governor.
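In script form each run was roughly the following - a sketch rather than what I actually typed, and it assumes the acpi-cpufreq sysfs paths, sda as the only drive, and the stock "build" output directory that configure creates:

  # as root: pin every core to 3.0GHz under the performance governor
  # (frequencies are in kHz; paths assume the acpi-cpufreq driver)
  for c in /sys/devices/system/cpu/cpu[0-9]*/cpufreq; do
      echo performance > $c/scaling_governor
      echo 3000000 > $c/scaling_min_freq
      echo 3000000 > $c/scaling_max_freq
  done

  # as root, before each timed build: pick the scheduler and drop caches
  echo deadline > /sys/block/sda/queue/scheduler   # or cfq, noop
  echo 3 > /proc/sys/vm/drop_caches

  # as me, in the source tree: clean build directory, configure, timed build
  cd ~/hg/jdk9u
  rm -rf build
  bash configure
  time make images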

At first I kept getting ICEs in the default compiler so I installed gcc-4.9 ... and got ICEs again. Inconsistently though, and we all know what ICEs in gcc mean (almost always broken hardware, i.e. RAM). Sigh. It was after a suspend-resume so that might've had something to do with it.

Well I rebooted and fudged around in the BIOS intending to play with the RAM clock but noticed I'd set the voltage too low. So what the hell, I set it to 1.60v and rebooted. And whattya know, suddenly I have 8G of RAM working again (for now), first time in a year. Fucking PCs.

Anyway so I ran the tests with each i/o scheduler on the filesystem and had no ICEs this time (which was disappointing because it means the hardware isn't likely to be busted after all, just temperamental). I used the default compiler. As it was a fresh reboot I ran the builds from the same 80x24 xterm, and the only other luser applications running beyond the panels and window manager were one other root xterm and an emacs to record the results.

cfq

real    10m39.440s
user    28m55.256s
sys     2m40.140s

deadline

real    10m36.500s
user    28m44.236s
sys     2m40.788s

noop

real    10m42.683s
user    28m43.960s
sys     2m41.036s

As expected from the articles I'd read, deadline (not the default in ubuntu!) is the best. But it's hardly a big deal. Some articles also suggested using noop for SSD storage but that is "clearly" worse (odd that it increases sys time given it supposedly does the least work). I only ran one test each in the order shown so it's only an illustrative result, but TBH I'm just not that interested, especially when the results are this close anyway. If there was something more significant I might care.

I guess I'll switch to deadline anyway, why not.
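For reference (untested by me, so treat it as a sketch): on these kernels the old block layer's default can be set with the elevator=deadline kernel command-line parameter, or per-device with a udev rule something like:

  # /etc/udev/rules.d/60-iosched.rules (the filename is just an example)
  # set deadline on non-rotational sd* devices at boot/hotplug
  ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"

which only kicks in for SSDs and leaves any spinning disks on the distro default.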

No doubt much of the compilation ran from buffer cache - in fact this is over 2x faster than when I compiled it on Slackware64 a few days ago - at the time I only had 4G of system ram (less the framebuffer) and a few fat apps running. But I also had different bios settings (slower ram speed) and btrfs rather than jfs.

As an aside I remember in the early days of Evolution (2000-2-x) when it took 45 minutes for a complete build on a desktop tower, and the code wasn't even very big at the time. That Dell piece of crap had woefully shitful i/o. Header files really kill C compilation performance though.

Still, OpenJDK is pretty big, and I'm using a pretty woeful CPU here as well. Steamroller, lol. Where on earth are the Ryzen APUs, AMD??

notzed@minized:~/hg/jdk9u$ find . -type f \
  -a \( -name '*.java' -o -name '*.cpp' -o -name '*.c' \) \
   | xargs grep \; | wc -l
3063124
(Some of the C and C++ is platform specific and not built, but it's a good enough measure.)
