Here is some sample output:
======> Writing in isolation (read protocol!=write protocol, read version!=write version, rsize!=wsize)
Creating 5 pipes
popening echo Number of measurements: $(wc -l)
popening echo Average number of seconds: $(cut -d " " -f 4 | avg -i)
popening echo Average time: $(cut -d " " -f 4 | avg -i | modtime -i)
popening sleep 1; echo Best time: $(cut -d " " -f 4 | highest -s $(expr 1024 \* 1024) -r -n 1 | modtime)
popening sleep 2; echo Best numbers:; highest -s $(expr 1024 \* 1024) -r -f 2 -n 5
Number of measurements: 26
Average number of seconds: 703.932692308
Average time: 11 minutes 43 seconds
Best time: 9 minutes 43 seconds
Best numbers:
xfer-result-Writing-16384-3-udp:Write time: 583.82
xfer-result-Writing-8192-3-tcp:Write time: 638.06
xfer-result-Writing-9216-3-tcp:Write time: 649.62
xfer-result-Writing-16384-3-tcp:Write time: 653.30
xfer-result-Writing-13312-3-tcp:Write time: 654.96
======> Reading in isolation (read protocol!=write protocol, read version!=write version, rsize!=wsize)
Creating 5 pipes
popening echo Number of measurements: $(wc -l)
popening echo Average number of seconds: $(cut -d " " -f 4 | avg -i)
popening echo Average time: $(cut -d " " -f 4 | avg -i | modtime -i)
popening sleep 1; echo Best time: $(cut -d " " -f 4 | highest -s $(expr 1024 \* 1024) -r -n 1 | modtime)
popening sleep 2; echo Best numbers:; highest -s $(expr 1024 \* 1024) -r -f 2 -n 5
Number of measurements: 25
Average number of seconds: 389.25
Average time: 6 minutes 29 seconds
Best time: 4 minutes 18 seconds
Best numbers:
xfer-result-Reading-16384-3-tcp:Read time: 258.31
xfer-result-Reading-8192-3-tcp:Read time: 337.19
xfer-result-Reading-9216-3-tcp:Read time: 339.16
xfer-result-Reading-10240-3-tcp:Read time: 340.15
xfer-result-Reading-12288-3-tcp:Read time: 340.26
======> Best composite of read and write (read protocol==write protocol, read version==write version, rsize!=wsize)
tcp 3 rsize: 4096 readtime: 485.49 wsize: 8192 writetime: 638.06 composite: 714.345
tcp 3 rsize: 5120 readtime: 471.15 wsize: 8192 writetime: 638.06 composite: 721.515
tcp 3 rsize: 6144 readtime: 471.14 wsize: 8192 writetime: 638.06 composite: 721.520
tcp 3 rsize: 7168 readtime: 469.20 wsize: 8192 writetime: 638.06 composite: 722.490
tcp 3 rsize: 4096 readtime: 485.49 wsize: 9216 writetime: 649.62 composite: 731.685
/\/\/\
udp 3 rsize: 5120 readtime: 514.31 wsize: 16384 writetime: 583.82 composite: 618.575
udp 3 rsize: 7168 readtime: 481.18 wsize: 16384 writetime: 583.82 composite: 635.140
udp 3 rsize: 4096 readtime: 473.37 wsize: 16384 writetime: 583.82 composite: 639.045
udp 3 rsize: 6144 readtime: 466.38 wsize: 16384 writetime: 583.82 composite: 642.540
udp 3 rsize: 9216 readtime: 405.25 wsize: 16384 writetime: 583.82 composite: 673.105
/\/\/\
======> Best composite of read and write (read protocol==write protocol, read version==write version, rsize==wsize)
tcp 3 8192 both sizes: 8192 readtime: 337.19 writetime: 638.06 composite: 788.495
udp 3 9216 both sizes: 9216 readtime: 405.25 writetime: 664.46 composite: 794.065
tcp 3 9216 both sizes: 9216 readtime: 339.16 writetime: 649.62 composite: 804.850
tcp 3 13312 both sizes: 13312 readtime: 341.15 writetime: 654.96 composite: 811.865
tcp 3 14336 both sizes: 14336 readtime: 372.20 writetime: 665.83 composite: 812.645
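The output doesn't say how the "composite" figure is computed, but every row above is consistent with composite = writetime + (writetime - readtime) / 2, i.e. a write-weighted score with an extra penalty as the write time pulls further above the read time. That formula is reverse-engineered from the numbers, not documented, so treat this as a guess:

```shell
# Hypothetical reconstruction of the composite score, checked against
# the first tcp row above (readtime 485.49, writetime 638.06):
awk 'BEGIN {
    read = 485.49; write = 638.06
    composite = write + (write - read) / 2
    printf "composite: %.3f\n", composite   # matches 714.345 above
}'
```

The same formula reproduces the udp rows (e.g. 583.82 + (583.82 - 514.31) / 2 = 618.575) and the rsize==wsize table as well.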
This sort of result leads one to consider a variety of mounting schemes to improve performance:
Mount scheme | Advantages | Disadvantages |
------------ | ---------- | ------------- |
One mount, preferring write speed | Simple; good write speed | Read speed suffers |
One mount, preferring read speed | Simple; good read speed | Write speed suffers |
One mount, balancing read and write speed | Simple; read and write speed are both acceptable | Both read and write speed suffer a bit |
Two mounts: one tuned for reads, one tuned for writes | Good read speed and good write speed | A bit complicated for users; writes may not show up in the read mount immediately |
Three mounts: one tuned for reads, one tuned for writes, one balanced | Good read speed, good write speed, and good speed for workloads that switch between reading and writing a lot | In one sense still more complicated (more mounts to think about), but in another sense less complicated (when in doubt, just use the balanced mount) |
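As a concrete illustration of the three-mount scheme, here is a hypothetical Linux-style /etc/fstab fragment using the best sizes from the runs above. The export path and mount points are invented, and option spellings vary by client OS, so check your mount_nfs/nfs man page:

```
# Read-tuned mount (best read above: tcp, v3, rsize 16384)
esmft1:/export  /mnt/nfs-read      nfs  proto=tcp,vers=3,rsize=16384,wsize=16384  0 0
# Write-tuned mount (best write above: udp, v3, wsize 16384)
esmft1:/export  /mnt/nfs-write     nfs  proto=udp,vers=3,rsize=5120,wsize=16384   0 0
# Balanced mount (best rsize==wsize composite above: tcp, v3, 8192)
esmft1:/export  /mnt/nfs-balanced  nfs  proto=tcp,vers=3,rsize=8192,wsize=8192    0 0
```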
Here are the notify-when-up invocations I'm using during these tests. esmft1 is the NFS server; esmf04m is the NFS client:
Notify when the relevant network interface has been idle longer than expected (unless the program is done; eyeball that one) | esmft1-root results) notify-when-up -f 'maxtime 60 tethereal -i ce1 -c 100' |
Notify when the number of result files is seen growing | esmft1-root results) notify-when-up -g 3 'ls -l | wc -l' -m $[60*30] |
Notify when nfs-test is done | esmf04m-root> notify-when-up -s 'bash ./nfs-test' |
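notify-when-up is this author's own tool, so its exact behavior isn't shown here. As a rough sketch of what the -g (growth-watch) mode appears to do, here is a minimal reimplementation in portable shell. This is an illustration, not the real program; the function name and the growth-count semantics are assumptions:

```shell
# watch_growth CMD [TIMES] [INTERVAL]: re-run CMD and report once its
# numeric output has grown TIMES checks in a row (a guess at what
# "notify-when-up -g 3 'ls -l | wc -l'" does).
watch_growth() {
    cmd=$1 times=${2:-3} interval=${3:-60}
    prev='' grew=0
    while [ "$grew" -lt "$times" ]; do
        cur=$(sh -c "$cmd")
        if [ -n "$prev" ] && [ "$cur" -gt "$prev" ]; then
            grew=$((grew + 1))    # output grew since the last check
        else
            grew=0                # flat or shrinking: start over
        fi
        prev=$cur
        sleep "$interval"
    done
    echo "output grew $times times in a row: now $cur"
}
```

Usage along the lines of the table above might look like: watch_growth 'ls | wc -l' 3 1800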
You can e-mail the author with questions or comments.