Check with your sysadmin before running this program! Tell him or her that this program could do bad things to the machine - THEN ask. If the answer is "yes", continue. If not, find another benchmark to fiddle with.

  • You can download it here

    Getting a semi-meaningful number:

    1. Figure out how much physical RAM you have on your test machine. See your OS FAQ for how to do this.
    2. Run working-set. Don't do anything else on the machine while working-set is running. Early on you could get away with it, but you'd get a less accurate result. Later, you probably won't want to anyway, because the machine will become really slow.
    3. Dig thru working-set's output for lines containing the string "sample 0". Pick the line with the largest number to the left of that string.
    4. Subtract that number from the amount of memory on your machine (from step 1).
    5. That difference is (approximately) the memory overhead due to basic OS functions. The result appears to be mostly a function of OS version and of the amount of RAM installed in the machine (some OSes take up more RAM if they find themselves on a machine with lots of memory).
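
    The arithmetic in steps 3 and 4 can be sketched in C like this. The exact format of working-set's "sample 0" lines is an assumption here; all that matters for the sketch is that the megabyte count is the leading number on each such line:

    ```c
    /* Hypothetical helper: scan working-set's output for lines containing
     * "sample 0", track the largest leading megabyte count, and report
     * physical_ram_mb minus that count as the approximate OS overhead.
     * The output format of working-set is assumed, not copied. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int overhead_mb(FILE *in, int physical_ram_mb)
    {
        char line[256];
        int best = -1;

        while (fgets(line, sizeof line, in)) {
            if (strstr(line, "sample 0")) {
                int mb = atoi(line);      /* leading number on the line */
                if (mb > best)
                    best = mb;
            }
        }
        return best < 0 ? -1 : physical_ram_mb - best;
    }
    ```

    Feeding this the Linux 2.0.32 run from the table below (24M of RAM, largest "sample 0" line at 21M) would give the 3M overhead shown there.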

    Some sample results:
    Ultrix 4.2:	32 - 25 = 7M  overhead
    SunOS 4.1.3_U1:	12 -    =     overhead (too little RAM to give useful result)
    Linux 2.0.22:	24 - 20 = 4M  overhead
    Linux 2.0.32:	24 - 21 = 3M  overhead
    SunOS 5.5.1:	48 - 34 = 14M overhead (malloc failed, more swap should help)
    SunOS 5.7 beta update, 64 bit mode:	128 - 73 = 55M (netscape on console)
    SunOS 5.7 beta update, 64 bit mode:	128 - 83 = 45M (nothing on console)
    SunOS 5.7 beta update, 32 bit mode:	16 - 7 = 9M (nothing on console)
    

    I'm particularly interested in results for HP-UX, AIX, NT and W98.
    working-set is a slightly different sort of benchmark. It attempts to measure not how fast your machine runs, or even how much memory you have, but rather the point at which a large memory-consuming process will slow substantially after asking for Just a Little Too Much Memory. In other words, it tries to see where your virtual memory system will start thrashing, and hence how much OS overhead your system has in terms of RAM.
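
    The core of such a measurement presumably looks something like the following sketch: allocate some number of megabytes, touch every page, and time the pass. Once the allocation exceeds what the OS can keep resident, the time per pass jumps because the machine starts paging. This is a guess at the technique, not the benchmark's actual code:

    ```c
    /* Sketch: time one pass over n megabytes, touching one byte per page.
     * When n crosses the thrashing point, this time jumps sharply.
     * Details (page size, access pattern) are illustrative guesses. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    double touch_pass_seconds(size_t megabytes)
    {
        size_t bytes = megabytes << 20;
        unsigned char *buf = malloc(bytes);
        if (!buf) {
            fprintf(stderr, "malloc failed\n");  /* the message mentioned below */
            return -1.0;
        }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);     /* wall time, since paging
                                                    stalls don't burn CPU */
        for (size_t i = 0; i < bytes; i += 4096) /* touch one byte per page */
            buf[i]++;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        free(buf);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }
    ```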

    It's also a little different, in that it attempts to compensate for hiccups in system utilization - if a couple of timings don't fit with the rest, they're tossed. If over half the timings of a given line are widely scattered, the run is aborted (we're keeping the middle half of a bell curve).
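
    One plausible way to implement that trimming, sketched in C - the tolerance, the exact "middle half" rule, and the abort condition are guesses at what the description above implies, not the benchmark's actual code:

    ```c
    /* Sort the trial timings, average the middle half as a reference, and
     * keep only timings within `tolerance` (a fraction, e.g. 0.1) of that
     * reference.  If over half are tossed, signal that the run should be
     * aborted.  Assumes n <= 64 trials. */
    #include <stdlib.h>
    #include <string.h>

    static int cmp_double(const void *a, const void *b)
    {
        double x = *(const double *)a, y = *(const double *)b;
        return (x > y) - (x < y);
    }

    /* Returns the number kept (written into out), or -1 meaning "abort". */
    int trim_trials(const double *trials, int n, double tolerance, double *out)
    {
        double sorted[64];
        memcpy(sorted, trials, n * sizeof *trials);
        qsort(sorted, n, sizeof *sorted, cmp_double);

        /* mean of the middle half of the sorted timings */
        int lo = n / 4, hi = n - n / 4;
        double mid_sum = 0.0;
        for (int i = lo; i < hi; i++)
            mid_sum += sorted[i];
        double mid_mean = mid_sum / (hi - lo);

        int kept = 0;
        for (int i = 0; i < n; i++)
            if (trials[i] > mid_mean * (1.0 - tolerance) &&
                trials[i] < mid_mean * (1.0 + tolerance))
                out[kept++] = trials[i];

        return kept * 2 >= n ? kept : -1;  /* over half scattered => abort */
    }
    ```

    With 7 trials like {1.0, 1.01, 0.99, 1.02, 5.0, 1.0, 0.98}, the lone 5.0 (the cron-job hiccup) gets tossed and the other 6 are kept.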


    Let's interpret some output... (you can safely ignore this part)
    1 
    2 (kept 5 trials of 7) 99.396%,99.396%,99.807%,99.807% keep 1,1 avg 1,1
    3 (kept 4 trials of 7) 99.876%,100.483%,99.739%,99.931% keep 2,1 avg 2,1
    4 (kept 6 trials of 7) 99.036%,99.159%,99.190%,99.450% keep 3,1 avg 3,1
    5 (kept 4 trials of 7) 99.108%,100.073%,99.025%,99.834% keep 4,1 avg 4,1
    6 (kept 6 trials of 7) 99.076%,99.968%,99.190%,100.166% keep 5,1 avg 5,1
    7 (kept 5 trials of 7) 98.779%,99.700%,98.896%,99.704% keep 6,1 avg 6,1
    8 (kept 5 trials of 7) 99.108%,100.333%,99.139%,100.245% keep 7,1 avg 7,1
    
    The first column is the number of meg being tested. You can verify this with "top" or similar, but be aware that the program consists of more than just a data segment, and also has small memory needs of its own beyond the test arrays allocated to the specified amounts.

    The "kept n of m" bit is telling us that some measurements were thrown out. If all 7 are used, this isn't mentioned.

    The first percentage is done relative to the prior trial, with weird values thrown out. The second percentage is always done relative to the first trial.

    The third and fourth percentages are the same thing over again, but this time instead of throwing out weird values, it uses strict averages (geometric means).

    The "keep" and "avg" nonsense is just there to make it slightly (ok, very slightly) easier to remember how to interpret the percentages. If the difference between the percentages doesn't make sense to you, I suggest focusing on the second, and ignoring the others.
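
    For the curious, the "avg" (strict geometric mean) flavor of those percentages could be computed along these lines - names and details are illustrative, and the "keep" flavor would first trim outliers as described above:

    ```c
    /* Geometric mean of n timings, then the speed of the current line's
     * timings relative to a reference line's, as a percentage.  Since the
     * values are timings, larger means slower, so a slightly slower line
     * comes out just under 100% - matching the output shown above. */
    #include <math.h>

    double geometric_mean(const double *v, int n)
    {
        double log_sum = 0.0;
        for (int i = 0; i < n; i++)
            log_sum += log(v[i]);
        return exp(log_sum / n);
    }

    double percent_vs(const double *cur, int ncur,
                      const double *ref, int nref)
    {
        return 100.0 * geometric_mean(ref, nref) / geometric_mean(cur, ncur);
    }
    ```

    Calling percent_vs with the prior line's timings as the reference gives the third percentage; using the first line's timings gives the fourth.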

    The first line doesn't say anything, because there's nothing to compare it against, yet. It's typically sampled 31 times, instead of the usual 7. It's sampled more times, not because you care more about the behavior of a process with 1M of data, but because it's compared against all the other amounts (2M, 3M, &c).

    This is why we throw out some trials, instead of using strict averages. Check out lines 10 and 11:

    1 
    2 (kept 5 trials of 7) 99.396%,99.396%,99.807%,99.807% keep 1,1 avg 1,1
    3 (kept 4 trials of 7) 99.876%,100.483%,99.739%,99.931% keep 2,1 avg 2,1
    4 (kept 6 trials of 7) 99.036%,99.159%,99.190%,99.450% keep 3,1 avg 3,1
    5 (kept 4 trials of 7) 99.108%,100.073%,99.025%,99.834% keep 4,1 avg 4,1
    6 (kept 6 trials of 7) 99.076%,99.968%,99.190%,100.166% keep 5,1 avg 5,1
    7 (kept 5 trials of 7) 98.779%,99.700%,98.896%,99.704% keep 6,1 avg 6,1
    8 (kept 5 trials of 7) 99.108%,100.333%,99.139%,100.245% keep 7,1 avg 7,1
    9 (kept 5 trials of 7) 98.596%,99.483%,98.596%,99.452% keep 8,1 avg 8,1
    10 (kept 6 trials of 7) 98.676%,100.081%,114.583%,116.215% keep 9,1 avg 9,1
    11 (kept 6 trials of 7) 98.654%,99.978%,98.591%,86.044% keep 10,1 avg 10,1
    12 (kept 6 trials of 7) 98.230%,99.676%,98.310%,99.669% keep 11,1 avg 11,1
    13 (kept 6 trials of 7) 98.108%,99.875%,98.223%,99.912% keep 12,1 avg 12,1
    14 (kept 5 trials of 7) 98.372%,100.269%,98.354%,100.133% keep 13,1 avg 13,1
    15 (kept 5 trials of 7) 98.358%,99.986%,98.358%,100.004% keep 14,1 avg 14,1
    16 (kept 4 trials of 7) 98.131%,99.769%,98.156%,99.795% keep 15,1 avg 15,1
    
    One of the 7 trials of line 10 had something else going on at the same time - perhaps a cron job. Line 11 is bogus in one percentage, because that one percentage alone was computed relative to line 10's oddity.

    Note that if you run this on a machine that overcommits swap (e.g., Solaris 2.x, Linux, SunOS 4.1.x, Irix, OSF/1), I expect you will eventually make the machine Very Unhappy. Not all of these overcommit out of the box, but it is often a good idea to configure a machine to do so.

    On a machine that doesn't over-commit swap, you'll probably eventually get a "malloc failed" message.

    Running a machine in swap-overcommit mode can sometimes let working-set run a little longer before causing mayhem or exiting.




    You can e-mail the author with questions or comments: