Install network-manager (mine seems to have disappeared with my 10.10 upgrade).
Create your interface with an MTU of 9000
First, check if your network card can support Gigabit speeds
using ethtool or hwinfo --netcard; if it cannot, raising the
MTU to a jumbo size is likely to cause some network downtime.
In fact, even some cards that support 1000BaseT don't support jumbo frames.
Go to System -> Preferences -> Network Connections, and set your
new MTU. 9000 should be a good value.
Look for the changes in /etc/NetworkManager/system-connections/Auto eth0 or similar.
If you have entries in /etc/network/interfaces, you may need to remove them to get your NetworkManager changes to take effect.
A reboot is one way of making the changes take effect, or more
simply use the init.d script (the script might tell you to use the "service" command instead, but that is (or at least was) buggy in some releases).
Path MTU Discovery appears to be enabled by default, so nothing to do there but ensure there's a 0 in /proc/sys/net/ipv4/ip_no_pmtu_disc.
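A quick way to confirm the interface actually came up with the new MTU, and to check the Path MTU Discovery setting (the interface name eth0 is an assumption; substitute your own):

```shell
# Confirm the new MTU took effect:
ip link show eth0 | grep -o 'mtu [0-9]*'
# A 0 here means Path MTU Discovery is enabled:
cat /proc/sys/net/ipv4/ip_no_pmtu_disc
```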
RHEL 3 and Red Hat 9
Just put a line like:
MTU=9000
...in /etc/sysconfig/network-scripts/ifcfg-eth0, and reboot. Or you
might be able to restart the network instead - but be careful when
trying this over a remote connection.
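If you'd rather not reboot, a sketch of restarting the network instead (eth0 is an assumption, and the interface goes down briefly, so this is risky over a remote session):

```shell
# Apply the ifcfg change without a full reboot (RHEL-era commands):
service network restart        # or: ifdown eth0 && ifup eth0
ifconfig eth0 | grep -i mtu    # confirm the new MTU shows up
```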
Select one of the following two methods:
Shut down the interface you want to change.
Hunt around in smitty for where you "enable jumbo frames". Do so.
Hunt around in smitty for where you change the MTU. Set it to 9000.
Make sure "Path MTU Discovery" is enabled.
Bring the interface back up.
Using ifconfig and chdev. I was able to do this over ssh, but only
because I was reconfiguring a different interface from the one
I was ssh'ing in on. This is of course changing adapter 2.
chdev -P -l ent2 -a media_speed=Auto_Negotiation
ifconfig en2 down detach
chdev -l ent2 -a jumbo_frames=yes
chdev -l en2 -a mtu=9000
chdev -l en2 -a state=up
Solaris 9 with a "ce" NIC (using the second such NIC in this example):
esmft1-root /) ifconfig -a
lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
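For a ce NIC, enabling jumbo frames is driver-specific; as best I recall (treat this as an unverified sketch, and check the Sun GigaSwift Ethernet documentation for your release), it looks something like:

```shell
# Unverified sketch for a Sun "ce" NIC: select the instance, allow
# jumbo frames, then raise the MTU (ce1 / instance 1 are assumptions):
ndd -set /dev/ce instance 1
ndd -set /dev/ce accept-jumbo 1
ifconfig ce1 mtu 9000
```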
Despite your best efforts, you may find that you still aren't
getting jumbo frames at all, or are getting jumbo frames for some
protocols, but not the one(s) you need.
One good way to verify that you are indeed getting jumbo frames
for the protocol(s) you're concerned with is to fire up a sniffer
of some sort (ethereal, tethereal, snoop, tcpdump, whatever),
filter on the protocol(s) you require jumbo frames for,
crank up the verbosity to the point that packet
lengths are shown, and then sample around 10,000 packets or so. It can be
useful to generate a list of counts of how many packets of each
size you're actually getting (a character-cell app may be better
suited to this than a GUI app); the packet sizes may be more
clustered than you might expect. If your protocol(s) are showing
some jumbo frames,
you're probably good to go. Don't necessarily expect that all of
your packets will be jumbo frames; there's a good chance that your
protocol(s) will generate a mix of small packets and large ones.
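A rough sketch of that packet-size tally, using tcpdump on a saved capture (the file name capture.pcap is a placeholder):

```shell
# Tally frame sizes in a saved capture. "tcpdump -e" prints the
# link-level header, whose "length N:" field is the full frame size;
# the awk takes the first "length" on each line and counts each size.
tcpdump -e -nn -r capture.pcap 2>/dev/null \
  | awk '{ for (i = 1; i < NF; i++)
             if ($i == "length") { v = $(i+1); sub(/:$/, "", v); n[v]++; break } }
         END { for (s in n) print n[s], s }' \
  | sort -rn | head
```

Sizes clustered at your jumbo MTU (e.g. 9014 on the wire for a 9000 MTU) suggest you're good to go.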
Another, simpler way (if you have the software) is to use this
command on a Linux machine:
ip route get ip_address
Still another way is to use the tracepath command on a Linux system,
though I seem to recall hearing that it may not always be that accurate.
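For example (192.0.2.1 is a placeholder destination address):

```shell
# If the kernel has cached a smaller path MTU for this destination,
# an "mtu" attribute shows up in the route output:
ip route get 192.0.2.1
# tracepath probes hop by hop and reports the pmtu it discovers:
tracepath -n 192.0.2.1
```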
You may also want to look for errors, retransmissions, duplicate
packets, and so on. If you're using a card that is doing TCP checksums
in hardware, ethereal/tethereal (and perhaps other sniffers) may
misreport a large number of invalid TCP checksums. In that
case, bad TCP checksums can be ignored.
Rumor has it that NFS will memorize your MTU at mount time, so
you may find that you have to umount+mount to get NFS using
jumbo frames. However, I have some slight evidence supporting
the idea that NFS will rediscover a new MTU fine, as long as Path
MTU Discovery is turned on, and you've waited long enough for the
rediscovery to occur - AIX 5.1 apparently wants to rediscover the
Path MTU only once every 30 minutes, for example.
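If you do need to remount, a sketch (the server name, export, and mount point are all placeholders for your environment):

```shell
# Force an NFS client to re-evaluate the MTU by remounting:
umount /mnt/data
mount -t nfs server:/export/data /mnt/data
```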
The slower the CPU (in the absence of some form of TOE card),
the more jumbo frames should help. So on a 10 petahertz,
ultrasuperscalar box, jumbo frames probably wouldn't help a bit.
But they should make a huge difference on an old 486. I.e., it's
not about whether jumbo frames work on gigabit, it's about
whether the system in question can pump out packets fast enough
to make good use of a gigabit network, which is facilitated
sometimes by jumbo frames.
The smarter the gigabit NIC (For example, the more of the header
processing the card can do on its own, rather than relying on
other system resources), the less jumbo frames are likely to help.
The slower the system memory bandwidth (again, to an extent, in
the absence of a TOE card), the more jumbo frames should help
(but this effect should be lessened by the presence of a nice,
0-copy TCP stack - which TOE cards sometimes may interfere
with... IOW, some TCP/IP stacks copy packets around a number of
times before they make it from application to physical layer,
which will increase the impact of memory speed).
Using applications with large block sizes at the application
layer, should help maximize the benefits of jumbo frames by
making it easier for the TCP/IP stack to use large frames.
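One way to see this effect is to push data with a large application-layer block size and compare throughput against small writes on the same link (the host "filestore" and port 9000 are hypothetical):

```shell
# 64 KiB writes give the stack room to fill jumbo frames; compare
# against bs=512 on the same link to see the difference.
dd if=/dev/zero bs=65536 count=16384 | nc filestore 9000
```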
If you have a protocol that is prone to sending lots of
tinygrams, and it isn't a "send, wait for ack, send, wait for
ack" sort of protocol, but rather more of a "send, send, send,
get some acks, send, send, send, get some acks", then turning on
the Nagle algorithm may help maximize the benefit of jumbo frames.
On the other hand, if you -do- have a protocol that is very
"send, wait for ack, send, wait for ack", then turning -off- the
Nagle algorithm should help, by cutting latency: the stack no longer
waits briefly for the remainder of an MSS before transmitting.
If you have a protocol that can be rearranged a bit to use
extents, that too should help maximize the benefits of jumbo
frames. I believe Lustre and NFSv4 are supposed to be able to do this.
As a sort of summary:
Jumbo frames should help performance by reducing the load on the
CPU (by requiring fewer headers to be processed for the same
amount of payload data), reducing the load on the system memory
bus (by requiring less "overhead memory" to be copied around),
and reducing the number of context switches (between userspace and
kernelspace, at least once for each packet). Context switches tend
to be especially expensive on x86 designs, which historically
weren't optimized that well for multiuser systems.
Further notes about gigabit ethernet:
You're best off leaving it autonegotiated.
In the gigabit world, some say there's no such thing as "half duplex", while others say that although it's
infrequently supported, it is in the spec.
A high performance switch like the IBM SP2 switch is really only
gigabit as well. The difference is that the IBM SP2 switch has been
optimized to decrease latency substantially, making it better suited
to applications built on top of MPI or similar.