πŸ“¦ Jumbo Frames Configuration

Jumbo frames are Ethernet frames with a Maximum Transmission Unit (MTU) larger than the standard 1500 bytes, typically 9000 bytes. Because each frame carries more payload, they reduce per-packet CPU overhead and improve throughput for bulk data transfers, making them ideal for storage networks, virtualization, and high-performance computing environments.

Key Concepts
  • MTU (Maximum Transmission Unit) - Maximum payload size in bytes per frame
  • Standard MTU - 1500 bytes (default Ethernet standard)
  • Jumbo Frame - Ethernet frame with MTU > 1500 bytes (typically 9000 bytes)
  • L2 MTU - Layer 2 MTU includes Ethernet overhead (headers, VLAN tags)
  • Fragmentation - Breaking large packets into smaller ones (avoided with proper MTU)
  • Path MTU Discovery - Process to find smallest MTU along route
  • MSS (Maximum Segment Size) - TCP payload size (MTU minus IP/TCP headers)

Prerequisites​

Before configuring jumbo frames, ensure you have:

  • βœ… Network equipment supporting jumbo frames (NICs, switches, routers)
  • βœ… All devices in data path configured for same MTU
  • βœ… Understanding of your network topology and traffic patterns
  • βœ… Backup configuration before making changes
  • βœ… Test environment or maintenance window
  • βœ… Monitoring tools to verify performance improvements

Important Considerations
  • End-to-End Support Required: ALL devices in path must support jumbo frames
  • Switch Configuration: Most switches need explicit jumbo frame enablement
  • Performance Impact: Misconfigured MTU causes fragmentation and packet loss
  • VLAN Overhead: 802.1Q tag adds 4 bytes to frame size
  • Internet Traffic: Jumbo frames only work on local networks (not across internet)
  • Testing Required: Always verify with iperf/ping before production
  • Incremental Deployment: Enable jumbo frames progressively, not all at once

Understanding Jumbo Frames​

Frame Size Comparison​

Standard Ethernet Frame (1500 MTU):

[Preamble 8B][Dest MAC 6B][Src MAC 6B][Type 2B][Payload 1500B][CRC 4B]
Total: 1518-byte frame (1526 bytes on the wire including preamble), without VLAN tag

Jumbo Frame (9000 MTU):

[Preamble 8B][Dest MAC 6B][Src MAC 6B][Type 2B][Payload 9000B][CRC 4B]
Total: 9018-byte frame (9026 bytes on the wire including preamble), without VLAN tag

Performance Benefits​

| Frame Size | Payload | Overhead | Efficiency | CPU Cycles | Use Case |
|---|---|---|---|---|---|
| 1500 MTU (Standard) | 1500B | 26B/frame | 98.3% | Baseline | General networking, internet |
| 4000 MTU | 4000B | 26B/frame | 99.4% | -40% vs 1500 | Moderate performance boost |
| 9000 MTU (Jumbo) | 9000B | 26B/frame | 99.7% | -60% vs 1500 | Storage, backups, VM migration |

Throughput Example:

  • 1 Gbps link with 1500 MTU: ~118 MB/s actual throughput
  • 1 Gbps link with 9000 MTU: ~125 MB/s actual throughput (~6% gain)
  • 10 Gbps link with 9000 MTU: ~1180 MB/s (10-15% gain over standard MTU)

When to Use Jumbo Frames

Best Use Cases:

  • iSCSI/NFS storage traffic
  • VMware vMotion and vSphere replication
  • Database replication and backups
  • High-resolution video streaming (internal)
  • Hyper-V Live Migration

Avoid For:

  • Internet-facing connections
  • Mixed MTU environments
  • Networks with legacy equipment
  • Wireless networks (limited support)

Common Jumbo Frame Scenarios​

Scenario 1: Data Center Storage Network​

Topology:

[ESXi Hosts] ──┬── [10G Switch] ──┬── [iSCSI Storage]
               β”‚                  β”‚
             [NAS]         [Backup Server]

All devices: MTU 9000

Device Configuration:

| Device | Role | MTU | L2 MTU | Interface | Performance Gain |
|---|---|---|---|---|---|
| ESXi Host 1 | Hypervisor | 9000 | 9018 | vmnic1 (storage) | +12% IOPS |
| ESXi Host 2 | Hypervisor | 9000 | 9018 | vmnic1 (storage) | +12% IOPS |
| Storage Switch | Aggregation | 9000 | 9216 | All ports | Full line-rate |
| iSCSI SAN | Storage | 9000 | 9018 | eth0-3 (MPIO) | +15% throughput |
| NAS Server | File storage | 9000 | 9018 | bond0 | +8% SMB transfer |
| Backup Server | Veeam proxy | 9000 | 9018 | ens192 | +10% backup speed |

Scenario 2: Virtualization Cluster​

Network: VMware vSphere cluster with separate networks

| Network Type | VLAN | MTU | Purpose | Traffic Pattern |
|---|---|---|---|---|
| Management | 10 | 1500 | vCenter, SSH, web | Low volume, latency-sensitive |
| vMotion | 20 | 9000 | Live VM migration | Bulk transfers, high throughput |
| Storage | 30 | 9000 | iSCSI/NFS datastores | Sequential I/O, consistent load |
| VM Network | 40 | 1500 | Guest VM traffic | Mixed protocols, internet access |
| Replication | 50 | 9000 | vSphere Replication | Large file transfers |

Per-Interface Configuration:

| ESXi Host Interface | Portgroup | MTU | Comment |
|---|---|---|---|
| vmk0 (Management) | Management-PG | 1500 | Standard for management |
| vmk1 (vMotion) | vMotion-PG | 9000 | Optimize VM migration speed |
| vmk2 (iSCSI-A) | iSCSI-A-PG | 9000 | First iSCSI path |
| vmk3 (iSCSI-B) | iSCSI-B-PG | 9000 | Second iSCSI path (MPIO) |
| vmnic4 (VM traffic) | VM-Network-PG | 1500 | Guest VM connectivity |

Scenario 3: Multi-Site Replication​

Network: HQ with branch office replication over dedicated fiber

| Site | Device | WAN MTU | LAN MTU | Notes |
|---|---|---|---|---|
| HQ | Router | 1500 | 9000 | WAN limited by ISP |
| HQ | Core Switch | N/A | 9000 | Internal storage network |
| HQ | Database Server | 9000 | 9000 | Replication source |
| Branch | Router | 1500 | 9000 | Same WAN limitation |
| Branch | Access Switch | N/A | 9000 | Internal network |
| Branch | Database Replica | 9000 | 9000 | Replication target |

Path MTU Impact:

  • Local LAN traffic: 9000 MTU (optimized)
  • WAN replication: 1500 MTU (fragmented to standard size)
  • Solution: MSS clamping at router to avoid fragmentation

Scenario 4: Mixed MTU Environment (Transition Phase)​

Challenge: Upgrading network progressively to jumbo frames

| Device Group | Current MTU | Target MTU | Migration Phase | Notes |
|---|---|---|---|---|
| Core Switches | 1500 | 9000 | Phase 1 (Week 1) | Enable switch support first |
| Storage Servers | 1500 | 9000 | Phase 2 (Week 2) | Low-risk, isolated VLAN |
| Hypervisor Hosts | 1500 | 9000 | Phase 3 (Week 3) | During maintenance window |
| Application Servers | 1500 | 1500 | Not changed | No performance benefit |
| User VLANs | 1500 | 1500 | Not changed | Internet-facing traffic |

Configuration in MikroTik RouterOS​

Option A: Terminal (Interface MTU Configuration)​

Basic Interface MTU​

# Set MTU on physical interface
/interface ethernet set ether1 mtu=9000 l2mtu=9216 comment="Storage Network"

# Verify interface MTU
/interface ethernet print detail where name=ether1

# Expected output:
# name="ether1" mtu=9000 l2mtu=9216

Bridge with Jumbo Frames​

# Create bridge for storage network
/interface bridge add name=bridge-storage mtu=9000 comment="Jumbo Frame Bridge"

# Add interfaces to bridge
/interface bridge port add bridge=bridge-storage interface=ether2 comment="ESXi-1"
/interface bridge port add bridge=bridge-storage interface=ether3 comment="ESXi-2"
/interface bridge port add bridge=bridge-storage interface=ether4 comment="Storage"

# Set L2 MTU on bridge ports
/interface ethernet set ether2,ether3,ether4 l2mtu=9216

# Assign IP to bridge
/ip address add address=10.0.100.1/24 interface=bridge-storage comment="Storage Gateway"

VLAN with Jumbo Frames​

# Create VLAN interface with larger MTU
/interface vlan add name=vlan-storage vlan-id=30 interface=bridge1 mtu=9000 \
comment="Storage VLAN"

# Set L2 MTU on physical interfaces carrying VLAN
/interface ethernet set ether1 l2mtu=9216

# Assign IP address
/ip address add address=10.0.30.1/24 interface=vlan-storage

MSS Clamping for Mixed MTU​

# Clamp TCP MSS for WAN interface (prevent fragmentation)
/ip firewall mangle add chain=forward protocol=tcp tcp-flags=syn \
out-interface=ether1-wan action=change-mss new-mss=1360 \
comment="MSS Clamp for WAN"

# For jumbo frame network
/ip firewall mangle add chain=forward protocol=tcp tcp-flags=syn \
in-interface=bridge-storage action=change-mss new-mss=8960 \
comment="MSS for Jumbo Frames"

PPPoE with Jumbo Frames​

# PPPoE client with larger MTU (if ISP supports)
/interface pppoe-client add name=pppoe-out1 interface=ether1-wan \
mtu=9000 mrru=9000 user=username password=password \
comment="Jumbo Frame PPPoE"

# Note: ISP must support jumbo frames on uplink

Option B: Winbox​

Setting Interface MTU​

  1. Physical Interface:

    • Interfaces β†’ Ethernet β†’ Select ether1
    • General Tab:
      • MTU: 9000
      • L2 MTU: 9216
      • Comment: Storage Network
    • Click Apply β†’ OK
  2. Bridge Configuration:

    • Bridge β†’ Bridge β†’ [+]
    • Name: bridge-storage
    • MTU: 9000
    • Comment: Jumbo Frame Bridge
    • Click OK
  3. Bridge Ports:

    • Bridge β†’ Ports β†’ [+]
    • Interface: ether2
    • Bridge: bridge-storage
    • Click OK
    • Repeat for ether3, ether4
  4. VLAN Interface:

    • Interfaces β†’ VLAN β†’ [+]
    • Name: vlan-storage
    • VLAN ID: 30
    • Interface: bridge1
    • MTU: 9000
    • Click OK
  5. MSS Clamping:

    • IP β†’ Firewall β†’ Mangle β†’ [+]
    • Chain: forward
    • Protocol: tcp
    • TCP Flags: syn (check only)
    • Out Interface: ether1-wan
    • Action: change-mss
    • New MSS: 1360
    • Comment: MSS Clamp for WAN
    • Click OK

Understanding MTU Configuration​

MTU vs L2 MTU​

| Parameter | Definition | Typical Value | Layer | Includes |
|---|---|---|---|---|
| MTU | Maximum IP packet size | 1500 or 9000 | Layer 3 | IP header + payload |
| L2 MTU | Maximum Ethernet frame size | 1518 or 9216 | Layer 2 | Ethernet header + MTU + CRC |
| VLAN MTU | MTU with 802.1Q tag | 1504 or 9004 | Layer 2.5 | MTU + 4-byte VLAN tag |

Calculation Example:

Jumbo Frame Components:
- Ethernet Header: 14 bytes (Dest MAC + Src MAC + Type)
- 802.1Q VLAN Tag: 4 bytes (optional)
- IP Payload: 9000 bytes (MTU)
- CRC: 4 bytes
- Interframe Gap: 12 bytes (on-wire spacing only, not counted in L2 MTU)

Total L2 MTU Required: 14 + 4 + 9000 + 4 = 9022 bytes
Recommended L2 MTU: 9216 bytes (accommodates overhead)
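
The same calculation as a checkable sketch (plain Python, byte counts as listed above):

```python
ETH_HEADER = 14  # dest MAC (6) + src MAC (6) + EtherType (2)
VLAN_TAG = 4     # optional 802.1Q tag
CRC = 4

def required_l2_mtu(mtu: int, vlan: bool = True) -> int:
    """Smallest L2 frame size a port must accept for a given IP MTU."""
    return ETH_HEADER + (VLAN_TAG if vlan else 0) + mtu + CRC

print(required_l2_mtu(9000))          # 9022 - minimum with a VLAN tag
print(required_l2_mtu(9000) <= 9216)  # True - 9216 leaves headroom
```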

Network Flow Diagram​

[Client] ─── 9000 MTU ─── [Switch] ─── 9000 MTU ─── [Storage]
10.0.100.10               (L2 MTU: 9216)            10.0.100.50

Ping Test (RouterOS; size counts the full IP packet including headers):
/ping 10.0.100.50 size=9000 do-not-fragment

Packet breakdown:
- ICMP data: 8972 bytes
- IP header: 20 bytes
- ICMP header: 8 bytes
- Total: 9000 bytes (exactly fills the MTU)

Verification​

Step 1: Verify Interface MTU Settings​

# Check all interfaces
/interface print detail

# Check specific interface
/interface ethernet print detail where name=ether1

# Expected output:
# name="ether1" mtu=9000 l2mtu=9216 mac-address=XX:XX:XX:XX:XX:XX

Step 2: Test with Ping (Do Not Fragment)​

# Ping with a full MTU-sized packet
# (RouterOS size includes the 20B IP and 8B ICMP headers, i.e. 8972B of ICMP data)
/ping 10.0.100.50 size=9000 do-not-fragment count=10

# Expected: 0% packet loss

# Test one byte larger (should fail)
/ping 10.0.100.50 size=9001 do-not-fragment count=5

# Expected: 100% packet loss or "packet too large" error
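
Ping size conventions differ by platform, which causes most false failures in this test; a small converter (plain Python, encoding the conventions used in this page: RouterOS size spans the whole IP packet, while Linux -s and Windows -l give ICMP payload only):

```python
IP_HEADER = 20
ICMP_HEADER = 8

def ping_size_for_mtu(mtu: int, platform: str) -> int:
    """Value to pass to ping so the probe exactly fills the MTU."""
    if platform == "routeros":  # /ping size= includes IP+ICMP headers
        return mtu
    if platform in ("linux", "windows"):  # -s / -l are payload only
        return mtu - IP_HEADER - ICMP_HEADER
    raise ValueError(f"unknown platform: {platform}")

print(ping_size_for_mtu(9000, "routeros"))  # 9000
print(ping_size_for_mtu(9000, "linux"))     # 8972
```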

Step 3: Check Switch Support (from Linux/Windows)​

# Linux: Check interface MTU
ip link show eth0
# Expected: mtu 9000

# Windows: Check MTU
netsh interface ipv4 show subinterfaces
# Expected: MTU = 9000

# Ping test from client
ping 10.0.100.1 -f -l 8972
# Expected: Reply from 10.0.100.1 (0% loss)

Step 4: Measure Throughput with iperf3​

# On storage server (target):
iperf3 -s

# On client (source):
iperf3 -c 10.0.100.50 -t 30 -i 5

# Compare results:
# Standard MTU (1500): ~118 MB/s on 1G link
# Jumbo MTU (9000): ~125 MB/s on 1G link (+6%)

Step 5: Monitor CPU Usage During Transfer​

# Monitor CPU while transferring large file
/tool profile duration=30

# Expected: Lower CPU usage with jumbo frames
# Standard MTU: 60-70% CPU
# Jumbo MTU: 40-50% CPU (~20-30% reduction)

Step 6: Verify Path MTU Discovery​

# Linux: Trace path MTU
tracepath 10.0.100.50

# Expected output shows MTU at each hop:
# 1: 10.0.100.1 (10.0.100.1) pmtu 9000
# 2: 10.0.100.50 (10.0.100.50) reached

Troubleshooting​

| Issue | Cause | Solution |
|---|---|---|
| Ping works but file transfer fails | Path MTU discovery blocked (ICMP) | Allow ICMP "Packet Too Big" in firewall |
| Intermittent packet loss | Switch buffer overflow with large frames | Enable flow control: /interface ethernet set ether1 rx-flow-control=on tx-flow-control=on |
| Cannot set MTU > 1500 | Interface/hardware limitation | Check driver/firmware and the model's maximum L2 MTU in the vendor specifications |
| VLAN traffic drops with jumbo frames | L2 MTU too small for VLAN+jumbo | Set L2 MTU β‰₯ 9216: /interface ethernet set ether1 l2mtu=9216 |
| Fragmentation occurring | MSS not clamped properly | Set TCP MSS: /ip firewall mangle add action=change-mss new-mss=8960 |
| Performance degradation after enabling | Misconfigured device in path | Test each hop individually, verify all support 9000 MTU |
| Switch not forwarding jumbo frames | Jumbo frame support disabled | Enable on switch (varies by vendor) |
| ESXi vmkernel cannot set 9000 MTU | vSwitch MTU not configured | Set vSwitch MTU to 9000 before vmkernel adapters |
| NFS mount fails with jumbo frames | Server/client MTU mismatch | Verify both NFS server and client use same MTU |
| iSCSI initiator disconnects | MSS/MTU mismatch causing retransmits | Set iSCSI initiator MTU = target MTU |
| PPPoE drops with large MTU | ISP equipment limitation | Revert PPPoE to 1500 MTU, use jumbo on LAN only |
| Bridge not passing jumbo frames | Bridge MTU smaller than port MTU | Set bridge MTU β‰₯ port MTU: /interface bridge set bridge-storage mtu=9000 |

Advanced Jumbo Frame Options​

1. Per-VLAN MTU Configuration​

Separate MTU for different traffic types:

# Management VLAN - standard MTU
/interface vlan add name=vlan-mgmt vlan-id=10 interface=bridge1 mtu=1500

# Storage VLAN - jumbo frames
/interface vlan add name=vlan-storage vlan-id=30 interface=bridge1 mtu=9000

# VM network - standard MTU
/interface vlan add name=vlan-vm vlan-id=40 interface=bridge1 mtu=1500

# Set physical interface L2 MTU to maximum
/interface ethernet set ether1 l2mtu=9216

2. QoS with Jumbo Frames​

Prioritize jumbo frame traffic:

# Mark storage traffic
/ip firewall mangle add chain=prerouting in-interface=vlan-storage \
action=mark-connection new-connection-mark=storage-conn

/ip firewall mangle add chain=prerouting connection-mark=storage-conn \
action=mark-packet new-packet-mark=storage-pkt

# Prioritize in queue
/queue simple add name=storage-priority target=vlan-storage \
max-limit=10G/10G priority=1/1 queue=default/default

3. Baby Jumbo Frames (4000-6000 MTU)​

Compromise for partial infrastructure:

# Use 4096 MTU where full 9000 not supported
/interface ethernet set ether1 mtu=4096 l2mtu=4116

# Still cuts per-frame CPU overhead substantially (roughly -40% vs 1500 MTU, per table above)
# Compatible with more switches than 9000 MTU

4. Bonding with Jumbo Frames​

LAG/bonding with large MTU:

# Create bonding interface
/interface bonding add name=bond-storage slaves=ether2,ether3 \
mode=802.3ad lacp-rate=1sec mtu=9000

# Set L2 MTU on member interfaces
/interface ethernet set ether2,ether3 l2mtu=9216

# Assign IP
/ip address add address=10.0.100.1/24 interface=bond-storage

5. MTU Path Discovery Optimization​

Fine-tune PMTUD:

# PMTUD has no global toggle in RouterOS - it relies on ICMP type 3 code 4
# ("Fragmentation Needed") reaching the sender, so allow and log those packets:

# Log PMTUD events
/ip firewall filter add chain=forward protocol=icmp icmp-options=3:4 \
action=log log-prefix="PMTUD: " comment="Log Packet Too Big"

/ip firewall filter add chain=forward protocol=icmp icmp-options=3:4 \
action=accept comment="Allow PMTUD"

6. Router-on-a-Stick with Mixed MTU​

Inter-VLAN routing with different MTU:

# Standard VLAN
/interface vlan add name=vlan-users vlan-id=10 interface=ether1 mtu=1500
/ip address add address=192.168.10.1/24 interface=vlan-users

# Jumbo VLAN
/interface vlan add name=vlan-storage vlan-id=20 interface=ether1 mtu=9000
/ip address add address=192.168.20.1/24 interface=vlan-storage

# The router fragments IPv4 packets without the DF bit when routing from the
# jumbo VLAN to the standard VLAN; DF-marked packets trigger ICMP "Fragmentation Needed"

7. Container with Jumbo Frames​

Docker/LXC container networking:

# Create veth pair with jumbo MTU
/interface veth add name=veth-storage address=10.0.100.2/24 \
gateway=10.0.100.1 mtu=9000

# Assign to container (the container inherits the veth interface's MTU)
/container add interface=veth-storage \
file=container-image.tar remote-image=...

8. Wireless with Large MTU (Limited)​

Extend jumbo frames to wireless (rarely supported):

# Some enterprise APs support up to 2304 MTU
/interface wireless set wlan1 mtu=2304

# Note: Most wireless devices limited to 1500 MTU
# Test thoroughly before production

9. Jumbo Frame Monitoring​

Track jumbo frame statistics:

# Monitor interface stats
/interface ethernet monitor ether1 once

# Check for errors
/interface ethernet print stats

# Log large frame drops
:if ([/interface ethernet get ether1 tx-drop] > 0) do={
:log warning "Jumbo frame drops detected on ether1"
}

10. GRE/IPIP Tunnel with Jumbo Frames​

Tunnel overhead compensation:

# Create GRE tunnel
/interface gre add name=gre-tunnel1 remote-address=203.0.113.1 \
mtu=8976 local-address=198.51.100.1

# MTU reduced by GRE overhead (20-byte outer IP + 4-byte GRE header = 24 bytes)
# 9000 MTU network β†’ 8976 MTU tunnel
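
Other tunnel types consume different amounts of the 9000-byte budget; a sketch of the arithmetic (plain Python; header sizes are the protocol minimums, without GRE keys or IP options):

```python
OUTER_IPV4 = 20  # every tunnel here adds a fresh outer IPv4 header

# Additional encapsulation bytes on top of the outer IP header
ENCAP = {
    "ipip": 0,    # bare IP-in-IP
    "gre": 4,     # minimal GRE header
    "vxlan": 30,  # UDP (8) + VXLAN (8) + inner Ethernet (14)
}

def tunnel_mtu(underlay_mtu: int, kind: str) -> int:
    """Largest inner packet that crosses the underlay unfragmented."""
    return underlay_mtu - OUTER_IPV4 - ENCAP[kind]

print(tunnel_mtu(9000, "gre"))    # 8976
print(tunnel_mtu(9000, "ipip"))   # 8980
print(tunnel_mtu(9000, "vxlan"))  # 8950
```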

11. Load Balancing with Jumbo Frames​

ECMP with large frames:

# Two equal-cost paths with jumbo MTU
/ip route add dst-address=10.0.200.0/24 gateway=10.0.100.1 \
distance=1 comment="Path 1"

/ip route add dst-address=10.0.200.0/24 gateway=10.0.101.1 \
distance=1 comment="Path 2"

# Ensure both paths support 9000 MTU
/interface print detail where name~"ether"

12. Automated MTU Testing Script​

Discover maximum MTU:

:local target "10.0.100.50"
:local maxmtu 9000
:local minmtu 1500
:local currentmtu $maxmtu

:put "=== MTU Discovery Test ==="
:while ($currentmtu >= $minmtu) do={
    # RouterOS ping size already includes the 20B IP + 8B ICMP headers
    :local result [/ping $target count=3 size=$currentmtu do-not-fragment]

    :if ($result = 0) do={
        :put ("MTU $currentmtu: FAILED")
        :set currentmtu ($currentmtu - 500)
    } else={
        :put ("MTU $currentmtu: SUCCESS - maximum working MTU found")
        :set currentmtu 0
    }
}
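
The script above steps down in 500-byte jumps, so it brackets rather than pinpoints the limit; a binary search finds the exact maximum in about a dozen probes. A sketch in Python with the probe injected (swap the lambda for a real do-not-fragment ping; the bounds are illustrative):

```python
from typing import Callable

def find_max_mtu(probe: Callable[[int], bool],
                 lo: int = 1500, hi: int = 9000) -> int:
    """Largest size for which probe(size) succeeds, assuming sizes up to
    some limit all work and everything above it fails (monotonic path)."""
    if not probe(lo):
        raise RuntimeError("even the minimum size fails")
    while lo < hi:
        mid = (lo + hi + 1) // 2  # round up so the loop always narrows
        if probe(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

# Fake path that silently drops anything over 8996 bytes:
print(find_max_mtu(lambda size: size <= 8996))  # 8996
```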

Performance Benchmarks​

Throughput Comparison​

1 Gigabit Ethernet:

| MTU Size | Throughput | Frames/sec | CPU Usage | Efficiency |
|---|---|---|---|---|
| 1500 | 940 Mbps | 78,125 | 65% | Baseline |
| 4000 | 970 Mbps | 30,303 | 50% | +3% throughput |
| 9000 | 980 Mbps | 13,889 | 45% | +4% throughput |

10 Gigabit Ethernet:

| MTU Size | Throughput | Frames/sec | CPU Usage | Efficiency |
|---|---|---|---|---|
| 1500 | 9.2 Gbps | 781,250 | 90% | Baseline |
| 4000 | 9.7 Gbps | 303,030 | 65% | +5% throughput |
| 9000 | 9.8 Gbps | 138,889 | 55% | +6.5% throughput |

Best Practices Summary
  1. Enable jumbo frames only on isolated storage/backup networks
  2. Set L2 MTU to 9216 on all switches supporting jumbo VLANs
  3. Test end-to-end with ping before enabling in production
  4. Use MSS clamping on routers interfacing with standard MTU networks
  5. Monitor for increased error rates after enabling
  6. Document all MTU settings in network diagrams
  7. Keep management networks at 1500 MTU for compatibility
  8. Use iperf3 to measure actual throughput improvements


πŸŽ‰ You now understand jumbo frames, MTU configuration, performance optimization, and troubleshooting! Use jumbo frames strategically in storage and virtualization networks to reduce CPU overhead and improve bulk transfer throughput while maintaining standard MTU for general traffic.