A Study on the Load of Packet Assembly on Edge Routers
Student ID 1020331
February 8, 2002
Abstract

A study on the load of packet assembly on edge routers

Atsushi Yamada

As Internet access grows worldwide, the number of packets carried over the network keeps increasing, so trunk nodes are required to process packets efficiently and forward them quickly. Present IP packet transmission, however, is inefficient in one respect: the packets that enter the backbone network from an access network are smaller than the backbone's MTU (Maximum Transfer Unit). Packets sized to the access network's MTU are very small, yet they cross the backbone without any change in size. When data is carried in many small packets, the trunk nodes must process a large number of IP headers, which increases their processing overhead; the larger total volume of header data also wastes network bandwidth.

We focus on this fact and study a more efficient transmission method, which we call Packet Assembly. In the Packet Assembly method, small packets are assembled at an edge router and forwarded into the backbone network as one large packet, which is then transmitted efficiently across the backbone. Finally, at the edge router just before the destination access network, the large packet is split back into the original small packets. In this way we aim to reduce the load on backbone routers and make packet forwarding more efficient.

In our system the load on the edge router may increase instead. We therefore conducted an experiment measuring CPU utilization on the edge router while it assembles packets, and from the results we discuss the practicality of Packet Assembly.

Keywords: Packet Assembly, MTU, IP, access network, core network, load
1 Introduction
2 Background

2.1 Growth of the Internet

The ARPAnet, the origin of today's Internet, started in 1969 with four nodes; in the roughly 30 years since, the network has grown explosively. Host counts published on the Internet Software Consortium's Web site [5] show the number of hosts connected to the Internet increasing rapidly year after year (Figure 2.1).

Figure 2.1: Growth in the number of Internet hosts
2.2 Inefficiency of IP packet transmission

2.2.1 Per-packet processing cost

The cost of carrying traffic through a network node can be divided into a per-bit cost, proportional to the number of bits transferred, and a per-packet cost, incurred once for every packet regardless of its size [4]. When a given volume of data is carried end-to-end in many small packets rather than a few large ones, the per-packet cost dominates: the node must examine far more IP headers, and its CPU load rises accordingly. Figure 2.2 [3][6] compares standard 1500-byte Ethernet frames with extended 9 kB frames; the larger frames achieve higher throughput with lower CPU utilization.

2.2.2 Packet sizes on a backbone network

Figure 2.3 shows the packet size distribution measured on the InternetMCI backbone in 1998 [1]. The distribution has strong peaks around 40-44, 552, 576, and 1500 bytes. The many 40-44-byte packets are mostly TCP control segments (ACK, SYN, FIN, RST) and interactive traffic such as telnet; in other words, a large share of the packets crossing the backbone are far smaller than its MTU.
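The per-packet inefficiency can be quantified with a short calculation. The sketch below is our own illustration (not part of the measurement study): it computes what fraction of the bytes on the wire are protocol header for the packet sizes that dominate the backbone measurements, assuming minimal 20-byte IPv4 and TCP headers.

```python
# Fraction of each packet occupied by protocol headers, for the
# packet sizes that dominate the 1998 backbone measurements.
# Assumes a minimal IPv4 header (20 B) plus a minimal TCP header (20 B).
HEADER = 20 + 20  # bytes of IPv4 + TCP header, no options

def header_overhead(packet_size: int) -> float:
    """Return the fraction of the packet occupied by headers."""
    return HEADER / packet_size

for size in (40, 552, 576, 1500):
    print(f"{size:5d} B packet: {header_overhead(size):6.1%} header")
```

A 40-byte ACK is all header, while a 1500-byte packet spends under 3% of its bytes on headers, which is why many small packets waste both CPU and bandwidth.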
Figure 2.2: Extended Ethernet Frames vs. Standard Ethernet Frames

Figure 2.3: Packet size distribution on the InternetMCI backbone
A TCP sender that does not know the path MTU conservatively uses a default MSS (Maximum Segment Size) of 512 to 536 bytes; adding the 20-byte TCP header and the 20-byte IP header yields IP packets of 552 to 576 bytes. On an Ethernet the upper limit is 1500 bytes, the Ethernet MTU (Maximum Transfer Unit). The MTU is the largest IP datagram a given link layer can carry, and it differs between link technologies; Table 2.1 [9, p. 135] lists representative values. Because Ethernet, with its 1500-byte MTU, is by far the most widely used link layer, IP packets in practice rarely exceed that size.

Table 2.1: MTU of various link layers

  Link layer              MTU (bytes)   Total length incl. FCS (bytes)
  Hyperchannel            65535         -
  IP over HIPPI           65280         65320
  16 Mbps IBM Token Ring  17914         17958
  IP over ATM             9180          -
  IEEE 802.4 Token Bus    8166          8191
  IEEE 802.5 Token Ring   4464          4508
  FDDI                    4352          4500
  Ethernet                1500          1518
  PPP (default)           1500          -
  IEEE 802.3 Ethernet     1492          1518
  Minimum IP MTU          68            -
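The relation between MSS, header sizes, and MTU above is simple arithmetic; the following sketch (our own illustration) makes it explicit.

```python
IP_HEADER = 20   # minimal IPv4 header, bytes
TCP_HEADER = 20  # minimal TCP header, bytes

def packet_size_for_mss(mss: int) -> int:
    """IP packet size produced by a TCP segment carrying `mss` data bytes."""
    return mss + TCP_HEADER + IP_HEADER

def mss_for_mtu(mtu: int) -> int:
    """Largest MSS that still fits within the link MTU."""
    return mtu - TCP_HEADER - IP_HEADER

# The conservative default MSS values yield the 552/576-byte packets
# seen in the backbone measurements; the Ethernet MTU allows MSS 1460.
assert packet_size_for_mss(512) == 552
assert packet_size_for_mss(536) == 576
assert mss_for_mtu(1500) == 1460
```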
2.3 Related work

2.3.1 Jumbo Frame

Jumbo Frame is an extension of Ethernet proposed by Alteon WebSystems(*1). A standard Ethernet frame is at most 1518 bytes. That size was adequate for 10 Mbps and 100 Mbps Ethernet, but on Gigabit Ethernet (1000 Mbps) the 1518-byte limit forces hosts and routers to handle a very large number of frames per second. Jumbo Frame therefore extends the maximum Ethernet frame from 1518 bytes to about 9000 bytes. Using it requires Gigabit Ethernet NICs (Network Interface Cards) and switches that support the extended frame size.

2.3.2 MPLS

MPLS (Multi Protocol Label Switching) is a forwarding technique standardized by the IETF (Internet Engineering Task Force). In MPLS, routers called LSRs (Label Switch Routers) forward packets by short fixed-length labels instead of by IP address lookup; the labels are distributed among the LSRs by LDP (Label Distribution Protocol).

(*1) Alteon Networks is now the CNBU (Content Networking Business Unit) of Nortel Networks.
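The motivation for Jumbo Frames can be seen from the frame rates involved. The sketch below is our own estimate; for simplicity it ignores the Ethernet preamble and inter-frame gap, so it slightly overestimates the true maximum frame rate.

```python
def frames_per_second(link_bps: float, frame_bytes: int) -> float:
    # Simplified: ignores the 8-byte preamble and 12-byte inter-frame gap.
    return link_bps / (frame_bytes * 8)

# Maximum frame rate at each link speed with standard 1518-byte frames.
for speed, name in ((10e6, "10 Mbps"), (100e6, "100 Mbps"), (1e9, "1 Gbps")):
    print(f"{name:8s}: {frames_per_second(speed, 1518):10.0f} frames/s")

# With 9000-byte jumbo frames the 1 Gbps frame rate drops by a factor of ~6.
print(f"1 Gbps jumbo: {frames_per_second(1e9, 9000):10.0f} frames/s")
```

At 1 Gbps a router may face over 80,000 standard frames per second, versus under 14,000 jumbo frames, which is the per-packet saving that Jumbo Frame targets.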
MPLS is expected to provide, among other things:

1. fast forwarding by label switching;
2. integration of IP and ATM, carrying IP packets over ATM hardware;
3. IP-VPN services;
4. QoS control and traffic engineering over IP and ATM networks.

Even with MPLS, however, each LSR must still process every packet individually, so the per-packet cost discussed above remains.
3 Packet Assembly

3.1 Proposed method

3.1.1 Basic operation

Figure 3.1 illustrates the basic operation of Packet Assembly. Small IP packets sent from access network A toward the backbone are collected at the edge router between access network A and the backbone and assembled into a single large packet.
The large packet crosses the backbone efficiently and, at the edge router in front of the destination access network B, is split back into the original small packets and delivered into access network B.

3.1.2 Forwarding the assembled packet

Between the two edge routers the assembled packet is forwarded along a fixed path through the backbone, in a manner similar to MPLS label switching (Figure 3.2).

3.2 Expected effect

3.2.1 Load on the edge routers

While Packet Assembly reduces the number of packets that backbone routers must process, the assembly and disassembly work may instead increase the load on the edge routers themselves; this is the load examined in the next chapter.
4 Experiment

4.1 Measuring CPU utilization during assembly

4.1.1 Purpose

The purposes of the experiment are:

1. to measure the CPU load that assembling packets places on an edge router;
2. to measure the CPU load of splitting the packets apart at the far edge router;
3. to compare these loads with those of ordinary packet forwarding.

4.1.2 Experimental environment
As Figure 4.1 shows, the experimental network consists of five PCs connected by Ethernet. Three of the PCs are equipped with two NICs each and act as routers, forming the path between the two access networks A and B; the remaining two PCs act as end hosts, one in each access network. Router A, the edge router between access network A and the backbone, is the machine whose load is measured.

The assembly and disassembly processing is emulated on router A with the standard IP Fragmentation and Defragmentation mechanisms (Figure 4.2). The sending host transmits IP packets of 1220 bytes, i.e. a 20-byte IP header plus 1200 bytes of data, over links whose normal MTU is 1500 bytes. Lowering the MTU of router A's outgoing interface below 1220 bytes then forces router A to fragment every packet; the MTU values in Table 4.1 divide each packet into 1 to 16 fragments.
Table 4.1: MTU settings and resulting number of fragments

  Fragments   MTU (bytes)
  1           1220
  2           620
  4           320
  8           170
  16          95

For example, with the MTU set to 320 bytes, the 1200 bytes of data in a 1220-byte packet are divided into four fragments of 300 bytes each; adding the 20-byte IP header to each fragment yields four 320-byte packets, which exactly fit the MTU. While router A fragments (or simply forwards) the packets, its CPU utilization is measured.
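The fragment counts in Table 4.1 follow directly from the MTU and the 20-byte IP header. The sketch below (our own illustration) reproduces the table; note that it uses the same simplified arithmetic as the table and ignores the real IPv4 rule that the payload of every non-final fragment must be a multiple of 8 bytes.

```python
import math

IP_HEADER = 20   # bytes
PACKET = 1220    # each test packet: 20 B IP header + 1200 B data

def fragment_count(mtu: int, packet: int = PACKET) -> int:
    """Number of fragments a `packet`-byte IP packet is split into at `mtu`.

    Simplified model: each fragment carries up to (mtu - 20) data bytes.
    Real IPv4 also rounds non-final payloads down to a multiple of 8.
    """
    data = packet - IP_HEADER
    return math.ceil(data / (mtu - IP_HEADER))

for mtu in (1220, 620, 320, 170, 95):
    print(f"MTU {mtu:4d} -> {fragment_count(mtu):2d} fragments")
```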
The following four quantities are measured:

1. CPU utilization of router A while fragmenting packets;
2. CPU utilization of router A while forwarding the same traffic without fragmentation;
3. CPU utilization of the defragmenting router while reassembling the fragments;
4. CPU utilization of that router while forwarding without defragmentation.

The procedure for each run is:

1. Configure the path from access network A to access network B through router A (Figure 4.1).
2. Set the MTU of router A's outgoing interface to the value under test.
3. Start the traffic generator on the host in access network A.
4. Start the measurement on router A.
5. Record router A's CPU utilization while the traffic flows.
6. Compute the CPU utilization statistics for that MTU value.

As the traffic generator the UNIX ping command is used; it sends ICMP ECHO REQUEST packets (Figure 4.3). With the -f (flood ping) option, ping emits the next request as soon as the previous ECHO REPLY arrives, or at least one hundred times per second. Each request is carried in a 1220-byte IP packet, and 10000 packets are sent per run.
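The packet size passed to ping must account for the IP and ICMP headers. As a small sanity check (our own illustration; the exact ping options used in the experiment are not recorded in the surviving text), the ICMP data size that yields a 1220-byte IP packet is:

```python
IP_HEADER = 20    # bytes, minimal IPv4 header
ICMP_HEADER = 8   # bytes, ICMP echo header

target_ip_packet = 1220
payload = target_ip_packet - IP_HEADER - ICMP_HEADER  # ICMP data bytes
print(payload)
# Hypothetical invocation matching the experiment's parameters:
print(f"ping -f -c 10000 -s {payload} <destination>")
```

That is, requesting 1192 data bytes from ping produces the 1220-byte IP packets used in the measurement.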
On router A, CPU utilization is sampled with the UNIX top command at 100 ms intervals.

4.1.3 Results

Figure 4.4 shows the CPU utilization of router A, obtained from top, while it fragments the test packets at each MTU setting.

Figure 4.4: CPU utilization during fragmentation
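The thesis samples CPU utilization with top every 100 ms. A roughly equivalent sketch on Linux (our own substitute, not the original measurement script) reads the aggregate cpu line of /proc/stat twice and computes the busy fraction from the counter deltas:

```python
import time

def read_cpu_times(stat_text: str) -> tuple[int, int]:
    """Parse the aggregate 'cpu' line of /proc/stat.

    Returns (idle_ticks, total_ticks); the idle figure includes iowait.
    """
    fields = stat_text.splitlines()[0].split()
    assert fields[0] == "cpu"
    ticks = [int(v) for v in fields[1:]]
    idle = ticks[3] + (ticks[4] if len(ticks) > 4 else 0)  # idle + iowait
    return idle, sum(ticks)

def cpu_utilization(interval: float = 0.1) -> float:
    """Busy fraction of the CPU over `interval` seconds (Linux only)."""
    with open("/proc/stat") as f:
        idle0, total0 = read_cpu_times(f.read())
    time.sleep(interval)
    with open("/proc/stat") as f:
        idle1, total1 = read_cpu_times(f.read())
    dt = total1 - total0
    return 1.0 - (idle1 - idle0) / dt if dt else 0.0
```

Calling cpu_utilization() in a loop while the ping flood runs gives a 100 ms-resolution utilization trace comparable to the top samples.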
Figure 4.5 shows the CPU utilization of router A while forwarding the same traffic without fragmentation.

Figure 4.5: CPU utilization during forwarding without fragmentation

Comparing Figure 4.5 with Figure 4.4 indicates how much CPU load the fragmentation work itself adds on router A.
Figure 4.6 shows the CPU utilization of the defragmenting router while reassembling the fragments.

Figure 4.6: CPU utilization during defragmentation

4.1.4 Discussion

Figure 4.7 shows, for comparison, the CPU utilization of the same router while forwarding without defragmentation.

Figure 4.7: CPU utilization during forwarding without defragmentation
5 Conclusion

In this thesis we emulated packet assembly at an edge router with IP fragmentation and defragmentation, measured the resulting CPU utilization, and discussed the practicality of Packet Assembly on that basis.
6 Future work

Remaining issues include evaluating packet assembly over other backbone technologies such as ATM.
References

[1] K. Claffy, G. Miller, and K. Thompson, "The nature of the beast: Recent traffic measurements from an Internet backbone," INET '98 Conference, 1998.
[2] T. Kanda and K. Shimamura, "Load reduction for the node processors in core networks by packet assembly," IEEE CQR Technical Committee, CQR International Workshop 2001, Tucson, U.S.A., 2001.
[3] P. Dykstra, "Extended Frame Sizes for Next Generation Ethernets," Alteon white paper, 1999.
[4] DARPA ITO (Information Technology Office), http://www.darpa.mil/ito/research/ngi/supernet.html (February 5, 2002).
[5] Internet Software Consortium, "Number of Internet Hosts," http://www.isc.org/ds/host-count-history.html (February 7, 2002).
[6] P. Dykstra, "Gigabit Ethernet Jumbo Frames," 1999, http://sd.wareonearth.com/~phil/jumbo.html (January 15, 2002).
[7] 2001.
[8] 2001.
[9] TCP/IP, 2nd ed., 1998.