

How camest thou hither, tell me, and wherefore?

Lest the bard fray more, the topic is of PGM haste in the homogeneous environment, and the unfortunate absence of said haste. We take performance readings of PGM across multiple hosts and present a visual heat map of latency to provide insight into the actual performance.

Testing entails transmitting a message onto a single LAN segment; the message is received by a listening application which immediately re-broadcasts it, and when the message arrives back at the source the round-trip-time is calculated using a single high-precision clock source.

[Figure: Performance testing configuration with a sender maintaining a reference clock to calculate message round-trip-time (RTT).]
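As a rough illustration of this scheme only, the sketch below stands in plain unicast UDP sockets for the OpenPGM socket API actually used in the tests: the probe side stamps each message from one monotonic clock, the reflector echoes whatever it receives back to its source, and the RTT is the difference between the stamp carried in the echo and the time of its return. The port number, message count, and program names are illustrative assumptions, not details of the real test harness.

    /* rtt.c - minimal probe/reflector sketch of the RTT measurement.
     * Plain UDP stands in for the OpenPGM BSD socket API; all names,
     * ports and counts are illustrative. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    #define PORT 7500                        /* illustrative port */

    static int64_t now_ns(void)              /* single high-precision clock source */
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    static void reflect(int fd)              /* application-layer packet reflection */
    {
        char buf[1500];
        struct sockaddr_in peer;
        for (;;) {
            socklen_t len = sizeof peer;
            ssize_t n = recvfrom(fd, buf, sizeof buf, 0,
                                 (struct sockaddr *)&peer, &len);
            if (n > 0)                       /* immediately send back to the source */
                sendto(fd, buf, (size_t)n, 0, (struct sockaddr *)&peer, len);
        }
    }

    static void probe(int fd, const struct sockaddr_in *to)
    {
        for (int i = 0; i < 10000; i++) {
            int64_t t0 = now_ns(), echoed;
            sendto(fd, &t0, sizeof t0, 0, (const struct sockaddr *)to, sizeof *to);
            if (recv(fd, &echoed, sizeof echoed, 0) == (ssize_t)sizeof echoed)
                printf("%lld\n", (long long)((now_ns() - echoed) / 1000)); /* RTT in μs */
        }
    }

    int main(int argc, char *argv[])
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(PORT);

        if (argc > 1 && 0 == strcmp(argv[1], "reflect")) {
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            bind(fd, (struct sockaddr *)&addr, sizeof addr);
            reflect(fd);
        } else if (argc > 2 && 0 == strcmp(argv[1], "probe")) {
            inet_pton(AF_INET, argv[2], &addr.sin_addr);
            probe(fd, &addr);
        }
        close(fd);
        return 0;
    }

Run "rtt reflect" on the listening host and "rtt probe <reflector address>" on the sender; the printed RTT samples in microseconds are the raw material for heat maps such as those that follow.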

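The heat maps below are essentially per-interval latency histograms. Purely as an illustration of that reduction step, and with a bucket width and sample values invented for the example rather than taken from the published figures, one slice of RTT samples can be binned like so:

    /* Bin one slice of RTT samples (in μs) into 100μs latency buckets; the
     * count in each bucket becomes the colour intensity of one heat-map
     * column.  Bucket width, range and sample data are illustrative. */
    #include <stddef.h>
    #include <stdio.h>

    #define BUCKETS 20                       /* 0μs .. 2,000μs in 100μs steps */

    static void bin_slice(const unsigned *rtt_us, size_t n, unsigned counts[BUCKETS])
    {
        for (size_t i = 0; i < n; i++) {
            size_t b = rtt_us[i] / 100;      /* 100μs per bucket */
            if (b >= BUCKETS)
                b = BUCKETS - 1;             /* clamp outliers into the last bucket */
            counts[b]++;
        }
    }

    int main(void)
    {
        /* Invented samples, grouped around 200μs like the baseline reading. */
        unsigned samples[] = { 195, 201, 210, 198, 650, 205, 199, 1020, 202 };
        unsigned counts[BUCKETS] = { 0 };

        bin_slice(samples, sizeof samples / sizeof *samples, counts);
        for (int b = 0; b < BUCKETS; b++)
            if (counts[b])
                printf("%4d-%4dus : %u\n", b * 100, (b + 1) * 100 - 1, counts[b]);
        return 0;
    }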
The baseline reading is taken from Linux to Linux; the reference hardware is an IBM BladeCentre HS20 with Broadcom BCM5704S gigabit Ethernet adapters, and the networking infrastructure is provided by a BNT fibre gigabit Ethernet switch.

[Figure: Latency in microseconds from Linux to Linux at 10,000 packets-per-second one-way.]

The numbers themselves are of minor consequence; for explanation, at 10,000 packets-per-second (pps) there is a marked grouping at 200μs round-trip-time (RTT) latency. The marketing version would be 20,000pps, as we consider 10,000pps being transmitted and 10,000pps being received simultaneously, with a one-way latency of 100μs. Also note that the packet reflection is implemented at the application layer, much like any end-developer written software using the OpenPGM BSD socket API; compare this with alternative testing configurations that may operate at the network layer, bypass the effective full latency of the networking stack, and yield disingenuous figures.

Onward and upward we must go: with an IFG (inter-frame gap) of 96ns the line capacity of a gigabit network is 81,274pps, leading to a test potential limit of 40,000pps one-way with a little safety room above (the arithmetic is sketched after the results below).

[Figure: Latency in microseconds from Linux to Linux at 20,000 packets-per-second one-way.]

At 20,000pps we start to see a spread of outliers, but notice the grouping remains at 200μs.

[Figure: Latency in microseconds from Linux to Linux at 30,000 packets-per-second one-way.]

At 30,000pps outlier latency jumps to 1ms.

[Figure: Latency in microseconds from Linux to Linux at 40,000 packets-per-second one-way.]

At 40,000pps you start to see everything break down, with the majority of packets falling between 200μs and 600μs.
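Before moving on, here is the line-capacity arithmetic referenced above. The 1,500 byte frame size is my assumption, chosen because it reproduces the 81,274pps figure; the post itself only quotes the 96ns inter-frame gap. Halving the capacity accounts for every packet being reflected back onto the same segment.

    /* Back-of-envelope gigabit line capacity and one-way test budget. */
    #include <stdio.h>

    int main(void)
    {
        const double line_rate_bps  = 1e9;                  /* gigabit Ethernet      */
        const double payload_bytes  = 1500;                 /* assumed full-size MTU */
        const double overhead_bytes = 7 + 1 + 14 + 4 + 12;  /* preamble, SFD, header,
                                                               FCS, 12-byte (96ns) IFG */
        const double frame_bits = (payload_bytes + overhead_bytes) * 8;

        const double capacity_pps = line_rate_bps / frame_bits;
        printf("line capacity : %.0f pps\n", capacity_pps);      /* ~81,274pps */

        /* Reflection roughly doubles the traffic on the segment, so the
         * one-way test budget is about half the line capacity. */
        printf("one-way limit : %.0f pps\n", capacity_pps / 2);  /* ~40,637pps */
        return 0;
    }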

Above 40,000pps the network is saturated and packet loss starts to occur; the loss triggers PGM reliability traffic, which consumes more bandwidth than is available for full-speed operation.

Windows registry settings

To achieve these high-performance Windows results the following changes were applied.
