2013 IEEE 21st Annual Symposium on High-Performance Interconnects (HOTI)
San Jose, CA, USA
Aug. 21-23, 2013
pp. 25-32
This paper studies the effectiveness of TCP pacing in a data center setting. TCP senders inject bursts of packets into the network at the beginning of each round-trip time. These bursts stress network queues, which may cause packet loss, reduced throughput, and increased latency. Such undesirable effects become more pronounced in data center environments, where traffic is bursty in nature and buffer sizes are small. TCP pacing is believed to reduce the burstiness of TCP traffic and to mitigate the impact of small router buffers. Unfortunately, the research literature has not reached agreement on the overall benefits of pacing. In this paper, we present a model of the effectiveness of pacing. Our model demonstrates that, for a given buffer size, as the number of concurrent flows is increased beyond a Point of Inflection (PoI), non-paced TCP outperforms paced TCP. We present a lower bound for the PoI and argue that increasing the number of concurrent flows beyond the PoI increases the inter-flow burstiness of paced packets and diminishes the effectiveness of pacing.
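The core idea behind pacing can be sketched in a few lines. The following is a minimal illustration, not code from the paper: the function names and the example RTT and window values are assumptions chosen for clarity. A non-paced sender emits its whole congestion window back-to-back at the start of an RTT, whereas a paced sender spaces the packets evenly across the RTT.

```python
# Illustrative sketch (not from the paper): under pacing, a window of
# cwnd_pkts packets is spread evenly over one round-trip time instead of
# being sent as a single burst at t = 0.

def pacing_interval(rtt_s: float, cwnd_pkts: int) -> float:
    """Inter-packet gap so that cwnd_pkts packets span one RTT."""
    return rtt_s / cwnd_pkts

def send_times(rtt_s: float, cwnd_pkts: int) -> list[float]:
    """Departure time of each packet within one RTT under pacing.

    A non-paced sender would instead emit all cwnd_pkts packets at t = 0.
    """
    gap = pacing_interval(rtt_s, cwnd_pkts)
    return [i * gap for i in range(cwnd_pkts)]

# Example: 10 packets over a 100 ms RTT leave one every 10 ms.
print(send_times(0.1, 10))
```

Intuitively, the paper's PoI result says this per-flow smoothing helps only up to a point: with many concurrent paced flows, their evenly spaced packets interleave into bursts at shared queues, eroding the advantage over non-paced TCP.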

M. Ghobadi and Y. Ganjali, "TCP Pacing in Data Center Networks," 2013 IEEE 21st Annual Symposium on High-Performance Interconnects (HOTI), San Jose, CA, USA, 2013, pp. 25-32.