The idea behind pipelining in HTTP 1.1 is to allow clients to submit multiple idempotent HTTP requests over a single connection to a host, let the server process them in parallel, and have the server respond in the order in which the requests were made.
In theory, pipelining has some benefits:
- Increased parallelism on the server – this seems obvious since the client is able to dispatch many requests over a connection.
- Reduced time an HTTP request spends waiting to be made in the browser – instead of queuing requests up behind one another, the browser can dispatch them as soon as it knows that a resource needs to be fetched.
- Improved connection reuse – as a consequence of increased parallelism and reduced queuing time, it is arguable that pipelining would reduce the number of connections the browser needs to open.
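The mechanics described above can be sketched in a few lines of Python. The host and resource paths here are made up for illustration; a real pipelining client would send this batch in a single write on an open connection and then read the responses strictly in order:

```python
# Sketch of HTTP/1.1 pipelining: several idempotent requests are written
# back-to-back on one connection before any response is read, and the
# server must answer them in the order they were sent.

def build_pipelined_requests(host, paths):
    """Concatenate GET requests so they can be sent in one write."""
    requests = []
    for path in paths:
        requests.append(
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: keep-alive\r\n"
            "\r\n"
        )
    return "".join(requests).encode("ascii")

# Three requests dispatched together instead of one at a time.
batch = build_pipelined_requests("example.com", ["/", "/style.css", "/app.js"])
```

Without pipelining, the client would send the first request, wait for its complete response, and only then send the next one; with pipelining, all three go out immediately and the responses arrive back-to-back in the same order.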
These may result in reduced latencies. In practice, however, data showing consistent benefits from pipelining is hard to come by. The first set of charts below shows the total time requests spent queued on each connection. The red circles show requests made without pipelining, and the green circles show requests made with pipelining enabled. The size of each circle is proportional to the number of resources requested over each connection. The results are from 100 runs made against a pool of private WebPagetest instances on Windows XP with Firefox over a DSL profile.
These charts show no pattern indicating that pipelining reduces queuing time.
The second set of charts below shows the number of connections the browser used in each run. The red lines show runs without pipelining, and the green lines show runs made with pipelining.
Again, there is no consistent pattern in these charts to show that pipelining improves connection reuse.
The point here is that although pipelining is simple in theory, realizing tangible benefits from it may be hard.