author | Loïc Hoguin <[email protected]> | 2025-02-03 15:36:16 +0100 |
committer | Loïc Hoguin <[email protected]> | 2025-02-05 14:29:58 +0100 |
commit | 49be0f57cf5ce66178dc24b9c08c835888d1ce0e (patch) | |
tree | a88135c26f7ea8e48b78a93ce9239342e726fba3 /doc/src/guide | |
parent | fcab905ecac3adc77348880c9702e53d65681344 (diff) | |
download | cowboy-49be0f57cf5ce66178dc24b9c08c835888d1ce0e.tar.gz cowboy-49be0f57cf5ce66178dc24b9c08c835888d1ce0e.tar.bz2 cowboy-49be0f57cf5ce66178dc24b9c08c835888d1ce0e.zip |
Implement dynamic socket buffer sizes
Cowboy will set the socket's buffer size dynamically to
better fit the current workload. When the incoming data
is small, a low buffer size reduces the memory footprint
and improves responsiveness and therefore performance.
When the incoming data is large, such as large HTTP
request bodies, a larger buffer size helps us avoid
doing too many binary appends and related allocations.
Setting a large buffer size for all use cases is
sub-optimal because allocating more than needed
necessarily results in a performance hit (not just
increased memory usage).
By default Cowboy starts with a buffer size of 8192 bytes.
It then doubles or halves the buffer size depending on
the size of the data it receives from the socket. It
stops decreasing at 8192 and increasing at 131072 by
default.
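As a rough sketch of what that means in practice (the function
names below are hypothetical, not Cowboy's actual internals), the
resize step doubles or halves the socket's buffer option within
those bounds:

    %% Hypothetical sketch: grow or shrink the socket's 'buffer'
    %% option within the default 8192..131072 bounds. For TLS
    %% sockets ssl:setopts/2 would be used instead of inet:setopts/2.
    -define(BUFFER_MIN, 8192).
    -define(BUFFER_MAX, 131072).

    grow_buffer(Socket, Size0) ->
        Size = min(Size0 * 2, ?BUFFER_MAX),
        ok = inet:setopts(Socket, [{buffer, Size}]),
        Size.

    shrink_buffer(Socket, Size0) ->
        Size = max(Size0 div 2, ?BUFFER_MIN),
        ok = inet:setopts(Socket, [{buffer, Size}]),
        Size.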
To keep track of the size of the incoming data, Cowboy
maintains a moving average. This allows Cowboy to avoid
changing the buffer size too often while still reacting
quickly when necessary. Cowboy will increase the buffer size
when the moving average is above 90% of the current
buffer size, and decrease when the moving average is
below 40% of the current buffer size.
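A minimal sketch of that decision, continuing the hypothetical
helpers above and assuming an exponential moving average (the
exact weighting Cowboy uses is not specified here):

    %% Hypothetical sketch: track a moving average of received data
    %% sizes and resize the buffer when it crosses the thresholds.
    update_avg(DataSize, Avg) ->
        (Avg * 7 + DataSize) / 8.

    maybe_resize(Socket, BufferSize, Avg) when Avg > BufferSize * 0.9 ->
        grow_buffer(Socket, BufferSize);
    maybe_resize(Socket, BufferSize, Avg) when Avg < BufferSize * 0.4 ->
        shrink_buffer(Socket, BufferSize);
    maybe_resize(_Socket, BufferSize, _Avg) ->
        BufferSize.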
The current buffer size and moving average are
propagated when switching protocols. The dynamic buffer
is implemented in HTTP/1, HTTP/2 and HTTP/1 Websocket.
HTTP/2 Websocket has it disabled because it doesn't
interact directly with the socket; in that case it
is HTTP/2 that has a dynamic buffer.
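Conceptually, switching protocols just means the next protocol
starts from the inherited values rather than the 8192-byte default.
A hedged sketch, with field and function names made up for
illustration only:

    %% Hypothetical sketch: pass the current dynamic buffer state on
    %% to the protocol we are switching to (e.g. HTTP/1 -> Websocket).
    switch_protocol(NextProtocol, #{buffer_size := Size, buffer_avg := Avg}) ->
        NextProtocol:init(#{dynamic_buffer_size => Size,
                            dynamic_buffer_moving_avg => Avg}).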
The dynamic buffer provides a very large performance improvement
in many scenarios, at minimal cost for others. Because it largely
depends on the underlying protocol, the improvements are not all
equal. TLS and compression also impact the results.
The improvements when reading a large request body, with the
requests repeated in a fast loop, are:
* HTTP: 6x to 20x faster
* HTTPS: 2x to 6x faster
* H2: 4x to 5x faster
* H2C: 20x to 40x faster
I am not sure why H2C's performance was so bad, especially compared
to H2, when using default buffer sizes. Dynamic buffers make H2C a
lot more viable with default settings.
The performance impact on "hello world" type requests is minimal;
it ranges roughly from -5% to +5%.
Websocket improvements vary again depending on the protocol, but
also depending on whether compression is enabled:
* HTTP echo: roughly 2x faster
* HTTP send: roughly 4x faster
* H2C echo: roughly 2x faster
* H2C send: 3x to 4x faster
In the echo test we reply back, and Gun doesn't have the dynamic
buffer optimisation, so that probably explains the 2x difference.
With compression, however, there isn't much improvement. The results
are roughly within -10% to +10% of each other. Zlib compression
seems to be a bottleneck, or at least to modify the performance
profile to such an extent that the size of the buffer does not
matter. This happens with randomly generated binary data as well,
so it is probably not caused by the test data.
Diffstat (limited to 'doc/src/guide')
0 files changed, 0 insertions, 0 deletions