They allow the server to configure what it is willing to accept
for both the negotiated configuration (takeover and window bits)
and the other zlib options (level, mem_level and strategy).
This can be used to reduce the memory and/or CPU footprint of
the compressed data, which comes with a cost in compression ratio.
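For illustration, a handler along these lines could enable
compression and constrain the negotiation; a rough sketch, and
the compress flag and deflate_opts map are assumptions about
how these options are exposed:

    init(Req, State) ->
        {cowboy_websocket, Req, State, #{
            %% Enable permessage-deflate negotiation.
            compress => true,
            %% Restrict what the server accepts and tune zlib to
            %% lower the memory/CPU cost of compression.
            deflate_opts => #{
                server_context_takeover => no_takeover,
                server_max_window_bits => 11,
                level => 1,
                mem_level => 4,
                strategy => default
            }
        }}.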
And fix this case when multiple ranges are requested.
Returning the atom auto instead of a callback informs Cowboy
that it needs to handle range requests automatically. This
changes the behavior so that the ProvideCallback function
is called and then Cowboy splits the data on its own and
sends the response, without any user involvement other
than defining the ranges_provided/2 callback.
This is a quick and dirty way to add range request support
to resources, and will be good enough for many cases,
including cowboy_static, as it also works when the normal
response body is a sendfile tuple.
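A minimal sketch of such a resource (the content type and
callback names are illustrative):

    content_types_provided(Req, State) ->
        {[{{<<"text">>, <<"plain">>, []}, get_text_plain}], Req, State}.

    ranges_provided(Req, State) ->
        %% auto tells Cowboy to split the provided body itself.
        {[{<<"bytes">>, auto}], Req, State}.

    get_text_plain(Req, State) ->
        %% The full body; Cowboy extracts the requested byte ranges.
        {<<"This body can be served whole or as byte ranges.">>, Req, State}.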
It is now possible to stream one or more sendfile tuples.
A simple example of what can now be done is building a tar
file on the fly, using the sendfile syscall to send each
file, or serving Range requests that contain more than one
range with the sendfile syscall.
When using cowboy_compress_h we unfortunately have to read
the file in order to send it. More options will be added
at a later time to make sure users don't read too much
into memory. This is a new feature, however, so existing
code is not affected.
Also rework cowboy_http's data sending to be flatter.
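As a sketch of the streaming side (file names are
hypothetical), sending two files back to back could look
like this:

    init(Req0, State) ->
        Req = cowboy_req:stream_reply(200, #{
            <<"content-type">> => <<"application/octet-stream">>
        }, Req0),
        %% Each sendfile tuple is sent with the sendfile syscall
        %% (unless a stream handler such as cowboy_compress_h has
        %% to read the file to transform the data).
        ok = cowboy_req:stream_body(
            {sendfile, 0, filelib:file_size("a.bin"), "a.bin"}, nofin, Req),
        ok = cowboy_req:stream_body(
            {sendfile, 0, filelib:file_size("b.bin"), "b.bin"}, fin, Req),
        {ok, Req, State}.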
This is currently undocumented but is planned to be documented
in the next version.
Fix cases where the q-value is 0 and where a wildcard
was sent in the accept-charset header.
Also don't send a charset in the content-type of the
response if the media type is not text.
Thanks to Philip Witty for help figuring this out.
Thanks to Philip Witty for help figuring this out.
Currently the compression threshold is set to 300 and hardcoded in the
codebase. There are cases where it makes sense to allow this to be
configured, for instance when you want to force all responses to be
compressed regardless of their size.
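For example, assuming the option ends up exposed in the
protocol options under the name compress_threshold (an
assumption), forcing compression of all responses would
look like this:

    Dispatch = cowboy_router:compile([{'_', [{"/", my_handler, []}]}]),
    {ok, _} = cowboy:start_clear(my_listener, [{port, 8080}], #{
        env => #{dispatch => Dispatch},
        stream_handlers => [cowboy_compress_h, cowboy_stream_h],
        %% 0 disables the size threshold entirely.
        compress_threshold => 0
    }).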
This command is currently not documented. It allows disabling
the reading of incoming data from the socket, and can be used
as a poor man's flow control.
This feature is currently experimental. It will become the
preferred way to use Websocket handlers once it is
documented.
A commands-based interface enables adding commands without
having to change the interface much. It mirrors the interface
of stream handlers or gen_statem. It will enable adding
commands that have been needed for some time but were not
implemented for fear of making the interface too complex.
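A rough sketch of the commands-based callbacks; the command
shapes shown here mirror the stream handler interface and are
assumptions, since the feature is still experimental:

    websocket_handle({text, Msg}, State) ->
        %% Return a list of commands instead of ok/reply tuples.
        {[{text, <<"You said: ", Msg/binary>>}], State};
    websocket_handle(_Frame, State) ->
        {[], State}.

    websocket_info(stop, State) ->
        {[{close, 1000, <<"bye">>}], State};
    websocket_info(_Info, State) ->
        {[], State}.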
If content-length is set in the response headers
we can skip chunked transfer-encoding.
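For example (a sketch), a handler that sets content-length
itself will have its streamed body sent as-is, without
chunked framing:

    init(Req0, State) ->
        Body = <<"hello world">>,
        Req = cowboy_req:stream_reply(200, #{
            %% With content-length present, chunked is not used.
            <<"content-length">> => integer_to_binary(byte_size(Body))
        }, Req0),
        ok = cowboy_req:stream_body(Body, fin, Req),
        {ok, Req, State}.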
We now only flush messages that are specific to cowboy_http.
Stream handlers should also flush their own specific messages
if necessary, although timeouts will be flushed regardless
of where they originate.
Also rename http_SUITE to old_http_SUITE to distinguish
new tests from old tests. Most old tests will need to be removed
or converted eventually, as they are legacy tests from Cowboy 1.0.
There are already failing tests and quite a lot of refactoring
to be done to make some things easier to test or to fix issues.
The miscount occurred because of a faulty iolist split function.
The bug should now be corrected; both a PropEr test and a
regression test have been added.
The option controls how much of the body we are willing to skip
for HTTP/1.1 connections when the user code did not consume
the body fully. It defaults to 1MB.
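Assuming the option is exposed in the protocol options as
max_skip_body_length (the name is an assumption), lowering
it could look like this:

    Dispatch = cowboy_router:compile([{'_', [{"/", my_handler, []}]}]),
    {ok, _} = cowboy:start_clear(my_listener, [{port, 8080}], #{
        env => #{dispatch => Dispatch},
        %% Skip at most 64KB of unread body before deciding to
        %% close the connection rather than keep it alive.
        max_skip_body_length => 64000
    }).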
Support for these was broken during the development
of Cowboy 2.0. It is now fixed and better handled
than it ever was.
In some cases we were sending a response faster than h2spec
was sending us the test case data, resulting in the request
being processed successfully instead of failing as expected.
Found more bugs! Unfortunately no fix for them in this commit.
Bad chunk sizes used to be accepted and could result in
a badly parsed body or a timeout. They are now properly
rejected.
Chunk extensions now have a hard limit of 129 characters.
I haven't heard of anyone using them and Cowboy does not
provide an interface for them, but we can always increase
the limit or make it configurable if that ever becomes
necessary (though I honestly doubt it).
Also a test from the old http suite could be removed. Yay!
This depends on changes in Cowlib that are only available on
master.
Also {switch_handler, Module, Opts}.
Allows switching to a different handler type. This is
particularly useful for processing most of the request
with cowboy_rest and then streaming the response body
using cowboy_loop.
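One possible shape of that pattern; a sketch where the
callback names, messages and the exact point where
stream_reply is called are all illustrative:

    content_types_provided(Req, State) ->
        {[{{<<"text">>, <<"plain">>, []}, to_text}], Req, State}.

    to_text(Req0, State) ->
        %% Start the streamed response, then hand off to cowboy_loop.
        Req = cowboy_req:stream_reply(200, Req0),
        self() ! {chunk, <<"hello ">>, nofin},
        self() ! {chunk, <<"world">>, fin},
        {{switch_handler, cowboy_loop}, Req, State}.

    info({chunk, Data, IsFin}, Req, State) ->
        ok = cowboy_req:stream_body(Data, IsFin, Req),
        case IsFin of
            fin -> {stop, Req, State};
            nofin -> {ok, Req, State}
        end.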
Sending data of size 0 with the fin flag set resulted in nothing
being sent to the client, while the response was still considered
finished for HTTP/1.1.
For both HTTP/1.1 and HTTP/2, the final chunk of body that is
sent automatically by Cowboy at the end of a response that the
user did not properly terminate was not passing through stream
handlers. This resulted in issues like compression being incorrect.
Some tests still fail under 20.1.3. The failures are due to recent
zlib changes and should be fixed in a future patch release.
Unfortunately there does not seem to be any 20.1 version that is
safe to use with Cowboy, although some will work better than others.
The 100 continue response will only be sent if the client
has not sent any of the body yet, if the connection is
HTTP/1.1 or above and if the user has not already sent it.
The 100 continue response is sent when the user calls
read_body, and it is cowboy_stream_h's responsibility
to send it. This means projects that don't use the
cowboy_stream_h stream handler will need to handle the
expect header themselves (but that's okay because they
may have different considerations than normal Cowboy).
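From the handler's point of view nothing special is needed;
a minimal echo sketch, assuming the default cowboy_stream_h
is in place:

    init(Req0, State) ->
        %% The first read_body call is what triggers the
        %% 100 continue response when the client expects one.
        %% (For simplicity this assumes the body fits in one read.)
        {ok, Body, Req} = cowboy_req:read_body(Req0),
        Req2 = cowboy_req:reply(200, #{}, Body, Req),
        {ok, Req2, State}.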
User code can now send as many 1xx responses as necessary.
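For example, with cowboy_req:inform/3 (103 Early Hints shown
purely as an illustration):

    init(Req0, State) ->
        %% Informational responses go out before the final reply.
        ok = cowboy_req:inform(103, #{
            <<"link">> => <<"</style.css>; rel=preload; as=style">>
        }, Req0),
        Req = cowboy_req:reply(200, #{}, <<"Hello world!">>, Req0),
        {ok, Req, State}.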
Also fix a test group to use h2 instead of HTTP/1.1.
It is possible in some cases to move on to the next request
without waiting, but that can be done as an optimization
later on if necessary.
Also corrects the lack of error response when HTTP/1.1 is used.
This will result in no data being sent. It's simply easier to
do this than to have to handle 0 size cases in user code.