Age | Commit message | Author |
|
|
Cowboy takes a few shortcuts to avoid wasting resources when
there is a protocol error. The RFC wants us to send a different
error depending on the state of the stream at the time of the
error, and for us to maintain the connection in cases where we
would have to spend valuable resources to decode headers. In
all these cases Cowboy will simply close the connection with
an appropriate error.
|
|
Bad chunk sizes used to be accepted and could result in
a badly parsed body or a timeout. They are now properly
rejected.
Chunk extensions now have a hard limit of 129 characters.
I haven't heard of anyone using them and Cowboy does not
provide an interface for them, but we can always increase
the limit or make it configurable if that ever becomes
necessary (though I honestly doubt it).
Also a test from the old http suite could be removed. Yay!
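For reference, a minimal sketch (in Erlang binary syntax) of what
a chunk carrying a chunk extension looks like on the wire, per the
RFC 7230 chunked syntax; the extension name and value are made up
for illustration:

    %% One 4-byte chunk with the extension "name=value", followed by
    %% the terminating zero-size chunk. Extensions longer than the
    %% new limit are now rejected during parsing.
    ChunkedBody = <<"4;name=value\r\nWiki\r\n0\r\n\r\n">>.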
|
|
It's worth noting that transfer-encoding now takes precedence
over content-length as recommended by the RFC, so that when
both headers are sent we only care about transfer-encoding
and explicitly remove content-length from the headers.
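As a minimal sketch of what this means for handler code, using the
usual lowercase binary header names accepted by cowboy_req:header/2:

    %% If the client sent both headers, content-length has been removed,
    %% so only transfer-encoding remains visible to the handler.
    <<"chunked">> = cowboy_req:header(<<"transfer-encoding">>, Req),
    undefined = cowboy_req:header(<<"content-length">>, Req).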
|
|
Also fixes the handling of the max_headers option for HTTP/1.1.
It is now a strict limit and not dependent on whether data is
already in the buffer.
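A minimal sketch of setting the option in the Cowboy 2.x protocol
options map; the listener name, port and dispatch rules are
placeholders:

    %% Requests carrying more than max_headers request headers are
    %% now rejected outright, regardless of how the data is buffered.
    {ok, _} = cowboy:start_clear(example_http, [{port, 8080}], #{
        env => #{dispatch => Dispatch},
        max_headers => 100
    }).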
|
|
Trace patterns are global to the node and apply to every process
that has the call trace flag set, so it's not necessary to set
them repeatedly with every request. Doing it once at startup also
ensures we can't have race conditions when the user wants to
change which trace patterns should be used (because requests are
concurrent and patterns end up overwriting each other repeatedly),
and makes changing trace patterns much more straightforward: the
user can just define the ones they want. The default function
traces everything.
I have also added the tracer_flags option to make the trace flags
configurable, excluding the tracer pid.
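A rough sketch of how this might be wired into the protocol options;
the cowboy_tracer_h and stream_handlers names are shown for
illustration, the flag list is only an example, and the tracer pid
itself cannot be overridden:

    ProtocolOpts = #{
        env => #{dispatch => Dispatch},
        stream_handlers => [cowboy_tracer_h, cowboy_stream_h],
        %% tracer_flags configures the Erlang trace flags used when
        %% tracing is enabled for a connection.
        tracer_flags => [send, 'receive', call, return_to,
                         procs, ports, monotonic_timestamp, set_on_spawn]
    }.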
|
|
This depends on changes in Cowlib that are only available on
master.
|
|
If we do then we end up killing the tracer after the stream
terminates and this is not what we want. This prevents us from
getting useful information from requests that are still ongoing
(when they run concurrently) and completely prevents us from
tracing Websocket handlers.
I'm not the biggest fan of having unsupervised processes, but if
this is properly documented there should be no problem.
|
|
This only happens if the switch takes too long, and should not
happen unless a spawned process refuses to shut down immediately.
|
|
It was mistakenly discarded.
|
|
This can happen normally when Cowboy is restarted, for example.
Instead of failing requests when that happens, we degrade
gracefully and do a little more work to provide the current
date header.
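A minimal sketch of the kind of fallback described, assuming the
cached date comes from cowboy_clock; the helper name and the
fallback path are illustrative, not the actual implementation:

    date_header() ->
        try
            %% Normal case: read the cached date kept by cowboy_clock.
            cowboy_clock:rfc1123()
        catch _:_ ->
            %% Cache unavailable (e.g. right after a restart):
            %% compute the date on the spot.
            list_to_binary(httpd_util:rfc1123_date())
        end.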
|
|
Also {switch_handler, Module, Opts}.
Allows switching to a different handler type. This is
particularly useful for processing most of the request
with cowboy_rest and then streaming the response body
using cowboy_loop.
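A hedged sketch of what that switch can look like: most of the
request is handled by cowboy_rest, then the provide callback hands
over to cowboy_loop to stream the body. The module name, callback
names and the exact shape of the switch return are illustrative,
based on the description above:

    -module(example_stream_h).
    -export([init/2, content_types_provided/2, provide/2, info/3]).

    init(Req, State) ->
        {cowboy_rest, Req, State}.

    content_types_provided(Req, State) ->
        {[{<<"application/octet-stream">>, provide}], Req, State}.

    provide(Req0, State) ->
        %% Content negotiation is done; switch to cowboy_loop and stream.
        Req = cowboy_req:stream_reply(200, Req0),
        self() ! send_more,
        {{switch_handler, cowboy_loop}, Req, State}.

    info(send_more, Req, State) ->
        ok = cowboy_req:stream_body(<<"some data">>, fin, Req),
        {stop, Req, State}.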
|
|
Unfortunately compression will be disabled on Erlang/OTP 20.1,
20.1.1 and 20.1.2. In addition I do not recommend 20.1.3 due to
issues inflating some specific sizes.
|
|
Sending data of size 0 with the fin flag set resulted in nothing
being sent to the client, even though the response was considered
finished, for HTTP/1.1.
For both HTTP/1.1 and HTTP/2, the final chunk of body that is
sent automatically by Cowboy at the end of a response that the
user did not properly terminate was not passing through stream
handlers. This resulted in issues like compression being incorrect.
Some tests still fail under 20.1.3. They are due to recent zlib
changes and should be fixed in a future patch release. Unfortunately
there does not seem to be any 20.1 version that is safe to use with
Cowboy, although some will work better than others.
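As a minimal illustration of the first fix, explicitly terminating
a streamed response with an empty final chunk now behaves the same
for HTTP/1.1 and HTTP/2:

    Req1 = cowboy_req:stream_reply(200, Req0),
    ok = cowboy_req:stream_body(<<"Hello">>, nofin, Req1),
    %% A zero-length body with the fin flag now correctly terminates
    %% the response instead of being silently dropped.
    ok = cowboy_req:stream_body(<<>>, fin, Req1).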
|
|
The 100 continue response will only be sent if the client
has not sent the body yet (at all), if the connection is
HTTP/1.1 or above and if the user has not sent it yet.
The 100 continue response is sent when the user calls
read_body and it is cowboy_stream_h's responsibility
to send it. This means projects that don't use the
cowboy_stream_h stream handler will need to handle the
expect header themselves (but that's okay because they
might have different considerations than normal Cowboy).
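A minimal sketch from the handler's point of view, using the default
cowboy_stream_h: the interim 100 response is triggered by the
read_body call, so nothing more is needed in user code:

    init(Req0, State) ->
        %% If the client sent "expect: 100-continue" and has not yet
        %% started sending the body, a 100 response goes out here.
        {ok, Body, Req1} = cowboy_req:read_body(Req0),
        Req = cowboy_req:reply(200, #{}, Body, Req1),
        {ok, Req, State}.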
|
|
User code can now send as many 1xx responses as necessary.
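Assuming this refers to the cowboy_req:inform interface present in
Cowboy 2.x, a sketch looks like this; the 103 status and link header
are only an example:

    %% Interim responses do not modify the Req, so the call can be
    %% repeated as often as needed before the final reply.
    _ = cowboy_req:inform(103,
        #{<<"link">> => <<"</style.css>; rel=preload; as=style">>}, Req0),
    Req = cowboy_req:reply(200, #{}, <<"Hello!">>, Req0).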
|
|
Another experimental stream handler. It enables tracing for
the connection process and any child processes based on
matching against the request. It can be used to do ad hoc
tracing triggered by a specific header, path, method or the like.
It is meant to be used both for tests and production. Some
configuration scenarios are NOT safe for production, beware.
It's important to understand that, at this time, tracing
is enabled on the scale of the entire connection including
any future request processes. Keep this in mind when trying
to use it in production. The only way to stop tracing is
by having the callback function exit (by calling exit/1
explicitly). This can be done after a certain number of
events for example. Tracing can generate a lot of events,
so it's a good idea to stop after a small number of events
(between 1000 and 10000 should be good) and to avoid tracing
the whole world.
Documentation will follow at a later time.
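A rough sketch of a callback that stops tracing after a fixed number
of events by exiting, as described above; the callback signature and
the way it is installed are assumptions for illustration:

    trace_callback(Event, EventCount) when EventCount >= 1000 ->
        %% Enough events collected: exiting stops the tracer.
        io:format("trace ~p: ~p~n", [EventCount, Event]),
        exit(normal);
    trace_callback(Event, EventCount) ->
        io:format("trace ~p: ~p~n", [EventCount, Event]),
        EventCount + 1.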
|
|
To obtain the local socket ip/port and the client TLS
certificate, respectively.
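This presumably refers to cowboy_req:sock/1 and cowboy_req:cert/1,
both present in Cowboy 2.x:

    %% Local end of the socket as seen by Cowboy.
    {LocalIP, LocalPort} = cowboy_req:sock(Req),
    %% Client certificate as a DER-encoded binary, or undefined when
    %% the client did not send one (or the connection is not TLS).
    Cert = cowboy_req:cert(Req).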
|
|
When the user code was sending a response fully without reading
the request body, the connection could get closed when receiving
DATA frames for that body. We now ask the client to stop sending
data via a NO_ERROR RST_STREAM, and linger any stream that has
been reset so that we can skip any pending frames from that
stream.
This fixes a number of intermittent failures in req_SUITE, which
now passes reliably.
In addition, a small number of rfc7540_SUITE test cases that were
themselves incorrect have been fixed.
|
|
I broke this when fixing stream handlers earlier.
|
|
When we have to send a response before terminating a stream,
we call info. The state returned by this info call was
discarded when we called terminate after that. This commit
fixes it.
There are no tests for this; however, the new metrics test
in the next commit requires the correct behavior, so it is
ultimately covered.
|
|
It collects metrics and passes them to a configurable callback
once the stream terminates. It will be documented in a future
release. More tests incoming.
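A minimal sketch of enabling it, assuming the handler is added in
front of the default cowboy_stream_h and that the callback is passed
through a metrics_callback protocol option (both names are
assumptions based on the description above):

    ProtocolOpts = #{
        env => #{dispatch => Dispatch},
        %% Called once per stream with the collected metrics.
        metrics_callback => fun(Metrics) ->
            error_logger:info_report(Metrics)
        end,
        stream_handlers => [cowboy_metrics_h, cowboy_stream_h]
    }.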
|
|
It is possible in some cases to move on to the next request
without waiting, but that can be done as an optimization
later on if necessary.
|
|
I have amended a lot of changes from the original commit
to make it behave as expected, including returning a 400
error. LH
|
|
Also corrects the lack of error response when HTTP/1.1 is used.
|
|