This commit reworks the logging that Cowboy does via
error_logger to make the module that will do the actual
logging configurable.
The logger module's interface must be the same as that of
logger and lager: a separate function per log level, with the
same log levels they support.
The default behavior remains to call error_logger,
although some messages were downgraded to warnings
instead of errors. Since error_logger only supports
three different log levels, some messages may get
downgraded/upgraded depending on what the original
log level was to make them compatible with error_logger.
The {log, Level, Format, Args} command was also
added to stream handlers. Stream handlers should
use this command to log messages, because it makes
it possible to write a stream handler that intercepts
some of those messages to extract information from
them or block them as necessary.
The logger option only applies to Cowboy itself,
not to the messages Ranch logs, so more work remains
to be done in that area.
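A rough sketch of what a custom logger module could look like
under these assumptions (the module name my_logger and the exact
set of levels are illustrative; the description above only
requires one function per supported log level, taking a format
string and arguments as logger and lager do):

    -module(my_logger).
    -export([error/2, warning/2, info/2, debug/2]).

    %% One function per log level, mirroring the logger/lager interface.
    error(Format, Args)   -> io:format("[error] "   ++ Format ++ "~n", Args).
    warning(Format, Args) -> io:format("[warning] " ++ Format ++ "~n", Args).
    info(Format, Args)    -> io:format("[info] "    ++ Format ++ "~n", Args).
    debug(Format, Args)   -> io:format("[debug] "   ++ Format ++ "~n", Args).

The module would then be selected through the logger option
mentioned above, for example #{logger => my_logger} in the
protocol options.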
|
|
If content-length is set in the response headers,
we can skip chunked transfer-encoding.
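For example, a handler streaming a response could now avoid
chunked transfer-encoding by setting the length up front. A
minimal sketch, assuming the usual cowboy_req streaming API
(handler code illustrative):

    init(Req0, State) ->
        Req = cowboy_req:stream_reply(200, #{
            <<"content-type">> => <<"text/plain">>,
            %% With content-length present, the body can be sent
            %% as-is rather than chunked.
            <<"content-length">> => <<"12">>
        }, Req0),
        cowboy_req:stream_body(<<"Hello world!">>, fin, Req),
        {ok, Req, State}.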
|
|
This should reduce the amount of noise in RabbitMQ.
|
|
We now flush messages that are specific to cowboy_http only.
Stream handlers should also flush their own specific messages
if necessary, although timeouts will be flushed regardless
of where they originate from.
Also renames the http_SUITE to old_http_SUITE to distinguish
new tests from old tests. Most old tests need to be removed
or converted eventually as they're legacy tests from Cowboy 1.0.
|
|
Currently Cowboy assumes that idle_timeout and request_timeout
are numbers and always starts the corresponding timers. The same
applies to preface_timeout for HTTP/2. This commit adds a clause
to handle infinity as a timeout value, making it possible to not
start these timers.
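A configuration sketch of what this enables (listener name, port
and dispatch rules are illustrative):

    cowboy:start_clear(example_listener, [{port, 8080}], #{
        env => #{dispatch => Dispatch},
        %% infinity now means the corresponding timer is never started.
        idle_timeout => infinity,
        request_timeout => infinity
    }).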
|
|
Cases where a request body was involved could sometimes
fail depending on timing. Also fix all of the old
http_SUITE tests.
|
|
The option controls how much of the request body we accept to
skip on HTTP/1.1 connections when the user code did not fully
consume the body. It defaults to 1MB.
|
|
Also make sure the header is sent for all types of early_error
that result in the closing of the connection.
|
|
It's safer than allowing it with the wrong behavior.
|
|
Bad chunk sizes used to be accepted and could result in
a badly parsed body or a timeout. They are now properly
rejected.
Chunk extensions now have a hard limit of 129 characters.
I haven't heard of anyone using them and Cowboy does not
provide an interface for them, but we can always increase
the limit or make it configurable if it ever becomes
necessary (though I honestly doubt it will).
A test from the old http suite could also be removed. Yay!
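For reference, a chunk extension is the optional part after the
chunk size in a chunked body; a raw (illustrative) HTTP/1.1 body
containing one looks like:

    <<"5;foo=bar\r\n"   %% chunk of size 5 with extension foo=bar
      "Hello\r\n"
      "0\r\n\r\n">>.    %% last chunk, no trailers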
|
|
It's worth noting that transfer-encoding now takes precedence
over content-length as recommended by the RFC, so that when
both headers are sent we only care about transfer-encoding
and explicitly remove content-length from the headers.
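The observable effect, sketched against the request object a
handler receives (Req is illustrative): when both headers were
sent, the body is treated as chunked and content-length is no
longer present.

    undefined = cowboy_req:header(<<"content-length">>, Req).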
|
|
Also fixes the handling of the max_headers option for HTTP/1.1.
It is now a strict limit and not dependent on whether data is
already in the buffer.
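For example (value illustrative), the limit configured in the
protocol options now applies strictly, regardless of how much
data was already buffered:

    #{max_headers => 50}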
|
|
This depends on changes in Cowlib that are only available on
master.
|
|
This only happens if the switch takes too long, and should not
happen unless a spawned process refuses to shut down immediately.
|
|
Sending data of size 0 with the fin flag set resulted in nothing
being sent to the client and still considering the response to
be finished for HTTP/1.1.
For both HTTP/1.1 and HTTP/2, the final chunk of body that is
sent automatically by Cowboy at the end of a response that the
user did not properly terminate was not passing through stream
handlers. This resulted in issues like compression being incorrect.
Some tests still fail under 20.1.3. The failures are due to
recent zlib changes and should be fixed in a future patch
release. Unfortunately there does not seem to be any 20.1
version that is safe to use with Cowboy, although some will
work better than others.
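A sketch of the case that was broken, using the streaming API
from a handler (handler code illustrative):

    init(Req0, State) ->
        Req = cowboy_req:stream_reply(200, Req0),
        cowboy_req:stream_body(<<"Hello">>, nofin, Req),
        %% Sending an empty chunk with fin now actually terminates
        %% the response; previously nothing was sent even though
        %% the response was considered finished.
        cowboy_req:stream_body(<<>>, fin, Req),
        {ok, Req, State}.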
|
|
To obtain the local socket ip/port and the client TLS
certificate, respectively.
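The functions are not named in this message; from the
description they are presumably cowboy_req:sock/1 and
cowboy_req:cert/1 (an assumption here), used roughly as follows:

    {IP, Port} = cowboy_req:sock(Req),   %% local socket address and port
    Cert = cowboy_req:cert(Req).         %% client certificate, or undefined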
|
|
I broke this when fixing stream handlers earlier.
|
|
When we have to send a response before terminating a stream,
we call info. The state returned by this info call was
discarded when we called terminate after that. This commit
fixes it.
There are no tests for this; however, the new metrics test
in the next commit requires the correct behavior, so this
is ultimately covered.
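A stream handler sketch of the state flow this fixes (module and
record names are illustrative; each callback delegates to the
rest of the chain through cowboy_stream):

    -module(example_stream_h).
    -behavior(cowboy_stream).
    -export([init/3, data/4, info/3, terminate/3, early_error/5]).

    -record(state, {next, infos = 0}).

    init(StreamID, Req, Opts) ->
        {Commands, Next} = cowboy_stream:init(StreamID, Req, Opts),
        {Commands, #state{next=Next}}.

    data(StreamID, IsFin, Data, State=#state{next=Next0}) ->
        {Commands, Next} = cowboy_stream:data(StreamID, IsFin, Data, Next0),
        {Commands, State#state{next=Next}}.

    info(StreamID, Info, State=#state{next=Next0, infos=N}) ->
        {Commands, Next} = cowboy_stream:info(StreamID, Info, Next0),
        %% The state returned here must be the one terminate/3 receives,
        %% including when a response is sent just before termination.
        {Commands, State#state{next=Next, infos=N + 1}}.

    terminate(StreamID, Reason, #state{next=Next, infos=N}) ->
        io:format("stream ~p: ~p info calls~n", [StreamID, N]),
        cowboy_stream:terminate(StreamID, Reason, Next).

    early_error(StreamID, Reason, PartialReq, Resp, Opts) ->
        cowboy_stream:early_error(StreamID, Reason, PartialReq, Resp, Opts).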
|
|
It is possible in some cases to move on to the next request
without waiting, but that can be done as an optimization
later on if necessary.
|
|
Also corrects the lack of error response when HTTP/1.1 is used.
|
|
The documentation was correct, the code was not.
This should make it easier to implement new protocols. Note that
for HTTP/2 we will need to add some form of counting later on to
check for malformed requests, but we can do something simpler:
just subtract received data from the expected length and then
check that it is 0 when IsFin=fin.
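A sketch of the counting idea (illustrative helper, not the
actual Cowboy code): subtract the size of each received data
chunk from the expected length and require that it reaches
exactly 0 when IsFin=fin.

    body_length_check(Expected, Data, IsFin) ->
        Rest = Expected - byte_size(Data),
        case {IsFin, Rest} of
            {fin, 0}               -> done;
            {nofin, N} when N >= 0 -> {more, N};
            {_, _}                 -> {error, malformed}
        end.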
|
|
When the request process exits with a {request_error, Reason, Human}
exit reason, Cowboy will return a 400 status code instead of 500.
Cowboy may also return a more specific status code depending on
the error. Currently it may also return 408 or 413.
This should prove to be more solid than looking inside the stack
trace.
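A handler sketch of the convention (validate_request/1 and the
error details are illustrative):

    init(Req, State) ->
        case validate_request(Req) of
            ok ->
                {ok, Req, State};
            {error, Reason} ->
                %% Cowboy replies with 400 (or a more specific code
                %% such as 408 or 413) rather than 500.
                exit({request_error, Reason, 'Invalid request.'})
        end.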
|
|
This should work very similarly to normal supervisors,
in particular during the shutdown sequence when the
connection process goes down or switches to Websocket.
Processes that need to enforce the shutdown timeout
will be required to trap exits, just like in a supervisor.
In vanilla Cowboy, this only matters at connection
shutdown, as Cowboy will otherwise wait for the request
process to be down before stopping the stream.
Tests are currently missing.
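A sketch of such a process (function names are illustrative):
trapping exits lets it receive the shutdown signal as a message
and clean up within the allowed time instead of being killed
outright.

    start_worker() ->
        spawn_link(fun() ->
            process_flag(trap_exit, true),
            worker_loop()
        end).

    worker_loop() ->
        receive
            {'EXIT', _Parent, shutdown} ->
                cleanup();   %% must complete before the shutdown timeout
            Msg ->
                handle(Msg),
                worker_loop()
        end.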
|
|
Introduces the new stream_handler_SUITE test suite. More cases
will be added later on.
|
|
This is a more or less temporary solution to an existing problem.
In the future we will need to enforce a shutdown timeout for
these processes.
|
|
This fixes the connection being dropped because of request_timeout
despite there being some active streams.
|