Age | Commit message | Author |
|
This was initially an internal function; it has been made public
due to popular demand, as it can sometimes be needed.
|
|
Otherwise acceptors will not be upgraded properly until after the
next request comes in.
Thanks to DeadZen for pointing it out.
|
|
|
|
|
|
|
|
At the same time renaming cowboy_http:content_type_params/3 to
cowboy_http:params/2 (with a default Acc of []), as this code isn't
only useful for content types.
|
|
|
|
If requests go through a proxy, they carry the original full URI in the
request line, e.g.: GET http://proxy.server.uri/some/query/string HTTP/1.1 ...
That was problematic -- cowboy_http_protocol:request didn't know what to
do with the result of decode_packet applied to this, which would be something
like:
``` erlang
{http_request,'GET',{absoluteURI,http,<<"proxy.server.uri">>,
undefined,<<"/some/query/string">>},{1,1}}
```
So, I just ignore the host, grab the path and pass it into
``` erlang
cowboy_http_protocol:request({http_request, Method, {abs_path, Path},
Version}, State)
```
Seems to do the trick without much effort.
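For illustration, a clause along these lines would do it; the clause head and
variable names are an assumption here, not the actual patch:
``` erlang
%% Drop the host from the absoluteURI form produced by
%% erlang:decode_packet/3 and re-dispatch on the abs_path form.
request({http_request, Method, {absoluteURI, _Scheme, _Host, _Port, Path},
		Version}, State) ->
	request({http_request, Method, {abs_path, Path}, Version}, State);
```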
|
|
Initially recommended by Magnus Klaar, the trick is to add a catch
instruction before the erlang:hibernate/3 call so that Dialyzer
thinks it will return, followed by the expected return value
('ok' for HTTP, 'closed' for websockets).
This should be good enough until a real solution is found.
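A minimal sketch of the trick, with illustrative function names (the real code
lives in the protocol modules):
``` erlang
%% Dialyzer doesn't know that erlang:hibernate/3 never returns, so the
%% catch makes it believe the call can complete, and the explicit
%% return value ('ok' here, 'closed' for websockets) satisfies the spec.
handler_before_loop(HandlerState) ->
	catch erlang:hibernate(?MODULE, handler_loop, [HandlerState]),
	ok.
```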
|
|
|
|
|
|
queue:len/1 is O(len(Q))
queue:out/1 is O(1) amortized, O(len(Q)) worst case
Replace with a pattern match
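For illustration only, and assuming the queue module's internal two-list
representation (which is what a raw pattern match has to rely on), an
emptiness check can look like this:
``` erlang
%% queue:new() returns {[], []}; matching on that detects an empty
%% queue in constant time, without calling queue:len/1 or queue:out/1.
case Queue of
	{[], []} -> empty;
	_NonEmpty -> not_empty
end.
```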
|
|
We've been having many recurring issues, some of which were fixed,
only to have other things broken again. Can't rely on a service
that breaks all the time.
|
|
This requires python2 to be the default Python at /usr/bin/python.
|
|
The previous solution retrieved the most recently added connection
and wasn't a real queue, so this change should improve the
overall latency under load.
|
|
|
|
Goes from 36s to 24s on my laptop.
|
|
This allows any application to upgrade the protocol options without
having to restart the listener. This is most useful to update the
dispatch list of HTTP servers, for example.
The upgrade is done at the acceptor level, meaning only new connections
receive the new protocol options.
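A hedged usage sketch, assuming the call is exported as
cowboy:set_protocol_options/2 and that the listener below was started under
that name:
``` erlang
%% Hypothetical example: swap in a new dispatch list for a running
%% HTTP listener; only connections accepted after this call will
%% see the new options.
cowboy:set_protocol_options(my_http_listener,
	[{dispatch, NewDispatch}]).
```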
|
|
This is a big change in the internal cowboy API. This should not
have any impact on existing applications as only the acceptor is
expected to use these API calls.
The function cowboy_listener:wait/3 has been removed. max_connections
checking now occurs directly in cowboy_listener:add_connection/3.
If the pool is full and the acceptor has to wait, the call simply
doesn't return until a slot becomes available.
To accommodate these changes, it is now cowboy_listener that
informs the new connection that it is ready by sending {shoot, self()}.
This should be a great improvement to the latency of responses as
there is one less message to wait for before the request process
can do its work.
Overall the performance under heavy load should also be improved as
we greatly reduce the number of messages sent between the acceptor
and the listener process.
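An illustrative sketch of the handshake from the connection process side; the
function and variable names are assumptions:
``` erlang
%% The new connection process blocks until cowboy_listener has
%% registered it and sends {shoot, ListenerPid}, and only then
%% starts reading requests from the socket.
init(ListenerPid, Socket, Transport, Opts) ->
	receive
		{shoot, ListenerPid} ->
			wait_request(ListenerPid, Socket, Transport, Opts)
	end.
```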
|
|
|
|
|
|
|
|
We're using the existing test suite for websocket servers from the
Autobahn project to verify that our websocket implementation is
sane. A CT test suite and a Python module wrapping the Autobahn
suite have been added. The tests are run when the 'make inttests'
target is executed.
|
|
The body was still in the buffer that's being used for the next
request and was thus interpreted as a request, causing errors.
|
|
|
|
a keep alive socket
|
|
|
|
|
|
|
|
At the same time rename http_headers/0 to cowboy_http:headers/0.
|
|
|
|
|
|
Exported types are much better than include files.
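For illustration, the exported-type pattern looks like this; the type
definition below is a placeholder, not the exact one used:
``` erlang
-module(cowboy_http).
-export_type([headers/0]).

%% Callers can now write cowboy_http:headers() in their own specs
%% instead of including a shared .hrl file.
-type headers() :: [{binary(), binary()}].
```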
|
|
|
|
Excluding generate_etag, last_modified, expires and variances.
|
|
|
|
|
|
|
|
Conflicts:
src/cowboy_http_req.erl
test/http_SUITE.erl
|
|
|
|
|
|
|
|
Follow-up to 0bb23f2400ed0b65834913c8522a978d986f1f92.
As discussed in #119.
|
|
|
|
|
|
|
|
|
|
|
|
They should return true when the request has been processed successfully,
or false otherwise, in which case a 500 error is sent.
Fixes #119.
|
|
It was failing from time to time due to the response being sent
as two separate packets.
|