Age | Commit message | Author |
|
|
|
|
|
This is an undocumented workaround to disable chunks when using HTTP/1.1.
It can be used when the client advertises itself as HTTP/1.1 despite not
understanding the chunked transfer-encoding.
Usage can be found by looking at the test for it. When activated, Cowboy
will still advertise itself as HTTP/1.1, but will send the body the same
way it would if it were HTTP/1.0.
|
|
* Parsing code was moved to cowlib: cowboy_qs:parse_qs/1
* A function was added to build query strings: cowboy_qs:qs/1
* Also added cowboy_qs:urlencode/1 and cowboy_qs:urldecode/1
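A minimal sketch of how these functions might be used; the module and
function names follow the commit message above, and the illustrated
return value is an assumption rather than the documented shape:
%% Sketch only: the return value shown in the comment is assumed.
Qs = <<"id=42&tags=a%20b">>,
List = cowboy_qs:parse_qs(Qs),
%% List is expected to look like [{<<"id">>, <<"42">>}, {<<"tags">>, <<"a b">>}]
Qs2 = cowboy_qs:qs(List),
Enc = cowboy_qs:urlencode(<<"a b">>),
Dec = cowboy_qs:urldecode(Enc).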
|
|
The code for parsing has also been rewritten to be more efficient
and to be able to handle cookie values with spaces inside them properly.
Update cowlib to 0.2.0.
|
|
|
|
|
|
|
|
Certain clients send malformed Accept-Encoding headers, which causes
cowboy_req to crash if compression is enabled.
|
|
It's not allowed; however, a heavily deployed client (the Flash player)
can send such an empty header, so we make a special case for it and
return an empty list when it happens.
|
|
The SPDY connection processes are also supervisors.
Missing:
* sendfile support
* request body reading support
|
|
|
|
|
|
|
|
|
|
|
|
Now instead of {1, 1} we have 'HTTP/1.1', and instead of {1, 0}
we have 'HTTP/1.0'. This is more efficient, easier to read in
crash logs, and clearer in the code.
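For illustration, a small sketch of matching on the new atom form; the
function name and return atoms here are made up, not part of the change:
%% Illustrative only: decide connection behavior from the version atom.
connection_default('HTTP/1.1') -> keepalive;
connection_default('HTTP/1.0') -> close.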
|
|
Clients do not send it. We now skip the value if we receive it,
as it shouldn't happen, and won't for any mainstream client.
|
|
|
|
|
|
|
|
Adds a new type of streaming response fun. It can be set in a similar
way to a streaming body fun with known length:
Req2 = cowboy_req:set_resp_body_fun(chunked, StreamFun, Req)
The fun, StreamFun, should accept a fun as its single argument. This
fun, ChunkFun, is used to send chunks of iodata:
ok = ChunkFun(IoData)
ChunkFun should not be called with an empty binary or iolist, as this
would cause HTTP/1.1 clients to believe the stream is over. The final (0
length) chunk will be sent automatically, even if it has already been
sent, assuming no exception is raised.
Also note that the connection will close after the last chunk for
HTTP/1.0 clients.
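A short sketch of a streaming fun following the description above; the
three chunks are made-up example data:
%% StreamFun receives ChunkFun and sends each chunk of iodata with it.
StreamFun = fun(ChunkFun) ->
    ok = ChunkFun(<<"first chunk">>),
    ok = ChunkFun([<<"second">>, $\s, <<"chunk">>]),
    ok = ChunkFun(<<"last chunk">>)
end,
Req2 = cowboy_req:set_resp_body_fun(chunked, StreamFun, Req).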
|
|
|
|
|
|
|
|
|
|
This kind of function is highly dependent on the proxy used, so parsing
was added for x-forwarded-for instead, and we just let users write the
function that works for them. The code can easily be extracted if anyone
was using the function.
|
|
|
|
|
|
|
|
Make them consistent with the rest of the module.
|
|
This allows us to change the max chunk length on a per chunk basis
instead of for the whole stream. It's also much easier to use this
way even if we don't want to change the chunk size.
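A hedged sketch of the per-call form, assuming this refers to
cowboy_req:stream_body/2 taking the maximum chunk length as its first
argument; the return values and the 64000 byte limit are assumptions:
%% Assumed API: read the whole body, at most 64000 bytes per call.
read_all(Req, Acc) ->
    case cowboy_req:stream_body(64000, Req) of
        {ok, Data, Req2} -> read_all(Req2, [Acc, Data]);
        {done, Req2} -> {iolist_to_binary(Acc), Req2}
    end.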
|
|
|
|
Defaults to a maximum of 1000000 bytes.
Also standardize the te_identity and te_chunked decoding functions.
Now they both try to read as much as possible (up to the limit),
making body reading much faster when not using chunked encoding.
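A minimal sketch, assuming this describes cowboy_req:body and that the
default 1000000 byte limit can be overridden by passing it explicitly:
%% Assumed API: read the body with the default limit applied.
{ok, Body, Req2} = cowboy_req:body(Req).
%% Assumed form with an explicit limit as the first argument:
%% {ok, Body, Req2} = cowboy_req:body(500000, Req).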
|
|
|
|
|
|
|
|
|
|
We now read from the socket to be able to detect errors or TCP close
events, and buffer the data if any. Once the data received goes over
a certain limit, which defaults to 5000 bytes, we simply close the
connection with an {error, overflow} reason.
|
|
We now obtain the peer address before creating the Req object.
If an error occurs, something went wrong and we simply close the
connection directly.
|
|
|
|
|
|
|
|
Basic HTTP authorization according to RFC 2617 is implemented.
Added an example of its usage with a REST handler.
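A hedged sketch of what such a REST handler callback might look like;
the parse_header return shape and the credentials are assumptions, not
the shipped example:
is_authorized(Req, State) ->
    %% Assumed return shape for the parsed authorization header.
    {ok, Auth, Req2} = cowboy_req:parse_header(<<"authorization">>, Req),
    case Auth of
        {<<"basic">>, {User = <<"Alice">>, <<"secret">>}} ->
            {true, Req2, User};
        _ ->
            {{false, <<"Basic realm=\"cowboy\"">>}, Req2, State}
    end.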
|
|
|
|
Types and code are moved to cowboy_router. The match/3 export
from cowboy_dispatcher isn't available anymore as it is called
internally.
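For context, a minimal route definition with cowboy_router as found in
later Cowboy releases; compile/1 and the exact route syntax shown here
are not taken from this commit:
%% Compile a dispatch list mapping "/" to a handler module.
Dispatch = cowboy_router:compile([
    {'_', [
        {"/", my_handler, []}
    ]}
]).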
|
|
Reported and fixed over email by Adrian Roe.
|
|
|
|
This makes it similar to the other has_* functions.
|
|
|