This would allow us to override them without messing up the body,
and would make it usable with the static file handler, for example.
Experimental at this point.
|
The options were added to allow developers to fix timeout
issues when reading large bodies. They also make for a cleaner
interface that is easier to extend.
This commit deprecates the functions init_stream, stream_body
and skip_body which are no longer needed. They will be removed
in 1.0.
The body function can now take an additional argument that is a
list of options. The body_qs, part and part_body functions can
too and simply pass this argument down to the body call.
There are options for disabling the automatic continue reply,
setting a maximum length to be returned (soft limit), setting
the read length and read timeout, and setting the transfer and
content decode functions.
The return values of the body and body_qs functions have changed slightly.
The body function now works similarly to the part_body function,
in that it returns either an ok or a more tuple depending on
whether there is additional data to be read. The body_qs function
can return a badlength tuple if the body is too big. The default
size has been increased from 16KB to 64KB.
The default read length and timeout have been tweaked and vary
depending on the function called.
The body function will now properly process chunked bodies,
which means that the body_qs function will too. But this also
means the behavior has changed slightly, so make sure to test
your code thoroughly when updating.
The body and body_qs functions still accept a length as first
argument for compatibility purposes with older code. Note that
this form is deprecated and will be removed in 1.0. The part and
part_body functions, being new and never having appeared in a
release, have this form removed entirely in this commit.
Again, while most code should work as-is, you should make sure
that it actually does before pushing this to production.
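As an illustration, a hypothetical read loop using the new
interface could look as follows; the option names are those
listed above, while the option values and the helper name are
made up for the example:

    %% Hypothetical helper: accumulate a large body with the new options.
    read_body(Req, Acc) ->
        case cowboy_req:body(Req, [{length, 64000}, {read_timeout, 15000}]) of
            {ok, Data, Req2} ->
                %% No more data to read.
                {ok, <<Acc/binary, Data/binary>>, Req2};
            {more, Data, Req2} ->
                %% More data remains; keep reading.
                read_body(Req2, <<Acc/binary, Data/binary>>)
        end.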
|
The old undocumented API is removed entirely.
While documentation exists for the new API, it will not
be considered set in stone until further testing has been
performed and a file upload example has been added.
The new API should be a little more efficient than the
old API, especially with smaller messages.
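A minimal sketch of draining all parts with the new API; the
exact return shapes of part and part_body used here are
assumptions based on the descriptions above:

    %% Skip over every part of a multipart body.
    multipart(Req) ->
        case cowboy_req:part(Req) of
            {ok, _Headers, Req2} ->
                %% part_body may also return a more tuple for large parts.
                {ok, _Body, Req3} = cowboy_req:part_body(Req2),
                multipart(Req3);
            {done, Req2} ->
                Req2
        end.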
|
This is an undocumented workaround to disable chunks when using HTTP/1.1.
It can be used when the client advertises itself as HTTP/1.1 despite not
understanding the chunked transfer-encoding.
Usage can be found by looking at the test for it. When activated,
Cowboy will still advertise itself as HTTP/1.1, but will send the
body the same way it would if it were HTTP/1.0.
|
* Parsing code was moved to cowlib: cowboy_qs:parse_qs/1
* A function was added to build query strings: cowboy_qs:qs/1
* Also added cowboy_qs:urlencode/1 and cowboy_qs:urldecode/1
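A hypothetical round-trip with these functions, using the names
as given above (the exact signatures are assumptions):

    QS = cowboy_qs:qs([{<<"echo">>, <<"hello world">>}]),
    %% QS is now an urlencoded binary such as <<"echo=hello+world">>.
    [{<<"echo">>, <<"hello world">>}] = cowboy_qs:parse_qs(QS).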
|
The code for parsing has also been rewritten to be more efficient
and to properly handle cookie values with spaces inside them.
Update cowlib to 0.2.0.
|
Certain clients send malformed Accept-Encoding headers, which causes
cowboy_req to crash if compression is enabled.
|
It's not allowed; however, a heavily deployed client (the Flash
player) can send such an empty header, so we special-case it
and return an empty list when that happens.
|
The SPDY connection processes are also supervisors.
Missing:
* sendfile support
* request body reading support
|
Now instead of {1, 1} we have 'HTTP/1.1', and instead of {1, 0}
we have 'HTTP/1.0'. This is more efficient, easier to read in
crash logs, and clearer in the code.
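For example, code matching on the version could now read as
follows; the {Value, Req} accessor shape is assumed from the
cowboy_req API of this era, and the atoms returned are illustrative:

    connection_policy(Req) ->
        {Version, _Req2} = cowboy_req:version(Req),
        case Version of
            'HTTP/1.1' -> keepalive;
            'HTTP/1.0' -> close
        end.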
|
Clients do not send it. We now skip the value if we receive it,
as this shouldn't happen and won't with any mainstream client.
|
Adds a new type of streaming response fun. It can be set in a similar
way to a streaming body fun with known length:
Req2 = cowboy_req:set_resp_body_fun(chunked, StreamFun, Req)
The fun, StreamFun, should accept a fun as its single argument. This
fun, ChunkFun, is used to send chunks of iodata:
ok = ChunkFun(IoData)
ChunkFun should not be called with an empty binary or iolist as
this will cause HTTP/1.1 clients to believe the stream is over.
The final (zero-length) chunk will be sent automatically - even
if it has already been sent - assuming no exception is raised.
Also note that the connection will close after the last chunk
for HTTP/1.0 clients.
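Putting the above together, a minimal use could look like this
(the handler context and chunk contents are illustrative):

    reply_chunked(Req) ->
        %% StreamFun receives ChunkFun and sends chunks of iodata.
        StreamFun = fun(ChunkFun) ->
            ok = ChunkFun(<<"Hello ">>),
            ok = ChunkFun(<<"chunked world!">>)
        end,
        cowboy_req:set_resp_body_fun(chunked, StreamFun, Req).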
|
This kind of function is highly dependent on the proxy used,
so parsing was added for x-forwarded-for instead, and we just
let users write the function that works for them. The code can
easily be extracted if anyone was using the function.
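A user-written replacement could start from the new parsing,
for example (the parse_header return shape here is an assumption
based on the cowboy_req API of the time):

    peer_chain(Req) ->
        %% Peers is the parsed list of addresses from the header.
        {ok, Peers, Req2} = cowboy_req:parse_header(<<"x-forwarded-for">>, Req),
        {Peers, Req2}.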
|
Make them consistent with the rest of the module.
|
This allows us to change the max chunk length on a per-chunk
basis instead of for the whole stream. It's also much easier to
use this way even if we don't want to change the chunk size.
|
Defaults to a maximum of 1000000 bytes.
Also standardize the te_identity and te_chunked decoding functions.
Now they both try to read as much as possible (up to the limit),
making body reading much faster when not using chunked encoding.
|
We now read from the socket to be able to detect errors or TCP
close events, and buffer any data received. Once the buffered
data goes over a certain limit, which defaults to 5000 bytes, we
simply close the connection with an {error, overflow} reason.
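The idea, as a minimal sketch only (this is not Cowboy's
actual code):

    %% Buffer incoming data until the limit is exceeded.
    buffer_loop(_Socket, Buffer, Limit) when byte_size(Buffer) > Limit ->
        {error, overflow};
    buffer_loop(Socket, Buffer, Limit) ->
        case gen_tcp:recv(Socket, 0) of
            {ok, Data} ->
                buffer_loop(Socket, <<Buffer/binary, Data/binary>>, Limit);
            {error, Reason} ->
                %% Socket error or TCP close detected.
                {error, Reason}
        end.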
|
We now obtain the peer address before creating the Req object.
If an error occurs at that point, something went wrong and we
simply close the connection right away.
|