authorLoïc Hoguin <[email protected]>2016-03-28 15:36:42 +0200
committerLoïc Hoguin <[email protected]>2016-03-28 15:36:42 +0200
commitfe3492a98de29942477b061cd02c92246f4bf85a (patch)
tree2255b796a657e6e4dfb72beec1141258d17f1220 /_build/content
downloadninenines.eu-fe3492a98de29942477b061cd02c92246f4bf85a.tar.gz
ninenines.eu-fe3492a98de29942477b061cd02c92246f4bf85a.tar.bz2
ninenines.eu-fe3492a98de29942477b061cd02c92246f4bf85a.zip
Initial commit, new website system
Diffstat (limited to '_build/content')
-rw-r--r--_build/content/articles/cowboy2-qs.asciidoc184
-rw-r--r--_build/content/articles/erlang-scalability.asciidoc143
-rw-r--r--_build/content/articles/erlang-validate-utf8.asciidoc202
-rw-r--r--_build/content/articles/erlang.mk-and-relx.asciidoc131
-rw-r--r--_build/content/articles/erlanger-playbook-september-2015-update.asciidoc25
-rw-r--r--_build/content/articles/erlanger-playbook.asciidoc69
-rw-r--r--_build/content/articles/farwest-funded.asciidoc37
-rw-r--r--_build/content/articles/january-2014-status.asciidoc159
-rw-r--r--_build/content/articles/on-open-source.asciidoc137
-rw-r--r--_build/content/articles/ranch-ftp.asciidoc220
-rw-r--r--_build/content/articles/the-story-so-far.asciidoc250
-rw-r--r--_build/content/articles/tictactoe.asciidoc91
-rw-r--r--_build/content/articles/xerl-0.1-empty-modules.asciidoc153
-rw-r--r--_build/content/articles/xerl-0.2-two-modules.asciidoc152
-rw-r--r--_build/content/articles/xerl-0.3-atomic-expressions.asciidoc135
-rw-r--r--_build/content/articles/xerl-0.4-expression-separator.asciidoc48
-rw-r--r--_build/content/articles/xerl-0.5-intermediate-module.asciidoc145
-rw-r--r--_build/content/docs.asciidoc28
-rw-r--r--_build/content/donate.asciidoc24
-rw-r--r--_build/content/services.asciidoc95
-rw-r--r--_build/content/slogan.asciidoc7
-rw-r--r--_build/content/talks.asciidoc14
22 files changed, 2449 insertions, 0 deletions
diff --git a/_build/content/articles/cowboy2-qs.asciidoc b/_build/content/articles/cowboy2-qs.asciidoc
new file mode 100644
index 00000000..90ef714b
--- /dev/null
+++ b/_build/content/articles/cowboy2-qs.asciidoc
@@ -0,0 +1,184 @@
++++
+date = "2014-08-20T00:00:00+01:00"
+title = "Cowboy 2.0 and query strings"
+
++++
+
+Now that Cowboy 1.0 is out, I can spend some of my time thinking
+about Cowboy 2.0 that will be released soon after Erlang/OTP 18.0.
+This entry discusses the proposed changes to query string handling
+in Cowboy.
+
+Cowboy 2.0 will respond to user wishes by simplifying the interface
+of the `cowboy_req` module. Users want two things: less
+juggling with the Req variable, and more maps. Maps are the only
+dynamic key/value data structure in Erlang that can be matched directly
+to extract values, letting users greatly simplify their code:
+they no longer need to call functions for everything.
+
+Query strings are a good candidate for maps. A query string is a list
+of key/values, so it's pretty obvious we can win a lot by using maps.
+However, query strings differ from maps in one important way: they can
+have duplicate keys.
+
+How are we expected to handle duplicate keys? There's no standard
+behavior; it's up to applications. And looking at what is done in
+the wild, there's no de facto standard either. Some ignore
+duplicate keys (keeping the first or the last they find); others
+require duplicate keys to end with `[]` to automatically
+put the values in a list; worse, languages like PHP
+allow you to do things like `key[something][other]` and
+create a deep structure for it. Finally, some allow any key to have
+duplicates and just give you lists of key/values.
+
+So far, Cowboy has had functions to retrieve query string values one
+at a time, returning the first value found when a key has
+duplicates. It also has a function returning the entire list
+with all duplicates, allowing you to filter it to get all of them,
+and another function that returns the raw query string.
+
+What are duplicates used for? Not that many things, actually.
+
+One use of duplicate keys is with HTML forms. It is common practice
+to give all related checkboxes the same name so you get a list of
+what's been checked. When nothing is checked, nothing is sent at all:
+the key is simply absent from the list.
+
+Another use of duplicate keys is when generating forms. A good
+example of that would be a form that allows uploading any number
+of files. When you add a file, client-side code adds another field
+to the form. Repeat up to a certain limit.
+
+And that's about it. Of note is that HTML radio elements share
+the same name too, but only one key/value is sent, so they are not
+relevant here.
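+
+To make this concrete, here is what a form with duplicate
+checkbox keys might send, and the list of key/values it
+parses into (the values are purely illustrative):
+
+[source,erlang]
+----
+%% Query string sent by the form: "name=joe&choices=blue&choices=red"
+%% Parsed into a list of key/values; the duplicate key simply
+%% appears twice.
+[{<<"name">>, <<"joe">>},
+ {<<"choices">>, <<"blue">>},
+ {<<"choices">>, <<"red">>}]
+----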
+
+Normally this would be the part where I tell you how we solve
+this elegantly. But I had doubts. Why? Because there is no good
+solution to solving only this particular problem.
+
+I then stopped thinking about duplicate keys for a minute and
+started to think about the larger problem.
+
+Query strings are input data. They take a particular form,
+and may be sent as part of the URI or as part of the request
+body. We have other kinds of input data. We have headers and
+cookies and the request body in various forms. We also have
+path segments in URIs.
+
+What do you do with input data? Well you use it to do
+something. But there is one thing that you almost always do
+(and if you don't, you really should): you validate it and
+you map it into Erlang terms.
+
+Until now, Cowboy let the user take care of validation and conversion
+into Erlang terms. Rather, it let the user take care
+of it everywhere except one place. Guess where? That's right:
+bindings.
+
+If you define routes with bindings then you have the option
+to provide constraints. Constraints can be used to do two things:
+validate the data and convert it in a more appropriate term. For
+example if you use the `int` constraint, Cowboy will
+make sure the binding is an integer, and will replace the value
+with the integer representation so that you can use it directly.
+In this particular case it not only routes the URI, but also
+validates and converts the bindings directly.
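+
+As a sketch, a route using the `int` constraint could look like
+this (the path and handler name are made up for the example):
+
+[source,erlang]
+----
+%% The user_id binding is validated and converted to an integer
+%% before the handler ever sees it.
+Routes = [{'_', [
+    {"/users/:user_id", [{user_id, int}], user_handler, []}
+]}].
+----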
+
+This is very relevant in the case of our duplicate keys,
+because if we have a list with duplicates of a key, chances
+are we want to convert that into a list of Erlang terms, and
+also make sure that all the elements in this list are expected.
+
+The answer to this particular problem is simple. We need a
+function that will parse the query string and apply constraints.
+But this is not all, there is one other problem to be solved.
+
+The other problem is that for the user some keys are mandatory
+and some are optional. Optional keys include the ones that
+correspond to HTML checkboxes: if the key for one or more
+checkboxes is missing from the query string, we still want to
+have an empty list in our map so we can easily match. Matching
+maps is great, but not so much when values might be missing,
+so we have to normalize this data a little.
+
+This problem is solved by allowing a default value. If the
+key is missing and a default exists, set it. If no default
+exists, then the key was mandatory and we want to crash.
+
+I therefore make a proposal for changing the query string
+interface to three functions.
+
+The first function already exists, it is `cowboy_req:qs(Req)`
+and it returns only the query string binary. No more Req returned.
+
+The second function is a renaming of `cowboy_req:qs_vals(Req)`
+to something more explicit: `cowboy_req:parse_qs(Req)`.
+The new name implies that a parsing operation is done. It was implicit
+and cached before. It will be explicit and not cached anymore now.
+Again, no more Req returned.
+
+The third function is the one I mentioned above. I think
+the interface `cowboy_req:match_qs(Req, Fields)` is
+most appropriate. It returns a normalized map that is the same
+regardless of optional fields being provided with the request,
+allowing for easy matching. It crashes if something went wrong.
+Still no Req returned.
+
+I feel that this three-function interface provides everything
+one would need to comfortably write applications. You can get
+low level and get the query string directly; you can get a list
+of key/value binaries without any additional processing and do it
+on your own; or you can get a processed map that contains Erlang
+terms ready to be used.
+
+I strongly believe that by extending constraints beyond
+bindings, to query strings, cookies and
+other key/values in Cowboy, we can allow the developer to quickly
+and easily go from HTTP request to Erlang function calls. The
+constraints are reusable functions that can serve as guards
+against unwanted data, providing convenience in the process.
+
+Your handlers will not look like an endless series of calls
+to get and convert the input data, they will instead be just
+one call at the beginning followed by the actual application
+logic, thanks to constraints and maps.
+
+[source,erlang]
+----
+handle(Req, State) ->
+ #{name:=Name, email:=Email, choices:=ChoicesList, remember_me:=RememberMe} =
+ cowboy_req:match_qs(Req, [
+ name, {email, email},
+ {choices, fun check_choices/1, []},
+ {remember_me, boolean, false}]),
+ save_choices(Name, Email, ChoicesList),
+ if RememberMe -> create_account(Name, Email); true -> ok end,
+ {ok, Req, State}.
+
+check_choices(<<"blue">>) -> {true, blue};
+check_choices(<<"red">>) -> {true, red};
+check_choices(_) -> false.
+----
+
+(Don't look too closely at the structure yet.)
+
+As you can see in the above snippet, it becomes really easy
+to go from query string to values. You can also use the map
+directly as it is guaranteed to only contain the keys you
+specified, any extra key is not returned.
+
+This would, I believe, be a huge step up, as we can now
+focus on writing applications instead of translating HTTP
+calls. Cowboy can take care of it for us.
+
+And to conclude, this also solves our duplicate keys
+dilemma, as they now automatically become a list of binaries,
+and this list is then checked against constraints that
+will fail if they were not expecting a list. And in the
+example above, it even converts the values to atoms for
+easier manipulation.
+
+As usual, feedback is more than welcome, and I apologize
+for the rocky structure of this post as it contains all the
+thoughts that went into this rather than just the conclusion.
diff --git a/_build/content/articles/erlang-scalability.asciidoc b/_build/content/articles/erlang-scalability.asciidoc
new file mode 100644
index 00000000..3fdaa445
--- /dev/null
+++ b/_build/content/articles/erlang-scalability.asciidoc
@@ -0,0 +1,143 @@
++++
+date = "2013-02-18T00:00:00+01:00"
+title = "Erlang Scalability"
+
++++
+
+I would like to share some experience and theories on
+Erlang scalability.
+
+This will be in the form of a series of hints, which
+may or may not be accompanied by explanations as to why
+things are this way, or how they improve or reduce the scalability
+of a system. I will try to do my best to avoid giving falsehoods,
+even if that means a few things won't be explained.
+
+== NIFs
+
+NIFs are considered harmful. I don't know of a single NIF-based
+library that I would recommend. That doesn't mean they should
+all be avoided, just that if you want your system to
+scale, you probably shouldn't use a NIF.
+
+A common case for using NIFs is JSON processing. The problem
+is that JSON is a highly inefficient data structure (similar
+in inefficiency to XML, although perhaps not as bad). If you can
+avoid using JSON, you probably should. MessagePack can replace
+it in many situations.
+
+Long-running NIFs will take over a scheduler and prevent Erlang
+from efficiently handling many processes.
+
+Short-running NIFs will still confuse the scheduler if they
+take more than a few microseconds to run.
+
+Threaded NIFs, or the use of `enif_consume_timeslice`,
+might help alleviate this problem, but they're not a silver bullet.
+
+And as you already know, a crashing NIF will take down your VM,
+destroying any claims you may have at being scalable.
+
+Never use a NIF because "C is fast". This is only true in
+single-threaded programs.
+
+== BIFs
+
+BIFs can also be harmful. While they are generally better than
+NIFs, they are not perfect and some of them might have harmful
+effects on the scheduler.
+
+A great example of this is the `erlang:decode_packet/3`
+BIF, when used for HTTP request or response decoding. Avoiding
+its use in _Cowboy_ allowed us to see a big increase in
+the number of requests production systems were able to handle,
+up to two times the original amount. Incidentally this is something
+that is impossible to detect using synthetic benchmarks.
+
+BIFs that return immediately are perfectly fine though.
+
+== Binary strings
+
+Binary strings use less memory, which means you spend less time
+allocating memory compared to list-based strings. They are also
+more natural for string manipulation because they are optimized
+for appending (as opposed to prepending for lists).
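+
+A minimal sketch of appending to binaries:
+
+[source,erlang]
+----
+%% Concatenate a list of binaries. Appending to the accumulator
+%% this way lets the runtime often reuse its buffer instead of
+%% copying it on every iteration.
+join(Parts) ->
+    lists:foldl(fun(P, Acc) -> <<Acc/binary, P/binary>> end, <<>>, Parts).
+----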
+
+If you can process a binary string using a single match context,
+then the code will run incredibly fast. The effect is even
+greater if the code is compiled using HiPE, even if your Erlang
+system isn't compiled natively.
+
+Avoid `binary:split` and `binary:replace`
+if you can. They have a certain overhead due to supporting
+many options that you probably don't need for most operations.
+
+== Buffering and streaming
+
+Use binaries. They are great for appending, and it's a direct copy
+from what you receive from a stream (usually a socket or a file).
+
+Be careful not to receive data indefinitely, as you might end up
+with a single binary taking up huge amounts of memory.
+
+If you stream from a socket and know how much data you expect,
+then fetch that data in a single `recv` call.
+
+Similarly, if you can use a single `send` call, then
+you should do so, to avoid going back and forth unnecessarily between
+your Erlang process and the Erlang port for your socket.
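+
+A sketch of both points, assuming a socket in passive mode and a
+known payload length:
+
+[source,erlang]
+----
+%% One recv call for the whole expected payload, with a timeout.
+{ok, Data} = gen_tcp:recv(Socket, Length, 5000),
+%% One send call for the whole response, built as an iolist so no
+%% intermediate binary needs to be created.
+ok = gen_tcp:send(Socket,
+    [<<"HTTP/1.1 200 OK\r\n">>, Headers, <<"\r\n">>, Body]).
+----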
+
+== List and binary comprehensions
+
+Prefer list comprehensions over `lists:map/2`. The
+compiler will be able to optimize your code greatly, for example
+not creating the result if you don't need it. As time goes on,
+more optimizations will be added to the compiler and you will
+automatically benefit from them.
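+
+For example:
+
+[source,erlang]
+----
+%% Equivalent to lists:map(fun(N) -> N * 2 end, [1, 2, 3]),
+%% but easier for the compiler to optimize.
+Doubled = [N * 2 || N <- [1, 2, 3]].
+%% Doubled =:= [2, 4, 6]
+----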
+
+== gen_server is no silver bullet
+
+It's a bad idea to use `gen_server` for everything.
+For two reasons.
+
+There is an overhead every time the `gen_server` receives
+a call, a cast or a simple message. It can be a problem if your
+`gen_server` is in a critical code path where speed
+is all that matters. Do not hesitate to create other kinds of
+processes where it makes sense. And depending on the kind of process,
+you might want to consider making them special processes, which
+would essentially behave the same as a `gen_server`.
+
+A common mistake is to have a unique `gen_server` to
+handle queries from many processes. This generally becomes the
+biggest bottleneck you'll want to fix. You should try to avoid
+relying on a single process, using a pool if you can.
+
+== Supervisor and monitoring
+
+A `supervisor` is also a `gen_server`,
+so the previous points apply to it as well.
+
+Sometimes you're in a situation where you have supervised
+processes but also want to monitor them in one or more other
+processes, effectively duplicating the work. The supervisor
+already knows when processes die, why not use this to our
+advantage?
+
+You can create a custom supervisor process that performs
+both the supervision and the handling of exit and other events,
+letting you avoid the combination of supervising and monitoring,
+which can prove harmful when many processes die at once, or when
+you have many short-lived processes.
+
+== ets for LOLSPEED(tm)
+
+If you have data queried or modified by many processes, then
+`ets` public or protected tables will give you the
+performance boost you require. Do not forget to set the
+`read_concurrency` or `write_concurrency`
+options though.
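+
+For example, a table meant to be read concurrently by many
+processes (the table name is illustrative):
+
+[source,erlang]
+----
+%% A public named table optimized for concurrent reads.
+Tab = ets:new(my_cache, [set, public, named_table,
+    {read_concurrency, true}]).
+----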
+
+You might also be thrilled to know that Erlang R16B will feature
+a big performance improvement for accessing `ets` tables
+concurrently.
diff --git a/_build/content/articles/erlang-validate-utf8.asciidoc b/_build/content/articles/erlang-validate-utf8.asciidoc
new file mode 100644
index 00000000..383afcc6
--- /dev/null
+++ b/_build/content/articles/erlang-validate-utf8.asciidoc
@@ -0,0 +1,202 @@
++++
+date = "2015-03-06T00:00:00+01:00"
+title = "Validating UTF-8 binaries with Erlang"
+
++++
+
+Yesterday I pushed Websocket permessage-deflate to
+Cowboy master. I also pushed
+https://github.com/ninenines/cowlib/commit/7e4983b70ddf8cedb967e36fba6a600731bdad5d[a
+change in the way the code validates UTF-8 data]
+(required for text and close frames as per the spec).
+
+When looking into why the permessage-deflate tests
+in autobahntestsuite were taking such a long time, I
+found that autobahn is using an adaptation of the
+algorithm named http://bjoern.hoehrmann.de/utf-8/decoder/dfa/[Flexible
+and Economical UTF-8 Decoder]. This is the C99
+implementation:
+
+[source,c]
+----
+// Copyright (c) 2008-2009 Bjoern Hoehrmann <[email protected]>
+// See http://bjoern.hoehrmann.de/utf-8/decoder/dfa/ for details.
+
+#define UTF8_ACCEPT 0
+#define UTF8_REJECT 1
+
+static const uint8_t utf8d[] = {
+ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, // 00..1f
+ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, // 20..3f
+ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, // 40..5f
+ 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, // 60..7f
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9, // 80..9f
+ 7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7, // a0..bf
+ 8,8,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2, // c0..df
+ 0xa,0x3,0x3,0x3,0x3,0x3,0x3,0x3,0x3,0x3,0x3,0x3,0x3,0x4,0x3,0x3, // e0..ef
+ 0xb,0x6,0x6,0x6,0x5,0x8,0x8,0x8,0x8,0x8,0x8,0x8,0x8,0x8,0x8,0x8, // f0..ff
+ 0x0,0x1,0x2,0x3,0x5,0x8,0x7,0x1,0x1,0x1,0x4,0x6,0x1,0x1,0x1,0x1, // s0..s0
+ 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,1,1,1,1,1,0,1,0,1,1,1,1,1,1, // s1..s2
+ 1,2,1,1,1,1,1,2,1,2,1,1,1,1,1,1,1,1,1,1,1,1,1,2,1,1,1,1,1,1,1,1, // s3..s4
+ 1,2,1,1,1,1,1,1,1,2,1,1,1,1,1,1,1,1,1,1,1,1,1,3,1,3,1,1,1,1,1,1, // s5..s6
+ 1,3,1,1,1,1,1,3,1,3,1,1,1,1,1,1,1,3,1,1,1,1,1,1,1,1,1,1,1,1,1,1, // s7..s8
+};
+
+uint32_t inline
+decode(uint32_t* state, uint32_t* codep, uint32_t byte) {
+ uint32_t type = utf8d[byte];
+
+ *codep = (*state != UTF8_ACCEPT) ?
+ (byte & 0x3fu) | (*codep << 6) :
+ (0xff >> type) & (byte);
+
+ *state = utf8d[256 + *state*16 + type];
+ return *state;
+}
+----
+
+And this is the Erlang implementation I came up with:
+
+[source,erlang]
+----
+%% This function returns 0 on success, 1 on error, and 2..8 on incomplete data.
+validate_utf8(<<>>, State) -> State;
+validate_utf8(<< C, Rest/bits >>, 0) when C < 128 -> validate_utf8(Rest, 0);
+validate_utf8(<< C, Rest/bits >>, 2) when C >= 128, C < 144 -> validate_utf8(Rest, 0);
+validate_utf8(<< C, Rest/bits >>, 3) when C >= 128, C < 144 -> validate_utf8(Rest, 2);
+validate_utf8(<< C, Rest/bits >>, 5) when C >= 128, C < 144 -> validate_utf8(Rest, 2);
+validate_utf8(<< C, Rest/bits >>, 7) when C >= 128, C < 144 -> validate_utf8(Rest, 3);
+validate_utf8(<< C, Rest/bits >>, 8) when C >= 128, C < 144 -> validate_utf8(Rest, 3);
+validate_utf8(<< C, Rest/bits >>, 2) when C >= 144, C < 160 -> validate_utf8(Rest, 0);
+validate_utf8(<< C, Rest/bits >>, 3) when C >= 144, C < 160 -> validate_utf8(Rest, 2);
+validate_utf8(<< C, Rest/bits >>, 5) when C >= 144, C < 160 -> validate_utf8(Rest, 2);
+validate_utf8(<< C, Rest/bits >>, 6) when C >= 144, C < 160 -> validate_utf8(Rest, 3);
+validate_utf8(<< C, Rest/bits >>, 7) when C >= 144, C < 160 -> validate_utf8(Rest, 3);
+validate_utf8(<< C, Rest/bits >>, 2) when C >= 160, C < 192 -> validate_utf8(Rest, 0);
+validate_utf8(<< C, Rest/bits >>, 3) when C >= 160, C < 192 -> validate_utf8(Rest, 2);
+validate_utf8(<< C, Rest/bits >>, 4) when C >= 160, C < 192 -> validate_utf8(Rest, 2);
+validate_utf8(<< C, Rest/bits >>, 6) when C >= 160, C < 192 -> validate_utf8(Rest, 3);
+validate_utf8(<< C, Rest/bits >>, 7) when C >= 160, C < 192 -> validate_utf8(Rest, 3);
+validate_utf8(<< C, Rest/bits >>, 0) when C >= 194, C < 224 -> validate_utf8(Rest, 2);
+validate_utf8(<< 224, Rest/bits >>, 0) -> validate_utf8(Rest, 4);
+validate_utf8(<< C, Rest/bits >>, 0) when C >= 225, C < 237 -> validate_utf8(Rest, 3);
+validate_utf8(<< 237, Rest/bits >>, 0) -> validate_utf8(Rest, 5);
+validate_utf8(<< C, Rest/bits >>, 0) when C =:= 238; C =:= 239 -> validate_utf8(Rest, 3);
+validate_utf8(<< 240, Rest/bits >>, 0) -> validate_utf8(Rest, 6);
+validate_utf8(<< C, Rest/bits >>, 0) when C =:= 241; C =:= 242; C =:= 243 -> validate_utf8(Rest, 7);
+validate_utf8(<< 244, Rest/bits >>, 0) -> validate_utf8(Rest, 8);
+validate_utf8(_, _) -> 1.
+----
+
+Does it look similar to you? So how did we get there?
+
+I started with a naive implementation of the original. First, we
+don't need the codepoint calculated and extracted for our validation
+function. We just want to know the data is valid, so we only need to
+calculate the next state. Then, the only things we needed to be careful
+about were that tuples are 1-based, and that we need to stop processing
+the binary when we reach state 1 or when the binary is empty.
+
+[source,erlang]
+----
+validate_utf8(<<>>, State) -> State;
+validate_utf8(_, 1) -> 1;
+validate_utf8(<< C, Rest/bits >>, State) ->
+ validate_utf8(Rest, element(257 + State * 16 + element(1 + C, ?UTF8D), ?UTF8D)).
+----
+
+The macro `?UTF8D` is the tuple equivalent of the C array
+in the original code.
+
+Compared to our previous algorithm, this performed about the same.
+In some situations a little faster, in some a little slower. In other words,
+not good enough. But because this new algorithm allows us to avoid a binary
+concatenation, it warranted looking further.
+
+It was time to step into crazy land.
+
+Erlang is very good at pattern matching, even more so than doing
+arithmetic coupled with fetching elements from a tuple. So I decided I was
+going to write all possible clauses for all combinations of `C`
+and `State`. And by write I mean generate.
+
+So I opened my Erlang shell, defined the variable `D` to be
+the tuple `?UTF8D` with its 400 elements, and then ran the
+following expression (after a bit of trial and error):
+
+[source,erlang]
+----
+16> file:write_file("out.txt",
+ [io_lib:format("validate_utf8(<< ~p, Rest/bits >>, ~p) -> ~p;~n",
+ [C, S, element(257 + S * 16 + element(1 + C, D), D)])
+ || C <- lists:seq(0,255), S <- lists:seq(0,8)]).
+ok
+----
+
+The result is a 2304-line file containing 2304 clauses.
+People who pay attention to what I say on Twitter will remember
+I said something around 3000 clauses, but that was just me not
+using the right number of states in my estimate.
+
+There was a little more work to be done on this generated
+code that I did using regular expressions. We need to recurse
+when the resulting state is not 1. We also need to stop when
+the binary is empty, making it the 2305th clause.
+
+Still, 2305 is a lot. But hey, the code did work, and faster
+than the previous implementation too! Perhaps I could
+find a way to reduce its size.
+
+Removing all the clauses that return 1 and putting a catch-all
+clause at the end instead reduced the number to about 500, and
+showed that many clauses were similar:
+
+[source,erlang]
+----
+validate_utf8(<< 0, Rest/bits >>, 0) -> validate_utf8(Rest, 0);
+validate_utf8(<< 1, Rest/bits >>, 0) -> validate_utf8(Rest, 0);
+validate_utf8(<< 2, Rest/bits >>, 0) -> validate_utf8(Rest, 0);
+validate_utf8(<< 3, Rest/bits >>, 0) -> validate_utf8(Rest, 0);
+validate_utf8(<< 4, Rest/bits >>, 0) -> validate_utf8(Rest, 0);
+validate_utf8(<< 5, Rest/bits >>, 0) -> validate_utf8(Rest, 0);
+validate_utf8(<< 6, Rest/bits >>, 0) -> validate_utf8(Rest, 0);
+validate_utf8(<< 7, Rest/bits >>, 0) -> validate_utf8(Rest, 0);
+----
+
+But also:
+
+[source,erlang]
+----
+validate_utf8(<< 157, Rest/bits >>, 2) -> validate_utf8(Rest, 0);
+validate_utf8(<< 157, Rest/bits >>, 3) -> validate_utf8(Rest, 2);
+validate_utf8(<< 157, Rest/bits >>, 5) -> validate_utf8(Rest, 2);
+validate_utf8(<< 157, Rest/bits >>, 6) -> validate_utf8(Rest, 3);
+validate_utf8(<< 157, Rest/bits >>, 7) -> validate_utf8(Rest, 3);
+validate_utf8(<< 158, Rest/bits >>, 2) -> validate_utf8(Rest, 0);
+validate_utf8(<< 158, Rest/bits >>, 3) -> validate_utf8(Rest, 2);
+validate_utf8(<< 158, Rest/bits >>, 5) -> validate_utf8(Rest, 2);
+validate_utf8(<< 158, Rest/bits >>, 6) -> validate_utf8(Rest, 3);
+validate_utf8(<< 158, Rest/bits >>, 7) -> validate_utf8(Rest, 3);
+----
+
+Patterns, my favorites!
+
+A little more time was spent editing the 500 or so clauses into
+smaller equivalents, testing that performance was not impacted, and
+committing the result.
+
+The patterns above can be found here in the resulting function:
+
+[source,erlang]
+----
+validate_utf8(<< C, Rest/bits >>, 0) when C < 128 -> validate_utf8(Rest, 0);
+...
+validate_utf8(<< C, Rest/bits >>, 2) when C >= 144, C < 160 -> validate_utf8(Rest, 0);
+validate_utf8(<< C, Rest/bits >>, 3) when C >= 144, C < 160 -> validate_utf8(Rest, 2);
+validate_utf8(<< C, Rest/bits >>, 5) when C >= 144, C < 160 -> validate_utf8(Rest, 2);
+validate_utf8(<< C, Rest/bits >>, 6) when C >= 144, C < 160 -> validate_utf8(Rest, 3);
+validate_utf8(<< C, Rest/bits >>, 7) when C >= 144, C < 160 -> validate_utf8(Rest, 3);
+...
+----
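+
+As a quick sanity check of the final function, assuming it is
+exported from its module for testing:
+
+[source,erlang]
+----
+%% 0 means valid, 1 invalid; 2..8 mean the data ends mid-sequence.
+0 = validate_utf8(<<"abc">>, 0),
+0 = validate_utf8(<<"é"/utf8>>, 0),
+1 = validate_utf8(<<255>>, 0),
+2 = validate_utf8(<<195>>, 0).
+----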
+
+I hope you enjoyed this post.
diff --git a/_build/content/articles/erlang.mk-and-relx.asciidoc b/_build/content/articles/erlang.mk-and-relx.asciidoc
new file mode 100644
index 00000000..e8a667a8
--- /dev/null
+++ b/_build/content/articles/erlang.mk-and-relx.asciidoc
@@ -0,0 +1,131 @@
++++
+date = "2013-05-28T00:00:00+01:00"
+title = "Build Erlang releases with Erlang.mk and Relx"
+
++++
+
+Building OTP releases has always been a difficult task. Tools like
+Reltool or Rebar have made this simpler, but
+it's no panacea. This article will show you an alternative and
+hopefully much simpler solution.
+
+There are two steps to building a release. First you need to build
+the various OTP applications you want to include in the release. Once
+done, you need to create the release itself, by including the Erlang
+runtime system alongside the applications, a boot script to start the
+node and all its applications, and some configuration files.
+
+https://github.com/extend/erlang.mk[Erlang.mk] solves
+the first step. It is an include file for GNU Make. Just
+including it in a Makefile is enough to allow building your project,
+fetching and building dependencies, building documentation, performing
+static analysis and more.
+
+https://github.com/erlware/relx[Relx] solves the second
+step. It is a release creation tool, wrapped into a single executable
+file. It doesn't require a configuration file. And if you do need one,
+it will be a pretty small one.
+
+Let's take a look at the smallest Erlang.mk powered
+Makefile. There is only one thing required: defining the project
+name.
+
+[source,make]
+----
+PROJECT = my_project
+
+include erlang.mk
+----
+
+Simply doing this allows you to build your application by typing
+`make`, running tests using `make tests`, and
+more. It will even compile your '.dtl' files found in the
+'templates/' directory if you are using ErlyDTL!
+
+Let's now take a look at a simplified version of the Makefile for
+this website. I only removed a few targets that were off-topic.
+
+[source,make]
+----
+PROJECT = ninenines
+
+DEPS = cowboy erlydtl
+dep_cowboy = https://github.com/extend/cowboy.git 0.8.5
+dep_erlydtl = https://github.com/evanmiller/erlydtl.git 4d0dc8fb
+
+.PHONY: release clean-release
+
+release: clean-release all projects
+ relx -o rel/$(PROJECT)
+
+clean-release: clean-projects
+ rm -rf rel/$(PROJECT)
+
+include erlang.mk
+----
+
+You can see here how to define dependencies. First you list all
+the dependency names, then you have one line per dependency, giving
+the repository URL and the commit number, tag or branch you want.
+
+Then you can see two targets defined, with `release`
+becoming the default target, because it was defined first. You can
+override the default target `all`, which builds the
+application and its dependencies, this way.
+
+And as you can see, the `release` target uses
+Relx to build a release into the 'rel/ninenines/'
+directory. Let's take a look at the configuration file for this release.
+
+[source,erlang]
+----
+{release, {ninenines, "1"}, [ninenines]}.
+
+{extended_start_script, true}.
+{sys_config, "rel/sys.config"}.
+
+{overlay, [
+ {mkdir, "log"},
+ {copy, "rel/vm.args",
+ "releases/\{\{release_name\}\}-\{\{release_version\}\}/vm.args"}
+]}.
+----
+
+The first line defines a release named `ninenines`, which
+has a version number `"1"` and includes one application, also
+named `ninenines`, although it doesn't have to be.
+
+We then use the `extended_start_script` option to tell
+Relx that we would like to have a start script that allows
+us to not only start the release, but do so with the node in the
+background, or also to allow us to connect to a running node, and so on.
+This start script has the same features as the ones tools like
+Rebar generate.
+
+The rest of the file just makes sure our configuration files are
+where we expect them. Relx will automatically take care
+of your 'sys.config' file as long as you tell it where to
+find it. The 'vm.args' file used by the extended start script
+needs to be handled more explicitly by using an overlay however.
+
+How does Relx find what applications to include?
+By looking at the application dependencies in the '.app'
+file of each OTP application. Make sure you put all dependencies in
+there, _including_ library applications, and Relx
+will find everything for you.
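+
+A sketch of the relevant part of a '.app' file; Relx walks the
+`applications` list of each application to compute the full set
+to include (the values below are illustrative):
+
+[source,erlang]
+----
+{application, ninenines, [
+    {description, "ninenines.eu website"},
+    {vsn, "0.2.0"},
+    %% Relx follows these dependencies, including library
+    %% applications, to decide what goes into the release.
+    {applications, [kernel, stdlib, cowboy, erlydtl]}
+]}.
+----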
+
+For example, this release includes the following applications.
+Only what's strictly required.
+
+----
+compiler-4.9.1 crypto-2.3 kernel-2.16.1 ranch-0.8.3 syntax_tools-1.6.11
+cowboy-0.8.5 erlydtl-0.7.0 ninenines-0.2.0 stdlib-1.19.1
+----
+
+The 'sys.config' file is standard and
+http://www.erlang.org/doc/man/config.html[well documented].
+The 'vm.args' file is just a plain text file, possibly multiline,
+containing all the flags to pass to the Erlang VM, for example
+`-name [email protected] -heart`.
+
+Building OTP releases has always been a difficult task. Until now.
diff --git a/_build/content/articles/erlanger-playbook-september-2015-update.asciidoc b/_build/content/articles/erlanger-playbook-september-2015-update.asciidoc
new file mode 100644
index 00000000..494d1156
--- /dev/null
+++ b/_build/content/articles/erlanger-playbook-september-2015-update.asciidoc
@@ -0,0 +1,25 @@
++++
+date = "2015-09-02T00:00:00+01:00"
+title = "The Erlanger Playbook September 2015 Update"
+
++++
+
+An update to The Erlanger Playbook is now available!
+
+The Erlanger Playbook is a book about software development using
+Erlang. It currently covers everything from conception and design
+to the writing of code, documentation and tests.
+
+The book is still a work in progress. Future topics will include
+refactoring, debugging and tracing, benchmarking, releases, and
+community management (for open source projects).
+
+This update fixes a number of things and adds two chapters: IOlists
+and Erlang building blocks.
+
+Learn more about link:/articles/erlanger-playbook[The Erlanger Playbook]!
+
+This is a self-published ebook. The base price is 50€. All proceeds
+will be used to allow me to work on open source full time.
+
+Thank you for helping me helping you help us all!
diff --git a/_build/content/articles/erlanger-playbook.asciidoc b/_build/content/articles/erlanger-playbook.asciidoc
new file mode 100644
index 00000000..4a67bf22
--- /dev/null
+++ b/_build/content/articles/erlanger-playbook.asciidoc
@@ -0,0 +1,69 @@
++++
+date = "2015-06-18T00:00:00+01:00"
+title = "The Erlanger Playbook"
+
++++
+
+I am proud to announce the pre-release of The Erlanger Playbook.
+
+The Erlanger Playbook is a book about software development using
+Erlang. It currently covers all areas from conception and design
+to the writing of code, documentation and tests.
+
+The book is still a work in progress. Future topics will include
+refactoring, debugging and tracing, benchmarking, releases, and
+community management (for open source projects).
+
+The following sections are currently available:
+
+* About this book; Future additions
+* _Workflow:_ Think; Write; Stay productive
+* _Documentation:_ On documentation; Tutorials; User guide; Manual
+* _Code:_ Starting a project; Version control; Project structure; Code style; Best practices; Special processes
+* _Tests:_ On testing; Success typing analysis; Manual testing; Unit testing; Functional testing
+
+Read a preview: link:/res/erlanger-preview.pdf[Special processes]
+
+The book is currently just shy of 100 pages. The final version
+of the book is planned to be between 200 and 250 pages.
+A print version of the book will be considered once the final
+version gets released. The printed book is *not* included
+in the price.
+
+This is a self-published book. The base price is 50€. All proceeds
+will be used to allow me to work on open source full time.
+
+++++
+<form action="https://www.paypal.com/cgi-bin/webscr" method="post" target="_top">
+<input type="hidden" name="cmd" value="_s-xclick">
+<input type="hidden" name="hosted_button_id" value="9M44HJCGX3GVN">
+<input type="image" src="https://www.paypalobjects.com/en_US/i/btn/btn_buynowCC_LG.gif" border="0" name="submit" alt="PayPal - The safer, easier way to pay online!">
+<img alt="" border="0" src="https://www.paypalobjects.com/fr_FR/i/scr/pixel.gif" width="1" height="1">
+</form>
+++++
+
+You are more than welcome to pay extra by using this second button.
+It allows you to set the price you want. Make sure to set it to at least
+50€ to receive the book.
+
+++++
+<form action="https://www.paypal.com/cgi-bin/webscr" method="post" target="_top">
+<input type="hidden" name="cmd" value="_s-xclick">
+<input type="hidden" name="hosted_button_id" value="BBW9TR9LBK8C2">
+<input type="image" src="https://www.paypalobjects.com/en_US/i/btn/btn_buynowCC_LG.gif" border="0" name="submit" alt="PayPal - The safer, easier way to pay online!">
+<img alt="" border="0" src="https://www.paypalobjects.com/fr_FR/i/scr/pixel.gif" width="1" height="1">
+</form>
+++++
+
+Make sure to provide a valid email address.
+
+There will be a *delay* between payment and sending of the book.
+This process is currently manual.
+
+As the book is a pre-release, feedback is more than welcome. You can
+send your comments to erlanger@ this website.
+
+The plan is to add about 20 pages every month until it is completed.
+You will receive updates to the book for free as soon as they are available.
+
+Huge thanks for your interest in buying this book!
diff --git a/_build/content/articles/farwest-funded.asciidoc b/_build/content/articles/farwest-funded.asciidoc
new file mode 100644
index 00000000..99ea3525
--- /dev/null
+++ b/_build/content/articles/farwest-funded.asciidoc
@@ -0,0 +1,37 @@
++++
+date = "2013-06-27T00:00:00+01:00"
+title = "Farwest got funded!"
+
++++
+
+This was a triumph! I'm making a note here: HUGE SUCCESS!!
+
+++++
+<iframe frameborder="0" scrolling="no" height="400px" width="236px" seamless="seamless" src="https://api.bountysource.com/user/fundraisers/83/embed"></iframe>
+++++
+
+It's hard to overstate my satisfaction. Thanks to everyone who
+made this possible.
+
+If you have backed this fundraiser, and haven't provided your
+personal details yet, please do so quickly so that your rewards
+can be sent!
+
+I am hoping that we will be able to make good use of all that
+money. The details of the expenses will be published regularly
+on the https://github.com/extend/farwest/wiki/2013-Fundraiser[2013 Fundraiser wiki page],
+giving you full disclosure as to how your money is used.
+
+It will take a little time to get things started; we are in
+summer after all! We will however act quickly to make the
+prototype easy enough to use so that the paid UI work can
+begin. This is also when user contributions will be welcome.
+
+You can see the https://github.com/extend/farwest/wiki/Roadmap[Roadmap]
+to get more information on the current plans. This document will
+get updated as time goes on so check again later to see if you
+can help!
+
+Look at me: still talking when there's open source to do!
+
+Thanks again for all your support. I really appreciate it.
diff --git a/_build/content/articles/january-2014-status.asciidoc b/_build/content/articles/january-2014-status.asciidoc
new file mode 100644
index 00000000..58ce17b3
--- /dev/null
+++ b/_build/content/articles/january-2014-status.asciidoc
@@ -0,0 +1,159 @@
++++
+date = "2014-01-07T00:00:00+01:00"
+title = "January 2014 status"
+
++++
+
+I will now be regularly writing posts about project status, plans
+and hopes for the future.
+
+Before that though, there's an important piece of news to share.
+
+Until a year ago all development was financed through consulting
+and development services. This worked alright but too much time was
+spent doing things that didn't benefit the open source projects.
+And that didn't make me happy at all. Because I like being happy
+I stopped that for the most part and spent the year figuring things
+out, experimenting and discussing with people about it.
+
+What makes me happy is answering these "what if" questions.
+Ranch and Cowboy are a direct product of that, as they originate
+from the "what if we could have a server running different protocols
+on different ports but all part of the same application?"; Erlang.mk
+is a bit different: "this works great for me, what if it could
+become the standard solution for building Erlang applications?".
+
+When I successfully answer the question, this becomes a project
+that may end up largely benefiting the Erlang community. I love
+Erlang and I love enabling people to build awesome products based
+on my projects. It's a lot more rewarding than activities like
+consulting where you only help one company at a time. And it's
+also a much better use of my time as this has a bigger impact on
+the community.
+
+The hard part is to figure out how to be able to spend 100%
+of the time on projects that you basically give away for free,
+and still be able to make a living.
+
+The immediate solution was getting work sponsored by the
+http://www.leofs.org/[LeoFS project]. LeoFS is a great
+distributed file storage that I can only recommend to anyone who
+needs to store files or large pieces of data. The sponsorship
+works pretty great, and spurred development of the SPDY code in
+Cowboy amongst other things, plus a couple upcoming projects
+done more recently and getting a final touch before release.
+
+It turns out sponsoring works great. So I'm thinking of
+expanding on it, hopefully getting enough sponsorship for
+full-time open source development. I have figured out a few
+things that can give companies an incentive to sponsor.
+
+Sponsors can _request that a particular version of Cowboy
+be maintained indefinitely_ (as long as they're sponsoring).
+This means fixes will be backported. This doesn't include
+features, although I can take requests depending on feasibility.
+
+Sponsors can _have a direct, private line of communication_,
+useful when they need help debugging or optimizing their product.
+
+Sponsors can _get their name associated with one of the
+projects_ and gain a good standing in the community thanks
+to this. They would be featured in the README of the project,
+which is viewed by hundreds of developers daily.
+
+Sponsors can _be listed on this website_. I will modify
+the front page when we get a few more sponsors; they will be
+featured below the carousel of projects.
+
+Please mailto:[email protected][contact us] if
+you are interested in sponsoring, and say how much you are willing
+to sponsor. The goal here is only to have enough money to make a
+living and attend a few conferences. There's an upper limit to the
+amount needed per year, so the more sponsors there are, the cheaper
+it becomes for everyone.
+
+The upper limit stems from the new legal entity that will replace
+the current Nine Nines. This is mostly to lower the legal costs and
+simplify the administrative stuff, allowing me to dedicate all my
+time to what's important. From your point of view it's business as
+usual.
+
+Now on to project statuses and future works.
+
+== Cowboy
+
+Cowboy is getting ready for a 1.0 release. Once multipart support
+is in, all that's left is finishing the guide, improving tests and
+finishing moving code to the cowlib project. I hope everything will
+be ready around the time R17B is released.
+
+I already dream of some API breaking changes after 1.0, which
+would essentially become 2.0 when they're done. An extensive survey
+will be set up after the 1.0 release to get more information on what
+people like and don't like about the API.
+
+And of course, when clients start implementing HTTP/2.0 then we
+will too.
+
+== Ranch
+
+Ranch is also getting close to 1.0. I am currently writing a
+test suite for upgrades. After that I would also like to write
+a chaos_monkey test suite and add a getting started chapter to the
+guide.
+
+Ranch is pretty solid otherwise, it's hard to foresee new
+features at this point.
+
+== Erlang.mk
+
+I didn't expect this project to become popular. Glad it did though.
+
+Windows support is planned, but will require GNU Make 4.
+Thankfully, it's available at least through cygwin. Make,
+Git and Erlang will be the only required dependencies
+because the rest of the external calls will be converted to
+using Guile, a Scheme implementation included since GNU Make 4.
+It is Guile that will download the needed files, magically fill
+the list of modules in the '.app' file and so on, allowing
+us to provide a truly cross-platform solution without
+losing the performance we gain from using Make.
+
+Also note that it is possible to check whether Guile
+is available, so we will be able to fall back to the current
+code for older systems.
+
+I am also thinking about adding an extra column to the package
+index, indicating the preferred tag or commit number to be used.
+This would allow us to skip the individual `dep` lines
+entirely if the information in the package index is good enough.
+And committing that file to your project would be the only thing
+needed to lock the dependencies. Of course if a `dep`
+line is specified this would instead override the file.
+
+== Alien Shaman
+
+This is the two-part project requested by the LeoFS team.
+This is essentially a "distributed bigwig". I am hoping to
+have a prototype up in a few days.
+
+Alien is the part that allows writing and enabling probes
+in your nodes. Probes send events which may get filtered before
+being forwarded to their destination. The events may be sent
+to a local process, a remote process, over UDP, TCP or SSL.
+Events may also be received by a process called a relay, which
+may be used to group or aggregate data before it is sent
+over the network, reducing the overall footprint.
+
+Shaman is the UI for it. It will ultimately be able to display
+any event as long as it's configured to do so. Events may be logs,
+numeric values displayed on graphs updated in real time, lists of
+items like processes and so on.
+
+== Feedback
+
+That's it for today! There will be another status update once
+Shaman is out. But for now I have to focus on it.
+
+As always, please send feedback on the projects, this post,
+the sponsoring idea, anything really! Thanks.
diff --git a/_build/content/articles/on-open-source.asciidoc b/_build/content/articles/on-open-source.asciidoc
new file mode 100644
index 00000000..6e700e8a
--- /dev/null
+++ b/_build/content/articles/on-open-source.asciidoc
@@ -0,0 +1,137 @@
++++
+date = "2014-09-05T00:00:00+01:00"
+title = "On open source"
+
++++
+
+Last week I read a great article
+http://videlalvaro.github.io/2014/08/on-contributing-to-opensource.html[on
+contributing to open source] by Alvaro Videla. He makes
+many great points and I am in agreement with most of it.
+This made me want to properly explain my point of view with
+regard to open source and contributions. Unlike most open
+source evangelism articles I will not talk about ideals or
+any of that crap, but rather my personal feelings and
+experience.
+
+I have been doing open source work for quite some time.
+My very first open source project was a graphics driver
+for (the very early version of) the PCSX2 emulator. That
+was more than ten years ago, and there
+http://ngemu.com/threads/gstaris-0-6.30469/[isn't
+much left to look at today]. This was followed by a
+https://github.com/extend/wee[PHP framework]
+(started long before Zend Framework was even a thing) and
+a few other small projects. None of them really took off.
+It's alright, that's pretty much the fate of most open
+source projects. You spend a lot of work and sweat and
+get very little in return from others.
+
+This sounds harsh but this is the reality of all open
+source projects. If you are thinking of building a project
+and releasing it as open source, you should be prepared
+for that. This is how most of your projects will go.
+Don't release a project as open source thinking everyone
+will pat you on the back and cheer; this won't happen. In
+fact, if your project is too small an improvement over existing
+software, many people will say you have NIH
+syndrome, regardless of the improvement you bring. So you
+should not rely on other people for your
+enjoyment of building open source software.
+
+In my case I get enjoyment from thinking about problems
+that need solving. Oftentimes the problems are already
+solved, but never mind that: I still think about them and
+sometimes come up with something I feel is better and then
+write code for it. Writing code is also fun, but not as
+fun as using my brain to imagine solutions.
+
+You don't need thousands of users to do that. So are
+users worthless to me then? No, of course not. In fact
+they are an important component: they bring me problems
+that need solving. So users are very important to me.
+But that's not the only reason.
+
+I got lucky that the Cowboy project became popular.
+And seeing it be this popular, and some of my other projects
+also do quite well, made me believe I could perhaps work
+full time on open source. If I can work full time then
+I can produce better software. What I had one hour to
+work on before I can now spend a day on, and experiment
+until I am satisfied. This is very useful because that
+means I can get it almost right from the beginning, and
+avoid the million API breaking changes that occurred
+before Cowboy 1.0 was released.
+
+To be able to work full time on open source however,
+I need money. This is a largely unspoken topic of open
+source work. The work is never free. You can download the
+product for free, but someone has to pay for the work
+itself. Life is unfortunately not free.
+
+Large projects and some lucky people have their work
+sponsored by their employers. Everyone else has to deal
+with it differently. In my case I was sponsored for a
+while by the http://leo-project.net/leofs/[LeoFS]
+project, but that ended. I also had the Farwest fundraiser,
+which was a success, although the project stalled after that.
+(Fear not, as Farwest will make a comeback as a conglomerate
+of Web development projects in the future.) After that I set
+up the http://ninenines.eu/support/[sponsoring scheme],
+which I can proudly say today brings in enough money to
+cover my food and shelter. Great!
+
+This is a start, but it's of course not enough. Life
+is a little more than food and shelter, and so I am still
+looking for sponsors. This is not a very glorious experience,
+as I am essentially looking for scraps that companies can
+throw away. Still, if a handful more companies were doing
+that, not only would I be able to live comfortably, but I
+would also be able to stop worrying about the future, as I
+could set money aside for when things get rough.
+
+A few companies giving me some scrap money so I could
+live and work independently is by far the most important
+thing anyone can do to help my projects, including Cowboy.
+Yes, they're even more important than code contributions,
+bug reports and feedback. Because this money gives me the
+time I need to handle the code contributions, bug reports
+and feedback.
+
+If Cowboy or another project is a large part of your
+product or infrastructure, then the best thing you can do
+is become a sponsor. The second best is opening tickets
+and/or providing feedback. The third best is providing
+good code contributions.
+
+I will not expand on the feedback part. Feedback is
+very important, and even just a high five or a retweet
+is already good feedback. It's not very complicated.
+
+I want to expand a little on code contributions
+however. Not long ago I ran across the term "patch bomb"
+which means dropping patches and expecting the project
+maintainers to merge them and maintain them. I receive
+a lot of patches, and often have to refuse them. Causes
+for refusal vary. Some patches only benefit the people
+who submitted them (or a very small number of people).
+Some patches are not refined enough to be included.
+Others are out of scope of the project. These are some
+of the reasons why I refuse patches. Having limited
+time and resources, I have to focus my efforts on the
+code used by the larger number of users. I have to
+prioritize patches from submitters who are reactive
+and address the issues pointed out. And I have to plainly
+refuse other patches.
+
+I believe this wraps up my thoughts on open source.
+Overall I had a great experience, the Erlang community
+being nice and understanding of the issues at hand in
+general. And if the money problem could be solved soon,
+then I would be one of the luckiest and happiest open
+source developers on Earth.
+
+Think about it the next time you see a donation button
+or a request for funds or sponsoring. You can considerably
+improve an open source developer's life with very little
+of your company's money.
diff --git a/_build/content/articles/ranch-ftp.asciidoc b/_build/content/articles/ranch-ftp.asciidoc
new file mode 100644
index 00000000..19209ccc
--- /dev/null
+++ b/_build/content/articles/ranch-ftp.asciidoc
@@ -0,0 +1,220 @@
++++
+date = "2012-11-14T00:00:00+01:00"
+title = "Build an FTP Server with Ranch in 30 Minutes"
+
++++
+
+Last week I was speaking at the
+http://www.erlang-factory.com/conference/London2012/speakers/LoicHoguin[London Erlang Factory Lite]
+where I presented a live demonstration of building an FTP server using
+http://ninenines.eu/docs/en/ranch/HEAD/README[Ranch].
+As there were no slides, you should use this article as a reference instead.
+
+The goal of this article is to showcase how to use Ranch for writing
+a network protocol implementation, how Ranch gets out of the way to let
+you write the code that matters, and the common techniques used when
+writing servers.
+
+Let's start by creating an empty project. Create a new directory and
+then open a terminal into that directory. The first step is to add Ranch
+as a dependency. Create the `rebar.config` file and add the
+following 3 lines.
+
+[source,erlang]
+----
+{deps, [
+ {ranch, ".*", {git, "git://github.com/extend/ranch.git", "master"}}
+]}.
+----
+
+This makes your application depend on the latest Ranch version available
+on the _master_ branch. This is fine for development, however when
+you start pushing your application to production you will want to revisit
+this file to hardcode the exact version you are using, to make sure you
+run the same version of dependencies in production.
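+
+When that time comes, pinning could look like the following, using
+rebar's `{tag, ...}` dependency form (the tag shown is only an
+example; use whichever release you actually depend on):
+
+[source,erlang]
+----
+{deps, [
+    %% Pin to an exact tag instead of tracking master.
+    {ranch, ".*", {git, "git://github.com/extend/ranch.git", {tag, "0.8.3"}}}
+]}.
+----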
+
+You can now fetch the dependencies.
+
+[source,bash]
+----
+$ rebar get-deps
+==> ranch_ftp (get-deps)
+Pulling ranch from {git,"git://github.com/extend/ranch.git","master"}
+Cloning into 'ranch'...
+==> ranch (get-deps)
+----
+
+This will create a 'deps/' folder containing Ranch.
+
+We don't actually need anything else to write the protocol code.
+We could make an application for it, but this isn't the purpose of this
+article so let's just move on to writing the protocol itself. Create
+the file 'ranch_ftp_protocol.erl' and open it in your favorite
+editor.
+
+[source,bash]
+$ vim ranch_ftp_protocol.erl
+
+Let's start with a blank protocol module.
+
+[source,erlang]
+----
+-module(ranch_ftp_protocol).
+-export([start_link/4, init/3]).
+
+start_link(ListenerPid, Socket, Transport, _Opts) ->
+    Pid = spawn_link(?MODULE, init, [ListenerPid, Socket, Transport]),
+    {ok, Pid}.
+
+init(ListenerPid, Socket, Transport) ->
+ io:format("Got a connection!~n"),
+ ok.
+----
+
+When Ranch receives a connection, it will call the `start_link/4`
+function with the listener's pid, socket, transport module to be used,
+and the options we define when starting the listener. We don't need options
+for the purpose of this article, so we don't pass them to the process we are
+creating.
+
+Let's open a shell and start a Ranch listener to begin accepting
+connections. We only need to call one function. You should probably open
+it in another terminal and keep it open for convenience. If you quit
+the shell you will have to repeat the commands to proceed.
+
+Also note that you need to type `c(ranch_ftp_protocol).`
+to recompile and reload the code for the protocol. You do not need to
+restart any process however.
+
+[source,bash]
+----
+$ erl -pa ebin deps/*/ebin
+Erlang R15B02 (erts-5.9.2) [source] [64-bit] [smp:4:4] [async-threads:0] [hipe] [kernel-poll:false]
+
+Eshell V5.9.2 (abort with ^G)
+----
+
+[source,erlang]
+----
+1> application:start(ranch).
+ok
+2> ranch:start_listener(my_ftp, 10,
+ ranch_tcp, [{port, 2121}],
+ ranch_ftp_protocol, []).
+{ok,<0.40.0>}
+----
+
+This starts a listener named `my_ftp` that runs your very own
+`ranch_ftp_protocol` over TCP, listening on port `2121`.
+The last argument is the options given to the protocol that we ignored
+earlier.
+
+To try your code, you can use the following command. The client
+should be able to connect, the server will print a message to the
+console, and then the client will print an error.
+
+[source,bash]
+$ ftp localhost 2121
+
+Let's move on to actually writing the protocol.
+
+Once you have created the new process and returned the pid, Ranch will
+give ownership of the socket to you. This requires a synchronization
+step though.
+
+[source,erlang]
+----
+init(ListenerPid, Socket, Transport) ->
+ ok = ranch:accept_ack(ListenerPid),
+ ok.
+----
+
+Now that you have acknowledged the new connection, you can use it safely.
+
+When an FTP server accepts a connection, it starts by sending a
+welcome message, which can be one or more lines starting with the
+code `220`. The server then waits for the client
+to authenticate the user and, if the authentication is successful,
+which it always will be for the purpose of this article, replies
+with a `230` code.
+
+[source,erlang]
+----
+init(ListenerPid, Socket, Transport) ->
+    ok = ranch:accept_ack(ListenerPid),
+    Transport:send(Socket, <<"220 My cool FTP server welcomes you!\r\n">>),
+    {ok, Data} = Transport:recv(Socket, 0, 30000),
+    auth(Socket, Transport, Data).
+
+auth(Socket, Transport, <<"USER ", Rest/bits>>) ->
+ io:format("User authenticated! ~p~n", [Rest]),
+ Transport:send(Socket, <<"230 Auth OK\r\n">>),
+ ok.
+----
+
+As you can see we don't need complex parsing code. We can simply
+match on the binary in the argument!
+
+Next we need to loop, receiving commands and optionally
+executing them, if we want our server to become useful.
+
+We will replace the `ok.` line with a call to
+the following function. The new function is recursive, each call
+receiving data from the socket and sending a response. For now
+we will send an error response for all commands the client sends.
+
+[source,erlang]
+----
+loop(Socket, Transport) ->
+ case Transport:recv(Socket, 0, 30000) of
+ {ok, Data} ->
+ handle(Socket, Transport, Data),
+ loop(Socket, Transport);
+ {error, _} ->
+ io:format("The client disconnected~n")
+ end.
+
+handle(Socket, Transport, Data) ->
+ io:format("Command received ~p~n", [Data]),
+ Transport:send(Socket, <<"500 Bad command\r\n">>).
+----
+
+With this we are almost ready to start implementing commands.
+But with code like this we might have errors if the client doesn't
+send just one command per packet, or if the packets arrive too fast,
+or if a command is split over multiple packets.
+
+To solve this, we need to use a buffer. Each time we receive data,
+we will append to the buffer, and then check if we have received a
+command fully before running it. The code could look similar to the
+following.
+
+[source,erlang]
+----
+loop(Socket, Transport, Buffer) ->
+ case Transport:recv(Socket, 0, 30000) of
+ {ok, Data} ->
+ Buffer2 = << Buffer/binary, Data/binary >>,
+ {Commands, Rest} = split(Buffer2),
+ [handle(Socket, Transport, C) || C <- Commands],
+            loop(Socket, Transport, Rest);
+ {error, _} ->
+ io:format("The client disconnected~n")
+ end.
+----
+
+The implementation of `split/1` is left as an exercise
+to the reader. You will also probably want to handle the `QUIT`
+command, which must stop any processing and close the connection.
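+
+For the impatient, here is one possible implementation, peeling
+off complete CRLF-terminated commands with `binary:split/2` and
+returning the incomplete remainder. This is just a sketch of one
+way to do it, not part of the original talk:
+
+[source,erlang]
+----
+%% Return {Commands, Rest} where Commands is the list of
+%% complete commands found in Buffer, and Rest is the
+%% trailing incomplete data to keep for the next recv.
+split(Buffer) ->
+    split(Buffer, []).
+
+split(Buffer, Acc) ->
+    case binary:split(Buffer, <<"\r\n">>) of
+        [Command, Rest] ->
+            split(Rest, [Command|Acc]);
+        [Incomplete] ->
+            {lists:reverse(Acc), Incomplete}
+    end.
+----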
+
+The attentive reader will also note that in the case of
+text-based protocols where commands are separated by line breaks,
+you can set an option using `Transport:setopts/2` and have all the
+buffering done for you, for free, by Erlang itself.
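+
+A minimal sketch of that approach (the function names here are
+mine, not from the talk): set the option once after the accept
+acknowledgement, then receive line by line with no manual buffer.
+
+[source,erlang]
+----
+%% With {packet, line}, the VM buffers incoming data and each
+%% successful recv returns exactly one line, including the LF.
+start_loop(Socket, Transport) ->
+    ok = Transport:setopts(Socket, [{packet, line}]),
+    loop_lines(Socket, Transport).
+
+loop_lines(Socket, Transport) ->
+    case Transport:recv(Socket, 0, 30000) of
+        {ok, Line} ->
+            handle(Socket, Transport, Line),
+            loop_lines(Socket, Transport);
+        {error, _} ->
+            io:format("The client disconnected~n")
+    end.
+----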
+
+As you have surely noticed by now, Ranch lets us build network
+applications while getting entirely out of the way past the initial
+setup. It lets you use the power of binary pattern matching to write
+text and binary protocol implementations in just a few lines of code.
+
+* http://www.erlang-factory.com/conference/London2012/speakers/LoicHoguin[Watch the talk]
diff --git a/_build/content/articles/the-story-so-far.asciidoc b/_build/content/articles/the-story-so-far.asciidoc
new file mode 100644
index 00000000..54bf7af9
--- /dev/null
+++ b/_build/content/articles/the-story-so-far.asciidoc
@@ -0,0 +1,250 @@
++++
+date = "2014-08-23T00:00:00+01:00"
+title = "The story so far"
+
++++
+
+As I am away from home with little to do (some call this
+a vacation) I wanted to reflect a little on the story so far,
+or how I arrived at Erlang and got to where I am now. The
+raw personal experience. It will be an article more about
+social aspects, communities and the marketing of a project than
+technical considerations. As a period piece, it will also
+allow me to reflect on the evolution of Erlang in recent
+years.
+
+Once upon a time-- Okay this isn't a fairy tale. The story
+begins with a short chapter in 2010. The year 2010 started
+with a fairly major event in my life: the US servers for the
+online game I stopped playing a few months before, but was
+still involved with through its community, were closing. OMG!
+Someone found a way to log packets and started working on a
+private server; meanwhile the JP servers were still up. And
+that's pretty much it.
+
+Fast forward a few months and it became pretty clear that
+the private server was going nowhere considering all the drama
+surrounding it-- which is actually not unusual, but it was
+more entertaining than average and the technical abilities of
+people running the project were obviously lacking so I decided
+to obtain those logged packets and look at things myself. I
+didn't want to do a private server yet, I only wanted to take
+a peek to see how things worked, and perhaps organize some
+effort to document the protocol.
+
+There was 10GB of logs. I didn't have an easy to use
+language to analyze them, and hex editors wouldn't cut it for
+most purposes, so I had to look elsewhere. This was a good
+opportunity to start learning this PHP killer I read about
+before, which also happens to feature syntax for matching
+binaries, called Erlang. To be perfectly honest I wouldn't
+have touched the logs if I didn't have the added motivation
+to play with and learn a new language.
+
+At the time it was pretty hard to learn Erlang. In my
+experience there was Joe's book (which I always recommend
+first as I believe it is the best to learn the Erlang side
+of things, though it falls a little short on OTP), and there were
+about 5 chapters of LYSE. There were a couple of other books
+I never managed to get into (sorry guys), and there were also
+a few interesting blogs, some of which I can't find anymore.
+Finally the #erlang IRC community was there but I was strictly
+lurking at the time.
+
+What a difference compared to 4 years later! (That's
+today, by the way!) Now we have more books than I can
+remember, tons of articles covering various aspects of the
+language and platform, many targeting beginners but a good
+number of them also about advanced topics. We even have a
+free online book, LYSE, with more than 30 chapters covering
+pretty much everything. Needless to say I never finished
+reading LYSE as it got written slower than I learnt.
+
+Back to 2010. I wrote a parser for the logs, and
+aggregated those results into one CSV file per packet type
+so I could open them in Gnumeric and aggregate some more,
+but manually this time, and draw conclusions on the packet
+structures. That was pretty easy. Even for a beginner.
+Anyone can go from zero to that level in a day or two.
+Then, having mastered binary pattern matching, I wanted
+to learn some more Erlang, by making this aggregation
+faster. What I had done before worked, but I wasn't going
+to wait forever to process everything sequentially. So I
+looked and found a project called `plists` (still exists,
+but not maintained AFAIK). I downloaded that project and
+replaced my `lists:` calls with `plists:` calls.
+Boom. In just a few minutes all logs were processed, and
+I had learnt something new.
+
+It is particularly interesting to note that the lack of
+a package manager or index never bothered me. Neither before
+nor after learning Erlang. My experience with package
+managers was mostly related to Ubuntu, a little Perl and
+Python, and PHP's Pear. Let's just stay polite and say it
+was always a terrible experience. So searching on the Web
+didn't feel awkward, because even if I used a tool or
+website I would have ended up doing a search or two anyway.
+This is in contrast to the package index feature in
+https://github.com/ninenines/erlang.mk[Erlang.mk],
+which is meant to simplify specifying dependencies more
+than anything: `DEPS = cowboy`. It does not
+attempt to solve any other problem, and will only attempt
+to solve one extra problem in the near future, which is
+the discovery of packages. So expect some kind of website
+listing packages soon enough.
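+
+To make this concrete, here is what a minimal Erlang.mk Makefile
+using the package index looks like (the `my_app` name is just a
+placeholder):
+
+[source,make]
+----
+# The package index resolves where cowboy lives,
+# so a single DEPS line is enough.
+PROJECT = my_app
+DEPS = cowboy
+include erlang.mk
+----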
+
+I want to use this parenthesis to also point out that at
+the time there was a very small number of projects out there,
+at least compared to today. While you sometimes hear people
+complain about lack of certain libraries, it is so much
+better now than it was before! The situation improves so
+quickly that it won't be that big an issue for much longer.
+
+Wanting to know more about that game's protocol, in the
+year 2010, I ended up starting to write more Erlang code to
+simulate a server and use the server to query the client and
+see what was happening, documenting the packets and so on.
+This eventually lead to a larger project implementing more
+and more until people got their hopes up for a revival of
+the game, all the while the now competing original server
+project died in a stream of drama and technical incompetence.
+Of course, I ended up doing what any good Internet citizen
+would do, I crushed people's hopes, but that's not important
+to our story. The important part is that before giving up
+on this project, I not only learnt a good deal of Erlang
+and a little deal of OTP (which I did not touch until 6
+months after I started with Erlang; see the paragraph
+about learning material above), but I also had an intriguing
+idea pop into my mind for what would become my greatest
+success yet.
+
+The giving up part was not easy. Having had financial
+difficulties all year 2010 and part of 2009, I resolved
+to travel back to Paris to try and make it. I ended up
+sleeping in offices for 6 months, being hosted by a shady
+person, and hearing my fair share of stories about
+the dark side of business. While there I also worked for
+another company with someone who would end up becoming
+another high profile Erlang developer. The situation
+slowly improved, I started taking part in the #erlang
+IRC discussions, giving up my lurker status and, a
+few months into 2011, started working on the Apache killer
+project: Cowboy.
+
+This is the part where I probably could have been accused of
+racism and other fun things, but I never was. And I think
+that says a lot about the Erlang community. In all my time
+writing Erlang code, I can count the number of conflicts I
+had with other people on a single hand. This is the nicest
+programming community I have ever seen, by far. And the
+humblest too. The Erlang community feels like Japan. And
+I love Japan. So I love the Erlang community. I can't say
+this enough. This is something that stayed true for all
+my time using Erlang, and despite the rise of alternative
+languages that are not Japan the Erlang community has
+remained very Japan.
+
+The first published version of Cowboy was written in
+two weeks. A little before those two weeks, during, and
+a while after, pretty much everything I said on the
+Internets was that Cowboy was going to be the greatest
+HTTP server ever, that the other servers were problematic
+(and just to be clear, Yaws was rarely if ever mentioned,
+due to being in a perceived different league of "full
+featured servers" while Cowboy was a "lightweight server"),
+and that Cowboy would be the best replacement for a Mochiweb
+or Misultin application. This, alongside a lot of time
+spent on IRC telling people to use Cowboy when they were
+asking for an HTTP server to use, probably made me sound
+very annoying. But it worked, and Cowboy started getting
+its first users, despite being only a few weeks old. Of
+course, as soon as I got my very first user, I started
+claiming Cowboy had "a lot of users".
+
+Looking back today I would definitely find myself annoying;
+that wasn't just an idle comment. For about a year,
+maybe a little more, all I ever said was that Cowboy was
+the best. This probably made me a little dumber in the
+process (as if I wasn't enough! I know). Being French, I
+sometimes would also say things quite abruptly. To stay
+polite, I probably sounded like an asshole. I learnt to
+stop being so French over time thankfully.
+
+I think what was most important to Cowboy at the time
+was three things. First, it felt fresh. It was new, had new
+ideas, tried to do things differently and followed "new" old
+best practices (the OTP way-- which was simply too obscure
+for most people at the time). Second, it had me spending
+all my time telling people to use it whenever they were
+looking for an HTTP server. Third, it had me helping people
+get started with it and guiding them every step of the way.
+Mostly because it didn't have very good documentation, but
+still, hand holding does wonders.
+
+To be able to help people every time they had a problem,
+I did not spend all my days reading IRC. Instead I simply
+made sure to be notified when someone said `cowboy`.
+The same way many people subscribe to alerts when their
+company is mentioned in the news. Nothing fancy.
+
+Time went on, Cowboy grew, or as some like to say,
+completely destroyed the competition, and many people
+eventually moved from Mochiweb and Misultin to Cowboy.
+And then Roberto Ostinelli stopped Misultin development
+and told everyone to move to Cowboy. This is the most
+humble and selfless act I have ever seen in the programming
+sphere, and I only have one thing to say about it: GG.
+Thanks for the fish. He left me with the tasks of improving
+Cowboy's examples and documentation, and with his strong belief
+that Misultin's interface was the most user friendly of all
+the servers. So I added many examples, wrote as many lines of
+documentation as we have of code, and strongly believe that
+Cowboy 2.0 will have the most user friendly interface of all
+servers. But only time will tell.
+
+With the rise of the project and the rise in the number
+of users, my previous strategy (completely incidental, by
+the way, and definitely not a well thought out plan to
+become popular) stopped working. It was taking me too much
+time. The important aspects slowly drifted. If I wanted to
+support more users, I would have to spend less time with
+each individual user. This was actually a hard problem.
+You basically have to make people understand they can't
+just come to you directly when they have a problem, they
+have to follow proper channels. It becomes less personal,
+and it might feel like you don't care about them anymore.
+You have to hurt some people's feelings at this point. It
+is quite unfortunate, and also quite difficult to do. There
+is some unwritten rule that says early adopters deserve
+more, but in the real world it never works like this. So
+I probably hurt some people's feelings at some point. But
+that's okay. Because even if you make sure to be as nice
+as possible when you tell people to go through proper
+channels from now on, some people will still get offended.
+There's nothing you can do about it.
+
+From that point onward the important points about the
+project were getting the documentation done, making sure
+people knew about the proper channels to get help and
+report issues, etc. Basically making myself less needed.
+This is quite a contrast with the first days, but I believe
+Cowboy made that transition successfully.
+
+Not only did I win time by not having to hold hands with
+everyone all the time (not that I didn't like it, but you
+know, the sweat), but I also won time thanks to the increased
+project popularity. Indeed, the more users you have, the more
+annoying guys there are telling people to use your project
+and that it's the best and everything. Which is great. At
+least, it's great if you don't pay too much attention to it.
+Sometimes people will give advice that is, in your opinion,
+bad advice. And that's okay. Don't intervene every time
+someone gives bad advice; learn to let it go. People will
+figure it out. You learn by making mistakes, after all. Use
+this extra time to make sure other people don't end up
+giving the same bad advice instead. Fix the code or the
+documentation that led to this mistake. Slowly improve the
+project and make sure it doesn't happen again.
+
+This is my story. So far, anyway.
diff --git a/_build/content/articles/tictactoe.asciidoc b/_build/content/articles/tictactoe.asciidoc
new file mode 100644
index 00000000..8aec1c57
--- /dev/null
+++ b/_build/content/articles/tictactoe.asciidoc
@@ -0,0 +1,91 @@
++++
+date = "2012-10-17T00:00:00+01:00"
+title = "Erlang Tic Tac Toe"
+
++++
+
+Everyone knows http://en.wikipedia.org/wiki/Tic-tac-toe[Tic Tac Toe],
+right?
+
+Players choose either to be the Xs or the Os, then place their symbol
+on a 3x3 board one after another, trying to create a line of 3 of them.
+
+Writing an algorithm to check for victory sounds easy, right? It's
+easily tested, considering there are only 8 possible winning rows (3 horizontal,
+3 vertical and 2 diagonal).
+
+In Erlang though, you probably wouldn't want an algorithm. Erlang has
+this cool feature called pattern matching which will allow us to completely
+avoid writing the algorithm by instead letting us match directly on the
+solutions.
+
+Let's first create a board. A board is a list of 3 rows each containing
+3 columns. It can also be thought of as a tuple containing 9 elements.
+A tuple is easier to manipulate so this is what we are going to use.
+Each position can either contain an `x`, an `o`,
+or be `undefined`.
+
+[source,erlang]
+----
+new() ->
+ {undefined, undefined, undefined,
+ undefined, undefined, undefined,
+ undefined, undefined, undefined}.
+----
+
+Now that we have a board, if we want to play, we need a function that
+will allow players to, you know, actually play their moves. Rows and
+columns are numbered 1 to 3 so we need a little math to correctly
+deduce the element's position.
+
+[source,erlang]
+----
+play(Who, X, Y, Board) ->
+ setelement((Y - 1) * 3 + X, Board, Who).
+----
+
+This function returns the board with the element modified. Of course,
+as you probably noticed, we aren't checking that the arguments are correct,
+or that the element was already set. This is left as an exercise to the
+reader.
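+
+For the curious, a validating version might look like the following
+sketch. This is my variant, not the article's: it checks the player,
+the coordinates and that the square is free, and it changes the return
+value to `{ok, Board} | {error, occupied}`.
+
+[source,erlang]
+----
+%% Hypothetical variant of play/4 with input validation.
+%% Note the return type becomes {ok, Board} | {error, occupied}.
+play(Who, X, Y, Board) when (Who =:= x orelse Who =:= o),
+        X >= 1, X =< 3, Y >= 1, Y =< 3 ->
+    Pos = (Y - 1) * 3 + X,
+    case element(Pos, Board) of
+        undefined -> {ok, setelement(Pos, Board, Who)};
+        _ -> {error, occupied}
+    end.
+----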
+
+After playing the move, we need to check whether someone won. That's
+where you'd write an algorithm, and that's where I wouldn't. Let's just
+pattern match all of them!
+
+[source,erlang]
+----
+check(Board) ->
+ case Board of
+ {x, x, x,
+ _, _, _,
+ _, _, _} -> {victory, x};
+
+ {x, _, _,
+ _, x, _,
+ _, _, x} -> {victory, x};
+
+ {x, _, _,
+ x, _, _,
+ x, _, _} -> {victory, x};
+
+ %% [snip]
+
+ _ -> ok
+ end.
+----
+
+Pattern matching allows us to simply _draw_ the solutions
+directly inside our code, and if the board matches any of them, then we
+have a victory or a draw, otherwise the game can continue.
+
+The `_` variable is special in that it always matches,
+allowing us to focus strictly on the winning row. And because it's very
+graphical, if we had messed up somewhere, we'd only need to
+take a quick glance to be sure the winning solutions are the right ones.
+
+Erlang allows us to transform algorithms into very graphical code thanks
+to its pattern matching feature, and lets us focus on doing things instead
+of writing algorithms to do things.
+
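+To see the three functions work together, here is a short sketch,
+assuming they sit together in the linked `tictactoe` module (the
+`demo/0` function is mine, not part of the article's code):
+
+[source,erlang]
+----
+%% Hypothetical helper: x plays the whole top row while o answers
+%% on the middle row, so the top-row clause of check/1 matches.
+demo() ->
+    B1 = play(x, 1, 1, new()),
+    B2 = play(o, 1, 2, B1),
+    B3 = play(x, 2, 1, B2),
+    B4 = play(o, 2, 2, B3),
+    B5 = play(x, 3, 1, B4),
+    {victory, x} = check(B5).
+----
+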
+* link:/res/tictactoe.erl[tictactoe.erl]
diff --git a/_build/content/articles/xerl-0.1-empty-modules.asciidoc b/_build/content/articles/xerl-0.1-empty-modules.asciidoc
new file mode 100644
index 00000000..b2c178b2
--- /dev/null
+++ b/_build/content/articles/xerl-0.1-empty-modules.asciidoc
@@ -0,0 +1,153 @@
++++
+date = "2013-01-30T00:00:00+01:00"
+title = "Xerl: empty modules"
+
++++
+
+Let's build a programming language. I call it Xerl: eXtended ERLang.
+It'll be an occasion for us to learn a few things, especially me.
+
+Unlike in Erlang, in this language, everything is an expression.
+This means that modules and functions are expressions, and indeed that
+you can have more than one module per file.
+
+We are just starting, so let's not get ahead of ourselves here. We'll
+begin with writing the code allowing us to compile an empty module.
+
+We will compile to Core Erlang: this is one of the many intermediate
+steps your Erlang code goes through before it becomes BEAM machine code.
+Core Erlang is a very neat language for machine generated code, and we
+will learn many things about it.
+
+Today we will only focus on compiling the following code:
+
+[source,erlang]
+mod my_module
+begin
+end
+
+Compilation will be done in a few steps. First, the source file will
+be transformed into a list of tokens by the lexer. Then, the parser will
+take that list of tokens and convert it into the AST, bringing semantic
+meaning to our representation. Finally, the code generator will transform
+this AST to Core Erlang AST, which will then be compiled.
+
+We will use _leex_ to generate the lexer. leex takes .xrl files
+and compiles them to .erl files that you can then compile to BEAM.
+The file is divided in three parts: definitions, rules and Erlang code.
+Definitions and Erlang code are obvious; rules are what concerns us.
+
+We only need two things: atoms and whitespaces. Atoms are a lowercase
+letter followed by any letter, number, _ or @. Whitespace is either a
+space, a horizontal tab, \r or \n. There exist other kinds of whitespace
+but we simply do not allow them in the Xerl language.
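+
+In a leex .xrl file, these character classes live in the Definitions
+section. Matching the description above, it might look like this (a
+sketch; the complete file linked below is authoritative):
+
+[source,erlang]
+----
+Definitions.
+
+L  = [a-z]
+A  = [a-zA-Z0-9_@]
+WS = [\s\t\r\n]
+----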
+
+Rules consist of a regular expression followed by Erlang code. The
+latter must return a token representation or the atom `skip_token`.
+
+[source,erlang]
+----
+{L}{A}* :
+ Atom = list_to_atom(TokenChars),
+ {token, case reserved_word(Atom) of
+ true -> {Atom, TokenLine};
+ false -> {atom, TokenLine, Atom}
+ end}.
+
+{WS}+ : skip_token.
+----
+
+The first rule matches an atom, which is converted to either a special
+representation for reserved words, or an atom tuple. The
+`TokenChars` variable represents the match as a string, and
+the `TokenLine` variable contains the line number.
+https://github.com/extend/xerl/blob/0.1/src/xerl_lexer.xrl[View the complete file].
+
+We obtain the following result from the lexer:
+
+[source,erlang]
+----
+[{mod,1},{atom,1,my_module},{'begin',2},{'end',3}]
+----
+
+The second step is to parse this list of tokens to add semantic meaning
+and generate what is called an _abstract syntax tree_. We will be
+using the _yecc_ parser generator for this. This time it will take
+.yrl files but the process is the same as before. The file is a little
+more complex than for the lexer, we need to define at the very least
+terminals, nonterminals and root symbols, the grammar itself, and
+optionally some Erlang code.
+
+To compile our module, we need a few things. First, everything is an
+expression. We thus need a list of expressions and individual expressions.
+We will support a single expression for now, the `mod`
+expression which defines a module. And that's it! We end up with the
+following grammar:
+
+[source,erlang]
+----
+exprs -> expr : ['$1'].
+exprs -> expr exprs : ['$1' | '$2'].
+
+expr -> module : '$1'.
+
+module -> 'mod' atom 'begin' 'end' :
+ {'mod', ?line('$1'), '$2', []}.
+----
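+
+For completeness, the grammar above also needs its declarations. A
+sketch of what they might look like (the complete file linked below
+is authoritative):
+
+[source,erlang]
+----
+Nonterminals exprs expr module.
+Terminals atom 'mod' 'begin' 'end'.
+Rootsymbol exprs.
+----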
+
+https://github.com/extend/xerl/blob/0.1/src/xerl_parser.yrl[View the complete file].
+
+We obtain the following result from the parser:
+
+[source,erlang]
+----
+[{mod,1,{atom,1,my_module},[]}]
+----
+
+We obtain a list of a single `mod` expression. Just like
+we wanted. Last step is generating the Core Erlang code from it.
+
+Code generation generally comprises several steps. We will
+discuss these in more detail later on. For now, we will focus on the
+minimum needed for successful compilation.
+
+We can use the `cerl` module to generate Core Erlang AST.
+We will simply be using functions, which allows us to avoid learning
+and keeping up to date with the internal representation.
+
+There's one important thing to do when generating Core Erlang AST
+for a module: create the `module_info/{0,1}` functions.
+Indeed, these are added to Erlang before it becomes Core Erlang, and
+so we need to replicate this ourselves. Do not be concerned however,
+as this only takes a few lines of extra code.
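+
+Using the `cerl` helpers, generating these two functions might look
+roughly like the following sketch (the `module_info_funs/1` name is
+mine; the real code is in the file linked below):
+
+[source,erlang]
+----
+%% Builds {FName, Fun} pairs for module_info/0,1, delegating to
+%% erlang:get_module_info/1,2 like the Erlang compiler does.
+module_info_funs(ModName) ->
+    M = cerl:c_atom(erlang),
+    F = cerl:c_atom(get_module_info),
+    Key = cerl:c_var('Key'),
+    [{cerl:c_fname(module_info, 0),
+      cerl:c_fun([], cerl:c_call(M, F, [cerl:c_atom(ModName)]))},
+     {cerl:c_fname(module_info, 1),
+      cerl:c_fun([Key], cerl:c_call(M, F, [cerl:c_atom(ModName), Key]))}].
+----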
+
+As you can see by
+https://github.com/extend/xerl/blob/0.1/src/xerl_codegen.erl[looking at the complete file],
+the code generator echoes the grammar we defined in the parser, and
+simply applies the appropriate Core Erlang functions for each expression.
+
+We obtain the following pretty-printed Core Erlang generated code:
+
+[source,erlang]
+----
+module 'my_module' ['module_info'/0,
+ 'module_info'/1]
+ attributes []
+'module_info'/0 =
+ fun () ->
+ call 'erlang':'get_module_info'
+ ('my_module')
+'module_info'/1 =
+ fun (Key) ->
+ call 'erlang':'get_module_info'
+ ('my_module', Key)
+end
+----
+
+For convenience I added all the steps in a `xerl:compile/1`
+function that you can use against your own .xerl files.
+
+That's it for today! We will go into more detail on each step in
+the next few articles.
+
+* https://github.com/extend/xerl/blob/0.1/[View the source]
diff --git a/_build/content/articles/xerl-0.2-two-modules.asciidoc b/_build/content/articles/xerl-0.2-two-modules.asciidoc
new file mode 100644
index 00000000..4da5322e
--- /dev/null
+++ b/_build/content/articles/xerl-0.2-two-modules.asciidoc
@@ -0,0 +1,152 @@
++++
+date = "2013-02-03T00:00:00+01:00"
+title = "Xerl: two modules"
+
++++
+
+Everything is an expression.
+
+This sentence carries profound meaning. We will invoke it many
+times over the course of these articles.
+
+If everything is an expression, then the language shouldn't have
+any problem with me defining two modules in the same source file.
+
+[source,erlang]
+----
+mod first_module
+begin
+end
+
+mod second_module
+begin
+end
+----
+
+Likewise, it shouldn't have any problem with me defining a
+module inside another module.
+
+[source,erlang]
+----
+mod out_module
+begin
+ mod in_module
+ begin
+ end
+end
+----
+
+Of course, in the context of the Erlang VM, these two snippets
+are equivalent; there is nothing preventing you from calling the
+`in_module` module from any other module. The `mod`
+instruction means a module should be created in the Erlang VM,
+with no concept of scope attached.
+
+Still we need to handle both. To do this we will add a step
+between the parser and the code generator that will walk over the
+abstract syntax tree, from here onward shortened as _AST_,
+and transform the AST by executing it where applicable.
+
+What happens when you execute a `mod` instruction?
+A module is created. Since we are compiling, that simply means
+the compiler will branch out and create a module.
+
+If everything is an expression, does that mean this will allow
+me to create modules at runtime using the same syntax? Yes, but
+let's not get ahead of ourselves yet.
+
+For now we will just iterate over the AST, and will compile
+a module for each `mod` found. Modules cannot contain
+expressions yet, so there's no need to recurse over it at this
+point. This should solve the compilation of our first snippet.
+
+The `compile/1` function becomes:
+
+[source,erlang]
+----
+compile(Filename) ->
+ io:format("Compiling ~s...~n", [Filename]),
+ {ok, Src} = file:read_file(Filename),
+ {ok, Tokens, _} = xerl_lexer:string(binary_to_list(Src)),
+ {ok, Exprs} = xerl_parser:parse(Tokens),
+ execute(Filename, Exprs, []).
+
+execute(_, [], Modules) ->
+ io:format("Done...~n"),
+ {ok, lists:reverse(Modules)};
+execute(Filename, [Expr = {mod, _, {atom, _, Name}, []}|Tail], Modules) ->
+ {ok, [Core]} = xerl_codegen:exprs([Expr]),
+ {ok, [{Name, []}]} = core_lint:module(Core),
+ io:format("~s~n", [core_pp:format(Core)]),
+ {ok, _, Beam} = compile:forms(Core,
+ [binary, from_core, return_errors, {source, Filename}]),
+ {module, Name} = code:load_binary(Name, Filename, Beam),
+ execute(Filename, Tail, [Name|Modules]).
+----
+
+Running this compiler over the first snippet yields the following
+result:
+
+[source,erlang]
+----
+Compiling test/mod_SUITE_data/two_modules.xerl...
+module 'first_module' ['module_info'/0,
+ 'module_info'/1]
+ attributes []
+'module_info'/0 =
+ fun () ->
+ call 'erlang':'get_module_info'
+ ('first_module')
+'module_info'/1 =
+ fun (Key) ->
+ call 'erlang':'get_module_info'
+ ('first_module', Key)
+end
+module 'second_module' ['module_info'/0,
+ 'module_info'/1]
+ attributes []
+'module_info'/0 =
+ fun () ->
+ call 'erlang':'get_module_info'
+ ('second_module')
+'module_info'/1 =
+ fun (Key) ->
+ call 'erlang':'get_module_info'
+ ('second_module', Key)
+end
+Done...
+{ok,[first_module,second_module]}
+----
+
+Everything looks fine. And we can check that the two modules have
+been loaded into the VM:
+
+[source,erlang]
+----
+9> m(first_module).
+Module first_module compiled: Date: February 2 2013, Time: 14.56
+Compiler options: [from_core]
+Object file: test/mod_SUITE_data/two_modules.xerl
+Exports:
+ module_info/0
+ module_info/1
+ok
+10> m(second_module).
+Module second_module compiled: Date: February 2 2013, Time: 14.56
+Compiler options: [from_core]
+Object file: test/mod_SUITE_data/two_modules.xerl
+Exports:
+ module_info/0
+ module_info/1
+ok
+----
+
+So far so good!
+
+What about the second snippet? It brings up many questions. What
+happens once a `mod` expression has been executed at
+compile time? If it's an expression then it has to have a result,
+right? Right. We are still a bit lacking with expressions for now,
+though, so let's get back to it after we add more.
+
+* https://github.com/extend/xerl/blob/0.2/[View the source]
diff --git a/_build/content/articles/xerl-0.3-atomic-expressions.asciidoc b/_build/content/articles/xerl-0.3-atomic-expressions.asciidoc
new file mode 100644
index 00000000..dae14906
--- /dev/null
+++ b/_build/content/articles/xerl-0.3-atomic-expressions.asciidoc
@@ -0,0 +1,135 @@
++++
+date = "2013-02-18T00:00:00+01:00"
+title = "Xerl: atomic expressions"
+
++++
+
+We will be adding atomic integer expressions to our language.
+These look as follow in Erlang:
+
+[source,erlang]
+42.
+
+And the result of this expression is of course 42.
+
+We will be running this expression at compile time, since we
+don't have the means to run code at runtime yet. This will of
+course result in no module being compiled, but that's OK, it will
+allow us to discuss a few important things we'll have to plan for
+later on.
+
+First, we must of course accept integers in the tokenizer.
+
+[source,erlang]
+{D}+ : {token, {integer, TokenLine, list_to_integer(TokenChars)}}.
+
+We must then accept atomic integer expressions in the parser.
+This is a simple change. The integer token is terminal so we need
+to add it to the list of terminals, and then we only need to add
+it as a possible expression.
+
+[source,erlang]
+expr -> integer : '$1'.
+
+A file containing only the number 42 (with no terminating dot)
+will give the following result when parsing it. This is incidentally
+the same result as when tokenizing.
+
+[source,erlang]
+----
+[{integer,1,42}]
+----
+
+We must then evaluate it. We're going to interpret it for now.
+Since the result of this expression is not stored in a variable,
+we are going to simply print it on the screen and discard it.
+
+[source,erlang]
+----
+execute(Filename, [{integer, _, Int}|Tail], Modules) ->
+ io:format("integer ~p~n", [Int]),
+ execute(Filename, Tail, Modules).
+----
+
+You might think by now that what we've done so far this time
+is useless. It brings up many interesting questions though.
+
+* What happens if a file contains two integers?
+* Can we live without expression separators?
+* Do we need an interpreter for the compile step?
+
+This is what happens when we create a file that contains two
+integers on two separate lines:
+
+[source,erlang]
+----
+[{integer,1,42},{integer,2,43}]
+----
+
+And on the same lines:
+
+[source,erlang]
+----
+[{integer,1,42},{integer,1,43}]
+----
+
+Does this mean we do not need separators between expressions?
+Not quite. The `+` and `-` operators are an
+example of why we can't have nice things. They are ambiguous. They
+have two different meanings: make an atomic integer positive or
+negative, or perform an addition or a subtraction between two
+integers. Without a separator you won't be able to know if the
+following snippet is one or two expressions:
+
+[source,erlang]
+42 - 12
+
+Can we use the line ending as an expression separator then?
+Some languages make whitespace significant; often the line
+ending becomes the expression separator. I do not think this
+is the best idea, it can lead to errors. For example the following
+snippet would be two expressions:
+
+[source,erlang]
+----
+Var = some_module:some_function() + some_module:other_function()
+ + another_module:another_function()
+----
+
+It is not obvious what would happen unless you are a veteran
+of the language, and so we will not go down that road. We will use
+an expression separator just like in Erlang: the comma. We will
+however allow a trailing comma to make copy pasting code easier,
+even if this means some old academic will go nuts about it
+later on. This trailing comma will be optional and simply discarded
+by the parser when encountered. We will implement this next.
+
+The question as to how we will handle running expressions
+remains. We have two choices here: we can write an interpreter,
+or we can compile the code and run it. Writing an interpreter
+would require us to do twice the work, and we are lazy, so we will
+not do that.
+
+You might already know that Erlang does not use the same code
+for compiling and for evaluating commands in the shell. The main
+reason for this is that in Erlang everything isn't an expression.
+Indeed, the compiler compiles forms which contain expressions,
+but you can't have forms in the shell.
+
+How are we going to compile the code that isn't part of a module
+then? What do we need to run at compile-time, anyway? The body of
+the file itself, of course. The body of module declarations. That's
+about it.
+
+For the file itself, we can simply compile it as a big function
+that will be executed. Then, every time we encounter a module
+declaration, we will run the compiler on its body, making its body
+essentially a big function that will be executed. The same mechanism
+will be applied when we encounter a module declaration at runtime.
+
+At runtime there's nothing else for us to do, the result of this
+operation will load all the compiled modules. At compile time we
+will also want to save them to a file. We'll see later how we can
+do that.
+
+* https://github.com/extend/xerl/blob/0.3/[View the source]
diff --git a/_build/content/articles/xerl-0.4-expression-separator.asciidoc b/_build/content/articles/xerl-0.4-expression-separator.asciidoc
new file mode 100644
index 00000000..c137cf1d
--- /dev/null
+++ b/_build/content/articles/xerl-0.4-expression-separator.asciidoc
@@ -0,0 +1,48 @@
++++
+date = "2013-03-01T00:00:00+01:00"
+title = "Xerl: expression separator"
+
++++
+
+As promised we are adding an expression separator this time.
+This will be short and easy.
+
+In the tokenizer we only need to add a line recognizing the
+comma as a valid token.
+
+[source,erlang]
+, : {token, {',', TokenLine}}.
+
+Then we need to change the following lines in the parser:
+
+[source,erlang]
+exprs -> expr : ['$1'].
+exprs -> expr exprs : ['$1' | '$2'].
+
+And add a comma between the expressions on the second line:
+
+[source,erlang]
+exprs -> expr : ['$1'].
+exprs -> expr ',' exprs : ['$1' | '$3'].
+
+That takes care of everything except the optional trailing
+comma at the end of our lists of expressions. We just need an
+additional rule to take care of this.
+
+[source,erlang]
+exprs -> expr ',' : ['$1'].
+
+That's it.
+
+Wondering why we don't have this optional trailing comma in
+Erlang considering how easy it was and the number of people
+complaining about it? Yeah, me too. But that's for someone else
+to answer.
+
+Another change I want to talk about is a simple modification
+of the compiler code to use an `#env{}` record for
+tracking state instead of passing around individual variables.
+This will be required later on when we make modules into proper
+expressions so I thought it was a good idea to anticipate.
+
+* https://github.com/extend/xerl/blob/0.4/[View the source]
diff --git a/_build/content/articles/xerl-0.5-intermediate-module.asciidoc b/_build/content/articles/xerl-0.5-intermediate-module.asciidoc
new file mode 100644
index 00000000..37f93337
--- /dev/null
+++ b/_build/content/articles/xerl-0.5-intermediate-module.asciidoc
@@ -0,0 +1,145 @@
++++
+date = "2013-03-25T00:00:00+01:00"
+title = "Xerl: intermediate module"
+
++++
+
+Today we will start the work on the intermediate module
+that will be used to run the code for the expressions found
+in our file's body, replacing our interpreter.
+
+This is what we want to have when all the work is done:
+
+----
+xerl -> tokens -> AST -> intermediate -> cerl
+----
+
+Today we will perform this work only on the atomic integer
+expression however, so we will not build any module at the end.
+We have a few more things to take care of before getting there.
+This does mean that we completely break compilation of modules
+though, so hopefully we can resolve that soon.
+
+This intermediate representation is in the form of a module
+which contains a single function: `run/0`. This function
+contains all the expressions from our Xerl source file.
+
+In the case of a Xerl source file only containing the integer
+`42`, we will obtain the following module ready to
+be executed:
+
+[source,erlang]
+----
+-module('$xerl_intermediate').
+-export([run/0]).
+
+run() ->
+ 42.
+----
+
+Running it will of course give us a result of `42`,
+the same we had when interpreting expressions.
+
+The resulting Core Erlang code looks like this:
+
+[source,erlang]
+----
+module '$xerl_intermediate' ['run'/0]
+ attributes []
+'run'/0 =
+ fun () ->
+ 42
+end
+----
+
+The nice thing about doing it like this is that other than the
+definition of the intermediate module and its `run/0`
+function, we can use the same code we are using for generating
+the final Beam file. It may also be faster than interpreting
+if you have complex modules.
+
+Of course this here only works for the simplest cases, as you
+cannot declare a module or a function inside another Erlang function.
+We will need to wrap these into function calls to the Xerl compiler
+that will take care of compiling them, making them available for
+any subsequent expression. We will also need to pass the environment
+to the `run` function to keep track of all this.
+
+This does mean that we will have different code for compiling
+`fun` and `mod` expressions when creating
+the intermediate module. But the many other expressions don't need
+any special care.
+
+Right now we've used the `'$xerl_intermediate'` atom
+for the intermediate module name because we only have one, but we
+will need to generate unique names later on when we implement
+modules this way.
+
+The attentive mind will know by now that when compiling a Xerl
+file containing one module, we will need to compile two intermediate
+modules: one for the file body, and one for the module's body. Worry
+not though, if we only detect `mod` instructions in the file
+body, we can just skip this phase.
+
+While we're at it, we'll modify our code generator to handle lists
+of expressions, which didn't actually work with integer literals
+before.
+
+We're going to use Core Erlang sequences for running the many
+expressions. Sequences work like `let`, except no value
+is actually bound. Perfect for our case, since we don't support
+binding values at this time anyway.
+
+Sequences have an argument and a body, both being Core Erlang
+expressions. The simplest way to have many expressions is to use
+a simple expression for the argument and a sequence for the rest
+of the expressions. When we encounter the last expression in the
+list, we do not create a sequence.
+
+The result is this very simple function:
+
+[source,erlang]
+----
+comp_body([Expr]) ->
+ expr(Expr);
+comp_body([Expr|Exprs]) ->
+ Arg = expr(Expr),
+ Body = comp_body(Exprs),
+ cerl:c_seq(Arg, Body).
+----
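+
+For reference, with only integer literals to support, the `expr/1`
+function used above can be as small as this (a sketch; the real one
+lives in the linked source):
+
+[source,erlang]
+----
+%% Atomic integers map directly to a Core Erlang literal.
+expr({integer, _Line, Int}) ->
+    cerl:c_int(Int).
+----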
+
+In the case of our example above, a sequence will not be created,
+we only have one expression. If we were to have `42, 43, 44`
+in our Xerl source file, we would have a result equivalent to the
+following before optimization:
+
+[source,erlang]
+----
+-module('$xerl_intermediate').
+-export([run/0]).
+
+run() ->
+ 42,
+ 43,
+ 44.
+----
+
+And the result is of course `44`.
+
+The resulting Core Erlang code looks like this:
+
+[source,erlang]
+----
+module '$xerl_intermediate' ['run'/0]
+ attributes []
+'run'/0 =
+ fun () ->
+ do 42
+ do 43
+ 44
+end
+----
+
+Feels very lisp-y, right? Yep.
+
+* https://github.com/extend/xerl/blob/0.5/[View the source]
diff --git a/_build/content/docs.asciidoc b/_build/content/docs.asciidoc
new file mode 100644
index 00000000..f22fc81c
--- /dev/null
+++ b/_build/content/docs.asciidoc
@@ -0,0 +1,28 @@
++++
+date = "2015-07-01T00:00:00+01:00"
+title = "Documentation"
+section = "docs"
+type = "docs-index"
+aliases = [
+ "/docs/en/",
+ "/docs/en/cowboy/",
+ "/docs/en/erlang.mk/",
+ "/docs/en/gun/",
+ "/docs/en/ranch/",
+ "/docs/en/cowboy/1.0/",
+	"/docs/en/cowboy/HEAD/",
+	"/docs/en/cowboy/HEAD/guide/",
+	"/docs/en/cowboy/HEAD/manual/",
+ "/docs/en/cowboy/2.0/",
+ "/docs/en/erlang.mk/1/",
+ "/docs/en/gun/1.0/",
+ "/docs/en/ranch/1.2/"
+]
++++
+
+=== Contribute
+
+Do you have examples, tutorials, videos about one or more
+of my projects? I would happily include them on this page.
+
+mailto:[email protected][Send me an email with the details].
diff --git a/_build/content/donate.asciidoc b/_build/content/donate.asciidoc
new file mode 100644
index 00000000..4ac8d4b8
--- /dev/null
+++ b/_build/content/donate.asciidoc
@@ -0,0 +1,24 @@
++++
+date = "2015-07-01T00:00:00+01:00"
+title = "Donate"
+type = "services"
++++
+
+=== Like my work? Donate!
+
+You can donate via Paypal to reward me, Loïc Hoguin, for my
+work on open source software including Cowboy and Erlang.mk.
+
+++++
+<form action="https://www.paypal.com/cgi-bin/webscr" method="post" style="display:inline">
+<input type="hidden" name="cmd" value="_donations">
+<input type="hidden" name="business" value="[email protected]">
+<input type="hidden" name="lc" value="FR">
+<input type="hidden" name="item_name" value="Loic Hoguin">
+<input type="hidden" name="item_number" value="99s">
+<input type="hidden" name="currency_code" value="EUR">
+<input type="hidden" name="bn" value="PP-DonationsBF:btn_donate_LG.gif:NonHosted">
+<input type="image" src="https://www.paypalobjects.com/en_US/i/btn/btn_donate_LG.gif" border="0" name="submit" alt="PayPal - The safer, easier way to pay online!">
+<img alt="" border="0" src="https://www.paypalobjects.com/fr_FR/i/scr/pixel.gif" width="1" height="1">
+</form>
+++++
diff --git a/_build/content/services.asciidoc b/_build/content/services.asciidoc
new file mode 100644
index 00000000..88baac57
--- /dev/null
+++ b/_build/content/services.asciidoc
@@ -0,0 +1,95 @@
++++
+date = "2015-07-01T00:00:00+01:00"
+title = "Consulting & Training"
+type = "services"
+aliases = [
+ "/training/"
+]
++++
+
+If you are interested in any of these opportunities,
+mailto:[email protected][send me an email].
+
+== Consulting
+
+You can get me, Loïc Hoguin, author of Cowboy, to help you
+solve a problem or work on a particular project.
+
+My area of expertise is Erlang; HTTP, Websocket and REST APIs;
+design and implementation of protocols; and messaging systems.
+
+I can also be helpful with testing or code reviews.
+
+I offer both hourly and daily rates:
+
+* 200€ hourly rate (remote)
+* 1000€ daily rate (remote and on-site)
+
+For remote consulting, the work can be done by phone, email,
+IRC, GitHub and/or any other platform for collaborative work.
+
+For on-site consulting, travel expenses and
+accommodation are to be paid by the customer. I will also
+ask for a higher rate if required to stay on-site for more
+than a week.
+
+Note that my expertise does not cover all areas where
+Erlang is used. My help will be limited in the areas of
+distributed databases and large distributed systems.
+
+== Sponsoring
+
+You can sponsor one of my projects.
+
+Sponsoring gives you:
+
+* a direct, private line of communication
+
+* the power to make me maintain older versions of my projects
+ (as long as they are sponsoring)
+
+* priority when adding features or fixing bugs
+
+* advertisement space on this website and in the README file
+ of the project of your choice
+
+Sponsors may choose to benefit from any of these perks.
+
+In exchange, sponsors must contribute financially. A minimum
+of 200€ per month is required; sponsors may give as much as
+they want. Payment can be monthly or one-time. Invoices are
+of course provided.
+
+== Erlang beginner training
+
+I would be happy to introduce more people to Erlang. I have
+a 1-day Erlang training readily available for consumption.
+The goal of this training is to teach the basics of Erlang
+systems and programming. It's a kind of "Getting started"
+for Erlang.
+
+You can review the link:/talks/thinking-in-erlang/thinking-in-erlang.html[training slides].
+
+This training is meant to be given to a large number of
+people interested in Erlang, as part of a public event,
+where anyone interested can come.
+
+Another important aspect of this training is that it is
+meant to be affordable. We want as many people as possible
+to learn Erlang.
+
+If you have room, think you can gather 20+ people and
+are interested in sponsoring a training session, then
+we should talk.
+
+== Custom training
+
+I can also provide custom training, tailored to your level
+and your needs. It can take the form of a class, Q&A or a
+code review/writing session. I need to know your expectations
+to prepare an appropriate training.
+
+Custom training rates are the same as consulting rates and
+the same restrictions apply.
+
+// @todo Also need the donate link.
diff --git a/_build/content/slogan.asciidoc b/_build/content/slogan.asciidoc
new file mode 100644
index 00000000..f132e064
--- /dev/null
+++ b/_build/content/slogan.asciidoc
@@ -0,0 +1,7 @@
++++
+date = "2015-07-01T00:00:00+01:00"
+title = "Slogan"
++++
+
+The Erlanger Playbook is now available! +
+link:/articles/erlanger-playbook[Buy now] — link:/services[Become a Cowboy project sponsor]
diff --git a/_build/content/talks.asciidoc b/_build/content/talks.asciidoc
new file mode 100644
index 00000000..3dc41452
--- /dev/null
+++ b/_build/content/talks.asciidoc
@@ -0,0 +1,14 @@
++++
+date = "2015-07-01T00:00:00+01:00"
+title = "Public talks"
+type = "talks"
++++
+
+=== Talk requests
+
+Organizing a conference and in need of a speaker for a talk
+about Erlang and the Web? Need an introduction to Erlang/OTP
+for your company? Looking for a cool subject for a user group
+meeting?
+
+mailto:[email protected][Send me an email with the details].