Diffstat (limited to 'lib/stdlib/doc/src')
-rw-r--r--   lib/stdlib/doc/src/io.xml            |  85
-rw-r--r--   lib/stdlib/doc/src/io_protocol.xml   | 476
-rw-r--r--   lib/stdlib/doc/src/re.xml            |   6
-rw-r--r--   lib/stdlib/doc/src/unicode.xml       |  71
-rw-r--r--   lib/stdlib/doc/src/unicode_usage.xml | 105
5 files changed, 383 insertions(+), 360 deletions(-)
diff --git a/lib/stdlib/doc/src/io.xml b/lib/stdlib/doc/src/io.xml
index ed425ce723..24cc4714d0 100644
--- a/lib/stdlib/doc/src/io.xml
+++ b/lib/stdlib/doc/src/io.xml
@@ -28,9 +28,9 @@
<rev></rev>
</header>
<module>io</module>
- <modulesummary>Standard IO Server Interface Functions</modulesummary>
+ <modulesummary>Standard I/O Server Interface Functions</modulesummary>
<description>
- <p>This module provides an interface to standard Erlang IO servers.
+ <p>This module provides an interface to standard Erlang I/O servers.
The output functions all return <c>ok</c> if they are successful,
or exit if they are not.</p>
<p>In the following description, all functions have an optional
@@ -45,10 +45,9 @@
marker="#put_chars/2">put_chars</seealso> function should be in the
<seealso marker="unicode#type-chardata"><c>unicode:chardata()</c></seealso> format. This means that programs
supplying binaries to this function need to convert them to UTF-8
- before trying to output the data on an
- <c>io_device()</c>.</p>
+ before trying to output the data on an IO device.</p>
- <p>If an io_device() is set in binary mode, the functions <seealso
+ <p>If an IO device is set in binary mode, the functions <seealso
marker="#get_chars/3">get_chars</seealso> and <seealso
marker="#get_line/2">get_line</seealso> may return binaries
instead of lists. The binaries will, as of R13A, be encoded in
@@ -68,9 +67,9 @@
<datatype>
<name name="device"/>
<desc>
- <p>Either <c>standard_io</c>, <c>standard_error</c>, a
+ <p>An IO device. Either <c>standard_io</c>, <c>standard_error</c>, a
registered name, or a pid handling IO protocols (returned from
- <seealso marker="kernel:file#open/2">file:open/2</seealso>).</p>
+ <seealso marker="kernel:file#open/2">file:open/2</seealso>).</p>
</desc>
</datatype>
<datatype>
@@ -107,11 +106,11 @@
<func>
<name name="columns" arity="0"/>
<name name="columns" arity="1"/>
- <fsummary>Get the number of columns of a device</fsummary>
+ <fsummary>Get the number of columns of an IO device</fsummary>
<desc>
<p>Retrieves the number of columns of the
<c><anno>IoDevice</anno></c> (i.e. the width of a terminal). The function
- only succeeds for terminal devices, for all other devices
+ only succeeds for terminal devices, for all other IO devices
the function returns <c>{error, enotsup}</c></p>
</desc>
</func>
@@ -120,7 +119,7 @@
<name name="put_chars" arity="2"/>
<fsummary>Write a list of characters</fsummary>
<desc>
- <p>Writes the characters of <c><anno>CharData</anno></c> to the io_server()
+ <p>Writes the characters of <c><anno>CharData</anno></c> to the I/O server
(<c><anno>IoDevice</anno></c>).</p>
</desc>
</func>
@@ -143,11 +142,11 @@
<taglist>
<tag><c><anno>Data</anno></c></tag>
<item>
- <p>The input characters. If the device supports Unicode,
+ <p>The input characters. If the IO device supports Unicode,
the data may represent codepoints larger than 255 (the
- latin1 range). If the io_server() is set to deliver
+ latin1 range). If the I/O server is set to deliver
binaries, they will be encoded in UTF-8 (regardless of if
- the device actually supports Unicode or not).</p>
+ the IO device actually supports Unicode or not).</p>
</item>
<tag><c>eof</c></tag>
<item>
@@ -172,11 +171,11 @@
<tag><c><anno>Data</anno></c></tag>
<item>
<p>The characters in the line terminated by a LF (or end of
- file). If the device supports Unicode,
+ file). If the IO device supports Unicode,
the data may represent codepoints larger than 255 (the
- latin1 range). If the io_server() is set to deliver
+ latin1 range). If the I/O server is set to deliver
binaries, they will be encoded in UTF-8 (regardless of if
- the device actually supports Unicode or not).</p>
+ the IO device actually supports Unicode or not).</p>
</item>
<tag><c>eof</c></tag>
<item>
@@ -195,7 +194,7 @@
<name name="getopts" arity="1"/>
<fsummary>Get the supported options and values from an I/O-server</fsummary>
<desc>
- <p>This function requests all available options and their current values for a specific io_device(). Example:</p>
+ <p>This function requests all available options and their current values for a specific IO device. Example:</p>
<pre>
1> <input>{ok,F} = file:open("/dev/null",[read]).</input>
{ok,&lt;0.42.0&gt;}
@@ -217,19 +216,19 @@
<name name="setopts" arity="2"/>
<fsummary>Set options</fsummary>
<desc>
- <p>Set options for the io_device() (<c><anno>IoDevice</anno></c>).</p>
+ <p>Set options for the IO device (<c><anno>IoDevice</anno></c>).</p>
<p>Possible options and values vary depending on the actual
- io_device(). For a list of supported options and their current values
- on a specific device, use the <seealso
+ IO device. For a list of supported options and their current values
+ on a specific IO device, use the <seealso
marker="#getopts/1">getopts/1</seealso> function.</p>
- <p>The options and values supported by the current OTP io_devices are:</p>
+ <p>The options and values supported by the current OTP IO devices are:</p>
<taglist>
<tag><c>binary, list or {binary, boolean()}</c></tag>
<item>
- <p>If set in binary mode (binary or {binary,true}), the io_server() sends binary data (encoded in UTF-8) as answers to the get_line, get_chars and, if possible, get_until requests (see the I/O protocol description in STDLIB User's Guide for details). The immediate effect is that <c>get_chars/2,3</c> and <c>get_line/1,2</c> return UTF-8 binaries instead of lists of chars for the affected device.</p>
- <p>By default, all io_devices in OTP are set in list mode, but the io functions can handle any of these modes and so should other, user written, modules behaving as clients to I/O-servers.</p>
+ <p>If set in binary mode (binary or {binary,true}), the I/O server sends binary data (encoded in UTF-8) as answers to the get_line, get_chars and, if possible, get_until requests (see the I/O protocol description in STDLIB User's Guide for details). The immediate effect is that <c>get_chars/2,3</c> and <c>get_line/1,2</c> return UTF-8 binaries instead of lists of chars for the affected IO device.</p>
+ <p>By default, all IO devices in OTP are set in list mode, but the io functions can handle any of these modes and so should other, user written, modules behaving as clients to I/O-servers.</p>
<p>This option is supported by the standard shell (group.erl), the 'oldshell' (user.erl) and the file I/O servers.</p>
</item>
<tag><c>{echo, boolean()}</c></tag>
@@ -261,10 +260,10 @@
</item>
<tag><c>{encoding, latin1 | unicode}</c></tag>
<item>
- <p>Specifies how characters are input or output from or to the actual device, implying that i.e. a terminal is set to handle Unicode input and output or a file is set to handle UTF-8 data encoding.</p>
- <p>The option <em>does not</em> affect how data is returned from the io-functions or how it is sent in the I/O-protocol, it only affects how the io_device() is to handle Unicode characters towards the &quot;physical&quot; device.</p>
- <p>The standard shell will be set for either unicode or latin1 encoding when the system is started. The actual encoding is set with the help of the "LANG" or "LC_CTYPE" environment variables on Unix-like system or by other means on other systems. The bottom line is that the user can input Unicode characters and the device will be in {encoding, unicode} mode if the device supports it. The mode can be changed, if the assumption of the runtime system is wrong, by setting this option.</p>
- <p>The io_device() used when Erlang is started with the "-oldshell" or "-noshell" flags is by default set to latin1 encoding, meaning that any characters beyond codepoint 255 will be escaped and that input is expected to be plain 8-bit ISO-latin-1. If the encoding is changed to Unicode, input and output from the standard file descriptors will be in UTF-8 (regardless of operating system).</p>
+ <p>Specifies how characters are input or output from or to the actual IO device, implying that e.g. a terminal is set to handle Unicode input and output or a file is set to handle UTF-8 data encoding.</p>
+ <p>The option <em>does not</em> affect how data is returned from the io-functions or how it is sent in the I/O-protocol; it only affects how the IO device is to handle Unicode characters towards the &quot;physical&quot; device.</p>
+ <p>The standard shell will be set for either unicode or latin1 encoding when the system is started. The actual encoding is set with the help of the "LANG" or "LC_CTYPE" environment variables on Unix-like systems or by other means on other systems. The bottom line is that the user can input Unicode characters and the IO device will be in {encoding, unicode} mode if the IO device supports it. The mode can be changed, if the assumption of the runtime system is wrong, by setting this option.</p>
+ <p>The IO device used when Erlang is started with the "-oldshell" or "-noshell" flags is by default set to latin1 encoding, meaning that any characters beyond codepoint 255 will be escaped and that input is expected to be plain 8-bit ISO-latin-1. If the encoding is changed to Unicode, input and output from the standard file descriptors will be in UTF-8 (regardless of operating system).</p>
<p>Files can also be set in {encoding, unicode}, meaning that data is written and read as UTF-8. More encodings are possible for files, see below.</p>
<p>{encoding, unicode | latin1} is supported by both the standard shell (group.erl including werl on windows), the 'oldshell' (user.erl) and the file I/O servers.</p>
</item>
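As a rough sketch of how the binary and encoding options combine in practice (the file name below is made up), a program can open a file as UTF-8 and put it in binary mode so that get_line returns UTF-8 binaries:

    %% Sketch: read the first line of a UTF-8 encoded file as a UTF-8 binary.
    {ok, F} = file:open("notes.txt", [read, {encoding, utf8}]),
    ok = io:setopts(F, [binary]),
    case io:get_line(F, '') of
        Line when is_binary(Line) -> io:format("first line: ~ts~n", [Line]);
        Other                     -> Other            %% eof or {error, Reason}
    end,
    file:close(F).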
@@ -380,7 +379,7 @@ ok</pre>
applicable, it is used for both the field width and precision.
The default padding character is <c>' '</c> (space).</p>
<p><c>Mod</c> is the control sequence modifier. It is either a
- single character (currently only 't', for unicode translation,
+ single character (currently only <c>t</c>, for Unicode translation,
is supported) that changes the interpretation of Data.</p>
<p>The following control sequences are available:</p>
@@ -400,9 +399,9 @@ ok</pre>
2> <input>io:fwrite("|~10.5c|~-10.5c|~5c|~n", [$a, $b, $c]).</input>
| aaaaa|bbbbb |ccccc|
ok</pre>
- <p>If the Unicode translation modifier ('t') is in effect,
+ <p>If the Unicode translation modifier (<c>t</c>) is in effect,
the integer argument can be any number representing a
- valid unicode codepoint, otherwise it should be an integer
+ valid Unicode codepoint, otherwise it should be an integer
less than or equal to 255, otherwise it is masked with 16#FF:</p>
<pre>
1> <input>io:fwrite("~tc~n",[1024]).</input>
@@ -442,7 +441,7 @@ ok</pre>
<item>
<p>Prints the argument with the <c>string</c> syntax. The
argument is, if no Unicode translation modifier is present, an
- iolist(), a binary, or an atom. If the Unicode translation modifier ('t') is in effect, the argument is unicode:chardata(), meaning that binaries are in UTF-8. The characters
+ iolist(), a binary, or an atom. If the Unicode translation modifier (<c>t</c>) is in effect, the argument is unicode:chardata(), meaning that binaries are in UTF-8. The characters
are printed without quotes. The string is first truncated
by the given precision and then padded and justified
to the given field width. The default precision is the field width.</p>
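A brief sketch of the difference the t modifier makes here, assuming the IO device is in {encoding, unicode} mode:

    %% ~s expects latin1 iodata; ~ts accepts unicode:chardata(), e.g. UTF-8 binaries.
    io:format("~s~n",  ["plain ASCII"]),
    io:format("~ts~n", [unicode:characters_to_binary("blåbär")]),
    io:format("~10ts|~n", [<<"abc">>]).    %% field width and padding work as for ~s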
@@ -601,7 +600,7 @@ ok</pre>
<tag><c>#</c></tag>
<item>
<p>Like <c>B</c>, but prints the number with an Erlang style
- '#'-separated base prefix.</p>
+ <c>#</c>-separated base prefix.</p>
<pre>
16> <input>io:fwrite("~.10#~n", [31]).</input>
10#31
@@ -651,7 +650,7 @@ ok
{shell,eval_loop,3}]}
in function io:o_request/2</pre>
<p>In this example, an attempt was made to output the single
- character '65' with the aid of the string formatting directive
+ character 65 with the aid of the string formatting directive
"~s".</p>
</desc>
</func>
@@ -682,7 +681,7 @@ ok
return suppression character. It provides a method to
specify a field which is to be omitted. <c>F</c> is the
<c>field width</c> of the input field, <c>M</c> is an optional
- translation modifier (of which 't' is the only currently
+ translation modifier (of which <c>t</c> is the only currently
supported, meaning Unicode translation) and <c>C</c>
determines the type of control sequence.</p>
@@ -708,8 +707,8 @@ ok
<tag><c>-</c></tag>
<item>
<p>An optional sign character is expected. A sign
- character '-' gives the return value <c>-1</c>. Sign
- character '+' or none gives <c>1</c>. The field width
+ character <c>-</c> gives the return value <c>-1</c>. Sign
+ character <c>+</c> or none gives <c>1</c>. The field width
parameter is ignored. Leading white-space characters
are not skipped.</p>
</item>
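An interactive usage sketch (the prompt text and the input are arbitrary):

    %% Reads two integers from the current IO device; ~d skips leading whitespace,
    %% so input such as "10 20" followed by newline gives {ok,[10,20]}.
    case io:fread("enter two integers> ", "~d ~d") of
        {ok, [A, B]}  -> io:format("sum: ~w~n", [A + B]);
        eof           -> io:format("eof~n");
        {error, What} -> io:format("read error: ~p~n", [What])
    end.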
@@ -811,11 +810,11 @@ enter><input>:</input> <input>alan</input> <input>:</input> <input>joe</in
<func>
<name name="rows" arity="0"/>
<name name="rows" arity="1"/>
- <fsummary>Get the number of rows of a device</fsummary>
+ <fsummary>Get the number of rows of an IO device</fsummary>
<desc>
<p>Retrieves the number of rows of the
<c><anno>IoDevice</anno></c> (i.e. the height of a terminal). The function
- only succeeds for terminal devices, for all other devices
+ only succeeds for terminal devices, for all other IO devices
the function returns <c>{error, enotsup}</c></p>
</desc>
</func>
@@ -832,7 +831,7 @@ enter><input>:</input> <input>alan</input> <input>:</input> <input>joe</in
is passed on as the <c>Options</c> argument of the
<c>erl_scan:tokens/4</c> function. The data is tokenized as if
it were a
- sequence of Erlang expressions until a final <c>'.'</c> is
+ sequence of Erlang expressions until a final dot (<c>.</c>) is
reached. This token is also returned. It returns:</p>
<taglist>
<tag><c>{ok, Tokens, EndLine}</c></tag>
@@ -872,7 +871,7 @@ enter><input>1.0er.</input>
<c>Options</c> argument of the <c>erl_scan:tokens/4</c>
function. The data is tokenized as if it were an
Erlang form - one of the valid Erlang expressions in an
- Erlang source file - until a final <c>'.'</c> is reached.
+ Erlang source file - until a final dot (<c>.</c>) is reached.
This last token is also returned. The return values are the
same as for <c>scan_erl_exprs/1,2,3</c> above.</p>
</desc>
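The same tokenization can be tried on a plain string with erl_scan and erl_parse, which is a convenient way to see the terminating dot token; a sketch:

    %% erl_scan:string/1 tokenizes up to and including the final dot;
    %% erl_parse:parse_exprs/1 turns the tokens into abstract expressions.
    {ok, Tokens, _EndLocation} = erl_scan:string("1 + foo(bar). "),
    {ok, [Expr]} = erl_parse:parse_exprs(Tokens),
    Expr.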
@@ -892,7 +891,7 @@ enter><input>1.0er.</input>
<c><anno>Options</anno></c> is passed on as the
<c>Options</c> argument of the <c>erl_scan:tokens/4</c>
function. The data is tokenized and parsed as if it were a
- sequence of Erlang expressions until a final '.' is reached.
+ sequence of Erlang expressions until a final dot (<c>.</c>) is reached.
It returns:</p>
<taglist>
<tag><c>{ok, ExprList, EndLine}</c></tag>
@@ -933,7 +932,7 @@ enter><input>abc("hey".</input>
<c>Options</c> argument of the <c>erl_scan:tokens/4</c>
function. The data is tokenized and parsed as if
it were an Erlang form - one of the valid Erlang expressions
- in an Erlang source file - until a final '.' is reached. It
+ in an Erlang source file - until a final dot (<c>.</c>) is reached. It
returns:</p>
<taglist>
<tag><c>{ok, AbsForm, EndLine}</c></tag>
@@ -975,7 +974,7 @@ enter><input>bar.</input>
</section>
<section>
<title>Standard Error</title>
- <p>In certain situations, especially when the standard output is redirected, access to an io_server() specific for error messages might be convenient. The io_device 'standard_error' can be used to direct output to whatever the current operating system considers a suitable device for error output. Example on a Unix-like operating system:</p>
+ <p>In certain situations, especially when the standard output is redirected, access to an I/O-server specific for error messages might be convenient. The IO device <c>standard_error</c> can be used to direct output to whatever the current operating system considers a suitable IO device for error output. Example on a Unix-like operating system:</p>
<pre>
$ <input>erl -noshell -noinput -eval 'io:format(standard_error,"Error: ~s~n",["error 11"]),'\</input>
<input>'init:stop().' > /dev/null</input>
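The same thing from within a program might look like the sketch below (log_error/2 is a made-up helper that would live in a module of your own):

    %% Diagnostics go to standard_error, so they survive redirection of
    %% standard output.
    log_error(Format, Args) ->
        io:format(standard_error, "Error: " ++ Format ++ "~n", Args).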
diff --git a/lib/stdlib/doc/src/io_protocol.xml b/lib/stdlib/doc/src/io_protocol.xml
index 0ff3d5c1ee..d36bf2042f 100644
--- a/lib/stdlib/doc/src/io_protocol.xml
+++ b/lib/stdlib/doc/src/io_protocol.xml
@@ -5,7 +5,7 @@
<header>
<copyright>
<year>1999</year>
- <year>2011</year>
+ <year>2012</year>
<holder>Ericsson AB. All Rights Reserved.</holder>
</copyright>
<legalnotice>
@@ -35,10 +35,10 @@
<p>The I/O-protocol in Erlang specifies a way for a client to communicate
-with an io_server and vice versa. The io_server is a process handling the
-requests and that performs the requested task on i.e. a device. The
+with an I/O server and vice versa. The I/O server is a process that handles
+the requests and performs the requested task on e.g. an IO device. The
client is any Erlang process wishing to read or write data from/to the
-device.</p>
+IO device.</p>
<p>The common I/O-protocol has been present in OTP since the
beginning, but has been fairly undocumented and has also somewhat
@@ -53,85 +53,85 @@ implement than the original. It can certainly be argumented that the
current protocol is too complex, but this text describes how it looks
today, not how it should have looked.</p>
-<p>The basic ideas from the original protocol still hold. The io_server
+<p>The basic ideas from the original protocol still hold. The I/O server
and client communicate with one single, rather simplistic protocol and
-no server state is ever present in the client. Any io_server can be
+no server state is ever present in the client. Any I/O server can be
used together with any client code and client code need not be aware
-of the actual device the io_server communicates with.</p>
+of the actual IO device the I/O server communicates with.</p>
<section>
-<title>Protocol basics</title>
+<title>Protocol Basics</title>
-<p>As described in Robert's paper, servers and clients communicate using
-io_request/io_reply tuples as follows:</p>
+<p>As described in Robert's paper, I/O servers and clients communicate using
+<c>io_request</c>/<c>io_reply</c> tuples as follows:</p>
<p><em>{io_request, From, ReplyAs, Request}</em><br/>
<em>{io_reply, ReplyAs, Reply}</em></p>
-<p>The client sends an io_request to the io_server and the server
-eventually sends a corresponding reply.</p>
+<p>The client sends an <c>io_request</c> tuple to the I/O server and
+the server eventually sends a corresponding <c>io_reply</c> tuple.</p>
<list type="bulleted">
-<item>From is the pid() of the client, the process which the io_server
-sends the reply to.</item>
-
-<item>ReplyAs can be any datum and is simply returned in the corresponding
-io_reply. The io-module in the Erlang standard library simply uses the pid()
-of the io_server as the ReplyAs datum, but a more complicated client
-could have several outstanding io-requests to the same server and
-would then use i.e. a reference() or something else to differentiate among
-the incoming io_reply's. The ReplyAs element should be considered
-opaque by the io_server. Note that the pid() of the server is not
-explicitly present in the io_reply. The reply can be sent from any
-process, not necessarily the actual io_server. The ReplyAs element is
-the only thing that connects one io_request with an io_reply.</item>
-
-<item>Request and Reply are described below.</item>
+<item><c>From</c> is the <c>pid()</c> of the client, the process which
+the I/O server sends the IO reply to.</item>
+
+<item><c>ReplyAs</c> can be any datum and is returned in the corresponding
+<c>io_reply</c>. The <seealso marker="stdlib:io">io</seealso> module simply uses the pid()
+of the I/O server as the <c>ReplyAs</c> datum, but a more complicated client
+could have several outstanding I/O requests to the same I/O server and
+would then use e.g. a <c>reference()</c> or something else to differentiate among
+the incoming IO replies. The <c>ReplyAs</c> element should be considered
+opaque by the I/O server. Note that the <c>pid()</c> of the I/O server is not
+explicitly present in the <c>io_reply</c> tuple. The reply can be sent from any
+process, not necessarily the actual I/O server. The <c>ReplyAs</c> element is
+the only thing that connects one I/O request with an I/O-reply.</item>
+
+<item><c>Request</c> and <c>Reply</c> are described below.</item>
</list>
-<p>When an io_server receives an io_request, it acts upon the actual
-Request part and eventually sends an io_reply with the corresponding
-Reply part.</p>
+<p>When an I/O server receives an <c>io_request</c> tuple, it acts upon the actual
+<c>Request</c> part and eventually sends an <c>io_reply</c> tuple with the corresponding
+<c>Reply</c> part.</p>
</section>
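As a concrete sketch of this round trip, a client can use a fresh reference as the opaque ReplyAs value, as mentioned above (io_request/2 is a made-up helper name):

    io_request(IoServer, Request) ->
        Ref = make_ref(),                             %% opaque ReplyAs value
        IoServer ! {io_request, self(), Ref, Request},
        receive
            {io_reply, Ref, Reply} -> Reply
        end.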
<section>
-<title>Output requests</title>
+<title>Output Requests</title>
-<p>To output characters on a device, the following Requests exist:</p>
+<p>To output characters on an IO device, the following <c>Request</c>s exist:</p>
<p>
<em>{put_chars, Encoding, Characters}</em><br/>
<em>{put_chars, Encoding, Module, Function, Args}</em>
</p>
<list type="bulleted">
-<item>Encoding is either 'unicode' or 'latin1', meaning that the
+<item><c>Encoding</c> is either <c>unicode</c> or <c>latin1</c>, meaning that the
characters are (in case of binaries) encoded as either UTF-8 or
- iso-latin-1 (pure bytes). A well behaved io_server should also
- return error if list elements contain integers > 255 when the
- Encoding is set to latin1. Note that this does not in any way tell
- how characters should be put on the actual device or how the
- io_server should handle them. Different io_servers may handle the
- characters however they want, this simply tells the io_server which
- format the data is expected to have. In the Module/Function/argument
- case, the Encoding tells which format the designated function
- produces. Note that byte-oriented data is simplest sent using latin1
- Encoding</item>
-
-<item>Characters are the data to be put on the device. If Encoding is
- latin1, this is an iolist(). If Encoding is unicode, this is an
- Erlang standard mixed unicode list (one integer in a list per
+ ISO-latin-1 (pure bytes). A well behaved I/O server should also
+ return an error if list elements contain integers > 255 when
+ <c>Encoding</c> is set to <c>latin1</c>. Note that this does not in any way tell
+ how characters should be put on the actual IO device or how the
+ I/O server should handle them. Different I/O servers may handle the
+ characters however they want, this simply tells the I/O server which
+ format the data is expected to have. In the <c>Module</c>/<c>Function</c>/<c>Args</c>
+ case, <c>Encoding</c> tells which format the designated function
+ produces. Note that byte-oriented data is most easily sent using the ISO-latin-1
+ encoding.</item>
+
+<item><c>Characters</c> is the data to be put on the IO device. If <c>Encoding</c> is
+ <c>latin1</c>, this is an <c>iolist()</c>. If <c>Encoding</c> is <c>unicode</c>, this is an
+ Erlang standard mixed Unicode list (one integer in a list per
character, characters in binaries represented as UTF-8).</item>
-<item>Module, Function, Args denotes a function which will be called to
- produce the data (like io_lib:format). Args is a list of arguments
+<item><c>Module</c>, <c>Function</c>, and <c>Args</c> denote a function which will be called to
+ produce the data (like <c>io_lib:format/2</c>). <c>Args</c> is a list of arguments
to the function. The function should produce data in the given
- Encoding. The io_server should call the function as apply(Mod, Func,
- Args) and will put the returned data on the device as if it was sent
- in a {put_chars, Encoding, Characters} request. If the function
+ <c>Encoding</c>. The I/O server should call the function as
+ <c>apply(Mod, Func, Args)</c> and will put the returned data on the IO device as if it was sent
+ in a <c>{put_chars, Encoding, Characters}</c> request. If the function
returns anything else than a binary or list or throws an exception,
an error should be sent back to the client.</item>
</list>
-<p>The server replies to the client with an io_reply where the Reply
+<p>The I/O server replies to the client with an <c>io_reply</c> tuple where the <c>Reply</c>
element is one of:</p>
<p>
<em>ok</em><br/>
@@ -139,49 +139,50 @@ element is one of:</p>
</p>
<list type="bulleted">
-<item>Error describes the error to the client, which may do whatever it
- wants with it. The Erlang io-module typically returns it as is.</item>
+<item><c>Error</c> describes the error to the client, which may do whatever
+ it wants with it. The Erlang <seealso marker="stdlib:io">io</seealso>
+ module typically returns it as is.</item>
</list>
-<p>For backward compatibility the following Requests should also be
-handled by an io_server (these messages should not be present after
+<p>For backward compatibility the following <c>Request</c>s should also be
+handled by an I/O server (these requests should not be present after
R15B of OTP):</p>
<p>
<em>{put_chars, Characters}</em><br/>
<em>{put_chars, Module, Function, Args}</em>
</p>
-<p>These should behave as {put_chars, latin1, Characters} and {put_chars,
-latin1, Module, Function, Args} respectively. </p>
+<p>These should behave as <c>{put_chars, latin1, Characters}</c> and
+<c>{put_chars, latin1, Module, Function, Args}</c> respectively. </p>
</section>
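Using the round-trip helper sketched earlier, and the fact that the calling process's group leader is an I/O server, an output request could look like this (the texts are arbitrary):

    %% Direct characters, and the Module/Function/Args form with
    %% io_lib:format/2 producing the data in unicode encoding.
    ok = io_request(group_leader(), {put_chars, unicode, "hello\n"}),
    ok = io_request(group_leader(),
                    {put_chars, unicode, io_lib, format, ["~w bottles~n", [99]]}).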
<section>
<title>Input Requests</title>
-<p>To read characters from a device, the following Requests exist:</p>
+<p>To read characters from an IO device, the following <c>Request</c>s exist:</p>
<p><em>{get_until, Encoding, Prompt, Module, Function, ExtraArgs}</em></p>
<list type="bulleted">
-<item>Encoding denotes how data is to be sent back to the client and
+<item><c>Encoding</c> denotes how data is to be sent back to the client and
what data is sent to the function denoted by
- Module/Function/ExtraArgs. If the function supplied returns data as a
+ <c>Module</c>/<c>Function</c>/<c>ExtraArgs</c>. If the function supplied returns data as a
list, the data is converted to this encoding. If however the
function supplied returns data in some other format, no conversion
- can be done and it's up to the client supplied function to return
- data in a proper way. If Encoding is latin1, lists of integers
+ can be done and it is up to the client supplied function to return
+ data in a proper way. If <c>Encoding</c> is <c>latin1</c>, lists of integers
0..255 or binaries containing plain bytes are sent back to the
- client when possible, if Encoding is unicode, lists with integers in
- the whole unicode range or binaries encoded in UTF-8 are sent to the
+ client when possible; if <c>Encoding</c> is <c>unicode</c>, lists with integers in
+ the whole Unicode range or binaries encoded in UTF-8 are sent to the
client. The user supplied function will always see lists of integers, never
- binaries, but the list may contain numbers > 255 if the Encoding is
- 'unicode'.</item>
+ binaries, but the list may contain numbers > 255 if the <c>Encoding</c> is
+ <c>unicode</c>.</item>
-<item>Prompt is a list of characters (not mixed, no binaries) or an atom()
- to be output as a prompt for input on the device. The Prompt is
- often ignored by the io_server and a Prompt set to '' should always
- be ignored (and result in nothing being written to the device).</item>
+<item><c>Prompt</c> is a list of characters (not mixed, no binaries) or an atom
+ to be output as a prompt for input on the IO device. <c>Prompt</c> is
+ often ignored by the I/O server and if set to <c>''</c> it should always
+ be ignored (and result in nothing being written to the IO device).</item>
-<item><p>Module, Function, ExtraArgs denotes a function and arguments to
+<item><p><c>Module</c>, <c>Function</c>, and <c>ExtraArgs</c> denote a function and arguments to
determine when enough data is written. The function should take two
additional arguments, the last state, and a list of characters. The
function should return one of:</p>
@@ -189,23 +190,23 @@ latin1, Module, Function, Args} respectively. </p>
<em>{done, Result, RestChars}</em><br/>
<em>{more, Continuation}</em>
</p>
- <p>The Result can be any Erlang term, but if it is a list(), the
- io_server may convert it to a binary() of appropriate format before
- returning it to the client, if the server is set in binary mode (see
+ <p>The <c>Result</c> can be any Erlang term, but if it is a <c>list()</c>, the
+ I/O server may convert it to a <c>binary()</c> of appropriate format before
+ returning it to the client, if the I/O server is set in binary mode (see
below).</p>
- <p>The function will be called with the data the io_server finds on
- its device, returning {done, Result, RestChars} when enough data is
- read (in which case Result is sent to the client and RestChars are
- kept in the io_server as a buffer for subsequent input) or {more,
- Continuation}, indicating that more characters are needed to
- complete the request. The Continuation will be sent as the state in
+ <p>The function will be called with the data the I/O server finds on
+ its IO device, returning <c>{done, Result, RestChars}</c> when enough data is
+ read (in which case <c>Result</c> is sent to the client and <c>RestChars</c> is
+ kept in the I/O server as a buffer for subsequent input) or
+ <c>{more, Continuation}</c>, indicating that more characters are needed to
+ complete the request. The <c>Continuation</c> will be sent as the state in
subsequent calls to the function when more characters are
available. When no more characters are available, the function
- shall return {done,eof,Rest}.
+ shall return <c>{done, eof, Rest}</c>.
The initial state is the empty list and the data when an
- end of file is reached on the device is the atom 'eof'. An emulation
- of the get_line request could be (inefficiently) implemented using
+ end of file is reached on the IO device is the atom <c>eof</c>. An emulation
+ of the <c>get_line</c> request could be (inefficiently) implemented using
the following functions:</p>
<code>
-module(demo).
@@ -214,7 +215,9 @@ latin1, Module, Function, Args} respectively. </p>
until_newline(_ThisFar,eof,_MyStopCharacter) -&gt;
{done,eof,[]};
until_newline(ThisFar,CharList,MyStopCharacter) -&gt;
- case lists:splitwith(fun(X) -&gt; X =/= MyStopCharacter end, CharList) of
+ case
+ lists:splitwith(fun(X) -&gt; X =/= MyStopCharacter end, CharList)
+ of
{L,[]} -&gt;
{more,ThisFar++L};
{L2,[MyStopCharacter|Rest]} -&gt;
@@ -222,45 +225,47 @@ until_newline(ThisFar,CharList,MyStopCharacter) -&gt;
end.
get_line(IoServer) -&gt;
- IoServer ! {io_request, self(), IoServer, {get_until, unicode, '',
- ?MODULE, until_newline, [$\n]}},
+ IoServer ! {io_request,
+ self(),
+ IoServer,
+ {get_until, unicode, '', ?MODULE, until_newline, [$\n]}},
receive
{io_reply, IoServer, Data} -&gt;
Data
end.
</code>
- <p>Note especially that the last element in the Request tuple ([$\n])
+ <p>Note especially that the last element in the <c>Request</c> tuple (<c>[$\n]</c>)
is appended to the argument list when the function is called. The
function should be called like
- apply(Module, Function, [ State, Data | ExtraArgs ]) by the io_server</p>
+ <c>apply(Module, Function, [ State, Data | ExtraArgs ])</c> by the I/O server.</p>
</item>
</list>
-<p>A defined number of characters is requested using this Request:</p>
+<p>A fixed number of characters is requested using this <c>Request</c>:</p>
<p>
<em>{get_chars, Encoding, Prompt, N}</em>
</p>
<list type="bulleted">
-<item>Encoding and Prompt as for get_until.</item>
+<item><c>Encoding</c> and <c>Prompt</c> as for <c>get_until</c>.</item>
-<item>N is the number of characters to be read from the device.</item>
+<item><c>N</c> is the number of characters to be read from the IO device.</item>
</list>
-<p>A single line (like in the example above) is requested with this Request:</p>
+<p>A single line (like in the example above) is requested with this <c>Request</c>:</p>
<p>
<em>{get_line, Encoding, Prompt}</em>
</p>
<list type="bulleted">
-<item>Encoding and prompt as above.</item>
+<item><c>Encoding</c> and <c>Prompt</c> as above.</item>
</list>
-<p>Obviously, get_chars and get_line could be implemented with the
-get_until request (and indeed was originally), but demands for
+<p>Obviously, the <c>get_chars</c> and <c>get_line</c> requests could be implemented with the
+<c>get_until</c> request (and indeed they were originally), but demands for
efficiency has made these additions necessary.</p>
-<p>The server replies to the client with an io_reply where the Reply
+<p>The I/O server replies to the client with an <c>io_reply</c> tuple where the <c>Reply</c>
element is one of:</p>
<p>
<em>Data</em><br/>
@@ -269,16 +274,17 @@ element is one of:</p>
</p>
<list type="bulleted">
-<item>Data is the characters read, in either list or binary form
- (depending on the io_server mode, see below).</item>
-<item>Error describes the error to the client, which may do whatever it
- wants with it. The Erlang io-module typically returns it as is.</item>
-<item>eof is returned when input end is reached and no more data is
+<item><c>Data</c> is the characters read, in either list or binary form
+ (depending on the I/O server mode, see below).</item>
+<item><c>Error</c> describes the error to the client, which may do whatever it
+ wants with it. The Erlang <seealso marker="stdlib:io">io</seealso>
+ module typically returns it as is.</item>
+<item><c>eof</c> is returned when input end is reached and no more data is
available to the client process.</item>
</list>
-<p>For backward compatibility the following Requests should also be
-handled by an io_server (these messages should not be present after
+<p>For backward compatibility the following <c>Request</c>s should also be
+handled by an I/O server (these requests should not be present after
R15B of OTP):</p>
<p>
@@ -287,30 +293,30 @@ R15B of OTP):</p>
<em>{get_line, Prompt}</em><br/>
</p>
-<p>These should behave as {get_until, latin1, Prompt, Module, Function,
-ExtraArgs}, {get_chars, latin1, Prompt, N} and {get_line, latin1,
-Prompt} respectively.</p>
+<p>These should behave as <c>{get_until, latin1, Prompt, Module, Function,
+ExtraArgs}</c>, <c>{get_chars, latin1, Prompt, N}</c> and <c>{get_line, latin1,
+Prompt}</c> respectively.</p>
</section>
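With the same helper as above, the dedicated input requests can be exercised like this (the prompts are arbitrary):

    Line  = io_request(group_leader(), {get_line,  unicode, "line> "}),
    Chars = io_request(group_leader(), {get_chars, unicode, "chars> ", 4}),
    %% Each result is the characters read (list or binary), eof, or {error, Error}.
    {Line, Chars}.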
<section>
-<title>I/O-server modes</title>
-
-<p>Demands for efficiency when reading data from an io_server has not
-only lead to the addition of the get_line and get_chars requests, but
-has also added the concept of io_server options. No options are
-mandatory to implement, but all io_servers in the Erlang standard
-libraries honor the 'binary' option, which allows the Data in the
-io_reply to be binary instead of in list form <em>when possible</em>.
+<title>I/O-server Modes</title>
+
+<p>Demands for efficiency when reading data from an I/O server have not
+only led to the addition of the <c>get_line</c> and <c>get_chars</c> requests, but
+have also added the concept of I/O server options. No options are
+mandatory to implement, but all I/O servers in the Erlang standard
+libraries honor the <c>binary</c> option, which allows the <c>Data</c> element of the
+<c>io_reply</c> tuple to be a binary instead of a list <em>when possible</em>.
If the data is sent as a binary, Unicode data will be sent in the
-standard Erlang unicode
-format, i.e. UTF-8 (note that the function in get_until still gets
-list data regardless of the io_server mode).</p>
+standard Erlang Unicode
+format, i.e. UTF-8 (note that the function of the <c>get_until</c> request still gets
+list data regardless of the I/O server mode).</p>
-<p>Note that i.e. the <c>get_until</c> request allows for a function with the data specified as always being a list. Also the return value data from such a function can be of any type (as is indeed the case when an io:fread request is sent to an io_server). The client has to be prepared for data received as answers to those requests to be in a variety of forms, but the server should convert the results to binaries whenever possible (i.e. when the function supplied to get_until actually returns a list). The example shown later in this text does just that.</p>
+<p>Note that e.g. the <c>get_until</c> request allows for a function with the data specified as always being a list. Also the return value data from such a function can be of any type (as is indeed the case when an <c>io:fread</c> request is sent to an I/O server). The client has to be prepared for data received as answers to those requests to be in a variety of forms, but the I/O server should convert the results to binaries whenever possible (i.e. when the function supplied to <c>get_until</c> actually returns a list). The example shown later in this text does just that.</p>
<p>An I/O-server in binary mode will affect the data sent to the client,
so that it has to be able to handle binary data. For convenience, it
-is possible to set and retrieve the modes of an io_server using the
-following I/O-requests:</p>
+is possible to set and retrieve the modes of an I/O server using the
+following I/O requests:</p>
<p>
<em>{setopts, Opts}</em>
@@ -318,72 +324,72 @@ following I/O-requests:</p>
<list type="bulleted">
-<item>Opts is a list of options in the format recognized by proplists (and
- of course by the io_server itself).</item>
+<item><c>Opts</c> is a list of options in the format recognized by <seealso marker="stdlib:proplists">proplists</seealso> (and
+ of course by the I/O server itself).</item>
</list>
-<p>As an example, the io_server for the interactive shell (in group.erl)
+<p>As an example, the I/O server for the interactive shell (in <c>group.erl</c>)
understands the following options:</p>
<p>
-<em>{binary, bool()} (or 'binary'/'list')</em><br/>
-<em>{echo, bool()}</em><br/>
+<em>{binary, boolean()}</em> (or <em>binary</em>/<em>list</em>)<br/>
+<em>{echo, boolean()}</em><br/>
<em>{expand_fun, fun()}</em><br/>
-<em>{encoding, 'unicode'/'latin1'} (or 'unicode'/'latin1')</em>
+<em>{encoding, unicode/latin1}</em> (or <em>unicode</em>/<em>latin1</em>)
</p>
-<p>- of which the 'binary' and 'encoding' options are common for all
-io_servers in OTP, while 'echo' and 'expand' is valid only for this
-io_server. It's worth noting that the 'unicode' option notifies how
-characters are actually put on the physical device, i.e. if the
-terminal per se is unicode aware, it does not affect how characters
+<p>- of which the <c>binary</c> and <c>encoding</c> options are common for all
+I/O servers in OTP, while <c>echo</c> and <c>expand</c> are valid only for this
+I/O server. It is worth noting that the <c>unicode</c> option notifies how
+characters are actually put on the physical IO device, i.e. if the
+terminal per se is Unicode aware, it does not affect how characters
are sent in the I/O-protocol, where each request contains encoding
information for the provided or returned data.</p>
-<p>The server should send one of the following as Reply:</p>
+<p>The I/O server should send one of the following as <c>Reply</c>:</p>
<p>
<em>ok</em><br/>
<em>{error, Error}</em>
</p>
-<p>An error (preferably enotsup) is to be expected if the option is
-not supported by the io_server (like if an 'echo' option is sent in a
-setopt Request to a plain file).</p>
+<p>An error (preferably <c>enotsup</c>) is to be expected if the option is
+not supported by the I/O server (like if an <c>echo</c> option is sent in a
+<c>setopts</c> request to a plain file).</p>
-<p>To retrieve options, this message is used:</p>
+<p>To retrieve options, this request is used:</p>
<p>
<em>getopts</em>
</p>
-<p>The 'getopts' message requests a complete list of all options
-supported by the io_server as well as their current values.</p>
+<p>The <c>getopts</c> request asks for a complete list of all options
+supported by the I/O server as well as their current values.</p>
-<p>The server replies:</p>
+<p>The I/O server replies:</p>
<p>
<em>OptList</em><br/>
-<em>{error,Error}</em>
+<em>{error, Error}</em>
</p>
<list type="bulleted">
-<item>OptList is a list of tuples {Option, Value} where Option is always
+<item><c>OptList</c> is a list of tuples <c>{Option, Value}</c> where <c>Option</c> is always
an atom.</item>
</list>
</section>
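A sketch of the mode handling in terms of raw requests, again via the helper above; io:setopts/2 and io:getopts/1 issue these requests for you:

    SetReply = io_request(group_leader(), {setopts, [binary]}),
    %% SetReply is ok, or {error, Error} if an option is not supported.
    Opts = io_request(group_leader(), getopts),
    %% Opts is a list of {Option, Value} tuples, e.g. [{binary,true}, ...].
    {SetReply, Opts}.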
<section>
-<title>Multiple I/O requests</title>
+<title>Multiple I/O Requests</title>
-<p>The Request element can in itself contain several Requests by using
+<p>The <c>Request</c> element can in itself contain several <c>Request</c>s by using
the following format:</p>
<p>
<em>{requests, Requests}</em>
</p>
<list type="bulleted">
-<item>Requests is a list of valid Request tuples for the protocol, they
+<item><c>Requests</c> is a list of valid <c>Request</c> tuples for the protocol; they
shall be executed in the order in which they appear in the list and
the execution should continue until one of the requests result in an
error or the list is consumed. The result of the last request is
sent back to the client.</item>
</list>
-<p>The server can for a list of requests send any of the valid results in
+<p>The I/O server can for a list of requests send any of the valid results in
the reply:</p>
<p>
@@ -395,7 +401,7 @@ the reply:</p>
<p>- depending on the actual requests in the list.</p>
</section>
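For example (a sketch, still using the helper from before), two writes can be combined into one message:

    %% The reply is the result of the last request in the list, here ok.
    io_request(group_leader(),
               {requests, [{put_chars, unicode, "first line\n"},
                           {put_chars, unicode, "second line\n"}]}).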
<section>
-<title>Optional I/O-requests</title>
+<title>Optional I/O Requests</title>
<p>The following I/O request is optional to implement and a client
should be prepared for an error return:</p>
@@ -403,47 +409,47 @@ should be prepared for an error return:</p>
<em>{get_geometry, Geometry}</em>
</p>
<list type="bulleted">
-<item>Geometry is either the atom 'rows' or the atom 'columns'.</item>
+<item><c>Geometry</c> is either the atom <c>rows</c> or the atom <c>columns</c>.</item>
</list>
-<p>The server should send the Reply as:</p>
+<p>The I/O server should send the <c>Reply</c> as:</p>
<p>
<em>{ok, N}</em><br/>
<em>{error, Error}</em>
</p>
<list type="bulleted">
-<item>N is the number of character rows or columns the device has, if
- applicable to the device the io_server handles, otherwise {error,
- enotsup} is a good answer.</item>
+<item><c>N</c> is the number of character rows or columns the IO device has, if
+ applicable to the IO device the I/O server handles, otherwise <c>{error,
+ enotsup}</c> is a good answer.</item>
</list>
</section>
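In practice this is the request behind io:columns/0,1 and io:rows/0,1; a tolerant caller might look like this sketch:

    %% Fall back to a default width when the I/O server has no terminal geometry.
    case io:columns() of
        {ok, N}          -> N;
        {error, enotsup} -> 80
    end.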
<section>
-<title>Unimplemented request types:</title>
+<title>Unimplemented Request Types</title>
-<p>If an io_server encounters a request it does not recognize (i.e. the
-io_request tuple is in the expected format, but the actual Request is
-unknown), the server should send a valid reply with the error tuple:</p>
+<p>If an I/O server encounters a request it does not recognize (i.e. the
+<c>io_request</c> tuple is in the expected format, but the actual <c>Request</c> is
+unknown), the I/O server should send a valid reply with the error tuple:</p>
<p>
<em>{error, request}</em>
</p>
-<p>This makes it possible to extend the protocol with optional messages
+<p>This makes it possible to extend the protocol with optional requests
and for the clients to be somewhat backwards compatible.</p>
</section>
<section>
-<title>An annotated and working example io_server:</title>
+<title>An Annotated and Working Example I/O Server</title>
-<p>An io_server is any process capable of handling the protocol. There is
-no generic io_server behavior, but could well be. The framework is
+<p>An I/O server is any process capable of handling the I/O protocol. There is
+no generic I/O server behavior, but there could well be one. The framework is
simple enough, a process handling incoming requests, usually both
-io_requests and other device-specific requests (for i.e. positioning ,
+I/O-requests and other IO device-specific requests (e.g. positioning,
closing etc.).</p>
-<p>Our example io_server stores characters in an ets table, making up a
+<p>Our example I/O server stores characters in an ETS table, making up a
fairly crude ram-file (it is probably not useful, but working).</p>
<p>The module begins with the usual directives, a function to start the
-server and a main loop handling the requests:</p>
+I/O server and a main loop handling the requests:</p>
<code>
-module(ets_io_server).
@@ -486,18 +492,19 @@ loop(State) -&gt;
</code>
<p>The main loop receives messages from the client (which might be using
-the io-module to send requests). For each request the function
-request/2 is called and a reply is eventually sent using the reply/3
+the <seealso marker="stdlib:io">io</seealso> module to send requests).
+For each request the function
+<c>request/2</c> is called and a reply is eventually sent using the <c>reply/3</c>
function.</p>
-<p>The &quot;private&quot; message {From, rewind} results in the
+<p>The &quot;private&quot; message <c>{From, rewind}</c> results in the
current position in the pseudo-file to be reset to 0 (the beginning of
-the &quot;file&quot;). This is a typical example of device-specific
+the &quot;file&quot;). This is a typical example of IO device-specific
messages not being part of the I/O-protocol. It is usually a bad idea
-to embed such private messages in io_request tuples, as that might be
+to embed such private messages in <c>io_request</c> tuples, as that might be
confusing to the reader.</p>
-<p>Let's look at the reply function first...</p>
+<p>Let us look at the reply function first...</p>
<code>
@@ -506,8 +513,8 @@ reply(From, ReplyAs, Reply) -&gt;
</code>
-<p>Simple enough, it sends the io_reply tuple back to the client,
-providing the ReplyAs element received in the request along with the
+<p>Simple enough, it sends the <c>io_reply</c> tuple back to the client,
+providing the <c>ReplyAs</c> element received in the request along with the
result of the request, as described above.</p>
<p>Now look at the different requests we need to handle. First the
@@ -525,18 +532,18 @@ request({put_chars, Encoding, Module, Function, Args}, State) -&gt;
end;
</code>
-<p>The Encoding tells us how the characters in the message are
+<p>The <c>Encoding</c> tells us how the characters in the request are
represented. We want to store the characters as lists in the
-ets-table, so we convert them to lists using the
-unicode:characters_to_list/2 function. The conversion function
-conveniently accepts the encoding types unicode or latin1, so we can
-use the Encoding parameter directly.</p>
+ETS table, so we convert them to lists using the
+<seealso marker="stdlib:unicode#characters_to_list/2"><c>unicode:characters_to_list/2</c></seealso> function. The conversion function
+conveniently accepts the encoding types <c>unicode</c> or <c>latin1</c>, so we can
+use <c>Encoding</c> directly.</p>
-<p>When Module, Function and Arguments are provided, we simply apply it
+<p>When <c>Module</c>, <c>Function</c> and <c>Args</c> are provided, we simply apply them
and do the same thing with the result as if the data was provided
directly.</p>
-<p>Let's handle the requests for retrieving data too:</p>
+<p>Let us handle the requests for retrieving data too:</p>
<code>
request({get_until, Encoding, _Prompt, M, F, As}, State) -&gt;
@@ -550,11 +557,11 @@ request({get_line, Encoding, _Prompt}, State) -&gt;
</code>
<p>Here we have cheated a little by more or less only implementing
-get_until and using internal helpers to implement get_chars and
-get_line. In production code, this might be to inefficient, but that
+<c>get_until</c> and using internal helpers to implement <c>get_chars</c> and
+<c>get_line</c>. In production code, this might be too inefficient, but that
of course depends on the frequency of the different requests. Before
-we start actually implementing the functions put_chars/2 and
-get_until/5, lets look into the few remaining requests:</p>
+we start actually implementing the functions <c>put_chars/2</c> and
+<c>get_until/5</c>, let us look into the few remaining requests:</p>
<code>
request({get_geometry,_}, State) -&gt;
@@ -567,18 +574,18 @@ request({requests, Reqs}, State) -&gt;
multi_request(Reqs, {ok, ok, State});
</code>
-<p>The get_geometry request has no meaning for this io_server, so the
-reply will be {error, enotsup}. The only option we handle is the
-binary/list option, which is done in separate functions.</p>
+<p>The <c>get_geometry</c> request has no meaning for this I/O server, so the
+reply will be <c>{error, enotsup}</c>. The only option we handle is the
+<c>binary</c>/<c>list</c> option, which is done in separate functions.</p>
-<p>The multi-request tag (requests) is handled in a separate loop
+<p>The multi-request tag (<c>requests</c>) is handled in a separate loop
function applying the requests in the list one after another,
returning the last result.</p>
-<p>What's left is to handle backward compatibility and the file-module
+<p>What is left is to handle backward compatibility and the <seealso marker="kernel:file">file</seealso> module
(which uses the old requests until backward compatibility with pre-R13
-nodes is no longer needed). Note that the io_server will not work with
-a simple file:write if these are not added:</p>
+nodes is no longer needed). Note that the I/O server will not work with
+a simple <c>file:write/2</c> if these are not added:</p>
<code>
request({put_chars,Chars}, State) -&gt;
@@ -593,7 +600,7 @@ request({get_until, Prompt,M,F,As}, State) -&gt;
request({get_until,latin1,Prompt,M,F,As}, State);
</code>
-<p>Ok, what's left now is to return {error, request} if the request is
+<p>OK, what is left now is to return <c>{error, request}</c> if the request is
not recognized:</p>
<code>
@@ -601,7 +608,7 @@ request(_Other, State) -&gt;
{error, {error, request}, State}.
</code>
-<p>Let's move further and actually handle the different requests, first
+<p>Let us move further and actually handle the different requests, first
the fairly generic multi-request type:</p>
<code>
@@ -615,10 +622,10 @@ multi_request([], Result) -&gt;
<p>We loop through the requests one at the time, stopping when we either
encounter an error or the list is exhausted. The last return value is
-sent back to the client (it's first returned to the main loop and then
-sent back by the function io_reply).</p>
+sent back to the client (it is first returned to the main loop and then
+sent back by the <c>reply/3</c> function).</p>
-<p>The getopt and setopt requests are also simple to handle, we just
+<p>The <c>getopts</c> and <c>setopts</c> requests are also simple to handle, we just
change or read our state record:</p>
<code>
@@ -656,24 +663,24 @@ getopts(#state{mode=M} = S) -&gt;
end}],S}.
</code>
-<p>As a convention, all io_servers handle both {setopts, [binary]},
-{setopts, [list]} and {setopts,[{binary, bool()}]}, hence the trick
-with proplists:substitute_negations/2 and proplists:unfold/1. If
-invalid options are sent to us, we send {error,enotsup} back to the
+<p>As a convention, all I/O servers handle both <c>{setopts, [binary]}</c>,
+<c>{setopts, [list]}</c> and <c>{setopts,[{binary, boolean()}]}</c>, hence the trick
+with <c>proplists:substitute_negations/2</c> and <c>proplists:unfold/1</c>. If
+invalid options are sent to us, we send <c>{error, enotsup}</c> back to the
client.</p>
-<p>The getopts request should return a list of {Option, Value} tuples,
+<p>The <c>getopts</c> request should return a list of <c>{Option, Value}</c> tuples,
which has the twofold function of providing both the current values
-and the available options of this io_server. We have only one option,
+and the available options of this I/O server. We have only one option,
and hence return that.</p>
-<p>So far our io_server has been fairly generic (except for the rewind
-request handled in the main loop and the creation of an ets table).
-Most io_servers contain code similar to what's above.</p>
+<p>So far our I/O server has been fairly generic (except for the <c>rewind</c>
+request handled in the main loop and the creation of an ETS table).
+Most I/O servers contain code similar to what is shown above.</p>
<p>To make the example runnable, we now start implementing the actual
-reading and writing of the data to/from the ets-table. First the
-put_chars function:</p>
+reading and writing of the data to/from the ETS table. First the
+<c>put_chars/2</c> function:</p>
<code>
put_chars(Chars, #state{table = T, position = P} = State) -&gt;
@@ -686,10 +693,10 @@ put_chars(Chars, #state{table = T, position = P} = State) -&gt;
<p>We already have the data as (Unicode) lists and therefore just split
the list in runs of a predefined size and put each run in the
table at the current position (and forward). The functions
-split_data/3 and apply_update/2 are implemented below.</p>
+<c>split_data/3</c> and <c>apply_update/2</c> are implemented below.</p>
-<p>Now we want to read data from the table. The get_until function reads
-data and applies the function until it says it's done. The result is
+<p>Now we want to read data from the table. The <c>get_until/5</c> function reads
+data and applies the function until it says it is done. The result is
sent back to the client:</p>
<code>
@@ -700,11 +707,12 @@ get_until(Encoding, Mod, Func, As,
if
M =:= binary -&gt;
{ok,
- unicode:characters_to_binary(Data,unicode,Encoding),
+ unicode:characters_to_binary(Data, unicode, Encoding),
State#state{position = NewP}};
true -&gt;
case check(Encoding,
- unicode:characters_to_list(Data, unicode)) of
+ unicode:characters_to_list(Data, unicode))
+ of
{error, _} = E -&gt;
{error, E, State};
List -&gt;
@@ -730,24 +738,24 @@ get_loop(M,F,A,T,P,C) -&gt;
end.
</code>
-<p>Here we also handle the mode (binary or list) that can be set by
-the setopts request. By default, all OTP io_servers send data back to
-the client as lists, but switching mode to binary might increase
-efficiency if the server handles it in an appropriate way. The
-implementation of get_until is hard to get efficient as the supplied
-function is defined to take lists as arguments, but get_chars and
-get_line can be optimized for binary mode. This example does not
+<p>Here we also handle the mode (<c>binary</c> or <c>list</c>) that can be set by
+the <c>setopts</c> request. By default, all OTP I/O servers send data back to
+the client as lists, but switching mode to <c>binary</c> might increase
+efficiency if the I/O server handles it in an appropriate way. The
+implementation of <c>get_until</c> is hard to get efficient as the supplied
+function is defined to take lists as arguments, but <c>get_chars</c> and
+<c>get_line</c> can be optimized for binary mode. This example does not
optimize anything however. It is important though that the returned
data is of the right type depending on the options set, so we convert
the lists to binaries in the correct encoding <em>if possible</em>
-before returning. The function supplied in the get_until request may,
+before returning. The function supplied in the <c>get_until</c> request tuple may,
as its final result return anything, so only functions actually
returning lists can get them converted to binaries. If the request
-contained the encoding tag unicode, the lists can contain all unicode
+contained the encoding tag <c>unicode</c>, the lists can contain all Unicode
codepoints and the binaries should be in UTF-8, if the encoding tag
-was latin1, the client should only get characters in the range
-0..255. The function check/2 takes care of not returning arbitrary
-unicode codepoints in lists if the encoding was given as latin1. If
+was <c>latin1</c>, the client should only get characters in the range
+0..255. The function <c>check/2</c> takes care of not returning arbitrary
+Unicode codepoints in lists if the encoding was given as <c>latin1</c>. If
the function did not return a list, the check cannot be performed and
the result will be that of the supplied function untouched.</p>
@@ -768,13 +776,13 @@ check(latin1, List) -&gt;
end.
</code>
-<p>The function check takes care of providing an error tuple if unicode
+<p>The function <c>check/2</c> takes care of providing an error tuple if Unicode
codepoints above 255 is to be returned if the client requested
latin1.</p>
-<p>The two functions until_newline/3 and until_enough/3 are helpers used
-together with the get_until function to implement get_chars and
-get_line (inefficiently):</p>
+<p>The two functions <c>until_newline/3</c> and <c>until_enough/3</c> are helpers used
+together with the <c>get_until/5</c> function to implement <c>get_chars</c> and
+<c>get_line</c> (inefficiently):</p>
<code>
until_newline([],eof,_MyStopCharacter) -&gt;
@@ -782,7 +790,9 @@ until_newline([],eof,_MyStopCharacter) -&gt;
until_newline(ThisFar,eof,_MyStopCharacter) -&gt;
{done,ThisFar,[]};
until_newline(ThisFar,CharList,MyStopCharacter) -&gt;
- case lists:splitwith(fun(X) -&gt; X =/= MyStopCharacter end, CharList) of
+ case
+ lists:splitwith(fun(X) -&gt; X =/= MyStopCharacter end, CharList)
+ of
{L,[]} -&gt;
{more,ThisFar++L};
{L2,[MyStopCharacter|Rest]} -&gt;
@@ -802,10 +812,10 @@ until_enough(ThisFar,CharList,_N) -&gt;
</code>
<p>As can be seen, the functions above are just the type of functions
-that should be provided in get_until requests.</p>
+that should be provided in <c>get_until</c> requests.</p>
<p>Now we only need to read and write the table in an appropriate way to
-complete the server:</p>
+complete the I/O server:</p>
<code>
get(P,Tab) -&gt;
@@ -847,13 +857,13 @@ apply_update(Table, {Row, Col, List}) -&gt;
end.
</code>
-<p>The table is read or written in chunks of ?CHARS_PER_REC, overwriting
+<p>The table is read or written in chunks of <c>?CHARS_PER_REC</c>, overwriting
when necessary. The implementation is obviously not efficient; it is
merely a working example.</p>
<p>This concludes the example. It is fully runnable and you can read or
-write to the io_server by using i.e. the io_module or even the file
-module. It's as simple as that to implement a fully fledged io_server
+write to the I/O server by using, for example, the <seealso marker="stdlib:io">io</seealso> module or even the <seealso marker="kernel:file">file</seealso>
+module. It is as simple as that to implement a fully fledged I/O server
in Erlang.</p>
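<p>For instance, reading and writing through the <c>io</c> module could look
like this (the module and start function names below are placeholders for
however the example server is actually started):</p>
<code>
%% Start the example I/O server and talk to it through the io module.
{ok, Pid} = example_io_server:start_link(),
ok = io:put_chars(Pid, "Hello world!"),
ok = io:format(Pid, "~ts~n", [[1071, 1080]]),
Line = io:get_line(Pid, '').
</code>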
</section>
</chapter>
diff --git a/lib/stdlib/doc/src/re.xml b/lib/stdlib/doc/src/re.xml
index 2211bfb925..71a6e34513 100644
--- a/lib/stdlib/doc/src/re.xml
+++ b/lib/stdlib/doc/src/re.xml
@@ -96,12 +96,12 @@
subjects during the program's lifetime. Compiling once and
executing many times is far more efficient than compiling each
time one wants to match.</p>
- <p>When the unicode option is given, the regular expression should be given as a valid unicode <c>charlist()</c>, otherwise as any valid <c>iodata()</c>.</p>
+ <p>When the unicode option is given, the regular expression should be given as a valid Unicode <c>charlist()</c>, otherwise as any valid <c>iodata()</c>.</p>
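  <p>For example (a sketch; the codepoints are written numerically so
  that the snippet does not depend on the encoding of the source):</p>
  <code>
%% Compile once with the unicode option, then run the same pattern
%% against several Unicode charlist() subjects.
{ok, MP} = re:compile([1071, $+], [unicode]),   % the pattern "Я+"
{match, _} = re:run([$a, 1071, 1071, $b], MP),
nomatch = re:run("ab", MP).
  </code>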
<p><marker id="compile_options"/>The options have the following meanings:</p>
<taglist>
<tag><c>unicode</c></tag>
- <item>The regular expression is given as a unicode <c>charlist()</c> and the resulting regular expression code is to be run against a valid unicode <c>charlist()</c> subject.</item>
+ <item>The regular expression is given as a Unicode <c>charlist()</c> and the resulting regular expression code is to be run against a valid Unicode <c>charlist()</c> subject.</item>
<tag><c>anchored</c></tag>
<item>The pattern is forced to be "anchored", that is, it is constrained to match only at the first matching point in the string that is being searched (the "subject string"). This effect can also be achieved by appropriate constructs in the pattern itself.</item>
<tag><c>caseless</c></tag>
@@ -478,7 +478,7 @@ This option makes it possible to include comments inside complicated patterns. N
<p>Replaces the matched part of the <c><anno>Subject</anno></c> string with the contents of <c><anno>Replacement</anno></c>.</p>
<p>The permissible options are the same as for <c>re:run/3</c>, except that the <c>capture</c> option is not allowed.
Instead a <c>{return, <anno>ReturnType</anno>}</c> is present. The default return type is <c>iodata</c>, constructed in a
- way to minimize copying. The <c>iodata</c> result can be used directly in many i/o-operations. If a flat <c>list()</c> is
+ way to minimize copying. The <c>iodata</c> result can be used directly in many I/O-operations. If a flat <c>list()</c> is
desired, specify <c>{return, list}</c> and if a binary is preferred, specify <c>{return, binary}</c>.</p>
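    <p>For example (a small sketch of the different return types):</p>
    <code>
"abcXef" = re:replace("abcdef", "d", "X", [{return, list}]),
&lt;&lt;"abcXef"&gt;&gt; = re:replace("abcdef", "d", "X", [{return, binary}]),
%% The default iodata result is best passed directly to I/O functions:
ok = io:put_chars(re:replace("abcdef", "d", "X")).
    </code>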
<p>As in the <c>re:run/3</c> function, an <c>mp()</c> compiled
diff --git a/lib/stdlib/doc/src/unicode.xml b/lib/stdlib/doc/src/unicode.xml
index 1f6cbaccd7..d235f3e180 100644
--- a/lib/stdlib/doc/src/unicode.xml
+++ b/lib/stdlib/doc/src/unicode.xml
@@ -32,9 +32,9 @@
<module>unicode</module>
<modulesummary>Functions for converting Unicode characters</modulesummary>
<description>
- <p>This module contains functions for converting between different character representations. Basically it converts between iso-latin-1 characters and Unicode ditto, but it can also convert between different Unicode encodings (like UTF-8, UTF-16 and UTF-32).</p>
+ <p>This module contains functions for converting between different character representations. Basically it converts between ISO-latin-1 characters and Unicode ditto, but it can also convert between different Unicode encodings (like UTF-8, UTF-16 and UTF-32).</p>
<p>The default Unicode encoding in Erlang is in binaries UTF-8, which is also the format in which built in functions and libraries in OTP expect to find binary Unicode data. In lists, Unicode data is encoded as integers, each integer representing one character and encoded simply as the Unicode codepoint for the character.</p>
- <p>Other Unicode encodings than integers representing codepoints or UTF-8 in binaries are referred to as &quot;external encodings&quot;. The iso-latin-1 encoding is in binaries and lists referred to as latin1-encoding.</p>
+ <p>Other Unicode encodings than integers representing codepoints or UTF-8 in binaries are referred to as &quot;external encodings&quot;. The ISO-latin-1 encoding is in binaries and lists referred to as latin1-encoding.</p>
<p>It is recommended to only use external encodings for communication with external entities where this is required. When working inside the Erlang/OTP environment, it is recommended to keep binaries in UTF-8 when representing Unicode characters. Latin1 encoding is supported both for backward compatibility and for communication with external entities not supporting Unicode character sets.</p>
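    <p>For example, converting between the list representation and the default
    UTF-8 binary representation (an illustration only):</p>
    <code>
%% [229,228,246] is "åäö" as a list of Unicode (and latin1) codepoints.
&lt;&lt;195,165,195,164,195,182&gt;&gt; = unicode:characters_to_binary([229,228,246]),
[229,228,246] = unicode:characters_to_list(&lt;&lt;195,165,195,164,195,182&gt;&gt;).
    </code>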
</description>
@@ -48,13 +48,13 @@
<datatype>
<name name="unicode_binary"/>
<desc>
- <p>A binary() with characters encoded in the UTF-8 coding standard.</p>
+ <p>A <c>binary()</c> with characters encoded in the UTF-8 coding standard.</p>
</desc>
</datatype>
<datatype>
<name name="unicode_char"/>
<desc>
- <p>An integer() representing a valid unicode codepoint.</p>
+ <p>An <c>integer()</c> representing a valid Unicode codepoint.</p>
</desc>
</datatype>
<datatype>
@@ -63,7 +63,7 @@
<datatype>
<name name="charlist"/>
<desc>
- <p>A unicode_binary is allowed as the tail of the list.</p>
+ <p>A <c>unicode_binary()</c> is allowed as the tail of the list.</p>
</desc>
</datatype>
<datatype>
@@ -85,7 +85,7 @@
</datatype>
<datatype>
<name name="latin1_binary"/>
- <desc><p>A <c>binary()</c> with characters coded in iso-latin-1.</p>
+ <desc><p>A <c>binary()</c> with characters coded in ISO-latin-1.</p>
</desc>
</datatype>
<datatype>
@@ -110,7 +110,9 @@
<name name="bom_to_encoding" arity="1"/>
<fsummary>Identify UTF byte order marks in a binary.</fsummary>
<type name="endian"/>
- <type_desc variable="Bin">A binary() of byte_size 4 or more.</type_desc>
+ <type_desc variable="Bin">
+ A <c>binary()</c> such that <c>byte_size(<anno>Bin</anno>) >= 4</c>.
+ </type_desc>
<desc>
<p>Check for a UTF byte order mark (BOM) in the beginning of a
@@ -126,7 +128,7 @@
<name name="characters_to_list" arity="1"/>
<fsummary>Convert a collection of characters to list of Unicode characters</fsummary>
<desc>
- <p>Same as characters_to_list(<anno>Data</anno>,unicode).</p>
+ <p>Same as <c>characters_to_list(<anno>Data</anno>, unicode)</c>.</p>
</desc>
</func>
<func>
@@ -134,8 +136,8 @@
<fsummary>Convert a collection of characters to list of Unicode characters</fsummary>
<desc>
- <p>This function converts a possibly deep list of integers and
- binaries into a list of integers representing unicode
+ <p>Converts a possibly deep list of integers and
+ binaries into a list of integers representing Unicode
characters. The binaries in the input may have characters
encoded as latin1 (0 - 255, one character per byte), in which
case the <c><anno>InEncoding</anno></c> parameter should be given as
@@ -148,18 +150,18 @@
<p>If <c><anno>InEncoding</anno></c> is <c>latin1</c>, the <c><anno>Data</anno></c> parameter
corresponds to the <c>iodata()</c> type, but for <c>unicode</c>,
the <c><anno>Data</anno></c> parameter can contain integers greater than 255
- (unicode characters beyond the iso-latin-1 range), which would
+ (Unicode characters beyond the ISO-latin-1 range), which would
make it invalid as <c>iodata()</c>.</p>
<p>The purpose of the function is mainly to be able to convert
- combinations of unicode characters into a pure unicode
+ combinations of Unicode characters into a pure Unicode
string in list representation for further processing. For
writing the data to an external entity, the reverse function
<seealso
- marker="#characters_to_binary/3">characters_to_binary/3</seealso>
+ marker="#characters_to_binary/3"><c>characters_to_binary/3</c></seealso>
comes in handy.</p>
- <p>The option <c>unicode</c> is an alias for <c>utf8</c>, as this is the
+ <p>The option <c>unicode</c> is an alias for <c>utf8</c>, as this is the
preferred encoding for Unicode characters in
binaries. <c>utf16</c> is an alias for <c>{utf16,big}</c> and
<c>utf32</c> is an alias for <c>{utf32,big}</c>. The <c>big</c>
@@ -167,7 +169,7 @@
encoding.</p>
<p>If for some reason, the data cannot be converted, either
- because of illegal unicode/latin1 characters in the list, or
+ because of illegal Unicode/latin1 characters in the list, or
because of invalid UTF encoding in any binaries, an error
tuple is returned. The error tuple contains the tag
<c>error</c>, a list representing the characters that could be
@@ -176,7 +178,7 @@
last part is mostly for debugging as it still constitutes a
possibly deep and/or mixed list, not necessarily of the same
depth as the original data. The error occurs when traversing the
- list and whatever's left to decode is simply returned as is.</p>
+ list and whatever is left to decode is simply returned as is.</p>
<p>However, if the input <c><anno>Data</anno></c> is a pure binary, the third
part of the error tuple is guaranteed to be a binary as
@@ -191,7 +193,7 @@
of a Unicode type, an error occurs whenever an integer
<list type="bulleted">
<item>greater than <c>16#10FFFF</c>
- (the maximum unicode character),</item>
+ (the maximum Unicode character),</item>
<item>in the range <c>16#D800</c> to <c>16#DFFF</c>
(invalid range reserved for UTF-16 surrogate pairs)</item>
</list>
@@ -205,14 +207,14 @@
(like the upper
bits of the bytes being wrong), the bytes are decoded to a
too large number, the bytes are decoded to a code-point in the
- invalid unicode
- range or encoding is &quot;overlong&quot;, meaning that a
+ invalid Unicode
+ range, or encoding is &quot;overlong&quot;, meaning that a
number should have been encoded in fewer bytes. The
case of a truncated UTF is handled specially, see the
paragraph about incomplete binaries below. If
<c><anno>InEncoding</anno></c> is <c>latin1</c>, binaries are always valid
as long as they contain whole bytes,
- as each byte falls into the valid iso-latin-1 range.</item>
+ as each byte falls into the valid ISO-latin-1 range.</item>
</list>
@@ -250,7 +252,7 @@
ever be decoded.</p>
<p>If any parameters are of the wrong type, the list structure
- is invalid (a number as tail) or the binaries does not contain
+ is invalid (a number as tail) or the binaries do not contain
whole bytes (bit-strings), a <c>badarg</c> exception is
thrown.</p>
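      <p>A few calls illustrating the cases above (a sketch; the remainder in
      the error tuple is matched loosely):</p>
      <code>
%% Interpreted as latin1, every byte is a valid character.
[229,228,246] = unicode:characters_to_list(&lt;&lt;229,228,246&gt;&gt;, latin1),
%% The same bytes are not valid UTF-8, so decoding as unicode fails.
{error, _Converted, _Rest} =
    unicode:characters_to_list(&lt;&lt;229,228,246&gt;&gt;, unicode),
%% Mixed deep input: integers are codepoints, binaries are decoded.
[1024,97,98] = unicode:characters_to_list([1024, &lt;&lt;"ab"&gt;&gt;], unicode).
      </code>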
@@ -258,28 +260,27 @@
</func>
<func>
<name name="characters_to_binary" arity="1"/>
- <fsummary>Convert a collection of characters to an UTF-8 binary</fsummary>
+ <fsummary>Convert a collection of characters to a UTF-8 binary</fsummary>
<desc>
- <p>Same as characters_to_binary(Data, unicode, unicode).</p>
+ <p>Same as <c>characters_to_binary(<anno>Data</anno>, unicode, unicode)</c>.</p>
</desc>
</func>
<func>
<name name="characters_to_binary" arity="2"/>
- <fsummary>Convert a collection of characters to an UTF-8 binary</fsummary>
+ <fsummary>Convert a collection of characters to a UTF-8 binary</fsummary>
<desc>
- <p>Same as characters_to_binary(<anno>Data</anno>, <anno>InEncoding</anno>, unicode).</p>
+ <p>Same as <c>characters_to_binary(<anno>Data</anno>, <anno>InEncoding</anno>, unicode)</c>.</p>
</desc>
</func>
<func>
<name name="characters_to_binary" arity="3"/>
- <fsummary>Convert a collection of characters to an UTF-8 binary</fsummary>
+ <fsummary>Convert a collection of characters to a UTF-8 binary</fsummary>
<desc>
- <p>This function behaves as <seealso
- marker="#characters_to_list/2">
- characters_to_list/2</seealso>, but produces an binary
- instead of a unicode list. The
+ <p>Behaves as <seealso marker="#characters_to_list/2">
+      <c>characters_to_list/2</c></seealso>, but produces a binary
+ instead of a Unicode list. The
<c><anno>InEncoding</anno></c> defines how input is to be interpreted if
binaries are present in the <c>Data</c>, while
<c><anno>OutEncoding</anno></c> defines in what format output is to be
@@ -294,7 +295,7 @@
<p>Errors and exceptions occur as in <seealso
marker="#characters_to_list/2">
- characters_to_list/2</seealso>, but the second element
+ <c>characters_to_list/2</c></seealso>, but the second element
in the <c>error</c> or
<c>incomplete</c> tuple will be a <c>binary()</c> and not a
<c>list()</c>.</p>
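      <p>For example, the same characters encoded into a few different output
      encodings (a sketch):</p>
      <code>
Chars = [229,228,246],   %% "åäö"
&lt;&lt;229,228,246&gt;&gt; = unicode:characters_to_binary(Chars, unicode, latin1),
&lt;&lt;195,165,195,164,195,182&gt;&gt; = unicode:characters_to_binary(Chars, unicode, utf8),
&lt;&lt;0,229,0,228,0,246&gt;&gt; = unicode:characters_to_binary(Chars, unicode, {utf16,big}).
      </code>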
@@ -304,16 +305,18 @@
<func>
<name name="encoding_to_bom" arity="1"/>
<fsummary>Create a binary UTF byte order mark from encoding.</fsummary>
- <type_desc variable="Bin">A binary() of byte_size 4 or more.</type_desc>
+ <type_desc variable="Bin">
+ A <c>binary()</c> such that <c>byte_size(<anno>Bin</anno>) >= 4</c>.
+ </type_desc>
<desc>
- <p>Create an UTF byte order mark (BOM) as a binary from the
+ <p>Create a UTF byte order mark (BOM) as a binary from the
supplied <c><anno>InEncoding</anno></c>. The BOM is, if supported at all,
expected to be placed first in UTF encoded files or
messages.</p>
<p>The function returns <c>&lt;&lt;&gt;&gt;</c> for the
- <c>latin1</c> encoding, there is no BOM for ISO-latin-1.</p>
+ <c>latin1</c> encoding as there is no BOM for ISO-latin-1.</p>
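      <p>For example, writing a BOM followed by UTF-8 encoded text (a sketch;
      the file name is arbitrary):</p>
      <code>
&lt;&lt;239,187,191&gt;&gt; = unicode:encoding_to_bom(utf8),
&lt;&lt;&gt;&gt; = unicode:encoding_to_bom(latin1),
ok = file:write_file("example.txt",
                     [unicode:encoding_to_bom(utf8),
                      unicode:characters_to_binary("text")]).
      </code>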
<p>It can be noted that the BOM for UTF-8 is seldom used, and it
is really not a <em>byte order</em> mark. There are obviously no
diff --git a/lib/stdlib/doc/src/unicode_usage.xml b/lib/stdlib/doc/src/unicode_usage.xml
index e93d49b24a..6131a7c6d1 100644
--- a/lib/stdlib/doc/src/unicode_usage.xml
+++ b/lib/stdlib/doc/src/unicode_usage.xml
@@ -33,10 +33,10 @@
<file>unicode_usage.xml</file>
</header>
<p>Implementing support for Unicode character sets is an ongoing process. The Erlang Enhancement Proposal (EEP) 10 outlines the basics of Unicode support and also specifies a default encoding in binaries that all Unicode-aware modules should handle in the future.</p>
-<p>The functionality described in EEP10 is implemented in Erlang/OTP as of R13A, but that's by no means the end of it. More functionality will be needed in the future and more OTP-libraries might need updating to cope with Unicode data.</p>
+<p>The functionality described in EEP10 is implemented in Erlang/OTP as of R13A, but that is by no means the end of it. More functionality will be needed in the future and more OTP-libraries might need updating to cope with Unicode data.</p>
<p>This guide outlines the current Unicode support and gives a couple of recipes for working with Unicode data.</p>
<section>
-<title>What Unicode is</title>
+<title>What Unicode Is</title>
<p>Unicode is a standard defining codepoints (numbers) for all known, living or dead, scripts. In principle, every known symbol used in any language has a Unicode codepoint.</p>
<p>Unicode codepoints are defined and published by the <em>Unicode Consortium</em>, which is a non profit organization.</p>
<p>Support for Unicode is increasing throughout the world of computing, as the benefits of one common character set are overwhelming when programs are used in a global environment.</p>
@@ -56,19 +56,20 @@
<p>Additionally, the codepoint 16#FEFF is used for byte order marks (BOM's) and use of that character is not encouraged in other contexts than that. It actually is valid though, as the character "ZWNBS" (Zero Width Non Breaking Space). BOM's are used to identify encodings and byte order for programs where such parameters are not known in advance. Byte order marks are more seldom used than one could expect, but their use is becoming more widely spread as they provide the means for programs to make educated guesses about the Unicode format of a certain file.</p>
</section>
<section>
-<title>Standard Unicode representation in Erlang</title>
+<title>Standard Unicode Representation in Erlang</title>
<p>In Erlang, strings are actually lists of integers. A string is defined to be encoded in the ISO-latin-1 (ISO8859-1) character set, which is, codepoint by codepoint, a sub-range of the Unicode character set.</p>
<p>The standard list encoding for strings is therefore easily extendible to cope with the whole Unicode range: A Unicode string in Erlang is simply a list containing integers, each integer being a valid Unicode codepoint and representing one character in the Unicode character set.</p>
-<p>Regular Erlang strings in ISO-latin-1 are a subset of their Unicode strings.</p>
+<p>Regular Erlang strings in ISO-latin-1 are a subset of Unicode
+strings.</p>
-<p>Binaries on the other hand are more troublesome. For performance reasons, programs often store textual data in binaries instead of lists, mainly because they are more compact (one byte per character instead of two words per character, as is the case with lists). Using erlang:list_to_binary/1, a regular Erlang string can be converted into a binary, effectively using the ISO-latin-1 encoding in the binary - one byte per character. This is very convenient for those regular Erlang strings, but cannot be done for Unicode lists.</p>
+<p>Binaries on the other hand are more troublesome. For performance reasons, programs often store textual data in binaries instead of lists, mainly because they are more compact (one byte per character instead of two words per character, as is the case with lists). Using <c>erlang:list_to_binary/1</c>, a regular Erlang string can be converted into a binary, effectively using the ISO-latin-1 encoding in the binary - one byte per character. This is very convenient for those regular Erlang strings, but cannot be done for Unicode lists.</p>
<p>As the UTF-8 encoding is widely spread and provides the most compact storage, it is selected as the standard encoding of Unicode characters in binaries for Erlang.</p>
<p>The standard binary encoding is used whenever a library function in Erlang should cope with Unicode data in binaries, but is of course not enforced when communicating externally. Functions and bit-syntax exist to encode and decode both UTF-8, UTF-16 and UTF-32 in binaries. Library functions dealing with binaries and Unicode in general, however, only deal with the default encoding.</p>
-<p>Character data may be combined from several sources, sometimes available in a mix of strings and binaries. Erlang has for long had the concept of iodata or iolists, where binaries and lists can be combined to represent a sequence of bytes. In the same way, the Unicode aware modules often allow for combinations of binaries and lists where the binaries have characters encoded in UTF-8 and the lists contain such binaries or numbers representing Unicode codepoints:</p>
+<p>Character data may be combined from several sources, sometimes available in a mix of strings and binaries. Erlang has for long had the concept of <c>iodata</c> or <c>iolists</c>, where binaries and lists can be combined to represent a sequence of bytes. In the same way, the Unicode aware modules often allow for combinations of binaries and lists where the binaries have characters encoded in UTF-8 and the lists contain such binaries or numbers representing Unicode codepoints:</p>
<code type="none">
unicode_binary() = binary() with characters encoded in UTF-8 coding standard
-unicode_char() = integer() >= 0 representing valid unicode codepoint
+unicode_char() = integer() >= 0 representing valid Unicode codepoint
chardata() = charlist() | unicode_binary()
@@ -76,16 +77,18 @@ charlist() = [unicode_char() | unicode_binary() | charlist()]
a unicode_binary is allowed as the tail of the list</code>
<p>The module <c>unicode</c> in STDLIB even supports similar mixes with binaries containing other encodings than UTF-8, but that is a special case to allow for conversions to and from external data:</p>
<code type="none">
-external_unicode_binary() = binary() with characters coded in a user specified Unicode
- encoding other than UTF-8 (UTF-16 or UTF-32)
+external_unicode_binary() = binary() with characters coded in
+ a user specified Unicode encoding other than UTF-8 (UTF-16 or UTF-32)
external_chardata() = external_charlist() | external_unicode_binary()
-external_charlist() = [unicode_char() | external_unicode_binary() | external_charlist()]
+external_charlist() = [unicode_char() |
+ external_unicode_binary() |
+ external_charlist()]
an external_unicode_binary() is allowed as the tail of the list</code>
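<p>Such a mix can, as an illustration, be converted like this (a sketch):</p>
<code>
%% UTF-16 (big endian) bytes for "abc", mixed with a plain codepoint,
%% converted to a Unicode list.
[97,98,99,1024] =
    unicode:characters_to_list([&lt;&lt;0,97,0,98,0,99&gt;&gt;, 1024], {utf16,big}).
</code>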
</section>
<section>
-<title>Basic language support for Unicode</title>
+<title>Basic Language Support for Unicode</title>
<p><marker id="unicode_in_erlang"/>As of Erlang/OTP R16 Erlang can be
written in ISO-latin-1 or Unicode (UTF-8). The details on how to state
the encoding of an Erlang source file can be found in <seealso
@@ -107,54 +110,56 @@ Bin3 = &lt;&lt;$H/utf32-little, $e/utf32-little, $l/utf32-little, $l/utf32-littl
Bin4 = &lt;&lt;"Hello"/utf16&gt;&gt;,</code>
</section>
<section>
-<title>String- and character-literals</title>
-<p>For source code, there is an extension to the \OOO (backslash
-followed by three octal numbers) and \xHH (backslash followed by 'x',
-followed by two hexadecimal characters) syntax, namely \x{H ...} (a
-backslash followed by an 'x', followed by left curly bracket, any
+<title>String and Character Literals</title>
+<p>For source code, there is an extension to the <c>\</c>OOO (backslash
+followed by three octal digits) and <c>\x</c>HH (backslash followed by <c>x</c>,
+followed by two hexadecimal digits) syntax, namely <c>\x{</c>H ...<c>}</c> (a
+backslash followed by an <c>x</c>, followed by left curly bracket, any
number of hexadecimal digits and a terminating right curly bracket).
This allows for entering characters of any codepoint literally in a
string even when the encoding is ISO-latin-1.</p>
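<p>For example (all characters written with escapes, so the source itself
stays within ISO-latin-1):</p>
<code>
1024 = $\x{400},          %% a character literal beyond the latin1 range
[1024] = "\x{400}",
[65,1024,66] = "A\x{400}B".
</code>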
</section>
-<p>In the shell, if using a Unicode input device, '$' can be followed directly by a Unicode character producing an integer. In the following example the codepoint of a Cyrillic 's' is output:</p>
+<p>In the shell, if using a Unicode input device, <c>$</c> can be followed directly by a Unicode character producing an integer. In the following example the codepoint of a Cyrillic <c>s</c> is output:</p>
<pre>
7> <input>$с.</input>
1089</pre>
</section>
<section>
-<title>The interactive shell</title>
+<title>The Interactive Shell</title>
<p>The interactive Erlang shell, when started towards a terminal or started using the <c>werl</c> command on Windows, can support Unicode input and output.</p>
-<p>On Windows&reg;, proper operation requires that a suitable font is installed and selected for the Erlang application to use. If no suitable font is available on your system, try installing the DejaVu fonts (dejavu-fonts.org), which are freely available and then select that font in the Erlang shell application.</p>
-<p>On Unix&reg;-like operating systems, the terminal should be able to handle UTF-8 on input and output (modern versions of XTerm, KDE konsole and the Gnome terminal do for example) and your locale settings have to be proper. As an example, my LANG environment variable is set as this:</p>
+<p>On Windows&reg;, proper operation requires that a suitable font is installed and selected for the Erlang application to use. If no suitable font is available on your system, try installing the DejaVu fonts (<c>dejavu-fonts.org</c>), which are freely available and then select that font in the Erlang shell application.</p>
+<p>On Unix&reg;-like operating systems, the terminal should be able to handle UTF-8 on input and output (modern versions of XTerm, KDE konsole and the Gnome terminal do, for example) and your locale settings have to be correct. As an example, my <c>LANG</c> environment variable is set like this:</p>
<pre>
$ <input>echo $LANG</input>
en_US.UTF-8</pre>
-<p>Actually, most systems handle the LC_CTYPE variable before LANG, so if that is set, it has to be set to UTF-8:</p>
+<p>Actually, most systems handle the <c>LC_CTYPE</c> variable before <c>LANG</c>, so if that is set, it has to be set to <c>UTF-8</c>:</p>
<pre>
$ echo <input>$LC_CTYPE</input>
en_US.UTF-8</pre>
-<p>The LANG or LC_CTYPE setting should be consistent with what the terminal is capable of, there is no portable way for Erlang to ask the actual terminal about its UTF-8 capacity, we have to rely on the language and character type settings.</p>
+<p>The <c>LANG</c> or <c>LC_CTYPE</c> setting should be consistent with what the terminal is capable of; there is no portable way for Erlang to ask the actual terminal about its UTF-8 capacity, so we have to rely on the language and character type settings.</p>
<p>To investigate what Erlang thinks about the terminal, the <c>io:getopts()</c> call can be used when the shell is started:</p>
<pre>
$ <input>LC_CTYPE=en_US.ISO-8859-1 erl</input>
-Erlang R13A (erts-5.10) [source] [64-bit] [smp:4:4] [rq:4] [async-threads:0] [kernel-poll:false]
+Erlang R16B (erts-5.10) [source] [async-threads:0] [hipe] [kernel-poll:false]
Eshell V5.10 (abort with ^G)
-1> <input>lists:keyfind(encoding,1,io:getopts()).</input>
+1> <input>lists:keyfind(encoding, 1, io:getopts()).</input>
{encoding,latin1}
2> <input>q().</input>
ok
$ <input>LC_CTYPE=en_US.UTF-8 erl</input>
-Erlang R13A (erts-5.10) [source] [64-bit] [smp:4:4] [rq:4] [async-threads:0] [kernel-poll:false]
+Erlang R16B (erts-5.10) [source] [async-threads:0] [hipe] [kernel-poll:false]
Eshell V5.10 (abort with ^G)
-1> <input>lists:keyfind(encoding,1,io:getopts()).</input>
+1> <input>lists:keyfind(encoding, 1, io:getopts()).</input>
{encoding,unicode}
2></pre>
<p>When (finally?) everything is in order with the locale settings, fonts and the terminal emulator, you probably also have discovered a way to input characters in the script you desire. For testing, the simplest way is to add some keyboard mappings for other languages, usually done with some applet in your desktop environment. In my KDE environment, I start the KDE Control Center (Personal Settings), select "Regional and Accessibility" and then "Keyboard Layout". On Windows XP&reg;, I start Control Panel->Regional and Language Options, select the Language tab and click the Details... button in the square named "Text services and input Languages". Your environment probably provides similar means of changing the keyboard layout. Make sure you have a way to easily switch back and forth between keyboards if you are not used to this, entering commands using a Cyrillic character set is, as an example, not easily done in the Erlang shell.</p>
<p>Now you are set up for some Unicode input and output. The simplest thing to do is of course to enter a string in the shell:</p>
<pre>
$ <input>erl</input>
+Erlang R16B (erts-5.10) [source] [async-threads:0] [hipe] [kernel-poll:false]
+
Eshell V5.10 (abort with ^G)
1> <input>lists:keyfind(encoding, 1, io:getopts()).</input>
{encoding,unicode}
@@ -166,6 +171,9 @@ ok
4> </pre>
<p>While strings can be input as Unicode characters, the language elements are still limited to the ISO-latin-1 character set. Only character constants and strings are allowed to be beyond that range:</p>
<pre>
+$ <input>erl</input>
+Erlang R16B (erts-5.10) [source] [async-threads:0] [hipe] [kernel-poll:false]
+
Eshell V5.10 (abort with ^G)
1> <input>$ξ</input>
958
@@ -174,7 +182,7 @@ Eshell V5.10 (abort with ^G)
2> </pre>
</section>
<section>
-<title>Unicode file names</title>
+<title>Unicode File Names</title>
<p>Most modern operating systems support Unicode file names in some way or another. There are several different ways to do this and Erlang by default treats the different approaches differently:</p>
<taglist>
<tag>Mandatory Unicode file naming</tag>
@@ -189,7 +197,7 @@ Eshell V5.10 (abort with ^G)
<p>A raw file name is not a list, but a binary. Many non-core applications still do not handle file names given as binaries, which is why such raw names are avoided by default. This means that systems having implemented Unicode file naming through transparent file systems and a UTF-8 convention do not by default have Unicode file naming turned on. Explicitly turning Unicode file name handling on for these types of systems is considered experimental.</p>
</item>
</taglist>
-<p>The Unicode file naming support was introduced with OTP release R14B01. A VM operating in Unicode file mode can work with files having names in any language or character set (as long as it's supported by the underlying OS and file system). The Unicode character list is used to denote file or directory names and if the file system content is listed, you will also be able to get Unicode lists as return value. The support lies in the Kernel and STDLIB modules, why most applications (that does not explicitly require the file names to be in the ISO-latin-1 range) will benefit from the Unicode support without change.</p>
+<p>The Unicode file naming support was introduced with OTP release R14B01. A VM operating in Unicode file mode can work with files having names in any language or character set (as long as it is supported by the underlying OS and file system). The Unicode character list is used to denote file or directory names, and if the file system content is listed, you will also be able to get Unicode lists as return values. The support lies in the Kernel and STDLIB modules, which is why most applications (that do not explicitly require the file names to be in the ISO-latin-1 range) will benefit from the Unicode support without change.</p>
<p>On Operating systems with mandatory Unicode file names, this means that you more easily conform to the file names of other (non Erlang) applications, and you can also process file names that, at least on Windows, were completely inaccessible (due to having names that could not be represented in ISO-latin-1). Also you will avoid creating incomprehensible file names on MacOSX as the vfs layer of the OS will accept all your file names as UTF-8 and will not rewrite them.</p>
@@ -199,32 +207,32 @@ Eshell V5.10 (abort with ^G)
<p>It is worth noting that the file <c>encoding</c> options given when opening a file have nothing to do with the file <em>name</em> encoding convention. You can very well open files containing UTF-8 but having file names in ISO-latin-1 or vice versa.</p>
-<note>Erlang drivers and NIF shared objects still can not be named with names containing codepoints beyond 127. This is a known limitation to be removed in a future release. Erlang modules however can, but it is definitely not a good idea and is still considered experimental.</note>
+<note><p>Erlang drivers and NIF shared objects still can not be named with names containing codepoints beyond 127. This is a known limitation to be removed in a future release. Erlang modules however can, but it is definitely not a good idea and is still considered experimental.</p></note>
<section>
-<title>Notes about raw file names and automatic file name conversion</title>
+<title>Notes About Raw File Names and Automatic File Name Conversion</title>
<p>Raw file names is introduced together with Unicode file name support in erts-5.8.2 (OTP R14B01). The reason &quot;raw file names&quot; is introduced in the system is to be able to consistently represent file names given in different encodings on the same system. Having the VM automatically translate a file name that is not in UTF-8 to a list of Unicode characters might seem practical, but this would open up for both duplicate file names and other inconsistent behavior. Consider a directory containing a file named &quot;björn&quot; in ISO-latin-1, while the Erlang VM is operating in Unicode file name mode (and therefore expecting UTF-8 file naming). The ISO-latin-1 name is not valid UTF-8 and one could be tempted to think that automatic conversion in for example <c>file:list_dir/1</c> is a good idea. But what would happen if we later tried to open the file and have the name as a Unicode list (magically converted from the ISO-latin-1 file name)? The VM will convert the file name given to UTF-8, as this is the encoding expected. Effectively this means trying to open the file named &lt;&lt;&quot;björn&quot;/utf8&gt;&gt;. This file does not exist, and even if it existed it would not be the same file as the one that was listed. We could even create two files named &quot;björn&quot;, one named in the UTF-8 encoding and one not. If <c>file:list_dir/1</c> would automatically convert the ISO-latin-1 file name to a list, we would get two identical file names as the result. To avoid this, we need to differentiate between file names being properly encoded according to the Unicode file naming convention (i.e. UTF-8) and file names being invalid under the encoding. This is done by representing invalid encoding as &quot;raw&quot; file names, i.e. as binaries.</p>
<p>The core system of Erlang (Kernel and STDLIB) accepts raw file names except for loadable drivers and executables invoked using <c>open_port({spawn, ...} ...)</c>. <c>open_port({spawn_executable, ...} ...)</c> however does accept them. As mentioned earlier, the arguments given in the option list to <c>open_port({spawn_executable, ...} ...)</c> undergo the same conversion as the file names, meaning that the executable will be provided with arguments in UTF-8 as well. This translation is avoided consistently with how the file names are treated, by giving the argument as a binary.</p>
<p>To force Unicode file name translation mode on systems where this is not the default is considered experimental in OTP R14B01 due to the raw file names possibly being a new experience to the programmer and that the non core applications of OTP are not tested for compliance with raw file names yet. Unicode file name translation is expected to be default in future releases.</p>
-<p>If working with raw file names, one can still conform to the encoding convention of the Erlang VM by using the <c>file:native_name_encoding/0</c> function, which returns either the atom <c>latin1</c> or the atom <c>utf8</c> depending on the file name translation mode. On Linux, a VM started without explicitly stating the file name translation mode will default to <c>latin1</c> as the native file name encoding, why file names on the disk encoded as UTF-8 will be returned as a list of the names interpreted as ISO-latin-1. The &quot;UTF-8 list&quot; is not a practical type for displaying or operating on in Erlang, but it is backward compatible and usable in all functions requiring a file name. On Windows and MacOSX, the default behavior is that of file name translation, why the <c>file:native_name_encoding/0</c> by default returns <c>utf8</c> on those systems (the fact that Windows actually does not use UTF-8 on the file system level can safely be ignored by the Erlang programmer). The default behavior can be changed using the <c>+fnu</c> or <c>+fnl</c> options to the VM, see the <c>erl</c> command manual page.</p>
+<p>If working with raw file names, one can still conform to the encoding convention of the Erlang VM by using the <c>file:native_name_encoding/0</c> function, which returns either the atom <c>latin1</c> or the atom <c>utf8</c> depending on the file name translation mode. On Linux, a VM started without explicitly stating the file name translation mode will default to <c>latin1</c> as the native file name encoding, which is why file names on the disk encoded as UTF-8 will be returned as a list of the names interpreted as ISO-latin-1. The &quot;UTF-8 list&quot; is not a practical type for displaying or operating on in Erlang, but it is backward compatible and usable in all functions requiring a file name. On Windows and MacOSX, the default behavior is that of file name translation, which is why <c>file:native_name_encoding/0</c> by default returns <c>utf8</c> on those systems (the fact that Windows actually does not use UTF-8 on the file system level can safely be ignored by the Erlang programmer). The default behavior can be changed using the <c>+fnu</c> or <c>+fnl</c> options to the VM; see the <seealso marker="erts:erl"><c>erl(1)</c></seealso> command manual page.</p>
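<p>A program that needs to know which mode is active can simply ask the VM (a sketch):</p>
<code>
case file:native_name_encoding() of
    utf8 -&gt;
        %% File names are translated to and from Unicode lists (UTF-8 based).
        unicode_mode;
    latin1 -&gt;
        %% File names are passed through byte-wise.
        byte_mode
end.
</code>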
<p>Even if you are operating without Unicode file naming translation automatically done by the VM, you can access and create files with names in UTF-8 encoding by using raw file names encoded as UTF-8. Enforcing the UTF-8 encoding regardless of the mode the Erlang VM is started in might, in some circumstances, be a good idea, as the convention of using UTF-8 file names is spreading.</p>
</section>
<section>
-<title>Notes about MacOSX</title>
+<title>Notes About MacOSX</title>
<p>MacOSXs vfs layer enforces UTF-8 file names in a quite aggressive way. Older versions did this by simply refusing to create non UTF-8 conforming file names, while newer versions replace offending bytes with the sequence &quot;%HH&quot;, where HH is the original character in hexadecimal notation. As Unicode translation is enabled by default on MacOSX, the only way to come up against this is to either start the VM with the <c>+fnl</c> flag or to use a raw file name in <c>latin1</c> encoding. In that case, the file can not be opened with the same name as the one used to create this. The problem is by design in newer versions of MacOSX.</p>
<p>MacOSX also reorganizes the names of files so that the representation of accents etc is denormalized, i.e. the character <c>ö</c> is represented as the codepoints [111,776], where 111 is the character <c>o</c> and 776 is a special accent character. This type of denormalized Unicode is otherwise very seldom used and Erlang normalizes those file names on retrieval, so that denormalized file names is not passed up to the Erlang application. In Erlang the file name &quot;björn&quot; is retrieved as [98,106,246,114,110], not as [98,106,117,776,114,110], even though the file system might think differently.</p>
</section>
</section>
<section>
-<title>Unicode in environment variables and parameters</title>
+<title>Unicode in Environment Variables and Parameters</title>
<p>Environment variables and their interpretation are handled much in the same way as file names. If Unicode file names are enabled, environment variables as well as parameters to the Erlang VM are expected to be in Unicode.</p>
<p>If Unicode file names are enabled, the calls to <seealso marker="kernel:os#getenv/0"><c>os:getenv/0</c></seealso>, <seealso marker="kernel:os#getenv/1"><c>os:getenv/1</c></seealso> and <seealso marker="kernel:os#putenv/2"><c>os:putenv/2</c></seealso> will handle Unicode strings. On Unix-like platforms, the built-in functions will translate environment variables in UTF-8 to/from Unicode strings, possibly with codepoints > 255. On Windows the Unicode versions of the environment system API will be used, also allowing for codepoints > 255.</p>
<p>On Unix-like operating systems, parameters are expected to be UTF-8 without translation if Unicode file names are enabled.</p>
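<p>For example, with Unicode file names (and thereby Unicode environment handling) enabled, an environment variable can hold codepoints beyond 255 (a sketch; the variable name is arbitrary):</p>
<code>
true = os:putenv("MOTD", [1055,1088,1080,1074,1077,1090]),  %% "Привет"
[1055,1088,1080,1074,1077,1090] = os:getenv("MOTD").
</code>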
</section>
<section>
-<title>Unicode-aware modules</title>
-<p>Most of the modules in Erlang/OTP are of course Unicode-unaware in the sense that they have no notion of Unicode and really shouldn't have. Typically they handle non-textual or byte-oriented data (like <c>gen_tcp</c> etc).</p>
+<title>Unicode-aware Modules</title>
+<p>Most of the modules in Erlang/OTP are of course Unicode-unaware in the sense that they have no notion of Unicode and really should not have. Typically they handle non-textual or byte-oriented data (like <c>gen_tcp</c> etc).</p>
<p>Modules that actually handle textual data (like <c>io_lib</c>, <c>string</c> etc) are sometimes subject to conversion or extension to be able to handle Unicode characters.</p>
<p>Fortunately, most textual data has been stored in lists and range checking has been sparse, which is why modules like <c>string</c> work well for Unicode lists with little need for conversion or extension.</p>
<p>Some modules are however changed to be explicitly Unicode-aware. These modules include:</p>
@@ -237,7 +245,7 @@ Eshell V5.10 (abort with ^G)
<item>
<p>The <seealso marker="stdlib:io">io</seealso> module has been extended along with the actual I/O-protocol to handle Unicode data. This means that several functions require binaries to be in UTF-8 and there are modifiers to formatting control sequences to allow for outputting of Unicode strings.</p>
</item>
-<tag><c>file</c>, <c>group</c> and <c>user</c></tag>
+<tag><c>file</c>, <c>group</c>, <c>user</c></tag>
<item>
<p>I/O-servers throughout the system are able both to handle Unicode data and have options for converting data upon actual output or input to/from the device. As shown earlier, the <seealso marker="stdlib:shell">shell</seealso> has support for Unicode terminals and the <seealso marker="kernel:file">file</seealso> module allows for translation to and from various Unicode formats on disk.</p>
<p>The actual reading and writing of files with Unicode data is however not best done with the <c>file</c> module as its interface is byte oriented. A file opened with a Unicode encoding (like UTF-8), is then best read or written using the <seealso marker="stdlib:io">io</seealso> module.</p>
@@ -254,10 +262,10 @@ Eshell V5.10 (abort with ^G)
<p>The module <seealso marker="stdlib:string">string</seealso> works perfectly for Unicode strings as well as for ISO-latin-1 strings, with the exception of the language-dependent <seealso marker="stdlib:string#to_upper/1">to_upper</seealso> and <seealso marker="stdlib:string#to_lower/1">to_lower</seealso> functions, which are only correct for the ISO-latin-1 character set. Actually, they can never function correctly for Unicode characters in their current form; there are language and locale issues as well as multi-character mappings to consider when converting text between cases. Converting case in an international environment is a big subject not yet addressed in OTP.</p>
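<p>For example (a sketch; note that codepoints outside the ISO-latin-1 range are simply left untouched):</p>
<code>
"ABC" = string:to_upper("abc"),
[197,196,214] = string:to_upper([229,228,246]),   %% "åäö" -&gt; "ÅÄÖ"
[1076] = string:to_upper([1076]).                 %% Cyrillic character, unchanged
</code>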
</section>
<section>
-<title>Unicode recipes</title>
+<title>Unicode Recipes</title>
<p>When starting with Unicode, one often stumbles over some common issues. I try to outline some methods of dealing with Unicode data in this section.</p>
<section>
-<title>Byte order marks</title>
+<title>Byte Order Marks</title>
<p>A common method of identifying encoding in text files is to put a byte order mark (BOM) first in the file. The BOM is the codepoint 16#FEFF encoded in the same way as the rest of the file. If such a file is to be read, the first few bytes (depending on encoding) are not part of the actual text. This code outlines how to open a file which is believed to have a BOM and set the file's encoding and position for further sequential reading (preferably using the <seealso marker="stdlib:io">io</seealso> module). Note that error handling is omitted from the code:</p>
<code>
open_bom_file_for_reading(File) -&gt;
@@ -268,7 +276,7 @@ open_bom_file_for_reading(File) -&gt;
io:setopts(F,[{encoding,Type}]),
{ok,F}.
</code>
-<p>The <c>unicode:bom_to_encoding/1</c> function identifies the encoding from a binary of at least four bytes. It returns, along with an term suitable for setting the encoding of the file, the actual length of the BOM, so that the file position can be set accordingly. Note that <c>file:position</c> always works on byte-offsets, so that the actual byte-length of the BOM is needed.</p>
+<p>The <c>unicode:bom_to_encoding/1</c> function identifies the encoding from a binary of at least four bytes. It returns, along with a term suitable for setting the encoding of the file, the actual length of the BOM, so that the file position can be set accordingly. Note that <c>file:position/2</c> always works on byte-offsets, so that the actual byte-length of the BOM is needed.</p>
<p>To open a file for writing and putting the BOM first is even simpler:</p>
<code>
open_bom_file_for_writing(File,Encoding) -&gt;
@@ -280,8 +288,8 @@ open_bom_file_for_writing(File,Encoding) -&gt;
<p>In both cases the file is then best processed using the <c>io</c> module, as the functions in <c>io</c> can handle codepoints beyond the ISO-latin-1 range.</p>
</section>
<section>
-<title>Formatted input and output</title>
-<p>When reading and writing to Unicode-aware entities, like the User or a file opened for Unicode translation, you will probably want to format text strings using the functions in <seealso marker="stdlib:io">io</seealso> or <seealso marker="stdlib:io_lib">io_lib</seealso>. For backward compatibility reasons, these functions don't accept just any list as a string, but require e special "translation modifier" when working with Unicode texts. The modifier is "t". When applied to the "s" control character in a formatting string, it accepts all Unicode codepoints and expect binaries to be in UTF-8:</p>
+<title>Formatted Input and Output</title>
+<p>When reading and writing to Unicode-aware entities, like the User or a file opened for Unicode translation, you will probably want to format text strings using the functions in <seealso marker="stdlib:io">io</seealso> or <seealso marker="stdlib:io_lib">io_lib</seealso>. For backward compatibility reasons, these functions do not accept just any list as a string, but require a special <em>translation modifier</em> when working with Unicode texts. The modifier is <c>t</c>. When applied to the <c>s</c> control character in a formatting string, it accepts all Unicode codepoints and expects binaries to be in UTF-8:</p>
<pre>
1> <input>io:format("~ts~n",[&lt;&lt;"åäö"/utf8&gt;&gt;]).</input>
åäö
@@ -289,21 +297,24 @@ ok
2> <input>io:format("~s~n",[&lt;&lt;"åäö"/utf8&gt;&gt;]).</input>
åäö
ok</pre>
-<p>Obviously the second <c>io:format</c> gives undesired output because the UTF-8 binary is not in latin1. Because ISO-latin-1 is still the defined character set of Erlang, the non prefixed "s" control character expects ISO-latin-1 in binaries as well as lists.</p>
-<p>As long as the data is always lists, the "t" modifier can be used for any string, but when binary data is involved, care must be taken to make the tight choice of formatting characters.</p>
-<p>The function <c>format</c> in <c>io_lib</c> behaves similarly. This function is defined to return a deep list of characters and the output could easily be converted to binary data for outputting on a device of any kind by a simple <c>erlang:list_to_binary</c>. When the translation modifier is used, the list can however contain characters that cannot be stored in one byte. The call to <c>erlang:list_to_binary</c> will in that case fail. However, if the io_server you want to communicate with is Unicode-aware, the list returned can still be used directly:</p>
+<p>Obviously the second <c>io:format/2</c> gives undesired output because the UTF-8 binary is not in latin1. Because ISO-latin-1 is still the defined character set of Erlang, the non prefixed <c>s</c> control character expects ISO-latin-1 in binaries as well as lists.</p>
+<p>As long as the data is always lists, the <c>t</c> modifier can be used for any string, but when binary data is involved, care must be taken to make the right choice of formatting characters.</p>
+<p>The function <c>format/2</c> in <c>io_lib</c> behaves similarly. This function is defined to return a deep list of characters and the output could easily be converted to binary data for outputting on a device of any kind by a simple <c>erlang:list_to_binary/1</c>. When the translation modifier is used, the list can however contain characters that cannot be stored in one byte. The call to <c>erlang:list_to_binary/1</c> will in that case fail. However, if the I/O server you want to communicate with is Unicode-aware, the list returned can still be used directly:</p>
<pre>
+$ <input>erl</input>
+Erlang R16B (erts-5.10) [source] [async-threads:0] [hipe] [kernel-poll:false]
+
Eshell V5.10 (abort with ^G)
1> <input>io_lib:format("~ts~n", ["θνιψοδε"]).</input>
["θνιψοδε","\n"]
2> <input>io:put_chars(io_lib:format("~ts~n", ["θνιψοδε"])).</input>
θνιψοδε
ok</pre>
-<p>The Unicode string is returned as a Unicode list, which is recognized as such since the Erlang shell uses the Unicode encoding. The Unicode list is valid input to the <seealso marker="stdlib:io#put_chars/2">io:put_chars</seealso> function, so data can be output on any Unicode capable device. If the device is a terminal, characters will be output in the \x{H ...} format if encoding is <c>latin1</c> otherwise in UTF-8 (for the non-interactive terminal - "oldshell" or "noshell") or whatever is suitable to show the character properly (for an interactive terminal - the regular shell). The bottom line is that you can always send Unicode data to the <c>standard_io</c> device. Files will however only accept Unicode codepoints beyond ISO-latin-1 if <c>encoding</c> is set to something else than <c>latin1</c>.</p>
+<p>The Unicode string is returned as a Unicode list, which is recognized as such since the Erlang shell uses the Unicode encoding. The Unicode list is valid input to the <seealso marker="stdlib:io#put_chars/2">io:put_chars/2</seealso> function, so data can be output on any Unicode capable device. If the device is a terminal, characters will be output in the <c>\x{</c>H ...<c>}</c> format if encoding is <c>latin1</c> otherwise in UTF-8 (for the non-interactive terminal - "oldshell" or "noshell") or whatever is suitable to show the character properly (for an interactive terminal - the regular shell). The bottom line is that you can always send Unicode data to the <c>standard_io</c> device. Files will however only accept Unicode codepoints beyond ISO-latin-1 if <c>encoding</c> is set to something else than <c>latin1</c>.</p>
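<p>For example, writing Unicode text to a file opened with a Unicode <c>encoding</c> option (a sketch; the file name is arbitrary):</p>
<code>
{ok, F} = file:open("unicode_out.txt", [write, {encoding, utf8}]),
ok = io:format(F, "~ts~n", [[1071, 1080, 1076]]),
ok = file:close(F).
</code>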
</section>
<section>
-<title>Heuristic identification of UTF-8</title>
-<p>While it's strongly encouraged that the actual encoding of characters in binary data is known prior to processing, that is not always possible. On a typical Linux&reg; system, there is a mix of UTF-8 and ISO-latin-1 text files and there are seldom any BOM's in the files to identify them.</p>
+<title>Heuristic Identification of UTF-8</title>
+<p>While it is strongly encouraged that the actual encoding of characters in binary data is known prior to processing, that is not always possible. On a typical Linux&reg; system, there is a mix of UTF-8 and ISO-latin-1 text files and there are seldom any BOMs in the files to identify them.</p>
<p>UTF-8 is designed in such a way that ISO-latin-1 characters with numbers beyond the 7-bit ASCII range are seldom considered valid when decoded as UTF-8. Therefore one can usually use heuristics to determine if a file is in UTF-8 or if it is encoded in ISO-latin-1 (one byte per character) encoding. The <c>unicode</c> module can be used to determine if data can be interpreted as UTF-8:</p>
<code>
heuristic_encoding_bin(Bin) when is_binary(Bin) -&gt;