Diffstat (limited to 'erts/doc/src')
-rw-r--r--  erts/doc/src/erl.xml        |  95
-rw-r--r--  erts/doc/src/erl_driver.xml |  28
-rw-r--r--  erts/doc/src/erl_nif.xml    | 122
-rw-r--r--  erts/doc/src/erlang.xml     | 108
-rw-r--r--  erts/doc/src/erts_alloc.xml |  21
-rw-r--r--  erts/doc/src/escript.xml    |  10
-rw-r--r--  erts/doc/src/notes.xml      |   2
-rw-r--r--  erts/doc/src/zlib.xml       |  30
8 files changed, 340 insertions, 76 deletions
diff --git a/erts/doc/src/erl.xml b/erts/doc/src/erl.xml index bf0d132955..27a23174d5 100644 --- a/erts/doc/src/erl.xml +++ b/erts/doc/src/erl.xml @@ -792,6 +792,54 @@ SMP support enabled (see the <seealso marker="#smp">-smp</seealso> flag).</p> </item> + <tag><marker id="+SDcpu"><c><![CDATA[+SDcpu DirtyCPUSchedulers:DirtyCPUSchedulersOnline]]></c></marker></tag> + <item> + <p>Sets the number of dirty CPU scheduler threads to create and dirty + CPU scheduler threads to set online when threading support has been + enabled. The maximum for both values is 1024, and each value is further + limited by the settings for normal schedulers: the number of dirty CPU + scheduler threads created cannot exceed the number of normal scheduler + threads created, and the number of dirty CPU scheduler threads online + cannot exceed the number of normal scheduler threads online (see the + <seealso marker="#+S">+S</seealso> and <seealso marker="#+SP">+SP</seealso> + flags for more details). By default, the number of dirty CPU scheduler + threads created equals the number of normal scheduler threads created, + and the number of dirty CPU scheduler threads online equals the number + of normal scheduler threads online. <c>DirtyCPUSchedulers</c> may be + omitted if <c>:DirtyCPUSchedulersOnline</c> is not and vice versa. The + number of dirty CPU schedulers online can be changed at run time via + <seealso marker="erlang#system_flag_dirty_cpu_schedulers_online">erlang:system_flag(dirty_cpu_schedulers_online, DirtyCPUSchedulersOnline)</seealso>. + </p> + <p>This option is ignored if the emulator doesn't have threading support + enabled. Currently, <em>this option is experimental</em> and is supported only + if the emulator was configured and built with support for dirty schedulers + enabled (it's disabled by default). + </p> + </item> + <tag><marker id="+SDPcpu"><c><![CDATA[+SDPcpu DirtyCPUSchedulersPercentage:DirtyCPUSchedulersOnlinePercentage]]></c></marker></tag> + <item> + <p>Similar to <seealso marker="#+SDcpu">+SDcpu</seealso> but uses percentages to set the + number of dirty CPU scheduler threads to create and number of dirty CPU scheduler threads + to set online when threading support has been enabled. Specified values must be greater + than 0. For example, <c>+SDPcpu 50:25</c> sets the number of dirty CPU scheduler threads + to 50% of the logical processors configured and the number of dirty CPU scheduler threads + online to 25% of the logical processors available. <c>DirtyCPUSchedulersPercentage</c> may + be omitted if <c>:DirtyCPUSchedulersOnlinePercentage</c> is not and vice versa. The + number of dirty CPU schedulers online can be changed at run time via + <seealso marker="erlang#system_flag_dirty_cpu_schedulers_online">erlang:system_flag(dirty_cpu_schedulers_online, DirtyCPUSchedulersOnline)</seealso>. + </p> + <p>This option interacts with <seealso marker="#+SDcpu">+SDcpu</seealso> settings. + For example, on a system with 8 logical cores configured and 8 logical cores available, + the combination of the options <c>+SDcpu 4:4 +SDPcpu 50:25</c> (in either order) results + in 2 dirty CPU scheduler threads (50% of 4) and 1 dirty CPU scheduler thread online (25% of 4). + </p> + <p>This option is ignored if the emulator doesn't have threading support + enabled. Currently, <em>this option is experimental</em> and is supported only + if the emulator was configured and built with support for dirty schedulers + enabled (it's disabled by default). 
+ </p> + </item> + <tag><marker id="+SDio"><c><![CDATA[+SDio IOSchedulers]]></c></marker></tag> <tag><c><![CDATA[+sFlag Value]]></c></tag> <item> <p>Scheduling specific flags.</p> @@ -941,6 +989,10 @@ when schedulers frequently run out of work. When disabled, the frequency with which schedulers run out of work will not be taken into account by the load balancing logic. + <br/> <c>+scl false</c> is similar to + <seealso marker="#+sub">+sub true</seealso> with the difference + that <c>+sub true</c> also will balance scheduler utilization + between schedulers. </p> </item> <tag><marker id="+sct"><c>+sct CpuTopology</c></marker></tag> @@ -1087,6 +1139,29 @@ documentation of the <seealso marker="#+sbt">+sbt</seealso> flag. </p> </item> + <tag><marker id="+sub"><c>+sub true|false</c></marker></tag> + <item> + <p>Enable or disable + <seealso marker="erts:erlang#statistics_scheduler_wall_time">scheduler + utilization</seealso> balancing of load. By default scheduler + utilization balancing is disabled and instead scheduler + compaction of load is enabled which will strive for a load + distribution which causes as many scheduler threads as possible + to be fully loaded (i.e., not run out of work). When scheduler + utilization balancing is enabled the system will instead try to + balance scheduler utilization between schedulers. That is, + strive for equal scheduler utilization on all schedulers. + <br/> <c>+sub true</c> is only supported on + systems where the runtime system detects and use a monotonically + increasing high resolution clock. On other systems, the runtime + system will fail to start. + <br/> <c>+sub true</c> implies + <seealso marker="#+scl">+scl false</seealso>. The difference + between <c>+sub true</c> and <c>+scl false</c> is that + <c>+scl false</c> will not try to balance the scheduler + utilization. + </p> + </item> <tag><marker id="+swct"><c>+sws very_eager|eager|medium|lazy|very_lazy</c></marker></tag> <item> <p> @@ -1126,18 +1201,18 @@ <tag><marker id="+spp"><c>+spp Bool</c></marker></tag> <item> <p>Set default scheduler hint for port parallelism. If set to - <c>true</c>, the VM will schedule port tasks when it by this can - improve the parallelism in the system. If set to <c>false</c>, - the VM will try to perform port tasks immediately and by this - improve latency at the expense of parallelism. If this - flag has not been passed, the default scheduler hint for port - parallelism is currently <c>false</c>. The default used can be - inspected in runtime by calling - <seealso marker="erlang#system_info_port_parallelism">erlang:system_info(port_parallelism)</seealso>. + <c>true</c>, the VM will schedule port tasks when doing so will + improve parallelism in the system. If set to <c>false</c>, the VM + will try to perform port tasks immediately, improving latency at the + expense of parallelism. If this flag has not been passed, the + default scheduler hint for port parallelism is currently + <c>false</c>. The default used can be inspected in runtime by + calling <seealso + marker="erlang#system_info_port_parallelism">erlang:system_info(port_parallelism)</seealso>. The default can be overriden on port creation by passing the <seealso marker="erlang#open_port_parallelism">parallelism</seealso> - option to - <seealso marker="erlang#open_port/2">open_port/2</seealso></p>. + option to <seealso + marker="erlang#open_port/2">open_port/2</seealso></p>. 
</item> <tag><marker id="sched_thread_stack_size"><c><![CDATA[+sss size]]></c></marker></tag> <item> diff --git a/erts/doc/src/erl_driver.xml b/erts/doc/src/erl_driver.xml index b453a4861e..c2f7fa4588 100644 --- a/erts/doc/src/erl_driver.xml +++ b/erts/doc/src/erl_driver.xml @@ -745,7 +745,7 @@ typedef struct ErlIOVec { created and decrement it once when the port associated with the lock terminates. The emulator will also increment the reference count when an async job is enqueued and decrement - it after an async job has been invoked, or canceled. Besides + it after an async job has been invoked. Besides this, it is the responsibility of the driver to ensure that the reference count does not reach zero before the last use of the lock by the driver has been made. The reference count @@ -1995,14 +1995,12 @@ ERL_DRV_EXT2TERM char *buf, ErlDrvUInt len <c>async_invoke</c> and <c>async_free</c>. It's typically a pointer to a structure that contains a pipe or event that can be used to signal that the async operation completed. - The data should be freed in <c>async_free</c>, because it's - called if <c>driver_async_cancel</c> is called.</p> + The data should be freed in <c>async_free</c>.</p> <p>When the async operation is done, <seealso marker="driver_entry#ready_async">ready_async</seealso> driver entry function is called. If <c>ready_async</c> is null in the driver entry, the <c>async_free</c> function is called instead.</p> - <p>The return value is a handle to the asynchronous task, which - can be used as argument to <c>driver_async_cancel</c>.</p> + <p>The return value is a handle to the asynchronous task.</p> <note> <p>As of erts version 5.5.4.3 the default stack size for threads in the async-thread pool is 16 kilowords, @@ -2040,26 +2038,6 @@ ERL_DRV_EXT2TERM char *buf, ErlDrvUInt len </desc> </func> <func> - <name><ret>int</ret><nametext>driver_async_cancel(long id)</nametext></name> - <fsummary>Cancel an asynchronous call</fsummary> - <desc> - <marker id="driver_async_cancel"></marker> - <p>This function used to cancel a scheduled asynchronous operation, - if it was still in the queue. It returned 1 if it succeeded, and - 0 if it failed.</p> - <p>Since it could not guarantee success, it was more or less useless. - The user had to implement synchronization of cancellation anyway. - It also unnecessarily complicated the implementation. Therefore, - as of OTP-R15B <c>driver_async_cancel()</c> is deprecated, and - scheduled for removal in OTP-R17. It will currently always fail, - and return 0.</p> - <warning><p><c>driver_async_cancel()</c> is deprecated and will - be removed in the OTP-R17 release.</p> - </warning> - - </desc> - </func> - <func> <name><ret>int</ret><nametext>driver_lock_driver(ErlDrvPort port)</nametext></name> <fsummary>Make sure the driver is never unloaded</fsummary> <desc> diff --git a/erts/doc/src/erl_nif.xml b/erts/doc/src/erl_nif.xml index 7ac8181d47..8b19725c02 100644 --- a/erts/doc/src/erl_nif.xml +++ b/erts/doc/src/erl_nif.xml @@ -181,7 +181,11 @@ ok to dispatch the work to another thread, return from the native function, and wait for the result. The thread can send the result back to the calling thread using message passing. Information - about thread primitives can be found below.</p> + about thread primitives can be found below. 
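The erl_nif text above suggests dispatching lengthy work to another thread and returning the result to the caller by message passing. A minimal C sketch of that pattern follows; the module name slow_mod, the NIF slow_op/1, the work_t struct and the trivial doubling "work" are invented for this example, and error handling plus joining of the created thread are left out for brevity.

#include <erl_nif.h>

/* Work item handed to the private thread: who to reply to and what to do.
   (work_t, slow_mod and slow_op are names invented for this sketch.) */
typedef struct {
    ErlNifPid to;
    int       input;
} work_t;

/* Runs on the private thread: do the slow part, then message the caller. */
static void* worker(void* arg)
{
    work_t*      w       = (work_t*) arg;
    ErlNifEnv*   msg_env = enif_alloc_env();   /* process independent environment */
    int          result  = w->input * 2;       /* stand-in for the real lengthy work */
    ERL_NIF_TERM msg     = enif_make_tuple2(msg_env,
                                            enif_make_atom(msg_env, "slow_result"),
                                            enif_make_int(msg_env, result));
    enif_send(NULL, &w->to, msg_env, msg);     /* env argument must be NULL off-scheduler */
    enif_free_env(msg_env);
    enif_free(w);
    return NULL;
}

/* The NIF returns immediately; the caller receives {slow_result, N} later. */
static ERL_NIF_TERM slow_op(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
{
    ErlNifTid tid;
    int       input;
    work_t*   w;
    if (argc != 1 || !enif_get_int(env, argv[0], &input))
        return enif_make_badarg(env);
    w = (work_t*) enif_alloc(sizeof(work_t));
    w->input = input;
    enif_self(env, &w->to);
    /* Error handling and joining of tid (for example in the unload callback)
       are omitted to keep the sketch short. */
    enif_thread_create("slow_op_worker", &tid, worker, w, NULL);
    return enif_make_atom(env, "ok");
}

static ErlNifFunc nif_funcs[] = {{"slow_op", 1, slow_op}};
ERL_NIF_INIT(slow_mod, nif_funcs, NULL, NULL, NULL, NULL)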
If you have built your system + with <em>the currently experimental</em> support for dirty schedulers, + you may want to try out this functionality by dispatching the work to a + <seealso marker="#dirty_nifs">dirty NIF</seealso>, + which does not have the same duration restriction as a normal NIF.</p> </description> <section> <title>FUNCTIONALITY</title> @@ -312,6 +316,38 @@ ok <p>The library initialization callbacks <c>load</c>, <c>reload</c> and <c>upgrade</c> are all thread-safe even for shared state data.</p> </item> + <tag>Dirty NIFs</tag> + <item><p><marker id="dirty_nifs"/><em>Note that the dirty NIF functionality + is experimental</em> and that you have to enable support for dirty + schedulers when building OTP in order to try the functionality out. Native functions + <seealso marker="#lengthy_work"> + must normally run quickly</seealso>, as explained earlier in this document. They + generally should execute for no more than a millisecond. But not all native functions + can execute so quickly; for example, functions that encrypt large blocks of data or + perform lengthy file system operations can often run for tens of seconds or more.</p> + <p>A NIF that cannot execute in a millisecond or less is called a "dirty NIF" since + it performs work that the Erlang runtime cannot handle cleanly. Applications + that make use of such functions must indicate to the runtime that the functions are + dirty so they can be handled specially. To schedule a dirty NIF for execution, the + application calls <seealso marker="#enif_schedule_dirty_nif">enif_schedule_dirty_nif</seealso>, + passing to it a pointer to the dirty NIF to be executed and indicating with a flag + argument whether it expects the operation to be CPU-bound or I/O-bound.</p> + <p>All dirty NIFs must ultimately invoke the <seealso marker="#enif_schedule_dirty_nif_finalizer"> + enif_schedule_dirty_nif_finalizer</seealso> as their final action, passing to it the + result they wish to return to the original caller. A finalizer function can either + receive the result and return it directly, or it can return a different value instead. + For convenience, the NIF API provides the <seealso marker="#enif_dirty_nif_finalizer"> + enif_dirty_nif_finalizer</seealso> function that applications can use as a finalizer; + it simply returns its result argument.</p> + <note><p>Dirty NIF support is available only when the emulator is configured with dirty + schedulers enabled. This feature is currently disabled by default. To determine whether + the dirty NIF API is available, native code can check to see if the C preprocessor macro + <c>ERL_NIF_DIRTY_SCHEDULER_SUPPORT</c> is defined. Also, if the Erlang runtime was built + without threading support, dirty schedulers are disabled. 
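As a rough sketch of how the experimental dirty NIF API described above fits together, the fragment below guards on the ERL_NIF_DIRTY_SCHEDULER_SUPPORT macro, schedules a CPU-bound dirty NIF with enif_schedule_dirty_nif, and completes it with enif_schedule_dirty_nif_finalizer using the provided enif_dirty_nif_finalizer. The module name heavy_mod, the function names and the placeholder computation are assumptions of the sketch, not part of the API.

#include <erl_nif.h>

/* Hypothetical CPU-heavy helper; stands in for work such as encrypting
   or compressing a large block of data. */
static ERL_NIF_TERM heavy_work(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
{
    /* ... possibly many milliseconds of computation ... */
    return enif_make_atom(env, "done");
}

#ifdef ERL_NIF_DIRTY_SCHEDULER_SUPPORT
/* Runs on a dirty CPU scheduler thread; must end by scheduling a finalizer. */
static ERL_NIF_TERM heavy_work_dirty(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
{
    ERL_NIF_TERM result = heavy_work(env, argc, argv);
    return enif_schedule_dirty_nif_finalizer(env, result, enif_dirty_nif_finalizer);
}
#endif

/* The exported NIF runs on a normal scheduler and only dispatches. */
static ERL_NIF_TERM heavy_work_nif(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
{
#ifdef ERL_NIF_DIRTY_SCHEDULER_SUPPORT
    /* If no dirty scheduler threads exist (for example when threading support
       is disabled), the runtime calls heavy_work_dirty directly in this thread. */
    return enif_schedule_dirty_nif(env, ERL_NIF_DIRTY_JOB_CPU_BOUND,
                                   heavy_work_dirty, argc, argv);
#else
    return heavy_work(env, argc, argv);    /* no dirty NIF API: run inline */
#endif
}

static ErlNifFunc nif_funcs[] = {{"heavy_work", 1, heavy_work_nif}};
ERL_NIF_INIT(heavy_mod, nif_funcs, NULL, NULL, NULL, NULL)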
To check at runtime for the presence + of dirty scheduler threads, code can call the <seealso marker="#enif_have_dirty_schedulers"><c> + enif_have_dirty_schedulers()</c></seealso> API function, which returns true if dirty + scheduler threads are present, false otherwise.</p></note> + </item> </taglist> </section> <section> @@ -610,6 +646,18 @@ typedef enum { See also the <seealso marker="#WARNING">warning</seealso> text at the beginning of this document.</p> </desc> </func> + <func><name><ret>ERL_NIF_TERM</ret><nametext>enif_dirty_nif_finalizer(ErlNifEnv* env, ERL_NIF_TERM result)</nametext></name> + <fsummary>Simple dirty NIF result finalizer</fsummary> + <desc> + <p>A convenience function that a dirty NIF can use as a finalizer that simply + returns its <c>result</c> argument as its return value. This function is provided + for dirty NIFs with results that should be returned directly to the original caller.</p> + <note><p>This function is available only when the emulator is configured with dirty + schedulers enabled. This feature is currently disabled by default. To determine whether + the dirty NIF API is available, native code can check to see if the C preprocessor macro + <c>ERL_NIF_DIRTY_SCHEDULER_SUPPORT</c> is defined.</p></note> + </desc> + </func> <func><name><ret>int</ret><nametext>enif_equal_tids(ErlNifTid tid1, ErlNifTid tid2)</nametext></name> <fsummary></fsummary> <desc><p>Same as <seealso marker="erl_driver#erl_drv_equal_tids">erl_drv_equal_tids</seealso>. @@ -730,6 +778,22 @@ typedef enum { and return true, or return false if <c>term</c> is not an unsigned integer or is outside the bounds of type <c>unsigned long</c>.</p></desc> </func> + <func><name><ret>int</ret><nametext>enif_have_dirty_schedulers()</nametext></name> + <fsummary>Runtime check for the presence of dirty scheduler threads</fsummary> + <desc> + <p>Check at runtime for the presence of dirty scheduler threads. If the emulator is + built with threading support, dirty scheduler threads are available and + <c>enif_have_dirty_schedulers()</c> returns true. If the emulator was built without + threading support, <c>enif_have_dirty_schedulers()</c> returns false.</p> + <p>If dirty scheduler threads are not available in the emulator, calls to + <c>enif_schedule_dirty_nif</c> and <c>enif_schedule_dirty_nif_finalizer</c> result in + the NIF and finalizer functions being called directly within the calling thread.</p> + <note><p>This function is available only when the emulator is configured with dirty + schedulers enabled. This feature is currently disabled by default. To determine whether + the dirty NIF API is available, native code can check to see if the C preprocessor macro + <c>ERL_NIF_DIRTY_SCHEDULER_SUPPORT</c> is defined.</p></note> + </desc> + </func> <func><name><ret>int</ret><nametext>enif_inspect_binary(ErlNifEnv* env, ERL_NIF_TERM bin_term, ErlNifBinary* bin)</nametext></name> <fsummary>Inspect the content of a binary</fsummary> <desc><p>Initialize the structure pointed to by <c>bin</c> with @@ -777,6 +841,20 @@ typedef enum { Erlang operators <c>=:=</c> and <c>=/=</c>.</p></desc> </func> + <func><name><ret>int</ret><nametext>enif_is_on_dirty_scheduler(ErlNifEnv* env)</nametext></name> + <fsummary>Check to see if executing on a dirty scheduler thread</fsummary> + <desc> + <p>Check to see if the current NIF is executing on a dirty scheduler thread. If the + emulator is built with threading support, calling <c>enif_is_on_dirty_scheduler</c> + from within a dirty NIF returns true.
It returns false when the calling NIF is a regular + NIF or a NIF finalizer, both of which run on normal scheduler threads, or when the emulator + is built without threading support.</p> + <note><p>This function is available only when the emulator is configured with dirty + schedulers enabled. This feature is currently disabled by default. To determine whether + the dirty NIF API is available, native code can check to see if the C preprocessor macro + <c>ERL_NIF_DIRTY_SCHEDULER_SUPPORT</c> is defined.</p></note> + </desc> + </func> <func><name><ret>int</ret><nametext>enif_is_pid(ErlNifEnv* env, ERL_NIF_TERM term)</nametext></name> <fsummary>Determine if a term is a pid</fsummary> <desc><p>Return true if <c>term</c> is a pid.</p></desc> @@ -1141,6 +1219,48 @@ typedef enum { <desc><p>Same as <seealso marker="erl_driver#erl_drv_rwlock_tryrwlock">erl_drv_rwlock_tryrwlock</seealso>. </p></desc> </func> + <func><name><ret>ERL_NIF_TERM</ret><nametext>enif_schedule_dirty_nif(ErlNifEnv* env, int flags, ERL_NIF_TERM (*fp)(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[]), int argc, const ERL_NIF_TERM argv[])</nametext></name> + <fsummary>Schedule a dirty NIF for execution</fsummary> + <desc> + <p>Schedule dirty NIF <c>fp</c> to execute a long-running operation. The <c>flags</c> + argument must be set to either <c>ERL_NIF_DIRTY_JOB_CPU_BOUND</c> if the job is expected to + be primarily CPU-bound, or <c>ERL_NIF_DIRTY_JOB_IO_BOUND</c> for jobs that will be + I/O-bound. The <c>argc</c> and <c>argv</c> arguments can either be the originals passed + into the calling NIF, or they can be values created by the calling NIF. The calling + NIF must use the return value of <c>enif_schedule_dirty_nif</c> as its own return value.</p> + <p>Be aware that <c>enif_schedule_dirty_nif</c>, as its name implies, only schedules the + dirty NIF for future execution. The calling NIF does not block waiting for the dirty NIF to + execute and return, which means that the calling NIF can't expect to receive the dirty NIF + return value and use it for further operations.</p> + <p>A dirty NIF may not invoke the <seealso marker="#enif_make_badarg">enif_make_badarg</seealso> + to raise an exception. If it wishes to return an exception, the dirty NIF should pass a + regular result indicating the exception details to its finalizer, and allow the finalizer + to raise the exception on its behalf.</p> + <note><p>This function is available only when the emulator is configured with dirty schedulers + enabled. This feature is currently disabled by default. To determine whether the dirty NIF API + is available, native code can check to see if the C preprocessor macro + <c>ERL_NIF_DIRTY_SCHEDULER_SUPPORT</c> is defined.</p></note> + </desc> + </func> + <func><name><ret>ERL_NIF_TERM</ret><nametext>enif_schedule_dirty_nif_finalizer(ErlNifEnv* env, ERL_NIF_TERM result, ERL_NIF_TERM (*fp)(ErlNifEnv* env, ERL_NIF_TERM result))</nametext></name> + <fsummary>Schedule a dirty NIF finalizer</fsummary> + <desc> + <p>When a dirty NIF finishes executing, it must schedule a finalizer function to return + its result to the original NIF caller. The dirty NIF passes <c>result</c> as the value it + wants the finalizer to use as the return value. The <c>fp</c> argument is a pointer to the + finalizer function. The NIF API provides the <seealso marker="#enif_dirty_nif_finalizer"> + enif_dirty_nif_finalizer</seealso> function that can be used as a finalizer that simply + returns its <c>result</c> argument. 
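Because a dirty NIF may not raise exceptions itself (see enif_schedule_dirty_nif above), a custom finalizer is the natural place to raise a deferred badarg; writing your own finalizer is described just below. The fragment here sketches one such finalizer. The decode_* names, the use of the atom badarg as an in-band failure marker, and the placeholder computation are conventions invented for this example.

#include <erl_nif.h>
#ifdef ERL_NIF_DIRTY_SCHEDULER_SUPPORT

/* Custom finalizer: it runs on a normal scheduler, so it may raise badarg.
   Treating the atom 'badarg' as an in-band failure marker is a convention
   of this sketch only. */
static ERL_NIF_TERM decode_finalizer(ErlNifEnv* env, ERL_NIF_TERM result)
{
    if (enif_is_identical(result, enif_make_atom(env, "badarg")))
        return enif_make_badarg(env);
    return result;
}

/* Dirty NIF: signals failure through its result instead of raising. */
static ERL_NIF_TERM decode_dirty(ErlNifEnv* env, int argc, const ERL_NIF_TERM argv[])
{
    ErlNifBinary bin;
    ERL_NIF_TERM result;
    if (argc != 1 || !enif_inspect_binary(env, argv[0], &bin))
        result = enif_make_atom(env, "badarg");                  /* defer the exception */
    else
        result = enif_make_ulong(env, (unsigned long) bin.size); /* stand-in for slow decoding */
    return enif_schedule_dirty_nif_finalizer(env, result, decode_finalizer);
}

#endif /* ERL_NIF_DIRTY_SCHEDULER_SUPPORT */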
You are also free to write your own custom finalizer + that uses <c>result</c> to derive a different return value, or ignores <c>result</c> + entirely and returns a completely different value.</p> + <p>Without exception, all dirty NIFs must invoke <c>enif_schedule_dirty_nif_finalizer</c> + to complete their execution.</p> + <note><p>This function is available only when the emulator is configured with dirty + schedulers enabled. This feature is currently disabled by default. To determine whether + the dirty NIF API is available, native code can check to see if the C preprocessor macro + <c>ERL_NIF_DIRTY_SCHEDULER_SUPPORT</c> is defined.</p></note> + </desc> + </func> <func><name><ret>ErlNifPid *</ret><nametext>enif_self(ErlNifEnv* caller_env, ErlNifPid* pid)</nametext></name> <fsummary>Get the pid of the calling process.</fsummary> <desc><p>Initialize the pid variable <c>*pid</c> to represent the diff --git a/erts/doc/src/erlang.xml b/erts/doc/src/erlang.xml index 124302a2cb..4cf5631727 100644 --- a/erts/doc/src/erlang.xml +++ b/erts/doc/src/erlang.xml @@ -3011,11 +3011,11 @@ os_prompt% </pre> <tag><marker id="open_port_parallelism"><c>{parallelism, Boolean}</c></marker></tag> <item> <p>Set scheduler hint for port parallelism. If set to <c>true</c>, - the VM will schedule port tasks when it by this can improve the + the VM will schedule port tasks when doing so will improve parallelism in the system. If set to <c>false</c>, the VM will - try to perform port tasks immediately and by this improving the - latency at the expense of parallelism. The default can be set on - system startup by passing the + try to perform port tasks immediately, improving latency at the + expense of parallelism. The default can be set on system startup + by passing the <seealso marker="erl#+spp">+spp</seealso> command line argument to <seealso marker="erl">erl(1)</seealso>. </p> @@ -5194,6 +5194,27 @@ ok </func> <func> <name name="system_flag" arity="2" clause_i="3"/> + <fsummary>Set system flag dirty CPU schedulers online</fsummary> + <desc> + <p><marker id="system_flag_dirty_cpu_schedulers_online"></marker> + Sets the number of dirty CPU schedulers online. Valid range is + <![CDATA[1 <= DirtyCPUSchedulersOnline <= N]]> where <c>N</c> is the + lesser of the return values of <c>erlang:system_info(dirty_cpu_schedulers)</c> and + <c>erlang:system_info(schedulers_online)</c>. + </p> + <p>Returns the old value of the flag.</p> + <p><em>Note that the dirty schedulers functionality is experimental</em>, and + that you have to enable support for dirty schedulers when building OTP in + order to try the functionality out.</p> + <p>For more information, see + <seealso marker="#system_info_dirty_cpu_schedulers">erlang:system_info(dirty_cpu_schedulers)</seealso> + and + <seealso marker="#system_info_dirty_cpu_schedulers_online">erlang:system_info(dirty_cpu_schedulers_online)</seealso>. + </p> + </desc> + </func> + <func> + <name name="system_flag" arity="2" clause_i="4"/> <fsummary>Set system flag fullsweep_after</fsummary> <desc> <p><c><anno>Number</anno></c> is a non-negative integer which indicates @@ -5211,7 +5232,7 @@ ok </desc> </func> <func> - <name name="system_flag" arity="2" clause_i="4"/> + <name name="system_flag" arity="2" clause_i="5"/> <fsummary>Set system flag min_heap_size</fsummary> <desc> <p>Sets the default minimum heap size for processes.
The @@ -5226,7 +5247,7 @@ ok </desc> </func> <func> - <name name="system_flag" arity="2" clause_i="5"/> + <name name="system_flag" arity="2" clause_i="6"/> <fsummary>Set system flag min_bin_vheap_size</fsummary> <desc> <p>Sets the default minimum binary virtual heap size for processes. The @@ -5241,7 +5262,7 @@ ok </desc> </func> <func> - <name name="system_flag" arity="2" clause_i="6"/> + <name name="system_flag" arity="2" clause_i="7"/> <fsummary>Set system flag multi_scheduling</fsummary> <desc> <p><marker id="system_flag_multi_scheduling"></marker> @@ -5279,7 +5300,7 @@ ok </desc> </func> <func> - <name name="system_flag" arity="2" clause_i="7"/> + <name name="system_flag" arity="2" clause_i="8"/> <type name="scheduler_bind_type"/> <fsummary>Set system flag scheduler_bind_type</fsummary> <desc> @@ -5399,7 +5420,7 @@ ok </desc> </func> <func> - <name name="system_flag" arity="2" clause_i="8"/> + <name name="system_flag" arity="2" clause_i="9"/> <fsummary>Set system flag scheduler_wall_time</fsummary> <desc><p><marker id="system_flag_scheduler_wall_time"></marker> Turns on/off scheduler wall time measurements. </p> @@ -5409,7 +5430,7 @@ ok </desc> </func> <func> - <name name="system_flag" arity="2" clause_i="9"/> + <name name="system_flag" arity="2" clause_i="10"/> <fsummary>Set system flag schedulers_online</fsummary> <desc> <p><marker id="system_flag_schedulers_online"></marker> @@ -5425,7 +5446,7 @@ ok </desc> </func> <func> - <name name="system_flag" arity="2" clause_i="10"/> + <name name="system_flag" arity="2" clause_i="11"/> <fsummary>Set system flag trace_control_word</fsummary> <desc> <p>Sets the value of the node's trace control word to @@ -5785,6 +5806,71 @@ ok compiled; otherwise, <c>false</c>. </p> </item> + <tag><marker id="system_info_dirty_cpu_schedulers"><c>dirty_cpu_schedulers</c></marker></tag> + <item> + <p>Returns the number of dirty CPU scheduler threads used by + the emulator. Dirty CPU schedulers execute CPU-bound + native functions such as NIFs, linked-in driver code, and BIFs + that cannot be managed cleanly by the emulator's normal schedulers. + </p> + <p>The number of dirty CPU scheduler threads is determined at emulator + boot time and cannot be changed after that. The number of dirty CPU + scheduler threads online can however be changed at any time. The number of + dirty CPU schedulers can be set on startup by passing + the <seealso marker="erts:erl#+SDcpu">+SDcpu</seealso> command line flag, see + <seealso marker="erts:erl#+SDcpu">erl(1)</seealso>. 
+ </p> + <p><em>Note that the dirty schedulers functionality is experimental</em>, and + that you have to enable support for dirty schedulers when building OTP in + order to try the functionality out.</p> + <p>See also <seealso marker="#system_flag_dirty_cpu_schedulers_online">erlang:system_flag(dirty_cpu_schedulers_online, DirtyCPUSchedulersOnline)</seealso>, + <seealso marker="#system_info_dirty_cpu_schedulers_online">erlang:system_info(dirty_cpu_schedulers_online)</seealso>, + <seealso marker="#system_info_dirty_io_schedulers">erlang:system_info(dirty_io_schedulers)</seealso>, + <seealso marker="#system_info_schedulers">erlang:system_info(schedulers)</seealso>, + <seealso marker="#system_info_schedulers_online">erlang:system_info(schedulers_online)</seealso>, and + <seealso marker="#system_flag_schedulers_online">erlang:system_flag(schedulers_online, SchedulersOnline)</seealso>.</p> + </item> + <tag><marker id="system_info_dirty_cpu_schedulers_online"><c>dirty_cpu_schedulers_online</c></marker></tag> + <item> + <p>Returns the number of dirty CPU schedulers online. The return value + satisfies the following relationship: + <c><![CDATA[1 <= DirtyCPUSchedulersOnline <= N]]></c>, where <c>N</c> is + the lesser of the return values of <c>erlang:system_info(dirty_cpu_schedulers)</c> and + <c>erlang:system_info(schedulers_online)</c>. + </p> + <p>The number of dirty CPU schedulers online can be set on startup by passing + the <seealso marker="erts:erl#+SDcpu">+SDcpu</seealso> command line flag, see + <seealso marker="erts:erl#+SDcpu">erl(1)</seealso>. + </p> + <p><em>Note that the dirty schedulers functionality is experimental</em>, and + that you have to enable support for dirty schedulers when building OTP in + order to try the functionality out.</p> + <p>For more information, see + <seealso marker="#system_info_dirty_cpu_schedulers">erlang:system_info(dirty_cpu_schedulers)</seealso>, + <seealso marker="#system_info_dirty_io_schedulers">erlang:system_info(dirty_io_schedulers)</seealso>, + <seealso marker="#system_info_schedulers_online">erlang:system_info(schedulers_online)</seealso>, and + <seealso marker="#system_flag_dirty_cpu_schedulers_online">erlang:system_flag(dirty_cpu_schedulers_online, DirtyCPUSchedulersOnline)</seealso>. + </p> + </item> + <tag><marker id="system_info_dirty_io_schedulers"><c>dirty_io_schedulers</c></marker></tag> + <item> + <p>Returns the number of dirty I/O schedulers as an integer. Dirty I/O schedulers + execute I/O-bound native functions such as NIFs and linked-in driver code that + cannot be managed cleanly by the emulator's normal schedulers. + </p> + <p>This value can be set on startup by passing + the <seealso marker="erts:erl#+SDio">+SDio</seealso> command line flag, see + <seealso marker="erts:erl#+SDio">erl(1)</seealso>. + </p> + <p><em>Note that the dirty schedulers functionality is experimental</em>, and + that you have to enable support for dirty schedulers when building OTP in + order to try the functionality out.</p> + <p>For more information, see + <seealso marker="#system_info_dirty_cpu_schedulers">erlang:system_info(dirty_cpu_schedulers)</seealso>, + <seealso marker="#system_info_dirty_cpu_schedulers_online">erlang:system_info(dirty_cpu_schedulers_online)</seealso>, and + <seealso marker="#system_flag_dirty_cpu_schedulers_online">erlang:system_flag(dirty_cpu_schedulers_online, DirtyCPUSchedulersOnline)</seealso>. 
+ </p> + </item> <tag><c>dist</c></tag> <item> <p>Returns a binary containing a string of distribution diff --git a/erts/doc/src/erts_alloc.xml b/erts/doc/src/erts_alloc.xml index fffa9f594c..c9eca39a99 100644 --- a/erts/doc/src/erts_alloc.xml +++ b/erts/doc/src/erts_alloc.xml @@ -395,16 +395,17 @@ <c><![CDATA[<utilization>]]></c> is an integer in the range <c>[0, 100]</c> representing utilization in percent. When a utilization value larger than zero is used, allocator instances - are allowed to abandon multiblock carriers. Currently the default - is zero. If <c>de</c> (default enabled) is passed instead of a - <c><![CDATA[<utilization>]]></c>, a recomended non zero utilization - value will be used. The actual value chosen depend on allocator - type and may be changed between ERTS versions. Carriers will be - abandoned when memory utilization in the allocator instance falls - below the utilization value used. Once a carrier has been abandoned, - no new allocations will be made in it. When an allocator instance - gets an increased multiblock carrier need, it will first try to - fetch an abandoned carrier from an allocator instances of the same + are allowed to abandon multiblock carriers. If <c>de</c> (default + enabled) is passed instead of a <c><![CDATA[<utilization>]]></c>, + a recommended non-zero utilization value will be used. The actual + value chosen depends on allocator type and may be changed between + ERTS versions. Currently the default equals <c>de</c>, but this + may be changed in the future. Carriers will be abandoned when + memory utilization in the allocator instance falls below the + utilization value used. Once a carrier has been abandoned, no new + allocations will be made in it. When an allocator instance gets an + increased multiblock carrier need, it will first try to fetch an + abandoned carrier from an allocator instance of the same allocator type. If no abandoned carrier could be fetched, it will create a new empty carrier. When an abandoned carrier has been fetched it will function as an ordinary carrier. This feature has diff --git a/erts/doc/src/escript.xml b/erts/doc/src/escript.xml index 180447cac4..d2b09d4515 100644 --- a/erts/doc/src/escript.xml +++ b/erts/doc/src/escript.xml @@ -44,6 +44,7 @@ <p><c>escript</c> runs a script written in Erlang.</p> <p>Here follows an example.</p> <pre> +$ <input>chmod u+x factorial</input> $ <input>cat factorial</input> #!/usr/bin/env escript %% -*- erlang -*- @@ -66,12 +67,13 @@ usage() -> fac(0) -> 1; fac(N) -> N * fac(N-1). -$ <input>factorial 5</input> +$ <input>./factorial 5</input> factorial 5 = 120 -$ <input>factorial</input> +$ <input>./factorial</input> usage: factorial integer -$ <input>factorial five</input> -usage: factorial integer </pre> +$ <input>./factorial five</input> +usage: factorial integer + </pre> <p>The header of the Erlang script in the example differs from a normal Erlang module. The first line is intended to be the interpreter line, which invokes <c>escript</c>. However if you diff --git a/erts/doc/src/notes.xml b/erts/doc/src/notes.xml index 8c008c493e..b4ebef72f4 100644 --- a/erts/doc/src/notes.xml +++ b/erts/doc/src/notes.xml @@ -257,7 +257,7 @@ processes before the BIF returns, or fail with an exception due to the port not being open.
</p><p> The synchronous port BIFs are: </p> <list> <item><seealso - marker="erlang#port_close/1/"><c>port_close/1</c></seealso></item> + marker="erlang#port_close/1"><c>port_close/1</c></seealso></item> <item><seealso marker="erlang#port_command/2"><c>port_command/2</c></seealso></item> <item><seealso diff --git a/erts/doc/src/zlib.xml b/erts/doc/src/zlib.xml index afc597b729..11a7437f5a 100644 --- a/erts/doc/src/zlib.xml +++ b/erts/doc/src/zlib.xml @@ -161,20 +161,22 @@ list_to_binary([Compressed|Last])</pre> state. <c><anno>MemLevel</anno></c>=1 uses minimum memory but is slow and reduces compression ratio; <c><anno>MemLevel</anno></c>=9 uses maximum memory for optimal speed. The default value is 8.</p> - <p>The <c><anno>Strategy</anno></c> parameter is used to tune the - compression algorithm. Use the value <c>default</c> for - normal data, <c>filtered</c> for data produced by a filter - (or predictor), or <c>huffman_only</c> to force Huffman - encoding only (no string match). Filtered data consists - mostly of small values with a somewhat random - distribution. In this case, the compression algorithm is - tuned to compress them better. The effect of - <c>filtered</c>is to force more Huffman coding and less - string matching; it is somewhat intermediate between - <c>default</c> and <c>huffman_only</c>. The <c><anno>Strategy</anno></c> - parameter only affects the compression ratio but not the - correctness of the compressed output even if it is not set - appropriately.</p> + <p>The <c><anno>Strategy</anno></c> parameter is used to tune + the compression algorithm. Use the value <c>default</c> for + normal data, <c>filtered</c> for data produced by a filter (or + predictor), <c>huffman_only</c> to force Huffman encoding + only (no string match), or <c>rle</c> to limit match + distances to one (run-length encoding). Filtered data + consists mostly of small values with a somewhat random + distribution. In this case, the compression algorithm is tuned + to compress them better. The effect of <c>filtered</c> is to + force more Huffman coding and less string matching; it is + somewhat intermediate between <c>default</c> and + <c>huffman_only</c>. <c>rle</c> is designed to be almost as + fast as <c>huffman_only</c>, but gives better compression for PNG + image data. The <c><anno>Strategy</anno></c> parameter only + affects the compression ratio but not the correctness of the + compressed output even if it is not set + appropriately.</p> </desc> </func> <func>
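Since the Erlang zlib module wraps the zlib C library, the strategy atoms default, filtered, huffman_only and rle correspond to zlib's own constants Z_DEFAULT_STRATEGY, Z_FILTERED, Z_HUFFMAN_ONLY and Z_RLE. A small C sketch of where the strategy value enters a one-shot deflate follows; the helper name compress_with_strategy and the fixed windowBits and memLevel values are choices made for this example, not anything mandated by the module.

#include <zlib.h>
#include <string.h>

/* One-shot deflate with an explicit strategy. Pass Z_DEFAULT_STRATEGY,
   Z_FILTERED, Z_HUFFMAN_ONLY or Z_RLE, matching the Erlang atoms above.
   Returns the number of compressed bytes, or -1 on failure. */
static int compress_with_strategy(const unsigned char* in, size_t in_len,
                                  unsigned char* out, size_t out_len,
                                  int strategy)
{
    z_stream strm;
    int ret;
    int produced;
    memset(&strm, 0, sizeof strm);
    /* Same knobs as zlib:deflateInit/6: level, method, windowBits, memLevel
       and, last, the strategy parameter being discussed above. */
    if (deflateInit2(&strm, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                     15 /* windowBits */, 8 /* memLevel */, strategy) != Z_OK)
        return -1;
    strm.next_in   = (unsigned char*) in;
    strm.avail_in  = (uInt) in_len;
    strm.next_out  = out;
    strm.avail_out = (uInt) out_len;
    ret      = deflate(&strm, Z_FINISH);          /* single-shot compression */
    produced = (int) (out_len - strm.avail_out);
    deflateEnd(&strm);
    return ret == Z_STREAM_END ? produced : -1;
}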