This module is similar to ets in that it provides a storage for Erlang terms that can be accessed in constant time, but with the difference that persistent_term has been highly optimized for reading terms at the expense of writing and updating. When a persistent term is updated or deleted, a global garbage collection pass is run to scan all processes for the deleted term, and to copy it into each process that still uses it. Therefore, persistent_term is suitable for storing Erlang terms that are frequently accessed but never or infrequently updated.
Persistent terms is an advanced feature and is not a general replacement for ETS tables. Before using persistent terms, make sure to fully understand the consequences for system performance when updating or deleting persistent terms.
Term lookup (using get/1) is done in constant time and without taking any locks, and the term is not copied to the heap (as is the case with terms stored in ETS tables).
Storing or updating a term (using put/2) is proportional to the number of already created persistent terms, because the hash table holding the keys will be copied. In addition, the term itself will be copied.
When a (complex) term is deleted (using erase/1) or replaced by another (using put/2), a global garbage collection is initiated. It works like this:
All processes in the system will be scheduled to run a scan of their heaps for the term that has been deleted. While such a scan is relatively lightweight, if there are many processes, the system can become less responsive until all processes have scanned their heaps.
If the deleted term (or any part of it) is still used by a process, that process will do a major (fullsweep) garbage collection and copy the term into the process. However, at most two processes at a time will be scheduled to do that kind of garbage collection.
Deletion of atoms and other terms that fit in one machine word is specially optimized to avoid doing a global GC. It is still not recommended to update persistent terms with such values too frequently because the hash table holding the keys is copied every time a persistent term is updated.
Some examples of suitable uses for persistent terms are:
Storing of configuration data that must be easily accessible by all processes.
Storing of references for NIF resources.
Storing of references for efficient counters.
Storing an atom to indicate a logging level or whether debugging is turned on.
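As a sketch of the configuration use case above, a hypothetical app_config module (the module and function names are illustrative, not part of any API) might store one settings map at startup and read it cheaply from any process:

```erlang
-module(app_config).
-export([load/1, fetch/1]).

%% Store the whole configuration as a single persistent term,
%% keyed on the module name to avoid key collisions.
load(Settings) when is_map(Settings) ->
    persistent_term:put(?MODULE, Settings).

%% Constant-time, lock-free lookup; the stored map is not
%% copied to the calling process's heap.
fetch(Key) ->
    maps:get(Key, persistent_term:get(?MODULE)).
```

Because the whole configuration lives in one term, replacing it later costs a single global GC rather than one per setting.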
The current implementation of persistent terms uses the literal allocator also used for literals (constant terms) in BEAM code. By default, 1 GB of virtual address space is reserved for literals in BEAM code and persistent terms. The amount of reserved virtual address space can be changed by using the +MIscs option when starting the emulator.
Here is an example of how the reserved virtual address space for literals can be raised to 2 GB (2048 MB):
erl +MIscs 2048
The runtime system will send a warning report to the error logger if more than 20000 persistent terms have been created. It will look like this:
More than 20000 persistent terms have been created. It is recommended to avoid creating an excessive number of persistent terms, as creation and deletion of persistent terms will be slower as the number of persistent terms increases.
It is recommended to use keys like ?MODULE or {?MODULE, SubKey} to avoid name collisions.
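For example, inside a module named my_app_server (a hypothetical name), keys could be scoped like this, where Config, Limits, and Features stand for arbitrary terms:

```erlang
%% One term owned by the module:
persistent_term:put(?MODULE, Config),

%% Several terms owned by the module, distinguished by subkeys:
persistent_term:put({?MODULE, limits}, Limits),
persistent_term:put({?MODULE, features}, Features).
```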
Prefer creating a few large persistent terms to creating many small persistent terms. The execution time for storing a persistent term is proportional to the number of already existing terms.
Updating a persistent term with the same value as it already has is specially optimized to do nothing quickly; thus, there is no need to compare the old and new values and avoid calling put/2 if the values are equal.
When atoms or other terms that fit in one machine word are deleted, no global GC is needed. Therefore, persistent terms that have atoms as their values can be updated more frequently, but note that updating such persistent terms is still much more expensive than reading them.
Updating or deleting a persistent term will trigger a global GC if the term does not fit in one machine word. Processes will be scheduled as usual, but all processes will be made runnable at once, which will make the system less responsive until all processes have run and scanned their heaps for the deleted terms. One way to minimize the effects on responsiveness could be to minimize the number of processes on the node before updating or deleting a persistent term. It would also be wise to avoid updating terms when the system is at peak load.
Avoid storing a retrieved persistent term in a process if that persistent term could be deleted or updated in the future. If a process holds a reference to a persistent term when the term is deleted, the process will be garbage collected and the term copied to the process.
Avoid updating or deleting more than one persistent term at a time. Each deleted term will trigger its own global GC. That means that deleting N terms will make the system less responsive N times longer than deleting a single persistent term. Therefore, terms that are to be updated at the same time should be collected into a larger term, for example, a map or a tuple.
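For instance, instead of three separate calls to put/2 (and up to three global GCs), the values can be combined into a single map under one key (the key and field names here are illustrative):

```erlang
%% One call to put/2, so at most one global GC is triggered:
persistent_term:put(?MODULE, #{log_level  => debug,
                               rate_limit => 1000,
                               features   => [tracing, metrics]}).
```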
The following example shows how lock contention for ETS tables can be minimized by having one ETS table for each scheduler. The table identifiers for the ETS tables are stored as a single persistent term:
%% There is one ETS table for each scheduler.
Sid = erlang:system_info(scheduler_id),
Tid = element(Sid, persistent_term:get(?MODULE)),
ets:update_counter(Tid, Key, 1).
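The per-scheduler tables themselves can be created once at startup. A minimal sketch (the table options shown are assumptions; any public table suitable for concurrent writes would do):

```erlang
%% Create one ETS table per scheduler and publish the tuple of
%% table identifiers as a single persistent term.
init() ->
    NumSchedulers = erlang:system_info(schedulers),
    Tids = [ets:new(?MODULE, [set, public, {write_concurrency, true}])
            || _ <- lists:seq(1, NumSchedulers)],
    persistent_term:put(?MODULE, list_to_tuple(Tids)).
```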
Any Erlang term.
Any Erlang term.
Erase the name for the persistent term with key Key.
If there existed a previous persistent term associated with key Key, true is returned; otherwise false is returned.
Retrieve the keys and values for all persistent terms.
The keys will be copied to the heap for the process calling get/0.
Retrieve the value for the persistent term associated with the key Key. The lookup is made in constant time and the value is not copied to the heap of the calling process.
This function fails with a badarg exception if no term has been stored with the key Key.
If the calling process holds on to the value of the persistent term and the persistent term is deleted in the future, the term will be copied to the process.
Return information about persistent terms in a map. The map has the following keys:
count - The number of persistent terms.
memory - The total amount of memory (measured in bytes) used by all persistent terms.
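For instance, the two counters described above can be inspected like this:

```erlang
%% Match out the count and memory fields of the info map.
#{count := Count, memory := Memory} = persistent_term:info(),
io:format("~p persistent terms using ~p bytes~n", [Count, Memory]).
```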
Store the value Value as a persistent term and associate it with the key Key.

If the value Value is equal to the value previously stored for the key, put/2 will do nothing and return quickly.
If there existed a previous persistent term associated with key Key, it is replaced by Value.