* dgud/mnesia/try-catch:
mnesia: Replace catch with try-catch
Avoids building stacktraces where they are not needed and does not
mask errors, i.e. only the relevant classes are caught in each try.
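For illustration, a generic before/after of this pattern (ets:lookup/2 is
used as a stand-in here, not an actual Mnesia call site): a bare catch
wraps errors in an {'EXIT', ...} term that carries a stacktrace and
silently swallows every exception class, while try ... catch can be
restricted to the class and reason that is actually expected.

    %% Before: catches every class; for errors the {'EXIT', ...} term
    %% also carries a stacktrace that is built whether or not it is used.
    lookup_old(Tab, Key) ->
        case catch ets:lookup(Tab, Key) of
            {'EXIT', _Reason} -> [];
            Objs -> Objs
        end.

    %% After: only the expected error is caught; throws, exits and other
    %% errors propagate instead of being masked.
    lookup_new(Tab, Key) ->
        try ets:lookup(Tab, Key)
        catch error:badarg -> []
        end.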
During Mnesia startup, after protocol negotiation, the list of connected
nodes is written to "recover_nodes". This list is later used to merge
the schema.
If Mnesia was stopped on a remote node between the protocol negotiation
and the moment the list is stored in "recover_nodes", the remote node
is still considered running: the value of "recover_nodes" stored during
mnesia_down/1 is overwritten. Therefore, this node may be used to
acquire a write lock on the schema in order to perform the merge. In
this case, the remote node never answers the lock request and Mnesia
hangs forever (application:start(mnesia) never returns).
To fix the problem, we check the list one last time and remove from it
all nodes where Mnesia is stopped. Because there is still a chance of
missing a mnesia_down event, handle_cast({mnesia_down, ...}, ...) writes
to recover_nodes again, in addition to mnesia_down/1.
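A minimal sketch of that final check, not the actual patch: it assumes
each candidate node can be probed with the documented
mnesia:system_info(is_running) call over rpc, whereas the real fix works
on Mnesia's internal node state.

    %% Hypothetical helper: keep only nodes where Mnesia still reports
    %% that it is running before the list is written to "recover_nodes".
    filter_running(Nodes) ->
        [N || N <- Nodes,
              rpc:call(N, mnesia, system_info, [is_running]) =:= yes].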
timer:send_interval behaves badly when resuming from sleep on some
platforms. For example, if I sleep for 10 minutes, and have a
send_interval running once per minute, when I resume, 10 messages
will be sent immediately, eliminating the benefit of only running
the work periodically. This is admittedly a separate bug with
send_interval, but the workaround is straightforward, and also
protects from messages piling up in the queue when the work takes
longer than the interval.
This patch fixes piled-up error reports on resume from sleep:
** WARNING ** Mnesia is overloaded: {dump_log, write_threshold}
You'll still be warned if mnesia is overloaded, just not repeatedly.
Additionally, erlang:send_after is more efficient than the timer module
equivalent [1].
[1] http://www.erlang.org/doc/efficiency_guide/commoncaveats.html#id57251
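The usual workaround pattern looks roughly like the sketch below (the
message name check_interval and do_work/0 are placeholders, not names
from the patch): each handler schedules the next tick itself, so at most
one tick is ever queued and ticks missed during sleep are not replayed.

    %% Self-rescheduling timer in a gen_server: the next tick is armed
    %% only after the current one has been handled, so messages cannot
    %% pile up when the work is slow or the host resumes from sleep.
    init(IntervalMs) ->
        erlang:send_after(IntervalMs, self(), check_interval),
        {ok, IntervalMs}.

    handle_info(check_interval, IntervalMs) ->
        do_work(),                                      %% placeholder
        erlang:send_after(IntervalMs, self(), check_interval),
        {noreply, IntervalMs}.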
Instead of just appending decisions to the log, use mnesia_log:log(Decision);
it increments the counter that causes the log to be dumped even if no
actual commits are stored on this node.
This fixes a bug where the LATEST.log would grow forever on a node which
had the schema on disc but was not involved in any commits.
With help from Kostis
Resolve name clash with auto-imported BIF error/2.
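For context, a sketch of how such a clash is typically resolved (this is
not the actual Mnesia change, which may simply have renamed or qualified
the calls): either suppress the auto-import with a compile attribute, or
call the BIF fully qualified as erlang:error/2.

    -module(clash_example).
    %% Without this attribute the unqualified call to error/2 below is
    %% rejected as an ambiguous call of the auto-imported BIF.
    -compile({no_auto_import, [error/2]}).
    -export([fail/0]).

    fail() ->
        error("something went wrong: ~p~n", [badarg]).

    %% Local function that clashes with the BIF erlang:error/2.
    error(Format, Args) ->
        io:format(Format, Args).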
* uw/mnesia-overload:
Enable continuous monitoring of mnesia overload status
Mnesia currently issues an event whenever it detects an overload
condition. It recognizes two different types of overload:
- whenever the message queue of the mnesia_tm process grows large
- when a log dump interval triggers before the previous dump is done
These events could be used to trigger a load regulation mechanism
to reduce the load until the condition ceases. The missing piece
is that there is no facility to ask mnesia whether the overload
condition still exists.
This patch implements a couple of functions in mnesia_lib that
can be used to sample the overload status. It has been tested
in a load regulator component being developed by Erlang Solutions.
No mnesia test suites have been run, since they are not available.
No documentation has been updated. The functions in mnesia_lib
are at any rate undocumented (as are all other functions in that
module). A decision would have to be made about whether to provide
a documented API on top of these functions.
The internal state record of mnesia_recover has been modified.
For this reason, a code change hook has been provided.
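A sketch of how a regulator could poll such a sampling function; the
name mnesia_lib:overload_read/0 and its [{Kind, Boolean}] return shape
are assumptions made here for illustration, since the log above does not
name the new functions.

    -module(overload_probe).
    -export([poll/1]).

    %% Hypothetical polling loop: the sampled function name and return
    %% shape are assumed, not taken from the commit message above.
    poll(IntervalMs) ->
        case [Kind || {Kind, true} <- mnesia_lib:overload_read()] of
            []    -> ok;                                 %% overload has ceased
            Kinds -> io:format("still overloaded: ~p~n", [Kinds])
        end,
        timer:sleep(IntervalMs),
        poll(IntervalMs).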