Take care of the call to erlang:get_stacktrace() in
module mnesia_lib.
Fast restarts could cause a table to be blocked twice.
* dgud/mnesia/ext-backend/PR-858/OTP-13058:
mnesia_ext: Add basic backend extension tests
mnesia_ext: reuse snmp field for ext updates
mnesia_ext: Create table/data containers from mnesia monitor not temporary processes
mnesia_ext: Implement ext copies index
mnesia_ext: Load table ext
mnesia_ext: Dumper and schema changes
mnesia_ext: Refactor mnesia_schema.erl
mnesia_ext: Ext support in fragmented tables
mnesia_ext: Backup handling
mnesia_ext: Create schema functionality
mnesia_ext: Add ext copies and db_fold to low level api
mnesia_ext: Refactor record_validation code
mnesia_ext: Add create_external and increase protocol version to monitor
mnesia_ext: Add ext copies to records
mnesia_ext: Add supervisor and behaviour modules
move_table_copy needs the lock that was previously set in del_table_copy.
This doesn't work on old nodes, so bump the protocol version and check it.
Remove the old protocol conversion code, which has been around since OTP-R15.
To stay backwards compatible, checking whether the lock is needed requires
rpc communication via the mnesia_gvar ets table.
* dgud/mnesia/try-catch:
mnesia: Replace catch with try-catch
Avoids building stacktraces where they are not needed and does not
mask errors, i.e. only the relevant classes are caught in each try.
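As an illustrative sketch only (not code from this commit; the function
names are made up and ets:lookup_element is just a stand-in expression),
the kind of rewrite being applied looks like this:

%% Before: a bare catch builds a stacktrace and turns *any* failure,
%% expected or not, into an {'EXIT', ...} tuple.
lookup_or_old(Tab, Key, Default) ->
    case catch ets:lookup_element(Tab, Key, 2) of
        {'EXIT', _} -> Default;
        Val -> Val
    end.

%% After: only the expected error class is caught; other failures
%% propagate, and no stacktrace is built here.
lookup_or_new(Tab, Key, Default) ->
    try ets:lookup_element(Tab, Key, 2)
    catch error:badarg -> Default
    end.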
* richcarl/dcd-dumps:
Make Mnesia DCD dump behaviour available via API
Make Mnesia DCD dump behaviour available via configuration
OTP-12481
If a DCD dump is desired on demand, use the function
mnesia_controller:snapshot_dcd(Tables). Tables must be a list of
tables that have a local disc_copy, otherwise an error will be
returned. Once the operation actually executes, any table that doesn't
have a local disc_copy is ignored.
Specifically, the dump_log worker record has been changed to allow an
arity-0 fun instead of the default log dump. This fun is executed
as if it were a normal log dump and must return 'dumped'. This can
also be used to, e.g., insert a backup operation between log dumps.
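A minimal usage sketch (my_tab is a hypothetical table name; every
table in the list must have a local disc_copy):

%% Request an on-demand DCD dump of the given tables.
snapshot_my_tables() ->
    mnesia_controller:snapshot_dcd([my_tab]).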
By doing an abort, the create_table can be restarted
if a node goes down during the transaction, avoiding crashes like:
{badarg,
[{erlang,link,[undefined],[]},
{mnesia_controller,
wait_for_schema_commit_lock,0,
[{file,"mnesia_controller.erl"},
{line,303}]},
{mnesia_schema,prepare_commit,3,
[{file,"mnesia_schema.erl"},
{line,1838}]},
{mnesia_tm,commit_participant,6,
[{file,"mnesia_tm.erl"},
{line,1669}]}]}}},
* dgud/mnesia/force-load-hangs/OTP-11948:
mnesia: Handle failed net_loads better
In case of a failed net load with no more available copies,
remove the table from the late_load_queue; otherwise such tables
cannot be force loaded.
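For reference, force loading is requested through
mnesia:force_load_table/1; a minimal sketch with a hypothetical table
name my_tab:

%% Load my_tab from whatever replicas are reachable right now instead
%% of waiting for a better copy; returns yes or an error description.
force_load_my_tab() ->
    mnesia:force_load_table(my_tab).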
timer:send_interval behaves badly when resuming from sleep on some
platforms. For example, if I sleep for 10 minutes while a
send_interval is running once per minute, 10 messages are sent
immediately when I resume, eliminating the benefit of only running
the work periodically. This is admittedly a separate bug in
send_interval, but the workaround is straightforward and also
protects against messages piling up in the queue when the work takes
longer than the interval.
This patch fixes the piled-up error reports on resume from sleep:
** WARNING ** Mnesia is overloaded: {dump_log, write_threshold}
You'll still be warned if mnesia is overloaded, just not repeatedly.
Additionally, erlang:send_after is more efficient than the timer
module equivalent [1].
[1] http://www.erlang.org/doc/efficiency_guide/commoncaveats.html#id57251
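A minimal sketch of the workaround pattern (illustrative only, not the
patch itself; do_periodic_work/0 is a placeholder): rearm the timer with
erlang:send_after/3 only after the periodic work has finished, so ticks
cannot accumulate while the process is suspended or busy:

start(Interval) ->
    erlang:send_after(Interval, self(), tick),
    loop(Interval).

loop(Interval) ->
    receive
        tick ->
            do_periodic_work(),
            erlang:send_after(Interval, self(), tick),
            loop(Interval)
    end.

do_periodic_work() ->
    ok.   %% placeholder for the real periodic job, e.g. a log dump check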
* dgud/mnesia/read-sticky-bug/OTP-9786:
[mnesia] Read record from correct node
[mnesia] Fixed sticky read lock bug
[mnesia] Whitespace fixes
Conflicts:
lib/mnesia/src/mnesia_log.erl
create_table
Allow schema operations even if not all nodes are upgraded to the
latest version.
The do_merge_schema function now converts cstructs from a remote node
when it detects that they are different. To be compatible in the other
direction, mnesia_controller:get_cstructs() detects a remote caller and
converts the cstructs before sending them.
Resolve name clash with auto-imported BIF error/2.
* uw/mnesia-overload:
Enable continuous monitoring of mnesia overload status
* uw/mnesia-schema-merge:
remove debug printout and accidental variable name reuse
Allow a user_defined function to wrap mnesia_schema:merge_schema()
Mnesia currently notifies the user if it detects a partitioned
network, but the options for resolving the situation are limited.
In practice, the only safe options are:
- set master_nodes and restart one of the affected 'islands'
- restart the entire system from backup
This patch introduces a way to resolve the situation without
restarting any nodes. The key to doing this safely is to
lock affected tables and run the merge function inside the same
transaction that merges the schema. Otherwise, one transaction
will merge the schema, after which writes to the database will
be replicated across the (potentially) inconsistent copies;
the transaction triggered by the asynchronous inconsistency event
will have to race to be the first to access the tables.
The normal call to merge the schema is done from mnesia_controller.
Previously, this was mnesia_schema:merge_schema().
The new function is merge_schema(UserFun), with the
following behaviour:
merge_schema(UserFun) ->
    schema_transaction(
      fun() ->
              UserFun(fun(Arg) -> do_merge_schema(Arg) end)
      end).
Where do_merge_schema(LockTabs) will execute the schema merge
as before, but also lock all tables in the list LockTabs which
have copies on the affected nodes (that is, everywhere the schema
table is locked).
The effect of this is to allow a wrapper function that calls the
merge and, if successful, continues to resolve the inconsistency
on the tables, knowing that they have now been locked on all
affected nodes.
The function that is actually called by the deconflict function
is mnesia_controller:connect_nodes(Nodes, UserFun), as in:
Tables = tables_to_deconflict(Node),
mnesia_controller:connect_nodes(
  [Node],
  fun(MergeF) ->
          case MergeF(Tables) of
              {merged, _, _} ->
                  deconflict(Tables, Node);
              Other ->
                  Other
          end
  end).
In the case where the merge fails, it is probably wise to
restart from a backup...
I have not run the mnesia test suite, as it is not available.
I have not updated documentation, as these functions are not
documented in the first place.
Mnesia currently issues an event whenever it detects an overload
condition. It recognizes two different types of overload:
- whenever the message queue of the mnesia_tm process grows large
- when a log dump interval triggers before the previous dump is done
These events could be used to trigger a load regulation mechanism
to reduce the load until the condition ceases. The missing piece
is that there is no facility to ask mnesia whether the overload
condition still exists.
This patch implements a couple of functions in mnesia_lib that
can be used to sample the overload status. It has been tested
in a load regulator component being developed by Erlang Solutions.
No mnesia test suites have been run, since they are not available.
No documentation has been updated. The functions in mnesia_lib
are at any rate undocumented (as are all other functions in that
module). A decision would have to be made about whether to provide
a documented API on top of these functions.
The internal state record of mnesia_recover has been modified.
For this reason, a code change hook has been provided.
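The commit text does not name the new functions; as an assumption for
illustration only, suppose the sampling function is
mnesia_lib:overload_read/0 and that it returns a list of
{Condition, boolean()} flags. A load regulator could then poll it:

%% Assumed API (not stated in the commit text): overload_read/0 returns
%% one {Condition, Flag} pair per overload condition.
mnesia_overloaded() ->
    lists:any(fun({_Cond, Flag}) -> Flag end,
              mnesia_lib:overload_read()).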