Age | Commit message | Author |
|
When building match result patterns, tuples must be quoted
with { }, which causes a problem with variable patterns.
Use element(1, Match) instead of trying to build the two-tuple.
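For background, a minimal sketch (hypothetical table and match specifications, not the actual mnesia code) of why result tuples are awkward and how selecting element 1 of the matched object sidesteps the problem:

    %% A literal tuple in a match-spec result must be written with an extra
    %% pair of braces, otherwise it is read as a match-spec call:
    MS1 = [{{'_', '$1', '$2'}, [], [{{'$1', '$2'}}]}],
    %% Returning element 1 of the whole matched object ('$_') avoids having
    %% to build the tuple at all, which also works for variable patterns:
    MS2 = [{'_', [], [{element, 1, '$_'}]}],
    mnesia:dirty_select(person, MS2).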
|
|
* dgud/mnesia/ext-backend/PR-858/OTP-13058:
mnesia_ext: Add basic backend extension tests
mnesia_ext: reuse snmp field for ext updates
mnesia_ext: Create table/data containers from mnesia monitor not temporary processes
mnesia_ext: Implement ext copies index
mnesia_ext: Load table ext
mnesia_ext: Dumper and schema changes
mnesia_ext: Refactor mnesia_schema.erl
mnesia_ext: Ext support in fragmented tables
mnesia_ext: Backup handling
mnesia_ext: Create schema functionality
mnesia_ext: Add ext copies and db_fold to low level api
mnesia_ext: Refactor record_validation code
mnesia_ext: Add create_external and increase protocol version to monitor
mnesia_ext: Add ext copies to records
mnesia_ext: Add supervisor and behaviour modules
|
|
|
|
Make ram_copies index always use ordered_set
And treat the index type as a preferred type, not an implementation requirement;
the standard implementation will currently ignore the preferred type.
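For illustration, a minimal sketch (hypothetical table name and attributes) of declaring such an index; after this change its storage type is a preference that the standard backend may ignore:

    %% ram_copies table with a secondary index on the age attribute
    {atomic, ok} = mnesia:create_table(person,
                       [{ram_copies, [node()]},
                        {attributes, [id, name, age]},
                        {index, [age]}]).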
|
|
and increase some timeouts for very slow machines (arm)
|
|
|
|
* maint:
mnesia: let loader check if tablelock is needed
mnesia: Avoid deadlock possibility in mnesia:del_table_copy schema
|
|
del_table_copy grabs a write lock in a new process in prepare_op/3 to
change 'where_to_read' when a table copy is updated.
When del_table_copy(schema, Node) is called, all copies located on Node
are deleted, and thus many locks are taken. Since this was done outside
of the schema transaction, mnesia's deadlock prevention algorithms
were sidestepped and a deadlock could occur.
Fix by always grabbing write locks for all changed tables early and in the same
transaction. This might slow down the operation somewhat, but it must be done,
and it also cleans up the code.
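For context, a minimal sketch (OtherNode is a hypothetical node) of the operation that could deadlock, since dropping the schema copy implicitly drops every table copy held on that node:

    %% remove all mnesia data (every table copy) stored on OtherNode;
    %% the required write locks are now taken inside the schema transaction
    {atomic, ok} = mnesia:del_table_copy(schema, OtherNode).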
|
|
|
|
The rewrite to try-catch introduced a leak of disk_log processes.
|
|
There is no need to update the index table if a record is updated in a
non-indexed field. This removes one timing glitch where dirty_index_read would
return an empty list for records that were updated.
There is still an issue with dirty_index_read when updates are made to
the indexed field; it has been reduced, but the real table updates are made
after the index table references have been added.
Originally reported by Nick Marino on the erl-questions mailing list, thanks.
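For reference, a minimal sketch (hypothetical record and table) of the kind of dirty index read affected by the glitch:

    -record(person, {id, name, age}).

    %% a concurrent update of a non-indexed field (name) should no longer
    %% make this read transiently return []
    mnesia:dirty_index_read(person, 30, #person.age).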
|
|
|
|
|
|
The docs state that exit({aborted, Reason}) is called when
an error occurs.
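A minimal sketch (hypothetical table and key) of relying on that documented behaviour from the caller's side:

    %% dirty operations exit with {aborted, Reason} on errors, so callers
    %% can trap exactly that exit reason
    try
        mnesia:dirty_read(person, 1)
    catch
        exit:{aborted, Reason} ->
            {error, Reason}
    end.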
|
|
* dgud/mnesia/try-catch:
mnesia: Replace catch with try-catch
|
|
Avoids building stacktraces where they are not needed and does
not mask errors, i.e. only catches the relevant classes in each try.
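For illustration, a minimal sketch (do_work/1 and Arg are hypothetical) of the pattern, replacing a blanket catch with a try that only handles the expected class:

    %% old style: Res = (catch do_work(Arg)) catches every class, builds a
    %% stacktrace on errors and can hide real bugs
    %% new style: only the expected exception class is handled
    try do_work(Arg) of
        Result -> {ok, Result}
    catch
        throw:Reason -> {error, Reason}
    end.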
|
|
|
|
|
|
|
|
|
|
match_object returned the wrong objects when matching on non-key fields
and updates had been performed in the same transaction.
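A minimal sketch (hypothetical record and table) of the affected pattern, writing and then matching on a non-key field inside one transaction:

    -record(person, {id, name, age}).

    mnesia:transaction(fun() ->
        ok = mnesia:write(#person{id = 1, name = "anna", age = 30}),
        %% match on the non-key field age within the same transaction
        mnesia:match_object(#person{age = 30, _ = '_'})
    end).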
|
|
Need to re-raise in the match macro if inside a transaction
|
|
|
|
* schlagert/fix_basic_appups:
Dynamically configure typer_SUITE according to environment
Disable hipe_SUITE when environment doesn't support it
Make hipe non-upgradable by setting appup file empty
Fix missing module on hipe app file template
Add test suites performing app and appup file checks
Introduce appup test utility
Fix library application appup files
Fix non-library appup files according to issue #240
OTP-11744
|
|
Add the mentioned test suites for *all* library and touched
non-library applications.
|
|
* dgud/mnesia/add-sync-log/OTP-11729:
mnesia: cleanup some dialyzer unmatched return warnings
mnesia: Shorten testcase names
mnesia: Improve mnesia coredump info
mnesia: Add explicit sync_log command
|
|
For Windows tests (limited path lengths)
|
|
For performance reasons, the file data is not synced to disk in mnesia,
so data loss can happen between each dump.
mnesia:dump_log() can be used explicitly to ensure data is written to disk,
but that can take a long time, so mnesia:sync_log(), which just
syncs the log, has been added.
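A minimal usage sketch of the two calls (the exact return values are an assumption here):

    %% dump the transaction log into the table files; durable but can be slow
    dumped = mnesia:dump_log(),
    %% only sync the log file to disk; much cheaper, added by this change
    ok = mnesia:sync_log().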
|
|
|
|
dirty_update_counter returned the wrong value when a subscriber existed
and no events were sent. Thanks to Anton Ryabkov.
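For reference, a minimal sketch (hypothetical counter table and key) of the call whose return value was affected:

    %% atomically add 1 to the counter stored under the key hits and
    %% return the new value
    NewValue = mnesia:dirty_update_counter(counters, hits, 1).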
|
|
|
|
|
|
|
|
|
|
If mnesia:clear_table/1 was called during a table load, a schema event
with a delete and write of that table was sent, which caused the table
to contain the schema record instead of being cleared.
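A minimal sketch (hypothetical table name) of the call in question:

    %% clearing a table while it is being loaded elsewhere should no longer
    %% leave the schema record behind in the table
    {atomic, ok} = mnesia:clear_table(person).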
|
|
|
|
|
|
|
|
|
|
Crashes on Windows XP otherwise (path > 256 characters).
|
|
Conflicts:
lib/diameter/autoconf/vxworks/sed.general
xcomp/README.md
|
|
|
|
* maint:
Bumped version nr
ssl & public_key: Workaround that some certificates encode countryname as utf8 and close down gracefully if other ASN-1 errors occur.
Add more cross reference links to ct docs
Remove config option from common_test args
Update user config to use nested tuple keys
Allow mixed IPv4 and IPv6 addresses to sctp_bindx
Add checks for in6addr_any and in6addr_loopback
Fix SCTP multihoming
observer: fix app file (Noticed-by: Motiejus Jakstys)
Fix lib/src/test/ssh_basic_SUITE.erl to fix IPv6 option typos
Prevent index from being corrupted if a nonexistent item is deleted
Add tests showing that trying to delete non-existing object may corrupt the table index
Fix Table Viewer search crash on new|changed|deleted rows
Escape control characters in Table Viewer
Fix Table Viewer crash after a 'Found' -> 'Not found' search sequence
inet_drv.c: Set sockaddr lengths in inet_set_[f]address
Conflicts:
erts/preloaded/ebin/prim_inet.beam
|
|
* egil/r16/remove-vxworks-support/OTP-10146: (30 commits)
erts: Update doc to reflect VxWorks removal
erts: Remove VxWorks from documentation
Update preloaded erl_prim_loader and prim_file
snmp: Remove VxWorks
megaco: Remove VxWorks
diameter: Remove VxWorks
mnesia: Remove VxWorks
typer: Remove VxWorks
ssh: Remove VxWorks
inets: Remove VxWorks
hipe: Remove VxWorks
cosFileTransfer: Remove VxWorks
asn1: Remove VxWorks
orber: Remove VxWorks
os_mon: Remove VxWorks
runtime_tools: Remove VxWorks
test_server: Remove VxWorks references in doc
test_server: Remove VxWorks
stdlib: Remove VxWorks
kernel: Remove VxWorks from tests
...
Conflicts:
erts/emulator/test/Makefile
|
|
|
|
|
|
table index
In case of bag tables, trying to delete a non-existing object leads to
the index becoming corrupt. This happens if the non-existing object we
try to delete happens to share its key and index field value with a single
existing object in the table.
Result: The index entry corresponding to the existing object is
removed.
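A minimal sketch (hypothetical bag table, record and index) of the sequence that corrupted the index, deleting an object that does not exist but shares its key and indexed value with an existing one:

    -record(item, {key, color, size}).

    %% bag table indexed on color (table setup elided); one existing object:
    mnesia:dirty_write(#item{key = 1, color = red, size = small}),
    %% deleting a non-existing object with the same key and color ...
    mnesia:dirty_delete_object(#item{key = 1, color = red, size = large}),
    %% ... used to drop the index entry for the existing object, so this
    %% index read could come back empty
    mnesia:dirty_index_read(item, red, #item.color).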
|
|
OTP-10106
OTP-10107
|
|
|
|
|