author    | Rickard Green <[email protected]> | 2015-01-05 11:04:34 +0100
committer | Rickard Green <[email protected]> | 2015-01-14 20:24:45 +0100
commit    | 24fa075b5c0d54f2035a2ff510a82aa19187eda4 (patch)
tree      | 2e26371bbcf360ae53f75c6bad1799aa375c1b72 /lib/diameter
parent    | ce73c38b10d1dee5209b505ef054b108e747b522 (diff)
Improve ethread atomics based on GCC builtins
* Use __atomic builtins when available (see the first sketch after
this list).
* Improved the configure test that checks for a missing memory
barrier in __sync_synchronize() (the idea is sketched after this
list). The old approach was to keep a list of known-working gcc
versions and check the gcc version at compile time. Besides not
being very safe, the old approach often caused the very expensive
workaround to be used unnecessarily.
* Introduced a (no overhead) workaround for the missing memory
clobber in buggy LLVM implementations of __sync_synchronize()
(see the sketch after this list).
* Implement native memory barriers for ARM processors that support
the DMB instruction (sketched after this list).
* Use a volatile store on Alpha as the atomic set operation if no
__atomic_store_n() is available (already used on x86/x86_64,
Sparc V9, PowerPC, and MIPS). The fallback used when a volatile
store cannot be used is typically very expensive.
* Use a volatile load on Alpha and ARM as the atomic read operation
if no __atomic_load_n() is available (already used on x86/x86_64,
Sparc V9, PowerPC, and MIPS). The fallback used when a volatile
load cannot be used is typically very expensive. Both are sketched
after this list.
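
A minimal sketch of the first point, assuming a long-sized atomic word; the eth_* names are illustrative, not ethread's actual API. The newer __atomic builtins are used when the compiler predefines the __ATOMIC_* macros, with the older __sync builtins as the fallback:

```c
/* Illustrative only: prefer __atomic builtins, fall back to __sync. */
typedef struct { volatile long value; } eth_atomic_t;

static inline long
eth_atomic_add_return(eth_atomic_t *a, long incr)
{
#if defined(__ATOMIC_SEQ_CST)               /* __atomic builtins available */
    return __atomic_add_fetch(&a->value, incr, __ATOMIC_SEQ_CST);
#else                                       /* older __sync builtins */
    return __sync_add_and_fetch(&a->value, incr);
#endif
}

static inline long
eth_atomic_cmpxchg(eth_atomic_t *a, long expected, long desired)
{
#if defined(__ATOMIC_SEQ_CST)
    __atomic_compare_exchange_n(&a->value, &expected, desired, 0,
                                __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
    return expected;                        /* old value, updated on failure */
#else
    return __sync_val_compare_and_swap(&a->value, expected, desired);
#endif
}
```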
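The improved configure test is described only in prose above; the sketch below shows one way such a runtime check can work, assuming a Dekker/store-buffering style test (this is an illustration, not the actual program added to configure). With a working full barrier, the two threads can never both read 0 in the same round; if that outcome is ever observed, __sync_synchronize() is not acting as a full memory barrier:

```c
#include <pthread.h>
#include <stdio.h>

#define ROUNDS 100000

static volatile int flag[2];    /* each thread's "I have stored" flag   */
static int saw[2];              /* what each thread read from the other */
static int broken;
static pthread_barrier_t bar;

static void *worker(void *arg)
{
    int me = *(int *)arg, other = !me;
    int i;

    for (i = 0; i < ROUNDS; i++) {
        pthread_barrier_wait(&bar);   /* resets from last round are visible */
        flag[me] = 1;
        __sync_synchronize();         /* the barrier under test             */
        saw[me] = flag[other];
        pthread_barrier_wait(&bar);   /* both threads have stored and read  */
        if (me == 0 && !saw[0] && !saw[1])
            broken = 1;               /* store-load reordering observed     */
        flag[me] = 0;                 /* reset for the next round           */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[2];
    int id[2] = { 0, 1 };

    pthread_barrier_init(&bar, NULL, 2);
    pthread_create(&tid[0], NULL, worker, &id[0]);
    pthread_create(&tid[1], NULL, worker, &id[1]);
    pthread_join(tid[0], NULL);
    pthread_join(tid[1], NULL);
    printf("__sync_synchronize() %s\n",
           broken ? "is NOT a full memory barrier" : "looks like a full barrier");
    return broken;
}
```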
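The zero-overhead workaround for the missing clobber can be sketched as follows (the function name is illustrative): empty inline asm statements that clobber only "memory" act as pure compiler barriers, so wrapping the buggy builtin in them restores compiler-level ordering without emitting any extra instructions:

```c
/* Illustrative: compensate for an LLVM __sync_synchronize() that emits
 * the hardware fence but lacks the "memory" clobber. */
static inline void
eth_full_memory_barrier(void)
{
    __asm__ __volatile__("" : : : "memory");   /* compiler barrier, no code */
    __sync_synchronize();                      /* hardware fence            */
    __asm__ __volatile__("" : : : "memory");   /* compiler barrier, no code */
}
```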
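A native ARM memory barrier based on DMB can be sketched as below; the function name is illustrative, and the unconditional full-system "sy" option is an assumption (weaker variants exist for less strict barriers):

```c
#if defined(__arm__) || defined(__aarch64__)
/* Illustrative: full data memory barrier via the DMB instruction. */
static inline void
eth_arm_dmb(void)
{
    __asm__ __volatile__("dmb sy" : : : "memory");
}
#endif
```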
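The last two points boil down to using plain volatile accesses as the relaxed atomic read and set operations on architectures where aligned word-sized loads and stores are single-copy atomic. A minimal sketch, again with illustrative names:

```c
typedef struct { volatile long value; } eth_atomic_t;   /* illustrative */

static inline void
eth_atomic_set(eth_atomic_t *a, long v)
{
#if defined(__ATOMIC_RELAXED)      /* __atomic_store_n() available        */
    __atomic_store_n(&a->value, v, __ATOMIC_RELAXED);
#else
    a->value = v;                  /* aligned volatile store, atomic here */
#endif
}

static inline long
eth_atomic_read(eth_atomic_t *a)
{
#if defined(__ATOMIC_RELAXED)      /* __atomic_load_n() available         */
    return __atomic_load_n(&a->value, __ATOMIC_RELAXED);
#else
    return a->value;               /* aligned volatile load, atomic here  */
#endif
}
```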
Diffstat (limited to 'lib/diameter')
0 files changed, 0 insertions, 0 deletions