path: root/lib/stdlib/test/stdlib.spec
author     José Valim <[email protected]>  2018-04-30 00:17:07 +0800
committer  José Valim <[email protected]>  2018-08-07 18:52:04 +0200
commit     db41ff2fd39928a973eea25c3babfa5193f27319 (patch)
tree       5c2400f4f5ac09ab6e8bf19d415100763d31302c /lib/stdlib/test/stdlib.spec
parent     52c11d5afd18405eaa293bb881eddf23f408850f (diff)
Optimize binary match
The idea is to use memchr on the first lookup for binary:match/2 and also after every match in binary:matches/2. We only use memchr after actual matches because benchmarks showed that reaching for it even after false positives could hurt performance.

This speeds up binary matching and binary splitting by 4x in some cases, and by up to 70x in scenarios where the last character of the needle does not occur in the subject at all.

The reason to use memchr is that it is highly specialized in most modern operating systems, often implemented with SIMD operations. The implementation uses the reduction count to figure out how many bytes should be read with memchr; increasing those numbers does not seem to make a large difference.
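As a rough illustration of the technique, here is a minimal, self-contained C sketch. This is not the ERTS code: the function name memchr_match and its shape are invented for the example, and the reduction-count capping is omitted.

#include <stddef.h>
#include <string.h>

/* Sketch: find the first occurrence of `needle` in `haystack` by letting
 * memchr jump to each position where the needle's last byte could line up,
 * then verifying the full needle with memcmp. Returns the match offset,
 * or -1 when there is no match. */
static long memchr_match(const unsigned char *haystack, size_t hs_len,
                         const unsigned char *needle, size_t nd_len)
{
    if (nd_len == 0 || nd_len > hs_len)
        return -1;

    unsigned char last = needle[nd_len - 1];
    size_t align = nd_len - 1;            /* last byte sits at pos + align */
    size_t i = align;

    while (i < hs_len) {
        /* memchr is the SIMD-accelerated part on most libcs. */
        const unsigned char *hit = memchr(haystack + i, last, hs_len - i);
        if (hit == NULL)
            return -1;                    /* last byte never occurs again */

        size_t pos = (size_t)(hit - haystack) - align;
        if (memcmp(haystack + pos, needle, nd_len) == 0)
            return (long)pos;             /* full needle verified */

        i = (size_t)(hit - haystack) + 1; /* false positive: keep scanning */
    }
    return -1;
}

Note how a needle whose last byte never occurs in the subject is rejected by a single memchr pass over the whole subject, which is presumably the kind of scenario behind the 70x figure above. The actual implementation also has to cooperate with the existing binary search machinery and, as the commit message says, caps the number of bytes handed to each memchr call using the reduction count.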
Diffstat (limited to 'lib/stdlib/test/stdlib.spec')
-rw-r--r--  lib/stdlib/test/stdlib.spec | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/stdlib/test/stdlib.spec b/lib/stdlib/test/stdlib.spec
index 9c625091a8..4de7c1a0eb 100644
--- a/lib/stdlib/test/stdlib.spec
+++ b/lib/stdlib/test/stdlib.spec
@@ -1,4 +1,4 @@
{suites,"../stdlib_test",all}.
{skip_groups,"../stdlib_test",stdlib_bench_SUITE,
- [base64,gen_server,gen_statem,unicode],
+ [binary,base64,gen_server,gen_statem,unicode],
"Benchmark only"}.