| author | Aliaksey Kandratsenka <[email protected]> | 2013-04-04 17:14:51 -0700 |
|---|---|---|
| committer | Fredrik Gustafsson <[email protected]> | 2013-04-19 10:52:24 +0200 |
| commit | 2403af917b62af85e06c3e26a4665ac3c173f533 (patch) | |
| tree | 593332abaa8e420e4c1fe51a8f7804fe71129c5d /lib/runtime_tools/examples/port1.d | |
| parent | 2e04fc33553703c17f2958dd7b02c22ba77322a0 (diff) | |
| download | otp-2403af917b62af85e06c3e26a4665ac3c173f533.tar.gz otp-2403af917b62af85e06c3e26a4665ac3c173f533.tar.bz2 otp-2403af917b62af85e06c3e26a4665ac3c173f533.zip | |
fix excessive CPU consumption of timer_server
I found that stdlib's timer server burns CPU for no good reason. Here's what
happens.
The problem is that it sleeps in milliseconds but computes time in
microseconds, and the code that derives the milliseconds to sleep is buggy.
It computes the difference in microseconds between now and the nearest
timer event and then does a _truncating_ division by 1000, so on average
it sleeps 500 microseconds _less than needed_. On wakeup it checks whether
a timer tick has already occurred; none has. It then asks how long it still
needs to sleep, repeats the same bad computation, and gets 0 milliseconds.
So the next gen_server timeout fires right away, only to find that we are
still before the closest timer tick and to decide to sleep 0 milliseconds
again. And again and again.
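As a minimal sketch of the behaviour described above (a hypothetical module and function, not the actual timer.erl code), the truncating conversion looks like this:

```erlang
%% Minimal sketch, not the literal timer.erl code: a hypothetical helper
%% that turns the microsecond distance to the next timer event into a
%% millisecond sleep using truncating integer division.
-module(timer_sleep_floor).
-export([ms_to_sleep/1]).

%% 2500 us -> 2 ms (wakes ~500 us early); 500 us -> 0 ms (busy loop).
ms_to_sleep(DeltaUs) when is_integer(DeltaUs), DeltaUs >= 0 ->
    DeltaUs div 1000.
```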
This commit changes the division to take the ceiling of the ratio rather
than the floor, so that we always sleep no less than the difference between
now and the closest event time.
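A corresponding sketch of the ceiling variant (again a hypothetical module and function name, not the literal patch):

```erlang
%% Minimal sketch of the fixed behaviour: ceiling division never returns
%% a timeout shorter than the remaining time, so the server cannot wake
%% up before the next timer event.
-module(timer_sleep_ceil).
-export([ms_to_sleep/1]).

%% 2500 us -> 3 ms; 500 us -> 1 ms; 0 us -> 0 ms.
ms_to_sleep(DeltaUs) when is_integer(DeltaUs), DeltaUs >= 0 ->
    (DeltaUs + 999) div 1000.
```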