Age | Commit message | Author |
|
Reported-by: Boris Mühmer
|
|
Arrays have no meaningful toString method, so one must use
Arrays.toString instead. Otherwise the value would look, for example,
like "[C@16f0472" instead of "[2,4]".
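A minimal illustration of the difference (a hypothetical snippet, not taken
from the patch): printing an array through the inherited Object#toString
yields the type tag plus an identity hash code, while Arrays.toString shows
the elements.

    import java.util.Arrays;

    public class ArrayToStringDemo {
        public static void main(String[] args) {
            int[] a = { 2, 4 };
            // Inherited Object#toString: prints something like "[I@16f0472"
            System.out.println(a.toString());
            // Arrays.toString: prints "[2, 4]"
            System.out.println(Arrays.toString(a));
        }
    }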
|
|
This is just a preparation to allow detection of older nodes
that do not understand maps (R16 and older).
|
|
to be: 116,Arity, K1,V1,K2,V2,...,Kn,Vn
instead of: 116,Arity, K1,K2,...,Kn, V1,V2,...,Vn
We think this will be better for future internal map structures
like HAMT. It would be bad if we needed to iterate twice over a HAMT
in term_to_binary, once for keys and once for values.
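A rough sketch of the interleaved layout from the Java side (this is not the
actual encoder code; it only assumes OtpOutputStream's public write1/write4BE/
write_any helpers):

    import com.ericsson.otp.erlang.OtpErlangAtom;
    import com.ericsson.otp.erlang.OtpErlangLong;
    import com.ericsson.otp.erlang.OtpErlangObject;
    import com.ericsson.otp.erlang.OtpOutputStream;

    public class MapWireFormatSketch {
        public static void main(String[] args) {
            OtpErlangObject[] keys   = { new OtpErlangAtom("a"), new OtpErlangAtom("b") };
            OtpErlangObject[] values = { new OtpErlangLong(1),   new OtpErlangLong(2)   };

            OtpOutputStream out = new OtpOutputStream();
            out.write1(116);              // MAP_EXT tag
            out.write4BE(keys.length);    // arity = number of key/value pairs
            for (int i = 0; i < keys.length; i++) {
                out.write_any(keys[i]);   // K_i
                out.write_any(values[i]); // V_i follows its key immediately
            }
        }
    }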
|
|
The API and implementation are simplistic, like for lists and tuples,
using arrays and without any connection to java.util.Map.
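A usage sketch matching that description (the parallel-array constructor and
the class name OtpErlangMap are assumed from the text above; check the
generated Javadoc for the exact signature):

    import com.ericsson.otp.erlang.*;

    public class MapApiSketch {
        public static void main(String[] args) {
            OtpErlangObject[] keys   = { new OtpErlangAtom("x"), new OtpErlangAtom("y") };
            OtpErlangObject[] values = { new OtpErlangLong(2),   new OtpErlangLong(4)   };
            // Built from plain arrays; no java.util.Map involved.
            OtpErlangMap map = new OtpErlangMap(keys, values);
            System.out.println(map);
        }
    }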
|
|
|
|
* nk/jinterface_dont_compress_if_size_increased/OTP-10822:
jinterface, OtpOutputStream: add a write_compressed(object, level) method
jinterface: fix a memory leak
jinterface: new limited OutputStream implementation without the need to resize
jinterface: don't return compressed external term if bigger than uncompressed
jinterface: don't compress small erlang terms < 5 bytes
jinterface, OtpOutputStream: properly override the three write() methods to ensure our growth strategy
jinterface: fix typo in error message if encoding fails
jinterface: don't need another FilterOutputStream wrapper
|
|
* vd/jinterface_windows_cookie/OTP-10821:
jinterface: fix finding cookie file on windows
|
|
Now that we use our own deflater, we can also allow the user to specify the compression level, as in Erlang's term_to_binary/2.
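A usage sketch of the new overload (the method name and argument order come
from the commit subject above; the 0-9 level range is the usual
java.util.zip.Deflater convention, mirroring term_to_binary/2's
{compressed, Level} option):

    import com.ericsson.otp.erlang.OtpErlangAtom;
    import com.ericsson.otp.erlang.OtpOutputStream;

    public class CompressLevelSketch {
        public static void main(String[] args) {
            OtpOutputStream out = new OtpOutputStream();
            // Level 6 is zlib's default trade-off between speed and size.
            out.write_compressed(new OtpErlangAtom("hello"), 6);
        }
    }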
|
|
After the first attempt to compress the value into a fixed-size buffer, the deflater must be closed so that its memory can be reused immediately.
|
|
(saves memory re-allocations)
|
|
Now, OtpOutputStream#write_compressed() uses the same mechanism as erts_term_to_binary() in external.c: it tries to compress the given term into a buffer the size of the uncompressed term; if this is not possible, i.e. the compressed form plus headers is bigger, it uses the uncompressed external term format instead.
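A rough sketch of that fallback strategy (illustrative only, not the actual
jinterface code; it uses the standard java.util.zip.Deflater API):

    import java.util.zip.Deflater;

    final class CompressFallbackSketch {
        /** Returns the compressed payload only if it is strictly smaller, else null. */
        static byte[] tryCompress(byte[] uncompressed, int level) {
            Deflater deflater = new Deflater(level);
            try {
                deflater.setInput(uncompressed);
                deflater.finish();
                // Compress into a buffer no larger than the uncompressed term.
                byte[] buf = new byte[uncompressed.length];
                int n = deflater.deflate(buf);
                // 5 bytes of overhead: compressed tag (1) + original size (4).
                if (!deflater.finished() || n + 5 >= uncompressed.length) {
                    return null; // caller keeps the uncompressed external format
                }
                byte[] out = new byte[n];
                System.arraycopy(buf, 0, out, 0, n);
                return out;
            } finally {
                deflater.end(); // release the native zlib memory immediately
            }
        }
    }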
|
|
The compressed encoding always carries at least 5 bytes of overhead (the compressed tag plus the original size), so we can't get a smaller external term if the original term is already smaller than 5 bytes.
|
|
Properly override the three write() methods to ensure our growth strategy.
Previously, if code called e.g. write(byte[] b, int off, int len), the growth strategy of the parent class ByteArrayOutputStream was used instead.
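An illustrative sketch of the idea (the class and method bodies here are
hypothetical, not the actual OtpOutputStream implementation): all three
OutputStream write() overloads are routed through one capacity check, so the
parent's own doubling strategy never kicks in.

    import java.io.ByteArrayOutputStream;

    class GrowingOutputStream extends ByteArrayOutputStream {
        private void ensureCapacity(int extra) {
            int needed = count + extra;
            if (needed > buf.length) {
                // Grow by at least 50%, as discussed for the buffer strategy.
                int newSize = Math.max((buf.length * 3) / 2 + 1, needed);
                byte[] newBuf = new byte[newSize];
                System.arraycopy(buf, 0, newBuf, 0, count);
                buf = newBuf;
            }
        }

        @Override
        public synchronized void write(int b) {
            ensureCapacity(1);
            buf[count++] = (byte) b;
        }

        @Override
        public void write(byte[] b) {
            write(b, 0, b.length);
        }

        @Override
        public synchronized void write(byte[] b, int off, int len) {
            ensureCapacity(len);
            System.arraycopy(b, off, buf, count, len);
            count += len;
        }
    }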
|
|
|
|
DeflaterOutputStream is already a FilterOutputStream
|
|
Jinterface uses System.getProperty("user.home") to locate the user's home
directory and the cookie file. On Windows, the result might differ from the
value used by Erlang, which looks first at the HOMEDRIVE and HOMEPATH
variables.
The fix makes jinterface use the same logic.
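A sketch of the lookup order described above (assumed, not the exact
jinterface code):

    public class HomeDirSketch {
        static String homeDir() {
            String os = System.getProperty("os.name", "");
            if (os.toLowerCase().startsWith("windows")) {
                String drive = System.getenv("HOMEDRIVE");
                String path  = System.getenv("HOMEPATH");
                if (drive != null && path != null) {
                    return drive + path; // same location Erlang uses for .erlang.cookie
                }
            }
            return System.getProperty("user.home");
        }

        public static void main(String[] args) {
            System.out.println(homeDir());
        }
    }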
|
|
The elems field would remain uninitialized if the count parameter is zero in the constructor.
|
|
|
|
* sverk/r16/utf8-atoms:
erl_interface: Fix bug when transcoding atoms from and to UTF8
erl_interface: Changed erlang_char_encoding interface
erts: Testcase doing unicode atom printout with ~w
erl_interface: even more utf8 atom stuff
erts: Fix bug in analyze_utf8 causing faulty latin1 detection
Add UTF-8 node name support for epmd
workaround...
Fix merge conflict with hasse
UTF-8 atom documentation
test case
erl_interface: utf8 atoms continued
Add utf8 atom distribution test cases
atom fixes for NIFs and atom_to_binary
UTF-8 support for distribution
Implement UTF-8 atom support for jinterface
erl_interface: Enable decode of unicode atoms
stdlib: Fix printing of unicode atoms
erts: Change internal representation of atoms to utf8
erts: Refactor rename DFLAG(S)_INTERNAL_TAGS for conformity
Conflicts:
erts/emulator/beam/io.c
OTP-10753
|
|
|
|
With silent rules, the output of make is less verbose and compilation
warnings are easier to spot. Silent rules are disabled by default and
can be enabled or disabled at will with make V=0 and make V=1, respectively.
|
|
* vd/jinterface_epmd_localhost:
OtpEpmd.lookupNames() may hang if the network is badly configured
OTP-10579
|
|
* nk/jinterface-fix_compressed_binary:
add (de)compress roundtrip tests with larger values
fix reading compressed binary terms from Java
OTP-10505
|
|
On some machines with weird network configurations,
InetAddress.getLocalHost() hangs. Resolving "localhost" works (at
least in the cases I encountered). The difference is that the loopback
address will be returned instead of the real IP address, but for the local
machine this should not be a problem.
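A sketch of the change in approach (illustrative; the surrounding jinterface
code is not shown):

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    public class LocalhostSketch {
        public static void main(String[] args) throws UnknownHostException {
            // InetAddress.getLocalHost() can hang on badly configured machines.
            // Resolving "localhost" returns the loopback address instead:
            InetAddress self = InetAddress.getByName("localhost");
            System.out.println(self.getHostAddress()); // e.g. 127.0.0.1
        }
    }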
|
|
|
|
Larger compressed binaries could not be decoded inside JInterface.
- applied a patch posted on erlang-questions in September 2009
http://erlang.org/pipermail/erlang-patches/2009-September/000478.html
-> extended this patch, as it alone was not enough to fix the bug
The problem was that when reading from an InputStream, you can only specify a maximum number of bytes to read. Java doesn't guarantee that it actually reads that many bytes - it could be fewer!
This patch now keeps reading until the expected number of bytes has been read. If there are more bytes than expected, the actual number of available bytes is not printed (we probably shouldn't read the additional bytes, security-wise - the Erlang external byte representation is broken in this case anyway).
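The read-loop pattern the fix relies on looks roughly like this (illustrative,
not the exact jinterface code):

    import java.io.IOException;
    import java.io.InputStream;

    public class ReadFullySketch {
        /** Reads until len bytes have arrived or the stream ends; returns the count. */
        static int readFully(InputStream in, byte[] buf, int off, int len) throws IOException {
            int total = 0;
            while (total < len) {
                int n = in.read(buf, off + total, len - total);
                if (n < 0) {
                    break; // end of stream before the expected size was reached
                }
                total += n;
            }
            return total;
        }
    }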
|
|
OTP-10106
OTP-10107
|
|
|
|
* vd/jinterface-atom-message:
Improve error message when creating a too long OtpErlangAtom
OTP-9928
|
|
* vd/java-string-bug:
add test for Java string bug
workaround for Java bug http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6242664
OTP-9927
|
|
* rc/spell-registered:
Correct spelling of "registered" in various places in the source code
OTP-9925
|
|
|
|
|
|
|
|
|
|
Previously, the buffer was increased linearly by 2048 bytes.
I now propose to use an exponential increase function
(similar to Java's ArrayList, i.e. always at least +50%).
This significantly increases performance of e.g. doRPC for
large parameters, as the following comparison illustrates
(shown is the buffer size each time the buffer has reached its limit):
 n  n*2048    (n*3)/2+1  (n*3)/2+1 (at least +2048)
 1   2,048        2,048        2,048
 2   4,096        3,073        4,096
 3   6,144        4,610        6,145
 4   8,192        6,916        9,218
 5  10,240       10,375       13,828
 6  12,288       15,563       20,743
 7  14,336       23,345       31,115
 8  16,384       35,018       46,673
 9  18,432       52,528       70,010
10  20,480       78,793      105,016
11  22,528      118,190      157,525
12  24,576      177,286      236,288
13  26,624      265,930      354,433
14  28,672      398,896      531,650
15  30,720      598,345      797,476
16  32,768      897,518    1,196,215
17  34,816    1,346,278    1,794,323
18  36,864    2,019,418    2,691,485
19  38,912    3,029,128    4,037,228
20  40,960    4,543,693    6,055,843
21  43,008    6,815,540    9,083,765
22  45,056   10,223,311   13,625,648
23  47,104   15,334,967   20,438,473
24  49,152   23,002,451   30,657,710
25  51,200   34,503,677   45,986,566
26  53,248   51,755,516   68,979,850
27  55,296   77,633,275  103,469,776
28  57,344  116,449,913  155,204,665
29  59,392  174,674,870  232,806,998
30  61,440  262,012,306  349,210,498
Actually, ArrayList uses the (n*3)/2+1 strategy. In order not to
decrease performance for messages <10k, we could keep the (public)
OtpOutputStream#defaultIncrement constant and let the buffer always
increase by at least this much (the last column above).
In order to create a buffer of 1MB, now only 16 array copies are
needed vs. (1024*1024/2048)=512 array copies for the linear increase
function. If a user sends a message of 10MB size, this is 22 vs.
5120 copies.
NOTE: the meaning of the "public static final int defaultIncrement"
member has changed a bit with this implementation (API compatibility?)
- why was this public in the first place?
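For what it's worth, the quoted copy counts can be sanity-checked with a small
loop (illustrative only; the exact counting convention may differ by one from
the numbers above):

    public class GrowthCopyCount {
        public static void main(String[] args) {
            final int target = 1024 * 1024;

            // Linear strategy: grow by a fixed 2048 bytes per resize.
            int linear = 2048, linearCopies = 0;
            while (linear < target) { linear += 2048; linearCopies++; }

            // Exponential strategy: at least +50%, but never less than +2048.
            int expo = 2048, expoCopies = 0;
            while (expo < target) {
                expo = Math.max((expo * 3) / 2 + 1, expo + 2048);
                expoCopies++;
            }

            System.out.println("linear copies:      " + linearCopies); // 511 resizes
            System.out.println("exponential copies: " + expoCopies);   // 15 resizes
        }
    }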
|
|
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6242664
Java 1.5 has a bug where detecting codepoint offsets in strings that are
created by String.substring() gives wrong results. The new implementation
uses a different method, avoiding the issue.
The following code will crash without the fix:
final String s = "abcdefg";
final String ss = s.substring(3, 6);
final int[] cps = OtpErlangString.stringToCodePoints(ss);
|
|
Also print the value that we tried to use for the atom. This
helps a lot when debugging and doesn't affect anything when
the length is normal.
|
|
The two noncharacter code points 16#FFFE and 16#FFFF were not
allowed to be encoded or decoded using the unicode module or
bit syntax. That causes an inconsistency, since the noncharacters
16#FDD0 to 16#FDEF could be encoded/decoded.
There are two ways to fix that inconsistency.
We have chosen to allow 16#FFFE and 16#FFFF to be encoded and
decoded, because the noncharacters could be useful internally
within an application and it will make encoding and decoding
slightly faster.
Reported-by: Alisdair Sullivan
|
|
There once was a reason to have a "Makefile.otp" makefile, but
it doesn't apply any longer. Rename it to "Makefile" so that
the standard otp_subdir.mk file can be used for recursion into
subdirectories.
|
|
|
|
|
|
|
|
The OtpMbox class was missing the hashCode() method while overriding
equals(). This can cause problems when using jinterface in a
larger Java application.
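A general illustration of the equals/hashCode contract the fix restores (a
hypothetical class, not the actual OtpMbox code): any class that overrides
equals() must also override hashCode(), so that equal objects hash to the same
bucket in hash-based collections.

    final class MailboxKey {
        private final String name;

        MailboxKey(String name) { this.name = name; }

        @Override
        public boolean equals(Object o) {
            return o instanceof MailboxKey && name.equals(((MailboxKey) o).name);
        }

        @Override
        public int hashCode() {
            return name.hashCode(); // must be consistent with equals()
        }
    }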
|
|
|
|
|
|
|