Java: StringBuffer to byte[] without toString

The title says it all. Is there any way to convert from StringBuilder to byte[] without using a String in the middle?
The problem is that I'm managing REALLY large strings (millions of chars), and I have a loop that appends a char at the end and then obtains the byte[]. Converting the StringBuffer to a String makes this loop very, very slow.
Is there any way to accomplish this? Thanks in advance!

As many have already suggested, you can use the CharBuffer class, but allocating a new CharBuffer would only make your problem worse.
Instead, you can directly wrap your StringBuilder in a CharBuffer, since StringBuilder implements CharSequence:
Charset charset = StandardCharsets.UTF_8;
CharsetEncoder encoder = charset.newEncoder();
// No allocation performed, just wraps the StringBuilder.
CharBuffer buffer = CharBuffer.wrap(stringBuilder);
// encode() throws CharacterCodingException, so declare or handle it.
ByteBuffer bytes = encoder.encode(buffer);
EDIT: Duarte correctly points out that the CharsetEncoder.encode method may return a buffer whose backing array is larger than the actual data—meaning, its capacity is larger than its limit. It is necessary either to read from the ByteBuffer itself, or to read a byte array out of the ByteBuffer that is guaranteed to be the right size. In the latter case, there's no avoiding having two copies of the bytes in memory, albeit briefly:
ByteBuffer byteBuffer = encoder.encode(buffer);
byte[] array;
int arrayLen = byteBuffer.limit();
if (arrayLen == byteBuffer.capacity()) {
    array = byteBuffer.array();
} else {
    // This will place two copies of the byte sequence in memory,
    // until byteBuffer gets garbage-collected (which should happen
    // pretty quickly once the reference to it is null'd).
    array = new byte[arrayLen];
    byteBuffer.get(array);
}
byteBuffer = null;

If you're willing to replace the StringBuilder with something else, yet another possibility would be a Writer backed by a ByteArrayOutputStream:
ByteArrayOutputStream bout = new ByteArrayOutputStream();
Writer writer = new OutputStreamWriter(bout);
try {
    writer.write("String A");
    writer.write("String B");
    writer.flush();   // without a flush, the encoder may still be buffering chars
} catch (IOException e) {
    e.printStackTrace();
}
System.out.println(bout.toByteArray().length);   // bytes accumulated so far
try {
    writer.write("String C");
    writer.flush();
} catch (IOException e) {
    e.printStackTrace();
}
System.out.println(bout.toByteArray().length);
As always, your mileage may vary.

For starters, you should probably be using StringBuilder, since StringBuffer has synchronization overhead that's usually unnecessary.
Unfortunately, there's no way to go directly to bytes, but you can copy the chars into an array or iterate from 0 to length() and read each charAt().
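For example, a minimal sketch of the copy-then-encode route (assuming a StringBuilder named sb; the charset and the use of java.nio.CharBuffer/ByteBuffer/StandardCharsets are my choices, not part of the question):
char[] chars = new char[sb.length()];
sb.getChars(0, sb.length(), chars, 0);                        // bulk copy of the chars, no String created
ByteBuffer encoded = StandardCharsets.UTF_8.encode(CharBuffer.wrap(chars));
byte[] bytes = new byte[encoded.remaining()];
encoded.get(bytes);                                           // exactly-sized byte[]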

Unfortunately, the answers above that deal with ByteBuffer's array() method are a bit buggy... The trouble is that the allocated byte[] is likely to be bigger than what you'd expect. Thus, there will be trailing NULL bytes that are hard to get rid of, since you can't "re-size" arrays in Java.
Here is an article that explains this in more detail:
http://worldmodscode.wordpress.com/2012/12/14/the-java-bytebuffer-a-crash-course/
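If you do end up with an oversized backing array, a small sketch that sidesteps the trailing zeros (assuming a ByteBuffer named encoded, as returned by the encoder) is to copy out only the remaining bytes instead of calling array():
byte[] exact = new byte[encoded.remaining()];
encoded.get(exact);   // copies exactly limit - position bytes, so no trailing zero padding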

What are you trying to accomplish with "millions of chars"? Are these logs that need to be parsed? Can you read the input as just bytes and stick to a ByteBuffer? Then you can do:
buffer.array()
to get a byte[]
Depends on what it is you are doing, you can also use just a char[] or a CharBuffer:
CharBuffer cb = CharBuffer.allocate(4242);
cb.put("Depends on what it is you need to do");
...
Then you can get a char[] as:
cb.array()
It's always good to REPL things out; it's fun and proves the point. A Java REPL is not something we are accustomed to, but hey, there is Clojure to save the day, which speaks Java fluently:
user=> (import java.nio.CharBuffer)
java.nio.CharBuffer
user=> (def cb (CharBuffer/allocate 4242))
#'user/cb
user=> (-> (.put cb "There Be") (.array))
#<char[] [C#206564e9>
user=> (-> (.put cb " Dragons") (.array) (String.))
"There Be Dragons"

If you want performance, I wouldn't use StringBuilder or create a byte[] at all. Instead, you can write progressively to the stream that will receive the data anyway. If you can't do that, you can copy the data from the StringBuilder to a Writer, but it is much faster not to create the StringBuilder in the first place.
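As a rough sketch of that idea (the destination stream out, the charset, and the collection pieces standing in for wherever the text comes from are all assumptions here):
try (Writer writer = new BufferedWriter(new OutputStreamWriter(out, StandardCharsets.UTF_8))) {
    for (String piece : pieces) {
        writer.write(piece);   // goes straight to the destination, nothing accumulates on the heap
    }
}                              // close() flushes the last buffered chars through the encoder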

Related

Is reading/writing in an array more efficient than reading/writing a char/byte one by one?

try (FileReader reader = new FileReader("input.txt")) {
    int c;
    while ((c = reader.read()) != -1)
        System.out.print((char) c);
} catch (Exception ignored) { }
In this code, I read char by char. Is it more efficient in some way to read into an array of chars at once? In other words, is there any kind of optimization that happens when reading into arrays?
For example, in this code I have an array of char called arr and I read into it until there is nothing left to read. Is that more efficient?
try (FileReader reader = new FileReader("input.txt")) {
    int size;
    char[] arr = new char[100];
    while ((size = reader.read(arr)) != -1)
        for (int i = 0; i < size; i++)
            System.out.print(arr[i]);
} catch (Exception ignored) { }
The question applies to both reading and writing, and to both chars and bytes.
Depends on the reader, but the answer can be yes. Whatever Reader or InputStream is the actual 'raw' driver (the one that isn't just wrapping another Reader or InputStream, but the one that is actually talking to the OS to get the data) may well implement the single-character read() method by asking the OS to read a single character.
In the end, you have a disk, and disks return data in blocks. So if you ask for 1 byte, you have 2 options as a computer:
1. Ask the disk for the block that contains the byte to be read. Store the block in memory someplace for a while. Return one byte; for the next few moments, if more requests for bytes from the same block come in, serve them from the stored data in memory and don't bother asking the disk at all. NOTE: this requires memory! Who allocates it? How much memory is okay? Tricky questions. OSes tend to provide low-level tools and don't like just picking values for any of these questions.
2. Ask the disk for the block that contains the byte to be read. Find the one byte needed within this block. Ignore the rest of the data and return just that one byte. If a few moments later another byte from that block is asked for... ask the disk for the whole block again, and repeat this routine.
Which of the two models you get depends on many factors: what kind of disk it is, what OS you have, and which underlying Java reader you are using. But it is plausible you end up in the second mode, and that is, as you can probably tell, usually incredibly slow, because you end up reading the same block 4000+ times instead of only once.
So, how to fix this? Well, java doesn't really know what the OS is doing either, so the safest bet is to let java do the caching. Then you have no dependencies on whatever the OS is doing.
You could write it yourself, so instead of:
for (int i = in.read(); i != -1; i = in.read()) {
    processOneChar((char) i);
}
you could do:
char[] buffer = new char[4096];
while (true) {
    int r = in.read(buffer);
    if (r == -1) break;
    for (int i = 0; i < r; i++) processOneChar(buffer[i]);
}
more code, but now the second scenario (the same block is read off the disk a ton of times) can no longer occur; you have given the OS the freedom to return to you up to 4096 chars worth of data.
Or, use a java builtin: BufferedX:
BufferedReader br = new BufferedReader(in);
for (int i = br.read(); i != -1; i = br.read()) {
    processOneChar((char) i);
}
The implementation of BufferedReader guarantees that java will take care of making some reasonably sized buffer to avoid rereads of the same block off of disk.
NB: Note that the FileReader constructor you are using should not be used. It uses platform default encoding (anytime you convert bytes to characters, encoding is involved), and platform default is a recipe for untestable bugs, which are very bad. Use new FileReader(file, StandardCharsets.UTF_8) instead, or better yet, use the new API:
Path p = Paths.get("C:/file.txt");
try (BufferedReader br = Files.newBufferedReader(p)) {
    for (int i = br.read(); i != -1; i = br.read()) {
        processOneChar((char) i);
    }
}
Note that this:
Defaults to UTF-8, because the Files API defaults to UTF-8 unlike most places in the VM.
Makes a BufferedReader immediately, no need to make it yourself.
Properly manages the resource (ensures it is closed regardless of how this code exits, normally or with an exception), by using an ARM block (try-with-resources).
Because a BufferedX is involved, no risk of the 'read the same block a lot' performance hole.
NB: The same logic applies when writing; disks such as SSDs can only write a whole block at a time. Not only is unbuffered writing slow as molasses, you are also wearing out your disk, as SSDs support a limited number of writes per block.
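A hedged sketch of the buffered-write counterpart (the file name and line count are placeholders):
try (BufferedWriter bw = Files.newBufferedWriter(Paths.get("output.txt"))) {   // UTF-8 by default
    for (int i = 0; i < 10_000; i++) {
        bw.write("one line of output");
        bw.newLine();
    }
} catch (IOException e) {
    e.printStackTrace();
}
// The BufferedWriter batches the chars, so the OS receives block-sized writes
// instead of thousands of tiny ones.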

Why is the receiving String's size smaller than the original ByteArrayOutputStream's size when I call toString()?

I'm facing a curious problem. Some code is better than a long story:
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
buffer.write(...); // I write byte[] data
// In debugger I can see that buffer's count = 449597
String szData = buffer.toString();
int iSizeData = buffer.size();
// But here, szData's count = 240368
// & iSizeData = 449597
So my question is: why doesn't szData contain all the buffer's data? (Only one thread runs this code.) After that kind of operation, I don't want szData.charAt(iSizeData - 1) to crash!
EDIT: szData.getBytes().length = 450566. There are encoding problems, I think. Is it better to use a byte[] instead of a String after all?
In Java, char ≠ byte: depending on the platform's default character encoding, a single character can be encoded as several bytes. You work either with bytes (binary data) or with characters (strings); you cannot (easily) switch between them.
For String operations like strncasecmp in C, use the methods of the String class, e.g. String.compareToIgnoreCase(String str). Also have a look at the StringUtils class from the Apache Commons Lang library.
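To see the mismatch concretely, here is a tiny sketch (the bytes and charset are chosen only for illustration) showing that byte count and char count need not agree, and that byte[] to String and back is not a safe round trip for arbitrary binary data:
byte[] original = { (byte) 0xC3, (byte) 0xA9, (byte) 0xFF };           // "é" in UTF-8 plus one invalid byte
String decoded = new String(original, StandardCharsets.UTF_8);
System.out.println(original.length);                                    // 3 bytes
System.out.println(decoded.length());                                   // 2 chars: the two-byte sequence became one char
System.out.println(decoded.getBytes(StandardCharsets.UTF_8).length);    // 5 bytes: the invalid byte became a replacement char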

Storing and comparing a large quantity of Strings in Java

My application stores a large number (about 700,000) of strings in an ArrayList. The strings are loaded from a text file like this:
List<String> stringList = new ArrayList<String>(750_000);
// there's a try-catch here but I omitted it for this example
Scanner fileIn = new Scanner(new FileInputStream(listPath), "UTF-8");
while (fileIn.hasNext()) {
    String s = fileIn.nextLine().trim();
    if (s.isEmpty()) continue;
    if (s.startsWith("#")) continue; // ignore comments
    stringList.add(s);
}
fileIn.close();
Later on, other strings are compared against this list, using this code:
String example = "Something";
if (stringList.contains(example))
    doSomething();
This comparison will happen many hundreds (thousands?) of times.
This all works, but I want to know if there's anything I can do to make it better. I notice that the JVM increases in size from about 100MB to 600MB when it loads the 700K Strings. The strings are mainly about this size:
Blackened Recordings
Divergent Series: Insurgent
Google
Pixels Movie Money
X Ambassadors
Power Path Pro Advanced
CYRFZQ
Is there anything I can do to reduce the memory, or is that to be expected? Any suggestions in general?
ArrayList is memory-efficient. Your issue is probably caused by java.util.Scanner. Scanner creates a lot of temporary objects during parsing (Patterns, Matchers, etc.) and is not suitable for big files.
Try to replace it with java.io.BufferedReader:
List<String> stringList = new ArrayList<String>();
BufferedReader fileIn = new BufferedReader(
        new InputStreamReader(new FileInputStream(listPath), "UTF-8"));
String line = null;
while ((line = fileIn.readLine()) != null) {
    line = line.trim();
    if (line.isEmpty()) continue;
    if (line.startsWith("#")) continue; // ignore comments
    stringList.add(line);
}
fileIn.close();
See java.util.Scanner source code
To pinpoint memory issue attach to your JVM any memory profiler, for example VisualVM from JDK tools.
Added:
Let's make a few assumptions:
you have 700,000 strings with 20 characters each.
object reference size is 32 bits, object header - 24, array header - 16, char - 16, int 32.
Then every string will consume 24+32*2+32+(16+20*16) = 456 bits.
Whole ArrayList with string object will consume about 700000*(32*2+456) = 364000000 bits = 43.4 MB (very roughly).
Not quite an answer, but:
Your scenario uses around 70mb on my machine:
long usedMemory = -(Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory());
{//
    String[] strings = new String[700_000];
    for (int i = 0; i < strings.length; i++) {
        strings[i] = new String(new char[20]);
    }
}//
usedMemory += Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
System.out.println(usedMemory / 1_000_000d + " mb");
How did you reach 500 MB there? As far as I know, a String internally holds a char[], and each char takes 16 bits. Taking the object and String overhead into account, 500 MB is still quite a lot for the strings alone. You may want to run some benchmarks on your machine.
As others have already mentioned, you should change the data structure used for element look-ups/comparisons.
You're likely going to be better off using a HashSet instead of an ArrayList as both add and contains are constant time operations in a HashSet.
However, it does assume that your object's hashCode implementation (which is part of Object, but can be overridden) is evenly distributed.
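A minimal sketch of that change, reusing the loading loop from the question (fileIn, example, and doSomething are as defined there; the initial capacity is just a guess to limit rehashing):
Set<String> stringSet = new HashSet<>(1_000_000);
while (fileIn.hasNext()) {
    String s = fileIn.nextLine().trim();
    if (s.isEmpty() || s.startsWith("#")) continue;
    stringSet.add(s);
}
// contains() is now O(1) on average instead of a linear scan:
if (stringSet.contains(example))
    doSomething();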
There is the Trie data structure, which can be used as a dictionary; with so many strings, common parts occur multiple times and a trie can share them. https://en.wikipedia.org/wiki/Trie. It seems to fit your case.
UPDATE:
An alternative can be a HashSet, or a HashMap of string -> something if, for example, you want to count occurrences of strings. A hashed collection will be faster than a list for sure.
I would start with HashSet.
Using an ArrayList is a very bad idea for your use case, because it is not sorted, and hence you cannot efficiently search for an entry.
The best built-in type for your case is a TreeSet<String>. It guarantees O(log(n)) performance for add() and contains().
Be aware that TreeSet is not thread-safe in the basic implementation. Use a thread-safe wrapper (see the JavaDocs of TreeSet for this).
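For the thread-safe variant, a one-line sketch using the standard wrapper (values are placeholders):
SortedSet<String> stringSet = Collections.synchronizedSortedSet(new TreeSet<String>());
stringSet.add("Google");
boolean found = stringSet.contains("Google");   // O(log n) lookup, safe to call from several threads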
Here is a Java 8 approach. It uses the Files.lines() method, which takes advantage of the Stream API and reads the lines of a file lazily as a Stream.
As a consequence, the whole file is never held in memory at once; each line flows through the pipeline to the terminal operation, the static method MyExecutor.doSomething(String).
/**
 * Process lines from a file.
 * Uses the Files.lines() method, which takes advantage of the Stream API introduced in Java 8.
 */
private static void processStringsFromFile(final Path file) {
    try (Stream<String> lines = Files.lines(file)) {
        lines.map(s -> s.trim())
             .filter(s -> !s.isEmpty())
             .filter(s -> !s.startsWith("#"))
             .filter(s -> s.contains("Something"))
             .forEach(MyExecutor::doSomething);
    } catch (IOException ex) {
        logProcessStringsFailed(ex);
    }
}
I ran a memory usage analysis in NetBeans; here are the results for an empty implementation of doSomething():
public static void doSomething(final String s) {
}
Live Bytes = 6702720 ≈ 6.4MB.

How to write a big-endian ByteBuffer as little endian in Java

I currently have a Java ByteBuffer that already has the data in Big Endian format. I then want to write to a binary file as Little Endian.
Here's the code which just writes the file still in Big Endian:
public void writeBinFile(String fileName, boolean append) throws FileNotFoundException, IOException
{
    FileOutputStream outStream = null;
    try
    {
        outStream = new FileOutputStream(fileName, append);
        FileChannel out = outStream.getChannel();
        byteBuff.position(byteBuff.capacity());
        byteBuff.flip();
        byteBuff.order(ByteOrder.LITTLE_ENDIAN);
        out.write(byteBuff);
    }
    finally
    {
        if (outStream != null)
        {
            outStream.close();
        }
    }
}
Note that byteBuff is a ByteBuffer that has been filled in Big Endian format.
My last resort is a brute-force method: create another buffer, set that ByteBuffer to little endian, then read values with getInt from the original (big-endian) buffer and write them with putInt into the little-endian buffer. I'd imagine there is a better way...
Endianness has no meaning for a byte[]. Endianness only matters for multi-byte data types like short, int, long, float, or double. The right time to get the endianness right is when you are writing the original data types out as bytes, because that is when you know the actual format.
If you are handed a byte[], you must decode the original data types and re-encode them with the other endianness. I am sure you will agree this is a) neither easy nor ideal, and b) cannot be done automagically.
Here is how I solved a similar problem, wanting to get the "endianness" of the Integers I'm writing to an output file correct:
byte[] theBytes = /* obtain a byte array that is the input */
ByteBuffer byteBuffer = ByteBuffer.wrap(theBytes);
ByteBuffer destByteBuffer = ByteBuffer.allocate(theBytes.length);
destByteBuffer.order(ByteOrder.LITTLE_ENDIAN);
IntBuffer destBuffer = destByteBuffer.asIntBuffer();
while (byteBuffer.hasRemaining())
{
    int element = byteBuffer.getInt();
    destBuffer.put(element);
    /* Could write destBuffer int-by-int here, or outside this loop */
}
There might be more efficient ways to do this, but for my particular problem, I had to apply a mathematical transformation to the elements as I copied them to the new buffer. But this should still work for your particular problem.
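If the data really is a plain sequence of ints (and its length is a multiple of 4), a possibly more efficient variant, sketched here without benchmarking, is to view both buffers as IntBuffers and copy in bulk:
ByteBuffer src = ByteBuffer.wrap(theBytes);               // big-endian by default
ByteBuffer dst = ByteBuffer.allocate(theBytes.length);
dst.order(ByteOrder.LITTLE_ENDIAN);
dst.asIntBuffer().put(src.asIntBuffer());                 // bulk copy; byte order is applied per int
byte[] littleEndianBytes = dst.array();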

Java: Memory efficient ByteArrayOutputStream

I've got a 40MB file on disk and I need to "map" it into memory as a byte array.
At first, I thought writing the file to a ByteArrayOutputStream would be the best way, but I find it takes about 160MB of heap space at some moment during the copy operation.
Does somebody know a better way to do this without using three times the file size of RAM?
Update: Thanks for your answers. I noticed I could reduce memory consumption a little by setting the ByteArrayOutputStream's initial size to be a bit greater than the original file size (using the exact size with my code forces a reallocation; I have to check why).
There's another high-memory spot: when I get the byte[] back with ByteArrayOutputStream.toByteArray. Taking a look at its source code, I can see it is cloning the array:
public synchronized byte toByteArray()[] {
    return Arrays.copyOf(buf, count);
}
I'm thinking I could just extend ByteArrayOutputStream and rewrite this method, so to return the original array directly. Is there any potential danger here, given the stream and the byte array won't be used more than once?
MappedByteBuffer might be what you're looking for.
I'm surprised it takes so much RAM to read a file in memory, though. Have you constructed the ByteArrayOutputStream with an appropriate capacity? If you haven't, the stream could allocate a new byte array when it's near the end of the 40 MB, meaning that you would, for example, have a full buffer of 39MB, and a new buffer of twice the size. Whereas if the stream has the appropriate capacity, there won't be any reallocation (faster), and no wasted memory.
ByteArrayOutputStream should be okay so long as you specify an appropriate size in the constructor. It will still create a copy when you call toByteArray, but that's only temporary. Do you really mind the memory briefly going up a lot?
Alternatively, if you already know the size to start with you can just create a byte array and repeatedly read from a FileInputStream into that buffer until you've got all the data.
If you really want to map the file into memory, then a FileChannel is the appropriate mechanism.
If all you want to do is read the file into a simple byte[] (and don't need changes to that array to be reflected back to the file), then simply reading into an appropriately-sized byte[] from a normal FileInputStream should suffice.
Guava has Files.toByteArray() which does all that for you.
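A sketch of the FileChannel mapping route mentioned above (the path is a placeholder):
try (FileChannel channel = FileChannel.open(Paths.get("data.bin"), StandardOpenOption.READ)) {
    MappedByteBuffer mapped = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
    // Read directly from 'mapped'; the OS pages the file in on demand,
    // so the 40 MB never needs to be copied into a heap byte[] at all.
} catch (IOException e) {
    e.printStackTrace();
}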
For an explanation of the buffer growth behavior of ByteArrayOutputStream, please read this answer.
In answer to your question, it is safe to extend ByteArrayOutputStream. In your situation, it is probably better to override the write methods so that the maximum additional allocation is limited, say, to 16MB. You should not override toByteArray to expose the protected buf[] member. A stream is not just a buffer; it is a buffer with a position pointer and boundary protection. So it is dangerous to access, and potentially manipulate, the buffer from outside the class.
I'm thinking I could just extend ByteArrayOutputStream and rewrite this method, so to return the original array directly. Is there any potential danger here, given the stream and the byte array won't be used more than once?
You shouldn't change the specified behavior of the existing method, but it's perfectly fine to add a new method. Here's an implementation:
/** Subclasses ByteArrayOutputStream to give access to the internal raw buffer. */
public class ByteArrayOutputStream2 extends java.io.ByteArrayOutputStream {
    public ByteArrayOutputStream2() { super(); }
    public ByteArrayOutputStream2(int size) { super(size); }

    /** Returns the internal buffer of this ByteArrayOutputStream, without copying. */
    public synchronized byte[] buf() {
        return this.buf;
    }
}
An alternative but hackish way of getting the buffer from any ByteArrayOutputStream is to use the fact that its writeTo(OutputStream) method passes the buffer directly to the provided OutputStream:
/**
 * Returns the internal raw buffer of a ByteArrayOutputStream, without copying.
 */
public static byte[] getBuffer(ByteArrayOutputStream bout) {
    final byte[][] result = new byte[1][];
    try {
        bout.writeTo(new OutputStream() {
            @Override
            public void write(byte[] buf, int offset, int length) {
                result[0] = buf;
            }
            @Override
            public void write(int b) {}
        });
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
    return result[0];
}
(That works, but I'm not sure if it's useful, given that subclassing ByteArrayOutputStream is simpler.)
However, from the rest of your question it sounds like all you want is a plain byte[] of the complete contents of the file. As of Java 7, the simplest and fastest way to do that is call Files.readAllBytes. In Java 6 and below, you can use DataInputStream.readFully, as in Peter Lawrey's answer. Either way, you will get an array that is allocated once at the correct size, without the repeated reallocation of ByteArrayOutputStream.
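For completeness, the Java 7+ version is a one-liner (the path is a placeholder); it allocates the array once at the correct size and throws IOException on failure:
byte[] contents = Files.readAllBytes(Paths.get("somefile.bin"));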
If you have 40 MB of data I don't see any reason why it would take more than 40 MB to create a byte[]. I assume you are using a growing ByteArrayOutputStream which creates a byte[] copy when finished.
You can try the old read the file at once approach.
File file = ...; // the input file
DataInputStream is = new DataInputStream(new FileInputStream(file));
byte[] bytes = new byte[(int) file.length()];
is.readFully(bytes);
is.close();
Using a MappedByteBuffer is more efficient and avoids a copy of the data (and barely uses the heap), provided you can use the ByteBuffer directly; however, if you have to use a byte[], it's unlikely to help much.
... but I find it takes about 160MB of heap space at some moment during the copy operation
I find this extremely surprising ... to the extent that I have my doubts that you are measuring the heap usage correctly.
Let's assume that your code is something like this:
BufferedInputStream bis = new BufferedInputStream(
        new FileInputStream("somefile"));
ByteArrayOutputStream baos = new ByteArrayOutputStream(); /* no hint !! */
int b;
while ((b = bis.read()) != -1) {
    baos.write((byte) b);
}
byte[] stuff = baos.toByteArray();
Now the way that a ByteArrayOutputStream manages its buffer is to allocate an initial size, and (at least) double the buffer when it fills it up. Thus, in the worst case baos might use up to 80Mb buffer to hold a 40Mb file.
The final step allocates a new array of exactly baos.size() bytes to hold the buffer's contents. That's 40Mb. So the peak amount of memory that is actually in use should be 120Mb.
So where are those extra 40Mb being used? My guess is that they are not, and that you are actually reporting the total heap size, not the amount of memory that is occupied by reachable objects.
So what is the solution?
1. You could use a memory-mapped buffer.
2. You could give a size hint when you allocate the ByteArrayOutputStream; e.g.
ByteArrayOutputStream baos = new ByteArrayOutputStream((int) file.length());
3. You could dispense with the ByteArrayOutputStream entirely and read directly into a byte array.
byte[] buffer = new byte[(int) file.length()];
FileInputStream fis = new FileInputStream(file);
int nosRead = fis.read(buffer);
/* check that nosRead == buffer.length and repeat if necessary */
Options 1 and 3 should have a peak memory usage of at most 40Mb while reading a 40Mb file; i.e. no wasted space.
It would be helpful if you posted your code, and described your methodology for measuring memory usage.
I'm thinking I could just extend ByteArrayOutputStream and rewrite this method, so to return the original array directly. Is there any potential danger here, given the stream and the byte array won't be used more than once?
The potential danger is that your assumptions are incorrect, or become incorrect due to someone else modifying your code unwittingly ...
Google Guava's ByteSource seems to be a good choice for buffering in memory. Unlike implementations such as ByteArrayOutputStream or ByteArrayList (from the Colt library), it does not merge the data into one huge byte array but stores every chunk separately. An example:
List<ByteSource> result = new ArrayList<>();
try (InputStream source = httpRequest.getInputStream()) {
    byte[] cbuf = new byte[CHUNK_SIZE];
    while (true) {
        int read = source.read(cbuf);
        if (read == -1) {
            break;
        } else {
            result.add(ByteSource.wrap(Arrays.copyOf(cbuf, read)));
        }
    }
}
ByteSource body = ByteSource.concat(result);
The ByteSource can be read as an InputStream anytime later:
InputStream data = body.openBufferedStream();
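And if you eventually do need one contiguous byte[] after all, ByteSource can produce it at the cost of a single final copy:
byte[] all = body.read();   // concatenates the chunks into one array; throws IOException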
... came here with the same observation when reading a 1GB file: Oracle's ByteArrayOutputStream has lazy memory management.
A byte array is indexed by an int and is therefore limited to 2GB anyway. Without depending on third-party libraries, you might find this useful:
static public byte[] getBinFileContent(String aFile)
{
    try
    {
        final int bufLen = 32768;
        final long fs = new File(aFile).length();
        final long maxInt = ((long) 1 << 31) - 1;
        if (fs > maxInt)
        {
            System.err.println("file size out of range");
            return null;
        }
        final byte[] res = new byte[(int) fs];
        final byte[] buffer = new byte[bufLen];
        final InputStream is = new FileInputStream(aFile);
        int n;
        int pos = 0;
        while ((n = is.read(buffer)) > 0)
        {
            System.arraycopy(buffer, 0, res, pos, n);
            pos += n;
        }
        is.close();
        return res;
    }
    catch (final IOException e)
    {
        e.printStackTrace();
        return null;
    }
    catch (final OutOfMemoryError e)
    {
        e.printStackTrace();
        return null;
    }
}
