I currently have a Java ByteBuffer that already has the data in Big Endian format. I then want to write to a binary file as Little Endian.
Here's the code which just writes the file still in Big Endian:
public void writeBinFile(String fileName, boolean append) throws FileNotFoundException, IOException
{
    FileOutputStream outStream = null;
    try
    {
        outStream = new FileOutputStream(fileName, append);
        FileChannel out = outStream.getChannel();
        byteBuff.position(byteBuff.capacity());
        byteBuff.flip();
        byteBuff.order(ByteOrder.LITTLE_ENDIAN);
        out.write(byteBuff);
    }
    finally
    {
        if (outStream != null)
        {
            outStream.close();
        }
    }
}
Note that byteBuff is a ByteBuffer that has been filled in Big Endian format.
My last resort is the brute-force method of creating another buffer, setting that ByteBuffer to little-endian, reading the values out of the original (big-endian) buffer with getInt, and writing each one into the little-endian buffer with putInt. I'd imagine there is a better way...
Endianness has no meaning for a byte[]. Endianness only matters for multi-byte data types like short, int, long, float, or double. The right time to get the endianness right is when you write those data types to raw bytes, or read them back out.
If you have a byte[] given to you, you must decode the original data types and re-encode them with the different endianness. I am sure you will agree this is a) neither easy nor ideal and b) cannot be done automagically.
Here is how I solved a similar problem, wanting to get the "endianness" of the Integers I'm writing to an output file correct:
byte[] theBytes = /* obtain a byte array that is the input */
ByteBuffer byteBuffer = ByteBuffer.wrap(theBytes);
ByteBuffer destByteBuffer = ByteBuffer.allocate(theBytes.length);
destByteBuffer.order(ByteOrder.LITTLE_ENDIAN);
IntBuffer destBuffer = destByteBuffer.asIntBuffer();
while (byteBuffer.hasRemaining())
{
    int element = byteBuffer.getInt();
    destBuffer.put(element);
    /* Could write destBuffer int-by-int here, or outside this loop */
}
There might be more efficient ways to do this, but for my particular problem, I had to apply a mathematical transformation to the elements as I copied them to the new buffer. But this should still work for your particular problem.
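To finish the job for the original question, the converted buffer can then be written through a FileChannel. A minimal sketch, assuming the destByteBuffer/destBuffer pair from the snippet above ("out.bin" is just a placeholder name); note that the IntBuffer view keeps its own position, so the byte buffer's limit has to be derived from the view before writing:

// The byte buffer's own position is still 0; size its limit from the
// int view so the channel writes exactly the ints that were copied.
destByteBuffer.limit(destBuffer.position() * Integer.BYTES);
try (FileOutputStream fos = new FileOutputStream("out.bin");
     FileChannel channel = fos.getChannel()) {
    channel.write(destByteBuffer);
}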
Native code, writing the number 27 using fwrite():
#include <stdio.h>
#include <errno.h>

int main()
{
    int a = 27;
    FILE *fp;
    fp = fopen("/data/tmp.log", "w");
    if (!fp)
        return -errno;
    fwrite(&a, 4, 1, fp);
    fclose(fp);
    return 0;
}
Reading the data (27) back using DataInputStream.readInt():
public int readIntDataInputStream() throws IOException
{
    String filePath = "/data/tmp.log";
    InputStream is = null;
    DataInputStream dis = null;
    int k;

    is = new FileInputStream(filePath);
    dis = new DataInputStream(is);
    k = dis.readInt();
    Log.i(TAG, "Size : " + k);
    return 0;
}
Output:
Size : 452984832
That, in hex, is 0x1b000000.
0x1b is 27, but readInt() reads the data as big-endian while my native code writes it as little-endian. So instead of 0x0000001b I get 0x1b000000.
Is my understanding correct? Has anyone come across this problem before?
From the Javadoc for readInt():
This method is suitable for reading bytes written by the writeInt method of interface DataOutput
If you want to read something written by a C program you'll have to do the byte swapping yourself, using the facilities in java.nio. I've never done this but I believe you would read the data into a ByteBuffer, set the buffer's order to ByteOrder.LITTLE_ENDIAN and then create an IntBuffer view over the ByteBuffer if you have an array of values, or just use ByteBuffer#getInt() for a single value.
All that aside, I agree with @EJP that the external format for the data should be big-endian for greatest compatibility.
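Concretely, here is a minimal sketch of that approach for the single value in this question (path reused from the question; error handling omitted):

// Read the 4 raw bytes, then decode them as a little-endian int.
byte[] raw = new byte[4];
DataInputStream dis = new DataInputStream(new FileInputStream("/data/tmp.log"));
dis.readFully(raw);
dis.close();
int value = ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN).getInt(); // 27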
There are multiple issues in your code:
You assume that the size of int is 4. That is not necessarily true, and since you want to deal with 32-bit ints, you should use int32_t or uint32_t.
You must open the file in binary mode to write binary data reliably. The above code would fail on Windows for less trivial output. Use fopen("/data/tmp.log", "wb").
You must deal with endianness. You are using the file to exchange data between different platforms that may have different native endianness and/or endian-specific APIs. Java reads big-endian, aka network byte order, so you should convert the values on the C platform with htobe32() from <endian.h> (or the classic htonl()). This is unlikely to have a significant impact on performance on the PC side, as the conversion is usually expanded inline, possibly as a single instruction, and most of the time will be spent waiting for I/O anyway.
Here is a modified version of the code:
#include <endian.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t a = htobe32(27);  /* store big-endian, aka network byte order */
    FILE *fp = fopen("/data/tmp.log", "wb");
    if (!fp) {
        return errno;
    }
    fwrite(&a, sizeof a, 1, fp);
    fclose(fp);
    return 0;
}
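With the value stored big-endian on disk, DataInputStream.readInt() now matches the file's byte order, so the Java side from the question works unchanged; a quick check (sketch, no error handling):

DataInputStream dis = new DataInputStream(new FileInputStream("/data/tmp.log"));
int k = dis.readInt(); // the bytes on disk are now 00 00 00 1b, so k == 27
dis.close();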
I'm using the following code to read a large file (2MB or more) and do some processing on the data.
I have to read 128 bytes per read call.
At first I used this code (no problem, it works fine):
InputStream is; // = something...
int read = -1;
byte[] buff = new byte[128];

while (true) {
    for (int idx = 0; idx < 128; idx++) {
        read = is.read();
        if (read == -1) { return; } // end of stream
        buff[idx] = (byte) read;
    }
    process_data(buff);
}
Then I tried this code, which is where the problems appeared (errors! weird responses sometimes):
InputStream is; // = something...
int read = -1;
byte[] buff = new byte[128];

while (true) {
    // ERROR! Java doesn't read 128 bytes even though they're available
    if ((read = is.read(buff, 0, 128)) == 128) { process_data(buff); } else { return; }
}
The above code doesn't work all the time; I'm sure the data is available, but read() sometimes returns 127, or 125, or 123. What is the problem?
I also found that DataInputStream#readFully(byte[]) works too, but I'm still wondering why the second solution doesn't fill the array while the data is available.
Thanks buddy.
Consulting the javadoc for FileInputStream (I'm assuming that's what you're using, since you're reading from a file):
Reads up to len bytes of data from this input stream into an array of bytes. If len is not zero, the method blocks until some input is available; otherwise, no bytes are read and 0 is returned.
The key here is that the method only blocks until some data is available. The returned value tells you how many bytes were actually read. The reason you may be reading fewer than 128 bytes could be a slow drive or other implementation-defined behavior.
For a proper read sequence, you should check that read() does not equal -1 (end of stream) and keep writing into the buffer until the expected amount of data has been read.
Example of a proper implementation of your code:
InputStream is; // = something...
int read;
int read_total;
byte[] buf = new byte[128];

// Infinite loop
while (true) {
    read_total = 0;
    // Repeatedly read until break or end of stream, offsetting each
    // read at the last position written in the array
    while ((read = is.read(buf, read_total, buf.length - read_total)) != -1) {
        // Get the amount read and add it to the running total
        read_total = read_total + read;
        // Break once read_total reaches the buffer length (128)
        if (read_total == buf.length) {
            break;
        }
    }
    if (read_total != buf.length) {
        // Incomplete read before 128 bytes: end of stream reached, so stop
        // (otherwise this outer loop would spin forever)
        return;
    } else {
        process_data(buf);
    }
}
Edit:
Don't try to use available() as an indicator of data availability (sounds weird I know), again the javadoc:
Returns an estimate of the number of remaining bytes that can be read (or skipped over) from this input stream without blocking by the next invocation of a method for this input stream. Returns 0 when the file position is beyond EOF. The next invocation might be the same thread or another thread. A single read or skip of this many bytes will not block, but may read or skip fewer bytes.
In some cases, a non-blocking read (or skip) may appear to be blocked when it is merely slow, for example when reading large files over slow networks.
The key there is estimate, don't work with estimates.
Since the accepted answer was provided, a new option has become available. Starting with Java 9, the InputStream class has a readNBytes(byte[], int, int) method (and Java 11 added a readNBytes(int) overload) that eliminates the need for the programmer to write a read loop. For example, your method could look like:
public static void some_method(String fileName) throws IOException {
    // Take the path as a parameter; the original args[] was not in scope here.
    InputStream is = new FileInputStream(fileName);
    byte[] buff = new byte[128];

    while (true) {
        int numRead = is.readNBytes(buff, 0, buff.length);
        if (numRead == 0) {
            break;
        }
        // The last read before end-of-stream may read fewer than 128 bytes.
        process_data(buff, numRead);
    }
}
or the slightly simpler
public static void some_method(String fileName) throws IOException {
    InputStream is = new FileInputStream(fileName);

    while (true) {
        byte[] buff = is.readNBytes(128);
        if (buff.length == 0) {
            break;
        }
        // The last read before end-of-stream may read fewer than 128 bytes.
        process_data(buff);
    }
}
The title says it all. Is there any way to convert from StringBuilder to byte[] without using a String in the middle?
The problem is that I'm managing REALLY large strings (millions of chars), and I have a cycle that adds a char at the end and then obtains the byte[]. The process of converting the StringBuffer to a String makes this cycle very, very slow.
Is there any way to accomplish this? Thanks in advance!
As many have already suggested, you can use the CharBuffer class, but allocating a new CharBuffer would only make your problem worse.
Instead, you can directly wrap your StringBuilder in a CharBuffer, since StringBuilder implements CharSequence:
Charset charset = StandardCharsets.UTF_8;
CharsetEncoder encoder = charset.newEncoder();
// No allocation performed, just wraps the StringBuilder.
CharBuffer buffer = CharBuffer.wrap(stringBuilder);
ByteBuffer bytes = encoder.encode(buffer);
EDIT: Duarte correctly points out that the CharsetEncoder.encode method may return a buffer whose backing array is larger than the actual data—meaning, its capacity is larger than its limit. It is necessary either to read from the ByteBuffer itself, or to read a byte array out of the ByteBuffer that is guaranteed to be the right size. In the latter case, there's no avoiding having two copies of the bytes in memory, albeit briefly:
ByteBuffer byteBuffer = encoder.encode(buffer);

byte[] array;
int arrayLen = byteBuffer.limit();
if (arrayLen == byteBuffer.capacity()) {
    array = byteBuffer.array();
} else {
    // This will place two copies of the byte sequence in memory,
    // until byteBuffer gets garbage-collected (which should happen
    // pretty quickly once the reference to it is null'd).
    array = new byte[arrayLen];
    byteBuffer.get(array);
}
byteBuffer = null;
If you're willing to replace the StringBuilder with something else, yet another possibility would be a Writer backed by a ByteArrayOutputStream:
ByteArrayOutputStream bout = new ByteArrayOutputStream();
Writer writer = new OutputStreamWriter(bout);
try {
    writer.write("String A");
    writer.write("String B");
    writer.flush(); // OutputStreamWriter buffers; flush before reading bout
} catch (IOException e) {
    e.printStackTrace();
}
// Arrays.toString shows the bytes; printing the array directly would
// only show its identity hash.
System.out.println(Arrays.toString(bout.toByteArray()));

try {
    writer.write("String C");
    writer.flush();
} catch (IOException e) {
    e.printStackTrace();
}
System.out.println(Arrays.toString(bout.toByteArray()));
As always, your mileage may vary.
For starters, you should probably be using StringBuilder, since StringBuffer has synchronization overhead that's usually unnecessary.
Unfortunately, there's no way to go directly to bytes, but you can copy the chars into an array or iterate from 0 to length() and read each charAt().
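For instance, a rough sketch of the copy-the-chars route (sb stands in for your StringBuilder), which still avoids building an intermediate String:

char[] chars = new char[sb.length()];
sb.getChars(0, sb.length(), chars, 0);
// Encode the chars directly; UTF-8 here is just an example charset.
ByteBuffer bytes = StandardCharsets.UTF_8.encode(CharBuffer.wrap(chars));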
Unfortunately, the answers above that deal with ByteBuffer's array() method are a bit buggy... The trouble is that the allocated byte[] is likely to be bigger than what you'd expect. Thus, there will be trailing NULL bytes that are hard to get rid of, since you can't "re-size" arrays in Java.
Here is an article that explains this in more detail:
http://worldmodscode.wordpress.com/2012/12/14/the-java-bytebuffer-a-crash-course/
What are you trying to accomplish with "millions of chars"? Are these logs that need to be parsed? Can you read the input as just bytes and stick to a ByteBuffer? Then you can do:
buffer.array()
to get a byte[]
Depends on what it is you are doing, you can also use just a char[] or a CharBuffer:
CharBuffer cb = CharBuffer.allocate(4242);
cb.put("Depends on what it is you need to do");
...
Then you can get a char[] as:
cb.array()
It's always good to REPL things out, it's fun and proves the point. Java REPL is not something we are accustomed to, but hey, there is Clojure to save the day which speaks Java fluently:
user=> (import java.nio.CharBuffer)
java.nio.CharBuffer
user=> (def cb (CharBuffer/allocate 4242))
#'user/cb
user=> (-> (.put cb "There Be") (.array))
#<char[] [C@206564e9>
user=> (-> (.put cb " Dragons") (.array) (String.))
"There Be Dragons"
If you want performance, I wouldn't use StringBuilder or create a byte[]. Instead, you can write progressively to the stream that will consume the data in the first place. If you can't do that, you can copy the data from the StringBuilder to the Writer, but it's much faster not to create the StringBuilder in the first place.
I am puzzled by the behavior of ObjectOutputStream. It seems like it has an overhead of 9 bytes when writing data. Consider the code below:
float[] speeds = new float[96];
float[] flows = new float[96];
// .. do some stuff here to fill the arrays with data
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ObjectOutputStream oos = null;
try {
    oos = new ObjectOutputStream(baos);
    oos.writeInt(speeds.length);
    for (int i = 0; i < speeds.length; i++) {
        oos.writeFloat(speeds[i]);
    }
    for (int i = 0; i < flows.length; i++) {
        oos.writeFloat(flows[i]);
    }
    oos.flush();
} catch (IOException e) {
    e.printStackTrace();
} finally {
    try {
        if (oos != null) {
            oos.close();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
byte[] array = baos.toByteArray();
The length of the array is always 781, while I would expect it to be (1+96+96)*4 = 772 bytes. I can't seem to find where the 9 bytes go.
Thanks!
--edit: added if(oos!=null) { ... } to prevent NPE
ObjectOutputStream is used to serialize objects. You shouldn't make any assumptions about how the data is stored.
If you want to store just the raw data use DataOutputStream instead.
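A minimal sketch of that swap, reusing the arrays from the question; DataOutputStream writes the primitive bytes and nothing else, so the output is exactly (1 + 96 + 96) * 4 = 772 bytes:

ByteArrayOutputStream baos = new ByteArrayOutputStream();
DataOutputStream dos = new DataOutputStream(baos);
dos.writeInt(speeds.length);
for (float v : speeds) dos.writeFloat(v);
for (float v : flows) dos.writeFloat(v);
dos.flush();
byte[] array = baos.toByteArray(); // 772 bytes, no header or block overhead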
The ObjectOutputStream writes a header at the beginning.
You can eliminate this header by subclassing ObjectOutputStream and implementing writeStreamHeader().
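A sketch of that subclass, if you want to stay with ObjectOutputStream (the class name is illustrative; note this removes only the 4-byte stream header, while the block-data overhead described below remains):

class HeaderlessObjectOutputStream extends ObjectOutputStream {
    HeaderlessObjectOutputStream(OutputStream out) throws IOException {
        super(out); // the superclass constructor calls writeStreamHeader()
    }

    @Override
    protected void writeStreamHeader() throws IOException {
        // Deliberately write nothing; the default writes the magic number and version.
    }
}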
The JavaDoc for ObjectOutputStream tells you:
Primitive data, excluding serializable fields and externalizable data, is written to the ObjectOutputStream in block-data records. A block data record is composed of a header and data. The block data header consists of a marker and the number of bytes to follow the header. Consecutive primitive data writes are merged into one block-data record. The blocking factor used for a block-data record will be 1024 bytes. Each block-data record will be filled up to 1024 bytes, or be written whenever there is a termination of block-data mode. Calls to the ObjectOutputStream methods writeObject, defaultWriteObject and writeFields initially terminate any existing block-data record.
So the blocking stuff might be your missing overhead.
Java's serialization stream begins with a 4-byte header (a 2-byte "magic number" followed by a 2-byte version). The header is followed by a sequence of block-data and object entries. There are two kinds of block-data entries: "short" and "long." Short blocks have a 2-byte overhead per block and the block can be at most 255 bytes long. Long blocks have a 5-byte overhead but can be up to 4 GB in length. How long a "long" block can be in practice depends on the size of ObjectOutputStream's internal buffers.
In this case you only have one long data-block entry, so the overhead you see is 4 bytes from the stream header and 5 from the data block, 9 bytes in total.
You'll find the full documentation here: http://docs.oracle.com/javase/7/docs/platform/serialization/spec/protocol.html
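If you want to see the overhead for yourself, dumping the first nine bytes of the array should show the header and block marker; the expected values in the comment below follow from the specification rather than from the question's output:

// Expected: AC ED (magic) 00 05 (version) 7A (TC_BLOCKDATALONG) 00 00 03 04 (772)
for (int i = 0; i < 9; i++) {
    System.out.printf("%02X ", array[i]);
}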
I've got a 40MB file in the disk and I need to "map" it into memory using a byte array.
At first, I thought writing the file to a ByteArrayOutputStream would be the best way, but I find it takes about 160MB of heap space at some moment during the copy operation.
Does somebody know a better way to do this without using three times the file size of RAM?
Update: Thanks for your answers. I noticed I could reduce memory consumption a little by giving ByteArrayOutputStream an initial size slightly greater than the original file size (using the exact size with my code still forces a reallocation; I have to check why).
There's another high memory spot: when I get byte[] back with ByteArrayOutputStream.toByteArray. Taking a look to its source code, I can see it is cloning the array:
public synchronized byte toByteArray()[] {
return Arrays.copyOf(buf, count);
}
I'm thinking I could just extend ByteArrayOutputStream and rewrite this method, so to return the original array directly. Is there any potential danger here, given the stream and the byte array won't be used more than once?
MappedByteBuffer might be what you're looking for.
I'm surprised it takes so much RAM to read a file in memory, though. Have you constructed the ByteArrayOutputStream with an appropriate capacity? If you haven't, the stream could allocate a new byte array when it's near the end of the 40 MB, meaning that you would, for example, have a full buffer of 39MB, and a new buffer of twice the size. Whereas if the stream has the appropriate capacity, there won't be any reallocation (faster), and no wasted memory.
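A minimal sketch of the memory-mapped route ("somefile" is a placeholder); the mapping is backed by the OS page cache rather than the Java heap:

try (FileInputStream fis = new FileInputStream("somefile");
     FileChannel channel = fis.getChannel()) {
    MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
    // Read from buf like any other ByteBuffer: buf.get(), buf.getInt(), ...
}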
ByteArrayOutputStream should be okay so long as you specify an appropriate size in the constructor. It will still create a copy when you call toByteArray, but that's only temporary. Do you really mind the memory briefly going up a lot?
Alternatively, if you already know the size to start with you can just create a byte array and repeatedly read from a FileInputStream into that buffer until you've got all the data.
If you really want to map the file into memory, then a FileChannel is the appropriate mechanism.
If all you want to do is read the file into a simple byte[] (and don't need changes to that array to be reflected back to the file), then simply reading into an appropriately-sized byte[] from a normal FileInputStream should suffice.
Guava has Files.toByteArray() which does all that for you.
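That Guava call is a one-liner, assuming the library is on the classpath ("somefile" being a placeholder):

byte[] data = com.google.common.io.Files.toByteArray(new File("somefile"));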
For an explanation of the buffer growth behavior of ByteArrayOutputStream, please read this answer.
In answer to your question, it is safe to extend ByteArrayOutputStream. In your situation, it is probably better to override the write methods so that the maximum additional allocation is limited, say, to 16MB, as sketched below. You should not override toByteArray to expose the protected buf[] member, because a stream is not just a buffer: a stream is a buffer plus a position pointer and boundary protection. So it is dangerous to access and potentially manipulate the buffer from outside the class.
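A rough sketch of that bounded-growth idea (the class name and the 16MB cap are illustrative only); it pre-grows the protected buf in capped steps so that super.write() always finds enough capacity and never doubles a huge buffer:

public class BoundedGrowthStream extends ByteArrayOutputStream {
    private static final int MAX_STEP = 16 * 1024 * 1024; // cap each growth step at 16MB

    @Override
    public synchronized void write(byte[] b, int off, int len) {
        int needed = count + len;
        if (needed > buf.length) {
            // Grow by at most MAX_STEP, but always enough for this write.
            int step = Math.min(Math.max(buf.length, 32), MAX_STEP);
            buf = Arrays.copyOf(buf, Math.max(needed, buf.length + step));
        }
        super.write(b, off, len);
        // (write(int) would need the same treatment for completeness.)
    }
}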
I'm thinking I could just extend ByteArrayOutputStream and rewrite this method, so to return the original array directly. Is there any potential danger here, given the stream and the byte array won't be used more than once?
You shouldn't change the specified behavior of the existing method, but it's perfectly fine to add a new method. Here's an implementation:
/** Subclasses ByteArrayOutputStream to give access to the internal raw buffer. */
public class ByteArrayOutputStream2 extends java.io.ByteArrayOutputStream {
public ByteArrayOutputStream2() { super(); }
public ByteArrayOutputStream2(int size) { super(size); }
/** Returns the internal buffer of this ByteArrayOutputStream, without copying. */
public synchronized byte[] buf() {
return this.buf;
}
}
An alternative but hackish way of getting the buffer from any ByteArrayOutputStream is to use the fact that its writeTo(OutputStream) method passes the buffer directly to the provided OutputStream:
/**
 * Returns the internal raw buffer of a ByteArrayOutputStream, without copying.
 */
public static byte[] getBuffer(ByteArrayOutputStream bout) {
    final byte[][] result = new byte[1][];
    try {
        bout.writeTo(new OutputStream() {
            @Override
            public void write(byte[] buf, int offset, int length) {
                result[0] = buf;
            }

            @Override
            public void write(int b) {}
        });
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
    return result[0];
}
(That works, but I'm not sure if it's useful, given that subclassing ByteArrayOutputStream is simpler.)
However, from the rest of your question it sounds like all you want is a plain byte[] of the complete contents of the file. As of Java 7, the simplest and fastest way to do that is call Files.readAllBytes. In Java 6 and below, you can use DataInputStream.readFully, as in Peter Lawrey's answer. Either way, you will get an array that is allocated once at the correct size, without the repeated reallocation of ByteArrayOutputStream.
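That Java 7 call is a one-liner ("somefile" again a placeholder):

byte[] data = Files.readAllBytes(Paths.get("somefile"));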
If you have 40 MB of data I don't see any reason why it would take more than 40 MB to create a byte[]. I assume you are using a growing ByteArrayOutputStream which creates a byte[] copy when finished.
You can try the old read-the-file-at-once approach:
File file = ... // = something...
DataInputStream is = new DataInputStream(new FileInputStream(file));
byte[] bytes = new byte[(int) file.length()];
is.readFully(bytes);
is.close();
Using a MappedByteBuffer is more efficient and avoids a copy of the data (or using much heap) provided you can use the ByteBuffer directly; however, if you have to use a byte[], it's unlikely to help much.
... but I find it takes about 160MB of heap space at some moment during the copy operation
I find this extremely surprising ... to the extent that I have my doubts that you are measuring the heap usage correctly.
Let's assume that your code is something like this:
BufferedInputStream bis = new BufferedInputStream(
        new FileInputStream("somefile"));
ByteArrayOutputStream baos = new ByteArrayOutputStream(); /* no hint !! */
int b;
while ((b = bis.read()) != -1) {
    baos.write((byte) b);
}
byte[] stuff = baos.toByteArray();
Now the way that a ByteArrayOutputStream manages its buffer is to allocate an initial size, and (at least) double the buffer when it fills it up. Thus, in the worst case baos might use up to 80Mb buffer to hold a 40Mb file.
The final step allocates a new array of exactly baos.size() bytes to hold the buffer's contents. That's 40Mb. So the peak amount of memory that is actually in use should be 120Mb.
So where are those extra 40Mb being used? My guess is that they are not, and that you are actually reporting the total heap size, not the amount of memory that is occupied by reachable objects.
So what is the solution?
You could use a memory mapped buffer.
You could give a size hint when you allocate the ByteArrayOutputStream; e.g.
ByteArrayOutputStream baos = new ByteArrayOutputStream((int) file.length());
You could dispense with the ByteArrayOutputStream entirely and read directly into a byte array.
byte[] buffer = new byte[(int) file.length()];
FileInputStream fis = new FileInputStream(file);
int nosRead = fis.read(buffer);
/* check that nosRead == buffer.length and repeat if necessary */
Both options 1 and 2 should have a peak memory usage of 40MB while reading a 40MB file; i.e. no wasted space.
It would be helpful if you posted your code, and described your methodology for measuring memory usage.
I'm thinking I could just extend ByteArrayOutputStream and rewrite this method, so to return the original array directly. Is there any potential danger here, given the stream and the byte array won't be used more than once?
The potential danger is that your assumptions are incorrect, or become incorrect due to someone else modifying your code unwittingly ...
Google Guava's ByteSource seems to be a good choice for buffering in memory. Unlike implementations like ByteArrayOutputStream or ByteArrayList (from the Colt library), it does not merge the data into a huge byte array but stores every chunk separately. An example:
List<ByteSource> result = new ArrayList<>();
try (InputStream source = httpRequest.getInputStream()) {
    byte[] cbuf = new byte[CHUNK_SIZE];
    while (true) {
        int read = source.read(cbuf);
        if (read == -1) {
            break;
        } else {
            result.add(ByteSource.wrap(Arrays.copyOf(cbuf, read)));
        }
    }
}
ByteSource body = ByteSource.concat(result);
The ByteSource can be read as an InputStream anytime later:
InputStream data = body.openBufferedStream();
... came here with the same observation when reading a 1GB file: Oracle's ByteArrayOutputStream has lazy memory management.
A byte array is indexed by an int and is therefore limited to 2GB anyway. Without third-party dependencies, you might find this useful:
static public byte[] getBinFileContent(String aFile)
{
    try
    {
        final int bufLen = 32768;
        final long fs = new File(aFile).length();
        final long maxInt = ((long) 1 << 31) - 1;
        if (fs > maxInt)
        {
            System.err.println("file size out of range");
            return null;
        }
        final byte[] res = new byte[(int) fs];
        final byte[] buffer = new byte[bufLen];
        final InputStream is = new FileInputStream(aFile);
        int n;
        int pos = 0;
        while ((n = is.read(buffer)) > 0)
        {
            System.arraycopy(buffer, 0, res, pos, n);
            pos += n;
        }
        is.close();
        return res;
    }
    catch (final IOException e)
    {
        e.printStackTrace();
        return null;
    }
    catch (final OutOfMemoryError e)
    {
        e.printStackTrace();
        return null;
    }
}
}