I'm trying to grab a set of mipmap levels and save them to a local cache file to avoid rebuilding them each time (pre-generating them isn't practical...)
I've got the mipmap levels into a set of bitmaps OK, and now want to write them to my cache file, but whatever variety of buffer I use (direct or not, setting the byte order or not), hasArray always comes back false on the IntBuffer. I must be doing something silly here, but I can't see the wood for the trees anymore.
I haven't been using Java for long, so this is probably a beginner's error ;)
Code looks like this:
int tsize = 256;
ByteBuffer xbb = ByteBuffer.allocate(tsize * tsize * 4);
// or any other variety of create, like wrap etc. - makes no difference
xbb.order(ByteOrder.nativeOrder()); // in or out - makes no difference
IntBuffer ib = xbb.asIntBuffer();
for (int i = 0; i < tbm.length; i++) {
    // ib.array() throws an exception; ib.hasArray() returns false
    tbm[i].getPixels(ib.array(), 0, tsize, 0, 0, tsize, tsize);
    ou.write(xbb.array(), 0, tsize * tsize * 4);
    tsize = tsize / 2;
}
See this link - it talks about the same problem. There are answers there; however, it concludes with:
In another post
http://forum.java.sun.com/thread.jsp?forum=4&thread=437539
it is suggested to implement a version of DataBuffer which is
backed by a (Mapped)ByteBuffer *gasp* ...
Note that forum.java.sun.com has since moved somewhere under Oracle's forums.
I hope this helps. Anyhow, if you find a better answer, let me know too :-)
In my testing, changing the ByteBuffer into an IntBuffer using ByteBuffer.asIntBuffer() causes you to lose the backing array.
If you use IntBuffer.allocate(tsize*tsize) instead, you should be able to get the backing array.
ByteBuffer buf = ByteBuffer.allocate(10);
IntBuffer ib = buf.asIntBuffer();
System.out.printf("Buf %s, buf.hasArray: %s, ib.hasArray: %s%n", ib, buf.hasArray(), ib.hasArray());
buf = ByteBuffer.allocateDirect(10);
ib = IntBuffer.allocate(10);
System.out.printf("Buf %s, buf.hasArray: %s, ib.hasArray: %s%n", ib, buf.hasArray(), ib.hasArray());
Produces:
Buf java.nio.ByteBufferAsIntBufferB[pos=0 lim=2 cap=2], buf.hasArray: true, ib.hasArray: false
Buf java.nio.HeapIntBuffer[pos=0 lim=10 cap=10], buf.hasArray: false, ib.hasArray: true
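When the view buffer has no accessible array, a bulk put/get through your own int[] sidesteps array() entirely. A minimal sketch (the pixel values are made up; in the question's code you would pass the int[] to getPixels, bulk-put it into the view, and then write xbb.array() to the cache file):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.IntBuffer;
import java.util.Arrays;

public class BulkGetDemo {
    public static void main(String[] args) {
        // 4 ints = 16 bytes; the question uses tsize * tsize ints per level.
        ByteBuffer xbb = ByteBuffer.allocate(4 * 4).order(ByteOrder.nativeOrder());
        IntBuffer ib = xbb.asIntBuffer();
        System.out.println("view hasArray: " + ib.hasArray()); // no int[] backs the view

        // Fill our own int[] (stand-in for getPixels output) and bulk-put it;
        // the bytes land in xbb, whose array CAN be written to the cache file.
        int[] pixels = {0xFF0000FF, 0xFF00FF00, 0xFFFF0000, 0xFFFFFFFF};
        ib.put(pixels);
        byte[] fileBytes = xbb.array(); // ready for the output stream

        // Reading back works the same way: bulk-get into an int[].
        ib.rewind();
        int[] roundTrip = new int[ib.remaining()];
        ib.get(roundTrip);
        System.out.println(Arrays.equals(pixels, roundTrip)); // true
        System.out.println(fileBytes.length);                 // 16
    }
}
```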
I am trying to transfer a file larger than 4 GB using the Java Sockets API. I am already reading it via InputStreams and writing it via OutputStreams. However, analyzing the transmitted packets in Wireshark, I realise that the sequence number of the TCP packets is incremented by the byte length of each packet, which seems to be 1440 bytes.
This leads to the behavior that when I try to send a file larger than 4 GB, the total size of the TCP sequence-number field is exceeded, leading to lots of error packets, but no error in Java.
My code for transmission currently looks like this:
DataOutputStream fileTransmissionStream = new DataOutputStream(transmissionSocket.getOutputStream());
FileInputStream fis = new FileInputStream(toBeSent);
int totalFileSize = fis.available();
byte[] sendBytes;
fileTransmissionStream.writeInt(totalFileSize);
while (totalFileSize > 0) {
    if (totalFileSize >= FileTransmissionManagementService.splittedTransmissionSize) {
        sendBytes = new byte[FileTransmissionManagementService.splittedTransmissionSize];
        fis.read(sendBytes);
        totalFileSize -= FileTransmissionManagementService.splittedTransmissionSize;
    } else {
        sendBytes = new byte[totalFileSize];
        fis.read(sendBytes);
        totalFileSize = 0;
    }
    byte[] encryptedBytes = DataEncryptor.encrypt(sendBytes);
    /*byte[] bytesx = ByteBuffer.allocate(4).putInt(encryptedBytes.length).array();
    fileTransmissionStream.write(bytesx, 0, 4);*/
    fileTransmissionStream.writeInt(encryptedBytes.length);
    fileTransmissionStream.write(encryptedBytes, 0, encryptedBytes.length);
}
What exactly have I done wrong here, or is it not possible to transmit files larger than 4 GB via one socket?
TCP can handle arbitrarily long data streams. There is no problem with the sequence number wrapping around: as it starts at a random value, that can happen almost immediately, regardless of the length of the stream. The problems are in your code:
DataOutputStream fileTransmissionStream = new DataOutputStream(transmissionSocket.getOutputStream());
FileInputStream fis = new FileInputStream(toBeSent);
int totalFileSize = fis.available();
Classic misuse of available(). Have a look at the Javadoc and see what it's really for. This is also where your basic problem lies: values > 2 GB don't fit into an int, so there is truncation. You should be using File.length(), and storing the result in a long.
fileTransmissionStream.writeInt(totalFileSize);
while (totalFileSize >0){
if(totalFileSize >= FileTransmissionManagementService.splittedTransmissionSize){
sendBytes = new byte[FileTransmissionManagementService.splittedTransmissionSize];
fis.read(sendBytes);
Here you are ignoring the result of read(). It isn't guaranteed to fill the buffer: that's why it returns a value. See, again, the Javadoc.
totalFileSize -= FileTransmissionManagementService.splittedTransmissionSize;
} else {
sendBytes = new byte[totalFileSize];
Here you are assuming the file size fits into an int, and assuming the bytes fit into memory.
fis.read(sendBytes);
See above re read().
totalFileSize = 0;
}
byte[] encryptedBytes = DataEncryptor.encrypt(sendBytes);
/*byte[] bytesx = ByteBuffer.allocate(4).putInt(encryptedBytes.length).array();
fileTransmissionStream.write(bytesx,0,4);*/
We're not interested in your commented-out code.
fileTransmissionStream.writeInt(encryptedBytes.length);
fileTransmissionStream.write(encryptedBytes, 0, encryptedBytes.length);
You don't need all this crud. Use a CipherOutputStream to take care of the encryption, or better still SSL, and use the following copy loop:
byte[] buffer = new byte[8192]; // or much more if you like, but there are diminishing returns
int count;
while ((count = in.read(buffer)) > 0)
{
out.write(buffer, 0, count);
}
It seems that your protocol for the transmission is:
Send the total file length in an int.
For each bunch of bytes read:
send the number of encrypted bytes ahead in an int,
then send the encrypted bytes themselves.
The basic problem, beyond the misreadings of the documentation pointed out in @EJP's answer, is with this very protocol.
You assume that the file length can be sent over in an int, which means the length sent can never be more than Integer.MAX_VALUE. This limits you to files of at most 2 GB (remember Java integers are signed).
If you take a look at the Files.size() method, which gets the actual file size in bytes, you'll see that it returns a long. A long will accommodate files larger than 2 GB, and larger than 4 GB. So your protocol should at the very least be defined to start with a long rather than an int field.
The size problem really has nothing at all to do with the TCP packets.
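To make that concrete, here is a sketch of the revised framing (names and chunk sizes are illustrative, and encryption is omitted): an 8-byte long header for the total size, then length-prefixed chunks, with the receiver using readFully so short reads can't corrupt the stream. Byte arrays stand in for the socket streams:

```java
import java.io.*;

public class FramingDemo {
    // Sender side: a long total, then each chunk as [int length][bytes].
    static byte[] send(byte[][] chunks) throws IOException {
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(wire);
        long total = 0;
        for (byte[] c : chunks) total += c.length;
        out.writeLong(total);                 // was writeInt: truncated above 2 GB
        for (byte[] c : chunks) {
            out.writeInt(c.length);
            out.write(c);
        }
        return wire.toByteArray();
    }

    // Receiver side: read the long header, then consume chunks until done.
    static long receive(byte[] wire) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(wire));
        long expected = in.readLong();
        long got = 0;
        while (got < expected) {
            byte[] chunk = new byte[in.readInt()];
            in.readFully(chunk);              // unlike read(), fills the whole array
            got += chunk.length;
        }
        return got;
    }

    public static void main(String[] args) throws IOException {
        byte[][] chunks = {new byte[1000], new byte[500], new byte[1]};
        System.out.println(receive(send(chunks))); // 1501
    }
}
```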
The title says it all. Is there any way to convert from StringBuilder to byte[] without using a String in the middle?
The problem is that I'm managing REALLY large strings (millions of chars), and then I have a cycle that adds a char at the end and obtains the byte[]. The process of converting the StringBuffer to a String makes this cycle very, very slow.
Is there any way to accomplish this? Thanks in advance!
As many have already suggested, you can use the CharBuffer class, but allocating a new CharBuffer would only make your problem worse.
Instead, you can directly wrap your StringBuilder in a CharBuffer, since StringBuilder implements CharSequence:
Charset charset = StandardCharsets.UTF_8;
CharsetEncoder encoder = charset.newEncoder();
// No allocation performed, just wraps the StringBuilder.
CharBuffer buffer = CharBuffer.wrap(stringBuilder);
ByteBuffer bytes = encoder.encode(buffer);
EDIT: Duarte correctly points out that the CharsetEncoder.encode method may return a buffer whose backing array is larger than the actual data—meaning, its capacity is larger than its limit. It is necessary either to read from the ByteBuffer itself, or to read a byte array out of the ByteBuffer that is guaranteed to be the right size. In the latter case, there's no avoiding having two copies of the bytes in memory, albeit briefly:
ByteBuffer byteBuffer = encoder.encode(buffer);
byte[] array;
int arrayLen = byteBuffer.limit();
if (arrayLen == byteBuffer.capacity()) {
array = byteBuffer.array();
} else {
// This will place two copies of the byte sequence in memory,
// until byteBuffer gets garbage-collected (which should happen
// pretty quickly once the reference to it is null'd).
array = new byte[arrayLen];
byteBuffer.get(array);
}
byteBuffer = null;
If you're willing to replace the StringBuilder with something else, yet another possibility would be a Writer backed by a ByteArrayOutputStream:
ByteArrayOutputStream bout = new ByteArrayOutputStream();
Writer writer = new OutputStreamWriter(bout);
try {
    writer.write("String A");
    writer.write("String B");
    writer.flush(); // OutputStreamWriter buffers internally; flush before reading bout
} catch (IOException e) {
    e.printStackTrace();
}
System.out.println(bout.toByteArray().length);
try {
    writer.write("String C");
    writer.flush();
} catch (IOException e) {
    e.printStackTrace();
}
System.out.println(bout.toByteArray().length);
As always, your mileage may vary.
For starters, you should probably be using StringBuilder, since StringBuffer has synchronization overhead that's usually unnecessary.
Unfortunately, there's no way to go directly to bytes, but you can copy the chars into an array or iterate from 0 to length() and read each charAt().
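A sketch of the copy-the-chars route (assuming UTF-8 output): getChars moves the builder's contents into a char[] without creating a String, and the wrapped char[] can then be encoded directly:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.StandardCharsets;

public class GetCharsDemo {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("héllo, wörld");

        // Copy chars out without an intermediate String...
        char[] chars = new char[sb.length()];
        sb.getChars(0, sb.length(), chars, 0);

        // ...then encode the char[] directly to bytes.
        ByteBuffer bytes = StandardCharsets.UTF_8.encode(CharBuffer.wrap(chars));
        System.out.println(bytes.remaining()); // 14: é and ö take 2 bytes each in UTF-8
    }
}
```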
Unfortunately, the answers above that deal with ByteBuffer's array() method are a bit buggy... The trouble is that the allocated byte[] is likely to be bigger than what you'd expect, so there will be trailing NUL bytes that are hard to get rid of, since you can't resize arrays in Java.
Here is an article that explains this in more detail:
http://worldmodscode.wordpress.com/2012/12/14/the-java-bytebuffer-a-crash-course/
What are you trying to accomplish with "millions of chars"? Are these logs that need to be parsed? Can you read the data as just bytes and stick to a ByteBuffer? Then you can do:
buffer.array()
to get a byte[].
Depending on what it is you are doing, you can also use just a char[] or a CharBuffer:
CharBuffer cb = CharBuffer.allocate(4242);
cb.put("Depends on what it is you need to do");
...
Then you can get a char[] as:
cb.array()
It's always good to REPL things out; it's fun and proves the point. A Java REPL is not something we are accustomed to, but hey, there is Clojure to save the day, which speaks Java fluently:
user=> (import java.nio.CharBuffer)
java.nio.CharBuffer
user=> (def cb (CharBuffer/allocate 4242))
#'user/cb
user=> (-> (.put cb "There Be") (.array))
#<char[] [C#206564e9>
user=> (-> (.put cb " Dragons") (.array) (String.))
"There Be Dragons"
If you want performance, I wouldn't use a StringBuilder or create a byte[] at all. Instead, write progressively to the stream that will take the data in the first place. If you can't do that, you can copy the data from the StringBuilder to the Writer, but it's much faster not to create the StringBuilder in the first place.
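A sketch of that idea (a ByteArrayOutputStream stands in for the real destination stream): each piece is encoded to bytes as it is written, so no giant StringBuilder or intermediate String ever exists:

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class ProgressiveWriteDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream dest = new ByteArrayOutputStream();
        // Chars are encoded as they are written; memory use stays flat.
        try (Writer w = new OutputStreamWriter(dest, StandardCharsets.UTF_8)) {
            for (int i = 0; i < 5; i++) {
                w.write("chunk-");
                w.write(Integer.toString(i));
            }
        } // close() flushes the encoder's internal buffer
        System.out.println(dest.size()); // 35: five 7-char ASCII chunks
    }
}
```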
I send long numbers via UDP.
LinkedQueue<ByteBuffer> Q = new LinkedQueue<ByteBuffer>();
while (this._run) {
    udp_socket.receive(packet);
    if (packet.getLength() > 0) {
        ByteBuffer bb = ByteBuffer.wrap(buf, 0, packet.getLength());
        Q.add(bb);
    }
}
// udp close. I remove data from the queue, but all ByteBuffers have the same value.
while (!Q.isEmpty()) {
    ByteBuffer b = Q.remove();
    b.getLong(); // same value
}
Why do I receive the same value? Any suggestions?
Does your byte buffer consist of just one long?
Probably not; my guess is that you put a bit more than one long in there, and that's why each buffer gives you the same value from its first eight bytes.
What you need to do is keep calling .getLong() until you hit the end of the buffer.
See the docs.
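In that spirit, a sketch of draining every long from a buffer rather than reading only the first one:

```java
import java.nio.ByteBuffer;

public class DrainLongsDemo {
    public static void main(String[] args) {
        // Simulate a datagram payload carrying three longs.
        ByteBuffer bb = ByteBuffer.allocate(24);
        bb.putLong(7L).putLong(42L).putLong(99L);
        bb.flip(); // switch from writing to reading

        long sum = 0;
        while (bb.remaining() >= Long.BYTES) { // keep calling getLong() until the data runs out
            sum += bb.getLong();
        }
        System.out.println(sum); // 148
    }
}
```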
I have the following snippet of code. The marked line is causing a BufferUnderflowException. I read the documentation on the exception but still do not understand what exactly it means. I use the .rewind() method, which I was under the impression mitigates the issue.
Can anyone please enlighten me on the topic or the cause of my error?
Bitmap cameraBaseSized = BitmapFactory.decodeFile(cameraPath, opts);
Bitmap canvasBlendSized = BitmapFactory.decodeFile(canvasPath, options);
Bitmap result = cameraBaseSized.copy(Config.ARGB_8888, true);
IntBuffer buffBase = IntBuffer.allocate(cameraBaseSized.getWidth()
* cameraBaseSized.getHeight());
cameraBaseSized.copyPixelsToBuffer(buffBase);
buffBase.rewind();
IntBuffer buffBlend = IntBuffer.allocate(canvasBlendSized.getWidth()
* canvasBlendSized.getHeight());
canvasBlendSized.copyPixelsToBuffer(buffBlend);
buffBlend.rewind();
IntBuffer buffOut = IntBuffer.allocate(cameraBaseSized.getWidth()
* cameraBaseSized.getHeight());
buffOut.rewind();
while (buffOut.position() < buffOut.limit()) {
    int filterInt = buffBlend.get(); // BUFFERUNDERFLOW EXCEPTION
    int srcInt = buffBase.get();
    int redValueFilter = Color.red(filterInt);
    int greenValueFilter = Color.green(filterInt);
    int blueValueFilter = Color.blue(filterInt);
    int redValueSrc = Color.red(srcInt);
    int greenValueSrc = Color.green(srcInt);
    int blueValueSrc = Color.blue(srcInt);
    int redValueFinal = multiply(redValueFilter, redValueSrc);
    int greenValueFinal = multiply(greenValueFilter, greenValueSrc);
    int blueValueFinal = multiply(blueValueFilter, blueValueSrc);
    int pixel = Color.argb(255, redValueFinal, greenValueFinal, blueValueFinal);
    buffOut.put(pixel);
}
buffOut.rewind();
result.copyPixelsFromBuffer(buffOut);
And the exception snippet
11-29 14:41:57.347: E/AndroidRuntime(2166): Caused by: java.nio.BufferUnderflowException
11-29 14:41:57.347: E/AndroidRuntime(2166): at java.nio.IntArrayBuffer.get(IntArrayBuffer.java:55)
I would also like to add that this is happening only on specific devices, particularly Samsung ones.
Maybe this test will help:
ByteBuffer b = ByteBuffer.allocate(1);
b.get();
b.get();
After allocation there is 1 byte in the buffer. The first get() reads this byte and the buffer reaches its limit; the second get() is illegal because there is nothing left to read, so you get a BufferUnderflowException.
This code does not fail:
ByteBuffer b = ByteBuffer.allocate(1);
b.get();
b.rewind();
b.get();
You read the byte, but it is still in the buffer, so you can rewind and read the byte again.
Since it is happening on certain devices, it could be possible that you are not getting the pixel format you are expecting.
Also if buffBlend is for any reason shorter than buffOut (which could potentially be caused by the bitmaps being different formats), you will get this exception when you try and get() past the end of it.
EDIT:
You could change int filterInt = buffBlend.get(); to
int filterInt = 0;
if (buffBlend.position() < buffBlend.limit())
filterInt = buffBlend.get();
If you do that for both of your buffers, that should protect you against the exception, and blend with black when one image is bigger than the other.
If someone is still interested:
Short solution: call buffBlend.position(0); before executing the while loop. Do the same for all your IntBuffer objects.
Background:
When you put an element into an IntBuffer, its position is incremented, and calling .get() advances the position further still. That's the problem. Say you put 3 int values into an IntBuffer: the position is now 3, at the end of the buffer. Calling .get() at that point throws a BufferUnderflowException, because you are trying to read at position 3 while the buffer only holds elements at indices 0 to 2.
Solution: before calling the first get(), execute intBufferObject.position(0); that is how you get the position back to 0.
See the get() and put() method documentation for more details:
https://docs.oracle.com/javase/7/docs/api/java/nio/IntBuffer.html#get()
I need to create FloatBuffers from a dynamic set of floats (that is, I don't know the length ahead of time). The only way I've found to do this is rather inelegant (below). I assume I'm missing something and there must be a cleaner/simpler method.
My solution:
Vector<Float> work = new Vector<Float>();
// add stuff to work
ByteBuffer bb = ByteBuffer.allocateDirect(work.size() * 4 /* sizeof(float) */);
bb.order(ByteOrder.nativeOrder());
FloatBuffer floatBuf = bb.asFloatBuffer();
for (Float f : work)
    floatBuf.put(f);
floatBuf.position(0);
I am using my buffers for OpenGL commands thus I need to keep them around (that is, the resulting FloatBuffer is not just a temporary space).
If you're using the OpenGL API through Java, I assume you're using LWJGL as the go-between. If so, there's a simple solution for this: use the BufferUtils class in the org.lwjgl package. The method BufferUtils.createFloatBuffer() allows you to put in floats from an array, which, if you're using a Vector, is a simple conversion. Although it's not much better than your method, it does save the need for a byte buffer, which is nasty enough, and allows for a few quick conversions. The code for this exists in the new LWJGL tutorials for OpenGL 3.2+ here.
Hope this helps.
I would use a plain ByteBuffer and write out the data whenever the buffer fills (or do whatever you planned to do with it), e.g.
SocketChannel sc = ...
ByteBuffer bb = ByteBuffer.allocateDirect(32 * 1024).order(ByteOrder.LITTLE_ENDIAN);
for (int i = 0; i < 100000000; i++) {
    float f = i;
    // move to a checkFree(4) method
    if (bb.remaining() < 4) {
        bb.flip();
        while (bb.remaining() > 0)
            sc.write(bb);
        bb.clear(); // make room for the next batch
    }
    // end of method
    bb.putFloat(f);
}
// drain whatever is left in the buffer
bb.flip();
while (bb.remaining() > 0)
    sc.write(bb);
Creating really large buffers can actually be slower than processing the data as you generate it.
Note: this creates almost no garbage. There is only one object which is the ByteBuffer.