Java OutOfMemory issue while transferring byte array

I am writing a remote desktop application, so I am transferring images from one machine to another as byte arrays over a socket. After receiving a byte array I convert it back into an image and draw it on a panel. The code looks approximately like this:
byte[] imageBytes = ...; // read from socket
InputStream in = new ByteArrayInputStream(imageBytes);
BufferedImage bufferedImage = ImageIO.read(in);
Image image = Toolkit.getDefaultToolkit().createImage(bufferedImage.getSource());
Image scaledImage = image.getScaledInstance(rmdPanel.getWidth(), rmdPanel.getHeight(), Image.SCALE_FAST);
Graphics graphics = rmdPanel.getGraphics();
graphics.drawImage(scaledImage, 0, 0, rmdPanel.getWidth(), rmdPanel.getHeight(), rmdPanel);
I also store imageBytes until the next image arrives (for comparison). Now I am getting a java.lang.OutOfMemoryError in this code (while receiving the byte array). My heap size is 128 MB to 512 MB, and the image bytes sent are at most 3 MB.

(You don't show the communication code, so I'm just guessing.) If you are using ObjectInputStream/ObjectOutputStream over the socket streams, be aware that they cache objects sent over the wire (to avoid resending the same data). Sometimes this is a nice feature, but it can cause problems if objects are held too long. You need to periodically call reset() on the ObjectOutputStream to clear this cache (in your case, possibly after every image send).
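A minimal sketch of that reset() pattern, assuming the sender wraps the socket in an ObjectOutputStream (the socket variable is illustrative; imageBytes is the array from the question):
ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
// ... for each captured frame ...
out.writeObject(imageBytes);
out.flush();
out.reset();   // drop the stream's back-reference table so old frames can be garbage collected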
Of course, the surest way to solve this problem is to attach a memory profiler and see what's using all the memory (or analyze a heap dump).

imageBytes = //read from socket.
InputStream in = new ByteArrayInputStream(imageBytes);
BufferedImage bufferedImage = ImageIO.read(in);
Why read the image bytes into an array? You don't need that. It is costing you at least one extra copy of the data, maybe two if the ByteArrayInputStream copies the byte array. Just do ImageIO.read() straight from the socket.
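For example, a rough sketch of decoding straight off the socket (the socket variable is illustrative, and this assumes the stream is framed so ImageIO knows where one image ends, e.g. one image per connection):
InputStream socketIn = socket.getInputStream();
BufferedImage bufferedImage = ImageIO.read(socketIn);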

I think ImageIO.read(...) does some caching of its input streams (it wraps the InputStream in an ImageInputStream backed by an in-memory or temp-file cache), so that may be contributing to running out of memory as you keep reading images.

Related

Android Bitmap Compression insufficient and causing OutOfMemoryError crashes

I am basically trying to compress and pass a Base64 representation of an image selected by a user; however, the app crashes on different phones with OutOfMemoryError. Here's my compression and conversion code:
Bitmap bm = BitmapFactory.decodeFile(filePath);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bm.compress(Bitmap.CompressFormat.JPEG, 100, baos);
byte[] byteArrayImage = baos.toByteArray();
String base64String = Base64.encodeToString(byteArrayImage, Base64.DEFAULT);
This process is also painfully slow and causes the app to crash sometimes.
Here's an exception I got:
java.lang.OutOfMemoryError: Failed to allocate a 5035548 byte allocation with 5011320 free bytes and 4MB until OOM
at dalvik.system.VMRuntime.newNonMovableArray(Native Method)
at android.graphics.BitmapFactory.nativeDecodeAsset(Native Method)
at android.graphics.BitmapFactory.decodeStream(BitmapFactory.java:625)
at android.graphics.BitmapFactory.decodeResourceStream(BitmapFactory.java:460)
at android.graphics.drawable.Drawable.createFromResourceStream(Drawable.java:973)
at android.content.res.Resources.loadDrawableForCookie(Resources.java:2477)
What changes should I make?
final BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;                       // first pass: decode only the bounds
BitmapFactory.decodeFile(filePath, options);             // fills options.outWidth/outHeight without allocating pixels
options.inSampleSize = 2;                                // you can also calculate your inSampleSize from the bounds
options.inJustDecodeBounds = false;                      // second pass: decode the (subsampled) pixels
options.inTempStorage = new byte[16 * 1024];
Bitmap bm = BitmapFactory.decodeFile(filePath, options); // changed line
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bm.compress(Bitmap.CompressFormat.JPEG, 100, baos);
byte[] byteArrayImage = baos.toByteArray();
String base64String = Base64.encodeToString(byteArrayImage, Base64.DEFAULT);
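If you want to calculate inSampleSize instead of hard-coding 2, one common approach (adapted from the pattern in the Android developer docs; reqWidth/reqHeight are target dimensions you choose) looks roughly like this:
public static int calculateInSampleSize(BitmapFactory.Options options, int reqWidth, int reqHeight) {
    final int height = options.outHeight;   // raw dimensions from the bounds-only decode
    final int width = options.outWidth;
    int inSampleSize = 1;
    // halve the decoded size until it fits within the requested dimensions
    while ((height / inSampleSize) > reqHeight || (width / inSampleSize) > reqWidth) {
        inSampleSize *= 2;                  // the decoder rounds down to powers of two anyway
    }
    return inSampleSize;
}
Call it between the bounds-only decode and the real decode: options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight);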
Note: using android:largeHeap="true" for your application isn't considered an ideal solution.
Here's the extract from the Android documentation that explains it:
However, the ability to request a large heap is intended only for a small set of apps that can justify the need to consume more RAM (such as a large photo editing app). Never request a large heap simply because you've run out of memory and you need a quick fix—you should use it only when you know exactly where all your memory is being allocated and why it must be retained. Yet, even when you're confident your app can justify the large heap, you should avoid requesting it to whatever extent possible. Using the extra memory will increasingly be to the detriment of the overall user experience because garbage collection will take longer and system performance may be slower when task switching or performing other common operations.
Here's the complete documentation link: https://developer.android.com/training/articles/memory.html
Edit 1: For efficient scaling of images, like WhatsApp's image compression, check out this SO answer.
Try to recycle your bitmap after you are done with it, and set the reference to null as well. If you want, you can also request a garbage collection.
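A minimal sketch of that cleanup, only safe once nothing will draw the bitmap again:
if (bm != null && !bm.isRecycled()) {
    bm.recycle();   // release the pixel memory eagerly
}
bm = null;          // drop the reference so the Bitmap object itself can be collected
// System.gc();     // optional; this is only a hint the runtime may ignore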
This approach won't work well for very large pictures, say a picture taken with a camera. A 13 MP photo is 4128 x 3096 x 3 bytes, which is about 40 megabytes, and that is the size of the bitmap alone. If you are creating the Base64 representation on the fly, it would take another 40 megabytes and some more, since a Base64 string costs more bytes to store than the comparable raw byte array (bitmap).
Do you really need to turn it into Base64? If, for example, you want to upload it, can you do that directly via a REST API or a multipart POST request?
If you can't do that, perhaps you can split the operation, say per 1 MB, or instead of building that string in memory you can write it to a file, appending after every 1 MB of work.

How does Java (Android) reuse freed memory?

I have an Android application that reads a lot of chunks of bytes one by one over the network, then combines them into a large buffer. For example,
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
int i = 0;
while (i < 10) {
    // read is an API in a lib that returns byte[].
    byte[] bytes = API.read();
    outputStream.write(bytes);
    i++;
}
...
The question is about the memory for bytes. Is there a way to force Java to use the same byte array for all the reads, so it does not have to free and allocate memory so often? Will the Java runtime optimize this case? Thanks.
The byte[] will be garbage collected. It is not appropriate to use an NIO ByteBuffer in this case as you are getting byte[] anyway, though it could come in handy later.
With each loop iteration, a byte[] is created, filled with data, written into the stream, and then no longer used. Once memory runs low (or earlier, depending on how your JVM operates) the array will be garbage collected and the memory made available again.
You need not worry about such things most of the time (unless you are concatenating tons of strings, which is extremely inefficient for this reason).
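For comparison, if the data came from an InputStream rather than an API that hands back a fresh byte[] each time, you could reuse a single buffer across all reads; a minimal sketch (the InputStream in and the chunk size are assumptions):
byte[] chunk = new byte[8 * 1024];                 // one reusable buffer for every read
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
int n;
while ((n = in.read(chunk)) != -1) {               // read() refills the same array each time
    outputStream.write(chunk, 0, n);               // copy only the bytes actually read
}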

Java writing to ByteArrayOutputStream memory leak

I am writing bytes of image to ByteArrayOutputStream then sending it over socket.
The problem is, when I do
ImageIO.write(image, "gif", byteArray);
memory usage goes up a LOT, as if there were a memory leak.
I send using this
ImageIO.write(image, "gif", byteArrayO);
byte [] byteArray = byteArrayO.toByteArray();
byteArrayO.flush();
byteArrayO.reset();
Connection.pw.println("" + byteArray.length);
int old = Connection.client.getSendBufferSize();
Connection.client.setSendBufferSize(byteArray.length);
Connection.client.getOutputStream().write(byteArray, 0, byteArray.length);
Connection.client.getOutputStream().flush();
image.flush();
image = null;
byteArrayO = null;
byteArray = null;
System.gc();
Connection.client.setSendBufferSize(old);
As you can see I have tried all ways, the error comes when I write to the ByteArrayOutputStream, not when I transfer it. The receiver does not get any errors.
Is there any way I can clear the byteArray and remove everything it holds from memory? I know reset() is supposed to, but it doesn't seem to here. I want to dispose of the ByteArrayOutputStream directly when this is done.
@Christoffer Hammarström probably has the best solution, but I'll add this to try to explain the memory usage.
These 2 lines are creating 3 copies of your image data:
ImageIO.write(image, "gif", byteArrayO);
byte [] byteArray = byteArrayO.toByteArray();
After executing this you have one copy of the data stored in image, one copy in the ByteArrayOutputStream, and another copy in the byte array (toByteArray() does not return the internal buffer; it creates a copy).
Calling reset() does not release the memory inside the ByteArrayOutputStream, it just resets the position counter back to 0. The data is still there.
To allow the memory to be released earlier you could assign each item to null as soon as you have finished with it. This will allow the memory to be collected by the garbage collector if it decides to run earlier. EG:
ImageIO.write(image, "gif", byteArrayO);
image = null;
byte [] byteArray = byteArrayO.toByteArray();
byteArrayO = null;
...
Why do you have to fiddle with the send buffer size? What kind of protocol are you using on top of this socket? It should be just as simple as:
ImageIO.write(image, "gif", Connection.client.getOutputStream());
If you have to use a ByteArrayOutputStream, at least use
byteArrayO.writeTo(Connection.client.getOutputStream())
so you don't make an extra redundant byte[].
This is not quite the answer you want, but something you might wish to consider.
Why not create a pool of byte arrays and reuse them every time you need to? This will be a little more efficient, as you won't be creating new arrays and throwing them away all the time. Less GC work is always a good thing, and you will also be able to guarantee that the application has enough memory to operate in at all times.
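A tiny sketch of such a pool (the class and sizes are illustrative, not from the question):
class BufferPool {
    private final java.util.ArrayDeque<byte[]> free = new java.util.ArrayDeque<>();
    private final int bufferSize;

    BufferPool(int count, int bufferSize) {
        this.bufferSize = bufferSize;
        for (int i = 0; i < count; i++) free.push(new byte[bufferSize]);
    }

    synchronized byte[] acquire() {              // hand out a pooled array, or grow if the pool is empty
        return free.isEmpty() ? new byte[bufferSize] : free.pop();
    }

    synchronized void release(byte[] buffer) {   // return the array for reuse instead of discarding it
        free.push(buffer);
    }
}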
You can request that the VM run garbage collection through System.gc(), but this is NOT guaranteed to actually happen. The virtual machine performs garbage collection when it decides it is necessary or that it is an appropriate time.
What you are describing is pretty normal. It has to put the bytes of the image you are creating somewhere.
Instead of memory you can use a FileOutputStream to write the bytes to. You then have to make a FileInputStream to read from the file you wrote, and a loop which reads bytes into a byte array buffer of, say, 64 KB, and then writes those bytes to the connection's output stream.
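Roughly, that approach could look like this (Connection.client and the image variable are taken from your code; the temp-file name and chunk size are illustrative):
File tmp = File.createTempFile("frame", ".gif");
try (OutputStream fileOut = new FileOutputStream(tmp)) {
    ImageIO.write(image, "gif", fileOut);        // image bytes go to disk instead of a big in-memory array
}
OutputStream socketOut = Connection.client.getOutputStream();
try (InputStream fileIn = new FileInputStream(tmp)) {
    byte[] buffer = new byte[64 * 1024];
    int n;
    while ((n = fileIn.read(buffer)) != -1) {
        socketOut.write(buffer, 0, n);           // forward each 64 KB chunk as it is read
    }
    socketOut.flush();
}
tmp.delete();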
You mention an error. If you are getting an error, what is the error exactly?
If you use the client JVM (-client argument to java) then the memory might be given back to the OS and the Java process will shrink again. I'm not sure about this.
If you don't like how much memory JAI is using you can try using Sanselan: http://commons.apache.org/imaging/

Using Dynamic Buffers? Java

In Java, I have a method
public int getNextFrame( byte[] buff )
that reads from a file into the buffer and returns the number of bytes read. I am reading from an .MJPEG file that has a 5-byte length value, say "07939", followed by that many bytes for the JPEG.
The problem is that the JPEG byte size could overflow the buffer. I cannot seem to find a neat solution for the allocation. My goal is to not create a new buffer for every image. I tried a direct ByteBuffer so I could use its array() method to get direct access to the underlying buffer, but a ByteBuffer does not expand dynamically.
Should I be returning a reference to the parameter? Like:
public ByteBuffer getNextFrame( ByteBuffer ref )
How do I find the bytes read? Thanks.
java.io.ByteArrayOutputStream is a wrapper around a byte-array and enlarges it as needed. Perhaps this is something you could use.
Edit:
To reuse it, just call reset() and start over...
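A minimal sketch of reusing one ByteArrayOutputStream across frames (the InputStream in and readFrameSize(), which would parse the 5-byte length, are assumptions):
ByteArrayOutputStream frame = new ByteArrayOutputStream(64 * 1024);  // grows if a frame is bigger
byte[] chunk = new byte[8 * 1024];
int frameSize;
while ((frameSize = readFrameSize(in)) > 0) {        // hypothetical: parses the 5-byte ASCII length
    frame.reset();                                   // keeps the internal array, rewinds the write position
    int remaining = frameSize;
    while (remaining > 0) {
        int n = in.read(chunk, 0, Math.min(chunk.length, remaining));
        if (n == -1) throw new EOFException("stream ended mid-frame");
        frame.write(chunk, 0, n);
        remaining -= n;
    }
    byte[] jpegBytes = frame.toByteArray();          // note: toByteArray() still copies into a fresh array
    // ... decode jpegBytes ...
}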
Just read the required number of bytes. Do not use read(buffer), but read(buffer, 0, size), and note that read() may return fewer bytes than requested, so loop (or use DataInputStream.readFully) until you have them all. If there are more bytes, just discard them; the JPG is broken anyway.
EDIT:
Allocating a byte[] is so much faster than reading from a file or a socket, I would be surprised it will make much difference, unless you have a system where micro-seconds cost money.
The time it takes to read a file of 64 KB is about 10 ms (unless the file is in memory).
The time it takes to allocate a 64 KB byte[] is about 0.001 ms, possibly faster.
You can use apache IO's IOBuffer, however this expands very expensively.
You can also use ByteBuffer, the position() will tell you how much data was read.
If you don't know how big the buffer will be and you have a 64-bit JVM, you can create a large direct buffer. This will only allocate memory (by page) when used. The upshot is that you can allocate 1 GB but might only ever use 4 KB if that is all you need. A direct buffer doesn't support array(), however; you would have to read from the ByteBuffer using its other methods.
Another solution is to use an AtomicReference<byte[]>; the called method can increase the size as required, but if it's already large enough it will reuse the previous buffer.
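A sketch of that idea, reusing the getNextFrame name from the question (readFrameSize() is a hypothetical helper that parses the 5-byte length header):
int getNextFrame(InputStream in, AtomicReference<byte[]> bufRef) throws IOException {
    int size = readFrameSize(in);
    byte[] buf = bufRef.get();
    if (buf == null || buf.length < size) {
        buf = new byte[size];          // grow only when the current buffer is too small
        bufRef.set(buf);               // the caller sees the new, larger buffer
    }
    int off = 0;
    while (off < size) {               // read() may return fewer bytes than asked for
        int n = in.read(buf, off, size - off);
        if (n == -1) throw new EOFException("stream ended mid-frame");
        off += n;
    }
    return size;                       // number of valid bytes in bufRef.get()
}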
The usual way of accomplishing this in a high-level API is either to let the user provide an OutputStream and fill it with your data (which can be a ByteArrayOutputStream or something completely different), or to have an InputStream as the return value that the user can read to get the data (which will dynamically load the correct parts from the file and stop when finished).

Buffers and bytes?

Could someone explain to me the uses of buffers, and perhaps give some simple (documented) examples of a buffer in use? Thanks.
I lack much knowledge in this area of Java programming, so forgive me if I asked the question wrong. :s
A buffer is a space in memory where data is stored temporarily before it is processed. See the Wikipedia article.
Here's a simple Java example of how to use the ByteBuffer class.
Update
public static void main(String[] args) throws IOException
{
    // reads bytes from a file (args[0]) via an input stream (inFile)
    FileInputStream inFile = new FileInputStream(args[0]);
    // creates an output stream (outFile) to write bytes to
    FileOutputStream outFile = new FileOutputStream(args[1]);
    // get the unique channel object of the input file
    FileChannel inChannel = inFile.getChannel();
    // get the unique channel object of the output file
    FileChannel outChannel = outFile.getChannel();
    /* create a new byte buffer and pre-allocate 1 MB of space in memory,
       and keep reading up to 1 MB of data from the file into the buffer
       until the entire file has been read (read() returns -1 at end of file) */
    for (ByteBuffer buffer = ByteBuffer.allocate(1024 * 1024); inChannel.read(buffer) != -1; buffer.clear())
    {
        // flip the buffer: the limit becomes the current position and the position
        // is set back to 0, so the bytes just read can now be written out
        buffer.flip();
        // write the data from the buffer into the output stream
        while (buffer.hasRemaining()) outChannel.write(buffer);
    }
    // close the file streams
    inChannel.close();
    outChannel.close();
}
Hope that clears things up a little.
With a buffer, people usually mean some block of memory to temporarily store some data in. One primary use for buffers is in I/O operations.
A device like a hard disk is good at quickly reading or writing a block of consecutive bytes on the disk in one go. Reading a large amount of data can be done very quickly if you tell the hard disk "read these 10,000 bytes and put them in memory here". If you instead program a loop and fetch the bytes one by one, telling the hard disk to get one byte each time, it is going to be very inefficient and slow.
So you create a buffer of 10,000 bytes, tell the harddisk to read all the bytes in one go, and then you process those 10,000 bytes one by one from the buffer in memory.
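As a small illustration, here is one way to read a file through a 10,000-byte buffer instead of one byte at a time (the file name and the per-byte processing are just placeholders):
try (FileInputStream in = new FileInputStream("data.bin")) {
    byte[] buffer = new byte[10_000];
    int n;
    while ((n = in.read(buffer)) != -1) {   // one call to the disk fetches up to 10,000 bytes
        for (int i = 0; i < n; i++) {
            process(buffer[i]);             // hypothetical per-byte processing
        }
    }
}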
The Sun Java tutorials section on I/O covers this topic:
http://java.sun.com/docs/books/tutorial/essential/io/index.html
