I have to serialize around a million items and I get the following exception when I run my code:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOfRange(Unknown Source)
at java.lang.String.<init>(Unknown Source)
at java.io.BufferedReader.readLine(Unknown Source)
at java.io.BufferedReader.readLine(Unknown Source)
at org.girs.TopicParser.dump(TopicParser.java:23)
at org.girs.TopicParser.main(TopicParser.java:59)
How do I handle this?
I know that the official Java answer is "Oh noes! Out of memories! I give in!". This is all rather frustrating for anyone who has programmed in environments where running out of memory is not allowed to be a fatal error (for example, writing an OS, or writing apps for non-protected OSes).
The willingness to surrender is necessary - you can't control every aspect of Java memory allocation, so you can't guarantee that your program will succeed in low-memory conditions. But that doesn't mean you must go down without a fight.
Before fighting, though, you could look for ways to avoid the need. Perhaps you can avoid Java serialization, and instead define your own data format which does not require significant memory allocation to create. Serialization allocates a lot of memory because it keeps a record of objects it has seen before, so that if they occur again it can reference them by number instead of outputting them again (which could lead to an infinite loop). But that's because it needs to be general-purpose: depending on your data structure, you might be able to define some text/binary/XML/whatever representation which can just be written to a stream with very little need to store extra state. Or you might be able to arrange that any extra state you need is stored in the objects all along, not created at serialization time.
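For example, a minimal sketch of such a hand-rolled binary format, assuming each item exposes just an id and a name (Item, getId() and getName() are stand-ins for your own type, not anything from the question):

import java.io.*;
import java.util.List;

static void dump(List<Item> items, File file) throws IOException {
    try (DataOutputStream out = new DataOutputStream(
            new BufferedOutputStream(new FileOutputStream(file)))) {
        out.writeInt(items.size());
        for (Item item : items) {
            out.writeLong(item.getId());     // hypothetical accessor
            out.writeUTF(item.getName());    // fine for names under 64K of UTF-8
        }
    }
}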
If your application does one operation which uses a lot of memory, but mostly uses much less, and especially if that operation is user-initiated, and if you can't find a way to use less memory or make more memory available, then it might be worth catching OutOfMemory. You could recover by reporting to the user that the problem is too big, and inviting them to trim it down and try again. If they've just spent an hour setting up their problem, you do not want to just bail out of the program and lose everything - you want to give them a chance to do something about it. As long as the Error is caught way up the stack, the excess memory will be unreferenced by the time the Error is caught, giving the VM at least a chance to recover. Make sure you catch the Error below your regular event-handling code (catching OutOfMemory in regular event handling can result in busy loops, because you try to display a dialog to the user, you're still out of memory, and you catch another Error). Catch it only around the operation which you've identified as the memory hog, so that OutOfMemoryErrors you can't handle, that come from code other than the memory hog, are not caught.
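As a rough sketch of that shape (runBigOperation and showError are hypothetical placeholders for the identified memory hog and your error reporting, not code from the question):

try {
    runBigOperation();   // only the operation identified as the memory hog
} catch (OutOfMemoryError oome) {
    // by now the operation's temporary data is unreferenced, so the VM has at
    // least a chance to reclaim it before we build the error message
    showError("The problem is too large for the available memory. "
            + "Please reduce its size and try again.");
}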
Even in a non-interactive app, it might make sense to abandon the failed operation, but for the program itself to carry on running, processing further data. This is why web servers manage multiple processes such that if one page request fails for lack of memory, the server itself doesn't fall over. As I said at the top, single-process Java apps can't make any such guarantees, but they can at least be made a bit more robust than the default.
That said, your particular example (serialization) may not be a good candidate for this approach. In particular, the first thing the user might want to do on being told there's a problem is save their work: but if it's serialization which is failing, it may be impossible to save. That's not what you want, so you might have to do some experiments and/or calculations, and manually restrict how many million items your program permits (based on how much memory it is running with), before the point where it tries to serialize.
This is more robust than trying to catch the Error and continue, but unfortunately it's difficult to work out the exact bound, so you would probably have to err on the side of caution.
If the error is occurring during deserialization then you're on much firmer ground: failing to load a file should not be a fatal error in an application if you can possibly avoid it. Catching the Error is more likely to be appropriate.
Whatever you do to handle lack of resources (including letting the Error take down the app), if you care about the consequences then it's really important to test it thoroughly. The difficulty is that you never know exactly what point in your code the problem will occur, so there is usually a very large number of program states which need to be tested.
Ideally, restructure your code to use less memory. For example, perhaps you could stream the output instead of holding the whole thing in memory.
Alternatively, just give the JVM more memory with the -Xmx option.
You should not handle it in code. OutOfMemoryError should not be caught and handled. Instead, start your JVM with a bigger heap space:
java -Xmx512M
should do the trick.
See here for more details
Everyone else has already covered how to give Java more memory, but because "handle" could conceivably mean catch, I'm going to quote what Sun has to say about Errors:
An Error is a subclass of Throwable that indicates serious problems that a reasonable application should not try to catch. Most such errors are abnormal conditions.
(emphasis mine)
You get an OutOfMemoryError because your program requires more memory than the JVM has available. There is nothing you can specifically do at runtime to help this.
As noted by krosenvold, your application may be making sensible demands for memory but it just so happens that the JVM is not being started with enough (e.g. your app will have a 280MB peak memory footprint but the JVM only starts with 256MB). In this case, increasing the size allocated will solve this.
If you feel that you are supplying adequate memory at start up, then it is possible that your application is either using too much memory transiently, or has a memory leak. In the situation you have posted, it sounds like you are holding references to all of the million items in memory at once, even though potentially you are dealing with them sequentially.
Check what your references are like for items that are "done" - you should drop those references as soon as possible so that the items can be garbage collected. If you're adding a million items to a collection and then iterating over that collection, for example, you'll need enough memory to store all of those object instances. See if you can instead take one object at a time, serialise it and then discard the reference.
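For example, a hedged sketch of that one-at-a-time approach (the file names are assumed, and parseTopic is a hypothetical method standing in for however you turn one input line into one item):

import java.io.*;

try (BufferedReader in = new BufferedReader(new FileReader("topics.txt"));
     ObjectOutputStream out = new ObjectOutputStream(
             new BufferedOutputStream(new FileOutputStream("topics.ser")))) {
    String line;
    while ((line = in.readLine()) != null) {
        out.writeObject(parseTopic(line));
        out.reset();   // clear the stream's identity table so it cannot grow without bound
    }
}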
If you're having trouble working this out, posting a pseudo-code snippet would help.
In addition to some of the tips that have already been given to you, review your code for memory leaks and start the JVM with more memory (-Xmx512M).
It looks like you get the OutOfMemoryError because your TopicParser is reading a line that is probably pretty big (and that is what you should avoid). Instead of readLine, you can use a FileReader (or, if the encoding is an issue, an InputStreamReader wrapping a FileInputStream) and its read(char[]) method with a reasonably sized char[] array as a buffer.
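A minimal sketch of that, assuming a file name and a hypothetical handleChunk method that processes the characters incrementally:

import java.io.*;
import java.nio.charset.StandardCharsets;

try (Reader reader = new InputStreamReader(
        new FileInputStream("topics.txt"), StandardCharsets.UTF_8)) {
    char[] buffer = new char[8192];
    int read;
    while ((read = reader.read(buffer)) != -1) {
        handleChunk(buffer, read);   // hypothetical incremental processing
    }
}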
Finally, to investigate a little why you get the OutOfMemoryError, you can start the JVM with the
-XX:+HeapDumpOnOutOfMemoryError
flag to get a heap dump written to disk.
Good luck!
Interesting - you are getting an out of memory on a readline. At a guess, you are reading in a big file without linebreaks.
Instead of using readline to get the stuff out of the file as one single big long string, write stuff that understands the input a bit better, and handles it in chunks.
If you simply must have the whole file in a single big long string ... well, get better at coding. In general, trying to handle multimegabyte data by stuffing it all into a single array of bytes (or whatever) is a good way to lose.
Go have a look at CharSequence.
Use the transient keyword to mark fields in the serialized classes which can be generated from existing data.
Implement writeObject and readObject to help with reconstructing transient data.
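A hedged sketch of that idea, assuming a hypothetical Topic class whose word-count cache can be rebuilt from the field that is actually written:

import java.io.*;
import java.util.Collections;
import java.util.Map;

class Topic implements Serializable {
    private String name;
    private transient Map<String, Integer> wordCounts;   // derivable, so not serialized

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();                 // writes only the non-transient fields
    }

    private void readObject(ObjectInputStream in) throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        wordCounts = computeWordCounts(name);     // reconstruct the transient data
    }

    private Map<String, Integer> computeWordCounts(String text) {
        return Collections.emptyMap();            // hypothetical rebuild step
    }
}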
After you follow the suggestion of increasing heap space (via -Xmx), be sure to use either JConsole or JVisualVM to profile your application's memory usage. Make sure that memory usage does not grow continuously. If it does, you'll still get the OutOfMemoryError, it'll just take longer.
You can increase the size of the memory java uses with the -Xmx-option, for instance:
java -Xmx512M -jar myapp.jar
Better still is to reduce the memory footprint of your app. You serialize around a million items? Do you need to keep all of them in memory? Or can you release some of them after using them? Try to reduce the number of objects you hold on to.
Start java with a larger value for option -Xmx, for instance -Xmx512m
There's no real way of handling it nicely. Once it happens you are in unknown territory. You can tell by the name - OutOfMemoryError. And it is described as:
Thrown when the Java Virtual Machine cannot allocate an object because it is out of memory, and no more memory could be made available by the garbage collector.
Usually OutOfMemoryError indicates that there is something seriously wrong with the system/approach (and it's hard to point a particular operation that triggered it).
Quite often it has to do with plain old running out of heap space. Using -verbose:gc and the previously mentioned -XX:+HeapDumpOnOutOfMemoryError should help.
You can find a nice and concise summary of the problem at javaperformancetuning
Before taking any dangerous, time-consuming or strategic actions, you should establish exactly what in your program is using up so much of the memory. You may think you know the answer, but until you have evidence in front of you, you don't. There's the possibility that memory is being used by something you weren't expecting.
Use a profiler. It doesn't matter which one, there are plenty of them. First find out how much memory is being used up by each object. Second, step through iterations of your serializer, compare memory snapshots and see what objects or data are created.
The answer will most likely be to stream the output rather than building it in memory. But get evidence first.
I have discovered an alternative. With all due respect to the other views that we should not try to catch OutOfMemoryError, this is what I've learned recently:
try {
    riskyOperation();   // placeholder for the work that may exhaust memory
} catch (Throwable ex) {
    if (!(ex instanceof ThreadDeath)) {
        ex.printStackTrace(System.err);
    }
}
for your reference: OutOfMemoryError
any feedback is welcome.
Related
I would like to provide my system with a way of detecting whether out of memory exception has occurred or not. The aim for this exercise is to expose this flag through JMX and act correspondingly (e.g. by configuring a relevant alert on the monitoring system), as otherwise these errors sit unnoticed for days.
Naive approach for this would be to set an uncaught exception handler for every thread and check whether the raised exception is instance of OutOfMemoryError and set a relevant flag. However, this approach isn't realistic for the following reasons:
The exception can occur anywhere, including 3rd party libraries. There is nothing I can do to prevent them catching Throwable and keeping it for themselves.
Libraries can spawn their own threads and I have no way of enforcing uncaught exception handlers for these threads.
One possible approach I see is bytecode manipulation (e.g. attaching some sort of aspect on top of OutOfMemoryError); however, I am not sure if that's the right approach or whether it is doable in general.
We have -XX:+HeapDumpOnOutOfMemoryError enabled, but I don't see this as a solution for this problem as it was designed for something else - and it provides no Java callback when this happens.
Has anyone done this? How would you solve it or suggest solving it? Any ideas are welcome.
You could use an out of memory warning system; this OutOfMemoryError Warning System can be an inspiration. You could configure a listener which is invoked after a certain memory threshold ( say 80%) is breached - you can use this invocation to start taking corrective measures.
We use something similar, where we suspend the component's service when the memory usage of the component reaches 80% and start the clean-up action; the component comes back only when the used memory drops below another configurable threshold.
There is an article based on the post that Scorpion has already given a link to.
The technique is again based on using MemoryPoolMXBean and subscribing to the "memory threshold exceeded" event, but it's slightly different from what was described in original post.
Author states that when you subscribe for the plain "memory threshold exceeded" event, there is a possibility of "false alarm". Imagine a situation when the memory consumption is above the threshold, but there will be a garbage collection performed soon and a lot of the memory is freed after that. In fact that situation is quite common in real world applications.
Fortunately, there is another threshold, "collection usage threshold", and a corresponding event, which is fired based on memory consumption right after garbage collection. When you receive that event, you can be much more confident you're running out of memory.
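A minimal sketch of subscribing to that event (the 80% figure and the alerting hook are my own assumptions, not taken from the article):

import java.lang.management.*;
import javax.management.*;

for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
    if (pool.getType() == MemoryType.HEAP && pool.isCollectionUsageThresholdSupported()) {
        long max = pool.getUsage().getMax();
        if (max > 0) {
            pool.setCollectionUsageThreshold((long) (max * 0.8));   // 80% still used after GC
        }
    }
}
NotificationEmitter emitter = (NotificationEmitter) ManagementFactory.getMemoryMXBean();
emitter.addNotificationListener((notification, handback) -> {
    if (MemoryNotificationInfo.MEMORY_COLLECTION_THRESHOLD_EXCEEDED
            .equals(notification.getType())) {
        raiseLowMemoryAlert();   // hypothetical hook that sets your JMX flag
    }
}, null, null);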
We have -XX:+HeapDumpOnOutOfMemoryError enabled, but I don't see this as a solution for this problem as it was designed for something else - and it provides no Java callback when this happens.
This flag should be all that you need. Set the output directory of the resulting heap dump file in some known location that you check regularly. Having a callback would be of no use to you. If you are out of memory, you can't guarantee that the callback code has enough memory to execute! All you can do is collect the data and use an external program to analyze why you ran out of memory. Any attempt at recovering in process can create bigger problems.
Bytecode instrumentation is possible - but hard. HPjmeter's monitoring tool has the ability to predict future OOM's (with caveats) -- but only on HP-UX/Itanium based systems. You could dedicate a daemon thread to calculating used memory in process and trigger an alert when this is exceeded, but really you're not solving the problem.
You can catch any and all uncaught exceptions with the static Thread.setDefaultUncaughtExceptionHandler. Of course, it doesn't help if someone is catching all Throwables. (I don't think anything will, though with an OOME I'd suspect you'd get a cascading effect until something outside the offending try block blew up.) Hopefully the thread would have released enough memory for the exception handler to work; OOM errors do tend to multiply as you try to deal with them.
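For what it's worth, a minimal sketch of that handler (oomDetected is a hypothetical flag you would expose through your JMX bean):

import java.util.concurrent.atomic.AtomicBoolean;

static final AtomicBoolean oomDetected = new AtomicBoolean(false);

Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
    if (throwable instanceof OutOfMemoryError) {
        oomDetected.set(true);   // read by the monitoring system via JMX
    }
});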
I am developing an application that allows users to set the maximum data set size they want me to run their algorithm against.
It has become apparent that array sizes around 20,000,000 in size causes an 'out of memory' error. Because I am invoking this via reflection, there is not really a great deal I can do about this.
I was just wondering, is there any way I can check / calculate what the maximum array size could be based on the users heap space settings and therefore validate user entry before running the application?
If not, are there any better solutions?
Use Case:
The user provides a data size they want to run their algorithm against, we generate a scale of numbers to test it against up to the limit they provided.
We record the time it takes to run and measure the values (in order to work out the o-notation).
We need to somehow limit the user's input so as to not exceed the limit or get this error. Ideally we want to measure n^2 algorithms on the largest array sizes we can (runs could last for days), so we really don't want it running for 2 days and then failing, as that would have been a waste of time.
You can use the result of Runtime.freeMemory() to estimate the amount of available memory. However, it might be that actually a lot of memory is occupied by unreachable objects, which will be reclaimed by GC soon. So you might actually be able to use more memory than this. You can try invoking the GC before, but this is not guaranteed to do anything.
The second difficulty is to estimate the amount of memory needed for a number given by the user. While it is easy to calculate the size of an ArrayList with so many entries, this might not be all. For example, which objects are stored in this list? I would expect that there is at least one object per entry, so you need to add this memory too. Calculating the size of an arbitrary Java object is much more difficult (and in practice only possible if you know the data structures and algorithms behind the objects). And then there might be a lot of temporary objects created during the run of the algorithm (for example boxed primitives, iterators, StringBuilders etc.).
Third, even if the available memory is theoretically sufficient for running a given task, it might be practically insufficient. Java programs can get very slow if the heap is repeatedly filled with objects, then some are freed, some new ones are created and so on, due to a large amount of Garbage Collection.
So in practice, what you want to achieve is very difficult and probably next to impossible. I suggest just try running the algorithm and catch the OutOfMemoryError.
Usually, catching errors is something you should not do, but this seems like an occasion where it's OK (I do this in some similar cases). You should make sure that as soon as the OutOfMemoryError is thrown, some memory becomes reclaimable for GC. This is usually not a problem, as the algorithm aborts, the call stack is unwound and some (hopefully a lot of) objects are no longer reachable. In your case, you should probably ensure that the large list is among the objects which immediately become unreachable in the case of an OOM. Then you have a good chance of being able to continue your application after the error.
However, note that this is not a guarantee. For example, if you have multiple threads working and consuming memory in parallel, the other threads might as well receive an OutOfMemoryError and not be able to cope with this. Also the algorithm needs to support the fact that it might get interrupted at any arbitrary point. So it should make sure that the necessary cleanup actions are executed nevertheless (and of course you are in trouble if those need a lot of memory!).
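Under those assumptions, the single-threaded case could look roughly like this (generateDataSet, runAlgorithm, reportTooLarge and requestedSize are hypothetical placeholders):

int[] data = null;
try {
    data = generateDataSet(requestedSize);   // the single big allocation
    runAlgorithm(data);
} catch (OutOfMemoryError oome) {
    data = null;                     // drop the only strong reference to the bulk of the memory
    reportTooLarge(requestedSize);   // recover and ask the user for a smaller size
}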
As we all know, there are multiple reasons for OutOfMemoryError (see first answer). Why is there only one exception covering all these cases instead of multiple fine-grained ones inheriting from OutOfMemoryError?
I'd expect because you really can't do anything else when that happens: it almost doesn't matter WHY you ran out, since you're screwed regardless. Perhaps the additional info would be nice, but...
I know tomcat tries to do this "Out Of Memory Parachute" thing, where they hold onto a chunk of memory and try and release it, but I'm not sure how well it works.
The garbage collection process is deliberately very vaguely described to allow the greatest possible freedom for the JVM-implementors.
Hence the classes you mention are not provided in the API, but only in the implementation.
If you relied on them, your program would crash if running on a JVM without these vendor-specific sub-classes, so you don't do that.
You only need to subclass an exception if applications need to be able to catch and deal with the different cases differently. But you shouldn't be catching and attempting to recover from these cases at all ... so the need should not arise.
... but yeah I would still like to have a more descriptive reason for dying on me.
The exception message tells you which of the OOME sub-cases have occurred. If you are complaining that the messages are too brief, it is not the role of Java exception messages to give a complete explanation of the problem they are reporting. That's what the javadocs and other documentation is for.
@Thorbjørn presents an equally compelling argument. Basically, the different subcases are all implementation specific. Making them part of the standard API risks constraining JVM implementations to do things in suboptimal ways to satisfy the API requirements. And this approach risks creating unnecessary application portability barriers when new subclasses are created for new implementation-specific subcases.
(For instance the hypothetical UnableToCreateNativeThreadError 1) assumes that the thread creation failed because of memory shortage, and 2) that the memory shortage is qualitatively different from a normal out of memory. 2) is true for current Oracle JVMs, but not for all JVMs. 1) is possibly not even true for current Oracle JVMs. Thread creation could fail because of an OS-imposed limit on the number of native threads.)
If you are interested in why it is a bad idea to try to recover from OOME's, see these Questions:
Catching java.lang.OutOfMemoryError?
Can the JVM recover from an OutOfMemoryError without a restart
Is it possible to catch out of memory exception in java? (my answer).
IMO there is no definite answer to this question and it all boils down to the design decisions made at the time. This question is very similar to something like "why isn't the Date class immutable" or "why does Properties extend Hashtable". As pointed out by another poster, subclasses really wouldn't matter since you are screwed anyway. Plus the descriptive error messages are good enough to start troubleshooting with.
Mostly because computing something smart would require allocating memory at some point, so OutOfMemoryError has to be thrown without doing any more computation.
This is not a big deal anyway, because your program is already screwed up. At most what you can do is return an error to the system with System.exit(ERROR_LEVEL); you can't even log, because that would require allocating memory or using memory that is possibly corrupted.
This is because all four are fatal errors that are impossible to recover from (except perhaps running out of heap space, but even then you would still be near the edge of the failure point).
We have a Swing-based application that does complex processing on data. One of the prerequisites for our software is that any given column cannot have too many unique values. If a column is numeric, the user would need to discretize the data before they could use it in our tool.
Unfortunately, the algorithms we are using are combinatorially expensive in memory depending on the number of unique values per column. Right now, with the wrong dataset, the app would run out of memory very quickly. Before doing one of these operations that would run out of memory, we should be able to calculate roughly how much memory the operation will need. It would be nice if we could check how much memory the app is currently using, estimate whether the operation is going to run out of memory, and show an error message accordingly rather than running out of memory. Using java.lang.Runtime, we can find the free memory, total memory, and max memory, but is this really helpful? Even if it appears we won't have enough heap space, it could be that if we wait 30 milliseconds the garbage collector will run, and suddenly we have more than enough heap space to run our operation. Is there any way to really predict if we are going to run out of memory?
I have done something similar for a database application where the number of rows that would be loaded could not be estimated. So in the loop that processes the result set I call a "MemoryWatcher" method that checks how much memory is free.
If the available memory goes under a certain threshold the watcher forces a garbage collection and re-checks. If there still isn't enough memory the watcher method signals this to the caller with an exception. The caller can gracefully recover from that exception - as opposed to the OutOfMemoryError, which sometimes leaves Swing totally unstable.
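A rough sketch of such a watcher; the threshold parameter and the exception type are assumptions of mine rather than the actual code we use:

class InsufficientMemoryException extends Exception { }

static void checkMemory(long minFreeBytes) throws InsufficientMemoryException {
    Runtime rt = Runtime.getRuntime();
    long free = rt.maxMemory() - rt.totalMemory() + rt.freeMemory();
    if (free < minFreeBytes) {
        System.gc();   // request a collection; not guaranteed to do anything
        free = rt.maxMemory() - rt.totalMemory() + rt.freeMemory();
        if (free < minFreeBytes) {
            throw new InsufficientMemoryException();
        }
    }
}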
I don't have expertise on this, but I feel you can take an extra step of bytecode analysis using ASM to preempt bugs like null pointer exception, out of memory exception etc.
Unless you run your application with the maximum amount of memory you need from the outset (using -Xms) I don't think you can achieve anything useful, since other applications will be able to consume memory before your app needs it.
Have you considered using Soft/WeakReferences, and letting garbage collection reap objects that you could possible recalculate/regenerate on the fly ?
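A small sketch of that idea (Result and compute() are hypothetical stand-ins for data you can regenerate on demand):

import java.lang.ref.SoftReference;

SoftReference<Result> cache = new SoftReference<>(compute());
// ... later ...
Result r = cache.get();
if (r == null) {                     // the collector reclaimed it under memory pressure
    r = compute();                   // regenerate on the fly
    cache = new SoftReference<>(r);
}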
I recently came across this in some code - basically someone trying to create a large object, coping when there's not enough heap to create it:
try {
    // try to perform an operation using a huge in-memory array
    byte[] massiveArray = new byte[BIG_NUMBER];
} catch (OutOfMemoryError oome) {
    // perform the operation in some slower but less
    // memory intensive way...
}
This doesn't seem right, since Sun themselves recommend that you shouldn't try to catch Error or its subclasses. We discussed it, and another idea that came up was explicitly checking for free heap:
if (Runtime.getRuntime().freeMemory() > SOME_MEMORY) {
    // quick memory-intensive approach
} else {
    // slower, less demanding approach
}
Again, this seems unsatisfactory - particularly in that picking a value for SOME_MEMORY is difficult to easily relate to the job in question: for some arbitrary large object, how can I estimate how much memory its instantiation might need?
Is there a better way of doing this? Is it even possible in Java, or is any idea of managing memory below the abstraction level of the language itself?
Edit 1: in the first example, it might actually be feasible to estimate the amount of memory a byte[] of a given length might occupy, but is there a more generic way that extends to arbitrary large objects?
Edit 2: as #erickson points out, there are ways to estimate the size of an object once it's created, but (ignoring a statistical approach based on previous object sizes) is there a way of doing so for yet-uncreated objects?
There also seems to be some debate as to whether it's reasonable to catch OutOfMemoryError - anyone know anything conclusive?
freeMemory isn't quite right. You'd also have to add maxMemory()-totalMemory(). e.g. assuming you start up the VM with max-memory=100M, the JVM may at the time of your method call only be using (from the OS) 50M. Of that, let's say 30M is actually in use by the JVM. That means you'll show 20M free (roughly, because we're only talking about the heap here), but if you try to make your larger object, it'll attempt to grab the other 50M its contract allows it to take from the OS before giving up and erroring. So you'd actually (theoretically) have 70M available.
To make this more complicated, the 30M it reports as in use in the above example includes stuff that may be eligible for garbage collection. So you may actually have more memory available, if it hits the ceiling it'll try to run a GC to free more memory.
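In code, the arithmetic above amounts to roughly this upper bound:

Runtime rt = Runtime.getRuntime();
// free space in the current heap, plus the room the heap is still allowed to grow by
long potentiallyAvailable = rt.freeMemory() + (rt.maxMemory() - rt.totalMemory());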
You can try to get around this a bit by manually triggering a System.gc(), except that that's not such a terribly good thing to do, because:
-it's not guaranteed to run immediately
-it will stop everything in its tracks while it runs
Your best bet (assuming you can't easily rewrite your algorithm to deal with smaller memory chunks, or write to a memory-mapped file, or something less memory intensive) might be to do a safe rough estimate of the memory needed and insure that it's available before you run your function.
There are some kludges that you can use to estimate the size of an existing object; you could adapt some of these to predict the size of a yet-to-be created object.
However, in this case, I think it might be best to catch the Error. First of all, asking for the free memory doesn't account for what's available after garbage collection, which will be performed before raising an OOME. And, requesting a garbage collection with System.gc() isn't reliable. It's often explicitly disabled because it can wreck performance, and if it's not disabled… well, it can wreck performance when used unnecessarily.
It is impossible to recover from most errors. However, recoverability is up to the caller, not the callee. In this case, if you have a strategy to recover from an OutOfMemoryError, it is valid to catch it and fall back.
I guess that, in practice, it really comes down to the difference between the "slow" and "fast" way. If the "slow" method is fast enough, I'd stick with that, as it's safer and simpler. And, it seems to me, allowing it to be used as a fall back means that it is "fast enough." Don't let small optimizations derail the reliability of your application.
The "try to allocate and handle the error" approach is very dangerous.
What if you barely get your memory? A later OOM exception might occur because you brought things too close to the limits. Almost any library call will allocate memory at least briefly.
During your allocation a different thread may receive an OOM exception while trying to allocate a relatively small object. Even if your allocation is destined to fail.
The only viable approach is your second one, with the corrections noted in other answers. But you have to be sure and leave extra "slop space" in the heap when you decide to use your memory intensive approach.
I don't believe that there's a reasonable, generic approach to this that could safely be assumed to be 100% reliable. Even the Runtime.freeMemory approach is vulnerable to the fact that you may actually have enough memory after a garbage collection, but you wouldn't know that unless you force a gc. But then there's no foolproof way to force a GC either. :)
Having said that, I suspect if you really did know approximately how much you needed, and did run a System.gc() beforehand, and you're running a simple single-threaded app, you'd have a reasonably decent shot at getting it right with the freeMemory() call.
If any of those constraints fail, though, and you get the OOM error, you're back at square one, and therefore are probably no better off than just catching the Error subclass. While there are some risks associated with this (Sun's VM does not make a lot of guarantees about what happens after an OOM... there's some risk of internal state corruption), there are many apps for which just catching it and moving on with life will leave you with no serious harm.
A more interesting question in my mind, however, is why are there cases where you do have enough memory to do this and others where you don't? Perhaps some more analysis of the performance tradeoffs involved is the real answer?
Definitely, catching the Error is the worst approach. An Error happens when there is NOTHING you can do about it. Not even create a log: puff, like "... Houston, we lost the VM".
I didn't quite get the second reason. It was bad because it is hard to relate SOME_MEMORY to the operations? Could you rephrase it for me?
The only alternative I see is to use the hard disk as memory (RAM/ROM as in the old days). I guess that is what you're pointing at with your "else slower, less demanding approach".
Every platform has its limits; Java supports as much RAM as your hardware is willing to give (well, actually as much as you allow by configuring the VM). In the Sun JVM implementation that can be done with the -Xmx option, for instance:
java -Xmx8g some.name.YourMemConsumingApp
Of course you may end up trying to perform an operation that takes 10 GB of RAM.
If that's your case then you should definitely swap to disk.
Additionally, using the Strategy pattern could make for nicer code, although here it might be overkill:
if (isEnoughMemory(SOME_MEMORY)) {
    strategy = new InMemoryStrategy();
} else {
    strategy = new DiskStrategy();
}

strategy.performTheAction();
But it may help if the "else" involves a lot of code and looks bad. Furthermore, if somehow you can use a third approach (like using a cloud for processing) you can add a third Strategy:
...
strategy = new ImaginaryCloudComputingStrategy();
...
:P
EDIT
After understanding the problem with the second approach: if there are times when you don't know how much RAM is going to be consumed but you do know how much you have left, you could use a mixed approach (RAM while you have enough, ROM [disk] when you don't).
Suppose this theoretical problem:
Suppose you receive a file from a stream and don't know how big it is.
Then you perform some operation on that stream ( encrypt it for instance ).
If you use RAM only it would be very fast, but if the file is large enough to consume all your app's memory, then you have to perform some of the operation in memory, then swap to a file and save temporary data there.
The VM will GC when running low on memory, you get memory back, and then you process the next chunk. This repeats until the big stream has been processed.
while (!isDone()) {
    if (isMemoryLow()) {
        // Runtime.getRuntime().freeMemory() < SOME_MEMORY + some other validations
        swapToDisk();   // and make sure resources are GC'able
    }
    byte[] array = new byte[PREDEFINED_BUFFER_SIZE];
    process(array);
}
cleanUp();