In my unit test I am deliberately trying to raise an OutOfMemoryError. I use a simple statement like the following:
byte[] block = new byte[128 * 1024 * 1024 * 1024];
The code works on Windows 7 64-bit with JDK 6u21 64-bit. But when I run it on CentOS 5 64-bit with JDK 6u21, no OutOfMemoryError is thrown, even when I make the array size bigger.
Any idea?
Linux doesn't always allocate all the memory you ask for immediately, since many real applications ask for more than they need. This is called overcommit (it also means the kernel sometimes guesses wrong, and the dreaded OOM killer strikes).
For your unit test, I would just throw OutOfMemoryError manually.
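If the goal is just to exercise the catch/recovery path, a minimal sketch of simulating the error (hypothetical method names, not the OP's code) could be:

// Sketch: simulate the condition instead of really exhausting the heap.
public class OomPathTest {

    static void codeUnderTest(boolean simulateOom) {
        if (simulateOom) {
            throw new OutOfMemoryError("simulated for the test");
        }
    }

    public static void main(String[] args) {
        try {
            codeUnderTest(true);
        } catch (OutOfMemoryError expected) {
            System.out.println("recovery path exercised: " + expected.getMessage());
        }
    }
}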
If you just want to consume all the memory do the following:
try {
    List<Object> tempList = new ArrayList<Object>();
    while (true) {
        tempList.add(new byte[128 * 1024 * 1024 * 1024]);
    }
} catch (OutOfMemoryError OME) {
    // OK, the garbage collector will have run by now...
}
128 * 1024 * 1024 * 1024 evaluates to 0, because the expression is computed in 32-bit int arithmetic and overflows. Also note that Java array lengths are ints, so a single array cannot have more than Integer.MAX_VALUE (about 2^31) elements.
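A quick way to check this for yourself:

public class Overflow {
    public static void main(String[] args) {
        int size = 128 * 1024 * 1024 * 1024;           // 2^37 wraps around in 32-bit int arithmetic
        System.out.println(size);                      // prints 0, so the "huge" array is actually empty
        System.out.println(128L * 1024 * 1024 * 1024); // 137438953472, the value that was intended
    }
}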
ulimit -v 102400
ulimit -d 102400
unitTest.sh
The above should limit your unit test to 100 MB of virtual memory and a 100 MB data segment size (ulimit takes its argument in kilobytes). When you hit either of those limits, your process should get ENOMEM. Be careful: these restrictions persist until the process/shell where you set them exits; you may want to run them in a subshell.
See man 2 setrlimit for details on how this works under the hood, and help ulimit for the ulimit builtin.
You could deliberately set the maximum heap size of your JVM to a small amount by using the -Xmx flag.
Launch the following program:
public final class Test {
    public static void main(final String[] args) {
        final byte[] block = new byte[Integer.MAX_VALUE];
    }
}
with the following JVM argument: -Xmx8m
That will do the trick:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at Test.main(Test.java:4)
Minor point, but allocating new long[Integer.MAX_VALUE] will use up memory 8x faster (~16 GB each).
The reason you get no OutOfMemoryError is that the memory is being allocated in an uncommitted state, with no pages actually backing it.
If you write a non-zero byte into each 4 KB page of the array, that will then cause the memory to actually be allocated.
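Something along these lines (just a sketch; 4096 bytes is the typical Linux page size, and the size is chosen so the int expression does not overflow) should force the pages to be committed:

byte[] block = new byte[256 * 1024 * 1024];        // 256 MB; fits in an int without overflowing
for (int i = 0; i < block.length; i += 4096) {     // one write per (typical) 4 KB page
    block[i] = 1;                                  // the non-zero write forces the page to be committed
}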
The code below reliably throws java.lang.OutOfMemoryError:
Set<Integer> set = new HashSet<>();
new Random().ints(10_000_000, Integer.MIN_VALUE, Integer.MAX_VALUE)
        .forEach(v -> {
            if (set.contains(v)) {
                System.out.println(v);
            } else {
                set.add(v);
            }
        });
AFAIK it's because the lambda captures the Integers together with some context? Could anybody explain what exactly happens here?
In your code you have a Set which is reachable from the main method. The local stack is a GC root, so the set cannot be collected by the GC.
So you keep adding elements to a set that can never be collected, and the set needs extra memory for every element: roughly 50 to 60 bytes per entry with compressed oops (a HashMap node, a boxed Integer, and a share of the bucket array), so ~10 million entries come to several hundred megabytes.
On my PC it takes about 600 MB of heap to run this program without an OOM.
Here is a heap dump of this program running on my PC.
I've tried the same code with a vanilla for loop and got the same results.
So you just need to give your application more memory. For example, -Xmx1g will set your heap size to 1 gigabyte.
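For reference, the vanilla-loop version would look roughly like this (a reconstruction, not the exact code that was run; Random.nextInt() over the full int range approximates the original bounds):

import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class DuplicateFinder {
    public static void main(String[] args) {
        Set<Integer> set = new HashSet<>();
        Random random = new Random();
        for (int i = 0; i < 10_000_000; i++) {
            int v = random.nextInt();      // no streams, no lambdas; memory behaviour is the same
            if (set.contains(v)) {
                System.out.println(v);
            } else {
                set.add(v);
            }
        }
    }
}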
I wrote a piece of Java code to create 500K small files (averaging 40 KB each) on CentOS. The original code is like this:
package MyTest;

import java.io.*;

public class SimpleWriter {
    public static void main(String[] args) {
        String dir = args[0];
        int fileCount = Integer.parseInt(args[1]);
        String content = "##$% SDBSDGSDF ASGSDFFSAGDHFSDSAWE^#$^HNFSGQW%##&$%^J#%##^$#UHRGSDSDNDFE$T##$UERDFASGWQR!#%!#^$##YEGEQW%!#%!!GSDHWET!^";
        StringBuilder sb = new StringBuilder();
        int count = 40 * 1024 / content.length();
        int remainder = (40 * 1024) % content.length();
        for (int i = 0; i < count; i++) {
            sb.append(content);
        }
        if (remainder > 0) {
            sb.append(content.substring(0, remainder));
        }
        byte[] buf = sb.toString().getBytes();

        for (int j = 0; j < fileCount; j++) {
            String path = String.format("%s%sTestFile_%d.txt", dir, File.separator, j);
            try {
                BufferedOutputStream fs = new BufferedOutputStream(new FileOutputStream(path));
                fs.write(buf);
                fs.close();
            } catch (FileNotFoundException fe) {
                System.out.printf("Hit filenot found exception %s", fe.getMessage());
            } catch (IOException ie) {
                System.out.printf("Hit IO exception %s", ie.getMessage());
            }
        }
    }
}
You can run this by issuing the following command:
java -jar SimpleWriter.jar my_test_dir 500000
I thought this was simple code, but then I realized it was using up to 14 GB of memory. I know that because when I used free -m to check the memory, the free memory kept dropping until my 15 GB VM had only 70 MB free. I compiled this with Eclipse, against JDK 1.6 and then JDK 1.7; the result is the same. The funny thing is that if I comment out fs.write() and just open and close the stream, the memory stabilizes at a certain point. Once I put fs.write() back, the memory allocation goes wild. 500K files of 40 KB is about 20 GB. It seems Java's stream writer never deallocates its buffers during the operation.
I first thought the Java GC did not have time to clean up, but that makes no sense since I close the file stream for every file. I even ported the code to C# and ran it under Windows; the same code produced 500K 40 KB files with memory stable at a certain point, not taking 14 GB as under CentOS. At least C#'s behavior is what I expected, but I could not believe Java performs this way. I asked colleagues who are experienced in Java; they could not see anything wrong in the code, but could not explain why this happens. And they admit nobody had tried to create 500K files in a loop without stopping.
I also searched online, and everybody says the only thing you need to pay attention to is closing the stream, which I did.
Can anyone help me to figure out what's wrong?
Can anybody also try this and tell me what you see?
BTW, some people in this community tried the code on Windows and it seemed to work fine. I didn't try it on Windows; I only tried it on Linux, as I thought that's where people use Java. So it seems this issue happens on Linux.
I also did the following to limit the JVM heap, but it had no effect:
java -Xmx2048m -jar SimpleWriter.jar my_test_dir 500000
I tried to test your program on Windows XP, JDK 1.7.25, and immediately got OutOfMemoryErrors.
While debugging, with a count of only 3000 (args[1]), the count variable from this code:
int count = 40 * 1024 * 1024 / content.length();
int remainder = (40 * 1024 * 1024) % content.length();
for (int i = 0; i < count; i++) {
    sb.append(content);
}
comes out to 355449. So the String you are trying to create will be 355449 times the content, or, as you calculated, 40 MB long. I ran out of memory when i was 266587 and sb was 31457266 chars long, at which point each file I was writing was 30 MB.
The problem does not seem to be with memory or GC, but with the way you create the string.
Did you see files being created, or was memory being eaten up before any file was created?
I think your main problem is the line:
int count = 40 * 1024 * 1024 / content.length();
should be:
int count = 40 * 1024 / content.length();
to create 40 KB files, not 40 MB files.
[Edit2: The original answer is left in italics at the end of this post]
After your clarifications in the comments, I have run your code on a Windows machine (Java 1.6) and here are my findings (numbers are from VisualVM, OS memory as seen in Task Manager):
Example with 40K size, writing to 500K files (no parameters to JVM):
Used Heap: ~4M, Total Heap: 16M, OS memory: ~16M
Example with 40 MB size, writing 500 files (JVM parameters -Xms128m -Xmx512m; without them I get an OutOfMemoryError when creating the StringBuilder):
Used Heap: ~265M, Heap size: ~365M, OS memory: ~365M
Especially from the second example you can see that my original explanation still stands. Yes, one would expect most of the memory to be freed, since the byte[] buffers of the BufferedOutputStreams live in the young generation (short-lived objects), but this (a) does not happen immediately and (b) when the GC decides to kick in (it actually does in my case), it will try to clear memory, but it can clear as much memory as it sees fit, not necessarily all of it. The GC does not provide any guarantees you can count on.
So generally speaking, you should give the JVM as much memory as you feel comfortable with. If you need to keep memory usage low for particular functionality, you should try a strategy like the code example I give below in my original answer, i.e. just don't create all those byte[] objects.
Now, in your case with CentOS, it does seem that the JVM behaves strangely. Perhaps we could call it a buggy or bad implementation. To classify it as a leak/bug, though, you should try to use -Xmx to restrict the heap. Also please try what Peter Lawrey suggested and don't create the BufferedOutputStream at all (in the small-file case), since you just write all the bytes at once.
If it still exceeds the memory limit then you have encountered a leak and should probably file a bug. (You could still complain though and they may optimize it in the future).
[Edit1: The answer below assumed that the OP's code performed as many reading operations as the write operations, so the memory usage was justifiable. The OP clarified this is not the case, so his question is not answered
"...my 15G memory VM..."
If you give the JVM that much memory, why should it try to run GC? As far as the JVM is concerned, it is allowed to take as much memory from the system as it likes and run GC only when it thinks it is appropriate to do so.
Each new BufferedOutputStream allocates an 8 KB buffer by default. The JVM will try to reclaim that memory only when it needs to. This is the expected behaviour.
Do not confuse the memory that you see as free from the system's point of view with the JVM's point of view. As far as the system is concerned, the memory is allocated and will be released when the JVM shuts down. As far as the JVM is concerned, all the byte[] arrays allocated by the BufferedOutputStreams are no longer in use; it is "free" memory and will be reclaimed if needed.
If for some reason you don't want this behaviour, you could try the following: extend the BufferedOutputStream class (e.g. create a ReusableBufferedOutputStream class) and add a new method, e.g. reUseWithStream(OutputStream os). This method would clear the internal byte[], flush and close the previous stream, reset any variables used, and set the new stream. Your code would then become as below:
// initialize once
ReusableBufferedOutputStream fs = new ReusableBufferedOutputStream();
for (int i = 0; i < fileCount; i++) {
    String path = String.format("%s%sTestFile_%d.txt", dir, File.separator, i);
    // set the new stream to be buffered and written
    fs.reUseWithStream(new FileOutputStream(path));
    fs.write(this._buf, 0, this._buf.length); // this._buf was allocated once, 40K long, contains the text
}
fs.close(); // close the stream after we are done
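For reference, a rough sketch of what such a ReusableBufferedOutputStream could look like (this class does not exist in the JDK; it relies on the protected out, buf and count fields of FilterOutputStream/BufferedOutputStream):

import java.io.BufferedOutputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class ReusableBufferedOutputStream extends BufferedOutputStream {

    public ReusableBufferedOutputStream() {
        // Start with a throwaway sink so the no-arg constructor is safe to use.
        super(new ByteArrayOutputStream());
    }

    // Flushes and closes the previous underlying stream, then switches to the new one,
    // reusing the already-allocated internal byte[] buffer.
    public void reUseWithStream(OutputStream os) throws IOException {
        flush();        // push any buffered bytes to the old stream
        out.close();    // close the old underlying stream (e.g. the previous FileOutputStream)
        count = 0;      // reset the buffer position; the byte[] itself is kept
        out = os;
    }
}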
Using the above approach you will avoid creating many byte[] arrays. However, I don't see any problem with the expected behaviour, nor do you mention any problem other than "I see it taking too much memory". You have configured it to use that memory, after all.]
I'm writing a program in Java which has to make use of a large hash table; the bigger the hash table, the better (it's a chess program :P). Basically, as part of my hash table I have a long[] array, a short[] array, and two byte[] arrays, all of the same size. When I set my table size to ten million, however, it crashes and says "java heap out of memory". This makes no sense to me. Here's how I see it:
1 long + 1 short + 2 bytes = 12 bytes
x 10,000,000 = 120,000,000 bytes
/ 1024 = 117,187.5 KB
/ 1024 = 114.4 MB
Now, 114 MB of RAM doesn't seem like too much to me. My Mac has 4 GB of RAM in total, and an app called FreeMemory shows around 2 GB free while running this program. Also, I set the Java preferences to -Xmx1024m, so Java should be able to use up to a gig of memory. So why won't it let me allocate just 114 MB?
You predicted that it should use 114 MB, and if I run this (on a Windows box with 4 GB):
public static void main(String... args) {
    long used1 = memoryUsed();

    int Hash_TABLE_SIZE = 10000000;
    long[] pos = new long[Hash_TABLE_SIZE];
    short[] vals = new short[Hash_TABLE_SIZE];
    byte[] depths = new byte[Hash_TABLE_SIZE];
    byte[] flags = new byte[Hash_TABLE_SIZE];

    long used2 = memoryUsed() - used1;
    System.out.printf("%,d MB used%n", used2 / 1024 / 1024);
}

private static long memoryUsed() {
    return Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory();
}
prints
114 MB used
I suspect you are doing something else which is the cause of your problem.
I am using Oracle HotSpot Java 7 update 10
You have not taken into account that each object is accessed through a reference and has its own memory overhead, plus other "hidden things"... you must also take alignment into account: a byte is not always just a byte ;-)
Java Objects Memory Structure
How much memory is used by Java
To see how much memory is really in use, you can use a profiler:
visualvm
If you are using a standard HashMap (or similar from the JDK), each "long" is boxed/unboxed and really takes more than 8 bytes. You can use something like this as a base (it uses less memory):
NativeIntHashMap
From what I have read about BlueJ (serious technical information is almost impossible to find), the BlueJ VM is quite likely not to support primitive types at all; your arrays may actually be arrays of boxed primitives. BlueJ uses a subset of all Java features, with an emphasis on object orientation.
If that is the case, plus taking into consideration that performance and efficiency are quite low on BlueJ VM's list of priorities, you may actually be using quite a bit more memory than you think: a whole order of magnitude is quite imaginable.
I believe one way would be to clean up the heap memory after each execution; one link is here:
Java heap space out of memory
I have a file whose size is about 300 MB. I want to read the contents line by line and add them to an ArrayList. So I made an ArrayList object a1, then read the file using a BufferedReader; but when I add the lines from the file to the ArrayList it gives Exception in thread "main" java.lang.OutOfMemoryError: Java heap space.
Please tell me what the solution for this should be.
public static void main(String[] args) {
    try {
        FileReader file = new FileReader(
                "/home/dmdd/Desktop/AsiaData/RawData/AllupperairVcomponent.txt");
        ArrayList a1 = new ArrayList();
        BufferedReader br = new BufferedReader(file);
        String line = "";
        while ((line = br.readLine()) != null) {
            a1.add(line);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Naively, increase the size of the heap via the -Xmx command line argument (see this excellent answer for some guidance).
This will only work up to a point, though; instead, consider structuring your data so that the memory requirements are minimized. Do you need the whole thing in memory at once? Perhaps you only need to test whether an item is in that set; consider using a hash or a Bloom filter (etc.).
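For example, if all you really need is to check whether a given line occurs in the file, a minimal sketch that holds only one line in memory at a time (the target argument is hypothetical) could be:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ContainsLine {
    public static void main(String[] args) throws IOException {
        String path = args[0];
        String target = args[1];                   // the line whose membership we want to test
        boolean found = false;
        try (BufferedReader br = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = br.readLine()) != null) {
                if (line.equals(target)) {         // only the current line is held in memory
                    found = true;
                    break;
                }
            }
        }
        System.out.println(found);
    }
}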
Just increase the heap size of Java
java -Xmx250m
If you are running your project from an IDE, set -Xmx250m in the VM arguments.
(250m means 250 MB.)
If you have to have it in memory, you could try increasing the heap size by passing the -mx option to the java executable.
It may also be worth considering whether you really need all that data in memory at the same time. It could be that you can either process it sequentially, or keep most or all of it on disk.
Pass -Xmx1024m to increase your heap space to 1024 MB.
java -Xms512m -Xmx1024m HelloWorld
You can increase it to at most about 4 GB on a 32-bit system (usually less in practice); on a 64-bit system you can go much higher.
Use java.nio.file.Files.readAllLines; it returns a List<String>. And if you're still getting an OOME, increase the heap size, e.g. java -Xmx1024m.
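A minimal sketch (Java 7+; assumes the file is UTF-8, and it still needs enough heap to hold all the lines at once):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class ReadAllLinesExample {
    public static void main(String[] args) throws IOException {
        // Reads every line into memory in one call; the memory requirement is the same as the manual loop.
        List<String> lines = Files.readAllLines(
                Paths.get("/home/dmdd/Desktop/AsiaData/RawData/AllupperairVcomponent.txt"),
                StandardCharsets.UTF_8);
        System.out.println("Read " + lines.size() + " lines");
    }
}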
I partly agree with @Murali: this will fix the problem you are facing. But it is advisable to use caching when handling large files. What if the file size becomes 500 MB in a rare case? Make use of a caching API like Memcached; this will eliminate memory outages in the JVM.
If you can: process the file in batches of 10000 lines or so.
read 10k lines
process
repeat until done
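A sketch of that batching idea (the process() method is a placeholder for whatever per-batch work you need to do):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class BatchReader {
    private static final int BATCH_SIZE = 10_000;

    public static void main(String[] args) throws IOException {
        try (BufferedReader br = new BufferedReader(new FileReader(args[0]))) {
            List<String> batch = new ArrayList<>(BATCH_SIZE);
            String line;
            while ((line = br.readLine()) != null) {
                batch.add(line);
                if (batch.size() == BATCH_SIZE) {
                    process(batch);      // handle 10k lines, then let them become garbage
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) {
                process(batch);          // handle the final partial batch
            }
        }
    }

    private static void process(List<String> lines) {
        // placeholder: do the real per-batch work here
    }
}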
I run into the following error when I try to store a large file in a string:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:2882)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:515)
at java.lang.StringBuffer.append(StringBuffer.java:306)
at rdr2str.ReaderToString.main(ReaderToString.java:52)
As is evident, I am running out of heap space. Basically my program looks something like this:
FileReader fr = new FileReader(<filepath>);
sb = new StringBuffer();
char[] b = new char[BLKSIZ];
while ((n = fr.read(b)) > 0)
    sb.append(b, 0, n);
fileString = sb.toString();
Can someone suggest why I am running into this heap space error? Thanks.
You are running out of memory because the way you've written your program, it requires storing the entire, arbitrarily large file in memory. You have 2 options:
You can increase the memory by passing command line switches to the JVM:
java -Xms<initial heap size> -Xmx<maximum heap size>
You can rewrite your logic so that it deals with the file data as it streams in, thereby keeping your program's memory footprint low.
I recommend the second option. It's more work but it's the right way to go.
EDIT: To determine your system's defaults for initial and max heap size, you can use this code snippet (which I stole from a JavaRanch thread):
public class HeapSize {
    public static void main(String[] args) {
        long kb = 1024;
        long heapSize = Runtime.getRuntime().totalMemory();
        long maxHeapSize = Runtime.getRuntime().maxMemory();
        System.out.println("Heap Size (KB): " + heapSize / kb);
        System.out.println("Max Heap Size (KB): " + maxHeapSize / kb);
    }
}
You allocate a small StringBuffer that gets longer and longer. Preallocate it according to the file size, and you will also be a LOT faster.
Note that Java strings are Unicode (two bytes per char internally), while the file likely is not, so the string uses roughly twice the file's size in memory.
Depending on the VM (32-bit? 64-bit?) and the limits set (http://www.devx.com/tips/Tip/14688), you may simply not have enough memory available. How large is the file actually?
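A sketch of the preallocation idea (assumes the file is under ~2 GB so its length fits in an int, and that one byte in the file is roughly one char):

import java.io.File;
import java.io.FileReader;
import java.io.IOException;

public class PreallocatedRead {
    public static void main(String[] args) throws IOException {
        File file = new File(args[0]);
        // Size the builder up front so it never has to grow and copy its internal array.
        StringBuilder sb = new StringBuilder((int) file.length());
        char[] buf = new char[8192];
        try (FileReader fr = new FileReader(file)) {
            int n;
            while ((n = fr.read(buf)) > 0) {
                sb.append(buf, 0, n);
            }
        }
        String fileString = sb.toString();
        System.out.println("Read " + fileString.length() + " chars");
    }
}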
As in the OP, your program is aborting while the StringBuffer is being expanded. You should preallocate it to the size you need, or at least close to it. Whenever the StringBuffer must expand, it needs RAM for both the original capacity and the new capacity. As TomTom said, your file likely contains 8-bit characters, so it will be converted to 16-bit Unicode in memory and thus double in size.
The program has not even reached the next doubling yet: StringBuffer.toString() in Java 6 allocates a new String, and the internal char[] is copied again (in some earlier versions of Java this was not the case). At the moment of that copy you need double the heap space, so at that point at least 4 times your actual file size (30 MB * 2 for byte-to-Unicode, then 60 MB * 2 for the toString() call = 120 MB). Once the method finishes, the GC will clean up the temporary objects.
If you cannot increase the heap space for your program you will have some difficulty. You cannot take the "easy" route and just return a String. You can try to do this incrementally so that you do not need to worry about the file size (one of the best solutions).
Look at the web service code in your client. It may provide a way to use a class other than String, perhaps a java.io.Reader, java.lang.CharSequence, or a special interface like the SAX-related org.xml.sax.InputSource. Each of these can be used to build an implementation class that reads from your file in chunks as the caller needs it, instead of loading the whole file at once.
For instance, if your web service handling routines can take a CharSequence then (if they are written well) you can create a special handler to return just one character at a time from the file, while buffering the input. See this similar question: How to deal with big strings and limited memory.
Kris has the answer to your problem.
You could also look at Apache Commons IO's FileUtils.readFileToString, which may be a bit more efficient.
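Roughly like this (assumes a reasonably recent commons-io on the classpath for the Charset overload; note it still loads the entire file into memory, so the heap still has to be big enough):

import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.commons.io.FileUtils;

public class ReadWithCommonsIo {
    public static void main(String[] args) throws IOException {
        // One call replaces the manual read/append loop; the memory requirement is unchanged.
        String fileString = FileUtils.readFileToString(new File(args[0]), StandardCharsets.UTF_8);
        System.out.println(fileString.length());
    }
}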
Although this might not solve your problem, some small things you can do to make your code a bit better:
create your StringBuffer with an initial capacity equal to the size of the String you are reading
close your FileReader at the end: fr.close();
By default, Java starts with a very small maximum heap (64M on Windows at least). Is it possible you are trying to read a file that is too large?
If so you can increase the heap with the JVM parameter -Xmx256M (to set maximum heap to 256 MB)
I tried running a slightly modified version of your code:
public static void main(String[] args) throws Exception {
    FileReader fr = new FileReader("<filepath>");
    StringBuffer sb = new StringBuffer();
    char[] b = new char[1000];
    int n = 0;
    while ((n = fr.read(b)) > 0)
        sb.append(b, 0, n);
    String fileString = sb.toString();
    System.out.println(fileString);
}
on a small file (2 KB) and it worked as expected. You will need to set the JVM parameter.
Trying to read an arbitrarily large file into main memory in an application is bad design. Period. No amount of JVM setting adjustments etc. is going to fix the core issue here. I recommend that you take a break and do some googling and reading about how to process streams in Java - here's a good tutorial and here's another good tutorial to get you started.