I have developed a project with more than 3000 source files. When I run it, after a few minutes it fails with java.lang.OutOfMemoryError: Java heap space. I already increased the project's memory by right-clicking it and setting the VM options to 1024MB; my PC has 2GB of RAM.
As you likely know, you can increase the JVM's memory using java -Xms<initial heap size> -Xmx<maximum heap size>.
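For instance (the jar name and sizes here are just placeholders):

```shell
# start with a 256 MB heap and allow it to grow to 1024 MB
java -Xms256m -Xmx1024m -jar yourapp.jar
```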
However, this just delays the issue, as there is likely a place in your application where memory is leaking, causing the heap to overflow. I suggest you use a tool like the NetBeans Profiler, which will help you find out where the memory leak is occurring. The NetBeans Profiler will let you see where objects are created, where garbage collection occurs, and so on.
It may be that the application just allocates a lot of memory or there is an actual leak. My approach would be to use a memory analyzer such as Eclipse MAT to see what objects are taking up the most memory.
If they all seem valid, then you probably need to increase the heap size. That said, I have worked with 5000+ class web app projects on a 512MB heap, so a memory leak seems quite plausible here.
You should also look for instances of ByteArrayOutputStream in your code; they tend to take up a large chunk of memory as well.
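For illustration, here is a minimal sketch of the kind of leak a profiler typically uncovers: a collection with static lifetime that only ever grows (the class and method names are made up for the example):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakExample {
    // A collection with static lifetime: entries added here are never
    // removed, so the heap grows with every call until it overflows.
    static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        CACHE.add(new byte[1024]); // 1 KB retained per request, forever
    }

    static int cacheSize() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest();
        }
        System.out.println("retained entries: " + cacheSize());
    }
}
```

In a heap dump, a leak like this shows up as one collection dominating the retained size; following its incoming references leads straight to the static field.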
I'm studying the JDK 7 JVM's Runtime Data Areas.
I want to compare the JDK 7 and JDK 8 JVMs. There are several memory areas in the JDK 7 JVM, but I'm confused.
I've been looking for JDK 7 JVM Runtime Data Areas architecture diagrams and blog articles, but the articles all say different things:
Heap (including Young Generation, Old Generation)
Method Area (where is it located in the JVM? Heap? Non-heap? Native memory? Or is it independent?)
Runtime Constant Pool in Method Area
JVM Stack in Native Memory
Native Method Stack in Native Memory
PC Register in Native Memory
But I'm confused about PermGen's location in the Runtime Data Areas:
Some say PermGen is part of the Method Area.
Some say the Method Area is part of PermGen.
Some say PermGen is non-heap.
(Then is PermGen located in Native Memory? And are the Runtime Data Areas separated into three parts: Heap, non-Heap (Method Area), and Native Memory?)
Some diagrams show PermGen as part of the Heap.
What is correct?
If you differentiate simply between heap and native memory, PermGen is part of the heap area, and so is the Method Area.
The image you've attached is basically right in this regard.
In the HotSpot VM, the Permanent Generation (PermGen) is/was one of the heap areas.
It is, however, a special heap space separated from the main memory heap.
It is not affected by Java options like -Xmx or -Xms and has its own limits and garbage collection behavior. Therefore one could also say it is non-heap, depending on the viewpoint and context.
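To make the sizing point concrete, a hedged example (jar name and sizes are placeholders): on JDK 7, PermGen has its own flags, separate from -Xmx, while JDK 8 removed PermGen and moved class metadata into the native-memory Metaspace:

```shell
# JDK 7: PermGen is sized independently of the -Xmx heap limit
java -Xmx1024m -XX:PermSize=128m -XX:MaxPermSize=256m -jar app.jar

# JDK 8+: PermGen is gone; class metadata lives in Metaspace (native memory)
java -Xmx1024m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=256m -jar app.jar
```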
We have apps deployed in production [Java / Scala]. We have alerts set up for whenever there is a spike in CPU or memory usage.
Occasionally we see a huge spike in CPU or memory usage. Sometimes the application, which runs on Play, stops responding to requests.
I usually check the logs for the last few API hits before the crash; that way I recently figured out that one of the APIs was downloading a huge dump of data and memory got exhausted.
Can I get tips for troubleshooting these issues in general [commands / tools to capture stats] when things go wrong in prod?
This requires a lot of experience, though. Below are some steps that you could follow:
Prerequisite:
You should understand the JVM memory layout, i.e., the New Generation (Eden, Survivor-0, Survivor-1), Old Generation, Metaspace, heap, stack, etc.
Read this to understand it better.
You should understand how garbage collection works, e.g., how the mark-and-sweep algorithm works. Check the same link as above.
Now you can install VisualVM. In VisualVM, also install the Visual GC plugin; it shows the memory used in the different spaces and adds a Visual GC tab.
i) Observe the graphs (especially the Used Heap graph) in the Monitor tab.
Trick: You can also perform a manual GC to observe how steep the Used Heap Space line is and how quickly it fills up while running some block of code. I've used this many times and it really helps, especially when combined with the debugger!
ii) Also, try to observe the Thread Dump if multithreading is causing some issue.
iii) In any case, you could also do some profiling or sampling via the Profiler and Sampler tabs.
The sampler view shows clearly how much memory is taken by each data type.
Important: the sampler shows heap usage; you can switch to the Per Thread Allocation tab to see per-thread allocation.
Similarly, you could observe CPU consumption.
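When the GUI isn't available on the production box, the thread dump from step ii) can also be captured on the command line with jstack, which ships with the JDK (the pid and file name below are placeholders):

```shell
# -l also prints information about ownable synchronizers (locks)
jstack -l <pid> > thread_dump.txt
```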
Alternatively, use JMeter if you can't reproduce the issue locally; JMeter can help you extensively load test your application.
Also, if you have integrated a server monitoring tool, that can be helpful too: you can easily get notified about problematic code.
Finally, you could take a heap dump from the production system and analyze it locally using VisualVM:
jmap -J-d64 -dump:format=b,file=<heap_dump_filename> <pid>
This link has more detailed answers from some really good developers on the same topic.
Use jstat. It comes with the JDK and is very handy sometimes.
jstat -gc 2341   # 2341 is the java process id
These tips are from my experience, but in this area there is always more to learn, and my knowledge keeps evolving as I face more such issues. So practice and explore further.
Having said that, there are other tools available, so feel free to find ones that suit your needs well. To get started, take a look at JConsole.
In my application, when multiple users (at least 10) log in from different locations at the same time, the application shows two errors:
1) an OutOfMemoryError (heap space / GC error)
and,
2) a Metaspace error (in a JDK 8 environment).
For your information, the application is running on a 64-bit Windows 7 system and uses JDK 8.
The JVM parameters are set to 1.5GB in the environment, like below:
-XX:PermSize=1024m -XX:MaxPermSize=1512m
I need guidance on two things:
1) Please suggest a solution to the GC problem, so that any number of users can access the application at the same time.
2) Please also explain how to solve the Metaspace error and how to increase the application's default Metaspace size.
Thanks.
Classes and class metadata (both statically and dynamically loaded classes) are stored in the Metaspace.
The Metaspace is not part of the Java heap; it is allocated out of native memory.
java.lang.OutOfMemoryError: Metaspace
The Metaspace is unlimited by default unless capped with MaxMetaspaceSize. This exception means that the Metaspace is full with the loaded classes and their metadata, and the application is requesting to load more classes. The JVM has already invoked a full GC but could not free up any space in the Metaspace.
It indicates that either the Metaspace is sized smaller than the footprint of the classes and their metadata, or the application is unintentionally holding on to some set of classes in the Metaspace.
To increase the heap size, use the following option:
-Xmx<size>
To increase the Metaspace size, use the following options:
-XX:MetaspaceSize=<size> -XX:MaxMetaspaceSize=<size>
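For example, with hypothetical sizes (the jar name is a placeholder and the right values depend on your application):

```shell
java -Xmx2g -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m -jar app.jar
```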
I have a variety of Rails applications running on Torquebox. Occasionally, Torquebox will stop responding to requests for a short period of time (maybe 2-5 minutes) and the logs will fill up with the following error messages:
java.lang.OutOfMemoryError: Direct buffer memory
The errors happen at unpredictable times (often days between). Load testing doesn't reproduce the problem, and the issue doesn't happen during peak load anyway. And, unlike many other types of memory errors I've seen in the past, the server actually recovers and starts responding normally again without any sort of restarts or intervention.
Are there any common coding mistakes, misconfigurations, or other potential problems that regularly cause this error? Google reveals lots of lower-level Java / garbage-collection issues with various libraries (Netty, for example), but I'm interested to see if there are other common places to look as well.
"JNA/ByteBuffer not getting freed and causing C heap to run out of memory" indicates that direct memory might not be cleared if the Java heap doesn't require garbage collection very often (perhaps, in your case, at non-peak times).
If direct memory is used at a constant rate regardless of load, then the application might not be triggering garbage collection often enough during the lighter load times. Using the GC module might help (http://ruby-doc.org/core-2.0/GC.html).
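To illustrate what "direct buffer memory" refers to, here is a minimal sketch (the class and method names are made up): a direct ByteBuffer's backing storage is allocated in native memory outside the Java heap, and it is only released when the small heap-side wrapper object is garbage-collected; that is why infrequent GC can exhaust direct memory.

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    // Allocates a buffer backed by native memory rather than the Java heap.
    // The native allocation is freed only when the wrapper object is GC'd,
    // so rare garbage collections can let direct memory pile up.
    static ByteBuffer allocate(int bytes) {
        return ByteBuffer.allocateDirect(bytes);
    }

    public static void main(String[] args) {
        ByteBuffer buf = allocate(1024 * 1024);
        System.out.println(buf.isDirect());
        System.out.println(buf.capacity());
    }
}
```

The total amount of such memory can be capped with the -XX:MaxDirectMemorySize JVM option.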
The problem in this case ended up being lack of enough total available memory. Java's memory footprint can end up being significantly larger than just the heap allocation. Direct Buffer Memory is only one of the types of memory use that falls outside the heap.
It's still not clear to me why the fluctuations occur at unpredictable times, but ensuring there is enough excess memory on the system to handle some fluctuation is critical to stability.
This is a more specific question to follow up on another question I asked recently. A correct answer to this question will earn a correct answer for that previous question too (since that is still in limbo)!
Basically, I have a Java desktop application with a memory leak. I am using the memory profiler in the NetBeans IDE to investigate. These are the steps I have taken so far:
Attach a new memory profiler to the Netbeans project
Define profiling points at several carefully chosen lines of code, and set them to trigger a memory heap dump
Run the application in profiling mode
The end result of this is that I have several memory dumps saved to disk in *.hprof files. The NetBeans IDE lets me peruse the contents (basic sort and search) of these memory dumps, and even lets me walk the heap by seeing what references are contained within each instance and what other objects reference each instance. That is all good, and I have been able to identify one or two fairly obvious memory leaks and rectify about 15% of the problem so far.
However, right now my method relies on forming hypotheses about which objects should NOT be in memory at a particular point in time, and then investigating those. What I am after now is a way to compare two separate heap dumps: I have two heap dumps that should be almost the same, because the application has been restored to the same state.
However, one is from before the memory leak and the other from after, so they are obviously different. If I could compare these two heaps using a tool, instead of manually as I am doing now, then I would not need to rely on hypotheses to identify where the leaks are occurring; the tool could identify them for me.
This is important to me because of the sheer number of classes and instances involved in this particular application (700+ and millions, respectively).
Is the Netbeans IDE's profiler capable of doing this?
If not, is there a tool out there that is able to perform this task?
Thank you!
There is also a free, GUI tool for this task: VisualVM. It lets you take several heap dumps and then tell it to compare one with another, displaying the differing contents as a list, with a graphical representation of each element's share of used memory. Also, interactively browsing the heap dump difference is much more comfortable than with jhat.
You could use jhat. Specifically, look at the -baseline <baseline-dump-file> option; the page I referenced says the following:
"Specify a baseline heap dump. Objects in both heap dumps with the same object ID will be marked as not being "new". Other objects will be marked as "new". This is useful while comparing two different heap dumps."
This may help when comparing the two heap dumps.
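For example, assuming the two dumps are saved as before.hprof and after.hprof (file names are placeholders):

```shell
# objects present in after.hprof but absent from before.hprof are marked "new"
jhat -baseline before.hprof after.hprof
```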
YourKit can compare heap dumps.