I'm studying the JDK 7 JVM's Runtime Data Areas.
I want to compare the JDK 7 JVM with the JDK 8 JVM. There are several memory areas in the JDK 7 JVM, but I'm confused about them.
I've been looking for JDK 7 JVM Runtime Data Areas architecture diagrams and blog articles, but every article says something different.
Heap (including Young Generation, Old Generation)
Method Area (where is this located in the JVM? heap? non-heap? native memory? or is it independent?)
Runtime Constant Pool in Method Area
JVM Stack in Native Memory
Native Method Stack in Native Memory
PC Register in Native Memory
But I'm confused about PermGen's location in the Runtime Data Areas.
Some say PermGen is part of the Method Area.
Some say the Method Area is part of PermGen.
Some say PermGen is non-heap.
(In that case, is PermGen located in Native Memory? And are the Runtime Data Areas then split into three parts: Heap, non-Heap (Method Area), and Native Memory?)
Some diagrams show PermGen as part of the Heap.
What is correct?
If you differentiate simply between Heap and Native memory, PermGen is part of the Heap area. So is the Method Area.
The image you've attached is basically right in this regard.
In the HotSpot VM, the Permanent Generation (PermGen) is/was one of the Heap areas.
It is, however, a special heap space separated from the main memory heap.
It is not affected by Java options like -Xmx or -Xms and has its own limits and garbage collection behavior. Therefore one could also say it is non-heap, depending on the viewpoint and context.
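To illustrate, PermGen is sized with its own flags, independently of the main heap sizing. A hypothetical launch (the jar name and the sizes are only example placeholders) might look like:
java -Xms512m -Xmx1024m -XX:PermSize=128m -XX:MaxPermSize=256m -jar myapp.jar
Here -Xmx caps only the main heap, while -XX:MaxPermSize caps PermGen separately; exhausting either limit produces its own distinct OutOfMemoryError (Java heap space vs. PermGen space).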
I am currently working on an event-driven system with multiple components running. Recently, I received an urgent requirement to identify the memory consumption of the running Java components, so that we can give a rough idea of the memory requirements before the system is deployed to UAT/customer production environments.
Is there any API with which the deep retained size can be calculated, or a formula with which the memory requirements can be computed?
Any ideas on this will surely help.
I have seen some APIs (the java.lang.instrument API) with which the shallow size can be calculated, but this will not suffice for my needs.
I also found Javassist, with which Java bytecode can be modified at runtime.
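For reference, the shallow-size approach I looked at is roughly the following sketch; it has to run as a -javaagent, and the class and jar names here are only placeholders:
import java.lang.instrument.Instrumentation;

// Packaged in a jar whose manifest declares Premain-Class: SizeOfAgent,
// then started with: java -javaagent:sizeof-agent.jar -jar app.jar
public class SizeOfAgent {
    private static volatile Instrumentation inst;

    public static void premain(String agentArgs, Instrumentation instrumentation) {
        inst = instrumentation;
    }

    // Shallow size only: the object's own header and fields,
    // not anything the object references.
    public static long shallowSize(Object o) {
        return inst.getObjectSize(o);
    }
}
Deep/retained size would then mean walking the object graph reflectively and summing getObjectSize over every reachable object while avoiding double-counting shared references, which is exactly the part I am missing.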
To identify the memory consumption of a Java application, you can use a profiler.
In JDK 6 or greater you can find jvisualvm (https://docs.oracle.com/javase/8/docs/technotes/tools/unix/jvisualvm.html).
With jvisualvm, you can attach to a Java process and, in the Sampler tab, see the memory consumed grouped by class type.
There are other powerful profilers as well (JProfiler is one of them).
Enable garbage collection logging and analyze the log. As a bonus you will also be able to identify (and fix) aberrant behaviour.
To turn on gc logging, use the following flags:
-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintTenuringDistribution
-XX:+PrintGCCause
-Xloggc:/gc-%t.log
This log file can then be handled in a number of tools like Censum from JClarity or uploaded to https://gceasy.io/ for easy analysis. Note that you will see the memory consumption as a whole for the app, not a breakdown. For that you will have to use something like VisualVM mentioned above.
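Putting the flags together, a launch command might look like the following (the application jar is a placeholder; the log path is taken from the flag list above):
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCCause -Xloggc:/gc-%t.log -jar myapp.jar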
In my application, while multiple users (at least 10) are logging in from different locations at the same time, the application shows two errors:
1) an OutOfMemoryError (heap space / GC error)
and,
2) a Metaspace error (in a JDK 8 environment).
For your information, the application is running on a 64-bit Windows 7 system and uses a JDK 8 environment.
The JVM parameters are set to about 1.5 GB in the environment, like below:
-XX:PermSize=1024m -XX:MaxPermSize=1512m
I need guidance on two points:
1) Please suggest a solution to the GC problem, so that any number of users can access the application at the same time.
2) Please also explain how to solve the Metaspace error and how to increase the default Metaspace size of the application.
Thanks.
Classes and class metadata are stored in the Metaspace: static as well as dynamically loaded classes.
The Metaspace is not part of the Java heap and is allocated out of native memory.
java.lang.OutOfMemoryError: Metaspace
The Metaspace is by default unlimited, bounded only by MaxMetaspaceSize. This exception means that the Metaspace is full with the loaded classes and their metadata, and the application is requesting to load more classes; the JVM has already invoked a Full GC but could not free up any space in the Metaspace.
It indicates either that the Metaspace is sized smaller than the footprint of the classes and their metadata, or that the application is unintentionally holding on to some set of classes in the Metaspace.
To increase the heap size, use the following option:
-Xmx<size>
If you want to increase the Metaspace size, use the following options:
-XX:MetaspaceSize=<size> -XX:MaxMetaspaceSize=<size>
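For example (the numbers below are only illustrative, not a recommendation for your workload):
java -Xmx2g -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m -jar yourapp.jar
Note that -XX:PermSize / -XX:MaxPermSize have no effect on JDK 8; the Metaspace flags replace them.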
I have developed a project which has more than 3000 source files; when I run my project, after a few minutes it gives me an error like java.lang.OutOfMemoryError: Java heap space. I also increased the memory of my project by right-clicking and, in the VM options, giving it 1024 MB. I have two 2 GB RAM modules in my PC.
As you likely know, you can increase the JVM's memory using java -Xms<initial heap size> -Xmx<maximum heap size>.
However, this just delays the issue, as there is likely a place in your application where memory is leaking and causing the heap to overflow. I suggest you use a tool like the NetBeans Profiler, which will help you find out where the memory leak is occurring. The NetBeans Profiler will allow you to see where objects are created, where garbage collection occurs, etc.
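To illustrate the kind of leak a profiler would surface, here is a hypothetical (made-up) pattern: a static collection that only grows and is never cleared, so nothing it references can ever be reclaimed:
import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // Static reference: everything added here stays reachable for the
    // lifetime of the class, so the GC can never reclaim it.
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void handleRequest() {
        CACHE.add(new byte[1024 * 1024]); // 1 MB per call, never removed
    }

    public static void main(String[] args) {
        while (true) {
            handleRequest(); // eventually: java.lang.OutOfMemoryError: Java heap space
        }
    }
}
A profiler or heap dump would show the ever-growing list (and its byte[] entries) as the dominant consumer, which is the pattern to look for in your own code.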
It may be that the application just allocates a lot of memory or there is an actual leak. My approach would be to use a memory analyzer such as Eclipse MAT to see what objects are taking up the most memory.
If they all seem valid, then you probably need to increase the heap space. That said, I have worked on 5000+ class web app projects with a 512 MB heap, so I wouldn't be surprised if it is a memory leak.
You should also look for instances of ByteArrayOutputStream in your code; they tend to take up a large chunk of memory as well.
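As a small illustration of why ByteArrayOutputStream tends to show up in heap analyses: its internal byte[] grows (roughly doubling) and is retained until the stream object itself becomes unreachable. A contrived example:
import java.io.ByteArrayOutputStream;

public class BaosGrowth {
    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] chunk = new byte[64 * 1024];
        for (int i = 0; i < 1024; i++) {
            out.write(chunk, 0, chunk.length); // internal buffer keeps growing
        }
        // ~64 MB is now held by the stream's internal array for as long
        // as 'out' stays reachable.
        System.out.println("bytes buffered: " + out.size());
    }
}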
I have a variety of Rails applications running on Torquebox. Occasionally, Torquebox will stop responding to requests for a short period of time (maybe 2-5 minutes) and the logs will fill up with the following error messages:
java.lang.OutOfMemoryError: Direct buffer memory
The errors happen at unpredictable times (often days between). Load testing doesn't reproduce the problem, and the issue doesn't happen during peak load anyway. And, unlike many other types of memory errors I've seen in the past, the server actually recovers and starts responding normally again without any sort of restarts or intervention.
Are there any common coding mistakes, misconfigurations, or other potential problems that regularly cause this error? Google reveals lots of lower-level Java/garbage-collection issues with various libraries (Netty, for example), but I'm interested to see if there are other common places to look as well.
"JNA/ByteBuffer not getting freed and causing C heap to run out of memory" indicates that direct memory might not be cleared if the Java heap doesn't require garbage collection very often (perhaps, in your case, at non-peak times).
If you have a constant function using direct memory, regardless of load, then the application might not be triggering garbage collection often enough during the lighter-load times. Using the GC module might help (http://ruby-doc.org/core-2.0/GC.html).
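For context, direct buffer memory is the off-heap memory handed out by java.nio.ByteBuffer.allocateDirect; it is reclaimed only when the owning buffer object is collected by a heap GC, and its ceiling can be raised explicitly (the value below is just an example):
-XX:MaxDirectMemorySize=512m
A minimal sketch of an allocation that counts against that limit:
import java.nio.ByteBuffer;

public class DirectAlloc {
    public static void main(String[] args) {
        // Allocated outside the Java heap; freed only after the buffer
        // object itself is garbage-collected.
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64 MB
        System.out.println("direct capacity: " + buf.capacity());
    }
}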
The problem in this case ended up being lack of enough total available memory. Java's memory footprint can end up being significantly larger than just the heap allocation. Direct Buffer Memory is only one of the types of memory use that falls outside the heap.
It's still not clear to me why the fluctuations occur at unpredictable times, but ensuring there is enough excess memory on the system to handle some fluctuation is critical to stability.
I run a Minecraft server on a 32-bit Ubuntu system. If I upgrade to 64-bit, what is the max memory I can give to Java? I want it to have about 12 GB of RAM, but I can't do that on 32-bit.
There is effectively no maximum to the amount of RAM a 64-bit system can address. You will be stopped only by your computer's hardware. I don't think Java has a maximum amount of allotted RAM either, provided you use the right switch on the command line.
http://en.wikipedia.org/wiki/64-bit
Just to be clear, "hardware" includes paging / swap space, so if you actually require 12GB and only have 8GB of RAM, you'll need to be sure to have 4GB of spare swap space in order for Java to allocate additional memory successfully.
From the Java Tuning white paper:
For a 32-bit process model, the maximum virtual address size of the process is typically 4 GB, though some operating systems limit this to 2 GB or 3 GB. The maximum heap size is typically -Xmx3800m (1600m) for 2 GB limits, though the actual limitation is application dependent. For 64-bit process models, the maximum is essentially unlimited.
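So on a 64-bit JVM, the 12 GB you mention could simply be requested on the command line (the server jar name is a placeholder):
java -Xms12g -Xmx12g -jar minecraft_server.jar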
But the truth is, such a huge heap (12 GB) can be counterproductive. After running for a long time, the time your application spends doing garbage collection will negate the effect of having so much memory available.