I know this question is a frequent one, but I checked some solutions from this site and none of them seem to work for me.
So, I have written a small Java program, and I want to know how much memory it consumes at different moments during execution. I tried Runtime.getRuntime().totalMemory() and Runtime.getRuntime().freeMemory(), but I get (almost) the same results each time.
I am not interested in how much memory is available to or used by the entire JVM; I want to know the usage of my Java process specifically.
This will get you how much heap memory your process has used:
// Requires java.lang.management.ManagementFactory and java.lang.management.MemoryUsage
MemoryUsage heapMemoryUsage = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
long usedHeapBytes = heapMemoryUsage.getUsed(); // heap currently used, in bytes
You can also get all your memory pools and iterate through them to determine other memory usage:
List<MemoryPoolMXBean> memoryPoolMXBeans = ManagementFactory.getMemoryPoolMXBeans();
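For example, a minimal sketch of iterating the pools (pool names such as "PS Eden Space" vary by JVM and collector, so the output is illustrative):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
    MemoryUsage usage = pool.getUsage(); // may be null if the pool is no longer valid
    if (usage != null) {
        System.out.println(pool.getName() + " (" + pool.getType() + "): "
                + usage.getUsed() + " bytes used");
    }
}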
You could also use jvisualvm to interrogate your application.
I am not interested in how much memory is available to or used by the entire JVM; I want to know the usage of my Java process specifically.
In a typical use case, there is no real distinction between "the entire JVM" and "your Java process"; at least, not from the perspective of memory usage.
The thing is, everything that the JVM does, and every bit of memory it allocates, is done at the behest of the application. And every word of memory remains allocated because something in your application might use it at some point.
So what you are asking for doesn't really make sense, and naturally the information is not available. Even if it did make sense, I don't know what you would gain by knowing which memory "belongs" to your application and which to the JVM.
I'm using the Java Persistence API to develop standalone software. Recently I noticed that memory usage keeps rising when I create objects from entity classes, as well as JPAController classes. It seems that the objects stay in memory, since the memory allocated to the project won't decrease (e.g. 400 MB -> create objects -> 450 MB -> stays at 450 MB). Will this affect performance badly? Should I call the System.gc() method to remove these objects?
Generally, System.gc() is not guaranteed to perform a garbage collection; ultimately it is up to the JVM to decide. See the javadoc.
Have you observed what happens when you approach the JVM's memory limit? Does garbage collection happen then? If not, and you receive an OutOfMemoryError, either you are retaining something longer than you need to, or you actually need extra heap allocated to your VM.
In any case, I believe System.gc() shouldn't be used to solve such problems.
In my opinion, the approach to the problem should be different. The call to System.gc() is actually no guarantee that it will free any memory at all; please see When does System.gc() do anything.
If you can measure a problem in your memory allocation, via jconsole, a post-mortem analysis of a JVM heap dump, or whatever, then that is a different problem. By gathering this information you will know what remains in which of your memory regions, and can then take action to contain it.
The only way this would negatively affect performance throughout the life of your program is if you want to keep these entities around forever but the old generation in your heap is smaller than the 450 MB you mentioned. Assuming you want to keep between 1 and 2 times those 450 MB around forever, with the default ratios of the JVM, setting a parameter such as -Xmx2g will probably be fine. There are many more parameters for fine-tuning performance beyond that, but that's probably all the complexity you're looking for for now. If you want more detail on heap tuning and really want to get into performance, check out the Garbage Collection Tuning doc by Oracle. Alternatively, something to eat lunch to is a great YouTube video on GC tuning by a guy named Gil Tene.
But calling System.gc() probably won't do anything useful.
I've written a pretty complex Java application that does a lot of calculations on price data from the markets in real time. Looking at the Task Manager in Windows, this thing takes close to 1 MB every 30 seconds, and performance is fine until it gets near the memory limit around 300 MB; then the garbage collector really kicks in, spikes my CPU to around 50%, and the UI performance rapidly degrades. From what I've written so far, it sounds like I have some bad code going on, because the nature of my program is CPU-intensive but by design it stores very little data in memory.
I need some help on good next steps to figure out what the problem is. I think if I could see which objects are getting stored in memory, that would help, as maybe I have some lousy code somewhere. But I am heartbroken with Java, as I thought these were problems I would not have to worry about. Thanks for any answers. - Duncan
Identify some reasonable performance targets (memory usage, throughput, latency).
Put together some repeatable performance tests; the closer you can get these to real-life scenarios, the better.
Get hold of a good profiler. I've used YourKit with a lot of success, and the NetBeans and Eclipse profilers are not bad either. Most decent profilers can identify memory usage, GC activity, and performance hotspots.
Identify the biggest culprits and start fixing the issues beginning at the TOP of the list.
Check out VisualVM. It's in the current JDK bin directory as jvisualvm. If you don't have a memory leak, the heap usage should go down when you run the garbage collector, and you can see which objects may be holding memory by calculating the retained sizes of objects in the heap.
http://download.oracle.com/javase/6/docs/technotes/guides/visualvm/intro.html
Like others say, use a profiler to find what is consuming the memory.
If you don't know already, the garbage collector can only release memory for objects that are unreachable, that is, objects to which nothing holds a reference anymore. Just make sure an object goes out of scope when you're done with it. It sounds like you're holding on to something in a way where it's still referenced somewhere.
Also, if you want to suggest to the GC that it cleans up, try this:
System.gc();              // suggest a garbage collection
System.runFinalization(); // suggest running pending finalizers
Again, that is only a suggestion to the GC, but I've found it really helps if you run it after a lot of objects go out of scope.
Lastly, you can tweak your VM arguments.
There are settings for the minimum and maximum heap size. If it's a critical application, set them to the same value and set it high; that way the JVM doesn't have to keep allocating and deallocating, it just grabs one big chunk at startup. This isn't a fix, just a workaround.
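A hypothetical startup line along those lines (the 512m figure and jar name are placeholders, not recommendations):

java -Xms512m -Xmx512m -jar yourapp.jar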
I have been developing a small Java utility that uses two frameworks, Encog and Jetty, to provide neural network functionality for a website.
The code is 'finished' in that it does everything it needs to do, but I have some problems with memory usage. When running on my development machine, memory usage seems to fluctuate between about 4 MB and 13 MB when the application is doing things (training neural networks), and at most it uses about 18 MB. This is very good usage, and I think it is due to the fact that I call System.gc() fairly regularly. I do this because processing time doesn't matter to me, but memory usage does.
So it all works fine on my machine, but as soon as I put it online on our server (shared Unix hosting with memory limits) it uses about 19 MB to start with and rises to hundreds of MB when doing things. These are the same things I have been doing in testing. The only way I know of to reduce the memory usage is to quit the application and restart it.
The only difference I can tell is the Java Virtual Machine it is being run on. I don't know much about this, and I have tried to find out why it is acting this way, but a lot of the documentation assumes a great knowledge of Java and virtual machines. Could someone please help me with some reasons why this may be happening and perhaps some things to try to stop it?
I have looked at using GCJ to compile the application, but I don't know whether that is something I should put a lot of time into, or whether it will actually help.
Thanks for the help!
UPDATE: Developing on Mac OS 10.6.3; the server is on a Unix OS, but I don't know which. (The server is from WebFaction.)
I think it is due to the fact that I call System.gc() fairly regularly
You should not do that; it's almost never useful.
A garbage collector works most efficiently when it has lots of memory to play with, so it will tend to use a large part of what it can get. I think all you need to do is set the max heap size to something like 32 MB with the -Xmx32m command-line parameter. The default depends on whether the JVM believes it's running on a "server-class" system, in which case it assumes that you want the application to use as much memory as it can in order to give better throughput.
BTW, if you're running a 64-bit JVM on the server, it will legitimately need more memory (usually about 30%) than a 32-bit JVM, due to larger references.
Two points you might consider:
Calls to System.gc() can be disabled with a command-line parameter (-XX:+DisableExplicitGC), and I think the behaviour also depends on the GC algorithm the VM uses. Normally, invoking the GC should be left to the JVM.
As long as there is enough memory available for the JVM, I don't see anything wrong with using that memory to increase application and GC performance. As Michael Borgwardt said, you can restrict the amount of memory the VM uses at the command line.
Also, you may want to look at which mode the JVM was started in when you deploy it online. My guess is it's the server VM.
Take a look at the differences between the two right here on Stack Overflow. Also, see which garbage collector is actually running on the actual deployment, and whether you can tweak the GC behaviour or change the GC algorithm. See the -X options if it's a Sun JVM.
Basically, the JVM takes as much memory as it is allowed to, as needed, in order to make the "new" operation as fast as possible (this is a science in itself).
So if you have a lot of objects being used and then discarded, you will slowly but surely fill up the available memory. Then you can ask for garbage collection, but it is just a hint, and the JVM may choose not to listen.
So you need another mechanism to keep memory usage down. The typical approach is to limit the amount of memory with the -X options, but be careful: the JVM you use on your PC may be very different from the one you deploy on, and its memory needs may therefore differ.
Is there a deliberate requirement for low memory usage? If not, then just let it run and see how the JVM behaves. Use jvisualvm to attach and monitor.
Perhaps the server uses more memory because there is a higher load on your app and so more threads are in use? Jetty will use a number of threads to spread out the load if there are a lot of requests. It's worth a look at the thread count on the server versus on your test machine.
I have a standalone Java program running on a Linux server. I started the JVM with -Xmx256m. I attached a JMX monitor and can see that the heap never really passes 256 MB. However, on my Linux system, when I run the top command I can see that:
1) First of all, the RES memory usage of this process is around 350 MB. Why? I suppose this is because of memory outside of the heap?
2) Secondly, the VIRT memory usage of this process just keeps growing and growing. It never stops! It now shows 2500 MB! So do I have a leak? But the heap doesn't increase; it just cycles!
Ultimately this poses a problem because the system's swap usage keeps growing and eventually the system dies.
Any ideas what is going on?
The important question I want to ask is: what are some scenarios in which this could be a result of my code and not of the JVM, the kernel, etc.? For example, if the number of threads kept growing, would that fit my observations? Is there anything similar you can suggest I look out for?
A couple of potential problems:
Direct-allocated buffers and memory-mapped files are allocated outside of the Java heap and can't conveniently be disposed of.
An area of stack is reserved for each new thread.
The permanent generation (code and interned strings) is outside of the usual heap. It can be a problem if class loaders leak (usually when reloading webapps).
It's possible that the C heap is leaking.
pmap -x <pid> should show where your memory has gone.
Swap the Sun JVM for the IBM JVM (or vice versa) to test.
RES will include code + non-heap data. Also, some things that you might think would be stored in the heap aren't, such as thread stacks and "class data". (It's a matter of definition, but code and class data are controlled by -XX:MaxPermSize=.)
This one sounds like a memory leak in the JVM implementation, the Linux kernel, or JNI library code.
If you're using the Sun JVM, try the IBM one, or vice versa.
I'm not sure exactly how dlopen works, but code accessing system libraries might be remapping the same thing repeatedly, if that's possible.
Finally, you should use ulimit to make the system fail earlier, so you can repeat tests easily.
WRT #1, it's normal for your RSS to be larger than your heap, because system libraries and non-Java code are included in the RSS but not in the heap size.
WRT #2, yes, it sounds like you have a leak of some sort. If the system itself is crashing, you are likely consuming too many system resources, like sockets, threads, or files.
Try using lsof -p <pid> to see which files the JVM has open. Run this a few times as your memory increases. If the JVM is crashing, be sure to set the -XX:+HeapDumpOnOutOfMemoryError option.
In my experience, the most common cause of non-heap memory leaks in Java is a thread leak.
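If you want a quick sanity check for that, the standard ThreadMXBean can report live and peak thread counts; a minimal sketch:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

ThreadMXBean threads = ManagementFactory.getThreadMXBean();
// If this number climbs steadily and never comes back down, suspect a thread leak
System.out.println("Live threads: " + threads.getThreadCount()
        + " (peak: " + threads.getPeakThreadCount() + ")");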
A tool you may find useful is jvmtop, which lets you monitor heap size, thread number and other metrics in real time.
Sounds like you have a leak. Can't you do some profiling to see which function is driving the memory up? I am not sure, though.
If I had to take a stab in the dark, I would say that the JVM you are using has a memory leak.
We have a J2EE application running on JBoss and we want to monitor its memory usage. Currently we use the following code:
System.gc(); // suggest a collection first, so the reading better reflects live objects
Runtime rt = Runtime.getRuntime();
long usedMB = (rt.totalMemory() - rt.freeMemory()) / 1024 / 1024;
logger.information(this, "memory usage: " + usedMB + " MB");
This code works fine; that is, it shows a memory curve which corresponds to reality. When we create a big XML file from the DB, the curve goes up; after the extraction is finished, it goes down.
A consultant told us that calling gc() explicitly is wrong: "let the JVM decide when to run the GC". Basically his arguments were the same as those discussed here.
But I still don't understand:
How can I get my memory usage curve?
What is wrong with the explicit gc()? I don't care about the small performance issues that can happen with explicit gc(), which I would estimate at 1-3%. What I need is a memory and thread monitor that helps me analyse our system on a customer's site.
If you want to really look at what is going on in the VM's memory, you should use a good tool like VisualVM. It is free software, and it's a great way to see what is going on.
Nothing is really "wrong" with explicit gc() calls. However, remember that when you call gc() you are "suggesting" that the garbage collector run. There is no guarantee that it will run at the exact time you run that command.
There are tools that let you monitor the VM's memory usage. The VM can expose memory statistics using JMX. You can also print GC statistics to see how the memory is performing over time.
Invoking System.gc() can harm the GC's performance because objects will be prematurely moved from the new to the old generation, and weak references will be cleared prematurely. This can result in decreased memory efficiency, longer GC times, and fewer cache hits (for caches that use weak refs). I agree with your consultant: System.gc() is bad. I'd go as far as to disable it using the command-line switch.
You can take a look at stagemonitor. It is an open-source Java (web) application performance monitor. It captures response-time metrics, JVM metrics, request details (including a call stack captured by the request profiler), and more. The overhead is very low.
Optionally, you can use the great time-series database Graphite with it to store a long history of data points that you can look at with fancy dashboards.
Take a look at the project website to see screenshots, feature descriptions and documentation.
Note: I am the developer of stagemonitor.
I would say that the consultant is right in the theory, and you are right in practice. As the saying goes:
In theory, theory and practice are the same. In practice, they are not.
The Java spec says that System.gc() suggests that garbage collection be run. In practice, it just spawns a thread and runs right away on the Sun JVM.
Although in theory you could be messing up some finely tuned JVM implementation of garbage collection, unless you are writing generic code intended to be deployed on any JVM out there, don't worry about it. If it works for you, do it.
Have you tried JMX?
http://java.sun.com/developer/technicalArticles/J2SE/jconsole.html
Peek into what is happening inside Tomcat through VisualVM.
http://www.skill-guru.com/blog/2010/10/05/increasing-permgen-size-in-your-server/
Take a look at the JVM args: http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp#DebuggingOptions
-XX:-PrintGC: Print messages at garbage collection. Manageable.
-XX:-PrintGCDetails: Print more details at garbage collection. Manageable. (Introduced in 1.4.0.)
-XX:-PrintGCTimeStamps: Print timestamps at garbage collection. Manageable. (Introduced in 1.4.0.)
-XX:-PrintTenuringDistribution: Print tenuring age information.
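Note that the -XX:- prefix above shows the default (off); you enable a flag by flipping the minus to a plus, e.g. (the jar name is a placeholder):

java -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -jar yourapp.jar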
While you're not going to upset the JVM with explicit calls to System.gc(), they may not have the effect you are expecting. To really understand what's going on with the memory in a JVM, read anything and everything Brian Goetz writes.
Explicitly running System.gc() on a production system is a terrible idea. If the heap is of any size at all, the entire system can freeze while a full GC runs. On a multi-gigabyte server, this can easily be very noticeable, depending on how the JVM is configured, how much headroom it has, and so on; I've seen pauses of more than 30 seconds.
Another issue is that by explicitly calling GC you're not actually monitoring how the JVM runs the GC, you're altering it. Depending on how you've configured the JVM, it will garbage collect when appropriate, and usually incrementally (it doesn't just run a full GC when it runs out of memory). What you print out will be nothing like what the JVM would do on its own; for one thing, you'll probably see fewer automatic / incremental GCs, as you'll be clearing the memory manually.
As Nick Holt's post points out, options to print GC activity already exist as JVM flags.
You could have a thread that just prints out free and total memory at reasonable intervals; this will show you actual memory usage.
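A minimal sketch of such a thread (the 10-second interval and output format are arbitrary choices):

// Daemon thread sampling Runtime memory figures at a fixed interval
Thread memLogger = new Thread(() -> {
    Runtime rt = Runtime.getRuntime();
    while (true) {
        long usedMB = (rt.totalMemory() - rt.freeMemory()) / 1024 / 1024;
        System.out.println("used=" + usedMB + "MB free=" + rt.freeMemory() / 1024 / 1024
                + "MB total=" + rt.totalMemory() / 1024 / 1024 + "MB");
        try { Thread.sleep(10_000); } catch (InterruptedException e) { return; }
    }
});
memLogger.setDaemon(true); // don't keep the JVM alive just for logging
memLogger.start();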
If you'd like a nice way to do this from the command line, use jstat:
http://java.sun.com/j2se/1.5.0/docs/tooldocs/share/jstat.html
It prints raw information at configurable intervals, which is very useful for logging and graphing purposes.
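For example, to sample GC utilization every five seconds (the PID 1234 is a placeholder):

jstat -gcutil 1234 5000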
If you use Java 1.5, you can look at ManagementFactory.getMemoryMXBean(), which gives you numbers on all kinds of memory: heap and non-heap, perm gen.
A good example can be found here:
http://www.freshblurbs.com/explaining-java-lang-outofmemoryerror-permgen-space
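A minimal sketch of reading both figures (the toString() of MemoryUsage prints init/used/committed/max):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
System.out.println("heap:     " + mem.getHeapMemoryUsage());
System.out.println("non-heap: " + mem.getNonHeapMemoryUsage()); // perm gen, code cache, ...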
If you use the JMX-provided history of GC runs, you can use the same before/after numbers; you just don't have to force a GC.
You just need to keep in mind that those GC runs (typically one for the old and one for the new generation) do not happen at regular intervals, so you need to extract the start time as well for plotting (or you plot against a sequence number; for most practical purposes that would be enough).
For example, on the Oracle HotSpot VM with the parallel scavenge collector, there is a JMX MBean called java.lang:type=GarbageCollector,name=PS Scavenge. It has an attribute LastGcInfo, which returns a CompositeData describing the last young-generation scavenger run: its duration, absolute start time, and memory usage before and after.
Just use a timer to read that attribute. Whenever a new startTime shows up, you know it describes a new GC event; you extract the memory information and keep polling for the next update. (I'm not sure whether an AttributeChangeNotification can somehow be used.)
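A rough sketch of such a poller using the JDK-specific com.sun.management API (Oracle/OpenJDK only; the one-second interval is arbitrary):

import java.lang.management.ManagementFactory;
import java.util.HashMap;
import java.util.Map;
import com.sun.management.GcInfo;

Map<String, Long> lastStart = new HashMap<>();
while (true) {
    for (java.lang.management.GarbageCollectorMXBean bean
            : ManagementFactory.getGarbageCollectorMXBeans()) {
        if (!(bean instanceof com.sun.management.GarbageCollectorMXBean)) continue;
        GcInfo info = ((com.sun.management.GarbageCollectorMXBean) bean).getLastGcInfo();
        Long prev = lastStart.get(bean.getName());
        // A startTime we haven't seen yet marks a new GC event since the last poll
        if (info != null && (prev == null || prev != info.getStartTime())) {
            lastStart.put(bean.getName(), info.getStartTime());
            System.out.println(bean.getName() + ": duration=" + info.getDuration()
                    + " ms, usage after=" + info.getMemoryUsageAfterGc());
        }
    }
    try { Thread.sleep(1000); } catch (InterruptedException e) { break; }
}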
Tip: in your timer you might measure the distance to the last GC run, and if that is too long for the resolution of your plotting, you could invoke System.gc() conditionally. But I would not do that in an OLTP instance.
As has been suggested, try VisualVM to get a basic view.
You can also use Eclipse MAT, to do a more detailed memory analysis.
It's OK to do a System.gc() as long as you don't depend on it for the correctness of your program.
The problem with System.gc() is that the JVM already automatically allocates time to the garbage collector based on memory usage.
However, if you are, for instance, working in a very memory-limited environment, like a mobile device, System.gc() allows you to manually allocate more time to garbage collection, but at the cost of CPU time (though, as you said, you aren't that concerned about the performance cost of GC).
Best practice would probably be to use it only where you might be doing large amounts of deallocation (like flushing a large array).
All considered, since you are simply concerned about memory usage, feel free to call gc(), or, better yet, see whether it makes much of a memory difference in your case, and then decide.
About System.gc()… I just read the following sentence in Oracle's documentation here:
The performance effect of explicit garbage collections can be measured by disabling them using the flag -XX:+DisableExplicitGC, which causes the VM to ignore calls to System.gc().
If your VM vendor and version support that flag, you can run your code with and without it and compare performance.
Also note that the sentence quoted above is preceded by this one:
This can force a major collection to be done when it may not be necessary (for example, when a minor collection would suffice), and so in general should be avoided.
JavaMelody might be a solution for your needs.
Developed for Java EE applications, this tool measures and builds reports on the real operation of your applications in any environment. It's free and open-source, and easy to integrate into applications, with some history, no database or profiling required, and a really light footprint.