I'm writing some Java code and I want to run some performance tests.
I want to get the heap memory used by only one of the methods in my class.
public class AccessControl {
    public boolean Allowed() {
        // code
    }
    public void print() {
        // code
    }
}
I want to get the heap memory used every time the method Allowed is called at runtime. I read that I can do this with HPROF, but I noticed that HPROF doesn't provide memory figures per method, only per class. Is there code I can write inside the method to get the available memory and then the used memory? Thanks.
There is no such thing as "heap memory used by a method". Methods don't take up heap memory, objects do.
If you're interested in the objects created within a specific method (and, of course, the methods it calls directly and indirectly), you could compare heap snapshots created before and after the method call (doable by running in a debugger and setting breakpoints).
But what actual problem are you trying to solve? Memory leaks are usually diagnosed by first finding the GC roots for apparently-unnecessary objects and then using a debugger to find out where and why these references are set.
InMemProfiler can be used to profile memory allocations. In trace mode this tool can be used to identify the source of memory allocations.
You could use the option #tracetarget-org.yourcode.YourClass to get the "Allocating Classes" output to show the per method memory allocation within YourClass.
However, you will only be able to profile memory allocations for each allocated class separately; there is currently no summary across all the allocated classes. If you were really keen, you could fork InMemProfiler and try to add this functionality.
I would recommend that when profiling you do not bias your results by introducing new code. If HPROF does not solve your problem, then you could use a debugger and apply breakpoints around where you want to analyze. You may also want to try JProfiler. I would recommend that instead of focusing on one specific place in your code, to look at the big picture first. Otherwise you risk premature optimization.
If you use the Oracle JVM, you can do it:
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

ThreadMXBean threadMx = ManagementFactory.getThreadMXBean();
// getThreadAllocatedBytes is a com.sun.management extension, hence the cast
long startBytes = ((com.sun.management.ThreadMXBean) threadMx)
        .getThreadAllocatedBytes(Thread.currentThread().getId());
// call the method you want to measure here
long usedBytes = ((com.sun.management.ThreadMXBean) threadMx)
        .getThreadAllocatedBytes(Thread.currentThread().getId()) - startBytes;
One way is to use a memory profiler like JProfiler, but if you just want to see the memory used by a set of statements, use the code below:
Runtime runtime = Runtime.getRuntime();
long memoryUsedBefore = runtime.totalMemory() - runtime.freeMemory();
System.out.println("Memory used before: " + memoryUsedBefore );
// set of memory consumption statements here
long memoryUsedAfter = runtime.totalMemory() - runtime.freeMemory();
System.out.println("Total Memory increased:" + (memoryUsedAfter - memoryUsedBefore ));
Related
I want my application to profile itself and tell me where memory allocation is taking place. Profilers are able to do that, so my application should be able to do that somehow too. How do profilers trap object instantiation and how can I do it myself?
You can do that by using -javaagent to pass instrumentation code into the JVM. We actually use this technique to find instance leaks in our applications. Below is an example:
ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<String>();
String s = new String("hi");
MemorySampler.start();
queue.offer(s);
MemorySampler.end();
if (MemorySampler.wasMemoryAllocated()) MemorySampler.printSituation();
Outputs:
Memory allocated on last pass: 24576
Memory allocated total: 24576
Stack Trace:
java.util.concurrent.ConcurrentLinkedQueue.offer(ConcurrentLinkedQueue.java:327)
TestGC.main(TestGC2.java:25)
Where you can see that the object allocation happens on line 327 of ConcurrentLinkedQueue.
Disclaimer: I work for Coral Blocks which develops the MemorySampler class above.
How do profilers trap object instantiation and how can I do it myself?
Profilers typically do that by interacting with a debug agent that is monitoring what is going on inside the JVM.
However, there is no way that an application can interact with the debug agent of its own JVM. It is not supported, and if you attempt to do it (somehow), your JVM is liable to lock up.
One way to get an application to profile itself is to inject profiling code into the bytecode files of all the classes whose memory usage needs to be profiled. For example, replace each new or newarray instruction with a call to a method (of the appropriate type) that creates the object / array and also does your profiling. But this is pretty tricky stuff ....
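As a rough illustration of the -javaagent hook mentioned above: the agent below only registers a transformer skeleton; the actual rewriting of new/newarray sites would normally be done with a bytecode library such as ASM or Byte Buddy, and AllocationAgent and the org/yourcode/ package filter are made-up names for this sketch.
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Registered via -javaagent:allocation-agent.jar with a
// "Premain-Class: AllocationAgent" entry in the jar manifest.
public class AllocationAgent {

    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // Only touch your own packages; returning null leaves a class unchanged.
                if (className == null || !className.startsWith("org/yourcode/")) {
                    return null;
                }
                // A bytecode library would rewrite new/newarray sites here to also
                // call your bookkeeping code. Returning null keeps the original bytes.
                return null;
            }
        });
    }
}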
I am working on an application and my code is hitting an out of memory error. I am not able to see the memory utilisation of the code, so I am confused about where to look.
After a little analysis I found that a private static object is being created, and in the constructor of that class some more objects are created. That class is also multithreaded.
So I want to know: since static objects do not get garbage collected, will all the objects created in that constructor also never be garbage collected?
A static reference is only collected when the class is unloaded, and that only happens when its class loader is no longer used. If you don't have multiple class loaders, it is likely the class will never be unloaded (until your program stops).
However, just because an object was once referenced statically doesn't change how it is collected. If you had a static reference to an object and no longer have a reference to that object, it will be collected as normal.
Having multiple threads can make finding bugs harder, but it doesn't change how objects are collected either.
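For example (a minimal sketch), an object that was reachable only through a static field becomes eligible for collection as soon as that field stops referring to it:
class Holder {
    // reachable for as long as this static field points at it
    static byte[] buffer = new byte[10_000_000];
}

public class StaticFieldDemo {
    public static void main(String[] args) {
        System.out.println(Holder.buffer.length);
        Holder.buffer = null; // the array is now unreachable and can be
                              // collected like any other object
    }
}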
You need to take a memory dump of your application and see why memory is building up. It is possible the objects you are retaining are all needed. In that case you need to
reduce your memory requirement
increase your maximum memory.
You might not have a memory leak - you might simply have surpassed the amount of available RAM your system can provide.
You can add several JVM arguments to limit the amount of RAM allocated to your runtime environment and to control the garbage collector - the trade-off is that this usually consumes more CPU.
You say you are not capable of seeing the memory utilisation?
Have you tried using JVisualVM (in $JAVA_HOME/bin/jvisualvm)
It should be capable of attaching to local processes and taking heap dumps.
Also, Eclipse Memory Analyzer has some good reports for subsequent analysis
Hello, I have an application that uses javax.swing.Timer and runs in a loop. The problem is that my Windows memory for the process keeps growing and doesn't stop. I tried to clean up my variables, use System.gc(), etc., and it doesn't work. I made a sample to test this with a thread, a TimerTask and a Swing timer; I am adding items inside a JComboBox and the memory keeps rising.
Here comes the code:
// My timers
@Action
public void botao_click1() {
    jLabel1.setText("START");
    timer1 = new java.util.Timer();
    timer1.schedule(new TimerTask() {
        @Override
        public void run() {
            adicionarItens();
            limpar();
        }
    }, 100, 100);
}

@Action
public void botao_click2() {
    thread = new Thread(new Runnable() {
        public void run() {
            while (true) {
                adicionarItens();
                try {
                    Thread.sleep(100);
                    limpar();
                } catch (InterruptedException ex) {
                    Logger.getLogger(MemoriaTesteView.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }
    });
    thread.start();
}

private void limpar() { // clean up the array and the JComboBox
    texto = null;
    jComboBox1.removeAllItems();
    jComboBox1.setVisible(false);
    //jComboBox1 = null;
    System.gc();
}

private void adicionarItens() { // add items
    texto = new String[6];
    texto[0] = "HA";
    texto[1] = "HA";
    texto[2] = "HA";
    texto[3] = "HA";
    texto[4] = "HA";
    texto[5] = "HA";
    //jComboBox1 = new javax.swing.JComboBox();
    jComboBox1.setVisible(true);
    for (int i = 0; i < texto.length; i++) {
        jComboBox1.addItem(texto[i]);
    }
    System.out.println("System Memory: "
            + Runtime.getRuntime().freeMemory() + " bytes free!");
}
well help please !!! =(
It isn't clear that you actually have a problem from the small snippet of code you posted.
Either way, you can't control what you want to control
-Xmx only controls the Java Heap, it doesn't control consumption of native memory by the JVM, which is consumed completely differently based on implementation.
From the following article, Thanks for the Memory (Understanding How the JVM uses Native Memory on Windows and Linux):
Maintaining the heap and garbage collector use native memory you can't control.
More native memory is required to maintain the state of the
memory-management system maintaining the Java heap. Data structures
must be allocated to track free storage and record progress when
collecting garbage. The exact size and nature of these data structures
varies with implementation, but many are proportional to the size of
the heap.
and the JIT compiler uses native memory just like javac would
Bytecode compilation uses native memory (in the same way that a static
compiler such as gcc requires memory to run), but both the input (the
bytecode) and the output (the executable code) from the JIT must also
be stored in native memory. Java applications that contain many
JIT-compiled methods use more native memory than smaller applications.
and then you have the classloader(s) which use native memory
Java applications are composed of classes that define object structure
and method logic. They also use classes from the Java runtime class
libraries (such as java.lang.String) and may use third-party
libraries. These classes need to be stored in memory for as long as
they are being used. How classes are stored varies by implementation.
I won't even start quoting the section on threads; I think you get the idea. -Xmx doesn't control what you think it controls: it controls the JVM heap. Not everything goes in the JVM heap, and the heap takes up more native memory than what you specify, for management and bookkeeping.
I don't see any mention of OutOfMemoryError anywhere.
What you are concerned about you can't control, not directly anyway
What you should focus on is what in in your control, which is making sure you don't hold on to references longer than you need to, and that you are not duplicating things unnecessarily. The garbage collection routines in Java are highly optimized, and if you learn how their algorithms work, you can make sure your program behaves in the optimal way for those algorithms to work.
Java Heap Memory isn't like manually managed memory in other languages, those rules don't apply
What are considered memory leaks in other languages aren't the same thing/root cause as in Java with its garbage collection system.
Most likely in Java memory isn't consumed by one single uber-object that is leaking ( dangling reference in other environments ).
Intermediate objects may be held around longer than expected by the garbage collector because of the scope they are in and lots of other things that can vary at run time.
EXAMPLE: the garbage collector may decide that there are candidates for collection, but because it considers that there is plenty of memory still available, it may judge it too expensive time-wise to flush them out at that point, and wait until memory pressure gets higher.
The garbage collector is really good now, but it isn't magic, if you are doing degenerate things, it will cause it to not work optimally. There is lots of documentation on the internet about the garbage collector settings for all the versions of the JVMs.
These un-referenced objects may just have not reached the time that the garbage collector thinks it needs them to for them to be expunged from memory, or there could be references to them held by some other object ( List ) for example that you don't realize still points to that object. This is what is most commonly referred to as a leak in Java, which is a reference leak more specifically.
EXAMPLE: If you know you need to build a 4K String using a StringBuilder, create it with new StringBuilder(4096), not the default, whose capacity is only 16 and will immediately start creating garbage that can represent many times what you think the object should be, size-wise.
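A quick sketch of the difference (the sizes are chosen only for illustration):
// Default capacity starts small; the internal char[] is copied and
// discarded every time the builder outgrows it.
StringBuilder resized = new StringBuilder();

// Pre-sized: one backing array, no intermediate garbage from resizing.
StringBuilder presized = new StringBuilder(4096);

for (int i = 0; i < 4096; i++) {
    resized.append('x');
    presized.append('x');
}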
You can discover how many of what types of objects are instantiated with VisualVM; this will tell you what you need to know. There isn't going to be one big flashing light that points at a single instance of a single class and says, "This is the big memory consumer!" - unless there is only one instance of some char[] that you are reading some massive file into, and even that isn't that simple, because lots of other classes use char[] internally; and then you pretty much knew that already.
I don't see any mention of OutOfMemoryError
You probably don't have a problem in your code, the garbage collection system just might not be getting put under enough pressure to kick in and deallocate objects that you think it should be cleaning up. What you think is a problem probably isn't, not unless your program is crashing with OutOfMemoryError. This isn't C, C++, Objective-C, or any other manual memory management language / runtime. You don't get to decide what is in memory or not at the detail level you are expecting you should be able to.
Java, in theory, is immune to "leaks" of the sort that C-based languages can have. But it's still quite easy to design a data structure that grows in a more or less unbounded fashion, whether or not you intended that.
And, of course, if you schedule timer-based tasks and the like, they will exist until the time has expired and the task has completed (or cancelled), even if you don't retain a reference to them.
Also, some Java environments (Android is notorious for this) allocate images and the like in a way that is not subject to ordinary GC action and can cause heap to grow in an unbounded fashion.
I have a very simple class which has one integer variable. I just print the value of the variable 'i' to the screen, increment it, and make the thread sleep for 1 second. When I run a profiler against this method, the memory usage increases slowly even though I'm not creating any new variables. After executing this code for around 16 hours, I see that the memory usage has increased to 4 MB (it was initially 1 MB when I started the program). I'm a novice in Java. Could anyone please help explain where I am going wrong, or why the memory usage is gradually increasing even when there are no new variables created? Thanks in advance.
I'm using NetBeans 7.1 and its profiler to view the memory usage.
public static void main(String[] args)
{
    try
    {
        int i = 1;
        while (true)
        {
            System.out.println(i);
            i++;
            Thread.sleep(1000);
        }
    }
    catch (InterruptedException ex)
    {
        System.out.print(ex.toString());
    }
}
Initial memory usage when the program started : 1569852 Bytes.
Memory usage after executing the loop for 16 hours : 4095829 Bytes
It is not necessarily a memory leak. When the GC runs, the objects that are allocated (I presume) in the System.out.println(i); statement will be collected. A memory leak in Java is when memory fills up with useless objects that can't be reclaimed by the GC.
The println(i) is using Integer.toString(int) to convert the int to a String, and that is allocating a new String each time. That is not a leak, because the String will become unreachable and a candidate for GC'ing once it has been copied to the output buffer.
Other possible sources of memory allocation:
Thread.sleep could be allocating objects under the covers.
Some private JVM thread could be causing this.
The "java agent" code that the profiler is using to monitor the JVM state could be causing this. It has to assemble and send data over a socket to the profiler application, and that could well involve allocating Java objects. It may also be accumulating stuff in the JVM's heap or non-heap memory.
But it doesn't really matter so long as the space can be reclaimed if / when the GC runs. If it can't, then you may have found a JVM bug or a bug in the profiler that you are using. (Try replacing the loop with one very long sleep and see if the "leak" is still there.) And it probably doesn't matter if this is a slow leak caused by profiling ... because you don't normally run production code with profiling enabled for that long.
Note: calling System.gc() is not guaranteed to cause the GC to run. Read the javadoc.
I don't see any memory leak in this code. You should look at how the garbage collector in Java works and at its strategies. Very basically speaking, the GC won't clean up until it needs to, as dictated by the particular strategy.
You can also try to call System.gc().
The objects are probably created in the two Java core functions.
It's due to the text displayed in the console, and the size of the integer (a little bit).
Java print functions use 8-bit ASCII, therefore 56000 prints of a number, at one byte per character, will soon rack up memory.
Follow this tutorial to find your memory leak: Analyzing Memory Leak in Java Applications using VisualVM. You have to make a snapshot of your application at the start and another one after some time. With VisualVM you can do this and compare these two snapshots.
Try setting the JVM upper memory limit so low that the possible leak will cause it to run out of memory.
If the used memory hits that limit and continues to work away happily then garbage collection is doing its job.
If instead it bombs, then you have a real problem...
This does not seem to be a leak, as the profiler's graphs also show. The graph drops sharply at certain intervals, i.e. when GC is performed. It would have been a leak had the graph kept climbing steadily. The heap space remaining after that must be used by the Thread.sleep() and also (as mentioned in one of the answers above) by some of the profiler's own code.
You can try running VisualVM located at %JAVA_HOME%/bin and analyzing your application therein. It also gives you the option of performing GC at will and many more options.
I noted that the more features of VisualVM I used, the more memory was consumed (up to 10 MB). So this increase has to come from your profiler as well, but it still is not a leak, as the space is reclaimed on GC.
Does this occur without the printlns? In other words, perhaps keeping the printlns displayed on the console is what is consuming the memory.
I'm writing an app that generates a big xlsx file using Apache POI.
At a certain point I get an OutOfMemoryError (Java heap space).
I want to solve it by writing the content to the xlsx file when I determine that I'll soon be out of space, and thus freeing the memory, and then reading the file from the disk and continuing the writing.
A better solution might be to predefine the number of cells I'll write in each "block" , and then write the block to disk, but in any case this made me wonder if there is a way to determine the heap space that is left at runtime?
Freeing unused memory at runtime is probably not very reliable since the JVM has considerable freedom in deciding when to garbage collect.
POI recently introduced the SXSSF API, which uses streaming and thereby significantly reduces the memory footprint for writing spreadsheets. This should help even with very big xlsx files. There are a couple of downsides, which are shown here. But if you can live with them, this should alleviate your heap-related problems.
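For reference, a minimal SXSSF sketch (the window size of 100 rows, the row/cell counts and the file name are arbitrary; rows that fall out of the window are flushed to a temporary file instead of being kept on the heap):
import java.io.FileOutputStream;
import org.apache.poi.ss.usermodel.Cell;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.xssf.streaming.SXSSFWorkbook;

public class StreamingXlsxDemo {
    public static void main(String[] args) throws Exception {
        // Keep at most 100 rows in memory; older rows are flushed to disk.
        SXSSFWorkbook workbook = new SXSSFWorkbook(100);
        Sheet sheet = workbook.createSheet("data");

        for (int rownum = 0; rownum < 1_000_000; rownum++) {
            Row row = sheet.createRow(rownum);
            for (int cellnum = 0; cellnum < 10; cellnum++) {
                Cell cell = row.createCell(cellnum);
                cell.setCellValue("r" + rownum + "c" + cellnum);
            }
        }

        try (FileOutputStream out = new FileOutputStream("big.xlsx")) {
            workbook.write(out);
        }
        workbook.dispose(); // delete the temporary files backing the streamed rows
    }
}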
The MemoryMXBean can give you a fair amount of information about the current memory usage.
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class PrintMemory {
    public static void main(String[] args) {
        MemoryMXBean memoryMXBean = ManagementFactory.getMemoryMXBean();
        long[] array;
        for (int i = 0; i < 100; i++) {
            array = new long[100000]; // allocate something so the numbers move
            // getHeapMemoryUsage() returns a MemoryUsage with init/used/committed/max
            System.out.printf("%s%n", memoryMXBean.getHeapMemoryUsage());
        }
    }
}
You can get the available memory, but it only tells you the most you can allocate without triggering a GC.
Instead you can trigger a GC and see how much memory is free afterwards. The problem with this is that it has a performance overhead.
Another option is to monitor the GC cleanups via JMX and see how much is free after it naturally triggers a GC (if it doesn't run, you don't have a problem)
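A minimal sketch of that JMX approach, assuming a HotSpot/OpenJDK JVM (GarbageCollectionNotificationInfo lives in the com.sun.management package, so this is not portable to every JVM):
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;
import java.util.Map;
import javax.management.Notification;
import javax.management.NotificationEmitter;
import javax.management.NotificationListener;
import javax.management.openmbean.CompositeData;
import com.sun.management.GarbageCollectionNotificationInfo;

public class GcWatcher {
    public static void install() {
        for (GarbageCollectorMXBean gcBean : ManagementFactory.getGarbageCollectorMXBeans()) {
            NotificationListener listener = (Notification n, Object handback) -> {
                if (GarbageCollectionNotificationInfo.GARBAGE_COLLECTION_NOTIFICATION.equals(n.getType())) {
                    GarbageCollectionNotificationInfo info =
                            GarbageCollectionNotificationInfo.from((CompositeData) n.getUserData());
                    // Memory usage per pool after this collection finished
                    for (Map.Entry<String, MemoryUsage> e :
                            info.getGcInfo().getMemoryUsageAfterGc().entrySet()) {
                        System.out.println(e.getKey() + " used after GC: " + e.getValue().getUsed());
                    }
                }
            };
            ((NotificationEmitter) gcBean).addNotificationListener(listener, null, null);
        }
    }
}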
Using one of the JMX MBeans that ship with the JVM is worth considering, such as MemoryPoolMXBean.
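For instance (a small sketch), you can walk the memory pools and look at getCollectionUsage(), which reflects usage after the last collection in each pool:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PoolUsageDump {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage current = pool.getUsage();
            MemoryUsage afterGc = pool.getCollectionUsage(); // null for pools the GC doesn't manage
            System.out.printf("%-30s used=%d afterLastGC=%s%n",
                    pool.getName(),
                    current.getUsed(),
                    afterGc == null ? "n/a" : afterGc.getUsed());
        }
    }
}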