The program receives image data in bytes from an IP camera and then processes the image. When the program starts it uses 470 MB of RAM, and every second it increases by up to 15 MB; this continues until there is not enough space left and the computer hangs.
The method getImage() is called every 100 ms.
I have done some experiments that I am going to share here. The original code looks like this (the buffer is created only once and can be reused afterwards):
private static final int WIDTH = 640;
private static final int HEIGHT = 480;
private byte[] sJpegPicBuffer = new byte[WIDTH * HEIGHT];

private Mat readImage() throws Exception {
    boolean isGetSuccess = camera.getImage(lUserID, sJpegPicBuffer, WIDTH * HEIGHT);
    if (isGetSuccess) {
        return Imgcodecs.imdecode(new MatOfByte(sJpegPicBuffer), Imgcodecs.CV_LOAD_IMAGE_UNCHANGED);
    }
    return null;
}
With the above code, RAM usage grows until the computer hangs (99%, about 10 GB). Then I changed the code like this (a new buffer is created on every call):
private static final int WIDTH = 640;
private static final int HEIGHT = 480;

private Mat readImage() throws Exception {
    byte[] sJpegPicBuffer = new byte[WIDTH * HEIGHT];
    boolean isGetSuccess = camera.getImage(lUserID, sJpegPicBuffer, WIDTH * HEIGHT);
    if (isGetSuccess) {
        return Imgcodecs.imdecode(new MatOfByte(sJpegPicBuffer), Imgcodecs.CV_LOAD_IMAGE_UNCHANGED);
    }
    return null;
}
With this code, RAM usage climbs to about 43% (5 GB) and is then freed up.
Now the question: the first block of code seems to be optimized (the buffer can be reused, avoiding allocation of new memory on every call), but the result is not what we want. Why?
The second block of code seems less optimized than the first, yet it works better.
But in general, why does RAM usage climb to 10 GB in the first case and only 5 GB in the second? How can we control this situation?
This is speculation, though I've seen a similar scenario in real life a few times.
Your Java code is interacting with a native camera SDK (a DLL). Native code tends to allocate buffers in non-JVM memory and use some internal Java objects to access those buffers. A common (and very poor) practice is to rely on a Java object's finalizer to deallocate the native buffer once it is no longer used.
Finalizers rely on the garbage collector to trigger them, and this is the reason that pattern often fails. Although a finalizer is guaranteed to run eventually, in practice it will not happen as long as there is enough space in the Java heap, so the native memory is not deallocated in a timely fashion.
The Java heap size has a hard limit, but the native memory pool used by C/C++ can grow as long as the OS allows it to grow.
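The anti-pattern described above can be sketched roughly like this (NativeBuffer, nativeAlloc, and nativeFree are invented names; a real SDK would make JNI calls where the stand-ins are):

```java
// Hypothetical sketch of the finalizer anti-pattern described above.
class NativeBuffer {
    private long address;     // would hold a raw native pointer in real JNI code
    private final int size;

    NativeBuffer(int size) {
        this.size = size;
        this.address = nativeAlloc(size); // stands in for malloc() via JNI
    }

    int size() {
        return size;
    }

    // Anti-pattern: the native memory is only released when the GC happens
    // to run and process this object's finalizer. If the Java heap is quiet,
    // that may not happen for a long time, so native memory piles up.
    @Override
    protected void finalize() {
        if (address != 0) {
            nativeFree(address);
            address = 0;
        }
    }

    // Stand-ins so the sketch is self-contained; a real SDK would use JNI.
    private static long nativeAlloc(int size) { return size; }
    private static void nativeFree(long address) { /* free() in real code */ }
}
```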
Concerning your problem
I assume that in your first snippet, Java heap traffic is low. The GC is idle and no finalizers are executed, thus memory allocated outside of the Java heap keeps growing.
In the second snippet, you are creating pressure on the Java heap, forcing the GC to run frequently. As a side effect of GC, finalizers are executed and native memory is released.
Instead of finalizers and buffers allocated in native code, your camera SDK may rely on Java direct memory buffers (this memory is directly accessible from C code, so it is convenient for passing data across the JVM boundary). The effect would be mostly the same, though, because the Java direct buffer implementation uses the same pattern (with phantom references instead of finalizers).
Suggestions
The -XX:+PrintGCDetails and -XX:+PrintReferenceGC options print information about reference processing, so you can verify whether finalizer/phantom references are indeed being used.
Look at your camera SDK's docs to see whether it is possible to release native resources early via the API.
The option -XX:MaxDirectMemorySize=X can be used to cap direct buffer usage, if your camera's SDK relies on them. This is not a solution, though, but a safety net to let your application OOM before OS memory is exhausted.
Force GC every few frames (e.g. via System.gc()). This is another poor option, as the behavior of System.gc() is JVM-dependent.
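As a rough sketch of that last workaround (FrameLoop and the GC_EVERY_N_FRAMES value are assumptions, not part of any camera SDK; the right interval depends on the frame rate and how fast native memory accumulates):

```java
// Sketch of the "force GC every few frames" workaround mentioned above.
class FrameLoop {
    private static final int GC_EVERY_N_FRAMES = 50; // assumed tuning knob
    private long frameCount = 0;

    void onFrame() {
        frameCount++;
        // decode / process the camera frame here
        if (frameCount % GC_EVERY_N_FRAMES == 0) {
            // Nudge the JVM so finalizers / phantom references get processed
            // and native buffers are released. Behavior is JVM-dependent.
            System.gc();
        }
    }

    long frames() {
        return frameCount;
    }
}
```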
PS
This is my post about resource management with finalizers and phantom references.
Related
How to make a long time Full GC in Java manually
How do I drive the garbage collection activity to some significant level, say, 10% or more, preferably without running into an out-of-memory condition?
I have been trying to build code that does this, but I'm not getting anywhere near 10%.
What approaches are there?
I tried a pool of randomly-sized blocks which are being replaced in random order with newly created, again randomly-sized blocks; this is giving me ca. 20% CPU and 0.6% GC in VisualVM, varying slightly with pool and block sizes.
You might want to take a look here to get a few ideas.
Basically, the technique used in the above example is to create fragmentation of the Java heap memory as objects are added and removed from the LinkedHashMap being used as a cache.
Running on my local with 300m max memory to JVM (java -Xmx300m -jar gcstress.jar) I was able to generate 20% consistent CPU usage for garbage collection.
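A stripped-down version of that cache idea might look like this (the 10,000-entry cap and byte[] payloads are assumptions; the linked example is more elaborate):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Bounded insertion-order cache: once full, every put evicts the eldest
// entry, so steady inserts keep allocating new payloads while older ones
// continuously become garbage, churning the heap.
class ChurnCache extends LinkedHashMap<Long, byte[]> {
    private static final int MAX_ENTRIES = 10_000; // assumed cap

    @Override
    protected boolean removeEldestEntry(Map.Entry<Long, byte[]> eldest) {
        return size() > MAX_ENTRIES; // evicted payload becomes collectable
    }
}
```

Driving it in a loop, e.g. cache.put(i, new byte[256]) with ever-increasing keys, keeps roughly MAX_ENTRIES payloads live while the rest turn into garbage.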
You can do a humongous allocation (assuming G1GC with defaults):
import java.util.Arrays;
import java.util.concurrent.ThreadLocalRandom;

public class Del {
    public static void main(String[] args) {
        for (int i = 0; i < 100_000; ++i) {
            System.out.println(allocate());
        }
    }

    // Each call allocates an int[1024 * 1024] (~4 MB): a humongous object for G1
    private static int allocate() {
        int[] x = ThreadLocalRandom.current().ints(1024 * 1024, 10, 10_000_000).toArray();
        return Arrays.hashCode(x);
    }
}
You can constrain the heap and also enable GC logs to see how hard G1 is trying to cope with the constant allocations:
java -Xmx100m -Xms100m "-Xlog:gc*=info" Del.java
Running this on my machine shows that the CPU is constantly occupied by that java process because of the constant GC activity.
One way to cause the GC to spend a lot of time is to almost fill up the heap and then trigger repeated garbage collections by allocating and discarding1 lots of temporary objects.
A typical generational GC spends most of its time tracing and moving non-garbage objects from one space to another. When the heap is nearly full of non-garbage objects, and you trigger the GC repeatedly, it does a lot of work for very little gain (in terms of space reclaimed).
Another way (assuming that explicit GC has not been disabled) is to repeatedly call System.gc().
1 - That is, not keeping a reference to the object so that it is almost immediately unreachable.
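A minimal sketch of this approach, with invented sizes (pin a fixed chunk of long-lived data, tune -Xmx so it occupies most of the heap, then churn temporaries):

```java
import java.util.ArrayList;
import java.util.List;

class GcPressure {
    // Long-lived data that stays reachable and pins part of the heap.
    static final List<byte[]> pinned = new ArrayList<>();

    static long run() {
        // Pin ~64 MiB; with e.g. -Xmx100m this is most of the heap
        // (the 64 MiB / 100m split is an assumption to tune).
        for (int i = 0; i < 64; i++) {
            pinned.add(new byte[1 << 20]); // 1 MiB chunks, never released
        }
        // Churn short-lived garbage: each array is unreachable by the next
        // iteration, so collections happen often and each one must trace
        // the nearly full heap for very little reclaimed space.
        long sink = 0;
        for (int i = 0; i < 10_000; i++) {
            byte[] temp = new byte[1 << 16]; // 64 KiB, garbage immediately
            sink += temp.length;
        }
        return sink;
    }
}
```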
[ONLY for debugging] Reduce the -XX:NewSize JVM parameter to a smaller size to trigger GC. This is for older GCs.
You can call System.gc() in program. Read here: Why it is bad to call System.gc()
I'm writing some stuff that uses ByteBuffers. In the docs of the API it says
There is no way to free a buffer explicitly (without JVM specific
reflection). Buffer objects are subject to GC and it usually takes two
GC cycles to free the off-heap memory after the buffer object becomes
unreachable.
However, in an SO post's accepted answer I read
BigMemory uses the memory address space of the JVM process, via direct
ByteBuffers that are not subject to GC unlike other native Java
objects.
Now what should I do, shall I free the created buffer? Or do I misunderstand something in the docs or the answer?
It depends on how you create the buffer; there are many possible use cases. A regular ByteBuffer.allocate() will be created on the heap and will be collected by the GC. Other options, e.g. native memory, might not.
Terracotta BigMemory is a type of native off-heap memory which is not governed by the JVM GC. If you allocate a buffer in this type of memory you have to clear it yourself.
It might be a good idea to clear the buffer even if it's allocated in heap memory. The GC will take care of collecting an unused buffer, but this will take some time.
As the documentation of the BufferUtils in LWJGL also says: There is no way to explicitly free a ByteBuffer.
The ByteBuffer objects that are allocated with the standard mechanism (namely, by directly or indirectly calling ByteBuffer#allocateDirect) are subject to GC, and will be cleaned up eventually.
The answer that you linked to seems to refer to the BigMemory library in particular. Using JNI, you can create a (direct) ByteBuffer that is not handled by the GC, and where it is up to you to actually free the underlying data.
However, a short advice: When dealing with LWJGL and other libraries that rely on (direct) ByteBuffer objects for the data transfer to the native side, you should think about the usage pattern of these buffers. Particularly for OpenGL binding libraries, you'll frequently need a ByteBuffer that only has space for 16 float values, for example (e.g. containing a matrix that is sent to OpenGL). And in many cases, the methods that do the data transfer with these buffers will be called frequently.
In such a case, it is usually not a good idea to allocate these small, short-lived buffers repeatedly:
class Renderer {
    void renderMethodThatIsCalledThousandsOfTimesPerSecond() {
        ByteBuffer bb = ByteBuffer.allocateDirect(16 * 4);
        fill(bb);
        passToOpenGL(bb);
    }
}
The creation of these buffers and the GC can significantly reduce performance - and, worse, it shows up in the form of GC pauses that could cause lags in a game.
For such cases, it can be beneficial to pull out the allocation, and re-use the buffer:
class Renderer {
    private final ByteBuffer MATRIX_BUFFER_4x4 = ByteBuffer.allocateDirect(16 * 4);

    void renderMethodThatIsCalledThousandsOfTimesPerSecond() {
        fill(MATRIX_BUFFER_4x4);
        passToOpenGL(MATRIX_BUFFER_4x4);
    }
}
I have a JavaFX application which includes a background thread used for data processing, showing the result in the user interface.
I created the following code for data processing:
public static void runningThread() {
    long startTime = java.lang.System.nanoTime();
    WSN wsn = new WSN(100, 100, 30, 60, 200);
    wsn.initializeNodePosition();
    wsn.alphaNodesDead = wsn.nodeNumber / 2;
    BaseStation BS = new BaseStation(); // the BS is created
    BS.x = 125;
    BS.y = 50;
    BS.maxRadius = 65;
    BS.energyModel = new NOEnergyModel();
    wsn.BS = BS;
    BS.wsn = wsn;
    Thread queryThread = new Thread() {
        public void run() {
            System.out.println("Start");
            for (int m = 0; m < 1000; m++) {
                System.out.println(m);
                wsn.protocol = new HEED(wsn);
                wsn.generateHomogeneousWSN(HEEDNODE.class, new MITModel(), new SimpleAggregation());
                wsn.protocol.setRadiusAndNeighbors();
                boolean running = true;
                while (running) {
                    wsn.roundPerformed++;
                    wsn.protocol.election_cluster_formation(); // cluster formation
                    wsn.defineStandardCHRouting(); // defines the routing at CH level
                    wsn.protocol.runRound();
                    System.out.println(wsn.roundPerformed);
                    if (wsn.deadNodeList.size() >= wsn.alphaNodesDead) {
                        long stopTime = java.lang.System.nanoTime();
                        System.out.println("end: " + (stopTime - startTime) / 1000000000.0);
                        running = false;
                    }
                }
            }
        }
    };
    queryThread.start();
}
The problem is that after I run the application and click the "Start" button to run the runningThread() function, memory and CPU consumption climb higher and higher; when it reaches more than 2 GB of memory and 90% CPU, the for(int m=0; m<1000; m++) loop becomes very slow. I am clearing all the objects before each single loop starts.
Will the JVM reclaim memory automatically for reuse once an object loses all references to it?
The memory leak can be anywhere in your code. Use the VisualVM profiler or a built-in profiler in your IDE to find it.
The symptoms you describe are strongly suggestive of a memory leak:
Ever increasing memory utilization
The application gets slower and slower because the GC takes longer and runs more and more often.
Eventually, you get an OutOfMemoryError and the application crashes.
A memory leak typically happens when an application creates more and more objects which cannot be garbage collected because they are still reachable.
The general solution to this is to find the cause of the memory leak (which is typically a bug) and fix it. There are lots of tools for finding memory leaks in Java programs, and Stack Overflow Q&As on how to do it.
Forcing the GC to run won't help. The GC is already running ... and is not able to reclaim objects because they are still reachable.
Increasing the heap size probably won't help either. It just puts off the inevitable slowdown to later in your application's run.
Will the JVM reclaim memory automatically for reuse once an object loses all references to it?
Yes.
And you DON'T need to force the GC to run.
The fact that objects are not being reclaimed implies that they are still reachable. In other words, your code that is supposed to ensure that objects become unreachable is not working.
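For illustration (this is a made-up example, not taken from your code), a single long-lived collection is enough to keep every added object reachable no matter how often the GC runs:

```java
import java.util.ArrayList;
import java.util.List;

class ResultCache {
    // A static collection is reachable for the lifetime of the class, so
    // nothing added to it is ever eligible for garbage collection.
    static final List<double[]> results = new ArrayList<>();

    static void leaky(double[] roundResult) {
        results.add(roundResult); // grows without bound: a reference leak
    }

    static void fixed(double[] roundResult) {
        // Use the data, but don't store it; once this method returns,
        // the array is unreachable and the GC is free to reclaim it.
        double first = roundResult.length == 0 ? 0 : roundResult[0];
    }
}
```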
It is 'undefined' when the JVM garbage collection kicks in.
I think your major problem is that your inner while loop is continually testing without a pause - try adding Thread.sleep(100) inside the while loop.
The JVM will reclaim memory for unused objects, but it won't do it until the GC process runs. I would take a guess that runRound is allocating a lot of objects internally. Chances are the GC won't have run by the next iteration of your loop, so new memory is allocated, then again, and again, etc.
Eventually it will hit whatever the JVM's ceiling is (-Xmx parameter) and then the GC will start to kick in a lot more aggressively to free up unused objects, but you don't really have a lot of control over when it happens.
Using a tool such as VisualVM will help you identify what your problem is. It will show you the difference between bytes allocated, and bytes actually used. You can also see when the GC process occurs. VisualVM is included with the JDK.
The other alternative is that runRound is allocating objects that it's keeping some sort of global reference to. Again, careful use of VisualVM should let you identify this.
Hello, I have an application that uses a Swing timer and runs in a loop. The problem is that my process's memory in Windows keeps growing and doesn't stop. I tried cleaning my variables, using System.gc(), etc., and it doesn't work. I made a sample to test this with a thread, a TimerTask, and a Swing timer; I'm adding items to a JComboBox and the memory is still rising.
Here comes the code:
// My Timers
@Action
public void botao_click1() {
    jLabel1.setText("START");
    timer1 = new java.util.Timer();
    timer1.schedule(new TimerTask() {
        @Override
        public void run() {
            adicionarItens();
            limpar();
        }
    }, 100, 100);
}

@Action
public void botao_click2() {
    thread = new Thread(new Runnable() {
        public void run() {
            while (true) {
                adicionarItens();
                try {
                    Thread.sleep(100);
                    limpar();
                } catch (InterruptedException ex) {
                    Logger.getLogger(MemoriaTesteView.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        }
    });
    thread.start();
}

private void limpar() { // clean up array and JComboBox
    texto = null;
    jComboBox1.removeAllItems();
    jComboBox1.setVisible(false);
    // jComboBox1 = null;
    System.gc();
}

private void adicionarItens() { // add items
    texto = new String[6];
    texto[0] = "HA";
    texto[1] = "HA";
    texto[2] = "HA";
    texto[3] = "HA";
    texto[4] = "HA";
    texto[5] = "HA";
    // jComboBox1 = new javax.swing.JComboBox();
    jComboBox1.setVisible(true);
    for (int i = 0; i < texto.length; i++) {
        jComboBox1.addItem(texto[i].toString());
    }
    System.out.println("System Memory: "
        + Runtime.getRuntime().freeMemory() + " bytes free!");
}
well help please !!! =(
It isn't clear that you actually have a problem from the small snippet of code you posted.
Either way, you can't control what you want to control
-Xmx only controls the Java Heap, it doesn't control consumption of native memory by the JVM, which is consumed completely differently based on implementation.
From the following article, Thanks for the Memory (Understanding How the JVM uses Native Memory on Windows and Linux):
Maintaining the heap and garbage collector uses native memory you can't control.
More native memory is required to maintain the state of the
memory-management system maintaining the Java heap. Data structures
must be allocated to track free storage and record progress when
collecting garbage. The exact size and nature of these data structures
varies with implementation, but many are proportional to the size of
the heap.
and the JIT compiler uses native memory just like javac would
Bytecode compilation uses native memory (in the same way that a static
compiler such as gcc requires memory to run), but both the input (the
bytecode) and the output (the executable code) from the JIT must also
be stored in native memory. Java applications that contain many
JIT-compiled methods use more native memory than smaller applications.
and then you have the classloader(s) which use native memory
Java applications are composed of classes that define object structure
and method logic. They also use classes from the Java runtime class
libraries (such as java.lang.String) and may use third-party
libraries. These classes need to be stored in memory for as long as
they are being used. How classes are stored varies by implementation.
I won't even start quoting the section on Threads; I think you get the idea.
-Xmx doesn't control what you think it controls: it controls the JVM heap. Not everything
goes in the JVM heap, and the heap takes up way more native memory than what you specify, for
management and bookkeeping.
I don't see any mention of OutOfMemoryError anywhere.
What you are concerned about you can't control, not directly anyway
What you should focus on is what in in your control, which is making sure you don't hold on to references longer than you need to, and that you are not duplicating things unnecessarily. The garbage collection routines in Java are highly optimized, and if you learn how their algorithms work, you can make sure your program behaves in the optimal way for those algorithms to work.
Java Heap Memory isn't like manually managed memory in other languages, those rules don't apply
What are considered memory leaks in other languages aren't the same thing/root cause as in Java with its garbage collection system.
Most likely, memory in Java isn't being consumed by one single uber-object that is leaking (a dangling reference in other environments).
Intermediate objects may be held around longer than expected by the garbage collector because of the scope they are in and lots of other things that can vary at run time.
EXAMPLE: the garbage collector may decide that there are candidates, but because it considers there is still plenty of memory to be had, it may judge it too expensive time-wise to flush them out at that point, and it will wait until memory pressure gets higher.
The garbage collector is really good now, but it isn't magic, if you are doing degenerate things, it will cause it to not work optimally. There is lots of documentation on the internet about the garbage collector settings for all the versions of the JVMs.
These unreferenced objects may simply not have reached the point at which the garbage collector decides to expunge them from memory, or there could be references to them held by some other object (a List, for example) that you don't realize still points to them. This is what is most commonly referred to as a leak in Java - more specifically, a reference leak.
EXAMPLE: If you know you need to build a 4K String using a StringBuilder, create it with new StringBuilder(4096); not the default capacity, which is 16 and will immediately start creating garbage that can represent many times what you think the object should be, size-wise.
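A small sketch of that sizing point (the helper below is hypothetical; both calls build the same String, but the default-capacity path repeatedly reallocates and copies its backing array as it grows):

```java
class BuilderSizing {
    // capacity = n avoids intermediate copies; capacity = 0 means "use the
    // default capacity of 16" and lets the builder grow (and copy) repeatedly,
    // generating intermediate char[] garbage on the way.
    static String build(int capacity, int n) {
        StringBuilder sb = capacity > 0 ? new StringBuilder(capacity)
                                        : new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append('x');
        }
        return sb.toString();
    }
}
```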
You can discover how many objects of which types are instantiated with VisualVM; this will tell you what you need to know. There isn't going to be one big flashing light that points at a single instance of a single class and says, "This is the big memory consumer!" - unless there is only one instance of some char[] that you are reading some massive file into, and even that is unlikely, because lots of other classes use char[] internally; and then you pretty much knew that already.
I don't see any mention of OutOfMemoryError
You probably don't have a problem in your code, the garbage collection system just might not be getting put under enough pressure to kick in and deallocate objects that you think it should be cleaning up. What you think is a problem probably isn't, not unless your program is crashing with OutOfMemoryError. This isn't C, C++, Objective-C, or any other manual memory management language / runtime. You don't get to decide what is in memory or not at the detail level you are expecting you should be able to.
Java, in theory, is immune to "leaks" of the sort that C-based languages can have. But it's still quite easy to design a data structure that grows in a more or less unbounded fashion, whether or not you intended that.
And, of course, if you schedule timer-based tasks and the like, they will exist until the time has expired and the task has completed (or cancelled), even if you don't retain a reference to them.
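For example (a hypothetical sketch, not from the question's code), a task handed to java.util.Timer stays reachable through the timer's queue until it fires or is cancelled:

```java
import java.util.Timer;
import java.util.TimerTask;

class TimerLeak {
    // The returned task (and the payload it captures) stays strongly
    // reachable via the Timer's internal queue until it fires or is
    // cancelled, even if the caller drops every other reference to it.
    static TimerTask schedule(Timer timer, byte[] bigPayload) {
        TimerTask task = new TimerTask() {
            @Override
            public void run() {
                // pretend to process the captured payload
                System.out.println(bigPayload.length);
            }
        };
        timer.schedule(task, 60_000); // payload pinned for up to a minute
        return task; // keep this so the caller can cancel() early
    }
}
```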
Also, some Java environments (Android is notorious for this) allocate images and the like in a way that is not subject to ordinary GC action and can cause heap to grow in an unbounded fashion.
I have a very simple class which has one integer variable. I just print the value of variable 'i' to the screen, increment it, and make the thread sleep for 1 second. When I run a profiler against this method, the memory usage increases slowly even though I'm not creating any new variables. After executing this code for around 16 hours, I see that the memory usage has increased to 4 MB (initially 1 MB when I started the program). I'm a novice in Java. Could anyone please help explain where I am going wrong, or why the memory usage is gradually increasing even when there are no new variables created? Thanks in advance.
I'm using netbeans 7.1 and its profiler to view the memory usage.
public static void main(String[] args) {
    try {
        int i = 1;
        while (true) {
            System.out.println(i);
            i++;
            Thread.sleep(1000);
        }
    } catch (InterruptedException ex) {
        System.out.print(ex.toString());
    }
}
Initial memory usage when the program started : 1569852 Bytes.
Memory usage after executing the loop for 16 hours : 4095829 Bytes
It is not necessarily a memory leak. When the GC runs, the objects that are allocated (I presume) in the System.out.println(i); statement will be collected. A memory leak in Java is when memory fills up with useless objects that can't be reclaimed by the GC.
The println(i) is using Integer.toString(int) to convert the int to a String, and that is allocating a new String each time. That is not a leak, because the String will become unreachable and a candidate for GC'ing once it has been copied to the output buffer.
Other possible sources of memory allocation:
Thread.sleep could be allocating objects under the covers.
Some private JVM thread could be causing this.
The "java agent" code that the profiler is using to monitor the JVM state could be causing this. It has to assemble and send data over a socket to the profiler application, and that could well involve allocating Java objects. It may also be accumulating stuff in the JVM's heap or non-heap memory.
But it doesn't really matter so long as the space can be reclaimed if / when the GC runs. If it can't, then you may have found a JVM bug or a bug in the profiler that you are using. (Try replacing the loop with one very long sleep and see if the "leak" is still there.) And it probably doesn't matter if this is a slow leak caused by profiling ... because you don't normally run production code with profiling enabled for that long.
Note: calling System.gc() is not guaranteed to cause the GC to run. Read the javadoc.
I don't see any memory leak in this code. You should look at how the garbage collector in Java works and at its strategies. Very basically speaking, the GC won't clean up until it needs to, as dictated by the particular strategy.
You can also try to call System.gc().
The objects are probably created in the two Java core functions.
It's due to the text displayed in the console, and (a little bit) the size of the integer.
Java print functions use 8-bit ASCII, so 56,000 prints of a number, at 8 bits per char, will soon rack up memory.
Follow this tutorial to find your memory leak: Analyzing Memory Leak in Java Applications using VisualVM. You have to make a snapshot of your application at the start and another one after some time. With VisualVM you can do this and compare these to snapshots.
Try setting the JVM upper memory limit so low that the possible leak will cause it to run out of memory.
If the used memory hits that limit and continues to work away happily then garbage collection is doing its job.
If instead it bombs, then you have a real problem...
This does not seem to be a leak, as the graphs of the profiler also tell. The graph drops sharply at certain intervals, i.e. when GC is performed. It would have been a leak had the graph kept climbing steadily. The heap space remaining after that must be used by the Thread.sleep() and also (as mentioned in one of the answers above) by some code of the profiler.
You can try running VisualVM located at %JAVA_HOME%/bin and analyzing your application therein. It also gives you the option of performing GC at will and many more options.
I noted that the more features of VisualVM I used, the more memory was being consumed (up to 10 MB). So this increase has to be from your profiler as well, but it still is not a leak, as space is reclaimed on GC.
Does this occur without the printlns? In other words, perhaps keeping the printlns displayed on the console is what is consuming the memory.