I compile a jar file, and I write to the log every half a minute to check the thread and memory situation.
Attached are the start-of-day log and the end-of-day log, captured after the software got stuck and stopped working.
In the middle of the day several automatic operations happened. I received quotes at about 40 per second, and finished handling each one before the next arrived.
In addition, every 4 seconds I write a map of info to the DB.
Any ideas why the heap size is increasing?
(look at currHeapSize)
morning:
evening:
These are classic symptoms of a Java memory leak: somewhere in your application a data structure is accumulating more and more objects and preventing them from being garbage collected.
The best way to find a problem like this is to use a memory profiler. The answers to this question explain how.
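For example, if you want a snapshot to inspect offline, here is a minimal sketch that programmatically writes an .hprof heap dump (hedged: this relies on com.sun.management.HotSpotDiagnosticMXBean, so it assumes a HotSpot JVM; the file name is a placeholder):

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class HeapDumper {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            // true = dump only live objects (forces a GC first)
            bean.dumpHeap("heap.hprof", true);
        }
    }

Alternatively, jmap -dump:live,format=b,file=heap.hprof <pid> produces the same kind of file. Open a morning dump and an evening dump in VisualVM or Eclipse MAT and compare them to see which data structure is growing.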
Related
I noticed today that my program slowly chews through memory. I checked with Java VisualVM to try to learn more. I am very new at this. I am coding in Java 8, with Swing handling the game.
Note that nothing is supposed to happen except rendering of objects.
No new instances or anything.
The game loop looks something along the lines of:

    while (running)
        try render all drawables at optimal fps
        try update all entities at optimal rate
        sleep if there is time over
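In Java, that pseudocode typically comes out as a fixed-timestep loop along these lines (a minimal sketch; the 60 FPS target and the update/render stubs are assumptions, not the asker's actual code):

    public class GameLoop {
        private volatile boolean running = true;

        void update(long elapsedNanos) { /* update all entities here */ }
        void render()                  { /* render all drawables here */ }

        void run() {
            final long frameNanos = 1_000_000_000L / 60; // target ~60 FPS
            long last = System.nanoTime();
            while (running) {
                long now = System.nanoTime();
                update(now - last); // update entities, scaled by elapsed time
                render();           // draw everything
                last = now;
                // sleep away whatever is left of the frame budget
                long sleepNanos = frameNanos - (System.nanoTime() - now);
                if (sleepNanos > 0) {
                    try {
                        Thread.sleep(sleepNanos / 1_000_000, (int) (sleepNanos % 1_000_000));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        running = false;
                    }
                }
            }
        }
    }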
From what I could find, the following happened during a 20-minute period.
Sleeping is working, it is yielding most of its time.
5 classes were loaded some time into the run time.
The game uses about 70 MB when first launched. (At this point everything is loaded as far as I can tell.)
15 MB of RAM were taken pretty rapidly after the initial 70 MB, followed by a slow increase. Now 100 MB is taken in total.
CPU usage seems sane, about 7% on my i5-2500K.
The Heap size has been increased once. Used heap has never exceeded 50%.
If I comment out everything in the game loop except for the while (running) {} part I get fewer leaks, but they still occur.
Is this normal, or should I dig deeper? If I need to dig deeper, can someone point me in the direction of what to look for?
Now, after 25 minutes, it is up to 102 MB of RAM, meaning the leaks are smaller and fewer.
A reminder: I am not very good at this. This is the first time I have tried to debug my project this way. Please bear that in mind.
Update
After roughly 40 minutes it settles at 101 to 102 MB of RAM usage; it hasn't exceeded that for 15 minutes now, just going a bit up and down.
The heap size is getting smaller and smaller. CPU usage is steady.
Short Answer: There is no leak.
Explanation
Using these questions as a reference.
Simple Class - Is it a Memory Leak?
Tracking down a memory leak / garbage-collection issue in Java
Creating a memory leak with Java
And this article.
http://www.oracle.com/webfolder/technetwork/tutorials/obe/java/gc01/index.html
As I mentioned in my question, I am very new to this. What triggered me was that I checked Windows Task Manager, saw an increase in memory usage by my application, and decided to dig deeper. I learned a couple of things.
I vastly underestimated Java's garbage collection. It is actually hard to cause a memory leak even if that is your intention. There can be problems when threading is involved, however, which caught my attention as I am using threading in my project.
The tools that Windows provides are sub-par; I recommend using external tools. I used Java VisualVM. In this tool I found that loads of classes are loaded in the first 2 minutes of the game, and 5 more a bit later. Some of the first ones created are String references that the JVM makes.
"Thread.sleep could be allocating objects under the covers." -- I found out it does, a total of 5 even, which explains my initial "5 classes were loaded some time into the run time." What they do I have no clue as of yet.
About 10 to 15 MB of the usage came from the profiling that I did. I wish I wasn't such a rookie.
So again no leak that I could find.
I am developing an in memory data structure, and would like to add persistency.
I am looking for a way to do this fast. I thought about dumping a heap dump once in a while.
Is there a way to load this Java heap dump, as is, back into my memory? Or is that impossible?
Otherwise, other suggestions for fast write and fast read of the entire information?
(Serialization might take a lot of time)
-----------------edited explanation:--------
Since my memory might be full of small pieces of information referencing each other, serialization may require an inefficient scan of all my memory. Reloading is also potentially problematic.
On the other hand, I can define a gigantic array, and each object I create, I put into the array. Links are then a long number representing the position in the array. Now I can just dump this array as is, and also reload it as is.
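A minimal sketch of that flat-array idea (hedged: FlatStore, the two-slot node layout, and the field names are all made up for illustration; growing the array is omitted):

    import java.io.*;

    // Hypothetical flat node store: each node occupies SLOTS consecutive longs,
    // and links between nodes are stored as array indices instead of references.
    public class FlatStore {
        static final int SLOTS = 2;          // e.g. slot 0 = value, slot 1 = "next" index
        long[] cells = new long[1 << 20];    // the one gigantic array
        int size = 0;                        // number of nodes allocated so far

        int alloc(long value, long next) {   // returns the new node's index
            int idx = size++;
            cells[idx * SLOTS]     = value;
            cells[idx * SLOTS + 1] = next;   // a long index, not an object pointer
            return idx;
        }

        void dump(File f) throws IOException {
            try (DataOutputStream out = new DataOutputStream(
                    new BufferedOutputStream(new FileOutputStream(f)))) {
                out.writeInt(size);
                for (int i = 0; i < size * SLOTS; i++) out.writeLong(cells[i]);
            }
        }
    }

A memory-mapped file (FileChannel.map) over the same layout would get closer to dump-like speeds than the per-long writes shown here.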
There are even some JVMs, like JRockit, that utilize disk space, so maybe it is possible to dump as is very quickly and to reload very quickly.
To support my point: a Java heap dump contains all the information in the JVM, and it is produced quickly.
Sorry, but serializing 4 GB isn't even close to the few seconds a dump takes.
Also, memory is memory, and there are operating systems that let you take a RAM dump quickly.
https://superuser.com/questions/164960/how-do-i-dump-physical-memory-in-linux
When you think about it... this is quite a good strategy for persistent data structures. There has been quite a hype about in-memory databases in the last decade. But why settle for that? What if I want a Fibonacci heap to be "almost persistent"? That is, every 5 minutes I dump the information (quickly), and in case of an electrical outage I have a backup from 5 minutes ago.
-----------------end of edited explanation:--------
Thank you.
In general, there is no way to do this on HotSpot.
Objects in the heap have a 2-word header, the second word of which points into permgen for the class metadata (known as a klassOop). You would have to dump all of permgen as well, which includes all the pointers to compiled code, so basically the entire process.
There would be no sane way to recover the heap state correctly.
It may be better to explain precisely what you want to build & why already-existing products don't do what you need.
Use serialization: implement java.io.Serializable, add a serialVersionUID to all of your classes, and you can persist them to any OutputStream (file, network, whatever). Just create a root object from which all your objects are reachable (even indirectly).
I don't think serialization would take a long time; it's optimized code in the JVM.
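A minimal sketch of that approach (hedged: Root, the map field, and the file handling are placeholders, not a prescribed design):

    import java.io.*;
    import java.util.*;

    public class Root implements Serializable {
        private static final long serialVersionUID = 1L;
        Map<String, List<Double>> data = new HashMap<>(); // everything reachable from here is persisted

        void save(File f) throws IOException {
            try (ObjectOutputStream out = new ObjectOutputStream(
                    new BufferedOutputStream(new FileOutputStream(f)))) {
                out.writeObject(this); // walks the whole object graph
            }
        }

        static Root load(File f) throws IOException, ClassNotFoundException {
            try (ObjectInputStream in = new ObjectInputStream(
                    new BufferedInputStream(new FileInputStream(f)))) {
                return (Root) in.readObject();
            }
        }
    }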
You can use jhat or jvisualvm to load your dump to analyze it. I don't know whether the dump file can be loaded and restarted again.
My Java program uses java.util.concurrent.Executor to run multiple threads. Each one starts a Runnable class that reads from a comma-delimited text file on the C: drive and loops through the lines to split and parse the text into floats. After that, the data is stored into:

    static Vector
    static ConcurrentSkipListMap

My PC runs Windows 7 64-bit on an Intel Core i7 (6 cores, 12 hardware threads) with 24 GB of RAM. I have noticed the program runs for 2 minutes and finishes all 1700 files, but CPU usage is only around 10% to 15%, no matter how many threads I assign using:

    ExecutorService executor = Executors.newFixedThreadPool(50);

Executors.newFixedThreadPool(500) doesn't give better CPU usage or a shorter time to finish the tasks. There is no network traffic and everything is on the local C: drive. There is enough RAM for more threads to use, but I get an OutOfMemoryError when I increase the thread count to 1000.
Why don't more threads translate into more CPU usage and less processing time?
Edit: My hard drive is a 200 GB SSD.
Edit: I finally found where the problem was: each thread writes its results to a log file shared by all threads. The more times I run the app, the larger the log file, and the slower it gets; since it's shared, this definitely slows down the process. After I stopped writing to the log file, it finished all tasks in 10 seconds!
The OutOfMemoryError is probably coming from the JVM's own limits on its memory usage. Try using JVM arguments (for example, -Xmx) to increase the maximum memory.
For speed, Adam Bliss starts with a good suggestion. If this is the same file over and over, then having multiple threads try to read it at the same time could result in a lot of contention over locks on the file. More threads would mean even more contention, which could result in worse overall performance. So avoid that and simply load the file once, if possible. Even if it's a large file, you have 24 GB of RAM; you can hold quite a large file, but you may need to increase the JVM's allowed memory to let the whole file be loaded.
If multiple files are being used, then consider this fact: your disk can only read one file at a time. So having multiple threads trying to use the disk all at the same time probably won't be very effective if the threads aren't spending much time processing. Since you have so little CPU usage, it could be that a thread loads part of the file, runs very quickly on the part that got buffered, and then spends a lot of time waiting for the rest of the file to load. If you're loading the file over and over, that could still apply.
In short: disk IO is probably your culprit. You need to work to reduce it so that the threads aren't contending for file content so much.
Edit:
After further consideration, it's more likely a synchronization issue. Threads are probably getting held up trying to add to the result list. If access is frequent, this will result in huge amounts of contention for locks on the object. Consider having each thread save its results in a local list (like an ArrayList, which is not thread-safe) and then copying all values into the final, shared list in chunks, to reduce contention.
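A minimal sketch of that pattern (hedged: the float[]-row representation, the Worker name, and the shared list are placeholders for the asker's static collections):

    import java.util.*;

    // Each worker parses into a private ArrayList (no locking),
    // then merges once at the end with a single bulk add.
    class Worker implements Runnable {
        private final List<float[]> shared; // e.g. a synchronized or concurrent list
        private final List<String> lines;

        Worker(List<float[]> shared, List<String> lines) {
            this.shared = shared;
            this.lines = lines;
        }

        @Override
        public void run() {
            List<float[]> local = new ArrayList<>(); // thread-confined, lock-free
            for (String line : lines) {
                String[] parts = line.split(",");
                float[] row = new float[parts.length];
                for (int i = 0; i < parts.length; i++) row[i] = Float.parseFloat(parts[i]);
                local.add(row);
            }
            shared.addAll(local); // one contended operation instead of thousands
        }
    }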
You're probably being limited by IO, not CPU.
Can you reduce the number of times you open the file to read it? Maybe open it once, read all the lines, keep them in memory, and then iterate on that.
Otherwise, you'll have to look at getting a faster hard drive. SSDs can be quite speedy.
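A minimal sketch of the read-once approach suggested above (hedged: the path and the parsing step are placeholders):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.*;
    import java.util.List;

    public class LoadOnce {
        public static void main(String[] args) throws IOException {
            // One disk read; every thread can then iterate over the in-memory list.
            List<String> lines = Files.readAllLines(
                    Paths.get("C:/data/quotes.csv"), StandardCharsets.UTF_8);
            for (String line : lines) {
                // split and parse here instead of re-reading the file
            }
        }
    }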
Is it possible that your threads are somehow given low priority on the system? In that case, increasing the number of threads wouldn't correspond to an increase in CPU usage, since the amount of CPU time allotted to your program may be throttled somewhere else.
Are there any configuration files or initialization steps where something like this could possibly occur?
I am trying to tune one of my Java applications.
I am using a Java profiler and got some reports from it.
I saw that the number of page faults for the application ranges from 30,000 to 35,000.
How can I decide whether this number is too high or normal?
I am getting the same figures for the initial minute and after half an hour as well.
My RAM is 2 GB and the application uses a single thread.
The thread only tries to read messages from a queue every 3 seconds, and the queue is empty.
Since no processing is being done, I think page faults should not occur at all.
Please guide me here.
When you start your JVM, it reserves the maximum heap size as one contiguous block of virtual memory. However, this virtual memory is only turned into main memory as you access those pages, i.e. every time your heap grows by a 4 KB page, you get one page fault. You will also get page faults from thread stacks in the same manner.
Your 35K page faults suggest you are touching about 140 MB of heap (35,000 faults × 4 KB per page ≈ 140 MB).
BTW, you can buy 8 GB for £25; you might consider an upgrade.
What's your JVM? If it's HotSpot, you can use JVM options like -XX:LargePageSizeInBytes or -XX:+UseMPSS to force desired page sizes in order to minimize page swapping. I think there should be similar options for other JVMs too.
Take a look at this:
http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html
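For illustration, a possible launch line on HotSpot (hedged: flag availability varies by JVM version and OS, large pages usually also need OS-level configuration, and the jar name and heap sizes are placeholders):

    java -XX:+UseLargePages -XX:LargePageSizeInBytes=2m -Xms512m -Xmx512m -jar MyApp.jar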
We have a Swing-based application that does complex processing on data. One of the prerequisites for our software is that any given column cannot have too many unique values. If a column is numeric, the user would need to discretize the data before they could use it in our tool.
Unfortunately, the algorithms we are using are combinatorially expensive in memory depending on the number of unique values per column. Right now, with the wrong dataset, the app runs out of memory very quickly. Before doing one of these operations that could exhaust memory, we would like to estimate roughly how much memory the operation will need, check how much memory the app is currently using, and show an error message if the operation looks likely to fail, rather than actually running out of memory.

Using java.lang.Runtime we can find the free memory, total memory, and max memory, but is this really helpful? Even if it appears we won't have enough heap space, it could be that if we wait 30 milliseconds the garbage collector will run and we suddenly have more than enough heap space for our operation. Is there any way to really predict whether we are going to run out of memory?
I have done something similar for a database application where the number of rows to be loaded could not be estimated. In the loop that processes the result set I call a "MemoryWatcher" method that checks how much memory is free.
If the available memory goes under a certain threshold, the watcher forces a garbage collection and re-checks. If there still isn't enough memory, the watcher method signals this to the caller with an exception. The caller can gracefully recover from that exception, as opposed to an OutOfMemoryError, which sometimes leaves Swing totally unstable.
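A minimal sketch of such a watcher (hedged: the class names and the 10% threshold are made up, and System.gc() is only a request to the JVM, so treat this as best-effort):

    // Hypothetical memory watcher: called periodically inside the result-set loop.
    public class MemoryWatcher {
        private static final double MIN_FREE_RATIO = 0.10; // made-up threshold

        public static void check() throws InsufficientMemoryException {
            if (freeRatio() < MIN_FREE_RATIO) {
                System.gc();                 // ask for a collection, then re-check
                if (freeRatio() < MIN_FREE_RATIO) {
                    throw new InsufficientMemoryException("Less than 10% of heap free");
                }
            }
        }

        private static double freeRatio() {
            Runtime rt = Runtime.getRuntime();
            long max = rt.maxMemory();                       // the -Xmx ceiling
            long used = rt.totalMemory() - rt.freeMemory();  // actually occupied
            return (double) (max - used) / max;
        }
    }

    class InsufficientMemoryException extends Exception {
        InsufficientMemoryException(String msg) { super(msg); }
    }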
I don't have expertise on this, but I feel you could take the extra step of bytecode analysis using ASM to preempt bugs like NullPointerException, OutOfMemoryError, etc.
Unless you run your application with the maximum amount of memory you need from the outset (using -Xms) I don't think you can achieve anything useful, since other applications will be able to consume memory before your app needs it.
Have you considered using Soft/WeakReferences and letting garbage collection reap objects that you could possibly recalculate/regenerate on the fly?
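A minimal sketch of that idea (hedged: the cache shape and the recompute callback are placeholders):

    import java.lang.ref.SoftReference;
    import java.util.HashMap;
    import java.util.Map;

    // Cache whose values the GC may reclaim under memory pressure;
    // reclaimed entries are simply recomputed on the next request.
    class RecomputingCache<K, V> {
        private final Map<K, SoftReference<V>> map = new HashMap<>();

        V get(K key, java.util.function.Function<K, V> recompute) {
            SoftReference<V> ref = map.get(key);
            V value = (ref == null) ? null : ref.get(); // null if the GC cleared it
            if (value == null) {
                value = recompute.apply(key);           // regenerate on the fly
                map.put(key, new SoftReference<>(value));
            }
            return value;
        }
    }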