First encounter with Java heap space error in a server data logger

I built my first Java program, which sits on top of the Interactive Brokers Java API. That may or may not be important; I just extended the main API classes with a couple of new classes.
The program makes data queries to a remote server. When the server responds, I log the received data to a local MySQL database. Once the data has been logged, the program makes the next data request.
After leaving the program running for some time, and after a couple hundred server requests, I run into a problem: I see this error and the program stops executing:
java.lang.OutOfMemoryError: Java heap space
I did some research, and from what I read I conclude that the program is creating many new objects and never releasing the old, useless ones. Since I am using NetBeans for development, I used the NetBeans profiler to check whether this was the case. See the picture here:
After running the program for quite some time, more and more of the memory is used up by byte[]. So it seems that my theory holds.
I don't really know where to go from here. There is no reference to a class or a specific variable, just a type. How can I pinpoint where the problem is coming from?
UPDATE
I corrected a specific problem that was mentioned by BigMike in the comments. Previously, I was creating many Statements with the JDBC MySQL Connector API and calling .execute() to run them, but I wasn't closing the statements with .close().
I made sure to add the statement.close() call after each execution, and the program runs much better now. Judging by the program's RAM usage, this seems to have solved the problem. I am also no longer seeing the Java heap space error, which is nice.
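For anyone hitting the same thing, the shape of the fix is roughly this. It's a minimal sketch (the table and connection details are made up), using try-with-resources so the Statement is closed even if execute() throws:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class TickLogger {
        // Hypothetical connection details, for illustration only
        private static final String URL = "jdbc:mysql://localhost:3306/ticks";

        void logTick(String insertSql) throws SQLException {
            try (Connection conn = DriverManager.getConnection(URL, "user", "password");
                 Statement stmt = conn.createStatement()) {
                stmt.execute(insertSql);
            } // Statement and Connection are both closed here, even on error
        }
    }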
Thanks!

It's very hard to say what might be wrong from that alone.
It might have to do with streams that you open but never close once you no longer need them.
Double-check methods that allocate resources (reading from files, the database, etc.), especially if they read data into streams, and make sure you close those streams in a finally clause.
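As a rough illustration of the pattern (the file name is just a placeholder):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class StreamReadExample {
        void readData() throws IOException {
            InputStream in = null;
            try {
                in = new FileInputStream("data.bin"); // placeholder resource
                // ... read from the stream ...
            } finally {
                if (in != null) {
                    in.close(); // released even if the read throws
                }
            }
        }
    }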
Apart from that, you can profile which methods are called most often, etc., to try to narrow the problem down to a specific part of your code.
I found a site with a reasonable explanation of how Garbage Collection works, and what can cause OutOfMemoryErrors:
http://www.kdgregory.com/index.php?page=java.outOfMemory
If you read through that, there's a specific reference to high allocation of Object[] and byte[], which might point you in the right direction.

Generally speaking, this comes about for one of two reasons:
There is a memory leak in the application, such that the application fails to release items for garbage collection, leading to the JVM running out of memory over time.
The application attempted a one-off operation that would require more memory than is available, leading to the JVM running out of memory due to the operation.
Since your output seems to indicate that the bulk of the memory is consumed by literally a million-plus small byte arrays, my guess is that #1 is probably the culprit; however, to verify this, restart your application and watch its memory consumption over time. It will bounce up and down, but really you only need to watch the trend of consumption. If the average consumption continues to climb over time, you have a memory leak.
To solve this issue, you typically need the source code, and need to find the parts of the code where the troubling objects are being created, used, and then "stored" far beyond the last time that they will ever be used. The solution is to correct the code to no longer store them. HashMaps, Lists, and other Collections are often accomplices in memory leak problems.
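A contrived sketch of the kind of code that does this (the names are invented):

    import java.util.ArrayList;
    import java.util.List;

    public class ResponseHandler {
        // Grows on every server response and is never trimmed: a classic leak
        private static final List<byte[]> history = new ArrayList<byte[]>();

        void onData(byte[] payload) {
            history.add(payload); // every response pins another byte[] in memory forever
            // ... log the payload to the database ...
        }
    }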
If you lack the source code, you can attempt to measure the trend of the memory consumption, and schedule shutdowns and restarts of the application to effectively "reset the clock" such that you choose your downtime instead of watching the application choose it for you.
If it is a one-off operation (not likely considering your data) then you won't see an upward trend in memory consumption until the triggering event occurs. In such a case, with access to the source code, you should protect your application from processing data that grows very far outside of normal operating parameters. For example, reading a message from the network typically takes only a few KB, but in exceptional circumstances a client might transmit forever. In such a case, kill the message processing and throw the message away with an error if you exceed a maximum message size limit of 10 MB.
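A rough sketch of that kind of guard, assuming you are accumulating a message from a stream (the limit and names are illustrative):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class BoundedMessageReader {
        private static final int MAX_MESSAGE_BYTES = 10 * 1024 * 1024; // 10 MB cap

        byte[] readMessage(InputStream in) throws IOException {
            ByteArrayOutputStream message = new ByteArrayOutputStream();
            byte[] chunk = new byte[8192];
            int n;
            while ((n = in.read(chunk)) != -1) {
                if (message.size() + n > MAX_MESSAGE_BYTES) {
                    // Kill the processing and discard the message instead of exhausting the heap
                    throw new IOException("Message exceeded " + MAX_MESSAGE_BYTES + " bytes");
                }
                message.write(chunk, 0, n);
            }
            return message.toByteArray();
        }
    }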
Without access to the source code in the latter scenario, the only hope is to identify the incoming upset, hunt down the source of the errant transmission, and attempt to manipulate it to prevent the overload of output.
The variations on how to approach these techniques are vast, but now you have a few ideas.

Java - How is memory leak in Java harmful? How can it possibly get used for a bad cause?

Creating a memory leak with Java
I was going through the above "interview" question. After reading its answers, I ended up with a few questions of my own.
Let's assume there is already a memory leak in the code.
How is that harmful? How can the data end up in the wrong hands?
I am pretty sure that System.read(); (or something like that) is not going to read the data from the memory leak. Is that even possible?
Please help with some reference/code/documents.
Memory leaks are really a broad topic; to be honest, I've voted to close your question (as too broad), but on the other hand I'll try to give you a little insight into what's behind this problem.
Consider creating a session in memory for every user connected to your web service, but never throwing the session away, simply because you forgot to or because of a bad design in your application: that would cause a memory leak.
Or consider not closing your open files or sockets.
Or consider that somewhere you keep a reference to all the intermediate data structures produced by your process. In that case there is no way for the garbage collector to free the allocated memory.
Memory leaks mostly matter in long-running applications, because in the short run a memory leak has little chance of producing an out-of-memory error. But in the long run things change: there are applications that run for months or even years.
There are many situations where a memory leak can happen. Many frameworks, libraries, and even languages try to save programmers from these "bad" situations, but I personally think it is the programmer's experience that makes the difference.
For example, in Java the try-with-resources statement is a language feature born to help programmers in such situations (it helps you not to forget).
So when designing your own objects that need to close some resource at the end of their life, implement the java.lang.AutoCloseable interface and add the appropriate method. Have a look at how many classes now implement AutoCloseable; that also shows how important memory (leak) and resource handling is.
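A minimal sketch of what that looks like (the wrapped resource is hypothetical):

    import java.io.Closeable;
    import java.io.IOException;

    public class DataFeed implements AutoCloseable {
        private final Closeable connection; // hypothetical underlying resource (socket, file, ...)

        public DataFeed(Closeable connection) {
            this.connection = connection;
        }

        @Override
        public void close() throws IOException {
            connection.close(); // called automatically at the end of a try-with-resources block
        }
    }

    // Usage:
    // try (DataFeed feed = new DataFeed(openConnection())) {  // openConnection() is hypothetical
    //     // ... use the feed ...
    // } // feed.close() runs here, even if an exception was thrown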
I would also suggest studying the difference between Java stack and heap memory management.
I once saw a Tomcat instance that hung a server every three months. After a while the server had to be restarted every three weeks, and eventually every day.
It turned out that "someone" had written a for loop instead of adding a WHERE clause to a SQL query.
Finally, there are programmers who do this as a full-time job, who are experts in this kind of investigation and who are able to find and correct memory leaks.

What to consider when writing a java program that is supposed to run 'forever'

I have to write a program that is meant to run "forever", meaning that it won't terminate regularly. Up until now I have always written programs that would run and be terminated at the end of the day. The program has to do some synchronizations, pause for n minutes, and then sync again.
AFAIK there should be no problem with my current implementation and it should theoretically run just fine, but I'm lacking any real-world experience.
So are there any "patterns" or best practices for writing very robust and resource-efficient Java programs that have a very long runtime? What possible problems could arise after, for example, a month or a year of runtime?
Some background :
Java : 1.7 but compiled down to 1.5
OS : Windows (exact version is not certain yet)
Thanks in advance
Just a brain dump of all the things I've had to keep in mind when writing this kind of app.
Avoid Memory Leaks
I had an app that ran once at midday, every day, and in it I had a FileWriter. I wasn't closing it properly, and then we started wondering why our virtual machine was going into meltdown after a few weeks. Memory leaks can come in the form of anything really, with one of the most common examples being that you don't de-reference an object appropriately; for example, using a class's field as a means of temporary storage. Often the class persists, and so does the reference. This leaves you with objects sitting in memory and doing nothing.
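A contrived example of that "field as temporary storage" pattern (names invented):

    import java.util.List;

    public class ReportGenerator {
        // Only needed inside generate(), but lives as long as this ReportGenerator does
        private List<String> workingRows;

        void generate(List<String> rows) {
            workingRows = rows;   // "temporary" storage in a field
            // ... build the report from workingRows ...
            // workingRows is never nulled out, so the whole list stays reachable afterwards
        }
    }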
Use the right kind of Scheduler
I used a java.util.Timer in that app, and I later learnt that it's better to use a ScheduledThreadPoolExecutor, after another app changing the system clock broke the schedule. So if you plan on keeping it completely Java based, I would strongly recommend using that over a Timer, for all of the reasons detailed in this question.
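A minimal sketch of that kind of scheduling (the interval and task are placeholders):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class SyncScheduler {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
            Runnable sync = new Runnable() {
                public void run() {
                    // ... do the synchronization work ...
                }
            };
            // Run immediately, then every 15 minutes; unlike Timer, this uses relative delays
            // rather than the absolute system clock
            scheduler.scheduleAtFixedRate(sync, 0, 15, TimeUnit.MINUTES);
        }
    }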
Be mindful of memory usage and your environment
If your app is loading large amounts of data each and every day, and you have other apps running on the same server, you may want to be careful about the timing. For example, if three of the other apps run their scheduled operations at midday, running yours at any other time would probably be a smart move. Be mindful of the environment in which you're executing your code.
Error handling
You probably want to configure your app to let you know if something has gone wrong, without the app breaking down. If it's running at a certain time every few hours, that means people are probably depending on it, so I would have a function in your Java code that sends out an email to you, detailing the nature of the exception.
Make it configurable
Again, if it needs to run at various points in the day, you don't want to have to pull the thing down for a few hours to work out some minor changes to your code. Instead, move those values into a Java properties file, or into an XML config (or really, whatever). The advantage of this is that you can update your program and get it up and running again before anyone really notices the difference.
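Something along these lines, using java.util.Properties (the file name and keys are made up):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public class AppConfig {
        public static Properties load(String path) throws IOException {
            Properties props = new Properties();
            InputStream in = new FileInputStream(path); // e.g. "app.properties"
            try {
                props.load(in);
            } finally {
                in.close();
            }
            return props;
        }
    }

    // app.properties might contain, for example:
    //   sync.interval.minutes=15
    //   alert.email=ops@example.com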
Be afraid of the static keyword
That bad boy will make objects persist, even when you destroy their parent reference. It is the mother of all memory leaks if you are not careful with it. It's fine for constants, and things that you know don't need to change and need to exist within the project to run well, but if you're using it for random values inside a project, you're going to quickly wonder why your app is crashing every few hours rather than syncing.
Props to #X86 for reminding me of that one.
Memory leaks are likely to be the biggest problem. Ensure that there are no long-term references held after an iteration of your logic. Even a relatively small object being referenced forever will exhaust the memory eventually (and worse, it will be harder to detect during testing if the growth rate is 1 GB/month). One approach that may help is using the snapshot functionality of profilers: take a snapshot during the pause, let the sync run a few times, and take another snapshot. Comparing these should show the delta between the synchronizations, which should hopefully be zero.
Cache maintenance is another issue. The overall size of a cache needs to be strictly limited (whereas you can often get away without a limit in short-running programs, because everything seen will be small enough not to cause problems). Equally, it's more important to do cache invalidation properly: broadly speaking, everything that gets cached will become stale at some point while your program is still running, and you need to be able to detect this and take appropriate action. This can be tricky depending on where the golden source of the cached data is.
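One simple way to enforce a hard size limit is LinkedHashMap's removeEldestEntry hook; a sketch (the capacity is illustrative):

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;

        public BoundedCache(int maxEntries) {
            super(16, 0.75f, true); // access-order, so it evicts like an LRU cache
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries; // drop the least recently used entry once full
        }
    }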
The last thing I'll mention is exception handling. For short-running processes, it's often enough to simply let the process die when an exception is encountered, so the issue can be dealt with and the app rerun. With a long-running process you'll likely need to be more defensive than this. Consider running parts of your program in threads, which can be restarted* if/when they fail. You may need a supervisor-type module, which checks that everything else is still heartbeating and reboots it if not. If appropriate to your structure, this is anecdotally a lot easier to achieve with actor-style libraries than with Java's standard executors. And if it's at all possible, you may want to have hooks (perhaps exposed over JMX/MBeans) that let you modify the behaviour somewhat, to allow a short-term hack/workaround to be effected without having to bring the process down. Though this requires quite some amount of foresight to predict exactly what's going to go wrong in several months...
*or rather, the job can be restarted in another thread
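One detail worth showing: with a ScheduledExecutorService, an uncaught exception silently cancels all future runs of the task, so the restart-friendly pattern is to catch inside the job. A rough sketch (names and interval are invented):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ResilientSync {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
            Runnable job = new Runnable() {
                public void run() {
                    try {
                        doOneSync(); // hypothetical unit of work
                    } catch (Exception e) {
                        // Swallow and log, so the scheduler still runs the next iteration
                        System.err.println("Sync failed, will retry next period: " + e);
                    }
                }
            };
            scheduler.scheduleWithFixedDelay(job, 0, 5, TimeUnit.MINUTES);
        }

        private static void doOneSync() {
            // ... the actual synchronization ...
        }
    }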

java clearing internal memory cache

I am trying to test a Java program that I wrote, in order to prove its efficiency. I have many different tests to run, and I am trying to be precise in the runtime analysis. The problem is that different tests may access information on the disk that is common. So, in order to be "fair" in my experimental results I would like to somehow programmatically clear the internal memory being used by my Java program, in between experiments. In other words, I want each experiment to have the same "empty memory/cache".
I tried reading in a large file in between experiments. I also tried restarting my machine. Interestingly, the times are much, much worse when I read in a large file than when I simply restart the machine (say 40 sec versus 5 sec). What is the correct way to clear internal memory (i.e., to avoid the artificial speedup between experiments from common disk accesses), beyond restarting my machine between each experiment, which is not feasible?
What is the correct way to clear internal memory
Restart the computer. As you say this might not be the slowest option.
Computers are designed to be as efficient as possible. Making them deliberately inefficient means making assumptions about how inefficient you want them to be, so there is no standard way of doing this.
One way of clearing the cache for files you have written is to delete them. If you want to clear the cache for files on disk that you want to keep, you can write a set of files larger than your main memory and then delete them.
If you are using Windows, it doesn't use all the memory to cache files in any case. What you can do is write large files of a few hundred MB at a time and time how long this takes. When the cache is exhausted you see a sudden jump in the time it takes, as seen from Java, and at that point you have probably cleared the cache. After this happens, delete the files and your cache is likely to be empty as a result.
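A rough sketch of that technique (sizes and file names are placeholders; size the total well past your RAM):

    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class PageCacheFlusher {
        public static void main(String[] args) throws IOException {
            byte[] block = new byte[8 * 1024 * 1024]; // 8 MB chunk, illustrative
            List<File> scratch = new ArrayList<File>();
            for (int i = 0; i < 40; i++) {            // 40 files of ~320 MB each
                long start = System.currentTimeMillis();
                File f = new File("scratch-" + i + ".bin");
                FileOutputStream out = new FileOutputStream(f);
                try {
                    for (int j = 0; j < 40; j++) {
                        out.write(block);             // ~320 MB per file
                    }
                    out.getFD().sync();               // push to disk, not just the write-back cache
                } finally {
                    out.close();
                }
                scratch.add(f);
                System.out.println("File " + i + " took "
                        + (System.currentTimeMillis() - start) + " ms");
            }
            for (File f : scratch) {
                f.delete();                           // clean up once the cache has been displaced
            }
        }
    }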
This can help you get reproducible results but this may not be exactly the same as rebooting. Note: unless you expect to reboot your computer every time you run your application, the time it takes after a reboot may not be meaningful anyway.
BTW: the disk cache you should be worrying about is in the OS. Java doesn't cache files except in buffers you create and in code you write, so those should be under your control.

How to Optimize JVM & GC through Load Testing

Edit: From the several extremely generous and helpful responses this question has already received, it is obvious to me that I didn't make an important part of this question clear when I asked it earlier this morning. The answers I've received so far are more about optimizing applications and removing bottlenecks at the code level. I am aware that this is way more important than trying to get an extra 3% or 5% out of your JVM!
This question assumes we've already done just about everything we could to optimize our application architecture at the code level. Now we want more, and the next place to look is at the JVM level and garbage collection; I've changed the question title accordingly. Thanks again!
We've got a "pipeline" style backend architecture where messages pass from one component to the next, with each component performing different processes at each step of the way.
Components live inside of WAR files deployed on Tomcat servers. Altogether we have about 20 components in the pipeline, living on 5 different Tomcat servers (I didn't choose the architecture or the distribution of WARs for each server). We use Apache Camel to create all the routes between the components, effectively forming the "connective tissue" of the pipeline.
I've been asked to optimize the GC and general performance of each server running a JVM (5 in all). I've spent several days now reading up on GC and performance tuning, and have a pretty good handle on what each of the different JVM options do, how the heap is organized, and how most of the options affect the overall performance of the JVM.
My thinking is that the best way to optimize each JVM is not to optimize it as a standalone. I "feel" (that's about as far as I can justify it!) that trying to optimize each JVM locally without considering how it will interact with the other JVMs on other servers (both upstream and downstream) will not produce a globally-optimized solution.
To me it makes sense to optimize the entire pipeline as a whole. So my first question is: does SO agree, and if not, why?
To do this, I was thinking about creating a LoadTester that would generate input and feed it to the first endpoint in the pipeline. This LoadTester might also have a separate "Monitor Thread" that would check the last endpoint for throughput. I could then do all sorts of processing where we check for average end-to-end travel time for messages, maximum throughput before faulting, etc.
The LoadTester would generate the same pattern of input messages over and over again. The variable in this experiment would be the JVM options passed in each Tomcat server's startup options. I have a list of about 20 different options I'd like to pass to the JVMs, and figured I could just keep tweaking their values until I found near-optimal performance.
This may not be the absolute best way to do this, but it's the best way I could design with what time I've been given for this project (about a week).
Second question: what does SO think about this setup? How would SO create an "optimizing solution" any differently?
Last but not least, I'm curious as to what sort of metrics I could use as a basis of measure and comparison. I can really only think of:
Find the JVM option config that produces the fastest average end-to-end travel time for messages
Find the JVM option config that produces the largest volume throughput without crashing any of the servers
Any others? Any reasons why those 2 are bad?
After reviewing the play I could see how this might be construed as a monolithic question, but really what I'm asking is how SO would optimize JVMs running along a pipeline, and to feel free to cut-and-dice my solution however you like it.
Thanks in advance!
Let me go up a level and say I did something similar in a large C app many years ago.
It consisted of a number of processes exchanging messages across interconnected hardware.
I came up with a two-step approach.
Step 1. Within each process, I used this technique to get rid of any wasteful activities.
That took a few days of sampling, revising code, and repeating.
The idea is that there is a chain, and the first thing to do is remove inefficiencies from the links.
Step 2. This part is laborious but effective: Generate time-stamped logs of message traffic.
Merge them together into a common timeline.
Look carefully at specific message sequences.
What you're looking for is
Was the message necessary, or was it a retransmission resulting from a timeout or other avoidable reason?
When was the message sent, received, and acted upon? If there is a significant delay between being received and acted upon, what is the reason for that delay? Was it just a matter of being "in line" behind another process that was doing I/O, for example? Could it have been fixed with different process priorities?
This activity took me about a day to generate logs, combine them, find a speedup opportunity, and revise code.
At this rate, after about 10 working days, I had found/fixed a number of problems, and improved the speed dramatically.
What these two steps have in common is that I'm not measuring or trying to gather "statistics".
If something is spending too much time, that very fact exposes it to a diligent programmer taking a close, meticulous look at what is happening.
I would start by finding the optimum recommended JVM values for your hardware/software mix, OR just start with what is already out there.
Next I would make sure that I have monitoring in place to measure business throughput and SLAs.
I would not try to tweak just the GC if there is no reason to.
First you will need to find the major bottlenecks in your application: whether it is I/O bound, SQL bound, etc.
The key here is to MEASURE, IDENTIFY the top bottlenecks, FIX them, and conduct another iteration with a repeatable load.
HTH...
The biggest trick I am aware of when running multiple JVMs on the same machine is limiting the number of cores the GC will use. Otherwise, what can happen when one JVM does a full GC is that it will attempt to grab every core, impacting the performance of all the JVMs even though they are not performing a GC. One suggestion is to limit the number of GC threads to 5/8 of the cores or fewer. (I can't remember where that is written.)
I think you should test the system as a whole to ensure you have realistic interaction between the services. However, I would assume you may need to tune each service differently.
Changing command-line options is useful if you cannot change the code. However, if you profile and optimise the code you can make far more difference than by tuning the GC parameters (in which case you may need to change them again afterwards).
For this reason, I would only change the command-line parameters as a last resort, after there is little improvement left to be made in the code of the application.

Trying to cause java.lang.OutOfMemoryException

I am trying to reproduce the java.lang.OutOfMemoryException in JBoss 4 that one of our clients got, presumably by running the J2EE applications over days/weeks.
I am trying to find a way for the webapp to spit out a java.lang.OutOfMemoryException in a matter of minutes (instead of days/weeks).
One thing that comes to mind is to write a Selenium script and have it bombard the webapp.
One other thing that we can do is to reduce JVM heap size, but we would prefer not to do this, as we want to see the limit of our system.
Any suggestions?
ps: I don't have access to the source code, as we just provide a hosting service (of course I could decompile the class files...)
If you don't have access to the source code of the J2EE app in question, the options that come to mind are:
Reduce the amount of RAM available to the JVM. You've already identified this one and said you don't want to do it.
Create a J2EE app (it could probably just be a JSP) and configure it to run within the same JVM as the target app, and have that app allocate a ridiculous amount of memory. That will reduce the amount of memory available to the target app, hopefully such that it fails in the way you're trying to force.
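A minimal sketch of that memory-hogging idea, e.g. something you could call from a throwaway JSP or test servlet deployed in the same JVM (names are invented):

    import java.util.ArrayList;
    import java.util.List;

    public class MemoryHog {
        // static, so the allocations stay reachable and can never be garbage collected
        private static final List<byte[]> hoard = new ArrayList<byte[]>();

        /** Grab roughly the requested number of megabytes and hold on to them. */
        public static void grabMegabytes(int megabytes) {
            for (int i = 0; i < megabytes; i++) {
                hoard.add(new byte[1024 * 1024]); // 1 MB at a time
            }
        }
    }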
Try to use some profiling tools to investigate the memory leak. It's also good to investigate memory dumps taken after the OOM happens, along with the logs. IMHO, reducing memory is not the right way to investigate, because you can run into issues that have nothing to do with the real production one.
Do both, but in a controlled fashion:
Reduce the available memory to the absolute minimum (using -Xms1M -Xmx2M, as an example, though I fear your app won't even load with such limitations).
Do controlled "nuclear irradiation": run Selenium scripts over each of the known working URLs before attacking the presumed guilty one.
Finally, unleash the power that shall not be raised: start VisualVM and any other monitoring software you can think of (DB execution is a usual suspect).
If you are using Sun Java 6, you may want to consider attaching to the application with jvisualvm in the JDK. This will allow you to do in-place profiling without needing to alter anything in your scenario, and may possibly immediately reveal the culprit.
If you don't have the source, decompile it, at least if you think the terms of use allow it and you live in a free country. You can use:
Java Decompiler or JAD.
In addition to all the other answers, I must say that even if you can reproduce an OutOfMemory error and find out where it occurred, you probably haven't found out anything worth knowing.
The trouble is that an OOM occurs when an allocation can not take place. The real problem however is not that allocation, but the fact that other allocations, in other parts of the code, have not been de-allocated (de-referenced and garbage collected). The failed allocation here might have nothing to do with the source of the trouble (no pun intended).
This problem is larger in your case as it might take weeks before trouble starts, suggesting either a sparsely used application, or an abnormal code path, or a relatively HUGE amount of memory in relation to what would be necessary if the code was OK.
It might be a good idea to ask around why this amount of memory is configured for JBoss and not something different. If it's recommended by the supplier, then maybe they already know about the leak and require this to mitigate the effects of the bug.
For these kind of errors it really pays to have some idea in which code path the problem occurs so you can do targeted tests. And test with a profiler so you can see during run-time which objects (Lists, Maps and such) are growing without shrinking.
That would give you a chance to decompile the correct classes and see what's wrong with them. (Closing or cleaning in a try block and not a finally block perhaps).
In any case, good luck. I think I'd prefer to find a needle in a haystack. When you find the needle you at least know you have found it:)
The root of the problem is most likely a memory leak in the webapp that the client is running. In order to track it down, you need to run the app with a representative workload with memory profiling enabled. Take some snapshots, and then use the profiler to compare the snapshots to see where objects are leaking. While source-code would be ideal, you should be able to at least figure out where the leaking objects are being allocated. Then you need to track down the cause.
However, if your customer won't release binaries so that you can run an identical system to what he is running, you are kind of stuck, and you'll need to get the customer to do the profiling and leak detection himself.
BTW - there is not a lot of point causing the webapp to throw an OutOfMemoryError. It won't tell you why it is happening, and without understanding "why" you cannot do much about it.
EDIT
There is no point "measuring the limits" if the root cause of the memory leak is in the client's code. Assuming that you are providing a servlet hosting service, the best thing to do is to provide the client with instructions on how to debug memory leaks ... and step out of the way. And if they have a support contract that requires you to (in effect) debug their code, they ought to provide you with the source code to do your job.
