I'm looking into the stackSize parameter for Thread to handle some recursion, as described in my other question: How to extend stack size without access to JVM settings?
The Javadoc says:
On some platforms, specifying a higher value for the stackSize parameter may allow a thread to achieve greater recursion depth before throwing a StackOverflowError. Similarly, specifying a lower value may allow a greater number of threads to exist concurrently without throwing an OutOfMemoryError (or other internal error). The details of the relationship between the value of the stackSize parameter and the maximum recursion depth and concurrency level are platform-dependent. On some platforms, the value of the stackSize parameter may have no effect whatsoever.
Does anyone have more details? The server running my code has the Oracle Java Runtime Environment. Will specifying a stack size have an effect? I don't have info on the OS (or other system specs), and I can't test it myself because I can't submit code year-round.
Oracle Java Runtime Environment.
That's deprecated.
Will specifying a stack size have an effect?
It will change the size of each thread's stack, yes.
Will that affect your app? Probably not.
If you run many threads simultaneously (we're talking a couple hundred at least), lowering it may have an effect: specifically, it may make your app work where it would otherwise fail with out-of-memory errors, or turn to molasses because your system doesn't have the RAM.
If you have deeply recursive calls, but not the kind that run forever (due to a bug in your code), raising it may have an effect: specifically, it may make your app work where it would otherwise fail with stack overflow errors.
Most Java apps have neither, and in that case, whilst the -Xss option works fine, you won't notice it: the memory load barely changes, and the app continues to work just the same, and just as fast.
Does YOUR app fall in one of the two exotic categories? How would we be able to tell without seeing any of the code?
Most apps don't, that's... all there is to say without more details.
If you're just trying to tweak things so they 'run better', don't. The default settings are defaults for a reason: they work best for most cases. You don't tweak defaults unless you have a lot of information, preferably backed up by profiler reports, showing that tweaking is necessary. And if the aim is just to generally 'make things run more smoothly', I'd start by replacing the obsolete (highly outdated) JRE you do have. The JRE as a concept is gone (Java 8 is the last release that had one, and it is almost a decade old at this point) - just install a JDK.
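For completeness, the way to request a per-thread stack size in code is the four-argument Thread constructor. Here's a minimal sketch; the 64 MB figure is purely illustrative, and whether the JVM honors the hint at all is platform-dependent, per the Javadoc quoted above:

    public class DeepRecursion {
        private static int depth = 0;

        private static void recurse() {
            depth++;
            recurse(); // recurse until the stack runs out
        }

        public static void main(String[] args) throws InterruptedException {
            long stackSize = 64L * 1024 * 1024; // 64 MB; illustrative, not a recommendation
            Thread t = new Thread(null, () -> {
                try {
                    recurse();
                } catch (StackOverflowError e) {
                    System.out.println("Overflowed at depth " + depth);
                }
            }, "deep-stack", stackSize);
            t.start();
            t.join();
        }
    }

Running the same Runnable on threads created with different stackSize values and comparing the reported depths is a quick way to find out whether the hint does anything on your platform.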
Related
I have a situation in which I need to create thousands of instances of a class from a third-party API. Each new instance creates a new thread. I start getting OutOfMemoryError once there are more than 1,000 threads, but my application requires creating 30,000 instances. Each instance is active all the time. The application is deployed on a 64-bit Linux box with 8 GB of RAM, of which only 2 GB is available to my application.
The way the third-party library works, I cannot use the new Executor framework or thread pooling.
So how can I solve this problem?
Note that using a thread pool is not an option: all threads are running all the time to capture events.
The memory size on the Linux box is not in my control, but if I had the choice to have 25 GB available to my application on a 32 GB system, would that solve my problem, or would the JVM still choke?
Are there some optimal Java settings for the above scenario?
The system uses 64-bit Oracle Java 1.6.
I concur with Ryan's answer, but the problem is worse than his analysis suggests.
HotSpot JVMs have a hard-wired minimum stack size: 128k for Java 6 and 160k for Java 7.
That means that even at the smallest possible stack size, 30,000 threads would need roughly 30,000 × 128k ≈ 3.7 GB, nearly twice your allocated 2 GB ... just for thread stacks.
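You can see this floor directly: asking for less makes the JVM refuse to start. The exact wording and minimum vary by version and platform, but on a Java 7 HotSpot VM it looks something like this:

    $ java -Xss64k -version
    The stack size specified is too small, Specify at least 160k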
In addition, having 30k native threads is liable to cause problems on some operating systems.
I put it to you that your task is impossible. You need to find an alternative design that does not require you to have 30k threads simultaneously. Alternatively, you need a much larger machine to run the application.
Reference: http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2012-June/003867.html
I'd say give up now and figure out another way to do it. The default stack size is 512k. At 30k threads, that's 15 GB in stack space alone. To fit into 2 GB, you'd need to cut it down to less than 64k per stack, and that leaves you zero memory for the heap, including all the Thread objects, or for the JVM itself.
And that's just the most obvious problem you're likely to run into when running that many simultaneous threads in one JVM.
I think we are missing lots of details, but would a distributed platform work? Each individual node would manage a range of your class instances. The nodes could run on different PCs or virtual machines and communicate with each other.
I had the same problem with an SNMP provider that required a thread for each outstanding get (I wanted to have tens of thousands of outstanding gets going on at once). Now that NIO exists I'd just rewrite the library myself if I had to do this again.
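To illustrate the NIO alternative alluded to here: a single thread can service many channels with a Selector, instead of parking one thread per outstanding request. A rough sketch (the port number and buffer size are arbitrary, and real protocol handling is omitted):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;

    public class SelectorSketch {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.socket().bind(new InetSocketAddress(9000));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buf = ByteBuffer.allocate(4096);
            while (true) {
                selector.select(); // block until at least one channel is ready
                for (SelectionKey key : selector.selectedKeys()) {
                    if (key.isAcceptable()) {
                        SocketChannel ch = server.accept();
                        if (ch != null) {
                            ch.configureBlocking(false);
                            ch.register(selector, SelectionKey.OP_READ);
                        }
                    } else if (key.isReadable()) {
                        SocketChannel ch = (SocketChannel) key.channel();
                        buf.clear();
                        if (ch.read(buf) < 0) {
                            ch.close(); // peer closed the connection
                        }
                        // ... handle the bytes in buf here ...
                    }
                }
                selector.selectedKeys().clear(); // mark all events as handled
            }
        }
    }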
You cannot solve this in Java code or configuration. In my experience, Windows chokes at around 2,000-3,000 threads (this may have changed in later versions). When I was doing this, I was surprised to find that Linux supported even fewer threads (around 1,000).
When the system stops supplying threads, "Out of Memory" is the exception you should expect to see - so I'm sure that's it; I started getting this exception long before I actually ran out of memory. Perhaps you could hack Linux somehow to support more, but I have no idea how.
Using the concurrent package will not help here. Switching over to "green" threads might, but that would probably require recompiling the JVM (it would be nice if it were available as a command-line switch, but I really don't think it is).
There is a single core class that is used in a transaction engine. I did a test with a high number of concurrent transactions, which resulted in a fatal StackOverflowError. I would like to know if there is any way to measure how much stack memory is available, in order to avoid the error.
I am looking into a dynamic way of doing it as setting a hard limit on the number of concurrent transactions is not ideal.
Give Java VisualVM a try. It's from Oracle, and included with the JDK. You can find it here:
${JDK}/bin/jvisualvm.exe
Almost anything you want to know about your Java application's performance can be observed through this.
Here's a quick tutorial if you need it, although it doesn't actually need much of an explanation.
You can set the stack size of a Java program by using the -Xss argument (or -XX:ThreadStackSize; see Java HotSpot VM Options).
But, once set, the Java stack size cannot be changed dynamically.
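The JVM doesn't expose the remaining stack space portably, so one pragmatic workaround (my sketch, not something from the answer above) is to track recursion depth explicitly and fail fast, well before the stack is exhausted. The Transaction type here is a hypothetical stand-in for the poster's core class:

    public class DepthGuard {
        // Hypothetical stand-in for the poster's transaction type.
        interface Transaction {
            boolean hasNested();
            Transaction nested();
        }

        private static final int MAX_DEPTH = 10000; // illustrative limit; tune empirically

        static void process(Transaction tx, int depth) {
            if (depth > MAX_DEPTH) {
                throw new IllegalStateException(
                        "recursion depth " + depth + " exceeds limit; rejecting transaction");
            }
            // ... per-level work here ...
            if (tx.hasNested()) {
                process(tx.nested(), depth + 1);
            }
        }
    }

This trades a StackOverflowError (which can leave the engine in a bad state) for an ordinary exception you can handle per transaction.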
I've recently learned about the -XX:+HeapDumpOnOutOfMemoryError VM argument and was told that it should be added as a matter of course to the HotSpot JVM, as it is off by default. One of my co-workers commented that maybe we shouldn't, because he heard there is some pitfall to doing this, but he can't remember what it was. I hate vague statements like that, but I'm trying to do my due diligence before making a final decision, so I'm doing some investigation.
Most of the references to it I can find are more about how to use it (and where the dump files are located) and don't speak to any issues with using it. This SO question refers to a different argument, but the answers seem relevant to this one as well and imply that there are no issues: Why is this Hotspot JVM option not the default? -XX:+PrintConcurrentLocks
Does anyone know of any reason not to turn on -XX:+HeapDumpOnOutOfMemoryError?
The main downside is that it creates a large file the first time a program gets this error (it happens only once per JVM). If you have a heap of 2 GB, it could create a file that big each run, filling up disk space with heap dumps you don't need. Since it's only useful for debugging/development purposes, it's not useful for most end users.
With this particular flag, I don't think there are any issues (I don't know about other flags). It is not even a diagnostic flag; it just dumps the heap/memory state when the JVM encounters an OutOfMemoryError (which happens at most once, and only as the JVM is already failing).
One thing you need to accept is that it may or may not behave as expected, because it is a -XX option, and:
Options that are specified with -XX are not stable and are subject to change without notice
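For reference, the flag is usually paired with a dump location, so a failing JVM doesn't write multi-gigabyte files into its working directory. A sketch of a launch command (the paths, sizes, and jar name are illustrative):

    java -XX:+HeapDumpOnOutOfMemoryError \
         -XX:HeapDumpPath=/var/dumps \
         -Xmx2g \
         -jar myapp.jar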
I'm in the process of benchmarking an app I've written. I ran my app through the benchmark 10 times in a loop (to get 10 results instead of only 1). Each time, the first iteration seems to take some 50-100 milliseconds longer than the rest of the iterations.
Is this related to the JIT compiler and is there anything one could do to "reset" the state so that you would get the initial "lag" included with all iterations?
To benchmark a long-running application you should allow for an initialization (first) pass: classes have to be loaded, code has to be generated, in web apps JSPs compile to servlets, etc. The JIT of course plays its role as well. Sometimes a pass can also take longer when garbage collection occurs.
It is probably caused by the JIT kicking in, however you probably want to ignore the initial lag anyway. At least most benchmarks try to, because it heavily distorts the statistics.
You can't "uncompile" code that has been compiled but you can turn compiling off completely by using the -Xint command line switch.
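For example (the class name is illustrative), this runs the entire program in the interpreter, with no JIT-compiled code at all:

    java -Xint MyBenchmark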
The first pass will probably always be slower because of the JIT. I'd even expect to see differences when more runs are made because of possible incremental compilation or better branch prediction.
For benchmarking, follow the recommendations given in the other answers (except I wouldn't turn off the JIT, because you'd have your app running with the JIT in a production environment).
In any case, use a profiler such as Java VisualVM (included in the JDK).
Is this related to the JIT compiler
Probably yes, though there are other potential sources of "lag":
Bootstrapping the JVM and creation of the initial classloader.
Reading and loading the application's classes, and the library classes that are used.
Initializing the classes.
JIT compilation.
Heap warmup effects; e.g. the overheads of having a heap that is initially too small. (This can result in the GC running more often than normal ... until the heap reaches a size that matches the application's peak working set size.)
Virtual memory warmup effects; e.g. the OS overheads incurred when the JVM grows the process address space and physical pages are allocated.
... and is there anything one could do to "reset" the state so that you would get the initial "lag" included with all iterations?
There is nothing you can do, apart from starting the JVM over again.
However, there are things that you can do to remove some of these sources of "lag"; e.g. turning off JIT compilation, using a large initial heap size, and running on an otherwise idle machine.
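For instance (the sizes and jar name are illustrative), pinning the initial heap to the maximum removes the heap-growth part of the warmup:

    java -Xms2g -Xmx2g -jar mybenchmark.jar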
Also, the link that @Joachim contributed above is worth a thorough read.
There are certain structures you might have in your code, such as singletons, which are initialized only once and consume system resources. If you're using a database connection pool, for example, this might be the case. There is also the time needed for Java classes to be initialized. For these reasons, I think you should discard that first value and keep only the rest.
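Pulling the advice in these answers together: run a few untimed warmup iterations before the measured ones. A minimal sketch (the iteration counts and workload are arbitrary):

    public class Bench {
        public static void main(String[] args) {
            for (int i = 0; i < 5; i++) {
                work(); // warmup: let class loading and JIT compilation settle
            }
            for (int i = 0; i < 10; i++) {
                long t0 = System.nanoTime();
                work();
                System.out.printf("run %d: %.2f ms%n", i, (System.nanoTime() - t0) / 1e6);
            }
        }

        static void work() {
            long sum = 0; // stand-in for the code under test
            for (int i = 0; i < 10000000; i++) {
                sum += i;
            }
            if (sum == 42) {
                System.out.println(); // prevent the loop from being optimized away
            }
        }
    }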
I am trying to reproduce java.lang.OutOfMemoryError in JBoss 4, which one of our clients got, presumably from running the J2EE applications over days/weeks.
I am trying to find a way for the webapp to spit out java.lang.OutOfMemoryError in a matter of minutes (instead of days/weeks).
One thing that comes to mind is to write a Selenium script and have it bombard the webapp.
One other thing we could do is reduce the JVM heap size, but we would prefer not to, as we want to see the limit of our system.
Any suggestions?
PS: I don't have access to the source code, as we just provide a hosting service (of course, I could decompile the class files...)
If you don't have access to the source code of the J2EE app in question, the options that come to mind are:
Reduce the amount of RAM available to the JVM. You've already identified this one and said you don't want to do it.
Create a J2EE app (it could probably just be a JSP), configure it to run within the same JVM as the target app, and have it allocate a ridiculous amount of memory. That will reduce the memory available to the target app, hopefully enough that it fails in the way you're trying to force.
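As an illustration of that second option, the memory hog could be as simple as a servlet that pins a chunk of heap on every request. The class name and sizes here are hypothetical (JBoss 4 uses the javax.servlet API):

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Each GET pins another 10 MB for the life of the JVM, starving the
    // target app until it throws OutOfMemoryError.
    public class MemoryHogServlet extends HttpServlet {
        private static final List<byte[]> hoard = new ArrayList<byte[]>();

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            hoard.add(new byte[10 * 1024 * 1024]); // 10 MB per hit; illustrative
            resp.getWriter().println("hoarding " + (hoard.size() * 10) + " MB");
        }
    }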
Try using profiling tools to investigate the memory leak. It is also good to investigate memory dumps taken after the OOM happens, and the logs. IMHO, reducing memory is not the right way to investigate, because you may run into issues not connected with the real production one.
Do both, but in a controlled fashion:
Reduce the available memory to the absolute minimum (using -Xms1M -Xmx2M, as an example, though I fear your app won't even load with such limitations).
Do controlled "nuclear irradiation": run Selenium scripts against each of the known working URLs before attacking the presumed guilty one.
Finally, unleash the power that shall not be raised: start VisualVM and any other monitoring software you can think of (DB execution is a usual suspect).
If you are using Sun Java 6, you may want to consider attaching to the application with jvisualvm in the JDK. This will allow you to do in-place profiling without needing to alter anything in your scenario, and may possibly immediately reveal the culprit.
If you don't have the source, decompile it - at least if you think the terms of use allow this and you live in a free country. You can use:
Java Decompiler or JAD.
In addition to all the other answers, I must say that even if you can reproduce an OutOfMemoryError and find out where it occurred, you probably haven't found out anything worth knowing.
The trouble is that an OOM occurs when an allocation cannot take place. The real problem, however, is not that allocation, but the fact that other allocations, in other parts of the code, have not been de-allocated (de-referenced and garbage collected). The failed allocation may have nothing to do with the source of the trouble (no pun intended).
This problem is larger in your case, as it might take weeks before the trouble starts, suggesting either a sparsely used application, an abnormal code path, or a relatively HUGE amount of memory in relation to what would be necessary if the code were OK.
It might be a good idea to ask around why this amount of memory is configured for JBoss and not something different. If it's recommended by the supplier, then maybe they already know about the leak and require this amount to mitigate the effects of the bug.
For this kind of error it really pays to have some idea of which code path the problem occurs in, so you can do targeted tests. And test with a profiler, so you can see at run time which objects (Lists, Maps, and such) are growing without shrinking.
That would give you a chance to decompile the correct classes and see what's wrong with them (closing or cleaning up in a try block rather than a finally block, perhaps).
In any case, good luck. I think I'd prefer to look for a needle in a haystack: when you find the needle, you at least know you have found it. :)
The root of the problem is most likely a memory leak in the webapp that the client is running. In order to track it down, you need to run the app with a representative workload with memory profiling enabled. Take some snapshots, and then use the profiler to compare the snapshots to see where objects are leaking. While source-code would be ideal, you should be able to at least figure out where the leaking objects are being allocated. Then you need to track down the cause.
However, if your customer won't release binaries so that you can run an identical system to the one he is running, you are kind of stuck, and you'll need to get the customer to do the profiling and leak detection himself.
BTW - there is not a lot of point causing the webapp to throw an OutOfMemoryError. It won't tell you why it is happening, and without understanding "why" you cannot do much about it.
EDIT
There is no point "measuring the limits" if the root cause of the memory leak is in the client's code. Assuming that you are providing a servlet hosting service, the best thing to do is to provide the client with instructions on how to debug memory leaks ... and step out of the way. And if they have a support contract that requires you to (in effect) debug their code, they ought to provide you with the source code to do your job.