We are getting frequent out-of-memory errors on our dev machines. We run WebSphere, Eclipse, SoapUI, and Maven on them. The server goes down with these out-of-memory errors after we restart our applications in WebSphere two or three times. We have already increased the memory setting in WebSphere to 1 GB.
So what I did was copy the JRE we use into the Eclipse and Maven folders, so that each of these uses its own JVM. But the behaviour of WebSphere is the same: two or three restarts and then out-of-memory errors.
Is there any way of making Eclipse and Maven use JVMs other than WebSphere's?
In response to the question:
If you start Java multiple times, multiple copies of Java will be running, each with its own memory. Eclipse and WebSphere are probably started separately, so they already use independent memory. Your trouble should not be there.
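You can verify this yourself by listing the running JVMs with the stock jps tool; each gets its own PID and therefore its own heap. The paths and main-class names below are only illustrative of what Eclipse, WebSphere, and Maven typically report:

$ jps -l
4212 /opt/eclipse/plugins/org.eclipse.equinox.launcher_1.3.0.jar
5120 com.ibm.ws.runtime.WsServer
6300 org.codehaus.plexus.classworlds.launcher.Launcher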
In response to your problem
Out of Memory
Both Eclipse and WebSphere can gobble up memory like there's no tomorrow. Look at the -X flags: add the flag for perm gen space to the flag for heap space to get the total memory consumption. Also allow some overhead for the OS, windowing environment, e-mail client, browser (500 MB to 1 GB or so, depending on the OS and what you're running). So it may simply be that the computer is out of memory.
More frequently, the amount of memory assigned to the JVM is just not enough: Java has not been started with enough memory for the app assigned to it. It is up to you to deduce whether it is heap space or PermGen space that ran out. Both can be adjusted; the flags are -Xmx and -XX:MaxPermSize. Look at the start scripts for WebSphere, as that is the one complaining.
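For example, on the Java 6-era JVMs that this generation of WebSphere ships with, both limits can be raised together in the server's start script. The values and the jar name below are purely illustrative, not a recommendation:

# Illustrative only: 1 GB heap, 256 MB PermGen
java -Xmx1024m -XX:MaxPermSize=256m -jar yourapp.jar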
Recommendation
Check which kind of memory is out, and search for that on Stack Overflow; either PermGen or Heap Space should do.
You should set the JVM -Xmx parameter to 256 MB or less in all three processes. Each will then never cross the 256 MB limit (assuming the program does not have memory leaks).
Copying the folder won't be enough.
In Eclipse, go to Preferences and, under Java -> Installed JREs, make sure the JRE used is on a different path than the one used by WebSphere.
Related
I have a Spring Boot app that I suspect might have a memory leak. Over time the memory consumption seems to increase, reaching about 500 MB of memory until I restart the application. After a fresh restart it takes something like 150 MB. The app should be a pretty stateless REST app, and there shouldn't be any objects left around after a request is completed. I would expect the garbage collector to take care of this.
Currently in production the app seems to use 343 MB of memory (RSS). I took a heap dump of the application and analysed it. According to the analysis, the heap dump is only 31 MB in size. So where does the missing 300 MB lie? How is the heap dump correlated with the actual memory the application is using? And how could I profile the memory consumption beyond the heap dump? If the memory used is not in the heap, then where is it? How do I discover what is consuming the memory of the Spring Boot application?
So where does the missing 300 MB lie?
A lot of research has gone into this, especially in trying to tune the parameters that control the non-heap. One result of this research is the memory calculator (binary).
You see, in Docker environments with a hard limit on the available amount of memory, the JVM will crash when it tries to allocate more memory than is available. Even with all the research, the memory calculator still has a slack option called "head-room", usually set to 5 to 10% of total available memory, in case the JVM decides to grab some more memory anyway (e.g. during intensive garbage collection).
Apart from head-room, the memory calculator needs four additional input parameters to calculate the Java options that control memory usage:
total-memory - a minimum of 384 MB for a Spring Boot application; start with 512 MB.
loaded-class-count - for a recent Spring Boot application, about 19,000. This seems to grow with each Spring version. Note that this is a maximum: setting too low a value will result in all kinds of weird behavior (sometimes an "OutOfMemory: non-heap" exception is thrown, but not always).
thread-count - 40 for a "normal usage" Spring Boot web-application.
jvm-options - see the two parameters below.
The "Algorithm" section mentions additional parameters that can be tuned, of which I found two worth the effort to investigate per application and specify:
-Xss set to 256 KB. Unless your application has really deep stacks (recursion), going from 1 MB to 256 KB per thread saves a lot of memory.
-XX:ReservedCodeCacheSize set to 64 MB. Peak CodeCache usage often occurs during application startup; going from 192 MB to 64 MB saves a lot of memory that can then be used as heap. Applications with a lot of active code at runtime (e.g. a web application with many endpoints) may need more CodeCache. If the CodeCache is too small, your application will use a lot of CPU without getting much done (this can also show up during startup: a too-small CodeCache can make startup take a very long time). The CodeCache is reported by the JVM as a non-heap memory region, so it is not hard to measure.
The output of the memory calculator is a bunch of Java options that all have an effect on what memory the JVM uses. If you really want to know where "the missing 300M" is, study and research each of these options in addition to the "Java Buildpack Memory Calculator v3" rationale.
# Memory calculator 4.2.0
$ ./java-buildpack-memory-calculator --total-memory 512M --loaded-class-count 19000 --thread-count 40 --head-room 5 --jvm-options "-Xss256k -XX:ReservedCodeCacheSize=64M"
-XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=121289K -Xmx290768K
# Combined JVM options to keep your total application memory usage under 512 MB:
-Xss256k -XX:ReservedCodeCacheSize=64M -XX:MaxDirectMemorySize=10M -XX:MaxMetaspaceSize=121289K -Xmx290768K
Besides heap, you have thread stacks, metaspace, the JIT code cache, native shared libraries and the off-heap store (direct allocations).
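One way to get a first-hand breakdown of exactly these regions is the JVM's Native Memory Tracking (available since JDK 8). The flag and the jcmd subcommand are standard; the PID and jar name are illustrative:

# Start the JVM with native memory tracking enabled (small overhead)
java -XX:NativeMemoryTracking=summary -jar app.jar

# Then ask the running JVM for a per-region summary
# (Java Heap, Class, Thread, Code, GC, Internal, ...)
jcmd 12345 VM.native_memory summary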
I would start with thread stacks: how many threads does your application spawn at peak? Each thread is likely to reserve 1 MB for its stack by default, depending on Java version, platform, etc. With (say) 300 active threads (idle or not), you'll allocate 300 MB of stack memory.
Consider making all your thread pools fixed-size (or at least giving them reasonable upper bounds). Even if this proves not to be the root cause of what you observed, it makes the app's behaviour more deterministic and will help you isolate the problem.
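A minimal sketch of the fixed-size pool idea, using a plain ExecutorService; the pool size and the toy task are illustrative:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BoundedPoolExample {
    public static void main(String[] args) throws InterruptedException {
        // At most 16 threads ever exist, so stack memory is bounded:
        // roughly 16 x 1 MB with default stack sizes (16 x 256 KB with -Xss256k).
        ExecutorService pool = Executors.newFixedThreadPool(16);
        for (int i = 0; i < 100; i++) {
            final int task = i;
            pool.submit(() -> System.out.println("task " + task));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}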
You can inspect the memory consumption of a Spring Boot app this way:
Package the Spring Boot app as a .jar file and execute it using java -jar springboot-example.jar.
Now open a terminal, type jconsole and hit Enter.
Note: the .jar file must already be running before you open JConsole.
A window will open, and the application you just started will appear in the Local Process section.
Select springboot-example.jar and click the Connect button.
When prompted, choose the Insecure connection option.
Finally you can see the Overview tab (Heap Memory, Threads, ...).
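If you prefer to skip the process picker, JConsole can also attach directly to a process id; the PID below is illustrative:

$ jps -l
12345 springboot-example.jar
$ jconsole 12345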
You can use "JProfiler" https://www.ej-technologies.com/products/jprofiler/overview.html
remotely or locally to monitor running java app memory usage.
You can using "yourkit" with IntelliJ if you are using that as your IDE to troubleshoot memory related issues for your spring boot app. I have used this before and it provides better insight to applications.
https://www.yourkit.com/docs/java/help/idea.jsp
Interesting article about memory profiling: https://www.baeldung.com/java-profilers
I have a Java process running on Ubuntu. According to jconsole its heap memory is 150 MB, but for the same process Ubuntu's System Monitor shows approx. 470 MB. Also, when I check the size of the jars on the classpath, it comes to around 200 MB.
I am assuming that all jars present on the classpath will be loaded into the JVM for that particular process.
Can anyone help me understand? Am I missing something?
Ubuntu's System Monitor shows the total memory occupied by the JRE (Java Runtime Environment). That includes other memory regions such as stack memory (native and Java stacks) and code memory (where Java classes and compiled code live), along with the heap. Thus, the heap figure JConsole shows will always be less than what the system monitor shows. Moreover, the JRE manages memory on its own, independent of the OS, so it may already have acquired more memory from the OS than your program currently needs. It holds on to that memory so that the next time your program needs more, it does not have to go back to the OS, because system calls requesting more memory from the OS are expensive.
Coming to your JAR-loading question: the jars on your classpath are not necessarily all loaded by the JRE. Classes are loaded on demand. So there may be 100 jars on your classpath but only 2 classes from one of the jars in use, in which case only those two classes are loaded into memory.
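You can watch which classes are actually loaded, and from which jars, with the standard -verbose:class flag; on JDK 8 and earlier the output looks like this (class names and paths illustrative):

$ java -verbose:class -jar app.jar
[Loaded java.lang.Object from /usr/lib/jvm/java-8-openjdk/jre/lib/rt.jar]
[Loaded com.example.Main from file:/home/dev/app.jar]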
For more sophisticated memory analysis, I recommend using VisualVM (jvisualvm), which comes with several plugins for analysing a program's memory.
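To see the gap between the JVM's view and the OS's view for yourself, you can also compare the two directly; the PID is illustrative:

$ jstat -gc 12345        # heap generation sizes and usage, as the JVM reports it
$ ps -o rss= -p 12345    # resident set size (KB), as the OS reports it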
I have a Java SE desktop application which uses a lot of memory (1.1 GB would be desired). All target machines (Win 7, Win Vista) have plenty of physical memory (at least 4 GB, most of them more). There is also enough free memory.
Now, when the machines have some uptime and a lot of programs have been started and terminated, the memory becomes fragmented (this is what I assume). This leads to the following error when the JVM is started:
JVM creation failed
Error occurred during initialization of VM
Could not reserve enough space for object heap
Even closing all running programs doesn't help in such a situation (although Task Manager and other tools report enough free memory). The only thing that helps is to reboot the machine and fire up the Java application as one of the first programs launched.
As far as I've investigated, the Oracle VM requires one contiguous chunk of memory.
Is there any other way to assign about 1.1 GB of heap to my Java application when this amount is available but may be fragmented?
I start my JVM with the following arguments:
-J-client -J-Xss2m -J-Xms512m -J-Xmx1100m -J-XX:PermSize=64m -J-Dsun.zip.disableMemoryMapping=true
Is there any other way to assign about 1.1 GB of heap to my Java application when this amount is available but may be fragmented?
Use an OS which doesn't fragment its virtual memory, e.g. 64-bit Windows or any version of UNIX.
BTW, it is hard for me to imagine how this is possible in the first place, but I know it to be the case. Each process has its own virtual memory, so its arrangement of virtual memory shouldn't depend on anything which is already running or has run before.
I believe it might be a hangover from the MS-DOS TSR days. Shared libraries are loaded at absolute addresses (toward the end of the signed 2 GB address space; the high half is reserved for the OS and the last 512 MB for the BIOS), meaning they must use the same address range in every program that uses them. Over time, the maximum contiguous address range is determined by the lowest shared library loaded or used (I don't know which, but I suspect the lowest loaded).
Recently we started using New Relic to monitor our production webapp hosted on a Tomcat 7.0.6 server, but we have observed that the memory footprint of this Tomcat increases continuously, and within a week it eats up all the server memory (an AWS High-Memory Double Extra Large instance) and becomes unresponsive. The only way to get it back is by restarting it.
We provide Xms and Xmx arguments when starting Tomcat, but within a few hours the memory usage of the Tomcat process crosses the Xmx value and keeps on increasing until all the server memory is gone. Here is the process command:
/usr/java/jdk1.6.0_24//bin/java
-Djava.util.logging.config.file=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/conf/logging.properties
-Xms8192m
-Xmx8192m
-javaagent:/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/newrelic/newrelic.jar
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Duser.timezone=Asia/Calcutta
-Djava.endorsed.dirs=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/endorsed
-classpath /xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/bin/bootstrap.jar:/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/bin/tomcat-juli.jar
-Dcatalina.base=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6
-Dcatalina.home=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6
-Djava.io.tmpdir=/xxx/xxx/xxx/xxx/apache-tomcat-7.0.6/temp org.apache.catalina.startup.Bootstrap start
Ideally I would expect this process not to use more than 8 GB of memory, but within hours it goes above 10 GB and within a few days above 20 GB, and everything else on this server suffers because of it (I use 'top' to see memory usage). How is this possible?
There's an issue which affects any Sun/Oracle JVM and manifests as unbounded growth in non-heap (native) memory. There is a workaround in place for New Relic Java agent versions 2.16+: add a shutdown delay to class transformation in the common section of your newrelic.yml file.
class_transformer:
  shutdown_delay: 3600
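In context, the setting nests under the common stanza that newrelic.yml uses for shared settings (a minimal sketch; all other settings omitted):

common: &default_settings
  class_transformer:
    shutdown_delay: 3600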
From the changelog:
Work-around for Oracle JVM bug that in rare cases causes a native memory leak
In rare cases, the Oracle JVM can leak native OS memory (not heap space) when classes are intercepted by the agent. This setting turns off interception of classes that are loaded after the given number of seconds. The agent will continue to monitor classes loaded before this time.
I am sharing some more information on the incident reported above. The memory leak is not in the Java heap: the application never hits any out-of-memory error (8 GB is the Java heap max limit we have set). However, the virtual and resident memory keep increasing until the RAM runs out.
We have confirmed that this leak happens when the New Relic agent is used.
Version: New Relic Agent v2.1.2
Sorry for the trouble. We (New Relic) are investigating the problem, but the first suggestion is to please try the latest 2.2.1 version of the Java agent, which made substantial changes to the way we instrument classes.
I will follow-up here when we have more information.
My Linux server needs to be able to handle 30+ Eclipse instances for developers. I did a quick test running 10 Eclipse instances. The Java process associated with each Eclipse initially used around 200 MB of RSS memory and increased to around 550 MB as more projects were loaded.
But the Java process doesn't seem to release memory after closing/deleting all projects within the Eclipse instances; I still see it using over 550 MB RSS.
How can I change the Eclipse or Java settings so that the memory footprint is reduced when developers close projects or are idle for a while?
Thanks
You may want to experiment with these (and other) JVM tuning options to make the JVM less reluctant to return memory to the OS:
-XX:MaxHeapFreeRatio Maximum percentage of heap free after GC to avoid shrinking. Default is 70.
-XX:MinHeapFreeRatio Minimum percentage of heap free after GC to avoid expansion. Default is 40.
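For example, in eclipse.ini (the values are illustrative and should be tuned per workload; the flags go after the -vmargs line):

-vmargs
-Xmx512m
-XX:MinHeapFreeRatio=10
-XX:MaxHeapFreeRatio=30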
However, I suspect that you won't see the eclipse process shrink to anywhere near its initial size, since eclipse is a huge, complex application that probably lazy-loads (but does not unload, once used) a lot of classes and associated data structures.
I've never seen Java release memory.
I don't think you will get any value out of trying to get Eclipse to release memory; I've watched that little memory counter for years and never once seen the allocated memory drop.
You might try one of these.
After each session, exit the JVM and restart.
Set your -Xmx lower.
Separate your instances into categories with high -Xmx and low -Xmx and let each user determine which one they want.
As a side thought, if it really mattered to you, you MIGHT be able to run multiple Eclipse instances in one JVM. It would probably be WAY too much work (man-weeks to man-years), but if you could get it right you could reduce overhead by perhaps 150-200 MB per instance. The disadvantage is that a JVM crash (pretty rare these days) would take down everyone.
Testing this theory would be a matter of calling Eclipse's main from within an existing JVM and trying to get it to display somewhere useful. The rest of the man-year would be spent figuring out where they used evil static variables or singletons and changing them to something else.
Switch the JVM to the G1 garbage collector with the HeapFreeRatio parameters. Use these options in eclipse.ini:
-XX:+UnlockExperimentalVMOptions
-XX:+UseG1GC
-XX:MinHeapFreeRatio=5
-XX:MaxHeapFreeRatio=25
Now when Eclipse eats up more than 1 GB of RAM for a complicated operation and drops back to 300 MB after garbage collection, that memory will be released back to the operating system.
I would suggest looking into garbage collection: setting the right options, or even forcing GC periodically, might increase the time it takes for Eclipse's memory usage to grow high.
The following link might be useful: http://www.eclipsezone.com/eclipse/forums/t93757.html
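If you want to experiment with forcing a collection from outside the Eclipse process, the stock jcmd tool can trigger one; the PID and launcher path are illustrative:

$ jps -l
7654 /opt/eclipse/plugins/org.eclipse.equinox.launcher_1.3.0.jar
$ jcmd 7654 GC.run    # asks that JVM to run System.gc()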