GlassFish stops by itself - Java

I'm new to GlassFish.
I have a VPS on DigitalOcean with 512 MB of RAM.
I'm only running 1 domain and 1 simple web service application.
But GlassFish stops without my telling it to.
Any suggestions?
I'm using GlassFish 4.1.1.
Edit for Trevor: I forgot to mention the error log.
I checked the error log. There is nothing that says there was an error or why GlassFish stopped. It runs okay when I restart it, but after a few hours it happens again.

There is a good chance you are running out of memory.
By default, the server has -Xmx512m set, which means the heap size can increase to 512MB. Since that is all you have available on your DigitalOcean machine, it will start with a lower amount, and increase as you deploy your application to it. Once GlassFish tries to use more memory than your DigitalOcean machine can spare, it will die.
Decreasing this to something like -Xmx256m will probably give you more stability. For simple apps, you should be OK with that.
Note: you may want to also decrease the -XX:MaxPermSize=192m to -XX:MaxPermSize=128m. If you are on Java 8, then this doesn't matter any more and the value isn't used. For Java 7, decreasing this will help.
Either change this value through the admin console at http://[$HOSTNAME]:4848: go to Configurations -> server-config -> JVM Settings, then click the JVM Options tab (you will need to change the value, click Save, then restart GlassFish).
Or change it in domain.xml directly (being careful to get it right):
glassfish41/glassfish/domains/domain1/config/domain.xml
You will notice that the value appears in that file twice. One occurrence is in the actual server-config, used for the server itself, and the other is in the "default-config", which is a template used for creating new configurations. Make sure you change the correct one! If you're unsure, just change both.
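For orientation, the entries in question are <jvm-options> elements inside the <java-config> section of each config; a minimal sketch of the edited lines (the surrounding markup in your domain.xml may differ) looks like this:
<!-- inside <java-config> under <config name="server-config"> -->
<jvm-options>-Xmx256m</jvm-options>
<jvm-options>-XX:MaxPermSize=128m</jvm-options>  <!-- only relevant on Java 7 -->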

Related

How to change Java heap size for Tomcat 9 installed as Windows Service from command line?

I have Tomcat 9 installed as a service on Windows 7 64-bit. I want to:
see what heap size is currently configured and active
update the service configuration persistently to use a different heap size
verify that Tomcat is actually using the new heap size.
do all that from the command line.
For 1 and 3: I naively tried to use jconsole, but I can't find the process there because Tomcat is running as a Local System service. While I found out how to run jconsole as the Local System account, it seems that JMX is deactivated when Tomcat is installed as a service.
So finding out the currently used memory sizes via JMX seems to be at least very complicated (it would possibly require enabling remote JMX, which should be over TLS...).
For 2: I suppose this is the corresponding place in Tomcat's documentation, which reads:
To update the service parameters, you need to use the //US// parameter.
Update the service named 'Tomcat9'
C:\> tomcat9 //US//Tomcat9 --Description="Apache Tomcat Server - http://tomcat.apache.org/ " ^
--Startup=auto --Classpath=%JAVA_HOME%\lib\tools.jar;%CATALINA_HOME%\bin\bootstrap.jar
But I don't understand that text well enough to apply it to my problem. In particular, I don't want to change other parameters (like the description, startup type, etc.).
As far as I understand, when Tomcat runs as a service the configuration is stored in the Windows registry, so the usual configuration in tomcat/conf does not apply, or at least applies only partly.
Please note that this question is not about installing Tomcat, but about modifying an existing installation. Also, I am not interested in some hacky way to get the desired result (somehow), but in the best practice for doing this; it would be perfect to have links to reference documentation for that.
Given the documentation that you link to,
--JvmMx
Maximum memory pool size in MB. (Not used in exe mode.)
should help to control the heap size.
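So, assuming your service really is named 'Tomcat9', an update call along these lines (a sketch, untested; the value is in megabytes) should persist a new maximum heap without touching the other service parameters:
C:\> tomcat9 //US//Tomcat9 --JvmMx=1024
As far as I understand procrun, //US// only updates the parameters you pass, but treat this as a sketch and verify it against the documentation you linked.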
For fetching the current values, a tool such as jvmtop might be the easiest answer.
For 1 and 3 there is jmap. You just need to know the process ID of the running Tomcat instance.
jmap -heap 7082
Here is the output from a running JVM I have right now (the relevant lines):
Heap Configuration:
MinHeapFreeRatio = 0
MaxHeapFreeRatio = 100
MaxHeapSize = 1073741824 (1024.0MB) // that is -Xmx flag
....
NewSize = 357564416 (341.0MB) // 1
MaxNewSize = 357564416 (341.0MB)
OldSize = 716177408 (683.0MB) // 2
1 + 2 = -Xms flag
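To get the process ID in the first place, something like the following may work (a sketch: jps may not list a service running as the Local System account unless you run it from an elevated prompt, in which case tasklist is a fallback; the image name tomcat9.exe is an assumption based on the service installer):
C:\> jps -lv
C:\> tasklist /FI "IMAGENAME eq tomcat9.exe"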
Unfortunately I can't answer 2, since I've never started Tomcat on Windows as a service (I hardly know what that means for Windows). But assuming this is a process that is started by Windows via a script...
Shouldn't tomcat9 -Xms512M -Xmx2G ... work? Again, just a hint, not sure. The last thing to note is that the heap size can only be changed at start-up of the JVM; obviously you can't change it at runtime while Tomcat is running (just in case...).

Webstart application fails to start with -Xmx2G on Java 8u60

I have a Java Webstart application that starts successfully with -Xmx1G, but fails to start with -Xmx2G. Some of my users really need 2G of heap.
This seems to be a problem with Java 8u60 only, because I have a report of someone launching successfully with Java 8u51.
The failure looks like this: I see the blue 'Java...' splash screen, and then after a few seconds, poof it's gone, before displaying the Java console and without producing any trace information in the expected place.
The failure occurs only on those clients with less than 2G of memory available. But, I am a little surprised that requesting a 'maximum' heap size could cause the application to fail so early and without any diagnostic information. We are dealing with a 'maximum' value, after all, not an 'initial' value. I read in multiple places that the JVM is not supposed to do this.
But I also remembered reading that the 'initial', if unspecified, is based on the maximum. So, along with passing -Xmx2G, I tried passing -Xms512M, -Xms256M, and -Xms128M. But, this attempt to shrink the initial heap size did not help. I cannot get this thing to start with -Xmx2G!
Does anyone have any light to shed on this situation? A solution? A workaround? In the short term, I'll change to -Xmx1G, but, as I said at the beginning, I have some users that really need -Xmx2G. I'd like to avoid having two separate *.jnlp files, which would also entail having two separate *.jar files!
It turns out that this is exactly what Webstart on Java 8u60 does if the client machine does not have enough memory to satisfy -Xmx: it attempts to start, and then, poof, it disappears without any indication as to what went wrong.
So I will end up having to build my application in different configurations if I want to enable the users with more memory to allocate that memory to my application. This is because signing requires the *.jnlp file to be embedded into the *.jar file itself, and this *.jnlp file must be an exact match with the *.jnlp file used to launch the application.
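For reference, the heap request itself lives on the j2se (or java) element in the JNLP's resources section; a minimal sketch (the version string, sizes, and myapp.jar are placeholders) looks like:
<resources>
  <j2se version="1.8+" initial-heap-size="128m" max-heap-size="2048m"/>
  <jar href="myapp.jar" main="true"/>
</resources>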

Openshift gear size and increasing Xmx

Background:
In OpenShift, I am using 3 small gears, each with 512 MB. The app has a web load balancer added to it. The web app is a Tomcat 7 based app deployed with jbossews-2.0.
Question 1: I read somewhere that the web load balancer will itself run in a gear. So does that mean that for scaling I have only one more gear left? That is, 1 gear in which the Tomcat instance runs, 1 where the load balancer runs, and if load increases I'll have the last gear out of 3 left to scale with?
Question 2: The documentation says that each gear comes with 512 MB. I have configured New Relic with my Tomcat 7 app. I am seeing the following JVM configuration set:
-XX:MaxPermSize=102m, -XX:MinHeapFreeRatio=20, -Xms40m, -Xmx256m
Now if I have 512 MB available, am I right in thinking that I can increase the max heap to something greater, maybe -Xmx384m or maybe the full 512 MB?
Question 3: If yes, how do I do so? I have added action hooks that do set the arguments, but the environment settings in New Relic still show the max heap as 227 MB. In the provided list of arguments I see two Xmx arguments, my custom one and one that comes by default:
-Xms40m, -Xmx256m, -Xmx384m,
It seems that the JVM picks the first argument it finds, and I am not sure why it is not being overridden by my custom arg. To set it, this is what I did in my pre hook:
export _JAVA_OPTIONS="$_JAVA_OPTIONS -Xmx384m -javaagent:/var/lib/openshift/{###}/app-root/repo/newrelic/newrelic.jar"
I also tried:
export JAVA_OPTS="-Xmx384m $JAVA_OPTS"
Please advise on how to get only my custom Xmx argument used instead of the default.
Question 1:
Copied from https://developers.openshift.com/en/overview-platform-features.html#how-scaling-works:
The first web gear in a scaling application has HAProxy installed, but also your web application. Once you scale to 3 gears, the web gear that is collocated with HAProxy is turned off, to allow HAProxy more resources to route traffic. Here’s a diagram of your scalable app. If you scale down back to 2 gears or less, the web cartridge on your first gear is started again.
Questions 2 & 3
You can increase the heap size; however, you essentially have to reset all the existing JAVA_OPTS along with your new -Xmx value. So next time use the JAVA_OPTS env variable to set your heap size, and be sure to copy the rest of the OPTS from https://github.com/openshift/origin-server/blob/master/cartridges/openshift-origin-cartridge-jbossas/versions/7/bin/standalone.conf#L136.
I don't think you can actually change those options, as they are set by the cartridge/gear size combination and JAVA_OPTS is controlled by OpenShift. Instead, you have to use JAVA_OPTS_EXT, and that is limited to some extra params like system properties, garbage collector settings, etc.
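As a sketch (assuming the rhc client tools are installed and that the jbossews cartridge honors JAVA_OPTS_EXT; the app name is a placeholder):
rhc env set JAVA_OPTS_EXT="-Xmx384m" -a myapp
rhc app restart -a myapp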

Tomcat7 dying, nothing in the logs

I have searched and searched, but it did not help me much, hence this new question.
Platform
Ubuntu 11.10 server 64 bit
JVM 1.7.0_03
Tomcat 7
There is nothing special in the configuration - the front-end server is Apache using the AJP connector. Tomcat runs as an Ubuntu service.
On our server, tomcat7 is dying and I cannot figure out the reason. I have checked all the log files (syslog, catalina.out, even auth.log) to see if something is getting logged.
As per the top command, the server still has around 4 GB of memory free and CPU usage averages around 35% most of the time.
In order to isolate the problem, is there any way to get the exit status code of the Tomcat process that terminated?
I have read some reports of the JVM writing an error log in case of a JVM crash. I am not seeing that either.
It seems like I need to set ulimit to get a core dump, but I am not sure how to do that for the Tomcat service, or whether the setting applies to all users.
One way to set ulimit for the Tomcat service without interfering with anything else would be to add a ulimit command to the catalina.sh script. (It is a bit hacky ... but it sounds like you are at the point where hackiness might give happiness.)
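For example (a sketch; the limits shown are illustrative), near the top of catalina.sh you could add:
# allow core dumps for the JVM started by this script
ulimit -c unlimited
# optionally raise the open-file limit as well, e.g.:
# ulimit -n 8192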

Out of Memory on Tomcat Shutdown

Short description of my problem: I start up Tomcat with my deployed Wicket application. When I want to shut down tomcat I get this error message:
Error occurred during initialization of VM
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:640)
at java.lang.ref.Reference.<clinit>(Reference.java:145)
I am running the following setup:
Ubuntu Linux: 10.04 (lucid) with a 2.6.18-028stab094.3 kernel
Java Version: "1.6.0_26" Java HotSpot(TM) 64-Bit Server VM
Tomcat Version: 7.0.23
jvm_args: -Xms512m -Xmx512m -XX:MaxPermSize=205m (these are added via CATALINA_OPTS, nothing else)
Wicket 1.5.1
Tomcat is configured with two virtual hosts on subdomains with mod_proxy
My application is deployed as ROOT.war in the appbase directory (it makes no difference if I deploy one or both applications)
With no application deployed there is no OOM on shutdown, unless I mess around with the JVM args
The size of the WAR is about 500 KB; all libraries are deployed in tomcat/common/lib (a directory which I added to common.loader in conf/catalina.properties)
ulimit -u -> unlimited
When I check the Tomcat manager app it says the following about the JVM memory:
Free memory: 470.70 MB Total memory: 490.68 MB Max memory: 490.68 MB
(http connector) Max threads: 200 Current thread count: 6 Current thread busy: 1
'top' or 'free -m' is similar:
Mem: 2097152k total, 1326772k used, 770380k free, 0k buffers
20029 myuser 18 0 805m 240m 11m S 0 11.7 0:19.24 java
I tried to start jmap to get a dump of the heap; it also fails with an OutOfMemoryError. Actually, as long as one or both of my applications are deployed, any other Java process fails with the same OOM error (see top).
The problem occurs while the application is deployed, so something is seriously wrong with it. However, the application actually runs smoothly for quite a while. But I have seen OOMs in the application as well, so I don't trust the calm.
My application is using a custom filter class. Could that be it?
For completeness (hopefully), here's the list of libraries in my common/lib:
activation-1.1.jar
antlr-2.7.6.jar
antlr-runtime-3.3.jar
asm-3.1.jar
asm-commons-3.1.jar
asm-tree-3.1.jar
c3p0-0.9.1.1.jar
commons-collections-3.1.jar
commons-email-1.2.jar
dependencies-provided.tgz
dom4j-1.6.1.jar
ejb3-persistence-1.0.2.GA.jar
geronimo-annotation_1.0_spec-1.1.1.jar
geronimo-jaspic_1.0_spec-1.0.jar
geronimo-jta_1.1_spec-1.1.1.jar
hibernate-annotations-3.4.0.GA.jar
hibernate-commons-annotations-3.1.0.GA.jar
hibernate-core-3.3.0.SP1.jar
hibernate-entitymanager-3.4.0.GA.jar
hibernate-search-3.1.0.GA.jar
javassist-3.4.GA.jar
joda-time-1.6.2.jar
jta-1.1.jar
log4j-1.2.16.jar
lombok-0.9.3.jar
lucene-core-2.4.0.jar
mail-1.4.1.jar
mysql-connector-java-5.1.14.jar
persistence-api-1.0.jar
quartz-2.1.1.jar
servlet-api-2.5.jar
slf4j-api-1.6.1.jar
slf4j-log4j12-1.6.1.jar
stringtemplate-4.0.2.jar
wicket-auth-roles-1.5.1.jar
wicket-core-1.5.1.jar
wicket-datetime-1.5.1.jar
wicket-extensions-1.5.1.jar
wicket-request-1.5.1.jar
wicket-util-1.5.1.jar
xml-apis-1.0.b2.jar
I appreciate any hint or even speculation that gives me additional ideas about what to try.
Update: I tested some more and found that this behaviour only occurs while one or both of my applications are deployed. The behaviour does not occur on an "empty" Tomcat (that was a mistake on my part, messing with the JVM args).
Update 2: I am currently experimenting, trying to reproduce this behaviour in a virtual box, as I want to debug this with a profiler. I am still not convinced that it should be impossible to run my setup on 2 GB of RAM.
Update 3 (10/01/12): I am trying to run Jenkins instead of my own application. Same behaviour, so it is definitely a server configuration issue. Jenkins jobs fail when Maven is called, so I need not even try the shutdown hack suggested below, because I need a second Java process while running Jenkins. It was suggested to me that, because this is a virtual server, ulimits may be imposed from outside and I would not be able to see them. I think I'll ask a new question regarding this. Thanks all.
Update 4 (02/05/12): see below for the answer that contains the hint. I'll clarify some more up here: I am now 95% sure that the errors occur because I am reaching my thread limit. However, because this is a virtual server, the method described below does not work to check this value, because it is not visible with ulimit; that was what was confusing me. Only today did I find out that this is the "numproc" value that I can see in the Parallels Power Panel I can log into for my virtual server. There were resource alerts for numproc, but I did not see those either until just now. The value has a hard limit of 96, which of course I cannot change. The current value of numproc corresponds to the number of processes I see with "top" after toggling "H" to see threads. I had a very hard time finding this because the numproc value is hidden deep inside the panel. Sadly, 96 is a rather low number if you want to run Tomcat alongside Apache and MySQL. I am also rather sad that I cannot even find this value in the small print of my hosting contract, even though it is quite relevant to my application, I dare say. So I guess I'll need a server upgrade.
Thanks all for your helpful answers; in the end everyone helped me a bit in finding out what the problem was.
The Tomcat shutdown procedure consists of sending a command/word via a TCP port to the running Tomcat JVM. This port is configured in server.xml (if I remember correctly; I'm writing on my phone right now). So far so good.
Unfortunately, the shutdown script does this by starting a second JVM using the same Java options used for Tomcat. Your system simply does not have enough memory for this.
As a solution, you could write your own stop script using telnet or something; see the sketch below.
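For example, a minimal stop script along these lines (a sketch; it assumes the default shutdown port 8005 and the default SHUTDOWN command from server.xml, and that nc is installed) avoids starting a second JVM:
#!/bin/sh
# send the shutdown command configured in server.xml to Tomcat's shutdown port
echo "SHUTDOWN" | nc localhost 8005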
I could help with that later if needed.
Hope that helps.
Best regards, Bert
Seems you have too many threads open.
Use this command:
ulimit -u
What is the result?
It should be something like:
max user processes (-u) 100
If this is correct, you can edit this file :
/etc/security/limits.conf
and add the following modifications:
#<domain> <type> <item> <value>
user soft nproc 10000
user hard nproc 10000
You can probably survive for a while like this. All you need to do is kill the Tomcat process whenever you need to restart it. It is not a nice approach, but the main concern is that your application runs correctly.
It seems to me, though, that in the long run you might need to order a hosting plan with more RAM available.
I was having a similar problem with a Tomcat installation just last week. I managed to fix it by giving Tomcat a smaller heap. Something like this:
export CATALINA_OPTS="-Xms256m -Xmx512m"
before starting Tomcat may help. In the meantime you'll have to kill it the old-fashioned way, with a kill -9 ;)
EDIT: you could also take a look here; it appears Tomcat automatically creates a bunch of "spare" threads, but you can limit those, as well as your max thread count, in the config. Hope it helps.
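If you go that route, the relevant knobs live on the Connector in conf/server.xml; a sketch (port, protocol, and values are illustrative) might look like:
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="50" minSpareThreads="4" />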
