The Domino server crashed with a Java OutOfMemoryError (HTTP). It generated a Snap .trc file (Snap.20160426.111944.4212.0007.trc). How can this be analyzed? It looks like IBM does have a TraceFormat jar file, but from what I can tell that ships with WebSphere, not with Domino (there is a TraceFormat.dat file in Domino).
Any suggestions on where to get this?
Howard
That's strange. I've never seen such a file on any Domino server. I'm more used to NSD files, which are a pain in the a** to read, but can at least help form an idea of what the server was doing around the time of the crash.
Anyway, one thing to know is that Domino ships with ridiculously low default settings for JVM memory allocation. You may want to check the notes.ini for the parameter HTTPJVMMaxHeapSize: chances are it is set way below the actual capacity of the machine.
While you're there, you could also check other JVM-related parameters such as JavaMaxHeapSize.
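For example, entries along these lines in notes.ini (the values here are only an illustration; size them to the RAM actually available on the box):
HTTPJVMMaxHeapSize=1024M
JavaMaxHeapSize=512M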
It's actually a Minecraft server. I have 16GB of RAM on my desktop here, running a 4-core processor at 3.0 GHz per core and 4GB of video memory. It's a pretty beefy computer (especially back in the day), yet it is still able to hold its own as a gaming computer even today, running some pretty awesome games from Xbox One and whatever.
Well, I'm trying to run a game server on this desktop (I know it can handle it). Problem is, the server runs, but I can't see any of the mobs (the NPC creatures of the world) on the map, yet I can hear them. I know they are there. I go around my map, hearing them but not seeing them.
I looked in other places on the web regarding this issue and found that it is a memory issue (not enough memory). So I need to increase the memory for Java 8. Problem is, my server console says "Ignoring max memory--support removed in 8.0", meaning that even though I set the memory in the bat file that runs the server, it ignores how much I am telling it to use... and this is annoying.
Okay, here are more details.
I entered the /memory command in the server console.
The server reports that the max memory allocated to the server is only a mere 1GB! And I'm like, WHAT!? Because I know I have WAY more than that to offer my server! I need to increase that. So this is the issue.
To sum it up: Java 8 says it no longer supports the max/min memory settings, so if I set it up in a bat file to use 10,000MB (10GB) for my server when I run it, it ignores that... yet I need to force it to USE THAT AMOUNT. How do I do this?
In the Control Panel I already set it under the Java tab (in the field that is already there by default).
So I'm not sure what else to do.
Seems to me it was dumb of Java to remove support for heap memory customization in 8; it makes me miss Java 7, if you ask me.
So any idea how I can make this work?
Make sure your java arguments include -Xmx and -Xms.
Read this question/answer for further details: What are the Xms and Xmx parameters when starting JVMs?
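For example, the java line in your start .bat could look something like this (the jar name and the sizes below are placeholders; adjust them to your install and your RAM):
java -Xms2G -Xmx8G -jar minecraft_server.jar nogui
With 16GB of physical RAM, leaving a good chunk for the OS rather than handing all of it to the server is usually the safer choice.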
My application is just a bigger version of the default JHipster app. I don't even have a cache.
I deployed it successfully on an Amazon free tier t1.micro instance.
I experienced some random 503 errors. I checked the health of the instance; sometimes it said "no data sent", other times "93% of memory is in use". Now it's down (red).
I cloned the environment, then terminated the original one. I get those various errors.
I deployed the WAR with the dev Spring profile, but I don't believe that's what is causing this much horror.
Do I need to configure the Java memory usage? Why could the app be this memory hungry?
I posted the question on Stack Overflow as I care more about performance tuning of the deployed JHipster WAR, but if you think it's more a problem with Amazon, please let me know why you think that.
Thanks
Deploy the application on an instance with much more memory, e.g. a t2.large (8GB).
The size of an existing instance can be changed from the console: "stop" the instance, then under "Instance Settings" change the "Instance Type", and start it again.
Ensure that your application has a way of attaching jconsole to it (apparently the development version does, via JMX). See http://docs.oracle.com/javase/8/docs/technotes/guides/management/jconsole.html for more information on jconsole.
Run the application and monitor the nice graphs in jconsole.
See what the peak is over a few days of normal use. Also log on to the server with ssh and use free -m to see the system memory use (see http://www.linuxatemyram.com/ for a guide to interpreting the data).
Once you know the actual amount of RAM it uses, choose an appropriate instance size; see http://www.ec2instances.info/
You might need to adjust the -Xmx setting. I don't know the specifics with JHipster, but this is a common requirement for Java applications; a sketch follows below.
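A minimal sketch of what that could look like when launching the executable WAR (the file name and sizes are placeholders, not JHipster-specific advice):
java -Xms256m -Xmx512m -jar yourapp.war
Keep in mind that a t1.micro has only around 600MB of RAM in total, so a heap much bigger than a few hundred MB will not fit next to the OS anyway.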
I have a Java Webstart application that starts successfully with -Xmx1G, but fails to start with -Xmx2G. Some of my users really need 2G of heap.
This seems to be a problem with Java 8u60 only, because I have a report of someone launching successfully with Java 8u51.
The failure looks like this: I see the blue 'Java...' splash screen, and then after a few seconds, poof it's gone, before displaying the Java console and without producing any trace information in the expected place.
The failure occurs only on those clients with less than 2G of memory available. But, I am a little surprised that requesting a 'maximum' heap size could cause the application to fail so early and without any diagnostic information. We are dealing with a 'maximum' value, after all, not an 'initial' value. I read in multiple places that the JVM is not supposed to do this.
But I also remembered reading that the 'initial', if unspecified, is based on the maximum. So, along with passing -Xmx2G, I tried passing -Xms512M, -Xms256M, and -Xms128M. But, this attempt to shrink the initial heap size did not help. I cannot get this thing to start with -Xmx2G!
Does anyone have any light to shed on this situation? A solution? A workaround? In the short term, I'll change to -Xmx1G, but, as I said at the beginning, I have some users that really need -Xmx2G. I'd like to avoid having two separate *.jnlp files, which would also entail having two separate *.jar files!
Turns out that this is exactly what Webstart on Java 8u60 does if the client machine does not have enough memory to satisfy -Xmx. It attempts to start, and then poof, it disappears without any indication as to what went wrong.
So, I will end up having to build my application in different configurations if I want to enable the users with more memory to allocate that memory to my application. This is because signing requires the *.jnlp file to be embedded in the *.jar file itself, and this embedded *.jnlp file must be an exact match for the *.jnlp file used to launch the application.
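For reference, the heap request lives in the resources section of the JNLP, which is why each heap variant ends up needing its own signed copy. Roughly like this (the jar name is a placeholder):
<resources>
  <j2se version="1.8+" max-heap-size="2048m"/>
  <jar href="myapp.jar" main="true"/>
</resources>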
I have a Java server that I wrote myself running as a service. Right now it looks like the application is somehow eating all my drive space at a rate of 1GB per hour.
After stopping the service the disk space becomes available again by itself (I'm not deleting anything). The application does not create any files or write to disk besides logs and the database, and those are not growing that fast.
The big problem with this is that I can't find any file or folder that is eating up all my drive. I don't know if it is a system file that I don't have access to from Explorer, or if it's a virus or a JVM bug. I'm using the Oracle 64-bit JVM from JDK 7 update 7.
I'd really appreciate any help you can provide with this. I have never seen something like this before.
Thanks.
Here are some possible pointers:
Check if your disk is full because of other applications (possibly malware)
Check if there are any IO operations from your application
Check if local repositories (like .m2 or .gradle/caches) are filling it up with transitive dependencies during builds
If possible, add a couple of log statements that record the size of your hard disk using new File("/").getTotalSpace(), along with RAM details, and watch how they change over time (see the sketch after this list)
Finally, if nothing works out, try your application on another machine
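To make the logging pointer concrete, here is a rough sketch (the class name and the interval are my own choices; fold the loop body into your existing logging if you prefer):
import java.io.File;

public class DiskSpaceLogger {
    public static void main(String[] args) throws InterruptedException {
        File root = new File("/"); // on Windows use e.g. new File("C:\\")
        Runtime rt = Runtime.getRuntime();
        while (true) {
            long freeDiskMb = root.getUsableSpace() / (1024 * 1024);
            long usedHeapMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            System.out.println("free disk: " + freeDiskMb + " MB, used heap: " + usedHeapMb + " MB");
            Thread.sleep(60_000); // log once a minute
        }
    }
}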
Short description of my problem: I start up Tomcat with my deployed Wicket application. When I want to shut down Tomcat I get this error message:
Error occurred during initialization of VM
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:640)
at java.lang.ref.Reference.<clinit>(Reference.java:145)
I am running the following setup:
Ubuntu Linux: 10.04 (lucid) with a 2.6.18-028stab094.3 kernel
Java Version: "1.6.0_26" Java HotSpot(TM) 64-Bit Server VM
Tomcat Version: 7.0.23
jvm_args: -Xms512m -Xmx512m -XX:MaxPermSize=205m (these are added via CATALINA_OPTS, nothing else)
Wicket 1.5.1
Tomcat is configured with two virtual hosts on subdomains via Apache mod_proxy
My application is deployed as ROOT.war in the appbase directory (it makes no difference if I deploy one or both applications)
With no application deployed, the OOM does not occur on shutdown, unless I mess around with the JVM args
The size of the war is about 500k, all libraries are deployed in tomcat/common/lib (directory which I added to common.loader in conf/catalina.properties)
ulimit -u -> unlimited
When I check the Tomcat manager app it says the following about the JVM memory:
Free memory: 470.70 MB Total memory: 490.68 MB Max memory: 490.68 MB
(http connector) Max threads: 200 Current thread count: 6 Current thread busy: 1
'top' or 'free -m' is similar:
Mem: 2097152k total, 1326772k used, 770380k free, 0k buffers
20029 myuser 18 0 805m 240m 11m S 0 11.7 0:19.24 java
I tried to start jmap to get a dump of the heap, it also fails with an OutOfMemoryError. Actually as long as one or both of my applications are deployed any other java process fails with the same OOM Error (see top).
The problem occurs while the application is deployed, so something is seriously wrong with it. However, the application actually runs smoothly for quite a while. But I have seen OOMs in the application as well, so I don't trust the calm.
My application uses a custom filter class; could that be it?
For completeness (hopefully), here's the list of libraries in my common/lib:
activation-1.1.jar
antlr-2.7.6.jar
antlr-runtime-3.3.jar
asm-3.1.jar
asm-commons-3.1.jar
asm-tree-3.1.jar
c3p0-0.9.1.1.jar
commons-collections-3.1.jar
commons-email-1.2.jar
dependencies-provided.tgz
dom4j-1.6.1.jar
ejb3-persistence-1.0.2.GA.jar
geronimo-annotation_1.0_spec-1.1.1.jar
geronimo-jaspic_1.0_spec-1.0.jar
geronimo-jta_1.1_spec-1.1.1.jar
hibernate-annotations-3.4.0.GA.jar
hibernate-commons-annotations-3.1.0.GA.jar
hibernate-core-3.3.0.SP1.jar
hibernate-entitymanager-3.4.0.GA.jar
hibernate-search-3.1.0.GA.jar
javassist-3.4.GA.jar
joda-time-1.6.2.jar
jta-1.1.jar
log4j-1.2.16.jar
lombok-0.9.3.jar
lucene-core-2.4.0.jar
mail-1.4.1.jar
mysql-connector-java-5.1.14.jar
persistence-api-1.0.jar
quartz-2.1.1.jar
servlet-api-2.5.jar
slf4j-api-1.6.1.jar
slf4j-log4j12-1.6.1.jar
stringtemplate-4.0.2.jar
wicket-auth-roles-1.5.1.jar
wicket-core-1.5.1.jar
wicket-datetime-1.5.1.jar
wicket-extensions-1.5.1.jar
wicket-request-1.5.1.jar
wicket-util-1.5.1.jar
xml-apis-1.0.b2.jar
I appreciate any hint or even speculation that gives me additional ideas what to try.
Update: I tested some more and found that this behaviour only occurs while one or both of my applications are deployed. The behaviour does not occur on an "empty" Tomcat (that was a mistake on my part, messing with the JVM args).
Update 2: I am currently experimenting, trying to reproduce this behaviour in a virtual machine, because I want to debug it with a profiler. I am still not convinced that it should be impossible to run my setup on 2GB RAM.
Update 3 (10/01/12): I am trying to run Jenkins instead of my own application. Same behaviour, so it is definitely a server configuration issue. Jenkins jobs fail when Maven is called, so I need not even try the shutdown hack suggested below, because I need a second Java process while running Jenkins. It was suggested to me that, because this is a virtual server, ulimits may be imposed from outside and I would not be able to see them. I think I'll ask a new question regarding this. Thanks all.
Update 4 (02/05/12): see below for the answer that contains the hint. To clarify up here: I am now 95% sure that the errors occur because I am reaching my thread limit. However, because this is a virtual server, the method described below does not work to check this value, since the limit is not visible with ulimit; that was what was confusing me. Only today I found out that this is the "numproc" value that I can see in the Parallels Power Panel I can log into for my virtual server. There were Resource Alerts for numproc, but I did not see those either until just now. The value has a hard limit of 96, which I cannot change of course. The current value of numproc corresponds to the number of processes I see with "top" after toggling "H" to show threads. I had a very hard time finding this because the numproc value is hidden deep inside the panel. Sadly 96 is a rather low number if you want to run Tomcat with Apache and MySQL. I am also very sad that I cannot even find this value in the small print of my hosting contract, and it is quite relevant to my application, I dare say. So I guess I'll need a server upgrade.
Thanks all for your helpful answers in the end everyone helped me a bit to find out what the problem was.
The Tomcat shutdown procedure consists of sending a command/word via a TCP port to the running Tomcat VM. This port is configured in server.xml (if I remember correctly, I'm writing on my phone right now). So far so good.
Unfortunately, the shutdown script does this by starting a second JVM using the same Java options used for Tomcat. Your system simply does not have enough memory for this.
As a solution you could write your own stop script using telnet or something similar, for example:
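A minimal sketch, assuming the default shutdown port 8005 and the default SHUTDOWN command (both are set in your server.xml, so check what yours actually uses):
#!/bin/sh
# Send the shutdown command straight to the Tomcat shutdown port,
# without starting a second JVM the way the stock shutdown script does.
echo "SHUTDOWN" | nc localhost 8005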
I could help with that later if needed.
Hope that helps.
Best regards, Bert
Seems you have too many threads open.
Use this command:
ulimit -u
What is the result?
It should be something like:
max user processes (-u) 100
If this is correct, you can edit this file:
/etc/security/limits.conf
and add the following modifications:
#<domain> <type> <item> <value>
user soft nproc 10000
user hard nproc 10000
You can probably survive for a while like this. All you need to do is kill the tomcat process whenever you need to restart it. It is not a nice approach, but the main concern is that your application runs correctly.
It seems to me though, that on the long run, you might need to order a hosting plan with more RAM available.
I was having a similar problem with a Tomcat installation just last week. I managed to fix it by giving Tomcat a smaller heap. Something like this:
export CATALINA_OPTS="-Xms256m -Xmx512m"
Setting this before starting Tomcat may help. In the meantime you'll have to kill it the old-fashioned way, with a kill -9 ;)
EDIT: you could also take a look here; it appears Tomcat automatically creates a bunch of "spare" threads, but you can limit those, as well as your max thread count, in the config, for example via the snippet below. Hope it helps.
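Something along these lines in conf/server.xml (the numbers are just an illustration; tune them to your traffic):
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="50"
           minSpareThreads="4"
           connectionTimeout="20000" />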