I'm trying to run the jstack command on my Java application. The application is rather big, running inside JBoss AS and occupying about 4 GB of memory. The OS is Windows Server 2003 Standard Edition. Every time I get the error "Not enough storage is available to process this command". There is enough RAM (16 GB) and disk space. So, any ideas?
I ran into this recently on Win2008r2 and thought I'd share my solution since it took a while to figure out. Rob's comment about psexec -s is what did it for me.
It appears that on Vista and later jstack doesn't work against services because of the user context. It has nothing to do with memory. I suspect this is the same reason people have seen this problem on 2003 via remote desktop, unless you use the /admin or /console switch on mstsc. As of Vista the tightened security is probably what broke it.
Starting my app from a cmd window worked fine, but that doesn't help me debug our standard install. Enabling the java debug port (for VisualVM, Eclipse or most any Java debugger) requires an app restart, so you lose the state you're probably trying to capture if you don't already have debugging enabled. Starting the service under my user credentials did not work - I was a little surprised at that. But psexec -s runs jstack from the system context, which worked like a charm. Oh, and you'll need to run psexec from an elevated cmd prompt, if UAC is on.
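For reference, the command I ran looked something like this (a sketch only - the JDK path and PID are placeholders you'd substitute for your own install):
psexec -s "C:\Program Files\Java\jdk1.6.0_25\bin\jstack.exe" <pid> > threads.txt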
In the past I have seen this when the JVM is running as a Windows Service on Windows 2003.
First, check whether the TMP directory is the issue - make sure it is set to a valid, writable location for the user the service runs as.
Second, jstack (or the other utilities like jconsole) will not connect to the local process unless it is running in the same session. If the service is running as a specific user, you may be able to connect by logging into the same session. If you are using Remote Desktop, you can connect using "mstsc /admin" (used to be /console) and try to run jstack again. Definitely check to make sure the TMP directory is set properly if this doesn't fix the problem.
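For example (a sketch - substitute your own server name and the target JVM's PID):
mstsc /admin /v:myserver
jstack -l <pid> > threads.txt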
If the service is running as LocalSystem, the above procedure probably will not help much. I don't know if there is a way to log into the same session as LocalSystem.
Some other alternatives may be to set the process up for remote monitoring and use jvisualvm (from the server itself or another machine) to connect over a port and do a thread dump.
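Enabling that usually means adding the standard JMX remote flags to the JVM's startup options, along these lines (a sketch only - the port is arbitrary, and turning off authentication and SSL like this is only sensible on a trusted network):
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
Then point jvisualvm (or jconsole) at host:9010 and take the thread dump from there.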
We had problems running jstack on a Windows machine with even a modest application (1 GB). We ended up doing our stack and heap analysis using NetBeans, which seemed to cope with parsing the dump files a lot better. YMMV.
Give NetBeans a try for profiling - it's very good. Note that VisualVM is a cut-down NetBeans profiler and ships with Java 6u7.
psexec -s jstack PID >> c:\jstack.log works perfectly on the same machine. The first run took some time, but when I executed it again with the redirect-to-file option it completed within a few seconds.
This is an error message from the underlying O/S. There's not much you can do in your code to deal with this other than catch the exception which is thrown. Boo to Windows for being so limited.
http://technet.microsoft.com/en-us/library/cc978735.aspx
Thank you for viewing my post.
I'm running selenium-server-standalone as a Windows service using NSSM (the Non-Sucking Service Manager, http://nssm.cc/), following the same process described in this Stack Overflow post: https://stackoverflow.com/a/10656979/956863.
Quick Summary of post:
Download and extract nssm.exe
Install NSSM and from the command line run: nssm install Selenium-Server "C:\Program Files\Java\jre6\bin\java.exe" "-jar C:\Selenium\selenium-server-standalone-2.24.1.jar"
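Once installed, the service can be started like any other Windows service, using the name registered above:
net start Selenium-Server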
The machine where I'm running this process is running Windows XP, Service Pack 3. This solution for running Selenium Server as a service works like a charm, and when Selenium Server crashes for some reason it restarts successfully without manual intervention.
But I"m coming into work, and am being informed by system administrators that high cpu alerts are being thrown. And again system logs are providing no information... So I'm wondering if selenium is actually the cause of this issue, and want to eliminate the possibility of running selenium as a service being blamed for this cpu spike.
Can anyone think of a solution, perhaps a way to stop the Selenium service when CPU utilization reaches a certain threshold? Or something else entirely?
In the meantime, I'm going to set up some sort of long-term CPU utilization monitor and see if it catches something that the system monitor in XP may be missing. (If anybody knows of a good way to achieve this, I'm open to suggestions as well.)
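One possible way to do that kind of long-term logging (assuming the built-in typeperf tool is present on the machine) would be something like:
typeperf "\Process(java)\% Processor Time" -si 5 -o c:\cpu-log.csv
which samples the java process's CPU usage every 5 seconds and writes it to a CSV file you can review later.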
I have Selenium running as a service on Windows Server 2008 and noticed that it's not able to clean up the headless browser instances. My tests are written in JavaScript with Soda, so I do start up and close out the browser instances, but when running as a service it doesn't close out those instances in Task Manager.
I actually have two ways of running the service: one uses a bat file to run Selenium, the other runs it directly off a registry key.
I was able to fix the browser issue after I added another build step in TeamCity to run taskkill automatically on any browsers left open when the tests complete. This fixed my CPU spiking issue.
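For reference, a taskkill step along these lines would do it (the browser image names here are just examples - use whichever browsers your tests actually launch):
taskkill /F /IM iexplore.exe
taskkill /F /IM firefox.exe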
Despite hearing vague reports of CPU spikes with Selenium running as a service, I have yet to see one with my own eyes. Which version of Java are you using?
Our commercial run-anything-as-a-service product supports CPU tracking and can restart Selenium when it hogs the CPU. I suggest you download the free 30-day trial and use it to see if you can confirm or rule out Selenium as the problem in that time frame. Follow this guide to set up Selenium as a service.
I have searched and searched but it did not help me much, hence this new question.
Platform
Ubuntu 11.10 server 64 bit
JVM 1.7.0_03
Tomcat 7
There is nothing special in the configuration - the front-end server is Apache using the AJP connector. Tomcat runs as an Ubuntu service.
On our server, tomcat7 is dying and I cannot figure out the reason. I have checked all the log files (syslog, catalina.out, even auth.log) to see if anything is getting logged.
According to the top command, the server still has around 4 GB of memory free, and CPU usage averages around 35% most of the time.
In order to isolate the problem, is there any way to get the exit status code of the Tomcat process that terminated?
I have read some reports of the JVM writing an error log (hs_err_pid*.log) in case of a JVM crash, but I am not seeing that either.
It seems like I need to set ulimit to get a core dump, but I am not sure how to do that for the Tomcat service, or whether the setting applies to all users.
One way to do that without interfering with anything else would be to add a ulimit command to the catalina.sh script. (It is a bit hacky ... but it sounds like you are at the point where hackiness might give happiness.)
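For example, something like this near the top of catalina.sh (a sketch, assuming a standard Tomcat layout; you may also need to check /proc/sys/kernel/core_pattern to see where the dump will land):
# allow core dumps for the JVM started by this script
ulimit -c unlimited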
We have a Java application running on Solaris which makes a connection to Oracle and checks the database for work to perform, and it runs just fine. We tried running the same code on a standalone Fedora system, and its performance is good too. However, when we move it to its home on a Fedora VMware virtual machine, it can take upwards of five minutes for the application to make the connection to the database. It ultimately DOES make the connection - it's just snail-slow. We suspect it's a configuration issue somewhere but can't find it. As far as we can tell, the two Fedora boxes have nearly identical configurations. Has anyone run into this problem before? If so, how did you get around it?
Thanks in advance for your help.
Mike Preston
Found it! When running under Solaris, we run a 32-bit JVM with 32-bit extensions. We execute through a Korn shell script that had an added -d64 flag to force 64-bit processing. On the Linux boxes we removed the -d64 flag from the shell script, and everybody's happy. Thanks, Alex, for your thoughts and assistance.
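Roughly speaking, the change was just dropping that flag from the java invocation in the script (an illustrative before/after only, based on the command shown further down):
before: /usr/bin/java -d64 -Xms64m -Xmx1024m -jar $1 $2 $PID
after:  /usr/bin/java -Xms64m -Xmx1024m -jar $1 $2 $PID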
Here is the solution which settled the issue: our headless development server was only occasionally getting any keyboard activity to fill the entropy pool (please read the article - I won't try to explain it here), and I assume it was blocking until there was enough "noise" to generate the requisite random numbers. Since there is only one other developer working on the system, it might take a couple of minutes to fill the buffer. Once the buffer was full, it went ahead and executed the connection as expected. That also explains why we would sometimes see crisp performance followed by slow. In a nutshell, we added the string "-Djava.security.egd=file:///dev/urandom" to the Korn shell script between the call to java and the jar file name, and now it works like a champ. Here's the full command string:
/usr/bin/java -Xms64m -Xmx1024m -Djava.security.egd=file:///dev/urandom -jar $1 $2 $PID
If you DO read the article, be sure to read the comments below. One of them is really funny!
I have a Java GAE web app with DataNucleus as the JPA provider. When deploying locally on my machine, the deployment hangs (it takes minutes). Looking at Task Manager, I see a javac process running. Any idea what is going wrong?
Agreed. It's a problem with GAE, as it performs a lengthy compilation pass and only after that is the application deployed and shown in the browser. I feel it's a problem only with GAE and not JPA. I have developed a similar app, and if you think it's because of JPA, you can check the corresponding database admin tool to see how many threads are being opened for the user. If you find some that aren't garbage collected, check your code. Otherwise, you can use a connection-pooling mechanism (to speed up DB retrieval using the ORM).
The answer depends on several parameters:
How you deploy - are you using Eclipse or the command line?
GAE version (and GAE/GWT Eclipse plugin version)
Windows or Linux?
In any case, a thread dump can help you see which non-daemon threads are stuck.
For command-line deployment on Windows, press Ctrl+Break in the console after it hangs to get the thread dump.
In Eclipse, if there is a way to deploy in debug mode, look at the stacks in the Debug view for the same info.
See this answer as well: How to Force Thread Dump in Eclipse?
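If Ctrl+Break is awkward to send, an alternative sketch is to grab the dump from a second console with the JDK tools, finding the PID with jps first:
jps -l
jstack -l <pid> > dump.txt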
This depends on which platform you are using:
Windows
Linux
Mac OS X
You can check what is going on by sending a ctrl-break signal to the process. On Windows, the SendSignal tool can do this.
Usage:
SendSignal <pid>
<pid> - send ctrl-break to process <pid> (hex ok)
You can get the source via anonymous CVS at
cvs -d :pserver:anon@www.latenighthacking.com:/code-cvsroot co 2003/SendSignal
I recently deployed my simple application to Google App Engine via Eclipse. It failed to deploy a couple of times, then after a while it deployed successfully and I was able to access the application. If it hangs, stop the deployment process and redeploy.
I'm on Windows Vista 64-bit, with a 64-bit JVM installed. I'm trying to use jstack and jmap -- two utilities that come with the JDK -- to peek into an application server's guts. This works fine on a 32-bit Windows XP machine.
However, when I run these commands against the process ID of a ColdFusion application server on this Vista 64 machine, I get the error message in the title of this post.
All I'm doing is running jstack <pid>, where pid is the process ID of my CF server, and I'm getting this error.
This machine has plenty of available memory, and I highly doubt it's a memory problem anyway. The reason I say that is that if I start JBoss, which takes up just as much memory as CF, I can run jstack against that process without trouble.
Thanks for any advice.
Figured it out. The problem was that ColdFusion was running as a Windows service. By stopping the service and running it from the command line (jrun start cfusion), I was able to use the JDK tools successfully.
This posting, "Jstack and Not enough storage is available to process this command", provides details on how to execute jstack when the process is running as a Windows service: basically, use the psexec command.