Remote debugging disconnects very frequently in IntelliJ - Java

I am trying to remote debug my code using IntelliJ, but it disconnects after a few seconds. Can anyone suggest how I can prevent this or increase the time before disconnection?

I suspect it is not actually a problem with the IntelliJ remote debugger. It is more likely the fact that you are spending time with the JVM completely stopped.
In my situation (with exactly the same symptoms), I was always disconnected after about a minute of stepping through the code and looking at things. The cause was Kubernetes restarting the pod due to liveness check failures: when the pod is restarted, the debugger is disconnected.
I solved my problem by changing the breakpoint's suspend policy to "suspend the thread" rather than "suspend the JVM".
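For context, remote debugging works by attaching IntelliJ to a JDWP agent running inside the server JVM. A typical launch configuration looks like this (the port is illustrative; on JDK 9+ use address=*:5005 to listen on all interfaces):

    java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -jar app.jar

With suspend=n the JVM starts normally and only pauses when a breakpoint fires; if that breakpoint suspends the whole JVM, health checks stop being answered, which is exactly what triggers the Kubernetes restarts described above. Relaxing the liveness probe timeouts while debugging would also avoid the disconnects.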

Related

IntelliJ Idea remote debugger hangs

When I use the remote debugger in IntelliJ to debug a Java application on a server, it stops on breakpoints successfully, but when I try to evaluate any expression or variable it hangs and shows nothing (usually with a "collecting data" message). From that point I can't even continue stepping through the code anymore. I have to click Resume so it at least runs, but it will never stop at other breakpoints either until I restart the debug session, and usually the Java application being debugged as well.
I can step through the code after stopping on a breakpoint, and I also see the variables in the debugger panel; it only starts to behave weirdly when I try to evaluate an expression or add a watch. Then it stops working and a restart of the debugger and the app is needed.
Did anybody experience something similar? Is it an IntelliJ problem or a server problem?
(sorry this is so vaguely described, but I have no idea what to share or what the problem might be)
Expression evaluation during a remote debug session needs more data to be synchronized than other operations (adding/removing breakpoints, stepping, etc.).
So this kind of issue is usually related to:
a slow connection
a huge amount or complexity of data involved in the operation executed on the remote server
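When those conditions apply, it can help to evaluate a cheap summary of a structure rather than the structure itself, since every evaluation result has to be rendered over the wire. A sketch, with hypothetical names:

    // watching the whole collection serializes every element over JDWP:
    orders
    // a summary transfers only a few bytes:
    orders.size()
    orders.isEmpty() ? null : orders.get(0).getId()

Disabling automatic toString() rendering for large types (in IntelliJ's Debugger > Data Views settings) can also cut down the traffic.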

Whenever my program crashes in Eclipse it stays running in the background

It is really frustrating, especially when I am working with sockets. Does anyone know how to fix this? I constantly go into the task manager...
I think the most likely reason for this is a thread that does not terminate. This might be caused by the thread waiting for a timeout, but a number of other reasons might prevent the thread from exiting as well.
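A minimal sketch of the effect (the server socket is only an example, echoing the asker's socket work): one non-daemon thread is enough to keep the process alive even after main has crashed.

    import java.net.ServerSocket;

    public class Lingering {
        public static void main(String[] args) throws Exception {
            Thread t = new Thread(() -> {
                try (ServerSocket server = new ServerSocket(9999)) {
                    server.accept(); // blocks until a client connects - possibly forever
                } catch (Exception ignored) {
                }
            });
            // t.setDaemon(true); // uncommenting this would let the JVM exit with main
            t.start();
            throw new RuntimeException("main dies here, but the process keeps running");
        }
    }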
I suggest you connect jvisualvm (part of the JDK, located in its bin folder) to your application and investigate which part of your application stays alive.
Edit: If your application runs in your system's default VM, you should see it in jvisualvm out of the box. But if you are using a different VM, you have to start the application with the appropriate parameters in order to connect jvisualvm to it.
This short guide explains the settings pretty well.
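For reference, the usual parameters for a JMX connection look like this (the port is illustrative, and these particular settings disable authentication and SSL, so use them only on a trusted network):

    java -Dcom.sun.management.jmxremote \
         -Dcom.sun.management.jmxremote.port=9010 \
         -Dcom.sun.management.jmxremote.authenticate=false \
         -Dcom.sun.management.jmxremote.ssl=false \
         YourMainClass

jvisualvm can then attach via File > Add JMX Connection using host:9010.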

Application server restart on OOM exception

Can we automatically restart a WebSphere Application Server v6.1 on an OOM exception after the heap dump is created? We have an enterprise application hosted on WebSphere Application Server, and recently we have been facing OOM exceptions. From time to time the app server gets automatically restarted after the heap dump is generated, but recently the restart has not been happening automatically and has to be done manually. Can you please let me know what the issue may be?
There is no built-in, parameter-based option in WAS 6.1 that answers your question; it arrives in v7.0.
A better approach that I (and many others) follow is to write a basic Java program that monitors SystemOut.log/SystemErr.log for the string "OutOfMemory" or "in total in the server that may be hung". If the log contains either of those strings, then (i) stop the server, (ii) rotate the logs, and (iii) start the server.
Schedule this Java program to run every 2 or 5 minutes.
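A minimal sketch of such a watcher (both the log and script locations are hypothetical and depend on your WAS profile layout):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class OomLogWatcher {
        // hypothetical locations - adjust to your installation
        private static final Path LOG =
                Paths.get("/opt/WebSphere/profiles/AppSrv01/logs/server1/SystemOut.log");
        private static final String BIN = "/opt/WebSphere/profiles/AppSrv01/bin/";

        public static void main(String[] args) throws IOException, InterruptedException {
            String log = new String(Files.readAllBytes(LOG));
            if (log.contains("OutOfMemory")
                    || log.contains("in total in the server that may be hung")) {
                run(BIN + "stopServer.sh", "server1");
                // rotate the log so the same match is not found on the next run
                Files.move(LOG, LOG.resolveSibling(
                        "SystemOut." + System.currentTimeMillis() + ".log"));
                run(BIN + "startServer.sh", "server1");
            }
        }

        private static void run(String... command) throws IOException, InterruptedException {
            new ProcessBuilder(command).inheritIO().start().waitFor();
        }
    }

Scheduled from cron (or any scheduler) every few minutes, this reproduces the stop/rotate/start cycle described above.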
I don't really recommend this method; it is not good practice either. Ideally the WAS administrator should inform the relevant application/data team so they can fix the underlying issue, providing the logs, thread dumps, heap dumps (hprof files), etc.
But most of the time it is difficult and time-consuming for the data/application team to fix it immediately, so the WAS administrator has to fall back on methods like this.

Memory Leak in multiple apps

I have a memory leak in two apps on a Tomcat 6.0.35 server that appeared "out of nowhere". One app is Solr and the other is our own software. I'm hoping someone has seen this before, as it's been happening to me for the last few weeks and I have to keep restarting Tomcat in a production environment.
It appeared on our original server despite the fact that none of the code related to threading or DB connection handling had been touched. As the old server this app ran on was due to be retired, I migrated the site to a new server and a "cleaner" environment, with the idea that this would clear out any legacy cruft. But it continues to happen.
Just before Tomcat shuts down the catalina.out log is filled with errors like:
2012-04-25 21:46:00,300 [main] ERROR org.apache.catalina.loader.WebappClassLoader- The web application [/AppName] appears to have started a thread named [MultiThreadedHttpConnectionManager cleanup] but has failed to stop it. This is very likely to create a memory leak.
2012-04-25 21:46:00,339 [main] ERROR org.apache.catalina.loader.WebappClassLoader- The web application [/AppName] appears to have started a thread named [com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2] but has failed to stop it. This is very likely to create a memory leak.
2012-04-25 21:46:00,470 [main] ERROR org.apache.catalina.loader.WebappClassLoader- The web application [/AppName] is still processing a request that has yet to finish. This is very likely to create a memory leak. You can control the time allowed for requests to finish by using the unloadDelay attribute of the standard Context implementation.
During that migration we went from Solr 1.4 to Solr 3.6 in an attempt to fix the problem. When the errors above start filling the log, the Solr error below follows right behind, repeated 10-15 times, and then Tomcat stops working and I have to shut it down and start it up again to get it to respond.
2012-04-25 21:46:00,527 [main] ERROR org.apache.catalina.loader.WebappClassLoader- The web application [/solr] created a ThreadLocal with key of type [org.apache.solr.schema.DateField.ThreadLocalDateFormat] (value [org.apache.solr.schema.DateField$ThreadLocalDateFormat#1f1e90ac]) and a value of type [org.apache.solr.schema.DateField.ISO8601CanonicalDateFormat] (value [org.apache.solr.schema.DateField$ISO8601CanonicalDateFormat#6b2ed43a]) but failed to remove it when the web application was stopped. This is very likely to create a memory leak.
My research has brought up a lot of suggestions about changing the code that manages threads to make sure it kills off pooled DB connections etc., but this code has not been changed in nearly 12 months. Also, the Solr application is crashing too, and that's third-party code, so my thinking is that this is environmental (a jar conflict, versioning, a fat-fingered config?).
My last change was updating MySQL Connector/J to the latest version, as some memory-leak bugs existed around pooling in earlier releases, but the server just crashed again only a few hours later.
One thing I just noticed is that I'm seeing thousands of sessions in the Tomcat web manager, but that could be a red herring.
If anyone has seen this any help is very much appreciated.
[Edit]
I think I found the source of the problem. It wasn't a memory leak after all. I've taken over an application from another development team that uses c3p0 for database pooling via Hibernate. c3p0 has a bug/feature: if you don't release DB connections, c3p0 can go into a waiting state once all the connections (capped by MaxPoolSize; the default is 15) are in use. It will wait indefinitely for a connection to become available. Hence my stall.
I upped MaxPoolSize, first from 25 to 100, and my application ran for several days without a hang; then from 100 to 1000, and it's been running steadily ever since (over 2 weeks).
This isn't the complete solution, as I still need to find out why it's running out of pooled connections, so I also set c3p0's unreturnedConnectionTimeout to 4 hours, which enforces a 4-hour time limit on every connection regardless of whether it's active. An active connection that hits the limit is closed and re-opened.
Not pretty, and the c3p0 developers don't recommend it, but it gives me some breathing space to find the source of the problem.
Note: when using c3p0 with Hibernate, the settings are stored in your persistence.xml file, but not all settings can be put there. Some settings (e.g. unreturnedConnectionTimeout) must go in c3p0.properties.
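For illustration, the split might look like this; unreturnedConnectionTimeout is in seconds (4 hours = 14400), and debugUnreturnedConnectionStackTraces is an optional extra that logs the stack trace that checked out each timed-out connection, which is exactly what helps locate the code path that never returns it:

    # c3p0.properties
    c3p0.unreturnedConnectionTimeout=14400
    c3p0.debugUnreturnedConnectionStackTraces=true

    <!-- persistence.xml -->
    <property name="hibernate.c3p0.max_size" value="1000"/>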
You state that the sequence of events is:
errors appear
Tomcat stops responding
restart is required
However, the memory leak error messages only get reported when the web application is stopped. Therefore, something is triggering the web applications to stop (or reload). You need to figure out what is triggering this and stop it.
Regarding the actual leaks, you may find this useful:
http://people.apache.org/~markt/presentations/2010-11-04-Memory-Leaks-60mins.pdf
It looks like both your app and Solr have some leaks that need to be fixed. The presentation will provide you with some pointers. I would also consider an upgrade to the latest 7.0.x: the memory leak detection has been improved, and not all improvements have made it into 6.0.x yet.
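As a concrete example of the kind of fix involved: the first error above names a [MultiThreadedHttpConnectionManager cleanup] thread, which comes from Commons HttpClient 3.x. That library has a static shutdownAll() method that can be called from a ServletContextListener when the webapp stops (a sketch, assuming HttpClient is indeed the source of that thread):

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;
    import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;

    public class HttpClientCleanupListener implements ServletContextListener {
        public void contextInitialized(ServletContextEvent sce) {
            // nothing to do at startup
        }

        public void contextDestroyed(ServletContextEvent sce) {
            // stops the cleanup thread that HttpClient 3.x starts lazily
            MultiThreadedHttpConnectionManager.shutdownAll();
        }
    }

The listener is registered with a <listener> element in web.xml. The c3p0 PoolThread in the second error would need the pool's own close/destroy call in the same place.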

Tomcat dies suddenly

Trying to diagnose some bizarre Tomcat (7.0.21) and/or JVM errors on a 64-bit Linux (CentOS) machine.
I'm load testing our server application and tried hitting it with 100K messages. I launched jvisualvm and kept my eye on the heap the whole time. Everything was looking great (see below) until I got to about 93K processed messages, and then Tomcat just died. I ran ps on Tomcat's PID to confirm it was dead.
Up until this crash:
The load test had been running for about 90 minutes (it should have finished shortly thereafter, since we were at 93K/100K)
CPU was holding strong around 45%
Used heap was around 2GB (plus or minus a bunch after GCs) but heap size grew from 4GB to MAX_HEAP after about 30 minutes
Class loading/unloading was cycling normally
Thread dumps were normal
Nowhere in the server code are there any calls to System.exit(), so we can rule that right out (and yes, I've double-checked!!!).
I'm not sure if this is Tomcat crashing or the JVM (how do I tell?). And even if I did know, I can't seem to find any indication of what went wrong:
All of the server app's logs just stop without any ERROR messages (even though we have logging universally set to DEBUG and higher)
Tomcat's catalina.out and the respective localhost_access_* files just stop without any info
I've heard it is possible to have Tomcat log a core dump when this happens, but I'm not sure how to do that, and online examples aren't helping much.
How would SO go about diagnosing this? What steps should I take to start ruling out all of the possible factors?
Thanks in advance!
If the JVM crashes, you should have an hs_err_pidNNN.log file; you don't have to do anything to enable this. Its location depends on your OS and on how you are running Tomcat. On Windows, it can show up on your desktop, unless you are running as a service. Otherwise, it should be in the current working directory of the crashed process.
Your operating system probably provides additional tools for process monitoring; you could describe your environment more, or perhaps ask at serverfault.com.
It's also possible that jvisualvm is actually causing the crash.
I'd try reproducing the problem, and progressively simplify the scenario to help isolate the cause.
Another possibility is that the OS is running out of memory and the OOM Killer is killing your process. In this case, the JVM wouldn't get an opportunity to write a heap dump, or an hs_err_pid file.
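On Linux the OOM killer leaves a trace in the kernel log, so something like this should confirm or rule it out (the log file location varies by distribution; /var/log/messages is typical for CentOS):

    dmesg | grep -i "killed process"
    grep -i "out of memory" /var/log/messages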
You can use the JVM option -XX:+HeapDumpOnOutOfMemoryError to have the JVM write a heap dump when it fails with an OutOfMemoryError.
More details here: Using HeapDumpOnOutOfMemoryError parameter for heap dump for JBoss.
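A sketch of how it might be wired up for Tomcat (the dump path is illustrative):

    # e.g. in Tomcat's bin/setenv.sh
    export CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/tomcat/dumps"

Note that this only fires on an OutOfMemoryError raised inside the JVM; it won't produce anything if the process is killed from outside, as in the OOM-killer scenario above.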
Sorry I had to remove the green check from #erickson. I finally figured out what was killing Tomcat.
It looks like a profiler plugin was not configured correctly in VisualVM, and attempting to run a profile on the Tomcat process killed it.
Investigating why right now, and will update this answer once I know more.
