org.eclipse.jdi.TimeOutException - java

I am facing this eclipse error while debugging:
org.eclipse.jdi.TimeOutException: Timeout occurred while waiting for packet 220 occurred creating step request.
I googled a bit and also checked Stack Overflow but have not found any solution. I am working on Mac OS X and using Eclipse Kepler, but I get the same error on Windows 7 with Eclipse Mars. I am using Java 1.8.0_25, 64-Bit Server VM (build 25.25-b02, mixed mode).

I was also facing the same Eclipse error while trying to debug multi-threaded code. Reducing the number of breakpoints allowed me to debug the code without any errors. I believe there is some limit on the number of breakpoints/watchpoints you can place in Eclipse (with respect to stack memory).
Increasing the Java stack size could be another solution. Info can be found here.
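For reference, the stack size of the debugged JVM can be raised with the -Xss VM argument (the value below is just an example) under Run > Debug Configurations... > Arguments > VM arguments:

    -Xss2m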

It seems this problem has been mentioned on the Google Code forums:
The problem seems to occur because the tick [producer] thread doesn't play well with the debugger.
It's suggesting that problems occur with the debugger when you have 2 threads (a producer and a consumer thread), and you attempt to suspend the consumer thread.
The workaround:
If you put a breakpoint to pause the tick thread, then you can step through both test threads nicely.
This suggests that you should set a breakpoint within the producer thread (not the consumer thread). Apparently the timeout occurred when putting a breakpoint on the consumer thread, and putting a breakpoint on both threads caused an IllegalStateException.
I hope this helps!
The idea is to block the producer thread, which forces the consumer thread to wait (assuming it's blocked while waiting for data, not polling). You can then resume the producer thread, which resumes the consumer thread for that "tick". The producer thread goes back to waiting.
Apparently, it takes 2 of these cycles to represent 1 "tick", as suggested by the person who found the workaround:
When they're both blocked waiting for a tick, you can release the tick thread until one of the test threads is released, and then leave the tick thread blocked again until you need the next tick. It seems to take two cycles of the tick thread to advance one tick.
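To make the setup concrete, here is a minimal, hypothetical sketch of such a tick producer/consumer pair (the class and thread names are invented for illustration); the breakpoint from the workaround would go on the producer's put call, not in the consumer:

    import java.util.concurrent.SynchronousQueue;

    // Hypothetical sketch of the "tick" setup described above.
    public class TickExample {
        static final SynchronousQueue<Long> ticks = new SynchronousQueue<>();

        public static void main(String[] args) {
            Thread producer = new Thread(() -> {
                long tick = 0;
                while (true) {
                    try {
                        ticks.put(tick++);   // breakpoint here (producer), per the workaround
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }, "tick-producer");

            Thread consumer = new Thread(() -> {
                while (true) {
                    try {
                        // The consumer blocks here until the suspended producer is resumed.
                        System.out.println("processing tick " + ticks.take());
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }, "tick-consumer");

            producer.start();
            consumer.start();
        }
    }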

Related

Java: How to forcefully kill unneeded thread?

I am using some 3rd party code, which may hang indefinitely in some cases. This leads to hanging threads which keep holding the resources and clogging a thread pool.
Eventually the thread pool becomes full of useless threads and the system effectively fails.
In Java one can't forcefully kill a thread (the equivalent of kill -9). How then should such edge cases be managed?
Obviously fixing the bug would be better; however, alternatives include:
only run the 3rd party code/library in a sub-process, since just killing the thread is unlikely to be enough (see the sketch after this list).
you could hack the 3rd party code to check for interrupts in the sections you find run for too long. You can take a stack trace to find out where this is.
use Thread.stop(), though it is deprecated and strongly discouraged (the Thread.stop(Throwable) overload was disabled in Java 8).
when you detect a hung thread, increase the size of the thread pool by one. This will give you the correct number of active threads.
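A minimal sketch of the sub-process approach, assuming a hypothetical ThirdPartyMain wrapper class that invokes the 3rd party code; the parent JVM kills the child outright if it exceeds a deadline:

    import java.util.concurrent.TimeUnit;

    // Run the 3rd party work in its own JVM so a hang can be killed
    // without affecting the parent process or its thread pool.
    public class SubProcessRunner {
        public static void main(String[] args) throws Exception {
            Process p = new ProcessBuilder(
                    "java", "-cp", System.getProperty("java.class.path"), "ThirdPartyMain")
                    .inheritIO()
                    .start();

            // Give the work a deadline (30s here is arbitrary); if it hangs, kill the process.
            if (!p.waitFor(30, TimeUnit.SECONDS)) {
                p.destroyForcibly();   // the process-level equivalent of "kill -9"
                System.err.println("3rd party task hung; sub-process killed");
            }
        }
    }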

How to find where a thread was originally started

Suppose I have an application that can spawn multiple threads if needed for doing tasks ... nothing special. I use Eclipse to write and debug Java applications. A thread (let's call it "async task") is immediately respawned after it leaves the run() method (so there is a bug and I want to find the reason for this behavior).
My question: if I pause this "async task" thread using the Eclipse IDE (Debug perspective ...), is there a way to find out where this thread was originally started (for example using the Debug view or any other)? I want to know who spawns this thread (without doing a text search or something like that).
Is there a good way to get this information?
I would set a breakpoint at Thread.start() and enable a condition.
Whenever a thread named "async task" is started, the condition evaluates to true and the thread that invokes the start method is paused. Then you can see in the stack trace where the call came from.
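As an example, the conditional breakpoint on Thread.start() could use a condition along these lines (assuming the thread really is named "async task"):

    getName().equals("async task")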
Had a similar problem in production and wrote a little Java agent that logs the stack of every thread start. So the system can keep running and you get the info live in the log. That helps a lot when you have many threads. https://github.com/xdev-software/thread-origin-agent
You can't check whether a new thread starts or not by using the debugger, since debugging will hang your entire JVM.
You can add some logging and check how the threads behave there.

Analysing threads through eclipse debug view

If a multi-threaded application is running in debug mode from Eclipse, is there any way to know which thread is sleeping or waiting by looking at the Debug view where all the threads are listed? I can only see running threads there.
All the threads are shown; the (Running) value just means you have not suspended the thread. You can use the Suspend button to suspend an individual thread or the entire application. When you do this you can expand the entry for the thread in the view and see whether it is sleeping, waiting or executing code.
Single suspended thread which is waiting:
You can use VisualVM (jvisualvm) to get a graph over time of which threads are running/sleeping. It comes with your JDK. If you are looking for a performance issue, it also has a profiler. There is also an Eclipse plugin (which I've never used) that can help with launching it: http://visualvm.java.net/eclipse-launcher.html

Need help analyzing a Java thread dump

I am using the Samurai tool to analyze a thread dump. It looks like there are many blocked threads. I have no clue how to derive anything from the thread dump.
I have an SQL query in my Java application, running on WebLogic, that takes an enormous amount of time to complete. Clicking the button in my Java application that runs this query several times hangs my JVM.
Thread dumps can be found at: http://www.megafileupload.com/en/file/379103/biserver2-txt.html
Can you help me understand what the thread dump says?
The amount of data you provide is a bit overwhelming, so let me just give you a hint how to proceed. For the analysis I use the open source ThreadLogic application, based on TDA. It takes a few seconds to parse 3 MiB worth of data, but it nicely shows the 22 different stack trace dumps in one file:
Drilling down reveals a really disturbing list of warnings and alerts.
I don't have time to examine all of them, but here is a list of those marked as FATAL (keep in mind that false positives are also to be expected):
Wait for SLSB Beans
Description: Waiting for Stateless Session Bean (SLSB) instance from the SLSB Free pool
Advice: Beans all in use, free pool size insufficient
DEADLOCK
Description: Circular Lock Dependency Detected leading to Deadlock
Advice: Deadlock detected with circular dependency in locks, blocked threads will not recover without Server Restart. Fix the order of locking and or try to avoid locks or change order of locking at code level, Report with SR for Server/Product Code
Finalizer Thread Blocked
Description: Finalizer Thread Blocked
Advice: Check if the Finalizer Thread is blocked for a lock which can lead to wasted memory waiting to be reclaimed from Finalizer Queue
WLS Unicast Clustering unhealthy
Description: Unicast messaging among Cluster members is not healthy
Advice: Unicast group members are unable to communicate properly, apply latest Unicast related patches and enable Message Ordering or switch to Multicast
WLS Muxer is processing server requests
Description: WLS Muxer is handling subsystem requests
Advice: WLS Server health is unhealthy as some subsystems are overwhelmed with requests, which is leading to the Muxer threads directly handling requests instead of dispatching to the relevant subsystems. There is likely a bug here.
Stuck Thread
Description: Thread is Stuck, request taking very long time to finish
Advice: Check why the thread or call is taking so long. Is it blocked on an unavailable or bad resource, or contending for a lock? Can be ignored if it is doing repeat work in a loop (like adapter threads polling for events in an infinite loop).
The issue was with WLDF logging information to a log file. Once disabled, performance improved enormously. I am not a fan of ThreadLogic as a tool for thread dump analysis; it reports a circular deadlock whenever you have stuck threads, no matter what the actual issue is.
Thread dumps are a snapshot of all threads running in the application at a given moment. A thread dump can contain hundreds or thousands of application threads. It would be hard to scroll through every single line of the stack trace in every single thread. A call stack tree consolidates all the threads' stack traces into one single tree and gives you a single view. It makes thread dump navigation much simpler and easier. Below is a sample call stack tree generated by fastThread.io.
Fig 1: Call stack Tree
You can keep drilling down to see code execution path. Fig 2 shows the drilled down version of a particular branch in the Call Stack Tree diagram.
Fig 2: Drilled down Call Stack Tree
Sample call stack tree generated by FastThread.io

How to find BOTH threads of a deadlock?

We're running the classic Spring/Hibernate/MySQL stack in Tomcat 5.5. Once in a while we get a deadlock when an attempt to lock a table row times out. Some kind of deadlock exception is thrown.
The exception is clear and the stack trace indicates what went wrong. But it doesn't show the other thread which is holding the actual lock. Unless I know what that thread is doing, it's all just a needle in a haystack.
QUESTION: Is there a way to find the other thread?
Try using the following command in MySQL the next time you see a deadlock. This should show you the last deadlock.
SHOW INNODB STATUS
(On newer MySQL versions the syntax is SHOW ENGINE INNODB STATUS.)
Typically when you see a deadlock on your application server, the logs show only the victim thread (the one which was rolled back). Since the other thread has completed, no exception is thrown. You need to go back to your DB to recreate the transactions.
Once you have a capture from your DB of where the deadlock occurred, you can investigate further.
Not sure if you've figured it out already, but if it's a deadlock, a thread dump would be of great help here. Depending on what OS the application runs on and your privileges to access it, you can generate it in many ways:
on *nix, sending a QUIT signal to the process ('kill -3 pid') will do the job
jconsole/jvisualvm have an option to get it
the standard JDK jstack (consider the -F and -l options) will do the trick
if you are lucky enough to be on Solaris, pstack will help a lot
Once you've got it, analyse the locked/waiting threads to find the deadlock. You can do it manually or use existing analyzers that implement deadlock detection algorithms. By the way, the JVM has one built in, and it can give you the answer right in the thread dump.
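The built-in detection mentioned above can also be queried programmatically; here is a minimal sketch using ThreadMXBean (the same information jstack and jconsole report):

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Ask the JVM's built-in deadlock detector which threads are deadlocked.
    public class DeadlockCheck {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            long[] deadlocked = mx.findDeadlockedThreads();   // null if none
            if (deadlocked == null) {
                System.out.println("No deadlocked threads found");
                return;
            }
            for (ThreadInfo info : mx.getThreadInfo(deadlocked, true, true)) {
                System.out.println(info);   // thread, the lock it waits for, and the lock's owner
            }
        }
    }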
If it's a code problem, you could try to connect to the running process using jconsole and detect the deadlock.
If you need to find the thread that holds a lock, you can do this in Eclipse through the debug view. Have a look at http://archive.eclipse.org/eclipse/downloads/drops/R-3.1-200506271435/eclipse-news-part2b.html and scroll down to 'Debugging locks and deadlocks'.
The locks owned by a thread as well as the lock a thread is waiting for can both be displayed inline in the Debug view by toggling the Show Monitors menu item in the Debug view drop-down menu. Threads and locks involved in a deadlock are highlighted in red.
