What happens to a thread on a server crash? - java

Let's say I am following a client-server model, and one client, which is actually a thread, blocks on the remote server, which is actually a monitor.
What will happen to that thread, if the server crashes for some reason?

Answer is: it depends:
it is possible that this thread just sits there, waiting forever.
but it is also possible that an exception is thrown at some point, and that thread somehow comes back "alive".
Things that play a role here:
the underlying TCP stack
your usage of that (for example, it is possible to give timeout values to sockets, which causes exceptions to be thrown in timeout situations)
how exactly your client is coded
In other words: nobody can tell you what your client application will be doing, because we do not have any information about the implementation and configuration details you are dealing with.
Or, changing perspective: you should make sure that your client uses some form of timeouts. As said, this could be done by setting a timeout on the sockets used for communication. Or by having another thread that monitors all threads talking to a server; and that kicks in at some point in order to prevent such threads from waiting forever.
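For example, setting a read timeout on a plain socket is a one-liner, and a blocked read() then fails with a SocketTimeoutException instead of hanging forever. A minimal sketch (host, port, and timeout values are made up):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Minimal sketch: a read timeout on a plain TCP socket.
// Host, port, and timeout are illustrative values.
try (Socket socket = new Socket("server.example.com", 8080)) {
    socket.setSoTimeout(30_000); // any blocking read now fails after 30 s
    InputStream in = socket.getInputStream();
    int b = in.read();           // blocks for at most 30 seconds
} catch (SocketTimeoutException e) {
    // the server went silent: abandon the socket and recover the thread
} catch (IOException e) {
    // connection failed or broke
}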
Long story short: if you are serious about such issues, you have to do a lot of research; a good starting point would be the old classic "Release It!" by Michael Nygard.

Related

Why SSLSocket write option does not have timeout?

In Java, the write operation on the SSLSocket API is blocking, and the write operation does not support a timeout either.
Can someone please explain?
Can there be a situation where the write operation blocks a thread forever? I checked on the Internet and it seems that there is a possibility of blocking forever.
How can I add a timeout for the write operation?
My application creates two threads one for read and one for write.
1- Can there be a situation where the write operation blocks a thread forever? I checked on the Internet and it seems that there is a possibility of blocking forever.
Yes there can. Though not literally forever :-)
2- Can someone please suggest how we can add a timeout for the write operation?
You cannot do it with Java's implementation of sockets / SSL sockets, etcetera. Java sockets support connect timeouts and read timeouts but not write timeouts.
See also: How can I set Socket write timeout in java?
(Why? Well socket write timeouts were requested in bug ID JDK-4031100 back in 1997 but the bug was closed with status "WontFix". Read the link for the details.)
The alternatives include:
Use a Timer to implement the timeout, and interrupt the thread or close the Socket if the timer goes off (see the sketch after this list). Note that both interrupting and closing will leave you in a state where you need to abandon the socket.
Use NIO selectors and non-blocking I/O.
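A minimal sketch of the Timer alternative, assuming an already-connected Socket; the method name and timeout are illustrative, and note that after a timeout the socket must be thrown away:

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.util.Timer;
import java.util.TimerTask;

// Minimal sketch of the Timer approach, assuming an already-connected
// Socket. Closing the socket forces the blocked write to fail with an
// IOException; the socket is unusable afterwards.
static void writeWithTimer(Socket socket, byte[] data, long timeoutMs)
        throws IOException {
    Timer timer = new Timer(true); // daemon timer thread
    TimerTask killer = new TimerTask() {
        @Override public void run() {
            try {
                socket.close(); // unblocks a write stuck on a full send buffer
            } catch (IOException ignored) {
            }
        }
    };
    timer.schedule(killer, timeoutMs);
    try {
        OutputStream out = socket.getOutputStream();
        out.write(data);
        out.flush();
    } finally {
        killer.cancel(); // finished (or failed) in time: disarm the timer
        timer.cancel();
    }
}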
Because:
If such a facility is needed at all, it is needed at the TCP level, not just the SSL level.
There is no API for it at the TCP level, and I don't mean just in Java: there is no C level API for it either, except maybe on a couple of platforms.
If you added it at the SSL level, a write timeout event would leave the connection in an indeterminate state which would mean that it had to be closed, because you couldn't know how much data had been transmitted, so you couldn't maintain integrity at the SSL level.
To address your specific questions:
Can there be a situation where write operation can block a thread forever? I checked on Internet and it seems that there is a possibility of blocking forever.
Yes. I've seen an application blocked for several days in such a situation. Although not, as @StephenC rightly says, forever. We haven't lived that long yet.
How can I add a timeout for the write operation?
You can do it at the TCP level with non-blocking I/O and a Selector, and you can layer an SSLEngine on top of that to get SSL, but it is a tedious and highly error-prone exercise that many have tried: few have succeeded. Not for the faint-hearted.
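For the plain TCP part, a minimal sketch of such a write-with-deadline might look like the following (no SSLEngine layering shown; all names are illustrative):

import java.io.IOException;
import java.net.SocketTimeoutException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

// Minimal sketch: a TCP-level write with a deadline, assuming 'channel'
// is already connected. After a timeout the connection state is unknown,
// so the caller must abandon the channel.
static void writeWithDeadline(SocketChannel channel, ByteBuffer buf, long timeoutMs)
        throws IOException {
    channel.configureBlocking(false);
    long deadline = System.currentTimeMillis() + timeoutMs;
    try (Selector selector = Selector.open()) {
        channel.register(selector, SelectionKey.OP_WRITE);
        while (buf.hasRemaining()) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0 || selector.select(remaining) == 0) {
                throw new SocketTimeoutException("write timed out");
            }
            selector.selectedKeys().clear();
            channel.write(buf); // writes whatever the send buffer accepts
        }
    }
}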

Thread status TimedWait. How to debug?

My application runs some complex threads that fetch maps in a background thread and draw them. Sometimes, if I run the app for a couple of hours on a slow network, I seem to get it into a weird state where all my threads show a status of TimedWait or Wait (except the ones that are Native, such as main).
What is the cause of this? How can I debug it? I am absolutely lost, and I know this is a bit of a general question, but I would appreciate it if someone could point me in the right direction. E.g.:
How to pinpoint the cause of the problem.
What kind of issues generally cause all the threads to lock up?
Anybody seen anything similar?
Thanks
A timed wait is simply a thread which is blocked on some O/S level call which has a timeout specified, such as a simple wait primitive (Object.wait()), a socket operation (Socket read()/write()), a thread queue, etc. It's quite normal for any complex program to have several or many of these - I have an application server which routinely has hundreds, even thousands.
Your threads may be backing up on non-responsive connections and may not be misbehaving at all, per se. It may simply be that you need to program them to detect and abort an idle connection.
Click on each of the threads which you are concerned about and analyze their stack trace for how they got there.
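If you cannot attach a profiler, you can capture much the same information from inside the application itself (or with jstack <pid>); a minimal sketch:

import java.util.Map;

// Minimal sketch: dump every live thread's state and stack trace,
// roughly what jstack or VisualVM would show you.
public class ThreadDump {
    public static void dumpAll() {
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            Thread t = e.getKey();
            System.out.printf("\"%s\" state=%s%n", t.getName(), t.getState());
            for (StackTraceElement frame : e.getValue()) {
                System.out.println("    at " + frame);
            }
        }
    }
}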
Most decent profiling tools (and application containers) will have the option of printing a full stack trace, and more modern ones will do a dead-lock and live-lock analysis for you. The JVisualVM tool distributed with Sun's JDK and available on the net as VisualVM will do this and it's very effective. Most decent profilers will also show lock acquisition in the stack trace (yours, above, is not in that view).
Otherwise, you are looking for two or more threads contending for the same lock or acquiring the same locks in a different order. You may need to do this manually by actually examining the source and annotating your stack trace, but you should be able to whittle down likely candidates if your tool doesn't point right to the conflicting threads.

Best practice for android client to communicate with a server using threads

I am building an android app that communicates with a server on a regular basis as long as the app is running.
I do this by initiating a connection to the server when the app starts; then I have a separate thread for receiving messages, called ReceiverThread, which reads the message from the socket, analyzes it, and forwards it to the appropriate part of the application.
This thread runs in a loop, reading whatever it has to read and then blocking on the read() command until new data arrives, so it spends most of its time blocked.
I handle sending messages through a different thread, called SenderThread. What I am wondering about is: should I structure the SenderThread in a similar fashion? Meaning, should I maintain some form of queue for this thread, let it send all the messages in the queue, and then block until new messages enter the queue? Or should I just start a new instance of the thread every time a message needs to be sent, let it send the message, and then "die"? I am leaning towards the first approach, but I do not know what is actually better, both in terms of performance (keeping a blocked thread in memory versus initializing new threads) and in terms of code correctness.
Also since all of my activities need to be able to send and receive messages I am holding a reference to both threads in my Application class, is that an acceptable approach or should I implement it differently?
One problem I have encountered with this is that sometimes if I close my application and run it again I actually have two instances of ReceiverThread, so I get some messages twice.
I am guessing that this is because my application did not actually close and the previous thread was still active (blocked on the read() operation), and when I opened the application again a new thread was initialized, but both were connected to the server so the server sent the message to both. Any tips on how to get around this problem, or on how to completely re-organize it so it will be correct?
I tried looking up these questions but found some conflicting examples for my first question, and nothing useful enough that applies to my second question...
1. Your approach is OK, if you really need to keep an open connection between the server and client at all times, at all cost. However, I would use an asynchronous connection, like sending an HTTP request to the server and then getting a reply whenever the server feels like it.
If you need the server to reply to the client at some later time, but you don't know when, you could also look into the Google Cloud Messaging framework, which gives you a transparent and consistent way of sending small messages to your clients from your server.
You need to consider some things, when you're developing a mobile application.
A smartphone doesn't have an endless amount of battery.
A smartphone's Internet connection is somewhat volatile and you will lose Internet connection at different times.
When you keep a direct connection to the server open all the time, your app keeps sending keep-alive packets, which means you'll suck the phone dry pretty fast.
When the Internet connection is as unstable as it gets on mobile broadband, you will lose the connection sometimes and need to recover from this. So if you use TCP because you want to make sure your packets are received, you end up resending the same packets a lot of times, and so get a lot of overhead.
Also, you might run into threading problems on the server side if you open threads on the server on your own, which it sounds like you do. Let's say you have 200 clients connecting to the server at the same time. Each client has 1 thread open on the server. If the server needs to serve 200 different threads at the same time, this can become quite a performance-consuming task for the server in the end, and you will need to do a lot of work on your own as well.
2. When you exit your application, you need to clean up after yourself. This should be done in the onPause method of the Activity which is active.
This means killing off all active threads (or at least interrupting them), saving the state of your UI (if you need this), and flushing and closing whatever open connections to the server you have.
As far as using Threads goes, I would recommend using some of the built-in threading tools like Handlers or implementing an AsyncTask.
If you really think Thread is the way to go, I would definitely recommend using a Singleton pattern as a "manager" for your threading.
This manager would control your threads, so you don't end up with more than one Thread talking to the server at any given time, even though you're in another part of the application.
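A minimal sketch of such a queue-based sender thread (the first approach from the question; the connection wiring is illustrative, not a full manager):

import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch: one long-lived sender thread that blocks on a queue
// instead of spawning a new thread per message.
public class SenderThread extends Thread {
    private final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>();
    private final OutputStream out;

    public SenderThread(OutputStream out) {
        this.out = out;
        setName("SenderThread");
    }

    // Called from any thread; never blocks the caller.
    public void send(byte[] message) {
        queue.offer(message);
    }

    @Override public void run() {
        try {
            while (!isInterrupted()) {
                byte[] message = queue.take(); // blocks until work arrives
                out.write(message);
                out.flush();
            }
        } catch (InterruptedException e) {
            // interrupted during shutdown: fall through and exit
        } catch (IOException e) {
            // connection broke: let the manager decide whether to reconnect
        }
    }
}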
As far as the Application class implementation goes, take a look at the Application class documentation:
Base class for those who need to maintain global application state. You can provide your own implementation by specifying its name in your AndroidManifest.xml's <application> tag, which will cause that class to be instantiated for you when the process for your application/package is created.
There is normally no need to subclass Application. In most situations, static singletons can provide the same functionality in a more modular way.
So keeping away from implementing your own Application class is recommended. However, if you let one of your Activities initialize your own singleton class for managing the threads and connections, you might (just might) run into trouble, because the initialization of the singleton might "bind" to that specific Activity; if the Activity is removed from the screen and paused, it might be killed, and the singleton might be killed as well. So initializing the singleton inside your Application implementation might prove useful.
Sorry for the wall of text, but your question is quite "open-ended", so I've tried to give you a somewhat open-ended answer - hope it helps ;-)

Pattern for working with process that may throw OutOfMemoryError

I have a Java client-server application. The client is designed to run arbitrary user code. If the user code running on the client creates an OutOfMemoryError, then the client ends up in an ugly state. Normally the client would be sending messages (via RMI) to the server until the code the client is running terminates and the client gracefully disconnects from the server.
What would people recommend for the OOM situation on the client? Could I catch it and kill the client process? Presumably I would not be able to push any commands out from the server because the client will be unresponsive.
If possible, I would like the client process to be terminated without having to log on to the client machine and kill it "by hand". Clearly it would be best if the user code did not cause this error in the first place, but that is up to the user to debug and my framework to deal with in as friendly way as possible. Thanks for your ideas!
It's a bad idea to catch OutOfMemoryError. If you catch it, you'll probably not have enough memory to kill the process anyway...
One good practice when developing a server side application is to use a socket timeout. If your client doesn't send any command for a given amount of time, the connection is dropped. This makes your server more reliable and more secure, and prevents situations like yours from happening.
Another thing you can do is to try to make your client app "OutOfMemoryError proof", not in the way that it can't run out of memory, but in the way that it shouldn't make your application crash.
You could define some 'reserve memory' so that you can catch the error and deal with it (at least in most cases). But you need to make sure the reserved chunk is large enough. This works as follows:
static byte[] reserveMemory = new byte[1024 * 1024];

try {
    ...
} catch (OutOfMemoryError e) {
    // Releasing the reserve frees enough heap for the
    // cleanup code below to run without failing itself.
    reserveMemory = null;
    cleanUpAndExit();
}
You could do a few things. One is to spawn the user code into its own process and watch it. If it fails, you could then notify the back-end server of the failure so it could clean up the connection.
You could use a stateless server (which may not be possible) with asynchronous communication protocols (such as HTTPInvoker) instead of RMI. (Again, this may not be possible depending on what you are trying to do).
You could watch the memory usage in the client and kill the thread that is running the client code when the memory hits a certain point, and then have the watcher close out the connection, notifying the server of the problem.
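A minimal sketch of that watcher; the 90% threshold and the shutdownClient() stub are assumptions, not a real API:

// Minimal sketch of the watcher idea: a daemon thread that polls heap
// usage and shuts the client down before an OutOfMemoryError can strike.
public class MemoryWatcher {
    public static void start() {
        Thread watcher = new Thread(() -> {
            Runtime rt = Runtime.getRuntime();
            while (!Thread.currentThread().isInterrupted()) {
                long used = rt.totalMemory() - rt.freeMemory();
                if (used > 0.9 * rt.maxMemory()) {
                    shutdownClient(); // notify the server, then exit
                    break;
                }
                try {
                    Thread.sleep(1_000);
                } catch (InterruptedException e) {
                    break;
                }
            }
        });
        watcher.setDaemon(true); // don't keep the JVM alive for the watcher
        watcher.start();
    }

    private static void shutdownClient() {
        // hypothetical hook: close the RMI connection, inform the server, exit
        System.exit(1);
    }
}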
Or, as Vivien Barousse mentioned, you could have a low timeout on the server side to prevent the situation.
Hope these help.
If you're routinely getting out-of-memory errors, I think the real solution is not to find a way to catch them, as all that accomplishes is allowing you to die a less dramatic death. The real thing to do is rework your application so that it stops running out of memory. What are you doing that makes you run out of memory? If you're reading a bunch of data off a database and trying to process it all at once, maybe you can read it in smaller chunks, like one record at a time. If you're creating animations, maybe you can do one frame at a time instead of holding it all in memory at once. Etc.
Wrap the Java client program in another program -- maybe written in C++, Java, or any other language (it must not run on the same VM as your client) -- that restarts it or logs an error message.
Your client should log its state (create checkpoints*) every x operations and on application start and close. You can then implement functionality in the client that cleans up the mess -- based on the checkpoint info -- if the client is restarted with an indication that it crashed before. This way the wrapper does not need to have any complicated functionality.
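A minimal sketch of such a wrapper in Java; the jar name and restart policy are assumptions:

import java.io.IOException;

// Minimal sketch: launch the client as a child process and restart it
// whenever it dies abnormally (e.g. after an OutOfMemoryError).
public class ClientWatchdog {
    public static void main(String[] args) throws IOException, InterruptedException {
        while (true) {
            Process client = new ProcessBuilder("java", "-jar", "client.jar")
                    .inheritIO()
                    .start();
            int exitCode = client.waitFor();
            if (exitCode == 0) break; // clean shutdown: stop restarting
            System.err.println("Client died with exit code " + exitCode + ", restarting");
        }
    }
}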
P.S. An OutOfMemoryError is a terminal error, so there is no sense in catching it. Of course, sometimes there is enough memory left to do something, but that is not the rule.
* Checkpoint/Recovery is an architectural pattern used heavily in dependable systems; see http://www.google.de/search?q=dependable+sytem+checkpoints+restore for a number of publications on this topic.

How to pass sockets created to another Java Process

We have an application which creates many sockets, each belonging to its own thread. By design, if this application somehow fails, all threads stop, which is not wanted. To overcome this issue, each thread must be separated from the main application, so that if one of the threads fails, the other ones keep running. One idea we have is to pass a created socket to another Java process, so what is the correct way to do that?
Other approaches are also welcome.
Waiting for your suggestions...
Forking:
You can't pass a socket handle between Java processes using the normal API, as far as I can tell. However, it does seem to be possible on Windows using the Winsock 2 API. On POSIX you should be able to fork a child process with access to the parent's socket, since forked processes inherit the parent's sockets.
You could, I think, implement a new SocketImpl class which supports moving a socket handle to another process, but you'd need to write some JNI code to do it.
Sounds pretty hairy to me; I doubt forking a new process from within Java is a good idea!
Listeners:
Another approach might be to spawn a new 'listener' process which is essentially a new pre-forked worker. Each worker could then take turns to listen to the socket for connections.
The workers would then need to coordinate with a control process which manages spawning new processes as needed.
I agree with @Bozho: if an error in one thread can take them all down (I guess it would have to be a JVM exception killing the whole app), you have a bigger problem. You should look at isolating the threads if possible.
It isn't possible. (Sockets can't be serialized.)
When one thread fails, its exception should be caught, logged, and this should not interfere with other threads.
So either design it to stop completely, or design it to not stop completely.
Or pass all the information about the socket (address/port) to another application, which itself could open a similar socket.
See this similar question: socket passing between processes.
Unfortunately, the barrier of the address space cannot be crossed.
I rather agree with Bozho: you need to redesign your application / critical threads so that an Exception or an Error does not kill your whole VM.
To help you with that, I suggest you have a look at:
Thread.setDefaultUncaughtExceptionHandler(...) and Thread.setUncaughtExceptionHandler(...) (see hyperlink below), which help to catch unforeseen problems (such as runtime exceptions)
Runtime.addShutdownHook(...) (see hyperlink below), which helps to close things cleanly (for example when an OutOfMemoryError occurs)
Regards
Cerber
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Thread.html#setUncaughtExceptionHandler(java.lang.Thread.UncaughtExceptionHandler)
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Runtime.html#addShutdownHook(java.lang.Thread)
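A minimal sketch of both suggestions together (the handler bodies are illustrative):

// Minimal sketch: last-chance logging for anything a thread failed to
// catch, plus a hook that runs when the VM shuts down.
Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
    System.err.println("Uncaught in " + thread.getName() + ": " + throwable);
});

Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    // close sockets, flush logs, release resources
    System.err.println("VM shutting down, cleaning up");
}));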
Use a class that is shared between threads to hold the sockets. You can use a HashMap to label each socket so other threads can reference the ones they need.
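A minimal sketch of such a shared holder, using a ConcurrentHashMap so no external synchronization is needed; the label scheme is an assumption:

import java.net.Socket;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Minimal sketch: a registry shared between threads, with sockets
// looked up by a string label.
public class SocketRegistry {
    private final ConcurrentMap<String, Socket> sockets = new ConcurrentHashMap<>();

    public void register(String label, Socket socket) {
        sockets.put(label, socket);
    }

    public Socket lookup(String label) {
        return sockets.get(label);
    }
}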
I want to respond to those who say 'just catch the exceptions and exit the thread'.
You cannot catch all of them. The following cause the Java JVM to exit:
assertions inside the JVM, due to bugs in the JVM implementation
some failures in JNI code (SIGSEGV, SIGABRT)
OutOfMemoryError
