I'm programming in Java and am using XML-RPC to submit data from a client to a server. My problem is that when I call XmlRpcClient.execute and there is a connection error, the application gets stuck until I eventually get a timeout exception (which I do want, eventually). I placed this whole process in a new thread and wanted the ability to stop/cancel the process if I didn't want to wait for the timeout.
I learned how to stop threads, but I don't know whether I can interrupt the XmlRpcClient.execute call.
Any ideas?
The default execute method is, by nature, synchronous, that is, blocking.
If you are using Jakarta Commons HttpClient, you could set the connection timeout to a shorter value (the default is 0, meaning no timeout) with the transport's setConnectionTimeout method.
I believe, though, that the proper handling would be to use the executeAsync method and provide a callback to it in order to continue.
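A minimal sketch of both suggestions, assuming the Apache XML-RPC 3.x client (the URL and method name are placeholders, and whether the timeouts are honored depends on the transport factory in use):

    import java.net.URL;

    import org.apache.xmlrpc.XmlRpcRequest;
    import org.apache.xmlrpc.client.AsyncCallback;
    import org.apache.xmlrpc.client.XmlRpcClient;
    import org.apache.xmlrpc.client.XmlRpcClientConfigImpl;

    public class AsyncXmlRpcExample {
        public static void main(String[] args) throws Exception {
            XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
            config.setServerURL(new URL("http://example.com/xmlrpc")); // placeholder URL
            // Fail fast instead of blocking for a long time on a dead connection.
            config.setConnectionTimeout(5000); // milliseconds
            config.setReplyTimeout(10000);     // milliseconds

            XmlRpcClient client = new XmlRpcClient();
            client.setConfig(config);

            // Non-blocking call: the calling thread returns immediately,
            // the callback runs when the server answers (or when the call fails).
            client.executeAsync("example.method", new Object[] { "some data" },
                    new AsyncCallback() {
                        public void handleResult(XmlRpcRequest request, Object result) {
                            System.out.println("Server returned: " + result);
                        }

                        public void handleError(XmlRpcRequest request, Throwable error) {
                            System.err.println("Call failed: " + error);
                        }
                    });

            // ... the application can keep working, or decide to give up waiting here.
        }
    }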
For an exercise, we are to implement a server that has a thread that listens for connections, accepts them and throws the socket into a BlockingQueue. A set of worker threads in a pool then goes through the queue and processes the requests coming in through the sockets.
Each client connects to the server, sends a large number of requests (waiting for the response before sending the next request) and eventually disconnects when done.
My current approach is to have each worker thread waiting on the queue, getting a socket, then processing one request, and finally putting the (still open) socket back into the queue before handling another request, potentially from a different client. There are many more clients than there are worker threads, so many connections queue up.
The problem with this approach: a thread will be blocked by a client even if the client doesn't send anything. Possible pseudo-solutions, none of them satisfactory:
Call available() on the inputStream and put the connection back into the queue if it returns 0. The problem: It's impossible to detect if the client is still connected.
As above, but use socket.isClosed() or socket.isConnected() to figure out if the client is still connected. The problem: Neither method detects a client hangup, as described nicely by EJP in Java socket API: How to tell if a connection has been closed?
Probe if the client is still there by reading from or writing to it. The problem: Reading blocks (i.e. back to the original situation where an inactive client blocks the queue) and writing actually sends something to the client, making the tests fail.
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Short answer: no. For a longer answer, refer to the one by EJP.
Which is why you probably shouldn't put the socket back on the queue at all, but rather handle all the requests from the socket, then close it. Passing the connection to different worker threads to handle requests separately won't give you any advantage.
If you have badly behaving clients you can use a read timeout on the socket, so reading will block only until the timeout occurs. Then you can close that socket, because your server doesn't have time to cater to clients that don't behave nicely.
Is there a way to solve this problem? I.e. is it possible to distinguish a disconnected client from a passive client without blocking or sending something?
Not really when using blocking IO.
You could look into the non-blocking (NIO) package, which deals with things a little differently.
In essence you have a socket which can be registered with a "selector". If you register sockets for "is data ready to be read" you can then determine which sockets to read from without having to poll individually.
Same sort of thing for writing.
Here is a tutorial on writing NIO servers
Turns out the problem is solvable with a few tricks. After long discussions with several people, I combined their ideas to get the job done in reasonable time:
After creating the socket, configure it such that a blocking read will only block for a certain time, say 100ms: socket.setSoTimeout(100);
Additionally, record the timestamp of the last successful read of each connection, e.g. with System.currentTimeMillis()
In principle (see below for exception to this principle), run available() on the connection before reading. If this returns 0, put the connection back into the queue since there is nothing to read.
Exception to the above principle in which case available() is not used: If the timestamp is too old (say, more than 1 second), use read() to actually block on the connection. This will not take longer than the SoTimeout that you set above for the socket. If you get a SocketTimeoutException, put the connection back into the queue. If you read -1, throw the connection away since it was closed by the remote end.
With this strategy, most read attempts terminate immediately, either returning some data or nothing because they were skipped since there was nothing available(). If the other end closed its connection, we will detect this within one second, since the timestamp of the last successful read is too old. In this case, we perform an actual read that will return -1 and the socket's isClosed() is updated accordingly. And in the case where the socket is still open but the queue is so long that we have more than a second of delay, it takes us an additional 100 ms to find out that the connection is still there but not ready.
EDIT: An enhancement of this is to change "last successful read" to "last blocking read" and also update the timestamp when getting a SocketTimeoutException. A rough sketch of the resulting per-connection logic follows below.
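A rough sketch of the worker-side logic described above (the wrapper class and field names are made up for illustration; the 100 ms and 1 s thresholds are the ones from the steps above):

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;
    import java.net.SocketTimeoutException;

    // Hypothetical wrapper pairing a socket with the timestamp of its last blocking read.
    class Connection {
        final Socket socket;
        long lastBlockingRead = System.currentTimeMillis();

        Connection(Socket socket) throws IOException {
            this.socket = socket;
            socket.setSoTimeout(100); // a blocking read waits at most 100 ms
        }
    }

    class Worker {
        // Returns true if the connection is still alive and should go back into the queue.
        static boolean serviceOnce(Connection conn) throws IOException {
            InputStream in = conn.socket.getInputStream();
            boolean stale = System.currentTimeMillis() - conn.lastBlockingRead > 1000;

            if (!stale && in.available() == 0) {
                return true; // nothing to read, requeue without blocking
            }
            try {
                int b = in.read(); // blocks for at most 100 ms (SO_TIMEOUT)
                conn.lastBlockingRead = System.currentTimeMillis();
                if (b == -1) {
                    conn.socket.close();
                    return false; // remote end closed the connection
                }
                // ... read and handle the rest of the request here ...
                return true;
            } catch (SocketTimeoutException e) {
                conn.lastBlockingRead = System.currentTimeMillis(); // the EDIT above
                return true; // client is quiet but still connected
            }
        }
    }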
No, the only way to discern an inactive client from a client that didn't shut down their socket properly is to send a ping or something to check if they're still there.
Possible solutions I can see are:
Kick clients that haven't sent anything for a while. You would have to keep track of how long they've been quiet, and once they reach a limit you assume they've disconnected.
Ping the client to see if they're still there. I know you asked for a way to do this without sending anything, but if this is really a problem, i.e. you can't use the above solution, this is probably the best way to do it, depending on the specifics (since it's an exercise, you might have to imagine the specifics).
A mix of both; this is actually probably better. Keep track of how long they've been quiet, and after a while send them a ping to see if they're still alive. A minimal sketch of that bookkeeping follows below.
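A minimal sketch of the "kick quiet clients" bookkeeping (the class name, the 30-second limit and the use of a scheduled task are assumptions for illustration):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Tracks the last time each client said something; a periodic task kicks the quiet ones.
    class IdleTracker {
        private static final long IDLE_LIMIT_MS = 30000; // assumed limit
        private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();

        // Call this whenever a request arrives from the given client.
        void touch(String clientId) {
            lastSeen.put(clientId, System.currentTimeMillis());
        }

        // Run this from a ScheduledExecutorService, e.g. once per second.
        void kickIdleClients() {
            long now = System.currentTimeMillis();
            lastSeen.forEach((clientId, last) -> {
                if (now - last > IDLE_LIMIT_MS) {
                    // Either close the connection outright, or send a ping here
                    // and only drop the client if it doesn't answer in time.
                    lastSeen.remove(clientId);
                }
            });
        }
    }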
I use SSLEngines together with NIO to provide nonblocking SSL connections to my application. At some point during the handshake (probably after receiving ServerHelloDone) the SSLEngine requires me to process a delegated task.
So I call getDelegatedTask and call its run method. The task itself calls X509ExtendedKeyManager.getCertificateChain, which in turn throws a NullPointerException. That exception is caught by the Handshaker and stored for later reporting.
However, reporting works by calling the private checkTaskThrown method, which is only invoked when a message has been received or a message is about to be sent.
But since getCertificateChain does not complete correctly, there is nothing to send, and the other side sends nothing as well, so there is nothing to receive. Hence the exception stays hidden.
As no side proceeds, we have a livelock.
And I found no way to prevent or detect that, except for
Using reflection to call checkTaskThrown
Using a task/timer to enforce a timeout
Neither of which is the route I want to go...
When the task completes you should retry the operation that returned NEED_TASK.
You need to find and fix the NPE in your KeyManager.
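For reference, the usual delegated-task handling looks roughly like the sketch below (the helper is illustrative, not a fixed API):

    import javax.net.ssl.SSLEngine;

    final class HandshakeTasks {
        // Run the engine's pending delegated tasks on the current thread, then let the
        // caller retry the wrap()/unwrap() that reported NEED_TASK. Retrying is what
        // surfaces any exception the task caught and stored internally (as an SSLException).
        static void runDelegatedTasks(SSLEngine engine) {
            Runnable task;
            while ((task = engine.getDelegatedTask()) != null) {
                task.run();
            }
            // Afterwards, check engine.getHandshakeStatus() again and call wrap()/unwrap()
            // accordingly; a stored handshake exception will be thrown at that point.
        }
    }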
Hi guys, I am getting the following error. I am using WebSocket and Tomcat 8.
java.lang.IllegalStateException: The remote endpoint was in state [TEXT_FULL_WRITING] which is an invalid state for called method
at org.apache.tomcat.websocket.WsRemoteEndpointImplBase$StateMachine.checkState(WsRemoteEndpointImplBase.java:1092)
at org.apache.tomcat.websocket.WsRemoteEndpointImplBase$StateMachine.textStart(WsRemoteEndpointImplBase.java:1055)
at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.sendString(WsRemoteEndpointImplBase.java:186)
at org.apache.tomcat.websocket.WsRemoteEndpointBasic.sendText(WsRemoteEndpointBasic.java:37)
at com.iri.monitor.webSocket.IRIMonitorSocketServlet.broadcastData(IRIMonitorSocketServlet.java:369)
at com.iri.monitor.webSocket.IRIMonitorSocketServlet.access$0(IRIMonitorSocketServlet.java:356)
at com.iri.monitor.webSocket.IRIMonitorSocketServlet$5.run(IRIMonitorSocketServlet.java:279)
You are trying to write to a WebSocket that is not in a ready state. The WebSocket is currently in writing mode, and you are trying to write another message to it, which raises the error. Using an async write, or (as a less good practice) a sleep, can prevent this from happening. This error is also commonly raised when a WebSocket program is not thread safe.
Neither async nor sleep can help.
The key problem is that the send method cannot be called concurrently.
So it really is about concurrency; you can use locks or some other mechanism. Here is how I handle it.
In fact, I wrote an actor to wrap the socket session. It produces an event when the send method is called. Each actor is registered with a Looper, which contains a worker thread and an event queue. Meanwhile, the worker thread keeps sending messages.
So I use the synchronous send method inside, and the actor model takes care of the concurrency.
The key problem now is the number of Loopers. You can't have too many threads, nor too few. But you can estimate a number from your business cases and keep adjusting it.
It is actually not a concurrency issue; you will get the same error in a single-threaded environment. It is about asynchronous calls that must not overlap.
You should use session.getBasicRemote().sendText() instead of session.getAsyncRemote().sendText() to avoid this problem. This should not be an issue as long as the amount of data you are writing stays reasonably small.
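For illustration, one common way to serialize writes per session, sketched below (the helper class and the choice of synchronizing on the Session object are assumptions, not part of the WebSocket API):

    import java.io.IOException;
    import javax.websocket.Session;

    final class SafeSender {
        // Serialize writes per session so two threads never start a second text
        // frame while the first one is still being written.
        static void send(Session session, String message) {
            synchronized (session) {
                try {
                    if (session.isOpen()) {
                        session.getBasicRemote().sendText(message); // blocking; completes the frame
                    }
                } catch (IOException e) {
                    // log and/or close the session here
                }
            }
        }
    }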
I am trying to write an event-driven HTTP web server. Because I will be using only one thread, the events have to be queued up and handled asynchronously (I am also using Java NIO). However, I am stuck at the very first step. I have opened a ServerSocketChannel, but I am not sure how to get a new SocketChannel connection when a request comes in. Is there an operating system queue that I can access through Java? (I am not sure, as Java is OS independent.) I do not want to use any blocking calls.
If I am proceeding in the wrong direction, any help would be appreciated.
thanks.
You need to:
create a Selector
put the ServerSocketChannel into non-blocking mode
register the SSC with the Selector using OP_ACCEPT
write a select() loop, which you will find in the NIO tutorial
In the select() loop you will find keys for which isAcceptable() returns true: that means you need to call ServerSocketChannel.accept() to accept a connection. That returns a SocketChannel, which you must then put into non-blocking mode and register with OP_READ.
In turn that will cause keys for which isReadable() returns true: that means you should read the associated SocketChannel.
You will find examples of all this in the NIO Tutorial. It gets much more complicated than this ;-)
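A condensed sketch of such a select() loop, assuming a plain single-threaded server on an arbitrary port (request parsing and robust error handling are left out):

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    public class SingleThreadedServer {
        public static void main(String[] args) throws Exception {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.socket().bind(new InetSocketAddress(8080)); // placeholder port
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buffer = ByteBuffer.allocate(8192);
            while (true) {
                selector.select(); // blocks until at least one channel is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buffer.clear();
                        int n = client.read(buffer);
                        if (n == -1) {
                            key.cancel();
                            client.close();
                        } else {
                            // ... parse the request bytes in 'buffer' and write a response ...
                        }
                    }
                }
            }
        }
    }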
Because of browser compatibility issues, I have decided to use long polling for a real time syncing and notification system. I use Java on the backend and all of the examples I've found thus far have been PHP. They tend to use while loops and a sleep method. How do I replicate this sort of thing in Java? There is a Thread.sleep() method, which leads me to...should I be using a separate thread for each user issuing a poll? If I don't use a separate thread, will the polling requests be blocking up the server?
[Update]
First of all, yes, it is certainly possible to do a straightforward long polling request handler. The request comes in to the server, then in your handler you loop or block until the information you need is available, then you end the loop and provide the information. Just realize that for each long polling client, yes, you will be tying up a thread. This may be fine, and perhaps this is the way you should start. However, if your web server becomes so popular that the sheer number of blocked threads becomes a performance problem, consider an asynchronous solution where you can keep a large number of client requests pending without tying up one or more threads per client; each request still blocks from the client's point of view, that is, it gets no response until there is useful data.
[original]
The Servlet 3.0 spec provides a standard for doing this kind of asynchronous processing. Google "servlet 3.0 async". Tomcat 7 supports this. I'm guessing Jetty does also, but I have not used it.
Basically in your servlet request handler, when you realize you need to do some "long" polling, you can call a method to create an asynchronous context. Then you can exit the request handler and your thread is freed up, however the client is still blocking on the request. There is no need for any sleep or wait.
The trick is storing the async context somewhere "convenient". Then, when something happens in your app and you want to push data to the client, you go find that context, get the response object from it, write your content, and invoke complete(). The response is sent back to the client without you having to tie up a thread for each client.
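A minimal sketch of that flow with the Servlet 3.0 API (the servlet mapping, the 30-second timeout, and the queue holding the pending contexts are illustrative choices):

    import java.io.IOException;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/poll", asyncSupported = true)
    public class LongPollServlet extends HttpServlet {

        // Pending long-poll requests, waiting for data to push.
        private final Queue<AsyncContext> waiting = new ConcurrentLinkedQueue<>();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            AsyncContext ctx = req.startAsync();   // frees the request thread
            ctx.setTimeout(30000);                 // give up after 30 s (assumed value)
            waiting.add(ctx);
        }

        // Call this from wherever your application produces new data.
        public void push(String data) throws IOException {
            AsyncContext ctx;
            while ((ctx = waiting.poll()) != null) {
                ctx.getResponse().getWriter().write(data);
                ctx.complete();                    // sends the response to the client
            }
        }
    }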
Not sure this is the best solution for what you want, but usually if you want to do something at periodic intervals in Java you use a ScheduledExecutorService. There is a good example at the top of the API documentation. TimeUnit is a great enum, as you can specify the period easily and clearly, so you can have it run every x minutes, hours, etc.
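For example, a rough sketch with ScheduledExecutorService (the task body and the 30-second interval are placeholders):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class PeriodicPoller {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            // Run the poll every 30 seconds, starting immediately.
            scheduler.scheduleAtFixedRate(
                    () -> System.out.println("polling for updates..."), // placeholder task
                    0, 30, TimeUnit.SECONDS);
        }
    }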