I am currently writing a small Java program where a client sends commands to a server. A separate thread deals with replies from that server (the reply is usually pretty fast). Ideally, I want to pause the thread that made the request until the reply is received or until some time limit is exceeded.
My current solution looks like this:
public void waitForResponse() {
    thisThread = Thread.currentThread();
    try {
        thisThread.sleep(10000);
        // This should not happen.
        System.exit(1);
    }
    catch (InterruptedException e) {
        // continue with the main program
    }
}

public void notifyOKCommandReceived() {
    if (thisThread != null) {
        thisThread.interrupt();
    }
}
The main problem is that this code relies on an exception being thrown when everything goes as it should, and it terminates the whole program when something actually goes wrong (the timeout expires). What is a good way to fix this?
There are multiple concurrency primitives which allow you to implement this kind of thread communication. For example, you can use a CountDownLatch to accomplish a similar result:
public void waitForResponse() throws InterruptedException {
    boolean result = latch.await(10, TimeUnit.SECONDS);
    // check result and react accordingly (false means the timeout elapsed)
}

public void notifyOKCommandReceived() {
    latch.countDown();
}
Initialize the latch before sending the request, as follows:
latch = new CountDownLatch(1);
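Putting it together, a minimal sketch might look like this (the class name and the sendCommand method are made up for illustration; the 10-second timeout mirrors the original code):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class CommandSender {

    private volatile CountDownLatch latch;

    public void sendCommand() throws InterruptedException {
        latch = new CountDownLatch(1);   // fresh latch for every request
        // ... send the command to the server here ...
        waitForResponse();
    }

    public void waitForResponse() throws InterruptedException {
        boolean received = latch.await(10, TimeUnit.SECONDS);
        if (!received) {
            // timed out: handle the missing reply (retry, log, fail the command, ...)
        }
    }

    // Called by the reply-handling thread when the server's OK arrives.
    public void notifyOKCommandReceived() {
        CountDownLatch l = latch;
        if (l != null) {
            l.countDown();
        }
    }
}

Unlike the sleep/interrupt approach, the success path no longer relies on an exception, and a timeout simply makes await return false instead of killing the program.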
Related
Let's assume that I have a grpc-java server with code something like this:
@Override
public void getData(RequestValue requestValue, StreamObserver<ResponseValue> responseObserver) {
    ResponseValue rv = ... // blocking code here
    responseObserver.onNext(rv);
    responseObserver.onCompleted();
}
So I have a ResponseValue as the result of blocking code (data from a database or another service).
I want to avoid blocking the current thread by using another thread pool for my blocking tasks. For example, in Netty I can use a specific EventExecutorGroup for such tasks.
How can I manage this properly with a grpc-java service?
The easiest way to do this is to pass the responseObserver to the long-running task:
@Override
public void getData(RequestValue requestValue, StreamObserver<ResponseValue> responseObserver) {
    Runnable r = () -> {
        try {
            ResponseValue rv = ... // blocking code here
            responseObserver.onNext(rv);
            responseObserver.onCompleted();
        } catch (Exception e) {
            responseObserver.onError(e);
        }
    };
    executor.execute(r); // hand the blocking work to a separate executor
}
It is important that you complete the call at some time, even if an unexpected error occurs. Otherwise you will leak calls (that remain open until the timeout occurs, if ever).
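The executor above is not provided by gRPC; it is assumed to be a plain java.util.concurrent.ExecutorService owned by the service implementation. A minimal sketch of how it might be created and torn down (the class name and pool size are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DataServiceImpl /* extends the generated service base class */ {

    // Dedicated pool for blocking work, so gRPC's transport threads stay free.
    private final ExecutorService executor = Executors.newFixedThreadPool(16);

    // getData(...) from the answer above would call executor.execute(r) here.

    // Call this when the server shuts down so the pooled threads do not linger.
    public void stopExecutor() throws InterruptedException {
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
    }
}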
While using Unirest, the program doesn't exit until we manually shut down every thread by invoking Unirest.shutdown(). If I had to make just one request, it would be easy:
private static void asyncRequest(String link) {
    try {
        Future<HttpResponse<JsonNode>> request = Unirest.head(link).asJsonAsync(
            new Callback<JsonNode>() {
                @Override
                public void completed(HttpResponse<JsonNode> httpResponse) {
                    print(httpResponse.getHeaders());
                    try {
                        Unirest.shutdown();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }

                @Override
                public void failed(UnirestException e) {
                    print(e.getMessage());
                }

                @Override
                public void cancelled() {
                    print("Request cancelled");
                }
            }
        );
    } catch (Exception e) {
        e.printStackTrace();
    }
}

public static void main(String[] args) throws Exception {
    asyncRequest("https://entrepreneur.com");
}
But I have to make multiple HTTP requests in parallel (subsequent requests are meant not to wait for previous requests to complete). In the code above, I have to execute the code inside asyncRequest more than once with different links. The problem is that I can't decide when to invoke Unirest.shutdown() so that the program exits as soon as the last request receives its response. If I call Unirest.shutdown() after all the calls to asyncRequest in main, some or all of the requests might get interrupted. If I call it inside completed (and the other overridden methods), only the first request is made and the others are interrupted. How can I solve this?
In theory, you could make the current thread wait for the execution of each request and call shutdown after they are all done, but that would make the whole process synchronous, which is not what we want. So what I would do is run a separate thread (other than the main one) which waits for all of your HTTP requests to finish. To do that you can use the CountDownLatch class, initialized with the number of requests that must complete before it releases control to the waiting thread. You pass the CountDownLatch instance to the async method and count it down each time an HTTP request completes. When it reaches zero, control returns to the waiting thread, where you know you can call Unirest.shutdown(), because all of your requests are done.
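A rough sketch of that idea (names are made up; for simplicity the main thread is the one waiting on the latch, the Unirest calls mirror the snippet above, and the imports assume the classic com.mashape Unirest used in the question):

import java.util.concurrent.CountDownLatch;
import com.mashape.unirest.http.HttpResponse;
import com.mashape.unirest.http.JsonNode;
import com.mashape.unirest.http.Unirest;
import com.mashape.unirest.http.async.Callback;
import com.mashape.unirest.http.exceptions.UnirestException;

public class ParallelRequests {

    private static void asyncRequest(String link, final CountDownLatch latch) {
        Unirest.head(link).asJsonAsync(new Callback<JsonNode>() {
            @Override
            public void completed(HttpResponse<JsonNode> httpResponse) {
                System.out.println(httpResponse.getHeaders());
                latch.countDown();
            }

            @Override
            public void failed(UnirestException e) {
                System.out.println(e.getMessage());
                latch.countDown(); // count failures too, or the latch never reaches zero
            }

            @Override
            public void cancelled() {
                System.out.println("Request cancelled");
                latch.countDown();
            }
        });
    }

    public static void main(String[] args) throws Exception {
        String[] links = { "https://entrepreneur.com", "https://example.com" };
        CountDownLatch latch = new CountDownLatch(links.length);
        for (String link : links) {
            asyncRequest(link, latch);
        }
        latch.await();      // released once every callback has counted down
        Unirest.shutdown(); // now it is safe to stop Unirest's threads
    }
}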
Tyrus websockets: ClientManager.connectToServer fails with 'Handshake response not received'.
How do I retry the connection without more and more daemon, Grizzly-kernel, and Grizzly-worker threads being created?
Is there a call on Session or the client to kill/clean up Thread-1 to Thread-4 and the Grizzly-kernel and Grizzly-worker threads?
Below is an example Java main loop which attempts forever to make and maintain a connection with a server that may not be running or may be periodically restarted.
public void onClose(Session session, CloseReason closeReason) {
    latch.countDown();
}
public static void main(String[] args) {
    while (true) {
        latch = new CountDownLatch(1);
        ClientManager client = ClientManager.createClient();
        try {
            client.connectToServer(wsListener.class, new URI("wss://<host>/ws"));
            latch.await();
        }
        catch (DeploymentException e) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException ie) {
                break;
            }
        }
        catch (Exception e) {
            throw new RuntimeException(e);
        }
        client = null;
        latch = null;
        // HERE... clean up
    }
}
client.connectToServer returns a Session instance, and when you call Session.close(), the client runtime should be shut down (no threads left).
You did not specify which version of Tyrus you are using (I recommend 1.3.3; we made some improvements in this area). You might also be interested in our shared container support, see TYRUS-275. You can combine it with the thread pool configuration, which should give you much better control over the number of spawned/running threads.
We are always looking for new use cases, so if you think you have something which should be better supported in Tyrus, feel free to create a new enhancement request in our JIRA.
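As an illustration of the Session.close() point (the endpoint class and URI below are placeholders, not taken from the question): keep the Session returned by connectToServer and close it when you are done, and the client runtime, including its worker threads, should go away with it.

import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.Session;
import org.glassfish.tyrus.client.ClientManager;

public class SingleConnectExample {

    @ClientEndpoint
    public static class Listener {
        // @OnOpen / @OnMessage / @OnClose handlers would go here
    }

    public static void main(String[] args) throws Exception {
        ClientManager client = ClientManager.createClient();
        // placeholder URI; substitute the real server address
        Session session = client.connectToServer(Listener.class, new URI("wss://example.com/ws"));
        try {
            // ... exchange messages, wait on a latch, etc. ...
        } finally {
            session.close(); // shuts down the client runtime so no worker threads are left behind
        }
    }
}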
I got this exact same behavior. I was using a lot of threads and synchronization and managed to accidentally make the onOpen method of the ClientEndpoint block, which caused the handshake to time out.
I'm having trouble with inter-thread communication and have "solved" it by using "dummy messages" all over the place. Is this a bad idea? What are possible solutions?
Example of a problem I have.
The main thread starts a thread for processing and inserting records into a database.
The main thread reads a possibly huge file and puts one record (object) after another into a BlockingQueue. The processing thread reads from the queue and does the work.
How do I tell the processing thread to stop?
The queue can be empty while the work is not yet done, and the main thread does not know when the processing thread has finished its work either, so it can't just interrupt it.
So the processing thread does:
while (queue.size() > 0 || !Thread.currentThread().isInterrupted()) {
    MyObject object = queue.poll(100, TimeUnit.MILLISECONDS);
    if (object != null) {
        String data = object.getData();
        if (data.equals("END")) {
            break;
        }
        // do work
    }
}
// clean-up
synchronized (queue) {
    queue.notifyAll();
}
return;
and the main thread:
// ...start processing thread...
while (reader.hasNext()) {
    // ...read whole file and put data in queue...
}
MyObject dummy = new MyObject();
dummy.setData("END");
queue.put(dummy);

// Note: empty queue here means work is done
while (queue.size() > 0) {
    synchronized (queue) {
        queue.wait(500); // over-cautious locking prevention, I guess
    }
}
Note that the insertions must all be in the same transaction, and the transaction can't be handled by the main thread.
What would be a better way of doing this?
(I'm learning and don't want to start "doing it the wrong way")
These dummy messages are valid. The technique is called a "poison pill": something that the producer sends to the consumer to make it stop.
Another possibility is to call Thread.interrupt() somewhere in the main thread and to catch and handle the InterruptedException accordingly in the worker thread.
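A minimal, self-contained sketch of that interrupt-based variant (String records stand in for the question's MyObject; the timings are arbitrary):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class InterruptDemo {

    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();

        Thread worker = new Thread(new Runnable() {
            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        String data = queue.poll(100, TimeUnit.MILLISECONDS);
                        if (data != null) {
                            // do work with data
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt(); // restore the flag so the loop exits
                    }
                }
            }
        });
        worker.start();

        queue.put("record 1");
        queue.put("record 2");

        while (!queue.isEmpty()) {   // wait until the worker has drained the queue
            Thread.sleep(50);
        }
        worker.interrupt();          // signal shutdown instead of sending a poison pill
        worker.join();
    }
}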
"solved" it by using "dummy messages" all over the place. Is this a
bad idea? What are possible solutions?
It's not a bad idea; it's called a "Poison Pill" and is a reasonable way to stop a thread-based service.
But it only works when the number of producers and consumers is known.
In the code you posted there are two threads: the "main thread", which produces data, and the "processing thread", which consumes data. A poison pill works well in this circumstance.
But imagine that you also have other producers. How does the consumer know when to stop? Only when every producer has sent a poison pill. So you need to know exactly how many producers there are and count the poison pills in the consumer: when the count equals the number of producers, all producers have stopped working and the consumer can stop, as sketched below.
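A rough, self-contained sketch of that counting scheme (String records and the "END" pill stand in for the question's MyObject; the number of producers is hard-coded for illustration):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MultiProducerPoison {

    private static final String PILL = "END";

    public static void main(String[] args) throws InterruptedException {
        final int producerCount = 3; // must be known in advance
        final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();

        for (int i = 0; i < producerCount; i++) {
            final int id = i;
            new Thread(new Runnable() {
                public void run() {
                    try {
                        queue.put("record from producer " + id);
                        queue.put(PILL); // each producer sends its own pill when done
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }).start();
        }

        int pillsSeen = 0;
        while (pillsSeen < producerCount) { // stop only after every producer has finished
            String data = queue.take();
            if (PILL.equals(data)) {
                pillsSeen++;
            } else {
                System.out.println("processing " + data);
            }
        }
    }
}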
In "main thread", you need to catch the InterruptedException, since if not, "main thread" might not able to set the "Poison Pill". You can do it like below,
...
try {
    // do normal processing
} catch (InterruptedException e) {
    /* fall through */
} finally {
    MyObject dummy = new MyObject();
    dummy.setData("END");
    ...
}
...
Also, you can try to use an ExecutorService to solve the whole problem.
(It works when you just need to do some work and then stop when it is all finished.)
void doWorks(Set<String> works, long timeout, TimeUnit unit)
        throws InterruptedException {
    ExecutorService exec = Executors.newCachedThreadPool();
    try {
        for (final String work : works) {
            exec.execute(new Runnable() {
                public void run() {
                    ...
                }
            });
        }
    } finally {
        exec.shutdown();
        exec.awaitTermination(timeout, unit);
    }
}
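A hypothetical call site might look like this (the job names and timeout are placeholders; java.util.Arrays and java.util.HashSet imports are assumed):

Set<String> works = new HashSet<String>(Arrays.asList("job-1", "job-2", "job-3"));
doWorks(works, 30, TimeUnit.SECONDS); // returns once all jobs finished or the timeout elapsed

Note that awaitTermination only waits: tasks still running when the timeout elapses are not cancelled unless doWorks is extended to call shutdownNow() afterwards.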
I'm learning and don't want to start "doing it the wrong way"
You might want to read the book Java Concurrency in Practice. Trust me, it's the best.
What you could do (which I did in a recent project) is to wrap the queue and add an isOpen() method.
class ClosableQ<T> {

    boolean isOpen = true;
    private LinkedBlockingQueue<T> lbq = new LinkedBlockingQueue<T>();

    public void put(T someObject) throws InterruptedException {
        if (isOpen) {
            lbq.put(someObject);
        }
    }

    public T get() throws InterruptedException {
        if (isOpen) {
            return lbq.take(); // blocks until an element is available
        }
        return null;
    }

    public boolean isOpen() {
        return isOpen;
    }

    public void open() {
        isOpen = true;
    }

    public void close() {
        isOpen = false;
    }
}
So your writer thread becomes something like:
while (reader.hasNext()) {
    // read the file and put it into the queue
    dataQ.put(someObject);
}
// now we're done
dataQ.close();
and the reader thread:
while (dataQ.isOpen()) {
    someObject = dataQ.get();
}
You could of course extend the queue class instead, but that gives the user a level of access you might not want. And you still need to add some concurrency safeguards to this code, such as an AtomicBoolean for the isOpen flag; see the sketch below.
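As a sketch of that last point (purely illustrative, not drop-in code): an AtomicBoolean plus a timed poll avoids both the visibility problem and a reader that blocks forever on an empty queue after it has been closed.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

class ClosableQueue<T> {

    private final AtomicBoolean open = new AtomicBoolean(true);
    private final LinkedBlockingQueue<T> lbq = new LinkedBlockingQueue<T>();

    public void put(T someObject) throws InterruptedException {
        if (open.get()) {
            lbq.put(someObject);
        }
    }

    // Times out instead of blocking forever, so the reader can re-check isOpen().
    public T poll(long timeout, TimeUnit unit) throws InterruptedException {
        return lbq.poll(timeout, unit);
    }

    // Reports "open" until the writer has closed the queue AND the reader has drained it.
    public boolean isOpen() {
        return open.get() || !lbq.isEmpty();
    }

    public void close() {
        open.set(false);
    }
}

// Reader loop:
//     while (dataQ.isOpen()) {
//         T item = dataQ.poll(100, TimeUnit.MILLISECONDS);
//         if (item != null) { /* do work */ }
//     }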
I know this has been discussed several times before, but I can't find an appropriate solution to my problem. I want to run a ServerSocket thread in the background, listening on the specified port. It actually works, but only once. It seems that the port the server is listening on is never closed correctly and is still active when I try to restart (I don't restart the thread itself). Can someone tell me why it is not working correctly? Thanks in advance for any help!
edit:
I have the same problem on the client side. I have a sender thread, and that one cannot be stopped either. What is the best way to do that?
The ClientConnector is just a class which connects to the server port and sends the data.
It's not a thread or anything like that.
That's my sender class:
private class InternalCamSender extends Thread {

    private int sendInterval = 500; // default 500 ms
    private ClientConnector clientConn = null;

    public InternalCamSender() {
        this.sendInterval = getSendingInterval();
        this.clientConn = new ClientConnector();
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            clientConn.sendCamPdu(CodingScheme.BER, createNewPDU());
            try {
                Thread.sleep(sendInterval);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
And I try to handle its behaviour like this:
if (jButton_startSending.getText().equals(STARTSENDING)) {
    new Thread() {
        public void run() {
            iSender = new InternalCamSender();
            iSender.start();
            jButton_startSending.setText(STOPSENDING);
        }
    }.start();
} else {
    new Thread() {
        public void run() {
            if (iSender.isAlive()) {
                iSender.interrupt();
                try {
                    iSender.join();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            iSender = null;
            jButton_startSending.setText(STARTSENDING);
        }
    }.start();
}
Somehow I cannot stop the InternalCamSender like that. I tried with a volatile boolean before, with the same result. I read the http://download.oracle.com/javase/1.5.0/docs/guide/misc/threadPrimitiveDeprecation.html page and also tried the example "What should I use instead of Thread.stop?", but even that did not stop the thread. I am lost.
Any ideas?
edit:
I found the answer for my client sending problem here: http://www.petanews.de/code-snippets/java/java-threads-sauber-beenden-ohne-stop/
even though I don't know why that works. I am sure I tried that way before.
Problem solved!
You should close your resources (the streams and socket) in a finally block, rather than a catch block - this way the resources are always closed, whether an exception is caught or not.
It's also a bad practice to call System.exit() from within a catch block or within a thread - you are forcibly shutting down the whole JVM on any instance of an error. This is likely the cause of your problem with the server socket as well - whenever any exception is encountered with reading/closing the streams, you are exiting the JVM before you have a chance to close the server socket.
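A minimal sketch of that pattern for the server side (the class name, port handling, and request handling are illustrative): the finally blocks close the client socket and the server socket whether or not an exception is thrown, and errors are logged instead of calling System.exit().

import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ListenerExample {

    public void listen(int port) throws IOException {
        ServerSocket serverSocket = new ServerSocket(port);
        try {
            while (!Thread.currentThread().isInterrupted()) {
                Socket client = serverSocket.accept();      // blocks until a client connects
                try {
                    InputStream in = client.getInputStream();
                    // ... read the request from 'in' and handle it ...
                } catch (IOException e) {
                    e.printStackTrace();                    // log and keep serving; never System.exit()
                } finally {
                    client.close();                         // the client socket is always released
                }
            }
        } finally {
            serverSocket.close();                           // the listening port is freed even on errors
        }
    }
}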