Let's assume that I have a grpc-java server with code something like this:
@Override
public void getData(RequestValue requestValue, StreamObserver<ResponseValue> responseObserver) {
    ResponseValue rv = ... // blocking code here
    responseObserver.onNext(rv);
    responseObserver.onCompleted();
}
So I have a ResponseValue as a result of blocking code (data from a database or another service).
I want to avoid blocking the current thread by using another thread pool for my blocking tasks. For example, in Netty I can use a specific EventExecutorGroup for such tasks.
How can I manage this properly with a grpc-java service?
The easiest way to do this is to pass the responseObserver to the long-running task:
@Override
public void getData(RequestValue requestValue, StreamObserver<ResponseValue> responseObserver) {
    Runnable r = () -> {
        try {
            ResponseValue rv = ... // blocking code here
            responseObserver.onNext(rv);
            responseObserver.onCompleted();
        } catch (Exception e) {
            responseObserver.onError(e);
        }
    };
    executor.execute(r);
}
It is important that you complete the call at some time, even if an unexpected error occurs. Otherwise you will leak calls (that remain open until the timeout occurs, if ever).
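For completeness, here is a minimal sketch of wiring up such a dedicated executor inside the service implementation. The generated base class name (DataServiceGrpc.DataServiceImplBase), the pool size, and the blockingDbLookup helper are illustrative assumptions, not part of the original code:

import io.grpc.stub.StreamObserver;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DataService extends DataServiceGrpc.DataServiceImplBase {
    // Dedicated pool for blocking work, so gRPC's own threads are never blocked.
    private final ExecutorService blockingExecutor = Executors.newFixedThreadPool(16);

    @Override
    public void getData(RequestValue request, StreamObserver<ResponseValue> responseObserver) {
        blockingExecutor.execute(() -> {
            try {
                ResponseValue rv = blockingDbLookup(request); // hypothetical blocking call
                responseObserver.onNext(rv);
                responseObserver.onCompleted();
            } catch (Exception e) {
                responseObserver.onError(e);
            }
        });
    }

    private ResponseValue blockingDbLookup(RequestValue request) {
        // placeholder for the real database / downstream call
        throw new UnsupportedOperationException("replace with the real blocking call");
    }
}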
Related
So I'm using ListenableFuture as a return type for certain operations. I expect the users to add a callback to the future and then handle the success and exception cases. Now if the user cannot handle the exception, I want to have the ability to throw that exception onto the main thread. Here's a code example:
public class SomeProcessor {
    ListeningExecutorService executor = MoreExecutors.listeningDecorator(Executors.newSingleThreadExecutor());

    public ListenableFuture<String> doStringProcessing() {
        return executor.submit(() -> doWork());
    }

    private String doWork() {
        return "stuff";
    }
}
Then in a client class:
public class SomeConsumer {
    public SomeConsumer(SomeProcessor processor) {
        Futures.addCallback(processor.doStringProcessing(), new FutureCallback<String>() {
            @Override
            public void onSuccess(String result) {
                // do something with result
            }

            @Override
            public void onFailure(Throwable t) {
                if (t instanceof ExceptionICanHandle) {
                    // great, deal with it
                } else {
                    // HERE I want to throw on the main thread, not on the executor's thread
                    // Assume somehow I can get a hold of the main thread object
                    mainThread.getUncaughtExceptionHandler().uncaughtException(mainThread, t);
                    // This above code seems wrong???
                    throw new RuntimeException("Won't work as this is not on the mainthread");
                }
            }
        }, MoreExecutors.directExecutor());
    }
}
There is no direct way to do this.[1]
Hence, this question boils down to a combination of 2 simple things:
How do I communicate some data from a submitted task back to the code that is managing the pool itself? Which boils down to: How do I send data from one thread to another, and...
How do I throw an exception - which is trivial - throw x;.
In other words, you create the exception in your task but do not throw it; instead, you store the object in a place the main thread can see, and notify the main thread that it needs to go fetch it and throw it. Your main thread waits for this notification and, upon receiving it, fetches the exception and throws it.
A submitted task cannot simply 'ask' for its pool or the thread that manages it. However, that is easy enough to solve: simply pass either the 'main thread' itself, or more likely some third object that serves as a common communication line between them, to the task itself, so the task knows where to go.
Here is one simplistic approach based on the raw synchronization primitives baked into java itself:
public static void main(String[] args) throws Throwable {
    // I am the main thread.
    // Fire up the executor service here and submit tasks to it.
    // Ordinarily you would let this thread end or sleep; instead...
    ExecutorService service = ...;
    AtomicReference<Throwable> err = new AtomicReference<>();
    Runnable task = () -> doWork(err);
    service.submit(task);

    while (true) {
        synchronized (err) {
            Throwable t = err.get();
            if (t != null) throw t;
            err.wait();
        }
    }
}

public static void doWork(AtomicReference<Throwable> envelope) {
    try {
        doActualWork();
    } catch (Throwable t) {
        synchronized (envelope) {
            envelope.set(t);
            envelope.notifyAll();
        }
    }
}
There are many, many ways to send messages from one thread to another and the above is a rather finicky, primitive form. It'll do fine if you don't currently have any comms channels already available to you. But, if you already have e.g. a message queue service or the like you should probably use that instead here.
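For instance, here is a minimal self-contained sketch of the same hand-off built on a BlockingQueue instead of raw wait/notify; the class name FailureRelay and the doActualWork body are made up for illustration:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class FailureRelay {
    public static void main(String[] args) throws Throwable {
        // The queue is the "envelope": workers drop failures in, the main thread takes them out.
        BlockingQueue<Throwable> failures = new LinkedBlockingQueue<>();
        ExecutorService service = Executors.newFixedThreadPool(4);

        service.submit(() -> {
            try {
                doActualWork();        // stand-in for the real task body
            } catch (Throwable t) {
                failures.offer(t);     // hand the failure to the main thread
            }
        });

        // The main thread blocks here until some task reports a failure, then rethrows it.
        throw failures.take();
    }

    private static void doActualWork() {
        throw new IllegalStateException("boom");
    }
}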
[1] Thread.stop(someThrowable) literally does this as per its own documentation. However, it doesn't work: it's not just deprecated, it has been axed entirely; calling it throws an UnsupportedOperationException on modern VMs (I think at this point 10 years' worth of releases at least), and it is marked deprecated with the rather ominous warning "This method is inherently unsafe." and a lot more to boot. It's not the right answer.
I am currently writing a small Java program where I have a client sending commands to a server. A separate Thread is dealing with replies from that server (the reply is usually pretty fast). Ideally I pause the Thread that made the server request until such time as the reply is received or until some time limit is exceeded.
My current solution looks like this:
public void waitForResponse() {
    thisThread = Thread.currentThread();
    try {
        thisThread.sleep(10000);
        // This should not happen.
        System.exit(1);
    } catch (InterruptedException e) {
        // continue with the main program
    }
}

public void notifyOKCommandReceived() {
    if (thisThread != null) {
        thisThread.interrupt();
    }
}
The main problem is: This code does throw an exception when everything is going as it should and terminates when something bad happens. What is a good way to fix this?
There are multiple concurrency primitives which allow you to implement thread communication. You can use a CountDownLatch to accomplish a similar result:
public void waitForResponse() throws InterruptedException {
    boolean result = latch.await(10, TimeUnit.SECONDS);
    // check result and react correspondingly
}

public void notifyOKCommandReceived() {
    latch.countDown();
}
Initialize latch before sending request as follows:
latch = new CountDownLatch(1);
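Put together, a minimal sketch might look like this; the class name, the sendCommandAndWait method, and the 10-second timeout are illustrative assumptions:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class ResponseWaiter {
    private volatile CountDownLatch latch;

    // Called by the thread that sends the command and wants to wait for the reply.
    public void sendCommandAndWait() throws InterruptedException {
        latch = new CountDownLatch(1);      // re-initialize before every request
        // ... send the command to the server here ...
        boolean received = latch.await(10, TimeUnit.SECONDS);
        if (!received) {
            // timed out: handle the missing reply instead of calling System.exit
        }
    }

    // Called by the reply-handling thread once the OK command arrives.
    public void notifyOKCommandReceived() {
        CountDownLatch l = latch;
        if (l != null) {
            l.countDown();
        }
    }
}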
I am using GRPC-Java 1.1.2. In an active GRPC session, I have a few bidirectional streams open. Is there a way to clean them up from the client end when the client is disconnecting? When I try to disconnect, I run the following loop a fixed number of times and then disconnect, but I can see the following error on the server side (not sure if it's caused by another issue though):
disconnect from client
while (!channel.awaitTermination(3, TimeUnit.SECONDS)) {
    // check for upper bound and break if so
}
channel.shutdown().awaitTermination(3, TimeUnit.SECONDS);
error on server
E0414 11:26:48.787276000 140735121084416 ssl_transport_security.c:439] SSL_read returned 0 unexpectedly.
E0414 11:26:48.787345000 140735121084416 secure_endpoint.c:185] Decryption error: TSI_INTERNAL_ERROR
If you want to close gRPC (server-side streaming or bidirectional) streams from the client end, you will have to run the rpc call inside a Context.CancellableContext, found in the io.grpc package.
Suppose you have an rpc:
service Messaging {
rpc Listen (ListenRequest) returns (stream Message) {}
}
In the client side, you will handle it like this:
public class Messaging {
    private Context.CancellableContext mListenContext;

    private MessagingGrpc.MessagingStub getMessagingAsyncStub() {
        /* return your async stub */
    }

    public void listen(final ListenRequest listenRequest, final StreamObserver<Message> messageStream) {
        Runnable listenRunnable = new Runnable() {
            @Override
            public void run() {
                Messaging.this.getMessagingAsyncStub().listen(listenRequest, messageStream);
            }
        };

        if (mListenContext != null && !mListenContext.isCancelled()) {
            Log.d(TAG, "listen: already listening");
            return;
        }

        mListenContext = Context.current().withCancellation();
        mListenContext.run(listenRunnable);
    }

    public void cancelListen() {
        if (mListenContext != null) {
            mListenContext.cancel(null);
            mListenContext = null;
        }
    }
}
Calling cancelListen() will cancel the call with status 'CANCELLED': the connection will be closed, and onError of your StreamObserver<Message> messageStream will be invoked with a throwable whose message is 'CANCELLED'.
If you use shutdownNow() it will shut down the RPC streams you have more aggressively. Also, you need to call shutdown() or shutdownNow() before calling awaitTermination().
That said, a better solution would be to end all your RPCs gracefully before closing the channel.
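For the channel itself, a minimal sketch of that ordering might look like this (ManagedChannel is io.grpc.ManagedChannel; the 5-second timeouts are arbitrary):

// Cancel or complete your outstanding RPCs first (e.g. via cancelListen() above), then:
void shutdownChannel(ManagedChannel channel) throws InterruptedException {
    channel.shutdown();                                    // start an orderly shutdown
    if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {  // wait for in-flight RPCs to finish
        channel.shutdownNow();                             // force-cancel whatever is still running
        channel.awaitTermination(5, TimeUnit.SECONDS);
    }
}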
I'm having trouble with inter-thread communication and "solved" it by using "dummy messages" all over the place. Is this a bad idea? What are possible solutions?
Example problem I have:
The main thread starts a thread for processing and inserting records into a database.
The main thread reads a possibly huge file and puts one record (object) after another into a BlockingQueue. The processing thread reads from the queue and does the work.
How do I tell the "processing thread" to stop?
The queue can be empty while the work is not yet done, and the main thread does not know when the processing thread has finished its work either, so it can't interrupt it.
So the processing thread does
while (queue.size() > 0 || !Thread.currentThread().isInterrupted()) {
    MyObject object = queue.poll(100, TimeUnit.MILLISECONDS);
    if (object != null) {
        String data = object.getData();
        if (data.equals("END")) {
            break;
        }
        // do work
    }
}
// clean-up
synchronized (queue) {
    queue.notifyAll();
}
return;
and the main thread
// ...start processing thread...
while (reader.hasNext()) {
    // ...read whole file and put data in queue...
}
MyObject dummy = new MyObject();
dummy.setData("END");
queue.put(dummy);

//Note: empty queue here means work is done
while (queue.size() > 0) {
    synchronized (queue) {
        queue.wait(500); // over-cautious locking prevention I guess
    }
}
Note that the insertion must be in the same transaction and the transaction can't be handled by the main thread.
What would be a better way of doing this?
(I'm learning and don't want to start "doing it the wrong way")
This dummy-message approach is valid. It is called a "poison pill": something that the producer sends to the consumer to make it stop.
The other possibility is to call Thread.interrupt() on the worker thread from somewhere in the main thread, and to catch and handle the InterruptedException accordingly in the worker thread.
"solved" it by using "dummy messages" all over the place. Is this a
bad idea? What are possible solutions?
It's not a bad idea, it's called "Poison Pills" and is a reasonable way to stop a thread-based service.
But it only works when the number of producers and consumers is known.
In the code you posted there are two threads: the "main thread", which produces data, and the "processing thread", which consumes data. The "Poison Pill" works well for this circumstance.
But imagine you also have other producers: how does the consumer know when to stop (only when all producers have sent their "Poison Pills")? You need to know the exact number of producers and count the "Poison Pills" received in the consumer; when that count equals the number of producers, all producers have stopped working and the consumer can stop.
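A minimal sketch of that multi-producer counting, reusing the MyObject and queue types from the question and assuming the producer count is known up front:

// Sketch: one consumer draining a queue fed by producerCount producers. Each
// producer puts the shared 'poison' object when it is done; the consumer stops
// once it has seen a poison pill from every producer.
void consume(BlockingQueue<MyObject> queue, MyObject poison, int producerCount)
        throws InterruptedException {
    int poisonPillsSeen = 0;
    while (poisonPillsSeen < producerCount) {
        MyObject object = queue.take();   // blocks until something arrives
        if (object == poison) {           // identity check against the shared sentinel
            poisonPillsSeen++;            // one more producer has finished
            continue;
        }
        // do work with 'object'
    }
}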
In "main thread", you need to catch the InterruptedException, since if not, "main thread" might not able to set the "Poison Pill". You can do it like below,
...
try {
    // do normal processing
} catch (InterruptedException e) {
    /* fall through */
} finally {
    MyObject dummy = new MyObject();
    dummy.setData("END");
    ...
}
...
Also, you can try to use an ExecutorService to solve the whole problem.
(It works when you just need to do some work and then stop when all of it is finished.)
void doWorks(Set<String> works, long timeout, TimeUnit unit)
        throws InterruptedException {
    ExecutorService exec = Executors.newCachedThreadPool();
    try {
        for (final String work : works)
            exec.execute(new Runnable() {
                public void run() {
                    ...
                }
            });
    } finally {
        exec.shutdown();
        exec.awaitTermination(timeout, unit);
    }
}
I'm learning and don't want to start "doing it the wrong way"
You might need to read the book Java Concurrency in Practice. Trust me, it's the best.
What you could do (which I did in a recent project) is to wrap the queue and then add an 'isOpen()' method.
class ClosableQ<T> {
    boolean isOpen = true;
    private final LinkedBlockingQueue<T> lbq = new LinkedBlockingQueue<T>();

    public void put(T someObject) throws InterruptedException {
        if (isOpen) {
            lbq.put(someObject);
        }
    }

    public T get() throws InterruptedException {
        if (isOpen) {
            return lbq.take();
        }
        return null;
    }

    public boolean isOpen() {
        return isOpen;
    }

    public void open() {
        isOpen = true;
    }

    public void close() {
        isOpen = false;
    }
}
So your writer thread becomes something like:
while (reader.hasNext()) {
    // read the file and put it into the queue
    dataQ.put(someObject);
}
// now we're done
dataQ.close();
and the reader thread:
while (dataQ.isOpen()) {
    someObject = dataQ.get();
}
You could of course extend the list instead but that gives the user a level of access you might not want. And you need to add some concurrency thingies to this code, like AtomicBoolean.
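As a sketch of that last point, the open/closed flag could be an AtomicBoolean (or simply a volatile boolean) so the reader thread reliably sees the writer's close(); only the changed members are shown, the queue and put/get stay as before:

import java.util.concurrent.atomic.AtomicBoolean;

class ClosableQ<T> {
    private final AtomicBoolean isOpen = new AtomicBoolean(true);
    // ... LinkedBlockingQueue field plus put/get as in the version above ...

    public boolean isOpen() {
        return isOpen.get();
    }

    public void open() {
        isOpen.set(true);
    }

    public void close() {
        isOpen.set(false);
    }
}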
I have a service which process a request from a user.
And this service calls another external back-end system (web services), but I need to execute those back-end web service calls in parallel. How would you do that? What is the best approach?
thanks in advance
-----edit
The back-end system can run requests in parallel; we use containers (Tomcat for development) and finally WebSphere for production.
So I'm already in one thread (the servlet's) and need to spawn two tasks and run them in parallel, as close together as possible.
I can imagine using either Quartz, or threads with executors, or leaving it to the servlet engine. What is the proper path to take in such a scenario?
You can use Threads to run the requests in parallel.
Depending on what you want to do, it may make sense to build on some existing technology like servlets, which do the threading for you.
The answer is to run the tasks in separate threads.
For something like this, I think you should be using a ThreadPoolExecutor with a bounded pool size rather than creating threads yourself.
The code would look something like this. (Please note that this is only a sketch. Check the javadocs for details, info on what the numbers mean, etc.)
// Create the executor ... this needs to be shared by the servlet threads.
Executor exec = new ThreadPoolExecutor(1, 10, 120, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(100), new ThreadPoolExecutor.CallerRunsPolicy());
// Prepare first task
final ArgType someArg = ...
FutureTask<ResultType> task = new FutureTask<ResultType>(
        new Callable<ResultType>() {
            public ResultType call() {
                // Call remote service using information in 'someArg'
                return someResult;
            }
        });
exec.execute(task);

// Repeat above for second task
...
exec.execute(task2);

// Wait for results
ResultType res = task.get(30, TimeUnit.SECONDS);
ResultType res2 = task2.get(30, TimeUnit.SECONDS);
The above does not attempt to handle exceptions, and you need to do something more sophisticated with the timeouts; e.g. keeping track of the overall request time and cancelling tasks if we run over time.
This is not a problem that Quartz is designed to solve. Quartz is a job scheduling system. You just have some tasks that need to be executed ASAP, possibly with the facility to cancel them.
Heiko is right that you can use Threads. Threads are complex beasts and need to be treated with care. The best solution is to use a standard library, such as java.util.concurrent. This will be a more robust way of managing parallel operations. There are performance benefits that come with this approach, such as thread pooling. If you can use such a solution, this would be the recommended way.
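As a hedged sketch of that standard-library route (the pool size, the 30-second timeout, and the callBackend helper are illustrative), ExecutorService.invokeAll runs both calls in parallel and waits for both:

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ParallelCalls {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            List<Callable<String>> calls = Arrays.asList(
                    () -> callBackend("serviceA"),   // hypothetical back-end call
                    () -> callBackend("serviceB"));

            // Runs both callables in parallel; waits at most 30 seconds overall.
            List<Future<String>> results = pool.invokeAll(calls, 30, TimeUnit.SECONDS);
            for (Future<String> f : results) {
                System.out.println(f.get());         // throws if a call failed or timed out
            }
        } finally {
            pool.shutdown();
        }
    }

    private static String callBackend(String name) {
        // placeholder for the real web-service call
        return name + " result";
    }
}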
If you want to do it yourself, here is a very simple way of executing a number of threads in parallel, but probably not very robust. You'll need to cope better with timeouts and destruction of threads, etc.
public class Threads {

    public class Task implements Runnable {
        private Object result;
        private String id;

        public Task(String id) {
            this.id = id;
        }

        public Object getResult() {
            return result;
        }

        public void run() {
            System.out.println("run id=" + id);
            try {
                // call web service
                Thread.sleep(10000);
                result = id + " more";
            } catch (InterruptedException e) {
                // TODO do something with the error
                throw new RuntimeException("caught InterruptedException", e);
            }
        }
    }

    public void runInParallel(Runnable runnable1, Runnable runnable2) {
        try {
            Thread t1 = new Thread(runnable1);
            Thread t2 = new Thread(runnable2);
            t1.start();
            t2.start();
            t1.join(30000);
            t2.join(30000);
        } catch (InterruptedException e) {
            // TODO do something nice with exception
            throw new RuntimeException("caught InterruptedException", e);
        }
    }

    public void foo() {
        Task task1 = new Task("1");
        Task task2 = new Task("2");
        runInParallel(task1, task2);
        System.out.println("task1 = " + task1.getResult());
        System.out.println("task2 = " + task2.getResult());
    }
}