How to retry an operation when it fails - java

I have a singleton client with the below contract:
public interface MQPublisher {
    void publish(String message) throws ClientConnectionException, ClientErrorException;
    void start() throws ClientException;
    void stop();
}
The class which uses this publisher is below:
public class MessagePublisher {

    @Autowired
    private MQPublisher publisher;

    private AtomicBoolean isPublisherRunning = new AtomicBoolean(false);

    public void startPublisher() {
        if (!isPublisherRunning.get()) {
            publisher.start();
            isPublisherRunning.compareAndSet(false, true);
        }
    }

    @Retry(RETRY_MSG_UPLOAD)
    public void sendMessage(String msg) {
        try {
            startPublisher();
            publisher.publish(msg); // when multiple requests fail with the same exception, what will happen??
        } catch (Exception e) {
            log.error("Exception while publishing message : {}", msg, e);
            publisher.stop();
            isPublisherRunning.compareAndSet(true, false);
            throw e;
        }
    }
}
We are using resilience4j's retry functionality to retry the sendMessage method. This works fine for a single request. Now consider the case where multiple requests are processed in parallel and all of them fail with an exception. These requests will be retried, and there is a chance that one thread starts the publisher while another stops it, so the exceptions keep occurring. How can this scenario be handled in a cleaner way?

It isn't clear why the whole publisher should be stopped on failure. Nevertheless, if there are real reasons for that, I would change the stop logic to use an atomic timestamp that is reset on each send, and stop the publisher only after at least 5 seconds (or whatever time a message needs to be sent successfully) have passed since the last send attempt.
Something like that:
@Slf4j
public class MessagePublisher {

    private static final int RETRY_MSG_UPLOAD = 10;

    @Autowired
    private MQPublisher publisher;

    private AtomicBoolean isPublisherRunning = new AtomicBoolean(false);
    private AtomicLong publishStart = new AtomicLong();

    public void startPublisher() {
        if (!isPublisherRunning.get()) {
            publisher.start();
            isPublisherRunning.compareAndSet(false, true);
        }
    }

    @Retryable(maxAttempts = RETRY_MSG_UPLOAD)
    public void sendMessage(String msg) throws InterruptedException {
        try {
            startPublisher();
            publishStart.set(System.nanoTime());
            publisher.publish(msg); // when multiple requests fail with the same exception, what will happen??
        } catch (Exception e) {
            log.error("Exception while publishing message : {}", msg, e);
            // wait until at least 5 seconds have passed since the last publish attempt
            while (System.nanoTime() < publishStart.get() + 5000000000L) {
                Thread.sleep(1000);
            }
            publisher.stop();
            isPublisherRunning.compareAndSet(true, false);
            throw e;
        }
    }
}
I think it is important to mention (as you just did) that this is a terrible design, and that such calculations should be done by the publisher implementer and not by the caller.
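To make that concrete, here is a rough sketch of what pushing the lifecycle handling into the publisher side could look like; the wrapper class is hypothetical and only the MQPublisher contract is taken from the question. Serialising start/publish/stop on a single monitor means a retrying thread can never stop the publisher while another thread is in the middle of starting it:
public class RestartingPublisher {

    private final MQPublisher delegate;
    private boolean running; // guarded by the monitor of "this"

    public RestartingPublisher(MQPublisher delegate) {
        this.delegate = delegate;
    }

    public synchronized void publish(String message) throws Exception {
        try {
            if (!running) {
                delegate.start();
                running = true;
            }
            delegate.publish(message);
        } catch (Exception e) {
            // Only the thread holding the monitor ever stops the publisher,
            // so a concurrent retry cannot stop it while another thread is starting it.
            delegate.stop();
            running = false;
            throw e;
        }
    }
}
This trades throughput for safety, since all sends are serialised on one lock; a finer-grained scheme (for example locking only around start/stop) would relax that if it matters.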

Related

java Catch exception inside an async callback

I have a callback which may throw a custom exception.
I'm trying to throw it, but it is not caught in the outer scope, nor does the compiler let me catch it; it says: "Exception is never thrown in the corresponding try block", even though it is.
This is my code:
public void openAsync(MessageAsyncCallback callback) {
    try {
        this.sendChannelOpen(this.getChannel(), getChannelOpenData().getFlags(), new MessageAsyncCallback() {
            @Override
            public void onComplete() throws NanoException {
                // INanoPacket message = transport.getMessageByClassName(AudioServerHandshake.class.getName());
                INanoPacket message = transport.getMessageByClassName(AudioClientHandshake.class.getName());
                Log.info("Got audio server handshake, trying to client-handshake it");
                sendClientHandshakeAsync((AudioServerHandshake) message, callback);
            }
        });
    } catch (NanoException e) {
        System.exit(-2);
    }
}
and it doesn't let me catch NanoException
EDIT:
inside transport.getMessageByClassName I throw a NanoException.
EDIT2:
this is the method that throws the exception:
public INanoPacket getMessageByClassName(String destClassName) throws NanoException {
    long startTime = System.currentTimeMillis(); // fetch starting time
    INanoPacket message = this.getMessageFromTCPQueue();
    while (!(message.getClass().getName().equals(destClassName)) && isRuntimeValid(startTime)) {
        this.insertToTCPQueue(message); // put message back in queue
        message = this.getMessageFromTCPQueue();
    }
    if (!(message.getClass().getName().equals(destClassName))) {
        // timeout...
        throw new NanoException("Couldn't find destination message: " + destClassName);
    }
    return message;
}
and I want to handle the exception not in openAsync itself, but in the method that calls openAsync.
Why? Because I'm handling messages coming from a remote device (which is why it's async), and I'm using some kind of timeout to wait for a specific message; if the message doesn't arrive I want to restart the whole program.
Please notice that in your code you are not invoking the onComplete method, you are defining it.
The exception would be thrown in a separate part of the code, possibly a separate Thread (as it seems to be async). Therefore the "Exception is never thrown in the corresponding try block" message is right, as the exception will never be thrown when invoking the this.sendChannelOpen(...) method.
Your try-catch statement needs to wrap the place where you invoke the onComplete method, as only by invoking onComplete can you expect a NanoException.
EDIT based on comments:
If you need to handle the exception thrown in getMessageByClassName, you can do it in the onComplete method and not rethrow it. If you want to handle it somewhere else, you'd need to provide the code of the sendChannelOpen method or the place where the callback is invoked.
EDIT2 (based on question edits):
Please see the code below as an example of how you can communicate between threads. I've used a CountDownLatch, but there are other classes in java.util.concurrent that you may find useful.
BTW, I'm not going into a discussion of why you want to restart the whole app on your NanoException, although there might be other options worth considering for recovering from that exception.
import java.util.concurrent.CountDownLatch;

class NanoException extends Exception {}

interface MessageAsyncCallback {
    void onComplete() throws NanoException;
}

public class AsyncApp {

    private static final CountDownLatch errorLatch = new CountDownLatch(1);

    public static void main(String[] args) {
        new AsyncApp().run();
    }

    void run() {
        sendChannelOpen("something", new MessageAsyncCallback() {
            @Override
            public void onComplete() throws NanoException {
                // the whole try-catch-sleep is not really needed, just to wait a bit before the exception is thrown
                try {
                    // not needed, just to wait a bit before exception is thrown
                    Thread.sleep(5000);
                } catch (InterruptedException e) {
                    throw new NanoException();
                }
                throw new NanoException();
            }
        });
        try {
            System.out.println("This is a main thread and we wait here, while the other thread executes...");
            errorLatch.await();
            System.out.println("Latch has reached 0, will now exit.");
            System.exit(-2);
        } catch (InterruptedException e) {
            System.out.println("Error in main thread.");
            System.exit(-1);
        }
    }

    void sendChannelOpen(String notImportant, MessageAsyncCallback troublesomeCallback) {
        runSomethingInSeparateThread(troublesomeCallback);
    }

    void runSomethingInSeparateThread(MessageAsyncCallback troublesomeCallback) {
        new Thread(() -> {
            try {
                troublesomeCallback.onComplete();
            } catch (NanoException e) {
                System.out.println("You can catch it here, and do system exit here or synchronize with main Thread as below");
                errorLatch.countDown();
            }
        }).start();
    }
}

netty proxy server transaction

I'm new to Netty and I would like to create a proxy server using Netty that does the following:
_ upon receiving data from a client, the proxy server runs some business logic that may modify the data, and then forwards it to the remote server; this business logic belongs to a transaction.
_ if the remote server returns a success response, the proxy server commits the transaction; otherwise the proxy server rolls back the transaction.
Data flow diagram
I have taken a look at the proxy example at https://netty.io/4.1/xref/io/netty/example/proxy/package-summary.html but I haven't figured out a good and simple way to implement the transaction logic mentioned above.
I should mention that I have created a separate thread pool to execute this business transaction to avoid blocking the NIO thread. My current solution actually uses 2 thread pools with the same number of threads: one in the frontendHandler and one in the backendHandler; the one at the front end uses wait() to wait for the response from the backend thread.
Here is my current code for the frontend handler:
@Override
public void channelActive(ChannelHandlerContext ctx) {
    final Channel inboundChannel = ctx.channel();
    // Start the connection attempt.
    Bootstrap b = new Bootstrap();
    b.group(inboundChannel.eventLoop())
            .channel(ctx.channel().getClass())
            .handler(new ServerBackendHandler(inboundChannel, response))
            .option(ChannelOption.AUTO_READ, false);
    ChannelFuture f = b.connect(remoteHost, remotePort);
    outboundChannel = f.channel();
    f.addListener(new ChannelFutureListener() {
        public void operationComplete(ChannelFuture future) throws Exception {
            if (future.isSuccess()) {
                // connection complete, start to read first data
                inboundChannel.read();
            } else {
                // Close the connection if the connection attempt has failed.
                inboundChannel.close();
            }
        }
    });
}

@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    if (outboundChannel.isActive()) {
        // Executing business logic within a different thread pool to avoid blocking asynchronous i/o operation
        frontendThreadPool.execute(new Runnable() {
            @Override
            public void run() {
                //System.out.println("Starting business logic operation at front_end for message :" + m);
                synchronized (response) {
                    // sleeping this thread to simulate business operation, insert business logic here.
                    int randomNum = ThreadLocalRandom.current().nextInt(1000, 2001);
                    try {
                        Thread.sleep(randomNum);
                    } catch (InterruptedException e1) {
                        e1.printStackTrace();
                    }
                    outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
                        public void operationComplete(ChannelFuture future) throws Exception {
                            if (future.isSuccess()) {
                                // was able to flush out data, start to read the next chunk
                                ctx.channel().read();
                            } else {
                                future.channel().close();
                            }
                        }
                    });
                    System.out.println("Blank response : " + response.getResponse());
                    // wait for response from remote server
                    try {
                        response.wait();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    System.out.println("Returned response from back end: " + response.getResponse());
                    // another piece of business logic here: if the remote server returned success then commit the transaction,
                    // if the remote server returned failure then throw an exception to roll back
                    // stop current thread since we are done with it
                    Thread.currentThread().interrupt();
                }
            }
        });
    }
}
and for the backendHandler :
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    ByteBuf m = (ByteBuf) msg;
    m = safeBuffer(m, ctx.alloc());
    String str = m.toString(Charset.forName("UTF-8"));
    backendThreadPool.execute(new Runnable() {
        @Override
        public void run() {
            //System.out.println("Starting business logic operation at back_end.");
            synchronized (response) {
                int randomNum = ThreadLocalRandom.current().nextInt(1000, 2001);
                try {
                    Thread.sleep(randomNum);
                } catch (InterruptedException e1) {
                    e1.printStackTrace();
                }
                response.setResponse(str);
                System.out.println("Finished at back_end.");
                response.notify();
                Thread.currentThread().interrupt();
            }
        }
    });
    String s = "Message returned from remote server through proxy : " + str;
    byte[] b = s.getBytes(Charset.forName("UTF-8"));
    defaultResponse.writeBytes(b);
    inboundChannel.writeAndFlush(defaultResponse).addListener(new ChannelFutureListener() {
        public void operationComplete(ChannelFuture future) throws Exception {
            if (future.isSuccess()) {
                ctx.channel().read();
            } else {
                future.channel().close();
            }
        }
    });
}
This solution is not at all optimized, since the server has to use 2 threads to execute one transaction. So I guess my questions are:
_ Can I (and if I can, should I) use Spring @Transactional on the channelRead method?
_ How can I implement the logic explained above in a simple way using Netty?
I have also used JMeter to test the code above, but it doesn't seem very stable: lots of requests didn't even get a response at around 2000 connections and 250 max threads in each thread pool.
Thanks in advance
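One direction worth sketching here (this is not from the original post; PromiseBasedFrontendHandler, commitTransaction and rollbackTransaction are placeholder names): instead of parking a pooled thread with wait()/notify(), the frontend handler can register a listener on a Netty Promise that the backend handler completes when the remote reply arrives, so no thread blocks while waiting:
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.concurrent.Future;
import io.netty.util.concurrent.GenericFutureListener;
import io.netty.util.concurrent.Promise;

public class PromiseBasedFrontendHandler extends ChannelInboundHandlerAdapter {

    private final Channel outboundChannel;

    public PromiseBasedFrontendHandler(Channel outboundChannel) {
        this.outboundChannel = outboundChannel;
    }

    @Override
    public void channelRead(final ChannelHandlerContext ctx, Object msg) {
        // Promise completed later by the backend handler when the remote reply arrives;
        // how the backend handler obtains this reference (constructor, channel attribute, ...)
        // is left open in this sketch.
        final Promise<String> remoteReply = ctx.executor().newPromise();

        outboundChannel.writeAndFlush(msg);

        remoteReply.addListener(new GenericFutureListener<Future<String>>() {
            @Override
            public void operationComplete(Future<String> done) {
                if (done.isSuccess()) {
                    commitTransaction(done.getNow());  // placeholder for the commit logic
                } else {
                    rollbackTransaction(done.cause()); // placeholder for the rollback logic
                }
                ctx.channel().read();
            }
        });
    }

    private void commitTransaction(String remoteResponse) { /* business logic */ }

    private void rollbackTransaction(Throwable cause) { /* business logic */ }
}
If the commit/rollback itself blocks (database work, for instance), it would still need to run on a separate executor rather than on the event-loop thread, but only one pool is needed and no thread sits idle in wait().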

How to make JUnit4 "Wait" for asynchronous job to finish before running tests

I am trying to write a test for my android app that communicates with a cloud service.
Theoretically the flow for the test is supposed to be this:
Send request to the server in a worker thread
Wait for the response from the server
Check the response returned by the server
I am trying to use Espresso's IdlingResource class to accomplish that but it is not working as expected. Here's what I have so far
My Test:
@RunWith(AndroidJUnit4.class)
public class CloudManagerTest {

    FirebaseOperationIdlingResource mIdlingResource;

    @Before
    public void setup() {
        mIdlingResource = new FirebaseOperationIdlingResource();
        Espresso.registerIdlingResources(mIdlingResource);
    }

    @Test
    public void testAsyncOperation() {
        Cloud.CLOUD_MANAGER.getDatabase().getCategories(new OperationResult<List<Category>>() {
            @Override
            public void onResult(boolean success, List<Category> result) {
                mIdlingResource.onOperationEnded();
                assertTrue(success);
                assertNotNull(result);
            }
        });
        mIdlingResource.onOperationStarted();
    }
}
The FirebaseOperationIdlingResource
public class FirebaseOperationIdlingResource implements IdlingResource {

    private boolean idleNow = true;
    private ResourceCallback callback;

    @Override
    public String getName() {
        return String.valueOf(System.currentTimeMillis());
    }

    public void onOperationStarted() {
        idleNow = false;
    }

    public void onOperationEnded() {
        idleNow = true;
        if (callback != null) {
            callback.onTransitionToIdle();
        }
    }

    @Override
    public boolean isIdleNow() {
        synchronized (this) {
            return idleNow;
        }
    }

    @Override
    public void registerIdleTransitionCallback(ResourceCallback callback) {
        this.callback = callback;
    }
}
When used with Espresso's view matchers the test executes properly: the activity waits and then checks the result.
However, plain JUnit4 assert methods are ignored and JUnit does not wait for my cloud operation to complete.
Is it possible that IdlingResource only works with Espresso methods? Or am I doing something wrong?
I use Awaitility for something like that.
It has a very good guide, here is the basic idea:
Wherever you need to wait:
await().until(newUserIsAdded());
elsewhere:
private Callable<Boolean> newUserIsAdded() {
    return new Callable<Boolean>() {
        public Boolean call() throws Exception {
            return userRepository.size() == 1; // The condition that must be fulfilled
        }
    };
}
I think this example is pretty similar to what you're doing, so save the result of your asynchronous operation to a field, and check it in the call() method.
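Applied to the test from the question, it could look roughly like this; Cloud.CLOUD_MANAGER, OperationResult and Category are taken from the question, the AtomicReference field is an assumption, and await() is the static import org.awaitility.Awaitility.await (com.jayway.awaitility in older versions):
@Test
public void testAsyncOperation() {
    final AtomicReference<List<Category>> result = new AtomicReference<>();

    Cloud.CLOUD_MANAGER.getDatabase().getCategories(new OperationResult<List<Category>>() {
        @Override
        public void onResult(boolean success, List<Category> categories) {
            if (success) {
                result.set(categories); // stash the async result in a field
            }
        }
    });

    // Awaitility polls this condition until the callback has fired (or its timeout expires)
    await().until(new Callable<Boolean>() {
        public Boolean call() {
            return result.get() != null;
        }
    });

    assertNotNull(result.get());
}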
JUnit will not wait for async tasks to complete. You can use a CountDownLatch to block the thread until you receive a response from the server or a timeout occurs.
A countdown latch is a simple yet elegant solution that does NOT need an external library. It also helps you focus on the actual logic to be tested rather than over-engineering the async wait.
void testBackgroundJob() {
    CountDownLatch latch = new CountDownLatch(1);

    // Do your async job
    Service.doSomething(new Callback() {
        @Override
        public void onResponse() {
            ACTUAL_RESULT = SUCCESS;
            latch.countDown(); // notify the count down latch
            // assertEquals(..
        }
    });

    // Wait for api response async
    try {
        latch.await();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }

    assertEquals(expectedResult, ACTUAL_RESULT);
}

GWT AsyncCallback cannot be executed

I have written a few async calls to run something on the server. They work fine, but some tasks take a long time to run and I want to know their status during execution. So I wrote a new async call that periodically sends a query to the server to check the task's status. However, for this callback I only get the failure message. I have checked the log, and this AsyncCallback is never executed. Can anyone give me a suggestion?
private BackendRemoteServiceAsync service = GWT.create(BackendRemoteService.class);

Timer timer = new Timer() {
    public void run() {
        try {
            ApplicationController controller = ApplicationController.getInstance();
            BackendRequest request = new BackendRequest(
                    Command.COMMAND_TASK_CHECKSTATUS,
                    controller.getSessionToken());
            request.setDomain(Domains.TASK);
            ...
            service.callBackend(request, new AsyncCallback<BackendRequest>() {
                public void onFailure(Throwable caught) {
                    task.setStatus("check status failure");
                }

                public void onSuccess(BackendRequest result) {
                    if (result.isValid()) {
                        task.setStatus("check status Success"); ...
                    } else
                        ;
                }
            });
        } catch (IllegalArgumentException ex) {
            Window.alert("IllegalArgumentException: " + ex.getMessage());
        } catch (Throwable t) {
            Window.alert(t.getMessage());
        } finally {
        }
    }
};
timer.scheduleRepeating(5000);
I have found the bug. In my method I must send some objects to the backend. These objects must implement Serializable.
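For completeness, the fix amounts to something like the following reduced, hypothetical version of the transferred class; GWT-RPC requires the type to implement Serializable (or GWT's IsSerializable) and to have a no-argument constructor:
import java.io.Serializable;

public class BackendRequest implements Serializable {

    private String command;
    private String sessionToken;

    // GWT-RPC needs a no-argument constructor for deserialization
    public BackendRequest() {
    }

    public BackendRequest(String command, String sessionToken) {
        this.command = command;
        this.sessionToken = sessionToken;
    }

    public String getCommand() {
        return command;
    }

    public String getSessionToken() {
        return sessionToken;
    }
}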

Request Queue implementation

I am currently doing a POC for an RPC layer. I have written the following code to throttle requests on the client side. Is this a good pattern to follow? I did not choose to queue the additional requests into a thread pool because I am interested only in synchronous invocations and I want the caller thread to block until it is woken up to execute the RPC request, and also because a thread pool seems like extra overhead due to the creation of additional threads.
I thought I could manage with the threads that are already issuing the requests. This works well, but the CPU usage is a bit unfair to other processes, because as soon as a call ends another call goes out. I load tested it with a huge number of requests, and memory and CPU usage are stable. Can I somehow use ArrayBlockingQueue with poll to achieve the same? Is poll() too much of a CPU hog?
Note: I recognise a few concurrency issues with the requestEnd method, where it might not wake up all registered items correctly, and I am thinking of a performant way to maintain atomicity there.
public class RequestQueue {

    // TODO The capacity should come from the consumer which in turn comes from
    // config
    private static final int _OUTBOUND_REQUEST_QUEUE_MAXSIZE = 40000;
    private static final int _CURRENT_REQUEST_QUEUE_INCREMENT = 1;
    private static final int _CURRENT_REQUEST_POOL_MAXSIZE = 40;

    private AtomicInteger currentRequestsCount = new AtomicInteger(0);
    private ConcurrentLinkedQueue<RequestWaitItem> outboundRequestQueue = null;

    public RequestQueue() {
        outboundRequestQueue = new ConcurrentLinkedQueue<RequestWaitItem>();
    }

    public void registerForFuture(RequestWaitItem waitObject) throws Exception {
        if (outboundRequestQueue.size() < _OUTBOUND_REQUEST_QUEUE_MAXSIZE) {
            outboundRequestQueue.add(waitObject);
        } else {
            throw new Exception("Queue is full" + outboundRequestQueue.size());
        }
    }

    public void requestStart() {
        currentRequestsCount.addAndGet(_CURRENT_REQUEST_QUEUE_INCREMENT);
    }

    // Verify correctness
    public RequestWaitItem requestEnd() {
        int currentRequests = currentRequestsCount.decrementAndGet();
        if (this.outboundRequestQueue.size() > 0 && currentRequests < _CURRENT_REQUEST_POOL_MAXSIZE) {
            try {
                RequestWaitItem waitObject = (RequestWaitItem) this.outboundRequestQueue.remove();
                waitObject.setRequestReady(true);
                synchronized (waitObject) {
                    waitObject.notify();
                }
                return waitObject;
            } catch (NoSuchElementException ex) {
                // Queue is empty so this is not an exception condition
            }
        }
        return null;
    }

    public boolean isFull() {
        return currentRequestsCount.get() > _CURRENT_REQUEST_POOL_MAXSIZE;
    }
}
public class RequestWaitItem {

    private boolean requestReady;
    private RpcDispatcher dispatcher;

    public RequestWaitItem() {
        this.requestReady = false;
    }

    public RequestWaitItem(RpcDispatcher dispatcher) {
        this();
        this.dispatcher = dispatcher;
    }

    public boolean isRequestReady() {
        return requestReady;
    }

    public void setRequestReady(boolean requestReady) {
        this.requestReady = requestReady;
    }

    public RpcDispatcher getDispatcher() {
        return dispatcher;
    }
}
if (requestQueue.isFull()) {
    try {
        RequestWaitItem waitObject = new RequestWaitItem(dispatcher);
        requestQueue.registerForFuture(waitObject);
        // Sync
        // Config and centralize this timeout
        synchronized (waitObject) {
            waitObject.wait(_REQUEST_QUEUE_TIMEOUT);
        }
        if (waitObject.isRequestReady() == false) {
            throw new Exception("Request Issuing timedout");
        }
        requestQueue.requestStart();
        try {
            return waitObject.getDispatcher().dispatchRpcRequest();
        } finally {
            requestQueue.requestEnd();
        }
    } catch (Exception ex) {
        // TODO define exception type
        throw ex;
    }
} else {
    requestQueue.requestStart();
    try {
        return dispatcher.dispatchRpcRequest();
    } finally {
        requestQueue.requestEnd();
    }
}
If I understood correctly, you want to throttle requests to the remote service by allowing at most 40 (say) concurrent requests. You can do this easily, without extra threads or services, with a semaphore.
Semaphore s = new Semaphore(40);
...
s.acquire();
try {
    dispatcher.dispatchRpcRequest(); // Or whatever your remote call looks like
} finally {
    s.release();
}
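If you also want to keep the timeout behaviour of your wait(_REQUEST_QUEUE_TIMEOUT) path, tryAcquire gives you that directly. A sketch, reusing _REQUEST_QUEUE_TIMEOUT and dispatcher from your code (java.util.concurrent.TimeUnit is needed, and the InterruptedException still has to be handled by the caller):
if (!s.tryAcquire(_REQUEST_QUEUE_TIMEOUT, TimeUnit.MILLISECONDS)) {
    throw new Exception("Request issuing timed out"); // same outcome as the wait() timeout
}
try {
    return dispatcher.dispatchRpcRequest();
} finally {
    s.release();
}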
Use ExecutorService service = Executors.newFixedThreadPool(10); for this.
This will create at most 10 threads, and further requests will wait in the queue. I guess this should help.
Fixed Thread Pool
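Since the original requirement was that the calling thread blocks, a sketch of how that still works with a fixed pool: submit the dispatch and block on the returned Future. dispatcher is the RpcDispatcher from the question, and the Object return type is an assumption:
ExecutorService service = Executors.newFixedThreadPool(10);

Future<Object> pending = service.submit(new Callable<Object>() {
    @Override
    public Object call() throws Exception {
        return dispatcher.dispatchRpcRequest();
    }
});
// The caller blocks here until one of the 10 pool threads has executed the RPC;
// requests beyond 10 wait in the executor's queue.
return pending.get();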
