Fire and forget for HTTP in Java

We're implementing our own analytics; for that, we've exposed a web service that will be invoked and will capture the data in our DB.
The problem is that, since this is analytics, we'd be making a lot of calls (for example on every page load, and after each JS and CSS load), so there will be very many such calls. I don't want the server to be loaded with lots of requests, or more precisely, requests left pending for a response, because the response we get back will hardly be of any use to us.
So is there any way to just fire the web service request and forget that I've fired it?
I understand that every HTTP request will have a response as well.
One thing that crossed my mind was: what if we set the request timeout to zero seconds? But I'm not sure that's the right way of doing this.
Please provide me with more suggestions.

You might find the following AsyncRequestDemo.java useful:
import java.net.URI;
import java.net.URISyntaxException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.http.client.fluent.Async;
import org.apache.http.client.fluent.Content;
import org.apache.http.client.fluent.Request;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.concurrent.FutureCallback;
/**
 * Following libraries have been used:
 *
 * 1) httpcore-4.4.5.jar
 * 2) httpclient-4.5.2.jar
 * 3) commons-logging-1.2.jar
 * 4) fluent-hc-4.5.2.jar
 *
 */
public class AsyncRequestDemo {
    public static void main(String[] args) throws Exception {
        URIBuilder urlBuilder = new URIBuilder()
                .setScheme("http")
                .setHost("stackoverflow.com")
                .setPath("/questions/38277471/fire-and-forget-for-http-in-java");

        final int nThreads = 3; // no. of threads in the pool
        final int timeout = 0;  // connect timeout in milliseconds (0 is interpreted by HttpClient as infinite)

        URI uri = null;
        try {
            uri = urlBuilder.build();
        } catch (URISyntaxException use) {
            use.printStackTrace();
        }

        ExecutorService executorService = Executors.newFixedThreadPool(nThreads);
        Async async = Async.newInstance().use(executorService);
        final Request request = Request.Get(uri).connectTimeout(timeout);

        Future<Content> future = async.execute(request, new FutureCallback<Content>() {
            @Override
            public void failed(final Exception e) {
                System.out.println("Request failed: " + request);
                System.exit(1);
            }

            @Override
            public void completed(final Content content) {
                System.out.println("Request completed: " + request);
                System.out.println(content.asString());
                System.exit(0);
            }

            @Override
            public void cancelled() {
            }
        });

        System.out.println("Request submitted");
    }
}
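If you truly want fire-and-forget, you can also discard the returned Future and make the callback a no-op. A minimal sketch built on the same fluent Async API as the demo above (it reuses that demo's executorService and uri variables):
// Fire-and-forget variant of the demo above: submit the request with a no-op
// callback, ignore the returned Future, and shut the executor down so the JVM
// can exit once already-submitted requests have completed.
Async async = Async.newInstance().use(executorService);
async.execute(Request.Get(uri), new FutureCallback<Content>() {
    public void completed(Content content) { /* response deliberately ignored */ }
    public void failed(Exception e) { /* optionally log; nothing to retry */ }
    public void cancelled() { }
});
executorService.shutdown(); // accept no new tasks; pending requests still finish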

I used this:
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

URL url = new URL(YOUR_URL_PATH); // YOUR_URL_PATH is the URL string of your service
ExecutorService executor = Executors.newFixedThreadPool(1);
Future<HttpResponse> response = executor.submit(new HttpRequest(url));
executor.shutdown();
And for HttpRequest and HttpResponse:
import java.io.InputStream;
import java.net.URL;
import java.util.concurrent.Callable;

public class HttpRequest implements Callable<HttpResponse> {
    private final URL url;

    public HttpRequest(URL url) {
        this.url = url;
    }

    @Override
    public HttpResponse call() throws Exception {
        return new HttpResponse(url.openStream());
    }
}

public class HttpResponse {
    private final InputStream body;

    public HttpResponse(InputStream body) {
        this.body = body;
    }

    public InputStream getBody() {
        return body;
    }
}
That's it.

Yes, you could initiate the request and break the connection without waiting for a response... But you probably don't want to do that. The overhead of the server-side having to deal with ungracefully broken connections will far outweigh letting it proceed with returning a response.
A better approach to solving this kind of performance problem in a Java servlet would be to shove all the data from the requests into a queue, respond immediately, and have one or more worker threads pick items off the queue for processing (such as writing them into a database).
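A minimal sketch of that pattern (the class name, queue size, and writeToDb placeholder are all made up for illustration; events are represented as plain strings here):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AnalyticsIngest {
    // Bounded queue so a burst of tracking calls cannot exhaust memory.
    private static final BlockingQueue<String> QUEUE = new LinkedBlockingQueue<>(10_000);

    static {
        // Single background worker that drains the queue and writes to the DB.
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    String event = QUEUE.take(); // blocks until an event arrives
                    writeToDb(event);            // hypothetical DB write
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, "analytics-writer");
        worker.setDaemon(true);
        worker.start();
    }

    // Called from the servlet's doGet/doPost: enqueue and return immediately.
    public static void record(String event) {
        QUEUE.offer(event); // if the queue is full, drop the event rather than block the request
    }

    private static void writeToDb(String event) {
        // ... insert into your analytics table ...
    }
}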

Related

How to check if a WebSocket connection is alive

I have a websocket connection to a server:
import javax.websocket.*;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

@ClientEndpoint
public class WebsocketExample {
    private Session userSession;

    private void connect() {
        try {
            WebSocketContainer container = ContainerProvider.getWebSocketContainer();
            container.connectToServer(this, new URI("someaddress"));
        } catch (DeploymentException | URISyntaxException | IOException e) {
            e.printStackTrace();
        }
    }

    @OnOpen
    public void onOpen(Session userSession) {
        // Set the user session
        this.userSession = userSession;
        System.out.println("Open");
    }

    @OnClose
    public void onClose(Session userSession, CloseReason reason) {
        this.userSession = null;
        System.out.println("Close");
    }

    @OnMessage
    public void onMessage(String message) {
        // Do something with the message
        System.out.println(message);
    }
}
After some time, it seems I don't receive any more messages from the server but the onClose method has not been called.
I would like to have a sort of timer that would at least log an error (and at best try to reconnect) if I did not receive any message during the last five minutes for instance. The timer would be reset when I receive a new message.
How can I do this?
Here is what I did. I switched from javax.websocket to Jetty and implemented a ping call:
import org.eclipse.jetty.util.ssl.SslContextFactory;
import org.eclipse.jetty.websocket.api.Session;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketClose;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketConnect;
import org.eclipse.jetty.websocket.api.annotations.OnWebSocketMessage;
import org.eclipse.jetty.websocket.api.annotations.WebSocket;
import org.eclipse.jetty.websocket.client.WebSocketClient;
import java.io.IOException;
import java.net.URI;
import java.nio.ByteBuffer;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
@WebSocket
public class WebsocketExample {
    private Session userSession;
    private final ScheduledExecutorService executorService = Executors.newScheduledThreadPool(1);

    private void connect() {
        try {
            SslContextFactory sslContextFactory = new SslContextFactory();
            WebSocketClient client = new WebSocketClient(sslContextFactory);
            client.start();
            client.connect(this, new URI("Someaddress"));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @OnWebSocketConnect
    public void onOpen(Session userSession) {
        // Set the user session
        this.userSession = userSession;
        System.out.println("Open");
        executorService.scheduleAtFixedRate(() -> {
            try {
                String data = "Ping";
                ByteBuffer payload = ByteBuffer.wrap(data.getBytes());
                userSession.getRemote().sendPing(payload);
            } catch (IOException e) {
                e.printStackTrace();
            }
        },
        5, 5, TimeUnit.MINUTES);
    }

    @OnWebSocketClose
    public void onClose(int code, String reason) {
        this.userSession = null;
        System.out.println("Close");
    }

    @OnWebSocketMessage
    public void onMessage(String message) {
        // Do something with the message
        System.out.println(message);
    }
}
Edit: This is just a ping example... I don't know whether all servers are supposed to answer with a pong...
Edit 2: Here is how to deal with the pong message. The trick was not to listen for String messages, but for Frame messages:
@OnWebSocketFrame
@SuppressWarnings("unused")
public void onFrame(Frame pong) {
    if (pong instanceof PongFrame) {
        lastPong = Instant.now();
    }
}
To manage server time out, I modified the scheduled task as follows:
scheduledFutures.add(executorService.scheduleAtFixedRate(() -> {
    try {
        String data = "Ping";
        ByteBuffer payload = ByteBuffer.wrap(data.getBytes());
        userSession.getRemote().sendPing(payload);
        if (lastPong != null
                && Instant.now().getEpochSecond() - lastPong.getEpochSecond() > 60) {
            userSession.close(1000, "Timeout manually closing dead connection.");
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
},
10, 10, TimeUnit.SECONDS));
... and handle the reconnection in the onClose method
You should work around this problem by implementing a heartbeat system in which one side sends a ping and the other side answers with a pong. Almost every WebSocket client and server (as far as I know) supports this feature internally. These ping/pong frames can be sent from both sides. I usually implement it on the server side because I usually know it has a better chance of staying alive than the clients (my opinion). If clients don't send back a pong for a long time, I know the connection is dead. On the client side, I check the same: if the server has not sent ping messages for a long time, I know the connection is dead.
If ping/pong is not implemented in the libraries you use (which I think javax.websocket does implement), you can make your own protocol for that.
The accepted answer uses a Jetty-specific API. There's a standard API for this:
to send a ping: session.getAsyncRemote().sendPing(data)
to send a pong (just a keep-alive, without expecting an answer): session.getAsyncRemote().sendPong(data)
to react to pongs, either use session.addMessageHandler(handler) where handler implements MessageHandler.Whole<PongMessage>, or create a method that is annotated with @OnMessage and has a PongMessage param:
@OnMessage
public void onMessage(PongMessage pong) {
    // check if the pong has the same payload as the ping that was sent, etc...
}
Periodic ping/keep-alive sending can be scheduled, for example, using a ScheduledExecutorService just as the accepted answer does, but proper care must be taken with synchronization: if session.getBasicRemote() is used, then all calls to the remote need to be synchronized. In the case of session.getAsyncRemote(), probably all containers except Tomcat handle synchronization automatically: see the discussion in this bug report.
Finally, it's important to cancel the pinging task (ScheduledFuture obtained from executor.scheduleAtFixedRate(...)) in onClose(...).
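For illustration, a minimal sketch of that standard-API approach (the class name, field names and intervals are made up; the javax.websocket calls are the ones listed above):
import javax.websocket.*;
import java.nio.ByteBuffer;
import java.util.concurrent.*;

@ClientEndpoint
public class PingingEndpoint {
    private static final ScheduledExecutorService SCHEDULER = Executors.newScheduledThreadPool(1);

    private ScheduledFuture<?> pingTask;

    @OnOpen
    public void onOpen(Session session) {
        // Schedule a ping every 30 seconds; getAsyncRemote() avoids blocking this thread.
        pingTask = SCHEDULER.scheduleAtFixedRate(() -> {
            try {
                session.getAsyncRemote().sendPing(ByteBuffer.wrap("ping".getBytes()));
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, 30, 30, TimeUnit.SECONDS);
    }

    @OnMessage
    public void onPong(PongMessage pong) {
        // Record the pong, e.g. store Instant.now() and compare it in the pinging task.
    }

    @OnClose
    public void onClose(Session session, CloseReason reason) {
        // Cancel the pinging task so the scheduler does not keep pinging a dead session.
        if (pingTask != null) {
            pingTask.cancel(false);
        }
    }
}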
I've developed a simple WebsocketPingerService to ease things up (available in Maven Central). Create an instance and store it somewhere as a static var:
public class WhicheverClassInYourApp {
    public static WebsocketPingerService pingerService = new WebsocketPingerService();
    // more code here...
}
You can configure the ping interval, ping size, failure limit after which sessions should be closed, etc. by passing arguments to the constructor.
After that, register your endpoints for pinging in onOpen(...) and deregister them in onClose(...):
@ClientEndpoint // or @ServerEndpoint -> pinging can be done from both ends
public class WebsocketExample {
    private Session userSession;

    @OnOpen
    public void onOpen(Session userSession) {
        this.userSession = userSession;
        WhicheverClassInYourApp.pingerService.addConnection(userSession);
    }

    @OnClose
    public void onClose(Session userSession, CloseReason reason) {
        WhicheverClassInYourApp.pingerService.removeConnection(userSession);
    }

    // other methods here
}

How to tune HttpClient performance when crawling a large number of small files?

I just want to crawl some Hacker News stories; here is my code:
import org.apache.http.client.fluent.Request;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.logging.Logger;
import java.util.stream.IntStream;
public class HackCrawler {
    private static String getUrlResponse(String url) throws IOException {
        return Request.Get(url).execute().returnContent().asString();
    }

    private static String crawlItem(int id) {
        try {
            String json = getUrlResponse(String.format("https://hacker-news.firebaseio.com/v0/item/%d.json", id));
            if (json.contains("\"type\":\"story\"")) {
                return json;
            }
        } catch (IOException e) {
            System.out.println("crawl " + id + " failed");
        }
        return "";
    }

    public static void main(String[] args) throws FileNotFoundException {
        Logger logger = Logger.getLogger("main");
        PrintWriter printWriter = new PrintWriter("hack.json");
        for (int i = 0; i < 10000; i++) {
            logger.info("batch " + i);
            IntStream.range(12530671 - (i + 1) * 100, 12530671 - i * 100)
                    .parallel()
                    .mapToObj(HackCrawler::crawlItem).filter(x -> !x.equals(""))
                    .forEach(printWriter::println);
        }
        printWriter.close(); // flush buffered output so no crawled items are lost
    }
}
At the moment it takes about 3 seconds to crawl 100 items (one batch).
I found that using multithreading via parallel() gives a speed-up (about 5 times), but I have no idea how to optimise it further.
Could anyone give some suggestions about that?
To achieve what Fayaz means, I would use the Jetty HTTP client's asynchronous features (https://webtide.com/the-new-jetty-9-http-client/):
// Assumes an org.eclipse.jetty.client.HttpClient instance that has already been created and started.
httpClient.newRequest("http://domain.com/path")
        .send(new Response.CompleteListener()
        {
            @Override
            public void onComplete(Result result)
            {
                // Your logic here
            }
        });
This client internally uses Java NIO to listen for incoming responses with a single thread per connection. It then dispatches content to worker threads which are not involved in any blocking I/O operation.
You can try playing with the maximum number of connections per destination (a destination is basically a host):
http://download.eclipse.org/jetty/9.3.11.v20160721/apidocs/org/eclipse/jetty/client/HttpClient.html#setMaxConnectionsPerDestination-int-
Since you are heavily loading a single server, this should be quite high.
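For example (a minimal sketch; 64 is an arbitrary value chosen just to illustrate raising the limit, so tune it for your workload):
import org.eclipse.jetty.client.HttpClient;

// Create and start a client with a higher per-host connection limit.
HttpClient httpClient = new HttpClient();
httpClient.setMaxConnectionsPerDestination(64); // the setter from the javadoc linked above
httpClient.start(); // throws Exception, so call this from a method that declares or handles it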
The following steps should get you started.
Use a single thread to get the responses from the site, as this is basically an I/O operation.
Put these responses into a queue (read about the various implementations of BlockingQueue).
Now you can have multiple threads pick up these responses and process them as you wish.
Basically, you will have a single producer thread that gets the responses from the site and multiple consumers that process them; a sketch follows.
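A minimal sketch of that producer/consumer layout (the queue size, thread count, ID range and the fetch/process bodies are placeholders; shutdown handling is elided for brevity):
import java.util.concurrent.*;

public class CrawlerPipeline {
    private static final BlockingQueue<String> RESPONSES = new LinkedBlockingQueue<>(1000);

    public static void main(String[] args) {
        // Single producer: fetches responses and puts them on the queue.
        Thread producer = new Thread(() -> {
            for (int id = 12530671; id > 12520671; id--) {
                try {
                    RESPONSES.put(fetch(id)); // blocks if consumers fall behind
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }, "producer");
        producer.start();

        // Several consumers: take responses off the queue and process them.
        ExecutorService consumers = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            consumers.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        process(RESPONSES.take());
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
    }

    private static String fetch(int id) { return ""; /* placeholder for the HTTP call */ }

    private static void process(String json) { /* placeholder: filter stories, write to file, ... */ }
}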

Android NetworkOnMainThreadException seems unsolvable

I'm working on an app that communicates with a web server.
I'm using OkHttp for sending HTTP requests and getting responses.
For some reason I get a NetworkOnMainThreadException when the request takes too long.
None of the solutions I found have worked out.
Here is the code, which works until receiving the data takes too long.
HttpGetRunnable.java
import java.io.IOException;

import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class HttpGetRunnable implements Runnable {
    private Request request;
    private Response response;

    public HttpGetRunnable(String route) {
        String url = "http://10.0.2.2:8080" + route;
        request = new Request.Builder()
                .url(url)
                .build();
    }

    @Override
    public void run() {
        try {
            OkHttpClient client = new OkHttpClient();
            response = client.newCall(request).execute();
        } catch (IOException e) {
            e.printStackTrace();
            System.err.println(e.getMessage());
        }
    }

    public Response getResponse() {
        return response;
    }
}
Usage
try {
    HttpGetRunnable httpGet = new HttpGetRunnable("/timesheet/" + user.getId());
    Thread thread = new Thread(httpGet);
    thread.start();
    thread.join();
    Response response = httpGet.getResponse();
    String jsonString = response.body().string();
    // ^ throws the exception on this line when the request takes too long
} catch (Exception e) {
    e.printStackTrace(System.out);
}
Error
04-16 23:06:53.042 31145-31145/com.example.jim.app I/System.out: android.os.NetworkOnMainThreadException
... (there is no "Caused by" in the stack trace)
What I tried so far:
Using a Callable (the same way as the Runnable, returning a Response)
Using a Callable returning the response body String
Executing the Runnable in a single-thread ExecutorService pool
Submitting the Callable to a single-thread ExecutorService pool, then future.get()
Using OkHttp's asynchronous call with the @Override methods onFailure/onResponse
I just can't figure it out..
What am I doing wrong here?
Thanks
P.S.
I'm using this version: com.squareup.okhttp3:okhttp:3.2.0
What am I doing wrong here?
You are blocking the main application thread, via join(). Get rid of that.
For example, here is a sample app that uses OkHttp3 to request the latest Stack Overflow android questions. I have a dedicated LoadThread that handles the HTTP I/O:
/***
Copyright (c) 2013-2016 CommonsWare, LLC
Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy
of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required
by applicable law or agreed to in writing, software distributed under the
License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS
OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
From _The Busy Coder's Guide to Android Development_
https://commonsware.com/Android
*/
package com.commonsware.android.okhttp;
import android.util.Log;
import com.google.gson.Gson;
import java.io.BufferedReader;
import java.io.Reader;
import de.greenrobot.event.EventBus;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;
class LoadThread extends Thread {
    static final String SO_URL=
            "https://api.stackexchange.com/2.1/questions?"
                    + "order=desc&sort=creation&site=stackoverflow&tagged=android";

    @Override
    public void run() {
        try {
            OkHttpClient client=new OkHttpClient();
            Request request=new Request.Builder().url(SO_URL).build();
            Response response=client.newCall(request).execute();

            if (response.isSuccessful()) {
                Reader in=response.body().charStream();
                BufferedReader reader=new BufferedReader(in);
                SOQuestions questions=
                        new Gson().fromJson(reader, SOQuestions.class);

                reader.close();

                EventBus.getDefault().post(new QuestionsLoadedEvent(questions));
            }
            else {
                Log.e(getClass().getSimpleName(), response.toString());
            }
        }
        catch (Exception e) {
            Log.e(getClass().getSimpleName(), "Exception parsing JSON", e);
        }
    }
}
But I just start that thread in onCreate() of a fragment. I do not then try to block the main application thread via join(). Instead, I use greenrobot's EventBus to find out when the data is loaded, and then use it.
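For completeness, a minimal sketch of what that consuming side can look like (greenrobot EventBus 2.x style, matching the de.greenrobot.event import above; SOQuestions and QuestionsLoadedEvent are the sample's own classes, and the fragment lifecycle methods are simplified):
// In the fragment (or activity) that needs the data:
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    EventBus.getDefault().register(this); // start listening for load events
    new LoadThread().start();             // kick off the HTTP I/O off the main thread
}

// EventBus 2.x delivers this on the main thread because of the method name suffix.
public void onEventMainThread(QuestionsLoadedEvent event) {
    // the loaded SOQuestions are available from the event here; update the UI freely
}

@Override
public void onDestroy() {
    EventBus.getDefault().unregister(this); // avoid leaking the fragment
    super.onDestroy();
}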

Play 2.5: get response body in custom http action

I'm trying to create a custom http action (https://playframework.com/documentation/2.5.x/JavaActionsComposition) to log request and response bodies with Play 2.5.0 Java. This is what I've got so far:
public class Log extends play.mvc.Action.Simple {
    @Override
    public CompletionStage<Result> call(Http.Context ctx) {
        CompletionStage<Result> response = delegate.call(ctx);
        // request body is fine
        System.out.println(ctx.request().body().asText());
        // how to get the response body string here while also not sabotaging the http response flow of the framework?
        // my guess is it should be somehow possible to access it below?
        response.thenApply(r -> {
            // ???
            return null;
        });
        return response;
    }
}
Logging is often considered a cross-cutting feature. In such cases the preferred way to do this in Play is to use Filters:
The filter API is intended for cross cutting concerns that are applied indiscriminately to all routes. For example, here are some common use cases for filters:
Logging/metrics collection
GZIP encoding
Security headers
This works for me:
import java.util.concurrent.CompletionStage;
import java.util.function.Function;
import javax.inject.Inject;
import akka.stream.*;
import play.Logger;
import play.mvc.*;
public class LoggingFilter extends Filter {

    Materializer mat;

    @Inject
    public LoggingFilter(Materializer mat) {
        super(mat);
        this.mat = mat;
    }

    @Override
    public CompletionStage<Result> apply(
            Function<Http.RequestHeader, CompletionStage<Result>> nextFilter,
            Http.RequestHeader requestHeader) {

        long startTime = System.currentTimeMillis();
        return nextFilter.apply(requestHeader).thenApply(result -> {
            long endTime = System.currentTimeMillis();
            long requestTime = endTime - startTime;

            Logger.info("{} {} took {}ms and returned {}",
                    requestHeader.method(), requestHeader.uri(), requestTime, result.status());

            akka.util.ByteString body = play.core.j.JavaResultExtractor.getBody(result, 10000L, mat);
            Logger.info(body.decodeString("UTF-8"));

            return result.withHeader("Request-Time", "" + requestTime);
        });
    }
}
What is it doing?
First, this creates a new Filter, which can be used along with other filters you may have. In order to get the body of the response we actually use the nextFilter: once we have the response, we can then get the body.
As of Play 2.5, Akka Streams are the weapon of choice. This means that once you use the JavaResultExtractor, you will get a ByteString, which you then have to decode in order to get the real string underneath.
Please keep in mind that there should be no problem in copying this logic into the Action that you are creating. I just chose the option with a Filter for the reason stated at the top of my post.

Acting on 403s in Spring-AMQP

I need to guarantee consumer exclusivity with a variable number of consumer threads in different runtimes consuming from a fixed number of queues (where the number of queues is much greater than that of consumers).
My general thought was that I'd have each consumer thread attempt to establish an exclusive connection to clear a queue, and, if it went a given period without receiving a message from that queue, redirect it to another queue.
Even if a queue is temporarily cleared, it's liable to receive messages again in the future, so that queue cannot simply be forgotten about -- instead, a consumer should return to it later. To achieve that rotation, I thought I'd use a queue-of-queues. The danger would be losing references to queues within the queue-of-queues when consumers fail; I thought that seemed solvable with acknowledgements, as follows.
Essentially, each consumer thread waits to get a message (A) with a reference to a queue (1) from the queue-of-queues; message (A) remains initially unacknowledged. The consumer happily attempts to clear queue (1), and once queue (1) remains empty for a given amount of time, the consumer requests a new queue name from the queue-of-queues. Upon receiving a second message (B) and a reference to a new queue (2), the reference to queue (1) is put back on the end of the queue-of-queues as a new message (C), and finally message (A) is acknowledged.
In fact, the queue-of-queue's delivered-at-least-and-probably-only-once guarantee almost gets me exclusivity for the normal queues (1, 2) here, but in order to make sure I absolutely don't lose references to queues, I need to republish queue (1) as message (C) before I acknowledge message (A). That means if a server fails after republishing queue (1) as message (C) but before acknowledging (A), two references to queue (1) could exist in the queue-of-queues, and exclusivity is no longer guaranteed.
Therefore, I'd need to use AMQP's exclusive consumers flags, which are great, but as it stands, I'd also like to NOT republish a reference to a queue if I received a "403 ACCESS REFUSED" for it, so that duplicate references do not proliferate.
However, I'm using Spring's excellent AMQP library, and I don't see how I can hook in with an error handler. The setErrorHandler method exposed on the container doesn't seem to be called for the "403 ACCESS REFUSED" errors.
Is there a way that I can act on the 403s with the frameworks I'm currently using? Alternatively, is there another way I can achieve the guarantees that I need? My code is below.
The "monitoring service":
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.Period;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.amqp.AmqpAuthenticationException;
import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Optional;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;
public class ListenerMonitoringService {
private static final Logger log = LoggerFactory.getLogger(ListenerMonitoringService.class);
private static final Period EXPIRATION_PERIOD = Period.millis(5000);
private static final long MONTIORING_POLL_INTERVAL = 5000;
private static final long MONITORING_INITIAL_DELAY = 5000;
private final Supplier<AbstractMessageListenerContainer> messageListenerContainerSupplier;
private final QueueCoordinator queueCoordinator;
private final ScheduledExecutorService executorService;
private final Collection<Record> records;
public ListenerMonitoringService(Supplier<AbstractMessageListenerContainer> messageListenerContainerSupplier,
QueueCoordinator queueCoordinator, ScheduledExecutorService executorService) {
this.messageListenerContainerSupplier = messageListenerContainerSupplier;
this.queueCoordinator = queueCoordinator;
this.executorService = executorService;
records = new ArrayList<>();
}
public void registerAndStart(MessageListener messageListener) {
Record record = new Record(messageListenerContainerSupplier.get());
// wrap with listener that updates record
record.container.setMessageListener((MessageListener) (m -> {
log.trace("{} consumed a message from {}", record.container, Arrays.toString(record.container.getQueueNames()));
record.freshen(DateTime.now(DateTimeZone.UTC));
messageListener.onMessage(m);
}));
record.container.setErrorHandler(e -> {
log.error("{} received an {}", record.container, e);
// this doesn't get called for 403s
});
// initial start up
executorService.execute(() -> {
String queueName = queueCoordinator.getQueueName();
log.debug("Received queue name {}", queueName);
record.container.setQueueNames(queueName);
log.debug("Starting container {}", record.container);
record.container.start();
// background monitoring thread
executorService.scheduleAtFixedRate(() -> {
log.debug("Checking container {}", record.container);
if (record.isStale(DateTime.now(DateTimeZone.UTC))) {
String newQueue = queueCoordinator.getQueueName();
String oldQueue = record.container.getQueueNames()[0];
log.debug("Switching queues for {} from {} to {}", record.container, oldQueue, newQueue);
record.container.setQueueNames(newQueue);
queueCoordinator.markSuccessful(queueName);
}
}, MONITORING_INITIAL_DELAY, MONTIORING_POLL_INTERVAL, TimeUnit.MILLISECONDS);
});
records.add(record);
}
private static class Record {
private static final DateTime DATE_TIME_MIN = new DateTime(0);
private final AbstractMessageListenerContainer container;
private Optional<DateTime> lastListened;
private Record(AbstractMessageListenerContainer container) {
this.container = container;
lastListened = Optional.empty();
}
public synchronized boolean isStale(DateTime now) {
log.trace("Comparing now {} to {} for {}", now, lastListened, container);
return lastListened.orElse(DATE_TIME_MIN).plus(EXPIRATION_PERIOD).isBefore(now);
}
public synchronized void freshen(DateTime now) {
log.trace("Updating last listened to {} for {}", now, container);
lastListened = Optional.of(now);
}
}
}
The "queue-of-queues" handler:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Envelope;
import com.rabbitmq.client.GetResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.Connection;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
private class MetaQueueCoordinator implements QueueCoordinator {
private static final Logger log = LoggerFactory.getLogger(MetaQueueCoordinator.class);
private final Channel channel;
private final Map<String, Envelope> envelopeMap;
private final RabbitTemplate rabbitTemplate;
public MetaQueueCoordinator(ConnectionFactory connectionFactory) {
Connection connection = connectionFactory.createConnection();
channel = connection.createChannel(false);
envelopeMap = new ConcurrentHashMap<>();
rabbitTemplate = new RabbitTemplate(connectionFactory);
rabbitTemplate.setExchange("");
rabbitTemplate.setRoutingKey("queue_of_queues");
}
@Override
public String getQueueName() {
GetResponse response;
try {
response = channel.basicGet("queue_of_queues", false);
} catch (IOException e) {
log.error("Unable to get from channel");
throw new RuntimeException(e);
}
String queueName = new String(response.getBody());
envelopeMap.put(queueName, response.getEnvelope());
return queueName;
}
@Override
public void markSuccessful(String queueName) {
Envelope envelope = envelopeMap.remove(queueName);
if (envelope == null) {
return;
}
log.debug("Putting {} at the end of the line...", queueName);
rabbitTemplate.convertAndSend(queueName);
try {
channel.basicAck(envelope.getDeliveryTag(), false);
} catch (IOException e) {
log.error("Unable to acknowledge {}", queueName);
}
}
@Override
public void markUnsuccessful(String queueName) {
Envelope envelope = envelopeMap.remove(queueName);
if (envelope == null) {
return;
}
try {
channel.basicAck(envelope.getDeliveryTag(), false);
} catch (IOException e) {
log.error("Unable to acknowledge {}", queueName);
}
}
}
The ErrorHandler is for handling errors during message delivery, not setting up the listener itself.
The upcoming 1.5 release publishes application events when exceptions such as this occur.
It will be released later this summer; this feature is currently only available in the 1.5.0.BUILD-SNAPSHOT; a release candidate should be available in the next few weeks.
The project page shows how to get the snapshot from the snapshots repo.
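A minimal sketch of how those application events could be consumed once 1.5 is available (hedged: if I recall correctly, the event type introduced in the 1.5 line is ListenerContainerConsumerFailedEvent, so check the release notes for the exact class and its accessors):
import org.springframework.amqp.rabbit.listener.ListenerContainerConsumerFailedEvent;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

// Reacts to consumer failures (such as an exclusive-consumer 403) published by the container.
@Component
public class ConsumerFailureListener implements ApplicationListener<ListenerContainerConsumerFailedEvent> {

    @Override
    public void onApplicationEvent(ListenerContainerConsumerFailedEvent event) {
        // Inspect the event's reason and throwable here to detect the 403, then decide
        // whether to republish the queue reference or move the consumer to another queue.
    }
}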
