I have implemented a RESTful web interface using Jersey for sending messages received from an internal JMS publisher to external clients via HTTP. I have managed to get a test message out to a Java client, but the thread throws a NullPointerException before completing the write() call, closing the connection and preventing further communication.
Here is my resource class:
@GET
@Path("/stream_data")
@Produces(SseFeature.SERVER_SENT_EVENTS)
public EventOutput getServerSentEvents(@Context ServletContext context){
final EventOutput eventOutput = new EventOutput();
new Thread( new ObserverThread(eventOutput, (MService) context.getAttribute("instance")) ).start();
return eventOutput;
}
And here is my thread's run method:
public class ObserverThread implements Observer, Runnable {
//constructor sets eventOutput & mService objects
//mService notifyObservers() called when JMS message received
//text added to Thread's message queue to await sending to client
public void run() {
try {
String message = "{'symbol':'test','entryType'='0','price'='test'}";
Thread.sleep(1000);
OutboundEvent.Builder builder = new OutboundEvent.Builder();
builder.mediaType(MediaType.APPLICATION_JSON_TYPE);
builder.data(String.class, message);
OutboundEvent event = builder.build();
eventOutput.write(event);
System.out.println(">>>>>>SSE CLIENT HAS BEEN REGISTERED!");
mService.addObserver(this);
while(!eventOutput.isClosed()){
if(!updatesQ.isEmpty()){
pushUpdate(updatesQ.dequeue());
}
}
System.out.println("<<<<<<<SSE CLIENT HAS BEEN DEREGISTERED!");
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (InterruptedException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
Here is my client code:
Client client = ClientBuilder.newBuilder().register(SseFeature.class).build();
WebTarget target = client.target(url);
EventInput eventInput = target.request().get(EventInput.class);
try {
while (!eventInput.isClosed()) {
eventInput.setChunkType(MediaType.WILDCARD_TYPE);
final InboundEvent inboundEvent = eventInput.read();
if (inboundEvent != null) {
String theString = inboundEvent.readData();
System.out.println(theString + "\n");
}
}
} catch (Exception e) {
e.printStackTrace();
}
I am getting the "{'symbol':'test','entryType'='0','price'='test'}" test message printed to the client console, but the server then prints a NullPointerException before it can print the ">>>>SSE Client registered" message. This closes the connection so the client exits the while loop and stops listening for updates.
I converted the project to a web app 3.0 facet in order to add an async-supported tag to the web.xml, but I am receiving the same NullPointerException. I am inclined to think that it is caused by the servlet ending the Request/Response objects once the first message is returned, as the stack trace suggests:
Exception in thread "Thread-20" java.lang.NullPointerException
at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:741)
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:434)
at org.apache.coyote.http11.InternalOutputBuffer.flush(InternalOutputBuffer.java:299)
at org.apache.coyote.http11.Http11Processor.action(Http11Processor.java:981)
at org.apache.coyote.Response.action(Response.java:183)
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:314)
at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:288)
at org.apache.catalina.connector.CoyoteOutputStream.flush(CoyoteOutputStream.java:98)
at org.glassfish.jersey.message.internal.CommittingOutputStream.flush(CommittingOutputStream.java:292)
at org.glassfish.jersey.server.ChunkedOutput$1.call(ChunkedOutput.java:241)
at org.glassfish.jersey.server.ChunkedOutput$1.call(ChunkedOutput.java:192)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:242)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:345)
at org.glassfish.jersey.server.ChunkedOutput.flushQueue(ChunkedOutput.java:192)
at org.glassfish.jersey.server.ChunkedOutput.write(ChunkedOutput.java:182)
at com.bpc.services.service.ObserverThread.run(MarketObserverThread.java:32)
at java.lang.Thread.run(Thread.java:745)
<<<<<<<SSE CLIENT HAS BEEN DEREGISTERED!
I have attempted to test an SseBroadcaster as well. In this case I am not seeing any exceptions thrown, but the connection is closed once the first message has been received, leading me to believe it is something in the servlet forcing the connection to close. Can anyone advise me on how to debug this on the server side?
I had a similar issue with what seems to be a long-standing bug in Jersey's @Context injection for ExecutorService instances. In their current implementation of Sse (version 2.27),
class JerseySse implements Sse {
@Context
private ExecutorService executorService;
@Override
public OutboundSseEvent.Builder newEventBuilder() {
return new OutboundEvent.Builder();
}
@Override
public SseBroadcaster newBroadcaster() {
return new JerseySseBroadcaster(executorService);
}
}
the executorService field is never initialized, so the JerseySseBroadcaster raises a NullPointerException in my case. I worked around the bug by explicitly triggering the injection.
If you're using HK2 for dependency injection (Jersey's default), a rough sketch of a solution to the question above could look similar to the following:
@Singleton
@Path("...")
public class JmsPublisher {
private Sse sse;
private SseBroadcaster broadcaster;
private final ExecutorService executor;
private final BlockingQueue<String> jmsMessageQueue;
...
@Context
public void setSse(Sse sse, ServiceLocator locator) {
locator.inject(sse); // Inject sse.executorService
this.sse = sse;
this.broadcaster = sse.newBroadcaster();
}
...
@GET
@Path("/stream_data")
@Produces(MediaType.SERVER_SENT_EVENTS)
public void register(SseEventSink eventSink) {
broadcaster.register(eventSink);
}
...
@PostConstruct
private void postConstruct() {
executor.submit(() -> {
try {
while(true) {
String message = jmsMessageQueue.take();
broadcaster.broadcast(sse.newEventBuilder()
.mediaType(MediaType.APPLICATION_JSON_TYPE)
.data(String.class, message)
.build());
}
} catch(InterruptedException e) {
Thread.currentThread().interrupt();
}
});
}
@PreDestroy
private void preDestroy() {
executor.shutdownNow();
}
}
Related
I have a Play application with a ConsumerService that I want to start on startup and have it listen to a particular RabbitMQ queue. In Play 2.5, my understanding is that this is now done via a Guice module, so I have a Module.java class in my app's root directory that looks like this:
public class Module extends AbstractModule {
@Override
protected void configure() {
bind(ConsumerService.class).asEagerSingleton();
}
}
Here is my ConsumerService class:
@Singleton
public class ConsumerService {
private static final String TASK_QUEUE_NAME = "queue";
private final JPAApi jpaApi;
@Inject
public ConsumerService(JPAApi api) throws Exception {
this.jpaApi = api;
pullMessages();
}
@Transactional
public void pullMessages() throws Exception {
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
final Connection connection = factory.newConnection();
final Channel channel = connection.createChannel();
channel.queueDeclare(TASK_QUEUE_NAME, true, false, false, null);
Logger.info(" [*] Waiting for messagez. To exit press CTRL+C");
channel.basicQos(1);
final Consumer consumer = new DefaultConsumer(channel) {
@Override
public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
try {
JPA.em();
} catch (Exception e) {
System.out.println("JPA.em() failed: " + e.getMessage());
}
try {
jpaApi.em();
} catch (Exception e) {
System.out.println("jpaApi.em() failed: " + e.getMessage());
}
}
};
channel.basicConsume(TASK_QUEUE_NAME, false, consumer);
}
}
Clearly binding this service as an eager singleton has its downsides, as attempting to get an EntityManager via either of these methods throws an exception. My understanding is that this is because the class is bound/loaded before Play has initialized the EntityManager factory. Basically, the application hasn't started yet.
Forgive me, but even though I've worked with JPA for years, I find this very confusing and I'm not sure what my best approach should be to work around the basic issue: start up a "listener" that ultimately needs to do some DB work when it consumes a message.
I'm curious if there's a way I can put the "handleDelivery" method in a transaction, or redesign my initialization flow such that I can call/inject the jpaApi cleanly.
Also, is there a better way to start up this consumer in Play 2.5 than the way I'm doing it here? I'm having trouble finding one.
I've looked into the JPAApi.withTransaction documentation, but I'm hoping there's a better way that I'm not aware of.
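Based on that documentation, the closest thing I've sketched out so far looks roughly like this (untested; it assumes withTransaction binds the EntityManager that JPA.em() returns for the duration of the block):
final Consumer consumer = new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) throws IOException {
        // Untested sketch: defer the DB work into jpaApi.withTransaction so the
        // EntityManager is created and bound only when a message actually arrives.
        jpaApi.withTransaction(() -> {
            EntityManager em = JPA.em(); // should be the EM bound by withTransaction
            // ... persist/merge entities built from 'body' here
        });
        getChannel().basicAck(envelope.getDeliveryTag(), false);
    }
};
channel.basicConsume(TASK_QUEUE_NAME, false, consumer);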
I believe this question is not a duplicate of Server sent event with Jersey: EventOutput is not closed after client drops, but probably related to Jersey Server-Sent Events - write to broken connection does not throw exception.
In chapter 15.4.2 of the Jersey documentation, the SseBroadcaster is described:
However, the SseBroadcaster internally identifies and handles also client disconnects. When a client closes the connection the broadcaster detects this and removes the stale connection from the internal collection of the registered EventOutputs as well as it frees all the server-side resources associated with the stale connection.
I cannot confirm this. In the following testcase, I see the subclassed SseBroadcaster's onClose() method never being called: not when the EventInput is closed, and not when another message is broadcasted.
public class NotificationsResourceTest extends JerseyTest {
final static Logger log = LoggerFactory.getLogger(NotificationsResourceTest.class);
final static CountingSseBroadcaster broadcaster = new CountingSseBroadcaster();
public static class CountingSseBroadcaster extends SseBroadcaster {
final AtomicInteger connectionCounter = new AtomicInteger(0);
public EventOutput createAndAttachEventOutput() {
EventOutput output = new EventOutput();
if (add(output)) {
int cons = connectionCounter.incrementAndGet();
log.debug("Active connection count: "+ cons);
}
return output;
}
@Override
public void onClose(final ChunkedOutput<OutboundEvent> output) {
int cons = connectionCounter.decrementAndGet();
log.debug("A connection has been closed. Active connection count: "+ cons);
}
@Override
public void onException(final ChunkedOutput<OutboundEvent> chunkedOutput, final Exception exception) {
log.trace("An exception has been detected", exception);
}
public int getConnectionCount() {
return connectionCounter.get();
}
}
@Path("notifications")
public static class NotificationsResource {
@GET
@Produces(SseFeature.SERVER_SENT_EVENTS)
public EventOutput subscribe() {
log.debug("New stream subscription");
EventOutput eventOutput = broadcaster.createAndAttachEventOutput();
return eventOutput;
}
}
@Override
protected Application configure() {
ResourceConfig config = new ResourceConfig(NotificationsResource.class);
config.register(SseFeature.class);
return config;
}
@Test
public void test() throws Exception {
// check that there are no connections
assertEquals(0, broadcaster.getConnectionCount());
// connect subscriber
log.info("Connecting subscriber");
EventInput eventInput = target("notifications").request().get(EventInput.class);
assertFalse(eventInput.isClosed());
// now there are connections
assertEquals(1, broadcaster.getConnectionCount());
// push data
log.info("Broadcasting data");
String payload = UUID.randomUUID().toString();
OutboundEvent chunk = new OutboundEvent.Builder()
.mediaType(MediaType.TEXT_PLAIN_TYPE)
.name("message")
.data(payload)
.build();
broadcaster.broadcast(chunk);
// read data
log.info("Reading data");
InboundEvent inboundEvent = eventInput.read();
assertNotNull(inboundEvent);
assertEquals(payload, inboundEvent.readData());
// close subscription
log.info("Closing subscription");
eventInput.close();
assertTrue(eventInput.isClosed());
// at this point, the subscriber has disconnected itself,
// but Jersey doesn't realise that
assertEquals(1, broadcaster.getConnectionCount());
// wait, give TCP a chance to close the connection
log.debug("Sleeping for some time");
Thread.sleep(10000);
// push data again, this should really flush out the not-connected client
log.info("Broadcasting data again");
broadcaster.broadcast(chunk);
Thread.sleep(100);
// there is no subscriber anymore
assertEquals(0, broadcaster.getConnectionCount()); // FAILS!
}
}
Maybe JerseyTest is not a good way to test this. In a less ... clinical setup, where a JavaScript EventSource is used, I see onClose() being called, but only after a message is broadcasted on the previously closed connection.
What am I doing wrong?
Why doesn't SseBroadcaster detect the closing of the connection by the client?
Follow-up
I've found JERSEY-2833 which was rejected with Works as designed:
According to the Jersey Documentation in SSE chapter (https://jersey.java.net/documentation/latest/sse.html) in 15.4.1 it's mentioned that Jersey does not explicitly close the connection, it's the responsibility of the resource method or the client.
What does that mean exactly? Should the resource enforce a timeout and kill all active and closed-by-client connections?
In the documentation of the constructor org.glassfish.jersey.media.sse.SseBroadcaster.SseBroadcaster(), it says:
Creates a new instance. If this constructor is called by a subclass, it assumes the the reason for the subclass to exist is to implement onClose(org.glassfish.jersey.server.ChunkedOutput) and onException(org.glassfish.jersey.server.ChunkedOutput, Exception) methods, so it adds the newly created instance as the listener. To avoid this, subclasses may call SseBroadcaster(Class) passing their class as an argument.
So you should not rely on the default constructor; instead, implement your own constructor that invokes super with your class:
public CountingSseBroadcaster(){
super(CountingSseBroadcaster.class);
}
I believe it might be better to set a timeout on your resource and kill only that connection, for example:
@Path("notifications")
public static class NotificationsResource {
@GET
@Produces(SseFeature.SERVER_SENT_EVENTS)
public EventOutput subscribe() {
log.debug("New stream subscription");
final EventOutput eventOutput = broadcaster.createAndAttachEventOutput();
new Timer().schedule( new TimerTask()
{
@Override public void run()
{
eventOutput.close();
}
}, 10000); // 10 second timeout
return eventOutput;
}
}
I'm wondering if, by subclassing, you may have changed the behaviour.
@Override
public void onClose(final ChunkedOutput<OutboundEvent> output) {
int cons = connectionCounter.decrementAndGet();
log.debug("A connection has been closed. Active connection count: "+ cons);
}
Here you don't close the ChunkedOutput, so it won't release the connection. Could this be the problem?
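Something like this is what I have in mind (just a sketch on my part, I have not verified that the extra close() is required):
@Override
public void onClose(final ChunkedOutput<OutboundEvent> output) {
    try {
        output.close(); // explicitly release the resources of the underlying connection
    } catch (IOException e) {
        log.trace("Closing the chunked output failed", e);
    }
    int cons = connectionCounter.decrementAndGet();
    log.debug("A connection has been closed. Active connection count: " + cons);
}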
I have an EJB timer (EJB 2.1) that uses a bean-managed transaction.
The timer code calls a business method that deals with two resources in a single transaction: a database and an MQ queue manager.
The application server is WebSphere Application Server 7 (WAS). In order to ensure consistency across the two resources (database and queue manager), we have enabled two-phase commit support in WAS. This is to ensure that if any exception occurs during a database operation, the message posted to the queue is rolled back along with the database, and vice versa.
Below is the flow explained:
When a timeout occurs in the timer code, startProcess() in DIRECTProcessor, which is our business method, is called. This method has a try block containing a call to createPostXMLMessage() in the same class, which in turn calls postMessage() in the PostMsg class.
The issue is that when we encounter a database exception in createPostXMLMessage(), the message posted earlier does not roll back, although the database part is rolled back successfully. Please help.
In ejb-jar.xml
<session id="Transmit">
<ejb-name>Transmit</ejb-name>
<home>com.TransmitHome</home>
<remote>com.Transmit</remote>
<ejb-class>com.TransmitBean</ejb-class>
<session-type>Stateless</session-type>
<transaction-type>Bean</transaction-type>
</session>
public class TransmitBean implements javax.ejb.SessionBean, javax.ejb.TimedObject {
public void ejbTimeout(Timer arg0) {
....
new DIRECTProcessor().startProcess(mySessionCtx);
}
}
public class DIRECTProcessor {
public String startProcess(javax.ejb.SessionContext mySessionCtx) {
....
UserTransaction ut= null;
ut = mySessionCtx.getUserTransaction();
try {
ut.begin();
createPostXMLMessage(interfaceObj, btch_id, dpId, errInd);
ut.commit();
}
catch (Exception e) {
ut.rollback();
ut=null;
}
}
public void createPostXMLMessage(ArrayList<InstrInterface> arr_instrObj, String batchId, String dpId,int errInd) throws Exception {
...
PostMsg pm = new PostMsg();
try {
pm.postMessage( q_name, final_msg.toString());
// database update operations using jdbc
}
catch (Exception e) {
throw e;
}
}
}
public class PostMsg {
public String postMessage(String qName, String message) throws Exception {
QueueConnectionFactory qcf = null;
Queue que = null;
QueueSession qSess = null;
QueueConnection qConn = null;
QueueSender qSender = null;
que = ServiceLocator.getInstance().getQ(qName);
try {
qConn = (QueueConnection) qcf.createQueueConnection(
Constants.QCONN_USER, Constants.QCONN_PSWD);
qSess = qConn.createQueueSession(true, Session.AUTO_ACKNOWLEDGE);
qSender = qSess.createSender(que);
TextMessage txt = qSess.createTextMessage();
txt.setJMSDestination(que);
txt.setText(message);
qSender.send(txt);
} catch (Exception e) {
retval = Constants.ERROR;
e.printStackTrace();
throw e;
} finally {
closeQSender(qSender);
closeQSession(qSess);
closeQConn(qConn);
}
return retval;
}
}
Here is my DataClientFactory class.
public class DataClientFactory {
public static IClient getInstance() {
return ClientHolder.INSTANCE;
}
private static class ClientHolder {
private static final DataClient INSTANCE = new DataClient();
static {
new DataScheduler().startScheduleTask();
}
}
}
Here is my DataClient class.
public class DataClient implements IClient {
private ExecutorService service = Executors.newFixedThreadPool(15);
private RestTemplate restTemplate = new RestTemplate();
// for initialization purpose
public DataClient() {
try {
new DataScheduler().callDataService();
} catch (Exception ex) { // swallow the exception
// log exception
}
}
@Override
public DataResponse getDataSync(DataKey dataKeys) {
DataResponse response = null;
try {
Future<DataResponse> handle = getDataAsync(dataKeys);
response = handle.get(dataKeys.getTimeout(), TimeUnit.MILLISECONDS);
} catch (TimeoutException e) {
// log error
response = new DataResponse(null, DataErrorEnum.CLIENT_TIMEOUT, DataStatusEnum.ERROR);
} catch (Exception e) {
// log error
response = new DataResponse(null, DataErrorEnum.ERROR_CLIENT, DataStatusEnum.ERROR);
}
return response;
}
@Override
public Future<DataResponse> getDataAsync(DataKey dataKeys) {
Future<DataResponse> future = null;
try {
DataTask dataTask = new DataTask(dataKeys, restTemplate);
future = service.submit(dataTask);
} catch (Exception ex) {
// log error
}
return future;
}
}
I get my client instance from the above factory as shown below, and then call the getDataSync method, passing a DataKey object. The DataKey object contains the userId and timeout values. After this, the call goes to my DataTask class's call method as soon as handle.get is called.
IClient dataClient = DataClientFactory.getInstance();
long userid = 1234l;
long timeout_ms = 500;
DataKey keys = new DataKey.Builder().setUserId(userid).setTimeout(timeout_ms)
.remoteFlag(false).secondaryFlag(true).build();
// call getDataSync method
DataResponse dataResponse = dataClient.getDataSync(keys);
System.out.println(dataResponse);
Here is my DataTask class which has all the logic -
public class DataTask implements Callable<DataResponse> {
private DataKey dataKeys;
private RestTemplate restTemplate;
public DataTask(DataKey dataKeys, RestTemplate restTemplate) {
this.restTemplate = restTemplate;
this.dataKeys = dataKeys;
}
@Override
public DataResponse call() {
DataResponse dataResponse = null;
ResponseEntity<String> response = null;
int serialId = getSerialIdFromUserId();
boolean remoteFlag = dataKeys.isRemoteFlag();
boolean secondaryFlag = dataKeys.isSecondaryFlag();
List<String> hostnames = new LinkedList<String>();
Mappings mappings = ClientData.getMappings(dataKeys.whichFlow());
String localPrimaryHostIPAdress = null;
String remotePrimaryHostIPAdress = null;
String localSecondaryHostIPAdress = null;
String remoteSecondaryHostIPAdress = null;
// use the mappings object to resolve the above addresses using serialId, and based on
// the remoteFlag and secondaryFlag populate the hostnames linked list
if (remoteFlag && secondaryFlag) {
hostnames.add(localPrimaryHostIPAdress);
hostnames.add(localSecondaryHostIPAdress);
hostnames.add(remotePrimaryHostIPAdress);
hostnames.add(remoteSecondaryHostIPAdress);
} else if (remoteFlag && !secondaryFlag) {
hostnames.add(localPrimaryHostIPAdress);
hostnames.add(remotePrimaryHostIPAdress);
} else if (!remoteFlag && !secondaryFlag) {
hostnames.add(localPrimaryHostIPAdress);
} else if (!remoteFlag && secondaryFlag) {
hostnames.add(localPrimaryHostIPAdress);
hostnames.add(localSecondaryHostIPAdress);
}
for (String hostname : hostnames) {
// If host name is null or host name is in local block host list, skip sending request to this host
if (hostname == null || ClientData.isHostBlocked(hostname)) {
continue;
}
try {
String url = generateURL(hostname);
response = restTemplate.exchange(url, HttpMethod.GET, dataKeys.getEntity(), String.class);
// make DataResponse
break;
} catch (HttpClientErrorException ex) {
// make DataResponse
return dataResponse;
} catch (HttpServerErrorException ex) {
// make DataResponse
return dataResponse;
} catch (RestClientException ex) {
// If it comes here, then it means some of the servers are down.
// Add this server to block host list
ClientData.blockHost(hostname);
// log an error
} catch (Exception ex) {
// If it comes here, then it means something weird has happened.
// log an error
// make DataResponse
}
}
return dataResponse;
}
private String generateURL(final String hostIPAdress) {
// make an url
}
private int getSerialIdFromUserId() {
// get the id
}
}
Now, based on the userId, I get the serialId and then build the list of hostnames I am supposed to call, depending on which flags were passed. Then I iterate over the hostnames list and make a call to the servers. Let's say I have four hostnames (A, B, C, D) in the linked list; I will call A first and, if I get the data back, return the DataResponse. But if A is down, I need to add A to the block list instantly so that no other thread makes a call to hostname A, and then call hostname B, get the data back, and return the response (or repeat the same thing if B is also down).
I also have a background thread that runs every 10 minutes. It is started as soon as we get the client instance from the factory, and it parses another service's URL to get the list of blocked hostnames that we are not supposed to call. Since it only runs every 10 minutes, any server that goes down will appear in that list only after the next run; likewise, once A comes back up, the list is updated again on the following run.
Here is my background thread code, DataScheduler:
public class DataScheduler {
private RestTemplate restTemplate = new RestTemplate();
private static final Gson gson = new Gson();
private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
public void startScheduleTask() {
scheduler.scheduleAtFixedRate(new Runnable() {
public void run() {
try {
callDataService();
} catch (Exception ex) {
// log an error
}
}
}, 0, 10L, TimeUnit.MINUTES);
}
public void callDataService() throws Exception {
String url = null;
// execute the url and get the responseMap from it as a string
parseResponse(responseMap);
}
private void parseResponse(Map<FlowsEnum, String> responses) throws Exception {
// .. some code here to calculate partitionMappings
// block list of hostnames
Map<String, List<String>> coloExceptionList = gson.fromJson(response.split("blocklist=")[1], Map.class);
for (Map.Entry<String, List<String>> entry : coloExceptionList.entrySet()) {
for (String hosts : entry.getValue()) {
blockList.add(hosts);
}
}
if (update) {
ClientData.setAllMappings(partitionMappings);
}
// update the block list of hostnames
if (!DataUtils.isEmpty(responses)) {
ClientData.replaceBlockedHosts(blockList);
}
}
}
And here is my ClientData class which holds all the information for block list of hostnames and partitionMappings details (which is use to get the list of valid hostnames).
public class ClientData {
private static final AtomicReference<ConcurrentHashMap<String, String>> blockedHosts = new AtomicReference<ConcurrentHashMap<String, String>>(
new ConcurrentHashMap<String, String>());
// some code here to set the partitionMappings by using CountDownLatch
// so that read is blocked for first time reads
public static boolean isHostBlocked(String hostName) {
return blockedHosts.get().contains(hostName);
}
public static void blockHost(String hostName) {
blockedHosts.get().put(hostName, hostName);
}
public static void replaceBlockedHosts(List<String> blockList) {
ConcurrentHashMap<String, String> newBlockedHosts = new ConcurrentHashMap<>();
for (String hostName : blockList) {
newBlockedHosts.put(hostName, hostName);
}
blockedHosts.set(newBlockedHosts);
}
}
Problem statement:
When all the servers are up (A, B, C, D as an example), the above code works fine and I don't see any TimeoutException from handle.get at all. But if, say, one server (A) that I was supposed to call from the main thread goes down, then I start seeing a lot of TimeoutExceptions; by a lot, I mean a huge number of client timeouts.
I am not sure why this is happening. In principle it shouldn't, right? As soon as the server goes down it gets added to the block list, and then no thread should call that server; instead it should try another server in the list. So it should be a smooth process, and as soon as those servers are back up, the block list gets updated by the background thread and calls can resume.
Is there any problem in my code above that could cause this? Any suggestions would be a great help.
In general, what I am trying to do is: build a hostnames list from the mappings object depending on which user id is passed, then call the first hostname and get the response back. If that hostname is down, add it to the block list and call the second hostname in the list.
Here is the stack trace I am seeing:
java.util.concurrent.TimeoutException
	at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:258)
	at java.util.concurrent.FutureTask.get(FutureTask.java:119)
	at com.host.client.DataClient.getDataSync(DataClient.java:20)
	at ...
NOTE: Multiple userIds can map to the same server, meaning server A can be resolved for multiple userIds.
In DataClient class, at the below line:
public class DataClient implements IClient {
----code code---
Future<DataResponse> handle = getDataAsync(dataKeys);
//BELOW LINE IS PROBLEM
response = handle.get(dataKeys.getTimeout(), TimeUnit.MILLISECONDS); // <--- HERE
} catch (TimeoutException e) {
// log error
response = new DataResponse(null, DataErrorEnum.CLIENT_TIMEOUT, DataStatusEnum.ERROR);
} catch (Exception e) {
// log error
response = new DataResponse(null, DataErrorEnum.ERROR_CLIENT, DataStatusEnum.ERROR);
----code code-----
You have assigned a timeout to handle.get(...), which is timing out before your REST connections can respond. The REST connections themselves may or may not be timing out, but since the future's get method times out before the thread finishes executing, blocking hosts has no visible effect, even though the code inside DataTask's call method may be performing as expected. Hope this helps.
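If you want the host blocking to actually take effect, one thing you could try (just a sketch, assuming Spring's SimpleClientHttpRequestFactory is acceptable in your setup) is to give the RestTemplate HTTP-level timeouts that are shorter than the timeout you pass to handle.get(...), so a dead host fails the REST call with a RestClientException and gets blocked, instead of the future timing out first:
// Sketch: HTTP-level timeouts below dataKeys.getTimeout(), so a down host surfaces as a
// RestClientException (which triggers ClientData.blockHost) before future.get(...) gives up.
SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
requestFactory.setConnectTimeout(200); // milliseconds; example value, keep it below the future timeout
requestFactory.setReadTimeout(300);    // milliseconds; example value
RestTemplate restTemplate = new RestTemplate(requestFactory);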
You asked for suggestions, so here are some:
1.) Unexpected return value
The method may unexpectedly return false:
if (ClientData.isHostBlocked(hostname)) //this may return always false! please check
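A small change like the following (just a sketch) would make the intent unambiguous, since ConcurrentHashMap.contains(Object) checks values rather than keys:
public static boolean isHostBlocked(String hostName) {
    // containsKey states the intent directly instead of relying on contains(Object),
    // which tests values, not keys
    return blockedHosts.get().containsKey(hostName);
}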
2.) Exception-Handling
Are you really sure that a RestClientException occurs?
Only when this exception occurs will the host be added to the blocked list!
Your posted code seems to ignore logging (it is commented out!)
...catch (HttpClientErrorException ex) {
// make DataResponse
return dataResponse;
} catch (HttpServerErrorException ex) {
// make DataResponse
return dataResponse;
} catch (RestClientException ex) {
// If it comes here, then it means some of the servers are down.
// Add this server to block host list
ClientData.blockHost(hostname);
// log an error
} catch (Exception ex) {
// If it comes here, then it means some weird things has happened.
// log an error
// make DataResponse
}
I'm a newbie to Netty.
I'm looking for some samples. (Preferably, but not necessarily, using the Camel Netty component and Spring.)
Specifically a sample Netty app that consumes TCP messages.
Also how can I write a JUnit test that can test this netty app?
Thanks,
Dar
I assume you still want to integrate with Camel. I would first look at the Camel documentation. After that frustrates you, you will need to start experimenting. I have one example where I created a Camel Processor as a Netty server. The Netty component works such that a From endpoint is a server which consumes and a To endpoint is a client which produces. I needed a To endpoint that was a server, and the component did not support that, so I simply implemented a Camel Processor as a Spring bean that started a Netty server when it was initialized. The JBoss Netty documentation and samples are very good though; it is worthwhile to step through them.
Here is my slimmed-down example. It is a server that sends a message to all the clients that are connected. If you are new to Netty, I highly suggest going through the samples I linked to above:
public class NettyServer implements Processor {
private final ChannelGroup channelGroup = new DefaultChannelGroup();
private NioServerSocketChannelFactory serverSocketChannelFactory = null;
private final ExecutorService executor = Executors.newCachedThreadPool();
private String listenAddress = "0.0.0.0"; // overridden by spring-osgi value
private int listenPort = 51501; // overridden by spring-osgi value
@Override
public void process(Exchange exchange) throws Exception {
byte[] bytes = (byte[]) exchange.getIn().getBody();
// send over the wire
sendMessage(bytes);
}
public synchronized void sendMessage(byte[] message) {
ChannelBuffer cb = ChannelBuffers.copiedBuffer(message);
//writes to all clients connected.
this.channelGroup.write(cb);
}
private class NettyServerHandler extends SimpleChannelUpstreamHandler {
@Override
public void channelOpen(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
super.channelOpen(ctx, e);
//add client to the group.
NettyServer.this.channelGroup.add(e.getChannel());
}
// Perform an automatic recon.
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
super.channelConnected(ctx, e);
// do something here when a client connects.
}
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
// Do something when a message is received...
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
// Log the exception.
}
}
private class PublishSocketServerPipelineFactory implements ChannelPipelineFactory {
@Override
public ChannelPipeline getPipeline() throws Exception {
// need to set the handler.
return Channels.pipeline(new NettyServerHandler());
}
}
// called by spring to start the server
public void init() {
try {
this.serverSocketChannelFactory = new NioServerSocketChannelFactory(this.executor, this.executor);
final ServerBootstrap serverBootstrap = new ServerBootstrap(this.serverSocketChannelFactory);
serverBootstrap.setPipelineFactory(new PublishSocketServerPipelineFactory());
serverBootstrap.setOption("reuseAddress", true);
final InetSocketAddress listenSocketAddress = new InetSocketAddress(this.listenAddress, this.listenPort);
this.channelGroup.add(serverBootstrap.bind(listenSocketAddress));
} catch (Exception e) {
}
}
// called by spring to shut down the server.
public void destroy() {
try {
this.channelGroup.close();
this.serverSocketChannelFactory.releaseExternalResources();
this.executor.shutdown();
} catch (Exception e) {
}
}
// injected by spring
public void setListenAddress(String listenAddress) {
this.listenAddress = listenAddress;
}
// injected by spring
public void setListenPort(int listenPort) {
this.listenPort = listenPort;
}
}
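As for the JUnit part of your question, here is a rough sketch of how a server like this could be exercised from a test with a plain blocking socket (address, port and payload are just example values, and no Camel is involved):
import static org.junit.Assert.assertArrayEquals;

import java.io.DataInputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

import org.junit.Test;

public class NettyServerTest {

    @Test
    public void connectedClientReceivesBroadcast() throws Exception {
        NettyServer server = new NettyServer();
        server.setListenAddress("127.0.0.1");
        server.setListenPort(51501);
        server.init(); // normally called by Spring

        try (Socket client = new Socket("127.0.0.1", 51501)) {
            // crude, but gives the server a moment to add the accepted channel to its group
            Thread.sleep(200);

            byte[] expected = "hello".getBytes(StandardCharsets.UTF_8);
            server.sendMessage(expected);

            byte[] actual = new byte[expected.length];
            new DataInputStream(client.getInputStream()).readFully(actual);
            assertArrayEquals(expected, actual);
        } finally {
            server.destroy(); // normally called by Spring
        }
    }
}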
The Camel release has a lot of examples, but no simple one for the netty component.
The netty component can be used to set up a socket server that consumes messages and produces responses back to the client. After some time searching the web, I created my own tutorial using the netty component in Camel as a simple Camel-Netty hello world example that shows:
Using the netty component in Camel to receive a TCP message
Using a POJO class to process the received message and create a response
Sending the response back to the client.
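To give a flavour of it, the heart of such a route looks roughly like this (a sketch; the port and the textline/sync options are illustrative, not the only way to configure the endpoint):
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;

public class HelloWorldRouteBuilder extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // Consume line-based TCP messages; with sync=true the body set by the
        // processor is written back to the connected client as the response.
        from("netty:tcp://0.0.0.0:5150?textline=true&sync=true")
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    String request = exchange.getIn().getBody(String.class);
                    exchange.getIn().setBody("Hello " + request);
                }
            });
    }
}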