I believe this question is not a duplicate of Server sent event with Jersey: EventOutput is not closed after client drops, but probably related to Jersey Server-Sent Events - write to broken connection does not throw exception.
In chapter 15.4.2 of the Jersey documentation, the SseBroadcaster is described:
However, the SseBroadcaster internally identifies and handles also client disconnects. When a client closes the connection the broadcaster detects this and removes the stale connection from the internal collection of the registered EventOutputs as well as it frees all the server-side resources associated with the stale connection.
I cannot confirm this. In the following test case, I see the subclassed SseBroadcaster's onClose() method never being called: not when the EventInput is closed, and not when another message is broadcast.
public class NotificationsResourceTest extends JerseyTest {
final static Logger log = LoggerFactory.getLogger(NotificationsResourceTest.class);
final static CountingSseBroadcaster broadcaster = new CountingSseBroadcaster();
public static class CountingSseBroadcaster extends SseBroadcaster {
final AtomicInteger connectionCounter = new AtomicInteger(0);
public EventOutput createAndAttachEventOutput() {
EventOutput output = new EventOutput();
if (add(output)) {
int cons = connectionCounter.incrementAndGet();
log.debug("Active connection count: "+ cons);
}
return output;
}
@Override
public void onClose(final ChunkedOutput<OutboundEvent> output) {
int cons = connectionCounter.decrementAndGet();
log.debug("A connection has been closed. Active connection count: "+ cons);
}
@Override
public void onException(final ChunkedOutput<OutboundEvent> chunkedOutput, final Exception exception) {
log.trace("An exception has been detected", exception);
}
public int getConnectionCount() {
return connectionCounter.get();
}
}
@Path("notifications")
public static class NotificationsResource {
@GET
@Produces(SseFeature.SERVER_SENT_EVENTS)
public EventOutput subscribe() {
log.debug("New stream subscription");
EventOutput eventOutput = broadcaster.createAndAttachEventOutput();
return eventOutput;
}
}
@Override
protected Application configure() {
ResourceConfig config = new ResourceConfig(NotificationsResource.class);
config.register(SseFeature.class);
return config;
}
@Test
public void test() throws Exception {
// check that there are no connections
assertEquals(0, broadcaster.getConnectionCount());
// connect subscriber
log.info("Connecting subscriber");
EventInput eventInput = target("notifications").request().get(EventInput.class);
assertFalse(eventInput.isClosed());
// now there are connections
assertEquals(1, broadcaster.getConnectionCount());
// push data
log.info("Broadcasting data");
String payload = UUID.randomUUID().toString();
OutboundEvent chunk = new OutboundEvent.Builder()
.mediaType(MediaType.TEXT_PLAIN_TYPE)
.name("message")
.data(payload)
.build();
broadcaster.broadcast(chunk);
// read data
log.info("Reading data");
InboundEvent inboundEvent = eventInput.read();
assertNotNull(inboundEvent);
assertEquals(payload, inboundEvent.readData());
// close subscription
log.info("Closing subscription");
eventInput.close();
assertTrue(eventInput.isClosed());
// at this point, the subscriber has disconnected itself,
// but Jersey doesn't realise that
assertEquals(1, broadcaster.getConnectionCount());
// wait, give TCP a chance to close the connection
log.debug("Sleeping for some time");
Thread.sleep(10000);
// push data again, this should really flush out the not-connected client
log.info("Broadcasting data again");
broadcaster.broadcast(chunk);
Thread.sleep(100);
// there is no subscriber anymore
assertEquals(0, broadcaster.getConnectionCount()); // FAILS!
}
}
Maybe JerseyTest is not a good way to test this. In a less ... clinical setup, where a JavaScript EventSource is used, I see onClose() being called, but only after a message is broadcasted on the previously closed connection.
What am I doing wrong?
Why doesn't SseBroadcaster detect the closing of the connection by the client?
Follow-up
I've found JERSEY-2833, which was rejected as "Works as designed":
According to the Jersey Documentation in SSE chapter (https://jersey.java.net/documentation/latest/sse.html) in 15.4.1 it's mentioned that Jersey does not explicitly close the connection, it's the responsibility of the resource method or the client.
What does that mean exactly? Should the resource enforce a timeout and kill all active and closed-by-client connections?
In the documentation of the constructor org.glassfish.jersey.media.sse.SseBroadcaster.SseBroadcaster(), it says:
Creates a new instance. If this constructor is called by a subclass, it assumes that the reason for the subclass to exist is to implement onClose(org.glassfish.jersey.server.ChunkedOutput) and onException(org.glassfish.jersey.server.ChunkedOutput, Exception) methods, so it adds the newly created instance as the listener. To avoid this, subclasses may call SseBroadcaster(Class) passing their class as an argument.
So you should not use the default constructor; instead, implement your own constructor that invokes super with your class:
public CountingSseBroadcaster(){
super(CountingSseBroadcaster.class);
}
I believe it might be better to set a timeout on your resource and kill only that connection, for example:
@Path("notifications")
public static class NotificationsResource {
@GET
@Produces(SseFeature.SERVER_SENT_EVENTS)
public EventOutput subscribe() {
log.debug("New stream subscription");
EventOutput eventOutput = broadcaster.createAndAttachEventOutput();
new Timer().schedule( new TimerTask()
{
@Override public void run()
{
try {
eventOutput.close();
} catch (IOException e) {
log.warn("Failed to close event output", e);
}
}
}, 10000); // 10 second timeout
return eventOutput;
}
}
I'm wondering if by subclassing you may have changed the behaviour.
@Override
public void onClose(final ChunkedOutput<OutboundEvent> output) {
int cons = connectionCounter.decrementAndGet();
log.debug("A connection has been closed. Active connection count: "+ cons);
}
Here you don't close the ChunkedOutput, so the connection won't be released. Could this be the problem?
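If that is indeed the issue, a minimal sketch of what closing the output in the subclass could look like (the explicit close() call is my own suggestion, not something taken from the question):
@Override
public void onClose(final ChunkedOutput<OutboundEvent> output) {
    int cons = connectionCounter.decrementAndGet();
    log.debug("A connection has been closed. Active connection count: " + cons);
    try {
        // explicitly release the server-side resources held by the stale output
        output.close();
    } catch (IOException e) {
        log.warn("Could not close the ChunkedOutput", e);
    }
}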
Related
I have a list of objects that I put into Spring AMQP. The objects come from a controller, and there is a service that processes them. This service may crash with an OutOfMemoryError, which is why I run several instances of the application.
The problem is that when the service crashes, I lose the messages it had already received. I have read about NACK and could use it in the case of an Exception or RuntimeException, but my service crashes with an Error, so I cannot send a NACK. Is it possible to set a timeout in AMQP after which a message is redelivered if I have not acknowledged the messages that arrived earlier?
Here is the code I wrote:
public class Exchanges {
public static final String EXC_RENDER_NAME = "render.exchange.topic";
public static final TopicExchange EXC_RENDER = new TopicExchange(EXC_RENDER_NAME, true, false);
}
public class Queues {
public static final String RENDER_NAME = "render.queue.topic";
public static final Queue RENDER = new Queue(RENDER_NAME);
}
@RequiredArgsConstructor
@Service
public class RenderRabbitEventListener extends RabbitEventListener {
private final ApplicationEventPublisher eventPublisher;
@RabbitListener(bindings = @QueueBinding(value = @Queue(Queues.RENDER_NAME),
exchange = @Exchange(value = Exchanges.EXC_RENDER_NAME, type = "topic"),
key = "render.#")
)
public void onMessage(Message message, Channel channel) {
String routingKey = parseRoutingKey(message);
log.debug(String.format("Event %s", routingKey));
RenderQueueObject queueObject = parseRender(message, RenderQueueObject.class);
handleMessage(queueObject);
}
public void handleMessage(RenderQueueObject render) {
GenericSpringEvent<RenderQueueObject> springEvent = new GenericSpringEvent<>(render);
springEvent.setRender(true);
eventPublisher.publishEvent(springEvent);
}
}
And this is the method that sends messages:
@Async("threadPoolTaskExecutor")
@EventListener(condition = "#event.queue")
public void start(GenericSpringEvent<RenderQueueObject> event) {
RenderQueueObject renderQueueObject = event.getWhat();
send(RENDER_NAME, renderQueueObject);
}
private void send(String routingKey, Object queue) {
try {
rabbitTemplate.convertAndSend(routingKey, objectMapper.writeValueAsString(queue));
} catch (JsonProcessingException e) {
log.warn("Can't send event!", e);
}
}
You need to close the connection to get the message re-queued.
It's best to terminate the application after an OOME (which, of course, will close the connection).
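To make that concrete, here is a rough sketch (reusing the question's listener, parseRender and handleMessage) of what this could look like with manual acknowledgements, assuming the container is configured with AcknowledgeMode.MANUAL; alternatively, the JVM flag -XX:+ExitOnOutOfMemoryError (Java 8u92+) achieves the termination without a catch block:
@RabbitListener(bindings = @QueueBinding(value = @Queue(Queues.RENDER_NAME),
        exchange = @Exchange(value = Exchanges.EXC_RENDER_NAME, type = "topic"),
        key = "render.#"))
public void onMessage(Message message, Channel channel) throws IOException {
    try {
        RenderQueueObject queueObject = parseRender(message, RenderQueueObject.class);
        handleMessage(queueObject);
        // acknowledge only after successful processing
        channel.basicAck(message.getMessageProperties().getDeliveryTag(), false);
    } catch (Error err) {
        // an Error such as OutOfMemoryError leaves the JVM in an unknown state;
        // exiting closes the AMQP connection and the broker re-queues the unacked message
        System.exit(1);
    }
}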
I'm having some trouble with the right setup of the HTTP component. Currently a microservice pulls JSON content from a provider, processes it and sends it to the next service for further processing. The main problem is that this microservice creates a ton of CLOSE_WAIT socket connections. I understand that the whole concept of keep-alive is to keep the connection open until I close it, but it's possible that the server drops the connection for some reason, leaving these CLOSE_WAIT sockets behind.
I've created a small service for debugging/testing purposes which makes a GET call to Google, but even this connection stays open until I close the program. I've tried many different solutions:
.setHeader("Connection", constant("Close"))
-Dhttp.keepAlive=false as VM argument
Switching from Camel-Http to Camel-Http4
httpClient.soTimeout=500 (Camel-HTTP), httpClient.socketTimeout=500 and connectionTimeToLive=500 (Camel-HTTP4)
.setHeader("Connection", simple("Keep-Alive")) and
.setHeader("Keep-Alive", simple("timeout=10")) (Camel-HTTP4)
Changing the response of DefaultConnectionKeepAliveStrategy from -1 (never expire) to a specific value via the debugger in Camel-HTTP4 - that works, but I was not able to inject my own strategy.
but I had no success. So maybe one of you can help me:
How can I tell Camel-HTTP that it should close a connection when a specific time has passed? For example, the service pulls from the content provider every hour. After 3-4 hours the HttpComponent should close the connection after the pull and reopen it for the next pull. Currently every connection is put back into the MultiThreadedHttpConnectionManager and the socket stays open.
If that's not possible with Camel-HTTP: how can I inject an HttpClientBuilder into the creation of my route? I know it should be possible via the httpClient option, but I don't understand that specific part of the documentation.
Thank you all for your help
Unfortunately none of the proposed answers got rid of the CLOSE_WAIT connection state on my side; the connections stayed in that state until the application was finally closed.
I reproduced this problem with the following test case:
public class HttpInvokationTest extends CamelSpringTestSupport {
private static final Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
#EndpointInject(uri = "mock:success")
private MockEndpoint successEndpoint;
#EndpointInject(uri = "mock:failure")
private MockEndpoint failureEndpoint;
@Override
protected AbstractApplicationContext createApplicationContext() {
return new AnnotationConfigApplicationContext(ContextConfig.class);
}
@Configuration
@Import(HttpClientSpringTestConfig.class)
public static class ContextConfig extends CamelConfiguration {
@Override
public List<RouteBuilder> routes() {
List<RouteBuilder> routes = new ArrayList<>(1);
routes.add(new RouteBuilder() {
@Override
public void configure() {
from("direct:start")
.log(LoggingLevel.INFO, LOG, CONFIDENTIAL, "Invoking external URL: ${header[TEST_URL]}")
.setHeader("Connection", constant("close"))
.recipientList(header("TEST_URL"))
.log(LoggingLevel.DEBUG, "HTTP response code: ${header["+Exchange.HTTP_RESPONSE_CODE+"]}")
.bean(CopyBodyToHeaders.class)
.choice()
.when(header(Exchange.HTTP_RESPONSE_CODE).isGreaterThanOrEqualTo(300))
.to("mock:failure")
.otherwise()
.to("mock:success");
}
});
return routes;
}
}
@Test
public void testHttpInvocation() throws Exception {
successEndpoint.expectedMessageCount(1);
failureEndpoint.expectedMessageCount(0);
ProducerTemplate template = context.createProducerTemplate();
template.sendBodyAndHeader("direct:start", null, "TEST_URL", "http4://meta.stackoverflow.com");
successEndpoint.assertIsSatisfied();
failureEndpoint.assertIsSatisfied();
Exchange exchange = successEndpoint.getExchanges().get(0);
Map<String, Object> headers = exchange.getIn().getHeaders();
String body = exchange.getIn().getBody(String.class);
for (String key : headers.keySet()) {
LOG.info("Header: {} -> {}", key, headers.get(key));
}
LOG.info("Body: {}", body);
Thread.sleep(120000);
}
}
and issuing netstat -ab -p tcp | grep 151.101.129.69, where the IP is the one of meta.stackoverflow.com.
This gave responses like:
tcp4 0 0 192.168.0.10.52183 151.101.129.69.https ESTABLISHED 37562 2118
tcp4 0 0 192.168.0.10.52182 151.101.129.69.http ESTABLISHED 885 523
right after the invocation, followed by
tcp4 0 0 192.168.0.10.52183 151.101.129.69.https CLOSE_WAIT 37562 2118
tcp4 0 0 192.168.0.10.52182 151.101.129.69.http CLOSE_WAIT 885 523
responses until the application was closed due to the Connection: keep-alive header even with a configuration like the one below:
@Configuration
@EnableConfigurationProperties(HttpClientSettings.class)
public class HttpClientSpringTestConfig {
private final static Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
@Resource
private HttpClientSettings httpClientSettings;
@Resource
private CamelContext camelContext;
private SocketConfig httpClientSocketConfig() {
/*
socket timeout:
Monitors the time passed between two consecutive incoming messages over the connection and
raises a SocketTimeoutException if no message was received within the given timeout interval
*/
LOG.info("Creating a SocketConfig with a socket timeout of {} seconds", httpClientSettings.getSoTimeout());
return SocketConfig.custom()
.setSoTimeout(httpClientSettings.getSoTimeout() * 1000)
.setSoKeepAlive(false)
.setSoReuseAddress(false)
.build();
}
private RequestConfig httpClientRequestConfig() {
/*
connection timeout:
The time span the application will wait for a connection to get established. If the connection
is not established within the given amount of time a ConnectionTimeoutException will be raised.
*/
LOG.info("Creating a RequestConfig with a socket timeout of {} seconds and a connection timeout of {} seconds",
httpClientSettings.getSoTimeout(), httpClientSettings.getConTimeout());
return RequestConfig.custom()
.setConnectTimeout(httpClientSettings.getConTimeout() * 1000)
.setSocketTimeout(httpClientSettings.getSoTimeout() * 1000)
.build();
}
@Bean(name = "httpClientConfigurer")
public HttpClientConfigurer httpConfiguration() {
ConnectionKeepAliveStrategy myStrategy = new ConnectionKeepAliveStrategy() {
@Override
public long getKeepAliveDuration(HttpResponse response, HttpContext context) {
return 5 * 1000;
}
};
PoolingHttpClientConnectionManager conMgr =
new PoolingHttpClientConnectionManager();
conMgr.closeIdleConnections(5, TimeUnit.SECONDS);
return builder -> builder.setDefaultSocketConfig(httpClientSocketConfig())
.setDefaultRequestConfig(httpClientRequestConfig())
.setConnectionTimeToLive(5, TimeUnit.SECONDS)
.setKeepAliveStrategy(myStrategy)
.setConnectionManager(conMgr);
}
@PostConstruct
public void init() {
LOG.debug("Initializing HTTP clients");
HttpComponent httpComponent = camelContext.getComponent("http4", HttpComponent.class);
httpComponent.setHttpClientConfigurer(httpConfiguration());
HttpComponent httpsComponent = camelContext.getComponent("https4", HttpComponent.class);
httpsComponent.setHttpClientConfigurer(httpConfiguration());
}
}
or defining the settings directly on the respective HttpComponent.
On examining the proposed methods in the HttpClient code it becomes obvious that these are single-shot operations and not configurations that HttpClient itself re-checks every few milliseconds.
PoolingHttpClientConnectionManager states further that:
The handling of stale connections was changed in version 4.4. Previously, the code would check every connection by default before re-using it. The code now only checks the connection if the elapsed time since the last use of the connection exceeds the timeout that has been set. The default timeout is set to 2000ms
which only occurs if an attempt is made to re-use a connection; this makes sense for a connection pool, especially when multiple messages are exchanged via the same connection. For single-shot invocations, which should behave more like Connection: close, that connection may not be reused for some time, leaving it open or half-closed, as no further attempt is made to read from it and therefore it is never noticed that the connection could be closed.
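For reference, the re-validation interval mentioned in the quote can be tuned on the pooling connection manager (HttpClient 4.4+); this is only a sketch of that knob and, as explained above, it does not help with connections that nobody tries to reuse:
// re-validate a pooled connection before reuse once it has been unused for 100 ms
// (the default is 2000 ms, as quoted above)
PoolingHttpClientConnectionManager conMgr = new PoolingHttpClientConnectionManager();
conMgr.setValidateAfterInactivity(100);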
I noticed that I already solved such an issue a while back with traditional HttpClients and started to port this solution to Camel, which worked out quite easily.
The solution basically consists of registering the HttpClient connection managers with a service and then periodically (every 5 seconds in my case) calling closeExpiredConnections() and closeIdleConnections(...).
This logic is kept in a singleton enum, as this is actually in a library that a couple of applications use, each running in their own JVM.
/**
* This singleton monitor will check every few seconds for idle and stale connections and perform
* a cleanup on the connections using the registered connection managers.
*/
public enum IdleConnectionMonitor {
INSTANCE;
private final static Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
/** The execution service which runs the cleanup every 5 seconds **/
private ScheduledExecutorService executorService =
Executors.newScheduledThreadPool(1, new NamingThreadFactory());
/** The actual thread which performs the monitoring **/
private IdleConnectionMonitorThread monitorThread = new IdleConnectionMonitorThread();
IdleConnectionMonitor() {
// execute the thread every 5 seconds till the application is shutdown (or the shutdown method
// is invoked)
executorService.scheduleAtFixedRate(monitorThread, 5, 5, TimeUnit.SECONDS);
}
/**
* Registers a {@link HttpClientConnectionManager} to monitor for stale connections
*/
public void registerConnectionManager(HttpClientConnectionManager connMgr) {
monitorThread.registerConnectionManager(connMgr);
}
/**
* Request to stop the monitoring for stale HTTP connections.
*/
public void shutdown() {
executorService.shutdown();
try {
if (!executorService.awaitTermination(3, TimeUnit.SECONDS)) {
LOG.warn("Connection monitor shutdown not finished after 3 seconds!");
}
} catch (InterruptedException iEx) {
LOG.warn("Execution service was interrupted while waiting for graceful shutdown");
}
}
/**
* Upon invocation, the list of registered connection managers will be iterated through and if a
* referenced object is still reachable {@link HttpClientConnectionManager#closeExpiredConnections()}
* and {@link HttpClientConnectionManager#closeIdleConnections(long, TimeUnit)} will be invoked
* in order to cleanup stale connections.
* <p/>
* This runnable implementation holds a weakly referable list of {@link
* HttpClientConnectionManager} objects. If a connection manager is only reachable by {@link
* WeakReference}s or {@link PhantomReference}s it gets eligible for garbage collection and thus
* may return null values. If this is the case, the connection manager will be removed from the
* internal list of registered connection managers to monitor.
*/
private static class IdleConnectionMonitorThread implements Runnable {
// we store only weak-references to connection managers in the list, as the lifetime of the
// thread may extend the lifespan of a connection manager and thus allowing the garbage
// collector to collect unused objects as soon as possible
private List<WeakReference<HttpClientConnectionManager>> registeredConnectionManagers =
Collections.synchronizedList(new ArrayList<>());
@Override
public void run() {
LOG.trace("Executing connection cleanup");
Iterator<WeakReference<HttpClientConnectionManager>> conMgrs =
registeredConnectionManagers.iterator();
while (conMgrs.hasNext()) {
WeakReference<HttpClientConnectionManager> weakConMgr = conMgrs.next();
HttpClientConnectionManager conMgr = weakConMgr.get();
if (conMgr != null) {
LOG.trace("Found connection manager: {}", conMgr);
conMgr.closeExpiredConnections();
conMgr.closeIdleConnections(30, TimeUnit.SECONDS);
} else {
conMgrs.remove();
}
}
}
void registerConnectionManager(HttpClientConnectionManager connMgr) {
registeredConnectionManagers.add(new WeakReference<>(connMgr));
}
}
private static class NamingThreadFactory implements ThreadFactory {
@Override
public Thread newThread(Runnable r) {
Thread t = new Thread(r);
t.setName("Connection Manager Monitor");
return t;
}
}
}
As mentioned, this singleton service spawns its own thread that invokes the two above-mentioned methods every 5 seconds. These invocations take care of closing connections that have expired or that have been idle for the stated amount of time.
To camelize this service, EventNotifierSupport can be utilized to let Camel take care of shutting down the monitor thread once the context is closing down.
/**
* This Camel service will take care of the lifecycle management of {@link IdleConnectionMonitor}
* and invoke {@link IdleConnectionMonitor#shutdown()} once Camel is closing down in order to stop
* listening for stale connections.
*/
public class IdleConnectionMonitorService extends EventNotifierSupport {
private final static Logger LOG = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
private IdleConnectionMonitor connectionMonitor;
@Override
public void notify(EventObject event) {
if (event instanceof CamelContextStartedEvent) {
LOG.info("Start listening for closable HTTP connections");
connectionMonitor = IdleConnectionMonitor.INSTANCE;
} else if (event instanceof CamelContextStoppingEvent){
LOG.info("Shutting down listener for open HTTP connections");
connectionMonitor.shutdown();
}
}
@Override
public boolean isEnabled(EventObject event) {
return event instanceof CamelContextStartedEvent || event instanceof CamelContextStoppingEvent;
}
public IdleConnectionMonitor getConnectionMonitor() {
return this.connectionMonitor;
}
}
To take advantage of that service, the connection manager used by the HttpClient that Camel creates internally needs to be registered with it, which is done in the code block below:
private void registerHttpClientConnectionManager(HttpClientConnectionManager conMgr) {
if (!getIdleConnectionMonitorService().isPresent()) {
// register the service with Camel so that on a shutdown the monitoring thread will be stopped
camelContext.getManagementStrategy().addEventNotifier(new IdleConnectionMonitorService());
}
IdleConnectionMonitor.INSTANCE.registerConnectionManager(conMgr);
}
private Optional<IdleConnectionMonitorService> getIdleConnectionMonitorService() {
for (EventNotifier eventNotifier : camelContext.getManagementStrategy().getEventNotifiers()) {
if (eventNotifier instanceof IdleConnectionMonitorService) {
return Optional.of((IdleConnectionMonitorService) eventNotifier);
}
}
return Optional.empty();
}
Last but not least, the connection manager defined in httpConfiguration inside HttpClientSpringTestConfig needed, in my case, to be passed to the register function introduced above:
PoolingHttpClientConnectionManager conMgr = new PoolingHttpClientConnectionManager();
registerHttpClientConnectionManager(conMgr);
This might not be the prettiest solution, but it does close the half-closed connections on my machine.
Edit:
I just learned that you can use a NoConnectionReuseStrategy which changes the connection state to TIME_WAIT rather than CLOSE_WAIT and therefore removes the connection after a short moment. Unfortunately, the request is still issued with a Connection: keep-alive header. This strategy will create a new connection per request, i.e. if you've got a 301 Moved Permanently redirect response the redirect would occur on a new connection.
The httpClientConfigurer bean would need to change to the following in order to make use of the above mentioned strategy:
@Bean(name = "httpClientConfigurer")
public HttpClientConfigurer httpConfiguration() {
return builder -> builder.setDefaultSocketConfig(socketConfig)
.setDefaultRequestConfig(requestConfig)
.setConnectionReuseStrategy(NoConnectionReuseStrategy.INSTANCE);
}
This can be done by closing connections once they have been idle for a configured time. You can achieve the same by configuring an idle connection timeout for the Camel HTTP component.
Camel HTTP provides an interface to do so.
Get the client connection manager from org.apache.camel.component.http4.HttpComponent and cast it to PoolingHttpClientConnectionManager:
PoolingHttpClientConnectionManager poolingClientConnectionManager = (PoolingHttpClientConnectionManager) httpComponent
.getClientConnectionManager();
poolingClientConnectionManager.closeIdleConnections(5000, TimeUnit.MILLISECONDS);
See http://hc.apache.org/httpcomponents-client-ga/httpclient/apidocs/org/apache/http/impl/conn/PoolingHttpClientConnectionManager.html#closeIdleConnections(long, java.util.concurrent.TimeUnit)
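Note that closeIdleConnections(...) is a one-shot call, so to get an ongoing cleanup you would have to invoke it periodically yourself; a rough sketch (the 5 second period is arbitrary):
// hypothetical periodic cleanup using the connection manager obtained above
ScheduledExecutorService cleaner = Executors.newSingleThreadScheduledExecutor();
cleaner.scheduleAtFixedRate(() -> {
    poolingClientConnectionManager.closeExpiredConnections();
    poolingClientConnectionManager.closeIdleConnections(5000, TimeUnit.MILLISECONDS);
}, 5, 5, TimeUnit.SECONDS);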
Firstly, Roman Vottner, your answer and your sheer dedication to finding the issue helped me a truckload. I had been struggling with the CLOSE_WAIT issue for 2 days and your answer was what helped. Here is what I did: I added the following code to my CamelConfiguration class, which essentially tampers with the CamelContext at startup.
HttpComponent http4 = camelContext.getComponent("https4", HttpComponent.class);
http4.setHttpClientConfigurer(new HttpClientConfigurer() {
@Override
public void configureHttpClient(HttpClientBuilder builder) {
builder.setConnectionReuseStrategy(NoConnectionReuseStrategy.INSTANCE);
}
});
Worked like a charm.
You can provide your own clientConnectionManager to HTTP4. Generally you should use an instance of org.apache.http.impl.conn.PoolingHttpClientConnectionManager, which you'd configure with your own org.apache.http.config.SocketConfig by passing it to the setDefaultSocketConfig method of the connection manager.
If you're using Spring with Java config, you would have a method:
@Bean
PoolingHttpClientConnectionManager connectionManager() {
SocketConfig socketConfig = SocketConfig.custom()
.setSoKeepAlive(false)
.setSoReuseAddress(true)
.build();
PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
connectionManager.setDefaultSocketConfig(socketConfig);
return connectionManager;
}
and then you'd just use it in your endpoint definition like so: clientConnectionManager=#connectionManager
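For example, a hypothetical route referencing that bean (the bean name connectionManager comes from the @Bean method above; the target URL is made up):
// sketch of an endpoint URI that picks up the Spring-managed connection manager
from("direct:start")
    .to("http4://example.org/api?clientConnectionManager=#connectionManager");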
I have a Play application with a ConsumerService that I want to start and have listen to a particular RabbitMQ queue on startup. In Play! 2.5, my understanding is that this is now done via a Guice module, so I have a Module.java class in my app's root directory that looks like this:
public class Module extends AbstractModule {
@Override
protected void configure() {
bind(ConsumerService.class).asEagerSingleton();
}
}
Here is my ConsumerService class:
@Singleton
public class ConsumerService {
private static final String TASK_QUEUE_NAME = "queue";
private final JPAApi jpaApi;
@Inject
public ConsumerService(JPAApi api) throws Exception {
this.jpaApi = api;
pullMessages();
}
@Transactional
public void pullMessages() throws Exception {
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
final Connection connection = factory.newConnection();
final Channel channel = connection.createChannel();
channel.queueDeclare(TASK_QUEUE_NAME, true, false, false, null);
Logger.info(" [*] Waiting for messagez. To exit press CTRL+C");
channel.basicQos(1);
final Consumer consumer = new DefaultConsumer(channel) {
@Override
public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws IOException {
try {
JPA.em();
} catch (Exception e) {
System.out.println("JPA.em() failed: " + e.getMessage());
}
try {
jpaApi.em();
} catch (Exception e) {
System.out.println("jpaApi.em() failed: " + e.getMessage());
}
}
};
channel.basicConsume(TASK_QUEUE_NAME, false, consumer);
}
}
Clearly binding this service as an eager singleton has its downsides, as attempting to get an EntityManager via either of these methods throws an exception. My understanding is that this is because the class is bound/loaded before Play has initialized the EntityManager factory; basically the application hasn't started yet.
Forgive me, but even though I've worked with JPA for years, I find this very confusing and am not sure what my best approach should be for working around the basic issue: starting up a "listener" that ultimately needs to do some DB work when it consumes a message.
I'm curious if there's a way I can put the "handleDelivery" method in a transaction, or redesign my initialization flow such that I can call/inject the jpaApi cleanly.
Also, is there any better way to start up this consumer in Play 2.5 than the way I'm doing it here? I'm having trouble finding one.
I've looked into the JPAApi.withTransaction documentation, but I'm hoping there's a better way that I'm not aware of.
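For what it's worth, a rough, untested sketch of wrapping the delivery in a transaction, assuming Play 2.5's JPAApi.withTransaction(Runnable) overload is available; the entity handling itself is left as a placeholder:
@Override
public void handleDelivery(String consumerTag, Envelope envelope,
                           AMQP.BasicProperties properties, byte[] body) throws IOException {
    // bind an EntityManager and a transaction to this (RabbitMQ) thread for the duration of the block
    jpaApi.withTransaction(() -> {
        EntityManager em = jpaApi.em();
        // ... build entities from 'body' and persist/merge them here ...
    });
}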
I'm working on an application that uses Websockets (Java EE 7) to send messages to all the connected clients asynchronously. The server (Websocket endpoint) should send these messages whenever a new article (an engagement modal in my app) is created.
Every time a connection is established to the WebSocket endpoint, I add the corresponding session to a list that I can access from outside.
The problem is that when I access this WebSocket endpoint (to which all the clients are connected) from outside, i.e. from any other business class, I need to get the existing instance (like a singleton).
So, can you please suggest a way to get the existing instance of the WebSocket endpoint? I can't create it with new MyWebsocketEndPoint(), because it will be created by the WebSocket runtime whenever a request from a client is received.
For reference:
private static WebSocketEndPoint INSTANCE = null;
public static WebSocketEndPoint getInstance() {
if(INSTANCE == null) {
// Instead of creating a new instance, I need an existing one
INSTANCE = new WebSocketEndPoint ();
}
return INSTANCE;
}
Thanks in advance.
The container creates a separate instance of the endpoint for every client connection, so you can't do what you're trying to do. But I think what you're trying to do is send a message to all the active client connections when an event occurs, which is fairly straightforward.
The javax.websocket.Session class has the getBasicRemote method to retrieve a RemoteEndpoint.Basic instance that represents the endpoint associated with that session.
You can retrieve all the open sessions by calling Session.getOpenSessions(), then iterate through them. The loop will send each client connection a message. Here's a simple example:
#ServerEndpoint("/myendpoint")
public class MyEndpoint {
@OnMessage
public void onMessage(Session session, String message) {
try {
for (Session s : session.getOpenSessions()) {
if (s.isOpen()) {
s.getBasicRemote().sendText(message);
}
}
} catch (IOException ex) { ... }
}
}
But in your case, you probably want to use CDI events to trigger the update to all the clients. In that case, you'd create a CDI event that a method in your Websocket endpoint class observes:
@ServerEndpoint("/myendpoint")
public class MyEndpoint {
// EJB that fires an event when a new article appears
@EJB
ArticleBean articleBean;
// a collection containing all the sessions
private static final Set<Session> sessions =
Collections.synchronizedSet(new HashSet<Session>());
@OnOpen
public void onOpen(final Session session) {
// add the new session to the set
sessions.add(session);
...
}
@OnClose
public void onClose(final Session session) {
// remove the session from the set
sessions.remove(session);
}
public void broadcastArticle(@Observes @NewArticleEvent ArticleEvent articleEvent) {
synchronized(sessions) {
for (Session s : sessions) {
if (s.isOpen()) {
try {
// send the article summary to all the connected clients
s.getBasicRemote().sendText("New article up:" + articleEvent.getArticle().getSummary());
} catch (IOException ex) { ... }
}
}
}
}
}
The EJB in the above example would do something like:
...
@Inject
Event<ArticleEvent> newArticleEvent;
public void publishArticle(Article article) {
...
newArticleEvent.fire(new ArticleEvent(article));
...
}
See the Java EE 7 Tutorial chapters on WebSockets and CDI Events.
Edit: Modified the @Observes method to use an event as a parameter.
Edit 2: wrapped the loop in broadcastArticle in synchronized, per @gcvt.
Edit 3: Updated links to Java EE 7 Tutorial. Nice job, Oracle. Sheesh.
Actually, the WebSocket API provides a way to control endpoint instantiation. See https://tyrus.java.net/apidocs/1.2.1/javax/websocket/server/ServerEndpointConfig.Configurator.html
A simple sample (taken from a Tyrus - the WebSocket RI - test):
public static class MyServerConfigurator extends ServerEndpointConfig.Configurator {
public static final MyEndpointAnnotated testEndpoint1 = new MyEndpointAnnotated();
public static final MyEndpointProgrammatic testEndpoint2 = new MyEndpointProgrammatic();
@Override
public <T> T getEndpointInstance(Class<T> endpointClass) throws InstantiationException {
if (endpointClass.equals(MyEndpointAnnotated.class)) {
return (T) testEndpoint1;
} else if (endpointClass.equals(MyEndpointProgrammatic.class)) {
return (T) testEndpoint2;
}
throw new InstantiationException();
}
}
You need to register this to an endpoint:
@ServerEndpoint(value = "/echoAnnotated", configurator = MyServerConfigurator.class)
public static class MyEndpointAnnotated {
@OnMessage
public String onMessage(String message) {
assertEquals(MyServerConfigurator.testEndpoint1, this);
return message;
}
}
or you can use it with programmatic endpoints as well:
public static class MyApplication implements ServerApplicationConfig {
@Override
public Set<ServerEndpointConfig> getEndpointConfigs(Set<Class<? extends Endpoint>> endpointClasses) {
return new HashSet<ServerEndpointConfig>
(Arrays.asList(ServerEndpointConfig.Builder
.create(MyEndpointProgrammatic.class, "/echoProgrammatic")
.configurator(new MyServerConfigurator())
.build()));
}
@Override
public Set<Class<?>> getAnnotatedEndpointClasses(Set<Class<?>> scanned) {
return new HashSet<Class<?>>(Arrays.asList(MyEndpointAnnotated.class));
}
}
Of course it is up to you whether you use one configurator for all endpoints (with ugly ifs, as in the presented snippet) or create a separate configurator for each endpoint.
Please do not copy the presented code as it is - this is only part of the Tyrus tests and it violates some basic OOP principles.
See https://github.com/tyrus-project/tyrus/blob/1.2.1/tests/e2e/src/test/java/org/glassfish/tyrus/test/e2e/GetEndpointInstanceTest.java for complete test.
Say you are designing a client that is going to connect to a lot of servers, like a crawler.
You will code something like this:
// the pipeline
public class CrawlerPipelineFactory implements ChannelPipelineFactory {
public ChannelPipeline getPipeline() throws Exception {
return Channels.pipeline(new CrawlerHandler());
}
}
// the channel handler
public class CrawlerHandler extends SimpleChannelHandler {
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
// ...
}
}
// the main :
public static void main(String[] args) {
ChannelFactory factory = new NioClientSocketChannelFactory(Executors.newCachedThreadPool(),Executors.newCachedThreadPool());
ClientBootstrap scannerBootstrap = new ClientBootstrap(factory);
scannerBootstrap.setPipelineFactory(new CrawlerPipelineFactory());
while(true){
MyURL url = stack.pop();
ChannelFuture connect = scannerBootstrap.connect(url.getSocketAddress());
}
}
Now, when you are in your application handler - the class that extends SimpleChannelHandler or whatever stream handler you use (CrawlerHandler in the example) - the only piece of information you get is the socket address you are connecting to, which you can recover in the channelConnected() method.
OK, but what if I want to recover some user data, like the MyURL object you see in my code example?
I use a dirty hack: a Map<"ip:port", MyURL>, so I can retrieve the associated data in channelConnected() because I know which ip:port I'm connected to.
This hack is really dirty; it won't work if you are connecting simultaneously to the same server (or you'd have to bind to a local port and use a key like "localport:ip:remoteport", which is even dirtier).
So what is the proper way to pass data to the CrawlerHandler?
It would be cool if we could pass this data via the connect() method of the bootstrap. I know getPipeline() of my ChannelPipelineFactory is invoked via connect(), but it takes no arguments, so here is another dirty hack I use:
EDIT:
// the main
while(!targets.isEmpty()){
client.connect("localhost",111); // we will never connect to localhost, it's a hack
}
// the pipleline
public ChannelPipeline getPipeline() throws Exception {
return Channels.pipeline(
new CrawlerHandler(targets.pop()) // I specify each new host to connect here
);
}
// in my channel handler
// Now I have the data I want in the constructor, so I'm sure I get it before anything else is called
public class CrawlerHandler extends SimpleChannelHandler {
ExtraParameter target;
public CrawlerHandler(ExtraParameter target) {
this.target = target;
}
// but, and it's the most dirty part, I have to abort the connection to localhost, and reinit a new connection to the real target
boolean bFirstConnect = true;
@Override
public void connectRequested(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
if(bFirstConnect){
bFirstConnect = false;
ctx.getChannel().connect(target.getSocketAddr());
}
}
}
You can pass variables to a Channel via the Bootstrap:
Netty.io 4.1 & SO - Adding an attribute to a Channel before creation
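For completeness, a minimal Netty 4.x sketch of that approach (note this is the 4.x API, not the 3.x API used in the question; MyURL and CrawlerHandler are the question's own names):
// an attribute key describing the per-connection data
public static final AttributeKey<MyURL> URL_KEY = AttributeKey.valueOf("url");

Bootstrap bootstrap = new Bootstrap()
        .group(new NioEventLoopGroup())
        .channel(NioSocketChannel.class)
        .handler(new CrawlerHandler());

MyURL url = stack.pop();
bootstrap.attr(URL_KEY, url);                 // attach the data before connecting
bootstrap.connect(url.getSocketAddress());

// inside the handler:
// MyURL url = ctx.channel().attr(URL_KEY).get();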
A very late update to this answer.
You can pass the data to the newly connected channel/channel handler using a ChannelLocal, or via the ChannelHandlerContext (or the Channel itself in the latest Netty 3.x), using a connect future listener. In the example below, ChannelLocal is used.
public class ChannelDataHolder {
public final static ChannelLocal<String> CHANNEL_URL = new ChannelLocal<String>(true);
}
// for each url in bootstrap
MyURL url = ....;
ChannelFuture cf = scannerBootstrap.connect(url.getSocketAddress());
final String urlString = url.getUrl();
cf.addListener(new ChannelFutureListener() {
@Override
public void operationComplete(ChannelFuture future) throws Exception {
ChannelDataHolder.CHANNEL_URL.set(future.getChannel(), urlString);
}
});
//In the handler
public class CrawlerHandler extends SimpleChannelHandler {
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception {
String urlString = ChannelDataHolder.CHANNEL_URL.get(ctx.getChannel());
// ...use the data here
}
}
Note: instead of ChannelLocal, you can set and get the data using
ChannelHandlerContext.setAttachment()/getAttachment(), or
Channel.setAttachment()/getAttachment() in the latest 3.x versions of Netty,
but neither approach supports type safety.
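A quick sketch of that attachment-based variant, for reference (same connect-listener idea as above, just without ChannelLocal):
ChannelFuture cf = scannerBootstrap.connect(url.getSocketAddress());
final String urlString = url.getUrl();
cf.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        // the attachment is an untyped Object, hence the missing type safety
        future.getChannel().setAttachment(urlString);
    }
});

// in the handler:
// String urlString = (String) ctx.getChannel().getAttachment();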