Best practice for socket client within EJB or CDI bean - Java

My application must open a TCP socket connection to a server and listen for periodically incoming messages.
What are the best practices for implementing this in a Java EE 7 application?
Right now I have something like this:
@javax.ejb.Singleton
public class MessageChecker {

    @Asynchronous
    public void startChecking() {
        // set up things
        Socket client = new Socket(...);
        [...]
        // start a loop to retrieve the incoming messages
        while ((line = reader.readLine()) != null) {
            LOG.debug("Message from socket server: " + line);
        }
    }
}
The MessageChecker.startChecking() function is called from a @Startup bean with a @PostConstruct method.
@javax.ejb.Singleton
@Startup
public class Starter {

    @Inject
    private MessageChecker checker;

    @PostConstruct
    public void startup() {
        checker.startChecking();
    }
}
Do you think this is the correct approach?
Actually it is not working well. The application server (WildFly 8) hangs and does not react to shutdown or redeployment commands any more. I have the feeling that it gets stuck in the while(...) loop.
Cheers
Frank

Frank, it is bad practice to do any I/O operations while you're in an EJB context. The reason behind this is simple. When working in a cluster:
They will inherently block each other while waiting on I/O connection timeouts and all other I/O-related timeouts; that is, if the connection does not block for an unspecified amount of time, in which case you will have to create another thread that scans for dead connections.
Only one of the EJBs will be able to connect and send/receive information; the others will just wait in line. This way your system will not scale: no matter how many EJBs you have in your cluster, only one will actually do its work.
Apparently you have already run into problems by doing that :). WildFly 8 seems not to be able to properly create and destroy the bean.
Now, I know your bean is a @Singleton, so your architecture does not rely on transactionality, clustering and distribution of reading from that socket. So you might be OK with that.
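If you do keep the blocking-socket approach for now, the undeploy hang can usually be avoided by making the read loop stoppable. Here is a minimal sketch, assuming a volatile flag, a socket field and a @PreDestroy hook that are not in your original code:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.Socket;

import javax.annotation.PreDestroy;
import javax.ejb.Asynchronous;
import javax.ejb.Singleton;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Singleton
public class MessageChecker {

    private static final Logger LOG = LoggerFactory.getLogger(MessageChecker.class);

    private volatile boolean running = true;
    private Socket client;

    @Asynchronous
    public void startChecking() throws IOException {
        client = new Socket("example-host", 1234); // host and port are placeholders
        BufferedReader reader = new BufferedReader(new InputStreamReader(client.getInputStream()));
        String line;
        while (running && (line = reader.readLine()) != null) {
            LOG.debug("Message from socket server: " + line);
        }
    }

    @PreDestroy
    public void stop() throws IOException {
        running = false;
        if (client != null) {
            client.close(); // unblocks readLine() so the async thread can finish during undeploy
        }
    }
}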
However :D, you are asking for a Java EE compliant way of solving this. Here is what should be done:
Redesign your solution to go with JMS. It 'smells' like you are trying to provide async messaging functionality (send a message and wait for a reply). You might be using a synchronous protocol to do async messaging. Just give it a thought; a consumer sketch follows below.
Create a JCA compliant adapter, which will be injected into your EJB as a @Resource:
You will have a connection pool configurable at the AS level (so you can have different values for different environments).
You will have transactionality and rollback. Of course, the rollback behavior will have to be coded by you.
You can inject it via a @Resource annotation.
There are some adapters out there; some might fit like a glove, some might be a bit overdesigned.
Oracle JCA Adapter
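For illustration, a rough sketch of what the consuming side could look like if you go the JMS route; the queue name is an assumption, and the adapter or bridge that feeds the queue from the external TCP server is not shown:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup",
                              propertyValue = "java:/jms/queue/incoming"),
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue")
})
public class IncomingMessageListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            String body = ((TextMessage) message).getText();
            // handle the periodic message here, one message per container-managed call
        } catch (JMSException e) {
            throw new IllegalStateException(e);
        }
    }
}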

Related

WebSocket breaks after Tomcat 7 to 8 upgrade, using NIO Connector, while having ThreadLocal in code along with Spring’s TextWebSocketHandler

The WebSocket connection works with Spring WebSocket 4.1.6's TextWebSocketHandler. For connection establishment we have a HandshakeInterceptor, which sets the user context in its beforeHandshake() method:
AuthenticatedUser authUser = (AuthenticatedUser) httpServletRequest
        .getSession().getAttribute(WEB_SOCKET_USER);
UserContextHolder.setUserContext(new UserContext(authUser));
The UserContextHolder class is modeled after Spring's org.springframework.context.i18n.LocaleContextHolder: a class that holds the current UserContext in thread-local storage:
private static ThreadLocal<UserContext> userContextHolder =
        new InheritableThreadLocal<UserContext>();

public static void setUserContext(UserContext userContext) {
    userContextHolder.set(userContext);
    if (userContext != null) {
        LocaleContextHolder.setLocaleContext(userContext);
    } else {
        LocaleContextHolder.setLocaleContext(new LocaleContext() {
            public Locale getLocale() {
                return UserContext.getDefaultLocale();
            }
        });
    }
}
This UserContext holds the thread's authenticated user information for the entire app, not only Spring (we also use other software, such as Quartz, coexisting in this codebase on the same JVM, so we need to communicate between them).
Everything works okay when we run on the previous Tomcat 7 with the standard BIO Connector. The problem arises with the upgrade to Tomcat 8, where the new NIO Connector is enabled by default.
When WebSocket messages arrive they are processed in calls to service methods that are validated via the @SecurityValidation annotation; the MethodInterceptor checks whether the given thread has the UserContext set:
AuthenticatedUser user = UserContextHolder.getUser();
if (user == null) {
    throw new Exception(/* ... */);
}
But it is null, so the exception gets thrown.
We believe the problem lies in the threading change after the switch from the BIO to the NIO Connector.
BIO scenario: we have one thread per WebSocket, so one handshake sets one UserContext and all further work happens on that exact thread. It works okay even when there are more sockets, because when we have, say, 4 different WebSockets open, there are 4 different threads handling them; that is why the ThreadLocal usage works well.
NIO scenario: the non-blocking I/O concept is about reducing the number of threads (simplifying for our case); internally the NIO Connector uses NIO's Selector to manage the workload on a single thread (with an event loop, I guess; this still needs to be confirmed). As we now have just one single thread handling all the WebSockets (or at least some part of them), the unexpected exception is thrown.
I'm not sure why the UserContext, once set, becomes null later; the code investigation gives us no clues, which is why we think this might be a bug (or something like it).
The UserContext's ThreadLocal being null under NIO seems to be the cause of the exception. Has anyone used ThreadLocal with the NIO Connector? What is the best way to migrate a Tomcat 7 BIO implementation to a Tomcat 8 NIO implementation when using ThreadLocal in WebSocket communication? Thanks!
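For reference, one commonly suggested migration path is to stop setting the ThreadLocal during the handshake and instead carry the user in the per-connection WebSocket session attributes, binding and clearing the context around each message. A rough sketch reusing AuthenticatedUser, UserContext and UserContextHolder from above (the interceptor/handler classes and the literal attribute name are assumptions):

import java.util.Map;

import javax.servlet.http.HttpSession;

import org.springframework.http.server.ServerHttpRequest;
import org.springframework.http.server.ServerHttpResponse;
import org.springframework.http.server.ServletServerHttpRequest;
import org.springframework.web.socket.TextMessage;
import org.springframework.web.socket.WebSocketHandler;
import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.handler.TextWebSocketHandler;
import org.springframework.web.socket.server.HandshakeInterceptor;

public class UserHandshakeInterceptor implements HandshakeInterceptor {

    @Override
    public boolean beforeHandshake(ServerHttpRequest request, ServerHttpResponse response,
                                   WebSocketHandler wsHandler, Map<String, Object> attributes) {
        if (request instanceof ServletServerHttpRequest) {
            HttpSession httpSession = ((ServletServerHttpRequest) request).getServletRequest().getSession();
            // copy the user into the per-connection attribute map instead of a ThreadLocal
            attributes.put("WEB_SOCKET_USER", httpSession.getAttribute("WEB_SOCKET_USER"));
        }
        return true;
    }

    @Override
    public void afterHandshake(ServerHttpRequest request, ServerHttpResponse response,
                               WebSocketHandler wsHandler, Exception exception) {
        // nothing to do
    }
}

class UserAwareHandler extends TextWebSocketHandler {

    @Override
    protected void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
        AuthenticatedUser user = (AuthenticatedUser) session.getAttributes().get("WEB_SOCKET_USER");
        UserContextHolder.setUserContext(new UserContext(user)); // bind to whichever NIO worker runs this message
        try {
            // delegate to the existing, @SecurityValidation-protected service layer here
        } finally {
            UserContextHolder.setUserContext(null); // always clear: NIO worker threads are shared across sockets
        }
    }
}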

JMS Queue Resource Leak Suspected - injecting Connection, Session, and Queue

I am trying to identify where a suspected memory / resource leak is occurring with regards to a JMS Queue I have built. I am new to JMS queues, so I have used many of the standard JMS class objects to ensure stability. But somewhere in my code or configuration I am doing something wrong, and my queue is filling up or resources are slowing down, perhaps inherent to unknown deficiencies within the architecture I am attempting to implement.
When load testing my API (using Gatling), I can run 20 messages a second through (which is a tiny load) for most of a ten-minute duration. But after that, the messages seem to back up, and the ability to process them slows to a crawl. Generally, time-out errors begin to occur once requests take more than 60 seconds to complete. There is more business logic that processes data and persists it to a relational database, but none of that appears to be an issue.
Interestingly, subsequent test runs continue with the poor performance, indicating that whatever resource is leaking persists across tests. A restart of the application clears out whatever has become bloated or is leaking. Then the tests run fast again, for the first seven or eight minutes... upon which the cycle repeats itself. Only a restart of the app clears the issue. Since the issue doesn't correct itself, even after waiting for a period of time, something has filled up resources.
When pulling the JMS calls from the logic, I am able to process hundreds of messages a second, and I can run back-to-back test runs without leaking or filling up the queue.
Although this is a Spring project, I am not using Spring's JmsTemplate, so I wrote my own Connection object, which I injected as a Spring bean and implemented as a single connection to avoid creating a new connection for every JMS message I send.
Likewise, I configured my JMS Session to also be an injected bean, in which I use the Connection bean. That way I can persist my Connection and Session objects for sending all of my JMS messages through, which are sent one at a time. A Qpid server I am calling receives these messages. While it is possible I am exceeding its capacity to consume the messages I am producing, I expect that the resource leak is associated with my code, and not the JMS server.
Here are some code snippets to give you an idea of my approach. Any feedback is appreciated.
JmsConfiguration (key methods)
@Bean
public ConnectionFactory jmsConnectionFactory() {
    return new JmsConnectionFactory(user, pass, host);
}

@Bean(name = "jmsSession")
public Session jmsConnection() throws JMSException {
    Connection conn = jmsConnectionFactory().createConnection();
    Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
    return session; // Injected as Singleton
}

@Bean(name = "jmsQueue")
public Queue jmsQueue() throws JMSException {
    return jmsConnection().createQueue(queue);
}

// Jackson's ObjectMapper is heavy enough to warrant injecting and re-using it.
@Bean
public ObjectMapper objectMapper() {
    return new ObjectMapper();
}
JmsMessageEnqueuer
@Component
public class MessageJmsEnqueuer extends CommonThreadScope {

    @Autowired
    @Qualifier("jmsSession")
    private Session jmsSession;

    @Autowired
    @Qualifier("jmsQueue")
    private Queue jmsQueue;

    @Value("${acme.jms.queue}")
    private String jmsQueueName;

    @Autowired
    @Qualifier("jmsObjectMapper")
    private ObjectMapper jmsObjectMapper;

    public void enqueue(String message, String dataType) {
        try {
            String messageAsJson = jmsObjectMapper.writeValueAsString(message);
            MessageProducer jmsMessageProducer = jmsSession.createProducer(jmsQueue);
            TextMessage textMessage = jmsSession.createTextMessage(messageAsJson);
            textMessage.setStringProperty("dataType", dataType);
            jmsMessageProducer.send(textMessage);
            logger.log(Level.INFO, "Message successfully sent. Queue=" + jmsQueueName + ", Message -> " + messageAsJson);
        } catch (JMSException | JMSRuntimeException | JsonProcessingException jmsre) {
            String msg = "JMS Message Processing encountered an error...";
            logService.severe(logger, messagesBuilder() ... msg)
        }
        // Skip the close() method to persist connection...
        // Reconnect logic exists to reset an expired connection from server.
    }
}
I was able to solve my resource leak / deadlock issue simply by rewriting my code to use the simplified API provided with the release of JMS 2.0. Although I was never able to determine which of the Connection / Session / Queue objects was giving my code grief, using the Context object to build my connection and session was the golden ticket in this case.
Upon switching to the simplified API (since I was already pulling in the JMS 2.0 dependency), the resource leak immediately vanished! This leads me to believe that the simplified API does more than just give the developer an easier API to code against. While that is already an advantage in itself (even without the few features that the simplified API doesn't support), it is now clear to me that the underlying connection and session objects are managed by the API, which resolved whatever was filling up or deadlocking.
Furthermore, because the resource build-up was no longer occurring, I was able to triple the number of messages I passed through, allowing me to process 60 users a second, instead of 20. That is a significant increase, and I have fixed the compatibility issues that prevented me from using the simplified JMS API to begin with.
While I would have liked to identify precisely what was fouling up the code, this works as a solution. Plus, the fact that version 2.0 of JMS was released in April of 2013 would indicate that the simplified API is definitely the preferred solution.
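For readers hitting the same issue, here is a minimal sketch of the simplified-API style described above; the class name and constructor wiring are assumptions, not the actual project code:

import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

public class SimplifiedJmsEnqueuer {

    private final ConnectionFactory connectionFactory;
    private final Queue queue;

    public SimplifiedJmsEnqueuer(ConnectionFactory connectionFactory, Queue queue) {
        this.connectionFactory = connectionFactory;
        this.queue = queue;
    }

    public void enqueue(String messageAsJson, String dataType) {
        // JMSContext wraps the connection and session; try-with-resources ensures
        // everything created from it (including producers) is released.
        try (JMSContext context = connectionFactory.createContext()) {
            context.createProducer()
                   .setProperty("dataType", dataType)
                   .send(queue, messageAsJson);
        }
    }
}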
Just a guess, but MessageProducer extends AutoCloseable, suggesting it should be closed once it is no longer of use. Since you're not using try-with-resources or explicitly closing it afterwards, the jmsSession may accumulate more and more producers over time. I am not sure, though, whether you should close it per method call or re-use the created producer.
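A rough illustration of that suggestion, assuming the injected jmsSession and jmsQueue from the question (MessageProducer implements AutoCloseable as of JMS 2.0):

import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class ClosingEnqueuer {

    private Session jmsSession; // injected, as in the question
    private Queue jmsQueue;     // injected, as in the question

    public void enqueue(String messageAsJson, String dataType) throws JMSException {
        // the producer is closed after every send, so it cannot pile up in the session
        try (MessageProducer producer = jmsSession.createProducer(jmsQueue)) {
            TextMessage textMessage = jmsSession.createTextMessage(messageAsJson);
            textMessage.setStringProperty("dataType", dataType);
            producer.send(textMessage);
        }
    }
}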
Have you tried using a profiler such as VisualVM to visualize the heap and metaspace? If so, did you find any significant changes over time?

Stateful Rsocket Application

In my project I want to have multiple clients connecting to a service. I am using the Java RSocket implementation.
The service should maintain state for each client. At this point I can manage the clients by some identifier, an option I have already implemented, but I do not want to manage the session manually using strings.
So another idea is to identify the clients by the RSocket connection. Is there a way to use the RSocket channel to identify a specific client?
Imagine an example service and a couple of clients. Each client has an RSocket channel to the service up and running. Is there a way to identify these clients on the server side using the RSocket channel? It would be amazing if you could show a programmatic example of such behavior.
Thank you!
EDIT (describing the case in more detail)
Here is my example.
We currently have three CORBA objects that are used as demonstrated in the diagram:
LoginObject (to which a reference is retrieved via the NamingService). Clients can call a login() method to obtain a Session.
The Session object has various methods to query details about the current service context and, most importantly, to obtain a Transaction object.
The Transaction object can be used to execute various commands via a generic method that takes a commandName and a list of key-value pairs as parameters.
After the client has executed n commands, he can commit or roll back the transaction (also via methods on the Transaction object).
So here we use the Session object to execute transactions on our service.
Now we have decided to move away from CORBA to RSocket. Thus we need the RSocket microservice to be able to store the session's state, otherwise we can't know what is going to be committed or rolled back. Can this be done with just an individual Publisher for each client?
Here's an example I made the other day that will create a stateful RSocket using Netifi's broker:
https://github.com/netifi/netifi-stateful-socket
Unfortunately you'd need to build our develop branch locally to try it out (https://github.com/netifi/netifi-java) - there should be a release with the code by the end of the week if you don't want to build it locally.
I'm working on a pure RSocket example too, but if you want to see how it works, take a look at the StatefulSocket found in the example. It should give you a clue how to deal with the session with pure RSocket.
Regarding your other question about a transaction manager: you would need to tie your transaction to the Reactive Streams signals that are being emitted. If you receive a cancel or an onError, you roll back; if you receive an onComplete, you commit the transaction. There are side-effect methods on Flux/Mono that should make this easy to deal with. Depending on what you are doing, you could also use BaseSubscriber, as it has hooks to deal with the different Reactive Streams signals.
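For instance, a rough sketch using those side-effect operators; the Transaction type and its methods are assumptions standing in for your commit/rollback logic:

import io.rsocket.Payload;
import reactor.core.publisher.Flux;

public class TransactionalChannel {

    interface Transaction {
        void execute(Payload payload);
        void commit();
        void rollback();
    }

    Flux<Payload> handle(Flux<Payload> payloads, Transaction tx) {
        return payloads
                .doOnNext(tx::execute)              // apply each command to the open transaction
                .doOnComplete(tx::commit)           // onComplete -> commit
                .doOnError(error -> tx.rollback())  // onError -> roll back
                .doOnCancel(tx::rollback);          // cancel -> roll back
    }
}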
Thanks,
Robert
An example of resuming connections, i.e. maintaining the state on the server, has landed in the rsocket-java repo:
https://github.com/rsocket/rsocket-java/commit/d47629147dd1a4d41c7c8d5af3d80838e01d3ba5
This resumes a whole connection, including whatever state is associated with each individual channel, etc.
There is an rsocket-cli project that lets you try this out. Start and stop the socat process and observe the client and server progress.
$ socat -d TCP-LISTEN:5001,fork,reuseaddr TCP:localhost:5000
$ ./rsocket-cli --debug --resume --server -i cli:time tcp://localhost:5000
$ ./rsocket-cli -i client --stream --resume tcp://localhost:5001
From your description it looks like channel will work best. I haven't used channel before, so I can't really guarantee it (sorry), but I'd recommend you try something like this:
A transaction controller:
public class TransactionController implements Publisher<Payload> {

    List<Transaction> transactions = new ArrayList<>();

    @Override
    public void subscribe(Subscriber<? super Payload> subscriber) {
    }

    public void processPayload(Payload payload) {
        // handle transactions...
    }
}
And in your RSocket implementation override the requestChannel:
@Override
public Flux<Payload> requestChannel(Publisher<Payload> payloads) {
    // Create a new controller for each channel
    TransactionController cntrl = new TransactionController();
    Flux.from(payloads)
        .subscribe(cntrl::processPayload);
    return Flux.from(cntrl);
}

java ee background service

You'll have to excuse me if I'm describing this incorrectly, but essentially I'm trying to get a service-like class to be instantiated just once at server start and to sort of "exist" in the background until it is killed off at server stop. At least from what I can tell, this is not exactly the same as a typical servlet (though I may be wrong about this). What's even more important is that I need to also be able to access this service/object later down the line.
As an example, in another project I've worked on, we used the Spring Framework to accomplish something similar. Essentially, we used the configuration XML file along with the built-in annotations to let Spring know to instantiate instances of some of our services. Later down the line, we used the @Autowired annotation to sort of "grab" the object reference of this pre-instantiated service/object.
So, though it may seem against some of the major concepts of Java itself, I'm just trying to figure out how to reinvent this wheel here. I guess sometimes I feel like these big app frameworks do too much "black-box magic" behind the scenes that I'd really like to be able to fine-tune.
Thanks for any help and/or suggestions!
Oh and I'm trying to run this all from JBoss 6
Here's one way to do it. Add a servlet context listener to your web.xml, e.g.:
<listener>
<listener-class>com.example.BackgroundServletContextListener</listener-class>
</listener>
Then create that class to manage your background service. In this example I use a single-threaded ScheduledExecutorService to schedule it to run every 5 minutes:
public class BackgroundServletContextListener implements ServletContextListener {

    private ScheduledExecutorService executor;
    private BackgroundService service;

    public void contextInitialized(ServletContextEvent sce) {
        service = new BackgroundService();

        // set up a single thread to run the background service every 5 minutes
        executor = Executors.newSingleThreadScheduledExecutor();
        executor.scheduleAtFixedRate(service, 0, 5, TimeUnit.MINUTES);

        // make the background service available to the servlet context
        sce.getServletContext().setAttribute("service", service);
    }

    public void contextDestroyed(ServletContextEvent sce) {
        executor.shutdown();
    }
}
public class BackgroundService implements Runnable {

    public void run() {
        // do your background processing here
    }
}
If you need to access the BackgroundService from web requests, you can access it through the ServletContext. E.g.:
ServletContext context = request.getSession().getServletContext();
BackgroundService service = (BackgroundService) context.getAttribute("service");
Have you considered using an EJB 3.1 session bean? These can be deployed in a war file, and can be annotated with @Singleton and @Startup.
A number of annotations available with EJB 3.1 are designed to bring Spring goodies into the Java EE framework. It may be the re-invention you're considering has been done for you.
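For example, a minimal sketch of such a bean; the five-minute schedule and the names are assumptions:

import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
public class BackgroundServiceBean {

    // container-managed timer instead of a hand-rolled executor;
    // the five-minute interval is just an example
    @Schedule(minute = "*/5", hour = "*", persistent = false)
    public void doWork() {
        // background processing here
    }
}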
If you must roll your own, you can create a servlet and configure it to start up when the application does, using load-on-startup. I built a system like that a few years ago. We then used the new(ish) java.util.concurrent stuff like ExecutorService to have it process work from other servlets.
More information about what you're trying to do, and why the existing ways of doing things are insufficient, would be helpful.
You can use messaging for that. Just send message to the queue, and let the message listener do the processing asynchronously in the background.
You can use JMS for the implementation, and ActiveMQ for the message broker.
Spring has the JmsTemplate and JmsGatewaySupport APIs to make a JMS implementation simple:
http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/jms.html
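A minimal sketch of the JmsTemplate sending side of that idea; the queue name is an assumption:

import org.springframework.jms.core.JmsTemplate;

public class BackgroundWorkSender {

    private final JmsTemplate jmsTemplate;

    public BackgroundWorkSender(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void submit(String payload) {
        // a message listener registered on this queue picks the work up asynchronously
        jmsTemplate.convertAndSend("background.work.queue", payload);
    }
}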

Message Driven Bean Selectors (JMS)

I have recently discovered message selectors
@ActivationConfigProperty(
        propertyName = "messageSelector",
        propertyValue = "Fragile IS TRUE")
My Question is: How can I make the selector dynamic at runtime?
Let's say a consumer decided they wanted only messages with the property "Fragile IS FALSE".
Could the consumer change the selector somehow without redeploying the MDB?
Note: I am using Glassfish v2.1
To my knowledge, this is not possible. There may be implementations that will allow it via some custom server hooks, but it would be implementation dependent. For one, it requires a change to the deployment descriptor, which is not read after the EAR is deployed.
JMS (Jakarta Messaging) is designed to provide simple means to do simple things, and more complicated means to do more complicated but less frequently needed things. Message-driven beans are an example of the first case. To do dynamic reconfiguration, you need to stop using MDBs and start consuming messages via the programmatic API, using an injected JMSContext and a topic or queue. For example:
@Inject
private JMSContext context;

@Resource(lookup = "jms/queue/thumbnail")
Queue thumbnailQueue;

JMSConsumer connectListener(String messageSelector) {
    JMSConsumer consumer = context.createConsumer(thumbnailQueue, messageSelector);
    consumer.setMessageListener(message -> {
        // process message
    });
    return consumer;
}
You can call connectListener during startup, e.g. in a CDI bean:
public void start(@Observes @Initialized(ApplicationScoped.class) Object startEvent) {
    connectListener("Fragile IS TRUE");
}
Then you can easily reconfigure it by closing the returned consumer and creating it again with a new selector string:
consumer.close();
consumer = connectListener("Fragile IS FALSE");
