I want to implement a singleton bean in Java EE, which starts a VPN connection on demand.
Thus I created a class like this:
@Singleton
public class VPNClient {

    private boolean connected;

    @Lock(LockType.READ)
    public boolean isConnected() {
        return this.connected;
    }

    @Asynchronous
    @Lock(LockType.WRITE)
    public void connect() {
        // do connect, including a while loop for the socket:
        while (true) {
            // read socket, set connected when the VPN is successfully established
        }
    }
}
Then I have another bean, which needs the VPN connection and tries to establish it:
public class X {

    @Inject
    VPNClient client;

    private void sendStuffToVPN() throws InterruptedException {
        // call the async connect method
        client.connect();
        // wait for connect (or exception and stuff in original source)
        while (!client.isConnected()) {
            // wait for connection to be established
            Thread.sleep(5000);
        }
    }
}
My problem now is that the connect() method never returns until the connection is destroyed, so the write lock it holds blocks all reads of isConnected().
[Update]
This should hopefully illustrate the problem:
Thread 1 (bean X) calls connect() on the singleton bean VPNClient, which runs on thread 2.
Now there is an endless write lock on the singleton bean VPNClient, but because the method was called asynchronously, thread 1 proceeds:
Thread 1 (bean X) tries to call VPNClient.isConnected(), but has to wait for the release of the write lock (which was acquired by connect()).
The Java EE container then throws a javax.ejb.ConcurrentAccessTimeoutException because it waited until the timeout.
Is there a good pattern to solve this kind of concurrency problem?
@Lock(LockType.WRITE) locks all methods of the singleton bean until the called method has completed, even if the caller has already moved on because of @Asynchronous.
This is the correct behaviour if you think about it: concurrency problems can arise from other method calls to the bean while processing is still in progress.
The way around this is to put @ConcurrencyManagement(ConcurrencyManagementType.BEAN) on your singleton and handle concurrency and locking of access to the connection yourself.
Have a look at http://docs.oracle.com/javaee/6/tutorial/doc/gipvi.html#indexterm-1449 for an introduction.
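As an illustration only, here is a minimal sketch of what the singleton could look like with bean-managed concurrency, relying on a volatile flag instead of container locks (establishTunnel() is a placeholder for the real socket loop, not code from the question):
import javax.ejb.Asynchronous;
import javax.ejb.ConcurrencyManagement;
import javax.ejb.ConcurrencyManagementType;
import javax.ejb.Singleton;

// Sketch: with BEAN-managed concurrency the container no longer serializes
// calls, so the bean guards its own state.
@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
public class VPNClient {

    // volatile so the flag written by the async connect thread is visible
    // to every caller of isConnected()
    private volatile boolean connected;

    public boolean isConnected() {
        return connected;
    }

    @Asynchronous
    public void connect() {
        // placeholder for the real socket loop that eventually
        // sets connected = true once the VPN is established
        establishTunnel();
    }

    private void establishTunnel() {
        // read socket, handle the protocol, then:
        connected = true;
    }
}
With this arrangement isConnected() never waits for connect(), so the polling loop in bean X can observe the flag while the connection is still being established.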
Try this.
public class X {

    private void sendStuffToVPN() throws InterruptedException {
        final VPNClient client = new VPNClient();
        // call connect() on a separate thread so it does not block this one
        new Thread(new Runnable() {
            public void run() {
                client.connect();
            }
        }).start();
        // wait for connect (or exception and stuff in original source)
        while (!client.isConnected()) {
            // wait for connection to be established
            Thread.sleep(5000);
        }
    }
}
I'm currently trying to build a TCP server with Netty. The server should then be part of my main program.
My application needs to send messages to the connected clients. I know I can keep track of the channels using a concurrent hash map or a ChannelGroup inside a handler. To avoid blocking my application, the server itself has to run in a separate thread. From my point of view, the corresponding run method would look like this:
public class Server implements Runnable {

    @Override
    public void run() {
        EventLoopGroup bossEventGroup = new NioEventLoopGroup();
        EventLoopGroup workerEventGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap
                .group(bossEventGroup, workerEventGroup)
                .channel(NioServerSocketChannel.class)
                .childHandler(new MyServerInitializer());
            ChannelFuture future = bootstrap.bind(8080).sync().channel().closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            workerEventGroup.shutdownGracefully();
            bossEventGroup.shutdownGracefully();
        }
    }
}
But now I have no idea how to integrate e.g. a sendMessage(Message message) method which can be used by my main application. I believe the method itself has to be defined in the handler to have access to the stored connected channels. But can someone give me an idea how to make such a method usable from the outside? Do I have to implement some sort of message queue which is checked in a loop after the bind? I could imagine that the method invocation would then look like this:
ServerHandlerTest t = (ServerHandlerTest) future.channel().pipeline().last();
if (newMessageInQueue) {
    t.sendMessage(...);
}
Maybe someone can explain to me what the preferred implementation approach is for this use case.
I would create your own application handler to manage the business behaviour within your Netty handler, because that is where the main (event based) logic belongs.
Your own (last) handler takes care of all your application behaviour, so that each client is served correctly, directly within the handler, using the ChannelHandlerContext ctx.
Of course, you can still think of a particular application handler that would do something like this:
Creation of the handler (in the pipeline creation within MyServerInitializer) makes the handler look for a messageQueue of messages to send.
Then it polls the messageQueue and sends each message to the right client, using a hashMap of channels.
But I believe that is far more complicated (which queue for which client or one global queue, how to handle the queue without blocking the server thread, which you must not do, ...).
Moreover, a sendMessage method? Do you mean the write (or writeAndFlush) method?
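To make the handler-centric approach concrete, here is a rough sketch (the class name ServerHandler and the static sendMessage method are my own placeholders, not from the question): the connected channels are kept in a shared ChannelGroup, and broadcasting is a writeAndFlush on the group.
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.group.ChannelGroup;
import io.netty.channel.group.DefaultChannelGroup;
import io.netty.util.concurrent.GlobalEventExecutor;

// Sketch: the last handler in the pipeline tracks connected clients and
// offers a broadcast method the rest of the application can call.
public class ServerHandler extends ChannelInboundHandlerAdapter {

    // shared across handler instances so every connected channel is tracked;
    // closed channels are removed from the group automatically
    private static final ChannelGroup channels =
            new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        channels.add(ctx.channel());
    }

    // may be called from outside the Netty threads; writeAndFlush is thread-safe
    public static void sendMessage(Object message) {
        channels.writeAndFlush(message);
    }
}
The main application can then call ServerHandler.sendMessage(...) without touching the pipeline or polling a queue on the server thread.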
I'm developing a server based on the Netty library and I'm having a problem with how to structure the application with regard to the business logic.
Currently I have the business logic in the last handler, and that's where I access the database. The thing I can't wrap my head around is the latency of accessing the database (blocking code). Is it advisable to do it in the handler, or is there an alternative? Code below:
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    super.channelRead(ctx, msg);
    Msg message = (Msg) msg;
    switch (message.messageType) {
        case MType.SIGN_UP:
            userReg.signUp(message.user); // blocking database access
            break;
    }
}
You should execute the blocking calls on a DefaultEventExecutorGroup or a custom thread pool, which can be supplied when the handler is added to the pipeline:
pipeline.addLast(new DefaultEventExecutorGroup(50), "BUSINESS_LOGIC_HANDLER", new BHandler());
Alternatively, submit the blocking work to the handler's executor yourself:
ctx.executor().execute(new Runnable() {
    @Override
    public void run() {
        // blocking call
    }
});
Your custom handler is initialized by Netty every time the server accepts a new connection, hence one instance of the handler is responsible for handling one client.
So it is perfectly fine to issue blocking calls in your handler. It will not affect other clients, as long as you don't block indefinitely (or at least not for a very long time), so that you don't tie up Netty's thread for long and don't put too much load on your server instance.
However, if you want to go for an asynchronous design, there are more than a few design patterns you can use.
For example, with Netty, if you implement WebSockets, you could make the blocking calls in a separate thread and, when the results are available, push them to the client through the WebSocket that is already established.
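To sketch that idea in code (SignUpHandler, the executor and the "SIGN_UP_OK" reply are my assumptions; Msg and userReg come from the question): the blocking call runs on an application-owned pool and the reply is written back once it completes.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Sketch: keep the event loop free by running the blocking database call on a
// separate pool, then write the result back when it is available.
public class SignUpHandler extends ChannelInboundHandlerAdapter {

    // assumed application-owned pool; the size is arbitrary for this sketch
    private static final ExecutorService dbExecutor = Executors.newFixedThreadPool(10);

    private UserRegistration userReg; // 'userReg' from the question; the type name is assumed

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        Msg message = (Msg) msg;
        dbExecutor.submit(() -> {
            userReg.signUp(message.user);    // blocking database access, off the event loop
            ctx.writeAndFlush("SIGN_UP_OK"); // placeholder reply; writeAndFlush is thread-safe
        });
    }
}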
I'm processing messages using a JMS MessageConsumer with a MessageListener. If something happens that causes the MessageConsumer to stop receiving and processing messages -- for example, if the underlying connection closes -- how can I detect it? There doesn't seem to be any notification mechanism that I can find in the spec.
I think the question is clear as is, but if you'd like me to post code to clarify the question, just ask!
In case it's important, I'm using ActiveMQ 5.8, although obviously I'd like a scheme that's not implementation-specific.
Use ExceptionListener
If the JMS system detects a problem, it calls the listener's onException method:
public class MyConsumer implements ExceptionListener, MessageListener {

    private void init() {
        Connection connection = ... // create connection
        connection.setExceptionListener(this);
        connection.start();
    }

    public void onException(JMSException e) {
        String errorCode = e.getErrorCode();
        Exception ex = e.getLinkedException();
        // clean up resources, or attempt to reconnect
    }

    public void onMessage(Message m) {
        ...
    }
}
Not much to it, really. The above is standard practice for standalone consumers, and it's not implementation-specific; quite the contrary, it's part of the spec, so all JMS-compliant providers will support it.
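For the "attempt to reconnect" branch in the comment above, here is a hedged sketch of what onException could do (connection, connectionFactory and queue are assumed fields of the consumer; real code would add retries and backoff):
public void onException(JMSException e) {
    try {
        connection.close(); // discard the broken connection
    } catch (JMSException ignored) {
    }
    try {
        connection = connectionFactory.createConnection();
        connection.setExceptionListener(this);
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(queue);
        consumer.setMessageListener(this); // MyConsumer is also the MessageListener
        connection.start();
    } catch (JMSException reconnectFailed) {
        // log it and schedule another attempt
    }
}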
I'm experimenting with Spring's DeferredResult on Tomcat, and I'm getting crazy results. Is what I'm doing wrong, or is there some bug in Spring or Tomcat? My code is simple enough.
@Controller
public class Test {

    private DeferredResult<String> deferred;

    static class DoSomethingUseful implements Runnable {
        public void run() {
            try { Thread.sleep(2000); } catch (InterruptedException e) { }
        }
    }

    @RequestMapping(value = "/test/start")
    @ResponseBody
    public synchronized DeferredResult<String> start() {
        deferred = new DeferredResult<>(4000L, "timeout\n");
        deferred.onTimeout(new DoSomethingUseful());
        return deferred;
    }

    @RequestMapping(value = "/test/stop")
    @ResponseBody
    public synchronized String stop() {
        deferred.setResult("stopped\n");
        return "ok\n";
    }
}
So. The start request creates a DeferredResult with a 4 second timeout. The stop request will set a result on the DeferredResult. If you send stop before or after the deferred result times out, everything works fine.
However if you send stop at the same time as start times out, things go crazy. I've added an onTimeout action to make this easy to reproduce, but that's not necessary for the problem to occur. With an APR connector, it simply deadlocks. With a NIO connector, it sometimes works, but sometimes it incorrectly sends the "timeout" message to the stop client and never answers the start client.
To test this:
curl http://localhost/test/start & sleep 5; curl http://localhost/test/stop
I don't think I'm doing anything wrong. The Spring documentation seems to say it's okay to call setResult at any time, even after the request has already expired, and from any thread ("the application can produce the result from a thread of its choice").
Versions used: Tomcat 7.0.39 on Linux, Spring 3.2.2.
This is an excellent bug find!
Just adding more information about the bug (that got fixed) for a better understanding.
There was a synchronized block inside setResult() that extended up to the part of submitting a dispatch. This can cause a deadlock if a timeout occurs at the same time since the Tomcat timeout thread has its own locking that permits only one thread to do timeout or dispatch processing.
Detailed explanation:
When you call "stop" at the same time as the request "times out", two threads are attempting to lock the DeferredResult object 'deferred'.
1. The thread that executes the "onTimeout" handler.
Here is the excerpt from the Spring doc:
This onTimeout method is called from a container thread when an async request times out before the DeferredResult has been set. It may invoke setResult or setErrorResult to resume processing.
2. Another thread that executes the "stop" service.
If the dispatch processing called during the stop() service obtains the 'deferred' lock, it will wait for a tomcat lock (say TomcatLock) to finish the dispatch.
And if the other thread doing timeout handling has already acquired the TomcatLock, that thread waits to acquire a lock on 'deferred' to complete the setResult()!
So, we end up in a classic deadlock situation!
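In plain Java terms this is the classic lock-ordering deadlock; here is a stripped-down sketch, with two ordinary monitors standing in for the DeferredResult lock and Tomcat's internal lock:
// Illustration only: stopThread mirrors stop(), timeoutThread mirrors timeout handling.
public class DeadlockSketch {

    public static void main(String[] args) {
        final Object deferredLock = new Object(); // stands in for the DeferredResult monitor
        final Object tomcatLock = new Object();   // stands in for Tomcat's timeout/dispatch lock

        // stop(): setResult() takes the deferred lock, then needs Tomcat's lock to dispatch
        Thread stopThread = new Thread(() -> {
            synchronized (deferredLock) {
                pause();
                synchronized (tomcatLock) { }
            }
        });

        // timeout handling: Tomcat's lock is already held, then the deferred lock is needed
        Thread timeoutThread = new Thread(() -> {
            synchronized (tomcatLock) {
                pause();
                synchronized (deferredLock) { }
            }
        });

        stopThread.start();
        timeoutThread.start(); // with this interleaving both threads block forever
    }

    private static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    }
}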
I have a JSF application running on Tomcat 6.0, and somewhere in the app I send e-mails to some users. But sending mail is slower than I thought, and it causes lags between the related pages.
So my question is: is it a good (or doable) approach to hand this process off to another thread that I create, a thread that takes mail-sending requests, puts them in a queue, and processes them apart from the main application? The mail-sending process would then be out of the main flow and would not affect the app's speed.
Yes, that's definitely a good idea. You should only do it with extreme care, though. Here's some food for thought:
Is it safe to start a new thread in a JSF managed bean?
Spawning threads in a JSF managed bean for scheduled tasks using a timer
As you're using Tomcat, which does not support EJB out of the box (and thus @Asynchronous @Singleton is out of the question), I'd create an application scoped bean which holds an ExecutorService to process the mail tasks. Here's a kickoff example:
@ManagedBean(eager=true)
@ApplicationScoped
public class TaskManager {

    private ExecutorService executor;

    @PostConstruct
    public void init() {
        executor = Executors.newSingleThreadExecutor();
    }

    public <T> Future<T> submit(Callable<T> task) {
        return executor.submit(task);
    }

    // Or just void submit(Runnable task) if you want fire-and-forget.

    @PreDestroy
    public void destroy() {
        executor.shutdown();
    }
}
This creates a single thread and puts the tasks in a queue. You can use it in normal beans as follows:
@ManagedBean
@RequestScoped
public class Register {

    @ManagedProperty("#{taskManager}")
    private TaskManager taskManager; // setter below is required for @ManagedProperty injection

    public void submit() {
        // ...
        taskManager.submit(new MailTask(mail));
        // You might want to hold the return value in some Future<Result>, but
        // you should store it in view or session scope in order to get the result
        // later. Note that the thread will block whenever you call get() on it.
        // You can just ignore it altogether (as the current example is doing).
    }

    public void setTaskManager(TaskManager taskManager) {
        this.taskManager = taskManager;
    }
}
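The MailTask used above is not shown in the original; a minimal sketch of what it could look like as a Callable (the Mail type and the actual sending code are placeholders):
import java.util.concurrent.Callable;

// Placeholder sketch: wraps the actual mail sending so it can be queued on the
// TaskManager's single-threaded executor instead of the JSF request thread.
public class MailTask implements Callable<Void> {

    private final Mail mail; // 'Mail' is an assumed value object (recipient, subject, body)

    public MailTask(Mail mail) {
        this.mail = mail;
    }

    @Override
    public Void call() throws Exception {
        // send the mail here, e.g. via the JavaMail API; a slow SMTP server
        // now only delays this executor thread, not the JSF request
        return null;
    }
}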
To learn more about the java.util.concurrent API, refer to the official tutorial.