Is Session.sendToTarget() thread-safe? - java

I am trying to integrate QFJ into a single-threaded application. At first I was trying to utilize QFJ with my own TCP layer, but I haven't been able to work that out. Now I am just trying to integrate an initiator. Based on my research into QFJ, I would think the overall design should be as follows:
The application will no longer be single-threaded, since the QFJ initiator will create threads, so some synchronization is needed.
Here I am using a SocketInitiator (I only handle a single FIX session), but I would expect a similar setup should I go for the threaded version later on.
There are 2 aspects to the integration of the initiator into my application:
Receiving side (fromApp callback): I believe this is straightforward: I simply push messages to a thread-safe queue consumed by my MainProcessThread (see the sketch below).
Sending side: I'm struggling to find documentation on this front. How should I handle synchronization? Is it safe to call Session.sendToTarget() from the MainProcessThread? Or is there some synchronization I need to put in place?
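For reference, a minimal sketch of what I have in mind for the receiving side (class and queue names are just illustrative):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import quickfix.Application;
import quickfix.Message;
import quickfix.SessionID;

public class QueueingApplication implements Application {
    // Drained by MainProcessThread; LinkedBlockingQueue is thread-safe.
    private final BlockingQueue<Message> inbound = new LinkedBlockingQueue<>();

    public BlockingQueue<Message> getInbound() { return inbound; }

    @Override
    public void fromApp(Message message, SessionID sessionID) {
        inbound.offer(message); // called on a QFJ thread: enqueue and return
    }

    @Override public void fromAdmin(Message message, SessionID sessionID) {}
    @Override public void toApp(Message message, SessionID sessionID) {}
    @Override public void toAdmin(Message message, SessionID sessionID) {}
    @Override public void onCreate(SessionID sessionID) {}
    @Override public void onLogon(SessionID sessionID) {}
    @Override public void onLogout(SessionID sessionID) {}
}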

As Michael already said, it is perfectly safe to call Session.sendToTarget() from multiple threads, even concurrently. But as far as I see it you only utilize one thread anyway (MainProcessThread).
The relevant part of the Session class is in method sendRaw():
private boolean sendRaw(Message message, int num) {
    // sequence number must be locked until application
    // callback returns since it may be effectively rolled
    // back if the callback fails.
    state.lockSenderMsgSeqNum();
    try {
        // ... some logic here
    } finally {
        state.unlockSenderMsgSeqNum();
    }
}
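Since the lock is taken inside sendRaw(), a call like the following from your MainProcessThread needs no extra synchronization (a minimal sketch; message construction is elided):

import quickfix.Message;
import quickfix.Session;
import quickfix.SessionID;
import quickfix.SessionNotFound;

public class FixSender {
    // Safe to call from MainProcessThread or any other thread.
    public static boolean send(Message message, SessionID sessionID) {
        try {
            return Session.sendToTarget(message, sessionID);
        } catch (SessionNotFound e) {
            return false; // session not (yet) registered, e.g. before logon
        }
    }
}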
Other points:
Here I am using a SocketInitiator (I only handle a single FIX session), but I would expect a similar setup should I go for the threaded version later on.
Will you always use only one Session? If yes, then there is no use in utilizing the ThreadedSocketInitiator, since all it does is create a thread per Session.
The application will no longer be single threaded, since the QFJ initiator will create threads
As already stated in "Use own TCP layer implementation with QuickFIX/J", you could try passing an ExecutorFactory. But this might not be applicable to your specific use case.
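To illustrate, an untested sketch of such a factory (the ExecutorFactory interface with its two getters is from QFJ 2.x; verify against your version):

import java.util.concurrent.Executor;
import java.util.concurrent.Executors;
import quickfix.ExecutorFactory;

public class SingleThreadExecutorFactory implements ExecutorFactory {
    // One executor for everything; QFJ will run its tasks on this thread.
    private final Executor executor = Executors.newSingleThreadExecutor();

    @Override
    public Executor getLongLivedExecutor() {
        return executor;
    }

    @Override
    public Executor getShortLivedExecutor() {
        return executor;
    }
}

// Usage, before initiator.start():
// initiator.setExecutorFactory(new SingleThreadExecutorFactory());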

Avoid waiting on Servlet streams

My Servlet spends quite some time in reading request.getInputStream() and writing to response.getOutputStream(). In the long run, this can be a problem, as it's blocking a thread for nothing but reading/writing literally a few bytes per second. (*)
I'm never interested in partial request data; the processing should not start before the request is completely available. Similarly for the response.
I guess, asynchronous IO would solve it, but I wonder what's the proper way. Maybe a servlet Filter replacing the ServletInputStream by a wrapped ByteArrayInputStream, using request.startAsync and calling the chained servlet after having collected the whole input?
Is there already such a filter?
Should I write one or should I use a different approach?
Note that what I mean is to avoid wasting threads on slow servlet streams. This isn't the same as startAsync which avoids wasting threads just waiting for some event.
And yes, at the moment it'd be a premature optimization.
My read loop as requested
There's nothing interesting in my current input stream reading method, but here you are:
private byte[] getInputBytes() throws IOException {
    ServletInputStream inputStream = request.getInputStream();
    final int len = request.getContentLength();
    if (len >= 0) {
        final byte[] result = new byte[len];
        ByteStreams.readFully(inputStream, result);
        return result;
    } else {
        return ByteStreams.toByteArray(inputStream);
    }
}
That's all and it blocks when data aren't available; ByteStreams come from Guava.
Summary of my understanding so far
As the answers clearly state, it's impossible to work with servlet streams without wasting a thread on them. Neither the servlet architecture nor the common implementations expose anything that lets you say "buffer the whole data and call me only when you've collected everything", although they use NIO and could do it.
The reason may be that usually a reverse proxy like nginx gets used, which can do it. nginx does this buffering by default, and it couldn't even be switched off until two years ago.
Actually a supported case???
Given the many negative answers, I'm not sure, but it looks like my goal
to avoid wasting threads on slow servlet streams
is actually fully supported: since Servlet 3.1, there's ServletInputStream#setReadListener, which seems to be meant exactly for this. The thread allocated for processing Servlet#service() initially calls request.startAsync(), attaches the listener, and gets returned to the pool by simply returning from service(). The listener implements onDataAvailable(), which gets called when it's possible to read without blocking, adds a piece of data, and returns. In onAllDataRead(), I can do the whole processing of the collected data.
There's an example of how it can be done with Jetty. It seems to cover non-blocking output as well.
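For illustration, a minimal untested sketch of that pattern (Servlet 3.1 API; buffering and error handling deliberately naive, names illustrative):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/collect", asyncSupported = true)
public class CollectingServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        final AsyncContext ctx = request.startAsync();
        final ServletInputStream in = request.getInputStream();
        final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        in.setReadListener(new ReadListener() {
            @Override
            public void onDataAvailable() throws IOException {
                byte[] chunk = new byte[4096];
                // isReady() guarantees the next read will not block.
                while (in.isReady() && !in.isFinished()) {
                    int n = in.read(chunk);
                    if (n > 0) {
                        buffer.write(chunk, 0, n);
                    }
                }
            }

            @Override
            public void onAllDataRead() {
                byte[] body = buffer.toByteArray();
                // ... process the complete request body, write the response ...
                ctx.complete();
            }

            @Override
            public void onError(Throwable t) {
                ctx.complete();
            }
        });
        // Returning here releases the container thread until data arrives.
    }
}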
(*) In the logfiles, I can see requests taking up to eight seconds which get spent on reading the input (100 bytes header + 100 bytes data). Such cases are rare, but they do happen, although the server is mostly idle. So I guess it's a mobile client on a very bad connection (some users of ours connect from places with such bad connectivity).
HttpServletRequest#startAsync() isn't useful for this. That's only useful for push things like web sockets and good ol' SSE. Moreover, the JSR 356 WebSocket API is built on top of it.
Your concrete problem is understood, but this definitely can't be solved at the servlet level. You'd only end up wasting yet more threads, for the very simple reason that the container has already dedicated the current thread to the servlet request until the request body has been read fully, up to the last bit, even if it's ultimately read by a newly spawned async thread.
To save threads, you actually need a servlet container which supports NIO and, if necessary, to turn that feature on. With NIO, a single thread can handle as many TCP connections as the available heap memory allows, instead of allocating one thread per TCP connection. Then, in your servlet, you don't need to worry about this delicate I/O task at all.
Almost all modern servlet containers support it: Undertow (WildFly), Grizzly (GlassFish/Payara), Tomcat, Jetty, etc. Some have it enabled by default; others require extra configuration. Just refer to their documentation using the keyword "NIO".
If you'd actually also want to save the servlet request thread itself, then you'd basically need to go a step back, drop servlets, and implement a custom NIO-based service on top of an existing NIO connector (Undertow, Grizzly, Jetty, etc).
1) You can't. The Servlet container allocates the thread to the request, and that's the end of it, it's allocated. That's the model. If you don't like that, you will have to stop using Servlets.
2) Even if you could solve (1), you can't start async I/O on an input stream.
3) The way to handle slow requests is to time them out, by setting the appropriate setting for whatever container you're using ... if you actually have a problem, and it's far from clear that you really do, with a mostly idle server and this only happening rarely.
Your read loop makes a distinction without a difference. Just read the request input stream to its end. The servlet container already ensures that end of stream happens at the content-length if provided.
There's a class called org.apache.catalina.connector.CoyoteAdapter, which is the class that receives the marshaled request from TCP worker thread. It has a method called "service" which does the bulk of the heavy lifting. This method is called by another class: org.apache.coyote.http11.Http11Processor which also has a method of the same name.
I find it interesting that I see so many hooks in the code to handle async I/O, which makes me wonder whether this is not already a built-in feature of the container. Anyway, with my limited knowledge, the best way I can think of to implement the feature you are talking about would be to create a class:
public class MyAsyncReqHandlingAdapter extends CoyoteAdapter, @Override its service() method, and roll your own... I don't have the time to devote to doing this now, but I may revisit it in the future.
In that method you would need a way to identify slow requests and handle them, by handing them off to a single-threaded NIO processor and "completing" the request at that level, which, given the source code:
https://github.com/apache/tomcat/blob/075920d486ca37e0286586a9f017b4159ac63d65/java/org/apache/coyote/http11/Http11Processor.java
https://github.com/apache/tomcat/blob/3361b1321201431e65d59d168254cff4f8f8dc55/java/org/apache/catalina/connector/CoyoteAdapter.java
You should be able to figure out how to do it. Interesting question, and yes, it can be done. Nothing I see in the spec says that it cannot...

Thread handling in Java HornetQ client

I'm trying to understand how to deal with threads within a Java client that connects to HornetQ. I'm not getting a specific error but fail to understand how I'm expected to deal with threads in the first place (with respect to the HornetQ client and specifically MessageHandler.onMessage() -- threads in general are no problem to me).
In case this is relevant: I'm using 'org.hornetq:hornetq-server:2.4.7.Final' to run the server embedded in my application. I don't expect this to make a difference. In my situation, that's just more convenient from an ops perspective than running a standalone server process.
What I did so far:
create an embedded server: new EmbeddedHornetQ(), .setConfiguration()
create a server locator: HornetQClient.createServerLocator(false, new TransportConfiguration(InVMConnectorFactory.class.getName()))
create a session factory: serverLocator.createSessionFactory()
Now it seems obvious to me that I can create a session using hornetqClientSessionFactory.createSession(), create a producer and consumer for that session, and deal with messages within a single thread using .send() and .receive().
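Something like this untested sketch (the queue name is illustrative and the queue is assumed to already exist):

import org.hornetq.api.core.client.ClientConsumer;
import org.hornetq.api.core.client.ClientMessage;
import org.hornetq.api.core.client.ClientProducer;
import org.hornetq.api.core.client.ClientSession;
import org.hornetq.api.core.client.ClientSessionFactory;

public class SingleThreadedRoundTrip {
    // Everything runs in one thread, so the "session is a single-thread
    // object" rule is trivially satisfied.
    static void sendAndReceive(ClientSessionFactory factory) throws Exception {
        ClientSession session = factory.createSession();
        session.start(); // deliveries only happen after start()
        ClientProducer producer = session.createProducer("todo.queue");
        ClientConsumer consumer = session.createConsumer("todo.queue");

        ClientMessage out = session.createMessage(true /* durable */);
        out.getBodyBuffer().writeString("work item");
        producer.send(out);

        ClientMessage in = consumer.receive(1000); // blocks up to one second
        if (in != null) {
            System.out.println(in.getBodyBuffer().readString());
            in.acknowledge();
        }
        session.close();
    }
}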
But I also discovered consumer.setMessageHandler(), and this tells me that I didn't understand threading in the client at all. I tried to use it, but then the consumer calls messageHandler.onMessage() in two threads that are distinct from the one that created the session. This seems to match my impression from looking at the code -- the HornetQ client uses a thread pool to dispatch messages.
This leaves me confused. The javadocs say that the session is a "single-thread object", and the code agrees -- no obvious synchronization going on there. But with onMessage() being called in multiple threads, message.acknowledge() is also called in multiple threads, and that one just delegates to the session.
How is this supposed to work? How would a scenario look in which MessageHandler does NOT access the session from multiple threads?
Going further, how would I send follow-up messages from within onMessage()? I'm using HornetQ for a persistent "to-do" work queue, so sending follow-up messages is a typical use case for me. But again, within onMessage(), I'm in the wrong thread for accessing the session.
Note that I would be okay with staying away from MessageHandler and just using send() / receive() in a way that allows me to control threading. But I'm convinced that I don't understand the whole situation at all, and that combined with multi-threading is just asking for trouble.
I can answer part of your question, although I hope you've already fixed the issue by now.
From the HornetQ documentation on ClientConsumer (emphasis mine):
A ClientConsumer receives messages from HornetQ queues.
Messages can be consumed synchronously by using the receive() methods which will block until a message is received (or a timeout expires) or asynchronously by setting a MessageHandler.
These 2 types of consumption are exclusive: a ClientConsumer with a MessageHandler set will throw HornetQException if its receive() methods are called.
So you have two choices on handling message reception:
Synchronize the reception yourself
Do not provide a MessageListener to HornetQ
In your own consumer thread, invoke .receive() or .receive(long timeout) at your leisure
Retrieve the (optional) ClientMessage object returned by the call
Pro: Using the Session you hopefully carry in the Consumer, you can forward the message as you see fit
Con: All this message handling will be sequential
Delegate Thread synchronization to HornetQ
Do not invoke .receive() on a Consumer
Provide a MessageListener implementation of onMessage(ClientMessage)
Pro: All the message handling will be concurrent and fast, hassle-free
Con: I do not think it possible to retrieve the Session from this object, as it is not exposed by the interface.
Untested workaround: In my application (which is in-vm like yours), I exposed the underlying, thread-safe QueueConnection as a static variable available application-wide. From your MessageListener, you may invoke QueueSession jmsSession = jmsConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE); on it to obtain a new Session and send your messages from it... This is probably alright as far as I can see because the Session object is not really re-created. I also did this because Sessions had a tendency to become stale.
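In code, that workaround looks roughly like this (untested; names illustrative):

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;

public class ForwardingListener implements MessageListener {
    private final QueueConnection jmsConnection; // thread-safe, shared app-wide
    private final Queue followUpQueue;

    public ForwardingListener(QueueConnection connection, Queue followUpQueue) {
        this.jmsConnection = connection;
        this.followUpQueue = followUpQueue;
    }

    @Override
    public void onMessage(Message message) {
        try {
            // Sessions are not thread-safe, so create one in the current thread.
            QueueSession session =
                    jmsConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            try {
                QueueSender sender = session.createSender(followUpQueue);
                sender.send(session.createTextMessage("follow-up"));
            } finally {
                session.close();
            }
        } catch (JMSException e) {
            // log and decide whether to retry
        }
    }
}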
I don't think you should want so much to be in control of your Message execution threads, especially transient Threads that merely forward messages. HornetQ has built-in Thread pools as you guessed, and reuses these objects efficiently.
Also, as you know, you don't need to be in a single Thread to access an object (like a Queue), so it doesn't matter if the Queue is accessed through multiple Threads, or even through multiple Sessions. You need only make sure a Session is only accessed by one Thread, and this is by design with MessageListener.

Akka system from a QA perspective

I have been testing an Akka-based application for more than a month now. But, if I reflect upon it, I have the following conclusions:
Akka actors alone can achieve a lot of concurrency. I have reached more than 100,000 messages/sec. This is fine; it is just message passing.
Now, if there is a Netty layer for connections at one end, or the Akka actors eventually end up doing DB calls, REST calls, writing to files, and so on, the whole system doesn't make sense anymore. The actors' mailboxes fill up and their throughput (here, the ability to receive msgs/sec) drops.
From a QA perspective, this is like having a huge pipe in which you can forcefully pump lot of water and it can handle. But, if the input hose is bad, or the endpoints cannot handle the pressure, this huge pipe is of no use.
I need answers for the following so that I can suggest or verify in the system:
Should blocking calls like DB calls and REST calls be handled by actors? Or are they good only for message passing?
Say you need to persistently connect millions of Android/iOS devices to your Akka system. Instead of plain sockets (so unreliable) etc., can a remote actor be implemented as a persistent connection?
Is it ok to do any sort of computation in an actor's handleMessage()? Like DB calls etc.
I would ask the editors to let this post through; I cannot ask all of these separately.
1) Yes, they can. But this work should be done in a separate (worker) actor that uses a fork-join pool in combination with scala.concurrent.blocking around the blocking code; that is needed to prevent thread starvation. If the target system (DB, REST and so on) supports several concurrent connections, you may use Akka's routers for that (creating one actor per connection in the pool). You can also create several actors for several different tables (resources, queues, etc.), depending on your transaction isolation and your storage's consistency requirements.
Another way to handle this is to use asynchronous requests with acknowledgements instead of blocking. You may also put the blocking operation inside a separate future (thread, worker), which will send an acknowledgement message when the operation ends.
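A sketch of point 1 using Akka's Java API (the dispatcher name and its configuration are assumptions; the config block would go in your application.conf):

// Assumed entry in application.conf:
//   blocking-io-dispatcher {
//     type = Dispatcher
//     executor = "thread-pool-executor"
//     thread-pool-executor { fixed-pool-size = 16 }
//   }
import akka.actor.AbstractActor;
import akka.actor.Props;

public class BlockingDbActor extends AbstractActor {
    public static Props props() {
        // A dedicated pool keeps blocking work off the default dispatcher,
        // so other actors cannot be starved of threads.
        return Props.create(BlockingDbActor.class)
                    .withDispatcher("blocking-io-dispatcher");
    }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(String.class, query -> {
                String result = runBlockingQuery(query); // e.g. a JDBC call
                getSender().tell(result, getSelf());
            })
            .build();
    }

    private String runBlockingQuery(String query) {
        // placeholder for the actual blocking I/O
        return "rows for " + query;
    }
}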
2) Yes, an actor may be implemented as a persistent connection. It would just be an actor which holds the connection's state (as actors are stateful). It may be made even more reliable by using Akka Persistence, which can save the connection's state to some storage.
3) You can do any non-blocking computations inside the actor's receive (there is no handleMessage method in Akka). Failures (like no connection to the DB) will be managed automatically by Akka supervision. For blocking code, see 1.
P.S. about the "huge pipe": the backend application itself is a pipe (which becomes huge with Akka), so nothing can help you improve performance if the environment can't handle it; there are no pumps in this world. But Akka is also a "water tank", which means the outer pressure may be stronger than the inner one. By the way, this means a developer should be careful with mailboxes: as "too much water" may cause an OutOfMemoryError, the way to prevent that is to organize back pressure. This can be done by not acknowledging an incoming message (or simply blocking an endpoint's handler) until it has been processed by Akka.
I'm not sure I can understand all of your question, but in general actors are also good for slow work:
1) Yes, they are perfectly fine. Just create/assign one actor per request (maybe behind an Akka router for load balancing), and once it's done it can either mark itself as "free for new work" or self-terminate. Remember to execute the slow code in a future. Personally, I like avoiding the ask/pipe pattern due to the implicit timeouts and exception swallowing, and just use tells with request IDs (a sketch follows after point 3), but if your latencies and error rates are low, go for ask/pipe.
2) You could, but in that case I'd suggest having a pool of connections rather than spawning them per-request, as that takes longer. If you can provide more details, I can maybe improve this answer.
3) Yes, but think about this: actors are cheap. Create millions of them; every time there is a blocking part, it should be a different, specialized actor. Bring single-responsibility to the extreme. If you have only a few, blocking actors, you lose all the benefits.
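Regarding point 1, the tells-with-request-IDs idea might look like this sketch (Akka Java API; the message classes and the worker path are illustrative):

import java.util.HashMap;
import java.util.Map;
import akka.actor.AbstractActor;

public class Requester extends AbstractActor {
    // Illustrative message types carrying a correlation id.
    public static final class Request {
        public final long id; public final String payload;
        public Request(long id, String payload) { this.id = id; this.payload = payload; }
    }
    public static final class Reply {
        public final long id; public final String result;
        public Reply(long id, String result) { this.id = id; this.result = result; }
    }

    private final Map<Long, String> pending = new HashMap<>();
    private long nextId;

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(String.class, payload -> {
                long id = nextId++;
                pending.put(id, payload); // remember what we asked for
                getContext().actorSelection("/user/worker")
                            .tell(new Request(id, payload), getSelf());
            })
            .match(Reply.class, reply -> {
                String original = pending.remove(reply.id);
                if (original != null) {
                    // handle the result; a missing id means a late or duplicate reply
                }
            })
            .build();
    }
}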

Tomcat websocket and java

Hi guys, I am getting the following error. I am using WebSocket and Tomcat 8.
java.lang.IllegalStateException: The remote endpoint was in state [TEXT_FULL_WRITING] which is an invalid state for called method
at org.apache.tomcat.websocket.WsRemoteEndpointImplBase$StateMachine.checkState(WsRemoteEndpointImplBase.java:1092)
at org.apache.tomcat.websocket.WsRemoteEndpointImplBase$StateMachine.textStart(WsRemoteEndpointImplBase.java:1055)
at org.apache.tomcat.websocket.WsRemoteEndpointImplBase.sendString(WsRemoteEndpointImplBase.java:186)
at org.apache.tomcat.websocket.WsRemoteEndpointBasic.sendText(WsRemoteEndpointBasic.java:37)
at com.iri.monitor.webSocket.IRIMonitorSocketServlet.broadcastData(IRIMonitorSocketServlet.java:369)
at com.iri.monitor.webSocket.IRIMonitorSocketServlet.access$0(IRIMonitorSocketServlet.java:356)
at com.iri.monitor.webSocket.IRIMonitorSocketServlet$5.run(IRIMonitorSocketServlet.java:279)
You are trying to write to a websocket that is not in a ready state. The websocket is currently in writing mode and you are trying to write another message to it, which raises an error. Using an async write or, as a not-so-good practice, a sleep can prevent this from happening. This error is also typically raised when a websocket program is not thread-safe.
Neither async nor sleep can help.
The key problem is that the send method cannot be called concurrently.
So it's just about concurrency; you can use locks or something else. Here is how I handle it.
In fact, I wrote an actor to wrap the socketSession. It produces an event whenever the send method is called. Each actor is registered in a Looper, which contains a worker thread and an event queue; the worker thread keeps sending messages.
So I use the synchronous send method inside, and the actor model takes care of the concurrency.
The key problem left is the number of Loopers. You can have neither too many nor too few threads. But you can estimate a number from your business cases and keep adjusting it.
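A minimal sketch of that idea, with a plain thread standing in for the actor/Looper machinery (untested; one worker per session):

import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import javax.websocket.Session;

// One worker thread per WebSocket session drains a queue and performs all
// sends, so the synchronous send method is never called concurrently.
public class SessionSendLooper {
    private final Session session;
    private final BlockingQueue<String> outbox = new LinkedBlockingQueue<>();

    public SessionSendLooper(Session session) {
        this.session = session;
    }

    public void start() {
        Thread looper = new Thread(this::drain, "ws-send-looper");
        looper.setDaemon(true);
        looper.start();
    }

    // Any thread may enqueue; only the looper thread touches the socket.
    public void send(String text) {
        outbox.offer(text);
    }

    private void drain() {
        try {
            while (session.isOpen()) {
                session.getBasicRemote().sendText(outbox.take());
            }
        } catch (InterruptedException | IOException e) {
            // session closed or thread interrupted; let the thread die
        }
    }
}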
Actually, it is not a concurrency issue; you will have the same error in a single-threaded environment. It is about asynchronous calls that must not overlap.
You should use session.getBasicRemote().sendText() instead of session.getAsyncRemote().sendText() to avoid this problem. That should not be an issue as long as the amount of data you are writing stays reasonably small.
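For example, a small wrapper that serializes the blocking sends (a sketch; the synchronized keyword only matters if several threads can broadcast to the same session):

import java.io.IOException;
import javax.websocket.Session;

public class SafeSender {
    private final Session session;

    public SafeSender(Session session) {
        this.session = session;
    }

    // synchronized: concurrent broadcasters queue up instead of overlapping,
    // and getBasicRemote() blocks until each message is fully written.
    public synchronized void send(String text) throws IOException {
        if (session.isOpen()) {
            session.getBasicRemote().sendText(text);
        }
    }
}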

Stateless Blocking Server Design

A little help please.
I am designing a stateless server that will have the following functionality:
Client submits a job to the server.
Client is blocked while the server tries to perform the job.
The server will spawn one or multiple threads to perform the job.
The job either finishes, times out or fails.
The appropriate response (based on the outcome) is created, the client is unblocked and the response is handed off to the client.
Here is what I have thought of so far.
Client submits a job to the server.
The server assigns an ID to the job, places the job on a queue, and then places the client on another queue (where it will be blocked).
Have a thread pool that will execute the job, fetch the result and appropriately create the response.
Based on ID, pick the client out of the queue (thereby unblocking it), give it the response and send it off.
Steps 1, 3 and 4 seem quite straightforward; however, any ideas about how to put the client in a queue and then block it? Also, any pointers that would help me design this puppy would be appreciated.
Cheers
Why do you need to block the client? Seems like it would be easier to return (almost) immediately (after performing initial validation, if any) and give client a unique ID for a given job. Client would then be able to either poll using said ID or, perhaps, provide a callback.
Blocking means you're holding on to a socket which obviously limits the upper number of clients you can serve simultaneously. If that's not a concern for your scenario and you absolutely need to block (perhaps you have no control over client code and can't make them poll?), there's little sense in spawning threads to perform the job unless you can actually separate it into parallel tasks. The only "queue" in that case would be the one held by common thread pool. The workflow would basically be:
Create a thread pool (such as ThreadPoolExecutor)
For each client request:
If you have any parts of the job that you can execute in parallel, delegate them to the pool.
And / or do them in the current thread.
Wait until pooled job parts complete (if applicable).
Return results to client.
Shutdown the thread pool.
No IDs are needed per se; though you may need to use some sort of latch for 2.1 / 2.3 above.
Timeouts may be a tad tricky. If you need them to be more or less precise, you'll have to keep your main thread (the one that received the client request) free from work, have it signal submitted job parts (by flipping a flag) when the timeout is reached, and return immediately. You'll have to check said flag periodically and terminate your execution once it's flipped; the pool will then reclaim the thread.
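A rough sketch of that workflow, using a shared pool and a timed wait (names illustrative):

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class JobServer {
    private final ExecutorService pool = Executors.newFixedThreadPool(8);

    // Called on the thread that received the client request; blocks it
    // until all job parts finish, fail or time out.
    public String handle(List<Callable<String>> parts, long timeoutMillis)
            throws InterruptedException {
        // invokeAll blocks until all parts complete or the timeout elapses;
        // tasks that did not finish in time are cancelled.
        List<Future<String>> results =
                pool.invokeAll(parts, timeoutMillis, TimeUnit.MILLISECONDS);
        StringBuilder response = new StringBuilder();
        for (Future<String> part : results) {
            try {
                response.append(part.get()); // already done or cancelled here
            } catch (CancellationException e) {
                return "TIMEOUT";
            } catch (ExecutionException e) {
                return "FAILED: " + e.getCause();
            }
        }
        return response.toString();
    }

    public void shutdown() {
        pool.shutdown();
    }
}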
How are you communicating to the client?
I recommend you create an object to represent each job which holds job parameters and the socket (or other communication mechanism) to reach the client. The thread pool will then send the response to unblock the client at the end of job processing.
The timeouts will be somewhat tricky and will have hidden gotchas, but the basic design seems straightforward: write a class that takes a Socket in its constructor. On socket.accept() you just instantiate a new socket-processing object (with great foresight and planning for scalability, or none if this is a bench-test experiment). The socket-processing class then goes off to the data-processing work; when it returns, you have some boolean or numeric status (a handy place for null, by the way) and it either writes the success to the socket's OutputStream or informs the client of a timeout, or whatever your business needs are.
If you have to have a scalable, effective design for long-running heavy haulers, go directly to NIO. Hand-coded one-off solutions like the one I describe probably won't scale well, but they provide a fundamental conceptual basis for a code-correct NIO design.
(Sorry folks, I think directly in code; design patterns are applied to the code after it is working. What does not hold up gets reworked then, not before.)
