Alternative to Multithreading in Java

I have a question that has bothered me for a while.
For example, I have a multithreaded server; when it receives a request, it passes the request to a handler, and this handler processes the request. One reason we make the server multithreaded is this: if it is not multithreaded, then while the server is processing one request, another request arriving in the meantime will be dropped, because the server is not available.
So I wonder if there is an alternative to a multithreaded server. For example, could we create a queue for a non-multithreaded server, so that it fetches the next request from the queue once it finishes the current one?

Yes, you can have an event-based server. This capability is offered by the java.nio package, though you could use a framework like Netty rather than doing it from scratch.
However, note that while this used to be considered a way to get better performance, a regular multithreaded server seems to actually offer better performance with today's hardware and operating systems.
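For a sense of what an event-based server looks like, here is a minimal sketch of a single-threaded java.nio event loop. It is a plain echo server (port and buffer size are arbitrary); a real server would parse the request and dispatch to a handler instead of echoing:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    // Single-threaded event loop: one thread services all connections, so a
    // request is never dropped just because another one is being processed.
    public class EventLoopServer {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(8080));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            while (true) {
                selector.select(); // block until at least one channel is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer buffer = ByteBuffer.allocate(256);
                        if (client.read(buffer) == -1) {
                            client.close(); // peer closed the connection
                        } else {
                            buffer.flip();
                            client.write(buffer); // echo; real handling goes here
                        }
                    }
                }
            }
        }
    }

One thread multiplexes all connections through the Selector, so a request arriving while another is being processed is queued by the operating system and picked up on the next loop iteration instead of being dropped.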

Yes, you can. Have you considered SEDA-like (i.e. event-driven) techniques? You may want to investigate the Netty library too; it does most of the job for you when it comes to using NIO.

You can still have a single-threaded engine with a multithreaded server.
Consider the following skeleton: if you have an Engine that runs, it can be completely single threaded, just handling requests in the order they're received. This allows you to use non-thread-safe components in the business logic, and you've managed to separate your networking layer from your business logic layer! It's a win-win scenario.
    import java.util.LinkedList;
    import java.util.Queue;

    class Engine implements Runnable {
        private final Object requestLock = new Object();
        // a Queue rather than a List, so that poll() is available
        private final Queue<Request> requests = new LinkedList<Request>();
        private volatile boolean running = true;

        /** Called by the (multithreaded) networking layer to hand a request over. */
        public void submit(Request request) {
            synchronized (requestLock) {
                requests.add(request);
            }
        }

        private Request nextRequest() {
            synchronized (requestLock) {
                return requests.poll(); // null when no request is waiting
            }
        }

        /**
         * The engine is single threaded. It doesn't care about server connections.
         */
        public void run() {
            while (running) {
                Request request = nextRequest();
                if (request == null) {
                    continue; // consider wait/notify here instead of busy-spinning
                }
                // handle your request as normal
                // also consider making a mechanism to send Responses
            }
        }
    }

Related

Is Session.sendToTarget() thread-safe?

I am trying to integrate QFJ into a single-threaded application. At first I was trying to use QFJ with my own TCP layer, but I haven't been able to work that out. Now I am just trying to integrate an initiator. Based on my research into QFJ, I would think the overall design should be as follows:
The application will no longer be single-threaded, since the QFJ initiator will create threads, so some synchronization is needed.
Here I am using a SocketInitiator (I only handle a single FIX session), but I would expect a similar setup should I go for the threaded version later on.
There are two aspects to the integration of the initiator into my application:
Receiving side (fromApp callback): I believe this is straightforward; I simply push messages onto a thread-safe queue consumed by my MainProcessThread.
Sending side: I'm struggling to find documentation on this front. How should I handle synchronization? Is it safe to call Session.sendToTarget() from the MainProcessThread? Or is there some synchronization I need to put in place?
As Michael already said, it is perfectly safe to call Session.sendToTarget() from multiple threads, even concurrently. But as far as I can see, you only use one thread anyway (the MainProcessThread).
The relevant part of the Session class is the sendRaw() method:
    private boolean sendRaw(Message message, int num) {
        // sequence number must be locked until application
        // callback returns since it may be effectively rolled
        // back if the callback fails.
        state.lockSenderMsgSeqNum();
        try {
            // ... some logic here
        } finally {
            state.unlockSenderMsgSeqNum();
        }
    }
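Regarding the receiving side: pushing messages onto a thread-safe queue from fromApp is indeed straightforward. A minimal sketch, assuming the standard quickfix.Application interface (the class name is hypothetical):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    import quickfix.Application;
    import quickfix.Message;
    import quickfix.SessionID;

    // Hands incoming application messages over to the MainProcessThread.
    public class QueueingApplication implements Application {

        private final BlockingQueue<Message> inbound = new LinkedBlockingQueue<Message>();

        public void fromApp(Message message, SessionID sessionId) {
            inbound.offer(message); // never block QFJ's receiver thread here
        }

        /** Called by the MainProcessThread; blocks until a message arrives. */
        public Message nextMessage() throws InterruptedException {
            return inbound.take();
        }

        // remaining callbacks are not needed for this sketch
        public void onCreate(SessionID sessionId) {}
        public void onLogon(SessionID sessionId) {}
        public void onLogout(SessionID sessionId) {}
        public void toAdmin(Message message, SessionID sessionId) {}
        public void fromAdmin(Message message, SessionID sessionId) {}
        public void toApp(Message message, SessionID sessionId) {}
    }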
Other points:
Here I am using a SocketInitiator (I only handle a single FIX session), but I would expect a similar setup should I go for the threaded version later on.
Will you always use only one Session? If yes, then there is no use in utilizing the ThreadedSocketInitiator, since all it does is create a thread per Session.
The application will no longer be single-threaded, since the QFJ initiator will create threads.
As already stated in Use own TCP layer implementation with QuickFIX/J, you could try passing an ExecutorFactory. But this might not be applicable to your specific use case.

How to optimize Tomcat for Feed pull

We have a mobile app which presents a feed to users. The feed REST API is implemented on Tomcat, which makes parallel calls to different data sources such as Couchbase and MySQL to assemble the content. Simplified code is given below:
    // assumes static imports of akka.dispatch.Futures.future and
    // akka.dispatch.Futures.sequence
    Future<List<CardDTO>> pnrFuture = null;
    Future<List<CardDTO>> newsFuture = null;
    ExecutionContext ec = ExecutionContexts.fromExecutorService(executor);
    final List<CardDTO> combinedDTOs = new ArrayList<CardDTO>();

    // Array list of futures, one per data source
    List<Future<List<CardDTO>>> futures = new ArrayList<Future<List<CardDTO>>>();
    futures.add(future(new PNRFuture(pnrService, userId), ec));
    futures.add(future(new NewsFuture(newsService, userId), ec));
    futures.add(future(new SettingsFuture(userPreferenceManager, userId), ec));

    Future<Iterable<List<CardDTO>>> futuresSequence = sequence(futures, ec);

    // combine the cards
    Future<List<CardDTO>> futureSum = futuresSequence.map(
            new Mapper<Iterable<List<CardDTO>>, List<CardDTO>>() {
                @Override
                public List<CardDTO> apply(Iterable<List<CardDTO>> allDTOs) {
                    for (List<CardDTO> cardDTOs : allDTOs) {
                        if (cardDTOs != null) {
                            combinedDTOs.addAll(cardDTOs);
                        }
                    }
                    Collections.sort(combinedDTOs);
                    return combinedDTOs;
                }
            }
    );

    Await.result(futureSum, Duration.Inf());
    return combinedDTOs;
Right now we have around 4-5 parallel tasks per request, but this is expected to grow to almost 20-25 parallel tasks as we introduce new kinds of items into the feed.
My question is: how can I improve this design? What kind of tuning is required in Tomcat to make sure such 20-25 parallel calls can be served optimally under heavy load?
I understand this is a broad topic, but any suggestions would be very helpful.
Tomcat just manages the incoming HTTP connections and pushes the bytes back and forth. There is no Tomcat optimization that can be done to make your application run any better.
If you need 25 parallel processes to run for each incoming HTTP request, and you think that's crazy, then you need to rethink how your application works.
No Tomcat configuration will help with what you've presented in your question.
I understand you are calling this from a mobile app and that the number of feeds could go up.
Based on the amount of data being returned, would it be possible to return the results of several feeds in the same call? That way the server does the work. You are in control of the server; you are not in control of the user's device and their connection speed.
As nickebbit suggested, things like DeferredResult are really easy to implement.
Is it possible that the data from these feeds is not updated very frequently? If so, you should investigate the use of Ehcache and the @Cacheable annotation.
You could come up with a solution where the user is always pulling a cached version of your content from your Tomcat server, while your Tomcat server constantly updates that cache in the background.
It's an extra piece of work, but at the end of the day, if the user experience is not fast, users will not want to use the app.
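As a minimal sketch of that caching idea (Spring's @Cacheable is assumed here, with an Ehcache-backed CacheManager configured elsewhere; the cache name, types, and values are illustrative):

    import java.util.Arrays;
    import java.util.List;

    import org.springframework.cache.annotation.Cacheable;
    import org.springframework.stereotype.Service;

    // Repeated feed pulls for the same user are served from the "feeds" cache
    // while a background job keeps the cache warm.
    @Service
    public class CachedFeedService {

        @Cacheable(value = "feeds", key = "#userId")
        public List<String> feedFor(long userId) {
            // the expensive aggregation of card sources runs only on a cache miss
            return Arrays.asList("card-1", "card-2");
        }
    }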
It looks like you're using Akka but not really embracing the Actor model; doing so would likely increase the parallelism, and therefore the scalability, of your app.
If it were me, I'd hand requests off from my REST API to a single coordinating actor, or a pool of them, that would process the request asynchronously. Using Spring's RestController this can be done with a Callable or DeferredResult, but there will obviously be an equivalent in whatever framework you are using.
This coordinating actor would then in turn hand off processing to other actors (i.e. workers) that take care of the I/O-bound tasks (preferably using their own dispatcher, so that CPU-bound threads do not get blocked) and respond to the coordinator with their results.
Once all workers have fetched their data and replied to the coordinator with their results, the original request can be completed with the full result set.
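A rough sketch of the hand-off described above, using Spring's DeferredResult (a plain executor stands in for the coordinating actor; the endpoint, types, and pool size are made up for illustration):

    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;
    import org.springframework.web.context.request.async.DeferredResult;

    @RestController
    public class FeedController {

        private final ExecutorService coordinator = Executors.newFixedThreadPool(4);

        @RequestMapping("/feed")
        public DeferredResult<List<String>> feed() {
            final DeferredResult<List<String>> result = new DeferredResult<List<String>>();
            coordinator.submit(new Runnable() {
                public void run() {
                    // in the real app: fan out to workers, await their replies,
                    // combine the card lists, then complete the request
                    result.setResult(Arrays.asList("card-1", "card-2"));
                }
            });
            return result; // the Tomcat thread is released as soon as we return
        }
    }

The servlet container thread returns immediately, so Tomcat's pool is not tied up while the 20-25 data-source calls are in flight.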

Apache Thrift: Server with sync and async methods, is it possible?

I am trying to implement a Twitter-like service with a client, using Java. I am using Apache Thrift for the RPC calls. The service uses a key-value store, and I am trying to make the service fault-tolerant, with consistency and data replication in the key-value store.
For example: suppose that at some point there are 10 servers running, with ids S1, S2, S3, etc., and a client calls put(key, value) on S1. Now S1 saves this value and calls RPC put(key, value) on all the remaining servers for data replication. I want the server method to save the value and return success to the client, and also to start a thread making async calls on the remaining 9 servers, so that the client is not blocked during replication.
The auto-generated code has Iface and AsyncIface, and I have currently implemented the Iface in a ServerHandler class.
My goal is to expose a backend server to the client and have normal (blocking) calls between a client and a server, and async calls between servers. There will be multiple client-server pairs running at a time.
I understand the data-replication model is crude, but I am trying to learn distributed systems.
Can someone please help me with an example of how I can achieve this?
Also, if you think my design is flawed and there are better ways in which I can achieve data replication using Apache Thrift, please do point that out.
Thank you.
A oneway method is asynchronous; any other method not marked with oneway is synchronous.
    exception OhMyGosh {
        1: string msg
    }

    service TwelfthNightOrWhatYouWill {
        // A oneway method is a "one shot" method. The server may execute
        // it asynchronously, depending on the server implementation.
        // Oneways can be very useful when used with messaging systems.
        // A oneway does NOT return anything, including exceptions.
        oneway void ImAsync(1: i32 foo, 2: string bar, 3: double baz)

        // Any method not marked with oneway is synchronous. Even if the call
        // does not return anything, it will still be a blocking call for the client.
        void ImSynchronous(1: i32 foo, 2: string bar) throws (1: OhMyGosh omg)
        i32 ImAsWell(1: double baz) throws (1: OhMyGosh omg)
        void MeToo()
    }
Whether or not the server executes the oneway asynchronously with regard to the connection depends on which server implementation you use. A threaded or thread-pool server seems a good choice.
After the client has sent its oneway request, it will not wait for a reply from the server and will just continue in its execution flow. Technically, for a oneway method no recv_Xxxx() function is generated, only the send_Xxxx() part.
If you need data sent back to the client, the best option is to set up a server in the client process as well, which seems the optimal choice in your particular use case to me. In cases where this is not possible (think HTTP), the typical workarounds are polling or long-running calls, but both techniques come with some disadvantages.
With apologies to W. Shakespeare.
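If you would rather keep put() synchronous for clients and still replicate asynchronously between servers, the generated AsyncIface is one option. A rough sketch (KeyValueStore stands in for your generated service; the callback's type parameter varies between Thrift versions, shown here as in recent releases where a void method yields AsyncMethodCallback<Void>):

    import org.apache.thrift.async.AsyncMethodCallback;
    import org.apache.thrift.async.TAsyncClientManager;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TNonblockingSocket;

    // Hypothetical: KeyValueStore is the service generated from your IDL.
    public class ReplicationSender {

        public void replicate(String host, int port, String key, String value) throws Exception {
            KeyValueStore.AsyncClient peer = new KeyValueStore.AsyncClient(
                    new TBinaryProtocol.Factory(),
                    new TAsyncClientManager(),
                    new TNonblockingSocket(host, port));
            // returns immediately; the callback fires when the peer has replied
            peer.put(key, value, new AsyncMethodCallback<Void>() {
                public void onComplete(Void response) {
                    // replication to this peer succeeded
                }
                public void onError(Exception e) {
                    // retry, or mark this replica as down
                }
            });
        }
    }

Your ServerHandler's synchronous put() can save locally, kick off one such async call per remaining server, and return success to the client without waiting for the replicas.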

What is the purpose of asynchronous JAX-RS

I'm reading "RESTful Java with JAX-RS 2.0" book. I'm completely confused with asynchronous JAX-RS, so I ask all questions in one. The book writes asynchronous server like this:
#Path("/customers")
public class CustomerResource {
#GET
#Path("{id}")
#Produces(MediaType.APPLICATION_XML)
public void getCustomer(#Suspended final AsyncResponse asyncResponse,
#Context final Request request,
#PathParam(value = "id") final int id) {
new Thread() {
#Override
public void run() {
asyncResponse.resume(Response.ok(new Customer(id)).build());
}
}.start();
}
}
NetBeans creates an asynchronous server like this:
#Path("/customers")
public class CustomerResource {
private final ExecutorService executorService = java.util.concurrent.Executors.newCachedThreadPool();
#GET
#Path("{id}")
#Produces(MediaType.APPLICATION_XML)
public void getCustomer(#Suspended final AsyncResponse asyncResponse,
#Context final Request request,
#PathParam(value = "id") final int id) {
executorService.submit(new Runnable() {
#Override
public void run() {
doGetCustomer(id);
asyncResponse.resume(javax.ws.rs.core.Response.ok().build());
}
});
}
private void doGetCustomer(#PathParam(value = "id") final int id) {
}
}
Examples that do not create background threads instead use some locking mechanism to store the response objects for further processing. This example is for sending stock quotes to clients:
#Path("qoute/RHT")
public class RHTQuoteResource {
protected List<AsyncResponse> responses;
#GET
#Produces("text/plain")
public void getQuote(#Suspended AsyncResponse response) {
synchronized (responses) {
responses.add(response);
}
}
}
The responses object will be shared with some background job, which will send the quote to all waiting clients when it is ready.
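For concreteness, a sketch of what such a background job might do once a fresh quote arrives, assuming it shares the same responses list (method name is made up):

    void publishQuote(String quote) {
        synchronized (responses) {
            for (AsyncResponse waiting : responses) {
                waiting.resume(quote); // completes the suspended request
            }
            responses.clear();
        }
    }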
My questions:
1. In examples 1 and 2, the web server thread (the one that handles the request) dies and we create another background thread. The whole idea behind an asynchronous server is to reduce idle threads, yet these examples do not reduce idle threads: one thread dies and another is born.
2. I thought creating unmanaged threads inside a container was a bad idea; we should only use managed threads, via the concurrency utilities in Java EE 7.
3. Again, one of the ideas behind async servers is to scale. Example 3 does not scale, does it?
Executive Summary: You're over-thinking this.
In examples 1 and 2, the web server thread (the one that handles the request) dies and we create another background thread. The whole idea behind an asynchronous server is to reduce idle threads, yet these examples do not reduce idle threads: one thread dies and another is born.
Neither is particularly great, to be honest. In a production service, you wouldn't hold the executor in a private field like that; instead, it would be a separately configured object (e.g., its own Spring bean). On the other hand, such a sophisticated example would be rather harder to understand without a lot more context; applications that consist of systems of beans/managed resources have to be built that way from the ground up. It's also not very important for small-scale work to be this careful, and a lot of web applications are exactly that.
The gripping hand is that the recovery from server restart is actually not something to worry about too much in the first place. If the server restarts you'll probably lose all the connections anyway, and if those AsyncResponse objects aren't Serializable in some way (no guarantee that they are or aren't), you can't store them in a database to enable recovery. Best to not worry about it too much as there's not much you can do! (Clients are also going to time out after a while if they don't get any response back; you can't hold them indefinitely.)
I thought creating unmanaged threads inside a container was a bad idea; we should only use managed threads, via the concurrency utilities in Java EE 7.
It's an example! Supply the executor from outside however you want for your fancy production system.
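For completeness, a minimal sketch of the same pattern with a container-managed executor in Java EE 7 (@Resource here injects the container's default ManagedExecutorService; the path and class name are illustrative):

    import javax.annotation.Resource;
    import javax.enterprise.concurrent.ManagedExecutorService;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.container.AsyncResponse;
    import javax.ws.rs.container.Suspended;
    import javax.ws.rs.core.Response;

    // Same shape as the NetBeans example, but no unmanaged thread pool.
    @Path("/managed-customers")
    public class ManagedCustomerResource {

        @Resource // the container injects its default ManagedExecutorService
        private ManagedExecutorService executor;

        @GET
        public void getCustomer(@Suspended final AsyncResponse asyncResponse) {
            executor.submit(new Runnable() {
                public void run() {
                    asyncResponse.resume(Response.ok().build());
                }
            });
        }
    }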
Again, one of the ideas behind async servers is to scale. Example 3 does not scale, does it?
It's just enqueueing an object on a list, which isn't a slow operation at all, especially when compared with the cost of all the networking and deserializing/serializing going on. What it doesn't show is the other part of the application that takes things off that list, performs the processing, and yields the result back; that part could be poorly implemented and cause problems, or it could be done carefully so that the system works well.
If you can do it better in your code, by all means do so. (Just be aware that you can't store the work items in the database, or at least you can't know for sure that you can, even if it happens to be actually possible. I doubt it, though; there's likely information about the TCP network connection in there, and that's never easy to store and restore fully.)
I share the view you expressed in question 1. Let me just add a little detail: the web server thread doesn't die; it typically comes from a pool and frees itself for another web request. But that doesn't really change much in terms of the efficiency of async processing. In those examples, async processing is merely used to pass the processing from one thread pool to another; I don't see any point at all in that.
But there is one use case where I think async makes sense: when you want to register multiple clients to wait for an event and send a response to all of them once the event occurs. It is described in this article: http://java.dzone.com/articles/whats-new-jax-rs-20
The throughput of the service improves if different thread pools manage request I/O and request processing. Freeing up the request-I/O thread managed by the container allows it to receive the next request, prepare it for processing, and feed it into the request-processing thread pool as soon as a request-processing thread becomes available.

What's the level of asynchronism in Play! framework

Play! touts its asynchronous HTTP handling feature, though it is not very clear to me what else is truly async (non-blocking and without thread switching). In the asynchronous examples I have read, like the one below taken from the Play! Framework Cookbook:
    public static void generateInvoice(Long orderId) {
        Order order = Order.findById(orderId);                  // #a
        InputStream is = await(new OrderAsPdfJob(order).now()); // #b
        renderBinary(is);
    }
They focus on the long/expensive "business logic" step at #b, but my concern is the DB call at #a. In fact, the majority of controller methods in many apps just do multiple CRUD operations against the DB, like:
    public static void generateInvoice(Long orderId) {
        Order order = Order.findById(orderId); // #a
        render(order);
    }
I'm particularly concerned about the claim of using a "small number of threads" when serving this DB access pattern.
So the questions are:
1. Will Play! block on the JDBC calls?
2. If we wrap such calls in a future/promise/await, it will cause thread switching (besides the inconvenience due to the pervasiveness of DB calls), right?
3. In light of this, how does its asynchronism compare to a servlet server with an NIO connector (e.g. Tomcat + NIO connector, but without using the new event handler) in serving this DB access pattern?
4. Is there any plan to support an asynchronous DB driver, like http://code.google.com/p/adbcj/ ?
1. Play will block on JDBC calls; there's no magic to prevent that.
2. To wrap a j.u.c.Future in an F.Promise for Play, a polling loop is needed, and this can result in a lot of context switches (see the sketch at the end of this answer).
3. Servlet containers can use NIO, e.g. to keep connections open between requests without tying up threads for inactive connections. But a JDBC call in request-handling code will block and tie up a thread just the same.
4. ADBCJ implements j.u.c.Future, but also supports callbacks, which can be tied to an F.Promise; see https://groups.google.com/d/topic/play-framework/c4DOOtGF50c/discussion.
I'm not convinced Play's async feature is worthwhile, given how much it complicates the code and testing; maybe it is if you need to handle thousands of requests per second on a single machine while calling slow services.
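To make point 2 concrete, here is a rough sketch of such a polling loop bridging a j.u.c.Future into a Play 1.x F.Promise (the bridge class is made up; F.Promise's invoke()/invokeWithException() are the Play 1.x redemption methods):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicReference;

    import play.libs.F;

    public class PromiseBridge {

        private static final ScheduledExecutorService POLLER =
                Executors.newSingleThreadScheduledExecutor();

        // Poll the j.u.c.Future until it completes, then redeem the promise.
        public static <T> F.Promise<T> toPromise(final java.util.concurrent.Future<T> future) {
            final F.Promise<T> promise = new F.Promise<T>();
            final AtomicReference<ScheduledFuture<?>> task =
                    new AtomicReference<ScheduledFuture<?>>();
            task.set(POLLER.scheduleWithFixedDelay(new Runnable() {
                public void run() {
                    if (future.isDone()) {
                        try {
                            promise.invoke(future.get());
                        } catch (Exception e) {
                            promise.invokeWithException(e);
                        }
                        task.get().cancel(false); // stop polling
                    }
                }
            }, 10, 10, TimeUnit.MILLISECONDS));
            return promise;
        }
    }

Every polling tick runs on another thread, which is exactly the kind of context switching referred to above.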
