Making a multithreaded Spring Boot server safe for multiple HTTP requests - java

In my Spring Boot server, the REST controller class spawns multiple threads that share (read and write) variables while handling a single HTTP request. The program is thread-safe for one HTTP request at a time.
However, I use some static variables to share information between the Java threads of the same HTTP request.
I understand that this creates a problem when multiple HTTP requests arrive together.
How should I make the server accept multiple HTTP requests simultaneously without the variables 'logger', 'abc', and 'xyz' getting mixed up between two different requests?
How should these variables be declared and used?
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ABC {
    private static final Logger logger = LoggerFactory.getLogger(ABC.class);
    private static volatile byte[] abc = null;
    static volatile boolean xyz = true;

    // Multithreaded program where multiple threads read and write
    // both 'abc' and 'xyz'
}

The standard HTTP model is one thread per request.
The newer reactive model uses Netty and an event loop.
As you already know, shared mutable data is a problem for multithreaded access. You'll have to write your code so it's thread-safe; use the java.util.concurrent package.
Deviating from the chosen model is a mistake unless you have a good reason: even smart people have a hard time writing multithreaded code that's correct.

Spring has some features that help with this, but it's hard to suggest one since you haven't shared your use case...
To solve this particular problem you can also use AtomicReference, which does not use locks.
Here's a good tutorial on how to use it:
http://tutorials.jenkov.com/java-util-concurrent/atomicreference.html
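For illustration, here is a minimal sketch of what that could look like; the holder class and method names are made up, and the key change is that the state is no longer static, so each request gets its own instance:

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;

public class RequestState {
    // One RequestState is created per HTTP request and handed to that
    // request's worker threads, so two requests never share these fields.
    private final AtomicReference<byte[]> abc = new AtomicReference<>(null);
    private final AtomicBoolean xyz = new AtomicBoolean(true);

    public void setAbc(byte[] newValue) {
        abc.set(newValue);                    // atomic write, visible to all threads
    }

    public byte[] getAbc() {
        return abc.get();                     // atomic read
    }

    public boolean markDoneIfPending() {
        // compareAndSet succeeds for exactly one thread while xyz is still true
        return xyz.compareAndSet(true, false);
    }
}

The controller method would create one RequestState per request and pass it to the threads working on that request. The logger can stay a static final field, since SLF4J loggers are safe to share across threads and requests.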

Related

Is Session.sendToTarget() thread-safe?

I am trying to integrate QFJ into a single-threaded application. At first I was trying to utilize QFJ with my own TCP layer, but I haven't been able to work that out. Now I am just trying to integrate an initiator. Based on my research into QFJ, I would think the overall design should be as follows:
The application will no longer be single-threaded, since the QFJ initiator will create threads, so some synchronization is needed.
Here I am using a SocketInitiator (I only handle a single FIX session), but I would expect a similar setup should I go for the threaded version later on.
There are two aspects to the integration of the initiator into my application:
Receiving side (fromApp callback): I believe this is straightforward; I simply push messages to a thread-safe queue consumed by my MainProcessThread.
Sending side: I'm struggling to find documentation on this front. How should I handle synchronization? Is it safe to call Session.sendToTarget() from the MainProcessThread? Or is there some synchronization I need to put in place?
As Michael already said, it is perfectly safe to call Session.sendToTarget() from multiple threads, even concurrently. But as far as I can see, you only use one thread anyway (the MainProcessThread).
The relevant part of the Session class is in method sendRaw():
private boolean sendRaw(Message message, int num) {
    // sequence number must be locked until application
    // callback returns since it may be effectively rolled
    // back if the callback fails.
    state.lockSenderMsgSeqNum();
    try {
        // ... some logic here
    } finally {
        state.unlockSenderMsgSeqNum();
    }
}
Other points:
"Here I am using a SocketInitiator (I only handle a single FIX session), but I would expect a similar setup should I go for the threaded version later on."
Will you always use only one Session? If so, there is no point in using the ThreadedSocketInitiator, since all it does is create a thread per Session.
"The application will no longer be single-threaded, since the QFJ initiator will create threads."
As already stated in "Use own TCP layer implementation with QuickFIX/J", you could try passing an ExecutorFactory. But this might not be applicable to your specific use case.
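To make the receiving-side idea concrete, here is a minimal sketch of a fromApp callback handing messages to the processing thread over a BlockingQueue. It assumes a QuickFIX/J version that provides quickfix.ApplicationAdapter (otherwise implement quickfix.Application directly); QueueingApplication and buildReply are made-up placeholders:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

import quickfix.ApplicationAdapter;
import quickfix.Message;
import quickfix.Session;
import quickfix.SessionID;
import quickfix.SessionNotFound;

public class QueueingApplication extends ApplicationAdapter {

    private final BlockingQueue<Message> inbound = new LinkedBlockingQueue<>();

    @Override
    public void fromApp(Message message, SessionID sessionID) {
        inbound.offer(message);                     // QFJ callback thread only enqueues
    }

    /** Called in a loop by the MainProcessThread. */
    public void processNext(SessionID sessionID) throws InterruptedException {
        Message next = inbound.take();              // blocks until a message arrives
        Message reply = buildReply(next);           // placeholder for business logic
        try {
            Session.sendToTarget(reply, sessionID); // safe to call from any thread
        } catch (SessionNotFound e) {
            // the session is not currently logged on; log or retry as appropriate
        }
    }

    private Message buildReply(Message inbound) {
        return new Message();                       // placeholder
    }
}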

Handling multiple requests efficiently in a REST API

I've built a REST API using Spring Boot that accepts two images via POST and performs image comparison on them. The API is invoked synchronously. I'm not using an external application server to host the service; rather, I package it as a jar and run it.
@RequestMapping(method = RequestMethod.POST, value = "/arraytest")
public String compareTest(@RequestParam("query") MultipartFile queryFile, @RequestParam("test") MultipartFile testFile, RedirectAttributes redirectAttributes, Model model) throws IOException {
    CoreDriver driver = new CoreDriver();
    boolean imageResult = driver.initProcess(queryFile, testFile);
    model.addAttribute("result", imageResult);
    return "resultpage";
}
The service could be invoked in parallel from multiple machines, and I need it to perform efficiently. I'm trying to understand how parallel calls to a REST service are handled.
When a request is sent to the service, does a single object of the service get created and then reused across multiple threads to handle multiple requests?
A follow-up question is whether it is possible to improve performance on the request-handling side, rather than by improving the performance of the service's functionality.
Spring controllers (and most Spring beans) are singletons, i.e. there is a single instance in your application and it handles all requests.
Assuming this is not WebSockets (and if you don't know what that means, it probably isn't), servlet containers typically maintain a thread pool and will take a currently unused thread from the pool to handle each request.
You can tune this by, for example, changing some aspects of the thread pool (initial threads, max threads, etc.). This is servlet-container configuration (i.e. configuring Tomcat/Jetty/whatever you're using), not Spring per se.
You can also tune other HTTP aspects such as compression. This can usually be done via the container, but if I recall correctly Spring offers a servlet filter that will do this.
The image library and the image operations you perform also matter. Many libraries convert the image into a raw in-memory representation in order to operate on it, which means a 3 MB JPEG can take upwards of 100 MB of heap space. The implication is that you may need some kind of semaphore to limit concurrent image processing.
The best approach here is to experiment with different libraries and see what works best for your use case. Hope this helps.
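As a rough illustration of the semaphore idea (the limit of 4 and the processImages method are made-up placeholders, not values from the answer):

import java.util.concurrent.Semaphore;

public class ImageComparisonService {

    // Allow at most 4 comparisons to decode images at the same time,
    // so heap usage stays bounded even when many requests arrive at once.
    private static final Semaphore IMAGE_SLOTS = new Semaphore(4);

    public boolean compare(byte[] query, byte[] test) throws InterruptedException {
        IMAGE_SLOTS.acquire();                      // blocks when all slots are busy
        try {
            return processImages(query, test);      // hypothetical heavy operation
        } finally {
            IMAGE_SLOTS.release();
        }
    }

    private boolean processImages(byte[] query, byte[] test) {
        // decode, compare, etc.
        return false;
    }
}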
The controller will be a singleton, but there are ways to make the processing asynchronous, such as a thread pool or JMS, and you can also run multiple nodes. As long as you return a key and provide an endpoint that clients can poll to fetch the result later (see the sketch below), you can scale out the back-end processing.
You can also cluster the app so there are more nodes to do the work, and cache results where possible; that pays off if 30% or more of the requests have the same input and therefore the same output.
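A minimal sketch of that "return a key, poll for the result" pattern with a plain thread pool (all class names, endpoint paths, and the pool size below are invented for illustration, not taken from the question):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.web.bind.annotation.*;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class AsyncCompareController {

    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Boolean> results = new ConcurrentHashMap<>();

    // Submit the comparison and immediately return a job key.
    @PostMapping("/compare")
    public String submit(@RequestParam("query") MultipartFile query,
                         @RequestParam("test") MultipartFile test) {
        String key = UUID.randomUUID().toString();
        CompletableFuture.supplyAsync(() -> compareImages(query, test), pool)
                         .thenAccept(result -> results.put(key, result));
        return key;
    }

    // Clients poll this endpoint with the key until a result is available.
    @GetMapping("/compare/{key}")
    public String poll(@PathVariable String key) {
        Boolean result = results.get(key);
        return result == null ? "PENDING" : result.toString();
    }

    private boolean compareImages(MultipartFile query, MultipartFile test) {
        return false;  // placeholder for the real image comparison
    }
}

Note that a real implementation would copy the uploaded bytes before the request returns, since multipart temp files may be cleaned up once the original request completes, and it would evict or persist finished results instead of keeping them in an in-memory map forever.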

How to process data from Servlet independently?

I have developed the following program/architecture:
A) A Java servlet receives POST requests, extracts the parameters from them, and stores them in a public static LinkedList:
public static LinkedList<myObjects> incomingQueue = new LinkedList<myObjects>();
That is, for every POST request I do this:
incomingQueue.push(myObject);
Now, I want to periodically access the queue and process the objects on it:
while (true) {
    doProcessing(incomingQueue);
    Thread.sleep(someTime);
}
Obviously, I don't have a main class to do this. How do I create such a class that has access to the incomingQueue without being triggered by the servlet? What is the correct architecture to do this?
Thank you for your time.
First of all, the queue should be placed in the servlet context attributes (see ServletContext.setAttribute()). Access to this queue must also be synchronized; consider ArrayBlockingQueue.
With plain servlets you can use a ServletContextListener, starting a thread in contextInitialized() and interrupting it in contextDestroyed(); a sketch follows below.
If you are using Spring you can use the @Scheduled annotation; in EJB, TimerService or @Schedule.
Finally, there is a Timer class in standard Java. Last but not least, have a look at JMS; it might be a better choice in your situation.
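A minimal sketch of that listener-plus-queue approach, assuming the javax.servlet API (the class name, queue capacity, and doProcessing are illustrative placeholders):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class QueueProcessorListener implements ServletContextListener {

    public static final String QUEUE_ATTR = "incomingQueue";

    private Thread worker;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Thread-safe queue shared with the servlet via a context attribute.
        BlockingQueue<Object> queue = new ArrayBlockingQueue<>(1000);
        sce.getServletContext().setAttribute(QUEUE_ATTR, queue);

        worker = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    Object next = queue.take();   // blocks until an item arrives
                    doProcessing(next);           // hypothetical processing step
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // exit on shutdown
            }
        });
        worker.start();
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        if (worker != null) {
            worker.interrupt();
        }
    }

    private void doProcessing(Object item) {
        // real work goes here
    }
}

The servlet then fetches the queue via getServletContext().getAttribute(QUEUE_ATTR) and calls offer(...) on it, instead of pushing to a static LinkedList.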
You have several options:
Use a scheduling library like Quartz.
If you don't want to use a separate library, add a Listener to your web.xml that implements ServletContextListener and starts a separate thread in contextInitialized().
Also note the comment by @BrianRoach; the point about the synchronized list is rather important.
You need to synchronize your methods for concurrent access.
A very hard-core solution would be to implement it as a producer/consumer setup, for example with a shared stack, one producer, and three consumers.
A much neater solution would be to use JMS.

What's the level of asynchrony in the Play! framework?

Play! touts its asynchronous HTTP handling feature, though it is not very clear to me what else is truly async (non-blocking and without thread switching). In the asynchronous examples I've read, like the one below taken from the Play! Framework Cookbook:
public static void generateInvoice(Long orderId) {
    Order order = Order.findById(orderId); // #a
    InputStream is = await(new OrderAsPdfJob(order).now()); // #b
    renderBinary(is);
}
They focus on the long/expensive "business logic" step at #b, but my concern is the DB call at #a. In fact, the majority of controller methods in many apps just perform several CRUD operations against the DB, like:
public static void generateInvoice(Long orderId) {
    Order order = Order.findById(orderId); // #a
    render(order);
}
I'm particularly concerned about the claim of using a "small number of threads" when serving this DB access pattern.
So the questions are:
Will Play! block on the JDBC calls?
If we wrap such calls in future/promise/await, it will cause thread switching (besides the inconvenience caused by the pervasiveness of DB calls), right?
In light of this, how does its asynchrony compare to a servlet server with an NIO connector (e.g. Tomcat with the NIO connector, but without using the new event handler) in serving this DB access pattern?
Are there any plans to support an asynchronous DB driver, like http://code.google.com/p/adbcj/ ?
Play will block on JDBC calls; there's no magic to prevent that.
To wrap a j.u.c.Future in an F.Promise for Play, a polling loop is needed. This can result in a lot of context switches.
Servlet containers can use NIO, e.g. to keep connections open between requests without tying up threads for inactive connections. But a JDBC call in request-handling code will block and tie up a thread just the same.
ADBCJ implements j.u.c.Future but also supports callbacks, which can be tied to an F.Promise; see https://groups.google.com/d/topic/play-framework/c4DOOtGF50c/discussion.
I'm not convinced Play's async feature is worthwhile, given how much it complicates the code and testing. Maybe if you need to handle thousands of requests per second on a single machine while calling slow services.
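For what it's worth, if one did want to keep the blocking lookup off the HTTP thread in the Play 1.x style of the snippets above, a sketch might look like the following. LoadOrderJob and Invoices are hypothetical, OrderAsPdfJob is the job from the cookbook example, and the JDBC call itself is still blocking; it just runs on the jobs pool, with the thread switching discussed above:

// LoadOrderJob.java (hypothetical job that moves the blocking lookup to Play's job pool)
public class LoadOrderJob extends play.jobs.Job<Order> {
    private final Long orderId;

    public LoadOrderJob(Long orderId) {
        this.orderId = orderId;
    }

    @Override
    public Order doJobWithResult() {
        return Order.findById(orderId);   // still a blocking JDBC call, just not on the HTTP thread
    }
}

// Invoices.java (controller in the style of the cookbook example)
public class Invoices extends play.mvc.Controller {
    public static void generateInvoice(Long orderId) {
        // each await() suspends the action and resumes it when the promise is redeemed
        Order order = await(new LoadOrderJob(orderId).now());
        java.io.InputStream is = await(new OrderAsPdfJob(order).now());
        renderBinary(is);
    }
}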

Alternatives to multithreading in Java

I have a question that has bothered me for a while.
Say I have a multithreaded server: when it receives a request, it passes the request to a handler, and the handler processes it. One reason we make the server multithreaded is this: if it is not multithreaded, then while the server is processing one request, another request that arrives in the meantime will be dropped, because the server is not available.
So I wonder whether there is an alternative to a multithreaded server. For example, could we create a queue for a non-multithreaded server, so that it fetches the next request from the queue once it finishes the current one?
Yes, you can have an event-based server. This capability is offered by the java.nio package, though you could use a framework like netty rather than do it from scratch.
However, note that while this used to be considered a way to get better performance, a regular multithreaded server actually seems to offer better performance with today's hardware and operating systems.
Yes, you can. Have you considered SEDA-like (i.e. event-driven) techniques? You may want to investigate the Netty library too; it does most of the job for you when it comes to using NIO.
You can still have a single-threaded engine behind a multithreaded server.
Consider the following skeleton: if you have an Engine that runs, it can be completely single-threaded, simply handling requests in the order they're received. This allows you to use non-thread-safe components in the business logic, and you've managed to separate your networking layer from your business logic layer! It's a win-win scenario.
import java.util.LinkedList;
import java.util.Queue;

class Engine implements Runnable {
    private final Object requestLock = new Object();
    private final Queue<Request> requests = new LinkedList<Request>();
    private volatile boolean running = true;

    /** Called by the server's connection threads to hand work to the engine. */
    public void addRequest(Request request) {
        synchronized (requestLock) { requests.add(request); }
    }

    private Request nextRequest() {
        synchronized (requestLock) { return requests.poll(); }
    }

    /**
     * The engine is single-threaded. It doesn't care about server connections.
     */
    public void run() {
        while (running) {
            Request request = nextRequest();
            if (request == null) {
                continue; // queue empty; a real engine would block or wait instead of spinning
            }
            // handle your request as normal
            // also consider making a mechanism to send Responses
        }
    }
}
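To complete the picture, here is a rough sketch of the multithreaded server side feeding the single-threaded engine above (the port, pool size, and Request.parse helper are made up for illustration; connection handling is heavily simplified):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class Server {
    public static void main(String[] args) throws IOException {
        Engine engine = new Engine();
        new Thread(engine, "engine").start();      // the single business-logic thread

        ExecutorService connectionPool = Executors.newFixedThreadPool(10);
        try (ServerSocket serverSocket = new ServerSocket(8080)) {
            while (true) {
                Socket socket = serverSocket.accept();
                // Each connection thread only parses the request and enqueues it;
                // Request.parse(...) is a made-up placeholder for that step.
                connectionPool.submit(() -> engine.addRequest(Request.parse(socket)));
            }
        }
    }
}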
