Complex Spring Framework Service Layer - java

EDITED SHORT VERSION OF THE POST:
Haven't had enough views, so I'm summarizing the question:
My architecture is completely stateless and async: the front-end makes a petition to a REST API and then long-polls for the response. This REST API queues petitions into a messaging queue, and each petition is dequeued and processed by the back-end.
I want this back-end to follow the "traditional" Spring @Service interface and ServiceImpl approach; however, that is kind of hard because of how I'm doing it.
One thread dequeues the petition (Producer) and spawns a new thread (Consumer), which processes the whole petition and then sends the result back to a "responses pool" where it gets polled. That petition might need to use several @Services and merge the responses from each, maybe even the same @Service twice.
How would you do it? For more information, check the description below!
ORIGINAL LONG POST:
I have a large application with 3 layers like this:
Front-end (Spring-MVC): Views and Controllers; the "Model" consists of async requests to the REST API in the Middleware, first to queue the petition and then to long-poll for an answer.
Middleware (Spring-MVC): the REST API. Two main functions: it receives a petition from the front-end and queues it, and it receives an answer from the back-end and stores it in a responses cache until retrieved by the front-end.
Back-end (Spring standalone app): Producer/Consumer pattern. ONE Producer dequeues each petition and creates a prototype Consumer for it. The Consumer implements InitializingBean, so it goes something like this: it is instantiated, its many @Autowired fields are injected, and then afterPropertiesSet is executed, where many fields that depend on the petition are set.
I also have a Repository Layer of HibernateDaos, which does all the querying to the database.
I'm missing a properly built Service Layer and that's what this question is all about.
Let me give a little bit more context. What I have right now is one single HUGE service with 221 functions (the Consumer's file is very long), and one petition may need to invoke several of these functions; the result of each is merged into a List of DTOs, which is later received by the front-end.
I want to split this one and only service into several, each logically matching its corresponding Repository; however, I've faced the following problems:
Keep this in mind:
One petition has many Actions, one action is a call to a function of a Service.
Each Consumer is a single and unique Thread.
Every time a Consumer thread starts, a transaction is started, and right before returning it is committed, unless rolled back.
I need all the services of that petition to be executed in the same thread and transaction.
When the consumer runs afterPropertiesSet, several fields specific to that request are initialized by some parameters which are always sent.
With a good service layer I want to accomplish the following:
I don't want to have to initialize all these parameters for every service of the petition; I want them to be global to the petition/thread, but I don't want to pass them as parameters to all 221 functions.
I want to lazily initialize the new services, only if needed, and when one is initialized, I want to set all the parameters I mentioned above. Each service needs to be a prototype per petition; however, it feels wasteful to initialize it twice within the same petition (2 actions for the same service in one petition). In other words, I want it to behave like a "request" scope, yet it is not a request, since it is not web based; it is a new thread started by the Producer when dequeuing a petition.
I was thinking of having a prototype ServicesFactory per Consumer, initialized with all the parameters in the Consumer's afterPropertiesSet. Inside this ServicesFactory all possible services are declared as class fields, and when a specific service is requested, if its field is null it is initialized and all its fields are set; if not null, the same instance is returned. The problem with this approach is that I lose dependency injection on all the services. I've been reading about ServiceFactoryBean, thinking maybe this is the way to go, but I really can't get a hold of it. The fact that it needs all the parameters of the Consumer, and that it needs to be a unique ServiceFactoryBean per Consumer, is really confusing.
Any thoughts?
Thanks

Based on the description, I don't think this is a good case for the prototype scope; the ideal scope here seems to be thread scope.
As a solution, the simplest approach would be to make all services singletons. The Consumer then reads the petition from the inbound queue and starts processing.
Introduce one more service, also a singleton, that gets injected into every service that needs it; let's call it PetitionScopedService.
This service internally uses a ThreadLocal, which is a thread-scoped holder for a variable of type PetitionContext. PetitionContext, in turn, contains all the information that is global to that petition.
All the Consumer needs to do is set the initial values of the petition context, and any caller of PetitionScopedService on the same thread will be able to read those values transparently. Here is some sample code:
public class PetitionContext {
    // just a POJO: fields, getters and setters for the petition-global data
}

@Service
public class PetitionScopedService {

    private final ThreadLocal<PetitionContext> context = new ThreadLocal<>();

    public void setContext(PetitionContext petitionContext) {
        context.set(petitionContext);
    }

    public void clearContext() {
        context.remove();
    }

    public void doSomethingPetitionSpecific() {
        // ... uses context.get(), the petition context of the current thread ...
    }
}

@Service
public class SomeOtherService {

    @Autowired
    private PetitionScopedService petitionService;

    // ... uses petitionService, a singleton with thread-scoped internal state,
    // effectively thread scoped ...
}
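For completeness, a minimal sketch (the Consumer class name, the Petition type and buildContextFrom are assumptions, not from the post) of how the Consumer could populate and clean up that context around its unit of work:

public class PetitionConsumer implements Runnable {

    @Autowired
    private PetitionScopedService petitionScopedService;

    private Petition petition; // assumed to be set by the Producer when it dequeues the petition

    @Override
    public void run() {
        // make the petition-global parameters visible to every service call on this thread
        petitionScopedService.setContext(buildContextFrom(petition));
        try {
            // ... invoke the @Service methods (the "actions") for this petition ...
        } finally {
            // avoid leaking state in case the thread is ever pooled and reused
            petitionScopedService.clearContext();
        }
    }

    private PetitionContext buildContextFrom(Petition petition) {
        PetitionContext context = new PetitionContext();
        // ... copy the parameters that are always sent with the petition ...
        return context;
    }
}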

Points 2 and 3 need more reorganizing. I'd prefer to look at Spring Integration for both the "Middleware" and the "(Spring Standalone App): Producer/Consumer pattern"; Spring Integration is actually made to solve these two points, and you can use publish/subscribe if you are doing 2 or more actions at the same time. The other point is why you are using REST in the "Middleware": are these "Middleware" services exposed to another app rather than just your front-end? If not, you can fold this part into your Spring-MVC front-end app using content negotiation; otherwise, if you go with Spring Integration, you will find multiple ways for the components to communicate.
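As an illustration only (the channel name, the Petition type and the handler class are my own, not from the question), here is a minimal sketch of a publish/subscribe channel in Spring Integration, using @EnableIntegration, PublishSubscribeChannel and @ServiceActivator, that fans each petition out to its subscribed handlers:

@Configuration
@EnableIntegration
public class PetitionIntegrationConfig {

    @Bean
    public MessageChannel petitionChannel() {
        // every subscriber receives its own copy of each petition message
        return new PublishSubscribeChannel();
    }
}

@Component
public class PetitionActionHandler {

    // invoked for every message published to petitionChannel
    @ServiceActivator(inputChannel = "petitionChannel")
    public void handleAction(Petition petition) {
        // ... run one of the "actions" for this petition ...
    }
}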

Related

Spring 5.x - How to cleanup a ThreadLocal entry

Apologies for the long question.
I'm fairly new to Spring and don't understand the inner working fully yet.
So, my current Java project has Spring 4.x code, written way back in 2015, that uses a ThreadLocal variable to store some user permission data.
The flow starts as a REST call in a REST controller which then calls the backend code and checks for user permissions from the DB.
There is a @Repository class that has a static ThreadLocal instance where these user permissions are stored. The ThreadLocal variable is updated by the calling thread.
So, if the thread finds data in the ThreadLocal instance already present for it, it just reads that data from the ThreadLocal variable and works away. If not, it goes to DB tables and fetches new permission data and also updates the ThreadLocal variable.
So my understanding is that ThreadLocal variable was used as these user permissions are needed multiple times within the same REST Call. So the idea was for a given REST request since the thread is the same, it needn't fetch user permissions from DB and instead can refer to its entry in the ThreadLocal variable within the same REST request.
Now, this seems to work fine in Spring 4.3.29.RELEASE, as every REST call was being serviced by a different thread. (I printed thread IDs to confirm.)
Spring 4.x ThreadStack up to Controller method call:
com.xxx.myRESTController.getDoc(MyRESTController.java),
org.springframework.web.context.request.async.WebAsyncManager$5.run(WebAsyncManager.java:332),
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511),
java.util.concurrent.FutureTask.run(FutureTask.java:266),
java.lang.Thread.run(Thread.java:748)]
However, when I upgraded to Spring 5.2.15.RELEASE this breaks when calling different REST endpoints that try to fetch user permissions from the backend.
On printing the Stacktrace in the backend, I see there is a ThreadPoolExecutor being used in Spring 5.x.
Spring 5.x ThreadStack:
com.xxx.myRESTController.getDoc(MyRESTController.java),
org.springframework.web.context.request.async.WebAsyncManager.lambda$startCallableProcessing$4(WebAsyncManager.java:337),
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511),
java.util.concurrent.FutureTask.run(FutureTask.java:266),
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149),
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624),
java.lang.Thread.run(Thread.java:748)]
So in Spring 5.x, it looks like the same thread is being put back in the ThreadPool and later gets called for multiple different REST calls.
When this thread looks up the ThreadLocal instance, it finds stale data stored by it for an earlier unrelated REST call. So quite a few of my test cases fail due to stale data permissions being read by it.
I read that calling ThreadLocal's remove() clears the calling thread's entry from the variable (which wasn't implemented at the time).
I wanted to do this in a generic way so that all REST calls call the remove() before the REST Response is sent back.
Now, in order to clear the ThreadLocal entry, I tried
writing an Interceptor by implementing HandlerInterceptor but this didn't work.
I also wrote another Interceptor extending HandlerInterceptorAdapter and calling ThreadLocal's remove() in its afterCompletion().
I then tried implementing ServletRequestListener and called the ThreadLocal's remove() from its requestDestroyed() method.
In addition, I implemented a Filter and called remove() in doFilter() method.
All these 4 implementations failed because when I printed the thread IDs in their methods, they were exactly the same as each other, but different from the thread ID printed in the RestController method.
So the thread executing the REST endpoint is a different thread from the one running the above 4 classes, and the remove() call in those classes never clears anything from the ThreadLocal variable.
Can someone please provide some pointers on how to clear the ThreadLocal entry for a given thread in a generic way in Spring?
As you noticed, both the HandlerInterceptor and the ServletRequestListener are executed in the original servlet container thread, where the request is received. Since you are doing asynchronous processing, you need a CallableProcessingInterceptor.
Its preProcess and postProcess methods are executed on the thread where asynchronous processing will take place.
Therefore you need something like this:
WebAsyncUtils.getAsyncManager(request)
        .registerCallableInterceptor("some_unique_key", new CallableProcessingInterceptor() {
            @Override
            public <T> void postProcess(NativeWebRequest request, Callable<T> task,
                    Object concurrentResult) throws Exception {
                // remove the ThreadLocal
            }
        });
in a method that has access to the ServletRequest and executes in the original servlet container thread, e.g. in a HandlerInterceptor#preHandle method.
Remark: Instead of registering your own ThreadLocal, you can use Spring's RequestAttributes. Use the static method:
RequestContextHolder.currentRequestAttributes()
to retrieve the current instance. Under the hood a ThreadLocal is used, but Spring takes care of setting it and removing it on every thread where the processing of your request takes place (asynchronous processing included).
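For illustration, here is a minimal sketch (the attribute name and the List<String> payload are assumptions, not from your code) of caching the permissions in the request attributes rather than in a hand-rolled ThreadLocal:

import java.util.List;

import org.springframework.web.context.request.RequestAttributes;
import org.springframework.web.context.request.RequestContextHolder;

public final class PermissionCache {

    private static final String KEY = "userPermissions";

    private PermissionCache() {
    }

    // called after the permissions have been loaded from the DB
    public static void store(List<String> permissions) {
        RequestContextHolder.currentRequestAttributes()
                .setAttribute(KEY, permissions, RequestAttributes.SCOPE_REQUEST);
    }

    // returns null if nothing has been cached for the current request yet
    @SuppressWarnings("unchecked")
    public static List<String> read() {
        return (List<String>) RequestContextHolder.currentRequestAttributes()
                .getAttribute(KEY, RequestAttributes.SCOPE_REQUEST);
    }
}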

Is Session.sendToTarget() thread-safe?

I am trying to integrate QFJ into a single-threaded application. At first I was trying to utilize QFJ with my own TCP layer, but I haven't been able to work that out. Now I am just trying to integrate an initiator. Based on my research into QFJ, I would think the overall design should be as follows:
The application will no longer be single-threaded, since the QFJ initiator will create threads, so some synchronization is needed.
Here I am using a SocketInitiator (I only handle a single FIX session), but I would expect a similar setup should I go for the threaded version later on.
There are 2 aspects to the integration of the initiator into my application:
Receiving side (fromApp callback): I believe this is straightforward, I simply push messages to a thread-safe queue consumed by my MainProcessThread.
Sending side: I'm struggling to find documentation on this front. How should I handle synchronization? Is it safe to call Session.sendToTarget() from the MainProcessThread? Or is there some synchronization I need to put in place?
As Michael already said, it is perfectly safe to call Session.sendToTarget() from multiple threads, even concurrently. But as far as I can see, you only use one thread anyway (the MainProcessThread).
The relevant part of the Session class is in the method sendRaw():
private boolean sendRaw(Message message, int num) {
    // sequence number must be locked until application
    // callback returns since it may be effectively rolled
    // back if the callback fails.
    state.lockSenderMsgSeqNum();
    try {
        // ... some logic here ...
    } finally {
        state.unlockSenderMsgSeqNum();
    }
}
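As a usage illustration (the OrderSender class and the outbound queue are assumptions about your MainProcessThread, not part of QFJ; imports from quickfix and java.util.concurrent are assumed), the processing thread can simply call the static Session.sendToTarget(...) whenever it has something to send:

public class OrderSender {

    // drained by the MainProcessThread; filled by whatever produces outbound messages
    private final BlockingQueue<Message> outbound = new LinkedBlockingQueue<>();

    public void mainProcessLoop(SessionID sessionID) throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            Message message = outbound.take();
            try {
                // thread-safe: sendRaw() locks the sender sequence number internally
                Session.sendToTarget(message, sessionID);
            } catch (SessionNotFound e) {
                // thrown if no session is registered for this SessionID
            }
        }
    }
}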
Other points:
Here I am using a SocketInitiator (I only handle a single FIX session), but I would expect a similar setup should I go for the threaded version later on.
Will you always use only one Session? If yes, then there is no point in using the ThreadedSocketInitiator, since all it does is create a thread per Session.
The application will no longer be single threaded, since the QFJ initiator will create threads
As already stated in "Use own TCP layer implementation with QuickFIX/J", you could try passing an ExecutorFactory. But this might not be applicable to your specific use case.

OOA&D / Java / Software Architecture - advice on structuring event handling code to avoid a complicated data flow

I've implemented the producer/consumer paradigm with a message broker in Spring and my producers use WebSocket to extract and publish data into the queue.
A producer is therefore something like:
AcmeProducer.java
handler/AcmeWebSocketHandler.java
and the handler has been a pain to deal with.
Firstly, the handler has two events:
onOpen()
onMessage()
onOpen has to send a message to the WebSocket to subscribe to specific channels
onMessage receives messages from the WebSocket and adds them to the queue
onMessage has some dependencies on AcmeProducer.java: it needs to know which currency pairs to subscribe to, and it needs the message broker service, an ObjectMapper (JSON deserializer) and a benchmark service.
When messages are consumed from the queue they are transformed into a format for OrderBook.java. Every producer has its own format of OrderBook and therefore its own AcmeOrderBook.java.
I feel the flow is difficult to follow, even though I've put the classes for one producer in the same package. Every time I want to fix something in a producer I have to jump between classes and search for where it lives.
Are there some techniques for reducing the complicated data flow into something easy to follow?
As you know, event handlers like AcmeHandler.java hold callbacks that get called from elsewhere in the system (from the websocket implementation) and hence they can be tricky to organize. The data flow with events is also more convoluted because when handlers are in separate files they cannot use variables defined in the main file.
If this code would not use the event driven paradigm the data flow would be easy to follow.
Finally, is there any best practice for structuring code when using WebSockets with onOpen and onMessage? Producer/Consumer is the best I could come up with, but I don't want to scatter the Acme classes across different packages. For example, AcmeOrderbook should live with the consumer, but since it depends on AcmeProducer.java and AcmeHandler.java they are often edited at the same time, hence I've put them together.
Since the dependencies inside every WebSocket handler are the same (only different implementations of the same interfaces), I think there should be only one thing injected, and that would be some Context variable. Is that the best practice?
I've solved it using Message Dispatcher and Message Handlers.
The dispatcher only checks whether the message is a data message (snapshot or update) and passes control to the message handler class, which itself checks the type of the message (either snapshot or update) and handles that type properly. If the message is not a data message but something else, the dispatcher dispatches it according to what it is (some event).
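A minimal sketch of that split (the class names, the MessageContext type and its methods are my own illustration, not the original code):

@Component
public class MessageDispatcher {

    @Autowired
    private DataMessageHandler dataMessageHandler;

    public void dispatchMessage(MessageContext context) {
        if (context.isDataMessage()) {
            // snapshots and incremental updates go to the data handler
            dataMessageHandler.handle(context);
        } else {
            // ... dispatch other events (subscription acks, heartbeats, errors) ...
        }
    }
}

@Component
public class DataMessageHandler {

    public void handle(MessageContext context) {
        if (context.isSnapshot()) {
            // ... rebuild the order book from the snapshot ...
        } else {
            // ... apply the incremental update to the order book ...
        }
    }
}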
I've also added callbacks using anonymous functions, and they are much shorter now; the callbacks are finally transparent.
For example, inside an anonymous callback function there is only this:
messageDispatcher.dispatchMessage(context);
Another key element here is the use of a context. MessageDispatcher is a separate class (autowired).
I've separated the order book into its own directory inside every producer.
Well, everything requires knowledge of everything to solve this elegantly.
One last pattern for solving this: Java EE uses annotations for its endpoints and lifecycle callbacks such as onOpen, onMessage, etc. That is also a nice approach, because with it the callback wiring becomes invisible and onOpen / onMessage are called automatically by the container (via the annotations).
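For reference, a minimal sketch of that annotation-driven style using the standard javax.websocket API (the endpoint class and the subscription payload are made up for illustration):

@ClientEndpoint
public class AcmeWebSocketEndpoint {

    @OnOpen
    public void onOpen(Session session) throws IOException {
        // called by the container once the connection is established
        session.getBasicRemote().sendText("{\"op\":\"subscribe\",\"channel\":\"orderbook\"}");
    }

    @OnMessage
    public void onMessage(String message) {
        // called by the container for every inbound frame;
        // hand the payload off to the dispatcher / queue here
    }
}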

Asynchronous processing using user-defined Spring #Components

I have three main classes: Controller, Service1, Service2
Controller: Simple controller that retrieves input (list) and passes it to Service1
Service1: Receives input from the controller, processes it, and passes it to Service2
Service2: Receives processed input from Service 1 and sends a POST request to an external service
These three are all annotated with @Component, and based on what I was reading, Spring beans are by default singletons unless specified to be prototype which creates a new instance every time the bean is utilized. It is important to note that these are all stateless.
The main flow works like this:
Controller -> Service 1 -> Service 2.
It's simple as is and works well synchronously. But I want asynchronous, fire-and-forget behavior for these three classes.
My main concern is that Service1 can be a possible bottleneck, since it does a lot of processing that takes 4-5 seconds. I mean, sure, the controller can spawn a lot of CompletableFutures, but since Service1 is a singleton, a single thread locks the whole method until it finishes which results to somewhat synchronous behavior.
My question is would setting the scope to 'prototype' for Service1 solve this issue?
since Service1 is a singleton, a single thread locks the whole method until it finishes
No, not at all. That would only be true if the method was synchronized. But there's no reason to make it synchronized, since the object is stateless. Several threads execute the methods of a given object concurrently. That's the whole point of threads.
Note that
prototype which creates a new instance every time the bean is utilized
is also wrong. A new instance is created every time a bean is requested from the application context. So if you have two controllers both using a prototype bean Foo, both will have their own instance of Foo. But there will only be 2 instances, used and shared by all threads executing the methods of Foo. Read the documentation, which explains it.
If you really want your controller to send back the response before the processing is finished, then use @Async, as described in the documentation.
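A minimal sketch of that fire-and-forget setup (Service1's method name and Service2's post(...) are placeholders mirroring the question; the default async executor is left as-is):

@Configuration
@EnableAsync
public class AsyncConfig {
    // enables detection of @Async methods; a custom TaskExecutor bean could be declared here
}

@Component
public class Service1 {

    @Autowired
    private Service2 service2;

    @Async // runs on Spring's async executor; the caller returns immediately
    public void process(List<String> input) {
        // ... the 4-5 seconds of processing ...
        service2.post(input); // hypothetical hand-off to the external POST call
    }
}

The controller then simply calls service1.process(input) and returns right away; concurrent requests run on separate executor threads instead of queuing behind one another.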

Long running webservice architecture

We use Axis2 for building our webservices and a JBoss server to run the logic of all of our applications. We were asked to build a webservice that talks to a bean that could take up to 1 hour to respond (depending on the size of the request), so we cannot keep the connection with the consumers open during that time.
We could use an asynchronous webservice, but that hasn't worked out all that well, so we decided to implement a bean that does the logic behind the webservice and have the service invoke that bean asynchronously. The webservice will generate a token, pass it to the consumer, and the consumer can use it to query the status of the request.
The questions I have are:
How do I query the status of the bean on the JBoss server once I have returned from the service method that created that bean? Do I need to use stateful beans?
Can I use stateful beans if I want to make asynchronous calls from the webservice side?
Another approach you could take is to make use of JMS and a DB.
The process would be
In web service call, put a message on a JMS Queue
Insert a record into a DB table, and return a unique id for that record to the client
In an MDB that listens to the Queue, call the bean (see the MDB sketch below)
When the bean returns, update the DB record with a "Done" status
When the client calls for status, read the DB record, return "Not Done" or "Done" depending on the record.
When the client calls and the record indicates "Done", return "Done" and delete the record
This process is a bit heavier on resource usage, but it has some advantages:
A Durable JMS Queue will redeliver if your bean method throws an Exception
A Durable JMS Queue will redeliver if your server restarts
By using a DB table instead of some static data, you can support a clustered or load balanced environment
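A minimal sketch of the MDB piece (the queue name, the worker bean and the status DAO are hypothetical placeholders, and activation-config property names can vary slightly between servers):

@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/RequestQueue")
})
public class RequestProcessorMDB implements MessageListener {

    @EJB
    private LongRunningBean worker;      // the bean that can take up to an hour

    @EJB
    private RequestStatusDao statusDao;  // updates the DB record the client polls

    @Override
    public void onMessage(Message message) {
        try {
            String requestId = ((TextMessage) message).getText();
            worker.process(requestId);
            statusDao.markDone(requestId); // polling clients now see "Done"
        } catch (JMSException e) {
            // rethrow so the container rolls back and the durable queue redelivers
            throw new RuntimeException(e);
        }
    }
}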
I don't think stateful session beans are the answer to your problem; they're designed for long-running conversational sessions, which isn't your scenario.
My recommendation would be to use a Java5-style ExecutorService thread pool, created using the Executors factory class:
When the web service server initializes, create an ExecutorService instance.
Web service call comes in, the handler creates an instance of Callable. The Callable.call() method would make the actual invocation on the business logic bean, in whatever form that takes.
This Callable is passed to ExecutorService.submit(), which immediately returns a Future object representing the eventual result of the call. The Executor will start to invoke your Callable in a separate thread.
Generate a random token, store the Future in a Map with the token as the key.
Return the token to the web service client (steps 1 to 4 should happen immediately)
Later, the web service client makes another call asking for the result, passing in the token.
The server looks up the Future using the token, and calls get() on the Future, with a timeout value so that it only waits a short time for the answer. The get() call will return the execution result of whatever the Callable invoked.
If the answer is available, return it to the client, and remove the Future from the Map.
Otherwise, tell the client to come back later.
It's a pretty robust approach. You can even configure the ExecutorService to limit the number of calls that can be in execution at the same time, if you so desire.
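A minimal sketch of that approach (Result stands in for whatever the business bean returns; the pool size and poll timeout are arbitrary and error handling is elided):

public class AsyncRequestBroker {

    private final ExecutorService executor = Executors.newFixedThreadPool(10);
    private final ConcurrentMap<String, Future<Result>> pending = new ConcurrentHashMap<>();

    // called from the web service operation: submit the work, hand back a token
    public String submit(Callable<Result> businessCall) {
        String token = UUID.randomUUID().toString();
        pending.put(token, executor.submit(businessCall));
        return token;
    }

    // called when the client polls with its token; null means "come back later"
    public Result poll(String token) throws Exception {
        Future<Result> future = pending.get(token);
        if (future == null) {
            return null; // unknown or already collected token
        }
        try {
            Result result = future.get(1, TimeUnit.SECONDS); // wait only briefly
            pending.remove(token);
            return result;
        } catch (TimeoutException notDoneYet) {
            return null;
        }
    }
}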
