I am trying to use a BlockingQueue inside Spring Boot. My design is this: a user submits a request via a controller, and the controller in turn puts some objects onto a blocking queue. After that, the consumer should be able to take the objects and process them further.
I have used @Async, a thread pool and an @EventListener. However, with my code below, the consumer class is not consuming any objects. Could you please help point out how to improve it?
Queue Configuration
@Bean
public BlockingQueue<MyObject> myQueue() {
    return new PriorityBlockingQueue<>();
}

@Bean
public Executor getAsyncExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(3);
    executor.setMaxPoolSize(3);
    executor.setQueueCapacity(10);
    executor.setThreadNamePrefix("Test-");
    executor.initialize();
    return executor;
}
Rest Controller
@Autowired
BlockingQueue<MyObject> myQueue;

@RequestMapping(path = "/api/produce")
public void produce() throws InterruptedException {
    /* Do something */
    MyObject myObject = new MyObject();
    myQueue.put(myObject);
}
Consumer Class
@Autowired
private BlockingQueue<MyObject> myQueue;

@EventListener
public void onApplicationEvent(ContextRefreshedEvent event) {
    consume();
}

@Async
public void consume() {
    while (true) {
        try {
            MyObject myObject = myQueue.take();
        }
        catch (Exception e) {
        }
    }
}
Your idea is to use a queue to store messages, with the consumer listening for Spring events and consuming them.
However, your code never actually publishes an event; it only stores the objects in the queue.
If you want to use Spring events, the producer could look like this:
@Autowired
private ApplicationEventPublisher applicationEventPublisher;

public void doStuffAndPublishAnEvent(final String message) {
    System.out.println("Publishing custom event. ");
    CustomSpringEvent customSpringEvent = new CustomSpringEvent(this, message);
    applicationEventPublisher.publishEvent(customSpringEvent);
}
Check the Spring documentation on application events for more details.
If you still want to use a BlockingQueue, your consumer should be a running thread that continuously waits for tasks in the queue, for example:
public class NumbersConsumer implements Runnable {

    private final BlockingQueue<Integer> queue;
    private final int poisonPill;

    public NumbersConsumer(BlockingQueue<Integer> queue, int poisonPill) {
        this.queue = queue;
        this.poisonPill = poisonPill;
    }

    public void run() {
        try {
            while (true) {
                Integer number = queue.take(); // always waiting
                if (number.equals(poisonPill)) {
                    return;
                }
                System.out.println(Thread.currentThread().getName() + " result: " + number);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
You can check this code example for the complete producer/consumer setup.
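For illustration, a minimal sketch of how such a consumer is typically driven (the demo class, queue and poison-pill value are assumptions, not from the original example):

public class NumbersDemo {

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        int poisonPill = Integer.MIN_VALUE;

        // Run the long-lived consumer on its own worker thread.
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(new NumbersConsumer(queue, poisonPill));

        // Producer side: publish a few numbers, then the poison pill to stop the consumer.
        for (int i = 0; i < 10; i++) {
            queue.put(i);
        }
        queue.put(poisonPill);
        executor.shutdown();
    }
}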
@Async doesn't actually start a new thread if the target method is called from within the same object instance; this could be the problem in your case.
Also note that you need to put @EnableAsync on a configuration class to enable the @Async annotation.
See Spring documentation: https://docs.spring.io/spring-framework/docs/current/reference/html/integration.html#scheduling-annotation-support
The default advice mode for processing @Async annotations is proxy which allows for interception of calls through the proxy only. Local calls within the same class cannot get intercepted that way. For a more advanced mode of interception, consider switching to aspectj mode in combination with compile-time or load-time weaving.
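In other words, the consumer has to be invoked through its Spring proxy. A minimal sketch of how that could look with the original classes (the bean names and the starter class are assumptions):

@Configuration
@EnableAsync
public class AsyncConfig {
    // queue and executor beans as in the question ...
}

@Component
public class QueueConsumer {

    @Autowired
    private BlockingQueue<MyObject> myQueue;

    @Async
    public void consume() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                MyObject myObject = myQueue.take();
                // process myObject here
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}

@Component
public class QueueConsumerStarter {

    @Autowired
    private QueueConsumer queueConsumer;

    // Calling consume() on the injected bean goes through the async proxy,
    // unlike the self-invocation in the original consumer class.
    @EventListener
    public void onContextRefreshed(ContextRefreshedEvent event) {
        queueConsumer.consume();
    }
}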
In the end I came up with this solution.
Rest Controller
@Autowired
BlockingQueue<MyObject> myQueue;

@RequestMapping(path = "/api/produce")
public void produce() throws InterruptedException {
    /* Do something */
    MyObject myObject = new MyObject();
    myQueue.put(myObject);
    Consumer.consume();
}
It is a little bit weird because you first have to put the object on the queue yourself and then consume it yourself. Any suggestions for improvement are highly appreciated.
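One possible improvement (a sketch; the QueueConsumerRunner class and its executor setup are assumptions) is to avoid @Async entirely: keep the controller as a pure producer and start the blocking take() loop from a dedicated runner bean:

@Component
public class QueueConsumerRunner implements ApplicationRunner {

    @Autowired
    private BlockingQueue<MyObject> myQueue;

    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    @Override
    public void run(ApplicationArguments args) {
        // Start the consumer loop once the application has started.
        executor.execute(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    MyObject myObject = myQueue.take();
                    // process myObject here
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
    }

    @PreDestroy
    public void shutdown() {
        executor.shutdownNow();
    }
}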
Related
I am trying to write my own asynchronous service implementation alongside my existing synchronous version.
I have the following so far:
@Service("asynchronousProcessor")
public class AsynchronousProcessor extends Processor {

    private BlockingQueue<Pair<String, MyRequest>> requestQueue = new LinkedBlockingQueue<>();

    public AsynchronousProcessor(final PBRequestRepository pbRequestRepository,
                                 final JobRunner jobRunner) {
        super(pbRequestRepository, jobRunner);
    }

    @Override
    public MyResponse process(MyRequest request, String id) {
        super.saveTheRequestInDB(request);
        // add task to blocking queue and have it processed in the background
    }
}
Basically I have an endpoint RestController class that calls process(). The async version should queue the request in a BlockingQueue and have it processed in the background.
I am unsure how to implement this, for example whether I should use an ExecutorService and how best to fit it into the current design.
It would also be useful to have some hooks, such as callbacks before and after a task is executed.
Any answer with some code samples to show the design would be really helpful :)
If the only requirement is to process the request asynchronously, then I'd strongly recommend considering Spring's built-in @Async for this purpose. This approach, however, will not be interface-compatible with your existing process method of Processor, since the return type MUST be either void or wrapped in a Future. This limitation exists for good reasons: the async execution cannot return the response immediately, so a Future wrapper is the only way to access the result should it be needed.
The following solution outline lays out what should be done in order to switch from sync to async execution while retaining interface compatibility. All important points are mentioned in inline comments. Please note that although this is interface-compatible, the returned value is null (for the reasons stated above). If you MUST have the return value within your controller, then this approach (or any async approach, for that matter) is NOT going to work unless you switch to an async controller as well (a different topic with much wider changes in design, though). The outline also includes pre- and post-execution hooks.
/**
 * Base interface extracted from the existing Processor.
 * Use this interface as the injection type in the controller along
 * with @Qualifier("synchProcessor") for using the sync processor.
 * Once ready, switch the Qualifier to asynchronousProcessor
 * to start using async instead.
 */
public interface BaseProcessor {
    public MyResponse process(MyRequest request, String id);
}

@Service("synchProcessor")
@Primary
public class Processor implements BaseProcessor {

    @Override
    public MyResponse process(MyRequest request, String id) {
        // normal existing sync logic
    }
}

@Service("asynchronousProcessor")
public class AsynchronousProcessor implements BaseProcessor {

    @Autowired
    private AsynchQueue queue;

    public MyResponse process(MyRequest request, String id) {
        queue.process(request, id);
        // async execution can not return the result immediately;
        // this is a hack to keep this implementation interface-
        // compatible with the existing BaseProcessor
        return null;
    }
}

@Component
public class AsynchQueue {

    @Autowired
    @Qualifier("synchProcessor")
    private BaseProcessor processor;

    /**
     * This method is executed asynchronously by Spring using the
     * configured executor. The presented outline calls the preProcess
     * and postProcess methods before and after the actual execution.
     * The actual execution is delegated to the existing synchProcessor,
     * reusing it 100% AS-IS.
     */
    @Async
    public void process(MyRequest request, String id) {
        preProcess(request, id);
        MyResponse response = processor.process(request, id);
        postProcess(request, id, response);
    }

    private void preProcess(MyRequest request, String id) {
        // add logic for pre processing here
    }

    private void postProcess(MyRequest request, String id, MyResponse response) {
        // add logic for post processing here
    }
}
Another use case could be to batch-process the DB updates instead of processing them one by one as you are doing already. This is especially useful if you have high volume and the DB updates are becoming the bottleneck. For this case, using a BlockingQueue makes sense. The following is the solution outline you can use for this purpose. Again, although this is interface-compatible, the returned value is still null. You can further fine-tune this outline to have multiple processing threads (or a Spring executor, for that matter) should that be needed for batch processing. For one similar use case, a single processing thread with batch updates was sufficient for my needs; concurrent DB updates were presenting bigger problems due to DB-level locks under concurrent execution.
public class MyRequestAndID {

    private MyRequest request;
    private String id;

    public MyRequestAndID(MyRequest request, String id) {
        this.request = request;
        this.id = id;
    }

    public MyRequest getMyRequest() {
        return this.request;
    }

    public String MyId() {
        return this.id;
    }
}
@Service("asynchronousProcessor")
public class BatchProcessorQueue implements BaseProcessor {

    private static final Logger logger = LoggerFactory.getLogger(BatchProcessorQueue.class);

    /* Batch processor which can process one OR more items using a single DB query */
    @Autowired
    private BatchProcessor batchProcessor;

    private LinkedBlockingQueue<MyRequestAndID> inQueue = new LinkedBlockingQueue<>();

    private Set<MyRequestAndID> processingSet = new HashSet<>();

    @PostConstruct
    private void init() {
        Thread processingThread = new Thread(() -> processQueue());
        processingThread.setName("BatchProcessor");
        processingThread.start();
    }

    public MyResponse process(MyRequest request, String id) {
        enqueue(new MyRequestAndID(request, id));
        // async execution can not return the result immediately;
        // this is a hack to keep this implementation interface-
        // compatible with the existing BaseProcessor
        return null;
    }

    public void enqueue(MyRequestAndID job) {
        inQueue.add(job);
    }

    private void processQueue() {
        try {
            while (true) {
                processQueueCycle();
            }
        } catch (InterruptedException ioex) {
            logger.error("Interrupted while processing queue", ioex);
        }
    }

    private void processQueueCycle() throws InterruptedException {
        // blocking call, wait for at least one item
        MyRequestAndID job = inQueue.take();
        processingSet.add(job);
        updateSetFromQueue();
        processSet();
    }

    private void processSet() {
        if (processingSet.size() < 1)
            return;
        preProcess(processingSet);
        batchProcessor.processAll(processingSet);
        postProcess(processingSet);
        processingSet.clear();
    }

    private void updateSetFromQueue() {
        List<MyRequestAndID> inData = Arrays.asList(inQueue.toArray(new MyRequestAndID[0]));
        if (inData.size() < 1)
            return;
        inQueue.removeAll(inData);
        processingSet.addAll(inData);
    }

    private void preProcess(Set<MyRequestAndID> currentSet) {
        // add logic for pre processing here
    }

    private void postProcess(Set<MyRequestAndID> currentSet) {
        // add logic for post processing here
    }
}
I have created an OSGi service. I want a new instance of my service to be created each time a service request comes in.
My code looks like this:
@Component(immediate = true)
@Service(serviceFactory = true)
@Property(name = EventConstants.EVENT_TOPIC, value = { DEPLOY, UNDEPLOY })
public class XyzHandler implements EventHandler {

    private Consumer consumer;

    public void setConsumer(Consumer consumer) {
        this.consumer = consumer;
    }

    @Override
    public void handleEvent(final Event event) {
        consumer.notify();
    }
}
public class Consumer {

    private DataSourceCache cache;

    public void notify() {
        updateCache(cache);
        System.out.println("cache updated");
    }

    public void updateCache(DataSourceCache cache) {
        cache = null;
    }
}
In my Consumer class, I want to access the service instance of XyzHandler and set its consumer attribute. I would also like a new service instance of XyzHandler to be created for each request.
I found a few articles mentioning that this can be achieved using OSGi Declarative Services annotations:
OSGi how to run mutliple instances of one service
But I want to achieve this without using DS 1.3.
How can I do this without using these annotations, or how can it be done with DS 1.2?
To me this looks like a case of having asked a question based on what you think the answer is rather than describing what you're trying to achieve. If we take a few steps back then a more elegant solution exists.
In general injecting objects into stateful services is a bad pattern in OSGi. It forces you to be really careful about the lifecycle, and risks memory leaks. From the example code it appears as though what you really want is for your Consumer to get notified when an event occurs on an Event Admin topic. The easiest way to do this would be to remove the XyzHandler from the equation and make the Consumer an Event Handler like this:
@Component(property = { EventConstants.EVENT_TOPIC + "=" + DEPLOY,
        EventConstants.EVENT_TOPIC + "=" + UNDEPLOY })
public class Consumer implements EventHandler {

    private DataSourceCache cache;

    @Override
    public void handleEvent(final Event event) {
        notify();
    }

    public void notify() {
        updateCache(cache);
        System.out.println("cache updated");
    }

    public void updateCache(DataSourceCache cache) {
        cache = null;
    }
}
If you really don't want to make your Consumer an EventHandler then it would still be easier to register the Consumer as a service and use the whiteboard pattern to get it picked up by a single XyzHandler:
@Component(service = Consumer.class)
public class Consumer {

    private DataSourceCache cache;

    public void notify() {
        updateCache(cache);
        System.out.println("cache updated");
    }

    public void updateCache(DataSourceCache cache) {
        cache = null;
    }
}

@Component(property = { EventConstants.EVENT_TOPIC + "=" + DEPLOY,
        EventConstants.EVENT_TOPIC + "=" + UNDEPLOY })
public class XyzHandler implements EventHandler {

    // Use a thread safe list for dynamic references!
    private List<Consumer> consumers = new CopyOnWriteArrayList<>();

    @Reference(cardinality = MULTIPLE, policy = DYNAMIC)
    void addConsumer(Consumer consumer) {
        consumers.add(consumer);
    }

    void removeConsumer(Consumer consumer) {
        consumers.remove(consumer);
    }

    @Override
    public void handleEvent(final Event event) {
        consumers.forEach(this::notify);
    }

    private void notify(Consumer consumer) {
        try {
            consumer.notify();
        } catch (Exception e) {
            // TODO log this?
        }
    }
}
Using the whiteboard pattern in this way avoids you needing to track which XyzHandler needs to be created/destroyed when a bundle is started or stopped, and will keep your code much cleaner.
It sounds like your service needs to be a prototype scope service. This was introduced in Core R6. DS 1.3, from Compendium R6, includes support for components to be prototype scope services.
But DS 1.2 predates Core R6 and thus has no knowledge or support for prototype scope services.
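For comparison, with the standard DS 1.3 annotations (org.osgi.service.component.annotations) a prototype scoped component would be declared roughly like this; this is a sketch of what the answer refers to, not something available in DS 1.2:

@Component(scope = ServiceScope.PROTOTYPE,
        property = { EventConstants.EVENT_TOPIC + "=" + DEPLOY,
                     EventConstants.EVENT_TOPIC + "=" + UNDEPLOY })
public class XyzHandler implements EventHandler {

    // With PROTOTYPE scope, each consumer that asks for a new instance
    // (e.g. via a ServiceObjects/ComponentServiceObjects reference) gets its own XyzHandler.
    @Override
    public void handleEvent(final Event event) {
        // ...
    }
}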
In my application I configure some channels as follows:
@Bean
public MessageChannel eventFilterChannel() {
    return new ExecutorChannel(asyncConfiguration.getAsyncExecutor());
}

@Bean
public MessageChannel processEventChannel() {
    return new ExecutorChannel(asyncConfiguration.getAsyncExecutor());
}
I am using ExecutorChannel with my custom Executor, which is configured as follows:
@Configuration
@EnableAsync
public class AsyncConfiguration extends AsyncConfigurerSupport {

    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(100);
        executor.setMaxPoolSize(100);
        executor.setQueueCapacity(1000);
        executor.setThreadNamePrefix("MyAppThread");
        executor.initialize();
        return executor;
    }
}
I have the following MessageEndpoint which is a subscriber to the eventFilterChannel channel:
@MessageEndpoint
public class MyEventFilter {

    @Filter(inputChannel = "eventFilterChannel", outputChannel = "processEventChannel")
    public boolean filterEvents(final MyEvent myEvent) {
        // filter logic
    }
}
Ideally, I would expect my event filter message endpoint to be multi-threaded because I am using ExecutorChannel. I would like to understand whether this is the correct implementation of a multi-threaded endpoint.
However, I am doubtful because I could see the following in my logs:
Channel 'application.eventFilterChannel' has 1 subscriber(s).
Is my implementation correct or is there a standard I can follow?
Well, that log line is a bit misleading. Your eventFilterChannel really has only one subscriber, your @Filter. But it is indeed multi-threaded: the same stateless component is used on several threads.
The ExecutorChannel hands incoming messages to the executor as tasks, and they are performed on the threads of the pool, in parallel. In our case the story is about message delivery. Not sure if the code helps you, but it looks like this:
public final boolean dispatch(final Message<?> message) {
    if (this.executor != null) {
        Runnable task = createMessageHandlingTask(message);
        this.executor.execute(task);
        return true;
    }
    return this.doDispatch(message);
}
Where that Runnable is like this:
public void run() {
    doDispatch(message);
}
...
handler.handleMessage(message);
This handler is exactly the subscriber for that @Filter.
So, the same method is called from different threads. Since this is a passive, stateless component, it is safe to keep a single instance and reuse it from different threads.
On the other hand, slightly off topic: if you add more subscribers to this channel, they are not going to be called in parallel anyway. By default a round-robin strategy is used: the handler for the next message is selected according to the index.
If one handler fails to process the message, the next one is tried, and so on. You can inject any other custom LoadBalancingStrategy implementation, though, or even reset it to null to always start from the first subscriber.
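For example, the strategy can be supplied when the channel is created (a sketch, assuming the two-argument ExecutorChannel constructor; passing null disables load balancing so dispatching always starts from the first subscriber):

@Bean
public MessageChannel eventFilterChannel() {
    // Second argument is the LoadBalancingStrategy; null means no round-robin.
    return new ExecutorChannel(asyncConfiguration.getAsyncExecutor(), null);
}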
I want to use Quartz Scheduler in my server application that uses HK2 for dependency injection. In order for Quartz jobs to have access to DI, they need to be DI-managed themselves. As a result, I wrote a super simple HK2-aware job factory and registered it with the scheduler.
It works fine for instantiating services, observing the requested @Singleton or @PerLookup scope. However, it fails to destroy() non-singleton services (i.e. jobs) after they have finished.
Question: how do I get HK2 to manage jobs properly, including tearing them down again?
Do I need to go down the path of creating the service via serviceLocator.getServiceHandle() and later manually destroying it, maybe from a JobListener (but how would I get the ServiceHandle to it)?
Hk2JobFactory.java
@Service
public class Hk2JobFactory implements JobFactory {

    private final Logger log = LoggerFactory.getLogger(getClass());

    @Inject
    ServiceLocator serviceLocator;

    @Override
    public Job newJob(TriggerFiredBundle bundle, Scheduler scheduler) throws SchedulerException {
        JobDetail jobDetail = bundle.getJobDetail();
        Class<? extends Job> jobClass = jobDetail.getJobClass();
        try {
            log.debug("Producing instance of Job '" + jobDetail.getKey() + "', class=" + jobClass.getName());
            Job job = serviceLocator.getService(jobClass);
            if (job == null) {
                log.debug("Unable to instantiate job via ServiceLocator, returning unmanaged instance.");
                return jobClass.newInstance();
            }
            return job;
        } catch (Exception e) {
            SchedulerException se = new SchedulerException(
                    "Problem instantiating class '"
                            + jobDetail.getJobClass().getName() + "'", e);
            throw se;
        }
    }
}
HelloWorldJob.java
@Service
@PerLookup
public class HelloWorldJob implements Job {

    private final Logger log = LoggerFactory.getLogger(this.getClass());

    @PostConstruct
    public void setup() {
        log.info("I'm born!");
    }

    @PreDestroy
    public void shutdown() {
        // it's never called... :-(
        log.info("And I'm dead again");
    }

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        log.info("Hello, world!");
    }
}
Similar to @jwells131313's suggestion, I have implemented a JobListener that destroy()s job instances where appropriate. To facilitate that, I pass the ServiceHandle along in the job's JobDataMap.
The only difference is that I'm quite happy with the @PerLookup scope.
Hk2JobFactory.java:
@Service
public class Hk2JobFactory implements JobFactory {

    private final Logger log = LoggerFactory.getLogger(getClass());

    @Inject
    ServiceLocator serviceLocator;

    @Override
    public Job newJob(TriggerFiredBundle bundle, Scheduler scheduler) throws SchedulerException {
        JobDetail jobDetail = bundle.getJobDetail();
        Class<? extends Job> jobClass = jobDetail.getJobClass();
        try {
            log.debug("Producing instance of job {} (class {})", jobDetail.getKey(), jobClass.getName());
            ServiceHandle sh = serviceLocator.getServiceHandle(jobClass);
            if (sh != null) {
                Class scopeAnnotation = sh.getActiveDescriptor().getScopeAnnotation();
                if (log.isTraceEnabled()) log.trace("Service scope is {}", scopeAnnotation.getName());
                if (scopeAnnotation == PerLookup.class) {
                    // @PerLookup scope means: needs to be destroyed after execution
                    jobDetail.getJobDataMap().put(Hk2CleanupJobListener.SERVICE_HANDLE_KEY, sh);
                }
                return jobClass.cast(sh.getService());
            }
            log.debug("Unable to instantiate job via ServiceLocator, returning unmanaged instance");
            return jobClass.newInstance();
        } catch (Exception e) {
            SchedulerException se = new SchedulerException(
                    "Problem instantiating class '"
                            + jobDetail.getJobClass().getName() + "'", e);
            throw se;
        }
    }
}
Hk2CleanupJobListener.java:
public class Hk2CleanupJobListener extends JobListenerSupport {

    public static final String SERVICE_HANDLE_KEY = "hk2_serviceHandle";

    private final Map<String, String> mdcCopy = MDC.getCopyOfContextMap();

    @Override
    public String getName() {
        return getClass().getSimpleName();
    }

    @Override
    public void jobWasExecuted(JobExecutionContext context, JobExecutionException jobException) {
        JobDetail jobDetail = context.getJobDetail();
        ServiceHandle sh = (ServiceHandle) jobDetail.getJobDataMap().get(SERVICE_HANDLE_KEY);
        if (sh == null) {
            if (getLog().isTraceEnabled()) getLog().trace("No serviceHandle found");
            return;
        }
        Class scopeAnnotation = sh.getActiveDescriptor().getScopeAnnotation();
        if (scopeAnnotation == PerLookup.class) {
            if (getLog().isTraceEnabled()) getLog().trace("Destroying job {} after it was executed (Class {})",
                    jobDetail.getKey(),
                    jobDetail.getJobClass().getName());
            sh.destroy();
        }
    }
}
Both are registered with the Scheduler.
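For reference, the registration looks roughly like this (a sketch; how you obtain the Scheduler and the Hk2JobFactory instance depends on your setup):

// Sketch: wiring both into Quartz.
void configureScheduler(Scheduler scheduler, Hk2JobFactory hk2JobFactory) throws SchedulerException {
    scheduler.setJobFactory(hk2JobFactory);
    scheduler.getListenerManager().addJobListener(new Hk2CleanupJobListener());
}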
For Singletons:
Seems like a Singleton service would NOT be destroyed when the job is finished, because it is a Singleton, right? If you are expecting the Singleton to be destroyed at the end of the Job then it seems like the service is more of a "JobScope" and not really a Singleton scope.
JobScope:
If "Jobs" follow certain rules then it might be an good candidate for an "Operation" scope (please see Operation Example). In particular jobs can be in an "Operation" scope if:
There can be many parallel jobs going at once
There can only be one job active on a thread at a time
Note that the above rules also mean that Jobs can exist on multiple threads at the same time or at different times. The most important rule is that on a single thread only one Job can be active at a time.
If those two rules apply then I highly recommend writing an Operation scope that's something like "JobScope".
This is how you could define a JobScope if Jobs follow the rules above:
@Scope
@Proxiable(proxyForSameScope = false)
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface JobScope {
}
And this would be the entire implementation of the corresponding Context:
@Singleton
public class JobScopeContext extends OperationContext<JobScope> {

    public Class<? extends Annotation> getScope() {
        return JobScope.class;
    }
}
You would then use the OperationManager service to start and stop Jobs when, you know, Jobs start and stop.
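A rough sketch of what that could look like, assuming the OperationManager API shown in the HK2 Operation Example (the JobScopeImpl literal, the JobScopeManager helper, and where exactly startJob/endJob get called are assumptions):

// Literal used to hand the scope annotation to the OperationManager (standard HK2 pattern).
public class JobScopeImpl extends AnnotationLiteral<JobScope> implements JobScope {
    public static final JobScope INSTANCE = new JobScopeImpl();
}

@Singleton
public class JobScopeManager {

    @Inject
    private OperationManager operationManager;

    private final ThreadLocal<OperationHandle<JobScope>> current = new ThreadLocal<>();

    // Call when a job starts executing on the current thread.
    public void startJob() {
        current.set(operationManager.createAndStartOperation(JobScopeImpl.INSTANCE));
    }

    // Call when the job finishes; closing the operation destroys the JobScope'd services.
    public void endJob() {
        OperationHandle<JobScope> handle = current.get();
        if (handle != null) {
            handle.closeOperation();
            current.remove();
        }
    }
}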
Even if Jobs do not follow the rules for an "Operation" you still might want to use a "JobScope" scope that would know to destroy its services when a "Job" comes to its end.
PerLookup:
So if your question is about PerLookup scope objects, you could run into some trouble, because you probably need the original ServiceHandle, which it sounds like you wouldn't have. In that case, and if you can at least find out that the original service WAS in fact in the PerLookup scope, you can use ServiceLocator.preDestroy to destroy the object.
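A sketch of that last option, e.g. from a Quartz JobListener (the class name and the injected ServiceLocator are assumptions; determining that the job really is PerLookup is left out):

public class PerLookupCleanupListener extends JobListenerSupport {

    @Inject
    private ServiceLocator serviceLocator;

    @Override
    public String getName() {
        return getClass().getSimpleName();
    }

    @Override
    public void jobWasExecuted(JobExecutionContext context, JobExecutionException jobException) {
        Job job = context.getJobInstance();
        // Assumes you have already determined that the job class is @PerLookup;
        // preDestroy runs the service's @PreDestroy methods.
        serviceLocator.preDestroy(job);
    }
}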
I have a JSF application where users create some files. The problem is that they must upload them and also download the confirmation messages, and the upload/download process is exclusive: only one user at a time, because the authentication requires a technical user/password. My question is, how can I make the waiting process transparent for the user, as a kind of protocol, for example:
waiting to get the connection
authentication
upload file
download confirmation file
done
Use a single thread executor.
@ManagedBean
@ApplicationScoped
public class FileManager {

    private ExecutorService executor;

    @PostConstruct
    public void init() {
        executor = Executors.newSingleThreadExecutor();
    }

    public Result process(Task task) throws InterruptedException, ExecutionException {
        return executor.submit(task).get();
    }

    @PreDestroy
    public void destroy() {
        executor.shutdownNow();
    }
}
Where Result is just your JavaBean object containing the desired result, and Task looks like this:
public class Task implements Callable<Result> {

    private Data data;

    public Task(Data data) {
        this.data = data;
    }

    @Override
    public Result call() throws Exception {
        Result result = process(data); // Do your upload/download/auth job here.
        return result;
    }
}
Data is just your JavaBean object containing the input data (the uploaded file?). Finally, invoke it from your managed bean as follows:
@ManagedBean
@RequestScoped
public class Bean {

    @ManagedProperty("#{fileManager}")
    private FileManager fileManager;

    public void submit() {
        try {
            Data data = prepareItSomehow();
            Result result = fileManager.process(new Task(data));
            // Now do your job with result.
        }
        catch (Exception e) {
            // Handle
        }
    }

    // Setter required for @ManagedProperty injection.
    public void setFileManager(FileManager fileManager) {
        this.fileManager = fileManager;
    }

    // ...
}
This way all tasks will be processed by a single thread in first-in, first-out order.
If your container supports EJB, then there are other ways.
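For example (a sketch, assuming an EJB 3.1+ container; the bean and method names are assumptions), a @Singleton session bean with container-managed write locking serializes the calls without an explicit executor:

@Singleton
@Lock(LockType.WRITE) // the container lets only one caller at a time into business methods
public class FileProcessorBean {

    public Result process(Data data) {
        // authenticate, upload the file, download the confirmation -- one request at a time
        return doProcess(data);
    }

    private Result doProcess(Data data) {
        // actual upload/download/auth work goes here
        return null;
    }
}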