executing a method in parallel from a call method - java

I have a library that customers use by passing in a DataRequest object, which contains a user id, a timeout, and some other fields. I use this DataRequest to build a URL, make an HTTP call with RestTemplate, and turn the JSON response from my service into a DataResponse object, which I return to the caller.
Below is my DataClient class, which the customer calls with a DataRequest object. In the getSyncData method I use the timeout value from the DataRequest to time out the request if it takes too long.
public class DataClient implements Client {

    private RestTemplate restTemplate = new RestTemplate();
    // first executor
    private ExecutorService service = Executors.newFixedThreadPool(15);

    @Override
    public DataResponse getSyncData(DataRequest key) {
        DataResponse response = null;
        Future<DataResponse> responseFuture = null;
        try {
            responseFuture = getAsyncData(key);
            response = responseFuture.get(key.getTimeout(), key.getTimeoutUnit());
        } catch (TimeoutException ex) {
            response = new DataResponse(DataErrorEnum.CLIENT_TIMEOUT, DataStatusEnum.ERROR);
            responseFuture.cancel(true);
            // logging exception here
        }
        return response;
    }

    @Override
    public Future<DataResponse> getAsyncData(DataRequest key) {
        DataFetcherTask task = new DataFetcherTask(key, restTemplate);
        Future<DataResponse> future = service.submit(task);
        return future;
    }
}
DataFetcherTask class:
public class DataFetcherTask implements Callable<DataResponse> {

    private DataRequest key;
    private RestTemplate restTemplate;

    public DataFetcherTask(DataRequest key, RestTemplate restTemplate) {
        this.key = key;
        this.restTemplate = restTemplate;
    }

    @Override
    public DataResponse call() throws Exception {
        // In a nutshell, below is what I am doing here:
        // 1. Make a URL using the DataRequest key.
        // 2. Execute the URL using RestTemplate.
        // 3. Make a DataResponse object and return it.
        // I refer to this whole logic in the call method as LogicA.
    }
}
As of now, my DataFetcherTask class is responsible for a single DataRequest key, as shown above.
Problem statement:
Now I have a small design change. The customer will pass a DataRequest (say keyA) to my library, and I will make a new HTTP call to another service (which my current design does not do) using the user id in keyA. That call returns a list of user ids, and for each returned user id I build another DataRequest (keyB, keyC, keyD). I then have a List<DataRequest> holding keyB, keyC and keyD. The list will contain at most three elements, that's all.
Now, for each DataRequest in that List<DataRequest>, I want to execute the DataFetcherTask.call method above in parallel, and then build a List<DataResponse> by adding the DataResponse for each key. So I will have three parallel calls to DataFetcherTask.call. The idea behind the parallel calls is to get the data for all (at most three) keys within the same global timeout value.
So my proposal is: DataFetcherTask will return a List<DataResponse> instead of a DataResponse, and the signatures of getSyncData and getAsyncData will change accordingly. Here is the algorithm:
Use the DataRequest object passed by the customer to build a List<DataRequest> by calling another HTTP service.
Make a parallel call to DataFetcherTask.call for each DataRequest in the List<DataRequest>, and return a List<DataResponse> to the customer instead of a single DataResponse.
This way, I can apply the same global timeout to step 1 together with step 2. If either step takes too long, we simply time out in the getSyncData method.
DataFetcherTask class after design change:
public class DataFetcherTask implements Callable<List<DataResponse>> {

    private DataRequest key;
    private RestTemplate restTemplate;

    // second executor here
    private ExecutorService executorService = Executors.newFixedThreadPool(10);

    public DataFetcherTask(DataRequest key, RestTemplate restTemplate) {
        this.key = key;
        this.restTemplate = restTemplate;
    }

    @Override
    public List<DataResponse> call() throws Exception {
        List<DataRequest> keys = generateKeys();
        CompletionService<DataResponse> comp = new ExecutorCompletionService<>(executorService);
        int count = 0;
        for (final DataRequest key : keys) {
            count++; // count the submitted tasks
            comp.submit(new Callable<DataResponse>() {
                @Override
                public DataResponse call() throws Exception {
                    return performDataRequest(key);
                }
            });
        }
        List<DataResponse> responseList = new ArrayList<DataResponse>();
        while (count-- > 0) {
            Future<DataResponse> future = comp.take();
            responseList.add(future.get());
        }
        return responseList;
    }

    // In this method I am making an HTTP call to another service
    // and then building the List<DataRequest> accordingly.
    private List<DataRequest> generateKeys() {
        List<DataRequest> keys = new ArrayList<>();
        // use the key object passed in the constructor to make an HTTP call to another service
        // and then make a List of DataRequest objects and return keys.
        return keys;
    }

    private DataResponse performDataRequest(DataRequest key) {
        // This will have all the LogicA code shown in my original design,
        // everything the same as before.
    }
}
Now my questions are:
Does it have to be like this? What is the right design for this problem? Having a call method invoked inside another call method looks weird.
Do we need two executors as in my code? Is there a better way to solve this problem, or any simplification/design change we can make?
I have simplified the code so that the idea of what I am trying to do comes across clearly.

As already mentioned in the comments on your question, you can use Java's ForkJoin framework. This will save you the extra thread pool within your DataFetcherTask.
You simply need to use a ForkJoinPool in your DataClient and convert your DataFetcherTask into a RecursiveTask (one of ForkJoinTask's subtypes). This allows you to easily execute the subtasks in parallel.
So, after these modifications your code will look something like this:
DataFetcherTask
The DataFetcherTask is now a RecursiveTask which first generates the keys and then invokes a subtask for each generated key. These subtasks are executed in the same ForkJoinPool as the parent task.
public class DataFetcherTask extends RecursiveTask<List<DataResponse>> {

    private final DataRequest key;
    private final RestTemplate restTemplate;

    public DataFetcherTask(DataRequest key, RestTemplate restTemplate) {
        this.key = key;
        this.restTemplate = restTemplate;
    }

    @Override
    protected List<DataResponse> compute() {
        // Create subtasks for the keys and invoke them
        List<DataRequestTask> requestTasks = requestTasks(generateKeys());
        invokeAll(requestTasks);
        // All tasks are finished if invokeAll() returns.
        List<DataResponse> responseList = new ArrayList<>(requestTasks.size());
        for (DataRequestTask task : requestTasks) {
            try {
                responseList.add(task.get());
            } catch (InterruptedException | ExecutionException e) {
                // TODO - Handle exception properly
                Thread.currentThread().interrupt();
                return Collections.emptyList();
            }
        }
        return responseList;
    }

    private List<DataRequestTask> requestTasks(List<DataRequest> keys) {
        List<DataRequestTask> tasks = new ArrayList<>(keys.size());
        for (DataRequest key : keys) {
            tasks.add(new DataRequestTask(key));
        }
        return tasks;
    }

    // In this method I am making an HTTP call to another service
    // and then building the List<DataRequest> accordingly.
    private List<DataRequest> generateKeys() {
        List<DataRequest> keys = new ArrayList<>();
        // use the key object passed in the constructor to make an HTTP call to another service
        // and then make a List of DataRequest objects and return keys.
        return keys;
    }

    /** Inner class for the subtasks. */
    private static class DataRequestTask extends RecursiveTask<DataResponse> {

        private final DataRequest request;

        public DataRequestTask(DataRequest request) {
            this.request = request;
        }

        @Override
        protected DataResponse compute() {
            return performDataRequest(this.request);
        }

        private DataResponse performDataRequest(DataRequest key) {
            // This will have all the LogicA code shown in my original design,
            // everything the same as before.
            return new DataResponse(DataErrorEnum.OK, DataStatusEnum.OK);
        }
    }
}
DataClient
The DataClient will not change much except for the new thread pool:
public class DataClient implements Client {

    private final RestTemplate restTemplate = new RestTemplate();
    // Replace the ExecutorService with a ForkJoinPool
    private final ForkJoinPool service = new ForkJoinPool(15);

    @Override
    public List<DataResponse> getSyncData(DataRequest key) {
        List<DataResponse> responseList = null;
        Future<List<DataResponse>> responseFuture = null;
        try {
            responseFuture = getAsyncData(key);
            responseList = responseFuture.get(key.getTimeout(), key.getTimeoutUnit());
        } catch (TimeoutException | ExecutionException | InterruptedException ex) {
            responseList = Collections.singletonList(new DataResponse(DataErrorEnum.CLIENT_TIMEOUT, DataStatusEnum.ERROR));
            responseFuture.cancel(true);
            // logging exception here
        }
        return responseList;
    }

    @Override
    public Future<List<DataResponse>> getAsyncData(DataRequest key) {
        DataFetcherTask task = new DataFetcherTask(key, this.restTemplate);
        return this.service.submit(task);
    }
}
Once you are on Java 8, you may consider changing the implementation to CompletableFuture. Then it would look something like this:
DataClientCF
public class DataClientCF {

    private final RestTemplate restTemplate = new RestTemplate();
    private final ExecutorService executor = Executors.newFixedThreadPool(15);

    public List<DataResponse> getData(DataRequest initialKey) {
        return CompletableFuture.supplyAsync(() -> generateKeys(initialKey), this.executor)
                .thenApply(requests -> requests.stream().map(this::supplyRequestAsync).collect(Collectors.toList()))
                .thenApply(responseFutures -> responseFutures.stream().map(future -> future.join()).collect(Collectors.toList()))
                .exceptionally(t -> { throw new RuntimeException(t); })
                .join();
    }

    private List<DataRequest> generateKeys(DataRequest key) {
        return new ArrayList<>();
    }

    private CompletableFuture<DataResponse> supplyRequestAsync(DataRequest key) {
        return CompletableFuture.supplyAsync(() -> new DataResponse(DataErrorEnum.OK, DataStatusEnum.OK), this.executor);
    }
}
As mentioned in the comments, Guava's ListenableFuture would provide similar functionality on Java 7, but without lambdas it tends to get clumsy.
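For illustration, a minimal sketch of the Java 7 / Guava variant, reusing generateKeys and performDataRequest from the question (initialKey, timeout and timeoutUnit are assumed to be in scope):

// Sketch only: Guava's listening decorator wraps a plain thread pool.
ListeningExecutorService service =
        MoreExecutors.listeningDecorator(Executors.newFixedThreadPool(15));

List<ListenableFuture<DataResponse>> futures = new ArrayList<>();
for (final DataRequest request : generateKeys(initialKey)) {
    futures.add(service.submit(new Callable<DataResponse>() {
        @Override
        public DataResponse call() throws Exception {
            return performDataRequest(request);
        }
    }));
}

// One future for the whole batch; it fails if any sub-request fails.
ListenableFuture<List<DataResponse>> all = Futures.allAsList(futures);
List<DataResponse> responses = all.get(timeout, timeoutUnit);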

As far as I know, RestTemplate is blocking, and the ForkJoinTask JavaDoc says:
Computations should avoid synchronized methods or blocks, and should minimize other blocking synchronization apart from joining other tasks or using synchronizers such as Phasers that are advertised to cooperate with fork/join scheduling. ...
Tasks should also not perform blocking IO,...
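If you want to keep the ForkJoin approach despite those blocking HTTP calls, the documented escape hatch is ForkJoinPool.ManagedBlocker, which lets the pool compensate for a blocked worker thread. A minimal sketch, wrapping the blocking performDataRequest from the question:

// Sketch only: wraps the blocking HTTP call so the ForkJoinPool
// can create a compensating worker thread while this one is blocked.
class BlockingRequest implements ForkJoinPool.ManagedBlocker {
    private final DataRequest key;
    private DataResponse response;

    BlockingRequest(DataRequest key) {
        this.key = key;
    }

    @Override
    public boolean block() {
        response = performDataRequest(key); // the blocking RestTemplate call
        return true;                        // no further blocking needed
    }

    @Override
    public boolean isReleasable() {
        return response != null; // already done, no need to block
    }

    DataResponse result() {
        return response;
    }
}

// usage inside a ForkJoinTask:
BlockingRequest request = new BlockingRequest(key);
ForkJoinPool.managedBlock(request); // throws InterruptedException
DataResponse response = request.result();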
That said, the call within a call is redundant, and you don't need two executors. You can also return a partial result from getSyncData(DataRequest key). It can be done like this:
DataClient.java
public class DataClient implements Client {

    private RestTemplate restTemplate = new RestTemplate();
    // the single executor
    private ExecutorService service = Executors.newFixedThreadPool(15);

    @Override
    public List<DataResponse> getSyncData(DataRequest key) {
        List<DataResponse> responseList = null;
        DataFetcherResult response = null;
        try {
            response = getAsyncData(key);
            responseList = response.get(key.getTimeout(), key.getTimeoutUnit());
        } catch (TimeoutException ex) {
            response.cancel(true);
            responseList = response.getPartialResult();
        }
        return responseList;
    }

    @Override
    public DataFetcherResult getAsyncData(DataRequest key) {
        List<DataRequest> keys = generateKeys(key);
        final List<Future<DataResponse>> responseList = new ArrayList<>();
        final CountDownLatch latch = new CountDownLatch(keys.size()); // assume keys is not null
        for (final DataRequest _key : keys) {
            responseList.add(service.submit(new Callable<DataResponse>() {
                @Override
                public DataResponse call() throws Exception {
                    DataResponse response = null;
                    try {
                        response = performDataRequest(_key);
                    } finally {
                        latch.countDown();
                        return response;
                    }
                }
            }));
        }
        return new DataFetcherResult(responseList, latch);
    }

    // In this method I am making an HTTP call to another service
    // and then building the List<DataRequest> accordingly.
    private List<DataRequest> generateKeys(DataRequest key) {
        List<DataRequest> keys = new ArrayList<>();
        // use the key object passed in the constructor to make an HTTP call to another service
        // and then make a List of DataRequest objects and return keys.
        return keys;
    }

    private DataResponse performDataRequest(DataRequest key) {
        // This will have all the LogicA code shown in my original design,
        // everything the same as before.
        return null;
    }
}
DataFetcherResult.java
public class DataFetcherResult implements Future<List<DataResponse>> {

    final List<Future<DataResponse>> futures;
    final CountDownLatch latch;

    public DataFetcherResult(List<Future<DataResponse>> futures, CountDownLatch latch) {
        this.futures = futures;
        this.latch = latch;
    }

    // non-blocking
    public List<DataResponse> getPartialResult() {
        List<DataResponse> result = new ArrayList<>(futures.size());
        for (Future<DataResponse> future : futures) {
            try {
                result.add(future.isDone() ? future.get() : null);
                // instead of null you can return new DataResponse(DataErrorEnum.NOT_READY, DataStatusEnum.ERROR);
            } catch (InterruptedException | ExecutionException e) {
                e.printStackTrace();
                // ExecutionException or CancellationException could be thrown, especially if DataFetcherResult was cancelled.
                // You can handle them here and return a DataResponse with the corresponding DataErrorEnum and DataStatusEnum.
            }
        }
        return result;
    }

    @Override
    public List<DataResponse> get() throws ExecutionException, InterruptedException {
        List<DataResponse> result = new ArrayList<>(futures.size());
        for (Future<DataResponse> future : futures) {
            result.add(future.get());
        }
        return result;
    }

    @Override
    public List<DataResponse> get(long timeout, TimeUnit timeUnit)
            throws ExecutionException, InterruptedException, TimeoutException {
        if (latch.await(timeout, timeUnit)) {
            return get();
        }
        throw new TimeoutException(); // or return getPartialResult()
    }

    @Override
    public boolean cancel(boolean mayInterruptIfRunning) {
        boolean cancelled = true;
        for (Future<DataResponse> future : futures) {
            cancelled &= future.cancel(mayInterruptIfRunning);
        }
        return cancelled;
    }

    @Override
    public boolean isCancelled() {
        boolean cancelled = true;
        for (Future<DataResponse> future : futures) {
            cancelled &= future.isCancelled();
        }
        return cancelled;
    }

    @Override
    public boolean isDone() {
        boolean done = true;
        for (Future<DataResponse> future : futures) {
            done &= future.isDone();
        }
        return done;
    }
    // etc.
}
I wrote it with a CountDownLatch and it looks good, but note there is a nuance.
You can get stuck for a little while in DataFetcherResult.get(long timeout, TimeUnit timeUnit), because the CountDownLatch is not synchronized with the futures' state. It can happen that latch.getCount() == 0 while not all futures report future.isDone() == true at the same moment: they have already passed latch.countDown() inside the Callable's finally {} block, but have not yet changed their internal state, which is still NEW.
So calling get() inside get(long timeout, TimeUnit timeUnit) can cause a small delay.
A similar case was described here.
The get with timeout, DataFetcherResult.get(...), could be rewritten using the futures' own future.get(long timeout, TimeUnit timeUnit), and then you can remove the CountDownLatch from the class:
public List<DataResponse> get(long timeout, TimeUnit timeUnit)
        throws ExecutionException, InterruptedException {
    List<DataResponse> result = new ArrayList<>(futures.size());
    long timeoutMs = timeUnit.toMillis(timeout);
    boolean timedOut = false;
    for (Future<DataResponse> future : futures) {
        long beforeGet = System.currentTimeMillis();
        try {
            if (!timedOut && timeoutMs > 0) {
                result.add(future.get(timeoutMs, TimeUnit.MILLISECONDS));
                timeoutMs -= System.currentTimeMillis() - beforeGet;
            } else {
                if (future.isDone()) {
                    result.add(future.get());
                } else {
                    // result.add(new DataResponse(DataErrorEnum.NOT_READY, DataStatusEnum.ERROR)); ?
                }
            }
        } catch (TimeoutException e) {
            result.add(new DataResponse(DataErrorEnum.TIMEOUT, DataStatusEnum.ERROR));
            timedOut = true;
        }
        // you can also handle ExecutionException or CancellationException here
    }
    return result;
}
This code is given as an example and should be tested before being used in production, but it seems legit :)

Related

Mocking a CompletableFuture

I am trying to create a unit test for the following code. The code uses the AWS Java 2 SDK. It calls selectObjectContent on the S3AsyncClient class, which returns a CompletableFuture (https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3AsyncClient.html). My test throws a NullPointerException when invoking future.get().
Here is the method I want to unit test.
public <T> Collection<T> queryWithS3Select(
        List<String> s3Keys,
        String s3SelectQuery,
        InputSerialization inputSerialization,
        Class<T> modelObject,
        Comparator<T> comparator
) throws ExecutionException, InterruptedException, IOException {
    TreeSet<T> collection = new TreeSet<>(comparator);
    List<SelectObjectContentRequest> selectObjectContentRequest =
            buildS3SelectRequests(s3Keys, s3SelectQuery, inputSerialization);
    S3SelectContentHandler s3SelectContentHandler = new S3SelectContentHandler();
    StringBuilder selectionResult = new StringBuilder();
    for (SelectObjectContentRequest socr : selectObjectContentRequest) {
        CompletableFuture<Void> future = s3AsyncClient.selectObjectContent(socr, s3SelectContentHandler);
        future.get();
        s3SelectContentHandler.getReceivedEvents().forEach(e -> {
            if (e.sdkEventType() == SelectObjectContentEventStream.EventType.RECORDS) {
                RecordsEvent response = (RecordsEvent) e;
                selectionResult.append(response.payload().asUtf8String());
            }
        });
    }
    JsonParser parser = objectMapper.createParser(selectionResult.toString());
    collection.addAll(Lists.newArrayList(objectMapper.readValues(parser, modelObject)));
    return collection;
}
My unit test so far is below. Running this code I get a NullPointerException at the future.get() line above. How can I make the mock s3AsyncClient return a valid future?
@Mock
private S3AsyncClient s3AsyncClient;

@Test
public void itShouldReturnQueryResults() throws IOException, ExecutionException, InterruptedException {
    List<String> keysToQuery = List.of("key1", "key2");
    InputSerialization inputSerialization = InputSerialization.builder()
            .json(JSONInput.builder().type(JSONType.DOCUMENT).build())
            .compressionType(String.valueOf(CompressionType.GZIP))
            .build();
    Comparator<S3SelectObject> comparator =
            Comparator.comparing((S3SelectObject e) -> e.getStartTime());
    underTest.queryWithS3Select(keysToQuery, S3_SELECT_QUERY, inputSerialization, S3SelectObject.class, comparator);
}
Here is the S3SelectContentHandler
public class S3SelectContentHandler implements SelectObjectContentResponseHandler {

    private SelectObjectContentResponse response;
    private List<SelectObjectContentEventStream> receivedEvents = new ArrayList<>();
    private Throwable exception;

    @Override
    public void responseReceived(SelectObjectContentResponse response) {
        this.response = response;
    }

    @Override
    public void onEventStream(SdkPublisher<SelectObjectContentEventStream> publisher) {
        publisher.subscribe(receivedEvents::add);
    }

    @Override
    public void exceptionOccurred(Throwable throwable) {
        exception = throwable;
    }

    @Override
    public void complete() {}

    public List<SelectObjectContentEventStream> getReceivedEvents() {
        return receivedEvents;
    }
}
I will share a unit test for similar functionality and show you how to work with a CompletableFuture when the code under test calls .join() to continue execution.
Code under test
private final S3AsyncClient s3AsyncClient;

public long getSize(final S3AsyncClient client, final String bucket, final String key) {
    return client.headObject(HeadObjectRequest.builder().bucket(bucket).key(key).build()).join().contentLength();
}
In this code, client.headObject() returns the CompletableFuture<HeadObjectResponse> that we are going to mock and verify in the unit test shown below.
@Test
@DisplayName("Verify getSize returns the size of the given key in the bucket")
void verifyGetSizeReturnsSizeOfFileInS3() {
    CompletableFuture<HeadObjectResponse> headObjectResponseCompletableFuture =
            CompletableFuture.completedFuture(HeadObjectResponse.builder().contentLength(20000L).build());
    when(s3AsyncClient.headObject(headObjectRequestArgumentCaptor.capture()))
            .thenReturn(headObjectResponseCompletableFuture);

    long size = s3Service.getSize(s3AsyncClient, "somebucket", "someFile");

    assertThat(headObjectRequestArgumentCaptor.getValue())
            .hasFieldOrPropertyWithValue("bucket", "somebucket")
            .hasFieldOrPropertyWithValue("key", "someFile");
    assertThat(size).isEqualTo(20000L);
}
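Back to the question above: a Mockito mock returns null for any method that has not been stubbed, which is why future.get() throws a NullPointerException. A minimal sketch of a stub that makes the mocked s3AsyncClient return an already-completed future (assuming Mockito's when/any static imports; note the handler created inside queryWithS3Select will simply receive no events, so the resulting collection will be empty):

// Sketch only: unstubbed mock methods return null, so stub
// selectObjectContent to hand back a completed future instead.
when(s3AsyncClient.selectObjectContent(
        any(SelectObjectContentRequest.class),
        any(SelectObjectContentResponseHandler.class)))
    .thenReturn(CompletableFuture.<Void>completedFuture(null));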

Facing a race condition in Flink connected streams (Apache Flink)

I am facing a race condition while implementing a process function on Flink connected streams. I have a cache map that is shared between the two functions processElement1 and processElement2, which are called in parallel by two different threads.
Streams1 ---> (sending offer data)
Streams2 ---> (sending lms (loyalty management system) data)
connect = Streams1.connect(Streams2);
connect.process(new TriggerStream());
In the TriggerStream class I store the data using a unique id, MemberId, as the key to store and look up data in the cache. When the data is flowing in, I am not getting consistent results.
class LRUConcurrentCache<K, V> {

    private final Map<K, V> cache;
    private final int maxEntries;

    public LRUConcurrentCache(final int maxEntries) {
        this.maxEntries = maxEntries;
        this.cache = new LinkedHashMap<K, V>(maxEntries, 0.75F, true) {
            private static final long serialVersionUID = -1236481390177598762L;

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    // Why can't we lock on the key?
    public void put(K key, V value) {
        synchronized (key) {
            cache.put(key, value);
        }
    }

    // get method
    public V get(K key) {
        synchronized (key) {
            return cache.get(key);
        }
    }
}
public class TriggerStream extends CoProcessFunction<IOffer, LMSData, String> {

    private static final long serialVersionUID = 1L;

    LRUConcurrentCache<String, String> cache;
    private String offerNode;
    String updatedValue, retrievedValue;
    Subscriber subscriber;

    TriggerStream() {
        this.cache = new LRUConcurrentCache<>(10);
    }

    @Override
    public void processElement1(IOffer offer) throws Exception {
        try {
            ObjectMapper mapper = new ObjectMapper();
            mapper.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, false);
            mapper.enableDefaultTyping();
            IOffer latestOffer = offer;
            // Check whether the subscriber is there or not
            retrievedValue = cache.get(latestOffer.getMemberId().toString());
            if (retrievedValue == null) {
                // Subscriber is the class that is converted to a JSON string and then stored in the map
                Subscriber subscriber = new Subscriber();
                subscriber.setMemberId(latestOffer.getMemberId());
                ArrayList<IOffer> offerList = new ArrayList<IOffer>();
                offerList.add(latestOffer);
                subscriber.setOffers(offerList);
                updatedValue = mapper.writeValueAsString(subscriber);
                cache.put(subscriber.getMemberId().toString(), updatedValue);
            } else {
                Subscriber subscriber = mapper.readValue(retrievedValue, Subscriber.class);
                List<IOffer> offers = subscriber.getOffers();
                offers.add(latestOffer);
                updatedValue = mapper.writeValueAsString(subscriber);
                cache.put(subscriber.getMemberId().toString(), updatedValue);
            }
        } catch (Exception pb) {
            applicationlogger.error("Exception in Offer Loading:" + pb);
            applicationlogger.debug("*************************FINISHED OFFER LOADING*******************************");
        }
        applicationlogger.debug("*************************FINISHED OFFER LOADING*******************************");
    }

    @Override
    public void processElement2(LMSData lms) throws Exception {
        try {
            ObjectMapper mapper = new ObjectMapper();
            mapper.configure(SerializationFeature.FAIL_ON_EMPTY_BEANS, false);
            mapper.enableDefaultTyping();
            // Check whether the subscriber is there or not
            retrievedValue = cache.get(lms.getMemberId().toString());
            if (retrievedValue != null) {
                Subscriber subscriber = mapper.readValue(retrievedValue, Subscriber.class);
                // do some calculations
                String updatedValue = mapper.writeValueAsString(subscriber);
                // Update value
                cache.put(subscriber.getMemberId().toString(), updatedValue);
            }
        } catch (Exception pb) {
            applicationlogger.error("Exception in Offer Loading:" + pb);
            applicationlogger.debug("*************************FINISHED OFFER LOADING*******************************");
        }
        applicationlogger.debug("*************************FINISHED OFFER LOADING*******************************");
    }
}
Flink gives no guarantees about the order in which a CoProcessFunction (or any other Co*Function) ingests the data. Maintaining some kind of deterministic order across distributed, parallel tasks would be too expensive.
Instead, you have to work around that in your function with state and possibly timers. The LRUCache in your function should be maintained as state (probably keyed state); otherwise it will be lost in case of a failure. You can add another piece of state for the first stream and buffer its records until the lookup value from the second stream has arrived; a sketch of the keyed-state approach follows.
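A minimal sketch of that keyed-state approach, assuming both streams are keyed by member id before connect() and the subscriber JSON is kept as a String (the names are illustrative, not the poster's actual code):

// Sketch only: assumes connect() is applied to keyed streams, e.g.
// stream1.keyBy(o -> o.getMemberId()).connect(stream2.keyBy(l -> l.getMemberId()))
public class TriggerStream extends CoProcessFunction<IOffer, LMSData, String> {

    // Keyed state: one subscriber JSON per member id, fault tolerant and
    // scoped to the current key, so no shared cache and no race condition.
    private transient ValueState<String> subscriberState;

    @Override
    public void open(Configuration parameters) {
        subscriberState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("subscriber", String.class));
    }

    @Override
    public void processElement1(IOffer offer, Context ctx, Collector<String> out) throws Exception {
        String subscriberJson = subscriberState.value();
        // build or update the Subscriber for this member id, then:
        subscriberState.update(subscriberJson);
    }

    @Override
    public void processElement2(LMSData lms, Context ctx, Collector<String> out) throws Exception {
        String subscriberJson = subscriberState.value();
        if (subscriberJson != null) {
            // do the calculations and write back
            subscriberState.update(subscriberJson);
        }
    }
}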

Task execution in Java web application

I'm developing a Spring MVC web application. One of its features is file conversion (uploading a file -> converting it -> storing it on the server).
Some files could be too big to convert on the fly, so I decided to put them in a shared queue after upload.
Files will be converted with priority based on upload time, i.e. FIFO.
My idea is to add a task to the queue in the controller after the upload.
There would also be a service executing all tasks in the queue and, if it is empty, waiting until a new task is added. I don't need scheduling; tasks should execute whenever the queue is not empty.
I've read about ExecutorService but I didn't find any example that fits my case.
I'd appreciate any suggestions.
EDIT
Thanks for the answers; I need to clarify my problem:
Basically, I know how to execute tasks; what I need is a way to manage the queue of tasks. The user should be able to view the queue and pause, resume or remove a task from it.
My task class:
public class ConvertTask implements Callable<String> {

    private Converter converter;
    private File source;
    private File target;
    private State state;
    private User user;

    public ConvertTask(Converter converter, File source, File target, User user) {
        this.converter = converter;
        this.source = source;
        this.target = target;
        this.user = user;
        this.state = State.READY;
    }

    @Override
    public String call() throws Exception {
        if (this.state == State.READY) {
            BaseConverterService converterService = ConverterUtils.getConverterService(this.converter);
            converterService.convert(this.source, this.target);
            MailSendServiceUtil.send(user.getEmail(), target.getName());
            return "success";
        }
        return "task not ready";
    }
}
I also created a class responsible for managing the queue/tasks, following your suggestions:
@Component
public class MyExecutorService {

    private LinkedBlockingQueue<ConvertTask> converterQueue = new LinkedBlockingQueue<>();
    private ExecutorService executorService = Executors.newSingleThreadExecutor();

    public void add(ConvertTask task) throws InterruptedException {
        converterQueue.put(task);
    }

    public void execute() throws InterruptedException, ExecutionException {
        while (!converterQueue.isEmpty()) {
            ConvertTask task = converterQueue.peek();
            Future<String> statusFuture = executorService.submit(task);
            String status = statusFuture.get();
            converterQueue.take();
        }
    }
}
My point is: how do I execute tasks while the queue is not empty, and resume when a new task is added after the queue was previously empty? I am thinking of some code that fits in the add(ConvertTask task) method.
Edited after question updates
You don't need to create any queue for the tasks, since the ThreadPoolExecutor implementation has its own queue. Here's the source code of Oracle's Java 8 implementation of the newSingleThreadExecutor() method:
public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}
So you just submit a new task directly, and it gets queued by the ThreadPoolExecutor:
@Component
public class MyExecutorService {

    private ExecutorService executorService = Executors.newSingleThreadExecutor();

    public void add(ConvertTask task) throws InterruptedException {
        Future<String> statusFuture = executorService.submit(task);
    }
}
If you're worried about the bounds of your queue, you can create a queue instance explicitly and supply it to the ThreadPoolExecutor constructor:
private final ExecutorService executorService = new ThreadPoolExecutor(1, 1,
        0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>(MAX_SIZE));
Please note that I have removed the line
String status = statusFuture.get();
because the get() call is blocking. If you have this line in the same thread where you submit, your code is not asynchronous anymore. You should store the Future objects and check them asynchronously in a different thread, or consider using CompletableFuture, introduced in Java 8. Check out this post.
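For instance, a minimal sketch of the CompletableFuture variant of the service above (reusing ConvertTask from the question; the completion callback is just a placeholder):

@Component
public class MyExecutorService {

    private final ExecutorService executorService = Executors.newSingleThreadExecutor();

    // Sketch only: submit the conversion and attach the follow-up
    // asynchronously instead of blocking on get().
    public CompletableFuture<String> add(ConvertTask task) {
        return CompletableFuture
                .supplyAsync(() -> {
                    try {
                        return task.call(); // runs on the single-threaded executor
                    } catch (Exception e) {
                        throw new CompletionException(e);
                    }
                }, executorService)
                .whenComplete((status, error) -> {
                    // handle the result here (logging, notifying the user, ...)
                    // without ever blocking the submitting thread
                });
    }
}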
After the upload you should return a response immediately; the client can't wait for the resource for too long (although you can change that in the client settings). Anyway, if you are running a background task, you can do it without interacting with the client, or notify the client while execution is in progress. Here is an example of a Callable used via the executor service:
/**
 * Created by Roma on 17.02.2015.
 */
class SumTask implements Callable<Integer> {

    private int num = 0;

    public SumTask(int num) {
        this.num = num;
    }

    @Override
    public Integer call() throws Exception {
        int result = 0;
        for (int i = 1; i <= num; i++) {
            result += i;
        }
        return result;
    }
}
public class CallableDemo {

    Integer result;
    Integer num;

    public Integer getNumValue() {
        return 123;
    }

    public Integer getNum() {
        return num;
    }

    public void setNum(Integer num) {
        this.num = num;
    }

    public Integer getResult() {
        return result;
    }

    public void setResult(Integer result) {
        this.result = result;
    }

    ExecutorService service = Executors.newSingleThreadExecutor();

    public String execute() {
        try {
            Future<Integer> future = service.submit(new SumTask(num));
            result = future.get();
            //System.out.println(result);
            service.shutdown();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return "showlinks";
    }
}

RxJava onErrorReturn: returning a different type

An Observable is tied to a type. In onError I don't want to return the same type but a different object, for example a Response object with status=400. How can I achieve this?
public class Test {

    @Autowired
    private Server server;

    public Response getResponse(String id) {
        Observable<Person> personObservable = server.get(id);
        ExecutorService executorService = Executors.newFixedThreadPool(100);
        List<Person> persons = new ArrayList<Person>();
        personObservable.onErrorReturn(new Func1<Throwable, Person>() {
            @Override
            public Person call(Throwable throwable) {
                // I would like to return an HttpResponseObject taking the message
                // from the throwable's error information; how do I do that?
                // How to use transform() in this case?
                return null;
            }
        }).subscribeOn(Schedulers.from(executorService)).subscribe(new Action1<Person>() {
            // If I use subscribe(), will it not be async?
            // I think subscribe still runs on the main thread, so is this
            // subscribeOn use fine?
            @Override
            public void call(Person person) {
                // Is it fine to use the list outside the observable?
                persons.add(person);
            }
        });
        Response r = new Response();
        r.addPersons(persons);
        return r;
    }
}
Use onErrorResumeNext:
Observable<Person> personObservable = ...;
return personObservable
        .toList()
        .map(persons -> new Response(persons))
        .onErrorResumeNext(error -> Observable.just(new Response(error.getMessage())))
        .toBlocking().single();
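Note that toBlocking().single() deliberately makes the call synchronous: the thread blocks until the Observable emits its single Response, which fits the synchronous getResponse(String id) signature. The point of the pattern is that the error case is folded into the stream as a regular Response value (via onErrorResumeNext) instead of being thrown, so the caller always gets a Response back.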

Java ExecutorService Task Spawning

I have an ExecutorService that is used to handle a stream of tasks. The tasks are represented by my DaemonTask class, and each task builds a response object which is passed to a response call (outside the scope of this question). I am using a switch statement to spawn the appropriate task based on a task id int. It looks something like:
// in my api listening thread
executorService.submit(DaemonTask.buildTask(taskID));

// daemon task class
public abstract class DaemonTask implements Runnable {

    public static DaemonTask buildTask(int taskID) {
        switch (taskID) {
            case TASK_A_ID: return new WiggleTask();
            case TASK_B_ID: return new WobbleTask();
            // ...very long list...
            case TASK_ZZZ_ID: return new WaggleTask();
            default: throw new IllegalArgumentException("unknown task id: " + taskID);
        }
    }

    public void run() {
        respond(execute());
    }

    public abstract Response execute();
}
All of my task classes (such as WiggleTask) extend DaemonTask and provide an implementation for the execute() method.
My question is simply: is this pattern reasonable? Something feels wrong when I look at my huge switch with all its return statements. I have tried to come up with a more elegant lookup-table solution using reflection in some way, but can't seem to figure out an approach that would work.
Do you really need so many classes? You could have one method per taskId.
final ResponseHandler handler = ... // has many methods.
// use a map or array or enum to translate taskIDs into method names.
final Method method = handler.getClass().getMethod(taskArray[taskID]);

executorService.submit(new Callable<Void>() {
    public Void call() throws Exception {
        method.invoke(handler);
        return null; // Callable<Void> must return a value
    }
});
If you have to have many classes, you can do
// use a map or array or enum to translate taskIDs into class names.
final Runnable runs = (Runnable) Class.forName(taskClassArray[taskID]).newInstance();

executorService.submit(new Callable<Void>() {
    public Void call() throws Exception {
        runs.run();
        return null; // Callable<Void> must return a value
    }
});
You can use an enum:
public enum TaskBuilder
{
    // Task definitions
    TASK_A_ID(1) {
        @Override
        public DaemonTask newTask()
        {
            return new WiggleTask();
        }
    },
    // etc
    ;

    // Build lookup map
    private static final Map<Integer, TaskBuilder> LOOKUP_MAP
        = new HashMap<Integer, TaskBuilder>();

    static {
        for (final TaskBuilder builder : values())
            LOOKUP_MAP.put(builder.taskID, builder);
    }

    private final int taskID;

    public abstract DaemonTask newTask();

    TaskBuilder(final int taskID)
    {
        this.taskID = taskID;
    }

    // Note: null needs to be handled somehow
    public static TaskBuilder fromTaskID(final int taskID)
    {
        return LOOKUP_MAP.get(taskID);
    }
}
With such an enum, you can then do:
TaskBuilder.fromTaskID(taskID).newTask();
Another possibility is to use a constructor field instead of a method; that is, you use reflection. It is much easier to write and it works OK, but exception handling then becomes nothing short of a nightmare:
private enum TaskBuilder
{
    TASK_ID_A(1, WiggleTask.class),
    // others
    ;

    // Build lookup map
    private static final Map<Integer, TaskBuilder> LOOKUP_MAP
        = new HashMap<Integer, TaskBuilder>();

    static {
        for (final TaskBuilder builder : values())
            LOOKUP_MAP.put(builder.index, builder);
    }

    private final int index;
    private final Constructor<? extends DaemonTask> constructor;

    TaskBuilder(final int index, final Class<? extends DaemonTask> c)
    {
        this.index = index;
        // This can fail...
        try {
            constructor = c.getConstructor();
        } catch (NoSuchMethodException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Ewww, three exceptions :(
    public DaemonTask newTask()
        throws IllegalAccessException, InvocationTargetException,
               InstantiationException
    {
        return constructor.newInstance();
    }

    // Note: null needs to be handled somehow
    public static TaskBuilder fromTaskID(final int taskID)
    {
        return LOOKUP_MAP.get(taskID);
    }
}
This enum can be used the same way as the other one.
