How to make other threads wait for a given task result - java

I have a heavy server operation, let's call it String getData(). I always want an up-to-date version of this data, so I don't cache the result.
My goal is to avoid more than one getData() call running at the same time: every call made after the first one (but before the first request ends) should wait for, and return, the first result.
Example:
Thread 1            Thread 2             Thread 3
getData()
                    getData() [waiting]
                                         getData() [waiting]
result1 received    return result1       return result1
getData()
result2 received
return result2
how can I achieve that?

My rather inelegant idea is to store a Future when the first call comes in, and return this same future to other calls received while the first call is still pending. Then, when the first call completes, discard this Future, and create a new one when the next request comes in:
class OneAtATime<T> {
    private final ExecutorService executor = Executors.newFixedThreadPool(1);
    private final Supplier<T> supplier;
    private Future<T> future;

    OneAtATime(Supplier<T> supplier) {
        this.supplier = supplier;
    }

    synchronized Future<T> submit() {
        if (future == null) {
            CompletableFuture<T> result = CompletableFuture.supplyAsync(supplier, executor);
            // Once this computation completes, forget it so the next call starts a fresh one.
            result.thenRunAsync(() -> {
                synchronized (OneAtATime.this) {
                    future = null;
                }
            }, executor);
            future = result;
        }
        return future;
    }
}
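
For illustration, a minimal sketch of how callers might use such a class; getDataFromServer() is a stand-in for the heavy server call and is not part of the original code:

// Three threads submit at roughly the same time; they share the same pending
// Future, so the supplier runs only once for that round.
OneAtATime<String> oneAtATime = new OneAtATime<>(() -> getDataFromServer());

Runnable caller = () -> {
    try {
        String data = oneAtATime.submit().get(); // waits for the shared result
        System.out.println(Thread.currentThread().getName() + " got " + data);
    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    }
};
for (int i = 0; i < 3; i++) {
    new Thread(caller).start();
}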

An easy solution that does not involve any extra thread is to use ConcurrentHashMap#computeIfAbsent:
private final ConcurrentHashMap<String, String> instance =
        new ConcurrentHashMap<>(1);

private String getData() {
    final AtomicBoolean computed = new AtomicBoolean(false);
    String data = instance.computeIfAbsent("KEY", key -> {
        String value = internalGetData();
        computed.set(true);
        return value;
    });
    // Only the thread that actually performed the computation clears the entry,
    // so the next call triggers a fresh request.
    if (computed.get()) {
        instance.clear();
    }
    return data;
}

private String internalGetData() {
    // ...
}
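
To see why this blocks concurrent callers: while one thread is inside the mapping function, other threads calling computeIfAbsent for the same key wait and then receive the computed value. A small illustrative check (the demo method and thread pool are my own additions):

// Two concurrent callers: whichever arrives while the computation is in
// progress blocks inside computeIfAbsent and sees the same value.
void demo() throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    Future<String> first = pool.submit(() -> getData());
    Future<String> second = pool.submit(() -> getData());
    System.out.println(first.get().equals(second.get())); // true when the calls overlap
    pool.shutdown();
}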

You are describing the functionality of a BlockingQueue.
BlockingQueue<Data> dataQueue = new ArrayBlockingQueue<>(1);
Now all you need to do is call dataQueue.take(), and only one thread will get its own data.
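
A minimal sketch of that hand-off, reusing the internalGetData() helper from the answer above; note that take() gives each element to exactly one consumer, so every waiting thread needs its own element (or a different sharing mechanism) if they are all supposed to see the result:

BlockingQueue<String> dataQueue = new ArrayBlockingQueue<>(1);

// Producer: performs the heavy call once and publishes the result.
new Thread(() -> {
    try {
        dataQueue.put(internalGetData());
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}).start();

// Consumer: blocks until the result is available.
String data = dataQueue.take();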

Related

Why isn't this multithreading efficient in the for loop in this code?

I want to use multithreading to speed up queries made inside a for loop in a Spring application.
Some people say that the thread pool's get() method is blocking, so this is no different from writing the code without threads.
So how can this loop actually be sped up with threads?
Some languages, such as C# and JavaScript, would use await remoteCall(id); does the same problem exist with 'await'?
@Data
class ResultDto {
    private BaseData baseData;
}

@Data
public class BaseData {
    public String baseInfo;
    public String remoteInfo;
}

ExecutorService executor = Executors.newCachedThreadPool();

public List<ResultDto> queryAll(List<String> ids) throws ExecutionException, InterruptedException {
    List<ResultDto> res = new ArrayList<>();
    for (String id : ids) {
        ResultDto resultDto = new ResultDto();
        BaseData baseData = new BaseData();
        baseData.setBaseInfo("baseData" + id);
        // does this block the loop on every iteration?
        String remoteResult = executor.submit(() -> remoteCall(id)).get();
        baseData.setRemoteInfo(remoteResult);
        resultDto.setBaseData(baseData);
        res.add(resultDto);
    }
    return res;
}

String remoteCall(String id) {
    return " httpUtils.get()" + id;
}
With my limited experience in multithreaded programming, my code looks ugly.
How can I improve it?
Please help me rewrite the code above.
Do something like this.
ExecutorService executor = Executors.newCachedThreadPool();

public List<ResultDto> queryAll(List<String> ids) throws ExecutionException, InterruptedException {
    List<CompletableFuture<ResultDto>> cfs = new ArrayList<>(ids.size());
    for (String id : ids) {
        CompletableFuture<ResultDto> cf = CompletableFuture.supplyAsync(() -> getResult(id), executor);
        cfs.add(cf);
    }
    CompletableFuture<Void> allOfThem = CompletableFuture.allOf(cfs.toArray(new CompletableFuture[0]));
    CompletableFuture<List<ResultDto>> allCompletableFutures = allOfThem.thenApply(ignored ->
            cfs.stream()
               .map(CompletableFuture::join)
               .collect(Collectors.toList()));
    return allCompletableFutures.get();
}

ResultDto getResult(String id) {
    String remoteResult = " httpUtils.get()" + id;
    BaseData baseData = new BaseData();
    baseData.setBaseInfo("baseData" + id);
    baseData.setRemoteInfo(remoteResult);
    ResultDto resultDto = new ResultDto();
    resultDto.setBaseData(baseData);
    return resultDto;
}
This will give you a non-blocking solution (it only blocks at the end).
Or you can make it even easier and just use a parallelStream, which uses the default fork-join pool.
public List<ResultDto> queryAll(List<String> ids) throws ExecutionException, InterruptedException {
    return ids.parallelStream().map(id -> getResult(id)).collect(Collectors.toList());
}

ResultDto getResult(String id) {
    String remoteResult = " httpUtils.get()" + id;
    BaseData baseData = new BaseData();
    baseData.setBaseInfo("baseData" + id);
    baseData.setRemoteInfo(remoteResult);
    ResultDto resultDto = new ResultDto();
    resultDto.setBaseData(baseData);
    return resultDto;
}
You can probably decide which is better to read...
Some people say that the thread pool's get() method is blocking, so it is no different from writing the code without threads.
They are correct. If you call get() at that point, it immediately blocks until that particular task has completed. So the 2nd task isn't submitted until the first one completes ... and so on.
What you need to do is submit all of the tasks before calling get(). Something like the following pseudo-code.
List<Future> futures
for each id in ids:
    futures.add(executor.submit(...))
for each future in futures:
    result = future.get()
    results.add(process(result))
If you can arrange that each task does the processing of its result, you potentially get more parallelism.
As noted, you could use CompletableFuture.allOf instead of the second loop.
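
For illustration, a minimal concrete version of that two-phase pattern, reusing the executor field, remoteCall() and the DTOs from the question:

public List<ResultDto> queryAll(List<String> ids) throws ExecutionException, InterruptedException {
    // Phase 1: submit everything so all remote calls run concurrently.
    List<Future<String>> futures = new ArrayList<>(ids.size());
    for (String id : ids) {
        futures.add(executor.submit(() -> remoteCall(id)));
    }
    // Phase 2: collect results; get() now only waits for work that is already in flight.
    List<ResultDto> res = new ArrayList<>(ids.size());
    for (int i = 0; i < ids.size(); i++) {
        BaseData baseData = new BaseData();
        baseData.setBaseInfo("baseData" + ids.get(i));
        baseData.setRemoteInfo(futures.get(i).get());
        ResultDto resultDto = new ResultDto();
        resultDto.setBaseData(baseData);
        res.add(resultDto);
    }
    return res;
}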

Empty Map when using Executors

I'm trying to count, every minute, the records in a List whose type matches the given parameter and whose timestamp falls in a given time range, using the following method:
public Map<String, Object> getCountPerMinuteForType(final String type,
                                                    final long startTimestamp,
                                                    final long endTimestamp) {
    final Map<String, Object> countsPerMinForType = new HashMap<>();
    Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
        int counter = 0;
        List<Data> dataList = storage.retrieveData();
        for (Data data : dataList) {
            if (data.getType().equals(type) &&
                    data.getUnixTimestamp() >= startTimestamp &&
                    data.getUnixTimestamp() <= endTimestamp) {
                counter++;
            }
        }
        countsPerMinForType.put(type, counter);
    }, 0, 1, TimeUnit.MINUTES);
    return countsPerMinForType;
}
The problem is, this method returns an empty Map.
When I print the contents of the Map inside the scheduled task, I can see that it has data.
It happens because another thread is performing the put operation on the Map. The main thread starts that worker thread and then returns to the place where the method was called. To solve this issue, you can create a listener interface which gets called whenever the second thread has performed its task.
Below is a code sample which you can use and modify according to your needs.
class Test implements Listener {
    private Listener listener;

    public Test() {
        listener = this; // Set this class as your listener
    }

    // Make your function return nothing
    public void getCountPerMinuteForType(final String type, final long startTimestamp,
                                         final long endTimestamp) {
        final Map<String, Object> countsPerMinForType = new HashMap<>();
        ScheduledExecutorService service = Executors.newSingleThreadScheduledExecutor();
        service.scheduleAtFixedRate(() -> {
            int counter = 0;
            List<Data> dataList = storage.retrieveData();
            for (Data data : dataList) {
                if (data.getType().equals(type) &&
                        data.getUnixTimestamp() >= startTimestamp &&
                        data.getUnixTimestamp() <= endTimestamp) {
                    counter++;
                }
            }
            listener.onNewData(type, counter);
            countsPerMinForType.put(type, counter);
        }, 0, 1, TimeUnit.MINUTES);
        // return countsPerMinForType;
        // If the service is terminated, call the listener and perform your operation there
        if (service.isTerminated()) {
            listener.dataFilled(countsPerMinForType);
        }
    }

    @Override
    public void onNewData(String type, Object count) {
        // Perform your task here
    }

    @Override
    public void dataFilled(Map<String, Object> data) {
        // Perform your task here
    }
}

interface Listener {
    void dataFilled(Map<String, Object> data);
    void onNewData(String type, Object count);
}
The issue is that you're expecting the thread you're spinning off to have completed its work and populated countsPerMinForType by the time the method returns. This is not what is happening...
What is happening is:
- you call the method from the main/current thread of execution
- the Map is created
- a new thread is spun off to do some work
- almost immediately, the method returns and the map is still empty
...
- after the method has completed, the work being performed by the spun-off thread is carried out... and subsequently the calling method never sees the result.
You can confirm this is the case with a test that returns a timestamp for when the getCountPerMinuteForType starts and ends, and another timestamp for when the Thread starts and ends. The start times will be in order, the end times will not be in order.
Also, you may want to consider using a ConcurrentHashMap for a multi-threaded application.
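If the caller just needs a value to come back, one possible shape (a sketch, not the only design; the CountDownLatch and stream usage are my additions) is to return the live map immediately and let the scheduled task keep updating it, blocking only until the first count has been computed:

public Map<String, Object> getCountPerMinuteForType(final String type,
                                                    final long startTimestamp,
                                                    final long endTimestamp) throws InterruptedException {
    final Map<String, Object> countsPerMinForType = new ConcurrentHashMap<>();
    final CountDownLatch firstRun = new CountDownLatch(1);
    Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
        long counter = storage.retrieveData().stream()
                .filter(d -> d.getType().equals(type)
                        && d.getUnixTimestamp() >= startTimestamp
                        && d.getUnixTimestamp() <= endTimestamp)
                .count();
        countsPerMinForType.put(type, counter);
        firstRun.countDown();      // release the caller once the first count exists
    }, 0, 1, TimeUnit.MINUTES);
    firstRun.await();              // block only until the first result is in the map
    return countsPerMinForType;    // the scheduled task keeps this map up to date
}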

How to execute a service in the background and check its status

I'm executing a heavy calculation on the server. The execution is launched from the front end, and the front end checks the status of the execution every 3 seconds.
So I wrote a service like the following:
public class SomeService {
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private final Future<?> noop = CompletableFuture.completedFuture(null);
    private final AtomicReference<Future<?>> currentExecution = new AtomicReference<>(noop);

    public void execute() {
        Future<?> execution = executor.submit(() -> {
            // do some heavy calculation here
            // ...
            // ...
            currentExecution.set(noop);
        });
        currentExecution.set(execution);
    }

    public boolean isRunning() {
        return !currentExecution.get().isDone();
    }
}
The isRunning method is exposed as an API to the front end.
I'm wondering if there are any bugs here?
Maybe there's a more elegant solution for this requirement?
A simple flag, set when the computation completes, would suffice, as long as it's volatile.
private volatile boolean done;

public void execute() {
    executor.submit(() -> {
        /* Do some heavy calculation here. */
        done = true;
    });
}

public boolean isDone() {
    return done;
}
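One detail worth adding (my note, not part of the original answer): if execute() can be called more than once, the flag should be reset before submitting, otherwise isDone() keeps reporting the previous run. A minimal sketch of that variant:

private volatile boolean done;

public void execute() {
    done = false;                // reset so isDone() reflects the current run
    executor.submit(() -> {
        try {
            // do some heavy calculation here
        } finally {
            done = true;         // set even if the calculation throws
        }
    });
}

public boolean isDone() {
    return done;
}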

rxJava Ordered (by key) task execution

I have a bunch of objects representing some data. These objects can be written to their corresponding files. A user may request new changes faster than the previous changes can be written to the file.
Say I make changes to File A, File B and File C and submit them for execution. Then, while they are being written, I make another change to File A and post it. Suppose there are 3 threads operating: once the first changes to A, B and C have been executed (written to files), the 1st and 2nd changes to A could run almost simultaneously. However, I want the 2nd change to be applied only after the 1st one is done.
How can I do that in rxJava?
Another point: in a different place I want to run an action with the latest changes. One option is to wait until all tasks have finished.
Is there an appropriate RxJava primitive/approach that would hopefully cover these 2 use cases?
I am new to RxJava, but I hope this makes sense. Subjects come to mind as relevant, but there are going to be hundreds of files.
I already have an implementation using a custom Executor:
public class OrderingExecutor
        implements Executor
{
    @Delegate
    private final Executor delegate;
    private final Map<Object, Queue<Runnable>> keyedTasks = new HashMap<>();

    public OrderingExecutor(
            Executor delegate)
    {
        this.delegate = delegate;
    }

    public void execute(
            Runnable task,
            Object key)
    {
        Objects.requireNonNull(key);
        boolean first;
        Runnable wrappedTask;
        synchronized (keyedTasks)
        {
            Queue<Runnable> dependencyQueue = keyedTasks.get(key);
            first = (dependencyQueue == null);
            if (dependencyQueue == null)
            {
                dependencyQueue = new LinkedList<>();
                keyedTasks.put(key, dependencyQueue);
            }
            wrappedTask = wrap(task, dependencyQueue, key);
            if (!first)
            {
                dependencyQueue.add(wrappedTask);
            }
        }
        // execute method can block, call it outside the synchronized block
        if (first)
        {
            delegate.execute(wrappedTask);
        }
    }

    private Runnable wrap(
            Runnable task,
            Queue<Runnable> dependencyQueue,
            Object key)
    {
        return new OrderedTask(task, dependencyQueue, key);
    }

    class OrderedTask
            implements Runnable
    {
        private final Queue<Runnable> dependencyQueue;
        private final Runnable task;
        private final Object key;

        public OrderedTask(
                Runnable task,
                Queue<Runnable> dependencyQueue,
                Object key)
        {
            this.task = task;
            this.dependencyQueue = dependencyQueue;
            this.key = key;
        }

        @Override
        public void run()
        {
            try
            {
                task.run();
            }
            finally
            {
                Runnable nextTask = null;
                synchronized (keyedTasks)
                {
                    if (dependencyQueue.isEmpty())
                    {
                        keyedTasks.remove(key);
                    }
                    else
                    {
                        nextTask = dependencyQueue.poll();
                    }
                }
                if (nextTask != null)
                {
                    delegate.execute(nextTask);
                }
            }
        }
    }
}
Is there some sensible way to plug it into rxJava?
It's not fully clear what you're trying to achieve here, but you can layer a priority queue on top of RxJava.
class OrderedTask implements Comparable<OrderedTask> { ... }

PriorityBlockingQueue<OrderedTask> queue = new PriorityBlockingQueue<>();

PublishSubject<Integer> trigger = PublishSubject.create();

trigger.flatMap(v -> {
    OrderedTask t = queue.poll();
    return someAPI.workWith(t);
}, 1)
.subscribe(result -> { }, error -> { });

queue.offer(new SomeOrderedTask(1));
trigger.onNext(1);

queue.offer(new SomeOrderedTask(2));
trigger.onNext(2);
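
For the per-file ordering itself, one idiomatic option is groupBy plus concatMap: groups (files) are processed concurrently, but within a group each write starts only after the previous one has completed. A sketch assuming RxJava 2; FileChange, fileKey() and writeToFile() are illustrative names, not from the question:

PublishSubject<FileChange> changes = PublishSubject.create();

changes.groupBy(change -> change.fileKey())            // one group of changes per file
       .flatMap(group -> group                         // groups run concurrently...
           .concatMap(change ->                        // ...but in order within a file
               Observable.fromCallable(() -> writeToFile(change))
                         .subscribeOn(Schedulers.io())))
       .subscribe(result -> { }, error -> { });

// Posting a change for a file:
// changes.onNext(fileChange);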

Task execution in Java web application

I'm developing a Spring MVC web application. One of its functions is file conversion (upload file -> convert -> store on server).
Some files could be too big to convert on the fly, so I decided to put them in a shared queue after upload.
Files will be converted with priority based on upload time, i.e. FIFO.
My idea is to add the task to the queue in the controller after upload.
There would also be a service executing all tasks in the queue and, if it is empty, waiting until a new task is added. I don't need scheduling; tasks should be executing whenever the queue is not empty.
I've read about ExecutorService but I didn't find any example that fits my case.
I'd appreciate any suggestions.
EDIT
Thanks for the answers; I need to clarify my problem:
Basically, I know how to execute tasks, but I need to manage the queue of tasks. The user should be able to view the queue and pause, resume or remove a task from it.
My task class:
public class ConvertTask implements Callable<String> {
    private Converter converter;
    private File source;
    private File target;
    private State state;
    private User user;

    public ConvertTask(Converter converter, File source, File target, User user) {
        this.converter = converter;
        this.source = source;
        this.target = target;
        this.user = user;
        this.state = State.READY;
    }

    @Override
    public String call() throws Exception {
        if (this.state == State.READY) {
            BaseConverterService converterService = ConverterUtils.getConverterService(this.converter);
            converterService.convert(this.source, this.target);
            MailSendServiceUtil.send(user.getEmail(), target.getName());
            return "success";
        }
        return "task not ready";
    }
}
I also created a class responsible for managing the queue/tasks, following your suggestions:
@Component
public class MyExecutorService {
    private LinkedBlockingQueue<ConvertTask> converterQueue = new LinkedBlockingQueue<>();
    private ExecutorService executorService = Executors.newSingleThreadExecutor();

    public void add(ConvertTask task) throws InterruptedException {
        converterQueue.put(task);
    }

    public void execute() throws InterruptedException, ExecutionException {
        while (!converterQueue.isEmpty()) {
            ConvertTask task = converterQueue.peek();
            Future<String> statusFuture = executorService.submit(task);
            String status = statusFuture.get();
            converterQueue.take();
        }
    }
}
My point is: how do I execute tasks while the queue is not empty, and resume when a new task is added to a previously empty queue? I'm thinking of some code that fits in the add(ConvertTask task) method.
Edited after question updates
You don't need to create any queue for the tasks, since the ThreadPoolExecutor implementation has its own queue. Here's the source code of Oracle's Java 8 implementation of the newSingleThreadExecutor() method:
public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}
So you just submit a new task directly, and it gets queued by the ThreadPoolExecutor:
@Component
public class MyExecutorService {
    private ExecutorService executorService = Executors.newSingleThreadExecutor();

    public void add(ConvertTask task) throws InterruptedException {
        Future<String> statusFuture = executorService.submit(task);
    }
}
If you're worried about the bounds of your queue, you can create a queue instance explicitly and supply it to a ThreadPoolExecutor constructor.
private ExecutorService executorService = new ThreadPoolExecutor(1, 1,
        0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>(MAX_SIZE));
Please note that I have removed the line
String status = statusFuture.get();
because the get() call is blocking. If you have this line in the same thread where you submit, your code is no longer asynchronous. You should store the Future objects and check them asynchronously in a different thread. Or you can consider using CompletableFuture, introduced in Java 8. Check out this post.
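For the "view the queue and pause, resume or remove a task" requirement, one possible sketch (my addition; the taskId parameter and the map of futures are assumptions, not part of the original answer) is to keep a handle per submitted task:

@Component
public class MyExecutorService {
    private final ExecutorService executorService = Executors.newSingleThreadExecutor();
    // One handle per submitted task so callers can inspect or cancel it later.
    private final Map<String, Future<String>> submitted = new ConcurrentHashMap<>();

    public void add(String taskId, ConvertTask task) {
        submitted.put(taskId, executorService.submit(task));
    }

    public boolean isDone(String taskId) {
        Future<String> future = submitted.get(taskId);
        return future != null && future.isDone();
    }

    public boolean remove(String taskId) {
        Future<String> future = submitted.remove(taskId);
        // cancel(false) removes a task that hasn't started; a running conversion is not interrupted.
        return future != null && future.cancel(false);
    }
}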
After the upload you should return a response immediately; the client can't wait for the resource too long, although you can change that in the client settings. Anyway, if you are running a background task, you can do it without interacting with the client, or notify the client while execution is in progress. This is an example of a Callable used with an executor service:
/**
 * Created by Roma on 17.02.2015.
 */
class SumTask implements Callable<Integer> {
    private int num = 0;

    public SumTask(int num) {
        this.num = num;
    }

    @Override
    public Integer call() throws Exception {
        int result = 0;
        for (int i = 1; i <= num; i++) {
            result += i;
        }
        return result;
    }
}

public class CallableDemo {
    Integer result;
    Integer num;

    public Integer getNumValue() {
        return 123;
    }

    public Integer getNum() {
        return num;
    }

    public void setNum(Integer num) {
        this.num = num;
    }

    public Integer getResult() {
        return result;
    }

    public void setResult(Integer result) {
        this.result = result;
    }

    ExecutorService service = Executors.newSingleThreadExecutor();

    public String execute() {
        try {
            Future<Integer> future = service.submit(new SumTask(num));
            result = future.get();
            //System.out.println(result);
            service.shutdown();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return "showlinks";
    }
}
