I'd like to run 3 threads in parallel and read their output from the main thread. If the accumulated output exceeds 99, I want to stop the other threads and have the main thread return "99+". Otherwise, if 99 isn't reached, I store the value as an integer, wait for the other threads to finish with their values, and accumulate them. In short: accumulate the values from all of those threads; if the total exceeds 99, return it as "99+" and stop any unfinished thread. This is how I implemented it:
RequestDTO request; //this is http request data
ExecutorService executor = Executors.newFixedThreadPool(3);
//flag for counting how many sub-threads have finished
//but I can't reference it from the sub-threads the way I can reference DTOResponse totalAll
short asyncFlag = 0;
Cancellable
cancellableThreads1,
cancellableThreads2,
cancellableThreads3;
DTOResponse totalAll = new DTOResponse(); totalAll.total = 0;
LOGGER.info("start threads 1");
cancellableThreads1 =
Uni.createFrom().item(asyncFlag)
.runSubscriptionOn(executor).subscribe().with(consumer ->
{//it runs on new thread
Response response = method1(request).await().indefinitely();
LOGGER.info("got uniMethod1!");
DTOResponse totalTodo = response.readEntity(DTOResponse.class);
Integer total =(Integer) totalTodo.total;
totalAll.total = (Integer) totalAll.total + total;
LOGGER.info("total thread1 done: "+total);
if ((Integer) totalAll.total > 99){
totalAll.total = "99+";
}
//as mentioned in the comments above, I can't refer to asyncFlag directly, so I pass it as the .item() parameter
//and refer to it as consumer, but no matter how much consumer is incremented, asyncFlag on the main thread never changes
consumer++;
});
LOGGER.info("thread 1 already running asynchronus");
LOGGER.info("start threads 2");
cancellableThreads2 =
Uni.createFrom().item(asyncFlag)
.runSubscriptionOn(executor).subscribe().with(consumer ->
{//it runs on new thread
Response response = method2(request).await().indefinitely();
LOGGER.info("got uniMethod2!");
DTOResponse totalTodo = response.readEntity(DTOResponse.class);
Integer total =(Integer) totalTodo.total;
totalAll.total = (Integer) totalAll.total + total;
LOGGER.info("total thread2 done: "+total);
if ((Integer) totalAll.total > 99){
totalAll.total = "99+";
}
//as mentioned in the comments above, I can't refer to asyncFlag directly, so I pass it as the .item() parameter
//and refer to it as consumer, but no matter how much consumer is incremented, asyncFlag on the main thread never changes
consumer++;
});
LOGGER.info("thread 2 already running asynchronus");
LOGGER.info("start threads 3");
cancellableThreads3 =
Uni.createFrom().item(asyncFlag)
.runSubscriptionOn(executor).subscribe().with(consumer ->
{//it runs on new thread
Response response = method3(request).await().indefinitely();
LOGGER.info("got uniMethod3!");
DTOResponse totalTodo = response.readEntity(DTOResponse.class);
Integer total =(Integer) totalTodo.total;
totalAll.total = (Integer) totalAll.total + total;
LOGGER.info("total thread3 done: "+total);
if ((Integer) totalAll.total > 99){
totalAll.total = "99+";
}
//as mentioned in the comments above, I can't refer to asyncFlag directly, so I pass it as the .item() parameter
//and refer to it as consumer, but no matter how much consumer is incremented, asyncFlag on the main thread never changes
consumer++;
});
LOGGER.info("thread 3 already running asynchronus");
do{
//executed by the main thread
//I want to block here until the conditions are met
//in practice it doesn't block, it just loops forever
if(totalAll.total instanceof String || asyncFlag >=3){
cancellableThreads1.cancel();
cancellableThreads2.cancel();
cancellableThreads3.cancel();
}
//asyncFlag doesn't increase even though all 3 threads have executed consumer++
}while(totalAll.total instanceof Integer && asyncFlag <3);
ResponseBuilder responseBuilder = Response.ok().entity(totalAll);
return Uni.createFrom().item("").onItem().transform(s->responseBuilder.build());
totalAll can be accessed by those sub-threads, but asyncFlag cannot: my editor marks it with "Local variable asyncFlag defined in an enclosing scope must be final or effectively final" Java(536871575) whenever asyncFlag is written inside a sub-thread block. So I use consumer instead, but incrementing it has no effect, and the loop never ends unless the total value turns into a String (the first condition).
You are better off switching gears and using a reactive(-native) approach to your problem.
Instead of subscribing to each Uni and then collecting their results individually in an imperative fashion while monitoring their progress, here is the series of steps you should rather follow in an rx-ified way:
Create all your Uni request-representing objects with whatever concurrency construct you would like: Uni#emitOn
Combine all your request Unis into a Multi, merging all of your initial requests and executing them concurrently (not in an ordered fashion): MultiCreatedBy#merging
Scan the Multi's emitted items, which are your request results, as they come, adding each item to an initial seed: MultiOnItem#scan
Keep skipping the running sum until you first see a value exceeding a threshold (99 in your case), in which case you let the result flow through your stream pipeline: MultiSkip#first (note that the skip stage will automatically cancel upstream requests, hence stop any useless request processing already in flight)
In case no item has been emitted downstream, meaning that the sum of the requests has not exceeded the threshold, you sum up the initial Uni results (which are cached to avoid re-triggering the requests): UniOnNull#ifNull
Here is a pseudo-implementation of the described stages:
public Uni<Response> request() {
RequestDTO request; //this is http request data
Uni<Object> requestOne = method1(request)
.emitOn(executor)
.map(response -> response.readEntity(DTOResponse.class))
.map(dtoResponse -> dtoResponse.total)
.memoize()
.atLeast(Duration.ofSeconds(3));
Uni<Object> requestTwo = method2(request)
.emitOn(executor)
.map(response -> response.readEntity(DTOResponse.class))
.map(dtoResponse -> dtoResponse.total)
.memoize()
.atLeast(Duration.ofSeconds(3));
Uni<Object> requestThree = method3(request)
.emitOn(executor)
.map(response -> response.readEntity(DTOResponse.class))
.map(dtoResponse -> dtoResponse.total)
.memoize()
.atLeast(Duration.ofSeconds(3));
return Multi.createBy()
.merging()
.withConcurrency(1)
.streams(requestOne.toMulti(), requestTwo.toMulti(), requestThree.toMulti())
.onItem()
.scan(() -> 0, (result, itemTotal) -> result + (Integer) itemTotal)
.skip()
.first(total -> total < 99)
.<Object>map(ignored -> "99+")
.toUni()
.onItem()
.ifNull()
.switchTo(
Uni.combine()
.all()
.unis(requestOne, requestTwo, requestThree)
.combinedWith((one, two, three) -> (Integer) one + (Integer) two + (Integer) three)
)
.map(result -> Response.ok().entity(result).build());
}
Related
I want to create an API load test where the number of parallel users (threads) increases to the pool size and, after a while, decreases. Right now I have a test where all threads start at once.
// Thread pool - how many threads at once we will use as concurrent USERS
ExecutorService executor = Executors.newFixedThreadPool(Integer.parseInt(prop.getProperty("threadPool")));
// numberOfRequests - How many requests we want to send in total
int numberOfRequests = Integer.parseInt(prop.getProperty("numberOfRequests"));
CountDownLatch latch = new CountDownLatch(numberOfRequests);
List<PostCallableData> tasks = IntStream.range(0, numberOfRequests).mapToObj(i -> {
return new PostCallableData("Thread ", branch, 4, 5, latch);
}).collect(Collectors.toList());
List<Future<List<Integer>>> futures = executor.invokeAll(tasks);
latch.await();
executor.shutdown();
List<List<Integer>> results = futures.stream()
.map(future -> {
try {
return future.get();
} catch (Exception e) {
throw new RuntimeException(e);
}
})
.collect(Collectors.toList());
My goal is to start with one thread and add the next one after an interval (configurable via a variable) up to the maximum pool size, e.g. add one every 30 s until there are 30 threads.
Keep that level for e.g. 45 minutes (also a variable) or for a given number of requests, and then decrease the number of threads by one every 30 seconds.
List<PostCallableData> tasks = IntStream.range(0, numberOfRequests).mapToObj(i -> {
return new PostCallableData("Thread ", branch, 4, 5, latch);
}).collect(Collectors.toList());
List<Future<List<Integer>>> futures = executor.invokeAll(tasks);
Ideally the lines above will be replaced by some sort of random action - I want Post/Get/Delete actions to be run in this test.
What do I need to do to have it increase and decrease gradually?
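One way to get that shape, sketched under the assumption that submitRandomAction() wraps one of your Post/Get/Delete calls (all names below are placeholders, not part of your code), is to drive each simulated user from a scheduler that starts workers one step apart, holds at the peak, and then stops them one step apart:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

public class RampLoadTest {

    // Placeholder for one random Post/Get/Delete call against your API.
    static void submitRandomAction() { /* ... */ }

    public static void main(String[] args) {
        int maxThreads = 30;        // peak number of concurrent users
        long stepSeconds = 30;      // add (and later remove) one user per step
        long holdSeconds = 45 * 60; // how long to stay at the peak

        // One extra thread so the scheduled "stop" tasks can run while all workers are busy.
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(maxThreads + 1);

        for (int i = 0; i < maxThreads; i++) {
            long startDelay = i * stepSeconds;                        // worker i joins after i steps
            long stopDelay = maxThreads * stepSeconds + holdSeconds   // end of the plateau
                    + (long) (maxThreads - 1 - i) * stepSeconds;      // last worker in is first out
            AtomicBoolean running = new AtomicBoolean(true);

            scheduler.schedule(() -> {
                while (running.get()) {
                    submitRandomAction();
                }
            }, startDelay, TimeUnit.SECONDS);
            scheduler.schedule(() -> running.set(false), stopDelay, TimeUnit.SECONDS);
        }
    }
}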
I wrote this code:
Flux.range(0, 300)
.doOnNext(i -> System.out.println("i = " + i))
.flatMap(i -> Mono.just(i)
.subscribeOn(Schedulers.elastic())
.delayElement(Duration.ofMillis(1000))
)
.doOnNext(i -> System.out.println("end " + i))
.blockLast();
When running it, the first System.out.println shows that the Flux stops emitting numbers at the 256th element, then waits for the older ones to complete before emitting new ones.
Why is this happening?
Why 256?
Why is this happening?
The flatMap operator can be characterized as an operator that (rephrased from the javadoc):
subscribes to its inners eagerly
does not preserve ordering of elements.
lets values from different inners interleave.
For this question the first point is important. Project Reactor restricts the number of in-flight inner sequences via the concurrency parameter.
While flatMap(mapper) uses the default value, the flatMap(mapper, concurrency) overload accepts this parameter explicitly.
The flatMap javadoc describes the parameter as:
The concurrency argument allows to control how many Publisher can be subscribed to and merged in parallel
Consider the following code using concurrency = 500:
Flux.range(0, 300)
.doOnNext(i -> System.out.println("i = " + i))
.flatMap(i -> Mono.just(i)
.subscribeOn(Schedulers.elastic())
.delayElement(Duration.ofMillis(1000)),
500
// ^^^^^^^^^^
)
.doOnNext(i -> System.out.println("end " + i))
.blockLast();
In this case there is no waiting:
i = 297
i = 298
i = 299
end 0
end 1
end 2
In contrast, if you pass 1 as concurrency, the output will be similar to:
i = 0
end 0
i = 1
end 1
Each element waits one second before the next one is emitted.
Why 256?
256 is the default value for the concurrency parameter of flatMap.
Take a look at Queues.SMALL_BUFFER_SIZE:
public static final int SMALL_BUFFER_SIZE = Math.max(16,
Integer.parseInt(System.getProperty("reactor.bufferSize.small", "256")));
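Passing the concurrency explicitly, as shown earlier, is the cleanest fix. If you really want to change the default globally you can also set that system property, but it has to be set before Reactor's Queues class is first loaded, e.g. on the JVM command line (the jar name here is just a placeholder):
java -Dreactor.bufferSize.small=512 -jar your-app.jar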
I have a REST API call where the max count of results returned by the API is 1000, starting at page=1:
{
  "status": "OK",
  "payload": {
    "EMPList": [],
    "count": 5665
  }
}
So to get the other results I have to change the start page to 2 and hit the service again, and again I will only get 1000 results.
But after the first call I want to make the remaining calls in parallel, collect the results, combine them, and send them back to the calling service in Java. Please suggest an approach, I am new to Java. I tried using Callable but it's not working.
It seems to me that ideally you should be able to configure the max count to a value appropriate for your use case. I'm assuming you aren't able to do that. Here is a simple, lock-free, multi-threaded scheme that acts as a basic reduction operation for your two network calls:
// online runnable: https://ideone.com/47KsoS
int resultSize = 5;
int[] result = new int[resultSize*2];
Thread pg1 = new Thread(){
public void run(){
System.out.println("Thread 1 Running...");
// write numbers 1-5 to indexes 0-4
for(int i = 0 ; i < resultSize; i ++) {
result[i] = i + 1;
}
System.out.println("Thread 1 Exiting...");
}
};
Thread pg2 = new Thread(){
public void run(){
System.out.println("Thread 2 Running");
// write numbers 6-10 to indexes 5-9
for(int i = 0 ; i < resultSize; i ++) {
result[i + resultSize] = i + 1 + resultSize;
}
System.out.println("Thread 2 Exiting...");
}
};
pg1.start();
pg2.start();
// ensure that pg1 execution finishes
pg1.join();
// ensure that pg2 execution finishes
pg2.join();
// print result of reduction operation
System.out.println(Arrays.toString(result));
There is a very important caveat with this implementation, however. Notice that the two threads DO NOT overlap in their memory writes. This matters: if you simply changed the int[] result to an ArrayList<Integer>, the reduction between the two threads could fail catastrophically with a race condition (the standard ArrayList implementation in Java is not thread-safe). Since we can guarantee in advance how large the result will be, I would highly suggest sticking with an array for this multi-threaded implementation, as ArrayLists hide a lot of implementation logic that you likely won't appreciate until you've taken a basic data-structures course.
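If you do need dynamically sized results, for instance one list per page of the REST call, a sketch that avoids shared mutable state altogether is to give each page its own Callable and merge the results on the calling thread once the futures complete (fetchPage below is a hypothetical stand-in for your HTTP call):
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PagedFetch {

    // Hypothetical stand-in for one REST call returning the entries of a single page.
    static List<Integer> fetchPage(int page) {
        return List.of(page * 10, page * 10 + 1);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(2);
        List<Callable<List<Integer>>> tasks = List.of(() -> fetchPage(1), () -> fetchPage(2));

        List<Integer> combined = new ArrayList<>();
        for (Future<List<Integer>> future : executor.invokeAll(tasks)) {
            combined.addAll(future.get()); // results are merged on the calling thread only, so no shared writes
        }
        executor.shutdown();
        System.out.println(combined);
    }
}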
Let's take the following execution example:
MyRequest request = new MyRequest(args);
request.execute(params);
How can I perform the above 1 to n times (e.g. n = 50) per second?
Edit
Furthermore, if we have i objects, each of which makes n requests:
for(MyObject obj : objects) {
// Execute n requests (i.e. in for loop)
}
How can I ensure that the execution happens within one second?
To ensure that n requests are executed in one second, you would have to know how long one execution takes in order to run them sequentially. Otherwise you should use threads to run them in parallel and start them with a delay so that they fit into exactly one second:
for(int i=0;i<n;i++){
MyRequest request = new MyRequest(args);
Thread th=new Thread(()-> request.execute());
th.start();
Thread.sleep(1000/n);
}
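A variant of the same idea, assuming MyRequest, args and params are the ones from your snippet: let a ScheduledExecutorService fire at a fixed 1000/n ms period and hand the actual call to a worker pool, so the pacing stays steady even when a single execute() takes longer than the period.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

int n = 50; // target requests per second
ExecutorService workers = Executors.newCachedThreadPool();
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
// fires every 1000/n ms; the worker pool runs the requests so slow calls don't delay the schedule
scheduler.scheduleAtFixedRate(() -> workers.submit(() -> {
    MyRequest request = new MyRequest(args);
    request.execute(params);
}), 0, 1000L / n, TimeUnit.MILLISECONDS);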
I have an app that fetches a lot of data, so I would like to paginate the data into chunks and process those chunks individually rather than dealing with all the data at once. So I wrote a function that I call every n seconds to check whether a chunk is done and then process that chunk. My problem is that I have no way of keeping track of the fact that I just processed a chunk and should move on to the next chunk when it is available. I was thinking of something along the lines of the code below; however, I cannot call multiplier++; as the compiler complains that it no longer behaves like a final variable. I would like to use something like multiplier so that once the code processes a chunk it 1) doesn't process the same chunk again and 2) moves on to the next chunk. Is it possible to do this? Is there a modifier one can put on multiplier to help avoid race conditions?
int multiplier = 1;
CompletableFuture<String> completionFuture = new CompletableFuture<>();
final ScheduledFuture<?> checkFuture = executor.scheduleAtFixedRate(() -> {
// parse json response
String response = getJSONResponse();
JsonObject jsonObject = ConverterUtils.parseJson(response, true)
.getAsJsonObject();
int pages = jsonObject.get("stats").getAsJsonObject().get("pages").getAsInt();
// if we have a chunk of n pages records then process them with dataHandler function
if (pages > multiplier * bucketSize) {
dataHandler.apply(getResponsePaginated((multiplier - 1) * bucketSize, bucketSize));
multiplier++;
}
if (jsonObject.has("finishedAt") && !jsonObject.get("finishedAt").isJsonNull()) {
// we are done!
completionFuture.complete("");
}
}, 0, sleep, TimeUnit.SECONDS);
You can use an AtomicInteger. Since this is a mutable type, you can assign it to a final variable while still being able to change its value. This also addresses the synchronization issue between the callbacks:
final AtomicInteger multiplier = new AtomicInteger(1);
executor.scheduleAtFixedRate(() -> {
//...
multiplier.incrementAndGet();
}, 0, sleep, TimeUnit.SECONDS);
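Applied to your snippet (dataHandler, bucketSize, getResponsePaginated and sleep are the ones from your code), the chunk check then reads roughly like this:
final AtomicInteger multiplier = new AtomicInteger(1);
final ScheduledFuture<?> checkFuture = executor.scheduleAtFixedRate(() -> {
    // ... parse the JSON response and extract pages as before ...
    if (pages > multiplier.get() * bucketSize) {
        dataHandler.apply(getResponsePaginated((multiplier.get() - 1) * bucketSize, bucketSize));
        multiplier.incrementAndGet();
    }
    // ... completion check as before ...
}, 0, sleep, TimeUnit.SECONDS);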