I am trying to simulate a delay while emitting items in a specific sequence.
Here is the code I am using to reproduce the problem:
List<Integer> integers = new ArrayList<>();
integers.add(1);
integers.add(2);
integers.add(3);
integers.add(4);
Disposable d = Observable
.just(integers)
.flatMap(integers1 -> {
return Observable
.zip(Observable.just(1L).concatWith(Observable.interval(10, 5, TimeUnit.SECONDS)),
Observable.fromIterable(integers1), (aLong, integer1) -> {
return new Pair<Long, Integer>(aLong, integer1);
})
.flatMap(longIntegerPair -> {
System.out.println("DATA " + longIntegerPair.getValue());
return Observable.just(longIntegerPair.getValue());
})
.toList()
.toObservable();
})
.subscribe(integers1 -> {
System.out.println("END");
}, throwable -> {
System.out.println("Error " + throwable.getMessage());
});
The output of the above code is
DATA 1
wait for 10 seconds
DATA 2
wait for 5
DATA 3
wait for 5
DATA 4
wait for 5 min
What I am expecting is to perform an operation once the 10-second or 5-second delay is over at each stage, but I am not sure where to inject that part in the current flow:
DATA 1
wait for 10 seconds
[perform operation]
DATA 2
wait for 5
[perform operation]
DATA 3
wait for 5
[perform operation]
DATA 4
wait for 5 min
[perform operation]
Use concatMap to keep the sequence in order and delay to postpone the processing of each item; that produces your expected printout pattern:
Observable
.zip(Observable.just(-1L).concatWith(Observable.interval(10, 5, TimeUnit.SECONDS)),
Observable.range(1, 5),
(aLong, integer1) -> {
return new Pair<Long, Integer>(aLong, integer1);
}
)
.concatMap(longIntegerPair -> {
System.out.println("DATA " + longIntegerPair.getValue());
return Observable.just(longIntegerPair.getValue())
.delay(longIntegerPair.getKey() < 0 ? 10 : 5, TimeUnit.SECONDS)
.flatMap(value -> {
System.out.println("[performing operation] " + value);
return Observable.just(value);
});
})
.toList()
.toObservable()
.blockingSubscribe();
How do I make my Discord Java bot delete a message after, for example, 5 seconds?
String[] messageArgs = event.getMessage().getContentRaw().toLowerCase().split(" ");
for (String args : messageArgs) {
if (blacklistWords.contains(args)) {
if (memberRoles.contains(warn0)) {
event.getMessage().delete().queue();
event.getMessage().getChannel().sendMessage
("**" + MemberMention + "** unterlasse bitte diese **Wortwahl.**"
+ "\nBei **3 Verwarnungen** wirst du **gekickt!**").queue();
event.getGuild().addRoleToMember(user, warn1).complete();
event.getJDA().getGuildById(Secrets.guildID).removeRoleFromMember(user, warn0).complete();
break;
}
}
}
The proper way to do this would be to pass a consumer to your .queue method when sending the message. Together with RestAction#queueAfter you would get:
channel.sendMessage("message").queue(message -> message.delete().queueAfter(5,
TIMEUNIT.SECONDS);
See queue with a Consumer and queueAfter.
There is another way of doing this, using RestAction#delay.
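A sketch of that variant, assuming JDA 4.1.1 or newer where RestAction#delay and RestAction#flatMap are available:
channel.sendMessage("message")
    .delay(5, TimeUnit.SECONDS)   // wait 5 seconds after the message has been sent
    .flatMap(Message::delete)     // then chain the delete RestAction
    .queue();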
You can start a new thread and pause that thread before running the delete command on the message. That way you won't block the rest of your code.
String[] messageArgs = event.getMessage().getContentRaw().toLowerCase().split(" ");
for (String args : messageArgs) {
if (blacklistWords.contains(args)) {
if (memberRoles.contains(warn0)) {
Runnable r = new Runnable() {
@Override
public void run() {
try {
// pause for 5 seconds
Thread.sleep(5000);
event.getMessage().delete().queue();
event.getMessage().getChannel().sendMessage
("**" + MemberMention + "** unterlasse bitte diese **Wortwahl.**"
+ "\nBei **3 Verwarnungen** wirst du **gekickt!**").queue();
event.getGuild().addRoleToMember(user, warn1).complete();
event.getJDA().getGuildById(Secrets.guildID).removeRoleFromMember(user, warn0)
.complete();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
};
Thread t = new Thread(r);
t.start();
break;
}
}
}
Just put in a pause with something like Thread.sleep(5000); before you call the method that deletes the message. This pauses for 5000 milliseconds, i.e. 5 seconds.
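A minimal sketch of that suggestion (note that sleeping directly on the JDA event thread blocks other event handling, so preferably run this inside a separate thread as shown above):
try {
    Thread.sleep(5000);                  // pause for 5000 ms = 5 seconds
    event.getMessage().delete().queue(); // then delete the message
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}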
Using a Java concurrent executor, the Future.cancel method is not stopping the current task.
I have followed this solution for timing out and stopping the processing of the current task, but it doesn't stop the processing.
I am trying this with a cron job. Every 30 seconds my cron job is executed, and I set a 10-second timeout. The debugger reaches future.cancel, but the current task is not stopped.
Thank you.
@Scheduled(cron = "*/30 * * * * *")
public boolean cronTest()
{
System.out.println("Inside cron - start ");
DateFormat dateFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
Date date = new Date();
System.out.println(dateFormat.format(date));
System.out.println("Inside cron - end ");
ExecutorService executor = Executors.newCachedThreadPool();
Callable<Object> task = new Callable<Object>() {
public Object call() {
int i=1;
while(i<100)
{
System.out.println("i: "+ i++);
try {
TimeUnit.SECONDS.sleep(1);
}
catch(Exception e)
{
}
}
return null;
}
};
Future<Object> future = executor.submit(task);
try {
Object result = future.get(10, TimeUnit.SECONDS);
} catch (Exception e) {
// ignored: the TimeoutException thrown after 10 seconds ends up here
} finally {
future.cancel(true);
return true;
}
}
The expected result is that the cron job runs every 30 seconds, times out after 10 seconds, and then waits roughly 20 seconds for the next cron run to start. It should not continue the older loop, because of the 10-second timeout.
Current result is:
Inside cron - start
2019/07/25 11:09:00
Inside cron - end
i: 1
i: 2
i: 3
i: 4 ... up to i: 31
Inside cron - start
2019/07/25 11:09:30
Inside cron - end
i: 1
i: 32
i: 2
i: 3
i: 33
...
Expected result is:
Inside cron - start
2019/07/25 11:09:00
Inside cron - end
i: 1
i: 2
i: 3
i: 4 ... up to i: 10
Inside cron - start
2019/07/25 11:09:30
Inside cron - end
i: 1
i: 2
i: 3 ... up to i: 10
The first problem is in this part of the code:
catch(Exception e)
{
}
When you invoke future.cancel(true);, your thread is interrupted with Thread.interrupt(). This means that if the thread is sleeping, it is woken up and throws an InterruptedException, which is caught by that catch block and ignored. To fix this problem you have to handle the exception:
catch(InterruptedException e) {
break; //breaking from the loop
}
catch(Exception e)
{
}
The second problem: Thread.interrupt() may be invoked while the thread is not sleeping. In this case InterruptedException is not thrown. Instead, the interrupted flag of the thread is raised. What you have to do is to check for this flag from time to time, and if it's raised, handle interruption. The basic code for it would look like:
try {
if (Thread.currentThread().isInterrupted()) {
break;
}
TimeUnit.SECONDS.sleep(1);
}
...
// rest of the code
UPDATE:
Here's the full code of Callable:
Callable<Object> task = new Callable<Object>() {
public Object call() {
int i=1;
while(i<100)
{
System.out.println("i: "+ i++);
try {
if (Thread.currentThread().isInterrupted()) {
break; //breaking from the while loop
}
TimeUnit.SECONDS.sleep(1);
} catch(InterruptedException e) {
break; //breaking from the while loop
} catch(Exception e)
{
}
}
return null;
}
};
I am trying to understand CompletableFutures and the chaining of calls that return completed futures, so I have created the example below, which simulates two calls to a database.
The first method is supposed to return a CompletableFuture holding a list of user IDs, and then I need to call another method with each userId to get the user (a String in this case).
To summarise:
1. fetch the IDs
2. fetch the list of users corresponding to those IDs
I created simple methods that simulate the responses with sleeping threads.
Please check the code below:
public class PipelineOfTasksExample {
private Map<Long, String> db = new HashMap<>();
PipelineOfTasksExample() {
db.put(1L, "user1");
db.put(2L, "user2");
db.put(3L, "user3");
db.put(4L, "user4");
}
private CompletableFuture<List<Long>> returnUserIdsFromDb() {
try {
Thread.sleep(500);
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("building the list of Ids" + " - thread: " + Thread.currentThread().getName());
return CompletableFuture.supplyAsync(() -> Arrays.asList(1L, 2L, 3L, 4L));
}
private CompletableFuture<String> fetchById(Long id) {
CompletableFuture<String> cfId = CompletableFuture.supplyAsync(() -> db.get(id));
try {
Thread.sleep(500);
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("fetching id: " + id + " -> " + db.get(id) + " thread: " + Thread.currentThread().getName());
return cfId;
}
public static void main(String[] args) {
PipelineOfTasksExample example = new PipelineOfTasksExample();
CompletableFuture<List<String>> result = example.returnUserIdsFromDb()
.thenCompose(listOfIds ->
CompletableFuture.supplyAsync(
() -> listOfIds.parallelStream()
.map(id -> example.fetchById(id).join())
.collect(Collectors.toList()
)
)
);
System.out.println(result.join());
}
}
My question is: does the join call (example.fetchById(id).join()) ruin the non-blocking nature of the process? If so, how can I solve this problem?
Thank you in advance.
Your example is a bit odd, as you are slowing down the main thread in returnUserIdsFromDb() before any operation even starts; likewise, fetchById slows down the caller rather than the asynchronous operation, which defeats the entire purpose of asynchronous operations.
Further, instead of .thenCompose(listOfIds -> CompletableFuture.supplyAsync(() -> …)) you can simply use .thenApplyAsync(listOfIds -> …).
So a better example might be
public class PipelineOfTasksExample {
private final Map<Long, String> db = LongStream.rangeClosed(1, 4).boxed()
.collect(Collectors.toMap(id -> id, id -> "user"+id));
PipelineOfTasksExample() {}
private static <T> T slowDown(String op, T result) {
LockSupport.parkNanos(TimeUnit.MILLISECONDS.toNanos(500));
System.out.println(op + " -> " + result + " thread: "
+ Thread.currentThread().getName()+ ", "
+ POOL.getPoolSize() + " threads");
return result;
}
private CompletableFuture<List<Long>> returnUserIdsFromDb() {
System.out.println("trigger building the list of Ids - thread: "
+ Thread.currentThread().getName());
return CompletableFuture.supplyAsync(
() -> slowDown("building the list of Ids", Arrays.asList(1L, 2L, 3L, 4L)),
POOL);
}
private CompletableFuture<String> fetchById(Long id) {
System.out.println("trigger fetching id: " + id + " thread: "
+ Thread.currentThread().getName());
return CompletableFuture.supplyAsync(
() -> slowDown("fetching id: " + id , db.get(id)), POOL);
}
static ForkJoinPool POOL = new ForkJoinPool(2);
public static void main(String[] args) {
PipelineOfTasksExample example = new PipelineOfTasksExample();
CompletableFuture<List<String>> result = example.returnUserIdsFromDb()
.thenApplyAsync(listOfIds ->
listOfIds.parallelStream()
.map(id -> example.fetchById(id).join())
.collect(Collectors.toList()
),
POOL
);
System.out.println(result.join());
}
}
which prints something like
trigger building the list of Ids - thread: main
building the list of Ids -> [1, 2, 3, 4] thread: ForkJoinPool-1-worker-1, 1 threads
trigger fetching id: 2 thread: ForkJoinPool-1-worker-0
trigger fetching id: 3 thread: ForkJoinPool-1-worker-1
trigger fetching id: 4 thread: ForkJoinPool-1-worker-2
fetching id: 4 -> user4 thread: ForkJoinPool-1-worker-3, 4 threads
fetching id: 2 -> user2 thread: ForkJoinPool-1-worker-3, 4 threads
fetching id: 3 -> user3 thread: ForkJoinPool-1-worker-2, 4 threads
trigger fetching id: 1 thread: ForkJoinPool-1-worker-3
fetching id: 1 -> user1 thread: ForkJoinPool-1-worker-2, 4 threads
[user1, user2, user3, user4]
which might be a surprising number of threads at first glance.
The answer is that join() may block the thread, but if this happens inside a worker thread of a Fork/Join pool, this situation will be detected and a new compensation thread will be started, to ensure the configured target parallelism.
As a special case, when the default Fork/Join pool is used, the implementation may pick up new pending tasks within the join() method, to ensure progress within the same thread.
So the code will always make progress, and there's nothing wrong with calling join() occasionally if the alternatives are much more complicated, but there is some danger of excessive resource consumption if it is used too liberally. After all, the reason to use thread pools is to limit the number of threads.
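As a standalone sketch of that compensation behaviour (a hypothetical demo class, not part of the original example): four tasks block in join() inside a pool with target parallelism 2, so the pool temporarily grows beyond two threads to keep making progress.
import java.util.concurrent.*;
import java.util.concurrent.locks.LockSupport;
import java.util.stream.IntStream;
public class CompensationThreadDemo {
    public static void main(String[] args) {
        ForkJoinPool pool = new ForkJoinPool(2); // target parallelism of 2
        CompletableFuture<?>[] jobs = IntStream.range(0, 4)
            .mapToObj(i -> CompletableFuture.runAsync(() ->
                // each worker blocks in join() on another task submitted to the same pool
                CompletableFuture.runAsync(
                    () -> LockSupport.parkNanos(TimeUnit.MILLISECONDS.toNanos(200)),
                    pool).join(),
                pool))
            .toArray(CompletableFuture<?>[]::new);
        CompletableFuture.allOf(jobs).join();
        // typically prints a pool size larger than 2: compensation threads were created
        System.out.println("pool size: " + pool.getPoolSize());
    }
}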
The alternative is to use chained dependent operations where possible.
public class PipelineOfTasksExample {
private final Map<Long, String> db = LongStream.rangeClosed(1, 4).boxed()
.collect(Collectors.toMap(id -> id, id -> "user"+id));
PipelineOfTasksExample() {}
private static <T> T slowDown(String op, T result) {
LockSupport.parkNanos(TimeUnit.MILLISECONDS.toNanos(500));
System.out.println(op + " -> " + result + " thread: "
+ Thread.currentThread().getName()+ ", "
+ POOL.getPoolSize() + " threads");
return result;
}
private CompletableFuture<List<Long>> returnUserIdsFromDb() {
System.out.println("trigger building the list of Ids - thread: "
+ Thread.currentThread().getName());
return CompletableFuture.supplyAsync(
() -> slowDown("building the list of Ids", Arrays.asList(1L, 2L, 3L, 4L)),
POOL);
}
private CompletableFuture<String> fetchById(Long id) {
System.out.println("trigger fetching id: " + id + " thread: "
+ Thread.currentThread().getName());
return CompletableFuture.supplyAsync(
() -> slowDown("fetching id: " + id , db.get(id)), POOL);
}
static ForkJoinPool POOL = new ForkJoinPool(2);
public static void main(String[] args) {
PipelineOfTasksExample example = new PipelineOfTasksExample();
CompletableFuture<List<String>> result = example.returnUserIdsFromDb()
.thenComposeAsync(listOfIds -> {
List<CompletableFuture<String>> jobs = listOfIds.parallelStream()
.map(id -> example.fetchById(id))
.collect(Collectors.toList());
return CompletableFuture.allOf(jobs.toArray(new CompletableFuture<?>[0]))
.thenApply(_void -> jobs.stream()
.map(CompletableFuture::join).collect(Collectors.toList()));
},
POOL
);
System.out.println(result.join());
System.out.println(ForkJoinPool.commonPool().getPoolSize());
}
}
The difference is that first, all asynchronous jobs are submitted, then, a dependent action calling join on them is scheduled, to be executed only when all jobs have completed, so these join invocations will never block. Only the final join call at the end of the main method may block the main thread.
So this prints something like
trigger building the list of Ids - thread: main
building the list of Ids -> [1, 2, 3, 4] thread: ForkJoinPool-1-worker-1, 1 threads
trigger fetching id: 3 thread: ForkJoinPool-1-worker-1
trigger fetching id: 2 thread: ForkJoinPool-1-worker-0
trigger fetching id: 4 thread: ForkJoinPool-1-worker-1
trigger fetching id: 1 thread: ForkJoinPool-1-worker-0
fetching id: 4 -> user4 thread: ForkJoinPool-1-worker-1, 2 threads
fetching id: 3 -> user3 thread: ForkJoinPool-1-worker-0, 2 threads
fetching id: 2 -> user2 thread: ForkJoinPool-1-worker-1, 2 threads
fetching id: 1 -> user1 thread: ForkJoinPool-1-worker-0, 2 threads
[user1, user2, user3, user4]
showing that no compensation threads had to be created, so the number of threads matches the configured target parallelism.
Note that if the actual work is done in a background thread rather than within the fetchById method itself, you now don’t need a parallel stream anymore, as there is no blocking join() call. For such scenarios, just using stream() will usually result in a higher performance.
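To illustrate that last point, here is a sketch of the relevant fragment of the chained variant above with the parallel stream replaced by a plain sequential stream (same names as in the example; nothing here blocks, because fetchById only submits work and returns a future immediately):
List<CompletableFuture<String>> jobs = listOfIds.stream()  // sequential stream is enough
        .map(example::fetchById)                           // only submits async work
        .collect(Collectors.toList());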
Why does my app throw a
java.lang.ArrayIndexOutOfBoundsException: -1
when I invoke future.get() on a java.util.concurrent.Future?
List<Future> tableLoadings = new LinkedList<>();
ExecutorService executor = Executors.newFixedThreadPool(8);
try{
for(Entry<Integer, String> entry: farmIds.entrySet())
{
int id = entry.getKey();
String username = entry.getValue();
psLog.println("START ELABORAZIONE FARMACIA ID : "+ id+" TPH_USERNAME : "+username );
tableLoadings.add(executor.submit(new StatusMultiThreading(id, username, psLog, connSTORY, connCF, mongoDatabase)));
}
for (Future<Void> future : tableLoadings) {
try{
future.get();
}catch(Exception e){
psLog.println("[EE] ERORE ELABORAZIONE THREAD FARMACIA [EE] "+e.getMessage());
}
}
}finally{
executor.shutdown();
psLog.println("END CONSOLIDA STATUS FARMACIE");
}
This is the log:
START ELABORAZIONE FARMACIA ID : 62 TPH_USERNAME : A0102987
START ELABORAZIONE FARMACIA ID : 63 TPH_USERNAME : A0103019
START SENDING DATA TO DB FARMID = 66
...
START SENDING DATA TO DB FARMID = 17
[EE] ERORE ELABORAZIONE THREAD FARMACIA [EE] java.lang.ArrayIndexOutOfBoundsException: -1
[EE] ERRORE ELABORAZIONE THREAD FARMACIA [EE] java.lang.ArrayIndexOutOfBoundsException: -1
END CONSOLIDA STATUS FARMACIE
I can't find anything wrong when I debug.
I cannot step inside the .get() method, so I don't understand which line of code is at fault.
What can be said so far: you use the ExecutorService to pass in tasks:
new StatusMultiThreading(id, username, psLog, connSTORY, connCF, mongoDatabase)
The task starts running as soon as it is submitted to the executor; get() only waits for its result and rethrows any failure, wrapped in an ExecutionException. So that exception takes place inside that class of yours, not in get() itself.
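To see exactly where inside StatusMultiThreading it blows up, print the full stack trace of the wrapped cause instead of just its message. A minimal sketch, assuming psLog is a PrintStream as the println calls suggest:
for (Future<Void> future : tableLoadings) {
    try {
        future.get();
    } catch (ExecutionException e) {
        // the task's own exception is the cause; its stack trace points at the failing line
        e.getCause().printStackTrace(psLog); // psLog assumed to be a PrintStream
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}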