Why does my app throw a
java.lang.ArrayIndexOutOfBoundsException: -1
when I invoke future.get() on java.util.concurrent.Future?
List<Future<Void>> tableLoadings = new LinkedList<>();
ExecutorService executor = Executors.newFixedThreadPool(8);
try {
    for (Entry<Integer, String> entry : farmIds.entrySet()) {
        int id = entry.getKey();
        String username = entry.getValue();
        psLog.println("START ELABORAZIONE FARMACIA ID : " + id + " TPH_USERNAME : " + username);
        tableLoadings.add(executor.submit(new StatusMultiThreading(id, username, psLog, connSTORY, connCF, mongoDatabase)));
    }
    for (Future<Void> future : tableLoadings) {
        try {
            future.get();
        } catch (Exception e) {
            psLog.println("[EE] ERRORE ELABORAZIONE THREAD FARMACIA [EE] " + e.getMessage());
        }
    }
} finally {
    executor.shutdown();
    psLog.println("END CONSOLIDA STATUS FARMACIE");
}
This is the log:
START ELABORAZIONE FARMACIA ID : 62 TPH_USERNAME : A0102987
START ELABORAZIONE FARMACIA ID : 63 TPH_USERNAME : A0103019
START SENDING DATA TO DB FARMID = 66
...
START SENDING DATA TO DB FARMID = 17
[EE] ERRORE ELABORAZIONE THREAD FARMACIA [EE] java.lang.ArrayIndexOutOfBoundsException: -1
[EE] ERRORE ELABORAZIONE THREAD FARMACIA [EE] java.lang.ArrayIndexOutOfBoundsException: -1
END CONSOLIDA STATUS FARMACIE
I can't find anything wrong when I debug, and I cannot step into the .get() method, so I don't understand which line of code is at fault.
What can be said so far: you use the ExecutorService to submit tasks:
new StatusMultiThreading(id, username, psLog, connSTORY, connCF, mongoDatabase)
Later on, when you call get(), any exception the task threw while running is rethrown, wrapped in an ExecutionException. So that exception takes place inside that class of yours.
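To find the failing line, catch ExecutionException specifically and print the stack trace of its cause rather than just e.getMessage(). A minimal sketch of the waiting loop, assuming psLog is a PrintStream:

for (Future<Void> future : tableLoadings) {
    try {
        future.get();
    } catch (ExecutionException e) {
        // The real exception thrown inside StatusMultiThreading is the cause;
        // its stack trace names the class and line that produced the -1 index.
        e.getCause().printStackTrace(psLog);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interrupt flag
    }
}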
I set up a Kafka consumer with this configuration:
kafkaconfig:
  acks: 1
  autoCommit: true
  bootstrapServers: example.com:9092
  topic: item
  groupId: EWok-group
  keyDeserializer: org.apache.kafka.common.serialization.StringDeserializer
  valueDeserializer: org.apache.kafka.common.serialization.StringDeserializer
  maxPollRecords: 1
  pollMillisTime: 15
  retries: 5
  heartBeatInterval: 300
  sessionTimeout: 100000
  maxPollInterval: 30000
Code:
while (true) {
    try {
        ConsumerRecords<String, String> consumerRecords =
                eWokIntegrationConsumer.poll(Duration.of(kafkaCommConfig.getPollMillisTime(), ChronoUnit.SECONDS));
        if (!consumerRecords.isEmpty()) {
            LOG.info("Consumed Record Count: {}", consumerRecords.count());
            consumerRecords.forEach(record -> {
                System.out.printf("offset = %d, key = %s, value = %s\n", record.offset(), record.key(), record.value());
                eWokMessageProcessor.onMessage(record.value());
                eWokIntegrationConsumer.commitSync();
            });
        } else {
            LOG.info("Polling returned without any records.");
        }
    } catch (Exception exception) {
        LOG.error("Consumer was interrupted. But still continue to poll. Exception:", exception);
        eWokIntegrationConsumer.close();
    }
}
Processing the data received from the Kafka consumer takes about 10000 ms, but I am getting an exception saying:
java.lang.IllegalStateException: This consumer has already been closed.
Exception logs:
java.lang.IllegalStateException: This consumer has already been closed.
at org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:2202)
at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1332)
at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1298)
Kafka version: kafka-clients-2.0.1
Could anyone please suggest what the Kafka consumer configuration should look like?
I had put System.exit(0) elsewhere in the source code. That is why the consumer left the group and was marked as closed.
I have removed System.exit(0) from the source code. Now it's working fine.
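If you still need a way to stop the loop without System.exit(0), the usual pattern is to call consumer.wakeup() from a shutdown hook: poll() then throws WakeupException and the consumer is closed exactly once, after the loop exits. A minimal sketch using the names from the code above (note that the original poll passes getPollMillisTime() with ChronoUnit.SECONDS; if the value really is milliseconds, Duration.ofMillis is likely what was intended):

Runtime.getRuntime().addShutdownHook(new Thread(eWokIntegrationConsumer::wakeup));
try {
    while (true) {
        ConsumerRecords<String, String> consumerRecords =
                eWokIntegrationConsumer.poll(Duration.ofMillis(kafkaCommConfig.getPollMillisTime()));
        // ... process and commit as above ...
    }
} catch (org.apache.kafka.common.errors.WakeupException e) {
    // Expected during shutdown; fall through to close().
} finally {
    eWokIntegrationConsumer.close(); // close exactly once, after the loop exits
}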
Using the Java concurrent executor, the future's cancel method is not stopping the current task.
I have followed this solution for timing out and stopping processing of the current task, but it does not stop the processing.
I am trying this with a cron job. Every 30 seconds my cron job executes, and I set a 10-second timeout. The debugger reaches the future.cancel() call, but it does not stop the current task.
Thank you.
@Scheduled(cron = "*/30 * * * * *")
public boolean cronTest()
{
    System.out.println("Inside cron - start ");
    DateFormat dateFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss");
    Date date = new Date();
    System.out.println(dateFormat.format(date));
    System.out.println("Inside cron - end ");
    ExecutorService executor = Executors.newCachedThreadPool();
    Callable<Object> task = new Callable<Object>() {
        public Object call() {
            int i = 1;
            while (i < 100)
            {
                System.out.println("i: " + i++);
                try {
                    TimeUnit.SECONDS.sleep(1);
                }
                catch (Exception e)
                {
                }
            }
            return null;
        }
    };
    Future<Object> future = executor.submit(task);
    try {
        Object result = future.get(10, TimeUnit.SECONDS);
    } catch (Exception e) {
    } finally {
        future.cancel(true);
        return true;
    }
}
The expected result is that the cron job runs every 30 seconds, times out after 10 seconds, and then waits roughly 20 seconds for the next cron run to start. It should not continue the older loop, because of the 10-second timeout.
Current result is:
Inside cron - start
2019/07/25 11:09:00
Inside cron - end
i: 1
i: 2
i: 3
i: 4 ... up to i: 31
Inside cron - start
2019/07/25 11:09:30
Inside cron - end
i: 1
i: 32
i: 2
i: 3
i: 33
...
Expected result is:
Inside cron - start
2019/07/25 11:09:00
Inside cron - end
i: 1
i: 2
i: 3
i: 4 ... up to i: 10
Inside cron - start
2019/07/25 11:09:30
Inside cron - end
i: 1
i: 2
i: 3 ... up to i: 10
The first problem is in this part of the code:
catch(Exception e)
{
}
When you invoke future.cancel(true), your thread is interrupted via Thread.interrupt(). This means that when the thread is sleeping, it is awoken and throws InterruptedException, which is then caught by the catch block and ignored. To fix this problem you have to handle this exception:
catch(InterruptedException e) {
break; //breaking from the loop
}
catch(Exception e)
{
}
The second problem: Thread.interrupt() may be invoked while the thread is not sleeping. In that case InterruptedException is not thrown; instead, the thread's interrupted flag is set. You have to check this flag from time to time and, if it is set, handle the interruption. The basic code looks like:
try {
    if (Thread.currentThread().isInterrupted()) {
        break;
    }
    TimeUnit.SECONDS.sleep(1);
}
...
// rest of the code
UPDATE:
Here's the full code of the Callable:
Callable<Object> task = new Callable<Object>() {
    public Object call() {
        int i = 1;
        while (i < 100)
        {
            System.out.println("i: " + i++);
            try {
                if (Thread.currentThread().isInterrupted()) {
                    break; // breaking from the while loop
                }
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                break; // breaking from the while loop
            } catch (Exception e)
            {
            }
        }
        return null;
    }
};
I'm trying to use a resizer in Akka routing with a round-robin-pool, but it is not creating new instances; it only works with the number of instances specified by lower-bound. I'm following the documentation for Akka version 2.5.3.
My configuration:
akka.actor.deployment {
  /round-robin-resizer {
    router = round-robin-pool
    resizer {
      lower-bound = 4
      upper-bound = 30
      pressure-threshold = 0
      rampup-rate = 0.5
      messages-per-resize = 1
    }
  }
}
Actor class:
@Override
public Receive createReceive() {
    return receiveBuilder()
            .match(Integer.class, msg -> {
                System.out.println("Message : " + msg + " Thread id : " + Thread.currentThread().getId());
                Thread.sleep(5000);
            })
            .matchAny(msg -> {
                System.out.println("Error Message : " + msg + " Thread id : " + Thread.currentThread().getId());
            }).build();
}
Creation of the actor:
ActorRef roundRobin = system.actorOf(FromConfig.getInstance().props(Props.create(RoutingActor.class)), "round-robin-resizer");
for (int i = 0; i < 15; i++) {
    roundRobin.tell(i, ActorRef.noSender());
}
Output:
Message : 2 Thread id : 18
Message : 1 Thread id : 16
Message : 0 Thread id : 15
Message : 3 Thread id : 17
Message : 7 Thread id : 17
Message : 4 Thread id : 15
Message : 6 Thread id : 18
Message : 5 Thread id : 16
Message : 11 Thread id : 17
Message : 9 Thread id : 16
Message : 10 Thread id : 18
Message : 8 Thread id : 15
Message : 13 Thread id : 16
Message : 14 Thread id : 18
Message : 12 Thread id : 15
After every 4 results it waits 5 seconds for the previous instances to finish their jobs.
Look at the thread IDs. For every actor instance created, I make the thread sleep for a while, so a new instance should be allocated on a different thread at that point. But this only happens for the first few instances; after that, no new instances are created as the resizer should do. The messages are just queued following the normal round-robin flow.
You are confusing thread IDs with actual actor instances. The number of actor instances does not necessarily match the number of threads. Please refer to this answer to another, similar question: Akka ConsistentHashingRoutingLogic not routing to the same dispatcher thread consistently
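To see how many routees the pool actually created, log the actor's own path instead of the thread ID; each routee gets a distinct name regardless of which dispatcher thread runs it. A minimal sketch of the match block, assuming the same RoutingActor as above:

.match(Integer.class, msg -> {
    // getSelf().path().name() identifies the routee instance ($a, $b, ...),
    // while the thread ID only shows which dispatcher thread happened to run it.
    System.out.println("Message : " + msg + " routee : " + getSelf().path().name()
            + " Thread id : " + Thread.currentThread().getId());
    Thread.sleep(5000);
})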
I am using the RetryExecutor from https://github.com/nurkiewicz/async-retry
Below is my code:
ScheduledExecutorService executorService = Executors.newScheduledThreadPool(10);
RetryExecutor retryExecutor = new AsyncRetryExecutor(executorService)
        .retryOn(IOException.class)
        .withExponentialBackoff(500, 2)
        .withMaxDelay(5_000) // 5 seconds
        .withUniformJitter()
        .withMaxRetries(5);
I have submitted a few tasks to the retryExecutor:
retryExecutor.getWithRetry(ctx -> {
    if (ctx.getRetryCount() == 0)
        System.out.println("Starting download from : " + url);
    else
        System.out.println("Retrying (" + ctx.getRetryCount() + ") download from : " + url);
    return downloadFile(url);
}).whenComplete((result, error) -> {
    if (result != null && result) {
        System.out.println("Successfully downloaded!");
    } else {
        System.out.println("Download failed. Error : " + error);
    }
});
Now, how do I wait for all submitted tasks to finish? I want to wait until all retries (if any) are finished.
I don't think it will be as simple as executorService.shutdown().
getWithRetry returns a CompletableFuture, so you can keep a reference to it and block until it completes:
CompletableFuture<DownloadResult> downloadPromise =
        retryExecutor.getWithRetry(...)
                .whenComplete(...);
DownloadResult downloadResult = downloadPromise.get();
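If several downloads were submitted, one way to wait for all of them (a sketch, assuming a hypothetical urls list plus the retryExecutor, executorService, and downloadFile from the question) is to collect each CompletableFuture and join on CompletableFuture.allOf:

List<CompletableFuture<Boolean>> promises = new ArrayList<>();
for (String url : urls) {
    promises.add(retryExecutor.getWithRetry(ctx -> downloadFile(url)));
}
// Blocks until every download has completed, including all of its retries.
// join() throws if any download ultimately failed; per-future whenComplete
// handlers (as above) still run either way.
CompletableFuture.allOf(promises.toArray(new CompletableFuture[0])).join();
executorService.shutdown();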
I have a Java application with the properties below:
kafkaProperties = new Properties();
kafkaProperties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaBrokersList);
kafkaProperties.put(ConsumerConfig.GROUP_ID_CONFIG, consumerGroupName);
kafkaProperties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
kafkaProperties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
kafkaProperties.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, consumerSessionTimeoutMs);
kafkaProperties.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, maxPartitionFetchBytes);
kafkaProperties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
I've created 15 consumer threads and let them process the runnable below. I don't have any other consumer consuming with this consumer group name.
@Override
public void run() {
    try {
        logger.info("Starting ConsumerWorker, consumerId={}", consumerId);
        consumer.subscribe(Arrays.asList(kafkaTopic), offsetLoggingCallback);
        while (true) {
            boolean isPollFirstRecord = true;
            logger.debug("consumerId={}; about to call consumer.poll() ...", consumerId);
            ConsumerRecords<String, String> records = consumer.poll(pollIntervalMs);
            Map<Integer, Long> partitionOffsetMap = new HashMap<>();
            for (ConsumerRecord<String, String> record : records) {
                if (isPollFirstRecord) {
                    isPollFirstRecord = false;
                    logger.info("Start offset for partition {} in this poll : {}", record.partition(), record.offset());
                }
                messageProcessor.processMessage(record.value(), record.offset());
                partitionOffsetMap.put(record.partition(), record.offset());
            }
            if (!records.isEmpty()) {
                logger.info("Invoking commit for partition/offset : {}", partitionOffsetMap);
                consumer.commitAsync(offsetLoggingCallback);
            }
        }
    } catch (WakeupException e) {
        logger.warn("ConsumerWorker [consumerId={}] got WakeupException - exiting ... Exception: {}",
                consumerId, e.getMessage());
    } catch (Exception e) {
        logger.error("ConsumerWorker [consumerId={}] got Exception - exiting ... Exception: {}",
                consumerId, e.getMessage());
    } finally {
        logger.warn("ConsumerWorker [consumerId={}] is shutting down ...", consumerId);
        consumer.close();
    }
}
I also have an OffsetCommitCallbackImpl like the one below. It maintains the partitions and their committed offsets in a map, and logs whenever an offset is committed.
@Override
public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {
    if (exception == null) {
        offsets.forEach((topicPartition, offsetAndMetadata) -> {
            partitionOffsetMap.put(topicPartition, offsetAndMetadata);
            logger.info("Offset position during the commit for consumerId : {}, partition : {}, offset : {}",
                    Thread.currentThread().getName(), topicPartition.partition(), offsetAndMetadata.offset());
        });
    } else {
        offsets.forEach((topicPartition, offsetAndMetadata) ->
                logger.error("Offset commit error, and partition offset info : {}, partition : {}, offset : {}",
                        exception.getMessage(), topicPartition.partition(), offsetAndMetadata.offset()));
    }
}
Problem/Issue:
I noticed that I miss events/messages whenever I restart the application (bring it down and back up). When I looked closely at the logging, comparing the offsets committed before shutdown (via the OffsetCommitCallback logging) with the offsets picked up for processing after restart, I saw that for certain partitions we did not pick up at the offset where we left off before shutdown; sometimes the start offset for a partition is around 1000 greater than the committed offset.
NOTE: This happens to about 8 out of 40 partitions.
If you look closely at the logging in the run method, there is one log statement where I print the offset map right before invoking the async commit. For example, if the last such log before shutdown shows 10 for partition 1, after restart the first offset we process for partition 1 would be around 100, and I validated that we are missing exactly 90 messages.
Can anyone think of a reason why this would be happening?
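One thing worth checking (an assumption based on the code shown, not a confirmed diagnosis): commitAsync() gives no guarantee that the commit reaches the broker before shutdown, so async commits still in flight when the JVM stops are simply lost. The usual safeguard is a final synchronous commit before close(), for example in the finally block of the run method above:

} finally {
    try {
        consumer.commitSync(); // blocks until the broker acknowledges the last polled offsets
    } finally {
        logger.warn("ConsumerWorker [consumerId={}] is shutting down ...", consumerId);
        consumer.close();
    }
}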