I have a simple HTTP Vert.x-based server with the following code:
public class JdbcVertx extends AbstractVerticle {

    private static int cnt;

    @Override
    public void start() throws Exception {
        this.vertx.createHttpServer()
            .requestHandler(request -> {
                JdbcVertx.cnt++;
                System.out.println("Request " + JdbcVertx.cnt + " " + Thread.currentThread().getName());
                this.vertx.executeBlocking(future -> {
                    System.out.println("Blocking: " + Thread.currentThread().getName());
                    final String resp = this.dbcall();
                    future.complete(resp);
                }, asyncResp -> {
                    request.response().putHeader("content-type", "text/html");
                    if (asyncResp.succeeded()) {
                        request.response().end(asyncResp.result().toString());
                    } else {
                        request.response().end("ERROR");
                    }
                });
            }).listen(8080);
    }

    private String dbcall() {
        try {
            Thread.sleep(2000);
            System.out.println("From sleep: " + Thread.currentThread().getName());
        } catch (InterruptedException ex) {
            Logger.getLogger(JdbcVertx.class.getName()).log(Level.SEVERE, null, ex);
        }
        return UUID.randomUUID().toString();
    }
}
From the official docs I have read that the default worker pool size is 20. But this is my output:
Request 1 vert.x-eventloop-thread-0
Blocking: vert.x-worker-thread-0
Request 2 vert.x-eventloop-thread-0
Request 3 vert.x-eventloop-thread-0
Request 4 vert.x-eventloop-thread-0
Request 5 vert.x-eventloop-thread-0
From sleep: vert.x-worker-thread-0
Blocking: vert.x-worker-thread-0
Request 6 vert.x-eventloop-thread-0
From sleep: vert.x-worker-thread-0
I have two questions:
1) Why does my verticle use only one worker thread?
2) From the output
Request 1 vert.x-eventloop-thread-0
Blocking: vert.x-worker-thread-0
Request 2 vert.x-eventloop-thread-0
Request 3 vert.x-eventloop-thread-0
Request 4 vert.x-eventloop-thread-0
Request 5 vert.x-eventloop-thread-0
the server gets the first request, puts it on a worker thread, and then gets requests 2, 3, 4 and 5. Why does it work this way? Maybe the blocking tasks are put into a queue for the worker pool?
Thanks in advance.
BTW, I deploy using the console (vertx run JdbcVertx.java).
That's an excellent question.
executeBlocking() actually has three parameters: blockingHandler, ordered and resultHandler.
When you call it with only two arguments, ordered defaults to true.
For that reason, all requests within the same context will receive the same worker thread - they're executed sequentially.
Set it to false to see that all worker threads start working.
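For example, the call from the question only needs the boolean passed explicitly as the middle argument (a minimal sketch of the same handler):
this.vertx.executeBlocking(future -> {
    // Blocking part, unchanged from the question
    future.complete(this.dbcall());
}, false, asyncResp -> { // ordered = false: blocking tasks may run on different worker threads in parallel
    request.response().putHeader("content-type", "text/html");
    request.response().end(asyncResp.succeeded() ? asyncResp.result().toString() : "ERROR");
});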
You can also check this example of mine:
https://github.com/AlexeySoshin/VertxAnswers/blob/master/src/main/java/clientServer/ClientWithExecuteBlocking.java
And here you can see that it's actually being put on the queue:
https://github.com/eclipse/vert.x/blob/master/src/main/java/io/vertx/core/impl/ContextImpl.java#L280
I wanted to enable manual commit for my consumer, and for that I have the below code + configuration. Here I am trying to manually commit the offset in case the signIn client throws an exception. Up to the point of manually committing the offset it works fine, but with this code the message which failed to process is not consumed again. So what I want to do is call the seek method and consume the same failed offset again -
consumer.seek(new TopicPartition(atCommunityTopic, communityFeed.partition()), communityFeed.offset());
But the actual problem is: how do I get the partition and offset details here? If I could somehow get the ConsumerRecord object along with the message, it would work.
spring.cloud.stream.kafka.bindings.atcommnity.consumer.autoCommitOffset=false
And below is the consumer code using @StreamListener:
@StreamListener(ConsumerConstants.COMMUNITY_IN)
public void handleCommFeedConsumer(
        @Payload Account consumerRecords,
        @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer,
        @Header(KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment) {
    consumerRecords.forEach(communityFeed -> {
        try {
            AccountClient.signIn(
                    AccountIn.builder()
                            .Id(atCommunityEvent.getId())
                            .build());
            log.debug("Calling Client for Id : "
                    + communityEvent.getId());
        } catch (RuntimeException ex) {
            log.info("");
            // consumer.seek(new TopicPartition(communityTopic, communityFeed.partition()), communityFeed.offset());
            return;
        }
        acknowledgment.acknowledge();
    });
}
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#consumer-record-metadata
@Header(KafkaHeaders.PARTITION_ID) int partition
@Header(KafkaHeaders.OFFSET) long offset
IMPORTANT
Seeking the consumer yourself might not do what you want because the container may already have other records after this one; it's best to throw an exception and the error handler will do the seeks for you.
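If you still want the metadata, applied to the listener from the question it might look roughly like this (a sketch only: the header constants are the ones shown above and may differ slightly between spring-kafka versions; Account, AccountClient and atCommunityTopic are the question's own names):
@StreamListener(ConsumerConstants.COMMUNITY_IN)
public void handleCommFeedConsumer(
        @Payload Account communityFeed,
        @Header(KafkaHeaders.PARTITION_ID) int partition,
        @Header(KafkaHeaders.OFFSET) long offset,
        @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer,
        @Header(KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment) {
    try {
        AccountClient.signIn(AccountIn.builder().Id(communityFeed.getId()).build());
        acknowledgment.acknowledge();
    } catch (RuntimeException ex) {
        // With the injected metadata the seek from the question becomes possible,
        // but as noted above, throwing and letting the error handler re-seek is preferred.
        consumer.seek(new TopicPartition(atCommunityTopic, partition), offset);
    }
}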
Short description:
I am currently working in Android Studio with OneTimeWorkRequest(). What I want to achieve is a background worker that runs and repeats "almost" at a specific time, like every hour (09:00, 10:00, etc.). It does not have to be exact, but it should not drift too much over a long period of running.
I already know that a worker only runs every 15 minutes at minimum due to Android restrictions (battery saving mechanisms and so on). I do not need the worker to run exactly at the given time, but at least close to a target time! That is why I used OneTimeWorkRequest() instead of PeriodicWorkRequest(): I needed the possibility of varying the interval for the worker, since the documentation mentions that PeriodicWorkRequest() will add up a time delay from one execution to the next.
What I did:
I have created a custom Worker class and used OneTimeWorkRequest() in my MainActivity to create the background worker. I have set the worker's setInitialDelay() to 20 minutes for testing purposes. Every time the worker runs doWork(), it creates another OneTimeWorkRequest() at the end of execution, so a chain of workers gets created over time. The worker gets queued with the enqueueUniqueWork() method from WorkManager.getInstance(context), and the interval is calculated.
The Problem:
Every time I close the app's process and reopen the app, the worker executes immediately. Also, when I list all workers created with the specified tag, it lists many workers. It seems to me that my logic creates too many workers without closing the old ones, or it creates multiple chains. Yet I thought enqueueUniqueWork() would replace or create only a unique/single worker with the given tag... In addition, the WorkManager.getInstance(this).cancelAllWorkByTag(TAG) call does not cancel the workers listed later in this post!
Right now it is not important for me how to schedule worker execution at a given time, but how to create a consistent worker chain with OneTimeWorkRequest() that does not create a "worker overload", if possible. Still, I am open to alternative solutions.
So again:
Why does the worker execute right after closing the process and reopening the app?
Why are there so many workers listed?
Does my logic create one single worker chain or multiple ones?
Is my logic even consistent/usable like this?
Why are the workers not cancelled by .cancelAllWorkByTag(TAG)?
Code:
// MainActivity.java:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    ...
    // For testing purpose used...
    ListScheduledWorker(TAG);
    WorkManager.getInstance(this).cancelAllWorkByTag(TAG);
    ListScheduledWorker(TAG);
    // ...until here.
    CreateOneTimeWorker();
    ...
}

private void CreateOneTimeWorker() {
    long timeValue = 20;
    TimeUnit timeUnit = TimeUnit.MINUTES;
    String workerTag = MhdExpirationPushNotification.class.getSimpleName();
    OneTimeWorkRequest worker = new OneTimeWorkRequest.Builder(CustomPeriodicallyWorker.class)
            .setInitialDelay(timeValue, timeUnit)
            .addTag(workerTag)
            .setConstraints(Constraints.NONE)
            .build();
    WorkManager.getInstance(this).enqueueUniqueWork(workerTag, ExistingWorkPolicy.KEEP, worker);
}
// CustomPeriodicallyWorker.java:
public Result doWork() {
    Log.v(TAG, "Work is in progress");
    try {
        CustomDateFormatter currentDateTime = new CustomDateFormatter();
        CustomDateFormatter targetDateTime = new CustomDateFormatter();
        targetDateTime.AddMinutes(20);
        long timeDifference = targetDateTime.GetDateTime().getTime() - currentDateTime.GetDateTime().getTime();
        OneTimeWorkRequest worker = new OneTimeWorkRequest.Builder(CustomPeriodicallyWorker.class)
                .setInitialDelay(timeDifference, TimeUnit.MILLISECONDS)
                .addTag(TAG)
                .build();
        WorkManager.getInstance(context).enqueueUniqueWork(TAG, ExistingWorkPolicy.REPLACE, worker);
    } catch (Exception e) {
        e.printStackTrace();
    }
    Log.v(TAG, "Work finished");
    return Result.success();
}
// The function in MainActivity.java that lists all scheduled workers:
private boolean ListScheduledWorker(String tag) {
    WorkManager instance = WorkManager.getInstance(this);
    ListenableFuture<List<WorkInfo>> statuses = instance.getWorkInfosByTag(tag);
    try {
        boolean running = false;
        List<WorkInfo> workInfoList = statuses.get();
        for (WorkInfo workInfo : workInfoList) {
            Log.i(TAG, "Scheduled Worker running with ID: " + workInfo.getId());
            WorkInfo.State state = workInfo.getState();
            running = state == WorkInfo.State.RUNNING || state == WorkInfo.State.ENQUEUED;
        }
        return running;
    } catch (ExecutionException e) {
        e.printStackTrace();
        return false;
    } catch (InterruptedException e) {
        e.printStackTrace();
        return false;
    }
}
// The ListScheduledWorker(String tag) prints me this out:
I/TAG: Scheduled Worker running with ID: 27bb31ed-5984-434f-a6ca-08b50462b3df
Scheduled Worker running with ID: 2d6abbb1-3a55-4652-83ca-60617631e0ab
Scheduled Worker running with ID: 3e89851d-7e0b-410d-86b8-e664a4d710f0
Scheduled Worker running with ID: 430e77b2-5fb8-4596-acd5-51e35a6a538b
Scheduled Worker running with ID: 73b57443-8195-4c55-a24d-bd643b88e13c
Scheduled Worker running with ID: 74c8a44b-2a9a-4448-b3d5-e2c085be3d06
Scheduled Worker running with ID: 75deabd3-08e8-403a-b9d7-6c23f114a908
Scheduled Worker running with ID: 89ec6239-e215-4ea1-a7bc-fcaa8b63065c
Scheduled Worker running with ID: 9363038e-be74-4a83-9d1f-eeeda35ebbfa
Scheduled Worker running with ID: 9a09806f-f0cf-43c1-a4f6-1f10448904f4
Scheduled Worker running with ID: c6686c56-fd8a-4866-8eb1-5124654b6cb7
Scheduled Worker running with ID: d3343328-db8f-4c8d-8055-a1acfc9d1c5c
Scheduled Worker running with ID: dea9272f-6770-45f0-ba66-2c845e156d7b
Scheduled Worker running with ID: eb4c111c-97c5-46c3-ba5c-ceefe652398c
Scheduled Worker running with ID: fc71f8dc-1785-43cd-9a44-1fe4e913ca6e
Scheduled Worker running with ID: fca1bcea-97d9-4066-8b5a-8b5496ffed1e
...and the list grows every time I rebuild/restart the app in Android Studio or on my physical device.
The below code returns a timeout in the client (Elasticsearch client) when the number of records is higher.
CompletableFuture<BulkByScrollResponse> future = new CompletableFuture<>();
client.reindexAsync(request, RequestOptions.DEFAULT, new ActionListener<BulkByScrollResponse>() {
    @Override
    public void onResponse(BulkByScrollResponse bulkByScrollResponse) {
        future.complete(bulkByScrollResponse);
    }

    @Override
    public void onFailure(Exception e) {
        future.completeExceptionally(e);
    }
});
BulkByScrollResponse response = future.get(10, TimeUnit.MINUTES); // client timeout occurred before this timeout
Below is the client config.
connectTimeout: 60000
socketTimeout: 600000
maxRetryTimeoutMillis: 600000
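For context, the connect and socket timeouts above are applied through the low-level client builder, roughly like this (host and port are placeholders):
RestHighLevelClient client = new RestHighLevelClient(
        RestClient.builder(new HttpHost("localhost", 9200))
                .setRequestConfigCallback(requestConfig -> requestConfig
                        .setConnectTimeout(60_000)      // connectTimeout
                        .setSocketTimeout(600_000)));   // socketTimeout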
Is there a way to wait indefinitely until the re-indexing completes?
submit the reindex request as a task:
TaskSubmissionResponse task = esClient.submitReindexTask(reindex, RequestOptions.DEFAULT);
acquire the task id:
TaskId taskId = new TaskId(task.getTask());
then check the task status periodically:
GetTaskRequest taskQuery = new GetTaskRequest(taskId.getNodeId(), taskId.getId());
GetTaskResponse taskStatus;
do {
    Thread.sleep(TimeUnit.MINUTES.toMillis(1));
    taskStatus = esClient.tasks()
            .get(taskQuery, RequestOptions.DEFAULT)
            .orElseThrow(() -> new IllegalStateException("Reindex task not found. id=" + taskId));
} while (!taskStatus.isCompleted());
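For reference, the reindex request submitted in the first step could be built along these lines (the index names here are placeholders, not from the original post):
// Hypothetical source/destination index names, for illustration only.
ReindexRequest reindex = new ReindexRequest();
reindex.setSourceIndices("source-index");
reindex.setDestIndex("dest-index");
reindex.setRefresh(true); // make the destination index searchable once the task finishes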
The Elasticsearch Java API docs about task handling just suck.
Ref
I don't think it's a good choice to wait indefinitely for the re-indexing process to complete, or to give a very high timeout value, as this is not a proper fix and will cause more harm than good.
Instead you should examine the response, add more debug logging to find the root cause, and address it. Also please have a look at my tips to improve re-indexing speed, which should fix some of your underlying issues.
https://pulsar.apache.org/api/client/2.4.0/org/apache/pulsar/client/api/Consumer.html#seek-long-
When calling the seek(long timestamp) method on the consumer, does the timestamp have to equal the exact time a message was published?
For example, if I sent three messages at t=1, 5, 7 and I call consumer.seek(3), will I get an error? Or will my consumer get reset to t=3, so that if I call consumer.next(), I'll get my second message?
Thanks in advance,
The Consumer#seek(long timestamp) method allows you to reset your subscription to a given timestamp. After seeking, the consumer will start receiving messages with a publish time equal to or greater than the timestamp passed to the seek method.
The below example shows how to reset a consumer to the previous hour:
try (
        // Create PulsarClient
        PulsarClient client = PulsarClient
                .builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        // Create Consumer subscription
        Consumer<String> consumer = client.newConsumer(Schema.STRING)
                .topic("my-topic")
                .subscriptionName("my-subscription")
                .subscriptionMode(SubscriptionMode.Durable)
                .subscriptionType(SubscriptionType.Key_Shared)
                .subscriptionInitialPosition(SubscriptionInitialPosition.Latest)
                .subscribe()
) {
    // Seek consumer to previous hour
    consumer.seek(Instant.now().minus(Duration.ofHours(1)).toEpochMilli());

    while (true) {
        final Message<String> msg = consumer.receive();
        System.out.printf(
                "Message received: key=%s, value=%s, topic=%s, id=%s%n",
                msg.getKey(),
                msg.getValue(),
                msg.getTopicName(),
                msg.getMessageId().toString());
        consumer.acknowledge(msg);
    }
}
Note that if you have multiple consumers that belong to the same subscription (e.g., Key_Shared), then all of the consumers will be reset.
I have the below code to get data from Redis asynchronously. By default, the get() call in the Lettuce library uses the nio-event thread pool.
Code 1:
StatefulRedisConnection<String, String> connection = redisClient.connect();
RedisAsyncCommands<String, String> command = connection.async();
CompletionStage<Void> result = command.get(id)
        .thenAccept(code ->
                // Sample code to print the thread ID
                logger.log(Level.INFO, "Thread Id " + Thread.currentThread().getName()));
Thread Id printed is lettuce-nioEventLoop-6-2.
Code 2:
CompletionStage<Void> result = command.get(id)
        .thenAcceptAsync(code -> {
            logger.log(Level.INFO, "Thread Id " + Thread.currentThread().getName());
            // my original code
        }, executors);
Thread Id printed is pool-1-thread-1.
My questions:
Is there a way to pass my executors?
Is it a recommended approach to use the nio-event thread pool to get the data from Redis (using the get() call)?
Lettuce version: 5.2.2.RELEASE
thanks,
Ashok
The class io.lettuce.core.RedisClient has a creator method:
public static RedisClient create(ClientResources clientResources, String uri) {
    assertNotNull(clientResources);
    LettuceAssert.notEmpty(uri, "URI must not be empty");
    return create(clientResources, RedisURI.create(uri));
}
You can build your ClientResources with ClientResources#builder() and pass anything you want. Refer to the JavaDoc; there are several things you can customize (see the sketch after this list):
EventLoopGroupProvider to obtain particular EventLoopGroups
EventExecutorGroup to perform internal computation tasks
Timer for scheduling
EventBus for client event dispatching
EventPublisherOptions
CommandLatencyCollector to collect latency details. Requires the HdrHistogram and LatencyUtils libraries.
DnsResolver to customize how hostnames are resolved.
Reconnect Delay.
Tracing to trace Redis commands.
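For example, a minimal sketch (the pool sizes and URI are illustrative only, not recommendations):
// Build ClientResources with custom thread pool sizes, then hand it to the client.
ClientResources resources = ClientResources.builder()
        .ioThreadPoolSize(4)           // netty I/O (nio-event-loop) threads
        .computationThreadPoolSize(4)  // internal computation threads
        .build();

RedisClient redisClient = RedisClient.create(resources, "redis://localhost:6379");

// ClientResources are shared and should be shut down when the application stops:
// redisClient.shutdown();
// resources.shutdown();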