How to set the HTTP thread pool max size on Quarkus - Java

Sorry, I don't really understand how the HTTP thread pool works in Quarkus.
I want to set the HTTP thread pool's maximum size, but the Quarkus docs list a lot of config properties related to threads.
I tried setting each of them and checking whether it worked via Prometheus, but the Prometheus base_thread_max_count metric never matches my config.
So I'd like to know how to set it and how to verify it.
Thanks so much

If you mean the pool that handles HTTP I/O events only, then that's quarkus.http.io-threads.
If you mean the worker pool that handles blocking operations, that's quarkus.vertx.worker-pool-size.
The metric base_thread_max_count isn't really relevant here: it shows the peak number of live threads in the whole JVM, so it takes into account various threads unrelated to the HTTP layer. To see the number of active I/O threads, I'd suggest taking a thread dump and counting the threads named vert.x-eventloop-thread-*; for worker threads, count executor-thread-*.
Also bear in mind that worker threads are created lazily, so setting a high value for quarkus.vertx.worker-pool-size might not have any immediately visible effect.
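As a concrete example, the two settings would go in application.properties roughly like this (a sketch; the values are placeholders, not recommendations):

quarkus.http.io-threads=16
quarkus.vertx.worker-pool-size=64

After a restart you can verify the worker setting under load by taking a thread dump and counting the executor-thread-* entries, since that pool only grows as blocking work arrives.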

Related

SimpleMessageListener vs DirectMessageListener

I'm trying to see the difference between DirectMessageListener and SimpleMessageListener. I have this drawing just to ask if it is correct.
Let me try to describe how I understood it, and maybe you can tell me if it is correct.
In front of spring-rabbit there is the rabbit-client Java library, which connects to the RabbitMQ server and delivers messages to the spring-rabbit library. This client has a ThreadPoolExecutor (which in this case has, I think, 16 threads). So it does not matter how many queues there are in Rabbit - if there is a single connection, I get 16 threads. These same threads are reused if I use DirectMessageListener, and the handler method listen is executed on all of these 16 threads when messages arrive. So if I do something complex in the handler, rabbit-client must wait for a thread to get free before it can fetch the next message on that thread. Also, if I increase setConsumersPerQueue to, let's say, 20, it will create 20 consumers per queue, but not 20 threads. These 20*5 consumers in my case will all reuse the 16 threads offered by the ThreadPoolExecutor?
SimpleMessageListener, on the other hand, would have its own threads. If concurrent consumers == 1 (I guess the default, as in my case), it has only one thread. Whenever there is a message on any of the secondUseCase* queues, the rabbit-client Java library will use one of its 16 threads (in my case) to forward the message to the single internal thread that I have in SimpleMessageListener. As soon as it is forwarded, the rabbit-client thread is freed and can go back to fetching more messages from the Rabbit server.
Your understanding is correct.
The main difference is that, with the DMLC, all listeners in all listener containers are called on the shared thread pool in the amqp-client (you can increase the 16 if needed). You need to ensure the pool is large enough to handle your expected concurrency across all containers, otherwise you will get starvation.
It's more efficient because threads are shared.
With the SMLC, you don't have to worry about that, but at the expense of having a thread per concurrent consumer. In that case, a small pool in the amqp-client will generally be sufficient.
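A minimal sketch of that DMLC setup, wired manually rather than through Spring Boot (the queue names and the pool size of 32 are made up for illustration; the relevant calls are CachingConnectionFactory.setExecutor, which backs the shared amqp-client consumer threads, and setConsumersPerQueue):

import java.util.concurrent.Executors;

import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.listener.DirectMessageListenerContainer;

public class DmlcSketch {

    public static void main(String[] args) {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        // The executor set here is handed to the underlying amqp-client connection,
        // so it replaces the default pool (the 16 threads mentioned in the question)
        // that all DMLC consumers share.
        connectionFactory.setExecutor(Executors.newFixedThreadPool(32));

        DirectMessageListenerContainer container = new DirectMessageListenerContainer(connectionFactory);
        container.setQueueNames("secondUseCase1", "secondUseCase2");
        container.setConsumersPerQueue(20); // 20 consumers per queue, but no dedicated threads
        container.setMessageListener(new MessageListener() {
            @Override
            public void onMessage(Message message) {
                // listener work runs directly on the shared amqp-client pool
            }
        });
        container.start();
    }
}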

How to tune WildFly managed-executor-service thread pool parameters

I have some questions regarding performance tuning.
I'm using a 64-bit Linux server, Java 1.8, and WildFly 10.0.0.Final. I developed a web service that uses a thread factory and a managed executor service through the WildFly configuration.
The purpose of my web service is to receive a request containing a large amount of data, save the data, create a new thread to process it, and then return the response. This way the web service can respond quickly without waiting for the data processing to finish.
The configured managed-executor-service holds a thread pool configured specifically for this purpose.
From my understanding of the configuration, core-thread defines how many threads stay alive in the thread pool. When all core threads are busy, new requests are put in the queue; when the queue is full, additional threads are created, and these newly created threads are terminated after some idle time.
I'm trying to figure out the best combination of settings for the thread pool. These are my concerns:
If core-thread is set too small (say 5), response times may suffer because only 5 active threads are processing data while the rest of the requests sit in the queue until it fills up; response times won't look good under heavy load.
If I set core-thread high (say 100), then even when the system is not busy there will still be 100 live threads in the pool. I don't see any configuration option that allows these threads to be terminated, and I'm concerned that is too many idle threads.
Does anyone have suggestions on how to set the parameters to handle both heavy-load and light-load situations without leaving too many idle threads in the pool? I'm not really familiar with this area, e.g. how many idle threads is too many, or how to measure that.
The following is the configuration for the thread factory and the managed-executor-service.
<managed-thread-factory name="UploadThreadFactory" jndi-name="java:jboss/ee/concurrency/factory/uploadThreadFactory"/>
<managed-executor-service name="UploadManagedExecutor" jndi-name="java:jboss/ee/concurrency/executor/uploadManagedExecutor" context-service="default" thread-factory="UploadThreadFactory" hung-task-threshold="60000" core-thread="5" max-thread="100" keep-alive-time="5000" queue-length="500"/>
Thanks a lot for your help,
Helen
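(For reference: the plain java.util.concurrent.ThreadPoolExecutor follows the same core/queue/max/keep-alive rules described in the question, so a small stand-alone sketch with the values copied from the XML above can be handy for experimenting with the trade-offs. Whether WildFly's managed executor exposes an equivalent of allowCoreThreadTimeOut is a separate question; the class name here is made up.)

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UploadExecutorSketch {
    public static void main(String[] args) {
        // Mirrors core-thread=5, max-thread=100, keep-alive-time=5000 ms,
        // queue-length=500 from the managed-executor-service config above.
        ThreadPoolExecutor uploadExecutor = new ThreadPoolExecutor(
                5,                               // core threads kept alive while idle
                100,                             // extra threads are only created once the queue is full
                5000, TimeUnit.MILLISECONDS,     // threads above the core size die after this idle time
                new LinkedBlockingQueue<>(500)); // bounded queue; tasks wait here first

        // Lets even the core threads time out when the system is quiet,
        // which is the JDK answer to the "too many idle threads" concern.
        uploadExecutor.allowCoreThreadTimeOut(true);

        uploadExecutor.submit(() -> System.out.println("processing upload..."));
        uploadExecutor.shutdown();
    }
}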

Camel split EIP use of the thread pool doesn't seem to go over the minimum number of threads

I am using Apache Camel 2.15 and found an interesting behavior.
I am placing data received through a REST API call into a Camel route that is a direct endpoint. This route in turn uses the split EIP and calls another Camel route that is also a direct endpoint.
Here is what the relevant Camel code looks like
from("direct:activationInputChannel").routeId("cbr_activation_route")
// removed some processes
.split(simple("${body}"))
.parallelProcessing()
.to("direct:activationItemEndPoint")
.end()
// removed some processes
and
from("direct:activationItemEndPoint").routeId("cbr_activation_item_route")
.process(exchange -> this.doSomething(exchange))
// removed some processes
The use of the direct endpoint should cause the calls to be synchronous and run on the same thread. As expected, the split/parallelProcessing usage causes the second route to run on a separate thread. The splitter is using the default thread pool.
When I ran some load tests against the application using JMeter, I found that the split route was becoming a bottleneck. With a load test using 200 threads, I observed in JConsole that the Tomcat HTTP thread pool had a currentThreadCount of 200. I also observed that the Camel route cbr_activation_route had 200 ExchangesInflight.
The problem was that cbr_activation_item_route only had 50 ExchangesInflight. The number 50 corresponded to the poolSize set for the default pool. The maxPoolSize was set to 500 and the maxQueueSize was 1000 (the default).
The number of in-flight exchanges for this route never rose above the minimum pool size, even though there were plenty of requests queued up and threads available. When I changed the poolSize of the Camel default thread pool to 200, cbr_activation_item_route used the new minimum and had 200 ExchangesInflight. It seems that Camel would not use more threads than the minimum, even when more threads were available and the route was under load.
Is there a setting or something that I could be missing that is causing this behavior? Why wouldn't Camel use 200 threads in the first test run when the minimum was set to 50?
Thanks
I agree with Frederico's answer about the behavior of Java's ThreadPoolExecutor: it prefers to add new requests to the queue instead of creating more threads once 'corePoolSize' threads have been reached.
If you want your TPE to add more threads as requests come in after 'corePoolSize' has been reached, there is a slightly hacky way of achieving this, based on the fact that the executor calls the offer() method on its BlockingQueue to queue requests. If offer() returns false, the executor tries to create a new thread, and if it can't (the maximum pool size has been reached) it calls the rejectedExecutionHandler. It is possible to override offer() and create your own version of the ThreadPoolExecutor that scales the number of threads based on load.
I found an example of this here: https://github.com/kimchy/kimchy.github.com/blob/master/_posts/2008-11-23-juc-executorservice-gotcha.textile
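A sketch of that trick (illustrative, not the exact code from the linked post; the class name is made up): the queue refuses offers while the pool can still grow, and the rejection handler falls back to actually queueing the task once the maximum pool size has been reached.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class EagerScalingPool {

    public static ThreadPoolExecutor create(int core, int max, int queueCapacity) {
        // Returning false from offer() makes the executor try to add a worker
        // thread (up to max) instead of queueing the task.
        LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>(queueCapacity) {
            @Override
            public boolean offer(Runnable task) {
                return false;
            }
        };

        return new ThreadPoolExecutor(core, max, 60, TimeUnit.SECONDS, queue,
                (task, executor) -> {
                    // The pool is already at max size: fall back to really queueing
                    // the task, where an idle worker will eventually pick it up.
                    try {
                        queue.put(task);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        throw new RejectedExecutionException(e);
                    }
                });
    }
}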
That's the expected behavior. This has nothing to do with Camel itself, but with Java's ThreadPoolExecutor in general.
If you read the linked docs, they say:
If there are more than corePoolSize but less than maximumPoolSize threads running, a new thread will be created only if the queue is full.
If you set maxQueueSize to 1000 with a poolSize of 50, you need 1050 outstanding requests before any new threads are created; with only 200 concurrent requests the queue never fills, so the pool never grows beyond 50. Try telling Camel to use a SynchronousQueue if you don't want your requests to be queued (not recommended, IMHO).
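For comparison, this is roughly what the SynchronousQueue variant looks like in plain Java (the 50/500 values mirror the pool from the question; the class name is made up): there is no buffering, so each submission either reuses an idle thread or starts a new one up to the maximum, after which submissions are rejected.

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SynchronousQueuePoolSketch {
    public static void main(String[] args) {
        // With a SynchronousQueue the pool grows under load instead of queueing,
        // and submissions are rejected once all 500 threads are busy.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                50, 500, 60, TimeUnit.SECONDS, new SynchronousQueue<>());

        pool.submit(() -> System.out.println("runs immediately on a (possibly new) thread"));
        pool.shutdown();
    }
}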

Multiple request handling in Java using the concurrency API

I have a server which can handle 1000 threads simultaneously.
So for handling requests, I implemented the producer-consumer pattern in my code, similar to what a servlet container does.
At any given time we can have more than 3000 requests, so what should the queue size be, and why?
Let's assume we have a queue size of 2000; what should we do if 4000 requests arrive? How can we handle this scenario? (The easiest way is to discard the extra requests, but we need to handle each and every request.)
I want to generate 20 parallel threads, just like JMeter does. How can I do that using the Java concurrency API?
In the above scenario, what type of thread pool should we use, e.g. CachedThreadPool or something else, and why?
There are two dimensions of bounds to think about: one dimension is how many threads you can use (which is ultimately usually bounded by having enough memory for the corresponding thread stacks -- on a modern JVM the default is a megabyte of stack per thread so 1000 threads is a gigabyte of memory for thread stacks alone). The other dimension is the bounds on your work queue.
For request serving, you probably want a fixed-size thread pool (sized according to Little's Law for your workload) and a queue that can grow as needed. For a request-based server, an unbounded queue probably gives the most graceful degradation, although you might want to experiment with a bounded queue and a load-shedding RejectedExecutionHandler.
All of these options can be configured on ThreadPoolExecutor, which is probably the implementation you want to use.
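A minimal sketch of that shape (the pool size of 100 and the class name are placeholders; in practice you would derive the size from Little's Law, i.e. threads ~= arrival rate * average service time):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RequestPoolSketch {
    public static void main(String[] args) {
        // Fixed-size pool (core == max) with an unbounded queue, as suggested above:
        // requests wait in the queue rather than being rejected.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                100, 100, 0, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<>());

        // The bounded-queue alternative with a load-shedding handler would look like:
        // new ThreadPoolExecutor(100, 100, 0, TimeUnit.MILLISECONDS,
        //         new LinkedBlockingQueue<>(2000), new ThreadPoolExecutor.DiscardPolicy());

        pool.submit(() -> System.out.println("handling request"));
        pool.shutdown();
    }
}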
Based on your requirement to process each and every request, you should consider using a queue (ActiveMQ is just a suggestion; it can be any queue implementation).
Put every incoming request into the queue and have a thread pool of consumers consume from it. This way you can guarantee that every request gets processed, and your application can scale without further changes.
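A small in-process sketch of that pattern using a plain BlockingQueue instead of ActiveMQ (all sizes, names and the String payload are made up for illustration; the consumer loops keep running until the pool is shut down):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueAndConsumersSketch {

    public static void main(String[] args) throws InterruptedException {
        // Every incoming request is put on the queue by the producer side...
        BlockingQueue<String> requests = new LinkedBlockingQueue<>(2000);

        // ...and a fixed pool of 20 consumers (like the 20 parallel threads
        // mentioned in the question) takes requests off the queue and processes them.
        ExecutorService consumers = Executors.newFixedThreadPool(20);
        for (int i = 0; i < 20; i++) {
            consumers.submit(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        String request = requests.take(); // blocks until a request is available
                        process(request);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // exit the worker loop cleanly
                }
            });
        }

        // Producer side: put() blocks when the queue is full, so no request is discarded.
        requests.put("request-1");

        // Call consumers.shutdownNow() when the application stops to interrupt the workers.
    }

    private static void process(String request) {
        System.out.println("processing " + request);
    }
}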

How to prioritise specific threads in Tomcat

I am working on a Java web application for Tomcat 6 that offers suggest functionality. This means a user types in free text and gets suggestions for completing the input. It is essential that the web application reacts very fast, otherwise the feature makes no sense.
This web application makes suggestions for data that can be modified at any time. If new data is available, the suggest index is rebuilt from scratch in the background by a daemon thread. When the data preparation process is finished, the old index is thrown away and the new index comes into play. This has the advantage of no service gaps.
The background process for data preparation costs a lot of CPU power. This sometimes causes the service to hang for more than a second, which makes it less usable and results in a bad user experience.
My first attempt to solve the problem was to pause all background data preparation threads whenever a request has to be processed. This mitigates the problem a bit, but the service is still not smooth. It is not that the code generating the suggestions itself gets slower; rather, it seems as if Tomcat does not always start the thread for the request immediately, because of the high load in other threads (I guess).
I have no idea how to tackle this problem. Can I interact with Tomcat's thread scheduling and tell it to prioritise request handling? As expected, adjusting the thread priority did not help either. I did not find any Tomcat configuration options that help. Or is there no way to deal with this, and do I have to change the software design? I am at a loss. Do you have any hints on how to approach this problem?
JanP
I would not change the thread priority. By doing that you are slowing down other threads and will slow down other users. If you have synchronized data, you can also run into a priority inversion problem, where your higher-priority threads end up waiting on lower-priority threads to release locks on the data.
Instead I would look at how to optimize the data generation process. What are you doing there ?
EDIT:
You could create an ExecutorService and send messages to it through a queue, like in this example: java thread pool keep running. To be able to change the thread priority of the tasks, instead of calling ExecutorService pool = Executors.newFixedThreadPool(3); you would create a ThreadFactory that lowers the priority of the threads, then call ExecutorService pool = Executors.newSingleThreadExecutor(threadFactory);
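A minimal sketch of that combination (the thread name, daemon flag and priority choice are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class LowPriorityExecutorExample {

    public static void main(String[] args) {
        // ThreadFactory that lowers the priority of the background (index building)
        // threads so the request-handling threads are favoured by the scheduler.
        ThreadFactory lowPriorityFactory = runnable -> {
            Thread t = new Thread(runnable, "suggest-index-builder");
            t.setPriority(Thread.MIN_PRIORITY); // background work gets the lowest priority
            t.setDaemon(true);                  // don't keep the JVM alive for it
            return t;
        };

        ExecutorService pool = Executors.newSingleThreadExecutor(lowPriorityFactory);
        pool.submit(() -> {
            // rebuild the suggest index here
        });
        pool.shutdown();
    }
}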
