How to tune WildFly managed-executor-service thread pool parameters - java

I have a question regarding performance tuning.
I'm using a 64-bit Linux server, Java 1.8, and WildFly 10.0.0.Final. I developed a web service which uses a thread factory and a managed executor service configured through WildFly.
The purpose of my web service is to receive a request containing a large amount of data, save the data, create a new thread to process it, and then return the response. This way the web service can respond quickly without waiting for data processing to finish.
The configured managed-executor-service holds a thread pool config specifically for this purpose.
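For context, here is a minimal sketch of how such an executor is typically injected and used (the JNDI name matches the configuration shown below; the class name, the saveData/processData methods, and the UploadData type are placeholders, not part of my actual code):

import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;

public class UploadService {

    @Resource(lookup = "java:jboss/ee/concurrency/executor/uploadManagedExecutor")
    private ManagedExecutorService executor;

    public void handleUpload(UploadData data) {       // UploadData is a placeholder type
        saveData(data);                                // persist the incoming data first
        executor.submit(() -> processData(data));      // hand off processing to the pool
        // the web service can now return its response without waiting
    }
}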
From my understanding of the configuration, core-thread defines how many threads stay alive in the thread pool. When all core threads are busy, new tasks are put in the queue; when the queue is full, additional threads are created, but these extra threads are terminated after some idle time.
I'm trying to figure out the best combination of settings for the thread pool. These are my concerns:
If core-thread is set too small (like 5), the response time may be long because only 5 active threads are processing data while the rest of the tasks wait in the queue until it fills up. Response times won't look good under heavy load.
If I set core-thread to something big (maybe 100), then even when the system is not busy there will still be 100 live threads in the pool. I don't see any configuration that allows these core threads to be terminated, and I'm concerned that is too many idle threads.
Does anyone have suggestions on how to set these parameters to handle both heavy and light load without leaving too many idle threads in the pool? I'm not familiar with this area, e.g. how many idle threads count as too many and how to measure it.
The following is the configuration for the thread factory and the managed-executor-service:
<managed-thread-factory name="UploadThreadFactory" jndi-name="java:jboss/ee/concurrency/factory/uploadThreadFactory"/>
<managed-executor-service name="UploadManagedExecutor" jndi-name="java:jboss/ee/concurrency/executor/uploadManagedExecutor" context-service="default" thread-factory="UploadThreadFactory" hung-task-threshold="60000" core-thread="5" max-thread="100" keep-alive-time="5000" queue-length="500"/>
Thanks a lot for your help,
Helen

Related

LDAP VLV throws error "Other sort requests already in progress"

I am trying to implement pagination in LDAP using VLV, following the documentation at https://docs.ldap.com/ldap-sdk/docs/javadoc/com/unboundid/ldap/sdk/controls/VirtualListViewRequestControl.html
It works fine with a single thread, and with up to 5 concurrent threads. But as the number of threads increases, only 5 threads run successfully and the excess threads fail with the error message below:
LDAPException(resultCode=51 (busy), numEntries=0, numReferences=0, diagnosticMessage='Other sort requests already in progress', ldapSDKVersion=5.1.1..
I am using OpenLDAP with the UnboundID API for the Java connection. The data size is around 100k entries.
I tried with a single connection and with multiple connections (with multiple concurrent threads) and got the same error in both cases.
I tried synchronizing the block that fetches the data.
On exception, I made the thread wait and try again.
None of the above worked; the threads cannot fetch data from LDAP.
After closing and reconnecting the connection as described in https://www.openldap.org/lists/openldap-technical/201107/msg00006.html,
a failed thread can fetch data, but only after retrying many times; in my case the thread retried about 2k times before it started fetching data.
Is there a better solution? Retrying 2k times to get a result is not a good option.
From my experience in Java, it is better to use thread pools, which shifts your solution from "how to manage threads" to a more robust, task-oriented one.
To the point of your use case: you may want to define a thread pool with a fixed number of threads. The pool will handle all incoming load by reusing the threads in the pool. This is efficient because more threads does not equal more performance; you want a mechanism that reuses threads rather than constantly creating and destroying them, or creating too many of them.
You may start with something similar to this:
import java.util.concurrent.*;
import com.unboundid.ldap.sdk.SearchResult;

ExecutorService executorService = Executors.newFixedThreadPool(10);
Future<SearchResult> task1 = executorService.submit(() -> {
    // your LDAP search logic goes here, producing a SearchResult
    return searchResult;
});
SearchResult result = task1.get(); // blocks only until the result is ready
This is an oversimplified piece of code, but you can clearly see that:
Tasks can be submitted dynamically as they come in.
Results can be fetched through the returned Future (you grab a result only when it is ready, with no polling loop of your own).
The thread pool manages the load, so you can tweak its configuration and boost performance without changing your code (perfect for environments that want to tune your solution to their hardware profile).
I think you should give it a try. After all, retrying 2000 times before success is really not ideal 🙃

How to set the HTTP thread pool max size on Quarkus

Sorry, I don't really know how the HTTP thread pool works on Quarkus.
I want to set the HTTP thread pool max size, but I see the Quarkus docs have a lot of config properties about threads.
I tried setting each of them and checking whether it worked in Prometheus, but the Prometheus base_thread_max_count never matches my config.
So I want to know how to set it and how to verify it.
Thanks so much
If you mean the pool that handles HTTP I/O events only, then that's quarkus.http.io-threads
If you mean the worker pool that handles blocking operations, that's quarkus.vertx.worker-pool-size
The metric base_thread_max_count isn't really relevant here: it shows the maximum number of threads that have ever been active in that JVM, so it includes various threads unrelated to the HTTP layer. To see the number of active I/O threads, I'd suggest taking a thread dump and counting the threads named vert.x-eventloop-thread-*; for worker threads it is executor-thread-*.
Also bear in mind that worker threads are created lazily, so setting a high number to quarkus.vertx.worker-pool-size might not have any immediate effect.
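For example, in application.properties (the values here are only illustrative, not recommendations):

# event-loop threads that handle HTTP I/O
quarkus.http.io-threads=16
# worker pool used for blocking operations
quarkus.vertx.worker-pool-size=64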

SimpleMessageListener vs DirectMessageListener

I'm trying to see the difference between DirectMessageListener and SimpleMessageListener. I have this drawing, just to ask if it is correct.
Let me try to describe how I understood it, and maybe you can tell me if it is correct.
In front of spring-rabbit there is the rabbit-client Java library, which connects to the RabbitMQ server and delivers messages to the spring-rabbit library. This client has a ThreadPoolExecutor (which in this case, I think, has 16 threads). So it does not matter how many queues there are in RabbitMQ: with a single connection I get 16 threads. These same threads are reused if I use DirectMessageListener, and the handler method listen is executed on all of these 16 threads when messages arrive. So if I do something complex in the handler, the rabbit-client must wait for a thread to become free before it can deliver the next message on that thread. Also, if I increase setConsumersPerQueue to, let's say, 20, it creates 20 consumers per queue, but not 20 threads. Will these 20*5 consumers in my case all reuse the 16 threads offered by the ThreadPoolExecutor?
SimpleMessageListener, on the other hand, has its own threads. If concurrentConsumers == 1 (I guess that is the default, as in my case) it has only one thread. Whenever there is a message on any of the secondUseCase* queues, the rabbit-client Java library uses one of its 16 threads (in my case) to forward the message to the single internal thread that I have in SimpleMessageListener. As soon as the message is forwarded, the rabbit-client thread is freed and can go back to fetching more messages from the RabbitMQ server.
Your understanding is correct.
The main difference is that, with the DMLC, all listeners in all listener containers are called on the shared thread pool in the amqp-client (you can increase the 16 if needed). You need to ensure the pool is large enough to handle your expected concurrency across all containers, otherwise you will get starvation.
It's more efficient because threads are shared.
With the SMLC, you don't have to worry about that, but at the expense of having a thread per concurrency. In that case, a small pool in the amqp-client will generally be sufficient.
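As a rough sketch, raising the shared amqp-client pool and the per-queue consumers with Spring AMQP could look like this (the queue names, pool size, and consumer count are assumptions, not recommendations):

import java.util.concurrent.Executors;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.listener.DirectMessageListenerContainer;

CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
cf.setExecutor(Executors.newFixedThreadPool(32));    // replaces the default 16-thread amqp-client pool

DirectMessageListenerContainer container = new DirectMessageListenerContainer(cf);
container.setQueueNames("secondUseCase1", "secondUseCase2");
container.setConsumersPerQueue(20);                   // consumers, not threads; they share the pool above
container.setMessageListener(message -> {
    // handler logic runs on the shared amqp-client threads
});
container.start();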

Need suggestion about java thread pool execution queue processing

In my application we have a number of client databases, and every hour new data arrives in those databases for processing.
A cron job checks these databases and picks up the data, then creates a thread pool and starts executing 30 threads in parallel; the remaining tasks are stored in a queue. It takes several hours to process all of these tasks.
So while the job is executing, newly arrived data has to wait, because the cron job will not pick up the new data until its current execution has finished.
Sometimes we have priority data to process, but because of this those clients also have to wait several hours for their data to be processed.
Please give me a suggestion to avoid this wait for newly arrived data.
(I am working with Java 1.7, Tomcat 7, and SQL Server 2012.)
Thank you in advance.
Please let me know if more information is needed.
Each of your threads should process data in bulk (for example 100/1000 records), and these records should be selected from the DB by priority. Each time you select new records for processing, the data with the highest priority goes first.
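A minimal sketch of that selection with JDBC against SQL Server (the dataSource, table, and column names are made up for illustration):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// fetch the next batch of unprocessed records, highest priority first
String sql = "SELECT TOP (1000) id, payload FROM pending_data "
           + "WHERE processed = 0 ORDER BY priority DESC, created_at ASC";
try (Connection con = dataSource.getConnection();
     PreparedStatement ps = con.prepareStatement(sql);
     ResultSet rs = ps.executeQuery()) {
    while (rs.next()) {
        // hand each record (or the whole batch) to the worker pool
    }
} catch (SQLException e) {
    // handle/log the failure
}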
I can't create comment yet :(
For this problem we are thinking about two solutions:
1. Create more than one thread pool, for processing normal and high-priority data (see the sketch below).
2. Create more than one Tomcat instance running the same code, for processing normal and priority data.
But I don't understand which solution is best for my case, 1 or 2.
Please give me suggestions about the above solutions so that I can make a decision.
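To illustrate option 1, a minimal sketch with two separate pools (the pool sizes, the task variable, and isHighPriority() are hypothetical):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// separate pools so priority work never queues behind normal work
ExecutorService normalPool = Executors.newFixedThreadPool(30);
ExecutorService priorityPool = Executors.newFixedThreadPool(10);

// the cron decides which pool each task goes to
if (task.isHighPriority()) {
    priorityPool.submit(task);
} else {
    normalPool.submit(task);
}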
You can use Executors.newCachedThreadPool().
Benefits of using a cached thread pool:
The pool creates new threads if needed but reuses previously constructed threads when they are available.
Only if no threads are available for reuse will a new thread be created and added to the pool.
Threads that have not been used for more than sixty seconds are terminated and removed from the cache, so a pool that remains idle for long enough will not consume any resources.
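For example (the task body is just a placeholder):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

ExecutorService pool = Executors.newCachedThreadPool();
pool.submit(() -> {
    // process one batch of client data here
});
// idle threads are reclaimed automatically after 60 seconds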

How to prioritise specific threads in tomcat

I am working on a Java web application for Tomcat 6 that offers suggest functionality. This means a user types free text and gets suggestions for completing the input. It is essential that the web application reacts very fast, otherwise the feature makes no sense.
The web application makes suggestions for data that can be modified at any time. If new data is available, the suggest index is rebuilt from scratch in the background with the help of a daemon thread. When the data preparation process is finished, the old index is thrown away and the new index comes into play. This has the advantage of no service gaps.
The background data preparation process costs a lot of CPU power. This sometimes causes the service to hang for more than a second, which makes it less usable and causes a bad user experience.
My first attempt to solve the problem was to pause all background data preparation threads when a request has to be processed. This narrows the problem a bit, but the service is still not smooth. It is not that the code generating the suggestions itself gets slower; rather, it seems as if Tomcat does not always start the thread for the request immediately because of the high load on the other threads (I guess).
I have no idea how to approach this. Can I interact with Tomcat's thread scheduler and tell it to prioritize request execution? As expected, adjusting the thread priority did not help either. I did not find any Tomcat configuration options that help. Or is there no way to deal with this and I have to change the software design? I am at a loss. Do you have any hints on how to tackle this problem?
JanP
I would not change the thread priority. By doing that you slow down other threads and therefore other users. If you have synchronized data, you can also run into a priority inversion problem, where your higher-priority threads end up waiting on lower-priority threads to release locks on the data.
Instead, I would look at how to optimize the data generation process. What are you doing there?
EDIT:
You could create an ExecutorService and send messages to it through a queue, as in this example: java thread pool keep running. To be able to change the thread priority of the tasks, instead of calling ExecutorService pool = Executors.newFixedThreadPool(3); you would create a ThreadFactory, have the ThreadFactory lower the priority of the threads, and then call ExecutorService pool = Executors.newSingleThreadExecutor(threadFactory);
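A minimal sketch of such a factory (the exact priority and the thread name are assumptions; tune them to your case):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

ThreadFactory lowPriorityFactory = new ThreadFactory() {
    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, "index-builder");
        t.setDaemon(true);                    // do not block Tomcat shutdown
        t.setPriority(Thread.MIN_PRIORITY);   // keep index building in the background
        return t;
    }
};
ExecutorService pool = Executors.newSingleThreadExecutor(lowPriorityFactory);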
