I need to send an email during the registration process, so I am using the Java Mail API. This works fine, but I observed that
sending the email takes nearly 6 seconds (which is too long), so the Ajax call makes the user wait too long for a response.
For this reason I decided to use a background thread for sending the email, so the user does not have to wait for the Ajax call response (a Jersey REST web service call).
My question is: is it good practice to create threads in a web application for every request?
@Path("/insertOrUpdateUser")
public class InsertOrUpdateUser {

    final static Logger logger = Logger.getLogger(InsertOrUpdateUser.class);

    @GET
    @Consumes("application/text")
    @Produces("application/json")
    public String getSalesUserData(@QueryParam(value = "empId") String empId)
            throws JSONException, SQLException {
        JSONObject final_jsonobject = new JSONObject();
        try {
            // ... database insert/update logic elided ...
            ExecutorService executorService = Executors.newFixedThreadPool(10);
            executorService.execute(new Runnable() {
                public void run() {
                    try {
                        SendEmailUtility.sendmail(emaildummy);
                    } catch (IOException e) {
                        logger.error("failed", e);
                    }
                }
            });
        } catch (SQLException e) {
            // elided
        } catch (Exception e) {
            // elided
        } finally {
            // elided
        }
        return response;
    }
}
And this is my Utility class for sending email
public class SendEmailUtility
{
private static final Logger logger = Logger.getLogger(SendEmailUtility.class);
public static String sendmail(String sendto)
throws IOException
{
String result = "fail";
Properties props_load = getProperties();
final String username = props_load.getProperty("username");
final String password = props_load.getProperty("password");
Properties props_send = new Properties();
props_send.put("mail.smtp.auth", "true");
props_send.put("mail.smtp.starttls.enable", "true");
props_send.put("mail.smtp.host", props_load.getProperty("mail.smtp.host"));
props_send.put("mail.smtp.port", props_load.getProperty("mail.smtp.port"));
Session session = Session.getInstance(props_send,
new javax.mail.Authenticator() {
@Override
protected PasswordAuthentication getPasswordAuthentication()
{
return new PasswordAuthentication(username, password);
}
});
try {
Message message = new MimeMessage(session);
message.setFrom(new InternetAddress(props_load.getProperty("setFrom")));
message.setRecipients(Message.RecipientType.TO, InternetAddress.parse(sendto));
message.setText("Some Text to be send in mail");
Transport.send(message);
result = "success";
} catch (MessagingException e) {
result = "fail";
logger.error("Exception Occured - sendto: " + sendto, e);
}
return result;
}
}
Could you please let me know if this is the best practice for a web application?
There are a host of ways you can handle this, and it depends on whether your application server has enough resources (memory, threads, etc.) to handle your implementation, so you are the best person to decide which approach to take.
As such it is not bad practice to spawn parallel threads for doing something if it is justified by the design, but typically you should go with a controlled number of threads.
Please note that whether you use newSingleThreadExecutor() or newFixedThreadPool(nThreads), under the hood a ThreadPoolExecutor object is always created.
My recommendation is to use the second option in the list below, i.e. "Controlled number of threads", and there specify the maximum thread count as you see fit.
One thread for each request
In this approach a new thread is created for each incoming request from the GUI, so if you are getting 10 requests for inserting/updating users then 10 threads will be spawned to send the emails.
The downside of this approach is that there is no control over the number of threads, so you can end up exhausting memory (for example an OutOfMemoryError because no more native threads can be created).
Please make sure to shut down your executor service, otherwise you will end up wasting JVM resources.
// inside your getSalesUserData() method
ExecutorService emailExecutor = Executors.newSingleThreadExecutor();
emailExecutor.execute(new Runnable() {
@Override
public void run() {
try {
SendEmailUtility.sendmail(emaildummy);
} catch (IOException e) {
logger.error("failed", e);
}
}
});
emailExecutor.shutdown(); // it is very important to shutdown your non-singleton ExecutorService.
Controlled number of threads
In this approach, a pre-defined number of threads is kept and these process your email-sending work. In the example below I start a thread pool with a maximum of 10 threads, and I use a LinkedBlockingQueue, which ensures that if there are more than 10 requests and all 10 threads are currently busy, the excess requests are queued rather than lost; this is the advantage you get with the LinkedBlockingQueue implementation of Queue.
You can initialize your singleton ThreadPoolExecutor on application server start; if there are no requests then no threads will be created, so it is safe to do so. In fact I use a similar configuration for my production application.
I use a time-to-live of 1 second, so if a thread is idle in the JVM for more than 1 second it will die.
Please note that since the same thread pool is used for processing all your requests, it should be a singleton; do not shut this thread pool down, otherwise your tasks will never be executed.
// creating a thread pool with 10 threads, a keep-alive time of 1 second, and a LinkedBlockingQueue for unbounded queuing of requests.
// if you want to process with 100 threads then replace both occurrences of 10 with 100; the rest can remain the same...
// this should be a singleton
ThreadPoolExecutor executor = new ThreadPoolExecutor(10, 10, 1, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
// inside your getSalesUserData() method
executor.execute(new Runnable() {
@Override
public void run() {
try {
SendEmailUtility.sendmail(emaildummy);
} catch (IOException e) {
logger.error("failed", e);
}
}
});
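If you are on a plain servlet container, one way to keep this pool a singleton is to create it once when the application starts and shut it down only when the application stops. A minimal sketch, assuming a Servlet 3.0 container; the listener class and field names below are illustrative, not part of the original code:
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class EmailExecutorListener implements ServletContextListener {

    // one shared, bounded pool for the whole application
    public static final ThreadPoolExecutor EMAIL_EXECUTOR =
            new ThreadPoolExecutor(10, 10, 1, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to start eagerly; worker threads are created lazily as tasks arrive
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // shut the pool down only when the whole application stops
        EMAIL_EXECUTOR.shutdown();
    }
}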
Java's default cached thread pool
This approach is much like the above, except that Java initializes the ThreadPoolExecutor for you as ThreadPoolExecutor(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue&lt;Runnable&gt;()).
Here the maximum number of threads is Integer.MAX_VALUE, so threads are created as needed, and the time-to-live is 60 seconds.
If you want to use this approach, below is the way.
// this should be a singleton
ExecutorService emailExecutor = Executors.newCachedThreadPool();
// from you getSalesUserData() method
emailExecutor.execute(new Runnable() {
@Override
public void run() {
try {
SendEmailUtility.sendmail(emaildummy);
} catch (IOException e) {
logger.error("failed", e);
}
}
});
Manually creating an ExecutorService on a Java web server is a bad idea. In your implementation you create a new 10-thread pool for each request.
A better solution is to use ManagedExecutorService (example) if you work with Java EE 7, or ThreadPoolTaskExecutor if you work with Spring (docs).
If you work with Tomcat you should read this thread.
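For example, on a Java EE 7 server the container-managed executor can be injected and used directly; a minimal sketch, assuming a default ManagedExecutorService is configured (the resource wiring shown is illustrative, not the poster's actual code):
import java.io.IOException;
import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("/insertOrUpdateUser")
public class InsertOrUpdateUser {

    // the container owns this pool; no manual creation or shutdown needed
    @Resource
    private ManagedExecutorService managedExecutorService;

    @GET
    public String getSalesUserData() {
        final String emaildummy = "user@example.com"; // placeholder recipient for illustration
        managedExecutorService.submit(new Runnable() {
            @Override
            public void run() {
                try {
                    SendEmailUtility.sendmail(emaildummy);
                } catch (IOException e) {
                    // log and continue; the HTTP response is not delayed by the mail call
                }
            }
        });
        return "{\"status\":\"accepted\"}";
    }
}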
The best practice is to use a single ExecutorService to provide a thread pool for all requests. You probably want to configure the ExecutorService with a non-zero, but limited, number of threads.
The idea here is that you will have some threads that are reused throughout the lifetime of the application. You get the added benefit that if there is a temporary slowdown (or halt) in sending emails, you don't end up with a growing number of threads. Instead, you end up with a growing amount of work (emails to send) waiting to be executed, which is much less resource-intensive than extra threads.
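As a sketch of that idea (the class and method names are illustrative): one bounded pool shared by every request, so a backlog of emails queues up instead of new threads being created:
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class EmailDispatcher {

    // single application-wide pool: at most 5 mail threads, extra work waits in the queue
    private static final ExecutorService POOL = Executors.newFixedThreadPool(5);

    private EmailDispatcher() {
    }

    public static void sendAsync(final String recipient) {
        POOL.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    SendEmailUtility.sendmail(recipient);
                } catch (IOException e) {
                    // a slow or failing mail server grows the queue, not the thread count
                }
            }
        });
    }
}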
I am using a Java EmailSender class.
I simply started a new thread to send the mail because it was blocking the main thread and I was getting a timeout exception.
String link = "http://localhost:PORT/api/v1/registration/confirm?token=" +token;
// Sending mail in a thread because it blocks the main thread
new Thread(
() -> emailSender.sendMail(request.getEmail(),buildEmail(request.getFirstName(),
link))).start();
I'm using Hazelcast IMDG for my app. I have used queues for internal communication. I added an item listener to a queue and it works great. Whenever the queue gets a message, the listener wakes up and the needed processing is done.
The problem is that it's single-threaded. Sometimes a message takes 30 seconds to process, and messages in the queue just have to wait until the previous message is done processing. I'm told to use the Java executor service to have a pool of threads and add an item listener to every thread so that multiple messages can be processed at the same time.
Is there any better way to do it? Maybe configure some kind of MDB, or make the processing asynchronous so that my listener can process the messages faster.
@PostConstruct
public void init() {
logger.info(LogFormatter.format(BG_GUID, "Starting up GridMapper Queue reader"));
HazelcastInstance hazelcastInstance = dc.getInstance();
queue = hazelcastInstance.getQueue(FactoryConstants.QUEUE_GRIDMAPPER);
queue.addItemListener(new Listener(), true);
}
class Listener implements ItemListener<QueueMessage> {
@Override
public void itemAdded(ItemEvent<QueueMessage> item) {
try {
QueueMessage message = queue.take();
processor.process(message.getJobId());
} catch (Exception ex) {
logger.error(LogFormatter.format(BG_GUID, ex));
}
}
@Override
public void itemRemoved(ItemEvent<QueueMessage> item) {
logger.info("Item removed: " + item.getItem().getJobId());
}
}
Hazelcast IQueue does not support an asynchronous interface. Anyway, asynchronous access would not be faster. MDB requires JMS, which is pure overhead.
What you really need is a multithreaded executor. You can use the default executor:
private final ExecutorService execService = ForkJoinPool.commonPool();
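A minimal sketch of handing the work off from the listener to that executor, reusing the queue, processor and logger fields from the question, so a 30-second job no longer blocks the next message:
class Listener implements ItemListener<QueueMessage> {

    @Override
    public void itemAdded(ItemEvent<QueueMessage> item) {
        // take the message as before, but run the long processing on the pool
        execService.submit(new Runnable() {
            @Override
            public void run() {
                try {
                    QueueMessage message = queue.take();
                    processor.process(message.getJobId());
                } catch (Exception ex) {
                    logger.error(LogFormatter.format(BG_GUID, ex));
                }
            }
        });
    }

    @Override
    public void itemRemoved(ItemEvent<QueueMessage> item) {
        logger.info("Item removed: " + item.getItem().getJobId());
    }
}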
I am trying to change Quartz from sequential to parallel execution.
It is working fine and performance-wise it seems good, but the spawned (created) threads are not destroyed; they are still in the RUNNABLE state. Why is that, and how can I fix it?
Please guide me.
The code is here:
@Override
protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
logger.error("Result Processing executed");
List<Object[]> lstOfExams = examService.getExamEntriesForProcessingResults();
String timeZone = messageService.getMessage("org.default_timezone", null, Locale.getDefault());
if(lstOfExams!=null&&!lstOfExams.isEmpty()){
ThreadPoolTaskExecutor threadPoolExecuter = new ThreadPoolTaskExecutor();
threadPoolExecuter.setCorePoolSize(lstOfExams.size());
threadPoolExecuter.setMaxPoolSize(lstOfExams.size()+1);
threadPoolExecuter.setBeanName("ThreadPoolTaskExecutor");
threadPoolExecuter.setQueueCapacity(100);
threadPoolExecuter.setThreadNamePrefix("ThreadForUpdateExamResult");
threadPoolExecuter.initialize();
for(Object[] obj : lstOfExams){
if(StringUtils.isNotBlank((String)obj[2]) ){
timeZone = obj[2].toString();
}
try {
Userexams userexams=examService.findUserExamById(Long.valueOf(obj[0].toString()));
if(userexams.getExamresult()==null){
UpdateUserExamDataThread task=new UpdateUserExamDataThread(obj,timeZone);
threadPoolExecuter.submit(task);
}
// testEvaluator.generateTestResultAsPerEvaluator(Long.valueOf(obj[0].toString()), obj[4].toString(), obj[3]==null?null:obj[3].toString(),timeZone ,obj[5].toString() ,obj[1].toString());
// logger.error("Percentage Marks:::::"+result.getPercentageCatScore());
} catch (Exception e) {
Log.error("Exception at ResultProcessingJob extends QuartzJobBean executeInternal(JobExecutionContext context) throws JobExecutionException",e);
continue;
}
}
threadPoolExecuter.shutdown();
}
}
UpdateUserExamDataThread.class
@Component
//@Scope(value="prototype", proxyMode=ScopedProxyMode.TARGET_CLASS)
//public class UpdateUserExamDataThread extends ThreadLocal<String> //implements Runnable {
public class UpdateUserExamDataThread implements Runnable {
private Logger log = Logger.getLogger(UpdateUserExamDataThread.class);
@Autowired
ExamService examService;
@Autowired
TestEvaluator testEvaluator;
private Object[] obj;
private String timeZone;
public UpdateUserExamDataThread(Object[] obj,String timeZone) {
super();
this.obj = obj;
this.timeZone = timeZone;
}
@Override
public void run() {
String threadName=String.valueOf(obj[0]);
log.info("UpdateUserExamDataThread Start For:::::"+threadName);
testEvaluator.generateTestResultAsPerEvaluator(Long.valueOf(obj[0].toString()), obj[4].toString(), obj[3]==null?null:obj[3].toString(),timeZone ,obj[5].toString() ,obj[1].toString());
//update examResult
log.info("UpdateUserExamDataThread End For:::::"+threadName);
}
}
TestEvaluatorImpl.java
@Override
@Transactional
public Examresult generateTestResultAsPerEvaluator(Long userExamId, String evaluatorType, String codingLanguage,String timeZoneFollowed ,String inctenceId ,String userId) {
dbSchema = messageService.getMessage("database.default_schema", null, Locale.getDefault());
try {
//Some Methods
return examResult;
}catch(Exception e){
log.error(e);
return null; // the exception path must also return (or rethrow) so the method compiles
}
}
I can provide a thread dump file if needed.
It seems you create a thread pool the same size as the number of exams, which is not optimal.
// Core pool size is = number of exams
threadPoolExecuter.setCorePoolSize(lstOfExams.size());
// Max pool size is just 1 + exam size.
threadPoolExecuter.setMaxPoolSize(lstOfExams.size()+1);
You have to consider that:
- If you create a thread pool, threads up to the core pool size are created and kept alive as tasks are submitted.
- The max pool size only takes effect once you submit more tasks than the core pool threads can process right now AND the queue is full (in this case 100). That means a new thread is only created when the number of submitted tasks exceeds 100 + exam size.
In your case I would set the core pool size to 5 or 10 (it really depends on how many cores your target CPU has and/or how IO-bound the submitted tasks are).
The max pool size can be double that, but it has no effect until the queue is full.
To let the number of live threads decrease after the submitted work is done, you have to set two parameters.
setKeepAliveSeconds(int keepAliveSeconds): lets threads shut down automatically if they have been idle for the defined number of seconds (60 seconds by default, which is fine), BUT this normally only applies to non-core threads.
To shut down core threads after keepAliveSeconds as well, set setAllowCoreThreadTimeOut(boolean allowCoreThreadTimeOut) to true. It is normally false, so that the core pool stays alive as long as the application is running.
I hope this helps.
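Put together, a configuration along those lines might look like this (the exact numbers are illustrative):
ThreadPoolTaskExecutor threadPoolExecuter = new ThreadPoolTaskExecutor();
threadPoolExecuter.setCorePoolSize(10);              // roughly the number of cores, more if the tasks are IO-bound
threadPoolExecuter.setMaxPoolSize(20);               // only reached once the queue is full
threadPoolExecuter.setQueueCapacity(100);
threadPoolExecuter.setKeepAliveSeconds(60);          // idle threads die after 60 seconds
threadPoolExecuter.setAllowCoreThreadTimeOut(true);  // let core threads time out as well
threadPoolExecuter.setThreadNamePrefix("ThreadForUpdateExamResult");
threadPoolExecuter.initialize();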
I suspect that one of your threads waits indefinitely for the answer to an IO request. For example, you try to connect to a remote host without setting a connection timeout and the host does not answer. In this case, you can shut down all executing tasks forcefully by calling the shutdownNow method of the underlying ExecutorService and then analyze the InterruptedIOException thrown by the offending threads.
Replace
threadPoolExecuter.shutdown();
with the following so you can examine the errors.
ExecutorService executorService = threadPoolExecuter.getThreadPoolExecutor();
executorService.shutdownNow();
This will send an interrupt signal to all running threads.
The threads are not waiting on IO from some remote server, because then the method executing on the threads would be in some JDBC driver classes, but they are currently all in UpdateUserExamDataThread.run(), line 37.
Now the question is: what is the code at UpdateUserExamDataThread.java line 37?
Unfortunately, the UpdateUserExamDataThread.java given at the moment is incomplete and/or not the version really executed: the package declaration is missing and it ends at line 29.
I suspect the issue is simply that you are calling run() instead of execute() when spawning the task via submit(). There is probably some expectation, when using submit(), that threads clean themselves up when the task is finished rather than simply terminating at the end of the run() method.
I just needed to increase the priority of the threads and create as many threads as there are processor cores.
protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
logger.error("Result Processing executed");
List<Object[]> lstOfExams = examService.getExamEntriesForProcessingResults();
String timeZone = messageService.getMessage("org.default_timezone", null, Locale.getDefault());
int cores = Runtime.getRuntime().availableProcessors();
if(lstOfExams!=null&&!lstOfExams.isEmpty()){
ThreadPoolTaskExecutor threadPoolExecuter = new ThreadPoolTaskExecutor();
threadPoolExecuter.setCorePoolSize(cores);
// threadPoolExecuter.setMaxPoolSize(Integer.MAX_VALUE);
threadPoolExecuter.setBeanName("ThreadPoolTaskExecutor");
// threadPoolExecuter.setQueueCapacity(Integer.MAX_VALUE);
threadPoolExecuter.setQueueCapacity(lstOfExams.size()+10);
threadPoolExecuter.setThreadNamePrefix("ThreadForUpdateExamResult");
threadPoolExecuter.setWaitForTasksToCompleteOnShutdown(true);
threadPoolExecuter.setThreadPriority(10);
threadPoolExecuter.initialize();
for(Object[] obj : lstOfExams){
if(StringUtils.isNotBlank((String)obj[2]) ){
timeZone = obj[2].toString();
}
try {
Userexams userexam=examService.findUserExamById(Long.valueOf(obj[0].toString()));
if(userexam.getExamresult()==null){
UpdateUserExamDataThread task=new UpdateUserExamDataThread(obj,timeZone,testEvaluator);
// threadPoolExecuter.submit(task);
threadPoolExecuter.execute(task);
}
// testEvaluator.generateTestResultAsPerEvaluator(Long.valueOf(obj[0].toString()), obj[4].toString(), obj[3]==null?null:obj[3].toString(),timeZone ,obj[5].toString() ,obj[1].toString());
// logger.error("Percentage Marks:::::"+result.getPercentageCatScore());
} catch (Exception e) {
logger.error("Exception at ResultProcessingJob extends QuartzJobBean executeInternal(JobExecutionContext context) throws JobExecutionException",e);
continue;
}
}
threadPoolExecuter.shutdown();
}
}
I have a Dagger singleton wrapper handling my basic Realm requests. One of them looks like this:
public void insertOrUpdateAsync(final List<RealmMessage> messages, @Nullable final OnInsertListener listener) {
Realm instance = getRealmInstance();
instance.executeTransactionAsync(realm -> {
List<RealmMessage> newMessages = insertOrUpdateMessages(realm, messages);
},
() -> success(listener, instance),
error -> error(listener, error, instance));
}
private List<RealmMessage> insertOrUpdateMessages(@NonNull Realm realm, @NonNull final List<RealmMessage> messages) {
...
return realm.copyToRealmOrUpdate(unattendedMessages);
}
Which works great.
However, there is a corner case where - long story short - I launch insertOrUpdateAsync() many, many times. And after some requests I get this:
Caused by: java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@b7b848 rejected from io.realm.internal.async.RealmThreadPoolExecutor@80f96e1[Running, pool size = 17, active threads = 17, queued tasks = 100, completed tasks = 81]
My question is: how should I handle this without rebuilding the whole application flow?
My idea was to queue incoming requests via RxJava. Am I right? Which operators should I consider and educate myself about?
Or am I approaching this in a completely wrong way?
From most of my googling I've noticed that the problem is usually launching a method like mine in a loop. I'm not using any loop. In my case the problem is that this method is launched by multiple responses, and changing that is pretty much impossible because of the current backend implementation.
If you do not want to redesign your application you may use a counting semaphore. You will see that two threads instantly acquire a permit; the third thread blocks until one of the calls releases a permit. It is not recommended to use acquire() without a timeout.
In order to use RxJava you would have to change the design of your application, and rate-limiting in RxJava is not that easy, because it is all about throughput.
private final Semaphore semaphore = new Semaphore(2);
@Test
public void name() throws Exception {
Thread t1 = new Thread(() -> {
doNetworkStuff();
});
Thread t2 = new Thread(() -> {
doNetworkStuff();
});
Thread t3 = new Thread(() -> {
doNetworkStuff();
});
t1.start();
t2.start();
t3.start();
Thread.sleep(1500);
}
private void doNetworkStuff() {
try {
System.out.println("enter doNetworkStuff");
semaphore.acquire();
System.out.println("acquired");
Thread.sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace(); // Don't do this!!
} finally {
semaphore.release();
}
}
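Applied to the Realm call from the question, a sketch might look like the following; the permit count is illustrative and should stay below the limit of Realm's internal pool, and the permit is released in both the success and the error callback:
private final Semaphore realmPermits = new Semaphore(8);

public void insertOrUpdateAsync(final List<RealmMessage> messages, @Nullable final OnInsertListener listener) {
    try {
        realmPermits.acquire(); // consider tryAcquire with a timeout instead of blocking forever
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return;
    }
    Realm instance = getRealmInstance();
    instance.executeTransactionAsync(realm -> insertOrUpdateMessages(realm, messages),
            () -> { realmPermits.release(); success(listener, instance); },
            error -> { realmPermits.release(); error(listener, error, instance); });
}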
tyrus websockets ClientManager connectToServer 'Handshake response not received'
How do I retry the connection without more and more daemon, Grizzly-kernel and Grizzly-worker threads being created?
Is there a call on Session or the client to kill/clean up Thread-1 to 4 and the Grizzly-kernel and Grizzly-worker threads?
Example Java main loop which attempts forever to make and maintain a connection to a server which may not be running or is periodically restarted.
public void onClose(Session session, CloseReason closeReason) {
latch.countDown();
}
public static void main(String[] args) {
while (true) {
latch = new CountDownLatch(1);
ClientManager client = ClientManager.createClient();
try {
client.connectToServer(wsListener.class, new URI("wss://<host>/ws"));
latch.await();
}
catch (DeploymentException e) {
try {
Thread.sleep(1000);
} catch (InterruptedException ie) {
break;
}
}
catch (Exception e) {
throw new RuntimeException(e);
}
client = null;
latch = null;
// HERE... clean up
}
}
client.connectToServer returns a Session instance, and when you call Session.close(), the client runtime should be shut down (no threads left).
You did not specify the version of Tyrus you are using (I recommend 1.3.3; we made some improvements in this area). Also, you might be interested in our shared container support, see TYRUS-275. You could combine it with the thread pool config and you should have much better control over the number of spawned/running threads.
We are always looking for new use cases, so if you think you have something which should be better supported in Tyrus, feel free to create a new enhancement request in our JIRA.
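As a sketch of that cleanup, based on the loop from the question: keep the Session returned by connectToServer and close it before retrying, so the client runtime and its threads are released:
while (true) {
    latch = new CountDownLatch(1);
    ClientManager client = ClientManager.createClient();
    Session session = null;
    try {
        session = client.connectToServer(wsListener.class, new URI("wss://<host>/ws"));
        latch.await();
    } catch (DeploymentException e) {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException ie) {
            break;
        }
    } catch (Exception e) {
        throw new RuntimeException(e);
    } finally {
        if (session != null && session.isOpen()) {
            try {
                session.close(); // shuts down the client runtime so no threads are left behind
            } catch (IOException ignored) {
            }
        }
    }
}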
I got this exact same behavior. I was using a lot of threads and synchronization, and managed to accidentally make the onOpen method of the ClientEndpoint block, which caused the handshake to time out.
I have a service which processes a request from a user.
This service calls another external back-end system (web services), but I need to execute those back-end web services in parallel. How would you do that? What is the best approach?
Thanks in advance.
----- edit
The back-end system can run requests in parallel; we use containers (Tomcat for development and WebSphere for production).
So I'm already in one thread (a servlet) and need to spawn two tasks and run them in parallel, as close together as possible.
I can imagine using either Quartz, threads with executors, or leaving it to the servlet engine. What is the proper path to take in such a scenario?
You can use threads to run the requests in parallel.
Depending on what you want to do, it may make sense to build on some existing technology like Servlets that does the threading for you.
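If you do manage it yourself, java.util.concurrent keeps it short; a small sketch, where the two back-end calls (callBackendServiceA/B) are placeholders for your actual web-service invocations:
// somewhere shared, so the pool is not created per request
ExecutorService pool = Executors.newFixedThreadPool(2);

// inside the servlet / service method
// invokeAll() and get() throw InterruptedException/ExecutionException; handle or declare them
List<Callable<String>> calls = Arrays.asList(
        new Callable<String>() { public String call() { return callBackendServiceA(); } },
        new Callable<String>() { public String call() { return callBackendServiceB(); } });
List<Future<String>> results = pool.invokeAll(calls); // blocks until both calls finish
String resultA = results.get(0).get();
String resultB = results.get(1).get();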
The answer is to run the tasks in separate threads.
For something like this, I think you should be using a ThreadPoolExecutor with a bounded pool size rather than creating threads yourself.
The code would look something like this. (Please note that this is only a sketch. Check the javadocs for details, info on what the numbers mean, etc.)
// Create the executor ... this needs to be shared by the servlet threads.
Executor exec = new ThreadPoolExecutor(1, 10, 120, TimeUnit.SECONDS,
new ArrayBlockingQueue<Runnable>(100), new ThreadPoolExecutor.CallerRunsPolicy());
// Prepare first task
final ArgType someArg = ...
FutureTask<ResultType> task = new FutureTask<ResultType>(
new Callable<ResultType>() {
public ResultType call() {
// Call remote service using information in 'someArg'
return someResult;
}
});
exec.execute(task);
// Repeat above for second task
...
exec.execute(task2);
// Wait for results
ResultType res = task.get(30, TimeUnit.SECONDS);
ResultType res2 = task2.get(30, TimeUnit.SECONDS);
The above does not attempt to handle exceptions, and you would need to do something more sophisticated with the timeouts; e.g. keeping track of the overall request time and cancelling tasks if we run over time.
This is not a problem that Quartz is designed to solve. Quartz is a job scheduling system. You just have some tasks that you need to execute ASAP ... possibly with the facility to cancel them.
Heiko is right that you can use threads. Threads are complex beasts and need to be treated with care. The best solution is to use a standard library, such as java.util.concurrent. This will be a more robust way of managing parallel operations, and there are performance benefits that come with this approach, such as thread pooling. If you can use such a solution, this is the recommended way.
If you want to do it yourself, here is a very simple way of executing a number of threads in parallel, but it is probably not very robust. You'll need to cope better with timeouts, destruction of threads, etc.
public class Threads {
public class Task implements Runnable {
private Object result;
private String id;
public Task(String id) {
this.id = id;
}
public Object getResult() {
return result;
}
public void run() {
System.out.println("run id=" + id);
try {
// call web service
Thread.sleep(10000);
result = id + " more";
} catch (InterruptedException e) {
// TODO do something with the error
throw new RuntimeException("caught InterruptedException", e);
}
}
}
public void runInParallel(Runnable runnable1, Runnable runnable2) {
try {
Thread t1 = new Thread(runnable1);
Thread t2 = new Thread(runnable2);
t1.start();
t2.start();
t1.join(30000);
t2.join(30000);
} catch (InterruptedException e) {
// TODO do something nice with exception
throw new RuntimeException("caught InterruptedException", e);
}
}
public void foo() {
Task task1 = new Task("1");
Task task2 = new Task("2");
runInParallel(task1, task2);
System.out.println("task1 = " + task1.getResult());
System.out.println("task2 = " + task2.getResult());
}
}