Why is my multi-threaded application being paused?

My multi-threaded application has a main class that creates multiple threads. The main class waits after it has started some threads. The runnable class I created gets a file list, gets a file, and removes a file by calling a web service. After a thread is done it notifies the main class to run again. My problem is that it works for a while, but after an hour or so the log shows a thread reaching the bottom of its run method and then nothing more happens. The Java process is still running, but based on the log it is not doing anything.
Main class methods:
Main method
while (true) {
    // Removed the code here, it was just calling a web service to get a list of companies
    // Removed code here was creating the threads and calling the start method for threads
    mainClassInstance.waitMainClass();
}

public final synchronized void waitMainClass() throws Exception {
    // synchronized (this) {
    this.wait();
    // }
}

public final synchronized void notifyMainClass() throws Exception {
    // synchronized (this) {
    this.notify();
    // }
}
I originally did the synchronization on the instance (the commented-out blocks) but changed it to the method. No errors are being recorded in the web service log or the client log. My assumption is that I did the wait and notify wrong, or that I am missing some piece of information.
Runnable Thread Code:
At the end of the run method
// This is a class member variable in the runnable thread class
mainClassInstance.notifyMainClass();
The reason I used a wait and notify process is that I do not want the main class to run unless there is a need to create another thread.
The purpose of the main class is to spawn threads. The class has an infinite loop to run forever, creating and finishing threads.
The purpose of the infinite loop is to continually update the company list.

I'd suggest moving from the tricky wait/notify to one of the higher-level concurrency facilities in the Java platform. The ExecutorService probably offers the functionality you require out of the box. (CountDownLatch could also be used, but it's more plumbing)
Let's try to sketch an example using your code as template:
ExecutorService execSvc = Executors.newFixedThreadPool(THREAD_COUNT);
while (true) {
    // Removed the code here, it was just calling a web service to get a list of companies
    List<FileProcessingTask> tasks = new ArrayList<FileProcessingTask>();
    for (Company comp : companyList) {
        tasks.add(new FileProcessingTask(comp));
    }
    List<Future<FileResult>> results = execSvc.invokeAll(tasks); // This call will block until all tasks are executed.
    // for each Future<FileResult> in results: check result
}

class FileProcessingTask implements Callable<FileResult> { // just like Runnable, but you can return a value -> very useful to gather results after the multi-threaded execution
    public FileResult call() {...}
}
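To do the result check sketched in the comment above, you can walk the futures returned by invokeAll; a minimal sketch, with FileResult standing in for whatever result type your Callable returns:

for (Future<FileResult> future : results) {
    try {
        FileResult result = future.get(); // returns immediately here, invokeAll has already waited
        // inspect result, e.g. mark the company as processed or log statistics
    } catch (ExecutionException e) {
        // the task threw an exception; e.getCause() is the original failure
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}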
------- edit after comments ------
If your getCompanies() call can give you all companies at once, and there's no requirement to check that list continuously while processing, you could simplify the process by creating all the work items first and submitting them to the executor service all at once.
List<FileProcessingTask> tasks = new ArrayList<FileProcessingTask>();
for (Company comp : companyList) {
    tasks.add(new FileProcessingTask(comp));
}
The important thing to understand is that the executor service uses the provided collection as an internal queue of tasks to execute: each free thread of the pool takes the next task from the queue, executes it, and the result is placed into the corresponding slot of the result collection.
If you don't have a producer/consumer scenario (cf. comments), where new work is produced at the same time that tasks are executed (consumed), then this approach should be sufficient to parallelize the processing work among a number of threads in a simple way.
If you have additional requirements that make the lookup of new work need to happen interleaved with the processing of that work, you should make that clear in the question.
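For completeness: if you do stay with wait/notify, the usual fix for the kind of hang you describe is the guarded-wait idiom, so that a notify arriving before (or between) waits is not lost. A minimal sketch, assuming a simple counter of finished threads (the field name is illustrative):

private int finishedThreads = 0; // guarded by the instance lock

public final synchronized void waitMainClass() throws InterruptedException {
    while (finishedThreads == 0) { // always wait in a loop, re-checking the condition
        this.wait();
    }
    finishedThreads--;
}

public final synchronized void notifyMainClass() {
    finishedThreads++; // record the event even if nobody is waiting yet
    this.notify();
}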

synchronization strategy for android async task

I am trying to implement a simple synchronization strategy in Android.
A service instantiates class A and calls its method sendToServer() for every iteration of a loop. This results in multiple AsyncTasks being started, and the service ends immediately. The service may run again at any time and repeat the process.
So, to prevent two AsyncTasks from taking the same input, I store the ids in a synchronized list and check the list before I start the AsyncTask.
But I am confused about which piece of code I need to put in a synchronized block. Do I define the entire method isAlreadyRunning() as synchronized? Or do I not need to define any synchronized block of code at all?
Here is my class:
public class A {
    private static List<Integer> idList = Collections.synchronizedList(new ArrayList<Integer>());

    private boolean isAlreadyRunning(int id) {
        // iterate through the list and return true if the id is already present
        ....
    }

    private class sendToServerAsyncTask extends AsyncTask<Void, Void, Boolean> {
        private final int id;

        sendToServerAsyncTask(int id) {
            this.id = id;
        }

        @Override
        protected Boolean doInBackground(Void... params) {
            // send http request
        }

        @Override
        protected void onPostExecute(Boolean result) {
            idList.remove(Integer.valueOf(id)); // remove by value, not by index
        }
    }

    public void sendToServer(int id) {
        if (isAlreadyRunning(id)) {
            // an async task is already running for this id,
            // so don't start the async task again, just exit
            return;
        } else {
            idList.add(id);
            new sendToServerAsyncTask(id).execute();
        }
    }
}
As per Android's documentation
ASYNC TASK's ORDER OF EXECUTION
When first introduced, AsyncTasks were executed serially on a single background thread. Starting with DONUT, this was changed to a pool of threads allowing multiple tasks to operate in parallel. Starting with HONEYCOMB, tasks are executed on a single thread to avoid common application errors caused by parallel execution.
The instances of AsyncTask are already placed in a queue maintained by the framework and are executed sequentially, i.e. only after one task finishes will the next one start, so there is no chance of an issue caused by parallel execution, because parallel execution does not happen here.
So you need not do anything and the framework will take care of it for you.
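In other words, with a plain execute() on HONEYCOMB and later the tasks are queued and run one after another on a single background thread. If you ever do want them to run in parallel you have to opt in explicitly, and then the check-then-act in sendToServer would need to be made atomic (for example by synchronizing it). A minimal sketch of the two call styles:

// Default since HONEYCOMB: serial execution, one task at a time
new sendToServerAsyncTask(id).execute();

// Explicit opt-in to parallel execution on the shared thread pool
new sendToServerAsyncTask(id).executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR);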

Threads getting wrong member variable values

In my project I am facing a weird issue with threads.
The issue only occurs when I am running multiple threads at once (load testing).
In my project there are multiple interceptors which intercept the request/response at different levels in the application and send the request/response to the WritetoFile class, which writes the details into a flat file using the log4j framework.
Below is sample interceptor code. There are multiple interceptors and each can process in parallel.
/* We have multiple interceptor classes which pre-process the
   request/response and then send it to WritetoFile */
public class IntercerptorA {
    // some code ...
    public synchronized void sendRequestToWritetoFile(IRequest request, IResponse response) {
        WritetoFile wtf = new WritetoFile(); // this class is responsible for writing request/response information into the LOG file
        wtf.setRequest(request);
        wtf.setResponse(response);
        Thread thread = new Thread(wtf, "t1"); // **assume wtf.getRequest is having "ABC"**
        thread.start();
    }
}
Now suppose there are two more interceptors, each with only a single line of difference in the code.
//For interceptorB
Thread thread=new Thread(wtf, "t2");//**assume wtf.getRequest is having "DEF"**
//For interceptorC
Thread thread=new Thread(wtf, "t3");//**assume wtf.getRequest is having "XYZ"**
Below is the code for the WritetoFile class:
public class WritetoFile implements Runnable {
    private volatile IRequest request;
    private volatile IResponse response;

    public synchronized IRequest getRequest() {
        return request;
    }
    public synchronized void setRequest(IRequest request) {
        this.request = request;
    }
    public synchronized IResponse getResponse() {
        return response;
    }
    public synchronized void setResponse(IResponse response) {
        this.response = response;
    }

    @Override
    public void run() {
        // I have added synchronized as I was trying to resolve the issue
        synchronized (WritetoFile.class) {
            putItInFile(this.request, this.response);
        }
    }

    private synchronized void putItInFile(IRequest request, IResponse response) {
        // This is the logger where I find discrepancies
        LOGGER.info("Current thread is : " + Thread.currentThread().getName() + " data is" + request);
        // some code and method call
    }
}
Having said that, when I run a single request, the LOGGER.info("Current thread is : "+Thread.currentThread().getName()+" data is"+request); line gives the output below:
Current thread is t1 data is ABC
Current thread is t2 data is DEF
Current thread is t3 data is XYZ
which is perfectly fine. BUT when running multiple threads at once I sometimes get wrong output, as below:
Current thread is t1 data is DEF
Current thread is t2 data is DEF
Current thread is t3 data is XYZ
It seems that before thread t1 can use the value of the "wtf" object in putItInFile, thread t2 has already reset the value using the setter in InterceptorB. But my thinking is: since I am creating a new WritetoFile instance for each thread, how is thread t2's operation changing what thread t1 sees? Please let me know where I am going wrong and what I need to change.
Thanks in advance :)
Using synchronized everywhere does not make a class thread safe.
In your case, as soon as WritetoFile.setRequest(request1) returns there is a window where the lock is not held and any other thread is free to call it before there is an opportunity for it to be used.
Rather than assigning the requests to an instance variable you would be better off adding them to one of the java.util.concurrent queue classes and consuming them from the queue in the Thread.run() method.
Have a look at the java.util.concurrent javadoc as there are heaps of examples in there.
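A minimal sketch of that queue-based approach (class and method names here are illustrative, not an existing API): the interceptors only enqueue the request/response pair, and a single long-lived writer thread drains the queue, so there is no shared mutable WritetoFile state to race on.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncRequestLogger {
    private static final BlockingQueue<LogEntry> QUEUE = new LinkedBlockingQueue<LogEntry>();

    static {
        Thread writer = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        LogEntry entry = QUEUE.take(); // blocks until an entry arrives
                        putItInFile(entry.request, entry.response);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // writer shut down
                }
            }
        }, "log-writer");
        writer.setDaemon(true);
        writer.start();
    }

    // Called by the interceptors instead of new WritetoFile() + new Thread()
    public static void log(IRequest request, IResponse response) {
        QUEUE.offer(new LogEntry(request, response));
    }

    private static void putItInFile(IRequest request, IResponse response) {
        // same LOGGER/file-writing code as before
    }

    private static class LogEntry {
        final IRequest request;
        final IResponse response;
        LogEntry(IRequest request, IResponse response) {
            this.request = request;
            this.response = response;
        }
    }
}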
Most likely the DEF request is getting intercepted at two different levels, resulting in the request getting logged twice.
Your problem is a textbook concurrency problem.
You have multiple threads running at the same time that are able to read/write variables.
In order to make sure that these values stay correct you need to add a lock around the code that modifies your variables so that only one thread can modify these variables at any one time.
1) code needs to wait until a method that modifies variables becomes available.
2) when a thread is done modifying a variable and is about to exit the code block it needs to notify the other waiting threads that it is done.
Please read the API and review your code; keeping the above points in mind, you should have no problem fixing it.

How to make calling a Method as a background process in java

In my application, I have this logic: when the user logs in, it will call the method below with all the symbols the user owns.
public void sendSymbol(String commaDelimitedSymbols) {
    try {
        // further logic
    } catch (Throwable t) {
    }
}
My question: since this task of sending symbols can be completed slowly but must be completed, is there any way I can make it a background task?
Is this possible?
Please share your views.
Something like this is what you're looking for.
ExecutorService service = Executors.newFixedThreadPool(4);

service.submit(new Runnable() {
    public void run() {
        sendSymbol(commaDelimitedSymbols);
    }
});
Create an executor service. This will keep a pool of threads for reuse. Much more efficient than creating a new Thread each time for each asynchronous method call.
If you need a higher degree of control over your ExecutorService, use ThreadPoolExecutor. As far as configuring this service, it will depend on your use case. How often are you calling this method? If very often, you probably want to keep one thread in the pool at all times at least. I wouldn't keep more than 4 or 8 at maximum.
As you are only calling sendSymbol once every half second, one thread should be plenty enough given sendSymbols is not an extremely time consuming routine. I would configure a fixed thread pool with 1 thread. You could even reuse this thread pool to submit other asynchronous tasks.
As long as you don't submit too many, it would be responsive when you call sendSymbol.
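If you do want that finer control, a ThreadPoolExecutor can be configured directly; a rough sketch with example values (not specific to any framework):

// One core thread kept alive, work queued in an unbounded queue;
// note that with an unbounded queue the pool never grows past the core size.
ExecutorService service = new ThreadPoolExecutor(
        1, 1,                                  // core and maximum pool size
        30L, TimeUnit.SECONDS,                 // keep-alive for threads above the core size
        new LinkedBlockingQueue<Runnable>());

service.submit(new Runnable() {
    public void run() {
        sendSymbol(commaDelimitedSymbols);
    }
});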
There is no really simple solution. Basically you need another thread which runs the method, but you also have to care about synchronization and thread-safety.
new Thread(new Runnable() {
    public void run() {
        sendSymbol(commaDelimitedSymbols);
    }
}).start();
Maybe a better way would be to use Executors
But you will need to care about thread-safety. This is not really a simple task.
It sure is possible. Threading is the way to go here. In Java, you can launch a new thread like this
Runnable backGroundRunnable = new Runnable() {
    public void run() {
        // Do something. Like call your function.
    }
};

Thread sampleThread = new Thread(backGroundRunnable);
sampleThread.start();
When you call start(), it launches a new thread. That thread will start running the run() function. When run() is complete, the thread terminates.
Be careful: if you are calling from a Swing app, then you need to use SwingWorker (or SwingUtilities) instead. Google that up, sir.
Hope that works.
Sure, just use Java Threads, and join it to get the results (or other proper sync method, depends on your requirements)
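A minimal sketch of that: start the work on another thread, then join only when (and if) you need it to have completed.

Thread sender = new Thread(new Runnable() {
    public void run() {
        sendSymbol(commaDelimitedSymbols);
    }
});
sender.start();

// ... do other work while the symbols are being sent ...

try {
    sender.join(); // block here only if you need the send to be finished at this point
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}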
You need to spawn a separate thread to perform this activity concurrently. Although this will not be a separate process, but you can keep performing other task while you complete sending symbols.
The following is an example of how to use threads. You simply implement Runnable in a class which contains your data and the code you want to run in the thread. Then you create a thread with that runnable object as the parameter. Calling start on the thread will run the Runnable object's run method.
public class MyRunnable implements Runnable {
    private String commaDelimitedSymbols;

    public MyRunnable(String commaDelimitedSymbols) {
        this.commaDelimitedSymbols = commaDelimitedSymbols;
    }

    public void run() {
        // Your code
    }
}

public class Program {
    public static void main(String args[]) {
        MyRunnable myRunnable = new MyRunnable("...");
        Thread t = new Thread(myRunnable);
        t.start();
    }
}

Is there a way to put tasks back in the executor queue

I have a series of tasks (i.e. Runnables) to be executed by an Executor.
Each task requires a certain condition to be valid in order to proceed. I would be interested to know if there is a way to configure the Executor to move tasks to the end of the queue and try to execute them later, when the condition is valid and the task is able to execute and finish.
So the behavior would be something like:
Thread-1 takes a task from the queue and run is called.
Inside run, the condition is not yet valid.
The task stops, and Thread-1 places the task at the end of the queue and gets the next task to execute.
Later on, Thread-X (from the thread pool) picks the task from the queue again, the condition is valid, and the task is executed.
In Java 6, the ThreadPoolExecutor constructor takes a BlockingQueue<Runnable>, which is used to store the queued tasks. You can implement such a blocking queue which overrides the poll() so that if an attempt is made to remove and execute a "ready" job, then poll proceeds as normal. Otherwise the runnable is place at the back of the queue and you attempt to poll again, possibly after a short timeout.
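A rough sketch of that idea, assuming tasks are submitted with execute() as plain Runnables implementing a small readiness interface of your own (ReadyAware below is hypothetical, not a JDK type), and that re-offering a not-ready task with a short pause is acceptable:

interface ReadyAware extends Runnable {
    boolean isReady();
}

class RequeueingQueue extends LinkedBlockingQueue<Runnable> {
    @Override
    public Runnable take() throws InterruptedException {
        while (true) {
            Runnable r = super.take();
            if (!(r instanceof ReadyAware) || ((ReadyAware) r).isReady()) {
                return r;          // ready (or an ordinary task): hand it to the worker
            }
            super.offer(r);        // not ready yet: push it to the back of the queue
            Thread.sleep(50);      // small pause so a lone not-ready task doesn't spin
        }
    }

    @Override
    public Runnable poll(long timeout, TimeUnit unit) throws InterruptedException {
        Runnable r = super.poll(timeout, unit);
        if (r instanceof ReadyAware && !((ReadyAware) r).isReady()) {
            super.offer(r);        // requeue and report "nothing available this round"
            return null;
        }
        return r;
    }
}

ExecutorService pool = new ThreadPoolExecutor(
        4, 4, 0L, TimeUnit.MILLISECONDS, new RequeueingQueue());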
Unless you really need busy waiting, you can add a repeating task to a ScheduledExecutorService with an appropriate polling interval, which you cancel or kill once it has been able to run.
ScheduledExecutorService ses = ...
ses.scheduleAtFixedRate(new Runnable() {
    public void run() {
        if (!isValid()) return;
        performTask();
        throw new RuntimeException("Last run");
    }
}, PERIOD, PERIOD, TimeUnit.MILLISECONDS);
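Alternatively, instead of killing the task with an exception, you can cancel it through its own ScheduledFuture once it has done its work; a small sketch (the AtomicReference is just one way of letting the task see its own handle, and the initial delay gives set() time to complete):

final AtomicReference<ScheduledFuture<?>> handle = new AtomicReference<ScheduledFuture<?>>();
handle.set(ses.scheduleAtFixedRate(new Runnable() {
    public void run() {
        if (!isValid()) return;     // condition not met yet, try again next period
        performTask();
        handle.get().cancel(false); // done: stop the periodic rescheduling
    }
}, PERIOD, PERIOD, TimeUnit.MILLISECONDS));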
Create the executor first.
You have several possibilites.
Suppose that your tasks implement a simple interface to query their status (something like an enum with 'NeedReschedule' or 'Completed'); then implement a wrapper (implementing Runnable) for your tasks which takes the task and the executor as instantiation parameters. This wrapper will run the task it is bound to, check its status afterwards, and if necessary reschedule a copy of itself in the executor before terminating.
Alternatively, you could use an exception mechanism to signal the wrapper that the task must be rescheduled.
This solution is simpler, in the sense that it doesn't require a particular interface for your task, so that simple Runnables could be thrown into the system without trouble. However, exceptions incur more computation time (object construction, stack trace etc.).
Here's a possible implementation of the wrapper using the exception signaling mechanism.
You need to implement a RescheduleException class extending RuntimeException (so that it can be thrown from run() without being declared), which may be fired by the wrapped runnable (no need for a more specific interface for the task in this setup). You could also use a plain RuntimeException as proposed in another answer, but then you would have to test the message string to know if this is the exception you are waiting for.
public class TaskWrapper implements Runnable {
    private final ExecutorService executor;
    private final Runnable task;

    public TaskWrapper(ExecutorService e, Runnable t) {
        executor = e;
        task = t;
    }

    @Override
    public void run() {
        try {
            task.run();
        } catch (RescheduleException e) {
            executor.execute(this);
        }
    }
}
Here's a very simple application firing up 200 wrapped tasks that randomly ask to be rescheduled.
class Task implements Runnable {
    @Override
    public void run() {
        if (Math.random() > 0.5)
            throw new RescheduleException();
    }
}

public class Main {
    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(10);
        int i = 200;
        while (i-- > 0)
            executor.execute(new TaskWrapper(executor, new Task()));
    }
}
You could also have a dedicated thread to monitor the other threads results (using a message queue) and reschedule if necessary, but you lose one thread, compared to the other solution.

Pattern for executing concurrent tasks just once

I am working on a Java server which dispatches XMPP messages, and workers execute the tasks coming from my clients.
private static ExecutorService threadpool = Executors.newCachedThreadPool();
DispatchWorker worker = new DispatchWorker(connection, packet);
threadpool.execute(worker);
This works fine, but I need a bit more than that.
I don't want to execute the same request multiple times.
My worker may start another thread with a background task that is also only allowed to run once at a time; a thread pool inside the worker threads.
I can identify the requests by a string, and I can also give the background tasks an id to identify them.
My solution would be a synchronized hashmap where my running tasks are registered with their id. The reference to the map would be passed to the worker threads so that they can remove their entry when they finish.
This solution feels a bit clumsy, so I wanted to know if there are more elegant patterns/best practices.
best regards, m
This is exactly what Quartz does (although it does a lot more, like scheduling jobs in the future).
You can use a Singleton thread pool or pass the thread pool as an argument. (I would have the pool final)
You can use a HashSet to guard adding duplicate tasks.
I believe using a Map is okay for this. But instead of a synchronized HashMap you can also use ConcurrentHashMap, which allows you to specify a concurrency level, i.e. how many threads can work with the map at the same time. It also has an atomic putIfAbsent operation.
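A small sketch of the putIfAbsent idea (names here are illustrative): run the work only if the current call was the one that managed to register the id, and always unregister it when the task finishes.

private static final ConcurrentHashMap<String, Boolean> RUNNING = new ConcurrentHashMap<String, Boolean>();

void submitOnce(final String requestId, ExecutorService threadpool, final Runnable work) {
    if (RUNNING.putIfAbsent(requestId, Boolean.TRUE) != null) {
        return; // an identical request is already queued or running
    }
    threadpool.execute(new Runnable() {
        public void run() {
            try {
                work.run();
            } finally {
                RUNNING.remove(requestId); // allow this id to be submitted again later
            }
        }
    });
}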
I would use queues and daemon worker threads that are always running and wait for something to arrive in the queue. This way it is guaranteed that only one worker is working on a request.
If you only want one thread to run, turn POOLSIZE down to 1, or use newSingleThreadExecutor.
I do not quite understand your second requirement: do you mean only 1 thread is allowed to run as background task? If so, you could create another SingleThreadExecutor and use that for the background task. Then it would not make too much sense to have POOLSIZE>1, unless the work done in the background thread is very short compared to that done in the worker itself.
private static interface Request {};

private final int POOLSIZE = 10;
private final int QUEUESIZE = 1000;
private final BlockingQueue<Request> queue = new LinkedBlockingQueue<Request>(QUEUESIZE);

public void startWorkers() {
    ExecutorService threadPool = Executors.newFixedThreadPool(POOLSIZE);
    for (int i = 0; i < POOLSIZE; i++) {
        threadPool.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    while (true) {
                        final Request request = queue.take();
                        doStuffWithRequest(request);
                    }
                } catch (InterruptedException e) {
                    // LOG
                    // Shutdown worker thread.
                }
            }
        });
    }
}

public void handleRequest(Request request) {
    if (!queue.offer(request)) {
        // Cancel request, queue is full;
    }
}
At startup time, startWorkers starts the workers (surprise!).
handleRequest handles requests coming from a webservice, servlet or whatever.
Of course you need to adapt "Request" and "doStuffWithRequest" to your need, and add some additional logic for shutdown etc.
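For the shutdown part, something along these lines is typical (a sketch, assuming you keep a reference to the pool created in startWorkers; the timeout is an example value):

public void stopWorkers(ExecutorService threadPool) throws InterruptedException {
    threadPool.shutdownNow();                                  // interrupts workers blocked in queue.take()
    if (!threadPool.awaitTermination(10, TimeUnit.SECONDS)) {
        // LOG: workers did not terminate in time
    }
}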
We originally wrote our own utilities to handle this, but if you want the results memoised, then Guava's ComputingMap encapsulates the initialisation by one and only one thread (with other threads blocking and waiting for the result), and the memoisation.
It also supports various expiration strategies.
Usage is simple, you construct it with an initialisation function:
Map<String, Foo> cache = new MapMaker().makeComputingMap(new Function<String, Foo>() {
    public Foo apply(String key) {
        return … // init with expensive calculation
    }
});
and then just call it:
Foo foo = cache.get("key");
The first thread to ask for "key" will be the one who performs the initialisation
