I am using the Java ExecutorService to create a single-threaded executor.
Code:
ExecutorService executor = Executors.newSingleThreadExecutor();
try {
    executor.submit(new Runnable() {
        @Override
        public void run() {
            Iterator<FileObject> itr = mysortedList.iterator();
            while (itr.hasNext()) {
                myWebFunction(itr.next());
            }
        }
    }).get(Timeout * mysortedList.size() - 10, TimeUnit.SECONDS);
} catch (Exception ex) {
} finally {
    executor.shutdownNow();
}
Details: myWebFunction processes files of different sizes and content. Processing involves extracting the entire content and applying further actions to it.
The program runs on 64-bit CentOS.
Problem: when myWebFunction gets a file larger than some threshold, say 10 MB, the executor service is unable to create a native thread. I tried various -Xmx and -Xms settings, but the executor service still throws the same error.
My guess is that you are calling this many times and not waiting for the threads that have timed out, leaving lots of threads lying around. When you run out of stack space, or reach about 32K threads, you cannot create any more.
I suggest using a different approach which doesn't use so many threads, or which kills them off when you know you don't need them any more. E.g. have the while loop check for interrupts and call Future.cancel(true) to interrupt it, as in the sketch below.
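For illustration, a minimal sketch of that idea, reusing the names from the question (mysortedList, myWebFunction, Timeout): the loop bails out as soon as the worker thread is interrupted, and the caller cancels the task with interruption when the timeout expires, so no thread is left lying around.

ExecutorService executor = Executors.newSingleThreadExecutor();
Future<?> future = executor.submit(new Runnable() {
    @Override
    public void run() {
        for (FileObject file : mysortedList) {
            // stop promptly once cancel(true) has interrupted this thread
            if (Thread.currentThread().isInterrupted()) {
                return;
            }
            myWebFunction(file);
        }
    }
});
try {
    future.get(Timeout * mysortedList.size() - 10, TimeUnit.SECONDS);
} catch (Exception ex) {
    future.cancel(true);   // interrupts the worker so the loop above can exit
} finally {
    executor.shutdownNow();
}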
Related
I have a JSP application in which a webpage calls five methods one by one (all of them fetch data from different sources) and displays charts based on the data.
To load the webpage quickly, I planned to call all five methods in parallel with the help of a FixedThreadPool executor.
Should I shut down my executor once I get the results from all five methods? Shutting down the executor seems like a bad idea to me, since if someone opens the webpage a second time the executor would have to be initialized again in order to call the five methods in parallel.
However, I'm not sure about the consequences of leaving the executor open, so I don't know how to proceed.
Leaving it open is the normal way to use a thread pool. That's the whole point of thread pools: to prevent your application from having to create and then destroy new threads every time it needs to load a page. Instead, it can just reuse the same threads again and again.
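As a rough illustration (the class and method names here are invented, not from the question), the pool can be a long-lived field that every page load reuses, and it is shut down only once, when the web application itself stops:

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical holder: created once (e.g. at application startup) and reused for every request.
public class ChartDataService {
    // one pool for the whole application, sized for the five parallel fetches
    private final ExecutorService pool = Executors.newFixedThreadPool(5);

    // runs all fetchers in parallel and waits for them; the pool stays open afterwards
    public <T> List<Future<T>> loadAll(List<Callable<T>> fetchers) throws InterruptedException {
        return pool.invokeAll(fetchers);
    }

    // call exactly once, when the web application shuts down (e.g. from a ServletContextListener)
    public void dispose() {
        pool.shutdown();
    }
}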
In chapter 7 of "Java Concurrency in Practice" there is an example just like this, where a so-called one-shot execution service is proposed:
If a method needs to process a batch of tasks and does not return until all the
tasks are finished, it can simplify service lifecycle management by using a private
Executor whose lifetime is bounded by that method.
Its code example:
boolean checkMail(Set<String> hosts, long timeout, TimeUnit unit)
        throws InterruptedException {
    ExecutorService exec = Executors.newCachedThreadPool();
    final AtomicBoolean hasNewMail = new AtomicBoolean(false);
    try {
        for (final String host : hosts)
            exec.execute(new Runnable() {
                public void run() {
                    if (checkMail(host))
                        hasNewMail.set(true);
                }
            });
    } finally {
        exec.shutdown();
        exec.awaitTermination(timeout, unit);
    }
    return hasNewMail.get();
}
I'd suggest simplifying your code using this approach; a sketch adapted to your five methods follows.
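A hedged sketch of what that could look like for the five-methods case above (the fetcher list and the "chart-" keys are invented for illustration); the executor's lifetime is bounded by the method, exactly as in the book's example:

// Sketch only: "fetchers" stands in for the five data-fetching methods.
Map<String, Object> loadChartData(List<Callable<Object>> fetchers, long timeout, TimeUnit unit)
        throws InterruptedException {
    ExecutorService exec = Executors.newFixedThreadPool(fetchers.size());
    final Map<String, Object> results = new ConcurrentHashMap<String, Object>();
    try {
        int i = 0;
        for (final Callable<Object> fetcher : fetchers) {
            final String key = "chart-" + i++;
            exec.execute(new Runnable() {
                public void run() {
                    try {
                        results.put(key, fetcher.call());
                    } catch (Exception e) {
                        // log and leave this chart empty
                    }
                }
            });
        }
    } finally {
        exec.shutdown();
        exec.awaitTermination(timeout, unit);
    }
    return results;
}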
I'm using an external lib (Generex) in my project, and one constructor may take a very long time to execute, so I'd like to have a timeout (let's say 50 ms), and be able to know if the timeout has been reached or not.
So I was thinking of using a dedicated thread, and wrote the following code:
@Test
public void isComputable() throws InterruptedException {
    for (int i = 0; i < 10; i++)
        System.out.println(check());
    Thread.sleep(300000);
}
private static boolean check() {
    final Thread stuffToDo = new Thread(() -> { while (true) { } });
    final ExecutorService executor = Executors.newSingleThreadExecutor();
    final Future future = executor.submit(stuffToDo);
    executor.shutdown();
    try {
        future.get(50, TimeUnit.MILLISECONDS);
    } catch (InterruptedException | ExecutionException | TimeoutException ie) {
        stuffToDo.interrupt();
        stuffToDo.stop();
        return false;
    }
    if (!executor.isTerminated())
        executor.shutdownNow();
    return true;
}
I replaced the call to the external lib with a while(true) loop, yet it is important to note that, in my case, I cannot use a loop to check if the thread was interrupted.
When executing this code, I do get the answer after 50 ms for each call, yet the thread is not destroyed and there is high CPU usage, as can be seen with JProfiler (note that the loop over i in the test is just there to produce a nicer chart):
Does anyone have any idea on how to solve this issue please?
Note: I know that I should not use the deprecated stop method, I just tried everything I know to kill the thread.
You either have to check for an interrupt regularly in the code you call, or you have to run the code in another process. These are the only ways you can either interrupt the code or kill the process running it.
I suggest taking a stack trace of the long-running thread to help fix it in the future.
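If the constructor really cannot be made to observe interrupts, the separate-process route might look roughly like this (generex.CheckMain is a hypothetical main class that builds the Generex object and exits with 0 on success):

private static boolean checkInSeparateProcess(String regex) {
    try {
        // run the expensive construction in a child JVM so it can actually be killed
        Process p = new ProcessBuilder(
                "java", "-cp", System.getProperty("java.class.path"),
                "generex.CheckMain", regex).start();
        if (!p.waitFor(50, TimeUnit.MILLISECONDS)) {
            p.destroyForcibly();   // unlike Thread.stop(), this really frees the CPU
            return false;
        }
        return p.exitValue() == 0;
    } catch (IOException | InterruptedException e) {
        return false;
    }
}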
I have a parallel-running Java application that consumes huge log files and applies some custom logic. Each log row is processed in a separate thread using a fire-and-forget approach.
However, sometimes the Java process just stops processing. What I mean by that is that the Java application doesn't get assigned any CPU, even though it still hasn't finished consuming the file.
Running top I get quite a low load average considering the 16 cores that I have:
Running vmstat I can see that none of the user processes are running, nor the kernel processes; rather, it's 99% idle.
The output of iostat shows me that there are no pending IO tasks running either:
I also haven't spotted any deadlocks or starvation by taking a thread dump. Most of the threads are WAITING or RUNNABLE.
What am I missing? I got lost, and I don't really know where to investigate further.
=UPDATE=
This is the part that initiates the parallel execution; after this there are thousands of lines of code applying modifications, involving Elasticsearch, Akka, etc.
So I don't really know what the relevant code would be that might cause any trouble.
BlockingQueue<Runnable> workQueue = new ArrayBlockingQueue<Runnable>(100);
ExecutorService executorService = new MetricsThreadPoolExecutor(numThreadCore, numThreadCore,
        idleTime, TimeUnit.SECONDS, workQueue, new ThreadPoolExecutor.AbortPolicy(),
        "process.concurrent", metrics);

FileInputStream fileStream = new FileInputStream(file);
BufferedReader bufferedReader = new BufferedReader(
        new InputStreamReader(new GZIPInputStream(fileStream)));

String strRow = bufferedReader.readLine();
while (strRow != null) {
    final Row row = new Row(strRow);
    try {
        executorService.submit(new Runnable() {
            @Override
            public void run() {
                if (!StringUtil.isBlank(row.getLine())) {
                    processor.process(row);
                }
            }
        });
        strRow = bufferedReader.readLine();
    } catch (RejectedExecutionException ree) {
        try {
            logger.warn(ree.getMessage());
            Thread.sleep(50L);
        } catch (InterruptedException ie) {
            logger.warn("Wait interrupted", ie);
        }
    }
}
However, sometimes the Java process just stops processing. What I mean by that is that the Java application doesn't get assigned any CPU, even though it still hasn't finished consuming the file.
Don't think about this at the CPU/vmstat/iostat level. That's just confusing the debugging of the problem. You should think about this in terms of threads only and trust the OS to schedule them appropriately.
I see no reason why the main thread shouldn't finish after all of the rows have been submitted for processing. As an aside, you may instead want to just block the producer instead of regenerating the rows in your spin/sleep loop like you are doing. See: RejectedExecutionException free threads but full queue
If your application is not completing then either one of the worker threads is hung while processing a row, or the MetricsThreadPoolExecutor has not been shut down. I suspect the latter. The producer thread, after it exits the while (strRow != null) loop, should call executorService.shutdown(); otherwise the threads will wait for more rows to be added.
You could do a thread dump on your application to see if it is stuck in a worker. You could add logging when the producer thread finishes, which should let you know whether it completed its work. Both might help figure out where the problem lies.
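If the missing shutdown is indeed the cause, a hedged sketch of the fix, reusing the variable names from the question (the 30-minute grace period is just a placeholder):

// after the while (strRow != null) { ... } loop has consumed the whole file
bufferedReader.close();
executorService.shutdown();                      // no new rows will be submitted
try {
    if (!executorService.awaitTermination(30, TimeUnit.MINUTES)) {
        logger.warn("Workers did not finish in time, forcing shutdown");
        executorService.shutdownNow();           // interrupt whatever is still running
    }
} catch (InterruptedException ie) {
    executorService.shutdownNow();
    Thread.currentThread().interrupt();
}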
I am working on a project in which I will be having different Bundles. Let's take an example: suppose I have 5 bundles, and each of those bundles has a method named process.
Currently, I am calling the process method of all those 5 bundles in parallel using the multithreaded code below.
But somehow, every time I run the multithreaded code, it gives me an OutOfMemory exception. But if I run it sequentially, meaning calling the process methods one by one, it doesn't give me any OutOfMemory exception.
Below is the code:
public void callBundles(final Map<String, Object> eventData) {
    // Three threads: one thread for the database writer, two threads for the plugin processors
    final ExecutorService executor = Executors.newFixedThreadPool(3);
    final Map<String, String> outputs = (Map<String, String>) eventData.get(Constants.EVENT_HOLDER);

    for (final BundleRegistration.BundlesHolderEntry entry : BundleRegistration.getInstance()) {
        executor.submit(new Runnable() {
            public void run() {
                try {
                    final Map<String, String> response = entry.getPlugin().process(outputs);
                    // process the response and update database.
                    System.out.println(response);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
    }
}
Below is the exception I am getting whenever I run the above multithreaded code.
JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
JVMDUMP032I JVM requested Heap dump using 'S:\GitViews\Stream\goldseye\heapdump.20130904.175256.12608.0001.phd' in response to an event
JVMDUMP010I Heap dump written to S:\GitViews\Stream\goldseye\heapdump.20130904.175256.12608.0001.phd
JVMDUMP032I JVM requested Java dump using 'S:\GitViews\Stream\goldseye\javacore.20130904.175256.12608.0002.txt' in response to an event
UTE430: can't allocate buffer
UTE437: Unable to load formatStrings for j9mm
JVMDUMP010I Java dump written to S:\GitViews\Stream\goldseye\javacore.20130904.175256.12608.0002.txt
JVMDUMP032I JVM requested Snap dump using 'S:\GitViews\Stream\goldseye\Snap.20130904.175256.12608.0003.trc' in response to an event
UTE001: Error starting trace thread for "Snap Dump Thread": -1
JVMDUMP010I Snap dump written to S:\GitViews\Stream\goldseye\Snap.20130904.175256.12608.0003.trc
JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
ERROR: Bundle BullseyeModellingFramework [1] EventDispatcher: Error during dispatch. (java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830, errno 12)
java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830, errno 12
JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
JVMDUMP032I JVM requested Heap dump using 'S:\GitViews\Stream\goldseye\heapdump.20130904.175302.12608.0004.phd' in response to an event
JVMDUMP010I Heap dump written to S:\GitViews\Stream\goldseye\heapdump.20130904.175302.12608.0004.phd
JVMDUMP032I JVM requested Java dump using 'S:\GitViews\Stream\goldseye\javacore.20130904.175302.12608.0005.txt' in response to an event
I am using JDK 1.6.0_26 as the installed JRE in my Eclipse.
Each call of callBundles() will create a new thread pool by creating its own executor. Each thread has its own stack space! So after you start the JVM, the first call will create three threads with a total of 3 MB of stack memory (1024 KB is the default stack size of a 64-bit JVM), the next call another 3 MB, and so on. 1000 calls/s will need 3 GB/s!
The second problem is that you never call shutdown() on the created executor services, so the threads live on until the garbage collector removes the executor (finalize() also calls shutdown()). But the GC does not reclaim stack memory directly, so if stack memory is the problem and the heap is not full, the GC will never help!
You need to use one ExecutorService, let's say with 10 to 30 threads, or a custom ThreadPoolExecutor with 3-30 threads and a LinkedBlockingQueue; a sketch follows below. Call shutdown() on the service before your application stops if possible.
Check the physical RAM, load, and response time of your application to tune the heap size, maximum number of threads, and keep-alive time of the threads in the pool. Have a look at other limiting parts of the code (size of a database connection pool, ...) and the number of CPUs/cores of your server. A starting point for the thread pool size may be the number of CPUs/cores plus 1; with a lot of I/O wait, more threads become useful.
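A minimal sketch of such a shared pool, assuming the 3-30 thread bounds and a 60-second keep-alive as suggested above (the queue capacity of 100 is also just an assumption):

// One application-wide pool: 3 core threads that grow towards 30 only when the
// bounded queue fills up; idle extra threads are reclaimed after 60 seconds.
private static final ExecutorService BUNDLE_POOL = new ThreadPoolExecutor(
        3, 30,
        60L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>(100));

// use BUNDLE_POOL.submit(...) everywhere instead of creating a new
// Executors.newFixedThreadPool(3) inside callBundles(), and call
// BUNDLE_POOL.shutdown() once when the application stops.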
The main problem is that you aren't really using the thread pooling properly. If all of your "process" threads are of equal priority, there's no good reason not to make one large thread pool and submit all of your Runnable tasks to that. Note - "large" in this case is determined via experimentation and profiling: adjust it until your performance in terms of speed and memory is what you expect.
Here is an example of what I'm describing:
// Using 10000 purely as a concrete example - you should define the correct number
public static final int LARGE_NUMBER_OF_THREADS = 10000;

// Elsewhere in code, you define a static thread pool
public static final ExecutorService EXECUTOR =
        Executors.newFixedThreadPool(LARGE_NUMBER_OF_THREADS);

public void callBundles(final Map<String, Object> eventData) {
    final Map<String, String> outputs =
            (Map<String, String>) eventData.get(Constants.EVENT_HOLDER);

    for (final BundleRegistration.BundlesHolderEntry entry : BundleRegistration.getInstance()) {
        // "Three threads: one thread for the database writer,
        //  two threads for the plugin processors"
        // so you'll need to repeat this future = EXECUTOR.submit() pattern two more times
        Future<?> processFuture = EXECUTOR.submit(new Runnable() {
            public void run() {
                final Map<String, String> response =
                        entry.getPlugin().process(outputs);
                // process the response and update database.
                System.out.println(response);
            }
        });

        // Note, I'm catching the exception out here instead of inside the task.
        // This also allows me to force order on the three component threads.
        try {
            processFuture.get();
        } catch (Exception e) {
            System.err.println("Should really do something more useful");
            e.printStackTrace();
        }

        // If you wanted to ensure that the three component tasks run in order,
        // you could future = executor.submit(); future.get(); for each one of them
    }
}
For completeness, you could also use a cached thread pool to avoid repeated creation of short-lived Threads. However, if you're already worried about memory consumption, a fixed pool might be better.
When you get to Java 7, you might find that Fork-Join is a better pattern than a series of Futures. Whatever fits your needs best, though.
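For reference, a rough sketch of what the Fork-Join variant could look like on Java 7; the entry type and process() call mirror the question, everything else is an assumption:

// Sketch only: splits the entries until each subtask handles one of them.
class ProcessEntriesTask extends RecursiveAction {
    private final List<BundleRegistration.BundlesHolderEntry> entries;
    private final Map<String, String> outputs;

    ProcessEntriesTask(List<BundleRegistration.BundlesHolderEntry> entries,
                       Map<String, String> outputs) {
        this.entries = entries;
        this.outputs = outputs;
    }

    @Override
    protected void compute() {
        if (entries.size() <= 1) {
            if (!entries.isEmpty()) {
                System.out.println(entries.get(0).getPlugin().process(outputs));
            }
            return;
        }
        int mid = entries.size() / 2;
        invokeAll(new ProcessEntriesTask(entries.subList(0, mid), outputs),
                  new ProcessEntriesTask(entries.subList(mid, entries.size()), outputs));
    }
}

// usage: new ForkJoinPool().invoke(new ProcessEntriesTask(allEntries, outputs));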
We use the JDK 7 WatchService to watch a directory which can contain XML or CSV files. These files are put in a thread pool, later processed, and pushed into a database. This application runs forever, watching the directory and processing files as and when they are available. The XML files are small and do not take time, however each CSV file can contain more than 80 thousand records, so processing takes time to put them in the database. The Java application gives us an OutOfMemoryError when 15 CSV files are being processed from the thread pool. Is there any way that when CSV files come into the thread pool, they can be processed serially, i.e. only one at a time?
The Java application gives us an OutOfMemoryError when 15 CSV files are being processed from the thread pool. Is there any way that when CSV files come into the thread pool, they can be processed serially, i.e. only one at a time?
If I'm understanding correctly, you want to stop adding to the pool when you are over some threshold. There is an easy way to do that, which is to use a blocking queue and a rejected execution handler.
See the following answer:
Process Large File for HTTP Calls in Java
To summarize it, you do something like the following:
// only allow 100 jobs to queue
final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(100);
ThreadPoolExecutor threadPool =
        new ThreadPoolExecutor(nThreads, nThreads, 0L, TimeUnit.MILLISECONDS, queue);

// we need our RejectedExecutionHandler to block if the queue is full
threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            // this will block the producer until there's room in the queue
            executor.getQueue().put(r);
        } catch (InterruptedException e) {
            throw new RejectedExecutionException(
                    "Unexpected InterruptedException", e);
        }
    }
});
This means it will block when adding to the queue and should not exhaust memory.
I would take a different route to solve your problem. I guess you have everything right except that you read too much data into memory.
I'm not sure how you are reading the CSV files, but I would suggest using a LineReader and reading, e.g., 500 lines, processing them, and then reading the next 500 lines. All large files should be handled this way, because no matter how much you increase your memory arguments, you will hit an out-of-memory error as soon as you have a bigger file to process. So use an implementation that can handle records in batches. This requires some extra coding effort but will never fail, no matter how big the files you have to process are; a rough sketch follows.
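A hedged sketch of that batching idea using a plain BufferedReader (the batch size of 500 and the processBatch() call are placeholders, not from the question):

BufferedReader reader = new BufferedReader(new FileReader(csvFile));
try {
    List<String> batch = new ArrayList<String>(500);
    String line;
    while ((line = reader.readLine()) != null) {
        batch.add(line);
        if (batch.size() == 500) {
            processBatch(batch);   // e.g. insert these 500 rows into the database
            batch.clear();
        }
    }
    if (!batch.isEmpty()) {
        processBatch(batch);       // flush the final, partial batch
    }
} finally {
    reader.close();
}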
Cheers !!
You can try:
Increase the memory of the JVM using the -Xmx JVM option
Use a different executor to reduce the number of files processed at a time. A drastic solution is to use a SingleThreadExecutor:
public class FileProcessor implements Runnable {
    public FileProcessor(String name) { }

    public void run() {
        // process file
    }
}

// ...
ExecutorService executor = Executors.newSingleThreadExecutor();
// ...

public void onNewFile(String fileName) {
    executor.submit(new FileProcessor(fileName));
}