I have a parallel-running Java application that consumes huge log files and applies some custom logic. Each log row is processed in a separate thread using a fire-and-forget approach.
However, sometimes the Java process just stops processing. What I mean by that is that the Java application doesn't get assigned CPU to execute, even though it still hasn't finished consuming the file.
Running top I get quite a low load average considering the 16 cores that I have:
Running vmstat I can see that none of the user processes are running, and neither are the kernel processes; rather, it's 99% idle.
The output of iostat shows me that there are no pending I/O tasks running either:
I also haven't spotted any deadlocks or starvation when taking a thread dump. Most of the threads are WAITING or RUNNABLE.
What am I missing? I am lost, and I don't really know where to investigate further.
=UPDATE=
This is the part that initiates the parallel execution; after this there are thousands of lines of code applying modifications, incl. Elasticsearch, Akka, etc.
So I don't really know what the relevant code would be that might cause any trouble.
BlockingQueue<Runnable> workQueue = new ArrayBlockingQueue<Runnable>(100);
ExecutorService executorService = new MetricsThreadPoolExecutor(numThreadCore, numThreadCore, idleTime, TimeUnit.SECONDS, workQueue, new ThreadPoolExecutor.AbortPolicy(), "process.concurrent", metrics);
FileInputStream fileStream = new FileInputStream(file);
BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(new GZIPInputStream(fileStream)));
String strRow = bufferedReader.readLine();
while (strRow != null) {
    final Row row = new Row(strRow);
    try {
        executorService.submit(new Runnable() {
            @Override
            public void run() {
                if (!StringUtil.isBlank(row.getLine())) {
                    processor.process(row);
                }
            }
        });
        strRow = bufferedReader.readLine();
    } catch (RejectedExecutionException ree) {
        try {
            logger.warn(ree.getMessage());
            Thread.sleep(50L);
        } catch (InterruptedException ie) {
            logger.warn("Wait interrupted", ie);
        }
    }
}
However, sometimes the Java process just stops processing. What I mean by that is that the Java application doesn't get assigned CPU to execute, even though it still hasn't finished consuming the file.
Don't think about this at the CPU/vmstat/iostat level. That's just confusing the debugging of the problem. You should think about this in terms of threads only and trust the OS to schedule them appropriately.
I see no reason why the main thread shouldn't finish after all of the rows have been submitted for processing. As an aside, you may want to just block the producer rather than retrying the rejected rows in your spin/sleep loop like you are doing. See: RejectedExecutionException free threads but full queue
If your application is not completing, then either one of the worker threads is hung while processing a row, or the MetricsThreadPoolExecutor has not been shut down. I suspect the latter. The producer thread, after it exits the while (strRow != null) loop, should call executorService.shutdown(). Otherwise the threads will keep waiting for more rows to be added.
You could do a thread dump on your application to see if it is stuck in a worker. You could also add logging when the producer thread finishes, which would let you know whether it completed its work. Both might help figure out where the problem lies.
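For example, a minimal sketch of what the end of the producer could look like, reusing the variables from the question's code (the timeout value is an assumption):

// after the while (strRow != null) loop finishes reading the file:
bufferedReader.close();

// stop accepting new tasks; rows already queued still get processed
executorService.shutdown();

// optionally block until the workers drain the queue
try {
    if (!executorService.awaitTermination(10, TimeUnit.MINUTES)) {
        logger.warn("Workers did not finish within the timeout");
    }
} catch (InterruptedException ie) {
    Thread.currentThread().interrupt();
}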
Related
I am using the Java ExecutorService to create a single thread.
Code:
ExecutorService executor = Executors.newSingleThreadExecutor();
try {
    executor.submit(new Runnable() {
        @Override
        public void run() {
            Iterator<FileObject> itr = mysortedList.iterator();
            while (itr.hasNext()) {
                myWebFunction(itr.next());
            }
        }
    }).get(Timeout * mysortedList.size() - 10, TimeUnit.SECONDS);
} catch (Exception ex) {
} finally {
    executor.shutdownNow();
}
Details: myWebFunction processes files of different sizes and content. Processing involves extracting the entire content and applying further actions on the file content.
The program runs on 64-bit CentOS.
Problem: When myWebFunction gets a file larger than some threshold, say 10MB, the executor service is unable to create a native thread. I tried various -Xmx and -Xms settings, but the executor service still throws the same error.
My guess is that you are calling this many times, and you are not waiting for the thread which has timed out, leaving lots of threads lying around. When you run out of stack space, or reach about 32K threads, you cannot create any more.
I suggest using a different approach which doesn't use so many threads, or which kills them off when you know you don't need them any more. E.g. have the while loop check for interrupts and call Future.cancel(true) to interrupt it, as in the sketch below.
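A rough sketch of that idea, reusing the names from the question's code (timeoutSeconds is an assumption):

Future<?> future = executor.submit(new Runnable() {
    @Override
    public void run() {
        for (FileObject fo : mysortedList) {
            // bail out promptly once cancel(true) interrupts this thread
            if (Thread.currentThread().isInterrupted()) {
                return;
            }
            myWebFunction(fo);
        }
    }
});
try {
    future.get(timeoutSeconds, TimeUnit.SECONDS);
} catch (TimeoutException e) {
    // interrupt the worker instead of abandoning it
    future.cancel(true);
} catch (Exception e) {
    future.cancel(true);
} finally {
    executor.shutdownNow();
}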
I am working on a project in which I will have different bundles. Let's take an example: suppose I have 5 bundles, and each of those bundles has a method named process.
Currently, I am calling the process method of all those 5 bundles in parallel using the multithreaded code below.
But somehow, every time I run the multithreaded code below, it gives me an OutOfMemoryError. If I run it sequentially, calling the process methods one by one, it doesn't give me any OutOfMemoryError.
Below is the code:
public void callBundles(final Map<String, Object> eventData) {
// Three threads: one thread for the database writer, two threads for the plugin processors
final ExecutorService executor = Executors.newFixedThreadPool(3);
final Map<String, String> outputs = (Map<String, String>)eventData.get(Constants.EVENT_HOLDER);
for (final BundleRegistration.BundlesHolderEntry entry : BundleRegistration.getInstance()) {
executor.submit(new Runnable () {
public void run() {
try {
final Map<String, String> response = entry.getPlugin().process(outputs);
//process the response and update database.
System.out.println(response);
} catch (Exception e) {
e.printStackTrace();
}
}
});
}
}
Below is the exception I am getting whenever I run the above multithreaded code.
JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
JVMDUMP032I JVM requested Heap dump using 'S:\GitViews\Stream\goldseye\heapdump.20130904.175256.12608.0001.phd' in response to an event
JVMDUMP010I Heap dump written to S:\GitViews\Stream\goldseye\heapdump.20130904.175256.12608.0001.phd
JVMDUMP032I JVM requested Java dump using 'S:\GitViews\Stream\goldseye\javacore.20130904.175256.12608.0002.txt' in response to an event
UTE430: can't allocate buffer
UTE437: Unable to load formatStrings for j9mm
JVMDUMP010I Java dump written to S:\GitViews\Stream\goldseye\javacore.20130904.175256.12608.0002.txt
JVMDUMP032I JVM requested Snap dump using 'S:\GitViews\Stream\goldseye\Snap.20130904.175256.12608.0003.trc' in response to an event
UTE001: Error starting trace thread for "Snap Dump Thread": -1
JVMDUMP010I Snap dump written to S:\GitViews\Stream\goldseye\Snap.20130904.175256.12608.0003.trc
JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
ERROR: Bundle BullseyeModellingFramework [1] EventDispatcher: Error during dispatch. (java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830, errno 12)
java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830, errno 12
JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
JVMDUMP032I JVM requested Heap dump using 'S:\GitViews\Stream\goldseye\heapdump.20130904.175302.12608.0004.phd' in response to an event
JVMDUMP010I Heap dump written to S:\GitViews\Stream\goldseye\heapdump.20130904.175302.12608.0004.phd
JVMDUMP032I JVM requested Java dump using 'S:\GitViews\Stream\goldseye\javacore.20130904.175302.12608.0005.txt' in response to an event
I am using JDK 1.6.0_26 as the installed JRE in my Eclipse.
Each call of callBundles() creates a new thread pool via its own executor. Each thread has its own stack space! So the first call will create three threads with a total of 3M of stack memory (1024k is the default thread stack size on a 64-bit JVM), the next call another 3M, etc. 1000 calls/s would need 3GB/s!
The second problem is that you never shutdown() the created executor services, so the threads live on until the garbage collector removes the executor (finalize() also calls shutdown()). But the GC will never clear the stack memory, so if stack memory is the problem and the heap is not full, the GC will never help!
You need to use one ExecutorService, let's say with 10 to 30 threads, or a custom ThreadPoolExecutor with 3-30 cached threads and a LinkedBlockingQueue. Call shutdown() on the service before your application stops, if possible.
Check the physical RAM, load, and response time of your application to tune the heap size, the maximum number of threads, and the keep-alive time of the threads in the pool. Have a look at other blocking parts of the code (size of a database connection pool, ...) and the number of CPUs/cores of your server. A starting point for a thread pool size may be the number of CPUs/cores plus 1; with a lot of I/O wait, more threads become useful.
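A minimal sketch of that shared-executor idea (the pool size of 10 is an assumption you would tune; note that with an unbounded LinkedBlockingQueue a ThreadPoolExecutor never grows past its core size, so core and maximum are set to the same value here):

import java.util.concurrent.*;

public final class Pools {
    // one pool for the whole application, created once
    public static final ExecutorService SHARED_POOL = new ThreadPoolExecutor(
            10, 10, 0L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());

    private Pools() { }
}

// callBundles() then submits its Runnables to Pools.SHARED_POOL instead of
// creating a new executor, and the application calls
// Pools.SHARED_POOL.shutdown() once, on exit.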
The main problem is that you aren't really using the thread pooling properly. If all of your "process" threads are of equal priority, there's no good reason not to make one large thread pool and submit all of your Runnable tasks to that. Note - "large" in this case is determined via experimentation and profiling: adjust it until your performance in terms of speed and memory is what you expect.
Here is an example of what I'm describing:
// Using 10000 purely as a concrete example - you should define the correct number
public static final int LARGE_NUMBER_OF_THREADS = 10000;

// Elsewhere in code, you defined a static thread pool
public static final ExecutorService EXECUTOR =
        Executors.newFixedThreadPool(LARGE_NUMBER_OF_THREADS);

public void callBundles(final Map<String, Object> eventData) {
    final Map<String, String> outputs =
            (Map<String, String>) eventData.get(Constants.EVENT_HOLDER);
    for (final BundleRegistration.BundlesHolderEntry entry : BundleRegistration.getInstance()) {
        // "Three threads: one thread for the database writer,
        // two threads for the plugin processors"
        // so you'll need to repeat this future = EXECUTOR.submit() pattern two more times
        Future<?> processFuture = EXECUTOR.submit(new Runnable() {
            public void run() {
                final Map<String, String> response =
                        entry.getPlugin().process(outputs);
                // process the response and update the database.
                System.out.println(response);
            }
        });
        // Note, I'm catching the exception out here instead of inside the task.
        // This also allows me to force order on the three component threads.
        try {
            processFuture.get();
        } catch (Exception e) {
            System.err.println("Should really do something more useful");
            e.printStackTrace();
        }
        // If you wanted to ensure that the three component tasks run in order,
        // you could future = executor.submit(); future.get();
        // for each one of them
    }
}
For completeness, you could also use a cached thread pool to avoid repeated creation of short-lived Threads. However, if you're already worried about memory consumption, a fixed pool might be better.
When you get to Java 7, you might find that Fork-Join is a better pattern than a series of Futures. Whatever fits your needs best, though.
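If you do move to Java 7, a Fork-Join version might look roughly like the sketch below (assumptions: it reuses the BundleRegistration types from the question and simply halves the entry list; error handling is omitted):

import java.util.List;
import java.util.Map;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

class ProcessBundlesTask extends RecursiveAction {
    private final List<BundleRegistration.BundlesHolderEntry> entries;
    private final Map<String, String> outputs;

    ProcessBundlesTask(List<BundleRegistration.BundlesHolderEntry> entries,
                       Map<String, String> outputs) {
        this.entries = entries;
        this.outputs = outputs;
    }

    @Override
    protected void compute() {
        if (entries.size() <= 1) {
            // small enough: process directly
            for (BundleRegistration.BundlesHolderEntry entry : entries) {
                System.out.println(entry.getPlugin().process(outputs));
            }
        } else {
            // split the work in half and let the pool balance it
            int mid = entries.size() / 2;
            invokeAll(new ProcessBundlesTask(entries.subList(0, mid), outputs),
                      new ProcessBundlesTask(entries.subList(mid, entries.size()), outputs));
        }
    }
}

// usage: new ForkJoinPool().invoke(new ProcessBundlesTask(allEntries, outputs));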
We use the JDK 7 WatchService to watch a directory which can contain XML or CSV files. These files are put in a thread pool and later processed and pushed into a database. This application runs forever, watching the directory, and keeps processing files as and when they become available. The XML files are small and do not take time; however, each CSV file can contain more than 80 thousand records, so processing takes time to put them in the database. The Java application gives us an OutOfMemoryError when there are 15 CSV files being processed from the thread pool. Is there any way that when CSV files come into the thread pool, they can be processed serially, i.e. only one at a time?
The Java application gives us an OutOfMemoryError when there are 15 CSV files being processed from the thread pool. Is there any way that when CSV files come into the thread pool, they can be processed serially, i.e. only one at a time?
If I'm understanding, you want to stop adding to the pool if you are over some threshold. There is an easy way to do that: use a blocking queue and a rejected-execution handler.
See the following answer:
Process Large File for HTTP Calls in Java
To summarize it, you do something like the following:
// only allow 100 jobs to queue
final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(100);
ThreadPoolExecutor threadPool =
new ThreadPoolExecutor(nThreads, nThreads, 0L, TimeUnit.MILLISECONDS, queue);
// we need our RejectedExecutionHandler to block if the queue is full
threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
@Override
public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
try {
// this will block the producer until there's room in the queue
executor.getQueue().put(r);
} catch (InterruptedException e) {
throw new RejectedExecutionException(
"Unexpected InterruptedException", e);
}
}
});
This will mean that it will block adding to the queue and should not exhaust memory.
I would take a different route to solve your problem. I guess you have everything right, except that you read too much data into memory at once.
I am not sure how you are reading the CSV files, but I would suggest using a line-based reader: read e.g. 500 lines, process them, then read the next 500 lines. All large files should be handled this way, because no matter how much you increase your memory arguments, you will hit out-of-memory as soon as you have a bigger file to process. Using an implementation that handles records in batches requires some extra coding effort, but it will never fail no matter how big a file you have to process; a sketch follows below.
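A sketch of that batching idea with a plain BufferedReader (insertIntoDatabase is a hypothetical helper standing in for your persistence step; the batch size of 500 is just the example from above):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

void processInBatches(String csvPath) throws IOException {
    BufferedReader reader = new BufferedReader(new FileReader(csvPath));
    try {
        List<String> batch = new ArrayList<String>(500);
        String line;
        while ((line = reader.readLine()) != null) {
            batch.add(line);
            if (batch.size() == 500) {
                insertIntoDatabase(batch);  // hypothetical persistence step
                batch.clear();              // keep at most 500 rows in memory
            }
        }
        if (!batch.isEmpty()) {
            insertIntoDatabase(batch);      // flush the final partial batch
        }
    } finally {
        reader.close();
    }
}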
Cheers !!
You can try:
Increase the memory of the JVM using the -Xmx JVM option
Use a different executor to reduce the number of files processed at a time. A drastic solution is to use a SingleThreadExecutor:
public class FileProcessor implements Runnable {
    private final String name;

    public FileProcessor(String name) {
        this.name = name;
    }

    public void run() {
        // process the file identified by name
    }
}

// ...
ExecutorService executor = Executors.newSingleThreadExecutor();
// ...
public void onNewFile(String fileName) {
    executor.submit(new FileProcessor(fileName));
}
Here's my scenario:
Process A spawns child process B and spins up threads to drain B's outputs.
Process B spawns daemon process C and drains its outputs, too.
Process B finishes; the daemon process still lives.
Process A finds out that process B exited via process.waitFor(). However, it's stuck reading the input streams of process B, because B has started a daemon: the input stream receives EOF only when process C exits.
This only happens on Windows. I'm using ProcessBuilder. Here are the solutions I came up with; I'd like to hear your feedback, as I don't really like any of them:
1. I can use JNA to spawn the daemon process C. This way I can create a process that is 'detached enough', and process A is not stuck on draining the streams from B. It works, but I'm not very keen on that solution because it means some native code (and lots of it, since I'm keen on consuming the inputs). Some inspiration for how to do it via JNA is here: http://yajsw.sourceforge.net (however, it contains way more stuff than mere process starting).
2. Run on JRE 7. JDK 7 brings some new goodies to ProcessBuilder, e.g. the inheritIO() stuff, which also solves my problem. Apparently, when inheritIO() is turned on, I can simply close all streams in the daemon process C (which I do anyway, because it's a daemon) and that solves the problem. However, I need to run on JRE 5+.
3. Close System.out and System.err in process B before spawning the daemon process C. Again, it solves the problem, but I really need those streams to work in process B, as I write useful stuff to them. No good. I hoped I could take advantage of this characteristic by placing some kind of bootstrap process between B and C, but that didn't solve the problem.
4. I don't have this problem on Linux, so could I only run on Linux? No, I can't.
5. Make process A drain the outputs of process B in a non-blocking way. This somewhat works, but it's not convenient. E.g. inputStream.read() is not interruptible. I could use inputStream.available(), but it doesn't distinguish between EOF and zero bytes available. So the solution is only good if process A is never interested in B's output EOF. Also, this solution seems to be more CPU intensive and generally... feels awkward and not really bulletproof.
6. Run process C in a --dry-run mode where it just checks whether it can be started: it tries to start, sends a welcome message, and exits. It's no longer long-running, so it will not block reads. Process B can gain enough confidence that C can be started, and we can use relatively simple JNA code to spawn a detached process without consuming its outputs (it's the consuming of the outputs that makes the JNA-related code messy and heavyweight). The only problem is that we no longer consume process C's outputs, but that can be solved by making C write to a well-known file that process B can consume. This solution is more of a big, ugly workaround, but it is kind of workable for us. Anyway, we are trying solution 1 at the moment.
I would really appreciate any hints!
I just encountered the same problem, and I think I have a workaround. In process A, I have the following code fragment after Process.waitFor(), where outT and errT are the threads that read process B's stdout and stderr, respectively:
try {
outT.join(1000);
if (outT.isAlive()) {
errmsg("stdout reader still alive, interrupting", null);
outT.interrupt();
}
} catch (Exception e) {
errmsg("Exception caught from out stream reader: "+e, e);
}
try {
errT.join(1000);
if (errT.isAlive()) {
errmsg("stderr reader still alive, interrupting", null);
errT.interrupt();
}
} catch (Exception e) {
errmsg("Exception caught from err stream reader: "+e, e);
}
p.destroy();
Not sure if p.destroy() is needed, but I have been trying all kinds of combinations to deal with the problem.
Anyway, in the run() method of the outT/errT threads, I have the following, where the 'pipe' variable is the Writer instance to which I am capturing the sub-process's stdout/stderr, and the 'in' variable is the stdout or stderr stream obtained from the Process:
BufferedReader r = null;
try {
    r = new BufferedReader(new InputStreamReader(in, enc));
    String line;
    while (true) {
        if (Thread.currentThread().isInterrupted()) {
            errmsg("Text stream reader interrupted", null);
            break;
        }
        // poll with ready() instead of blocking on readLine(),
        // so the thread stays interruptible
        if (r.ready()) {
            line = r.readLine();
            if (line == null) {
                break;
            }
            pipe.write(line);
            pipe.write(SystemUtil.EOL);
            if (autoFlush) {
                pipe.flush();
            }
        }
    }
    pipe.flush();
} catch (Throwable t) {
    errmsg("Exception caught: " + t, t);
    try { pipe.flush(); } catch (Exception noop) {}
} finally {
    IOUtil.closeQuietly(in);
    IOUtil.closeQuietly(r);
}
It seems that I never get an EOF indication from any sub-process, even after the sub-process terminates; hence all the chicanery above to prevent stale threads and blocking.
Hi, I have a webapp, and in one method I need to encrypt part of the data from a request, store it on disk, and return a response.
The response is in no way related to the encryption.
However, the encryption is quite time-consuming. How do I set up threads properly for this problem?
I tried something like
Thread thread ...
thread.start();
or
JobDetail job = encryptionScheduler.getJobDetail(jobDetail.getName(), jobDetail.getGroup());
encryptionScheduler.scheduleJob(jobDetail, TriggerUtils.makeImmediateTrigger("encryptionTrigger", 1, 1));
I tried a servlet where I close the outputStream before the encryption.
or: Executors.newFixedThreadPool(1);
But whatever I tried, the client has to wait longer. Btw: why is that so? Can it be faster?
I haven't tried starting a thread after context initialization and somehow waiting for the method that needs encryption.
How can I speed this up?
Thank you
--------------EDIT:
// I use Axis 1.4, where I have a Handler whose invoke method encrypts a value:
try {
    try {
        LogFile logFile = new LogFile(strategy, nodeValue, path, new Date());
        LogQueue.queue.add(logFile);
    } catch (Exception e) {
        log.error(e.getMessage(), e);
    }

    EExecutor.executorService.execute(new Runnable() {
        public void run() {
            try {
                LogFile poll = LogQueue.queue.poll();
                String strategy = poll.getStrategy();
                String value = poll.getNodeValue();
                value = encrypt(strategy, value);
                PrintWriter writer = new PrintWriter(new OutputStreamWriter(
                        new BufferedOutputStream(new FileOutputStream(poll.getPath(), true)), "UTF-8"));
                writer.print(value);
                writer.close();
            } catch (IOException e) {
                log.error(e.getMessage(), e);
            }
        }
    });
} catch (Throwable e) {
    log.error(e.getMessage(), e);
}
// Besides, I have an executor service:
public class EExecutor {
    public static ExecutorService executorService = Executors.newCachedThreadPool();
}
// And what's really interesting: when I move the encryption from this handler into another handler, one that is called last, when I send the response, it's faster. But when I leave it in one of the first handlers, where I receive the request, it's even slower than without using threads/servlets etc.
Threads only help you if parts of your task can be done in parallel. It sounds like you're waiting for the encryption to finish before returning the result. If it's necessary for you to do that (e.g., because the encrypted data is the result), then doing the encryption on a separate thread won't help you here: all it will do is introduce the overhead of creating and switching to a different thread.
Edit: If you're starting a new thread for each encryption you do, then that might be part of your problem. Creating new threads is relatively expensive. A better way is to use an ExecutorService with an unbounded queue. If you don't care about the order in which the encryption step happens (i.e., if it's ok that the encryption which started due to a request at time t finishes later than one which started at time t', and t < t'), then you can let the ExecutorService have more than a single thread. That will give you both greater concurrency and save you the overhead of recreating threads all the time, since an ExecutorService pools and reuses threads.
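A sketch of that suggestion (the pool size of 4 is an assumption; encrypt, strategy, value, and path come from the question's edit, and writeToDisk is a hypothetical helper for the file-writing step):

// created once at application startup, not per request;
// newFixedThreadPool uses an unbounded queue internally
private static final ExecutorService ENCRYPTION_POOL = Executors.newFixedThreadPool(4);

// in the request handler: hand the work off and return the response immediately
ENCRYPTION_POOL.submit(new Runnable() {
    public void run() {
        String encrypted = encrypt(strategy, value);
        writeToDisk(path, encrypted);  // hypothetical helper
    }
});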
The proper way to do something like this is to use a message queue, such as standard J2EE JMS.
With a message queue, you have one software component whose job is to receive messages (such as requests to encrypt some resource, as in your case) and make the request "durable" in a transactional way. Then some independent process polls the message queue for new messages, takes action on them, and transactionally marks them as received.
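For illustration, a minimal JMS producer sketch (assuming a ConnectionFactory and Queue are already configured, e.g. looked up via JNDI):

import javax.jms.*;

void enqueueEncryptionRequest(ConnectionFactory factory, Queue queue, String payload)
        throws JMSException {
    Connection connection = factory.createConnection();
    try {
        // transacted session: the message becomes durable only on commit
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage(payload));
        session.commit();
    } finally {
        connection.close();
    }
}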