I am working on a project that will have several bundles. For example, suppose I have 5 bundles, and each of those bundles has a method named process.
Currently I am calling the process method of all 5 bundles in parallel using the multithreaded code below.
But every time I run this multithreaded code, it gives me an OutOfMemoryError. If I run it sequentially, calling the process methods one by one, I don't get any OutOfMemoryError.
Below is the code:
public void callBundles(final Map<String, Object> eventData) {
    // Three threads: one thread for the database writer, two threads for the plugin processors
    final ExecutorService executor = Executors.newFixedThreadPool(3);
    final Map<String, String> outputs = (Map<String, String>) eventData.get(Constants.EVENT_HOLDER);
    for (final BundleRegistration.BundlesHolderEntry entry : BundleRegistration.getInstance()) {
        executor.submit(new Runnable() {
            public void run() {
                try {
                    final Map<String, String> response = entry.getPlugin().process(outputs);
                    // process the response and update the database
                    System.out.println(response);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
    }
}
Below is the exception I get whenever I run the multithreaded code above:
JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
JVMDUMP032I JVM requested Heap dump using 'S:\GitViews\Stream\goldseye\heapdump.20130904.175256.12608.0001.phd' in response to an event
JVMDUMP010I Heap dump written to S:\GitViews\Stream\goldseye\heapdump.20130904.175256.12608.0001.phd
JVMDUMP032I JVM requested Java dump using 'S:\GitViews\Stream\goldseye\javacore.20130904.175256.12608.0002.txt' in response to an event
UTE430: can't allocate buffer
UTE437: Unable to load formatStrings for j9mm
JVMDUMP010I Java dump written to S:\GitViews\Stream\goldseye\javacore.20130904.175256.12608.0002.txt
JVMDUMP032I JVM requested Snap dump using 'S:\GitViews\Stream\goldseye\Snap.20130904.175256.12608.0003.trc' in response to an event
UTE001: Error starting trace thread for "Snap Dump Thread": -1
JVMDUMP010I Snap dump written to S:\GitViews\Stream\goldseye\Snap.20130904.175256.12608.0003.trc
JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
ERROR: Bundle BullseyeModellingFramework [1] EventDispatcher: Error during dispatch. (java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830, errno 12)
java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830, errno 12
JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
JVMDUMP032I JVM requested Heap dump using 'S:\GitViews\Stream\goldseye\heapdump.20130904.175302.12608.0004.phd' in response to an event
JVMDUMP010I Heap dump written to S:\GitViews\Stream\goldseye\heapdump.20130904.175302.12608.0004.phd
JVMDUMP032I JVM requested Java dump using 'S:\GitViews\Stream\goldseye\javacore.20130904.175302.12608.0005.txt' in response to an event
I am using JDK 1.6.0_26 as the installed JRE in Eclipse.
Each call of callBundles() creates a new thread pool with its own executor. Each thread has its own stack space! So the first call after JVM start creates three threads with a total of 3 MB of stack memory (1024 KB is the default stack size on a 64-bit JVM), the next call another 3 MB, and so on. 1000 calls/s would need 3 GB/s!
The second problem is that you never call shutdown() on the created executor services, so the threads live on until the garbage collector removes the executor (finalize() also calls shutdown()). But the GC will never reclaim stack memory, so if stack memory is the problem and the heap is not full, the GC will never help!
You need to use one ExecutorService, say with 10 to 30 threads, or a custom ThreadPoolExecutor with 3-30 cached threads and a LinkedBlockingQueue. Call shutdown() on the service before your application stops, if possible.
Check the physical RAM, load, and response time of your application to tune the heap size, the maximum thread count, and the keep-alive time of the threads in the pool. Also look at other limiting parts of the code (size of the database connection pool, ...) and the number of CPUs/cores of your server. A starting point for a thread pool size may be the number of CPUs/cores plus 1; with a lot of I/O wait, more threads become useful.
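A minimal sketch of the shared-executor idea described above. The class name SharedExecutorHolder and the pool sizes are my own illustration, not part of the question's code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// One application-wide pool instead of a new pool per callBundles() call.
// Note: with an unbounded LinkedBlockingQueue the pool never grows beyond
// its core size, so core and max are set to the same value here.
public class SharedExecutorHolder {
    private static final ExecutorService EXECUTOR =
            new ThreadPoolExecutor(10, 10, 60L, TimeUnit.SECONDS,
                    new LinkedBlockingQueue<Runnable>());

    public static ExecutorService get() {
        return EXECUTOR;
    }

    // Call once before the application stops.
    public static void shutdown() throws InterruptedException {
        EXECUTOR.shutdown();
        EXECUTOR.awaitTermination(30, TimeUnit.SECONDS);
    }
}
```

callBundles() would then submit to SharedExecutorHolder.get() instead of creating its own executor, so the thread count stays bounded no matter how often it is called.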
The main problem is that you aren't really using the thread pooling properly. If all of your "process" threads are of equal priority, there's no good reason not to make one large thread pool and submit all of your Runnable tasks to that. Note - "large" in this case is determined via experimentation and profiling: adjust it until your performance in terms of speed and memory is what you expect.
Here is an example of what I'm describing:
// Using 10000 purely as a concrete example - you should define the correct number
public static final int LARGE_NUMBER_OF_THREADS = 10000;

// Elsewhere in code, you define a static thread pool
public static final ExecutorService EXECUTOR =
        Executors.newFixedThreadPool(LARGE_NUMBER_OF_THREADS);

public void callBundles(final Map<String, Object> eventData) {
    final Map<String, String> outputs =
            (Map<String, String>) eventData.get(Constants.EVENT_HOLDER);
    for (final BundleRegistration.BundlesHolderEntry entry : BundleRegistration.getInstance()) {
        // "Three threads: one thread for the database writer,
        //  two threads for the plugin processors"
        // so you'll need to repeat this future = EXECUTOR.submit() pattern two more times
        Future<?> processFuture = EXECUTOR.submit(new Runnable() {
            public void run() {
                final Map<String, String> response =
                        entry.getPlugin().process(outputs);
                // process the response and update the database
                System.out.println(response);
            }
        });
        // Note: I'm catching the exception out here instead of inside the task.
        // This also allows me to force order on the three component threads.
        try {
            processFuture.get();
        } catch (Exception e) {
            System.err.println("Should really do something more useful");
            e.printStackTrace();
        }
        // If you wanted to ensure that the three component tasks run in order,
        // you could call future = EXECUTOR.submit(); future.get();
        // for each one of them
    }
}
For completeness, you could also use a cached thread pool to avoid repeated creation of short-lived Threads. However, if you're already worried about memory consumption, a fixed pool might be better.
When you get to Java 7, you might find that Fork-Join is a better pattern than a series of Futures. Whatever fits your needs best, though.
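For reference, here is a minimal fork/join sketch (Java 7+). The SumTask example and its threshold are purely illustrative, not taken from the question's code; the same divide-and-submit pattern would apply to independent bundle tasks:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Illustrative example: summing an array in parallel with fork/join.
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1000; // arbitrary split threshold
    private final long[] data;
    private final int from, to;

    public SumTask(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            // Small enough: compute directly
            long sum = 0;
            for (int i = from; i < to; i++) {
                sum += data[i];
            }
            return sum;
        }
        // Otherwise split the range in half and process both halves in parallel
        int mid = (from + to) >>> 1;
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                          // run left half asynchronously
        return right.compute() + left.join(); // compute right half, then join
    }
}
```

Usage would be something like `long total = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));`.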
Related
I have a simple application which transfers data from one machine to another. As the application runs, the heap size increases slowly. So I dumped the heap and analysed it, and I found that zmq.poll.Poller objects consumed the most memory. They belong to the thread 'iothread-2':
The heap screenshot is here
My application demo is like this:
public static void main(String[] args) throws Exception {
    ZMQ.Context context = ZMQ.context(1);
    ZMQ.Socket socket = context.socket(ZMQ.DEALER);
    socket.connect("tcp://localhost:5550");
    ZMQ.Poller poller = context.poller(1);
    poller.register(socket, ZMQ.Poller.POLLIN);
    while (!Thread.currentThread().isInterrupted()) {
        poller.poll(5000);
        if (poller.pollin(0)) {
            socket.send("message"); // send message to another machine
            String msg = socket.recvStr(); // get the reply
            // do some stuff
            Thread.sleep(1000);
        }
    }
}
When I inspected the Poller object in the heap, I found there were 4 million HashMap$Node entries, and each node's value was an ArrayList of 10 null objects.
The heap was dumped by command:
jmap -dump:live,format=b,file=dump.hprof [pid]
The JDK is 1.8.0_131, the OS is CentOS 7.2.1511, and jeromq is 0.4.2.
Did I use poller wrong? Thanks very much for anyone who helps!
The issue seems to be related to missing resource management:
Native API documentation is strict on this:
The zmq_msg_close() function shall inform the ØMQ infrastructure that any resources associated with the message object referenced by msg are no longer required and may be released. Actual release of resources associated with the message object shall be postponed by ØMQ until all users of the message or underlying data buffer have indicated it is no longer required.
Applications should ensure that zmq_msg_close() is called once a message is no longer required, otherwise memory leaks may occur. Note that this is NOT necessary after a successful zmq_msg_send().
Try including proper explicit message disposal and you ought to see improvement (though this depends on the jeromq version, garbage collection dynamics, et al.).
I have a parallel-running Java application that consumes huge log files and applies some custom logic. Each log row is processed in a separate thread using a fire-and-forget approach.
However, sometimes the Java process just stops processing. What I mean by that is that the Java application doesn't get assigned CPU time to execute, even though it still hasn't finished consuming the file.
Running top, I get a quite low load average considering the 16 cores that I have.
Running vmstat, I can see that none of the user processes are running, nor the kernel processes; rather, it's 99% idle.
The output of iostat shows me that there are no pending IO tasks running either.
I also haven't spotted any deadlocks or starvation in a thread dump. Most of the threads are WAITING or RUNNABLE.
What am I missing? I am lost, and I don't really know where to investigate further.
=UPDATE=
This is the part that initiates the parallel execution; after this there are thousands of lines of code applying modifications, incl. Elasticsearch, Akka, etc.
So I don't really know what the relevant code would be that might cause any trouble.
BlockingQueue<Runnable> workQueue = new ArrayBlockingQueue<Runnable>(100);
ExecutorService executorService = new MetricsThreadPoolExecutor(numThreadCore, numThreadCore, idleTime,
        TimeUnit.SECONDS, workQueue, new ThreadPoolExecutor.AbortPolicy(), "process.concurrent", metrics);

FileInputStream fileStream = new FileInputStream(file);
BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(new GZIPInputStream(fileStream)));

String strRow = bufferedReader.readLine();
while (strRow != null) {
    final Row row = new Row(strRow);
    try {
        executorService.submit(new Runnable() {
            @Override
            public void run() {
                if (!StringUtil.isBlank(row.getLine())) {
                    processor.process(row);
                }
            }
        });
        strRow = bufferedReader.readLine();
    } catch (RejectedExecutionException ree) {
        try {
            logger.warn(ree.getMessage());
            Thread.sleep(50L);
        } catch (InterruptedException ie) {
            logger.warn("Wait interrupted", ie);
        }
    }
}
However, sometimes the Java process just stops processing. What I mean by that is that the Java application doesn't get assigned CPU time to execute, even though it still hasn't finished consuming the file.
Don't think about this at the CPU/vmstat/iostat level. That just confuses the debugging of the problem. You should think about this in terms of threads only and trust the OS to schedule them appropriately.
I see no reason why the main thread shouldn't finish after all of the rows have been submitted for processing. As an aside, you may instead want to just block the producer instead of regenerating the rows in your spin/sleep loop like you are doing. See: RejectedExecutionException free threads but full queue
If your application is not completing, then either one of the worker threads is hung while processing a row, or the MetricsThreadPoolExecutor has not been shut down. I suspect the latter. The producer thread, after it exits the while (strRow != null) loop, should call executorService.shutdown(). Otherwise the threads will be waiting for more rows to be added.
You could do a thread dump on your application to see if it is stuck in a worker. You could also add logging when the producer thread finishes, which should let you know if it completed its work. Both might help figure out where the problem lies.
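A sketch of the shutdown step suggested above, to be called by the producer once the read loop exits. The helper name and timeout value are illustrative, not part of the question's code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public class ProducerShutdown {
    // Called by the producer thread once the while (strRow != null) loop exits.
    // 'executorService' is the pool the rows were submitted to.
    public static void drainAndStop(ExecutorService executorService) throws InterruptedException {
        executorService.shutdown(); // accept no new tasks; queued tasks still run
        if (!executorService.awaitTermination(10, TimeUnit.MINUTES)) {
            executorService.shutdownNow(); // give up and interrupt the workers
        }
    }
}
```

Without this step, the pool's idle worker threads keep waiting for more tasks and the JVM never exits on its own.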
I have file upload functionality in which a user can upload multiple files at the same time. For better throughput I used thread processing like this:
Thread.start{
// do file processing 'cause it is a long running process
}
Now the problem is that for each file upload the system will create a new thread, which will lead to system thrashing and other issues. So I am looking for a solution where I can create a queue to store all the received files, create a minimum number (say 5) of threads at a time to process them, then create the next set of threads and process those.
So far I am looking into GPars, Java threads, and queues, but I have no idea which is the most efficient way or what a good existing solution would be.
You are looking for a thread pool or - in Java terms - for an Executor:
Executor executor = anExecutor();
executor.execute(aRunnable());
The method anExecutor should return a new Executor instance:
Executor anExecutor() {
    return Executors.newFixedThreadPool(42); // just an example ...
}
I am using the Java ExecutorService to create a single thread.
Code:
ExecutorService executor = Executors.newSingleThreadExecutor();
try {
    executor.submit(new Runnable() {
        @Override
        public void run() {
            Iterator<FileObject> itr = mysortedList.iterator();
            while (itr.hasNext()) {
                myWebFunction(itr.next());
            }
        }
    }).get(Timeout * mysortedList.size() - 10, TimeUnit.SECONDS);
} catch (Exception ex) {
} finally {
    executor.shutdownNow();
}
Details: myWebFunction processes files of different sizes and content. Processing involves extracting the entire content and applying further actions to it.
The program runs on 64-bit CentOS.
Problem: When myWebFunction gets a file larger than some threshold, say 10 MB, the executor service is unable to create a native thread. I tried various -Xmx and -Xms settings, but the executor service still throws the same error.
My guess is you are calling this many times and not waiting for the threads which have timed out, leaving lots of threads lying around. When you run out of stack space, or you reach about 32K threads, you cannot create any more.
I suggest using a different approach which doesn't use so many threads or kills them off when you know you don't need them any more. E.g. have the while loop check for interrupts and call Future.cancel(true) to interrupt it.
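One way to sketch the "check for interrupts and call Future.cancel(true)" suggestion. The helper class and method names are illustrative:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class CancellableWork {
    // Runs the work with a timeout; on timeout the worker is interrupted
    // via Future.cancel(true) instead of being left running forever.
    // Returns true if the work finished in time, false if it was cancelled.
    public static boolean runWithTimeout(ExecutorService executor,
                                         Runnable work,
                                         long timeout, TimeUnit unit)
            throws InterruptedException, ExecutionException {
        Future<?> future = executor.submit(work);
        try {
            future.get(timeout, unit);
            return true;
        } catch (TimeoutException e) {
            future.cancel(true); // delivers an interrupt to the worker thread
            return false;
        }
    }
}
```

For the cancel to take effect, the worker's loop has to cooperate, e.g. `while (itr.hasNext() && !Thread.currentThread().isInterrupted()) { ... }`.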
We use the JDK 7 WatchService to watch a directory which can contain XML or CSV files. These files are put in a thread pool and later processed and pushed into the database. This application runs forever, watching the directory, and keeps processing files as they become available. XML files are small and don't take much time, but each CSV file can contain more than 80 thousand records, so processing takes time to put them in the database. The Java application gives us an OutOfMemoryError when there are 15 CSV files being processed from the thread pool. Is there any way that, when CSV files come into the thread pool, they can be processed serially, i.e. only one at a time?
The Java application gives us an OutOfMemoryError when there are 15 CSV files being processed from the thread pool. Is there any way that, when CSV files come into the thread pool, they can be processed serially, i.e. only one at a time?
If I'm understanding correctly, you want to stop adding to the pool if you are over some threshold. There is an easy way to do that, which is by using a blocking queue and a rejected execution handler.
See the following answer:
Process Large File for HTTP Calls in Java
To summarize it, you do something like the following:
// only allow 100 jobs to queue
final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(100);
ThreadPoolExecutor threadPool =
        new ThreadPoolExecutor(nThreads, nThreads, 0L, TimeUnit.MILLISECONDS, queue);
// we need our RejectedExecutionHandler to block if the queue is full
threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            // this will block the producer until there's room in the queue
            executor.getQueue().put(r);
        } catch (InterruptedException e) {
            throw new RejectedExecutionException(
                    "Unexpected InterruptedException", e);
        }
    }
});
This means that adding to a full queue will block the producer, so memory should not be exhausted.
I would take a different route to solve your problem. I guess you have everything right except that you read too much data into memory.
I'm not sure how you are reading the CSV files, but I would suggest using a line reader: read e.g. 500 lines, process them, then read the next 500 lines. All large files should be handled this way, because no matter how much you increase your memory arguments, you will hit an out-of-memory error as soon as you have a bigger file to process. So use an implementation that can handle records in batches. This requires some extra coding effort, but it will never fail, no matter how big a file you have to process.
Cheers!!
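A sketch of the batched line-reading idea described above. The 500-line batch size and the BatchHandler callback are illustrative choices, not part of the original code:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class BatchReader {
    public interface BatchHandler {
        void processBatch(List<String> lines);
    }

    // Reads the input in fixed-size batches so that at most batchSize
    // lines are ever held in memory at once.
    public static void readInBatches(BufferedReader reader, int batchSize,
                                     BatchHandler handler) throws IOException {
        List<String> batch = new ArrayList<String>(batchSize);
        String line;
        while ((line = reader.readLine()) != null) {
            batch.add(line);
            if (batch.size() == batchSize) {
                handler.processBatch(batch); // e.g. insert these rows into the database
                batch.clear();
            }
        }
        if (!batch.isEmpty()) {
            handler.processBatch(batch); // flush the last partial batch
        }
    }
}
```

With this shape, an 80-thousand-record CSV file costs only one batch of memory at a time, regardless of the file's total size.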
You can try to:
Increase the JVM's memory using the -Xmx option
Use a different executor to reduce the number of files processed at a time. A drastic solution is to use a SingleThreadExecutor:
public class FileProcessor implements Runnable {
    private final String name;

    public FileProcessor(String name) {
        this.name = name;
    }

    public void run() {
        // process the file identified by 'name'
    }
}

// ...
ExecutorService executor = Executors.newSingleThreadExecutor();
// ...
public void onNewFile(String fileName) {
    executor.submit(new FileProcessor(fileName));
}