Hi guys, can you help me with an error I've encountered in my Java program? I have a Callable implementation used inside a loop: basically I need to send a request to another web service and collect the response ID. Based on my testing, the asynchronous processing works with the implementation below. But one time when I ran the program again I got this error: "Error 500: java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830, errno 11". I just restarted the server and then it was fine.
So I want to know if there is something wrong with my implementation. Is there any line of code I need to add or remove to prevent that kind of error from happening again? I have only experienced it once. Hope you can help me.
// pccsSurvList is a list of details coming from the database.
ExecutorService executorService = null;
List<Callable<SyncFlagEntity>> lst = new ArrayList<Callable<SyncFlagEntity>>();
if (pccsSurvList != null && pccsSurvList.size() > 0) {
    executorService = Executors.newFixedThreadPool(pccsSurvList.size());
    for (PCCSSurveyInfoEntity user : pccsSurvList) {
        NotifyEmailTransactionImpl emailTransact = new NotifyEmailTransactionImpl(user);
        lst.add(emailTransact);
    }
}
// returns a list of Futures holding their status and results when all complete
List<Future<SyncFlagEntity>> tasks = new ArrayList<Future<SyncFlagEntity>>();
tasks = executorService.invokeAll(lst);
executorService.shutdown();
java.lang.OutOfMemoryError: Failed to create a thread
This OutOfMemoryError means the JVM could not allocate a new native thread: either the operating system's thread limit was reached or there was not enough native memory left for another thread stack.
Most likely the problem lies on this line ...
executorService = Executors.newFixedThreadPool(pccsSurvList.size());
You are getting a lot of rows, and there isn't enough memory for the JVM to create a thread for each one. Try logging the number of rows you get and see what's happening.
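As a minimal sketch (reusing the names from the question; the cap of 10 is just an example value to tune), you could cap the pool size instead of sizing it by the number of rows:
// assumes pccsSurvList is non-empty, as guarded in the original code
int poolSize = Math.min(10, pccsSurvList.size()); // never more than 10 threads
ExecutorService executorService = Executors.newFixedThreadPool(poolSize);
// ... build lst exactly as before ...
List<Future<SyncFlagEntity>> tasks = executorService.invokeAll(lst);
executorService.shutdown();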
I think you are running too many processes at once. You should try to set a limit and use a thread pool.
Maybe you can do something like:
ThreadPoolExecutor executor = (ThreadPoolExecutor) Executors.newFixedThreadPool(10); // a fixed cap, not pccsSurvList.size()
Then inside for loop you can do:
executor.submit(emailTransact); // emailTransact is already a Callable, so submit it directly
There is more here: https://www.baeldung.com/thread-pool-java-and-guava
Also take a look at reactive programming. It may be of more help in your case: https://www.vogella.com/tutorials/RxJava/article.html
Related
An idea I am trying to implement is the following.
I have 1000 URLs to download data from, which I then use for post-processing (say, calculating some statistics).
I don't really need all of the downloads to finish successfully, but as many as possible.
I assume that some of the locations might be unavailable, either responding with nothing useful (e.g., HTTP 503) or taking more than TO=10 seconds to process a request.
I have T=5 threads to process the URLs in parallel, giving each the same timeout TO.
As soon as one completes (which I expect to happen well before TO is exceeded), I aggregate some statistics (a very fast operation) and start the next download (if any).
The solution I have come up with so far is
ExecutorService executorService = Executors.newFixedThreadPool(T);
ExecutorCompletionService<MyResult> completionService = new ExecutorCompletionService<>(executorService);
urls.forEach(url -> {
    Callable<MyResult> callable = () -> new MyResult(url);
    completionService.submit(callable);
});
for (int i = 0; i < urls.size(); i++) {
    Future<MyResult> resultFuture = completionService.poll(TO, TimeUnit.SECONDS);
    if (resultFuture == null)
        continue;
    MyResult myResult = resultFuture.get();
    myAggregate(myResult.getRate());
}
It looks somewhat like what I am trying to achieve, but, for instance, it neither gives every download the same timeout nor cancels the Futures properly. So, what is the correct solution?
Try using the invokeAll method: simply put your Callables in a List and then call invokeAll() on your ExecutorService, giving it a timeout as the second and third arguments.
executorService.invokeAll(callableList, 20, TimeUnit.SECONDS);
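A rough sketch of what that could look like with the names from the question (T, TO, MyResult, myAggregate), assuming the surrounding method declares InterruptedException as in the question's snippet. Note that the timeout applies to the whole batch, and any task still running when it expires comes back as a cancelled Future, so those are skipped here:
ExecutorService executorService = Executors.newFixedThreadPool(T);
List<Callable<MyResult>> callables = new ArrayList<>();
for (String url : urls) {
    callables.add(() -> new MyResult(url));
}
// blocks until every task has finished or TO seconds have passed, whichever comes first
List<Future<MyResult>> futures = executorService.invokeAll(callables, TO, TimeUnit.SECONDS);
for (Future<MyResult> future : futures) {
    if (future.isCancelled()) {
        continue; // timed out or never started; nothing to aggregate
    }
    try {
        myAggregate(future.get().getRate());
    } catch (ExecutionException | InterruptedException e) {
        // the download itself failed (e.g. HTTP 503); skip it and move on
    }
}
executorService.shutdown();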
I have a file upload feature in which users can upload multiple files at the same time. To get reasonable performance I used thread processing like this
Thread.start{
// do file processing 'cause it is a long running process
}
Now the problem is that the system creates a new Thread for each uploaded file, which leads to thrashing and other issues. So I am looking for a solution where I can put the received files in a queue, create a small fixed number (say 5) of Threads, process the queued files with them, and repeat.
So far I have looked into GPars, plain Java Threads, and Queues, but I have no idea which approach is efficient or whether a good existing solution already covers this.
You are looking for a thread pool or - in Java terms - for an Executor:
Executor executor = anExecutor();
executor.execute(aRunnable());
The method anExecutor should return a new Executor instance:
Executor anExecutor() {
    return Executors.newFixedThreadPool(42); // just an example ...
}
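A rough Java sketch of how that could look for the upload case (onFileUploaded and processFile are invented names for illustration): the executor's internal queue holds the waiting files, and at most five of them are processed at the same time.
ExecutorService fileProcessingPool = Executors.newFixedThreadPool(5);

// called once per uploaded file
void onFileUploaded(final File uploadedFile) {
    fileProcessingPool.submit(new Runnable() {
        public void run() {
            // the long-running work previously done in Thread.start { ... }
            processFile(uploadedFile);
        }
    });
}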
I encountered a problem with using a PriorityBlockingQueue in a custom thread pool, where the poll method causes a NullPointerException. When using this setup
int POOL_SIZE = 5;
int OVERHEAD_POOL_SIZE = 10;
long LIFE_TIME = 5000;
TimeUnit LIFE_TIME_UNIT = TimeUnit.MILLISECONDS;
with
new ThreadPoolExecutor(POOL_SIZE, OVERHEAD_POOL_SIZE,
LIFE_TIME, LIFE_TIME_UNIT, new PriorityBlockingQueue<Runnable>());
instead of Executors.newCachedThreadPool(), I sometimes encounter the following stack trace:
Exception in thread "pool-14-thread-3" java.lang.NullPointerException
at java.util.PriorityQueue.siftDownComparable(PriorityQueue.java:624)
at java.util.PriorityQueue.siftDown(PriorityQueue.java:614)
at java.util.PriorityQueue.poll(PriorityQueue.java:523)
at java.util.concurrent.PriorityBlockingQueue.take(PriorityBlockingQueue.java:225)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:957)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:917)
at java.lang.Thread.run(Thread.java:662)
Basically, I want to order Runnables in the queue by a priority:
class MyRunnableFuture implements
RunnableFuture<Boolean>, Comparable<MyRunnableFuture>
The weird thing is: the exception is not always thrown, but if I set a breakpoint inside PriorityQueue.siftDownComparable, the chance of the exception occurring is higher.
Any ideas? I found other people having the same problem, but nobody really knew a solution. Do I have to synchronize the queue manually when it is used in a ThreadPool? I am using this queue precisely because I do not want to synchronize it myself; I understood the documentation to say that the queue is already synchronized internally. Thanks for any answers!
Updated after reading the comments below: All the trouble was caused by using submit instead of execute. With submit, every task is wrapped in a RunnableFuture internally, which of course does not implement the Comparable interface, so the comparison failed. I got distracted by the NullPointerException, which is thrown when a new worker thread tries to pull a job that could not be inserted in the first place. (The clue: the PriorityBlockingQueue does not insert the new job because a ClassCastException is thrown. At the same time, the ThreadPoolExecutor thinks a new job is available and wakes the next worker. The worker pulls a nonexistent job and throws a NullPointerException. The ClassCastException got swallowed and logged deep down in my application; I did not trace it carefully enough because I was busy hunting for the cause of the NullPointerException.)
For the sake of completeness, the submit method of the ThreadPoolExecutor actually does the following:
public Future<?> submit(Runnable task) {
    if (task == null) throw new NullPointerException();
    RunnableFuture<Void> ftask = newTaskFor(task, null);
    execute(ftask);
    return ftask;
}
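For anyone who wants to keep using submit() with a PriorityBlockingQueue, here is a minimal sketch (class and field names are made up for illustration, not taken from my code): override newTaskFor() so the wrapper itself is Comparable and delegates the comparison to the wrapped task.
import java.util.concurrent.*;

class PriorityAwareExecutor extends ThreadPoolExecutor {

    PriorityAwareExecutor(int core, int max, long keepAlive, TimeUnit unit) {
        super(core, max, keepAlive, unit, new PriorityBlockingQueue<Runnable>());
    }

    @Override
    protected <T> RunnableFuture<T> newTaskFor(Callable<T> callable) {
        return new ComparableFutureTask<T>(callable);
    }

    @Override
    protected <T> RunnableFuture<T> newTaskFor(Runnable runnable, T value) {
        return new ComparableFutureTask<T>(runnable, value);
    }

    // FutureTask that delegates comparison to the wrapped task,
    // which is assumed to implement Comparable (as in the question)
    static class ComparableFutureTask<T> extends FutureTask<T>
            implements Comparable<ComparableFutureTask<T>> {

        private final Object wrapped;

        ComparableFutureTask(Callable<T> callable) {
            super(callable);
            this.wrapped = callable;
        }

        ComparableFutureTask(Runnable runnable, T value) {
            super(runnable, value);
            this.wrapped = runnable;
        }

        @Override
        @SuppressWarnings("unchecked")
        public int compareTo(ComparableFutureTask<T> other) {
            return ((Comparable<Object>) wrapped).compareTo(other.wrapped);
        }
    }
}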
Thanks to you guys for help (in the comments!).
PS: Mean bug.
I am working on a project that will have different bundles. Let's take an example: suppose I have 5 bundles, and each of those bundles has a method named process.
Currently I am calling the process method of all those 5 bundles in parallel using the multithreaded code below.
But somehow, every time I run the multithreaded code below, it gives me an out-of-memory exception. If I run it sequentially instead, calling the process methods one by one, it doesn't give me any OutOfMemoryError.
Below is the code:
public void callBundles(final Map<String, Object> eventData) {
    // Three threads: one thread for the database writer, two threads for the plugin processors
    final ExecutorService executor = Executors.newFixedThreadPool(3);
    final Map<String, String> outputs = (Map<String, String>) eventData.get(Constants.EVENT_HOLDER);
    for (final BundleRegistration.BundlesHolderEntry entry : BundleRegistration.getInstance()) {
        executor.submit(new Runnable() {
            public void run() {
                try {
                    final Map<String, String> response = entry.getPlugin().process(outputs);
                    // process the response and update database.
                    System.out.println(response);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
    }
}
Below is the exception I get whenever I run the above multithreaded code.
JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
JVMDUMP032I JVM requested Heap dump using 'S:\GitViews\Stream\goldseye\heapdump.20130904.175256.12608.0001.phd' in response to an event
JVMDUMP010I Heap dump written to S:\GitViews\Stream\goldseye\heapdump.20130904.175256.12608.0001.phd
JVMDUMP032I JVM requested Java dump using 'S:\GitViews\Stream\goldseye\javacore.20130904.175256.12608.0002.txt' in response to an event
UTE430: can't allocate buffer
UTE437: Unable to load formatStrings for j9mm
JVMDUMP010I Java dump written to S:\GitViews\Stream\goldseye\javacore.20130904.175256.12608.0002.txt
JVMDUMP032I JVM requested Snap dump using 'S:\GitViews\Stream\goldseye\Snap.20130904.175256.12608.0003.trc' in response to an event
UTE001: Error starting trace thread for "Snap Dump Thread": -1
JVMDUMP010I Snap dump written to S:\GitViews\Stream\goldseye\Snap.20130904.175256.12608.0003.trc
JVMDUMP013I Processed dump event "systhrow", detail "java/lang/OutOfMemoryError".
ERROR: Bundle BullseyeModellingFramework [1] EventDispatcher: Error during dispatch. (java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830, errno 12)
java.lang.OutOfMemoryError: Failed to create a thread: retVal -1073741830, errno 12
JVMDUMP006I Processing dump event "systhrow", detail "java/lang/OutOfMemoryError" - please wait.
JVMDUMP032I JVM requested Heap dump using 'S:\GitViews\Stream\goldseye\heapdump.20130904.175302.12608.0004.phd' in response to an event
JVMDUMP010I Heap dump written to S:\GitViews\Stream\goldseye\heapdump.20130904.175302.12608.0004.phd
JVMDUMP032I JVM requested Java dump using 'S:\GitViews\Stream\goldseye\javacore.20130904.175302.12608.0005.txt' in response to an event
I am using JDK 1.6.0_26 as the installed JRE in my Eclipse.
Each call of callBundles() creates a new thread pool by creating its own executor. Each thread has its own stack space! So after the JVM starts, the first call will create three threads with a total of 3 MB of stack memory (1024 KB is the default thread stack size on a 64-bit JVM), the next call another 3 MB, and so on. 1000 calls per second would need 3 GB per second!
The second problem is that you never call shutdown() on the created executor services, so the threads live on until the garbage collector finalizes the executor (finalize() also calls shutdown()). Until then the pool threads keep their stacks, so if stack memory is the problem and the heap is not full, the GC will never help!
You need to use one ExecutorService, let's say with 10 to 30 threads, or a custom ThreadPoolExecutor with 3-30 cached threads and a LinkedBlockingQueue. Call shutdown() on the service before your application stops, if possible.
Check the physical RAM, the load, and the response time of your application to tune the heap size, the maximum number of threads, and the keep-alive time of the threads in the pool. Also look at other limiting parts of the code (size of the database connection pool, ...) and the number of CPUs/cores of your server. A starting point for a thread pool size may be the number of CPUs/cores plus 1; with a lot of I/O wait, more threads become useful.
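As a minimal sketch of that custom ThreadPoolExecutor (the numbers and the bounded queue size are illustrative, not tuned values): note that with an unbounded LinkedBlockingQueue the pool never grows past its core size, so a bounded queue is used here to let it scale from 3 up to 30 threads under load.
ExecutorService shared = new ThreadPoolExecutor(
        3,                                           // core threads kept alive
        30,                                          // upper bound on threads
        60, TimeUnit.SECONDS,                        // idle non-core threads die after 60 s
        new LinkedBlockingQueue<Runnable>(100),      // bounded work queue
        new ThreadPoolExecutor.CallerRunsPolicy());  // back-pressure instead of rejection

// submit work to this one shared instance from every callBundles() call,
// and shut it down exactly once when the application stops:
shared.shutdown();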
The main problem is that you aren't really using the thread pooling properly. If all of your "process" threads are of equal priority, there's no good reason not to make one large thread pool and submit all of your Runnable tasks to that. Note - "large" in this case is determined via experimentation and profiling: adjust it until your performance in terms of speed and memory is what you expect.
Here is an example of what I'm describing:
// Using 10000 purely as a concrete example - you should define the correct number
public static final int LARGE_NUMBER_OF_THREADS = 10000;

// Elsewhere in code, you defined a static thread pool
public static final ExecutorService EXECUTOR =
        Executors.newFixedThreadPool(LARGE_NUMBER_OF_THREADS);

public void callBundles(final Map<String, Object> eventData) {
    final Map<String, String> outputs =
            (Map<String, String>) eventData.get(Constants.EVENT_HOLDER);
    for (final BundleRegistration.BundlesHolderEntry entry : BundleRegistration.getInstance()) {
        // "Three threads: one thread for the database writer,
        // two threads for the plugin processors"
        // so you'll need to repeat this future = EXECUTOR.submit() pattern two more times
        Future<?> processFuture = EXECUTOR.submit(new Runnable() {
            public void run() {
                final Map<String, String> response =
                        entry.getPlugin().process(outputs);
                // process the response and update database.
                System.out.println(response);
            }
        });
        // Note, I'm catching the exception out here instead of inside the task
        // This also allows me to force order on the three component threads
        try {
            processFuture.get();
        } catch (Exception e) {
            System.err.println("Should really do something more useful");
            e.printStackTrace();
        }
        // If you wanted to ensure that the three component tasks run in order,
        // you could future = executor.submit(); future.get();
        // for each one of them
    }
}
For completeness, you could also use a cached thread pool to avoid repeated creation of short-lived Threads. However, if you're already worried about memory consumption, a fixed pool might be better.
When you get to Java 7, you might find that Fork-Join is a better pattern than a series of Futures. Whatever fits your needs best, though.
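As a rough sketch of that Fork-Join alternative (Java 7+; the class name and the list of entries are invented for illustration, while process() is the method from the question):
import java.util.List;
import java.util.Map;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

class ProcessBundlesAction extends RecursiveAction {
    private final List<BundleRegistration.BundlesHolderEntry> entries;
    private final Map<String, String> outputs;

    ProcessBundlesAction(List<BundleRegistration.BundlesHolderEntry> entries,
                         Map<String, String> outputs) {
        this.entries = entries;
        this.outputs = outputs;
    }

    @Override
    protected void compute() {
        if (entries.size() <= 1) {
            // small enough: do the work directly
            if (!entries.isEmpty()) {
                System.out.println(entries.get(0).getPlugin().process(outputs));
            }
        } else {
            // split the work in half and let the pool schedule both halves
            int mid = entries.size() / 2;
            invokeAll(new ProcessBundlesAction(entries.subList(0, mid), outputs),
                      new ProcessBundlesAction(entries.subList(mid, entries.size()), outputs));
        }
    }
}

// usage: new ForkJoinPool().invoke(new ProcessBundlesAction(allEntries, outputs));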
Edit
This question has gone through a few iterations by now, so feel free to look through the revisions to see some background information on the history and things tried.
I'm using a CompletionService together with an ExecutorService and a Callable to concurrently call a number of functions on a few different web services through CXF-generated code. These services all contribute different information to a single set of information I'm using for my project. The services, however, can fail to respond for a prolonged period of time without throwing an exception, prolonging the wait for the combined set of information.
To counter this I'm running all the service calls concurrently, and after a few minutes I would like to terminate any of the calls that have not yet finished, and preferably log which ones weren't done yet, either from within the Callable or by throwing a detailed Exception.
Here's some highly simplified code to illustrate what I'm doing already:
private Callable<List<Feature>> getXXXFeatures(final WiwsPortType port,
        final String accessionCode) {
    return new Callable<List<Feature>>() {
        @Override
        public List<Feature> call() throws Exception {
            List<Feature> features = new ArrayList<Feature>();
            // getXXXFeatures are methods of the WS proxy
            // that can take anywhere from a second to never to return
            for (RawFeature raw : port.getXXXFeatures(accessionCode)) {
                Feature ft = convertFeature(raw);
                features.add(ft);
            }
            if (Thread.currentThread().isInterrupted())
                log.error("XXX was interrupted");
            return features;
        }
    };
}
And the code that concurrently starts the WS calls:
WiwsPortType port = new Wiws().getWiws();
List<Future<List<Feature>>> ftList = new ArrayList<Future<List<Feature>>>();
//Counting wrapper around CompletionService,
//so I could implement ccs.hasRemaining()
CountingCompletionService<List<Feature>> ccs =
new CountingCompletionService<List<Feature>>(threadpool);
ftList.add(ccs.submit(getXXXFeatures(port, accessionCode)));
ftList.add(ccs.submit(getYYYFeatures(port, accessionCode)));
ftList.add(ccs.submit(getZZZFeatures(port, accessionCode)));
List<Feature> allFeatures = new ArrayList<Feature>();
while (ccs.hasRemaining()) {
    // Low for testing, eventually a little more lenient
    Future<List<Feature>> polled = ccs.poll(5, TimeUnit.SECONDS);
    if (polled != null)
        allFeatures.addAll(polled.get());
    else {
        // Still jobs remaining, but unresponsive: Cancel them all
        int jobsCanceled = 0;
        for (Future<List<Feature>> job : ftList)
            if (job.cancel(true))
                jobsCanceled++;
        log.error("Canceled {} feature jobs because they took too long",
                jobsCanceled);
        break;
    }
}
The problem I'm having with this code is that the Callables aren't actually canceled while they are waiting for port.getXXXFeatures(...) to return, but somehow keep running. As you can see from the if (Thread.currentThread().isInterrupted()) log.error("XXX was interrupted"); statements, the interrupted flag is only set after port.getXXXFeatures returns, which only happens once the web service call completes normally, instead of the call being interrupted when I invoked cancel.
Can anyone tell me what I am doing wrong and how I can stop the running CXF Webservice call after a given time period, and register this information in my application?
Best regards, Tim
Edit 3 New answer.
I see these options:
Post your problem on the Apache CXF as feature request
Fix ACXF yourself and expose some features.
Look for options for asynchronous WS call support within the Apache CXF
Consider switching to a different WS provider (JAX-WS?)
Do your WS call yourself using RESTful API if the service supports it (e.g. plain HTTP request with parameters)
For über experts only: use true threads/thread group and kill the threads with unorthodox methods.
The CXF docs have some instructions for setting the read timeout on the HTTPURLConnection:
http://cwiki.apache.org/CXF20DOC/client-http-transport-including-ssl-support.html
That would probably meet your needs. If the server doesn't respond in time, an exception is raised and the Callable would get the exception. (Except there is a bug where it MAY hang instead. I cannot remember if that was fixed for 2.2.2 or if it's just in the SNAPSHOTS right now.)
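A minimal sketch of setting that timeout programmatically on a CXF client proxy (the timeout values are placeholders, and whether this is sufficient for your CXF version is subject to the caveat above):
import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.transport.http.HTTPConduit;
import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;

// apply to the port before submitting the Callables
Client client = ClientProxy.getClient(port);
HTTPConduit conduit = (HTTPConduit) client.getConduit();

HTTPClientPolicy policy = new HTTPClientPolicy();
policy.setConnectionTimeout(5000);   // ms to establish the connection
policy.setReceiveTimeout(10000);     // ms to wait for the response; 0 would mean "forever"
conduit.setClient(policy);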