Runtime.getRuntime().exec() not launching process - java

I have a multi-threaded application that launches an external app to do data conversion in preparation for later parts of the application.
I have an issue where, when I set my thread count higher than 6 concurrent threads, Runtime.getRuntime().exec() fails to launch the external application (I have also tried using ProcessBuilder with the same results). It does not throw any sort of Exception, and nothing is captured on either the standard output stream or the standard error stream. What's even stranger is that after rebooting the server, I can run at least 8 concurrent threads for a few minutes without the issue, but then the issue returns.
I have read that Linux's implementation uses the fork() call, which can cause an "unable to allocate memory" error, but I am not getting any such error.
I have written another test app that just launches X instances of notepad.exe without issue, with X as large as 100.
The application is running on a Windows 2003 Standard x64 server in a VMware environment.
JVM version is 1.6.0_11.
I can update the JVM in an attempt to resolve the issue, but would like to leave that as a last resort, to avoid needing to retest all applications with the new JVM version.
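Not an answer to the root cause, but worth ruling out first: if the child's stdout/stderr are never drained, the pipe buffer can fill and the child can hang or appear never to launch. A minimal Java 6-compatible sketch, where converter.exe and input.dat are hypothetical placeholders for the real external app:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class LaunchConverter {
    public static void main(String[] args) throws IOException, InterruptedException {
        // "converter.exe" and "input.dat" are hypothetical placeholders.
        ProcessBuilder pb = new ProcessBuilder("converter.exe", "input.dat");
        pb.redirectErrorStream(true); // merge stderr into stdout so one reader suffices

        Process p = pb.start();

        // Drain the child's output; an undrained pipe buffer can block the child.
        BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
        try {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            r.close();
        }

        int exit = p.waitFor();
        System.out.println("exit code: " + exit);
    }
}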

Try downloading the DLL "framedyn.dll" from http://www.dlldump.com/download-dll-files_new.php/dllfiles/F/framedyn.dll/5.1.2600.2180/download.html and pasting it into C:\Windows\System32.

Related

JVM only using half the cores on a server

I have a number of Java processes using OpenJDK 11 running on Windows Server 2019. The server has two physical processors and 36 total cores; it is an HP machine. When I start my processes, I see work allocated across all the cores in Task Manager. This is good. However, after the processes run for some period of time (not a consistent amount of time), the machine begins to utilize only half the cores.
I am working off a few theories:
The JDK has some problem that is preventing it from consistently accessing all the cores.
Something with Windows Server 2019 is causing a problem, limiting Java from accessing all the cores.
There is a thermal management problem and one processor is getting too hot and the OS is directing all the processing to the other processor.
There is some issue with hyper-threading and the 'logical' processors that is causing the process to not be able to utilize all the cores.
I've tried searching for JDK issues and haven't found anything like this mentioned. I went down to the server and while it's running a little warm, it didn't appear excessively hot. I have not yet tried disabling hyper-threading. I have tried a number of parameters to force the JVM to use all the cores and indeed the process initially does use all the cores; I can see the activity in Task Manager.
Anyone have any thoughts? This is a really baffling problem and I'd appreciate any ideas.
UPDATE: I am able to make it use the other processor by using Task Manager to assign one of the java.exe processes to the other processor. This also works from the java invocation on the command line, with an argument selecting which socket to use.
Now that said, this feels like a hack. I don't see why I should have to manually assign a socket to each of my java processes; that job should be left to the OS. I'm still not sure exactly where the problem is, if it's the OS or what.
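For reference, the command-line form of that workaround uses the stock Windows start command; a sketch, where the node numbers are machine-specific and myapp.jar is a placeholder:

start /NODE 0 java -jar myapp.jar
start /NODE 1 java -jar myapp.jar

start also accepts /AFFINITY with a hexadecimal core mask if you need finer-grained control than a whole NUMA node.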

Is it better to launch a Java app once and sleep or repeat launching and killing?

I have a Java application that needs to run several times. Every time it runs, it checks if there's data to process and if so, it processes the data.
I'm trying to figure out what's the best approach (performance, resource consumption, etc.) to do this:
1.- Launch it once, and if there's nothing to process make it sleep (All Java).
2.- Using a bash script to launch the Java app, and when it finishes, sleep (the script) and then relaunch the java app.
I was wondering if it is best to keep the Java app alive (sleeping) or relaunching every time.
It's hard to answer your question without the specific context. On the face of it, your question sounds like it could be a premature optimization.
Generally, I suggest you do what's easier for you to do (and to maintain), unless you have good reasons not to. Here are some possible good reasons, pick the ones appropriate to your situation:
For sleeping in Java (see the sketch after these lists):
Checking whether there's new data is easier in Java
Starting the Java program takes time or other resources, for example if your program needs to load a bunch of data on startup
Starting the Java process from bash is complex for some reason - maybe it requires you to fiddle with a bunch of environment variables, files, or something else.
For re-launching the Java program from bash:
Checking whether there's new data is easier in bash
Getting the Java process to sleep is complex - maybe your Java process is a complex multi-threaded beast, and stopping and then re-starting the various threads is complicated.
You need the memory in between Java jobs - killing the Java process entirely would free all of its memory.
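As a concrete illustration of option 1, here is a minimal poll-and-sleep sketch; checkForData() and process() are hypothetical placeholders for however you detect and handle new work:

public class Poller {
    public static void main(String[] args) throws InterruptedException {
        while (true) {
            if (checkForData()) {
                process();
            }
            // Sleep five minutes between checks; adjust to taste.
            Thread.sleep(5 * 60 * 1000L);
        }
    }

    // Hypothetical placeholder: however you detect new work.
    private static boolean checkForData() { return false; }

    // Hypothetical placeholder: your actual data processing.
    private static void process() { }
}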
I would not keep it alive.
Instead, you can use a job that runs at defined intervals: you can use Jenkins, or you can use the Windows Task Scheduler and configure it to run every 5 minutes (or whatever interval you wish).
Run a batch file with Windows task scheduler
And from your batch file you can do the following:
rem compile the source
javac JavaFileName.java
rem run the compiled class
java JavaFileName
See here for how to execute a Java file from cmd:
How do I run a Java program from the command line on Windows?
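If you go the Task Scheduler route, the task can also be created from the command line; a sketch, with the task name and batch file path as placeholders:

schtasks /Create /SC MINUTE /MO 5 /TN "ProcessData" /TR "C:\jobs\run.bat"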
I personally would decide based on where the application is running.
If it were my personal computer, I would use the second option with the bash script (resources on my local machine change a lot due to extensive use of other programs, and it can happen that at some point I'm running out of memory, for example).
If it runs in the cloud (Amazon, Google, whatever), I know exactly what kind of processes are running there (the load should not change as dynamically as on my local PC), and a long-running Java process with some scheduler would be fine for me.

OutOfMemoryError on OpenShift

I have a Tomcat Java application running on OpenShift (1 small gear) that consists of two main parts: A cron job that runs every minute, parses information from the web and saves it into a MongoDB database, and some servlets to access that data.
After deploying the app, it runs fine, but sooner or later the server will stop and I cannot access the servlets anymore (the HTTP request takes very long, and if it finishes, it returns a Proxy Error). I can only force stop the app using the rhc command line and restart it.
When I look at the jbossews.log file, I see multiple occurrences of this error:
Exception in thread "http-bio-127.5.35.129-8080-Acceptor-0" java.lang.OutOfMemoryError:
unable to create new native thread
Is there anything I can do to prevent this error without needing to upgrade to a larger gear with more memory?
From your description, I understand that there is some resource leak in your app. That may be because you are not stopping your threads.
Sometimes a thread will not stop automatically, and then we need to stop the thread explicitly.
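A minimal sketch of one way to keep thread creation bounded and shut workers down explicitly, assuming the periodic parse-and-save work can be wrapped in a Runnable (the pool size and loop bound here are arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BoundedWorkers {
    public static void main(String[] args) throws InterruptedException {
        // A fixed pool reuses a handful of threads instead of creating a new
        // native thread for every unit of work.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 100; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    // hypothetical placeholder for the per-minute parse/save work
                }
            });
        }
        pool.shutdown(); // stop accepting new tasks, let running ones finish
        if (!pool.awaitTermination(1, TimeUnit.MINUTES)) {
            pool.shutdownNow(); // interrupt anything still running
        }
    }
}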
I guess it's not a memory problem but an OS resource problem: you are running out of native threads, the maximum number of threads your JVM can have.
You can increase the limit this way:
ulimit -s newvalue
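A related knob, offered as an assumption to test rather than a guaranteed fix: the JVM's per-thread stack size also bounds how many native threads fit in the address space, so lowering it can allow more threads (the 256k value here is arbitrary):

java -Xss256k -jar yourapp.jar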

How to determine the origin of some strange java process?

Good day everybody!
I'm developing a Java GWT web application. Yesterday it was working fine - Task Manager was showing the NetBeans process and ONE java process - definitely it was Tomcat. But today I'm observing the NetBeans process, the Tomcat java process, and some unknown java process which causes a java heap space error. This strange process eats a lot of memory, and its memory consumption grows dramatically over time.
Probably useful information: the only thing I changed in my app is dropping the database and creating it again from a backup. I suspected the JDBC driver couldn't connect to the DB because of incorrect user privileges, but that is not the problem; queries are performing successfully, yet the strange java process still exists.
Question: how can I determine the origin of this unknown java process? Which application - NetBeans, Tomcat, or something else - creates it?
On a Unix platform, ps has several options that show more than just the process name ("java") - e.g. on Linux try ps ax | grep java and you'll see the whole command line that was used to start each java process. It's easy to determine from there which processes are running and what they're supposed to do.
On Windows you'll have to find an equivalent - if you're lucky, the user executing the process will help you narrow it down as well, e.g. if it's you or SYSTEM (for services), but the full command line definitely beats that.
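A JDK-bundled option that behaves the same on Windows and Unix is jps: jps -l prints each running Java process with its main class or jar, and jps -v adds the JVM arguments it was started with:

jps -lv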
OK, I found the reason - I select a lot of data from my DB. It seems the JDBC driver kept loading the incoming data into memory until memory ran out.
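For anyone hitting the same thing: a fetch-size hint asks the driver to pull rows in batches instead of buffering the whole result set. A minimal sketch; the URL, credentials, and table are placeholders, and whether the driver truly streams depends on the driver (MySQL's Connector/J, for example, has its own convention):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class StreamRows {
    public static void main(String[] args) throws Exception {
        // URL, credentials, and table name are hypothetical placeholders.
        Connection c = DriverManager.getConnection("jdbc:mysql://localhost/mydb", "user", "pass");
        try {
            Statement s = c.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
            s.setFetchSize(100); // hint: fetch rows in batches rather than buffering everything
            ResultSet rs = s.executeQuery("SELECT * FROM big_table");
            while (rs.next()) {
                // process one row at a time instead of holding the whole result in memory
            }
            rs.close();
            s.close();
        } finally {
            c.close();
        }
    }
}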

Java application performance changing based on how it is executed

Hopefully this is an easy and quick question. I recently developed a CPU-intensive Java application in NetBeans. It uses A* pathfinding tens of thousands of times per second to solve a tile-matching game. The application is finished, and it runs pretty fast (I've been testing in NetBeans the whole time). I've clocked it at 700 attempts per second (each attempt is probably 20 or so pathfinds).

When I build the project it creates a jar, and I can run this outside of NetBeans. If I use the command line (Windows 7) and run java -jar theFile.jar, I clock it at 1000 attempts per second. This is understandable, since the IDE was probably using a bit of CPU power and holding it back. (My application is multi-threaded; you can set the number of cores. I usually use 3/4 so it doesn't slow my system too much.)

Now, the confusing part. Obviously I don't want the user to have to use the command line every time they want to run this application on Windows. They should just be able to click the jar. The problem is that when I double-click the jar file, the program runs at a sickly 300 attempts per second!!
Why on earth would these three ways of running the exact same program, all else being constant, have such a massive impact on performance? Is my fix to create a script to run the .jar from the command line, or do you guys recognize what's going on here? Thanks very much!
Edit: New Information
I made a batch file with the command: java -jar theFile.jar
When this is executed, it runs at the same speed as it would if I ran it in the console (so, 1000 att/sec)
However, I also made an executable from a simple C++ program. The program had just a couple of lines: system("java -jar theFile.jar"); and return 0;. Unbelievably, this runs at the speed of double-clicking the jar file, about 300 att/sec. How bizarre! It could very well be different JVM parameters, but I'm not sure how to check the default parameters, or how to modify them for this particular jar.
You may be running into the differences between the client and server versions of the HotSpot VM. From this article:
On platforms typically used for client applications, the JDK comes with a VM implementation called the Java HotSpot™ Client VM (client VM). The client VM is tuned for reducing start-up time and memory footprint. It can be invoked by using the -client command-line option when launching an application.
On all platforms, the JDK comes with an implementation of the Java virtual machine called the Java HotSpot™ Server VM (server VM). The server VM is designed for maximum program execution speed. It can be invoked by using the -server command-line option when launching an application.
I'm guessing that clicking the jar file may be invoking the client VM, unless you set the -server flag. This article provides some more details:
What's the difference between the -client and -server systems?
These two systems are different binaries. They are essentially two different compilers (JITs) interfacing to the same runtime system. The client system is optimal for applications which need fast startup times or small footprints; the server system is optimal for applications where the overall performance is most important. In general the client system is better suited for interactive applications such as GUIs. Some of the other differences include the compilation policy, heap defaults, and inlining policy.
Where do I get the server and client systems?
Client and server systems are both downloaded with the 32-bit Solaris and Linux downloads. For 32-bit Windows, if you download the JRE, you get only the client; you'll need to download the SDK to get both systems.
For 64-bit, only the server system is included. On Solaris, the 64-bit JRE is an overlay on top of the 32-bit distribution. However, on Linux and Windows, it's a completely separate distribution.
I would like java to default to -server. I have a lot of scripts which I cannot change (or do not want to change). Is there any way to do this?
Since Java SE 5.0, with the exception of 32-bit Windows, the server VM will automatically be selected on server-class machines. The definition of a server-class machine may change from release to release, so please check the appropriate ergonomics document for the definition for your release. For 5.0, it's Ergonomics in the 5.0 Java[tm] Virtual Machine.
Should I warm up my loops first so that HotSpot will compile them?
Warming up loops for HotSpot is not necessary. HotSpot contains On Stack Replacement technology, which will compile a running (interpreted) method and replace it while it is still running in a loop. No need to waste your application's time warming up seemingly infinite (or very long running) loops in order to get better application performance.
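If you want to verify which VM variant each launch path actually picks, the standard java.vm.name system property reports it; a minimal sketch (the exact wording of the value varies by JDK):

public class WhichVm {
    public static void main(String[] args) {
        // Typically prints something like "Java HotSpot(TM) Client VM"
        // or "Java HotSpot(TM) 64-Bit Server VM".
        System.out.println(System.getProperty("java.vm.name"));
    }
}

Running this via each launch path (batch file, double-click, C++ system() call) should show whether the slow paths are landing on the client VM.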
