I used this line to execute a python script from a java application:
Process process = Runtime.getRuntime().exec("python foo.py", null, directory);
The script runs a TCP server that communicates with my java app and other clients.
When I was debugging the script I had a few console prints here and there and everything was fine. But as soon as the script was launched from the Java code, my TCP server stopped responding after a fixed amount of time. After some debugging and frustration I removed my prints from the script, and everything worked as expected.
It seems that there is some memory allocated for the process' standard output and error stream and if you don't consume it the process gets stuck while trying to write to a full buffer.
How can I launch the process so that I don't have to consume the standard output stream? I'd like to keep the prints for debugging, but I don't want to start a thread just to read a stream I don't need.
You have to consume the child process's output, or eventually it will block because the output buffer is full (don't forget about stderr, too). If you don't want to modify your Java program to consume it, perhaps you could add a flag to your script to turn off debugging altogether or at least direct it to a file (which could be /dev/null).
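For example, something along these lines should work (a sketch, reusing the "python foo.py" command and the directory variable from the question; on Java 9+ you could use ProcessBuilder.Redirect.DISCARD instead of a /dev/null File):

ProcessBuilder pb = new ProcessBuilder("python", "foo.py");
pb.directory(directory);
// send both streams to /dev/null so nothing ever has to read them
pb.redirectOutput(new File("/dev/null"));
pb.redirectError(new File("/dev/null"));
Process process = pb.start();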
Java exposes ProcessBuilder. This is something you can use to execute what you want. This does a fork-exec and also lets you handle the output and error streams.
The fact that your process hangs is not because of Java. It's a standard problem: your process is blocked because the stream is full and no one is consuming it.
Try using the ProcessBuilder with a thread that reads the streams.
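Something like this minimal sketch (the command is the one from the question; the "[script]" prefix is just illustrative):

ProcessBuilder pb = new ProcessBuilder("python", "foo.py");
pb.redirectErrorStream(true); // merge stderr into stdout so one thread can drain both
Process p = pb.start();
Thread drainer = new Thread(() -> {
    try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
        String line;
        while ((line = r.readLine()) != null) {
            System.out.println("[script] " + line); // or log it, or discard it
        }
    } catch (IOException ignored) {
        // stream closed: the child has exited
    }
});
drainer.setDaemon(true); // don't let the reader thread keep the JVM alive
drainer.start();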
I did something similar in the past: I redirected the output of the process to a log file specific to that process. Later you can see what is happening, and you keep a trace of your Python script.
Related
I am working on fixing a bug that makes our CI/CD pipeline fail. During an integration test, we spin up a local database instance. In order to do this, we are using some MariaDB wrappers to launch it from a Java codebase.
This process can (potentially) take a long time to finish, which will cause our tests to time out. For this case, we have added functionality to kill the process if it cannot finish installing within 20 seconds and then try again.
This part seems to be working.
The strange bit comes when trying to destroy the process. It seems to randomly take ~2-3 MINUTES to unblock, which is problematic for the same reason as the original problem. Upon investigation into the underlying libraries, it seems that we are using ExecuteWatchdog to manage the process. The bit of code that is blocking is:
watchDog.destroyProcess();
// this part usually returns nearly instantly
try {
    // this part can take minutes...
    resultHandler.waitFor();
} catch (InterruptedException e) {
    throw handleInterruptedException(e);
}
In addition to this, there is different behavior on Mac and Linux. If I do something like resultHandler.waitFor(1000) // wait with a 1000 ms timeout before just exiting, it works fine on a MacBook, but on Linux I see an error like: java.io.FileNotFoundException: {{executable}} (Text file busy)
Any ideas on this?
I have done some research and it seems like watchDog.destroyProcess is sending a SIGTERM instead of a SIGKILL. But I do not have any hooks to get the Process object in order to send it the KILL instead.
Thanks.
A common cause for blocking when working with processes is that the process is blocked on output, either to stdout or (the more likely to be overlooked) stderr.
In this context, setting up tests on a CI server, you might try setting the output and error output to INHERIT.
Note that this means that you won't be able to read the sub-process output or error stream in your Java code. My assumption is that you aren't trying to do that anyway, and that's why the process hangs. Instead, that output will be redirected to the output of the Java process, and I expect your CI server will log it as part of the build.
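A minimal sketch, assuming the wrappers let you set up the ProcessBuilder yourself (cmd stands in for the actual mariadb launch command):

ProcessBuilder pb = new ProcessBuilder(cmd);
// the child writes straight to the parent's stdout/stderr: no pipe to fill up
pb.redirectOutput(ProcessBuilder.Redirect.INHERIT);
pb.redirectError(ProcessBuilder.Redirect.INHERIT);
// or, for all three streams at once: pb.inheritIO();
Process p = pb.start();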
I have a very strange situation where a Java process seems to hang when called via Apache/PHP, but is OK when invoked from the command line. I spent hours debugging this, to no avail. Any and all thoughts welcome!
Situation: I have a .class file (without the original Java code) that reads an input file, processes the read information, and writes a report to stdout. The Java code doesn't read stdin and only writes to stdout. I wrapped this in a tiny Perl script that basically just execs "java -cp /path/to/classfile MyJavaProgram /path/to/inputfile/to/process". That way I can invoke it from the command line for testing; this works like a charm. Next, I try to invoke this from PHP using popen(), and there Java just hangs. I see the Perl process in the ps list, and Java too; but the Java process waits forever. Once I kill it, the webserver page continues loading (but of course without the expected output that the Java process would generate).
What I tried so far:
Wrapping the Java process in a shell script, same behaviour. Java just hangs.
Running it from PHP with popen() without a wrapper, same behaviour.
Starting it from PHP with system() or passthru(), same behaviour.
In the Perl wrapper, reopening STDIN for /dev/null (so that reading stdin immediately returns EOF), same behaviour.
In the Perl wrapper, reopening STDERR for /dev/null, same behaviour.
In the Perl wrapper, reopening STDOUT for /dev/null. Here I would expect no output (as it gets discarded) but still the Java process just hangs.
In the Perl wrapper, reopening all 3 streams for /dev/null. Java still hangs.
Replacing the Java invocation in the Perl wrapper with a simple "ls -l /bin". This works as expected; the web page gets populated with the "ls" listing. So the problem isn't in PHP or Perl.
Starting the Java process with a "/bin/sh -c 'java .....'". Same behaviour, Java hangs.
In the Perl wrapper, I dump the environment variables too, to check them. Environment seems OK.
When the Java process is running, I look up the Perl wrapper invocation in the ps list, and copy/paste it to the command line. Works like a charm.
Similarly, when the Java process is hanging, I look up the invocation in the ps list, and copy/paste it to the command line. Works like a charm.
I also verified that the input file is readable when invoked from the web server. All of the above command-line tests were run using the same user ID as the Apache user.
Unfortunately I can't replace the Java code with something that's under my control. I only have the .class file to work with. What I haven't tried yet is to run this under Linux, so this still might be an OS X-specific issue (which would surprise me).
What the hell is going on here? Any and all "wild" ideas appreciated... thanks!
Check ALL of the environment from Apache AND from the command line, including the paths, UIDs, etc.
Also check what the java process does when hanging (use truss/tusc/strace -f java xxxxxxxxxxx 2>/tmp/trace.$$) when wrapping it from both places (Apache and the command line), then compare the results.
Also, when wrapping from perl, set autoflush to 1 for stdin, stdout, stderr before exec-ing java.
I'm using I/O completion ports for a process management library (yes, there's a reason for this). You can find the source for what I'm talking about here: https://github.com/jcommon/process/blob/master/src/main/java/jcommon/process/platform/win32/Win32ProcessLauncher.java (take a look at lines 559 and 1137 -- yes, that class needs to be refactored and cleaned up).
I'm launching a child process and using named pipes (not anonymous pipes, because I need asynchronous, overlapped ReadFile()/WriteFile()) in order to process the child process's stdout and stderr. This mostly works. In a test, I launch 1,000 concurrent processes and monitor their output, ensuring they emit the proper information. Typically either all 1,000 work fine, or 998 of them do, leaving a couple which have problems.
Those couple of processes are showing that not all their messages are being received. I know the message is being output, but the thread processing GetQueuedCompletionStatus() for that process returns from the read with ERROR_BROKEN_PIPE.
The expected behavior is that the OS (or the C libs) would flush any remaining bytes in the stdout buffer upon process exit. I would then expect those bytes to be queued to my IOCP before getting a broken pipe error. Instead, those bytes seem to disappear and the read completes with an ERROR_BROKEN_PIPE -- which in my code causes it to initiate the teardown of the child process.
I wrote a simple application to test and figure out the behavior (https://github.com/jcommon/process/blob/master/src/test/c/stdout-1.c). This application disables buffering on stdout so all writes should effectively be flushed immediately. Using that program in my tests yields the same issues as launching "cmd.exe /c echo hi". And at any rate, shouldn't the application (or OS?) flush any remaining bytes on stdout when the process exits?
The source is in Java, using direct-mapped JNA, but should be fairly easy for C/C++ engineers to follow.
Thanks for any help you can provide!
Are you sure that the broken pipe error isn't occurring with a non-zero ioSize? If ioSize is not zero, then you should process the data that was read as well as noting that the file is now closed.
My C++ code which does this basically ignores ERROR_BROKEN_PIPE and ERROR_HANDLE_EOF and simply waits for either the next read attempt to fail with one of the above errors or the current read to complete with zero bytes read. The code in question works with files and pipes and I've never seen the problem that you describe when running the kind of tests that you describe.
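To illustrate the idea (a hedged sketch only, not your actual code: deliver(), teardown() and issueNextRead() are hypothetical helpers, the JNA binding shape is an assumption, and 109/38 are the standard Win32 values of ERROR_BROKEN_PIPE/ERROR_HANDLE_EOF):

boolean ok = kernel32.GetQueuedCompletionStatus(iocp, bytesTransferred, completionKey, overlapped, INFINITE);
int ioSize = bytesTransferred.getValue();
if (!ok) {
    int err = Native.getLastError();
    if (err == ERROR_BROKEN_PIPE || err == ERROR_HANDLE_EOF) {
        if (ioSize > 0) {
            deliver(readBuffer, ioSize); // drain the final bytes first...
        }
        teardown(); // ...and only then treat the pipe as closed
    }
} else if (ioSize == 0) {
    teardown(); // zero-byte completion: clean EOF
} else {
    deliver(readBuffer, ioSize);
    issueNextRead(); // post the next overlapped ReadFile()
}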
I am calling an external application from my Java GUI. The Java code is below when the user hits the "RUN" button in the GUI:
Runtime runme = Runtime.getRuntime();
runme.exec("MyApp.bin");
MyApp.bin does some math calculations and has some loops in it - no big deal. What happens is that MyApp.bin gets stuck! When I close my Java GUI, then MyApp.bin continues to run and finishes. If I run MyApp.bin directly from the terminal, then it runs fine without freezing. Why does my application freeze when it is run from the Java GUI, but resumes when I close the Java GUI? What is the Java GUI or Java code doing that is blocking my application from running successfully?
I'm going to make a wild guess that MyApp.bin is outputting something to its standard out, and you're not reading it. This causes the buffer to fill up, which blocks the child process.
Runtime.exec() returns a Process object. If you read the javadoc for that, you'll find:
The created subprocess does not have its own terminal or console. All its standard io (i.e. stdin, stdout, stderr) operations will be redirected to the parent process through three streams (getOutputStream(), getInputStream(), getErrorStream()). The parent process uses these streams to feed input to and get output from the subprocess. Because some native platforms only provide limited buffer size for standard input and output streams, failure to promptly write the input stream or read the output stream of the subprocess may cause the subprocess to block, and even deadlock.
http://docs.oracle.com/javase/6/docs/api/java/lang/Process.html
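For instance, a minimal sketch (exception handling omitted; in a Swing GUI, run this off the Event Dispatch Thread so the UI stays responsive):

ProcessBuilder pb = new ProcessBuilder("MyApp.bin");
pb.redirectErrorStream(true); // fold stderr into stdout so one loop drains both
Process p = pb.start();
try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
    String line;
    while ((line = r.readLine()) != null) {
        System.out.println(line); // or just discard it
    }
}
p.waitFor();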
I need to spawn a process in Java (under Linux exclusively) that will continue to run after the JVM has exited. How can I do this?
Basically the Java app should spawn an updater which stops the Java app, updates files and then starts it again.
I'm interested in a hack & slash method to just get it working as well as a better design proposal if you have one :)
If you're spawning the process using java.lang.Process it should "just work" - I don't believe the spawned process will die when the JVM exits. You might find that the Ant libraries make it easier for you to control the spawning though.
It does actually "just work", unless you're trying to be clever.
My wrapped java.lang.Process was trying to capture the script's output, so when the JVM died the script no longer had anywhere to send its output and simply died. If I don't try to capture the output, or the script doesn't generate any, or it redirects everything to a file or /dev/null, everything works as it should.
I was having trouble with this and the launched process was getting killed when the JVM shutdown.
Redirecting stdout and stderr to a file fixed the issue. I guess the process was tied to the launching Java app because, by default, it was expected to pass its output to it.
Here's the code that worked for me (minus exception handling):
ProcessBuilder pb = new ProcessBuilder(cmd);
pb.redirectOutput(logFile); // send the child's stdout to a file instead of a pipe
pb.redirectError(logFile);  // same for stderr, so nothing ties the child to the JVM
Process p = pb.start();
I thought the whole point of Java was that it's fully contained within the JVM. It's kinda hard to run bytecode when there's no runtime.
If you're looking to have a totally separate process you might look into trying to start a second java.exe instance. Although for your application, it might be easier to simply make a synchronized block that stops (but doesn't kill) your app, does the updating, and then re-initializes your app's data.
It won't always "just work". When the JVM spawns the child and then shuts down, the child process will also shut down in some cases. That is expected behavior of the process. Under Win32 systems, it just works.
For example, if the WebLogic server was started by a Java process and that process exits, a shutdown signal is also sent to WebLogic via a JVM shutdown hook, which causes WebLogic to shut down as well.
If it "just works" for you, then there is no problem. However, if you find yourself in a position where the child process also shuts down with the JVM, it is worth having a look at the "nohup" command. The process won't respond to the SIGHUP signal (sent when the controlling terminal goes away), but will respond to SIGKILL, as well as to normal operations.
Update: the way described above is a bit of overkill. Another way of doing this would be to use "&" at the end of the command. This will spawn a new process that is not tied to the current Java process.
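A sketch of that approach (updater.sh is a placeholder; the redirections matter, since an inherited pipe is exactly what can kill the child when the JVM exits):

// hand the command to a shell; nohup plus & detaches it from the JVM
new ProcessBuilder("/bin/sh", "-c", "nohup ./updater.sh >/dev/null 2>&1 &").start();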
P.S. Sorry for so many updates, I have been learning and trying it from scratch.
>>don't believe the spawned process will die when the JVM exits.
The child process always dies on my box (SuSE) whenever I kill Java. I think the child process will die if it's dealing with the I/O of the parent process (i.e., Java).
If you're looking at making an updater on Linux, you're probably barking up the wrong tree. I believe all major linux distros have a package manager built in. You should use the package manager to do your updating. Nothing frustrates me more than programs that try to self-update... (I'm looking at you, Eclipse)