Console commands from Java - java

I'm trying to execute two commands from a Java program using Process:
Process p = Runtime.getRuntime().exec(command1);
Process p2 = Runtime.getRuntime().exec(command2);
The problem is that the first one is OK, but the second one never completes;
it is always blocked in waitFor().

You might be running into the dreaded "need to empty the streams" problem. See "When Runtime.exec() won't" for details on it.
Also in the same article is some info on other traps you can run into if you're treating getRuntime().exec() like the command line.

When running an external process that prints anything to stdout/stderr, you should read what it writes - otherwise it will block once its buffer fills up.
You basically need a thread to read from stdout and a thread to read from stderr of each process, along the lines of the sketch below.
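As a rough illustration, here is a minimal, self-contained sketch of that pattern ("command1" and "command2" are placeholders for the two commands in the question, and the gobble helper is just an example name):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class ExecDemo {

    // Drains one stream of a child process so its buffer can never fill up.
    static Thread gobble(final InputStream in, final String prefix) {
        Thread t = new Thread(() -> {
            try (BufferedReader r = new BufferedReader(new InputStreamReader(in))) {
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println(prefix + line);
                }
            } catch (IOException ignored) {
                // the stream is closed when the child exits
            }
        });
        t.start();
        return t;
    }

    public static void main(String[] args) throws Exception {
        // command1 and command2 are placeholders for the two commands in the question.
        for (String command : new String[] { "command1", "command2" }) {
            Process p = Runtime.getRuntime().exec(command);
            Thread out = gobble(p.getInputStream(), "[out] ");
            Thread err = gobble(p.getErrorStream(), "[err] ");
            int exit = p.waitFor();   // safe now: both streams are being drained
            out.join();
            err.join();
            System.out.println(command + " exited with " + exit);
        }
    }
}

With ProcessBuilder you can also call redirectErrorStream(true) so that only one stream needs to be drained.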

Related

Batch file immediately terminates after executing a Java program (if batch file is called from another Java program)

I have a batch file called 'StartUpdate.bat' which contains something like this:
set CLASSPATH="myclasspath"
java -cp %CLASSPATH% UpdateProgram
runMyApp.bat
If I run 'StartUpdate.bat' directly from command line, it executes UpdateProgram and then runMyApp.bat immediately after. This is the intention.
However, if I call 'StartUpdate.bat' from another Java program, it terminates immediately after completing UpdateProgram. 'StartUpdate.bat' is called from this other Java program using
Runtime.getRuntime().exec(path + "StartUpdate.bat");
StartUpdate.bat is executed just fine, as is UpdateProgram inside it, but nothing else following UpdateProgram.
Why does it behave this way? What should I do so that it executes the remainder of the batch file?
You can use call or start in the batch file to run the Java program, so that control returns to the script and the rest of the batch file executes.
Explicitly use a user thread with setDaemon(false). It seems that was the problem.
As long as there is a user (non-daemon) thread, the JVM will keep the application alive. Daemon threads are closed when no user threads exist anymore.
Daemon threads are often thought of as the right choice for such "server"-like purposes, which is a common misconception.
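To illustrate, a minimal sketch of that idea (the path to StartUpdate.bat is a placeholder, and if the batch file prints a lot of output its streams still need to be drained as described above):

import java.io.IOException;

public class LauncherDemo {
    public static void main(String[] args) throws IOException {
        // Placeholder path - substitute the real location of StartUpdate.bat.
        Process p = Runtime.getRuntime().exec("C:\\update\\StartUpdate.bat");
        Thread waiter = new Thread(() -> {
            try {
                p.waitFor();             // block until the whole batch file has finished
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        waiter.setDaemon(false);         // a user (non-daemon) thread keeps the JVM alive until it ends
        waiter.start();
    }
}

Threads created from the main thread are non-daemon by default, so setDaemon(false) mostly documents the intent; the important point is that nothing marks the waiting thread as a daemon.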
For the rest, ProcessBuilder would be a more robust class for this task:
ProcessBuilder pb = new ProcessBuilder("cmd", "/c", "dir");  // "dir" is a cmd.exe built-in, so it has to be run through cmd /c
Process process = pb.start();
int returnCode = process.waitFor();
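Applied to this question, a rough sketch might look like the following (the path is a placeholder for the one used in the question; going through cmd /c is an assumption, since .bat files are interpreted by cmd.exe, and inheritIO() keeps the child from blocking on full output buffers):

String path = "C:\\update\\";                               // placeholder for the path used in the question
ProcessBuilder pb = new ProcessBuilder("cmd", "/c", path + "StartUpdate.bat");
pb.inheritIO();                                             // forward the batch file's output to this console so no buffer fills up
Process process = pb.start();
int returnCode = process.waitFor();                         // returns only once the entire batch file has run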

Strange execution patterns with subprocess.Popen

I have a Python script wherein a JAR is called. After the JAR is called, two shell scripts are called. Initially I was doing this:
proc = subprocess.Popen(jar_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
proc.wait()
output, errors = proc.communicate()
proc = subprocess.Popen(prune_command, shell=True)
proc.wait()
proc = subprocess.call(push_command, shell=True)
I have to wait for the first two processes to finish, so I use Popen(); the final one I can let run in the background, so I call() it. I pass shell=True because I want the called shell scripts to have access to environment variables.
The above works, however, I don't get any logging from the JAR process. I've tried calling it this way:
proc = subprocess.call(jar_command)
This logs as I would expect, but the two shell scripts that follow are not executed. Initially I thought the logs just weren't going to stdout, but it turns out they're not being executed at all, i.e. not removing superfluous files or pushing to a database.
Why are the followup shell scripts being ignored?
If you are certain your shell scripts are not running at all, and everything works with the first version of the code, then the java command must be deadlocking or not terminating correctly when run via the call() function.
You can validate that by adding a dummy file creation to your bash scripts. Put it on the first line of each script, so that if the script is executed you'll get the dummy file created. If it's not created, that means the scripts weren't executed, probably due to something with the java execution.
I would try a couple of things:
First, I would go back to returning a Popen instead of using call. Instead of wait(), use communicate():
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate.
communicate() returns a tuple (stdoutdata, stderrdata).
proc = subprocess.Popen(jar_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, errors = proc.communicate()
Make sure to check both streams for data (stdout and stderr). You might miss an error the java process raises.
Next I would try disabling buffering by passing bufsize=0 to Popen. That rules out the possibility that the problem is related to Python-side buffering.
If both options still don't work, try to see if there is an exception by using check_call():
proc = subprocess.check_call(jar_command)
Run command with arguments. Wait for command to complete. If the return code was zero then return, otherwise raise CalledProcessError.
These options might have the answer; if not, they will at least help the debugging process. Feel free to comment on how this progresses.
Most likely, you are forgetting that the process's streams are in fact OS-level buffers with some finite capacity.
For example, if you run a process that produces a lot of output in PIPE mode, and you wait for it to finish before trying to consume whatever that process wrote to output, you have a deadlock:
The process has filled up the output buffer and is now blocked on writing more data to its output. Until somebody empties the buffer by reading from pipe, the process cannot continue.
Your program is waiting for the subprocess to finish before you read the data from its buffer.
The correct way is to start a thread in your program that will "drain" the pipe constantly as the process is running and while your main thread is waiting. You must first start the process, then start the drain threads, then wait for process to finish.
For differential diagnosis, check whether the subprocess will run fine with little output (i.e. as long as the buffer does not fill up, such as a line or two).
The documentation for subprocess has a note about this.

UNIX STDOUT end symbol

I want to execute multiple commands from Java Process but I don't want to spawn a new process for executing every command. So I made an Object called Shell that holds InputStream and OutputStream for Process.
The problem is that if I don't terminate the process by appending
"exit\n"
I can't tell where the end of the InputStream is, and the InputStream goes into a waiting state once I've read the whole output, so I need to know when to stop issuing the next read.
Is there some kind of a standard symbol at the end of the output?
Because what I came up with is
final String outputTerminationSignal = checksum(command);
command += ";echo \"" + outputTerminationSignal + "\";echo $?\n";
This way when I get the outputTerminationSignal line I can get the exit code and stop reading.
final String line = bufferedReader.readLine();
if (line != null && line.equals(outputTerminationSignal)) {
final String exitCode = bufferedReader.readLine();
}
Of course this is exploitable and error-prone, because the real output may in some cases match my generated outputTerminationSignal and the app will stop reading when it shouldn't.
I wonder if there is some standard, so-called "outputTerminationSignal" coming from the output that I am not aware of.
Unix doesn't use a special character or symbol to indicate the end of a stream. In Java, if you read from a stream that has reached end-of-file, read() returns -1 and BufferedReader.readLine() returns null.
Having said that, if you're reading from a stream connected to a running program, you won't see end-of-file just because the other program is idle. You would only see end-of-file if the other program has exited, or if it explicitly closes its output stream (the one you are reading from). The situation you describe sounds like the shell is just idle, waiting for another command. You won't get an EOF indication from the stream in this case.
You could try getting the shell to print a command prompt when it's waiting for a command, then look for the command prompt as an "end of command" indicator. Shells normally print command prompts only when they're interactive, but you might be able to find a way around that.
If you want to make the shell process exit without sending it the "exit" command, you could try closing the stream that you're using to write to the shell process. The shell should see that as an end-of-file and exit.
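A rough sketch of that approach (assuming /bin/sh as the shell and a couple of placeholder commands; the stream wiring stands in for the Shell object from the question):

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;

public class ShellEofDemo {
    public static void main(String[] args) throws Exception {
        // One long-lived shell that receives several commands over stdin.
        Process shell = new ProcessBuilder("/bin/sh").redirectErrorStream(true).start();
        BufferedWriter toShell = new BufferedWriter(new OutputStreamWriter(shell.getOutputStream()));
        BufferedReader fromShell = new BufferedReader(new InputStreamReader(shell.getInputStream()));

        toShell.write("ls\n");
        toShell.write("echo done\n");
        toShell.close();                  // closing stdin is the shell's EOF: it runs the commands and exits

        String line;
        while ((line = fromShell.readLine()) != null) {   // null marks the true end of the stream
            System.out.println(line);
        }
        System.out.println("shell exited with " + shell.waitFor());
    }
}

If the shell should stay alive for further commands, closing stdin is not an option and something like the sentinel approach from the question is still needed.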
You could ask the shell for the PID of the spawned child and monitor its state.

send character to stdin of background Java process

I am looking to send a character to a Java process running in the background. I found this article https://serverfault.com/questions/178457/can-i-send-some-text-to-the-stdin-of-an-active-process-running-in-a-screen-sessi?answertab=active#comment155464_178470 which I thought would solve the problem, but it actually doesn't.
For testing purposes I added a line
System.out.println("This is what I read "+(int)temp);
where temp is read this way
int temp = inputStreamReader.read();
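For reference, a self-contained version of that test program might look like this (assuming the read simply happens in a loop):

import java.io.InputStreamReader;

public class StdinReadDemo {
    public static void main(String[] args) throws Exception {
        InputStreamReader inputStreamReader = new InputStreamReader(System.in);
        int temp;
        while ((temp = inputStreamReader.read()) != -1) {   // -1 means stdin reached end-of-file
            System.out.println("This is what I read " + temp);
        }
    }
}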
Something really weird actually happens:
I start the process in a terminal window (not in background this time)
I open another terminal and look up the process' PID
I run the command
echo q > /proc/*pid_of_the_process/fd/0
In the other window this line appears
q
so for some reason I get to see this character in the terminal where the process is running, but it is actually not read by the process, because if that were the case I would see this line
This is what I read 113
which is what I actually get if I type 'q' from within the terminal window.
Does anybody know why I get this funny behavior?

How to Send a Password to Process in Java

I am launching a process from Java to run a command for me. This process runs for a little while, then needs a password to continue. Now I know that I can write to the input stream of the process, but I am not quite sure how to detect when I need to write to it.
Possible solutions:
Is there a way that I can detect that the process is blocking?
Can I just write to the standard in immediately after executing the command and when the process hits a point when it needs it, it can just read from it?
Any other ideas?
It is not necessary to detect if the child process is blocking or not. If the child process is designed to block until input is provided to it via stdin, it will block until such input is provided.
It is necessary to keep in mind that the standard input, output and error buffers are limited in size, so the child process must consume the contents of its input buffer, and the parent process must consume the contents of the output and error buffers, as soon as possible. Not doing so can result in the child process hanging.
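As a hedged sketch of the second option ("some-command" and its argument are made up, and the password is written blindly up front):

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;

public class PasswordFeeder {
    public static void main(String[] args) throws Exception {
        // "some-command" and its argument are made up - substitute the real command.
        ProcessBuilder pb = new ProcessBuilder("some-command", "--some-arg");
        pb.redirectErrorStream(true);     // fold stderr into stdout so a single reader is enough
        Process p = pb.start();

        // Write the password right away; the child reads it whenever it reaches the prompt.
        BufferedWriter stdin = new BufferedWriter(new OutputStreamWriter(p.getOutputStream()));
        stdin.write("s3cret\n");
        stdin.flush();

        // Keep draining output so the child never blocks on a full buffer.
        BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line;
        while ((line = out.readLine()) != null) {
            System.out.println(line);
        }
        System.out.println("exit code: " + p.waitFor());
    }
}

Note that many programs (ssh, sudo, etc.) read passwords from the controlling terminal rather than from stdin; in that case writing to the process's input stream will not work and you need a pseudo-terminal or a tool like expect.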
Maybe you should get around the runas problem by not using runas. Google found me this: http://www.source-code.biz/snippets/c/1.htm It lets you pass your password at runtime...
