I want to execute multiple commands from a Java Process, but I don't want to spawn a new process for every command. So I made an object called Shell that holds the Process's InputStream and OutputStream.
The problem is that if I don't terminate the process by appending
"exit\n"
I can't tell where the end of the InputStream is: once I've read all of the output, the next read just blocks waiting for more data, so I need to know when to stop reading.
Is there some kind of a standard symbol at the end of the output?
Because what I came up with is
final String outputTerminationSignal = checksum(command);
command += ";echo \"" + outputTerminationSignal + "\";echo $?\n"
This way when I get the outputTerminationSignal line I can get the exit code and stop reading.
final String line = bufferedReader.readLine();
if (line != null && line.equals(outputTerminationSignal)) {
    // the line after the termination signal carries the exit code
    final String exitCode = bufferedReader.readLine();
}
Of course this is fragile and error-prone, because in some cases the real output may happen to match my generated outputTerminationSignal and the app will stop reading when it shouldn't.
I wonder if there is some standard "output termination signal" coming from the output that I am not aware of.
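For reference, this is roughly what the Shell wrapper looks like in full. It is only a sketch: the names are illustrative, it assumes /bin/sh, and it uses a random UUID as the sentinel instead of checksum(command), so the collision concern above still applies in principle.
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.UUID;

class Shell {
    private final Process process;
    private final BufferedWriter writer;
    private final BufferedReader reader;

    Shell() throws IOException {
        // One long-lived shell process; stderr is merged so a single reader suffices.
        process = new ProcessBuilder("/bin/sh").redirectErrorStream(true).start();
        writer = new BufferedWriter(new OutputStreamWriter(process.getOutputStream(), StandardCharsets.UTF_8));
        reader = new BufferedReader(new InputStreamReader(process.getInputStream(), StandardCharsets.UTF_8));
    }

    // Runs one command, prints its output, and returns its exit code.
    int run(String command) throws IOException {
        final String sentinel = UUID.randomUUID().toString();
        writer.write(command + "; rc=$?; echo \"" + sentinel + "\"; echo $rc\n");
        writer.flush();
        String line;
        while ((line = reader.readLine()) != null) {
            if (line.equals(sentinel)) {
                String exitCodeLine = reader.readLine();   // the line after the sentinel is the exit code
                return exitCodeLine == null ? -1 : Integer.parseInt(exitCodeLine.trim());
            }
            System.out.println(line);
        }
        return -1;   // the stream ended, so the shell itself has exited
    }
}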
Unix doesn't use a special character or symbol to indicate the end of a stream. In Java, reading a stream that is at end-of-file returns -1 (or null from readLine()); only some stream classes, such as DataInputStream, throw an EOFException.
Having said that, if you're reading from a stream connected to a running program, you won't see end-of-file just because the other program is idle. You would only see it if the other program has exited, or if it explicitly closes its output stream (that you are reading from). The situation you describe sounds like the shell is just idle, waiting for another command. You won't get an EOF indication from the stream in this case.
You could try getting the shell to print a command prompt when it's waiting for a command, then look for the command prompt as an "end of command" indicator. Shells normally print command prompts only when they're interactive, but you might be able to find a way around that.
If you want to make the shell process exit without sending it the "exit" command, you could try closing the stream that you're using to write to the shell process. The shell should see that as an end-of-file and exit.
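A minimal sketch, assuming the shell's Process object is held in a variable named shellProcess:
// Closing the stream used to send commands delivers EOF to the shell,
// which should make it exit without an explicit "exit" command.
shellProcess.getOutputStream().close();
int exitCode = shellProcess.waitFor();   // throws InterruptedException; handle or declare it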
You could ask the shell for the PID of the spawned child, and monitor its state.
Related
I built an interactive EXE, which means that you can continuously send new commands to it and it will process them.
An automation of this can be implemented in Java according to this answer. However, after sending a command, the code does not wait until the command has finished. Instead, it returns control to the caller right away, which can lead to race conditions: if the sent command was supposed to write a file, the file might not exist yet when it is accessed. How can I send a command, read its output, and have the sendCommand() call return only once the process is expecting input again?
public synchronized void sendCommand(String command) throws IOException
{
    byte[] commandBytes = (command + "\n").getBytes(UTF_8.name());
    outputStream.write(commandBytes);
    outputStream.flush();
}
Preferably, the process output should also be returned in the meantime. This would be the default behavior of a non-interactive shell command, which terminates once it has finished executing. read() blocks indefinitely until the process terminates, and I do not want to hardcode the length of the expected process output or rely on similar hacks to work around this shortcoming.
I decided to rewrite my binary to be non-interactive again. It turned out the expected performance gain was negligible, so there was no longer a reason to keep it interactive and put up with the extra implementation hassle.
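For anyone who does want to keep the binary interactive, one option is the prompt idea from the answer above: assuming the EXE prints a known prompt string (the "> " below is purely hypothetical) whenever it is ready for the next command, sendCommand() can collect output until that prompt reappears. A rough sketch, reusing the inputStream and outputStream of the process as in the question:
private static final String PROMPT = "> ";   // hypothetical: whatever the interactive EXE prints when ready

public synchronized String sendCommand(String command) throws IOException {
    outputStream.write((command + "\n").getBytes(UTF_8));
    outputStream.flush();

    // Read byte by byte (assumes single-byte output) until the prompt appears,
    // so the caller only regains control once the command has finished.
    StringBuilder output = new StringBuilder();
    int c;
    while ((c = inputStream.read()) != -1) {
        output.append((char) c);
        if (output.length() >= PROMPT.length()
                && output.substring(output.length() - PROMPT.length()).equals(PROMPT)) {
            break;
        }
    }
    return output.toString();
}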
I have a Python script wherein a JAR is called. After the JAR is called, two shell scripts are called. Initially I was doing this:
proc = subprocess.Popen(jar_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
proc.wait()
output, errors = proc.communicate()
proc = subprocess.Popen(prune_command, shell=True)
proc.wait()
proc = subprocess.call(push_command, shell=True)
I have to wait for the first two processes to finish so I use Popen() and the final one I can let it run in the background, so I call() it. I pass shell=True because I want the called shell scripts to have access to environment variables.
The above works; however, I don't get any logging from the JAR process. I've tried calling it this way:
proc = subprocess.call(jar_command)
This logs as I would expect, but the two shell scripts that follow are not executed. Initially I thought the logs just weren't going to stdout, but it turns out the scripts aren't being executed at all, i.e. not removing superfluous files or pushing to a database.
Why are the followup shell scripts being ignored?
If you are certain your shell scripts are not running at all, and everything works with the first version of the code, then the Java command must be deadlocking or not terminating correctly when run via the call() function.
You can validate that by adding a dummy file creation to your bash scripts. Put it on the first line of each script, so if the script is executed you'll get the dummy file created. If it's not created, that means the scripts weren't executed, probably due to something with the Java execution.
I would try a couple of things:
First, I would use Popen instead of call, and instead of wait(), use communicate():
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate.
communicate() returns a tuple (stdoutdata, stderrdata).
proc = subprocess.Popen(jar_command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
proc.communicate()
Make sure to check both streams for data (stdout and stderr); otherwise you might miss an error the Java process raises.
Next, I would try disabling buffering by passing bufsize=0 to Popen. That rules out the possibility that the problem is related to Python's buffering.
If neither option works, check whether an exception is raised by using check_call():
proc = subprocess.check_call(jar_command)
Run command with arguments. Wait for command to complete. If the return code was zero then return, otherwise raise CalledProcessError.
These options might hold the answer; if not, they should at least help with debugging. Feel free to comment on how this progresses.
Most likely, you are forgetting that the process's streams are in fact OS-level buffers with some finite capacity.
For example, if you run a process that produces a lot of output in PIPE mode, and you wait for it to finish before trying to consume whatever that process wrote to output, you have a deadlock:
The process has filled up the output buffer and is now blocked on writing more data to its output. Until somebody empties the buffer by reading from the pipe, the process cannot continue.
Your program is waiting for the subprocess to finish before you read the data from its buffer.
The correct way is to start a thread in your program that will "drain" the pipe constantly as the process is running and while your main thread is waiting. You must first start the process, then start the drain threads, then wait for process to finish.
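The same pattern is needed for the Java Process examples elsewhere in this thread. A minimal sketch of the drain-then-wait order in Java (a sketch only; the command and the printing are placeholders):
import java.io.*;

class Drain {
    // Runs a command, draining stdout on a background thread to avoid the pipe-buffer deadlock.
    static int runAndDrain(String... command) throws IOException, InterruptedException {
        Process proc = new ProcessBuilder(command)
                .redirectErrorStream(true)   // fold stderr into stdout so one drain thread suffices
                .start();
        Thread drainer = new Thread(() -> {
            try (BufferedReader r = new BufferedReader(new InputStreamReader(proc.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println(line);   // or collect into a buffer
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        drainer.start();            // start draining before waiting
        int exit = proc.waitFor();  // waiting cannot deadlock now: the buffer is being emptied
        drainer.join();
        return exit;
    }
}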
For differential diagnosis, check whether the subprocess will run fine with little output (i.e. as long as the buffer does not fill up, such as a line or two).
The documentation for subprocess has a note about this.
I'm trying to make a Java program that runs commands through cmd.exe and prints their output. To do this, I'm using this code:
cmdLine = Runtime.getRuntime().exec("cmd.exe");
cmdLineOut = new BufferedReader(new InputStreamReader(cmdLine.getInputStream()));
cmdLineIn = new PrintWriter(cmdLine.getOutputStream());
// ...
cmdLineIn.println(command);
cmdLineIn.flush();
String s = null;
while ((s = cmdLineOut.readLine()) != null)
    System.out.println(s);
However, when a command is given, its output is never printed.
EDIT: Solved
cmdLineOut.readLine() doesn't return null when there is no more output; it just blocks. Since readLine() blocks at the end and no further code gets executed, I put the readLine() printing loop in a separate thread.
If somebody wants to answer this better, go ahead.
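For completeness, a minimal sketch of that separate thread, assuming cmdLineOut is a field (so the lambda can capture it):
// Print everything cmd.exe writes, on a background thread, so the main
// thread stays free to send further commands.
Thread outputPrinter = new Thread(() -> {
    try {
        String line;
        while ((line = cmdLineOut.readLine()) != null) {
            System.out.println(line);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
});
outputPrinter.setDaemon(true);   // don't keep the JVM alive just for this thread
outputPrinter.start();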
You never actually execute the user's command, at least in the snippet you posted. Also, nearly all command prompt "commands" are actually just programs on the default program search path. You should probably just call Runtime.getRuntime().exec(user_command) for each command. This means that you will have to set up the input and output streams, like you have already done, for each command. You are right to do the reading in a separate thread, since attempting to read will block the current thread until there is actually input to read.
However, some commands, even under UNIX or Linux systems, are "built-in" (like cd), meaning that the command prompt (aka "shell") handles them internally. Your program will have to test the user input to see whether it is calling a built-in, and handle such calls specially. With that handled, your program should actually be portable to non-Windows computers. Of course, the user would use different commands (cp instead of copy), but the only part you would have to add is handling for other OS shells' lists of built-ins (or simply have your program implement a "cross-platform" set of built-ins - this is your shell program, you get to make the rules).
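A rough sketch of the exec-per-command approach (the whitespace split is naive and built-ins like cd still need the special handling described above):
import java.io.*;

class RunOne {
    // Runs one user command as its own process and prints its combined output.
    static int runCommand(String userCommand) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(userCommand.split("\\s+"))
                .redirectErrorStream(true)   // merge stderr into stdout so a single reader suffices
                .start();
        try (BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = out.readLine()) != null) {
                System.out.println(line);
            }
        }
        return p.waitFor();
    }
}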
I'm trying to run a Perl script from Java code and read its output with the following code:
String cmd = "/var/tmp/./myscript";
Process process = Runtime.getRuntime().exec(cmd);
BufferedReader stdin = new BufferedReader(new InputStreamReader(process.getInputStream()));
String line;
while ((line = stdin.readLine()) != null) {
    System.out.println(line);
}
But the code always hangs on the readLine().
I tried using
stdin.read();
instead, but that also hangs.
I tried modifying the cmd to
cmd = "perl /var/tmp/myscript";
and also
cmd = {"perl","/var/tmp/myscript"};
but that also hangs.
I tried reading stdin in a separate thread, and tried reading both stdin and stderr in separate threads. Still no luck.
I know there are many questions here dealing with Process.waitFor() hanging due to not reading the streams, as well as with BufferedReader.read() hanging. I tried all the suggested solutions; still no luck.
Of course, running the same script from the CLI itself writes output to standard output (the console) and exits with exit code 0.
I'm running on Centos 6.6.
Any help will be appreciated.
I presume that when run directly from the command line, the script runs to completion, producing the expected output, and terminates cleanly. If not, then fix your script first.
The readLine() invocation hanging almost surely means that neither a line terminator nor end-of-file is encountered. In other words, the method is blocked waiting for the script. Perhaps the script produces no output at all under these conditions, but does not terminate. This might happen, for instance, if it expects to read data from its own standard input before it proceeds. It might also happen if it is blocked on output to its stderr.
In the general case, you must read both a Process's stdout and its stderr, in parallel, via the InputStreams provided by getInputstream() and getErrorStream(). You should also handle the OutputStream provided by getOutputStream() by either feeding it the needed standard input data (also in parallel with the reading) or by closing it. You can substitute closing the process's streams for reading them if the particular process you are running does not emit data to those streams, and you normally should close the Process's OutputStream when you have no more data for it. You need to read the two InputStreams even if you don't care about what you read from them, as the process may block or fail to terminate if you do not. This is tricky to get right, but easier to do for specific cases than it is to write generalized support for. And anyway, there's ProcessBuilder, which goes some way toward an easier general-purpose interface.
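A hedged sketch of that advice applied to the script above: stdin is closed up front and stderr is merged into stdout, so a single read loop cannot be blocked by the other stream.
import java.io.*;

class RunPerlScript {
    public static void main(String[] args) throws IOException, InterruptedException {
        Process process = new ProcessBuilder("perl", "/var/tmp/myscript")
                .redirectErrorStream(true)    // fold stderr into stdout
                .start();
        process.getOutputStream().close();    // the script sees EOF if it ever reads its stdin

        try (BufferedReader stdout = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = stdout.readLine()) != null) {
                System.out.println(line);
            }
        }
        System.out.println("exit code: " + process.waitFor());
    }
}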
Try using ProcessBuilder like so:
String cmd = "/var/tmp/./myscript";
ProcessBuilder perlProcessBuilder = new ProcessBuilder(cmd);
perlProcessBuilder.redirectOutput(ProcessBuilder.Redirect.PIPE);
Process process = perlProcessBuilder.start();
BufferedReader stdin = new BufferedReader(new InputStreamReader(process.getInputStream()));
String line;
while ((line = stdin.readLine()) != null) {
    System.out.println(line);
}
From the ProcessBuilder javadoc (link)
public ProcessBuilder redirectOutput(ProcessBuilder.Redirect destination)
Sets this process builder's standard output destination. Subprocesses subsequently started by this object's start() method send their standard output to this destination.
If the destination is Redirect.PIPE (the initial value), then the standard output of a subprocess can be read using the input stream returned by Process.getInputStream(). If the destination is set to any other value, then Process.getInputStream() will return a null input stream.
Parameters:
destination - the new standard output destination
Returns:
this process builder
Throws:
IllegalArgumentException - if the redirect does not correspond to a valid destination of data, that is, has type READ
Since:
1.7
I am using the code below to read a sql statement from stdin on the command line:
BufferedReader in = null;
StringBuilder sb = new StringBuilder();
String line = null;
try {
    in = new BufferedReader(new InputStreamReader(System.in));
    while ((line = in.readLine()) != null) {
        sb.append(line);
    }
} finally {
    if (in != null) in.close();
}
My problem is that the application needs to run sometimes with data from stdin, and sometimes not (no piped input). If there is no input in the above code, in.readLine() blocks. Is there a way to rewrite this code so that it can still run if nothing is piped in?
UPDATE: The application is designed to expect piped data from the command line, not from the keyboard.
I don't think there is any way to check if you will eventually get another line of input. Note that your current code does terminate if the user closes the input stream (e.g., with ^D in a terminal).
BufferedReader.ready() checks if there is some data on the stream. Like T.J. mentioned, you might be unlucky and ask for data right before you actually receive it, and your user will be sad because you didn't answer their query.
Scanner.hasNextLine() is a blocking operation, so probably not what you're looking for.
You could have the user specify whether or not to read from System.in, for instance by using command line arguments.
You can use ready to test whether the stream has data ready. If ready returns false, the stream didn't have data ready when you called it (it might have received data a microsecond later). If you call ready and it returns true, you know you can call read without blocking. (You may not be able to call readLine without blocking, of course.)
Unix utilities that support optionally reading from stdin usually figure out what to do based on their command line. For example, cat will read from stdin if no files are named on the command line. You could do something along those lines--add a command-line flag or option or something to indicate whether the program should try to read anything from stdin.
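A small sketch of that convention: read the files named on the command line, and fall back to stdin only when none are given.
import java.io.*;
import java.nio.file.*;

class CatLike {
    public static void main(String[] args) throws IOException {
        if (args.length == 0) {
            // No file arguments: the caller explicitly chose to pipe data in.
            try (BufferedReader in = new BufferedReader(new InputStreamReader(System.in))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        } else {
            for (String name : args) {
                Files.readAllLines(Paths.get(name)).forEach(System.out::println);
            }
        }
    }
}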
Another approach is to redirect input from /dev/null if there is no input that you want the program to read. In that case, reading from stdin gives an immediate end-of-file indication. The Windows equivalent is to redirect from NUL.
If you really need your program to detect whether it can read from stdin without blocking, look at InputStream.available(). But if you want to support someone typing the program's input (or copy-pasting it into a terminal window, say), be aware that the InputStream won't show input as available until the user actually types something.
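A sketch of the available() check, with the caveat just mentioned (it also reports 0 if piped data simply hasn't arrived yet, so treat it as a heuristic):
import java.io.*;

class MaybeReadStdin {
    public static void main(String[] args) throws IOException {
        StringBuilder sb = new StringBuilder();
        // available() reports 0 for an interactive terminal until the user types something,
        // and may also report 0 if a slow producer hasn't written to the pipe yet.
        if (System.in.available() > 0) {
            try (BufferedReader in = new BufferedReader(new InputStreamReader(System.in))) {
                String line;
                while ((line = in.readLine()) != null) {
                    sb.append(line).append('\n');
                }
            }
        }
        System.out.println(sb.length() == 0 ? "(no piped input)" : sb.toString());
    }
}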
On Unix you can check whether /proc/self/fd/0 points to a file, a pipe, or a terminal.
$ ls -l /proc/self/fd/0
lrwx------ 1 peter peter 64 Apr 25 18:12 /proc/self/fd/0 -> /dev/pts/21
$ ls -l /proc/self/fd/0 < /dev/null
lr-x------ 1 peter peter 64 Apr 25 18:13 /proc/self/fd/0 -> /dev/null
$ echo Hello World | ls -l /proc/self/fd/0
lr-x------ 1 peter peter 64 Apr 25 18:13 /proc/self/fd/0 -> pipe:[139250355]
This will tell you if the input is a file, has been piped or is a terminal.
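The same check can be done from Java on Linux (a sketch only; /proc does not exist on Windows or macOS, and the terminal test below only covers the common device names):
import java.nio.file.*;

class StdinKind {
    public static void main(String[] args) throws Exception {
        // /proc/self/fd/0 is a symlink describing what stdin is connected to.
        String target = Files.readSymbolicLink(Paths.get("/proc/self/fd/0")).toString();
        if (target.startsWith("/dev/pts/") || target.equals("/dev/tty")) {
            System.out.println("stdin is a terminal");
        } else if (target.startsWith("pipe:")) {
            System.out.println("stdin is a pipe");
        } else {
            System.out.println("stdin is redirected from: " + target);
        }
    }
}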
It depends on how your program is run when there is no piped input. Generally a program will have an open stdin and all you can do is set a timer, wait a certain amount of time for input, and if the timer expires without input, assume there isn't going to be any. The alternative would be to arrange for the program to be run with stdin closed, in which case you'll get an EOF when you try to read.
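A sketch of the timer idea: read the first line on a background thread and give up after a timeout (the two-second value is arbitrary):
import java.io.*;
import java.util.concurrent.*;

class TimedStdinRead {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> firstLine = pool.submit(
                () -> new BufferedReader(new InputStreamReader(System.in)).readLine());
        try {
            String line = firstLine.get(2, TimeUnit.SECONDS);   // wait up to 2 seconds for input
            System.out.println("got: " + line);
        } catch (TimeoutException e) {
            System.out.println("no input arrived; assuming there is none");
        } finally {
            pool.shutdownNow();   // note: a thread blocked in a read cannot really be interrupted
        }
    }
}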
To close stdin in unix (including Mac OS) you'd write a bash script to launch your program:
#!/bin/bash
exec 0>&- # close stdin
java -jar yourProgram.jar # run your program
Of course, if you're launching from the bash command line anyway, you don't need a script:
prompt> java -jar yourProgram.jar 0>&-
But if your program is being run in a Java EE container, I don't know how you'd close stdin before launch and maybe you can't.