Starting multiple threads and having each exec() and then destroy() a running Java process results in some of the processes not being destroyed and still running after program exit. Here is some code that reproduces the issue. I noticed that the more threads you start, the more processes stay alive; and the more you sleep before destroy(), the fewer processes stay alive. (I used InfiniteLoop as an example. Any running process will do the trick.)
EDIT : Bug has been reported to Oracle, waiting for an answer. Feel free to share any knowledge/experiments on the subject.
for(int i = 0; i < 100; i++)
{
new Thread(new Runnable()
{
public void run()
{
try
{
Process p = Runtime.getRuntime().exec(new String[]{"java", "InfiniteLoop"});
Thread.sleep(1);
p.destroy();
}catch(IOException | InterruptedException e){e.printStackTrace();}
}
}).start();
}
Use p.waitFor(); before p.destroy();
this will ensure the completion of the previous process. I think your p.destroy() call gets invoked before the exec() command has actually performed its action. Therefore it becomes useless.
If subprocesses write anything to stdout or stderr (intentionally or not), that could cause trouble:
"Because some native platforms only provide limited buffer size for
standard input and output streams, failure to promptly write the input
stream or read the output stream of the subprocess may cause the
subprocess to block, and even deadlock."
Source: http://www.javaworld.com/jw-12-2000/jw-1229-traps.html
The whole article is IMO worth reading if you need to use Runtime.exec().
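To guard against that buffering pitfall, a common pattern is to merge stderr into stdout and drain the single stream before calling waitFor(). Here is a minimal sketch of that pattern (hedged assumption: a `java` binary is on the PATH, and `java -version` is only a stand-in for any child that produces output):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class DrainDemo {
    // Runs a command, drains its merged stdout/stderr so the child can never
    // block on a full pipe, then returns the exit code.
    static int runAndDrain(String... cmd) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(cmd);
        pb.redirectErrorStream(true); // merge stderr into stdout: one stream to drain
        Process p = pb.start();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                System.out.println(line); // consume promptly so the pipe never fills
            }
        }
        return p.waitFor(); // safe now: all output has already been read
    }

    public static void main(String[] args) throws Exception {
        System.out.println("exit=" + runAndDrain("java", "-version"));
    }
}
```

The key point is that the stream is fully consumed before waitFor(); doing it the other way around is exactly where the deadlock described in the quote can appear.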
This is simply because, before the threads execute the destroy call, your main program terminates, and all the associated threads with it, leaving the started processes running. To verify this, simply add a System.out call after the destroy and you will find it is not executed. To overcome this, add a Thread.sleep at the end of your main method and you will not have the orphaned processes. The code below does not leave any process running.
public class ProcessTest {
public static final void main (String[] args) throws Exception {
for(int i = 0; i < 100; i++) {
new Thread(new Runnable()
{
public void run() {
try {
Process p = Runtime.getRuntime().exec(new String[]{"java", "InfiniteLoop"});
Thread.sleep(1);
p.destroy();
System.out.println("Destroyed");
}catch(IOException e) {
System.err.println("exception: " + e.getMessage());
} catch(InterruptedException e){
System.err.println("exception: " + e.getMessage());
}
}
}).start();
}
Thread.sleep(1000);
}
}
You should close the input/output/error streams to the process. We saw some issues in the past where the forked process was not completing properly due to those streams not being closed (even if they weren't being used).
An exemplary solution:
p.destroy();
p.getInputStream().close();
p.getOutputStream().close();
p.getErrorStream().close();
I believe that, according to the link, a distinct process is spawned by the operating system in response to this call. This process has a lifetime independent of your Java program and the threads within it, so you would expect it to continue running after your program has exited. I just tried it on my machine and it appeared to work as expected:
import java.io.*;
class Mp {
public static void main(String []args) {
for(int i = 0; i < 100; i++) {
new Thread(new Runnable() {
public void run() {
try {
System.out.println("1");
Process p = Runtime.getRuntime().exec
(new String[]{"notepad", ""});
System.out.println("2");
Thread.sleep(5);
System.out.println("3");
p.destroy();
System.out.println("4");
}
catch(IOException | InterruptedException e) {
e.printStackTrace();
}
}
}).start();
}
}
}
This is not an answer; I am posting complete source for my own attempt at recreating this problem as per discussion in question comments.
I cannot reproduce this problem on Ubuntu 12.04; OpenJDK 6b_27 (however, see below).
ProcessTest.java:
import java.io.*;
public class ProcessTest {
public static final void main (String[] args) throws Exception {
for(int i = 0; i < 100; i++) {
new Thread(new Runnable()
{
public void run() {
try {
Process p = Runtime.getRuntime().exec(new String[]{"java", "InfiniteLoop"});
Thread.sleep(1);
p.destroy();
}catch(IOException e) {
System.err.println("exception: " + e.getMessage());
} catch(InterruptedException e){
System.err.println("exception: " + e.getMessage());
}
}
}).start();
}
}
}
InfiniteLoop.java
public class InfiniteLoop {
public static final void main (String[] args) {
while (true) ;
}
}
I cannot reproduce the issue where processes remain running after the JVM terminates. However, if I add a long delay in the main thread after starting the threads but before returning from main, I do see roughly a dozen running java processes that stick around (although they are terminated when the main program terminates).
Update:
I just had it leave about 5 processes running after it terminated. It doesn't always happen. Weird. I want to know more about this too. I have a hunch that it has something to do with destroying the process too quickly or some kind of race condition; maybe java forks something off or does something to create a new process that destroy() doesn't take care of if called too quickly / at the wrong time.
I found an old bug (but it is not marked resolved) stating that if a process spawns subprocesses, they may not be killed by destroy(): bugs.sun.com/bugdatabase/view_bug.do?bug_id=4770092 What version of the JDK are you using?
Here's another reference to what looks like a similar issue: Java tool/method to force-kill a child process. And I want to apologize if I've only added confusion to your life; I don't actually use Process that much and am not familiar with the quirks. Hopefully somebody else will step in with a definitive answer. It seems like it doesn't handle subprocesses well, and I'm presuming Java forks something off. That's all I've got.
There is a race condition between the time Runtime.exec kicks off a new thread to start a Process and when you tell that process to destroy itself.
I'm on a linux machine so I will use the UNIXProcess.class file to illustrate.
Runtime.exec(...) will create a new ProcessBuilder and start it which on a unix machine creates a new UNIXProcess instance. In the constructor of UNIXProcess there is this block of code which actually executes the process in a background (forked) thread:
java.security.AccessController.doPrivileged(
new java.security.PrivilegedAction() {
public Object run() {
Thread t = new Thread("process reaper") {
public void run() {
try {
pid = forkAndExec(prog,
argBlock, argc,
envBlock, envc,
dir,
redirectErrorStream,
stdin_fd, stdout_fd, stderr_fd);
} catch (IOException e) {
gate.setException(e); /*remember to rethrow later*/
gate.exit();
return;
}
java.security.AccessController.doPrivileged(
new java.security.PrivilegedAction() {
public Object run() {
stdin_stream = new BufferedOutputStream(new
FileOutputStream(stdin_fd));
stdout_stream = new BufferedInputStream(new
FileInputStream(stdout_fd));
stderr_stream = new FileInputStream(stderr_fd);
return null;
}
});
gate.exit(); /* exit from constructor */
int res = waitForProcessExit(pid);
synchronized (UNIXProcess.this) {
hasExited = true;
exitcode = res;
UNIXProcess.this.notifyAll();
}
}
};
t.setDaemon(true);
t.start();
return null;
}
});
Notice that the background thread sets the field pid which is the UNIX process id. This will be used by destroy() to tell the OS which process to kill.
Because there is no way to make sure that this background thread has run by the time destroy() is called, we may try to kill the process before it has run, OR we may try to kill the process before the pid field has been set; pid is uninitialized and therefore 0. So I think calling destroy() too early will do the equivalent of a kill -9 0.
There is even a comment in the UNIXProcess destroy() that alludes to this but only considers calling destroy after the process has already finished, not before it has started:
// There is a risk that pid will be recycled, causing us to
// kill the wrong process! So we only terminate processes
// that appear to still be running. Even with this check,
// there is an unavoidable race condition here, but the window
// is very small, and OSes try hard to not recycle pids too
// soon, so this is quite safe.
The pid field is not even marked as volatile so we may not even see the most recent value all the time.
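On Java 8 and later this start-up race is handled inside the library, and `waitFor(timeout)`, `isAlive()`, and `destroyForcibly()` give a sanctioned way to wait a bounded time before killing. A minimal sketch of that pattern (not the UNIXProcess code discussed above; `java -version` is only a placeholder child that happens to exit quickly):

```java
import java.util.concurrent.TimeUnit;

public class SafeDestroy {
    public static void main(String[] args) throws Exception {
        // Spawn a child; "java -version" stands in for any command on the PATH.
        Process p = new ProcessBuilder("java", "-version").inheritIO().start();
        // Wait up to 5 seconds for a normal exit before resorting to destroy():
        if (!p.waitFor(5, TimeUnit.SECONDS)) {
            p.destroy();                         // ask politely (SIGTERM on Unix)
            if (!p.waitFor(1, TimeUnit.SECONDS)) {
                p.destroyForcibly();             // then insist (SIGKILL on Unix)
            }
        }
        System.out.println("alive=" + p.isAlive());
    }
}
```

Because the JDK itself tracks the pid once start() returns, this sequence avoids the uninitialized-pid window described above.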
I had a very similar issue and the problem with destroy() not working was manifesting even with a single thread.
Process process = processBuilder(ForeverRunningMain.class).start();
long endTime = System.currentTimeMillis() + TIMEOUT_MS;
while (System.currentTimeMillis() < endTime) {
sleep(50);
}
process.destroy();
The process was not always destroyed if TIMEOUT_MS was too low. Adding an additional sleep() before destroy() fixed it (even though I don't have an explanation why):
Thread.sleep(300);
process.destroy();
Related
I have a Java program as follows:
public class foo{
public static void main(String[] args) throws Exception{
Thread t = new Thread(
new Runnable() {
public void run() {
try{System.in.read();}catch(Exception e){}
}
}
);
t.setDaemon(true);
t.start();
Thread.sleep(10); // Make sure it hits the read() call
t.interrupt();
t.stop();
System.exit(0);
}
}
Running this (time java foo) with the System.in.read call present takes ~480 ms to exit, while running it with the System.in.read call commented out takes ~120 ms to exit.
I had thought that once the main thread reaches the end, the program terminates, but clearly there's another ~300 ms lag hanging around (you can see this by adding a println after the Thread.sleep). I tried t.interrupt, t.stop, and System.exit, which should stop things "immediately", but none of them seems able to make the program skip its ~350 ms of extra exit latency; they all seemingly do nothing.
Anyone know why that is the case, and if there's anything I can do to avoid this latency?
As it turns out besides collecting bad news, there is some workaround too, see the very bottom of this post.
This is not a full reply, just a check that popped into my mind, involving sockets.
Based on this and the System.in.read() experiment I have reproduced too, the delay may be the cost of having an outstanding synchronous I/O request towards the OS. (Edit: actually it is an explicit wait which kicks in when threads do not exit normally when the VM is shutting down, see below the horizontal line)
(I am ending the thread(s) with a while(true);, so it (they) never exits prematurely)
if you create a bound socket final ServerSocket srv=new ServerSocket(0);, exit remains 'normal'
if you srv.accept();, you suddenly have the extra wait
if you create an "inner" daemon thread with Socket s=new Socket("localhost",srv.getLocalPort());, and Socket s=srv.accept(); outside, it becomes 'normal' again
however if you invoke s.getInputStream().read(); on any of them, you have the extra wait again
if you do it with both sockets, extra wait extends a bit longer (much less than 300, but consistent 20-50 ms for me)
having the inner thread, it is also possible to get stuck on the new Socket(...); line, if accept() is not invoked outside. This also has the extra wait
So having sockets (just bound or even connected) is not a problem, but waiting for something to happen (accept(), read()) introduces something.
Code (this variant hits the two s.getInputSteam().read()-s)
import java.net.*;
public class foo{
public static void main(String[] args) throws Exception{
Thread t = new Thread(
new Runnable() {
public void run() {
try{
final ServerSocket srv=new ServerSocket(0);
Thread t=new Thread(new Runnable(){
public void run(){
try{
Socket s=new Socket("localhost",srv.getLocalPort());
s.getInputStream().read();
while(true);
}catch(Exception ex){}
}});
t.setDaemon(true);
t.start();
Socket s=srv.accept();
s.getInputStream().read();
while(true);
}catch(Exception ex){}
}
}
);
t.setDaemon(true);
t.start();
Thread.sleep(1000);
}
}
I also tried what appears in the comments: having access (I just used static fields) to ServerSocket srv, int port, Socket s1, s2, it is faster to kill things on the Java side: close() on srv/s1/s2 shuts down accept() and read() calls very fast, and for shutting down accept() in particular, a new Socket("localhost", port) also works (though it has a race condition when an actual connection arrives at the same time). A connection attempt can be shut down with close() too, but an object is needed for that (so s1 = new Socket(); s1.connect(new InetSocketAddress("localhost", srv.getLocalPort())); has to be used instead of the connecting constructor).
TL;DR: does it matter to you? Not at all: I tried System.in.close(); and it had absolutely no effect on System.in.read();.
New bad news. When a thread is in native code, and that native code does not check for 'safepoint', one of the final steps of the shutdown procedure waits for 300 milliseconds, minimum:
// [...] In theory, we
// don't have to wait for user threads to be quiescent, but it's always
// better to terminate VM when current thread is the only active thread, so
// wait for user threads too. Numbers are in 10 milliseconds.
int max_wait_user_thread = 30; // at least 300 milliseconds
And it is waiting in vain, because the thread is executing a simple fread on /proc/self/fd/0
While read (and recv too) is wrapped in a magical RESTARTABLE looping macro (https://github.com/openjdk-mirror/jdk7u-hotspot/blob/master/src/os/linux/vm/os_linux.inline.hpp#L168; read is a bit lower, as a wrapper for fread in yet another file), which seems to be aware of EINTR:
#define RESTARTABLE(_cmd, _result) do { \
_result = _cmd; \
} while(((int)_result == OS_ERR) && (errno == EINTR))
[...]
inline size_t os::restartable_read(int fd, void *buf, unsigned int nBytes) {
size_t res;
RESTARTABLE( (size_t) ::read(fd, buf, (size_t) nBytes), res);
return res;
}
, but that is not happening anywhere, plus there are some comments here and there that they did not want to interfere with libpthread's own signalling and handlers. According to some questions here on SO (like How to interrupt a fread call?), it might not work anyway.
On the library side, readSingle (https://github.com/openjdk-mirror/jdk7u-jdk/blob/master/src/share/native/java/io/io_util.c#L38) is the method which has been invoked:
jint
readSingle(JNIEnv *env, jobject this, jfieldID fid) {
jint nread;
char ret;
FD fd = GET_FD(this, fid);
if (fd == -1) {
JNU_ThrowIOException(env, "Stream Closed");
return -1;
}
nread = (jint)IO_Read(fd, &ret, 1);
if (nread == 0) { /* EOF */
return -1;
} else if (nread == JVM_IO_ERR) { /* error */
JNU_ThrowIOExceptionWithLastError(env, "Read error");
} else if (nread == JVM_IO_INTR) {
JNU_ThrowByName(env, "java/io/InterruptedIOException", NULL);
}
return ret & 0xFF;
}
which is capable of handling 'being interrupted' on JRE level, but that part just will not get executed as fread does not return (in case of everything non-Windows, that IO_Read is #define-d to JVM_Read, and that is just a wrapper for the restartable_read mentioned earlier)
So, it is by design.
One thing which works is to provide your own System.in (despite being final, there is a setIn() method for this purpose, doing the nonstandard swap in JNI). But it involves polling, so it is a bit ugly:
import java.io.*;
public class foo{
public static void main(String[] args) throws Exception{
System.setIn(new InputStream() {
InputStream in=System.in;
@Override
public int read() throws IOException {
while(in.available()==0)try{Thread.sleep(100);}catch(Exception ex){}
return in.read();
}
});
Thread t = new Thread(
new Runnable() {
public void run() {
try{
System.out.println(System.in.read());
while(true);
}catch(Exception ex){}
}
}
);
t.setDaemon(true);
t.start();
Thread.sleep(1000);
}
}
With the Thread.sleep() inside InputStream.read() you can balance between being unresponsive and cooking the CPU. Thread.sleep() correctly checks for being shut down, so even if you raise it to 10000, the process will exit fast.
As others have commented, a thread interrupt does not cause a blocking I/O call to stop immediately. The wait is happening because System.in is waiting on input from a file (stdin). If you were to supply the program with some initial input, you would notice it finishes much faster:
$ time java -cp build/libs/testjava.jar Foo
real 0m0.395s
user 0m0.040s
sys 0m0.008s
$ time java -cp build/libs/testjava.jar Foo <<< test
real 0m0.064s
user 0m0.036s
sys 0m0.012s
If you want to avoid waiting on I/O threads, then you can check the availability first. You can then perform a wait using a method which is interruptable such as Thread.sleep:
while (true) {
try {
if (System.in.available() > 0) {
System.in.read();
}
Thread.sleep(400);
}
catch(Exception e) {
}
}
This program will exit quickly each time:
$ time java -cp build/libs/testjava.jar Foo
real 0m0.065s
user 0m0.040s
sys 0m0.008s
$ time java -cp build/libs/testjava.jar Foo <<< test
real 0m0.065s
user 0m0.040s
sys 0m0.008s
I am trying to create a small program using Java to fork two new child processes. It's for a beginner's programming class whose tutorials are in C, so I'm looking for some help to understand what this code tidbit is trying to do and what is the best way to adapt it to a Java-based program (to eventually build on it).
#include <sys/types.h>
#include <stdio.h>
#include <unistd.h>
int main()
{
pid_t pid;
/*fork a child process*/
pid = fork();
if (pid < 0) { /*error occurred*/
fprintf(stderr, "Fork Failed");
return 1;
}
else if (pid == 0) {/*child process */
execlp("/bin/ls", "ls", NULL);
}
else { /*parent process*/
/*parent will wait for the child to complete */
wait(NULL);
printf("Child Complete");
}
return 0;
}
UPDATE:
I am supposed to attach an id to each child process and its parent, printing the info when the child process executes and printing a termination notification when it terminates. I now see that this bit of code above lists the contents of the current directory and prints "Child Complete" when the process has terminated. Is the listing of the entire directory considered one process? If so, where/how does the second new child process come into the picture?
Well, to answer what the program does:
When fork() executes, you get two processes.
They do exactly the same thing, except that one of them (the child) gets 0 returned from fork(), while the parent gets a positive value (the child's process id) from fork(). A negative return from fork() means it failed.
So by looking at the return from fork(), the process can determine if it's child or parent.
In your case, you let the child execute the "ls" command, which lists files in current directory.
You let the parent wait() for all its child processes to finish. Then you say "Child complete".
You can try removing the wait() system call, to see clearer that the two processes actually run concurrently.
Have a look at the man pages for ps(1), ls(1), fork(2) and exec(3).
In Java, that might look something like -
public static void main(String[] args) {
try {
Process p = Runtime.getRuntime().exec("/bin/ls");
final InputStream is = p.getInputStream();
Thread t = new Thread(new Runnable() {
public void run() {
InputStreamReader isr = new InputStreamReader(is);
int ch;
try {
while ((ch = isr.read()) != -1) {
System.out.print((char) ch);
}
} catch (IOException e) {
e.printStackTrace();
}
}
});
t.start();
p.waitFor();
t.join();
System.out.println("Child Complete");
} catch (Exception e) {
e.printStackTrace();
}
}
Hi every Java developer,
I have just a simple question about the JVM: I want to know how long the JVM will wait for a thread.
For Example, take a look at this code :
public static void main(String[] args) throws Exception {
Process p = Runtime.getRuntime().exec("myShellCommand -p1 v1 -p2 v2");
p.waitFor();
System.out.println("End ....:)");
}
Suppose that "myShellCommand" runs forever. What happens then? Does the JVM also keep waiting forever?
The waitFor() method causes the current thread to wait, if necessary, until the process represented by this Process object has terminated. This method returns immediately if the subprocess has already terminated. If the subprocess has not yet terminated, the calling thread will be blocked until the subprocess exits. (From the Javadoc.)
Based on the documentation, I think that it will run forever.
In your case, the JVM would continue to wait for the launched process to terminate.
However, you could launch another "process monitor" thread and that could wait/sleep for a reasonable time and then interrupt the main thread.
As per the javadoc of the waitFor method
if the current thread is interrupted by another thread while it is
waiting, then the wait is ended and an InterruptedException is thrown.
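That interruption behaviour can be sketched as follows. This is a Unix-only example under stated assumptions: `sleep 60` merely stands in for a never-ending `myShellCommand`, and the timings are illustrative:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class InterruptWaitFor {
    public static void main(String[] args) throws Exception {
        // Unix-only: "sleep 60" stands in for a long-running child process.
        final Process p = new ProcessBuilder("sleep", "60").start();
        final AtomicBoolean wasInterrupted = new AtomicBoolean(false);
        Thread waiter = new Thread(new Runnable() {
            public void run() {
                try {
                    p.waitFor();              // blocks until the child exits...
                } catch (InterruptedException e) {
                    wasInterrupted.set(true); // ...or until someone interrupts us
                    p.destroy();              // clean up the child ourselves
                }
            }
        });
        waiter.start();
        Thread.sleep(200);  // give the waiter time to block inside waitFor()
        waiter.interrupt(); // ends the wait with an InterruptedException
        waiter.join();
        System.out.println("interrupted=" + wasInterrupted.get());
    }
}
```

Note that interrupting the waiter does not kill the child by itself; the catch block has to call destroy() explicitly.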
@Kaoutar
If your requirement is to launch the process and then exit the JVM after a reasonable time period, e.g. 10 minutes, then you could do something like this:
public static void main(String[] args) throws InterruptedException
{
Thread subProcessThread = new Thread(new Runnable()
{
@Override
public void run()
{
Process p;
try
{
p = Runtime.getRuntime().exec("myShellCommand -p1 v1 -p2 v2");
p.waitFor();
System.out.println("End ....:)");
}
catch (IOException e)
{
e.printStackTrace();
}
catch (InterruptedException e)
{
e.printStackTrace();
}
}
});
subProcessThread.start();
long waitTimeInMillis = 10 * 60 * 1000;
subProcessThread.join(waitTimeInMillis);
}
I want to run an external program repeatedly, N times, waiting for its output each time and processing it. Since it's too slow to run sequentially, I tried multithreading.
The code looks like this:
public class ThreadsGen {
public static void main(String[] pArgs) throws Exception {
for (int i =0;i < N ; i++ )
{
new TestThread().start();
}
}
static class TestThread extends Thread {
public void run() {
String cmd = "programX";
String arg = "exArgs";
Process pr;
try {
pr = new ProcessBuilder(cmd, arg).start();
} catch (IOException e1) {
e1.printStackTrace();
return; // no process was started, nothing to wait for
}
try {
pr.waitFor();
} catch (InterruptedException e) {
e.printStackTrace();
}
//process output files from programX.
//...
}
}
}
However, it seems to me that only one thread is running at a time (judging by CPU usage).
What I want is to get all threads (except the one that is waiting for programX to finish) working. What's wrong with my code?
Is it because pr.waitFor(); makes the main thread wait on each subthread?
The waitFor() calls are not your problem here (and are actually causing the spawned Threads to wait on the completion of the spawned external programs rather than the main Thread to wait on the spawned Threads).
There are no guarantees around when Java will start the execution of Threads. It is quite likely, therefore, that if the external program(s) that you are running finish quickly then some of the Threads running them will complete before all the programs are launched.
Also note that CPU usage is not necessarily a good guide to concurrent execution, as your Java program is doing nothing but waiting for the external programs to complete. More usefully, you could look at the number of executed programs (using ps, Task Manager, or whatever).
Isn't yours the same problem as in this thread: How to wait for all threads to finish, using ExecutorService?
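A minimal sketch in the spirit of that linked ExecutorService approach: submit each run as a task, then wait for all of them to finish before processing results. Here `java -version` is only a hypothetical stand-in for programX, and the pool size of 4 is an arbitrary choice:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRuns {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Callable<Integer>> jobs = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            jobs.add(new Callable<Integer>() {
                public Integer call() throws Exception {
                    Process p = new ProcessBuilder("java", "-version")
                            .redirectErrorStream(true).start();
                    // drain output so the child cannot block on a full pipe
                    while (p.getInputStream().read() != -1) { }
                    return p.waitFor();
                }
            });
        }
        int failures = 0;
        // invokeAll() blocks until every task has completed
        for (Future<Integer> f : pool.invokeAll(jobs)) {
            if (f.get() != 0) failures++;
        }
        pool.shutdown();
        System.out.println("failures=" + failures);
    }
}
```

Each waitFor() only blocks its own pool thread, so the N external programs genuinely run concurrently.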
Sub-processes in Java are very expensive. Each process is usually supported by a NUMBER of threads:
a thread to host the process (on JDK 1.6 on Linux)
a thread to read/print/ignore the input stream
another thread to read/print/ignore the error stream
one more thread to handle timeouts, monitor, and kill the sub-process from your application
the business-logic thread, which blocks until the sub-process returns
The number of threads gets out of control if you have a pool of threads forking sub-processes to do tasks. As a result, there may be more than double the number of concurrent threads at peak.
In many cases, we fork a process just because nobody was able to write the JNI to call a native function missing from the JDK (e.g. chmod, ln, ls), to trigger a shell script, etc.
Some threads can be saved, but some should run to prevent the worst case (a buffer overrun on the input stream).
How can I reduce the overhead of creating sub-processes in Java to the minimum?
I am thinking of NIO for the stream handles, combining and sharing threads, lowering background thread priority, and re-using processes. But I have no idea whether they are possible or not.
JDK 7 will address this issue and provide a new API in ProcessBuilder, redirectOutput/redirectError, to redirect stdout/stderr.
However, the bad news is that they forgot to provide a "Redirect.toNull", which means you will want to do something like "if (*nix) /dev/null elsif (win) NUL".
It is unbelievable that an NIO.2 API for Process is still missing; but I think redirectOutput plus NIO.2's AsynchronousChannel will help.
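In the meantime, the null device can be named explicitly. A Unix-only sketch of the JDK 7 redirect API (on Windows the file would be "NUL" instead of "/dev/null"; `java -version` is just a placeholder command):

```java
import java.io.File;
import java.lang.ProcessBuilder.Redirect;

public class DiscardOutput {
    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder("java", "-version");
        // Emulate the missing "Redirect.toNull": send both streams to the
        // platform's null device, so no reader threads are needed at all.
        File devNull = new File("/dev/null");
        pb.redirectOutput(Redirect.to(devNull));
        pb.redirectError(Redirect.to(devNull));
        Process p = pb.start();
        System.out.println("exit=" + p.waitFor());
    }
}
```

With the streams redirected by the OS, the JVM never opens pipes for them, so the "swallower" threads described below become unnecessary.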
I have created an open source library that allows non-blocking I/O between java and your child processes. The library provides an event-driven callback model. It depends on the JNA library to use platform-specific native APIs, such as epoll on Linux, kqueue/kevent on MacOS X, or IO Completion Ports on Windows.
The project is called NuProcess and can be found here:
https://github.com/brettwooldridge/NuProcess
To answer your title (I don't understand the description), I assume you mean discarding the shell subprocess output; check these SO questions:
platform-independent /dev/null output sink for Java
Is there a Null OutputStream in Java?
Or you can close stdout and stderr for the command being executed under Unix:
command > /dev/null 2>&1
You don't need any extra threads to run a subprocess in java, although handling timeouts does complicate things a bit:
import java.io.IOException;
import java.io.InputStream;
public class ProcessTest {
public static void main(String[] args) throws IOException {
long timeout = 10;
ProcessBuilder builder = new ProcessBuilder("cmd", "a.cmd");
builder.redirectErrorStream(true); // so we can ignore the error stream
Process process = builder.start();
InputStream out = process.getInputStream();
long endTime = System.currentTimeMillis() + timeout;
while (isAlive(process) && System.currentTimeMillis() < endTime) {
int n = out.available();
if (n > 0) {
// out.skip(n);
byte[] b = new byte[n];
out.read(b, 0, n);
System.out.println(new String(b, 0, n));
}
try {
Thread.sleep(10);
}
catch (InterruptedException e) {
}
}
if (isAlive(process)) {
process.destroy();
System.out.println("timeout");
}
else {
System.out.println(process.exitValue());
}
}
public static boolean isAlive(Process p) {
try {
p.exitValue();
return false;
}
catch (IllegalThreadStateException e) {
return true;
}
}
}
You could also play with reflection as in Is it possible to read from a InputStream with a timeout? to get a NIO FileChannel from Process.getInputStream(), but then you'd have to worry about different JDK versions in exchange for getting rid of the polling.
nio won't work, since when you create a process you can only access the OutputStream, not a Channel.
You can have 1 thread read multiple InputStreams.
Something like,
import java.io.InputStream;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
class MultiSwallower implements Runnable {
private List<InputStream> streams = new CopyOnWriteArrayList<InputStream>();
public void addStream(InputStream s) {
streams.add(s);
}
public void removeStream(InputStream s) {
streams.remove(s);
}
public void run() {
byte[] buffer = new byte[1024];
while(true) {
boolean sleep = true;
for(InputStream s : streams) {
try {
//available tells you how many bytes you can read without blocking
while(s.available() > 0) {
//do what you want with the output here
s.read(buffer, 0, Math.min(s.available(), 1024));
sleep = false;
}
} catch(IOException e) {
//the stream is dead, stop watching it
removeStream(s);
}
}
if(sleep) {
//if nothing is available now, sleep
try {
Thread.sleep(50);
} catch(InterruptedException e) {
return;
}
}
}
}
}
You can pair the above class with another class that waits for the Processes to complete, something like,
class ProcessWatcher implements Runnable {
private MultiSwallower swallower = new MultiSwallower();
private ConcurrentMap<Process, InputStream> proceses = new ConcurrentHashMap<Process, InputStream>();
public ProcessWatcher() {
}
public void startThreads() {
new Thread(this).start();
new Thread(swallower).start();
}
public void addProcess(Process p) {
swallower.addStream(p.getInputStream());
proceses.put(p, p.getInputStream());
}
@Override
public void run() {
while(true) {
for(Process p : proceses.keySet()) {
try {
//will throw if the process has not completed
p.exitValue();
InputStream s = proceses.remove(p);
swallower.removeStream(s);
} catch(IllegalThreadStateException e) {
//process not completed, ignore
}
}
//wait before checking again
try {
Thread.sleep(50);
} catch(InterruptedException e) {
return;
}
}
}
}
As well, you don't need to have 1 thread for each error stream if you use ProcessBuilder.redirectErrorStream(true), and you don't need 1 thread for reading the process input stream, you can simply ignore the input stream if you are not writing anything to it.
Since you mention, chmod, ln, ls, and shell scripts, it sounds like you're trying to use Java for shell programming. If so, you might want to consider a different language that is better suited to that task such as Python, Perl, or Bash. Although it's certainly possible to create subprocesses in Java, interact with them via their standard input/output/error streams, etc., I think you will find a scripting language makes this kind of code less verbose and easier to maintain than Java.
Have you considered using a single long-running helper process written in another language (maybe a shell script?) that will consume commands from java via stdin and perform file operations in response?
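That helper-process idea can be sketched like this. Unix-only, and the "protocol" here is just raw shell commands piped to a single long-lived /bin/sh, which is an assumption for illustration rather than a recommended design:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;

public class HelperProcess {
    public static void main(String[] args) throws Exception {
        // One long-lived shell consumes many commands via its stdin,
        // instead of forking a fresh subprocess per command.
        Process sh = new ProcessBuilder("/bin/sh").redirectErrorStream(true).start();
        PrintWriter in = new PrintWriter(new OutputStreamWriter(sh.getOutputStream()), true);
        BufferedReader out = new BufferedReader(new InputStreamReader(sh.getInputStream()));
        in.println("echo one");   // each println is one "request" to the helper
        in.println("echo two");
        in.println("exit");       // tell the helper to shut down
        String line;
        while ((line = out.readLine()) != null) {
            System.out.println(line);
        }
        System.out.println("exit=" + sh.waitFor());
    }
}
```

The fork cost is paid once at startup; every subsequent command is just a pipe write, which is exactly the saving the suggestion above is after.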