I have a Java socket server I wrote to keep a web cluster's code base in sync. When I run the init.d script from a shell login like so:
[root@web11 www]# /etc/init.d/servermngr start
and log out, everything works fine. But if the server reboots, or I run the init.d script through service like so:
[root@web11 www]# service servermngr start
then none of the exec() commands passed to the socket server get executed on the Linux box. I am assuming it has to do with the JVM having no real shell. If I log in and run
[root@web11 www]# /etc/init.d/servermngr start
...and log out, everything runs fine and all CVS commands are executed.
Another note: when run as a service, the socket server still responds to status checks, so it is running.
Here is the init.d script:
#!/bin/sh
# chkconfig: 2345 95 1
# description: Starts Daemon Using ServerManager.jar.
#
# Source function library.
. /etc/init.d/functions
start () {
    echo -n $"Starting ServerManager: "
    # start daemon
    cd /www/servermanager/
    daemon java -jar ServerManager.jar > /www/logs/ServerManager.log &
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch /var/lock/subsys/cups
    echo "";
    return $RETVAL
}
stop () {
    # stop daemon
    echo -n $"Stopping $prog: "
    kill `ps uax | grep -i "java -jar ServerManager.ja[r]" | head -n 1 | awk '{print $2}'`
    RETVAL=$?
    echo "";
    return $RETVAL
}
restart() {
    stop
    start
}
case $1 in
    start)
        start
        ;;
    stop)
        stop
        ;;
    *)
        echo $"Usage: servermngr {start|stop}"
        exit 3
esac
exit $RETVAL
And the Java code responsible for actually executing the commands:
// Build cmd Array of Strings
String[] cmd = {"/bin/sh", "-c", "cd /www;cvs up -d htdocs/;cvs up -d phpinclude/"};
final Process process;
try {
    process = Runtime.getRuntime().exec(cmd);
    BufferedReader buf = new BufferedReader(new InputStreamReader(
            process.getInputStream()));
    // Since this is a CVS UP we return the Response to PHP
    if (input.matches(".*(cvs up).*")) {
        String line1;
        out.println("cvsupdate-start");
        System.out.println("CVS Update" + input);
        while ((line1 = buf.readLine()) != null) {
            out.println(line1);
            System.out.println("CVS:" + line1);
        }
        out.println("cvsupdate-end");
    }
} catch (IOException ex) {
    System.out.println("IOException on Run cmd " + CommandFactory.class.getName() + " " + ex);
    Logger.getLogger(CommandFactory.class.getName()).log(Level.SEVERE, null, ex);
}
Thanks for any help.
What is the command you are trying to run? cd is not a program, and if you have ';' you have multiple commands. You can only run one program!
Are you starting the process as root? Which shell (bash?) and what version is running on the system? You may want to give csh a whirl just to rule out issues with the shell itself. I'd also suggest chaining the commands with '&&' instead of ';'. Finally, you may find it easier to create a shell script which contains all your commands and is called by your Java process. You may also want to investigate nohup and check /etc/security/limits.
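Applied to the asker's exec() call, the '&&' suggestion would look roughly like this (a sketch, not a tested fix; the command is the one from the question, only the separators change):
// Chain with '&&' so the CVS updates only run if the cd succeeds.
String[] cmd = {"/bin/sh", "-c",
        "cd /www && cvs up -d htdocs/ && cvs up -d phpinclude/"};
Process process = Runtime.getRuntime().exec(cmd);
// Remember to drain process.getErrorStream() as well; unread CVS warnings
// can fill the pipe buffer and stall the child.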
You might be happier using http://akuma.kohsuke.org/ to help you with this stuff, or at least Apache Commons Exec.
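For the Commons Exec route, a minimal sketch might look like the following (class names are from org.apache.commons.exec; the command and working directory come from the question, everything else is illustrative):
import java.io.ByteArrayOutputStream;
import java.io.File;
import org.apache.commons.exec.CommandLine;
import org.apache.commons.exec.DefaultExecutor;
import org.apache.commons.exec.PumpStreamHandler;

public class CvsUpdate {
    public static void main(String[] args) throws Exception {
        // Run "cvs up -d htdocs/" in /www and capture its combined output.
        ByteArrayOutputStream output = new ByteArrayOutputStream();
        DefaultExecutor executor = new DefaultExecutor();
        executor.setWorkingDirectory(new File("/www"));
        executor.setStreamHandler(new PumpStreamHandler(output)); // drains stdout/stderr for you
        executor.setExitValues(null);                             // don't throw on non-zero exit codes
        int exitCode = executor.execute(CommandLine.parse("cvs up -d htdocs/"));
        System.out.println("cvs exited with " + exitCode + ":\n" + output);
    }
}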
Here is the startup script that fixed my issue, in case someone else runs into it:
#!/bin/sh
# chkconfig: 2345 95 1
# description: Starts Daemon Using ServerManager.jar.
#
# Source function library.
. /etc/init.d/functions
RETVAL=0
prog="ServerManager"
servermanager="java"
serveroptions=" -jar ServerManager.jar"
pid_file="/var/run/servermanager.pid"
launch_daemon()
{
    /bin/sh << EOF
java -Ddaemon.pidfile=$pid_file $serveroptions <&- &
pid=\$!
echo \${pid}
EOF
}
start () {
    echo -n $"Starting $prog: "
    if [ -e /var/lock/subsys/servermanager ]; then
        if [ -e /var/run/servermanager.pid ] && [ -e /proc/`cat /var/run/servermanager.pid` ]; then
            echo -n $"cannot start: servermanager is already running.";
            failure $"cannot start: servermanager already running.";
            echo
            return 1
        fi
    fi
    # start daemon
    cd /www/voodoo_servermanager/
    export CVSROOT=":pserver:cvsd@cvs.zzzzz.yyy:/cvsroot";
    daemon "$servermanager $serveroptions > /www/logs/ServerManager.log &"
    #daemon_pid=`launch_daemon`
    #daemon ${daemon_pid}
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch /var/lock/subsys/servermanager && pidof $servermanager > $pid_file
    echo "";
    return $RETVAL
}
stop () {
    # stop daemon
    echo -n $"Stopping $prog: "
    if [ ! -e /var/lock/subsys/servermanager ]; then
        echo -n $"cannot stop ServerManager: ServerManager is not running."
        failure $"cannot stop ServerManager: ServerManager is not running."
        echo
        return 1;
    fi
    killproc $servermanager
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/servermanager;
    return $RETVAL
}
restart() {
    stop
    start
}
case $1 in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    *)
        echo $"Usage: servermngr {start|stop|restart}"
        RETVAL=1
esac
exit $RETVAL
I'm trying to use Java to launch a Gradle process that performs a continuous build in the background. Here is my code:
final var updateTimestampProcess =
        new ProcessBuilder("gradlew.bat", "compileGroovy", "--continuous")
                .directory(new File(System.getProperty("user.dir")))
                .inheritIO();
try {
    updateTimestampProcess.start();
} catch (IOException ignored) {}
It works, but the process doesn't terminate when I use IntelliJ's Stop command. I have to use Task Manager to hunt down the Java process that's running the gradlew command and end it manually.
How can I amend my code so that the background process is terminated when the main process stops?
You can use a shutdown hook.
Add some code in the shutdown hook to kill the process you spawned.
Runtime
.getRuntime()
.addShutdownHook(new Thread(() -> System.out.println("killing the thing")));
Further Reading:
https://www.baeldung.com/jvm-shutdown-hooks
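A minimal sketch of that, using the same ProcessBuilder as in the question (exception handling elided; note that shutdown hooks only run on an orderly JVM shutdown, so a hard kill from the IDE can still skip this):
Process gradleProcess = new ProcessBuilder("gradlew.bat", "compileGroovy", "--continuous")
        .directory(new File(System.getProperty("user.dir")))
        .inheritIO()
        .start();

Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    System.out.println("killing the thing");
    gradleProcess.destroy();              // ask the child to terminate
    // gradleProcess.destroyForcibly();   // last resort if it ignores destroy()
}));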
If you want to use an external script to kill a process: I needed to do something similar not long ago. When I start my app, I write the PID to a file, and when I need to shut it down, I have this script read that PID and then kill it:
#!/bin/bash
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PIDS_DIR="${SCRIPT_DIR}/pids/"
#
# Kills all the processes associated with a CWS ID
#
kill_cws() {
    if [ $# -ne 0 ]; then
        echo "ERROR: Invalid number of arguments. Expected 0."
        exit 1
    fi
    echo "[INFO] Stopping CWS..."
    for f in "${PIDS_DIR}/cws_"*".pid"; do
        kill_cws_process "${f}"
    done
    echo "[INFO] CWS Stopped"
}
#
# Kills a specific process
#
kill_cws_process() {
    local pid_file="$1"
    local pid
    pid=$(cat "${pid_file}")
    echo "[INFO][PID=${pid}] Killing CWS instance"
    kill -9 "${pid}"
    ev=$?
    if [ "${ev}" -ne 0 ]; then
        echo "[WARN][PID=${pid}] Error killing PID"
    fi
    echo "[INFO][PID=${pid}] Waiting for instance termination"
    echo "[INFO][PID=${pid}] Clean instance PID file"
    rm "${pid_file}"
    ev=$?
    if [ "${ev}" -ne 0 ]; then
        echo "[WARN][PID=${pid}] Error deleting pid file ${pid_file}"
    fi
}
#
# MAIN
#
kill_cws "$@"
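For the other half ("when I start my app, I write the PID to a file"), a hedged Java sketch might look like this (file names follow the script above but are otherwise illustrative; needs Java 9+ for ProcessHandle and Java 11+ for Files.writeString):
import java.nio.file.Files;
import java.nio.file.Path;

public class PidFileWriter {
    public static void main(String[] args) throws Exception {
        long pid = ProcessHandle.current().pid();           // this JVM's PID
        Path pidFile = Path.of("pids/cws_" + pid + ".pid");  // matches ${PIDS_DIR}/cws_*.pid
        Files.createDirectories(pidFile.getParent());
        Files.writeString(pidFile, Long.toString(pid));
        // ... application work; the script above can now find and kill this process ...
    }
}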
After a couple of hours of reading, I realize that while this may be possible, I have not found a fully workable solution. As a workaround, I used the Gradle Tooling API to devise the following solution:
final ProjectConnection connection = GradleConnector
        .newConnector()
        .forProjectDirectory(new File(System.getProperty("user.dir")))
        .connect();
try {
    connection
            .newBuild()
            .forTasks("compileGroovy")
            .withArguments("--continuous")
            .setStandardOutput(System.out)
            .run();
} catch (BuildException ignored) {
    // Ignored for testing
}
It's neat and it solves all of the process management issues.
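If the continuous build also needs to be stopped programmatically (for example from a shutdown hook), the Tooling API exposes cancellation tokens. A hedged sketch, to be verified against your Gradle version:
CancellationTokenSource tokenSource = GradleConnector.newCancellationTokenSource();
Runtime.getRuntime().addShutdownHook(new Thread(tokenSource::cancel));
connection
        .newBuild()
        .forTasks("compileGroovy")
        .withArguments("--continuous")
        .withCancellationToken(tokenSource.token())
        .setStandardOutput(System.out)
        .run();   // blocks until the build finishes, fails, or is cancelled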
This Groovy script terminates the external process when the main thread ends. Not saying it's nice, but it works:
def procBuilder =
        new ProcessBuilder("notepad.exe")
                .inheritIO()
def mainThread = Thread.currentThread()
//not a daemon to avoid termination..
Thread.start {
    try {
        println "daemon start"
        def proc = procBuilder.start()
        println "daemon process started"
        mainThread.join()
        println "daemon process killing..."
        proc.destroy()
        println "daemon process end"
    } catch(e) {
        println "daemon error: $e"
    }
}
println "main thread sleep"
Thread.sleep(5000)
println "main thread end"
I am developing a network monitoring solution for my Java application so I can sniff packets on my machine's interfaces and dump the result into rolling PCAP files. When launching the tcpdump command (using sudo) from the Java code, I get: tcpdump: /path/to/app/log/GTP00: Permission denied
DETAILS
The command is executed using Runtime.getRuntime().exec(command), where command is a String with the value sudo tcpdump -i eth0 -w /path/to/app/log/GTP -W 50 -C 20 -n net 10.246.212.0/24 and ip
The user launching the Java app is "testUser" which belongs to group "testGroup". This user is allowed to sudo tcpdump.
The destination dir has the following attributes:
[testUser@node ~]$ ls -ld /path/to/app/log
drwxrwxr-x. 2 testUser testGroup 4096 Feb 4 15:40 /path/to/app/log
MORE DETAILS
Launching the command from the command line SUCCESSFULLY creates the pcap file in the specified folder.
[testUser@node ~]$ ls -l /path/to/app/log/GTP00
-rw-r--r--. 1 tcpdump tcpdump 1276 Feb 4 16:12 /path/to/app/log/GTP00
I have developed a simplified Java app for testing purposes:
package execcommand;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.logging.Level;
import java.util.logging.Logger;
public class ExecCommand {
    public static void main(String[] args) {
        try {
            String command;
            String line;
            String iface = "eth0";
            String capturePointName = "GTP";
            String pcapFilterExpression = "net 10.246.212.0/24 and ip";
            int capturePointMaxNumberOfFilesKept = 50;
            int capturePointMaxSizeOfFilesInMBytes = 20;
            command = "sudo tcpdump -i " + iface + " -w /path/to/app/log/"
                    + capturePointName + " -W " + capturePointMaxNumberOfFilesKept + " -C "
                    + capturePointMaxSizeOfFilesInMBytes + " -n " + pcapFilterExpression;
            Process process = Runtime.getRuntime().exec(command);
            BufferedReader br = new BufferedReader(new InputStreamReader(process.getErrorStream()));
            while ((line = br.readLine()) != null) {
                System.err.println(line);
            }
        } catch (IOException ex) {
            Logger.getLogger(ExecCommand.class.getName()).log(Level.SEVERE, null, ex);
        }
    }
}
This test program, launched by the same user, SUCCESSFULLY creates the pcap file in the specified folder.
[testUser@node ~]$ ls -l /path/to/app/log/GTP00
-rw-r--r--. 1 tcpdump tcpdump 1448 Feb 4 16:21 /path/to/app/log/GTP00
Then, I can infer that the problem is somehow restricted to my Java app. This is how my Java app is launched:
exec java -Dknae_1 -Djavax.net.ssl.trustStorePassword=<trust_pass> -Djavax.net.ssl.trustStore=/path/to/app/etc/certificates/truststore -Djavax.net.ssl.keyStorePassword=<key_store_pass> -Djavax.net.ssl.keyStore=/path/to/app/etc/certificates/keystore -d64 -Xdebug -Xrunjdwp:transport=dt_socket,server=y,address=8887,suspend=y -XX:-UseLargePages -Xss7m -Xmx64m -cp /path/to/app/lib/knae.jar:/path/to/app/lib/xphere_baseentity.jar:/path/to/app/lib/mysql.jar:/path/to/app/lib/log4j-1.2.17.jar:/path/to/app/lib/tools.jar:/path/to/app/conf:/path/to/app/lib/pcap4j-core-1.7.5.jar:/path/to/app/lib/pcap4j-packetfactory-static-1.7.5.jar:/path/to/app/lib/jna-5.1.0.jar:/path/to/app/lib/slf4j-api-1.7.25.jar:/path/to/app/lib/slf4j-simple-1.7.25.jar com.app.package.knae.Knae knae_1
UPDATE
I am able to write the pcap file within /tmp.
I have also tried giving 777 permissions to /path/to/app/log to no avail.
These are the attributes of both dirs:
[testUser@node ~]$ ls -ld /tmp
drwxrwxrwt. 10 root root 4096 Feb 6 10:13 /tmp
[testUser@node ~]$ ls -ld /path/to/app/log
drwxrwxrwx. 2 testUser testGroup 4096 Feb 6 09:25 /path/to/app/log
I will provide any additional information as needed.
Why is tcpdump complaining about not being able to write this file?
Use absolute paths in the command line instead of bare "sudo" and "tcpdump".
Use ProcessBuilder instead of Runtime.exec(), because you can specify the working directory, use spaces in options, and more.
In the tcpdump command you have to use the -Z flag to specify the user, because tcpdump drops privileges to a user different from the caller. Check this link on Server Fault: tcpdump permission denied
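Putting those three suggestions together, a sketch might look like this (exception handling elided; the absolute paths and the user name are illustrative, so adjust them to your system):
ProcessBuilder pb = new ProcessBuilder(
        "/usr/bin/sudo", "/usr/sbin/tcpdump",
        "-i", "eth0",
        "-w", "/path/to/app/log/GTP",
        "-W", "50", "-C", "20",
        "-Z", "testUser",                  // drop privileges to a user that can write the target dir
        "-n", "net 10.246.212.0/24 and ip");
pb.redirectErrorStream(true);              // read tcpdump's diagnostics from a single stream
Process process = pb.start();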
Cloud Foundry: is it possible to copy missing routes from one app to another while doing blue-green deployment?
I have an app with a few manually added routes. While doing blue-green deployment (automated through a script), I want to copy the missing/manually added routes to the new app. Is that possible?
Script:
#!/bin/bash
path="C:/Users/.../Desktop/cf_through_sh/appName.jar"
spaceName="development"
appBlue="appName"
appGreen="${appName}-dev"
manifestFile="C:/Users/.../Desktop/cf_through_sh/manifest-dev.yml"
domains=("domain1.com" "domain2.com")
appHosts=("host-v1" "host-v2")
evaluate_return_code (){
    ret=$1
    if [[ $ret != 0 ]]
    then
        exit $ret
    fi
}
switch_to_target_space() {
    space="development"
    echo "Change space to ${space}"
    cf t -s ${space}
    evaluate_return_code $?
}
push_new_release() {
    appGreen=$1
    if [ ! -f "${manifestFile}" ]; then
        echo "Missing manifest: ${manifestFile}";
        exit 1;
    fi
    if [ ! -f "${path}" ]; then
        echo "Missing artifact: ${path}";
        exit 1;
    fi
    echo "Deploying ${path} as ${appGreen}"
    cf push ${appGreen} -f ${manifestFile} -p ${path} --no-route
    evaluate_return_code $?
}
map_routes() {
    app=$1
    domains=$2
    shift
    appHosts=$3
    for host in ${appHosts[*]}; do
        echo "Mapping ${host} to ${app}"
        for domain in ${domains[*]}; do
            cf map-route ${app} ${domain} -n ${host}
            evaluate_return_code $?
        done
    done
}
unmap_routes() {
    app=$1
    domains=$2
    shift
    appHosts=$3
    for host in ${appHosts[*]}; do
        echo "Unmapping ${host} from ${app}"
        for domain in ${domains[*]}; do
            cf unmap-route ${app} ${domain} -n ${host}
            evaluate_return_code $?
        done
    done
}
rename_app() {
    oldName=$1
    newName=$2
    echo "Renaming ${oldName} to ${newName}"
    cf rename ${oldName} ${newName}
    evaluate_return_code $?
}
switch_names() {
    appBlue=$1
    appGreen=$2
    appTemp="${appBlue}-old"
    rename_app ${appBlue} ${appTemp}
    rename_app ${appGreen} ${appBlue}
    rename_app ${appTemp} ${appGreen}
}
stop_old_release() {
    echo "Stopping old ${appGreen} app"
    cf stop ${appGreen}
    evaluate_return_code $?
}
switch_to_target_space ${spaceName}
push_new_release ${appGreen}
map_routes ${appGreen} ${domains[*]} ${appHosts[*]}
unmap_routes ${appBlue} ${domains[*]} ${appHosts[*]}
switch_names ${appBlue} ${appGreen}
stop_old_release
echo "DONE"
exit 0;
E.g.:
appBlue has 5 routes:
1. host-v1.domain1.com
2. host-v2.domain1.com
3. host-v1.domain2.com
4. host-v2.domain2.com
5. manual-add.domain1.com // manually added route through the admin UI
After blue-green deployment through the script, the app contains only 4 routes:
1. host-v1.domain1.com
2. host-v2.domain1.com
3. host-v1.domain2.com
4. host-v2.domain2.com
How do I copy the missing 5th route? I don't want to pass the host manual-add from the script, since it was added manually.
In general, is it possible to copy routes from one app to another if they are not already mapped?
This has to be done only through Jenkins (or any CI/CD tool). What we did in our case is: we had a CF-Manifest-Template.yml and a CF-Manifest-settings.json, and we had a Gradle task that would apply the settings from the JSON, fill in the manifest template, and generate a cf-manifest-generated.yml.
The Gradle build has a task that does the blue-green deployment using this generated manifest file, and all the routes are hard-coded in the manifest file. This is the standard way of doing it.
But if you want to take the routes of an app running in Cloud Foundry and copy those routes to another app, then you would need to write a REST client that connects to the Cloud Foundry Cloud Controller, gets all the routes of APP-A, and then creates the routes for APP-B.
It is pretty simple!
Write a REST Client that executes this command
cf app APP-A
This will bring back the details of APP-A (the CLI prints it as text; the underlying Cloud Controller API returns JSON). The response includes these fields:
Showing health and status for app APP-A in org Org-A / space DEV as arun2381985@yahoo.com...
name: APP-A
requested state: started
instances: 1/1
usage: 1G x 1 instances
routes: ********
last uploaded: Sat 25 Aug 00:25:45 IST 2018
stack: cflinuxfs2
buildpack: java_buildpack
Read this response, collect the routes of APP-A, and then map them to APP-B. It's pretty simple.
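A hedged sketch of such a client (not from the answer above): rather than talking to the Cloud Controller directly, it shells out to the cf CLI, which already handles authentication. The --guid flag and the /v2/apps/:guid/routes endpoint are the standard CLI and v2 API ones, but verify them against your foundation:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class RouteCopier {
    static String run(String... cmd) throws IOException, InterruptedException {
        Process p = new ProcessBuilder(cmd).redirectErrorStream(true).start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) out.append(line).append('\n');
        }
        p.waitFor();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String guid = run("cf", "app", "APP-A", "--guid").trim();
        // JSON listing of APP-A's routes; parse it and issue "cf map-route APP-B ..." for each one
        System.out.println(run("cf", "curl", "/v2/apps/" + guid + "/routes"));
    }
}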
I am calling a Java class from a ksh script. Within the Java class, it does some error checking, and if an error is found, an email is sent to the user. Once the email is sent, I want to return an error code of, let's say, 11 to the script and fail it based on that.
Currently, my script is successful when it sends the email, but I want to fail it.
Here is some of my ksh script:
$JAVA_HOME/bin/java alfaSpecificEditCheck.ValDateCheck $edit_env $FILENAME $USERID
RC=$?
echo "RC=$RC"
if [[ $RC -eq 11 ]]; then
    echo " ALFA Date has failed the edit. Safe exit."
    echo ${FILEPATH} >> ${BASEDIR}/logs/failed_edit
    echo " ALFA Date check Failed"
elif [[ $RC -ne 0 ]]; then
    echo "ALFA Date DONE with status $RC ..."
    echo ${FILEPATH} >> ${BASEDIR}/logs/status_fail
    echo " ALFA Date Failed"
else
    echo " ALFA Date Successful"
fi
;;
This comment is the correct answer.
Where is the code for your java file? You could use System.exit(11) from where you want to return the error code. – nikhil Oct 27 at 18:17
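A minimal sketch of what that looks like on the Java side (the question does not show the class, so the helper methods here are hypothetical):
public class ValDateCheck {
    public static void main(String[] args) {
        boolean dateCheckFailed = runEditChecks(args);   // hypothetical check
        if (dateCheckFailed) {
            sendFailureEmail();                          // hypothetical notifier
            System.exit(11);                             // shows up as $? / RC=11 in the ksh script
        }
        // falling off the end of main exits with status 0
    }
    private static boolean runEditChecks(String[] args) { return false; }
    private static void sendFailureEmail() { }
}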
From Java code I am calling my script file the following way:
Process process =Runtime.getRuntime().exec("sh /usr/local/garner/garnerd start");
int status = process.waitFor();
The garnerd script code is given below (it in turn calls garner.sh):
function start()
{
    sh /usr/local/garner/garner.sh > /usr/local/garner/log/garner.log &
    echo "Garner is started"
}
case "$1" in
    start)
        start
        ;;
    *)
        echo "Usage: garnerd {start|stop|restart|status|reconfig}"
        exit 1
esac
exit $retval
The Garner shell script (garner.sh) source is:
/usr/local/garner/garnerd status
if [ $? -eq 0 ]; then
    echo "`date` $0 :Garner is allready running"
    exit 0
fi
touch /dev/blank
cd /usr/local/garner
uname -a | grep -i cygwin
if [ $? -eq 0 ]
then
    export CYGWIN="$CYGWIN error_start=dumper -d %1 %2"
    /usr/local/garner/garner.exe -n -c /usr/local/garner/conf/garner.conf -p /usr/local/garner/garner.pid -l /usr/local/garner/log/garner.log -L 4 &
else
    /usr/local/garner/garner -c /usr/local/garner/conf/garner.conf -p /usr/local/garner/garner.pid -l /usr/local/garner/log/garner.log -L 4 &
fi
cd -
When I call ./garnerd start, it creates the pid file. If I then look at the contents of this file, it shows the process ID of garner:
[root@localhost garner]# cat garner.pid
9282
But when I check the detailed information for this process ID with the following command, it shows "SigBlk: 0000000000000004", which corresponds to signal 3 (SIGQUIT) being blocked:
[root@localhost garner]# cat /proc/9282/status
Name: garner
State: S (sleeping)
SleepAVG: 78%
Tgid: 9282
Pid: 9282
PPid: 9281
TracerPid: 0
Uid: 0 0 0 0
Gid: 0 0 0 0
FDSize: 64
Groups: 0 1 2 3 4 6 10
VmPeak: 58888 kB
VmSize: 58884 kB
VmLck: 0 kB
VmHWM: 7124 kB
VmRSS: 7124 kB
VmData: 17192 kB
VmStk: 88 kB
VmExe: 84 kB
VmLib: 4480 kB
VmPTE: 156 kB
StaBrk: 05af0000 kB
Brk: 060ec000 kB
StaStk: 7fff0329d950 kB
Threads: 2
SigQ: 0/47721
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000004
SigIgn: 0000000000001002
SigCgt: 0400000180006005
CapInh: 0000000000000000
CapPrm: 00000000fffffeff
CapEff: 00000000fffffeff
Cpus_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00ffffff
Mems_allowed: 00000000,00000003
And if I manually run the command (./garnerd start) on the Linux machine, it shows "SigBlk: 0000000000000000".
Does that mean Java blocks signals in the processes it spawns? If yes, then why, and under which circumstances?
From the API doc of java.lang.Process:
Because some native platforms only provide limited buffer size for
standard input and output streams, failure to promptly write the input
stream or read the output stream of the subprocess may cause the
subprocess to block, or even deadlock.
This article explains the issue in detail and suggests a solution.
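The usual fix is to consume the child's output promptly; a hedged sketch applied to the garnerd call above (merging stderr into stdout so a single reader is enough):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class RunGarnerd {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder("sh", "/usr/local/garner/garnerd", "start");
        pb.redirectErrorStream(true);               // merge stderr into stdout
        Process process = pb.start();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);           // drain output so the child never blocks on a full pipe
            }
        }
        int status = process.waitFor();
        System.out.println("garnerd exited with status " + status);
    }
}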