Cloud Foundry: is it possible to copy missing routes from one app to another during a blue-green deployment?
I have an app with a few manually added routes. While doing a blue-green deployment (automated through a script), I want to copy the missing/manually added routes to the new app. Is this possible?
Script:
#!/bin/bash
path="C:/Users/.../Desktop/cf_through_sh/appName.jar"
spaceName="development"
appBlue="appName"
appGreen="${appBlue}-dev"  # note: was ${appName}, which is never defined
manifestFile="C:/Users/.../Desktop/cf_through_sh/manifest-dev.yml"
domains=("domain1.com" "domain2.com")
appHosts=("host-v1" "host-v2")
evaluate_return_code() {
  ret=$1
  if [[ ${ret} != 0 ]]; then
    exit "${ret}"
  fi
}
switch_to_target_space() {
  space=${1:-development}
  echo "Changing space to ${space}"
  cf t -s "${space}"
  evaluate_return_code $?
}
push_new_release() {
  appGreen=$1
  if [ ! -f "${manifestFile}" ]; then
    echo "Missing manifest: ${manifestFile}"
    exit 1
  fi
  if [ ! -f "${path}" ]; then
    echo "Missing artifact: ${path}"
    exit 1
  fi
  echo "Deploying ${path} as ${appGreen}"
  cf push "${appGreen}" -f "${manifestFile}" -p "${path}" --no-route
  evaluate_return_code $?
}
map_routes() {
  # Uses the global domains/appHosts arrays; extra arguments are ignored.
  # (Arrays cannot be passed as single positional parameters in bash.)
  app=$1
  for host in "${appHosts[@]}"; do
    echo "Mapping ${host} to ${app}"
    for domain in "${domains[@]}"; do
      cf map-route "${app}" "${domain}" -n "${host}"
      evaluate_return_code $?
    done
  done
}
unmap_routes() {
  # Uses the global domains/appHosts arrays; extra arguments are ignored.
  app=$1
  for host in "${appHosts[@]}"; do
    echo "Unmapping ${host} from ${app}"
    for domain in "${domains[@]}"; do
      cf unmap-route "${app}" "${domain}" -n "${host}"
      evaluate_return_code $?
    done
  done
}
rename_app() {
  oldName=$1
  newName=$2
  echo "Renaming ${oldName} to ${newName}"
  cf rename "${oldName}" "${newName}"
  evaluate_return_code $?
}
switch_names() {
  appBlue=$1
  appGreen=$2
  appTemp="${appBlue}-old"
  rename_app "${appBlue}" "${appTemp}"
  rename_app "${appGreen}" "${appBlue}"
  rename_app "${appTemp}" "${appGreen}"
}
stop_old_release() {
  echo "Stopping old ${appGreen} app"
  cf stop "${appGreen}"
  evaluate_return_code $?
}
switch_to_target_space ${spaceName}
push_new_release ${appGreen}
map_routes ${appGreen} ${domains[*]} ${appHosts[*]}
unmap_routes ${appBlue} ${domains[*]} ${appHosts[*]}
switch_names ${appBlue} ${appGreen}
stop_old_release
echo "DONE"
exit 0;
E.g.: appBlue has 5 routes:
1. host-v1.domain1.com
2. host-v2.domain1.com
3. host-v1.domain2.com
4. host-v2.domain2.com
5. manual-add.domain1.com (manually added through the admin UI)
After the blue-green deployment through the script, the app contains only 4 routes:
1. host-v1.domain1.com
2. host-v2.domain1.com
3. host-v1.domain2.com
4. host-v2.domain2.com
How do I copy the missing 5th route? I don't want to pass the host manual-add from the script, since it was added manually.
In general, is it possible to copy routes from one app to another if they are not mapped?
This has to be done only through Jenkins (or any CI/CD tool). What we did in our case: we had a CF-Manifest-Template.yml and a CF-Manifest-settings.json, plus a Gradle task that would apply the settings from the JSON, fill in the manifest template, and generate a cf-manifest-generated.yml.
The Gradle build has a task that does the blue-green deployment using this generated manifest file, with all the routes hard-coded in the manifest file. This is the standard way of doing it.
But if you want to read the routes of an app already running in Cloud Foundry and copy those routes to another app, then you need a REST client that connects to the Cloud Foundry CloudController, gets all the routes of APP-A, and then creates those routes on APP-B.
It is pretty simple!
As a quick check from the CLI, run:
cf app APP-A
This brings back the details of APP-A. The CloudController REST API returns the same information as JSON; the CLI renders it like this:
Showing health and status for app APP-A in org Org-A / space DEV as arun2381985#yahoo.com...
name: APP-A
requested state: started
instances: 1/1
usage: 1G x 1 instances
routes: ********
last uploaded: Sat 25 Aug 00:25:45 IST 2018
stack: cflinuxfs2
buildpack: java_buildpack
Read that response, collect the routes of APP-A, and map them onto APP-B. It's pretty simple.
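That idea can be sketched with the cf CLI itself, using cf curl against the v2 CloudController API. This is a sketch only: APP-A, APP-B, and domain1.com are placeholders, and the JSON shape assumed is the v2 listing ({"resources":[{"entity":{"host":...}}]}).

```shell
#!/bin/bash
# Sketch: copy the routes of APP-A onto APP-B via the CloudController v2 API.
# Assumes `cf` is logged in and targeted; APP-A / APP-B are placeholder names.

# Naive extraction of every "host" field from a v2 JSON route listing.
extract_hosts() {
  grep -o '"host": *"[^"]*"' | sed 's/.*"\([^"]*\)"$/\1/'
}

copy_routes() {
  src=$1
  dst=$2
  guid=$(cf app "${src}" --guid) || return 1
  for host in $(cf curl "/v2/apps/${guid}/routes" | extract_hosts); do
    echo "Mapping ${host} to ${dst}"
    # A v2 route entry only carries the host; the domain would really have
    # to be read from each route's domain_url, so it is hard-coded here.
    cf map-route "${dst}" domain1.com -n "${host}"
  done
}

# copy_routes APP-A APP-B
```

Note that because each route's domain has to be resolved via its domain_url, a complete version would make one more cf curl call per route instead of assuming a single domain.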
Related
I'm trying to use Java to launch a Gradle process that performs a continuous build in the background. Here is my code:
final var updateTimestampProcess =
        new ProcessBuilder("gradlew.bat", "compileGroovy", "--continuous")
            .directory(new File(System.getProperty("user.dir")))
            .inheritIO();
try {
    updateTimestampProcess.start();
} catch (IOException ignored) {}
It works, but the process doesn't terminate when I use IntelliJ's Stop command. I have to use Task Manager to hunt down the Java process that's running the gradlew command and end it manually.
How can I amend my code so that the background process is terminated when the main process stops?
You can use a shutdown hook.
Add code in the shutdown hook to kill the process you spawned:
// assuming `process` is the Process returned by updateTimestampProcess.start()
Runtime
    .getRuntime()
    .addShutdownHook(new Thread(() -> {
        System.out.println("killing the thing");
        process.destroy();
    }));
Further Reading:
https://www.baeldung.com/jvm-shutdown-hooks
If you want to use an external script to kill the process: I needed to do something similar not long ago. When I start my app, I write the PID to a file, and when I need to shut it down, this script reads that PID and kills it:
#!/bin/bash
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PIDS_DIR="${SCRIPT_DIR}/pids/"
#
# Kills all the processes associated with a CWS ID
#
kill_cws() {
if [ $# -ne 0 ]; then
echo "ERROR: Invalid number of arguments. Expected 0."
exit 1
fi
echo "[INFO] Stopping CWS..."
for f in "${PIDS_DIR}/cws_"*".pid"; do
kill_cws_process "${f}"
done
echo "[INFO] CWS Stopped"
}
#
# Kills a specific process
#
kill_cws_process() {
local pid_file="$1"
local pid
pid=$(cat "${pid_file}")
echo "[INFO][PID=${pid}] Killing CWS instance"
kill -9 "${pid}"
ev=$?
if [ "${ev}" -ne 0 ]; then
echo "[WARN][PID=${pid}] Error killing PID"
fi
echo "[INFO][PID=${pid}] Waiting for instance termination"
echo "[INFO][PID=${pid}] Clean instance PID file"
rm "${pid_file}"
ev=$?
if [ "${ev}" -ne 0 ]; then
echo "[WARN][PID=${pid}] Error deleting pid file ${pid_file}"
fi
}
#
# MAIN
#
kill_cws "$@"
After a couple of hours of reading, I realized that while this is possible, I did not find a fully workable solution. As a workaround, I used the Gradle Tooling API to devise the following solution:
final ProjectConnection connection = GradleConnector
        .newConnector()
        .forProjectDirectory(new File(System.getProperty("user.dir")))
        .connect();
try {
    connection
        .newBuild()
        .forTasks("compileGroovy")
        .withArguments("--continuous")
        .setStandardOutput(System.out)
        .run();
} catch (BuildException ignored) {
    // Ignored for testing
}
It's neat, and it solves all of the process-management issues.
This Groovy script terminates the external process when the main thread ends.
I'm not saying it's nice, but it works:
def procBuilder =
new ProcessBuilder("notepad.exe")
.inheritIO()
def mainThread = Thread.currentThread()
//not a daemon to avoid termination..
Thread.start {
try {
println "daemon start"
def proc = procBuilder.start()
println "daemon process started"
mainThread.join()
println "daemon process killing..."
proc.destroy()
println "daemon process end"
} catch(e) {
println "daemon error: $e"
}
}
println "main thread sleep"
Thread.sleep(5000)
println "main thread end"
What should I do to make sure that my RabbitMQ user has permission to run C:\Windows\system32\cmd.exe?
In fact, I want to use the SSL protocol with RabbitMQ, but the node crashes. Here's the SSL log file:
=CRASH REPORT==== 4-May-2016::18:33:16 ===
crasher:
initial call: rabbit_mgmt_external_stats:init/1
pid: <0.233.0>
registered_name: rabbit_mgmt_external_stats
exception exit: {eacces,
[{erlang,open_port,
[{spawn,
"C:\\Windows\\system32\\cmd.exe /c handle.exe /accepteula -s -p 2052 2> nul"},
[stream,in,eof,hide]],
[]},
{os,cmd,1,[{file,"os.erl"},{line,204}]},
{rabbit_mgmt_external_stats,get_used_fd,1,[]},
{rabbit_mgmt_external_stats,'-infos/2-lc$^0/1-0-',2,
[]},
{rabbit_mgmt_external_stats,'-infos/2-lc$^0/1-0-',2,
[]},
{rabbit_mgmt_external_stats,emit_update,1,[]},
{rabbit_mgmt_external_stats,handle_info,2,[]},
{gen_server,handle_msg,5,
[{file,"gen_server.erl"},{line,599}]}]}
in function gen_server:terminate/6 (gen_server.erl, line 746)
ancestors: [rabbit_mgmt_agent_sup,<0.231.0>]
messages: []
links: [<0.232.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 4185
stack_size: 27
reductions: 77435063
neighbours:
Here's my rabbitmq.config file :
[
{ssl, [{versions, ['tlsv1.2']}]},
{
rabbit,
[
{ssl_listeners, [5676]},
{ssl_options, [{cacertfile,"D:/Profiles/user/AppData/Roaming/RabbitMQ/testca/cacert.pem"},
{certfile, "D:/Profiles/user/AppData/Roaming/RabbitMQ/server/cert.pem"},
{keyfile, "D:/Profiles/user/AppData/Roaming/RabbitMQ/server/key.pem"},
{versions, ['tlsv1.2']},
{verify,verify_peer},
{fail_if_no_peer_cert,false}
]},
{loopback_users, []}
]
}
].
eacces is an Erlang file error:
eacces:
Missing permission for reading the file, or for searching one of the parent directories.
Set the right permissions.
Stop the RabbitMQ service, then try running rabbitmq-server.bat as administrator.
Then check the logs.
After submitting a COMPSs application, I received the following error message and the application was not executed.
MPI_CMD=mpirun -timestamp-output -n 1 -H s00r0
/apps/COMPSs/1.3/Runtime/scripts/user/runcompss
--project=/tmp/1668183.tmpdir/project_1458303603.xml
--resources=/tmp/1668183.tmpdir/resources_1458303603.xml
--uuid=2ed20e6a-9f02-49ff-a71c-e071ce35dacc
/apps/FILESPACE/pycompssfile arg1 arg2 : -n 1 -H s00r0
/apps/COMPSs/1.3/Runtime/scripts/system/adaptors/nio/persistent_worker_starter.sh
/apps/INTEL/mkl/lib/intel64 null
/home/myhome/kmeans_python/src/ true
/tmp/1668183.tmpdir 4 5 5 s00r0-ib0 43001 43000 true 1
/apps/COMPSs/1.3/Runtime/scripts/system/2ed20e6a-9f02-49ff-a71c-e071ce35dacc : -n 1 -H s00r0
/apps/COMPSs/1.3/Runtime/scripts/system/adaptors/nio/persistent_worker_starter.sh
/apps/INTEL/mkl/lib/intel64 null
/home/myhome/kmeans_python/src/ true
/tmp/1668183.tmpdir 4 5 5 s00r0-ib0 43001 43000 true 2
/apps/COMPSs/1.3/Runtime/scripts/system/2ed20e6a-9f02-49ff-a71c-e071ce35dacc
--------------------------------------------------------------------------
All nodes which are allocated for this job are already filled.
--------------------------------------------------------------------------
I am using COMPSs 1.3.
Why is this happening?
You are trying to run the master and a worker on the same node. COMPSs 1.3 on a cluster with the NIO adaptor (the default option) uses mpirun to spawn the master and worker processes on different nodes of the cluster, and the mpirun installed in your cluster doesn't allow that.
The options to solve it are the following:
Do not specify --tasks_in_master= in the enqueue_compss command.
Execute with the GAT adaptor (--comm=integratedtoolkit.gat.master.GATAdaptor), which has more overhead.
The next COMPSs release will use the spawn commands available in the different cluster resource managers (such as blaunch, srun), which should solve this issue.
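The GAT option can be sketched as a submission command. This is hypothetical: only the --comm flag and the application line are taken from the messages above, and every other enqueue_compss option is omitted.

```shell
# Sketch: resubmit forcing the GAT adaptor instead of the default NIO one.
enqueue_compss \
  --comm=integratedtoolkit.gat.master.GATAdaptor \
  /apps/FILESPACE/pycompssfile arg1 arg2
```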
I am calling a Java class from a ksh script. The Java class does some error checking and, if an error is found, sends an email to the user. Once the email is sent, I want to return an error code (say 11) to the script and fail the job based on that.
Currently, my script succeeds when it sends the email, but I want it to fail.
Here is some of my ksh script:
$JAVA_HOME/bin/java alfaSpecificEditCheck.ValDateCheck $edit_env $FILENAME $USERID
RC=$?
echo "RC=$RC"
if [[ $RC -eq 11 ]]; then
echo " ALFA Date has failed the edit. Safe exit."
echo ${FILEPATH} >> ${BASEDIR}/logs/failed_edit
echo " ALFA Date check Failed"
elif [[ $RC -ne 0 ]]; then
echo "ALFA Date DONE with status $RC ..."
echo ${FILEPATH} >> ${BASEDIR}/logs/status_fail
echo " ALFA Date Failed"
else
echo " ALFA Date Successful"
fi
;;
This comment is the correct answer:
Where is the code for your Java file? You could use System.exit(11) from where you want to return the error code. – nikhil Oct 27 at 18:17
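The exit status set by System.exit(11) is exactly what $? sees in the shell. A minimal sketch, with sh -c 'exit 11' standing in for the java invocation:

```shell
#!/bin/sh
# Stand-in for the real call; the Java side would do System.exit(11)
# when the edit check fails.
run_edit_check() {
  sh -c 'exit 11'
}

run_edit_check
RC=$?
echo "RC=$RC"
if [ "$RC" -eq 11 ]; then
  echo "ALFA Date has failed the edit. Safe exit."
fi
```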
I have a Java socket server I wrote to keep a web cluster's code base in sync. When I run the init.d script from a shell login like so
[root@web11 www]# /etc/init.d/servermngr start
and log out, all works fine. But if the server reboots, or I run the init.d script using service like so
[root@web11 www]# service servermngr start
then none of the exec() commands passed to the socket server get executed on the Linux box. I am assuming it has to do with the JVM having no real shell. If I log in and run
[root@web11 www]# /etc/init.d/servermngr start
and log out, everything runs nicely and all CVS commands are executed.
Another note: when run as a service, the socket server responds to status checks, so it is running.
Here is the init.d script:
#!/bin/sh
# chkconfig: 2345 95 1
# description: Starts Daemon Using ServerManager.jar.
#
# Source function library.
. /etc/init.d/functions
start () {
echo -n $"Starting ServerManager: "
# start daemon
cd /www/servermanager/
daemon java -jar ServerManager.jar > /www/logs/ServerManager.log &
RETVAL=$?
echo
[ $RETVAL = 0 ] && touch /var/lock/subsys/cups
echo "";
return $RETVAL
}
stop () {
# stop daemon
echo -n $"Stopping $prog: "
kill `ps uax | grep -i "java -jar ServerManager.ja[r]" | head -n 1 | awk '{print $2}'`
RETVAL=$?
echo "";
return $RETVAL
}
restart() {
stop
start
}
case $1 in
start)
start
;;
stop)
stop
;;
*)
echo $"Usage: servermngr {start|stop}"
exit 3
esac
exit $RETVAL
And the Java responsible for actually executing the code:
// Build cmd Array of Strings
String[] cmd = {"/bin/sh", "-c", "cd /www;cvs up -d htdocs/;cvs up -d phpinclude/"};
final Process process;
try {
process = Runtime.getRuntime().exec(cmd);
BufferedReader buf = new BufferedReader(new InputStreamReader(
process.getInputStream()));
// Since this is a CVS UP we return the Response to PHP
if(input.matches(".*(cvs up).*")){
String line1;
out.println("cvsupdate-start");
System.out.println("CVS Update" + input);
while ((line1 = buf.readLine()) != null) {
out.println(line1);
System.out.println("CVS:" + line1);
}
out.println("cvsupdate-end");
}
} catch (IOException ex) {
System.out.println("IOException on Run cmd " + CommandFactory.class.getName() + " " + ex);
Logger.getLogger(CommandFactory.class.getName()).log(Level.SEVERE, null, ex);
}
Thanks for any help.
What is the command you are trying to run? cd is not a program, and if you have ;, you have multiple commands. You can only run one program!
Are you starting the process as root? What version of bash is running on the system? You may want to give csh a whirl just to rule out issues with the shell itself. I'd also suggest chaining the commands with '&&' instead of ';'. Finally, you may find it easier to create a shell script that contains all your commands and is called by your Java process. You may also want to investigate nohup and check /etc/security/limits.
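The shell-script suggestion can be sketched like this. The helper below is hypothetical; the real script would call cvs, which is left in a comment so the chaining logic stands on its own.

```shell
#!/bin/sh
# run_in: cd into a directory, then run the given command there,
# returning a nonzero status at the first failure -- the '&&'-style
# chaining suggested above, instead of ';'.
run_in() {
  dir=$1
  shift
  cd "$dir" || return 1
  "$@"
}

# The Java server would then exec a one-line wrapper such as:
#   run_in /www cvs up -d htdocs/ && run_in /www cvs up -d phpinclude/
```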
You might be happier using http://akuma.kohsuke.org/ to help you with this stuff, or at least Apache Commons Exec.
Here is the startup script that fixed my issue, in case someone else runs into this:
#!/bin/sh
# chkconfig: 2345 95 1
# description: Starts Daemon Using ServerManager.jar.
#
# Source function library.
. /etc/init.d/functions
RETVAL=0
prog="ServerManager"
servermanager="java"
serveroptions=" -jar ServerManager.jar"
pid_file="/var/run/servermanager.pid"
launch_daemon()
{
/bin/sh << EOF
java -Ddaemon.pidfile=$pid_file $serveroptions <&- &
pid=\$!
echo \${pid}
EOF
}
start () {
echo -n $"Starting $prog: "
if [ -e /var/lock/subsys/servermanager ]; then
if [ -e /var/run/servermanager.pid ] && [ -e /proc/`cat /var/run/servermanager.pid` ]; then
echo -n $"cannot start: servermanager is already running.";
failure $"cannot start: servermanager already running.";
echo
return 1
fi
fi
# start daemon
cd /www/voodoo_servermanager/
export CVSROOT=":pserver:cvsd@cvs.zzzzz.yyy:/cvsroot";
daemon "$servermanager $serveroptions > /www/logs/ServerManager.log &"
#daemon_pid=`launch_daemon`
#daemon ${daemon_pid}
RETVAL=$?
echo
[ $RETVAL = 0 ] && touch /var/lock/subsys/servermanager && pidof $servermanager > $pid_file
echo "";
return $RETVAL
}
stop () {
# stop daemon
echo -n $"Stopping $prog: "
if [ ! -e /var/lock/subsys/servermanager ]; then
echo -n $"cannot stop ServerManager: ServerManager is not running."
failure $"cannot stop ServerManager: ServerManager is not running."
echo
return 1;
fi
killproc $servermanager
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/servermanager;
return $RETVAL
}
restart() {
stop
start
}
case $1 in
start)
start
;;
stop)
stop
;;
restart)
restart
;;
*)
echo $"Usage: servermngr {start|stop|restart}"
RETVAL=1
esac
exit $RETVAL