I'm stuck on Mule version 3.4.0 due to requirements at work. I'm writing a service script to manage the service lifecycle of Mule and would really like to be able to have it hang and wait for a debugger to connect based on whether a certain option is present in the parameters.
I'm comfortable with Bash and implementing this, but I'm having an extremely hard time trying to get Mule to pass along the
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=9989
to the underlying Java process, because Mule uses its own (stupid) wrapper to launch Java.
I'm trying to modify the bin/mule script to have a mode called debug which will pass the above debugger options to the JVM when invoked with:
bin/mule debug
My current work can be found here on PasteBin, and here is the relevant part near line 511:
debug() {
    echo "Debugging $APP_LONG_NAME..."
    getpid
    if [ "X$pid" = "X" ]
    then
        # The string passed to eval must handle spaces in paths correctly.
        COMMAND_LINE="$CMDNICE \"$WRAPPER_CMD\" \"$WRAPPER_CONF\" wrapper.syslog.ident=$APP_NAME wrapper.pidfile=\"$PIDFILE\" $ANCHORPROP $LOCKPROP"
        ######################################################################
        # Customized for Mule
        ######################################################################
        echo "command line: $COMMAND_LINE"
        echo "mule opts: $MULE_OPTS"
        echo "JPDA_OPTS: $JPDA_OPTS"
        eval $COMMAND_LINE $JPDA_OPTS $MULE_OPTS
        ######################################################################
    else
        echo "$APP_LONG_NAME is already running."
        exit 1
    fi
}
I cannot upgrade to a newer version of Mule. I need to find a way to modify this script to simply wait for a debugger when invoked with bin/mule debug. I've modified it enough to get into this debug function I've defined which is basically a copy of their own console function for starting in console mode. I can't seem to figure out how to get my debug opts passed to the JVM. Any ideas?
According to the documentation, the -debug parameter was already present in 3.4.x:
./mule -debug
Give it a try.
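If the built-in switch does not do what you need, another possibility (an untested sketch, not the documented approach) is to hand the debug options to the wrapper as additional JVM arguments rather than appending them after the command line - as far as I can tell, the Tanuki-style wrapper that Mule ships with builds the Java command from its own properties, so raw -X flags appended to the eval never reach the JVM. The wrapper.java.additional.* indices below are assumptions; pick numbers not already used in conf/wrapper.conf:

debug() {
    echo "Debugging $APP_LONG_NAME..."
    getpid
    if [ "X$pid" = "X" ]
    then
        # Hypothetical: forward the JPDA options as wrapper properties so the
        # wrapper adds them to the JVM it launches. Indices 21/22 are assumed
        # to be free; check conf/wrapper.conf for collisions.
        JPDA_OPTS="wrapper.java.additional.21=-Xdebug wrapper.java.additional.22=-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=9989"
        COMMAND_LINE="$CMDNICE \"$WRAPPER_CMD\" \"$WRAPPER_CONF\" $JPDA_OPTS wrapper.syslog.ident=$APP_NAME wrapper.pidfile=\"$PIDFILE\" $ANCHORPROP $LOCKPROP"
        eval $COMMAND_LINE $MULE_OPTS
    else
        echo "$APP_LONG_NAME is already running."
        exit 1
    fi
}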
To run my application from the command line, I run:
java -Dconfig.file=./config/devApp.config -jar ./build/libs/myJar.jar
and inside my code, I have:
String configPath = System.getProperty("config.file");
Which gets the property just fine. However, when I try to debug using the built-in debug NetBeans task, the property is null. The output of my run is:
Executing: gradle debug
Arguments: [-Dconfig.file=./config/devApp.config, -PmainClass=com.comp.entrypoints.Runner, -c, /home/me/Documents/projects/proj/settings.gradle]
JVM Arguments: [-Dconfig.file=./config/devApp.config]
These arguments come from the NetBeans Gradle task configuration: I set the property in both the Arguments and JVM Arguments fields to see if either would set it. Regardless of what I do, it is null. Can someone help me figure out how to set the system property so my app can get it?
You are setting the property on the Gradle JVM which has almost nothing to do with the JVM your application runs in. If you want to use Gradle to start your app for debugging, you have to tweak your Gradle build file to set or forward the system property to the debug task.
Assuming the debug task is of type JavaExec, this would be something like
systemProperty 'config.file', System.properties.'config.file'
in the configuration of your debug task to forward what you set up in the "JVM Arguments" field in Netbeans.
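With that forwarding in place, setting the property on the Gradle invocation (which is effectively what the NetBeans "JVM Arguments" field does) should reach the application JVM. A hedged example invocation, assuming the task is named debug:

# -D sets config.file on the Gradle JVM; the systemProperty line in the
# debug task's configuration then copies it onto the application JVM that
# JavaExec starts.
gradle debug -Dconfig.file=./config/devApp.config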
It seems that the "Arguments (each line is an argument):" and "JVM Arguments (each line is an argument):" fields provide values to the Gradle task itself. How I managed to pass properties over to the application was to append them to the jvmLineArgs argument (see image).
My application is now receiving the profiles property.
Thanks to #Vampire for the "guess work", lol!
I am trying to run the WebSphere Liberty profile server from the command line. I am following the steps described here: https://developer.ibm.com/wasdev/downloads/liberty-profile-using-non-eclipse-environments/
I have created the server with the name server1.
But when the extraction completes and I try to start the server using the command: server start server1
the server throws an error: CWWKE0054E: Unable to open file C:\wlp\wlp\usr\servers\server1\logs\C:\Users\Furquan\AppData\Local\Temp\\ihp_custom_batches.log. Now I know this can't be a valid path, but I don't know where and how to change it. Please help!!
This error is related to the LOG_FILE environment variable that has been defined in your environment by some other program. To solve that, you have the following options:
Remove the LOG_FILE env variable, if it is no longer needed by your system
If you can't do that, override it via a server.env file, which you can create in the wlp\usr\servers\serverName directory with the following content:
LOG_FILE=console.log
As a last resort (this is not recommended, as it will make your installation NOT SUPPORTED, and in certain installations it might get overwritten by updates) - modify the server.bat command-line script. In the script, find the following section:
if not defined LOG_FILE (
  set X_LOG_FILE=console.log
) else (
  set X_LOG_FILE=!LOG_FILE!
)
After the line set X_LOG_FILE=!LOG_FILE!, add another line that overrides it with the default, like this: set X_LOG_FILE=console.log
In general, I'd recommend the second solution (the server.env file), as it is the most portable and will work in any environment.
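For completeness, here is roughly how to confirm the culprit and apply the server.env fix from a Windows command prompt (the server path is taken from the error message and may differ in your installation):

rem Check whether some other program has set LOG_FILE globally
echo %LOG_FILE%

rem Create server.env containing only the LOG_FILE override, then retry
echo LOG_FILE=console.log> C:\wlp\wlp\usr\servers\server1\server.env
server start server1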
I had a similar problem with IBM Support Assistant V5. After I deleted %LOG_FILE% from the environment variables, it worked.
I have a Solaris 10 non-global zone. I am using MobaXterm. I log in to the box as root, then "su - caddrd", and then run "/usr/local/bin/sudo -u cadwebppc /cad/envs/qa-cm/cadwccDomain/ucm/cs/bin/UserAdmin". This is supposed to open a GUI console, but it is failing and I am not able to figure out why. Can somebody help with this?
It gives me this error:
No X11 DISPLAY variable was set, but this program performed an operation which requires it
Update - I am refining this question. I am able to run xclock as root, as caddrd, and as cadwebppc. But when I run it via sudo, it gives the same error. So it seems the problem is with passing the X11 environment through sudo.
Try setting the DISPLAY variable. If you are on the main display, this command should do:
export DISPLAY=:0.0
I found this link to be helpful:
http://www.snapdba.com/2013/02/ssh-x-11-forwarding-and-magic-cookies/
When switching to my oracle user (or in your case caddrd), the X11 forwarding information is lost. You can use xauth to copy it to the user's .Xauthority file.
So, as root, do:
echo "xauth add $(xauth list ${DISPLAY#localhost})"
Copy the printed command, sudo to your user, and execute it there.
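Putting that together for this case, the sequence looks roughly like this (a sketch - the display name and the cookie printed by xauth list will differ on your box):

# In the MobaXterm SSH session (as the user that owns the X11 forwarding):
echo $DISPLAY                    # e.g. localhost:10.0
xauth list ${DISPLAY#localhost}  # prints something like: myhost/unix:10  MIT-MAGIC-COOKIE-1  1a2b3c...

# After su / sudo to the target user (cadwebppc here):
export DISPLAY=localhost:10.0                          # same display name as above
xauth add myhost/unix:10 MIT-MAGIC-COOKIE-1 1a2b3c...  # paste the xauth list output here
/cad/envs/qa-cm/cadwccDomain/ucm/cs/bin/UserAdmin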
I have the following script, where I call a Java program that writes YAML output to the standard output stream, which is then echoed (simple).
#!/bin/bash
echo `/usr/lib/jvm/jre/bin/java -jar /etc/puppet/enc/enc.jar $1`
I have the above script in the file /etc/puppet/enc/javaEnc.sh. When I execute it, providing the node name as an argument, I get the following output.
---
classes:
  class1:
  class2:
The problem is, on the agent node, I get the error message
err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find node 'node-agent-1'; cannot compile
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run
I have found that the script does not execute (or rather, my Java program is not called - I don't know why): in my Java program I write the output to a file in addition to doing a System.out.print, and that file is never written.
I have another script that reads a file (data.yaml) containing the same data as the output above and writes it to the output stream:
#!/bin/bash
cat "/etc/puppet/enc/data.yaml"
When this script is set as external_nodes, it works fine and the Puppet agent configures itself. Can I please get an idea of where I am going wrong? The Java program actually queries some external resources, classifies the classes, and produces the output - it takes around 10 seconds to do this. Could that be a problem? I have seen Ruby and Python solutions but couldn't get them to work either. I would most prefer to do this with Java.
In my puppet.conf file I have the following.
[master]
node_terminus = exec
external_nodes = /etc/puppet/enc/javaEnc.sh
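One thing worth checking (a guess, not a confirmed fix): echoing an unquoted command substitution collapses the newlines in the YAML, and Puppet expects the ENC's stdout to be valid YAML. A simpler wrapper that just hands the Java program's output straight through would look like this:

#!/bin/bash
# Hypothetical revision of /etc/puppet/enc/javaEnc.sh: exec java directly so
# its stdout (the YAML) reaches the master untouched, and quote "$1" so the
# node name is passed through as a single argument.
exec /usr/lib/jvm/jre/bin/java -jar /etc/puppet/enc/enc.jar "$1"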
I have set up an automated deployment script (in shell script) for my web application.
It uses java, tomcat, maven and a postgres database.
The deployment script does this:
1. builds the deployable application from source repository
2. stops tomcat
3. applies database migration patches
4. deploys the war files in tomcat
5. starts tomcat (by invoking $TOMCAT_HOME/bin/startup.sh)
6. exits with a success message
It's all working and it's pretty neat - but it needs a little improvement.
You see, even though it exits with a success message, sometimes the deploy was not successful because the web application did not start correctly.
I would like to refactor steps 5 and 6 so that, after bringing up the tomcat server, the deployment script would "tail -f" the catalina.out file, looking either for a "server started successfully" message or an exception stack trace.
The tail -f output up to that point should be part of the output of the deployment script, and step 6 would "exit 0" or "exit 1" accordingly.
I know that should be possible, if not in shell script, maybe with python.
The problem is I'm a java specialist - and by specialist I mean I suck at everything else :-)
Help please? :-)
Maybe something like this?
tmp=$(mktemp -t catalina.XXXXXXX) || exit 136
trap 'rm "$tmp"' 0
trap 'exit 255' 2 15
tail -n 200 catalina.out >"$tmp"
if grep -q error "$tmp"; then
cat "$tmp"
exit 1
fi
exit 0
On the other hand, if startup.sh were competently coded, you could just
if startup.sh; then
    tail -f catalina.out
else
    exit $?
fi
which can be shortened to
startup.sh || exit $?
tail -f catalina.out
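A slightly longer sketch that matches what the question asked for - start Tomcat, then watch catalina.out until either the startup message or an error shows up, and exit accordingly. The patterns and the 120-second budget are assumptions, and it polls with grep rather than tail -f so the timeout stays simple:

#!/bin/bash
# Hedged sketch, not a drop-in: adjust patterns, paths and timeout to taste.
LOG="$TOMCAT_HOME/logs/catalina.out"

: > "$LOG"                           # truncate so matches from earlier runs don't count
"$TOMCAT_HOME/bin/startup.sh" || exit 1

for i in $(seq 1 120); do
    if grep -q 'INFO: Server startup in' "$LOG"; then
        echo "Tomcat started successfully."
        exit 0
    fi
    if grep -qE 'SEVERE|Exception' "$LOG"; then
        echo "Tomcat logged an error during startup:" >&2
        tail -n 50 "$LOG" >&2
        exit 1
    fi
    sleep 1
done

echo "Timed out waiting for Tomcat to start." >&2
exit 1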
As an alternative, you might want to take a look at the Apache Tomcat Manager application. It supports, amongst other things:
Deploying applications remotely, and from local paths
Listing currently deployed applications
Reloading existing applications
Starting an existing application
Stopping an existing application
Undeploying an existing application
The manager provides a web interface that can be called via curl, and which returns simple, parseable messages to indicate the status of the invoked command. Management functions can also be invoked via JMX, or Ant scripts. All in all, a very handy tool.
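For instance, with Tomcat 7's text interface and a user granted the manager-script role (the URL, credentials and context path below are assumptions):

# List deployed applications and their state
curl -s -u deployer:secret "http://localhost:8080/manager/text/list"

# Start or stop an existing application by context path
curl -s -u deployer:secret "http://localhost:8080/manager/text/start?path=/myapp"
curl -s -u deployer:secret "http://localhost:8080/manager/text/stop?path=/myapp"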
I ended up implementing a solution using Python's subprocess.Popen, as suggested by #snies.
Here's what it looks like:
waitForIt.py
#! /usr/bin/env python
import subprocess
import sys

def main(argv):
    filename = argv[1]
    match = argv[2]
    # Follow the file from its current end and echo each new line until the
    # match string shows up, then stop the tail process.
    p = subprocess.Popen(['tail', '-n', '0', '-f', filename], stdout=subprocess.PIPE)
    while True:
        line = p.stdout.readline()
        print line,
        if match in line:
            break
    p.terminate()

if __name__ == "__main__":
    main(sys.argv)
tailUntil.sh
#!/bin/bash
set -e
filename=$1
match=$2
thisdir="$(dirname "$0")"
python "$thisdir/waitForIt.py" "$filename" "$match"
and then
startTomcat.sh
${TOMCAT_HOME}/bin/startup.sh
logDeploy.sh "Agora vamos dar um tail no catalina.out..."
util_tailUntil.sh "$TOMCAT_HOME/logs/catalina.out" 'INFO: Server startup in '
It doesn't do exactly what I originally intended (it still exits with return code 0 even when there is a stack trace - but that could be changed with a little bit more Python magic),
but all of tomcat's initialization log is now part of the automated deploy output (and easily viewable on Jenkins' deploy job) -
so that's good enough.