RPM spec: how to avoid duplication - Java

Hi, I have a problem with code duplication while building an RPM package.
I have a spec like this:
Summary : ${product.id} ${rpmType}
Name : ${product.id}_${rpmType}
.....
%build
%install
%files
%defattr(-,root,java,750)
%{home}
%clean
rm -rf %{buildroot}
%pre
CHECK_PAUSE=2;
echo -n "Stopping tomcat";
sh %{binary}/shutdown.sh
rc=0
while [ "$rc" == 0 ]; do
sleep $CHECK_PAUSE;
wget -q -O /dev/null -S http://localhost:8080/some-server/test;
rc=$?;
done;
echo "Tomcat: STOPPED"
mv %{home}/%{warname} %{home}/%{warname}.`date +%Y%d%m`
rm -rf %{home}/some-server
echo done.
%post
CHECK_PAUSE=2;
echo -n "Starting tomcat";
sh %{binary}/startup.sh
rc=1
while [ "$rc" -ne 0 ]; do
sleep $CHECK_PAUSE
wget -q -O /dev/null -S http://localhost:8080/some-server/test
rc=$?
done
echo "Tomcat: STARTED"
%preun
if [ "$1" == "0" ]; then
#STOP TOMCAT HERE SAME WAY AS IN PRE
fi
%postun
if [ "$1" == "0" ]; then
#START TOMCAT HERE SAME WAY AS IN POST
fi
The scripts executed in the %preun and %postun sections are the same as in the %pre and %post sections, but I don't want to just copy/paste them. Is there some sophisticated solution to avoid the code duplication here?

Here is a possible solution:
1. Create separate files for each piece of reusable shell script code.
2. Create a sort of pre-processor (possibly another shell script) that inserts the files created in Step 1 into the spec file under the appropriate scriptlet label (i.e. %pre, %post, %preun, %postun).
If you are using a source control management system, you might want to track these additional files as well.
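A minimal sketch of such a pre-processor, assuming the reusable snippets live in stop-tomcat.sh and start-tomcat.sh and the spec template package.spec.in marks the insertion points with @STOP_TOMCAT@ and @START_TOMCAT@ placeholder lines (all of these names are illustrative):
#!/bin/sh
# Expand each placeholder line in the spec template with the matching snippet file,
# then drop the placeholder line itself.
sed -e '/@STOP_TOMCAT@/r stop-tomcat.sh' -e '/@STOP_TOMCAT@/d' \
-e '/@START_TOMCAT@/r start-tomcat.sh' -e '/@START_TOMCAT@/d' \
package.spec.in > package.spec
The %pre, %post, %preun and %postun scriptlets in the template then contain only the placeholder lines, so each piece of shell code lives in exactly one place.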

Related

getting the error "TERM environment variable not set" when running a bash script from Java process builder [duplicate]

I have a file.sh with the following; when it runs, it shows: TERM environment variable not set.
smbmount //172.16.44.9/APPS/Interfas/HERRAM/sc5 /mnt/siscont5 -o iocharset=utf8,username=backup,password=backup2011,r
if [ -f /mnt/siscont5/HER.TXT ]; then
echo "No puedo actualizar ahora"
umount /mnt/siscont5
else
if [ ! -f /home/emni/siscont5/S5.TXT ]; then
echo "Puedo actualizar... "
touch /home/emni/siscont5/HER.TXT
touch /mnt/siscont5/SC5.TXT
mv -f /home/emni/siscont5/CCORPOSD.DBF /mnt/siscont5
mv -f /home/emni/siscont5/CCTRASD.DBF /mnt/siscont5
rm /mnt/siscont5/SC5.TXT
rm /home/emni/siscont5/HER.TXT
echo "La actualizacion ha sido realizada..."
else
echo "No puedo actualizar ahora: Interfaz exportando..."
fi
fi
umount /mnt/siscont5
echo "/mnt/siscont5 desmontada..."
You can check whether it's really not set: run the command set | grep TERM.
If it isn't, you can set it like this:
export TERM=xterm
Using a terminal command, e.g. "clear", in a script called from cron (no terminal) will trigger this error message. In your particular script, the smbmount command expects a terminal, in which case the workarounds above are appropriate.
You've answered the question with this statement:
Cron calls this .sh every 2 minutes
Cron does not run in a terminal, so why would you expect one to be set?
The most common reason for getting this error message is that the script attempts to source the user's .profile, which does not check that it's running in a terminal before doing something tty-related. Workarounds include using a shebang line like:
#!/bin/bash -p
which causes the sourcing of system-level profile scripts that (one hopes) do not attempt to do anything too silly and have guards around code that depends on being run from a terminal.
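As a hedged illustration of such a guard (the clear call is just an example of a tty-dependent command), terminal-only work can be skipped whenever no terminal is attached:
# Minimal sketch of a tty guard for .profile or the script itself
if [ -t 1 ]; then
# stdout is a terminal, so tty-dependent commands are safe here
clear
fi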
If this is the entirety of the script, then the TERM error is coming from something other than the plain content of the script.
You can replace:
export TERM=xterm
with:
export TERM=linux
This works even on the kernel console of a bare (virgin) system.
SOLVED: On Debian 10, by adding "export TERM=xterm" to the script executed by crontab (root) but run as www-data.
$ crontab -e
*/15 * * * * /bin/su - www-data -s /bin/bash -c '/usr/local/bin/todos.sh'
The file /usr/local/bin/todos.sh:
#!/bin/bash -p
export TERM=xterm && cd /var/www/dokuwiki/data/pages && clear && grep -r -h '|(TO-DO)' > /var/www/todos.txt && chmod 664 /var/www/todos.txt && chown www-data:www-data /var/www/todos.txt
If you are using the Docker PowerShell image, set the environment variable for the terminal with the -e flag like this:
docker run -i -e "TERM=xterm" mcr.microsoft.com/powershell

Check and run/restart processes, bash [duplicate]

This question already has answers here:
Linux Script to check if process is running and act on the result
I wrote a bash-script to check if a process is running. It doesn't work since the ps command always returns exit code 1. When I run the ps command from the command-line, the $? is correctly set, but within the script it is always 1. Any idea?
#!/bin/bash
SERVICE=$1
ps -a | grep -v grep | grep $1 > /dev/null
result=$?
echo "exit code: ${result}"
if [ "${result}" -eq "0" ] ; then
echo "`date`: $SERVICE service running, everything is fine"
else
echo "`date`: $SERVICE is not running"
fi
Bash version: GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
There are a few really simple methods:
pgrep procname && echo Running
pgrep procname || echo Not running
killall -q -0 procname && echo Running
pidof procname && echo Running
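As a minimal sketch, the first of those one-liners dropped into the structure of the original script could look like this (pgrep -x matches the exact process name; drop -x for substring matching):
#!/bin/bash
SERVICE=$1
if pgrep -x "$SERVICE" > /dev/null; then
echo "`date`: $SERVICE service running, everything is fine"
else
echo "`date`: $SERVICE is not running"
fi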
This trick works for me. Hope it helps. Save the following as checkRunningProcess.sh:
#!/bin/bash
ps_out=`ps -ef | grep $1 | grep -v 'grep' | grep -v $0`
result=$(echo $ps_out | grep "$1")
if [[ "$result" != "" ]];then
echo "Running"
else
echo "Not Running"
fi
Make checkRunningProcess.sh executable, and then use it.
Example usage:
20:10 $ checkRunningProcess.sh proxy.py
Running
20:12 $ checkRunningProcess.sh abcdef
Not Running
I tried your version on bash version 3.2.29 and it worked fine. However, you could do something like the suggestion above; an example here:
#!/bin/sh
SERVICE="$1"
RESULT=`ps -ef | grep $1 | grep -v 'grep' | grep -v $0`
result=$(echo $RESULT | grep "$1")
if [[ "$result" != "" ]];then
echo "Running"
else
echo "Not Running"
fi
I use this one to check every 10 seconds whether a process is running, start it if not, and it allows multiple arguments:
#!/bin/sh
PROCESS="$1"
PROCANDARGS=$*
while :
do
RESULT=`pgrep ${PROCESS}`
if [ "${RESULT:-null}" = null ]; then
echo "${PROCESS} not running, starting "$PROCANDARGS
$PROCANDARGS &
else
echo "running"
fi
sleep 10
done
Check whether your script's name contains $SERVICE. If it does, it will show up in the ps results, causing the script to always think the service is running. You can grep it against the current filename like this:
#!/bin/sh
SERVICE=$1
if ps ax | grep -v grep | grep -v $0 | grep $SERVICE > /dev/null
then
echo "$SERVICE service running, everything is fine"
else
echo "$SERVICE is not running"
fi
A working one:
#!/bin/bash
CHECK=$0
SERVICE=$1
DATE=`date`
OUTPUT=$(ps aux | grep -v grep | grep -v $CHECK |grep $1)
echo $OUTPUT
if [ "${#OUTPUT}" -gt 0 ] ;
then echo "$DATE: $SERVICE service running, everything is fine"
else echo "$DATE: $SERVICE is not running"
fi
Despite some success with the /dev/null approach in bash, when I pushed the solution to cron it failed. Checking the size of the returned command output worked perfectly, though. The ampersand allows bash to exit.
#!/bin/bash
SERVICE=/path/to/my/service
result=$(ps ax|grep -v grep|grep $SERVICE)
echo ${#result}
if [ ${#result} -gt 0 ]
then
echo " Working!"
else
echo "Not Working.....Restarting"
/usr/bin/xvfb-run -a /opt/python27/bin/python2.7 $SERVICE &
fi
#!/bin/bash
ps axho comm| grep $1 > /dev/null
result=$?
echo "exit code: ${result}"
if [ "${result}" -eq "0" ] ; then
echo "`date`: $SERVICE service running, everything is fine"
else
echo "`date`: $SERVICE is not running"
/etc/init.d/$1 restart
fi
Something like this
Those are helpful hints. I just needed to know if a service was running when I started the script, so I could leave the service in the same state when I left. I ended up using this:
HTTPDSERVICE=$(ps -A | grep httpd | head -1)
[ -z "$HTTPDSERVICE" ] && echo "No apache service running."
I found the problem: ps -ae works instead of ps -a.
I guess it has to do with my rights in the shared hosting environment; there's apparently a difference between executing "ps -a" from the command line and executing it from within a bash script.
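With that change, the ps line in the original script becomes:
ps -ae | grep -v grep | grep $1 > /dev/null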
A simple script version of one of Andor's above suggestions:
#!/bin/bash
pgrep $1 && echo Running
If the above script is called test.sh then, in order to test, type:
test.sh NameOfProcessToCheck
e.g.
test.sh php
I was wondering if it would be a good idea to have progressive attempts at terminating a process, so you pass this function a process name, e.g. func_terminate_process "firefox", and it tries things more nicely first, then moves on to kill.
func_terminate_process()
{
    # -- NICE: try to use killall to stop process(s)
    killall ${1} > /dev/null 2>&1 ;sleep 10
    # -- if we do not see the process, just end the function
    pgrep ${1} > /dev/null 2>&1 || return
    # -- UGLY: step through every pid and use kill -9 on them individually
    for PID in $(pidof ${1}) ;do
        echo "Terminating Process: [${1}], PID [${PID}]"
        kill -9 ${PID} ;sleep 10
        # -- NASTY: if kill -9 fails, try SIGTERM on the PID
        if ps -p ${PID} > /dev/null ;then
            echo "${PID} is still running, forcefully terminating with SIGTERM"
            kill -SIGTERM ${PID} ;sleep 10
        fi
    done
    # -- If after all that, we still see the process, report that to the screen.
    pgrep ${1} > /dev/null 2>&1 && echo "Error, unable to terminate all or any of [${1}]" || echo "Terminate process [${1}] : SUCCESSFUL"
}
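For example, once the function above is defined in a script:
func_terminate_process "firefox"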
I need to do this from time to time and end up hacking the command line until it works.
For example, here I want to see if I have any SSH connections (the 8th column returned by "ps" is the running "path-to-procname" and is filtered by "awk"):
ps | awk -e '{ print $8 }' | grep ssh | sed -e 's/.*\///g'
Then I put it in a shell-script, ("eval"-ing the command line inside of backticks), like this:
#!/bin/bash
VNC_STRING=`ps | awk -e '{ print $8 }' | grep vnc | sed -e 's/.*\///g'`
if [ ! -z "$VNC_STRING" ]; then
echo "The VNC STRING is not empty, therefore your process is running."
fi
The "sed" part trims the path to the exact token and might not be necessary for your needs.
Here's the example I used to arrive at your answer. I wrote it to automatically create 2 SSH tunnels and launch a VNC client for each.
I run it from my Cygwin shell to do admin on my backend from my Windows workstation, so I can jump to UNIX/LINUX-land with one command (this also assumes the client RSA keys have already been "ssh-copy-id"-ed and are known to the remote host).
It's idempotent in that each proc/command only fires when its $VAR evals to an empty string.
It appends " | wc -l" to store the number of running procs that match (i.e., the number of lines found) instead of the proc-name for each $VAR, to suit my needs. I keep the "echo" statements so I can re-run and diagnose the state of both connections.
#!/bin/bash
SSH_COUNT=`eval ps | awk -e '{ print $8 }' | grep ssh | sed -e 's/.*\///g' | wc -l`
VNC_COUNT=`eval ps | awk -e '{ print $8 }' | grep vnc | sed -e 's/.*\///g' | wc -l`
if [ $SSH_COUNT = "2" ]; then
echo "There are already 2 SSH tunnels."
elif [ $SSH_COUNT = "1" ]; then
echo "There is only 1 SSH tunnel."
elif [ $SSH_COUNT = "0" ]; then
echo "connecting 2 SSH tunnels."
ssh -L 5901:localhost:5901 -f -l USER1 HOST1 sleep 10;
ssh -L 5904:localhost:5904 -f -l USER2 HOST2 sleep 10;
fi
if [ $VNC_COUNT = "2" ]; then
echo "There are already 2 VNC sessions."
elif [ $VNC_COUNT = "1" ]; then
echo "There is only 1 VNC session."
elif [ $VNC_COUNT = "0" ]; then
echo "launching 2 vnc sessions."
vncviewer.exe localhost:1 &
vncviewer.exe localhost:4 &
fi
This is very Perl-like to me, and possibly more Unix utils than true shell scripting. I know there are lots of "MAGIC" numbers and cheesy hard-coded values, but it works (I think I'm also in poor taste for using so much UPPERCASE). Flexibility could be added with some cmd-line args to make this more versatile, but I wanted to share what worked for me. Please improve and share. Cheers.
A solution with service and awk that takes in a comma-delimited list of service names.
First, it's probably a good bet that you'll need root privileges to do what you want. If you don't need that check, you can remove it.
#!/usr/bin/env bash
# First parameter is a comma-delimited string of service names i.e. service1,service2,service3
SERVICES=$1
ALL_SERVICES_STARTED=true
if [ "$(id -u)" != "0" ]; then
echo "root privileges are required" 1>&2
exit 1
fi
for service in ${SERVICES//,/ }
do
STATUS=$(service ${service} status | awk '{print $2}')
if [ "${STATUS}" != "started" ]; then
echo "${service} not started"
ALL_SERVICES_STARTED=false
fi
done
if ${ALL_SERVICES_STARTED} ; then
echo "All services started"
exit 0
else
echo "Check Failed"
exit 1
fi
The simplest check by process name:
bash -c 'checkproc ssh.exe ; while [ $? -eq 0 ] ; do echo "proc running";sleep 10; checkproc ssh.exe; done'

IntelliJ: how to update run configuration environment from an existing process?

I would like to update the environment variables of a given run/debug configuration from an existing/running process (selected by PID).
Is there any plugin? I could not find such a thing in the JetBrains repository.
What reasonably convenient solution (no hand-copying of variable/value pairs; no IntelliJ restart) could there be to update these environment variables?
Just to get the functionality, somewhat, I wrote a bash function that works on Linux.
idea_run_env()
{
typeset rcf=${1}
if ! [ -f "$rcf" ]; then
echo "Can not find run configuration file '$rcf'" 1>&2*
return 2
fi
typeset rcf_back=${rcf}.bak
if (( $# > 1 )); then
typeset pid=${2}
# no Esc in attr value, I may/should escape/encode such values but I won't use them
typeset to_be_bloc=$(xargs -n 1 -0 printf "%s\n" < /proc/$pid/environ | grep -a -v -E $'\e' | while IFS='=' read k v; do echo '<env name="'$k'" value="'$v'" />'; done)
if [ -z "$to_be_bloc" ] ; then
echo "Can not find environment for given PID '$pid'" 1>&2*
return 2
fi
#typeset was_block=$(sed -e '/<envs>/,/<\/envs>/{//!b};d' "$rcf")
typeset prefix=$(sed -e '/<envs>/q' "$rcf")
typeset postfix=$(sed -n -e '/<\/envs>/,$p' "$rcf")
[ -f "$rcf_back" ] || mv "$rcf" "$rcf_back"
echo "${prefix}" > "$rcf"
echo "${to_be_bloc}" >> "$rcf"
echo "${postfix}" >> "$rcf"
elif [ -f "$rcf_back" ]; then
mv -f "$rcf_back" "$rcf"
else
echo "Can not find backup to restore" 1>&2
return 1
fi
}
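A hypothetical invocation (the run configuration path and the PID are placeholders):
# Patch the <envs> block of a run configuration from the environment of PID 12345
idea_run_env .idea/runConfigurations/MyApp.xml 12345
# Calling it again without a PID restores the original file from the .bak backup
idea_run_env .idea/runConfigurations/MyApp.xml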

Have script detect gnome session

I have a makeself script which I expect to be run as root; it's a desktop installer.
At the end of the script, the software which was recently installed to the filesystem tries to launch in user-space.
This works well using sudo -u $(logname) /path/to/application (or alternatively sudo -u $SUDO_USER ... in Ubuntu 16.04); however, a critical environmental variable from the user is missing:
GNOME_DESKTOP_SESSION_ID
I need GNOME_DESKTOP_SESSION_ID because the child process belongs to Java and Java uses this environmental variable for detecting the GtkLookAndFeel.
However, attempts to use sudo -i have failed.
From some basic tests, GNOME_DESKTOP_SESSION_ID doesn't appear to be a natural environmental variable when this user logs in. For example, if I CTRL+ALT+F1 to a terminal, env | grep GNOME yields nothing, whereas XTerm and gnome-terminal both yield GNOME_DESKTOP_SESSION_ID.
How can I get a hold of this GNOME_DESKTOP_SESSION_ID variable from within the installer script without requiring users to pass something such as the -E parameter to the sudo command?
Note: although GtkLookAndFeel is the primary look and feel for Linux, I prefer not to hard-code the export JAVA_OPTS either; I prefer to continue to fall back on Oracle's detection techniques for support, longevity and scalability reasons.
Update: In Ubuntu, GNOME_DESKTOP_SESSION_ID lives in /usr/share/upstart/sessions/xsession-init.conf:
initctl set-env --global GNOME_DESKTOP_SESSION_ID=this-is-deprecated
which leads to using initctl get-env to retrieve it. Unfortunately, this does not help within a new sudo shell, nor does any (optimistic) attempt at dbus-launch.
It turns out this is a two-step process...
Read the user's UPSTART_SESSION environmental variable from /proc/$pid/environ.
Then export UPSTART_SESSION and call initctl --user get-env GNOME_DESKTOP_SESSION_ID.
To make this a bit more scalable to other variables, I've wrapped this into a bash helper function. This function should assist in fetching other user-environment variables as well. Word of caution: it won't work if the variable's value contains a space.
In the below example, only UPSTART_SESSION and GNOME_DESKTOP_SESSION_ID are required to answer the question.
Once sudo_env is called, the next call to sudo -u ... must be changed to sudo -E -u .... The -E will import the newly exported variables for use by a child process.
# Provide user environmental variables to the sudo environment
function sudo_env() {
userid="$(logname 2>/dev/null || echo $SUDO_USER)"
pid=$(ps aux |grep "^$userid" |grep "dbus-daemon" | grep "unix:" |awk '{print $2}')
# Replace null delimiters with newline for grep
envt=$(cat "/proc/$pid/environ" |tr '\0' '\n')
# List of environmental variables to use; adjust as needed
# UPSTART_SESSION must come before GNOME_DESKTOP_SESSION_ID
exports=( "UPSTART_SESSION" "DISPLAY" "DBUS_SESSION_BUS_ADDRESS" "XDG_CURRENT_DESKTOP" "GNOME_DESKTOP_SESSION_ID" )
for i in "${exports[@]}"; do
# Re-set the variable within this session by name
# Careful, this technique won't yet work with spaces
if echo "$envt" | grep "^$i=" > /dev/null 2>&1; then
eval "$(echo "$envt" | grep "^$i=")" > /dev/null 2>&1
export $i > /dev/null 2>&1
elif initctl --user get-env $i > /dev/null 2>&1; then
eval "$i=$(initctl --user get-env $i)" > /dev/null 2>&1
export $i > /dev/null 2>&1
fi
echo "$i=${!i}"
done
}
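A hedged usage sketch from the installer's point of view (the application path is a placeholder taken from the question):
# Import the user's session variables, then launch in user-space with -E
sudo_env
sudo -E -u "$(logname)" /path/to/application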
You need to create a new file in /etc/sudoers.d with this content:
Defaults env_keep+=GNOME_DESKTOP_SESSION_ID
But there is a problem: if you are already inside sudo, it will not be read again.
So the complete solution is to use sudo inside your script to create this file and then execute your command in another sudo:
#!/bin/bash
# ignore sudo
if [[ -z $SUDO_USER ]]; then
#save current dir
DIR="$(pwd)"
#generate random string (file name compatible)
NEW_UUID=$(cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1)
#create env_keep file
sudo -i -- <<EOF0
echo "Defaults env_keep+=GNOME_DESKTOP_SESSION_ID" > /etc/sudoers.d/"$NEW_UUID"_keep_java_laf
EOF0
sudo -u YOUR_USER -i -- <<EOF
#go to original directory
cd "$DIR"
#execute your java command
java YOUR_COMMAND
EOF
#clean file
sudo rm -f /etc/sudoers.d/"$NEW_UUID"_keep_java_laf
else
echo "sudo not allowed!";exit 1;
fi

jboss_init_redhat is not working

I am using Red Hat and JBoss 4.2.2 GA as the application server.
I have a basic problem: when I try to run
./jboss_init_redhat.sh start
it does not work.
Basically, inside the jboss_init_redhat.sh file I want to run:
/opt/jboss/bin/run.sh -c X -b 0.0.0.0
This line works on its own, but jboss_init_redhat.sh does not; that's the problem.
I checked all permissions and there is no permission problem.
jboss_init_redhat.sh is below:
#!/bin/sh
#
# $Id: jboss_init_redhat.sh 60992 2007-02-28 11:33:27Z dimitris@jboss.org $
#
# JBoss Control Script
#
# To use this script run it as root - it will switch to the specified user
#
# Here is a little (and extremely primitive) startup/shutdown script
# for RedHat systems. It assumes that JBoss lives in /usr/local/jboss,
# it's run by user 'jboss' and JDK binaries are in /usr/local/jdk/bin.
# All this can be changed in the script itself.
#
# Either modify this script for your requirements or just ensure that
# the following variables are set correctly before calling the script.
#define where jboss is - this is the directory containing directories log, bin, conf etc
JBOSS_HOME=${JBOSS_HOME:-"/opt/jboss"}
#define the user under which jboss will run, or use 'RUNASIS' to run as the current user
JBOSS_USER=${JBOSS_USER:-"jboss"}
#make sure java is in your path
JAVAPTH=${JAVAPTH:-"/usr/local/jdk/bin"}
#configuration to use, usually one of 'minimal', 'default', 'all'
JBOSS_CONF=${JBOSS_CONF:-"ikarus"}
#if JBOSS_HOST specified, use -b to bind jboss services to that address
JBOSS_HOST="0.0.0.0"
JBOSS_BIND_ADDR=${JBOSS_HOST:+"-b $JBOSS_HOST"}
#define the classpath for the shutdown class
JBOSSCP=${JBOSSCP:-"$JBOSS_HOME/bin/shutdown.jar:$JBOSS_HOME/client/jnet.jar"}
#define the script to use to start jboss
JBOSSSH=${JBOSSSH:-"$JBOSS_HOME/bin/run.sh -c $JBOSS_CONF $JBOSS_BIND_ADDR"}
if [ "$JBOSS_USER" = "RUNASIS" ]; then
SUBIT=""
else
SUBIT="su - $JBOSS_USER -c "
fi
if [ -n "$JBOSS_CONSOLE" -a ! -d "$JBOSS_CONSOLE" ]; then
# ensure the file exists
touch $JBOSS_CONSOLE
if [ ! -z "$SUBIT" ]; then
chown $JBOSS_USER $JBOSS_CONSOLE
fi
fi
if [ -n "$JBOSS_CONSOLE" -a ! -f "$JBOSS_CONSOLE" ]; then
echo "WARNING: location for saving console log invalid: $JBOSS_CONSOLE"
echo "WARNING: ignoring it and using /dev/null"
JBOSS_CONSOLE="/dev/null"
fi
#define what will be done with the console log
JBOSS_CONSOLE=${JBOSS_CONSOLE:-"/dev/null"}
JBOSS_CMD_START="cd $JBOSS_HOME/bin; $JBOSSSH"
JBOSS_CMD_STOP=${JBOSS_CMD_STOP:-"java -classpath $JBOSSCP org.jboss.Shutdown --shutdown"}
if [ -z "`echo $PATH | grep $JAVAPTH`" ]; then
export PATH=$PATH:$JAVAPTH
fi
if [ ! -d "$JBOSS_HOME" ]; then
echo JBOSS_HOME does not exist as a valid directory : $JBOSS_HOME
exit 1
fi
echo JBOSS_CMD_START = $JBOSS_CMD_START
case "$1" in
start)
cd $JBOSS_HOME/bin
if [ -z "$SUBIT" ]; then
eval $JBOSS_CMD_START >${JBOSS_CONSOLE} 2>&1 &
else
$SUBIT "$JBOSS_CMD_START >${JBOSS_CONSOLE} 2>&1 &"
fi
;;
stop)
if [ -z "$SUBIT" ]; then
$JBOSS_CMD_STOP
else
$SUBIT "$JBOSS_CMD_STOP"
fi
;;
restart)
$0 stop
$0 start
;;
*)
echo "usage: $0 (start|stop|restart|help)"
esac
If you change the JBOSS_USER setting to RUNASIS, it should work properly.
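In other words, a minimal sketch of that change inside the script:
JBOSS_USER=${JBOSS_USER:-"RUNASIS"}
or, since the script honours a pre-set value, without editing it at all:
JBOSS_USER=RUNASIS ./jboss_init_redhat.sh start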
