From Java code I am calling my script file in the following way:
Process process =Runtime.getRuntime().exec("sh /usr/local/garner/garnerd start");
int status = process.waitFor();
The garnerd script code is given below (it in turn calls garner.sh):
function start()
{
sh /usr/local/garner/garner.sh > /usr/local/garner/log/garner.log &
echo "Garner is started"
}
case "$1" in
start)
start
;;
*)
echo "Usage: garnerd {start|stop|restart|status|reconfig}"
exit 1
esac
exit $retval
The Garner shell script (garner.sh) source is:
/usr/local/garner/garnerd status
if [ $? -eq 0 ]; then
echo "`date` $0 :Garner is allready running"
exit 0
fi
touch /dev/blank
cd /usr/local/garner
uname -a | grep -i cygwin
if [ $? -eq 0 ]
then
export CYGWIN="$CYGWIN error_start=dumper -d %1 %2"
/usr/local/garner/garner.exe -n -c /usr/local/garner/conf/garner.conf -p /usr/local/garner/garner.pid -l /usr/local/garner/log/garner.log -L 4 &
else
/usr/local/garner/garner -c /usr/local/garner/conf/garner.conf -p /usr/local/garner/garner.pid -l /usr/local/garner/log/garner.log -L 4 &
fi
cd -
When I call ./garnerd start, it creates a pid file. If I then look at the contents of this file, it shows the process id of garner.
[root@localhost garner]# cat garner.pid
9282
But when I check detailed information about that process id with the following command, it shows "SigBlk: 0000000000000004", which means signal 3 (SIGQUIT) is blocked.
[root@localhost garner]# cat /proc/9282/status
Name: garner
State: S (sleeping)
SleepAVG: 78%
Tgid: 9282
Pid: 9282
PPid: 9281
TracerPid: 0
Uid: 0 0 0 0
Gid: 0 0 0 0
FDSize: 64
Groups: 0 1 2 3 4 6 10
VmPeak: 58888 kB
VmSize: 58884 kB
VmLck: 0 kB
VmHWM: 7124 kB
VmRSS: 7124 kB
VmData: 17192 kB
VmStk: 88 kB
VmExe: 84 kB
VmLib: 4480 kB
VmPTE: 156 kB
StaBrk: 05af0000 kB
Brk: 060ec000 kB
StaStk: 7fff0329d950 kB
Threads: 2
SigQ: 0/47721
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000004
SigIgn: 0000000000001002
SigCgt: 0400000180006005
CapInh: 0000000000000000
CapPrm: 00000000fffffeff
CapEff: 00000000fffffeff
Cpus_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00ffffff
Mems_allowed: 00000000,00000003
But if I manually run the command (./garnerd start) from the Linux machine, it shows "SigBlk: 0000000000000000".
Does this mean Java blocks signals for the spawned process? If yes, then why, and under which circumstances?
From the API doc of java.lang.Process:
Because some native platforms only provide limited buffer size for
standard input and output streams, failure to promptly write the input
stream or read the output stream of the subprocess may cause the
subprocess to block, or even deadlock.
This article explains the issue in detail and suggests a solution.
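A minimal sketch of that advice applied to the first call above: merge stderr into stdout and drain the output before waiting for the process to finish, so a full pipe buffer cannot block the child. The script path is the one from the question; the class name and output prefix are illustrative.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class StartGarner {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder("sh", "/usr/local/garner/garnerd", "start");
        pb.redirectErrorStream(true); // merge stderr into stdout
        Process process = pb.start();
        // Drain the combined output so the subprocess can never block on a full pipe
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println("garnerd: " + line);
            }
        }
        int status = process.waitFor();
        System.out.println("exit status: " + status);
    }
}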
I used pip install pyspark in a Python environment. Java is installed, but when I try to initialise a Spark session I get a Java error: "Java gateway process exited before sending its port number".
spark = SparkSession \
.builder \
.appName("CustomerChurn") \
.master("local") \
.config() \
.getOrCreate()
RuntimeError Traceback (most recent call last)
Input In [3], in <cell line: 3>()
1 findspark.init()
3 spark = SparkSession \
4 .builder \
5 .appName("CustomerChurn") \
6 .master("local") \
7 .config() \
----> 8 .getOrCreate()
File ~\anaconda3\envs\CustomerChurnProject\lib\site-packages\pyspark\sql\session.py:269, in SparkSession.Builder.getOrCreate(self)
267 sparkConf.set(key, value)
268 # This SparkContext may be an existing one.
--> 269 sc = SparkContext.getOrCreate(sparkConf)
270 # Do not update `SparkConf` for existing `SparkContext`, as it's shared
271 # by all sessions.
272 session = SparkSession(sc, options=self._options)
File ~\anaconda3\envs\CustomerChurnProject\lib\site-packages\pyspark\context.py:483, in SparkContext.getOrCreate(cls, conf)
481 with SparkContext._lock:
482 if SparkContext._active_spark_context is None:
--> 483 SparkContext(conf=conf or SparkConf())
484 assert SparkContext._active_spark_context is not None
485 return SparkContext._active_spark_context
File ~\anaconda3\envs\CustomerChurnProject\lib\site-packages\pyspark\context.py:195, in SparkContext.__init__(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, gateway, jsc, profiler_cls, udf_profiler_cls)
189 if gateway is not None and gateway.gateway_parameters.auth_token is None:
190 raise ValueError(
191 "You are trying to pass an insecure Py4j gateway to Spark. This"
192 " is not allowed as it is a security risk."
193 )
--> 195 SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
196 try:
197 self._do_init(
198 master,
199 appName,
(...)
208 udf_profiler_cls,
209 )
File ~\anaconda3\envs\CustomerChurnProject\lib\site-packages\pyspark\context.py:417, in SparkContext._ensure_initialized(cls, instance, gateway, conf)
415 with SparkContext._lock:
416 if not SparkContext._gateway:
--> 417 SparkContext._gateway = gateway or launch_gateway(conf)
418 SparkContext._jvm = SparkContext._gateway.jvm
420 if instance:
File ~\anaconda3\envs\CustomerChurnProject\lib\site-packages\pyspark\java_gateway.py:106, in launch_gateway(conf, popen_kwargs)
103 time.sleep(0.1)
105 if not os.path.isfile(conn_info_file):
--> 106 raise RuntimeError("Java gateway process exited before sending its port number")
108 with open(conn_info_file, "rb") as info:
109 gateway_port = read_int(info)
RuntimeError: Java gateway process exited before sending its port number
The runtime error is posted above; I have not seen this type of error in other posts.
Based on your error logs, I think you need to specify the $JAVA_HOME variable on your system.
This link may help:
https://sparkbyexamples.com/pyspark/pyspark-exception-java-gateway-process-exited-before-sending-the-driver-its-port-number/
In Linux:
export JAVA_HOME=(path to the JDK, e.g. /usr/lib/jvm/java-11-openjdk-amd64)
After that, you need to save it in your ~/.bashrc (if you use bash):
vi ~/.bashrc
export JAVA_HOME=(path to the JDK, e.g. /usr/lib/jvm/java-11-openjdk-amd64)
Then:
source ~/.bashrc
(See the link above.)
In Windows:
Open the "Edit the system environment variables" window from your computer's system properties.
See this:
https://confluence.atlassian.com/doc/setting-the-java_home-variable-in-windows-8895.html
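If you then want to confirm which JDK the JVM itself ends up using once JAVA_HOME is set, a trivial sanity check (purely illustrative, not part of the original answer) is:
public class JavaHomeCheck {
    public static void main(String[] args) {
        // JAVA_HOME as inherited by processes launched from this environment
        System.out.println("JAVA_HOME = " + System.getenv("JAVA_HOME"));
        // The JDK/JRE the currently running JVM actually lives in
        System.out.println("java.home = " + System.getProperty("java.home"));
    }
}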
I'm running Elasticsearch commands from within Java, using Process and ProcessBuilder, on Windows:
new ProcessBuilder(command);
command here is the array of commands:
"C:\\cygwin64\\bin\\curl", "-XGET", "'"+ES_BASE_URL+"index2/_search?pretty'"
The output is fine, except that the following is prepended to the output, compared with what I get when I run curl directly in a Cygwin terminal:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 796 100 796 0 0 27298 0 --:--:-- --:--:-- --:--:-- 31840
How do I avoid this so that I get the bare JSON result, the same result I get from Cygwin?
curl -s -XGET
should suppress the meter. (You can also reduce it to just a progress bar instead of those numbers with curl -#.)
From the curl manual:
-#, --progress-bar
Make curl display progress as a simple progress bar instead of the standard, more informational, meter.
-s, --silent
Silent or quiet mode. Don't show progress meter or error messages. Makes Curl mute. It will still output the data you ask for, potentially even to the terminal/stdout unless you redirect it.
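Applied to the ProcessBuilder call above, you can simply add -s to the command array and read the process output. The sketch below is illustrative: the curl path is the one from the question, the URL stands in for ES_BASE_URL + "index2/_search?pretty", and the single quotes from the original array are dropped because there is no shell to strip them.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class EsQuery {
    public static void main(String[] args) throws IOException, InterruptedException {
        String url = "http://localhost:9200/index2/_search?pretty"; // placeholder for ES_BASE_URL + path
        ProcessBuilder pb = new ProcessBuilder("C:\\cygwin64\\bin\\curl", "-s", "-XGET", url);
        pb.redirectErrorStream(true); // with -s there is no meter; merging streams keeps real errors visible
        Process p = pb.start();
        StringBuilder json = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                json.append(line).append(System.lineSeparator());
            }
        }
        p.waitFor();
        System.out.println(json); // bare JSON, no transfer meter
    }
}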
On a Unix system, you can run a process at a lower CPU "priority" using the nice command. (Pedantically, nice does not change the attribute that is actually called the priority; it influences what share of available CPU time the process gets, which is "priority" in the everyday sense.)
nice program
And you could use that to run a JVM process:
nice java -jar program.jar
The Java program run by that JVM process will start multiple threads.
Does the nice change affect the scheduling of those Java threads? That is, will the Java threads have a lower CPU priority when run as
nice java -jar program.jar
than when run as
java -jar program.jar
In general, this will be system dependent, so I am interested in the Linux case.
According to what ps reports, niceness is applied to Java threads. I ran this quick test with a Java application that waits for user input:
Start process with : nice -n 19 java Main
Output of ps -m -l 20746
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
0 - 1000 20746 10006 0 - - - 1739135 - pts/2 0:00 java Main
0 S 1000 - - 0 99 19 - - futex_ - 0:00 -
1 S 1000 - - 0 99 19 - - wait_w - 0:00 -
1 S 1000 - - 0 99 19 - - futex_ - 0:00 -
1 S 1000 - - 0 99 19 - - futex_ - 0:00 -
Start process with : nice -n 15 java Main
Output of ps -m -l 21488
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
0 - 1000 21488 10006 0 - - - 1722494 - pts/2 0:00 java Main
0 S 1000 - - 0 95 15 - - futex_ - 0:00 -
1 S 1000 - - 0 95 15 - - wait_w - 0:00 -
1 S 1000 - - 0 95 15 - - futex_ - 0:00 -
1 S 1000 - - 0 95 15 - - futex_ - 0:00 -
The NI column seems to reflect what I passed to nice, and the priority changes accordingly too. I got the process IDs (20746, 21488) using jps.
Note that running jstack 21488 for example will not give the above information.
I ran the above on Ubuntu 16.04 LTS (64bit)
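The Main class used for the test is not shown in the post; a minimal stand-in that starts a couple of extra threads and then blocks on user input (so the process stays alive while you run ps -m -l) could look like this:
import java.util.Scanner;

public class Main {
    public static void main(String[] args) {
        // A couple of background threads so ps -m -l has several rows to show
        for (int i = 0; i < 2; i++) {
            Thread t = new Thread(() -> {
                try {
                    Thread.sleep(Long.MAX_VALUE);
                } catch (InterruptedException ignored) {
                }
            });
            t.setDaemon(true);
            t.start();
        }
        // Wait for user input so the process keeps running
        new Scanner(System.in).nextLine();
    }
}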
Actually, niceness is a property of the process according to POSIX.1. Here is a more detailed post: is nice() used to change the thread priority or the process priority?
Java is not special. It's just a process, and the OS sets its "niceness" the same way as with any other process.
On Linux, Java threads are implemented using native threads, so again, "niceness" is subject to OS controls in the same way as any other native thread.
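If you want to see that from inside the JVM rather than from a separate terminal, one option (Java 9+ for ProcessHandle; assumes a Linux ps is on the PATH) is to ask ps for the nice value of the current process:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class ShowNice {
    public static void main(String[] args) throws IOException, InterruptedException {
        long pid = ProcessHandle.current().pid();
        // "ni=" prints the nice value without a header line
        Process ps = new ProcessBuilder("ps", "-o", "ni=", "-p", Long.toString(pid)).start();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(ps.getInputStream()))) {
            String line = reader.readLine();
            System.out.println("nice value of this JVM: " + (line == null ? "unknown" : line.trim()));
        }
        ps.waitFor();
    }
}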
I want to detect the memory and CPU consumption of a particular app in Android (programmatically). Can anyone help me with it? I have tried the top method, but I want an alternative to it.
Any help will be appreciated, thanks :)
If you want to trace memory usage in your app, there is the ActivityManager.getMemoryInfo() API.
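A minimal sketch of that call (assumes you have a Context available, e.g. inside an Activity; totalMem needs API 16+, and the class and log tag names are illustrative):
import android.app.ActivityManager;
import android.content.Context;
import android.util.Log;

public class MemoryCheck {
    public static void logSystemMemory(Context context) {
        ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        ActivityManager.MemoryInfo info = new ActivityManager.MemoryInfo();
        am.getMemoryInfo(info);
        // availMem/totalMem are system-wide figures; lowMemory flags memory pressure
        Log.d("MemoryCheck", "avail=" + info.availMem
                + " total=" + info.totalMem
                + " lowMemory=" + info.lowMemory);
    }
}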
CPU usage can be traced using the CpuStatsCollector API.
For a more informative memory usage overview outside your app, you can use adb shell dumpsys meminfo <package_name|pid> [-d] for more specific memory usage statistics. For example, here is the command for the com.google.android.apps.maps process:
adb shell dumpsys meminfo com.google.android.apps.maps -d
Which gives you a following output:
** MEMINFO in pid 18227 [com.google.android.apps.maps] **
Pss Private Private Swapped Heap Heap Heap
Total Dirty Clean Dirty Size Alloc Free
------ ------ ------ ------ ------ ------ ------
Native Heap 10468 10408 0 0 20480 14462 6017
Dalvik Heap 34340 33816 0 0 62436 53883 8553
Dalvik Other 972 972 0 0
Stack 1144 1144 0 0
Gfx dev 35300 35300 0 0
Other dev 5 0 4 0
.so mmap 1943 504 188 0
.apk mmap 598 0 136 0
.ttf mmap 134 0 68 0
.dex mmap 3908 0 3904 0
.oat mmap 1344 0 56 0
.art mmap 2037 1784 28 0
Other mmap 30 4 0 0
EGL mtrack 73072 73072 0 0
GL mtrack 51044 51044 0 0
Unknown 185 184 0 0
TOTAL 216524 208232 4384 0 82916 68345 14570
(output trimmed) More about it here
Tracing memory usage on modern operating systems is a very complex task. See this question for more info.
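If you want numbers closer to the dumpsys output from inside the app itself, there is also ActivityManager.getProcessMemoryInfo(), which returns PSS figures per process. A rough sketch for the app's own pid (class and tag names are illustrative):
import android.app.ActivityManager;
import android.content.Context;
import android.os.Debug;
import android.util.Log;

public class PssCheck {
    public static void logOwnPss(Context context) {
        ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        int pid = android.os.Process.myPid();
        // One Debug.MemoryInfo entry is returned per pid passed in
        Debug.MemoryInfo[] infos = am.getProcessMemoryInfo(new int[]{pid});
        Log.d("PssCheck", "total PSS (kB): " + infos[0].getTotalPss()
                + ", dalvik PSS (kB): " + infos[0].dalvikPss
                + ", native PSS (kB): " + infos[0].nativePss);
    }
}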
To get your process id:
int pid = android.os.Process.myPid();
To get CPU usage (requires the java.io.BufferedReader, InputStreamReader and IOException imports):
public String getCPUUsage(int pid) {
    Process p;
    try {
        String[] cmd = {
                "sh",
                "-c",
                "top -m 1000 -d 1 -n 1 | grep \"" + pid + "\" "};
        p = Runtime.getRuntime().exec(cmd);
        // Read the matching line from top's output
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(p.getInputStream()));
        String line = reader.readLine();
        // line contains the process info (CPU%, memory, state, ...)
        return line;
    } catch (IOException e) {
        return null;
    }
}
I have a Java socket server I wrote to allow me to keep a web cluster's code base in sync. When I run the init.d script from a shell login like so
[root@web11 www]# /etc/init.d/servermngr start
and then log out, all works fine. But if the server reboots, or if I run the init.d script via service like so
[root@web11 www]# service servermngr start
then none of the exec() commands passed to the socket server get executed on the Linux box. I am assuming it has to do with the JVM having no real shell. If I log in and run
[root@web11 www]# /etc/init.d/servermngr start
...and log out, everything runs fine and all CVS commands are executed.
Another note: when run as a service, the socket server responds to status checks, so it is running.
Here is the init.d script
#!/bin/sh
# chkconfig: 2345 95 1
# description: Starts Daemon Using ServerManager.jar.
#
# Source function library.
. /etc/init.d/functions
start () {
echo -n $"Starting ServerManager: "
# start daemon
cd /www/servermanager/
daemon java -jar ServerManager.jar > /www/logs/ServerManager.log &
RETVAL=$?
echo
[ $RETVAL = 0 ] && touch /var/lock/subsys/cups
echo "";
return $RETVAL
}
stop () {
# stop daemon
echo -n $"Stopping $prog: "
kill `ps uax | grep -i "java -jar ServerManager.ja[r]" | head -n 1 | awk '{print $2}'`
RETVAL=$?
echo "";
return $RETVAL
}
restart() {
stop
start
}
case $1 in
start)
start
;;
stop)
stop
;;
*)
echo $"Usage: servermngr {start|stop}"
exit 3
esac
exit $RETVAL
And the Java responsible for actually executing the code:
// Build cmd Array of Strings
String[] cmd = {"/bin/sh", "-c", "cd /www;cvs up -d htdocs/;cvs up -d phpinclude/"};
final Process process;
try {
process = Runtime.getRuntime().exec(cmd);
BufferedReader buf = new BufferedReader(new InputStreamReader(
process.getInputStream()));
// Since this is a CVS UP we return the Response to PHP
if(input.matches(".*(cvs up).*")){
String line1;
out.println("cvsupdate-start");
System.out.println("CVS Update" + input);
while ((line1 = buf.readLine()) != null) {
out.println(line1);
System.out.println("CVS:" + line1);
}
out.println("cvsupdate-end");
}
} catch (IOException ex) {
System.out.println("IOException on Run cmd " + CommandFactory.class.getName() + " " + ex);
Logger.getLogger(CommandFactory.class.getName()).log(Level.SEVERE, null, ex);
}
Thx for any help
What is the command you are trying to run? cd is not a program and if you have ; you have multiple commands. You can only run one program!
Are you starting the process as root? What version of (bash?) is running on the system? You may want to give csh a whirl just to rule out issues with the shell itself. I'd also suggest chaining the commands with '&' instead of ';'. Finally, you may find it easier to create a shell script which contains all your commands and is called by your Java process. You may also want to investigate nohup and check /etc/security/limits.
You might be happier using http://akuma.kohsuke.org/ to help you with this stuff, or at least Apache Commons Exec.
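If the root cause does turn out to be the environment (PATH, HOME and CVSROOT are typically missing when the JVM is started from an init script), another option is to set the working directory and environment explicitly with ProcessBuilder and merge stderr into stdout so CVS errors show up in the output you already stream back. A rough sketch, assuming the same cvs commands as in the question (the CVSROOT value is a placeholder):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class CvsUpdate {
    public static void main(String[] args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(
                "/bin/sh", "-c", "cvs up -d htdocs/; cvs up -d phpinclude/");
        pb.directory(new java.io.File("/www"));                          // replaces the "cd /www;" prefix
        pb.environment().put("CVSROOT", ":pserver:user@host:/cvsroot");  // placeholder, not the real repo
        pb.redirectErrorStream(true);                                    // CVS errors appear in the same stream
        Process process = pb.start();
        try (BufferedReader buf = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = buf.readLine()) != null) {
                System.out.println("CVS:" + line);
            }
        }
        System.out.println("exit code: " + process.waitFor());
    }
}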
Here is the startup script that fixed my issue, in case someone runs into the same problem:
#!/bin/sh
# chkconfig: 2345 95 1
# description: Starts Daemon Using ServerManager.jar.
#
# Source function library.
. /etc/init.d/functions
RETVAL=0
prog="ServerManager"
servermanager="java"
serveroptions=" -jar ServerManager.jar"
pid_file="/var/run/servermanager.pid"
launch_daemon()
{
/bin/sh << EOF
java -Ddaemon.pidfile=$pid_file $serveroptions <&- &
pid=\$!
echo \${pid}
EOF
}
start () {
echo -n $"Starting $prog: "
if [ -e /var/lock/subsys/servermanager ]; then
if [ -e /var/run/servermanager.pid ] && [ -e /proc/`cat /var/run/servermanager.pid` ]; then
echo -n $"cannot start: servermanager is already running.";
failure $"cannot start: servermanager already running.";
echo
return 1
fi
fi
# start daemon
cd /www/voodoo_servermanager/
export CVSROOT=":pserver:cvsd@cvs.zzzzz.yyy:/cvsroot";
daemon "$servermanager $serveroptions > /www/logs/ServerManager.log &"
#daemon_pid=`launch_daemon`
#daemon ${daemon_pid}
RETVAL=$?
echo
[ $RETVAL = 0 ] && touch /var/lock/subsys/servermanager && pidof $servermanager > $pid_file
echo "";
return $RETVAL
}
stop () {
# stop daemon
echo -n $"Stopping $prog: "
if [ ! -e /var/lock/subsys/servermanager ]; then
echo -n $"cannot stop ServerManager: ServerManager is not running."
failure $"cannot stop ServerManager: ServerManager is not running."
echo
return 1;
fi
killproc $servermanager
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/servermanager;
return $RETVAL
}
restart() {
stop
start
}
case $1 in
start)
start
;;
stop)
stop
;;
restart)
restart
;;
*)
echo $"Usage: servermngr {start|stop|restart}"
RETVAL=1
esac
exit $RETVAL