PHP: exec() a unit test with memory usage information - java

I have a question relating to PHP and Java programming.
I am going to develop a web application that runs unit tests.
PHP is the language I'll use for the web side, and it will call the exec() function.
As I read, the function returns the output of the execution; OK, I do need that.
But it's not enough: I'd also like to know how much memory is used during the execution.
The web app will run on an Apache web server on a native Linux operating system (preferably Ubuntu).
This is the second case:
If there is a Java source file containing a program that requires user input during execution, how can I execute it via a web server while also passing all the lines that act as the user input?
The next problem is that the exec() function only accepts parameters on the command line.
So, is there any idea how to do these things?

The /usr/bin/time program (documented in time(1)) can return the amount of memory used during execution:
$ /usr/bin/time echo hello
hello
0.00user 0.00system 0:00.00elapsed ?%CPU (0avgtext+0avgdata 2512maxresident)k
0inputs+0outputs (0major+205minor)pagefaults 0swaps
You can see that the echo(1) program required 2.5 megabytes of memory and ran very quickly. Larger programs will be more impressive:
$ /usr/bin/time jacksum --help
Unknown argument. Use -h for help. Exit.
Command exited with non-zero status 2
0.08user 0.03system 0:00.87elapsed 12%CPU (0avgtext+0avgdata 57456maxresident)k
25608inputs+64outputs (92major+4072minor)pagefaults 0swaps
jacksum is a Java-based program, so it took 57 megabytes to tell me I screwed up the command line arguments. That's more like it.
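The maxresident figure can also be picked out programmatically. From PHP you would capture the report with something like exec('/usr/bin/time cmd 2>&1', $out) and parse the same field; here is a minimal sketch of that parsing in Python (the helper names are my own, not part of any standard API):

```python
import re
import subprocess

# /usr/bin/time's default report ends with '<N>maxresident)k',
# the peak resident set size of the child process in kilobytes.
TIME_REPORT = re.compile(r"(\d+)maxresident\)k")

def parse_peak_kb(time_stderr):
    """Extract the peak resident set size (kB) from a /usr/bin/time report."""
    match = TIME_REPORT.search(time_stderr)
    return int(match.group(1)) if match else None

def run_with_memory(cmd):
    """Run cmd (a list of arguments) under /usr/bin/time.

    The timing report goes to stderr, separate from the program's
    own output.  Returns (stdout, peak kB or None).
    """
    proc = subprocess.run(["/usr/bin/time"] + cmd,
                          capture_output=True, text=True)
    return proc.stdout, parse_peak_kb(proc.stderr)
```

The same regex works on the report captured from PHP's exec(), since the format comes from /usr/bin/time itself, not from the calling language.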
You might also find the BSD process accounting system worthwhile. See lastcomm(1), sa(8), and dump-acct(8) for more information.

Related

How can I use sh jython.sh -i furElise.py while setting my Java heap (shell-based Java issues)?

I am using a 2014 book on Jython-Java-Python in regards to music and computation.
...
I am trying to run a shell script while telling java to cap the heap at a maximum size in MB.
I understand that heap management in Java is covered well elsewhere on this site. What I really need is not a way to manage the heap in general, but to manage it while running a shell script that invokes Java, with a command like this:
java -Xms60m sh jython.sh furElise.py
The shell script is a wrapper for handling Python and Java (Jython), and I am trying to make this work on a 32-bit Linux SBC while the output resonates as sound. #JythonMusic
So, it is because of Elliott Frisch's answer that I changed the source in the .sh file, jython.sh, to use a smaller heap size.
I have chosen 1024 so far and things are in working order. Otherwise I would have had to experiment with the allocated 4096 heap size, which is too large for my system as a whole, plus whatever other "add-ons" are allocated to the heap outside of calling java via the jython.sh script.
Now, on my BeagleBone Black Wireless, I can run a vncserver so that the #JythonMusic source works, which in the end leaves my command prompt in the jython interpreter.
Once in the jython interpreter, one simply leaves it as one would the python interpreter, e.g. with exit().

Generating flame graphs for a whole Java program execution

I'm trying to generate a flame graph for a Java program using perf-map-agent. I know that you can use perf-java-record-stack to record data for a running process. I have also found out that you may use the script jmaps in the FlameGraph directory. I have found Brendan Gregg's example as well as a Stack Overflow post illustrating this. However, in none of these examples is the Java process given as an argument to perf record (which means that perf collects stack traces for the entire system).
I want to record profiling data for the whole execution of the program (and preferably nothing else). Is there any way to do this? I have tried:
perf record -a -g java -XX:+PreserveFramePointer <other JVM arguments> <my java program>; sudo ~/bin/brendangregg/FlameGraph/jmaps
which answers:
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 1.711 MB perf.data (3449 samples) ]
Fetching maps for all java processes...
Mapping PID 2213 (user malin):
wc(1): 2970 9676 156108 /tmp/perf-2213.map
always with the same PID. This PID is a running process, not the one I tried to record data for.
I think what you want might be:
Run perf record with -a -g continuously, starting before the Java application is fired up.
Run jmaps while the Java application is running, so that you can collect the JIT-related symbols.
End perf record after the Java application finishes.
Filter the output of perf script by the PID you are interested in. By that point your Java process is already running and you know its PID. (Open the output of perf script and have a look; you will see how to filter it.)
Run the flame-graph generation script.
In this way you can have the Java application recorded for the whole period of time.
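The filtering step above can be sketched as follows. This is an illustration only: perf script separates samples with blank lines, and the first line of each sample typically starts with the command name followed by the PID (or PID/TID), but the exact header layout varies with perf version and options, so check your own output first. The function name is mine:

```python
def filter_perf_script(text, pid):
    """Keep only the stack samples belonging to one PID.

    Splits perf script output into blank-line-separated stanzas and
    keeps those whose header's second field matches the PID (the
    'pid/tid' form is handled by taking the part before the slash).
    """
    kept = []
    for stanza in text.split("\n\n"):
        header = stanza.lstrip().split("\n", 1)[0]
        fields = header.split()
        if len(fields) > 1 and fields[1].split("/")[0] == str(pid):
            kept.append(stanza.strip("\n"))
    return "\n\n".join(kept)
```

Feed the filtered text to stackcollapse-perf.pl and flamegraph.pl as usual.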

disk I/O of a command line java program

I have a simple question; I've read up online but couldn't find a simple solution.
I'm running a Java program on the command line as follows, which accesses a database:
java -jar myProgram.jar
I would like a simple mechanism to see the number of disk I/Os performed by this program (on OSX).
So far I've come across iotop, but how do I get iotop to measure the disk I/O of myProgram.jar?
Do I need a profiler like JProfiler to get this information?
iotop is a utility that shows the top n processes in descending order of I/O consumption/utilization.
Most importantly, it is a live monitoring utility, which means its output changes every n seconds (or whatever interval you specify). Though you can redirect it to a file, you would then need to parse that file to extract meaningful data and plot a graph.
I would recommend using sar instead; you can read more about it here.
It is the lowest-level monitoring utility on Linux/Unix and will give you much more data than iotop.
The best thing about sar is that you can collect the data with a daemon while your program is running and then analyze it later using ksar.
In my opinion, you can follow the approach below:
Start sar monitoring, collecting data every n seconds. The value of n depends on the approximate execution time of your program.
Example: if your program takes 10 seconds to execute, then per-second monitoring is good, but if it takes an hour, monitor per minute or every 30 seconds. This minimizes the overhead of the sar process while keeping the data meaningful.
Wait for some time (so that you get data from before your program starts) and then start your program.
Let your program run to the end of its execution.
Wait for some time again (so that you get data from after your program finishes).
Stop sar.
Visualize the sar data using ksar. To start with, check disk utilization and then IOPS for a disk.
You can use profilers for the same thing, but they have a few drawbacks:
They need their own agents (and agents have their own overhead).
Some of them are not free.
Some of them are not easy to set up.
They may or may not provide enough of the required data.
Besides this, IMHO, using built-in/system-level utilities is always beneficial.
I hope this was helpful.
Your Java program will ultimately be a process on the host system, so you need to filter the output of the monitoring tool for your own process ID. Refer to the Scripts section of this blog post.
Also, even though you have tagged the question with OS X, do mention in the question itself that you are using OS X.
If you are looking for offline data, that is provided by the proc filesystem on Unix-based systems, but unfortunately it is missing on OS X; see "Where is the /proc folder on Mac OS X?"
and "/proc on Mac OS X".
You might choose to write a small script that dumps data from disk- and process-monitoring tools for your process ID. You can get the process ID in the script by process name: put the script in a loop that looks for that process name, and start the script before you execute your Java program. When the script finds the process, it will keep dumping the relevant data from the commands you choose, at intervals you decide. Once your program ends, the log-dumping script also terminates.
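For the dumping step, such a script needs a machine-readable source of per-process I/O counters. On Linux (not OS X, as noted above) that source is /proc/<pid>/io; a minimal Python sketch of reading it (the function names are my own):

```python
def parse_proc_io(text):
    """Parse the 'key: value' lines of /proc/<pid>/io into a dict.

    read_bytes/write_bytes count actual storage I/O, which is what
    iotop-style tools report; rchar/wchar include cached reads/writes.
    """
    stats = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        if value.strip().isdigit():
            stats[key.strip()] = int(value)
    return stats

def snapshot_io(pid):
    """Take one I/O snapshot for a PID (Linux only)."""
    with open("/proc/%d/io" % pid) as f:
        return parse_proc_io(f.read())
```

Calling snapshot_io in a loop and diffing successive snapshots gives the per-interval I/O of the Java process.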

How does JPS tool get the name of the main class or jar it is executing

I am wondering how the jps tool gets the name of the main class (or jar) being executed within a JVM process. For example:
jps -l
123 package.MainClass
456 /path/example.jar
I am talking specifically about Linux (I am not interested in Windows, and I have no Win machine to experiment on).
I can think of two ways:
Connecting to the JVM in question which in turn tells it
From /proc file system
Regarding the first alternative: is it using a local JMX connection? Still, it must go to /proc for the PIDs.
There is a PID involved, so it must ask the OS anyway.
jps also lists itself.
Regarding the second alternative, I feel this could be the correct one, because:
On the command line, there is either -jar or MainClass.
/proc knows the PID very well.
Before jps starts doing anything, it has its own folder in /proc.
But I am facing a little problem here. When the java command line is very long (e.g. there is an extremely long -classpath parameter), the command line does not fit into the space reserved for it in /proc. My system has 4 kB for it, and from what I learned elsewhere, this limit is hardwired in the kernel code (changing it requires a kernel compilation). However, even in this case, jps is still able to get the main class from somewhere. How?
I need to find a quicker way to get at JVM processes than calling jps. When the system is quite loaded (e.g. when a number of JVMs start at once), jps gets stuck for several seconds (I have seen it waiting for ~30 s).
jps scans the /tmp/hsperfdata_<username>/<pid> files, which contain the monitors and counters of running JVMs. The monitor named sun.rt.javaCommand contains the string you are looking for.
To find out the format of the PerfData file, you'll have to look into the JDK source code.
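As a quicker alternative to invoking jps just to discover JVM processes, the hsperfdata file names alone already give you the PIDs, since each running JVM (with perfdata enabled, the default) maintains a file named after its PID. A sketch, with the caveat that decoding the main-class name still requires parsing the binary PerfData format mentioned above:

```python
import glob
import os

def jvm_pids():
    """List candidate JVM PIDs by scanning /tmp/hsperfdata_* directories.

    This only discovers PIDs cheaply; it does not verify that the
    process is still alive (stale files can linger after a crash),
    so check e.g. os.kill(pid, 0) before relying on a PID.
    """
    pids = []
    for path in glob.glob("/tmp/hsperfdata_*/*"):
        name = os.path.basename(path)
        if name.isdigit():
            pids.append(int(name))
    return pids
```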

unable to run java command from cgi

I have this function to read a doc file using tika on linux:
import os
import subprocess

def read_doc(doc_path):
    output_path = doc_path + '.txt'
    java_path = '/home/jdk1.7.0_17/jre/bin/'
    environ = os.environ.copy()
    environ['JAVA_HOME'] = java_path
    environ['PATH'] = java_path
    tika_path = java_path + 'tika-app-1.3.jar'
    shell_command = 'java -jar %s --text --encoding=utf-8 "%s" > "%s"' % (tika_path, doc_path, output_path)
    proc = subprocess.Popen(shell_command, shell=True, env=environ, cwd=java_path)
    proc.wait()
This function works fine when I run it from the command line, but when I call the same function via CGI, I get the following error:
Error occurred during initialization of VM: Could not reserve enough
space for object heap
I checked previous answers for this particular error, and they suggest increasing the memory, but this doesn't seem to work. I don't think this has to do with memory allocation, but rather with some read/write/execute privileges of the CGI script. Any idea how to solve this problem?
You're loading an entire JVM instance within the memory and process space of each individual CGI invocation. That's bad. Very bad, for both performance and memory usage. Increasing the memory allocation is a hack that doesn't address the real problem. Core Java code should almost never be invoked via CGI.
You'd be better off:
Avoiding both CGI and Python by running a Java servlet within your web server that invokes the appropriate Tika class directly with the desired arguments. Map the user URL directly to the servlet (via the @WebServlet("someURL") annotation on the servlet class).
Running Tika in server mode and invoking it via REST from Python.
Running a core Java app separately as a server/daemon process, having it listen on a TCP ServerSocket. Invoke it from Python via a client socket.
Try adding an explicit heap cap such as -Xmx512m to the shell command (-Xmx is shorthand for -XX:MaxHeapSize, so you only need one of the two), so that the shell command looks like this:
shell_command = 'java -Xmx512m -jar %s --text --encoding=utf-8 "%s" >"%s"' % (tika_path, doc_path, output_path)
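An alternative to building a shell string is to pass an argument list without shell=True and redirect the output from Python, which sidesteps quoting problems with the document path. A sketch under the question's assumptions (tika-app jar path and heap size are illustrative):

```python
import subprocess

def tika_command(tika_path, doc_path, heap="-Xmx512m"):
    """Build the Tika argument list with an explicit heap cap.

    Using a list instead of a shell string means doc_path needs no
    manual quoting, even if it contains spaces.
    """
    return ["java", heap, "-jar", tika_path,
            "--text", "--encoding=utf-8", doc_path]

def read_doc(doc_path, tika_path):
    """Extract text from doc_path into doc_path + '.txt'."""
    output_path = doc_path + ".txt"
    with open(output_path, "wb") as out:
        # stdout= replaces the shell's '>' redirection;
        # check=True raises if the JVM exits non-zero.
        subprocess.run(tika_command(tika_path, doc_path),
                       stdout=out, check=True)
    return output_path
```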
