Script length limitation on DTrace Java API? - java

I have a Java program that uses the Java DTrace API to build and execute a dtrace script on Solaris 10 SPARC. The script uses the pid provider to place probes on the entry points of all the functions in a set of libraries, like this (in reality there are over 100 libraries, so the script is quite long):
pid1234:liba::entry,pid1234:libb::entry,pid1234:libc::entry {printf("%s", probemod);}
My code looks like:
_consumer = new LocalConsumer();
_consumer.open();
// set the zdefs option to allow scripts with no probes
_consumer.setOption(org.opensolaris.os.dtrace.Option.zdefs);
_consumer.compile(_dtraceScript); // where _dtraceScript is a string as described above
The call to compile fails with:
invalid probe specifier pid1234:liba::entry,pid1234:libb::entry...[truncated]
I have copied and pasted the script in its entirety and executed it at the command line using:
dtrace -n '...my script...'
... and it works fine, so I know there is nothing syntactically wrong with my script.
So, two problems/questions:
Why is the compilation failing? Since the script runs on the command line, but not via Java, am I hitting some limitation of the Java DTrace API, like script length? Or maybe the JVM is running out of memory because I'm trying to enable so many probes?
Since the exception message is truncated (because my script is so long) how can I see the dtrace error that usually appears at the end of the exception message?
Any suggestions appreciated!
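For reference, one way to sidestep both problems is to compile one probe description per library and enable each resulting Program separately, so any compile failure points at a single, short specifier whose error message isn't drowned out by the full script. This is only a sketch, assuming the documented Consumer.compile(String), Consumer.enable(Program) and Consumer.go() behaviour; the probe list and PID are illustrative:
import org.opensolaris.os.dtrace.Consumer;
import org.opensolaris.os.dtrace.DTraceException;
import org.opensolaris.os.dtrace.LocalConsumer;
import org.opensolaris.os.dtrace.Option;
import org.opensolaris.os.dtrace.Program;

public class ProbeChunks {
    public static void main(String[] args) throws DTraceException {
        // Illustrative: one short probe description per library instead of
        // one very long comma-separated list.
        String[] perLibraryScripts = {
            "pid1234:liba::entry {printf(\"%s\", probemod);}",
            "pid1234:libb::entry {printf(\"%s\", probemod);}",
            // ...one entry per library...
        };

        Consumer consumer = new LocalConsumer();
        consumer.open();
        consumer.setOption(Option.zdefs);

        for (String script : perLibraryScripts) {
            try {
                Program program = consumer.compile(script);
                consumer.enable(program);
            } catch (DTraceException e) {
                // With short per-library scripts the compiler message should no
                // longer be truncated; print it in full for the failing chunk.
                System.err.println("compile failed for: " + script);
                e.printStackTrace();
            }
        }
        consumer.go();
    }
}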

Related

BPXWUNIX: not found error when trying to run Regina Rexx script

I have made a logfile reader in Java that is supposed to alert me via Xymon when more than one redis server is down simultaneously.
Now I am supposed to feed the output to Xymon via a Rexx script and I tried to do that by calling the command to run the Java program using bpxwunix.
However, when I run the code to test it, it says: "sh: 1: BPXWUNIX not found".
I don't understand what I am doing wrong. I've been searching for a way to somehow include the bpxwunix function, but it is my understanding that this is not necessary.
I'm pretty sure the Rexx script is the problem, because I tried a blank Java program that just prints a single line and got the same error. I also tried just running the program at the command line with java -jar, and it runs fine.
I am talking about Regina Rexx (even though it says oorexx and netrexx in the tags, I couldn't add a new rexx tag because my reputation was not high enough).
And I am trying this on Ubuntu 18.04.
Anyone that can help me out? Please alert me if I missed any details! The rexx code is provided below:
/* rexx */
env.0=1
env.1="/usr/bin:.:/usr/lib/jvm/java-1.11.0-openjdk-amd64/bin:."
stdin.0=0
reader="/home/slave2/Downloads/LogFileReader.jar"
cmd="java -jar reader"
call bpxwunix cmd,stdin.,stdout.,stderr.,env.
SAY "stdout:"
exit
IBM provides BPXWUNIX as a built-in command in the z/OS operating system. If you're not running there — and your mention of Regina Rexx implies that you're not — then the command won't be available. Under Regina Rexx on Linux you would typically run an external command with ADDRESS SYSTEM (or simply by issuing the command string as an expression) instead.

php exec function not working for jar files [duplicate]

The exec command doesn't work on my server; it does not do anything. I've had safe_mode off and verified that all the console commands are working, and I've tried with absolute paths. I've checked the permissions on the applications, and all the applications I need have execute permission. I don't know what else to do; here is a rundown of the code I've tried.
echo exec('/usr/bin/whoami');
echo exec('whoami');
exec('whoami 2>&1',$output,$return_val);
if($return_val !== 0) {
echo 'Error<br>';
print_r($output);
}
exec('/usr/bin/whoami 2>&1',$output,$return_val);
if($return_val !== 0) {
echo 'Error<br>';
print_r($output);
}
The last two snippets display:
Error
Array ( )
I've contacted the hosting provider's support and they can't help me; they don't know why the exec command isn't working.
Have a look at /etc/php.ini; there, under:
; This directive allows you to disable certain functions for security reasons.
; It receives a comma-delimited list of function names. This directive is
; *NOT* affected by whether Safe Mode is turned On or Off.
; http://www.php.net/manual/en/ini.sect.safe-mode.php#ini.disable-functions
disable_functions =
make sure that exec is not listed like this:
disable_functions=exec
If so, remove it and restart Apache.
For easy debugging I usually like to execute the PHP file manually (it can report more errors without setting them in the main ini). To do so, add the header:
#!/usr/bin/php
<?php
ini_set("display_errors", 1);
ini_set("track_errors", 1);
ini_set("html_errors", 1);
error_reporting(E_ALL);
to the beginning of the file, give it permissions using chmod +x myscript.php and execute it with ./myscript.php. It's very helpful, especially on a busy server that writes a lot to the log file.
EDIT
Sounds like a permissions issue. Create a bash script that does something as simple as echo "hello world" and try to run it. Make sure you have permissions for the file and for the folder containing the file; you could just do chmod 755 for testing.
A few more notes
For debugging always wrap your exec/shell_exec function in var_dump().
error_reporting(-1); should be on, as should display_errors; as a last resort even set_error_handler("var_dump"); - if only to see whether PHP itself failed before ever invoking execvp.
Use 2>&1 (merge the shell's STDERR into the STDOUT stream) to see why an invocation fails.
For some cases you may need to wrap your command in an additional shell invocation:
// capture STDERR stream via standard shell
echo shell_exec("/bin/sh -c 'ffmpeg -opts 2>&1' ");
Otherwise the log file redirect, as advised by @Mike, is the most recommendable approach.
Alternate between the various exec functions to uncover error messages otherwise. While they mostly do the same thing, the output return paths vary:
exec() → returns the last line of output as its function result, and the complete output through the optional $output parameter.
It also provides a $return_var parameter, which contains the exit code of the run application or shell. You might get:
ENOENT (2) - No such file or directory
127 - command not found (the shell could not locate the binary)
// run command, conjoined stderr, output + error number
var_dump(exec("ffmpeg -h 2>&1", $output, $errno), $output, $errno);
shell_exec() → is what you want to run mostly for shell-style expressions.
Be sure to assign/print the return value with e.g. var_dump(shell_exec("..."));
`` inline backticks → are identical to shell_exec.
system() → is similar to exec(), but it prints the output directly as the command runs and returns only the last line (print that out!). It additionally lets you capture the exit code.
passthru() → is another exec alternative, but it always passes raw STDOUT results straight through to PHP's output. Which oftentimes makes it the most fitting exec wrapper.
popen() or better proc_open() → let you capture STDOUT and STDERR individually.
Most shell errors wind up in PHP's or Apache's error.log when not redirected. Check your syslog or Apache log if nothing yields useful error messages.
Most common issues
As mentioned by @Kuf: on outdated webhosting plans you could still find safe_mode or disable_functions enabled. None of the PHP exec functions will work then. (Best to find a better provider; otherwise investigate running PHP as "CGI" - but do not install your own PHP interpreter unless you know what you are doing.)
Likewise, AppArmor / SELinux / Firejail may sometimes be in place. These limit each application's ability to spawn new processes.
The intended binary does not exist. Pretty much no webhost has tools like ffmpeg preinstalled. You can't just run arbitrary shell commands without preparation. Some things need to be installed!
// Check if `ffmpeg` is actually there:
var_dump(shell_exec("which ffmpeg"));
The PATH is off. If you installed custom tools, you will need to ensure they're reachable. A plain var_dump(shell_exec("ffmpeg -opts")) will only search the paths Apache has been told about or constrained to (often just /bin:/usr/bin).
Check with print_r($_SERVER); what your PATH contains and whether that covers the tool you want to run. Otherwise you may need to adapt the server settings (/etc/apache2/envvars), or use full paths:
// run with absolute paths to binary
var_dump(shell_exec("/bin/sh -c '/usr/local/bin/ffmpeg -opts 2>&1'"));
This somewhat subverts the shell concept. Personally I don't think it is preferable. It does make sense for security purposes, though, and of course for utilizing a custom installation.
Permissions
In order to run a binary on a BSD/Linux system, it needs to be made "executable". This is what chmod a+x ffmpeg does.
Furthermore, the path to such custom binaries needs to be accessible to the Apache user, which your PHP scripts run under.
More contemporary setups use PHP's built-in FPM mode (suexec+FastCGI), where PHP runs under your own webhosting account.
Test with SSH. It should go without saying, but before running commands through PHP, testing them in a real shell is highly sensible. Probe with e.g. ldd ffmpeg whether all library dependencies are there, and whether the binary works at all.
Use namei -m /usr/local/bin/ffmpeg to probe the whole path if you are unsure where any access permission issues might arise from.
Input values (GET, POST, FILE names, user data) that get passed as command arguments in exec strings need to be escaped with escapeshellarg().
$q = "escapeshellarg";
var_dump(shell_exec("echo {$q($_GET['text'])} | wc"));
Otherwise you'll get shell syntax errors easily; and probably exploit code installed later on...
Take care not to combine backticks with any of the *exec() functions:
$null = shell_exec(`wc file.txt`);
The backticks here would run the command first, leaving shell_exec() with only the output of the already-run command. Use normal quotes for wrapping the command parameter.
Also check in a shell session how the intended program works with a different account:
sudo -u www-data gpg -k
Notably, for PHP-FPM setups, test with the corresponding user id. www-data/apache are mostly only used by older mod_php setups.
Many cmdline tools depend on some per-user configuration. This test will often reveal what's missing.
You cannot get output from processes started in the background with … & or nohup …. In such cases you definitely need to use a log file redirect: exec("cmd > log.txt 2>&1 &");
On Windows
CMD invocations often do not play nice with STDERR streams.
If nothing else works, try running CLI apps through a PowerShell script, or use a command line like:
system("powershell -Command 'pandoc 2>&1'");
Use full paths, and prefer forward slashes always ("C:/Program Files/Whatevs/run.exe" with additional quotes if paths contain spaces).
Forward slashes work on Windows too, ever since they were introduced in MS-DOS 2.0
Figure out which service and SAM account IIS/Apache and PHP run as. Verify that it has execute permissions.
You usually can't run GUI apps. (A typical workaround is the task scheduler or WMI invocations.)
PHP → Python, Perl
If you're invoking another scripting interpreter from PHP, then utilize any available debugging means in case of failures:
passthru("PYTHONDEBUG=2 python -vvv script.py 2>&1");
passthru("perl -w script.pl 2>&1");
passthru("ruby -wT1 script.rb 2>&1");
Or perhaps even run the interpreter's syntax check (the -c option, where available) first.
Since you are dropping out of the PHP context into the native shell, you are going to have a lot of issues debugging.
The best and most foolproof method I have used in the past is writing the output of the script to a log file and tailing it during PHP execution.
<?php
shell_exec("filename > ~/debug.log 2>&1");
Then in a separate shell:
tail -200f ~/debug.log
When you execute your PHP script, your errors and output from your shell call will display in your debug.log file.
You can retrieve the outputs and return code of the exec commands; those might contain information that would explain the problem...
exec('my command', $output, $return);

The same cmd works in the shell but not in subprocess.Popen() for a Matlab-based Java program under Django

Background: Ubuntu 64-bit machine. I need to call a Matlab-based jar from Django (deployed on Apache). Here is the problem: when I run the command in the shell, it works; however, when I call subprocess.Popen({{cmd}}) inside Django code, an exception is thrown.
Edit: I tried opening a Python shell and calling subprocess.Popen({{cmd}}). I also wrote a single Python script file and put the same code in it. They all work. It's so weird that the code only fails when run under Django!
For details:
The cmd: java -jar A.jar param1 param2 param3 param4
When run directly in the shell, everything is normal. When run from Python code, the exception is:
Exception in thread "main" java.lang.ExceptionInInitializerError
at com.mathworks.toolbox.javabuilder.internal.MCRConfiguration.getProxyLibraryDir(MCRConfiguration.java:178)
at com.mathworks.toolbox.javabuilder.internal.MCRConfiguration$MCRRoot.get(MCRConfiguration.java:77)
at com.mathworks.toolbox.javabuilder.internal.MCRConfiguration$MCRRoot.<clinit>(MCRConfiguration.java:87)
at com.mathworks.toolbox.javabuilder.internal.MCRConfiguration.getMCRRoot(MCRConfiguration.java:92)
at com.mathworks.toolbox.javabuilder.internal.MCRConfiguration$ModuleDir.<clinit>(MCRConfiguration.java:66)
at com.mathworks.toolbox.javabuilder.internal.MCRConfiguration.getModuleDir(MCRConfiguration.java:71)
at com.mathworks.toolbox.javabuilder.internal.MWMCR.<clinit>(MWMCR.java:1466)
at autoBlockJava.AutoBlockJavaMCRFactory.newInstance(AutoBlockJavaMCRFactory.java:83)
at autoBlockJava.AutoBlockJavaMCRFactory.newInstance(AutoBlockJavaMCRFactory.java:94)
at autoBlockJava.AutoBlockJavaSharedMCRFactory$3.call(AutoBlockJavaSharedMCRFactory.java:95)
at autoBlockJava.AutoBlockJavaSharedMCRFactory$3.call(AutoBlockJavaSharedMCRFactory.java:93)
at autoBlockJava.AutoBlockJavaSharedMCRFactory.getInstance(AutoBlockJavaSharedMCRFactory.java:72)
at autoBlockJava.AutoBlockJavaSharedMCRFactory.newInstance(AutoBlockJavaSharedMCRFactory.java:93)
at autoBlockJava.manualMain.<init>(manualMain.java:97)
at autoblock.AutoBlock.main(AutoBlock.java:29)
Caused by: java.lang.NullPointerException
at com.mathworks.toolbox.javabuilder.internal.MCRConfiguration$ProxyLibraryDir.get(MCRConfiguration.java:143)
at com.mathworks.toolbox.javabuilder.internal.MCRConfiguration$ProxyLibraryDir.<clinit>(MCRConfiguration.java:173)
... 15 more
I'm totally confused. I really don't know what the reason for it is.
I think your problem has nothing to do with Python, Django, or Java, but only with the way Matlab calls external programs.
On Linux, Matlab sets the variable LD_LIBRARY_PATH in the environment of child processes. As an example, on my system:
>> getenv('LD_LIBRARY_PATH')
ans =
/opt/MATLAB/R2013a/sys/os/glnxa64:/opt/MATLAB/R2013a/bin/glnxa64:/opt/MATLAB/R2013a/extern/lib/glnxa64:/opt/MATLAB/R2013a/runtime/glnxa64:/opt/MATLAB/R2013a/sys/java/jre/glnxa64/jre/lib/amd64/native_threads:/opt/MATLAB/R2013a/sys/java/jre/glnxa64/jre/lib/amd64/server:/opt/MATLAB/R2013a/sys/java/jre/glnxa64/jre/lib/amd64
Apparently, this setting makes some system libraries (or just the correct version of glibc?) unavailable to child processes:
>> !konsole
konsole: /opt/MATLAB/R2013a/sys/os/glnxa64/libstdc++.so.6: version `GLIBCXX_3.4.15' not found (required by /usr/lib/libstreamanalyzer.so.0)
konsole: /opt/MATLAB/R2013a/sys/os/glnxa64/libstdc++.so.6: version `GLIBCXX_3.4.15' not found (required by /usr/lib/libstreams.so.0)
The problem can be resolved by explicitly unsetting LD_LIBRARY_PATH with
setenv('LD_LIBRARY_PATH')
I'm not sure for what purpose Matlab sets LD_LIBRARY_PATH, and I'd guess that some special functionality must be broken by unsetting it. However, I've had the line above in my startup.m for years now, and I haven't run into any problems.
See also: Start application from Matlab

java jni Exception Access Violation

My Java application, which uses JNI, is crashing, and the hs_err_pid file gives the error as "Exception Access Violation". The OS is Windows Vista.
From what I know, my native code is illegally writing to some chunk of memory that does not belong to it.
I have used valgrind on Linux on pure native code to detect such problems in the past.
But when using java, valgrind simply fails and does not work.
What (if any) method would you suggest to identify the offending piece of code?
It is not possible for me to manually dig through the native code (a few million lines) to identify it.
I was finally able to resolve the issue. I thought I would post the procedure here in case someone else is in a similar situation.
Step 1:
Build the native code with proper debugging symbols. The compiler flags could be something like "-g -rdynamic -O0".
Step 2:
The following valgrind command should do the job.
valgrind --error-limit=no --trace-children=yes --smc-check=all --leak-check=full --track-origins=yes -v $JAVA -XX:UseSSE=0 -Djava.compiler=NONE $JAVA_ARGS
In the above command, $JAVA is the java executable and $JAVA_ARGS is the arguments to your java program.
Once successfully started, it will take orders of magnitude more time to complete the execution. Valgrind will print thousands of errors (most of them related to the JVM, and these can be ignored). You can, however, identify the ones that relate to your JNI code.
This general strategy should be applicable to most native memory related problems.
If you are running Java under Linux, you could use the -XX:OnError="gdb - %p" option to run gdb when the error occurs. See this example.
Under Windows, you can use the -XX:+UseOSErrorReporting option to obtain a similar effect.
For debugging JNI code, a method posted in this article could be useful (it's about debugging JNI using NetBeans and Visual Studio). It's simple - just start your Java program, then in Visual Studio pick Debug -> Attach to process and choose the java.exe process running your program.
When you add breakpoints to your C++ code, Visual Studio will break on them. Voila :)
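If it is hard to tell which java.exe to attach to, or the program starts its native work before you can attach, a small pause at startup can help. This is only an illustrative sketch (the class name and library name are hypothetical); it relies on the fact that HotSpot's RuntimeMXBean name typically starts with the process id:
import java.lang.management.ManagementFactory;

public class AttachHelper {
    public static void main(String[] args) throws Exception {
        // On HotSpot JVMs the runtime name is typically "<pid>@<hostname>".
        String jvmName = ManagementFactory.getRuntimeMXBean().getName();
        System.out.println("JVM process: " + jvmName);
        System.out.println("Attach the native debugger now, then press Enter...");
        System.in.read(); // block until the debugger is attached

        // Hypothetical: load the JNI library and continue as usual.
        System.loadLibrary("mynativelib");
        // ... call into native code ...
    }
}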

What is the difference between calling a script from a shell/Terminal and using Java Processes?

I've been having real problems trying to get a ruby script to run through Java. I've had all kinds of solutions proposed, and all of them are failing for some reason, so I'm trying to simplify my problem.
Let's say I have a shell script that just has this line in it:
ruby -rubygems script/test_s2t.rb
At the terminal, I can run this script using script/runruby.sh and it works as expected. Now let's say I have a Java method that does the following:
String[] cmd = {"script/runruby.sh"};
ProcessBuilder builder = new ProcessBuilder(cmd);
builder.redirectErrorStream(true);
Process process = builder.start();
This doesn't work (it throws an error back from the Ruby script, specifically, but this is a misdirect because it's really down to the script itself not working as expected). My question is not why that test_s2t.rb script doesn't work, because I think that might be distracting me from the real problem.
My question is simply what is different when I run something through ProcessBuilder as opposed to just running it via the command line. Is it a permissions thing? Path differences? There must be something screwing around with the environment the script runs in, because I can't see a problem with the script itself.
As always, any suggestions appreciated. Three days and counting on this issue...
EDIT - For those curious, the exact error I receive in Java is the one described at the bottom of this question: Java receives an error executing Ruby script; Terminal doesn't
The outcome we got in that question was that I should try JRuby, but that resulted in further problems as I can't get the gems to work properly within JRuby. So I went back to asking myself why it wouldn't run normally in the first place.
The reason I think the error is a distraction is that it is raised simply because the script processed a string it wasn't expecting to see. The string it expects comes from the script's normal processing, which uses ffmpeg and suchlike. What this means is that the script encountered another error (which it isn't showing), which was probably caused not by ruby/jruby but by the processes the script launches, like ffmpeg.
It's incredibly frustrating, purely because it runs so perfectly from the command line.
I've run into similar problems and there are two things that seem to be common problems:
1) The environment of the child process will be the same as the environment of the current virtual machine. This includes the working directory of the launched process.
Example from: http://docs.oracle.com/javase/1.5.0/docs/api/java/lang/ProcessBuilder.html
ProcessBuilder pb = new ProcessBuilder("myCommand", "myArg1", "myArg2");
Map<String, String> env = pb.environment();
env.put("VAR1", "myValue");
env.remove("OTHERVAR");
env.put("VAR2", env.get("VAR1") + "suffix");
pb.directory(new File("myDir"));
Alternatively, you could set the environment inside the shell script.
2) Do you have the proper shebang #! at the beginning of the .sh file? Personally I'd make it absolutely explicit: either call bash or zsh (or whatever shell you mean) with the path to the shell script as the first argument, OR directly call ruby with '-rubygems' and 'script/test_s2t.rb' as arguments; a sketch of the latter follows below.
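For instance, here is a minimal sketch of calling ruby directly from Java, setting the working directory explicitly and reading the merged output; the project path is a hypothetical placeholder, and the ruby arguments are taken from the question:
import java.io.BufferedReader;
import java.io.File;
import java.io.InputStreamReader;

public class RunRuby {
    public static void main(String[] args) throws Exception {
        // Call ruby directly instead of going through the wrapper shell script.
        ProcessBuilder builder =
                new ProcessBuilder("ruby", "-rubygems", "script/test_s2t.rb");
        builder.directory(new File("/path/to/rails/app")); // hypothetical project root
        builder.redirectErrorStream(true);                 // merge stderr into stdout

        Process process = builder.start();
        try (BufferedReader reader =
                new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // surface the script's own output and errors
            }
        }
        System.out.println("exit code: " + process.waitFor());
    }
}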
Good Luck!
