Py4j Exceptions when running application in a server - java

I have created an application using py4j that makes it possible to save data from Python into an SQL database through a Java application. Everything works fine when I run the JVM as a standalone application and it actually saves the data, but when I run the code in a server it gives me back an exception. I therefore thought that maybe my server (Wildfly) and py4j were using the same port, so I changed the default py4j port as the tutorial suggests. This is how the Python side looks after the modification:
from py4j.java_gateway import JavaGateway, GatewayParameters
gateway = JavaGateway(GatewayParameters(port=25335))
testBD = gateway.entry_point
DBin = gateway.jvm.com.packtpub.wflydevelopment.ch.Application(10,3) #calling constructor
testBD.create(DBin)
But I am still getting an exception:
Traceback (most recent call last):
File "C:\Users\user\Desktop\test.py", line 4, in
DBin = gateway.jvm.com.packtpub.wflydevelopment.ch.Application(10,3)
File "C:\Users\user\AppData\Local\Programs\Python\Python35-32\lib\site-packages\py4j-0.9-py3.5.egg\py4j\java_gateway.py", line 1185, in getattr
answer = self._gateway_client.send_command(
AttributeError: 'GatewayParameters' object has no attribute 'send_command'
Any suggestions would be very much appreciated.

I got the answer from bartdag in the issue at https://github.com/bartdag/py4j/issues/180: he pointed out that it works once the GatewayParameters instance is passed explicitly to the "gateway_parameters" argument.
# This produces the error
gateway = JavaGateway(GatewayParameters(address='192.168.99.100', port=25333))
But adding the argument name makes it work:
# This solves the error
gateway = JavaGateway(gateway_parameters=GatewayParameters(address='192.168.99.100', port=25333))
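For completeness, the Java side also has to listen on the same non-default port. A minimal sketch, with DbEntryPoint standing in as a hypothetical placeholder for whatever object you actually expose as the entry point:
import py4j.GatewayServer;

// Hypothetical entry point; the real class would expose the create(...) method called from Python
public class DbEntryPoint {
    public static void main(String[] args) {
        // Listen on 25335 instead of the default 25333 so Wildfly and py4j do not collide
        GatewayServer server = new GatewayServer(new DbEntryPoint(), 25335);
        server.start();
        System.out.println("py4j gateway listening on port 25335");
    }
}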

Related

Create/Read files with Python in Graal VM

I'm working with GraalVM, combining languages like Java and Python. I have a problem when I try to execute Python syntax to read/create files using context.eval().
I use this code with graalpython in the terminal:
out_file = File.new("cadena.txt", "w+")
out_file.puts("write your stuff here")
out_file.close
and it works, but when I tried to run code to read the file via context.eval() from Java:
codigoPython += "fichw = open('cadena.txt','r')";
codigoPython += "fichw.read() ";
codigoPython += "fichw.close() ";
Value filecontent = context.eval("python", codigoPython);
it throws me this error:
PermissionError: (1, Operation not permitted, cadena.txt, None, None)
I also tried running it using sudo and sudo su but it gives me the same error. Does anyone know why this happened?
Thanks
You need to give your context permission to do IO:
Context context = Context.newBuilder("python").allowIO(true).build();
For experimenting/prototyping it may be useful to allow everything:
Context context = Context.newBuilder("python").allowAllAccess(true).build();
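As a rough sketch of the whole embedding (assuming the statements in codigoPython are separated by newlines, since the strings as concatenated in the question run together on one line), it might look like this:
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class ReadFileExample {
    public static void main(String[] args) {
        // allowIO(true) grants the guest language access to the host file system
        try (Context context = Context.newBuilder("python").allowIO(true).build()) {
            String codigoPython =
                "fichw = open('cadena.txt', 'r')\n" +
                "contents = fichw.read()\n" +
                "fichw.close()\n" +
                "contents";  // value of the final bare expression is what eval() hands back
            Value filecontent = context.eval("python", codigoPython);
            System.out.println(filecontent.asString());
        }
    }
}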

Why is java_executable_exec_path giving me a legacy "external" runfiles path

Suppose I've got a minimal Scala WORKSPACE file like this:
workspace(name = "scala_example")
git_repository(
    name = "io_bazel_rules_scala",
    commit = "e9e65ada59823c263352d10c30411f4739d5df25",
    remote = "https://github.com/bazelbuild/rules_scala",
)
load("@io_bazel_rules_scala//scala:scala.bzl", "scala_repositories")
scala_repositories()
load("@io_bazel_rules_scala//scala:toolchains.bzl", "scala_register_toolchains")
scala_register_toolchains()
And then a BUILD:
load("#io_bazel_rules_scala//scala:scala.bzl", "scala_binary")
scala_binary(
name = "example-bin",
srcs = glob(["*.scala"]),
main_class = "Example",
)
And an Example.scala:
object Example { def main(args: Array[String]): Unit = println("running") }
I can run bazel run example-bin and everything works just fine. My problem is that this recent rules_scala PR changed the way the Java binary path is set to use the following:
ctx.attr._java_runtime[java_common.JavaRuntimeInfo].java_executable_exec_path
…instead of the previous ctx.executable._java.short_path.
After this change the Java binary path includes an external directory in the path, which seems to be a legacy thing (?). This means that after this change, if I run the following:
bazel run --nolegacy_external_runfiles example-bin
It no longer works:
INFO: Running command line: bazel-bin/example-bin
.../.cache/bazel/_bazel_travis/03e97e9dbbfe483081a6eca2764532e8/execroot/scala_example/bazel-out/k8-fastbuild/bin/example-bin.runfiles/scala_example/example-bin_wrapper.sh: line 4: .../.cache/bazel/_bazel_travis/03e97e9dbbfe483081a6eca2764532e8/execroot/scala_example/bazel-out/k8-fastbuild/bin/example-bin.runfiles/scala_example/external/local_jdk/bin/java: No such file or directory
ERROR: Non-zero return code '127' from command: Process exited with status 127
It also breaks some scripts I have that expect non-external paths.
Why is java_executable_exec_path giving me this external path? Is there some option I can give bazel to convince it not to do this?
Sorry for the slow reply -- it appears that this is because the Scala rules erroneously used java_executable_exec_path whereas they should have used java_executable_runfiles_path.
I sent a pull request to fix it, then I realized that you already did in https://github.com/bazelbuild/rules_scala/commit/4235ef58782ce2ec82981ea70b808397b64fe7df
Since the latter is now available at HEAD with Bazel, I'll remove the ugly if at least.

JDI - IllegalConnectorArgumentsException: Argument invalid

I am using the JDI to debug another running java application.
What I do that works:
Run two applications using Eclipse. The debugger is launched with the following VM Options:
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=4000
The other application connects to the socket at port 4000, and follows normal procedures (break points, etc.) to get a value of a Local Variable.
Works properly and gives me that value.
What I want to do now:
Instead of using Eclipse to launch two processes, I launch one in Eclipse, and that Process uses a ProcessBuilder to launch another one with the following arguments:
String[] args1 = {getJavaDir(),"-cp",classpath,"-Xdebug", "-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=4000", "processII.Main2"};
ProcessBuilder builder = new ProcessBuilder(args1);
builder.directory(directory);
Process process = builder.start();
The process starts successfully. However, when I try to access it through the first process, I get the following Error:
com.sun.jdi.connect.IllegalConnectorArgumentsException: Argument invalid
Looked this up online, and there is little information about what the Exception is.
I would appreciate any help figuring out what the problem is!
This exception is thrown when there is an error in the connector parameters used to debug the JVM. I think your debug parameters must go together in the same argument instead of in two separate arguments (put -Xdebug and -Xrunjdwp... in the same argument). Try with:
String[] args1 = {getJavaDir(),"-cp",classpath,"-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=4000", "processII.Main2"};
ProcessBuilder builder = new ProcessBuilder(args1);
builder.directory(directory);
Process process = builder.start();
Hope this helps,
You missed this import: import com.sun.jdi.connect.IllegalConnectorArgumentsException;
It depends on jdk/lib/tools.jar. If you add this jar to your classpath, you can fix your problem.
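For reference, a minimal, hypothetical sketch of the attaching side: IllegalConnectorArgumentsException is thrown by connector.attach() when the supplied argument names or values do not match what the connector expects. For the socket attaching connector the keys are "hostname" and "port", and the port must match the address given to -Xrunjdwp:
import com.sun.jdi.Bootstrap;
import com.sun.jdi.VirtualMachine;
import com.sun.jdi.connect.AttachingConnector;
import com.sun.jdi.connect.Connector;
import java.util.Map;

public class Attacher {
    public static VirtualMachine attach() throws Exception {
        // Find the socket attaching connector (transport dt_socket)
        AttachingConnector connector = Bootstrap.virtualMachineManager()
                .attachingConnectors().stream()
                .filter(c -> "dt_socket".equals(c.transport().name()))
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("no socket connector"));

        // The argument keys must match the connector's expected names exactly
        Map<String, Connector.Argument> args = connector.defaultArguments();
        args.get("hostname").setValue("localhost");
        args.get("port").setValue("4000");  // same port passed to -Xrunjdwp
        return connector.attach(args);
    }
}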

Java and R integration

I am trying to build a Java project that contains R code. The main idea is to automate the data structuring and data analysis within the same project, and I am partially able to do that. I connected R to Java and my R code runs well. I did all of my setup on the local machine and it gives me all the output I need. As the data set is big, I am trying to run this on an Amazon server, but when I shift it to the server my project no longer works properly. It is not able to execute library(XLConnect) or library(rJava): whenever I call these two libraries in my Java project it crashes. Run independently in R, the code works and gives me output. What can I do, and how can I fix this error? Please help me out.
My Java code is:
import java.io.InputStreamReader;
import java.io.Reader;

public class TestRMain {
    public static void main(String[] arg) throws Exception {
        ProcessBuilder broker = new ProcessBuilder("R.exe", "--file=E:\\New\\Modified_Best_Config.R");
        Process runBroker = broker.start();
        Reader reader = new InputStreamReader(runBroker.getInputStream());
        int ch;
        while ((ch = reader.read()) != -1)
            System.out.print((char) ch);
        reader.close();
        runBroker.waitFor();
        System.out.println("Execution complete");
    }
}
And in Modified_Best_Config.R I have written this code:
library('ClustOfVar');
library("doBy");
library(XLConnect)
#library(rJava)
#library(xlsx)
path="E:/New/";
############Importing and reading the excel files into R##############
Automated_R <- loadWorkbook("E:/New/Option_Mix_Calculation1.xlsx")
sheet1 <- readWorksheet(Automated_R, sheet = "Current Output")
sheet2 <- readWorksheet(Automated_R, sheet = "Actual Sales monthly")
sheet3 <- readWorksheet(Automated_R, sheet = "Differences")
#####################Importing raw Data###############################
optionData<- read.csv(paste(path,"ModifiedStructureNewBestConfig1.csv",sep=""),head=TRUE,sep=",");
nrow(optionData)
optionDemand=sapply(split(optionData,optionData$Trim),trimSplit);
optionDemand1=t(optionDemand[c(-1,-2),]);
optionDemand1
################Calculating the equipment Demand####################
optionDemand2<-t(optionDemand2[c(-1,0)]);
Rownames <- as.data.frame(row.names(optionDemand2))
writeWorksheet(Automated_R,Rownames, sheet = "Current Output", startRow = 21, startCol = 1)
writeWorksheet(Automated_R,optionDemand2, sheet = "Current Output", startRow = 21, startCol = 2)
saveWorkbook(Automated_R)
But Java stops executing after this line:
library("doBy");
The whole set of code runs nicely on my local machine, but whenever I try to run it on the Amazon server it does not run. Run individually in R, this code works on the server, and I have a couple more R scripts that run without any error. What can I do about this? Please help me out.
Thanks for updating your question with some example code. I cannot completely replicate your circumstances because I presently don't have immediate access to Amazon EC2, and I don't know the specific type of instance you are using. But here are a couple of suggestions for debugging your issue, which I have a hunch is being caused by a missing package.
1. Try to install the offending packages via your R script
At the very beginning of your R script, before you try to load any packages, insert the following:
install.packages(c("XLConnect", "rJava"))
If your instance includes a specified CRAN mirror (essentially, the online repository where R will first look to download the package source code from), this should install the packages in the same repo where your other packages are kept on your server. Then, either library or require should load your packages.
(sidenote: rJava is actually a dependency of XLConnect, so it will automatically load anyway if you only specify library(XLConnect))
2. If the above does not work, try installing the packages via the command line
This is essentially what @Ben was suggesting with his comment. Alternatively, see perhaps this link, which deals with a similar problem with a different package. If you can, I would try entering the following three commands in a terminal on the server:
sudo add-apt-repository ppa:marutter/rrutter
sudo apt-get update
sudo apt-get install r-cran-XLConnect
In my experience this has been a good go-to repo when I can't seem to find a package I need to install. But you may or may not have permission to install packages on your server instance.
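If a package really is missing on the server, R prints the error to stderr, which the launcher in the question never reads. A small, hypothetical tweak to the ProcessBuilder setup (merging stderr into stdout) makes that message visible through the existing read loop:
ProcessBuilder broker = new ProcessBuilder("R.exe", "--file=E:\\New\\Modified_Best_Config.R");
// Merge R's stderr into stdout so messages from library() failures show up in the output loop
broker.redirectErrorStream(true);
Process runBroker = broker.start();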

Mongodb: db.printShardingStatus() / sh.status() call in Java (and JavaScript)

I need to get a list of chunks after sharding inside my Java code. My code is simple and looks like this:
Mongo m = new Mongo( "localhost" , 27017 );
DB db = m.getDB( "admin" );
Object cr = db.eval("db.printShardingStatus()", 1);
The eval() call returns an error:
Exception in thread "main" com.mongodb.CommandResult$CommandFailure: command failed [$eval]: { "serverUsed" : "localhost/127.0.0.1:27017" , "errno" : -3.0 , "errmsg" : "invoke failed: JS Error: ReferenceError: printShardingStatus is not defined src/mongo/shell/db.js:891" , "ok" : 0.0}
at com.mongodb.CommandResult.getException(CommandResult.java:88)
at com.mongodb.CommandResult.throwOnError(CommandResult.java:134)
at com.mongodb.DB.eval(DB.java:340)
at org.sm.mongodb.MongoTest.main(MongoTest.java:35)
And indeed, if we look into the code of db.js, line 891 contains a call to a method printShardingStatus() that is not defined inside that file. Inside the sh.status() method in the utils_sh.js file, there is even a comment:
// TODO: move the actual commadn here
Important to mention: when I run these commands in the mongo command line, everything works properly!
My questions are:
Is there any other way of getting the full sharding status within Java code (e.g. with the DB.command() method)?
If not, are there any other suggestions for how to avoid my problem?
Many of the shell's helper functions are not available for server-side code execution. In the case of printShardingStatus(), it makes sense because there isn't a console to use for printing output and you'd rather have a string returned. Thankfully, you should be able to pull up the source of the shell function and reimplement it in your application (e.g. concatenating a returned string instead of printing directly).
$ mongo
MongoDB shell version: 2.2.0
connecting to: test
> db.printShardingStatus
function (verbose) {
    printShardingStatus(this.getSiblingDB("config"), verbose);
}
So, let's look at the printShardingStatus() function...
> printShardingStatus
function (configDB, verbose) {
    if (configDB === undefined) {
        configDB = db.getSisterDB("config");
    }
    var version = configDB.getCollection("version").findOne();
    // ...
}
Before turning all of the output statements into string concatenation, you'd want to make sure the other DB methods are all available to you. Performance-wise, I think the best option is to port the innards of this function to Java and avoid server-side JS evaluation altogether. If you dive deeper into the printShardingStatus() function, you'll see it's just issuing find() on the config database along with some group() queries.
If you do want to stick with evaluating JS and would rather not keep this code within your Java application, you can also look into storing JS functions server-side.
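To illustrate the "port the innards to Java" suggestion with the same legacy driver used in the question, a rough sketch that reads the sharding metadata directly from the config database (the shards and chunks collections) might look like this:
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.Mongo;

public class ShardingStatus {
    public static void main(String[] args) throws Exception {
        Mongo m = new Mongo("localhost", 27017);  // connect to the mongos
        DB config = m.getDB("config");            // sharding metadata lives here

        // Rough equivalent of the shell listing: iterate config.shards and config.chunks
        DBCursor shards = config.getCollection("shards").find();
        while (shards.hasNext()) {
            System.out.println(shards.next());
        }
        DBCursor chunks = config.getCollection("chunks").find();
        while (chunks.hasNext()) {
            System.out.println(chunks.next());
        }
    }
}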
Have you deployed a shard cluster properly?
If so, you could connect to a mongo database that has sharding enabled.
Try calling the method db.printShardingStatus() against that database within the mongo shell and see what happens.
Apparently the JavaScript function 'printShardingStatus' is only available in the mongo shell and not for execution via server commands. To see its code, start mongo.exe, type only 'printShardingStatus', and press enter.
In this case, writing an extension method would be the best way to solve this...
A JavaScript way of printing the output of a MongoDB query to a file:
1] Create a JavaScript file
test.js
cursor = db.printShardingStatus();
while(cursor.hasNext()){
printjson(cursor.next());
}
2] run
mongo admin --quiet test.js > output.txt
