Calling Clojure from Java (Clojure Interop) - java

Calling Java from Clojure is quite simple and straightforward, but the inverse has proven to be unpredictable.
There seem to be two ways of doing it:
1) using the following classes:
   i) import clojure.java.api.Clojure;
   ii) import clojure.lang.IFn;
2) compiling your Clojure into an uberjar, then importing it into the Java code.
I have opted for the 2nd option as it's more straightforward.
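For reference, a rough sketch of what option 1 looks like: resolve the Clojure vars at runtime through clojure.java.api.Clojure and invoke them via clojure.lang.IFn. The class name below is only illustrative, and it assumes the Clojure runtime and the com.test.app.service namespace shown next are on the classpath.

import clojure.java.api.Clojure;
import clojure.lang.IFn;

public class DirectInvoke {
    public static void main(String[] args) {
        // Load the namespace at runtime, then look the function up by name.
        IFn require = Clojure.var("clojure.core", "require");
        require.invoke(Clojure.read("com.test.app.service"));

        IFn returned = Clojure.var("com.test.app.service", "returned");
        // Clojure integers come back boxed (java.lang.Long here)
        System.out.println(returned.invoke(4)); // prints 8
    }
}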
Here is the Clojure code for option 2:
(ns com.test.app.service
  (:gen-class
    :name com.test.app.service
    :main false
    :methods [^{:static true} [returned [int] int]]))

(defn returned
  [number]
  (* 2 number))

(defn -returned
  [number]
  (returned number))
Here is the Java code.
package com.s.profile;

import java.util.*;
import com.microsoft.azure.serverless.functions.annotation.*;
import com.microsoft.azure.serverless.functions.*;
import com.test.app.service;

/**
 * Azure Functions with HTTP Trigger.
 */
public class Function {
    /**
     * This function listens at endpoint "/api/hello". Two ways to invoke it using the "curl" command in bash:
     * 1. curl -d "HTTP Body" {your host}/api/hello
     * 2. curl {your host}/api/hello?name=HTTP%20Query
     */
    @FunctionName("hello")
    public HttpResponseMessage<String> hello(
            @HttpTrigger(name = "req", methods = {"get", "post"}, authLevel = AuthorizationLevel.ANONYMOUS) HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {
        context.getLogger().info("Java HTTP trigger processed a request.");
        // Parse query parameter
        String query = request.getQueryParameters().get("name");
        String name = request.getBody().orElse(query);
        if (name == null) {
            return request.createResponse(400, "Please pass a name on the query string or in the request body");
        } else {
            service.returned(4);
            context.getLogger().info("process data");
            return request.createResponse(200, "Hello, " + name);
        }
    }
}
Whenever I make the service.returned(4) call, the system never returns. I can't quite figure out why; to me it looks as though the function never returns from Clojure, but I can't see the cause.
Just to add some context: I have tried it in a simple hello-world Java app which just prints out the result, and it works. It's only when I try to implement it in Azure Functions that it hangs.

Please see this question for a running example:
How to invoke Clojure function directly from Java
I would suggest simplifying your code at first, then adding the Azure stuff back in one line at a time, in case some interaction there is causing the problem.
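A minimal standalone check of the uberjar outside Azure Functions (the simplification suggested above) could be as small as the following; the class name is just illustrative, and com.test.app.service with its static returned method comes from the :gen-class declaration in the question.

public class ServiceSmokeTest {
    public static void main(String[] args) {
        // returned is the static int -> int method generated by :gen-class
        int doubled = com.test.app.service.returned(4);
        System.out.println("returned(4) = " + doubled); // expect 8
    }
}

If this small program prints 8 but the same call hangs inside the function app, the problem is more likely in how the Azure Functions runtime loads the jar than in the Clojure code itself.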

I followed these instructions and it seemed to resolve the class-not-found error. It seems as though, when running the command
mvn azure-functions:run
it doesn't automatically find all imported libraries. You have to use either the
maven-assembly-plugin
or the
maven-shade-plugin
to bundle the dependencies into the jar.

Related

GraalVM - embedding python multi-file project in java

I couldn't find a solution for creating a polyglot source out of multiple files in GraalVM.
Here is exactly what I want to achieve:
I have a python project:
my-project:
  .venv/
    ...libs
  __main__.py
  src/
    __init__.py
    Service.py
Example source code:
# __main__.py
from src.Service import Service

lambda url: Service(url)

# src/Service.py
import requests

class Service:
    def __init__(self, url):
        self.url = url

    def invoke(self):
        return requests.get(self.url)
This is a very simple example: there's an entry-point script, the project is structured in packages, and there is one external library (requests).
It works when I run it from the command line with python3 __main__.py, but I can't get it to work when embedding it in Java (it can't resolve the imports).
Example usage in Java:
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Source;
import org.graalvm.polyglot.Value;

import java.io.File;
import java.io.IOException;

public class Runner {
    public static void main(String[] args) throws IOException {
        Context context = Context.newBuilder("python")
                .allowExperimentalOptions(true)
                .allowAllAccess(true)
                .allowIO(true)
                .build();
        try (context) {
            // load lambda reference:
            Value reference = context.eval(Source.newBuilder("python", new File("/path/to/my-project/__main__.py")).build());
            // invoke lambda with `url` argument (returns `Service` object)
            Value service = reference.execute("http://google.com");
            // invoke `invoke` method of `Service` object and print response
            System.out.println("Response: " + service.getMember("invoke").execute());
        }
    }
}
It fails with Exception in thread "main" ModuleNotFoundError: No module named 'src'.
The same approach works for a JavaScript project (with an index.js similar to __main__.py, it is able to resolve imports - GraalVM "sees" the other project files), but somehow it doesn't when using Python.
I found out that Python is able to run a zip package with the project inside, but this also doesn't work with GraalVM.
Is there any chance to accomplish this? If not, maybe there is a tool similar to webpack for Python (if I could create a single-file bundle, it should also work).
By the way, I don't know Python at all, so I may be missing something.
Thanks for any help!
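One thing that may be worth trying (the option name is an assumption based on GraalVM's Python language options and should be checked against your GraalVM version) is pointing the embedded interpreter at the project directory via python.PythonPath, so that the src package becomes importable:

import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Source;
import org.graalvm.polyglot.Value;

import java.io.File;
import java.io.IOException;

public class RunnerWithPath {
    public static void main(String[] args) throws IOException {
        // python.PythonPath is assumed to act like PYTHONPATH for the embedded interpreter
        Context context = Context.newBuilder("python")
                .allowExperimentalOptions(true)
                .allowAllAccess(true)
                .allowIO(true)
                .option("python.PythonPath", "/path/to/my-project")
                .build();
        try (context) {
            Value reference = context.eval(Source.newBuilder("python", new File("/path/to/my-project/__main__.py")).build());
            Value service = reference.execute("http://google.com");
            System.out.println("Response: " + service.getMember("invoke").execute());
        }
    }
}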

How to accept and execute arbitrary Java code

Many Websites allow a user to type in Java code and run it. How does a program accept Java written externally/at run time and run it?
The only/closest answer I see on Stack Overflow is from 5 years ago, about Android development, and it recommended using Janino (Compile and execute arbitrary Java string in Android). Is this still the way to go? Has a better approach (like something built into Java) appeared in the last half decade?
If it helps, I'm building a training app for my students. The code is short (a few methods, maybe 20 lines max) and they must use standard libraries (no need to worry about importing things from maven, etc.).
Like similar online coding sites, I'd like to return the output of the run (or compilation failure).
An example use case:
Webpage says "The code below has an error, try to fix it." A text box contains code. A semicolon is missing.
User modifies the code and presses submit.
The Webpage either returns a compile error message or success.
Another use case would be for me to execute unit tests on the code they submitted and return the result. The point being, I give the user feedback on the code compilation/run.
Here is a small example using the Java compiler interface:
import java.io.File;
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class Compiler {
    public static void main(String[] args) throws Exception {
        // compile the java file
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        int result = compiler.run(null, null, null, "D:\\development\\snippets\\Test.java");
        System.out.println("compiler result " + result);

        // load the new class
        File classesDir = new File("D:\\development\\snippets\\");
        URLClassLoader classLoader = URLClassLoader.newInstance(new URL[] { classesDir.toURI().toURL() });
        Class<?> cls = Class.forName("Test", true, classLoader);

        // invoke a method of the class via reflection
        Object instance = cls.getDeclaredConstructors()[0].newInstance();
        Method testMethod = cls.getMethod("test");
        String testMethodResult = (String) testMethod.invoke(instance);
        System.out.println(testMethodResult);
    }
}
And here is the test class:
public class Test {
    public String test() {
        return "String from test.";
    }
}
Running the Compiler class returns
compiler result 0
String from test.
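To also return the actual compiler messages (the missing-semicolon case from the question), a variant using the compiler's diagnostic API could look like the sketch below; it reuses the same hard-coded path and keeps error handling minimal.

import java.util.Locale;
import javax.tools.Diagnostic;
import javax.tools.DiagnosticCollector;
import javax.tools.JavaCompiler;
import javax.tools.JavaFileObject;
import javax.tools.StandardJavaFileManager;
import javax.tools.ToolProvider;

public class DiagnosticCompiler {
    public static void main(String[] args) throws Exception {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        DiagnosticCollector<JavaFileObject> diagnostics = new DiagnosticCollector<>();
        try (StandardJavaFileManager fileManager =
                     compiler.getStandardFileManager(diagnostics, null, null)) {
            Iterable<? extends JavaFileObject> units =
                    fileManager.getJavaFileObjects("D:\\development\\snippets\\Test.java");
            // getTask exposes per-diagnostic details, unlike compiler.run above
            boolean success = compiler.getTask(null, fileManager, diagnostics, null, null, units).call();
            System.out.println("compiled: " + success);
            for (Diagnostic<? extends JavaFileObject> d : diagnostics.getDiagnostics()) {
                // e.g. ERROR at line 3: ';' expected
                System.out.println(d.getKind() + " at line " + d.getLineNumber()
                        + ": " + d.getMessage(Locale.ENGLISH));
            }
        }
    }
}

The diagnostic kind, line number, and message are exactly the feedback an online-judge style page would echo back to the student.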

Cannot run Shared Groovy library function

I am in the process of setting up Jenkins pipeline builds and am starting to use the same methods across multiple jobs, so it's time to put these common methods into a shared library.
The first function I have created is to update GitHub with the result of some unit tests. I am having an issue where I can run this function fine from the command line, but when it comes to using it within my Jenkins build it does not work, and I cannot seem to get any debug output in the Jenkins console.
This is the directory structure of my shared library
my-project
src
vars
- getCommitId.groovy
- gitUpdateStatus.groovy
So the first function getCommitId works fine
#!/usr/bin/env groovy
def call() {
    commit_id = sh script: 'git rev-parse HEAD', returnStdout: true
    commit_id = commit_id.replaceAll("\\s","") // Remove whitespace
    return commit_id
}
This returns the correct value
This is gitUpdateStatus
#!/usr/bin/env groovy
@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.7')
import static groovyx.net.http.ContentType.JSON
import static groovyx.net.http.Method.POST
import groovyx.net.http.HTTPBuilder

String targetUrl = 'https://api.github.com/repos/myRepo/'
def http = new HTTPBuilder(targetUrl)

http.request(POST) {
    uri.path = "repo/statuses/12345678"
    requestContentType = JSON
    body = [state: 'success', description: 'Jenkins Unit Tests', target_url: 'http://test.co.uk', context: 'unit tests']
    headers.'Authorization' = "token myOauthTokenHere"
    headers.'User-Agent' = 'Jenkins Status Update'
    headers.Accept = 'application/json'

    response.success = { resp, json ->
        println "GitHub updated successfully! ${resp.status}"
    }

    response.failure = { resp, json ->
        println "GitHub update Failure! ${resp.status} " + json.message
    }
}
I can run this fine via the command line, but I get no output when run as a Jenkins build.
My Jenkinsfile
@Library('echo-jenkins-shared') _
node {
    GIT_COMMIT_ID = getGitCommitId()
    echo "GIT COMMIT ID: ${GIT_COMMIT_ID}"
    gitUpdateStatus(GIT_COMMIT_ID)
}
Why would this not work, or could this be converted to just use native Groovy methods?
First off, I would advise you to use a service like https://requestb.in to check whether your code actually performs HTTP calls.
Second, I would recommend NOT using @Grab-based dependencies like HTTPBuilder in Jenkins pipelines, but the http_request plugin instead, downloadable & installable as a .hpi:
https://jenkins.io/doc/pipeline/steps/http_request/
Finally, you can find an example of a utility class to perform HTTP requests here:
https://github.com/voyages-sncf-technologies/hesperides-jenkins-lib/blob/master/src/com/vsct/dt/hesperides/jenkins/pipelines/http/HTTPBuilderRequester.groovy
With the rationale behind it explained there: https://github.com/voyages-sncf-technologies/hesperides-jenkins-lib#httprequester
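For comparison, the status update itself needs nothing beyond the JDK; a plain HttpURLConnection POST avoids the @Grab dependency entirely. Here is a sketch in Java, reusing the placeholder URL, token, and SHA from the question (a Groovy vars script can call an equivalent JVM class directly, though Jenkins script-security approvals may still apply):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class GitHubStatus {
    public static void main(String[] args) throws Exception {
        // Same endpoint shape as the gitUpdateStatus script above
        URL url = new URL("https://api.github.com/repos/myRepo/repo/statuses/12345678");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Authorization", "token myOauthTokenHere");
        conn.setRequestProperty("User-Agent", "Jenkins Status Update");
        conn.setRequestProperty("Accept", "application/json");
        conn.setRequestProperty("Content-Type", "application/json");

        String body = "{\"state\":\"success\",\"description\":\"Jenkins Unit Tests\","
                + "\"target_url\":\"http://test.co.uk\",\"context\":\"unit tests\"}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("GitHub responded with HTTP " + conn.getResponseCode());
    }
}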

Invoke PHP code from Java (Scala) and get result

This seems to have been asked in several places and has been marked as "closed" and "off-topic". However, people seem to run into this problem constantly:
invoking a php method from java (closed)
Calling PHP from Java (closed)
How can I run PHP code within a Java application? (closed)
This answer in the last question partly answers this but does not clarify how to read the outputs.
I finally found the answer to the question:
How do I run a PHP program from within Java and obtain its output?
To give more context, someone has given me a PHP file containing code for some method foo that returns a string. How do we invoke this from the JVM?
Searching on Google is not helpful, as all the articles I found explain how to call Java from PHP rather than PHP from Java.
The answer below explains how to do this using the PHP/Java bridge.
The answer is in Scala but would be easy to read for Java programmers.
Code created from this SO answer and this example:
package javaphp

import javax.script.ScriptEngineManager
import php.java.bridge._
import php.java.script._
import php.java.servlet._

object JVM { // shared object for PHP/JVM communication
  var out = ""
  def put(s: String) = {
    out = s
  }
}

object Test extends App {
  val engine = (new ScriptEngineManager).getEngineByExtension("php")

  val oldCode = """
<?php
function foo() {
  return 'hello';
  // some code that returns string
}
?>
"""

  val newCode = """
<?php
$ans = foo();
java('javaphp.JVM')->put($ans);
?>
""" + oldCode

  // below evaluates and returns
  JVM.out = "" // reset shared output
  engine.eval(newCode)
  println("output is : " + JVM.out) // prints hello
}
To run this file:
Install PHP and Scala and set your path correctly. Then create a file php.scala with the above code. Then run:
scalac php.scala
and
scala javaphp.Test
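As a simpler alternative, if all you need is the script's output rather than in-process integration, running the php CLI with ProcessBuilder and reading its stdout is usually enough. A minimal Java sketch, assuming php is on the PATH and a hypothetical foo.php that echoes foo()'s result:

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class PhpRunner {
    public static void main(String[] args) throws Exception {
        // Run the PHP CLI and capture whatever the script prints to stdout.
        ProcessBuilder pb = new ProcessBuilder("php", "foo.php");
        pb.redirectErrorStream(true); // merge stderr into stdout
        Process process = pb.start();

        StringBuilder output = new StringBuilder();
        try (BufferedReader reader =
                     new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                output.append(line).append(System.lineSeparator());
            }
        }
        int exitCode = process.waitFor();
        System.out.println("exit code: " + exitCode);
        System.out.println("output: " + output);
    }
}

The PHP/Java Bridge approach above remains the better fit when you need to pass objects back and forth rather than plain text.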

pyspark: call a custom java function from pyspark. Do I need Java_Gateway?

I wrote the following MyPythonGateway.java so that I can call my custom java class from Python:
package myPackage;

import py4j.GatewayServer;

public class MyPythonGateway {
    public String findMyNum(String input) {
        return MyUtility.parse(input).getMyNum();
    }

    public static void main(String[] args) {
        GatewayServer server = new GatewayServer(new MyPythonGateway());
        server.start();
    }
}
and here is how I used it in my Python code:
from py4j.java_gateway import JavaGateway

def main():
    gateway = JavaGateway()  # connect to the JVM
    myObj = gateway.entry_point.findMyNum("1234 GOOD DAY")
    print(myObj)

if __name__ == '__main__':
    main()
Now I want to use MyPythonGateway.findMyNum() function from PySpark, not just a standalone python script. I did the following:
myNum = sparkcontext._jvm.myPackage.MyPythonGateway.findMyNum("1234 GOOD DAY")
print(myNum)
However, I got the following error:
... line 43, in main:
myNum = sparkcontext._jvm.myPackage.MyPythonGateway.findMyNum("1234 GOOD DAY")
File "/home/edamameQ/spark-1.5.2/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 726, in __getattr__
py4j.protocol.Py4JError: Trying to call a package.
So what did I miss here? I don't know if I should run a separate Java application of MyPythonGateway to start a gateway server when using pyspark. Please advise. Thanks!
Below is exactly what I need:

input.map(f)

def f(row):
    # call MyUtility.java
    # x = MyUtility.parse(row).getMyNum()
    # return x
What would be the best way to approach this? Thanks!
First of all, the error you see usually means the class you're trying to use is not accessible. So most likely it is a CLASSPATH issue.
Regarding the general idea, there are two important issues:
you cannot access SparkContext inside an action or transformation, so using the PySpark gateway won't work (see How to use Java/Scala function from an action or a transformation? for some details). If you want to use Py4J from the workers, you'll have to start a separate gateway on each worker machine.
you really don't want to pass data between Python and the JVM this way. Py4J is not designed for data-intensive tasks.
In PySpark, before you start calling the method -
myNum = sparkcontext._jvm.myPackage.MyPythonGateway.findMyNum("1234 GOOD DAY")
you have to import the MyPythonGateway Java class as follows:
java_import(sparkContext._jvm, "myPackage.MyPythonGateway")
myPythonGateway = spark.sparkContext._jvm.MyPythonGateway()
myPythonGateway.findMyNum("1234 GOOD DAY")
Also, specify the jar containing myPackage.MyPythonGateway with the --jars option in spark-submit.
If input.map(f) has inputs as an RDD, for example, this might work, since you can't access the JVM variable (attached to the Spark context) inside the executor for a map function of an RDD (and to my knowledge there is no equivalent of @transient lazy val in pyspark).
import py4j.java_gateway

def pythonGatewayIterator(iterator):
    results = []
    jvm = py4j.java_gateway.JavaGateway().jvm
    mygw = jvm.myPackage.MyPythonGateway()
    for value in iterator:
        results.append(mygw.findMyNum(value))
    return results

inputs.mapPartitions(pythonGatewayIterator)
All you need to do is compile the jar and add it to the pyspark classpath with the --jars or --driver-class-path spark-submit options. Then access the class and method with the code below:
sc._jvm.com.company.MyClass.func1()
where sc is the Spark context.
Tested with Spark 2.3. Keep in mind that you can call a JVM class method only from the driver program, not from an executor.
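On the JVM side, the class referenced by sc._jvm.com.company.MyClass.func1() only needs to expose a static method and be packaged into the jar passed via --jars. A minimal illustration (package and method names taken from the line above; the body is hypothetical):

package com.company;

public class MyClass {
    // static, so the PySpark driver can call it as sc._jvm.com.company.MyClass.func1()
    public static String func1() {
        return "hello from the JVM";
    }
}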
