I want to call Groovy scripts from Java and refresh the Groovy scripts periodically. For example:
import groovy.lang.GroovyClassLoader;
import groovy.lang.GroovyObject;

import java.io.File;
import java.util.Map;

public class AppTest {
    public static void main(String[] args) throws Exception {
        TestVO test = new TestVO();
        AnotherInput input = new AnotherInput();
        test.setName("Maruthi");
        input.setCity("Newark");

        GroovyClassLoader loader = new GroovyClassLoader(AppTest.class.getClassLoader());
        Class<?> groovyClass = loader.parseClass(new File("src/main/resources/groovy/MyTestGroovy.groovy"));
        GroovyObject groovyObject = (GroovyObject) groovyClass.newInstance();

        Object[] inputs = {test, null};
        Map<String, String> result = (Map<String, String>) groovyObject.invokeMethod("checkInput", inputs);
        System.out.println(result);
    }
}
And my Groovy script is
class MyTestGroovy {
    def x = "Maruthi";

    def checkInput = { TestVO input, AnotherInput city ->
        if (input.getName().equals(x)) {
            input.setName("Deepan");
            println "Name changed Please check the name";
        } else {
            println "Still Maruthi Rocks";
        }
        Map<String, String> result = new HashMap<String, String>();
        result.put("Status", "Success");
        if (city != null && city.getCity().equalsIgnoreCase("Newark")) {
            result.put("requested_State", "Newark");
        }
        return result;
    }

    def executeTest = {
        println("Test Executed");
    }
}
How efficiently will memory be managed when I create multiple instances of the Groovy script and execute them? Is it advisable to use a number of Groovy scripts as my customized rule engine? Please advise.
It is usually better to create several instances from the same parsed script class than to parse the class every time you want to create an instance. Performance-wise, that is because compiling the script takes time that you pay in addition to creating the instance. Memory-wise, you use up the available class space faster: even if old classes are collected, having many script classes active at once can exhaust it... though that normally means hundreds or even thousands of them (depending on the JVM version and your memory settings).
Of course, once the script has changed, you will have to recompile the class anyway. So if in your scenario only one instance of the class is active at a time, and a new instance is only required after a change to the source, you can recompile every time.
I mention that especially because you might even be able to write the script in a way that lets you reuse the same instance. But that is of course beyond the scope of this question.
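As a concrete illustration of that trade-off, here is a minimal sketch (my own, not from the question; the class and field names are made up) that caches the compiled class and re-parses only when the script file changes on disk, so each call normally pays only the cost of creating an instance:

import groovy.lang.GroovyClassLoader;
import groovy.lang.GroovyObject;

import java.io.File;

public class ScriptCache {
    private final GroovyClassLoader loader =
            new GroovyClassLoader(ScriptCache.class.getClassLoader());
    private Class<?> cachedClass;
    private long lastCompiled = -1;

    public synchronized GroovyObject newScriptInstance(File script) throws Exception {
        // Re-parse (recompile) only when the file changed on disk;
        // otherwise reuse the already compiled class.
        if (cachedClass == null || script.lastModified() != lastCompiled) {
            cachedClass = loader.parseClass(script);
            lastCompiled = script.lastModified();
        }
        // Creating an instance of an already compiled class is cheap
        // compared to parsing the script again.
        return (GroovyObject) cachedClass.getDeclaredConstructor().newInstance();
    }
}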
tl;dr:
How do/can I store the function handles of multiple JS functions in Java for later use? Currently I have two ideas:
Create multiple ScriptEngine instances, each containing one loaded function. Store them in a map by column, with multiple entries per column kept in a list. Looks like a big overhead, depending on how 'heavy' a ScriptEngine instance is...
Some JavaScript solution to append methods for the same target field to an array. I don't know yet how to access that from the Java side, but I don't like it either. I would like to keep the script files as stupid as possible.
var test1 = test1 || [];
test1.push(function(input) { return ""; });
???
Ideas or suggestions?
Tell me more:
I have a project with a directory containing script files (JavaScript; expecting more than a hundred files, and it will grow in the future). The script files are named like test1;toupper.js, test1;trim.js and test2;capitalize.js. The name before the semicolon is the column/field that the script will process, and the part after the semicolon is a human-readable description of what the file does (simplified example). So in this example there are two scripts assigned to the "test1" column and one script to the "test2" column. The JS function template basically looks like:
function process(input) { return ""; };
My idea is to load (and evaluate/compile) all script files at server startup and then use the loaded functions by column whenever they are needed. So far, so good.
I can load/evaluate a single function with the following code. The example uses GraalVM, but it should be reproducible with other engines too.
final ScriptEngine engine = new ScriptEngineManager().getEngineByName("graal.js");
final Invocable invocable = (Invocable) engine;
engine.eval("function process(arg) { return arg.toUpperCase(); };");
var rr0 = invocable.invokeFunction("process", "abc123xyz"); // rr0 = ABC123XYZ
But when I load/evaluate the next function with the same name, the previous one is overwritten - logically, since it's the same function name.
engine.eval("function process(arg) { return arg + 'test'; };");
var rr1 = invocable.invokeFunction("process", "abc123xyz"); // rr1 = abc123xyztest
This is how I would do it.
The recommended way to use Graal.js is via the polyglot API: https://www.graalvm.org/reference-manual/embed-languages/
Something similar would probably work with the ScriptEngine API too, but here's the example using the polyglot API.
Wrap the function definition in ().
Return the functions to Java.
Not pictured, but you probably build a map from the column name to a list of functions to invoke on it.
Call the functions on the data.
import org.graalvm.polyglot.*;
import org.graalvm.polyglot.proxy.*;

public class HelloPolyglot {
    public static void main(String[] args) {
        System.out.println("Hello Java!");
        try (Context context = Context.create()) {
            Value toUpperCase = context.eval("js", "(function process(arg) { return arg.toUpperCase(); })");
            Value concatTest = context.eval("js", "(function process(arg) { return arg + 'test'; })");

            String text = "HelloWorld";
            text = toUpperCase.execute(text).asString();
            text = concatTest.execute(text).asString();
            System.out.println(text);
        }
    }
}
Now, Value.execute() returns a Value, which for simplicity I coerce to a Java String with asString(), but you don't have to do that; you can operate on the Value directly (here's the API for Value: https://www.graalvm.org/sdk/javadoc/org/graalvm/polyglot/Value.html).
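Not shown above, but the map from column name to functions could be built roughly like this (my own sketch; ScriptRegistry and the file handling are assumptions, not part of the answer):

import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

import java.io.File;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ScriptRegistry {

    // Reads every "<column>;<description>.js" file and groups the compiled
    // functions by column name.
    public static Map<String, List<Value>> load(Context context, File scriptDir) throws Exception {
        Map<String, List<Value>> byColumn = new HashMap<>();
        File[] scripts = scriptDir.listFiles((dir, name) -> name.endsWith(".js"));
        if (scripts == null) {
            return byColumn;
        }
        for (File file : scripts) {
            String column = file.getName().split(";", 2)[0]; // "test1;toupper.js" -> "test1"
            String body = new String(Files.readAllBytes(file.toPath()), StandardCharsets.UTF_8).trim();
            if (body.endsWith(";")) {
                body = body.substring(0, body.length() - 1);
            }
            // Wrapping the declaration in () makes eval() return the function
            // itself instead of binding a global "process" that the next file
            // would overwrite.
            Value fn = context.eval("js", "(" + body + ")");
            byColumn.computeIfAbsent(column, k -> new ArrayList<>()).add(fn);
        }
        return byColumn;
    }
}

At runtime you would then look up byColumn.get("test1") and execute() each Value on the cell's content, just as in the example above.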
I am new to writing JUnit tests. I have the Java API below, which gets a unique value from the database every time; it contains just a single query. I need to write a JUnit test for this API. Can anybody give some suggestions on how I should approach it?
public static int getUniqueDBCSequence() throws Exception
{
    int w_seq = 0;
    QueryData w_ps = null;
    ResultSet w_rs = null;
    try
    {
        w_ps = new QueryData("SELECT GETUNIQUENUMBER.NEXTVAL FROM DUAL");
        w_rs = SQLService.executeQuery(w_ps);
        while ( w_rs.next() )
        {
            w_seq = w_rs.getInt(1);
        }
    }
    catch (Exception a_ex)
    {
        LOGGER.fatal("Error occurred : " + a_ex.getMessage());
    }
    finally
    {
        SQLService.closeResultSet(w_rs);
    }
    return w_seq;
}
You are using only static methods: in the class under test, but also in its dependencies.
That is really not testable code with JUnit.
Besides, what do you want to unit test?
The method has no substantive logic of its own.
You could make SQLService.executeQuery() an instance method so that you could mock it. But really, what would be the point of mocking it?
To assert that the result of w_seq = w_rs.getInt(1); is returned?
Those look like technical assertions with little value, and maintaining unit tests with little value should be avoided.
Now, you could test with DbUnit or similar tools that populate an in-memory database and execute the code against it.
But the executed query is strongly coupled to Oracle sequences.
So, you could have some difficulty doing that.
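To make the first suggestion concrete, here is a minimal sketch (my own, assuming JUnit 4 and Mockito on the classpath; SequenceSource and SequenceService are hypothetical names) of hiding the static SQLService call behind an instance that can be mocked:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class UniqueSequenceTest {

    // Hypothetical seam around SQLService so the database can be replaced in tests.
    interface SequenceSource {
        int nextValue() throws Exception;
    }

    // The production implementation would run the SELECT GETUNIQUENUMBER.NEXTVAL query;
    // the class under test only delegates to the seam.
    static class SequenceService {
        private final SequenceSource source;

        SequenceService(SequenceSource source) {
            this.source = source;
        }

        int getUniqueDBCSequence() throws Exception {
            return source.nextValue();
        }
    }

    @Test
    public void returnsTheValueProvidedByTheDatabase() throws Exception {
        SequenceSource source = mock(SequenceSource.class);
        when(source.nextValue()).thenReturn(42);

        assertEquals(42, new SequenceService(source).getUniqueDBCSequence());
    }
}

As you can see, the assertion is almost tautological, which is exactly why mocking this particular method has little value; an in-memory database or DbUnit test of the real query would tell you more.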
I'm writing an IntelliJ plugin to analyse Java program code, so I use Soot to write static analyses. Every time a user triggers the analyse action of my plugin, I take the current VirtualFile of the current context like this:
FileEditorManager manager = FileEditorManager.getInstance(e.getProject());
VirtualFile files[] = manager.getSelectedFiles();
toAnalyse = files[0]; [...]
When I check the content of this file, all changes are applied. After this, I load the class I want to analyse in Soot.
String dir = toAnalyse.getParent().getPath() ;
Options.v().setPhaseOption("jb", "use-original-names");
Options.v().set_soot_classpath( System.getProperty("java.home")+";"+ dir);
c = Scene.v().loadClassAndSupport(name);
/* now analyse c */
This works perfectly for me. But now to my issue:
If I change something in the test instance of my plugin and trigger the same analysis again, nothing changes.
What have I tried so far?
I set the following options:
Options.v().set_dump_body( Arrays.asList("jb"));
Options.v().set_dump_cfg( Arrays.asList("jb"));
Options.v().set_allow_phantom_refs(true);
Options.v().set_whole_program(true);
I also removed all classes by hand, like this:
Chain<SootClass> classes = Scene.v().getClasses();
Stack<SootClass> stack = new Stack<>();
for (SootClass s : classes)
    stack.push(s);
while (!stack.empty())
    Scene.v().removeClass(stack.pop());
and started the program again.
I solved this issue.
SootClass c = Scene.v().loadClassAndSupport(name);
// ...
c.setResolvingLevel(0);
G.reset();
G.reset() resets all singleton instances.
Therefore all cached results are replaced when the action is triggered again.
public static Scene v() {
    return G.v().soot_Scene();
}
this.instance_soot_Scene is null after calling G.reset().
Therefore the following code:
public Scene soot_Scene() {
    if (this.instance_soot_Scene == null) {
        synchronized (this) {
            if (this.instance_soot_Scene == null) {
                this.instance_soot_Scene = new Scene(this.g);
            }
        }
    }
    return this.instance_soot_Scene;
}
returns a new instance with an empty result cache.
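Put together, the analysis action can simply reset Soot's globals at the start of every run. A short sketch of the order of operations (my own, assuming the options from the question are re-applied after the reset, since G.reset() clears them as well):

// Reset all of Soot's singletons so nothing from the previous run is cached.
G.reset();

// Options live in a singleton too, so they must be set again after the reset.
Options.v().setPhaseOption("jb", "use-original-names");
Options.v().set_soot_classpath(System.getProperty("java.home") + ";" + dir);

// Loading the class now re-reads the (possibly changed) class file.
SootClass c = Scene.v().loadClassAndSupport(name);
// ... run the analysis on c ...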
I've created two Groovy extension modules / methods on java.util.ArrayList. It all works very well inside my IDE. I use Gradle to build the jar and deploy it to a remote JVM. When it reaches the remote JVM, it fails.
Here is the extension method:
static Map sumSelectedAttributes(final List self, List attributes) {
    Map resultsMap = [:]
    attributes.each { attr ->
        resultsMap[attr] = self.inject(0) { sum, obj ->
            sum + obj[attr]
        }
    }
    return resultsMap
}
Here is the code that invokes it:
outputMap[processName][iName] << kBeanList.sumSelectedAttributes([
"messageCount", "userCount", "outstandingRequests",
"cpuUsage", "memoryUsage", "threadCount", "cacheCount", "huserCount",
"manualHuserCount", "dataPointerCount", "tableHandleCount",
"jdbCacheRecordCount", "dbConnectionCount"])
Here is the error:
No signature of method: java.util.ArrayList.sumSelectedAttributes() is
applicable for argument types: (java.util.ArrayList) values:
[[messageCount, incomingConnectionsCount, outgoingConnectionsCount,
...]]
Again, it works fine in IntelliJ with test cases. What is different on the remote JVM that would prevent this from working? Here are some things that came to mind:
The remote JVM uses Groovy 2.3 while I'm on 2.4.5
We use a custom classloader on the remote JVM to load classes
Other than that, I could not find any documentation about anything special I need to do to make extension modules work on remote JVMs.
Any help is greatly appreciated.
Per a comment, it seems like an issue with the custom classloader; here is the class that handles the manipulation of a few classloaders.
class CustomLoader {
    static Map loaders = [:]
    static File loaderRoot = new File("../branches")

    static URLClassLoader getCustomLoader(String branchName) {
        if (!loaders[branchName]) {
            loaders[branchName] = new URLClassLoader(getUrls(branchName))
        } else {
            loaders[branchName]
        }
    }

    static URLClassLoader updateClassLoader(String branchName) {
        loaders[branchName] = null
        loaders[branchName] = new URLClassLoader(getUrls(branchName))
    }

    private static URL[] getUrls(String branchName) {
        def loaderDir = new File(loaderRoot, branchName)
        List<File> files = []
        loaderDir.eachFileRecurse {
            if (it.name.endsWith('.jar')) {
                files << it
            }
        }
        List urls = files.sort { it.name }.reverse().collect { it.toURI().toURL() }
        return urls
    }
}
To manually register an extension method module you can use code similar to what is used in GrapeIvy, because grapes have the same problem: they make a jar visible in the wrong loader but still want to enable extension methods. The piece of code in question is this:
JarFile jar = new JarFile(file)
def entry = jar.getEntry(ExtensionModuleScanner.MODULE_META_INF_FILE)
if (entry) {
    Properties props = new Properties()
    props.load(jar.getInputStream(entry))
    Map<CachedClass, List<MetaMethod>> metaMethods = new HashMap<CachedClass, List<MetaMethod>>()
    mcRegistry.registerExtensionModuleFromProperties(props, loader, metaMethods)
    // add old methods to the map
    metaMethods.each { CachedClass c, List<MetaMethod> methods ->
        // GROOVY-5543: if a module was loaded using grab, there are chances that subclasses
        // have their own ClassInfo, and we must change them as well!
        Set<CachedClass> classesToBeUpdated = [c]
        ClassInfo.onAllClassInfo { ClassInfo info ->
            if (c.theClass.isAssignableFrom(info.cachedClass.theClass)) {
                classesToBeUpdated << info.cachedClass
            }
        }
        classesToBeUpdated*.addNewMopMethods(methods)
    }
}
In this code, file is a File representing the jar; in your case you will need something else here. Basically, we first load the descriptor file into Properties and call registerExtensionModuleFromProperties to fill the map with MetaMethods based on a given class loader. That is the key part for the solution of your problem: the right class loader here is one that can load all the classes in your extension module as well as the Groovy runtime! After this, any new meta class will know about the extension methods. The code that follows is needed only if meta classes already exist that you want to know about those new methods.
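To connect this with your setup, here is a rough sketch (my own; the module name, extension class and branch name are placeholders) of how the pieces could be wired up, building the same properties that the META-INF/services/org.codehaus.groovy.runtime.ExtensionModule descriptor in the jar would contain and passing in your branch loader:

import groovy.lang.GroovySystem
import groovy.lang.MetaMethod
import org.codehaus.groovy.reflection.CachedClass
import org.codehaus.groovy.runtime.metaclass.MetaClassRegistryImpl

// Descriptor values; normally these come from the file inside the jar.
Properties props = new Properties()
props.setProperty("moduleName", "my-list-extensions")
props.setProperty("moduleVersion", "1.0")
props.setProperty("extensionClasses", "com.example.ListExtensions")

// The loader must be able to load both the extension classes and the Groovy runtime.
ClassLoader loader = CustomLoader.getCustomLoader("someBranch")

def mcRegistry = (MetaClassRegistryImpl) GroovySystem.metaClassRegistry
Map<CachedClass, List<MetaMethod>> metaMethods = new HashMap<>()
mcRegistry.registerExtensionModuleFromProperties(props, loader, metaMethods)

// metaMethods now holds the newly registered methods; if meta classes for
// List/ArrayList already exist, push the methods into them exactly as in the
// snippet above.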
I need to submit several jobs to Hadoop which are all related (which is why they are launched by the same driver class) but completely independent of each other. Right now I start jobs like this:
int res = ToolRunner.run(new Configuration(), new MapReduceClass(params), args);
which runs a job, gets the return code, and moves on.
What I'd like to do is submit several such jobs to run in parallel, retrieving the return code of each one.
The obvious (to me) idea would be to launch several threads, each of which is responsible for a single hadoop job, but I'm wondering if hadoop has a better way to accomplish this? I don't have any experience writing code with concurrency, so I'd rather not spend a lot of time learning the intricacies of it unless it's necessary here.
This may be just a suggestion, but since it involves code I will put it as an answer.
In this code (personal code), I just iterate through some variable and submit a job (the same job) several times.
Using job.waitForCompletion(false) will help you to submit several jobs.
while (processedInputPaths < inputPaths.length) {
    if (processedInputPaths + inputPathsLimit < inputPaths.length) {
        end = processedInputPaths + inputPathsLimit - 1;
    } else {
        end = inputPaths.length - 1;
    }
    start = processedInputPaths;

    Job job = this.createJob(configuration, inputPaths, cycle, start, end, outputPath + "/" + cycle);
    boolean success = job.waitForCompletion(true);
    if (success) {
        cycle++;
        processedInputPaths = end + 1;
    } else {
        LOG.info("Cycle did not end successfully :" + cycle);
        return -1;
    }
}
psabbate's answer led me to find a couple of pieces of the API that I was missing. This is how I solved it:
In the driver class, start the jobs with code like this:
List<RunningJob> runningJobs = new ArrayList<RunningJob>();
for (String jobSpec : jobSpecs) {
    // Configure, for example, a params map that gets passed into the MR class's constructor
    ToolRunner.run(new Configuration(), new MapReduceClass(params, runningJobs), null);
}
for (RunningJob rj : runningJobs) {
    System.err.println("Waiting on job " + rj.getID());
    rj.waitForCompletion();
}
Then, in the MapReduceClass, define a private variable List<RunningJob> runningJobs and a constructor like this:
public MergeAndScore(Map<String, String> p, List<RunningJob> rj) throws IOException {
    params = Collections.unmodifiableMap(p);
    runningJobs = rj;
}
And in the run() method that ToolRunner calls, define your JobConf and submit the job with
JobClient jc = new JobClient();
jc.init(conf);
jc.setConf(conf);
runningJobs.add(jc.submitJob(conf));
With this, run() returns immediately, and the jobs can be accessed via the runningJobs object in the driver class.
Note that I am working on an older version of Hadoop, so jc.init(conf) and/or jc.setConf(conf) may or may not be necessary depending on your setup, though probably at least one of them is required.
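On newer Hadoop versions the same pattern can be expressed with the org.apache.hadoop.mapreduce API, where Job.submit() also returns immediately. A minimal sketch (my own, not from the answer above):

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ParallelJobs {

    public static boolean runAll(List<Configuration> confs) throws Exception {
        List<Job> jobs = new ArrayList<>();
        for (Configuration conf : confs) {
            Job job = Job.getInstance(conf, "independent job " + jobs.size());
            // ... set mapper, reducer, input and output paths here ...
            job.submit();          // returns immediately, the job runs in the cluster
            jobs.add(job);
        }
        boolean allSucceeded = true;
        for (Job job : jobs) {
            // Blocks until this job finishes; the others keep running meanwhile.
            if (!job.waitForCompletion(true)) {
                System.err.println("Job failed: " + job.getJobName());
                allSucceeded = false;
            }
        }
        return allSucceeded;
    }
}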