How can I convince GroovyShell to maintain state over eval() calls? - java

I'm trying to use Groovy to create an interactive scripting / macro mode for my application. The application is OSGi and much of the information the scripts may need is not known up front. I figured I could use GroovyShell and call eval() multiple times, continually appending to the namespace as OSGi bundles are loaded. GroovyShell maintains variable state over multiple eval calls, but not class definitions or methods.
Goal: create a base class during startup. As OSGi bundles load, create derived classes as needed.

I am not sure what you mean about declared classes not existing between evals; the following two scripts work as expected when evaled one after another:
class C {{println 'hi'}}
new C()
...
new C()
However, methods become bound to the class that declared them, and GroovyShell creates a new class for each script it evaluates. If you do not need the return value of any of the scripts and they are truly scripts (not classes with main methods), you can attach the following to the end of every evaluated script.
Class klass = this.getClass()
this.getMetaClass().getMethods().each {
    if (it.declaringClass.cachedClass == klass) {
        binding[it.name] = this.&"$it.name"
    }
}
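To use it, you could simply append that trailer to every script text before handing it to the shell. A minimal (untested) sketch from the Java side, with the trailer stored in a constant:
// Trailer copied from above; appended to every script so its methods
// end up as closures in the shared binding.
private static final String EXPORT_METHODS =
        "\nClass klass = this.getClass()\n"
        + "this.getMetaClass().getMethods().each {\n"
        + "    if (it.declaringClass.cachedClass == klass) {\n"
        + "        binding[it.name] = this.&\"$it.name\"\n"
        + "    }\n"
        + "}\n";

GroovyShell shell = new GroovyShell();
shell.evaluate("def greet(name) { \"hi $name\" }" + EXPORT_METHODS);
shell.evaluate("println greet('OSGi')"); // greet() is resolved via the closure in the binding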
If you depend on the return value you can hand-manage the evaluation and run the script as part of your parsing (warning, untested code follows, for illustrative uses only)...
String scriptText = ...
Script script = shell.parse(scriptText)
def returnValue = script.run()
Class klass = script.getClass()
script.getMetaClass().getMethods().each {
    if (it.declaringClass.cachedClass == klass) {
        shell.context[it.name] = script.&"$it.name" // capture the method from the parsed script
    }
}
// do whatever with returnValue...
There is one last caveat I am sure you are aware of: statically typed variables are not kept between evals because they are not stored in the binding. So in the previous script the variable 'klass' will not be kept between script invocations and will disappear. To rectify that, simply remove the type declarations on the first use of all variables; that way they will be read from and written to the binding.
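To illustrate that last point, a quick (untested) sketch from the Java side: the untyped variable survives into the next eval because it lives in the binding, while the locally typed one does not.
GroovyShell shell = new GroovyShell();
shell.evaluate("int typed = 1; untyped = 2"); // 'typed' is a local, 'untyped' goes to the binding
shell.evaluate("println untyped");            // prints 2
shell.evaluate("println typed");              // throws groovy.lang.MissingPropertyException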

Ended up injecting code before each script compilation. The end goal is that the user-written script has a domain-specific language available for use.

This might be what you are looking for?
From Groovy in Action
def binding = new Binding(x: 6, y: 4)
def shell = new GroovyShell(binding)
def expression = '''f = x * y'''
shell.evaluate(expression)
assert binding.getVariable("f") == 24
An appropriate use of Binding should allow you to maintain state.

Related

How Can I Verify varargs To RPC Calls Before Runtime?

I can make an RPC call in Java like this:
final FlowHandle flowHandle = rpcOps.startFlowDynamic(
        TransferObligation.Initiator.class,
        linearId, newLender, true);
The first parameter is the class of the flow to invoke and the next three are the arguments passed to that flow's constructor via varargs.
As we can see from the class definition, the args match and the call works fine:
public Initiator(UniqueIdentifier linearId, Party newLender, Boolean anonymous) {
    this.linearId = linearId;
    this.newLender = newLender;
    this.anonymous = anonymous;
}
However, if I add or remove args from the constructor, the code will still compile and I will only notice the mismatch at runtime (or in integration testing, assuming I have enough test coverage).
The same applies if I pass the wrong args in the first place in the RPC call.
e.g. the following compiles fine but gives a runtime error:
final FlowHandle flowHandle = rpcOps.startFlowDynamic(
        TransferObligation.Initiator.class,
        linearId, newLender, true, 100000L, "Random String");
Is it possible to check for these errors with something other than test cases?
e.g. Static analysis using a custom IDEA code inspection or a custom SonarQube rule
EDIT: It appears that the Kotlin API has a type-safe way of starting the flows (using inline reified extension functions) that the Java API does not, so I have removed the kotlin tag and updated the references to Java examples.
Alongside CordaRPCOps.startFlowDynamic, which as you mentioned takes the flow constructor arguments as varargs, there are the CordaRPCOps.startFlow extension functions, which are basically nothing more than type-safe wrappers for invoking flows. See CordaRPCOps.kt.
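From Java, one option short of static analysis is to confine the unchecked varargs call to a single hand-written, strongly typed factory method per flow, so argument mismatches become compile errors at every call site. A rough sketch only; the wrapper class is invented here, the flow's result type is left as a wildcard, and import paths may vary with your Corda version:
import net.corda.core.contracts.UniqueIdentifier;
import net.corda.core.identity.Party;
import net.corda.core.messaging.CordaRPCOps;
import net.corda.core.messaging.FlowHandle;
// plus an import for your TransferObligation flow

public final class TypedFlowStarters {
    private TypedFlowStarters() {}

    // The only place that touches the unchecked varargs API; its signature
    // mirrors the Initiator constructor, so callers are checked by the compiler.
    public static FlowHandle<?> startTransferObligation(CordaRPCOps rpcOps,
                                                        UniqueIdentifier linearId,
                                                        Party newLender,
                                                        Boolean anonymous) {
        return rpcOps.startFlowDynamic(TransferObligation.Initiator.class,
                linearId, newLender, anonymous);
    }
}
If the constructor later gains or loses a parameter, only this one method needs updating, and every stale call site stops compiling.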

JUnit - How to unit test method that reads files in a directory and uses external libraries

I have this method that I am using in a NetBeans plugin:
public static SourceCodeFile getCurrentlyOpenedFile() {
    MainProjectManager mainProjectManager = new MainProjectManager();
    Project openedProject = mainProjectManager.getMainProject();

    /* Get Java file currently displaying in the IDE if there is an opened project */
    if (openedProject != null) {
        TopComponent activeTC = TopComponent.getRegistry().getActivated();
        DataObject dataLookup = activeTC.getLookup().lookup(DataObject.class);
        File file = FileUtil.toFile(dataLookup.getPrimaryFile()); // Currently opened file

        // Check if the opened file is a Java file
        if (FilenameUtils.getExtension(file.getAbsoluteFile().getAbsolutePath()).equalsIgnoreCase("java")) {
            return new SourceCodeFile(file);
        } else {
            return null;
        }
    } else {
        return null;
    }
}
Basically, using NetBeans API, it detects the file currently opened by the user in the IDE. Then, it loads it and creates a SourceCodeFile object out of it.
Now I want to unit test this method using JUnit. The problem is that I don't know how to test it.
Since it doesn't receive any argument as a parameter, I can't test how it behaves given wrong arguments. I also thought about trying to manipulate openedProject in order to test the method's behaviour given different values for that object, but as far as I know, I can't manipulate a variable that way in JUnit. I also cannot check what the method returns, because in the unit test it will always return null, since it doesn't detect any opened file in NetBeans.
So, my question is: how can I approach the unit testing of this method?
Well, your method does take parameters, "between the lines":
MainProjectManager mainProjectManager = new MainProjectManager();
Project openedProject = mainProjectManager.getMainProject();
basically fetches the object to work on.
So the first step would be to change that method signature, to:
public static SourceCodeFile getCurrentlyOpenedFile(Project project) {
...
Of course, that object isn't used, except for that null check. So the next level would be to have a distinct method like
SourceCodeFile lookup(DataObject dataLookup) {
In other words: your real problem is that you wrote hard-to-test code. The "default" answer is: you have to change your production code to make it easier to test.
For example by ripping it apart, and putting all the different aspects into smaller helper methods.
You see, that last method lookup() takes a parameter, and now it becomes (somehow) possible to think up test cases for it. Probably you will have to use a mocking framework such as Mockito to pass mocked instances of that DataObject class within your test code.
Long story short: there are no detours here. You can't test your code (in reasonable ways) as it is currently structured. Re-structure your production code, then all your ideas about "when I pass X, then Y should happen" can work out.
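As a rough illustration of what such a restructuring buys you (SourceCodeFile and the FilenameUtils call come from the question; the extracted method and the tests are just an assumed sketch):
// Production code: a small, parameterised helper that is trivial to test.
static SourceCodeFile toSourceCodeFile(File file) {
    if (file == null) {
        return null;
    }
    String extension = FilenameUtils.getExtension(file.getAbsolutePath());
    return "java".equalsIgnoreCase(extension) ? new SourceCodeFile(file) : null;
}

// Test code: plain JUnit, no NetBeans runtime needed.
@Test
public void returnsNullForNonJavaFiles() {
    assertNull(toSourceCodeFile(new File("notes.txt")));
}

@Test
public void wrapsJavaFiles() {
    assertNotNull(toSourceCodeFile(new File("Foo.java")));
}
The remaining NetBeans-specific lookup (TopComponent, DataObject, FileUtil) stays in a thin wrapper, or gets covered with Mockito once it only deals with injected objects.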
Disclaimer: yes, theoretically, you could test the above code by heavily relying on frameworks like PowerMock(ito) or JMockit. These frameworks allow you to control (mock) calls to static methods, or to new(). So they would give you full control over everything in your method. But that would basically force your tests to know everything that is going on in the method under test. Which is a really bad thing.

Import classes in method?

I'm customizing a PLM Windchill Workflow, which provides a mechanism to execute Java code snippets. Unfortunately, they are 'inserted' into a prepared service method, which means there is no way to import classes, so I have to use fully qualified package names everywhere. Don't try to understand the snippet below, just look at what it looks like:
wt.fc.QueryResult activities = wt.fc.PersistenceHelper.manager.find((wt.pds.StatementSpec) activitiesQuery);
while (activities.hasMoreElements()) {
    wt.workflow.work.WfAssignedActivity activity = (wt.workflow.work.WfAssignedActivity) activities.nextElement();
    if (activity.getDisplayIdentifier().toString().equals("Analyze Image Request")) {
        java.util.List<wt.workflow.work.WorkItem> workItems = wt.workflow.status.WfWorkflowStatusHelper.service.getWorkItems(activity);
        for (wt.workflow.work.WorkItem workItem : workItems) {
            String action = workItem.getActionPerformed();
            if (action != null && action.equals("Accepted")) {
                wt.org.WTPrincipalReference approver = workItem.getOwnership().getOwner();
                n_approver = approver.getFullName() + " (" + approver.getDisplayName() + ")";
                wt.fc.collections.WTHashSet approverSet = new wt.fc.collections.WTHashSet(java.util.Arrays.asList(approver));
                wt.project.Role role = wt.project.Role.toRole("APPROVER");
                com.ptc.windchill.pdmlink.change.server.impl.WorkflowProcessHelper.setChangeItemParticipants(report, role, approverSet);
                break;
            }
        }
        break;
    }
}
And my question is: how can I make this code more readable? Of course there is no way to import classes inside the method, and there is even no way to divide this snippet into separate methods (as it is 'pasted' into one), so I'm looking for other ideas.
One option to make the code more readable would be to separate chained method/property calls across multiple lines.
For example, this line:
wt.project.Role role = wt.project.Role.toRole("APPROVER");
could be rewritten as:
wt.project.Role role = wt
.project
.Role
.toRole("APPROVER");
You can call this complete code from a customized Java class.
You just have to call your class and take the final parameters required from the Java class to make it more readable.
If you need multiple outputs, write multiple methods in the Java class and call them in the workflow expression.
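For example, the lookup half of the snippet could move into a regular class where imports are allowed, leaving only a short, fully qualified call in the workflow expression. This is a sketch only; the package and class names are made up, and exact signatures may differ between Windchill versions:
package ext.mycompany.workflow; // hypothetical location under codebase/ext

import java.util.List;

import wt.fc.PersistenceHelper;
import wt.fc.QueryResult;
import wt.org.WTPrincipalReference;
import wt.pds.StatementSpec;
import wt.workflow.status.WfWorkflowStatusHelper;
import wt.workflow.work.WfAssignedActivity;
import wt.workflow.work.WorkItem;

public class ApproverLookup {

    // Same traversal as in the expression above, but readable thanks to imports.
    public static WTPrincipalReference findApprover(StatementSpec activitiesQuery) throws Exception {
        QueryResult activities = PersistenceHelper.manager.find(activitiesQuery);
        while (activities.hasMoreElements()) {
            WfAssignedActivity activity = (WfAssignedActivity) activities.nextElement();
            if ("Analyze Image Request".equals(activity.getDisplayIdentifier().toString())) {
                List<WorkItem> workItems = WfWorkflowStatusHelper.service.getWorkItems(activity);
                for (WorkItem workItem : workItems) {
                    if ("Accepted".equals(workItem.getActionPerformed())) {
                        return workItem.getOwnership().getOwner();
                    }
                }
            }
        }
        return null;
    }
}
The expression then shrinks to one call to ext.mycompany.workflow.ApproverLookup.findApprover(activitiesQuery), plus the few lines that set the participants on the change item.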
You can't.
Workflow expressions are method bodies.
A statement like
wt.fc.QueryResult activities = wt.fc.PersistenceHelper.manager.find((wt.pds.StatementSpec) activitiesQuery);
ends up in a class under $WT_HOME/codebase/wt/workflow/expr/
with a method:
public static Object executemethod_1(Object[] var0, Object[] var1) throws Exception {
    wt.fc.QueryResult activities = wt.fc.PersistenceHelper.manager.find((wt.pds.StatementSpec) activitiesQuery);
    // some generated code to handle variables...
}
So, you can't use imports.
However:
If you have a PDMLink version greater than 10, you can externalize workflow expressions:
http://support.ptc.com/cs/help/windchill_hc/wc100_hc/index.jspx?id=WFTemplateExtExpression&action=show
This creates a Java class under /codebase/ext/wt/workflow/externalize.
Then you can do what you want, but you'll have to compile these classes, and do a stop/start in case of modifications.
Basically, it's nothing more than calling external code from the expression, so I don't use it a lot...

Create modules Java

I have a Java bot running based on the PircBotX framework. An IRC bot simply replies to commands. So now I have a list of static strings, e.g. !weather, !lastseen and the like, in my Main.java file.
For each command I add, I create a new static string and check whether each incoming message starts with any of the defined commands.
Pseudocode
Receive message `m`
if m matches !x
-> do handleX()
if m matches !y
-> do handleY()
This is basically one very large if test.
What I would like to do is create some sort of skeleton class that perhaps implements an interface and defines on which command it should act and a body that defines the code it should execute. Something I'm thinking of is shown below:
public class XkcdHandler implements CommandHandlerInterface
{
    public String getCommand()
    {
        return "!xkcd";
    }

    public void HandleCommand(String[] args, Channel ircChannel)
    {
        // Get XKCD..
        ircChannel.send("The XKCD for today is ..");
    }
}
With such a class I could simply add a new class and be done with it. As it stands, I have to add the command string, add the if test to the list, and add the handler method to the Main.java class. It is just not a nice example of software architecture.
Is there a way that I could create something that automatically loads these classes (or instances of those classes), and then just call something like invokeMatchingCommand()? This code could then iterate a list of loaded commands and invoke HandleCommand on the matching instance.
Update
With the answer of BalckEye in mind I figured I could load all classes that are found in a package (i.e., Modules), instantiate them and store them in a list. This way I could handle each message as shown in his answer (i.e., iterate the list and execute the class method for each matching command).
However, it seems, according to this thread, that it's not really viable to do. At this point I'm having a look at classloaders, perhaps that would be a viable solution.
There are several ways, I think. You can just use a Map with the command as the key and an interface which executes your code as the value. Something like this:
Map<String, CommandInterface> commands = new ....
and then use the map like this:
CommandInterface cmd = commands.get(command);
if (cmd != null) {
    cmd.execute();
}
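Wired up a little further, and assuming CommandInterface is a functional interface with a single execute() method (so it can be written as a lambda), the dispatch could look roughly like this; the registrations and the message splitting are only illustrative:
Map<String, CommandInterface> commands = new HashMap<>();
commands.put("!xkcd",    () -> System.out.println("would fetch today's XKCD"));
commands.put("!weather", () -> System.out.println("would fetch the weather"));

String message = "!weather Amsterdam";   // an incoming IRC message
String key = message.split("\\s+")[0];   // "!weather"
CommandInterface cmd = commands.get(key);
if (cmd != null) {
    cmd.execute();
}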
You are looking for a static initializer block, for instance:
class Main {
    private static List<CommandHandlerInterface> modules = new ArrayList<>();

    static { // runs when a static member is accessed for the first time (once per class)
        modules.add(new WeatherCommand());
        // etc.
    }

    // method here which iterates over modules and checks
}
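The iterating method hinted at in that last comment could be something along these lines (an untested sketch, reusing the question's invokeMatchingCommand name and interface):
private static void invokeMatchingCommand(String message, String[] args, Channel ircChannel) {
    for (CommandHandlerInterface module : modules) {
        if (message.startsWith(module.getCommand())) {
            module.HandleCommand(args, ircChannel);
            return; // first matching handler wins
        }
    }
}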

Groovy make methods visible from one script to another

I have a script with utility methods that I would like to access from my other script.
I load my scripts like this in my Java code:
static {
    GroovyShell shell = new GroovyShell();

    // This is the script that has the utilities
    groovyUtils = shell.parse(new InputStreamReader(MyJavaClass.class.getResourceAsStream("scripts/json/MyUtils.groovy")));

    // This is the script that does things
    groovyScript = shell.parse(new InputStreamReader(MyJavaClass.class.getResourceAsStream("scripts/json/MyScript.groovy")));
}
I would like to expose the methods from MyUtils.groovy to be usable in MyScript.groovy (and also other scripts in the future)
There are a number of ways you can achieve this.
You're talking about methods, so I guess you have a class in MyUtils.groovy.
In that case you can specify a Binding, e.g.:
def myUtils = new MyUtils()
def binding = new Binding([method1: myUtils.&method1])
def shell = new GroovyShell(binding)
shell.evaluate(new File("scripts/json/MyScript.groovy"))
In the above you can reference method1 in your script, and you will end up invoking it on the myUtils instance.
Another solution is to specify a script base class, e.g.:
def configuration = new CompilerConfiguration()
configuration.setScriptBaseClass('MyUtils') // the class name, not the file name
def shell = new GroovyShell(this.class.classLoader, new Binding(), configuration)
The MyUtils class must then extend Script; all its methods are available in the scripts you parse using this shell.
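Translated to the Java loading code from the question, the base-class variant might look roughly like this (a sketch, assuming MyUtils is resolvable by the shell's class loader and extends groovy.lang.Script):
import java.io.InputStreamReader;

import groovy.lang.Binding;
import groovy.lang.GroovyShell;
import groovy.lang.Script;
import org.codehaus.groovy.control.CompilerConfiguration;

// ...
CompilerConfiguration configuration = new CompilerConfiguration();
configuration.setScriptBaseClass("MyUtils"); // fully qualified name of the base script class

GroovyShell shell = new GroovyShell(MyJavaClass.class.getClassLoader(),
        new Binding(), configuration);

// Every script parsed by this shell now extends MyUtils and sees its methods.
Script groovyScript = shell.parse(new InputStreamReader(
        MyJavaClass.class.getResourceAsStream("scripts/json/MyScript.groovy")));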
There are essentially multiple ways to embed and run Groovy; these are quite often discussed in the context of designing DSLs. You can take a look e.g. here, if you haven't searched for it before.
