Make and load copy of SWI-Prolog Instance with JPL - java

What I am trying to do is create a copy of a Prolog instance and load that copy with JPL (the Java-Prolog Interface). I can think of several possible ways to do this, but none of them are completely worked out, and that is why I have come here.
First, I know I could save a copy of the state using qsave_program/2. This creates an exe file which I can run. However, I need to query this saved instance from Java using JPL. I've tried looking for documentation for this, but I couldn't find any (probably not a common issue). Is there any way I can run an instance saved using qsave_program/2 and query it from JPL?
The second idea would be to query the original instance for all dynamically asserted clauses. However, I cannot know what was asserted, so I cannot ask for those things directly, but rather I must collect these clauses based on the fact that they are dynamic. Then I could simply start another instance from JPL and assert these facts to create a copy. Is this possible? And would this effectively create a copy of the state?

Alright, here is the solution I've decided on. I can find all the predicates I will need to reassert with the following query:
predicate_property(X,dynamic),\+predicate_property(X,built_in),\+predicate_property(X,number_of_clauses(0)).
Here's why I think this will work for me.
predicate_property(X,dynamic) gives me all the dynamic predicates. The reason I don't stop there is that there are many dynamic predicates whose clauses I don't need to explicitly assert in my new Prolog instance. Predicates that have the property built_in can be ignored, because those will be automatically defined when I create the new Prolog instance. Even if such a predicate is explicitly defined by the user, that definition will be re-established anyway because I am consulting the same file. I can also ignore predicates that have no clauses (number_of_clauses(0)), because a predicate with no clauses does not affect the state.
So, once I have all the dynamic predicates I want, I can find all solutions to those predicates, make a list of the Terms returned in Java through JPL, open a new consultation of the file, and reassert those Terms.
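A rough sketch of how that flow could look through JPL (assuming the org.jpl7 API; the variable names H and B, the class name, and the consulted file name are just placeholders for this sketch):

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.jpl7.Atom;
import org.jpl7.Query;
import org.jpl7.Term;

public class PrologStateCopier {

    // Enumerate every clause of each user-defined dynamic predicate in the running instance.
    // Relies on Term.toString() producing Prolog-readable text, which holds for simple terms.
    public static List<String> collectDynamicClauses() {
        String goal =
            "predicate_property(H, dynamic), " +
            "\\+ predicate_property(H, built_in), " +
            "\\+ predicate_property(H, number_of_clauses(0)), " +
            "clause(H, B)";
        List<String> clauses = new ArrayList<>();
        for (Map<String, Term> solution : new Query(goal).allSolutions()) {
            clauses.add("(" + solution.get("H") + " :- " + solution.get("B") + ")");
        }
        return clauses;
    }

    // Consult the same source file again, then reassert the collected clauses on top of it.
    public static void restoreInto(String sourceFile, List<String> clauses) {
        new Query("consult", new Term[] { new Atom(sourceFile) }).hasSolution();
        for (String clause : clauses) {
            new Query("assertz(" + clause + ")").hasSolution();
        }
    }
}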


How to run only one specific rule in SonarQube?

I want to execute only the 'Unused assignments should be removed' rule/check on my Java project. But I don't know how to do it.
I've already tried with 'Ignore Issues on Multiple Criteria' and 'Restrict Scope of Coding Rules', but to reach my goal that way I would have to add every rule except the one I want to keep to 'Ignore Issues on Multiple Criteria'.
So, is there a way to execute only a single rule?
I'm using version sonarqube-8.4.0.35506.
If I understood correctly what you want to achieve, you need to create a new copy of the root profile and use that copy, so that you can modify its set of rules as you want.
This answer with steps could help you: https://sqa.stackexchange.com/questions/24734/how-to-deactivate-a-rule-in-sonarqube

A better way to call static methods in user-submitted code?

I have a large data set. I am creating a system which allows users to submit java source files, which will then be applied to the data set. To be more specific, each submitted java source file must contain a static method with a specific name, let's say toBeInvoked(). toBeInvoked will take a row of the data set as an array parameter. I want to call the toBeInvoked method of each submitted source file on each row in the data set. I also need to implement security measures (so toBeInvoked() can't do I/O, can't call exit, etc.).
Currently, my implementation is this: I have a list of the names of the Java source files. For each file, I create an instance of a custom secure ClassLoader that I wrote, which compiles the source file and returns the compiled class. I use reflection to extract the static method toBeInvoked() (e.g. method = c.getMethod("toBeInvoked", double[].class)). Then I iterate over the rows of the data set and invoke the method on each row.
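For reference, the invocation part of that approach might look roughly like this (a sketch only: the wrapping class and the dataSet shape are illustrative, and compilation/loading by the custom ClassLoader is assumed to have already happened):

import java.lang.reflect.Method;
import java.util.List;

public class ReflectiveRunner {
    public static void applyToAll(Class<?> submitted, List<double[]> dataSet) throws Exception {
        // Look the static method up once, outside the loop over rows.
        Method method = submitted.getMethod("toBeInvoked", double[].class);
        for (double[] row : dataSet) {
            method.invoke(null, (Object) row);  // null receiver: the method is static
        }
    }
}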
There are at least two problems with my approach:
it appears to be painfully slow (I've heard reflection tends to be slow)
the code is more complicated than I would like
Is there a better way to accomplish what I am trying to do?
There is no significantly better approach given the constraints that you have set yourself.
For what it is worth, what makes this "painfully slow" is compiling the source files to class files and loading them. That is many orders of magnitude slower than the use of reflection to call the methods.
(Use of a common interface rather than static methods is not going to make a measurable difference to speed, and the reduction in complexity is relatively small.)
If you really want to simplify this and speed it up, change your architecture so that the code is provided as a JAR file containing all of the compiled classes.
Assuming your #toBeInvoked() could be defined in an interface rather than being static (it should be!), you could just load the class and cast it to the interface:
// Load the submitted class through the custom class loader and check that it implements the interface.
Class<? extends YourInterface> c = Class.forName("name", true, classLoader).asSubclass(YourInterface.class);
// Instantiate it via its no-arg constructor (Class#newInstance is deprecated in recent JDKs).
YourInterface i = c.getDeclaredConstructor().newInstance();
Afterwards invoke #toBeInvoked() directly.
Also have a look into java.util.ServiceLoader, which could be helpful for finding the right class to load in case you have more than one source file.
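For instance, if the submitted code were packaged as JARs that declare a provider entry under META-INF/services, discovery and invocation could look roughly like this (YourInterface and its toBeInvoked(double[]) method are assumed, as above):

import java.util.List;
import java.util.ServiceLoader;

public class AnalyzerRunner {

    // Discover every implementation available on the classpath / in the submitted JARs
    // and apply each one to every row, with no per-call reflection.
    public static void runAll(List<double[]> dataSet) {
        for (YourInterface analyzer : ServiceLoader.load(YourInterface.class)) {
            for (double[] row : dataSet) {
                analyzer.toBeInvoked(row);
            }
        }
    }
}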
Personally, I would use an interface. This will allow you to have multiple instances with their own state (useful for multi-threading), and more importantly the interface both defines which methods must be implemented and gives you a way to call them.
Reflection is slow, but only relative to other options such as a direct method call. If you are scanning a large data set, the cost of pulling data from main memory is likely to be much higher.
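For example, the shared interface could be as simple as this (the name and signature are illustrative; the original post never defines it):

// Each submitted class would implement this instead of exposing a static method.
public interface YourInterface {
    void toBeInvoked(double[] row);
}

The caller then works against the interface, so each row becomes a direct virtual call instead of a Method.invoke().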
I would suggest the following steps for your problem.
To check whether the submitted code contains anything unwanted, you need a validation script that performs these checks at upload time.
Create an interface with a method toBeInvoked() (not a static method).
All uploaded classes must implement this interface and put their logic inside this method.
Have your custom class loader scan a particular folder for new classes being added and load them accordingly (a rough sketch follows below).
When a file is uploaded and successfully validated, compile it and copy the class file to the folder that the class loader scans.
Your processor class can look for newly loaded classes and then call the toBeInvoked() method on them when required.
Hope this helps. (Note that I have used a similar mechanism to dynamically load workflow step classes in a workflow engine tool I developed.)
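The folder-scanning loader from the steps above might be sketched like this (YourInterface is the assumed common interface; the drop-folder layout and class naming are hypothetical):

import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;

public class DropFolderLoader {

    private final Path dropFolder;

    public DropFolderLoader(Path dropFolder) {
        this.dropFolder = dropFolder;
    }

    // Load one validated, already-compiled class from the drop folder and hand it back
    // as the common interface. The loader is deliberately kept open, because the returned
    // instance may still need to resolve further classes later.
    public YourInterface load(String className) throws Exception {
        URL[] roots = { dropFolder.toUri().toURL() };
        URLClassLoader loader = new URLClassLoader(roots, getClass().getClassLoader());
        return loader.loadClass(className)
                     .asSubclass(YourInterface.class)
                     .getDeclaredConstructor()
                     .newInstance();
    }
}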

Allowing maximal flexibility/extensibility using a factory

I have a little design issue on which I would like to get some advice:
I have several classes that inherit from the same base class, each one can accept the same data and analyze it in a slightly different way.
Analyzer
|
˪_ AnalyzerA
|
˪_ AnalyzerB
...
I have an input file (I do not have control over the file's format) that defines which analyzers should be invoked and their parameters. It also defines data extractors and other similar things in the same way (by similar I mean that each is an action that can have several variations).
I have a module that iterates over the different analyzers in the file and calls a factory that constructs the correct analyzer. I have a factory for each of the archetypes the input file can define, and so far so good.
But what if I want to extend it and to add a new type of analyzer?
The solution I was thinking about is using a property file for each factory, named after the factory, that holds a mapping between the input file's definition of whatever it wants me to execute and the actual class that I use to execute the action.
This way I could load that class at run time, verify that it implements the right interface, and then execute it.
If some John Doe wanted to create his own analyzer, he would just need to add a new property to the correct file (I'm not quite sure what would be the best strategy to allow this kind of property customization).
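A minimal, JDK-only sketch of that idea (the property file name, property keys, and the Analyzer base type from the hierarchy above are placeholders drawn from this description):

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Properties;

public class AnalyzerFactory {

    private final Properties mapping = new Properties();

    // The property file maps the input file's analyzer name to a fully qualified class name,
    // e.g.  analyzerA=com.example.AnalyzerA
    public AnalyzerFactory(String propertyFile) throws Exception {
        try (InputStream in = new FileInputStream(propertyFile)) {
            mapping.load(in);
        }
    }

    public Analyzer create(String nameFromInputFile) throws Exception {
        String className = mapping.getProperty(nameFromInputFile);
        if (className == null) {
            throw new IllegalArgumentException("No analyzer registered for " + nameFromInputFile);
        }
        // Load at run time and verify that the class really is an Analyzer before instantiating it.
        return Class.forName(className)
                    .asSubclass(Analyzer.class)
                    .getDeclaredConstructor()
                    .newInstance();
    }
}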
So in short:
Is my solution too flawed?
If not, what would be the most user-friendly/convenient way to allow customization of properties?
P.S.
Unfortunately I'm confined to using only built-in JDK classes for the existing solution, so I can't just drop SF in.
I hope this question is not out of line; I'm just not used to having my wings clipped this way, without SF or something similar to help me implement an elegant solution.
Have a look at the way the java.sql.DriverManager.getConnection(connectionString) method is implemented. The best way is to read the source code.
A very rough summary of the idea (it is hidden inside a lot of private methods): it is more or less an implementation of chain of responsibility, although there is no linked list of drivers.
DriverManager manages a list of drivers.
Each driver must register itself to the DriverManager by calling its method registerDriver().
Upon request for a connection, the getConnection(connectionString) method sequentially calls the drivers passing them the connectionString.
Each driver KNOWS if the given connection string is within its competence. If yes, it creates the connection and returns it. Otherwise the control is passed to the next driver.
Analogy:
drivers = your concrete Analyzers
connection strings = types of your files to be analyzed
Advantages:
There is no need to explicitly bind the analyzers to the type of file they are meant for. Let the analyzer decide itself whether it is able to analyze the file. If not, null is returned (or an exception or whatever) to tell the AnalyzerManager that the next analyzer in the row should be asked (sketched below).
Adding a new analyzer just means adding a new call to the register() method. Complete decoupling.
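In code, the analogy could look roughly like this (Analyzer follows the hierarchy above; AnalyzerManager, register() and canAnalyze() are illustrative names for this sketch):

import java.util.ArrayList;
import java.util.List;

public class AnalyzerManager {

    private static final List<Analyzer> ANALYZERS = new ArrayList<>();

    // Each concrete analyzer registers itself, just as drivers do with DriverManager.
    public static void register(Analyzer analyzer) {
        ANALYZERS.add(analyzer);
    }

    // Ask each analyzer in turn whether the definition is within its competence.
    public static Analyzer analyzerFor(String definitionFromInputFile) {
        for (Analyzer candidate : ANALYZERS) {
            if (candidate.canAnalyze(definitionFromInputFile)) {
                return candidate;
            }
        }
        throw new IllegalArgumentException("No analyzer accepts: " + definitionFromInputFile);
    }
}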

Drools Flow dynamic Ruleflowgroup parameter

I have a process in Drools with a process variable that gets set. I would like to be able to dynamically change which ruleflow group gets called based on the variable.
I have tried setting the ruleflowgroup to #{ruleFlowGroupName} but the rules never activate.
I have a script task right before the ruleflow group that prints out the value of the variable and it is correct.
I have done this before with a reconfigurable subprocess where the process id is a process variable and the process dynamically gets replaced when the main process runs.
I was hoping to be able to do this by specifying the ruleflow group too.
Any ideas?
What is the business objective of doing that? If you have two different sets of rules that evaluate different data depending on what you are inserting into the Drools engine, there is no need to have two different ruleflow groups. Only the relevant rules will be activated.
Cheers
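To illustrate the point with the standard KIE API (the process id, the fact, and the kmodule setup are placeholders here, not taken from the original process):

import org.kie.api.KieServices;
import org.kie.api.runtime.KieSession;

public class RuleRunner {

    // Insert only the facts relevant to the current request; rules whose patterns do not
    // match those facts never activate, even if they share a single ruleflow-group.
    public static void run(Object relevantFact) {
        KieSession session = KieServices.Factory.get()
                                                .getKieClasspathContainer()
                                                .newKieSession();
        try {
            session.insert(relevantFact);
            session.startProcess("com.example.myProcess"); // hypothetical process id
            session.fireAllRules();
        } finally {
            session.dispose();
        }
    }
}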
It is indeed true that a dynamic ruleflowgroup name is currently not supported. I've created a JIRA for this so we can track this and you can keep updated on any progress.
https://issues.jboss.org/browse/JBPM-3552
It would indeed be useful to describe the situation where you think this might be useful, as there may be alternatives / workarounds already.

In Java, how can I construct a "proxy wrapper" around an object which invokes a method upon changing a property?

I'm looking for something similar to the Proxy pattern or the Dynamic Proxy Classes, only that I don't want to intercept method calls before they are invoked on the real object, but rather I'd like to intercept properties that are being changed. I'd like the proxy to be able to represent multiple objects with different sets of properties. Something like the Proxy class in Action Script 3 would be fine.
Here's what I want to achieve in general:
I have a thread running with an object that manages a list of values (numbers, strings, objects) which were handed over by other threads in the program, so the class can take care of creating regular persistent snapshots on disk for the purpose of checkpointing the application. This persistor object manages a "dirty" flag that signifies whether the list of values has changed since the last checkpoint and needs to lock the list while it's busy writing it to disk.
The persistor and the other components identify a particular item via a common name, so that when recovering from a crash, the other components can first check if the persistor has their latest copy saved and continue working where they left off.
During normal operation, in order to work with the objects they handed over to the persistor, I want them to receive a reference to a proxy object that looks as if it were the original one, but whenever they change some value on it, the persistor notices and acts accordingly, for example by marking the item or the list as dirty before actually setting the real value.
Edit: Alternatively, are there generic setters (like in PHP 5) in Java, that is, a method that gets called if a property doesn't exist? Or is there a type of object that I can add properties to at runtime?
If with "properties" you mean JavaBean properties, i.e. represented bay a getter and/or a setter method, then you can use a dynamic proxy to intercept the set method.
If you mean instance variables, then no can do - not on the Java level. Perhaps something could be done by manipulations on the byte code level though.
Actually, the easiest way to do it is probably by using AspectJ and defining a set() pointcut (which will intercept the field access on the byte code level).
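For the JavaBean case, a java.lang.reflect.Proxy based wrapper could look roughly like this (the generic interface parameter and the dirty flag are illustrative stand-ins for the persistor described in the question):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicBoolean;

public class DirtyTrackingProxy {

    // Wrap a target behind its interface; every call to a set* method marks the flag
    // as dirty before being delegated to the real object.
    @SuppressWarnings("unchecked")
    public static <T> T wrap(T target, Class<T> iface, AtomicBoolean dirty) {
        InvocationHandler handler = (proxy, method, args) -> {
            if (method.getName().startsWith("set")) {
                dirty.set(true);
            }
            return method.invoke(target, args);
        };
        return (T) Proxy.newProxyInstance(iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }
}

The persistor would hand out wrap(realItem, ItemInterface.class, dirtyFlag) instead of the real object, so callers cannot change a value without the flag being set.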
The design pattern you are looking for is, I do believe, Differential Execution.
"How does differential execution work?" is a question I answered that deals with this.
However, may I suggest that you use a callback instead? You will have to read about this, but the general idea is that you implement interfaces (often called listeners) that activate upon "something interesting" happening, such as a data structure being changed.
Obligatory links:
Wiki Differential execution
Wiki Callback
Alright, here is the answer as I see it. Differential Execution is O(N) time. This is really reasonable, but if that doesn't work for you, callbacks will. Callbacks basically work by passing a method as a parameter to the class that is changing the array. This method takes the changed value and the location of the item, passes them back by parameter to the "storage class", and changes the value appropriately. So, yes, you have to back each change with a method call.
I realize now this is not what you want. What it appears you want is a way to attach some kind of listener to each variable in an array that would be called when that item is changed. The listener would then change the corresponding array in your "backup" to reflect this change.
Natively I can't think of a way to do this. You can, of course, create your own listeners and events, using an interface. This is basically the same idea as the callbacks, though nicer to look at.
Then there is reflection... Java has reflection, and I am positive you can write something using it to do this. However, reflection is notoriously slow. Not to mention a pain to code (in my opinion).
Hope that helps...
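A bare-bones version of the listener idea, with hypothetical names, might be:

import java.util.ArrayList;
import java.util.List;

// Listener notified whenever an element of the managed list changes.
interface ChangeListener<T> {
    void changed(int index, T newValue);
}

// A thin wrapper that owns the values and tells the listener (e.g. the persistor)
// about every change instead of letting callers mutate the list directly.
class ObservedList<T> {
    private final List<T> values = new ArrayList<>();
    private final ChangeListener<T> listener;

    ObservedList(ChangeListener<T> listener) {
        this.listener = listener;
    }

    void add(T value) {
        values.add(value);
        listener.changed(values.size() - 1, value);
    }

    void set(int index, T value) {
        values.set(index, value);
        listener.changed(index, value);
    }

    T get(int index) {
        return values.get(index);
    }
}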
"I don't want to intercept method calls before they are invoked on the real object, but rather I'd like to intercept properties that are being changed"
So in fact, the objects you want to monitor are not convenient beans but a resurgence of C structs. The only way that comes to my mind to do that is with the field access events in JVMTI.
I wanted to do the same thing myself. My solution was to use dynamic proxy wrappers generated with Javassist. I would generate a class that implements the same interface as my target object's class, wrap the proxy around the original object, and delegate all method calls on the proxy to the original, except the setters, which would also fire a PropertyChangeEvent.
Anyway I posted the full explanation and the code on my blog here:
http://clockwork-fig.blogspot.com/2010/11/javabean-property-change-listener-with.html
