Invoking an Acceleo template from a Java app

Hi all, I'm very new to Acceleo.
In my project, I have a Java driver class which calls different Acceleo templates. Some of the templates have no parameters of an EObject type. I am calling the initialize() and doGenerate() functions of the generated Java module for a template. The problems I am facing are:
initialize() expects the first argument to be an Ecore object and the rest of the parameters as a List. However, as I mentioned, some of the templates do not have any EObject parameters. How do I call such templates from a Java application?
To work around the above problem, I adjusted my driver and templates to take a dummy EObject as the first parameter. The templates are then called successfully, but they do not generate any output. The templates do generate output if I call them from another driver template, though. However, I do not want to write my driver program in MTL, as it requires complex analysis of the data model.
Please advise me on how I can progress in my case.
Thanks & Regards
Dhanunjaya M.

The API we expose by default through the Java class we generate alongside the "main" templates' modules, and the Acceleo "facade" classes, always assumes that there is an EObject as the first parameter of the templates that are to be called. This was done to simplify usage for the most common cases (we expect this use case to cover about 90% of the total).
For other use cases, you will have to use the APIs that sit behind those facades. Namely, you can create another "initialize" method that does not take an EObject as a parameter, for the cases when you simply don't have one. You will then also need to override the "generate(Monitor)" method so that it does not use AcceleoService.doGenerate, or any other method of AcceleoService for that matter: this is the "facade" class I was talking about.
What you will need is to call a method that mimics what AcceleoService.doGenerate does without relying on an EObject to find the template that needs to be called. If you do not have Acceleo's SDK or sources at hand, you can take a peek at the code on GitHub: AcceleoService#doGenerate.
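For illustration only, a minimal sketch of such a subclass is shown below. It assumes the class Acceleo generated for your module is called Generate and extends AbstractAcceleoGenerator (the default for "main" templates); the field and method names mirror what Acceleo 3.x typically generates, so adjust them to match your own generated code.

import java.io.File;
import java.io.IOException;
import java.util.List;
import org.eclipse.emf.common.util.Monitor;

// Sketch only: "Generate" stands for the Java class Acceleo generated for your module.
public class GenerateWithoutModel extends Generate {

    // Alternative initializer for templates that take no EObject: it mirrors the
    // generated initialize(EObject, File, List<?>) but simply skips the model.
    public void initialize(File targetFolder, List<? extends Object> arguments) throws IOException {
        this.targetFolder = targetFolder;
        this.generationArguments = arguments;
        // call registerResourceFactories(), registerPackages(), ... just as the generated
        // initialize() does, minus everything that touches the model
    }

    @Override
    public void generate(Monitor monitor) throws IOException {
        // Do NOT delegate to AcceleoService.doGenerate here: that facade resolves the
        // template through the EObject's metaclass. Instead, replicate its logic (see
        // AcceleoService#doGenerate in the Acceleo sources), look the template up by
        // name, and hand your non-EObject arguments straight to the engine.
    }
}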

Related

A better way to call static methods in user-submitted code?

I have a large data set. I am creating a system which allows users to submit Java source files, which will then be applied to the data set. To be more specific, each submitted Java source file must contain a static method with a specific name, let's say toBeInvoked(). toBeInvoked() will take a row of the data set as an array parameter. I want to call the toBeInvoked() method of each submitted source file on each row in the data set. I also need to implement security measures (so toBeInvoked() can't do I/O, can't call exit, etc.).
Currently, my implementation is this: I have a list of the names of the Java source files. For each file, I create an instance of the custom secure ClassLoader which I coded, which compiles the source file and returns the compiled class. I use reflection to extract the static method toBeInvoked() (e.g. method = c.getMethod("toBeInvoked", double[].class)). Then, I iterate over the rows of the data set and invoke the method on each row.
There are at least two problems with my approach:
it appears to be painfully slow (I've heard reflection tends to be slow)
the code is more complicated than I would like
Is there a better way to accomplish what I am trying to do?
There is no significantly better approach given the constraints that you have set yourself.
For what it is worth, what makes this "painfully slow" is compiling the source files to class files and loading them. That is many orders of magnitude slower than the use of reflection to call the methods.
(Use of a common interface rather than static methods is not going to make a measurable difference to speed, and the reduction in complexity is relatively small.)
If you really want to simplify this and speed it up, change your architecture so that the code is provided as a JAR file containing all of the compiled classes.
Assuming your #toBeInvoked() could be defined in an interface rather than being static (it should be!), you could just load the class and cast it to the interface:
Class<? extends YourInterface> c = Class.forName("name", true, classLoader).asSubclass(YourInterface.class);
YourInterface i = c.newInstance();
Afterwards invoke #toBeInvoked() directly.
Also have a look into java.util.ServiceLoader, which could be helpful for finding the right class to load in case you have more than one source file.
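As a rough illustration (YourInterface, its method signature, and runAll are placeholders rather than an existing API), loading and invoking the implementations through ServiceLoader could look like this; the submitted JAR would then list its implementation classes in a META-INF/services file named after the fully qualified interface:

import java.util.ServiceLoader;

// Placeholder interface replacing the static toBeInvoked(double[]) convention.
public interface YourInterface {
    double toBeInvoked(double[] row);
}

// Somewhere in the processing code: discover and run every implementation
// visible through the (secure) class loader.
void runAll(ClassLoader classLoader, Iterable<double[]> dataSet) {
    ServiceLoader<YourInterface> loader = ServiceLoader.load(YourInterface.class, classLoader);
    for (YourInterface impl : loader) {
        for (double[] row : dataSet) {
            double result = impl.toBeInvoked(row);  // a direct call, no per-row reflection
        }
    }
}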
Personally, I would use an interface. This allows you to have multiple instances with their own state (useful for multi-threading), and more importantly the interface both defines which methods must be implemented and gives you a way to call those methods directly.
Reflection is slow, but only relative to other options such as a direct method call. If you are scanning a large data set, the cost of pulling the data from main memory is likely to be much higher.
I would suggest the following steps for your problem (a rough sketch follows below).
To check that the method does not contain any unwanted code, you need a validation script which performs these checks at upload time.
Create an interface with a method toBeInvoked() (not a static method).
All the classes which are uploaded must implement this interface and put their logic inside this method.
You can have your custom class loader scan a particular folder for newly added classes and load them accordingly.
When a file is uploaded and successfully validated, you can compile it and copy the class file to the folder the class loader scans.
Your processor class can look for new files and then call the toBeInvoked() method on the loaded class when required.
Hope this helps. (Note that I have used a similar mechanism to dynamically load workflow step classes in a workflow engine tool I developed.)
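A minimal sketch of the processor side under those assumptions (the folder path, interface name, and class name are placeholders, and a real implementation would use the secure class loader described in the question rather than a plain URLClassLoader):

import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;

// Load one validated, compiled submission from the scanned folder and run it.
static void process(Iterable<double[]> dataSet) throws Exception {
    Path classesDir = Path.of("/data/uploaded-classes");             // folder the class loader scans
    try (URLClassLoader loader = new URLClassLoader(new URL[] { classesDir.toUri().toURL() })) {
        Class<? extends YourInterface> c =
                Class.forName("com.example.SubmittedAnalyzer", true, loader)
                     .asSubclass(YourInterface.class);               // verifies the contract
        YourInterface task = c.getDeclaredConstructor().newInstance();
        for (double[] row : dataSet) {
            task.toBeInvoked(row);                                   // per-row call, no reflection
        }
    }
}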

Allowing maximal flexibly/extensibility using a factory

I have a little design issue on which I would like to get some advice:
I have several classes that inherit from the same base class, each one can accept the same data and analyze it in a slightly different way.
Analyzer
|
˪_ AnalyzerA
|
˪_ AnalyzerB
...
I have an input file (I do not have control over the file's format) that defines which analyzers should be invoked and their parameters. It also defines data-extractors and other similar things in the same way (by "similar" I mean actions that can have several variations).
I have a module that iterates over different analyzers in the file and calls some factory that constructs the correct analyzer. I have a factory for each of the archetypes the input file can define and so far so good.
But what if I want to extend it and to add a new type of analyzer?
The solution I was thinking about is to use a property file for each factory, named after the factory, which holds a mapping between the input file's definition of whatever it wants me to execute and the actual classes that I use to execute the action.
This way I could load that class at run-time, verify that it implements the right interface, and then execute it.
If some John Doe would like to create his own analyzer, he'd just need to add a new property to the correct file (I'm not quite sure what the best strategy would be to allow this kind of property customization).
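For what it's worth, a bare-JDK sketch of that idea could look roughly like this (the file name, property keys, Analyzer interface, and createAnalyzer method are only examples of the approach, not an existing API):

import java.io.InputStream;
import java.util.Properties;

// analyzers.properties might contain entries such as:  analyzerA=com.example.AnalyzerA
static Analyzer createAnalyzer(String keywordFromInputFile) throws Exception {
    Properties mapping = new Properties();
    try (InputStream in = Analyzer.class.getResourceAsStream("/analyzers.properties")) {
        mapping.load(in);
    }
    String className = mapping.getProperty(keywordFromInputFile);
    if (className == null) {
        throw new IllegalArgumentException("No analyzer registered for " + keywordFromInputFile);
    }
    Class<?> c = Class.forName(className);
    if (!Analyzer.class.isAssignableFrom(c)) {           // verify it implements the right interface
        throw new IllegalArgumentException(className + " does not implement Analyzer");
    }
    return (Analyzer) c.getDeclaredConstructor().newInstance();
}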
So in short:
Is my solution too flawed?
If not, what would be the most user-friendly/convenient way to allow customization of properties?
P.S.
Unfortunately I'm confined to using only built-in JDK classes, as the existing solution is, so I can't just drop SF in.
I hope this question is not out of line; I'm just not used to having my wings clipped this way, without SF or something similar to help me implement an elegant solution.
Have a look at the way the java.sql.DriverManager.getConnection(connectionString) method is implemented. The best way is to look at the source code.
A very rough summary of the idea (it is hidden inside a lot of private methods): it is more or less an implementation of chain of responsibility, although there is no linked list of drivers.
DriverManager manages a list of drivers.
Each driver must register itself with the DriverManager by calling its registerDriver() method.
Upon request for a connection, the getConnection(connectionString) method sequentially calls the drivers passing them the connectionString.
Each driver KNOWS if the given connection string is within its competence. If yes, it creates the connection and returns it. Otherwise the control is passed to the next driver.
Analogy:
drivers = your concrete Analyzers
connection strings = types of your files to be analyzed
Advantages:
There is no need to explicitly bind the analyzers to the type of file they are meant for. Let each analyzer decide itself whether it is able to analyze the file. If not, null is returned (or an exception, or whatever) to tell the AnalyzerManager that the next analyzer in the row should be asked.
Adding a new analyzer just means adding a new call to the register() method. Complete decoupling.
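Transposed to the analyzer case, a minimal sketch could look like this (the AnalyzerManager name and the canHandle() method on Analyzer are just illustrations of the pattern, not an existing API):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Mirrors DriverManager: analyzers register themselves, the manager asks each one in turn.
public final class AnalyzerManager {
    private static final List<Analyzer> ANALYZERS = new CopyOnWriteArrayList<>();

    public static void register(Analyzer analyzer) {
        ANALYZERS.add(analyzer);
    }

    public static Analyzer getAnalyzer(String inputDefinition) {
        for (Analyzer candidate : ANALYZERS) {
            if (candidate.canHandle(inputDefinition)) {   // each analyzer decides for itself
                return candidate;
            }
        }
        throw new IllegalArgumentException("No analyzer accepts: " + inputDefinition);
    }
}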

Is there a way to add a custom class template to Eclipse?

Here's the scenario. I often create classes that follow a certain pattern. All classes extend a base abstract class (out of my control) with certain methods I wish to always override, including the constructor. I'd like to right-click on my package name, click New, and then click "Foo Class". I'd then have a class added to my project, but using the "Foo" template instead of the standard class template. I understand I can change the class template, but I don't want to change it for all classes. Is this possible without writing a full-blown extension?
You can create a new editor template and use that.
Go to Preferences->Java->Editor->Templates and create your template there.
Then create a new class file and apply your template.
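For example, the template body for this scenario might look something like the following (the base class and method names are placeholders for your actual abstract class; ${cursor} is a standard Eclipse template variable marking where the caret lands after insertion):

public class NewFoo extends AbstractFoo {    // rename the class after inserting the template

    public NewFoo() {
        super();
    }

    @Override
    protected void configure() {             // a base-class method you always override
        ${cursor}
    }
}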
Unfortunately no. What you'd like to achieve requires writing an Eclipse plugin.
I'd suggest setting up a Template (at Preferences | Java | Editor | Templates) and giving it a short useful name describing your scenario. Whenever you'd want to write an additional template class, you could create a new class, press Ctrl+A, Del (select all and remove), then type "name Ctrl+Space" and have your template there as configured.
P.S. I'd really review your requirement; I can't think of a valid approach at the moment that would require writing many similar classes without a way to remove the code duplication you are just about to introduce.

Dynamic templates in Play Framework 2.0

There is a TemplateLoader in Play 1.0 for generating templates at runtime.
Is there any solution for dynamically loading a template in Play 2.0? Or can I somehow convert it to Scala code for use with Eval?
For example: I want to store some templates in the database, so that certain users could edit them.
Play 2.0 already compiles your templates to object methods, so you don't have to 'dynamically load' them!
Consider this simple template called app/views/test.scala.html.
@(num: Long)
Your number is @num
It becomes a Scala method of views.html called test. Evaluate it with this code:
val msg : String = views.html.test(23).toString()
You don't have to use HTML views only. To use templates with strings, use the play.api.templates.Txt derived classes. This is a template called app/views/quick.scala.txt:
@(id: Long)Your id is @id
It becomes a method views.txt.quick and is used:
val msg2 : String = views.txt.quick(32).body
You can find out more in the documentation for the play.api.templates package.
It seems the relevant code is in framework/src/play/src/main/scala/system/ApplicationProvider.scala in the Play-2.0 directory, particularly the ReloadableApplication class. I'm unsure how this compiling on the fly would suit you, since you don't want to do it when the template is requested (it is slow). This means that storing templates in a database doesn't quite make sense: you don't want to store the template source code, but rather the compiled template object.
For argument's sake, if you just wrote the templates to the app/views directory, you could leave Play to compile them at its leisure. But then beware, because they probably won't compile on a production system.

Are there any web frameworks for JVM with data binding checked at compilation time?

Usually, when you bind some property to some element in a web page, you only find out about a typo when testing.
I am looking for a web framework which, at compile time, would give me an error that I made a mistake in a binding ("property not found" or something similar), and where, assuming my IDE has a proper refactoring mechanism, renaming a property would also update the binding (and vice versa); in other words, renaming would not result in broken code.
Is there such a framework for the JVM?
I am new to the JVM world, so I don't know the features of the JVM frameworks (at all, not just the feature I am asking about).
I've implemented the static-mustache library to provide a type-safe template engine based on Mustache syntax.
It checks both syntax errors and type errors (like a missing property) at compile time. It requires zero build configuration, as it is a standard annotation processor.
Templates remain pure Mustache templates, with all type information extracted from the normal Java class used for rendering.
JSP development in Eclipse can do this
Vaadin Framework
Vaadin 8+ supports this kind of binding with Java lambda expressions.
There is a special Binder class:
Binder<Person> binder = new Binder<>();
TextField titleField = new TextField();

// Start by defining the Field instance to use
binder.forField(titleField)
    // Finalize by doing the actual binding to the Person class
    .bind(
        // Callback that loads the title from a person instance
        Person::getTitle,
        // Callback that saves the title in a person instance
        Person::setTitle);
See docs for details: https://vaadin.com/docs/framework/datamodel/datamodel-forms.html
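As a brief usage note (assuming a plain Person bean), the bound fields are then filled and read back through the Binder:

Person person = new Person();
binder.readBean(person);                 // copy the bean's values into the bound fields
// ... the user edits the form ...
if (binder.writeBeanIfValid(person)) {   // validate, then copy field values back into the bean
    // persist person
}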
