I have developed a Kermeta transformation that generates an XMI file.
My problem is that I would like to run this transformation (a .kmt file) in the background, that is, from a Java program.
I have tried a lot of code, but always without result.
Can someone help me?
Thank you in advance for your help.
How to run a Kermeta program depends on the version of Kermeta and on its execution context.
With Kermeta 2, the kmt code is compiled into Scala. This Scala code can be called directly from Java thanks to Scala's compatibility with Java.
You first need to declare at least one main operation in order to generate the appropriate initialization operation.
Let's say you have a method "transformFooToBar" in a class "org::example::FooToBar" in your Kermeta code.
Declare it as a main entry point using the tags:
#mainClass "org::example::FooToBar"
#mainOperation "transformFooToBar"
This main operation must take only String parameters.
This will generate a utility Scala class org.example.MainRunner, which contains useful initialization methods and a main operation that you can call from the command line.
Case 1/ Kermeta 2 code called from an Eclipse plugin:
This is the simplest case: call the method org.example.MainRunner.init4eclipse()
before any other call. Then call your main method org.example.KerRichFactory.createFooToBar.transformFooToBar()
You can also call any other classes or methods (i.e. even with parameters that aren't Strings).
This is quite convenient for building Eclipse tools based on Kermeta transformations.
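For example, the Eclipse-plugin case could look roughly like the following Java sketch. It reuses the org.example.FooToBar example above; the argument values and the exact shape of the generated factory call are assumptions to adapt to your own generated code.

public class FooToBarHandler {
    public void runTransformation() {
        // One-time initialization of the generated Kermeta runtime inside Eclipse.
        org.example.MainRunner.init4eclipse();

        // Invoke the transformation through the generated rich factory
        // (argument values are hypothetical).
        org.example.KerRichFactory.createFooToBar().transformFooToBar("input.xmi", "output.xmi");
    }
}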
Case 2/ Kermeta 2 code called from a standard Java application (i.e. not running in an Eclipse plugin):
The initialization method is then org.example.MainRunner.init()
Common trap for case 2: many transformations that run standalone still need to manipulate models that use Eclipse URI schemes in their internal referencing system, i.e. platform:/plugin/..., platform:/resource/..., pathmap:/..., or even more complex URI mappings (typically using custom protocols). You can check this easily by looking at the XMI files as text.
In that case, since the Eclipse platform isn't running to provide the lookup mechanism, you need to manually provide an equivalent URI mapping that maps these URIs to your local system URIs (i.e. to file:/... or jar:file:/... URIs).
One possibility is to use a urimap.properties file that provides such a mapping. By default, when running a Kermeta program in Eclipse, a urimap.properties file is generated for the current Eclipse configuration.
When deploying a Kermeta program to another computer, or when using a custom deployment packaging, you will have to provide or compute an equivalent file for the target computer.
For convenience, you can set the location of this urimap.properties file via the system property "urimap.file.location".
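For the standalone case, a minimal sketch could look like the following; it again assumes the org.example.FooToBar example, and the urimap.properties path, argument values, and the exact shape of the generated factory call are hypothetical.

public class RunFooToBar {
    public static void main(String[] args) {
        // Only needed if the models use platform:/, pathmap:/ or custom URIs internally;
        // point it at a mapping file that is valid for this machine (hypothetical path).
        System.setProperty("urimap.file.location", "/path/to/urimap.properties");

        // Standalone initialization of the generated Kermeta runtime.
        org.example.MainRunner.init();

        // Call the transformation's main operation (String parameters only).
        org.example.KerRichFactory.createFooToBar().transformFooToBar("input.xmi", "output.xmi");
    }
}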
I'm trying to write an application that uses the AWS API from an Android app written in Java. It seems that the recommended way to do it is using a special set of libraries called "Amplify." I was able to import the appropriate Amplify Java classes into my code, but I see that not all the parameters I want to supply (such as the S3 bucket or the API access key) can be given as method arguments.
All the advice I see online suggests running a command-line configuration tool installed via npm install aws-amplify. But I'd prefer not to use a command-line tool that asks me questions: I'd prefer to configure everything in code. And I don't want to install npm or mess around with it (full disclosure: I tried installing it and ran into some hassles).
Is there a way to supply the Amplify configuration without using the command-line tool, perhaps via a configuration file or some additional arguments to the methods I'm calling within Java?
I figured it out!
Amplify.configure() has a not-well-documented overload where you can specify a config file in the form of an Android "resource."
So instead of using
Amplify.configure(getApplicationContext());
as directed in the tutorials, I use
Amplify.configure(
AmplifyConfiguration.fromConfigFile(getApplicationContext(), R.raw.amplifyconfiguration),
getApplicationContext());
The config file needs to be located in the app/src/main/res/raw/ path of the project, named amplifyconfiguration.json. The development environment automatically generates the definition of the value R.raw.amplifyconfiguration, which is a number identifying the file.
That solves the problem of loading the configuration from an explicit file, without using the amplify CLI. The next hurdle is figuring out what keys can be specified in the file...
I'm totally new to Stack Overflow, so please bear with me. Thank you. I'm new to Ant, with a beginner's knowledge of Java. I want to know whether it is possible to run multiple Java classes from an XML build file using Ant.
Instead of specifying multiple Java class names within the targets, can I run the classes in a single go?
If you are indeed asking whether a java task can be configured to run more than one class, I believe the answer is no. According to the documentation for the java task, the classname attribute specifies
the Java class to execute.
If you need to run multiple classes using a single java task, you could create a controller class that runs those classes, then simply invoke that single controller.
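For example, a hypothetical controller could simply delegate to the main methods of the classes you want to run, so the Ant java task only needs classname="com.example.Controller" (all class names here are placeholders):

public class Controller {
    public static void main(String[] args) throws Exception {
        // Run each class's entry point in sequence; replace these with your own classes.
        com.example.FirstTask.main(args);
        com.example.SecondTask.main(args);
        com.example.ThirdTask.main(args);
    }
}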
This discussion is about finding a way to load different jars on different operating systems.
Case Scenario
I am working on a specific OS known as NSK. It's a Unix flavour and powers HP NSK servers. I am running one of my middleware apps (a Java application) on NSK. The requirement is to make this app cross-platform, i.e. it must work on other platforms like Linux or Windows as well.
To implement this, I introduced one more jar. Now I need to introduce logic whereby the JVM loads the appropriate jar at runtime (jar1 on NSK and jar2 on any other non-NSK platform). I used the following logic to implement this:
Code:
if ("NSK".equals(System.getProperty("os.name"))) {
    // load jar1 (NSK-specific implementation)
} else {
    // load jar2 (implementation for all other platforms)
}
The above code works fine until I hit a SecurityException from the getProperty API used above. This means that the user running the app does not have the necessary permission to use getProperty(), so the above logic goes for a toss here.
Is there any way to tackle this exception and still be able to find out the OS details and load the correct jar? Or, better, is there a better way to implement the same thing?
Please refer to the link below for more details about getProperty(..):
http://docs.oracle.com/javase/7/docs/api/java/lang/System.html
Thanks in advance
Regards,
Pabitra
Given that Java is platform independent and only loads the classes you use, I would have one JAR which has everything you need and only load the classes which are appropriate for your platform.
Doing what you suggest requires a custom class loader, which just adds complexity that doesn't appear to be needed.
If you can't access a system property, you can actively test your library and see which one works on your system. I am sure there is a method or class which works in one case but not the other, or you wouldn't need two sets of code.
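As a sketch of that "probe instead of ask" idea, combined with simply catching the SecurityException, something like the following could work (all interface and class names are hypothetical placeholders for your jar1/jar2 code):

interface PlatformSupport {
    // Hypothetical check that only succeeds on the platform it targets.
    void selfTest();
}

class NskPlatformSupport implements PlatformSupport {
    public void selfTest() { /* touch something that only works on NSK */ }
}

class DefaultPlatformSupport implements PlatformSupport {
    public void selfTest() { /* works everywhere else */ }
}

class PlatformSelector {
    static PlatformSupport choose() {
        try {
            if ("NSK".equals(System.getProperty("os.name"))) {
                return new NskPlatformSupport();
            }
            return new DefaultPlatformSupport();
        } catch (SecurityException e) {
            // Reading system properties is forbidden: probe instead of asking.
            try {
                PlatformSupport nsk = new NskPlatformSupport();
                nsk.selfTest();
                return nsk;
            } catch (RuntimeException probeFailed) {
                return new DefaultPlatformSupport();
            }
        }
    }
}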
Makefile that compiles all Java files.
The way I have done this multiple times in the past is to generate a Java file depending on the flag. If you are using Ant, then this code generation is very simple. Otherwise, you can use a template file with placeholders and do some shell scripting or similar to generate the file.
In Ant you can use the replace task to modify files as part of your build.
We do this in our builds, but we use it to modify a Java .properties file which the application then reads for its configurable behavior.
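As a small sketch of that pattern, assuming the build wrote the flag into a hypothetical build-flags.properties on the classpath, the application side could read it like this:

import java.io.InputStream;
import java.util.Properties;

public class BuildFlags {
    public static boolean isFeatureEnabled() throws Exception {
        Properties props = new Properties();
        // The file name and key are hypothetical; the build's replace task fills in the value.
        try (InputStream in = BuildFlags.class.getResourceAsStream("/build-flags.properties")) {
            if (in != null) {
                props.load(in);
            }
        }
        return Boolean.parseBoolean(props.getProperty("feature.enabled", "false"));
    }
}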
I've written rather pleasant flag-controlled systems using a combination of Google Guice and Apache Commons CLI to inject flag-controlled variables into constructors.
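A rough sketch of that approach, assuming Google Guice and Apache Commons CLI are on the classpath (the flag name and the Worker class are made up for illustration):

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;
import com.google.inject.name.Named;
import com.google.inject.name.Names;
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Options;

public class FlagDemo {
    static class Worker {
        final int threads;
        @Inject
        Worker(@Named("threads") int threads) { this.threads = threads; }
    }

    public static void main(String[] args) throws Exception {
        // Parse the command-line flag with Commons CLI.
        Options options = new Options();
        options.addOption("t", "threads", true, "number of worker threads");
        CommandLine cmd = new DefaultParser().parse(options, args);
        final int threads = Integer.parseInt(cmd.getOptionValue("threads", "4"));

        // Bind the flag value so Guice can inject it into constructors.
        Injector injector = Guice.createInjector(new AbstractModule() {
            @Override protected void configure() {
                bindConstant().annotatedWith(Names.named("threads")).to(threads);
            }
        });

        Worker worker = injector.getInstance(Worker.class);
        System.out.println("threads = " + worker.threads);
    }
}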
I realize that this is not possible since Android doesn't have a JVM, but is there a workaround for this problem? I need to perform a bytecode injection operation for an Android application. Any suggestions?
You can't directly inject bytecode into already loaded classes/methods. However, you can dynamically create new classes, write them to a dex file, and then dynamically load them.
See this blog post for more information on dynamic loading of classes from a dex file on disk.
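For the loading half, a minimal sketch using Android's DexClassLoader could look like this (the dex file location and the generated class name are hypothetical):

import java.io.File;

import android.content.Context;
import dalvik.system.DexClassLoader;

public class GeneratedCodeLoader {
    public static Class<?> loadGeneratedClass(Context context) throws ClassNotFoundException {
        // Assume the dex file has already been written to the app's private storage.
        File dexFile = new File(context.getFilesDir(), "generated.dex");
        File optimizedDir = context.getDir("outdex", Context.MODE_PRIVATE);

        DexClassLoader loader = new DexClassLoader(
                dexFile.getAbsolutePath(),       // path to the generated dex
                optimizedDir.getAbsolutePath(),  // cache directory for the optimized dex
                null,                            // no extra native library path
                context.getClassLoader());       // delegate to the app class loader

        return loader.loadClass("com.example.generated.GeneratedClass");
    }
}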
In order to dynamically create a new dex file, you might look at using the dexlib component that is part of the smali/baksmali codebase, which is a general-purpose library for reading/writing dex files.
Or, alternatively, you could include smali in your application, generate your classes in the smali assembly format, and use smali directly to assemble them into a new dex file.
Code injection is possible on Android; please take a look at the Disabler project hosted on GitHub.
Disabler allows you to optimize, trace, and modify an Android project on the fly using code injection into an existing project. Code is injected on the fly; there is no need to modify existing functionality to add logging/profiling or to disable part of the flow.
Main functionality of the tool:
trace: entering/exiting methods, collecting parameters and return values
profile: measuring the frequency and duration of method calls
disable: disabling/skipping part of the program flow by overriding the return value of methods defined by the user
delay: introducing delays in certain sections of the code (e.g. for certain packages)
Under the hood, it uses AspectJ and the Eclipse build mechanism (javac is replaced by ajc).
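This is not Disabler's own API, but as an illustration of the kind of thing AspectJ weaving makes possible, a generic trace-and-profile advice (annotation-style AspectJ, with a hypothetical com.example.app base package) might look like this:

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class TraceAndProfileAspect {
    // Wrap every method under com.example.app to log entry/exit and timing.
    @Around("execution(* com.example.app..*(..))")
    public Object traceAndProfile(ProceedingJoinPoint joinPoint) throws Throwable {
        long start = System.nanoTime();
        System.out.println("enter " + joinPoint.getSignature());
        try {
            return joinPoint.proceed();   // run the original method
        } finally {
            long micros = (System.nanoTime() - start) / 1000;
            System.out.println("exit  " + joinPoint.getSignature() + " (" + micros + " us)");
        }
    }
}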
Do you want to inject at runtime or at compile time?
For compile time, there are several very mature solutions for manipulating Java source code/bytecode: ASM, Javassist, etc.
Specifically for Android, try ASMDEX:
http://asm.ow2.org/doc/tutorial-asmdex.html
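For a flavour of the compile-time approach, here is a minimal pass-through skeleton with plain ASM (ASMDEX offers an analogous visitor API for dex files). The class file path is hypothetical, and a real transformation would override the visit* methods instead of leaving the visitor empty:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.Opcodes;

public class RewriteClass {
    public static void main(String[] args) throws Exception {
        Path classFile = Paths.get("build/classes/com/example/Target.class");

        ClassReader reader = new ClassReader(Files.readAllBytes(classFile));
        ClassWriter writer = new ClassWriter(reader, ClassWriter.COMPUTE_MAXS);

        // Pass-through visitor: override visitMethod & co. to inject bytecode.
        // Use the Opcodes.ASM* constant matching your ASM release.
        ClassVisitor visitor = new ClassVisitor(Opcodes.ASM9, writer) { };

        reader.accept(visitor, 0);
        Files.write(classFile, writer.toByteArray());
    }
}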