For a couple of reasons, I'm interested in writing a hybrid application which is partially coded in Java (via Google Web Toolkit) and partially coded in JavaScript. I'm planning on calling the Java library from JavaScript by using GWT Exporter.
The trouble is that this destroys a lot of the opportunities for code optimization and compression. GWT is mostly designed to optimize the JavaScript it generates, and third-party JavaScript compression libraries will probably break down when given GWT output.
Is there a way to tell the GWT compiler "hey, pass these JavaScript files into your optimization pass as well"? GWT has a flag for using the Closure Compiler under the hood (which obviously supports optimizing regular JavaScript), so it feels like this should be possible.
Insofar as you can ask Closure to compile source, it is possible, but you have to get those 'other' sources into the code that GWT (and later Closure) is compiling. Presently, this means putting that code in JSNI; otherwise it is just another file on the filesystem, and the compiler can't know what the dependencies in and out are, since it likewise can't tell when/how you are loading that file.
If I remember correctly, standard usage of dependencies in Closure is via a goog.require() method call in JavaScript at the top of your file - this both declares the dependency, and if needed loads the file. Without this, your base HTML page needs to have <script> tags for each file you might possibly use, and without actually running that page, Closure won't know what order to expect those files to load in, or how far and wide to run when compiling your source.
GWT itself (i.e. outside of Closure) only makes a very small set of optimizations to the raw JS included as JSNI in its Java sources:
com.google.gwt.dev.js.JsStaticEval - simple constant folding, and other expression simplification
com.google.gwt.dev.js.JsInliner - inline functions under a certain complexity threshold, and other assorted cleanup
com.google.gwt.dev.js.JsUnusedFunctionRemover - simple pruning of unreferenced functions/variables
com.google.gwt.dev.js.JsDuplicateCaseFolder - look for the same body in more than one case block and merge them into one fall-through case.
GWT has three primary ways to include JS source: JSNI, <script> tags in the .gwt.xml (not supported by all linkers) and com.google.gwt.core.client.ScriptInjector to pull from a string constant or from a remote URL. Only the first treats the code as actual source - the second and third let the code come from anywhere, and don't count on the code being statically available at compile time. JSNI has its own limitations - it doesn't support with blocks, and must use $wnd and $doc to refer to the host page's window and document.
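To make the distinction concrete, here is a small sketch of JSNI source next to ScriptInjector injection (the class, method names, and URL are all made up for illustration):

import com.google.gwt.core.client.ScriptInjector;

public class HostPageUtil {

    // JSNI: the body between /*-{ ... }-*/ is treated as source and goes
    // through the passes listed above. Note the mandatory $wnd/$doc instead
    // of window/document, because GWT code runs in its own iframe.
    public static native void setPageTitle(String title) /*-{
        $doc.title = title;
        if ($wnd.console) {
            $wnd.console.log("title set to " + title);
        }
    }-*/;

    // ScriptInjector, by contrast, just loads the script at runtime; the
    // compiler never sees its contents. "js/other.js" is a hypothetical URL.
    public static void loadOtherScript() {
        ScriptInjector.fromUrl("js/other.js")
            .setWindow(ScriptInjector.TOP_WINDOW)
            .inject();
    }
}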
First, although this is not your question, be aware that to use gwt-exporter you have to call GWT.create for each class you want to export, or call the exportAll() method to export everything marked as exportable. That tells the compiler you are going to use those classes even if your JS app never actually does, so you lose the removal of unused code and end up with large JS output. You could use code splitting to separate the code into fragments, though.
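If you go the code-splitting route, a minimal sketch looks like this (the class is hypothetical; what you run inside onSuccess depends on your setup):

import com.google.gwt.core.client.GWT;
import com.google.gwt.core.client.RunAsyncCallback;
import com.google.gwt.user.client.Window;

public class LazyExport {

    // The exported classes referenced inside onSuccess() end up in a separate
    // fragment that is only downloaded when this split point is reached.
    public static void exportWhenNeeded() {
        GWT.runAsync(new RunAsyncCallback() {
            @Override
            public void onSuccess() {
                // e.g. export the exportable classes here
                // (exact calls depend on your gwt-exporter setup)
            }

            @Override
            public void onFailure(Throwable reason) {
                Window.alert("Could not load code fragment: " + reason);
            }
        });
    }
}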
Second, and related to your question, as @Colin says in his answer, only code written in JSNI blocks will be optimised by the GWT compiler, and the default optimisation is fairly trivial, although it is a bit stronger if you use the Closure compiler. I have not tried it, but I think Closure annotations are not allowed, since the GWT compiler probably removes them before passing the JS to the Closure compiler.
Anyway, the main problem with including those files in JSNI blocks is that you have to copy and paste the code into your Java classes manually, and then perform some other tricks to address $wnd, etc.
We at gwt-query have a JsniBundle generator able to take .js files from the filesystem or from any URL and, at compile time, include that code in JSNI fragments, with a couple of tricks to make it work in the iframe where GWT runs. It works for almost all libraries and plugins I have used, but sometimes I had to modify the JavaScript source to allow sandboxing it.
Here you have an example of how to include jquery and highcharts:
public interface JQueryBundle extends JsniBundle {
  @LibrarySource(value =
    "http://ajax.googleapis.com/ajax/libs/jquery/1.8.1/jquery.min.js")
  public void initJQuery();
}
public static abstract class HighCharts implements JsniBundle {
  @LibrarySource("js/highcharts.src.js")
  public abstract void initHighcharts();

  public void drawChart(String id, JavaScriptObject props) {
    JavaScriptObject $container = JsUtils.runJavascriptFunction(window, "$", "#" + id);
    JsUtils.runJavascriptFunction($container, "highcharts", props);
  }
}
public void testHighCharts() {
  JQueryBundle jQuery = GWT.create(JQueryBundle.class);
  HighCharts highCharts = GWT.create(HighCharts.class);

  jQuery.initJQuery();
  highCharts.initHighcharts();

  highCharts.drawChart("chart", charData);
}
Some of the advantages of using this method are enumerated in this slide of our GWT.create-2013 presentation.
You can optimize arbitrary JavaScript code using the GWT compiler: just use the appropriately named LinkerContext.optimizeJavaScript(TreeLogger, String) method. LinkerContext objects are available inside Linkers, which are pieces of custom code run during the compilation. Here's a minimal example of how to write one:
@LinkerOrder(LinkerOrder.Order.POST)
public class ScriptOptimizer extends AbstractLinker {

  @Override
  public String getDescription() {
    return "Optimizes external JavaScript files.";
  }

  @Override
  public ArtifactSet link(TreeLogger logger, LinkerContext context,
      ArtifactSet artifacts) throws UnableToCompleteException {
    // This is some arbitrary JavaScript code you'd probably want to read
    // from a static file in your classpath
    String script = "var foobar = 1; for (var i = foobar; i < 5; i++) alert(1);";

    // Do the optimizations
    script = context.optimizeJavaScript(logger, script);

    // Create an Artifact from the optimized JavaScript string
    ArtifactSet newArtifacts = new ArtifactSet(artifacts);
    newArtifacts.add(emitString(logger, script, "example.js"));
    return newArtifacts;
  }
}
Then, to include the Linker in your GWT compilation process, add this to your *.gwt.xml:
<define-linker name="scriptoptimizer"
class="com.example.ScriptOptimizer" />
<add-linker name="scriptoptimizer" />
The result will be a compiled file called example.js. You can of course generate as many files as you want or, for optimal results, concatenate all your scripts into one and compile that into a single output file.
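If you want to read the script from the classpath, as the comment in the example suggests, a hedged sketch of a small helper could look like this (the helper class and the resource path you pass in are hypothetical):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

import com.google.gwt.core.ext.TreeLogger;
import com.google.gwt.core.ext.UnableToCompleteException;

final class ScriptResources {

    // Reads a JavaScript file from the classpath and returns it as a string.
    static String read(TreeLogger logger, String resource)
            throws UnableToCompleteException {
        InputStream in = ScriptResources.class.getClassLoader()
                .getResourceAsStream(resource);
        if (in == null) {
            logger.log(TreeLogger.ERROR, "Resource not found: " + resource);
            throw new UnableToCompleteException();
        }
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toString("UTF-8");
        } catch (IOException e) {
            logger.log(TreeLogger.ERROR, "Failed to read " + resource, e);
            throw new UnableToCompleteException();
        } finally {
            try { in.close(); } catch (IOException ignored) { }
        }
    }
}

Inside link() you would then call context.optimizeJavaScript(logger, ScriptResources.read(logger, "com/example/external.js")), where the path is just an example.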
Related
Is there a way to hook into the Eclipse compiler to specify custom class reading/resolving/loading logic while a Java source file is being compiled? I'm not sure what the correct term is, but essentially the compile-time equivalent of "class loading" that happens at run-time.
For example, say I have the Java source:
package foo;

import bar.Bar;

public final class Foo {
    // getQux() returns type: qux.Qux
    private final Bar bar = baz.Baz.getQux().getBar();
    [...]
}
The compiler should request that 3 classes are read while compiling the source file foo/Foo.java:
bar.Bar - It is specified as an import.
baz.Baz - It is used in its fully qualified form (... = baz.Baz.getQux()...).
qux.Qux - It is an "indirect" dependency (it is returned by the call to baz.Baz.getQux(), which in turn is used to access a bar.Bar through the call to its getBar() method).
I'd like to be able to intercept each of these "class requests" so that I can provide custom logic to obtain the class in question (perhaps it lives in a database, perhaps it is served up by some server somewhere, etc).
Also, I'd like it if no attempt was made to compile any of the source files in the Eclipse project until they are explicitly opened by the user. So in the example above, the 3 class requests (bar.Bar, baz.Baz, qux.Qux) aren't made until the user actually opens the source file foo/Foo.java. Ideally the list of source files in the project needn't be actual files on the filesystem (perhaps they too live in a database, etc) and a compile attempt is made only when a user opens/loads a source file.
I realize that, if possible, this has some drawbacks. For example, if I edit source file foo/Foo.java to make the class "package private", this will silently break any class that depends on foo.Foo until a "full" compile is done of the project. For now, that is fine for my purposes (there are things that I can do later to solve this).
Any ideas/suggestions?
Thank you!
Probably not; this would fall under the Java build path part of the JDT, and I don't think it has that level of customization. There does not appear to be a documented extension point for this. To get a definitive answer you would need to look at the source. You could probably add this capability, but it would mean that you would need to use an alternate version of the JDT, which might be difficult or impossible.
In Java, I'd like to find a way to allow a program to access its own source code, mainly for debugging and metaprogramming purposes (such as printing a method signature at runtime, or allowing a program to read its own comments, or allowing a Java class to print all methods of a certain type, or allowing a program to generate a new version of its own source code, etc).
Is there any way to allow a Java program to access a copy of its own source code, and read it line-by-line?
//this is the first line of the program
public class InspectSourceCode {

    //this method is not implemented
    public static String getLine(int lineNumber){
        //get the line of the program's own source code as a string,
        //this is not currently implemented
        return null;
    }

    //this method is implemented
    public static void main(String[] args){
        System.out.println(getLine(0));
        //should print "//this is the first line of the program",
        //if the method getLine works correctly
    }
}
You could just directly access the .java file in the code. Just point it to the correct directory and access the file as you would any other.
The program is not running the .java file itself; compiled class files are used at runtime instead.
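If the sources are shipped with the application, a minimal sketch of getLine using plain file I/O could look like this (the source path is hypothetical, and it only works when the .java file is actually available at runtime):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class InspectSourceCode {

    // Reads a line (0-based) from this program's own source file, assuming
    // the .java file is shipped alongside the application at this path.
    public static String getLine(int lineNumber) throws IOException {
        List<String> lines = Files.readAllLines(
                Paths.get("src/InspectSourceCode.java")); // hypothetical location
        return lines.get(lineNumber);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(getLine(0)); // prints the first line of the source
    }
}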
I'm trying to set properties of each method
I'd suggest you use annotations and then read them with Method.getAnnotation.
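For example, a small sketch with a made-up @Property annotation read back via reflection:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

// @Property is a hypothetical annotation used to attach a value to a method.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Property {
    String value();
}

public class PropertyDemo {

    @Property("some-setting")
    public void configuredMethod() { }

    public static void main(String[] args) throws Exception {
        Method m = PropertyDemo.class.getMethod("configuredMethod");
        Property p = m.getAnnotation(Property.class);
        System.out.println(p == null ? "no property" : p.value());
    }
}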
Did you check ASM? http://asm.ow2.org/
But I think what you are trying to do is very CPU-expensive.
You COULD theoretically use a decompiler library in your source code to get access to the classes, but keep in mind that due to optimization and/or obfuscation you might not be able to reliably do a 1:1 translation between bytecode and Java code. Also keep in mind that you don't necessarily even have the line numbers available if the code was not compiled with debugging information built in.
Can a Java program access its own source code?
In general no. The source code is typically not available on the execution platform.
In the sub-cases where the source code is available, then yes (of course) a program can read it using the standard Java I/O APIs. However, there are no standard APIs that are specific to the task of reading source code.
... mainly for debugging purposes (such as printing a method signature at runtime, or allowing a program to read its own comments, or allowing a Java class to print all methods of a certain type)
There is no technical reason why you could not do those things, but it strikes me that you would have a lot of work to do before such a tool got to the point of being useful. And, frankly, a typical Java IDE's source code debugger does pretty much all of these things already, so I don't really see the point of that effort.
EDIT: There must be some way I can approach this without writing a whole new debugger. I'm currently looking into ways to build on top of the existing java debugger. If anyone has any ideas on how to grab information the Java debugger already has (about stack frames, variables, raw data etc.), that would be really helpful.
--
What I'm trying to do is I have this framework/API built on Java, and I would like to write an eclipse plugin debugger that is customized to my framework. Here is a simple example:
I have two classes, one called Scope and one called Variable. The Scope holds a map of Variables. The code is all in Java, but I'm using this scope-variable relationship almost like a new language, and I would like a variable debug tab that gives me a list of currently active scopes with the variables that are currently stored inside. Here is some code:
import java.util.Hashtable;

public class Scope {

    private Hashtable<String, Variable> variableList = new Hashtable<String, Variable>();

    // constructor
    public Scope(){
    }

    public void put(String key, Variable v){
        variableList.put(key, v);
    }

    public Variable get(String key){
        return variableList.get(key);
    }
}
public class Variable {

    private String value;
    private String name;

    public Variable(String aName, String aValue){
        name = aName;
        value = aValue;
    }

    public String getValue(){
        return value;
    }

    public String getName(){
        return name;
    }

    public void setValue(String aValue){
        value = aValue;
    }
}
This is obviously an extremely simple example, but I would like to accomplish something similar to this where I can get a variables window, set a breakpoint, and have a "debugger" list out my active scope objects and the variable objects inside.
I've been trying to read and understand: http://www.eclipse.org/articles/Article-Debugger/how-to.html
and it's pretty dense (as well as extremely outdated), but I will try to take some time to understand it. I just wanted to see if anyone had any high-level recommendations on how to approach this type of problem, as I have little experience developing plugins in Eclipse or making debuggers.
Thanks!
Not an easy task. That article is still the main reference, I think. Old, but not outdated. Try to digest it, and preferably make it work. Before that, you should have at least minimal experience developing Eclipse plugins.
There are many pieces in the picture, but the first thing you must understand is that when Eclipse is debugging something (assuming we are using the standard debug model), we have two separate "worlds": the Eclipse side, and the interpreter side (or, if you prefer, the "local" and "remote" sides).
On the Eclipse side, the programming involves a cooperation between some Eclipse core classes and some classes of your own, which extend or implement some Eclipse classes/interfaces:
A "launchConfigurationType" (extension point in your plugin.xml) which causes the apparition of a new custom configuration when you click "Debug As -> New Configuration); this goes togetther with some "launchConfigurationTabGroups" definition that defines the "Tabs" dialogs that will appear in your custom launch configuration (eg) (each Tab will have its own class typically).
The launchConfigurationType is typically associated with a LaunchDelegate class, which is sort of your bootstrap class: it has the responsibility of creating and starting a running/debugging instance, both on the Eclipse side and on the "interpreter" (or "remote") side.
On the Eclipse side, the running/debugging instance is represented by an IDebugTarget object and its children (the implementation is your responsibility); this is created by the LaunchDelegate and "attached" to the remotely running process at launch time.
The remote side, the interpreter or program you are actually debugging, can be anything: a binary executable, a Perl script, some app running on some remote site (perhaps also a local Java program; but even in this case it would probably run in its own JVM, not in the debugging Eclipse JVM!). Your IDebugTarget object must know how to communicate with the "remote interpreter" (e.g., by TCP) and perform the typical debugger tasks (place breakpoints, step, run, ask for variables, etc.) - but the protocol here is up to you; it's entirely arbitrary.
What is not arbitrary is the hierarchy of your custom classes that the running Eclipse debugger will use: these should have an IDebugTarget as root, and should implement "The debug model" (see figure in the article). As said above, the IDebugTarget object is what understands how to translate between the Eclipse side and the remote side (see this image).
having worked on the eclipse edc debugger, it sounds like writing a whole debugger is not so much what you want.
it sounds like while running the debugger, you will have access to the objects that have the variables and scopes you are interested in.
you can use toString() in the classes themselves or use detail formatters to display a variation on the information you want. the toString() call can get quite detailed and nest into calls, show whole arrays, etc. detail formatters can also be quite complex.
see http://www.robertwloch.net/2012/01/eclipse-tips-tricks-detail-formatter/ . it's the best of several URLs (i have no association with the author).
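for example, a sketch of a toString() that could be added to the Scope class from the question (using its existing variableList field), so the Variables and Expressions views show the contents at a glance:

// Added inside the Scope class from the question.
@Override
public String toString() {
    StringBuilder sb = new StringBuilder("Scope{");
    for (java.util.Map.Entry<String, Variable> e : variableList.entrySet()) {
        sb.append(e.getKey()).append('=').append(e.getValue().getValue()).append(' ');
    }
    return sb.append('}').toString();
}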
once you are happy with the output of the Variable and Scope objects, you should be able to add watch expressions that will always show them in your expressions window (thus you don't have to rely on local variables in the stack frame you may be in).
this should then give you the list of Variables and Scopes from your framework that you are tracking … hopefully without having to write an entire eclipse debugger plugin to do so.
ok, i'm going to add a second answer here … i guess i'm not familiar enough with the state of your environment to know why custom detail formatters would not do the trick. for most cases, i think they'll provide you what you're looking for.
but if you're really interested in creating another view holding these items, then you could check out the eclipse jdt project. it's entirely possible that the extension points it provides will give you access to the internal variables and stack-frame information that you're looking to add, and also perhaps some UI that will make your job easier.
in other words, you might not have to write an entirely new debugger plugin, but perhaps a plug-in that can work together with jdt.
the site has pointers to the project plan, source repositories, the bugzilla issue tracking database (used for both bug-tracking and new feature discussion). perhaps some of those who are experts on jdt can help weigh in with their opinions about what will best suit your needs.
I have some Java code written that I'd like to convert to JavaScript.
I wonder if it is possible to use the GWT compiler to compile the mentioned Java code into JavaScript code preserving all the names of the methods, variables and parameters.
I tried to compile it with code optimizations turned off using -draftCompile but the method names are mangled.
If GWT compiler can't do this, can some other tool?
Update
The Java code would have dependencies only to GWT emulated classes so the GWT compiler would definitely be able to process it.
Update 2
This Java method:
public String method()
got translated to this JavaScript function:
function com_client_T_$method__Lcom_client_T_2Ljava_lang_String_2()
using the compiler options:
-style DETAILED
-optimize 0
-draftCompile
So names can't be preserved. But is there a way to control how they are changed?
Clarification
Say, for example, you have a sort algorithm written in Java (or some other simple maths utility). The method sort() takes an array of integers and returns those integers sorted in an array. Say now I have both Java and JavaScript applications. I want to write this method once, in Java, run it through the GWT compiler, and either keep the method name the same or have it change in a predictable way, so I can detect it and know how to change it back to sort(). I can then put that code in my JavaScript application and use it. I can also automatically re-generate it if the Java version changes. I have a very good technical reason for this, and I understand the concepts of GWT at a high level; I'm just looking for an answer to this point only.
Conclusion
The answer to the main question is NO.
While the method name can be somewhat preserved, its body is not usable. The methods it calls are scattered throughout the generated file and, as such, the result can't be used as a JavaScript library, which was the whole point of this topic.
Although you can set the compiler to output 'pretty' code, I suggest you write export functions for the classes you want to call from outside your GWT project. I believe somewhere in the GWT documentation it's detailed how to do this, but I couldn't find it, so here is an example I just created.
class YourClass {

    public YourClass() {
        ...
    }

    public void yourMethod() {
        ...
    }

    public static YourClass create() {
        return new YourClass();
    }

    public final static native void export() /*-{
        $wnd.YourClass = function() {
            this.instance = new @your.package.name.YourClass::create()();
        }

        var _ = $wnd.YourClass.prototype;
        _.yourMethod = function() { this.instance.@your.package.name.YourClass::yourMethod()(); }
    }-*/;
}
EDIT
To elaborate, your code will get obfuscated like normal, but thanks to the export function, you can easily reference those functions externally. You don't have to rewrite anything from your Java class in JavaScript. You only write the references in JavaScript, so you can do this:
var myInstance = new YourClass();
myInstance.yourMethod();
Of course you have to call the static export method from somewhere in your GWT app (most likely in your EntryPoint) to make this work.
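For instance, a minimal sketch of an EntryPoint that does this (the module class name is made up):

import com.google.gwt.core.client.EntryPoint;

// Export once the module loads, so $wnd.YourClass exists before any
// host-page JavaScript tries to use it.
public class MyModule implements EntryPoint {
    @Override
    public void onModuleLoad() {
        YourClass.export();
    }
}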
More info about referencing Java methods from JavaScript:
http://code.google.com/webtoolkit/doc/latest/DevGuideCodingBasicsJSNI.html#methods-fields
No - this isn't possible with the GWT compiler, since the GWT compiler is built to generate optimized and very performant JavaScript out of Java.
The big advantage is that you can maintain your project in Java and compile it with GWT to JavaScript. So there is no need to preserve the variable names and method names in the JavaScript result, since all changes and work are done in the Java sources.
Working in the JavaScript output of GWT just isn't that easy and is really a lot of work!
Update:
Thanks to a hint from David, I found the compiler option "-style". You can have a try with the following options:
-style=PRETTY -optimize=0
I have no idea if this will really generate "human readable" code. I think it won't, since the GWT framework will still be part of the resulting JavaScript, so it will be difficult to make changes to the JavaScript result. Have a try and let us know...
Maybe I can answer your second question: "If GWT compiler can't do this, can some other tool?"
I am using Java2Script for quite a while now, also on quite large projects. Integration with native JavaScript is fine, names are preserved, and after some time one can even match the generated JavaScript (in the browser debugger) with the original Java code with little effort.
Udo
You can "export" your function by writing inline JavaScript that calls it, and there is a tool gwt-exporter that does this automatically when you annotate classes and methods with #Export and similar. More information: https://code.google.com/p/gwtchismes/wiki/Tutorial_ExportingGwtLibrariesToJavascript_en
I maintain a Java Swing application.
For backwards compatibility with Java 5 (for Apple machines), we maintain two codebases: one using features from Java 6, another without those features.
The code is largely the same, except for 3-4 classes that uses Java 6 features.
I wish to maintain just one codebase. Is there a way, during compilation, to get the Java 5 compiler to 'ignore' some parts of my code?
I do not wish to simply comment/uncomment parts of my code, depending on the version of my java compiler.
The suggestions about using custom class loaders and dynamically commented code are hard to take seriously when it comes to maintenance and the preservation of the sanity of whichever poor soul picks up the project after you shuffle off to pastures new.
The solution is easy. Pull the affected classes out into two separate, independent projects - make sure the package names are the same, and just compile them into jars that you can then consume in your main project. If you keep the package names the same and the method signatures the same, no problems - just drop whichever version of the jar you need into your deployment script. I would assume you run separate build scripts or have separate targets in the same script - Ant and Maven can both easily handle conditionally grabbing files and copying them.
Assuming that the classes have similar functionality, with 1.5 vs. 6.0 differences only in implementation, you could merge them into one class. Then, without editing the source to comment/uncomment, you can rely on an optimization the compiler always does: if the condition of an if statement is a compile-time constant false, the code inside it will not be included in the compiled class.
You can make a static variable in one of your classes to determine which version you want to run:
public static final boolean COMPILED_IN_JAVA_6 = false;
And then have the affected classes check that static variable and put the different sections of code in a simple if statement
if (VersionUtil.COMPILED_IN_JAVA_6) {
    // Java 6 stuff goes here
} else {
    // Java 1.5 stuff goes here
}
Then when you want to compile the other version you just have to change that one variable and recompile. It might make the java file larger but it will consolidate your code and eliminate any code duplication that you have. Your editor may complain about unreachable code or whatever but the compiler should blissfully ignore it.
I think the best approach here is probably to use build scripts. You can have all your code in one location, and by choosing which files to include, and which not to include, you can choose what version of your code to compile. Note that this may not help if you need finer-grained control than per file.
You can probably refactor your code so that conditional compilation really isn't needed, just conditional classloading. Something like this:
import java.io.File;

public interface Opener {

    public void open(File f);

    public static class Util {
        public static Opener getOpener() {
            if (System.getProperty("java.version").startsWith("1.5")) {
                return new Java5Opener();
            }
            try {
                return new Java6Opener();
            } catch (Throwable t) {
                return new Java5Opener();
            }
        }
    }
}
This could be a lot of effort depending on how many version-specific pieces of code you have.
Keep one "master" source root that builds under JDK 5. Add a second parallel source root that has to build under JDK 6 or higher. (There should be no overlap, i.e. no classes present in both.) Use an interface to define the entry point between the two, and a tiny bit of reflection.
For example:
---%<--- main/RandomClass.java
// ...
if (...is JDK 6+...) {
    try {
        JDK6Interface i = (JDK6Interface)
                Class.forName("JDK6Impl").newInstance();
        i.browseDesktop(...);
    } catch (Exception x) {
        // fall back...
    }
}
---%<--- main/JDK6Interface.java
public interface JDK6Interface {
    void browseDesktop(URI uri);
}
---%<--- jdk6/JDK6Impl.java
public class JDK6Impl implements JDK6Interface {
    public void browseDesktop(URI uri) {
        java.awt.Desktop.getDesktop().browse(uri);
    }
}
---%<---
You could configure these as separate projects in an IDE using different JDKs, etc. The point is that the main root can be compiled independently and it is very clear what you can use in which root, whereas if you try to compile different parts of a single root separately it is too easy to accidentally "leak" usage of JDK 6 into the wrong files.
Rather than using Class.forName like this, you can also use some kind of service registration system - java.util.ServiceLoader (if main could use JDK 6 and you wanted optional support for JDK 7!), NetBeans Lookup, Spring, etc. etc.
The same technique can be used to create support for an optional library rather than a newer JDK.
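A minimal sketch of the java.util.ServiceLoader variant mentioned above (it assumes the implementation is listed in a META-INF/services file named after the fully qualified interface, and the helper class name is made up):

import java.util.ServiceLoader;

public final class Jdk6Support {

    // Returns the first registered JDK6Interface implementation, or null if
    // none is available, in which case the caller falls back to the JDK 5 path.
    public static JDK6Interface find() {
        for (JDK6Interface impl : ServiceLoader.load(JDK6Interface.class)) {
            return impl;
        }
        return null;
    }
}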
Not really, but there are workarounds. See
http://forums.sun.com/thread.jspa?threadID=154106&messageID=447625
That said, you should stick with at least having one file version for Java 5 and one for Java 6, and include them via a build or make as appropriate. Sticking it all in one big file and trying to get the compiler for 5 to ignore stuff it doesn't understand isn't a good solution.
HTH
-- nikki --
This will make all the Java purists cringe (which is fun, heh heh), but I would use the C preprocessor and put #ifdefs in my source. A makefile, rakefile, or whatever controls your build would have to run cpp to make temporary files to feed the compiler. I have no idea if Ant could be made to do this.
While Stack Overflow looks like it'll be the place for all answers, you could, when no one's looking, mosey on over to http://www.javaranch.com for Java wisdom. I imagine this question has been dealt with there, probably a long time ago.
It depends on what Java 6 features you want to use. For a simple thing like adding row sorters to JTables, you can actually test at runtime:
private static final double javaVersion =
        Double.parseDouble(System.getProperty("java.version").substring(0, 3));
private static final boolean supportsRowSorter =
        (javaVersion >= 1.6);

//...

if (supportsRowSorter) {
    myTable.setAutoCreateRowSorter(true);
} else {
    // not supported
}
This code must be compiled with Java 6, but can be run with any version (no new classes are referenced).
EDIT: to be more correct, it will work with any version since 1.3 (according to this page).
You can do all of your compiling exclusively on Java6 and then use System.getProperty("java.version") to conditionally run either the Java5 or the Java6 code path.
You can have Java6-only code in a class and the class will run fine on Java5 as long as the Java6-only code path is not executed.
This is a trick that is used to write applets that will run on the ancient MSJVM all the way up to brand-new Java Plug-in JVMs.
There is no pre-compiler in Java. Thus, no way to do a #ifdef like in C.
Build scripts would be the best way.
You can get conditional compilation, but not very nicely - javac will ignore unreachable code. Thus, if you structure your code properly, you can get the compiler to ignore parts of it. To use this properly, you would also need to pass the correct arguments to javac so it doesn't report unreachable code as errors and refuse to compile :-)
The public static final solution mentioned above has one additional benefit the author didn't mention--as I understand it, the compiler will recognize it at compile time and compile out any code that is within an if statement that refers to that final variable.
So I think that's the exact solution you were looking for.
A simple solution could be:
Place the divergent classes outside of your normal classpath.
Write a simple custom classloader and install it in main as your default.
For all classes apart from the 5/6 ones, the classloader can defer to its parent (the normal system classloader).
For the 5/6 ones (which should be the only ones that cannot be found by the parent) it can decide which to use via the 'os.name' property or one of your own; see the sketch below.
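A rough sketch of such a classloader (the java5/java6 resource locations, the class name, and the os.name test are all placeholders; the real lookup mechanism is up to you):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class VersionClassLoader extends ClassLoader {

    public VersionClassLoader(ClassLoader parent) {
        super(parent);
    }

    // Only the version-specific classes are handled here; everything else is
    // resolved by the parent through the normal delegation model.
    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        boolean useJava5 = System.getProperty("os.name").startsWith("Mac"); // placeholder test
        String path = (useJava5 ? "java5/" : "java6/") + name.replace('.', '/') + ".class";
        InputStream in = getClass().getClassLoader().getResourceAsStream(path);
        if (in == null) {
            throw new ClassNotFoundException(name);
        }
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            byte[] bytes = out.toByteArray();
            return defineClass(name, bytes, 0, bytes.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}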
You can use the reflection API. Put all your 1.5 code in one class and the 1.6 API code in another. In your Ant script, create two targets: one for 1.5 that won't compile the 1.6 class, and one for 1.6 that won't compile the class for 1.5. In your code, check your Java version and load the appropriate class using reflection; that way javac won't complain about missing functions. This is how I compile my MRJ (Mac Runtime for Java) applications on Windows.