I was kind of curious whether it's possible to do assembly programming on the JVM in a fashion similar to using NASM with C.
After a quick Google search to see if assembly language programming on the JVM was possible, I was surprised to find some results.
Has anyone tried doing something like this before?
I'm also wondering if there is any assembly support for Clojure or Scala.
Invoking Assembly Language Programming from Java
minijavac: not in English, but it looks like it uses some kind of NASM support.
Assembly is usually used with C so that a) you can access instructions the C compiler doesn't generate, or b) you can do lower-level performance tuning.
As bytecode is designed for Java, there aren't any useful bytecode instructions that the Java compiler doesn't already generate.
The JVM looks for common patterns in the bytecode produced by the compiler and optimises for those. This means that if you write the bytecode yourself, it is likely to end up less optimised, i.e.
slower, unless it is the same as what the compiler would produce.
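If you want to see the kind of bytecode the JIT is tuned for, you can inspect the compiler's output yourself; the class below is just a throwaway example, and running javap -c Adder on the compiled class prints the bytecode (iload, iadd, ireturn, and so on).
public class Adder {
    // Compile with javac, then run "javap -c Adder" to inspect the generated bytecode.
    public static int add(int a, int b) {
        return a + b;
    }
}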
Write a JNI library in C with inline assembly in it.
In theory, you could write a JNI-compliant library in pure assembly, but why bother?
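As a rough sketch of the JNI route (the class and library names here are made up, not from any existing project), the Java side only declares the native method and loads the library; the implementation lives in C with inline assembly, or in pure assembly following the JNI calling convention:
public class AsmBridge {
    static {
        // Loads libasmbridge.so / asmbridge.dll, built from the C + inline-assembly source.
        System.loadLibrary("asmbridge");
    }

    // Declared in Java, implemented on the native side.
    public static native long rdtsc();
}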
I'd like to point to another solution: generating assembly code at runtime from your Java program.
Some (long) time ago there was a project called SoftWire, written in C++, that did exactly that. It (ab)used method and operator overloading to create a kind of C++ DSL that closely resembled x86 assembly, and which behind the scenes would assemble the corresponding machine code. The main goal was to dynamically assemble a routine customized for a specific configuration, while eliminating nearly all the branches (the routine would be reassembled if the configuration changed).
This was an excellent library, and the author used it to great effect to implement a software renderer with shading support (shaders were dynamically translated to x86 assembly and then assembled, all at runtime), so this was not just a crazy idea. Unfortunately he was hired by a company and the library was acquired in the process.
Today, to follow such a route you could create a JNI binding to DynASM (that alone is probably no small task) and use it to assemble at runtime. If you are willing to use Scala over Java, you could even relatively easily create a DSL à la SoftWire that would, under the hood, generate the assembly source code and pass it to DynASM.
Sounds like fun :-)
No reason to be bored anymore.
Are you looking for something like the Jasmin project? Because, for some reason, minijavac always reminds me of the Jasmin parser...
Okay, so if I am not mistaken, in C or C++ we use the code below to substitute one statement for another, shorter one. So you can just write P rather than printf as a command, right?
#define P printf
Then how do we do that in Java?
Java does not have macros, or a pre-processing step.
One must realize that with every programming language comes its own set of tools.
Macros are often used where C++ templates or Java generics can be used instead, for example in the case of a MAX macro.
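For instance, a C-style MAX macro can be replaced by a plain generic method (a minimal sketch; the class and method names are just for illustration):
public class MacroFree {
    // A generic max written once, instead of a C-style MAX(a, b) macro.
    static <T extends Comparable<T>> T max(T a, T b) {
        return a.compareTo(b) >= 0 ? a : b;
    }

    public static void main(String[] args) {
        System.out.println(max(3, 7));         // 7
        System.out.println(max("abc", "abd")); // abd
    }
}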
If you really want a pre-processing step, you should consider adding a step to your build system (e.g. a Maven plugin) that goes over your "Java code with macros", generates real Java files from it (similar to how inline functions behave in C++), and then compiles the generated Java code.
You can find examples of this, for instance where Java code is generated from XSD or other schemas; so, theoretically, why not generate it from "Java with macros"?
If you look, for example, at Project Lombok, you will see they introduce a "properties" syntax to Java, but in fact they just provide IDE plugins (so the code does not look "broken" or "in error" when you work in your favorite IDE) and Maven steps/goals so you can actually build something developed with Lombok.
Maybe you should adopt a similar approach, if this is that crucial for you (actually, in the past, prior to JDK 5, this is how "annotations" were done in some frameworks, but you should have a really good reason to do that in your code).
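For a feel of the Lombok approach, here is a hedged sketch using its @Getter/@Setter annotations (the Point class is just an example): the accessors are generated at compile time by an annotation processor plus IDE support, not by a textual preprocessor.
import lombok.Getter;
import lombok.Setter;

public class Point {
    // Lombok generates getX()/setX() and getY()/setY() in the compiled class.
    @Getter @Setter private int x;
    @Getter @Setter private int y;
}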
Java does not have a preprocessor step like the languages you enumerated (the C macro language is handled by the preprocessor). You can make a static final constant or a static method, or you could use cpp to pre-process your Java source (which I would not recommend, because it wouldn't work with standard tools). Another somewhat similar alternative (but only in the sense of being able to omit a class name by adding a symbol to a local namespace) is the static import.
import static java.lang.System.out;
// ...
out.println("Hello, World"); // <-- System.out.println
Java doesn't have any built-in preprocessor, but if a project strongly needs one (usually mobile projects that need small code variations for many target devices), external tools can be used. Some people even use the C/C++ preprocessor to preprocess their sources; I use my own java-comment-preprocessor. In any case, none of the Java preprocessors I have seen allow the kind of tricks the C/C++ preprocessor does, because preprocessor directives are not supported at the Java language level.
I have just recently discovered Project Sumatra, which aims to bring the JVM to the graphics card. From their webpage this includes a custom compiler (called Rootbeer) for Java.
This is all good news; however, I would like to hear from someone with more knowledge of the project internals whether this means that Project Sumatra applies to other JVM languages as well. Will it be possible to make Aparapi calls from Scala or Clojure directly? Or will you have to develop some core functionality in Java and then access that via other JVM languages?
I only just came across this question. Apologies for taking so long. Full disclosure: I am the Aparapi inventor/lead and a co-sponsor of Sumatra.
Unlike Aparapi, Sumatra has the advantage of working from the IR (intermediate representation) of Java methods inside the JVM. This means that ultimately it will detect opportunities for GPU offload based on patterns found at this abstract level. Aparapi had to reverse-engineer opportunities from bytecode.
It is likely that Sumatra will initially key off user hints rather than trying to auto-parallelize code. The main focus at present is the new 'lambda' feature of Java 8 and its companion stream API. So where Aparapi required the user to inherit from a Kernel base class, Sumatra will likely use the 'explicit' hint of parallelism suggested by:
IntStream.range(0, 1024).parallel().forEach(gid -> { out[gid] = a[gid] + b[gid]; });
That said, for obvious cases such as
for (int gid = 0; gid < 1024; gid++) {
    out[gid] = a[gid] + b[gid];
}
it should be entirely possible to offload the loop as well. So support for other JVM-based languages will depend on how ambitious we are in looking for opportunities to auto-parallelize. I suspect that many patterns from other languages (JavaScript (Nashorn), JRuby, Scala, Jython, etc.) will be detectable.
AFAIK Rootbeer (a university project) and Aparapi (an AMD-based project) are unrelated, so you may have missed something here.
Regarding Aparapi itself, its wiki states that it won't work with Scala/Clojure etc., or in fact with anything except pure Java, since it depends on the patterns produced by the JDK's javac to properly analyse bytecode. It also requires you to extend its Kernel class so that the bytecode can be converted into OpenCL and executed on the GPU. So it looks like you would use one or the other.
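For reference, Aparapi's documented usage pattern looks roughly like the sketch below (package name as in the AMD-era releases; the array sizes and class name are just for illustration):
import com.amd.aparapi.Kernel;

public class VectorAdd {
    public static void main(String[] args) {
        final float[] a = new float[1024], b = new float[1024], out = new float[1024];

        Kernel kernel = new Kernel() {
            @Override
            public void run() {
                int gid = getGlobalId();
                out[gid] = a[gid] + b[gid];
            }
        };
        // Aparapi converts the bytecode of run() to OpenCL where it can,
        // otherwise it falls back to a Java thread pool.
        kernel.execute(out.length);
        kernel.dispose();
    }
}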
Back to your question: based on all this you would have to develop in Java and call it from other JVM languages.
I'm looking for a Java macro language that provides convenient ways of doing closures (that compile to anonymous inner classes) and list comprehensions (that compile down to basic Java loops).
An example of the kind of thing I'm looking for would be Xtend2 http://www.eclipse.org/Xtext/#xtend2
But I want something for general-purpose programming (Xtend2 is a very specific DSL for Xtext and has a ton of dependencies). Maybe even something that would let me define multiple classes in a single file (which would then get split up into two separate files by the pre-processor).
Does anything like this exist?
Edited to add:
I'm doing Android development, so any alternative has to generate either valid Java source or bytecode that is compatible with the Dalvik recompiler.
Mmm, there used to be the JSE, which was tremendous fun, back in the day.
Mirah is cool, but not ready for primetime, IMO.
You can do a lot with smart templating, although your source view is the Java.
There's a post on SO about using Xtend on Android from a few days ago, too.
Frege produces Java source code.
I do not know whether Dalvik would like it (but I would be interested to hear ...).
And, of course, you have some runtime library code.
That being said, there are a number of other projects that do closures etc. in Java, for example lambdaj.
If I wanted to create a new language for Java, I should make a compiler that is able to generate bytecode compatible with the JVM spec, right? And also work with the JDK libraries?
Where can I find some info?
Thanks.
Depends what you mean by "create a new language for Java" -- do you mean a language that compiles to bytecode and the code it generates can be used from any Java app (e.g. Groovy) or an interpreted language (for which you want to write a parser in Java)?
If it is the former then @Joachim is right: look at the JVM spec; for the latter, look at the likes of JavaCC for creating a parser for your language's grammar.
I would start with a compiler which produced Java source. You may find this easier to read/understand/debug. Later you can optimise it to produce byte code.
EDIT:
If you have features which cannot easily be translated to Java code, you should be able to create a small number of bytecode classes using Jasmin containing all the exotic functionality, which you can test to death. From the generated Java code this will look like a plain method call. The JVM can still inline the method, so this might not impact performance at all.
The Java Virtual Machine Spec should have most of what you need.
An excellent library for bytecode generation/manipulation is ASM: http://asm.ow2.org.
It is very versatile and flexible. Note, however, that its API is event-based (similar to SAX parsers): it reads .class files and invokes a method whenever it encounters a new entity (class declaration, method declaration, statements, etc.). This may seem a bit awkward at first, but it saves a lot of memory compared to the alternative, where the library reads the input, spits out a fully-populated tree structure, and then you have to iterate over it.
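As a small sketch of that event-based style (assuming the ASM 4 visitor API; the class name here is arbitrary), this prints the name and descriptor of every method in java.lang.String:
import java.io.IOException;

import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

public class MethodLister {
    public static void main(String[] args) throws IOException {
        // ClassReader parses the .class file and fires a callback per entity it finds.
        new ClassReader("java.lang.String").accept(new ClassVisitor(Opcodes.ASM4) {
            @Override
            public MethodVisitor visitMethod(int access, String name, String desc,
                                             String signature, String[] exceptions) {
                System.out.println(name + desc);
                return null; // returning null skips the method body
            }
        }, 0);
    }
}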
I don't think this will help much practically, but it has a lot of sweet theoretical stuff that I think you'll find useful.
http://www.codeproject.com/KB/recipes/B32Machine1.aspx
I'm writing a Java application using Struts 2, but now I'd like to make it a hybrid Java & Scala project instead. I don't have much experience with Scala, but I learned Haskell years ago at college -- I really liked the functional programming paradigm, but of course in class we were only given problems that were supremely suited to a functional solution! In the real world, I think some code is better suited to an imperative style, and I want to continue using Java for that (I know Scala supports imperative syntax, but I'm not ready to go in the direction of a pure Scala project just yet).
In a hybrid project, how does one decide what to code in Java and what to code in Scala?
Two things:
99% of Java code can be expressed in Scala
You can write projects that support mixed Java+Scala compilation. Your Scala code can call your Java code and your Java code can call your Scala code. (If you want to do the latter, I suggest defining the interface in Java and then just implementing it in Scala. Otherwise, calling Scala code from Java can get a little ugly.)
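For the Java-to-Scala direction, here is a minimal sketch of that advice (the ScoreService interface and its method are made up for illustration): define the contract in Java and let a Scala class implement it, so the Java side never has to see any Scala-specific types.
// Defined on the Java side; a Scala class can simply implement this interface,
// and Java callers program against it without knowing the implementation is Scala.
public interface ScoreService {
    int score(String input);
}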
So the answer is: whatever parts you want. Your Scala code does not need to be purely functional. Your Scala code can call Java libraries. So pretty much any parts you could write in Java you could also write in Scala.
Now, some more practical considerations. When first trying Scala, some people pick relatively isolated, non-mission-critical parts of their program to write in Scala. Unit tests are a good candidate if you like that approach.
If you're familiar with Java and have learned Haskell in the past, I suggest treating Scala as a "better Java". Fundamentally, Scala compiles to JVM bytecode that is very, very similar to what Java outputs. The only difference is that Scala is more "productive": it produces more bytecode per line of code than Java does. Scala has a lot of things in common with Haskell (first-class functions, for-comprehensions are like Haskell's do-notation, limited type inference), but it's also very different (it's not lazy by default, it's not pure). So you can use some of your insights from Haskell to inspire your Scala style, but "under the hood" it's all Java bytecode.
In the spirit of your question, I recommend you write in Scala any code that involves heavy manipulation of collections, or that handles XML.
Scala's collection library is the foremost functional feature of Scala, and you'll experience great LoC reduction through its usage. Yes, there are Java alternatives, such as Google's collection library, but you asked what you should write in Scala. :-)
Scala also has native handling of XML. You might well find the transition difficult if you try to take DOM code and make it work in Scala. But if you instead approach the problem from the Scala perspective and write it from scratch for Scala, you'll see gains.
I'd advise using Actors as well, but I'm not sure how well you can integrate that with Struts 2 code on Java. But if you have concurrent code, give Actors in Scala a thought.
It might sound silly, but why not write your entire project in Scala? It's a wonderful language that is far more expressive than Java while maintaining binary-compatible access to existing Java libraries.
Ask these questions of your project:
"What operations need side-effects?" and "What functionality is already covered well by Java libraries?" Then implement the rest in Scala.
However, I would warn that hybrid projects are by their very nature more difficult than standalone projects, as you need to use multiple languages/environments. So, given that you say you don't have much experience with Scala, I'd recommend playing with some toy projects first, perhaps a subset of your full goal. That will also give you a feel for where the divide should occur.