I've been using ZPT in Python recently and I love the templating language. I was looking for something similar for Java but couldn't really find anything I liked as well. The closest thing is FreeMarker.
The problem with FreeMarker and the other Java template engines I looked at was their JSP-style syntax that allows for non-conforming XML. I was just wondering if there is a Java template engine similar to Zope Page Templates, i.e. an "attribute" language that requires valid XML.
I think there are quite a few template engines of the kind you're looking for:
Cambridge
Thymeleaf
JTP (dead, but an exact TAL implementation)
javaTAL (dead, but an exact TAL implementation)
Other approaches supporting valid HTML are:
Snippetory (not bound to HTML)
Lift (Scala)
FreeMarker has a nasty dependency on AWT. It makes it impossible to use with things like Google App Engine.
I prefer to use StringTemplate for all my Java templating needs. It is the only Java-based template system that strictly separates the logic from the template.
StringTemplate is a Java template engine (with ports for C#, Python, Ruby, and Scala) for generating source code, web pages, emails, or any other formatted text output. StringTemplate is particularly good at multi-targeted code generators, multiple site skins, and internationalization/localization.
Its distinguishing characteristic is that it strictly enforces model-view separation, unlike other engines. Strict separation makes websites and code generators more flexible and maintainable; it also provides an excellent defense against malicious template authors.
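For instance, a minimal StringTemplate 4 sketch might look like the following; the template text and attribute names are just illustrative, and the org.stringtemplate.v4 artifact is assumed to be on the classpath:

    import org.stringtemplate.v4.ST;

    public class HelloST {
        public static void main(String[] args) {
            // The template holds no logic; attributes are injected by name and rendered.
            ST st = new ST("Hello, <name>! You have <count> new messages.");
            st.add("name", "Alice");
            st.add("count", 3);
            System.out.println(st.render()); // prints: Hello, Alice! You have 3 new messages.
        }
    }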
Since you are generating XML, another solution that isn't obvious at first sight is using JAXB. We have a project here that requires us to generate XML; we have very well-defined XSD files for the output files, and building the objects and marshalling them is super easy and very painless.
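As a rough illustration of how painless the marshalling side is, here is a minimal JAXB sketch. The Person class is hypothetical; in a real project the classes would typically be generated from the XSD with xjc:

    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.Marshaller;
    import javax.xml.bind.annotation.XmlRootElement;
    import java.io.StringWriter;

    @XmlRootElement
    class Person {              // hypothetical element, stands in for the XSD-generated classes
        public String name;
        public int age;
    }

    public class JaxbDemo {
        public static void main(String[] args) throws Exception {
            Person p = new Person();
            p.name = "Alice";
            p.age = 30;

            Marshaller m = JAXBContext.newInstance(Person.class).createMarshaller();
            m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);

            StringWriter out = new StringWriter();
            m.marshal(p, out);  // writes roughly <person><name>Alice</name><age>30</age></person>
            System.out.println(out);
        }
    }

Note that javax.xml.bind ships with the JDK up to Java 10; on newer JDKs the JAXB artifacts have to be added as an explicit dependency.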
The Java template engine you will find to be most similar to Chameleon is Thymeleaf.
There is Distal for client-side templating.
There are currently two tal implementations for Java that I know of:
Java ZPT
JPT
There is also Apache Velocity, although it doesn't require that your templates be valid XML. That may be a deal breaker for you given the question, but you could likely enforce that rule with extensions to the core classes.
What about GXP?
There's also LSP and xtempore.
I need to parse a simple HTML page with a simple form in it. The answers to similar questions on StackOverflow suggest using one of a large variety of non-standard Java libraries such as TagSoup, JSoup, HTMLParser and many others.
However, a web search revealed that there exists some standard functionality in Java SE via this class: http://docs.oracle.com/javase/7/docs/api/javax/swing/text/html/parser/ParserDelegator.html
My sub-questions are:
Is it really true that the standard ParserDelegator class can parse a use case like mine?
What are the limitations of the standard library that create the need for so many non-standard libraries?
Does the fact that ParserDelegator is within Swing preclude using it on a regular EC2 cloud server for a web application? Would I have to jump through a lot of hoops to get around the headless aspect, or would it be just a small tweak to the configuration?
If the standard one is not recommended, which non-standard one should I use, given: (a) my desire to not stray far from the standard; (b) my simple use case; (c) a desire for a mature, reliable implementation; and (d) no size or weight limitations, since this is a server application as opposed to an embedded client. API design is a far lower priority, so while I do appreciate JSoup's CSS-selector-like API, the other concerns (a) through (d) override it.
Thank you.
The JDK has a built-in HTML parser (in javax.swing.text.html) that supports roughly HTML 3.2. It should support parsing of basic text formatting tags and forms.
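To give an idea of what that looks like in practice, here is a minimal sketch of scanning a form with the built-in parser; the HTML string is made up. The parser does not open any windows, so it should normally run fine on a headless server:

    import javax.swing.text.MutableAttributeSet;
    import javax.swing.text.html.HTML;
    import javax.swing.text.html.HTMLEditorKit;
    import javax.swing.text.html.parser.ParserDelegator;
    import java.io.StringReader;

    public class FormScan {
        public static void main(String[] args) throws Exception {
            String html = "<html><body><form action='/submit'>"
                        + "<input name='user' type='text'/></form></body></html>";

            HTMLEditorKit.ParserCallback cb = new HTMLEditorKit.ParserCallback() {
                @Override
                public void handleStartTag(HTML.Tag t, MutableAttributeSet a, int pos) {
                    if (t == HTML.Tag.FORM) {
                        System.out.println("form action = " + a.getAttribute(HTML.Attribute.ACTION));
                    }
                }
                @Override
                public void handleSimpleTag(HTML.Tag t, MutableAttributeSet a, int pos) {
                    if (t == HTML.Tag.INPUT) {   // empty tags such as <input> arrive here
                        System.out.println("input name = " + a.getAttribute(HTML.Attribute.NAME));
                    }
                }
            };
            new ParserDelegator().parse(new StringReader(html), cb, true);
        }
    }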
The reason to use other, third-party parsers is the need to support "real" HTML pages with DHTML, JavaScript, etc.
JSoup is one of the popular parsers that can do the job (a short usage sketch follows below). For more information about other implementations, please take a look at the following discussion:
Pure Java HTML viewer/renderer for use in a Scrollable pane
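For comparison, here is what the same form-scanning task looks like with JSoup; this is only a sketch, assuming the jsoup artifact is on the classpath, and the HTML string is made up:

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;

    public class FormScanJsoup {
        public static void main(String[] args) {
            String html = "<html><body><form action='/submit'>"
                        + "<input name='user' type='text'></form></body></html>";

            Document doc = Jsoup.parse(html);
            Element form = doc.select("form").first();   // CSS-style selectors
            System.out.println("form action = " + form.attr("action"));
            for (Element input : form.select("input")) {
                System.out.println("input name = " + input.attr("name"));
            }
        }
    }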
I have a DSL which is based on a custom metamodel, which in turn is based on EMF/Ecore. I am trying to figure out which solution to choose, and I can't find any decent comparisons anywhere.
Does anyone have any reasons why I should choose one over the other?
What I know so far is that Acceleo uses an OMG-standardized language, but it seems harder to use than Xpand.
First of all, I wonder why you consider Acceleo more difficult to learn than Xpand; while both languages have differences (blocks and delimiters, for example), they have quite a similar structure. I won't detail all the elements of both languages but, for example, I don't see such a difference between something like:
«FOREACH myAttributes AS a»«a.name»«ENDFOREACH»
and
[for (a: Attribute|myAttributes)][a.name/][/for]
Both are template-based languages and as such they have quite the same structure. The main difference between Acceleo and Xpand comes from the fact that Acceleo is based on the OMG standards MOFM2T and OCL, and from the tooling.
I am not very familiar with the Xpand tooling, but you can find more about it on their wiki. Acceleo, on the other hand, contains an editor with syntax highlighting, code completion, error detection, refactoring and more. It also contains a debugger, a profiler, and Ant and Maven support. You can also easily deploy your generators as Eclipse plugins for other users, or use them outside of Eclipse in a regular Java application. You can find more information on Acceleo here. You can see most of the features of Acceleo in videos on the Obeo Network (registration required).
Finally, the latest activity on Xpand occurred a year ago, while Acceleo is actively developed. You can even follow the Acceleo development on GitHub if you want.
Stephane Begaudeau
Disclaimer: I am one of the members of the Acceleo dev team.
I am a dabbler, not an expert.
My impression is that if you need little more than a templating language, then Xpand is the way to go. Otherwise, pick Acceleo - but as you say, the learning curve is very steep.
When do you need more than a templating language? For me, they seem to run out of gas when the structure (not content) of the output is dependent on multiple independent pieces of the input. If you don't want to get into Acceleo, but have one of these cases, consider inventing an auto-generated "shim" language that gets you partway from input language to output language, perhaps with a lot of redundancy in it to avoid lookups at template-generation time.
I've been using the old 2.x Acceleo on a full-scale project and have done some tests with the new one.
The language is pretty easy to use, but with the new version it's a little bit more difficult to bind some Java code to your template when the scripting language is not enough.
I was a very big fan of 2.x, but with 3.x I had lots of trouble making it work. You have to write Java code to handle Eclipse resources, for instance. I totally gave up when updating to Juno: my Acceleo projects didn't work anymore and I didn't manage to fix them in two days. I hope they will make it easier to use out of the box.
Basically, the main difference is that Acceleo is an implementation of the MOF Models To Text Transformation Language, which is the OMG (Object Management Group) standard for the definition of model-to-text transformations. It is therefore a standard language designed by the same group who designed MOF, UML, SysML and MDA in general. Xpand is a language which I guess existed before the standard, but it is now different from it.
If you start from scratch then start with Acceleo.
In my case, I use a custom meta-model (derived from UML2) with custom stereotypes and stereotype properties. I tried both the Acceleo and Xpand template languages. Indeed, they are pretty similar in terms of structure and capabilities.
However, I can see one big difference (which makes Xpand much better in this use case): you can use your custom stereotypes in your Xpand templates.
Xpand engine brilliantly chooses the "best matching template/rule" for every stereotype (taking into account inheritance between stereotypes as well).
Furthermore, it is very easy to obtain stereotype properties.
These two "features" make the templates very elegant, compact and readable.
For example:
«DEFINE myTemplate FOR MyUmlProfile::MyStereoType»
MyValue: «this.myStereotypeProperty» or simply: «myStereotypeProperty»
«ENDDEFINE»
In Acceleo, I found it clumsy to achieve the same (longer statements, more code) and my templates ended up lengthy and complex. The positive thing about Acceleo, however, was that it worked conveniently from IBM RSA (applied directly to RSA (emx) models). It has code highlighting and auto-complete working nicely.
Xpand only worked if I exported my RSA models to ".uml" (~XML) format. It doesn't offer code highlighting or auto-complete (or at least I didn't figure out how).
Considering all pros and cons, I still vote for Xpand (in my use case).
I need to translate programs written in a domain-specific language into an XML representation. These programs are in the form of simple text files. What approach would you suggest? What API should I use to:
Parse the text files written in this language.
Write XML based on the tokens and token streams I obtain.
My criteria are more about rapid and easy development than memory or computing-time efficiency.
Many Thanks
Ketan
The less trivial part of the job is with step #1, parsing the Domain Specific Language (DSL) text, rather than #2, pushing this to some XML language.
Hopefully you readily have a parser for the DSL (obviously this language must have been put to use somewhere...), and you may be able to "hook" your export/conversion logic into this parser. If such is not possible, you'll need to write a new parser.
Depending on the complexity of the DSL, you may be able to write, longhand, a simple parser based on a few loops and switch cases.
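As an illustration of how little code such a longhand converter can be, here is a sketch that turns a made-up "name = value" line format into XML using the StAX writer from the JDK; the DSL syntax here is purely hypothetical and stands in for whatever your actual language looks like:

    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamWriter;
    import java.io.StringWriter;

    public class DslToXml {
        // Converts hypothetical "name = value" lines into <property name="...">value</property> elements.
        public static String convert(String dslSource) throws Exception {
            StringWriter out = new StringWriter();
            XMLStreamWriter xml = XMLOutputFactory.newInstance().createXMLStreamWriter(out);
            xml.writeStartDocument();
            xml.writeStartElement("program");
            for (String line : dslSource.split("\n")) {
                line = line.trim();
                if (line.isEmpty() || line.startsWith("#")) continue;  // skip blanks and comments
                String[] parts = line.split("=", 2);
                if (parts.length < 2) continue;                        // ignore malformed lines
                xml.writeStartElement("property");
                xml.writeAttribute("name", parts[0].trim());
                xml.writeCharacters(parts[1].trim());
                xml.writeEndElement();
            }
            xml.writeEndElement();
            xml.writeEndDocument();
            xml.close();
            return out.toString();
        }

        public static void main(String[] args) throws Exception {
            System.out.println(convert("# demo\nwidth = 10\nheight = 20"));
        }
    }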
For more complicated languages, ANTLR is often a good choice. In a nutshell, one formalizes the grammar of the DSL in Backus-Naur Form (actually EBNF here, the Extended family), and ANTLR produces a parser written in a target language of choice (including Java). The learning curve with ANTLR is a factor to consider, but in the context of a moderately to extremely sophisticated language it is an investment well worth making. ANTLR is similar to, but in my opinion a better tool than, GNU Bison; the latter would do the trick as well and can also target Java if so desired.
If you are familiar with other languages, in particular Python, there are many other tools that can be put to use for more or less ad-hoc parsers; I've also used PyParsing and gladly recommend it.
XStream is the best XML serializer/deserializer for Java EVAR. If you can turn your DSL into Java classes, this is a great library to use.
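If your parser produces Java objects, a minimal XStream sketch could look like the following; the Assignment class is made up purely for illustration:

    import com.thoughtworks.xstream.XStream;

    public class XStreamDemo {
        // Hypothetical class representing one statement of the parsed DSL.
        static class Assignment {
            String name;
            int value;
            Assignment(String name, int value) { this.name = name; this.value = value; }
        }

        public static void main(String[] args) {
            XStream xstream = new XStream();
            xstream.alias("assignment", Assignment.class);  // friendlier element name than the full class name
            String xml = xstream.toXML(new Assignment("width", 10));
            System.out.println(xml);
            // <assignment>
            //   <name>width</name>
            //   <value>10</value>
            // </assignment>
        }
    }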
Are there documentation generation systems for C++ similar to Javadoc? Javadoc produces nice output; it would be great if you could use something like it in other languages.
There are several tools that work like Javadoc for C++. The most popular tool is probably Doxygen. It can handle Javadoc-like comments, and also several languages (e.g., C++, C, Java, Objective-C, Python, PHP, C#). It has pretty good support for tweaking the style of the HTML output using CSS (see the users list for example documentation).
Two important issues when choosing a documentation system are to make sure that it allows you to:
Document the entities that you are interested in. Do you want to document the system following the code structure, or according to some other module division?
Get the output formatted as you want. It is preferable when the documentation fits in with your general project style.
Our experience with doxygen is that it is pretty easy to set up and use, and the resulting output is fairly easy to tweak. Unfortunately, doxygen is not perfect, so in some cases it is necessary to work around quirks or bugs where the doxygen parser breaks down. Be sure to inspect all of your generated documentation carefully.
You can't use javadoc specifically, but there are a couple of tools that do what you want. The one most people tend to use is Doxygen. Here are some links for Doxygen and Doc++:
Doxygen
Doc++
There's Doxygen, which supports a lot of things (and more): Doxygen
There is also qdoc for Qt-based C++ projects. http://doc-snapshot.qt-project.org/qdoc
I'm just starting to use Sphinx for my Python projects. Its home page states "C/C++ is already supported as well".
It uses a lightweight markup called "reStructuredText".
I like the look of the output very much.
From the Standardese home page:
Standardese aims to be a nextgen Doxygen. It consists of two parts: a library and a tool.
The library aims at becoming the documentation frontend that can be easily extended and customized. It parses C++ code with the help of libclang and provides access to it.
The tool drives the library to generate documentation for user-specified files. It supports a couple of output formats including Markdown and HTML as well as experimental LaTeX and man pages.
The Standardese code repository also points to some blog posts:
Standardese - a (work-in-progress) nextgen Doxygen
Standardese documentation generator version 0.1
Standardese documentation generator version 0.2: Entity linking, index generation & more
Standardese documentation generator version 0.3: Groups, inline documentation, template mode & more
Standardese Documentation Generator: Post Mortem and My Open-Source Future
I am working on a small text editor project and want to add basic syntax highlighting for a couple of languages (Java, XML... just to name a few). As a learning experience, I wanted to integrate one of the popular (or not so popular) Java lexers/parsers.
What project do you recommend? ANTLR is probably the most well known, but it seems pretty complex and heavy.
Here are the options that I know of:
Antlr
Ragel (yes, it can generate Java source for processing input)
Do it yourself (I guess I could write a simple token parser and highlight the source code).
ANTLR or JavaCC would be the two I know. I'd recommend ANTLR first.
ANTLR may seem complex and heavy but you don't need to use all of the functionality that it includes; it's nicely layered. I'm a big fan of using it to develop parsers. For starters, you can use the excellent ANTLRWorks to visualize and test the grammars that you are creating. It's really nice to be able to watch it capture tokens, build parse trees and step through the process.
For your text editor project, I would check out filter grammars, which might suit your needs nicely. For filter grammars you don't need to specify the entire lexical structure of your language, only the parts that you care about (i.e. need to highlight, color or index) and you can always add in more until you can handle a whole language.
Google Code has a new project, acacia-lex. Written by myself, it is a simple (so far) Java lexer using javax annotations.
SableCC
Another interesting option (which I didn't try yet) would be Xtext, which uses Antlr but also includes tools for creating Eclipse editors for your language.
ANTLR is the way to go. I would not build it by hand. You'll also find if you look around on the ANTLR web site that grammars are available for Java, XML, etc.
Another option would be Xtext. It will not only generate a parser for your grammar, but also a complete editor with syntax coloring, error markers, content assist and outline view.
I've done it with JFlex before and was quite satisfied with it. But the language I was highlighting was simple enough that I didn't need a parser generator, so your mileage may vary.
JLex and CUP are decent lexer and parser generators, respectively. I'm currently using both to develop a simple scripting language for a project I'm working on.
I don't think that you need a lexer. All you need is to first read the file extension to detect the language, and then, from an XML file that lists the language keywords, find and highlight them.
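A minimal sketch of that keyword-based approach might look like the following; the keyword list is hard-coded here for brevity, where the answer above suggests loading it from a per-language XML file. Note that such a naive approach will also mark keywords inside strings and comments, which is exactly where a real lexer starts to pay off:

    import java.util.Arrays;
    import java.util.List;
    import java.util.regex.Pattern;

    public class KeywordHighlighter {
        // In the approach described above this list would be loaded from an XML file
        // chosen by file extension; it is hard-coded here to keep the sketch self-contained.
        static final List<String> JAVA_KEYWORDS = Arrays.asList("public", "class", "void", "int", "return");

        // Wraps every keyword occurrence in <b>...</b>; a real editor would apply text attributes instead.
        public static String highlight(String source) {
            Pattern p = Pattern.compile("\\b(" + String.join("|", JAVA_KEYWORDS) + ")\\b");
            return p.matcher(source).replaceAll("<b>$1</b>");
        }

        public static void main(String[] args) {
            System.out.println(highlight("public class Foo { int x; }"));
        }
    }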