I am new to ANTLR. I ran a grammar through ANTLR and got lexer.java and parser.java files. I tested it with a simple example and it showed a proper tree in the interpreter tab and a parse tree in the debugger tab. Now I want to extract specific information from it. I would like to know whether I need an AST or not, and whether there is any tool compatible with ANTLR for extracting data?
Thanks.
According to the ANTLR4 Wiki, an ANTLR-generated parser generates a parse tree data structure, and it provides a tree walker class that you can use for traversing it. You could use this mechanism to extract information. Note that you'd need to code a "listener" class in Java to extract the information and output it (or whatever).
For more details, see https://theantlrguy.atlassian.net/wiki/display/ANTLR4/Parse+Tree+Listeners
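As a rough illustration of that mechanism, here is a hedged sketch of the ANTLR4 listener approach, assuming a grammar named Expr with a start rule prog and a rule assignment containing an ID token (all of these names, and the input file, are hypothetical), and ANTLR 4.7+ for CharStreams:

import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.tree.ParseTree;
import org.antlr.v4.runtime.tree.ParseTreeWalker;

public class ExtractInfo {
    // Hypothetical listener: ExprBaseListener, ExprLexer and ExprParser would be generated from Expr.g4.
    static class InfoListener extends ExprBaseListener {
        @Override
        public void enterAssignment(ExprParser.AssignmentContext ctx) {
            // Pull whatever you need out of the context object.
            System.out.println("assignment to: " + ctx.ID().getText());
        }
    }

    public static void main(String[] args) throws Exception {
        CharStream input = CharStreams.fromFileName("input.txt");
        ExprLexer lexer = new ExprLexer(input);
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        ExprParser parser = new ExprParser(tokens);
        ParseTree tree = parser.prog();                       // start rule
        ParseTreeWalker.DEFAULT.walk(new InfoListener(), tree);
    }
}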
UPDATE
Since you are using ANTLR3, these links are more relevant to you:
Tree pattern matching.
Tree Construction.
FAQ: Tree construction.
I strongly recommend that you take the time to read the available documentation.
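Since you are on ANTLR3, here is a minimal sketch of getting at the AST from Java, assuming a grammar named Expr built with output=AST and a start rule prog (the grammar, rule and file names are hypothetical):

import org.antlr.runtime.ANTLRFileStream;
import org.antlr.runtime.CommonTokenStream;
import org.antlr.runtime.tree.CommonTree;

public class DumpTree {
    public static void main(String[] args) throws Exception {
        ANTLRFileStream input = new ANTLRFileStream("input.txt");
        ExprLexer lexer = new ExprLexer(input);                 // generated by ANTLR3
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        ExprParser parser = new ExprParser(tokens);             // generated by ANTLR3
        ExprParser.prog_return result = parser.prog();          // hypothetical start rule
        CommonTree tree = (CommonTree) result.getTree();        // AST built by the grammar's rewrite rules
        print(tree, 0);
    }

    // Walk the tree recursively and extract whatever you need; here we just print it.
    static void print(CommonTree t, int depth) {
        StringBuilder indent = new StringBuilder();
        for (int i = 0; i < depth; i++) indent.append("  ");
        System.out.println(indent + t.getText());
        for (int i = 0; i < t.getChildCount(); i++) {
            print((CommonTree) t.getChild(i), depth + 1);
        }
    }
}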
I am developing a multi-mode resource-constrained project scheduling solver in Java. I was looking for test instances, but I only found this. It is in a .mm file, which is an extension for a C++ compiler. Is there any way to transform this data into something easily readable by Java, like XML or JSON?
As suggested, you could of course parse the file as a text file. Alternatively, the two other main approaches would be:
Use clang/LLVM's abstract syntax tree (AST) to interpret the data in the file.
Use an Objective-C++ grammar for a compiler generator like yacc or, since you're using Java, JavaCC. This will also yield a syntax tree that you can then walk and extract information from.
I've already parsed JavaScript source using Rhino and reconstructed it successfully.
When I call astRoot.toSource(), it shows me the reconstructed source just fine,
but the .toSource() method doesn't print comments.
Using the .toSource() method, all of my JavaScript source's comments disappear.
So, how can I get the full source including comments?
My goal is to write the AstRoot object (containing the source) to a new JavaScript file that includes the full comments.
I'm using Rhino 1.7R4.
In general, this is difficult because comments can appear in the middle of any declaration, statement or expression. So how do you represent that fact in the various AST objects? It could be done, but it gets very messy for the parser and the AST objects it creates.
If you restrict yourself to only allowing comments on statement boundaries, there are some possible solutions.
One way would be to write your own JavaScript tokenizer and inspect the stream while reading the file. Then you would need to figure out how to track the comments. One hackish way would be to transform them into 'var somexXXxx = "comment";' statements and use a naming convention to transform them back after the ast.toSource() call. That would map your comments into the AST node structure.
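A rough sketch of that hack in Java, assuming only full-line // comments matter (block comments and comments inside string literals are deliberately ignored); the __cmt_ naming convention and class name are made up for this example:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CommentShim {
    private static final Pattern LINE_COMMENT = Pattern.compile("(?m)^\\s*//(.*)$");
    private static final Pattern SHIM = Pattern.compile("var __cmt_\\d+ = \"(.*?)\";");
    private static int counter = 0;

    // Before parsing: turn each // comment into a dummy var declaration
    // so Rhino keeps it as an ordinary AST node.
    static String encode(String source) {
        Matcher m = LINE_COMMENT.matcher(source);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            String text = m.group(1).replace("\\", "\\\\").replace("\"", "\\\"");
            m.appendReplacement(sb, Matcher.quoteReplacement(
                    "var __cmt_" + (counter++) + " = \"" + text + "\";"));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    // After astRoot.toSource(): turn the dummy declarations back into comments
    // (un-escaping of quotes/backslashes is omitted for brevity).
    static String decode(String generated) {
        Matcher m = SHIM.matcher(generated);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            m.appendReplacement(sb, Matcher.quoteReplacement("//" + m.group(1)));
        }
        m.appendTail(sb);
        return sb.toString();
    }
}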
I have a C header file defining a couple of structs, containing multiple char arrays.
I'd like to parse these files using Java. Is there a library for reading C header files either into a structure, or is there a stream parser that understands C header files?
Just for more background (I'm only looking for a C header parser, not a solution for this particular problem):
I have a text file containing data and a C header file explaining its structure. Both are somewhat dynamic, so I don't want to generate Java class files.
Example:
#ifndef TYPE1
#define TYPE1
typedef struct type1
{
    char name1[10];
    char name2[5];
} type1;
#endif
Type2, Type3, etc. are similar.
Data structure:
type1ffffffffffaaaaa
You can use an existing C parser for Java. It does a lot more than parsing header files, of course, but that shouldn't hurt you.
We use the parser from the Eclipse CDT project. It is an Eclipse plugin, but we successfully use it outside of Eclipse; we just have to bundle three Eclipse JAR files with the parser JAR.
To use the CDT parser, start with an implementation of org.eclipse.cdt.core.model.ILanguage, for example org.eclipse.cdt.core.dom.ast.gnu.c.GCCLanguage. You can call getASTTranslationUnit on it, passing the code and some helper objects. A code file is represented by an org.eclipse.cdt.core.parser.FileContent instance (at least in CDT 7; this seems to change a lot). The easiest way to create such an object is FileContent.createForExternalFileLocation(filename) or FileContent.create(filename, content). This way you don't need to care about the Eclipse IFile machinery, which seems to work only within projects and workspaces.
The IASTTranslationUnit you get back represents the whole AST of the file. All the nodes therein are instances of IASTSomething types, for example IASTDeclaration etc. You can implement your own subclass of org.eclipse.cdt.core.dom.ast.ASTVisitor to iterate through the AST using the visitor pattern. If you need further help, just ask.
The JAR files we use are org.eclipse.cdt.core.jar, org.eclipse.core.resources.jar, org.eclipse.equinox.common.jar, and org.eclipse.osgi.jar.
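Putting those pieces together, here is a hedged sketch of standalone parsing with the CDT 7-era API described above (the header file name is made up; an empty include provider and default options keep the example minimal):

import org.eclipse.cdt.core.dom.ast.ASTVisitor;
import org.eclipse.cdt.core.dom.ast.IASTDeclaration;
import org.eclipse.cdt.core.dom.ast.IASTTranslationUnit;
import org.eclipse.cdt.core.dom.ast.gnu.c.GCCLanguage;
import org.eclipse.cdt.core.index.IIndex;
import org.eclipse.cdt.core.parser.DefaultLogService;
import org.eclipse.cdt.core.parser.FileContent;
import org.eclipse.cdt.core.parser.IScannerInfo;
import org.eclipse.cdt.core.parser.IncludeFileContentProvider;
import org.eclipse.cdt.core.parser.ScannerInfo;

public class HeaderDump {
    public static void main(String[] args) throws Exception {
        FileContent content = FileContent.createForExternalFileLocation("structs.h");
        IScannerInfo scanInfo = new ScannerInfo();                 // no extra macros or include paths
        IncludeFileContentProvider noIncludes = IncludeFileContentProvider.getEmptyFilesProvider();
        IIndex index = null;                                       // no index needed for a single file
        IASTTranslationUnit tu = GCCLanguage.getDefault().getASTTranslationUnit(
                content, scanInfo, noIncludes, index, 0, new DefaultLogService());

        tu.accept(new ASTVisitor() {
            { shouldVisitDeclarations = true; }                    // opt in to declaration nodes
            @Override
            public int visit(IASTDeclaration declaration) {
                System.out.println(declaration.getRawSignature()); // e.g. the whole typedef struct
                return PROCESS_CONTINUE;
            }
        });
    }
}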
Edit: I had found a paper which contains source code snippets for this:
"Using the Eclipse C/C++ Development Tooling as a Robust, Fully Functional, Actively Maintained, Open Source C++ Parser", but it is no longer available online (only as a shortened version).
Example using Eclipse CDT with only 2 jars.
- https://github.com/ricardojlrufino/eclipse-cdt-standalone-astparser
The example includes a class that displays the structure of a source file as a tree, plus another example showing interactions with the API.
One nice detail is that with this API (the Eclipse CDT parser) you can parse directly from a string in memory.
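For instance, a minimal sketch of that in-memory variant, assuming the same CDT setup as in the example further up (only the FileContent creation changes; the code string here is made up):

// Parse C code held in a String instead of a file on disk.
String code = "typedef struct type1 { char name1[10]; char name2[5]; } type1;";
FileContent content = FileContent.create("inmemory.h", code.toCharArray());
// ... then hand 'content' to getASTTranslationUnit exactly as before.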
Another example of usage is:
https://github.com/ricardojlrufino/cplus-libparser
A library for extracting metadata (information about classes, methods, and variables) from C/C++ source code.
See file:
https://github.com/ricardojlrufino/cplus-libparser/blob/master/src/main/java/br/com/criativasoft/cpluslibparser/SourceParser.java
As mentioned already, CDT is perfect for this task. But unlike the approach described above, I used it from within a plugin and was able to use IFiles. Then everything is much easier. To get the ITranslationUnit, just do:
ITranslationUnit tu = (ITranslationUnit) CoreModel.getDefault().create(myIFile);
IASTTranslationUnit ias = tu.getAST();
I was, for example, looking for a special #define, so I could just do:
IASTPreprocessorStatement[] ppc = ias.getAllPreprocessorStatements();
to get all the preprocessor statements, one statement per array element. Perfectly easy.
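From there, a hedged sketch of filtering that array for macro definitions (the target macro name TYPE1 is just an example taken from the question):

import org.eclipse.cdt.core.dom.ast.IASTPreprocessorMacroDefinition;
import org.eclipse.cdt.core.dom.ast.IASTPreprocessorStatement;

for (IASTPreprocessorStatement stmt : ppc) {
    if (stmt instanceof IASTPreprocessorMacroDefinition) {
        IASTPreprocessorMacroDefinition def = (IASTPreprocessorMacroDefinition) stmt;
        if ("TYPE1".equals(def.getName().toString())) {    // the #define we were looking for
            System.out.println("found: " + def.getRawSignature());
        }
    }
}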
You can try ANTLR. There should already be an existing C grammar available for it.
I want to write Java code to build a LALR parser for my grammar. Can someone please suggest some books or some links where I can learn how to write Java code for a LALR parser?
Writing a LALR parser by hand is difficult, but it can be done. If you want to learn the theory behind constructing LR parsers by hand, consider looking into "Parsing Techniques: A Practical Guide" by Grune and Jacobs. It's an excellent book on general parsing techniques, and the chapter on LR parsing is particularly good.
If you're more interested in just getting a LALR parser that is written in Java, consider looking into Java CUP, which is a general-purpose parser generator for Java.
Hope this helps!
You can split the LALR functionality into two parts: preparing the tables and parsing the input.
The first part is complex and error-prone, so even if you enjoy knowing how it works, I suggest using a proven, working table generator for the LALR states (and for the tokenizer DFA as well).
The second part consists of consuming those tables, using some fairly simple algorithms to tokenize and process the input into a parse tree/concrete syntax tree. This is easier to implement yourself if you'd like to, and you still keep full control over how it works and what it does.
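To make that second part concrete, here is a minimal sketch of a table-driven LR parse loop, under the assumption that some generator has already produced the ACTION/GOTO tables and the per-rule metadata (the table encoding used here is made up for the example):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

final class LrEngine {
    // action[state][terminal]: positive = shift to that state, negative = reduce by rule (-value),
    // 0 = syntax error, Integer.MAX_VALUE = accept.
    private final int[][] action;
    // gotoTable[state][nonterminal]: state to enter after reducing to that nonterminal.
    private final int[][] gotoTable;
    private final int[] ruleLength;   // number of right-hand-side symbols of each rule
    private final int[] ruleLhs;      // nonterminal index produced by each rule

    LrEngine(int[][] action, int[][] gotoTable, int[] ruleLength, int[] ruleLhs) {
        this.action = action;
        this.gotoTable = gotoTable;
        this.ruleLength = ruleLength;
        this.ruleLhs = ruleLhs;
    }

    /** Returns true if the token stream (terminal codes, ending with an EOF code) is accepted. */
    boolean parse(List<Integer> tokens) {
        Deque<Integer> states = new ArrayDeque<>();
        states.push(0);                                   // start state
        int pos = 0;
        while (true) {
            int act = action[states.peek()][tokens.get(pos)];
            if (act == Integer.MAX_VALUE) {
                return true;                              // accept
            } else if (act > 0) {
                states.push(act);                         // shift
                pos++;
            } else if (act < 0) {
                int rule = -act;                          // reduce: pop the rule's RHS, then goto
                for (int i = 0; i < ruleLength[rule]; i++) states.pop();
                states.push(gotoTable[states.peek()][ruleLhs[rule]]);
            } else {
                return false;                             // error entry in the table
            }
        }
    }
}

A real implementation would also build tree nodes on a parallel stack during each reduce, which is where the parse tree/concrete syntax tree mentioned above comes from.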
When doing parsing tasks, I personally use the free GOLD Parsing System, which has a nice UI for creating and debugging grammars, and it also generates table files which can then be loaded and processed by an existing engine or by your own implementation (the file format of these CGT files is well documented).
As previously stated, you would always use a parser generator to produce an LALR parser. A few such tools for Java are:
SableCC (my personal favourite)
CUP
Beaver3
SJPT
Gold
Just want to mention that my project CookCC (http://coconut2015.github.io/cookcc/) is a LALR(1) parser generator + lexer generator (much like flex).
The unique feature of CookCC is that you can write your lexer and parser in Java using Java annotations. See the calculator example here: https://github.com/coconut2015/cookcc/blob/master/tests/javaap/calc/Calculator.java
I need to build a component which takes a few XML documents as input and checks the following kind of rules:
XML1:/bookstore/book[price>35.00] != null
and (XML2:/city/name = 'Montreal'
or XML3://customer[@language] contains 'en')
Basically my component should be able to:
substitute the XML tokens with the corresponding XML document (the part before the colon)
apply the XPath query to this XML document
check the XPath output against the expected result ("=", "!=", "contains")
follow the basic syntax ("and", "or" and parentheses)
tell whether the rule is true or false
Do you know of any library which could help me? Maybe JavaCC?
Thanks
For evaluating XPaths I recommend Jaxen.
Jaxen is an open source XPath library written in Java. It is adaptable to many different object models, including DOM, XOM, dom4j, and JDOM. It is also possible to write adapters that treat non-XML trees such as compiled Java byte code or Java beans as XML, thus enabling you to query these trees with XPath too.
The Java XPath API (Java 5 / javax.xml.xpath) is also an option, but I haven't tried it yet.
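For illustration, a hedged sketch using the standard javax.xml.xpath API to evaluate the first clause of the rule above (the file name xml1.xml is made up; the Jaxen version would look very similar):

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class RuleCheck {
    public static void main(String[] args) throws Exception {
        // XML1 in the rule above, loaded from a hypothetical file.
        Document xml1 = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse("xml1.xml");

        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList hits = (NodeList) xpath.evaluate(
                "/bookstore/book[price>35.00]", xml1, XPathConstants.NODESET);

        // Corresponds to "XML1:/bookstore/book[price>35.00] != null" in the rule.
        boolean clause1 = hits.getLength() > 0;
        System.out.println("clause1 = " + clause1);
    }
}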
Somebody on the JavaCC mailing list pointed me in the right direction by mentioning Schematron. That led me to Probatron, which seems to be the best Java implementation available.
The Schematron web site claims that the language supports "jump across links and between XML documents to check constraints", but it seems Probatron doesn't allow that. I may need to tweak it or find a trick for that (like building a temporary XML document containing all my source documents). Apart from that, it looks like Probatron is the right library for me.