I created a simple grammar in ANTLRWorks, generated code from it, and now have two files: grammarLexer.java and grammarParser.java. My goal is to create a mapping from my grammar to the Java language. What should I do next to achieve this?
Here is my grammar:
grammar grammar;
prog : ((FOR | WHILE | IF | PRINT | DECLARATION | ENTER | (WS* FUNCTION) | VARIABLE) | FUNCTION_DEC)+;
FOR : WS* 'for' WS+ VARIABLE WS+ DIGIT+ WS+ DIGIT+ WS* ENTER ( FOR | WHILE | IF | PRINT | DECLARATION | ENTER | (WS* FUNCTION) | INC_DEC )* WS* 'end' WS* ENTER;
WHILE : WS* 'while' WS+ (VARIABLE | DIGIT+) WS* EQ_OPERATOR WS* (VARIABLE | DIGIT+) WS* ENTER (FOR | WHILE | IF | PRINT | DECLARATION | ENTER | (WS* FUNCTION) | (WS* INC_DEC))* WS* 'end' WS* ENTER;
IF : WS* 'if' WS+ ( FUNCTION | VARIABLE | DIGIT+) WS* EQ_OPERATOR WS* (VARIABLE | DIGIT+) WS* ENTER (FOR | WHILE | IF | PRINT | DECLARATION | ENTER | (WS* FUNCTION) | INC_DEC)* ( WS* 'else' ENTER (FOR | WHILE | IF | PRINT | DECLARATION | ENTER | (WS* FUNCTION) | (WS* INC_DEC))*)? WS* 'end' WS* ENTER;
CHAR : ('a'..'z'|'A'..'Z')+;
EQ_OPERATOR : ('<' | '>' | '==' | '>=' | '<=' | '!=');
DIGIT : '0'..'9'+;
ENTER : '\n';
WS : ' ' | '\t';
PRINT_TEMPLATE : WS+ (('"' (CHAR | DIGIT | WS)* '"') | VARIABLE | DIGIT+ | FUNCTION | INC_DEC);
PRINT : WS* 'print' PRINT_TEMPLATE (',' PRINT_TEMPLATE)* WS* ENTER;
VARIABLE : CHAR(CHAR|DIGIT)*;
FUN_TEMPLATE : WS* (VARIABLE | DIGIT+ | '"' (CHAR | DIGIT | WS)* '"');
FUNCTION : VARIABLE '(' (FUN_TEMPLATE (WS* ',' FUN_TEMPLATE)*)? ')' WS* ENTER*;
DECLARATION : WS* VARIABLE WS* ('=' WS* (DIGIT+ | '"' (CHAR | DIGIT | WS)* '"' | VARIABLE)) WS* ENTER;
FUNCTION_DEC : WS*'def' WS* FUNCTION ( FOR | WHILE | IF | PRINT | DECLARATION | ENTER | (WS* FUNCTION) | INC_DEC )* WS* 'end' WS* ENTER*;
INC_DEC : VARIABLE ('--' | '++') WS* ENTER*;
Here is my Main class for parser:
import org.antlr.runtime.ANTLRStringStream;
import org.antlr.runtime.CommonToken;
import org.antlr.runtime.CommonTokenStream;
import org.antlr.runtime.Parser;
public class Main {
public static void main(String[] args) throws Exception {
// the input source
String source =
"for i 1 3\n " +
"printHi()\n " +
"end\n " +
"if fun(y, z) == 0\n " +
"end\n ";
// create an instance of the lexer
grammarLexer lexer = new grammarLexer(new ANTLRStringStream(source));
// wrap a token-stream around the lexer
CommonTokenStream tokens = new CommonTokenStream(lexer);
// traverse the tokens and print them to see if the correct tokens are created
int n = 1;
for(Object o : tokens.getTokens()) {
CommonToken token = (CommonToken)o;
System.out.println("token(" + n + ") = " + token.getText().replace("\n", "\\n"));
n++;
}
grammarParser parser = new grammarParser(tokens);
parser.prog();
}
}
As I already mentioned in the comments: you're overusing lexer rules. Think of lexer rules as the fundamental building blocks of your language, much like how you'd describe water in chemistry. You would not describe water like this:
WATER
: 'HHO'
;
I.e., as a single unit. Water should instead be described in terms of three separate atoms:
water
: Hydrogen Hydrogen Oxygen
;
Hydrogen : 'H';
Oxygen : 'O';
where Hydrogen and Oxygen are the fundamental building blocks (lexer rules) and water is the compound (the parser rule).
A good rule of thumb is that if you're creating lexer rules that consist of several other lexer rules, chances are there's something fishy in your grammar. This is not always the case, of course.
Let's say you want to parse the following input:
for i 1 3
print(i)
end
if fun(y, z) == 0
print('foo')
end
A grammar could look like this:
grammar T;
options {
output=AST;
}
tokens {
BLOCK;
CALL;
PARAMS;
}
// parser rules
parse
: block EOF!
;
block
: stat* -> ^(BLOCK stat*)
;
stat
: for_stat
| if_stat
| func_call
;
for_stat
: FOR^ ID expr expr block END!
;
if_stat
: IF^ expr block END!
;
expr
: eq_expr
;
eq_expr
: atom (('==' | '!=')^ atom)*
;
atom
: func_call
| INT
| ID
| STR
;
func_call
: ID '(' params ')' -> ^(CALL ID params)
;
params
: (expr (',' expr)*)? -> ^(PARAMS expr*)
;
// lexer rules
FOR : 'for';
END : 'end';
IF : 'if';
ID : ('a'..'z' | 'A'..'Z')+;
INT : '0'..'9'+;
STR : '\'' ~('\'')* '\'';
SP : (' ' | '\t' | '\r' | '\n')+ {skip();};
And if you now run this test class:
import org.antlr.runtime.*;
import org.antlr.runtime.tree.*;
import org.antlr.stringtemplate.*;
public class Main {
public static void main(String[] args) throws Exception {
String src =
"for i 1 3 \n" +
" print(i) \n" +
"end \n" +
" \n" +
"if fun(y, z) == 0 \n" +
" print('foo') \n" +
"end \n";
TLexer lexer = new TLexer(new ANTLRStringStream(src));
TParser parser = new TParser(new CommonTokenStream(lexer));
CommonTree tree = (CommonTree)parser.parse().getTree();
DOTTreeGenerator gen = new DOTTreeGenerator();
StringTemplate st = gen.toDOT(tree);
System.out.println(st);
}
}
you'll see DOT output printed to the console that describes the AST the parser built.
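Since your end goal is mapping the grammar to Java, a common next step is to walk the AST that parse() returns and emit code from it. Below is a minimal sketch (my addition, not generated code) that just dumps the tree depth-first; you would replace the println with code generation that switches on the node's token type (constants such as TParser.FOR or TParser.CALL are generated from the grammar above):
import org.antlr.runtime.tree.Tree;
public class TreeWalker {
    // Depth-first dump of the AST returned by parser.parse().getTree().
    public static void walk(Tree t, int indent) {
        for (int i = 0; i < indent; i++) {
            System.out.print("  ");
        }
        System.out.println(t.getText());
        for (int i = 0; i < t.getChildCount(); i++) {
            walk(t.getChild(i), indent + 1);
        }
    }
}
Calling TreeWalker.walk(tree, 0); right after parser.parse().getTree() in the Main class above prints the nested structure, which is a natural starting point for emitting Java source.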
My ANTLR grammar expects a FunctionCall, but in my example code for the compiler built by ANTLR I wrote a print command. Does someone know why this happens and how to fix it? The print command is RetroBox.show(); it should be recognised by going from block to blockStatement to statement to localFunctionCall to printCommand.
Here is my ANTLR grammar:
grammar Mars;
// ******************************LEXER BEGIN*****************************************
// Keywords
FUNC: 'func';
ENTRY: 'entry';
VARI: 'vari';
VARF: 'varf';
VARC: 'varc';
VARS: 'vars';
LET: 'let';
INCREMENTS: 'increments';
RETROBOX: 'retrobox';
SHOW: 'show';
// Literals
DECIMAL_LITERAL: ('0' | [1-9] (Digits? | '_'+ Digits)) [lL]?;
FLOAT_LITERAL: (Digits '.' Digits? | '.' Digits) ExponentPart? [fFdD]?
| Digits (ExponentPart [fFdD]? | [fFdD])
;
CHAR_LITERAL: '\'' (~['\\\r\n] | EscapeSequence) '\'';
STRING_LITERAL: '"' (~["\\\r\n] | EscapeSequence)* '"';
// Separators
ORBRACKET: '(';
CRBRACKET: ')';
OEBRACKET: '{';
CEBRACKET: '}';
SEMI: ';';
POINT: '.';
// Operators
ASSIGN: '=';
// Whitespace and comments
WS: [ \t\r\n\u000C]+ -> channel(HIDDEN);
COMMENT: '/*' .*? '*/' -> channel(HIDDEN);
LINE_COMMENT: '//' ~[\r\n]* -> channel(HIDDEN);
// Identifiers
IDENTIFIER: Letter LetterOrDigit*;
// Fragment rules
fragment ExponentPart
: [eE] [+-]? Digits
;
fragment EscapeSequence
: '\\' [btnfr"'\\]
| '\\' ([0-3]? [0-7])? [0-7]
| '\\' 'u'+ HexDigit HexDigit HexDigit HexDigit
;
fragment HexDigits
: HexDigit ((HexDigit | '_')* HexDigit)?
;
fragment HexDigit
: [0-9a-fA-F]
;
fragment Digits
: [0-9] ([0-9_]* [0-9])?
;
fragment LetterOrDigit
: Letter
| [0-9]
;
fragment Letter
: [a-zA-Z$_] // these are the "java letters" below 0x7F
| ~[\u0000-\u007F\uD800-\uDBFF] // covers all characters above 0x7F which are not a surrogate
| [\uD800-\uDBFF] [\uDC00-\uDFFF] // covers UTF-16 surrogate pairs encodings for U+10000 to U+10FFFF
;
// *******************************LEXER END****************************************
// *****************************PARSER BEGIN*****************************************
program
: mainfunction #Programm
| /*EMPTY*/ #Garnichts
;
mainfunction
: FUNC VARI ENTRY ORBRACKET CRBRACKET block #NormaleHauptmethode
;
block
: '{' blockStatement '}' #CodeBlock
| /*EMPTY*/ #EmptyCodeBlock
;
blockStatement
: statement* #Befehl
;
statement
: localVariableDeclaration
| localVariableInitialization
| localFunctionImplementation
| localFunctionCall
;
expression
: left=expression op='%'
| left=expression op=('*' | '/') right=expression
| left=expression op=('+' | '-') right=expression
| neg='-' right=expression
| number
| IDENTIFIER
| '(' expression ')'
;
number
: DECIMAL_LITERAL
| FLOAT_LITERAL
;
localFunctionImplementation
: FUNC primitiveType IDENTIFIER ORBRACKET CRBRACKET block #Methodenimplementierung
;
localFunctionCall
: IDENTIFIER ORBRACKET CRBRACKET SEMI #Methodenaufruf
| printCommand #RetroBoxShowCommand
;
printCommand
: RETROBOX POINT SHOW ORBRACKET params=primitiveLiteral CRBRACKET SEMI #PrintCommandWP
;
localVariableDeclaration
: varTypeDek=primitiveType IDENTIFIER SEMI #Variablendeklaration
;
localVariableInitialization
: varTypeIni=primitiveType IDENTIFIER ASSIGN varValue=primitiveLiteral SEMI #VariableninitKonst
| varTypeIni=primitiveType IDENTIFIER ASSIGN varValue=expression SEMI #VariableninitExpr
;
primitiveLiteral
: DECIMAL_LITERAL
| FLOAT_LITERAL
| STRING_LITERAL
| CHAR_LITERAL
;
primitiveType
: VARI
| VARC
| VARF
| VARS
;
// ******************************PARSER END****************************************
Here is my example code:
func vari entry()
{
RetroBox.show("Hallo"); //Should be recognised as print-command
}
And here is the AST printed by ANTLR (screenshot: AST from Compiler):
The problem is that your RETROBOX keyword is 'retrobox' but your example code has it typed as 'RetroBox'. Antlr parses 'RetroBox' as an identifier so the following '.' is unexpected.
Antlr should emit an error: "line 3:12 mismatched input '.' expecting '('".
Then it attempts to recover and continue parsing. It tries single token deletion (just ignoring the '.') and finds that that works... except the rule it now matches is #Methodenaufruf instead of #RetroBoxShowCommand.
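If you want this kind of mismatch to fail loudly instead of being silently "repaired" by single-token deletion, one option (a sketch of mine, not part of the original answer; MarsLexer and MarsParser are the class names ANTLR 4 generates from grammar Mars) is to install a BailErrorStrategy in a small driver:
import org.antlr.v4.runtime.*;
import org.antlr.v4.runtime.misc.ParseCancellationException;
public class MarsCheck {
    public static void main(String[] args) {
        String src = "func vari entry()\n{\nRetroBox.show(\"Hallo\");\n}\n";
        // CharStreams.fromString needs ANTLR 4.7+; older 4.x can use new ANTLRInputStream(src)
        MarsLexer lexer = new MarsLexer(CharStreams.fromString(src));
        MarsParser parser = new MarsParser(new CommonTokenStream(lexer));
        parser.setErrorHandler(new BailErrorStrategy()); // throw instead of recovering
        try {
            parser.program();
        } catch (ParseCancellationException e) {
            // With 'RetroBox' instead of 'retrobox' you land here rather than
            // getting a parse tree that silently matched #Methodenaufruf.
            System.err.println("syntax error: " + e.getCause());
        }
    }
}
Once the casing is fixed (retrobox.show(...) in the source, or a RETROBOX lexer rule that accepts both spellings), the call parses as #RetroBoxShowCommand.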
I have a grammar like this (anything which looks convoluted is a result of it being a subset of the actual grammar which contains more red herrings):
grammar Query;
startExpression
: WS? expression WS? EOF
;
expression
: maybeDefaultBooleanExpression
;
maybeDefaultBooleanExpression
: defaultBooleanExpression
| queryFragment
;
defaultBooleanExpression
: nested += queryFragment (WS nested += queryFragment)+
;
queryFragment
: unquotedQuery
| quotedQuery
;
unquotedQuery
: UNQUOTED
;
quotedQuery
: QUOTED
;
UNQUOTED
: UnquotedStartChar
UnquotedChar*
;
fragment
UnquotedStartChar
: EscapeSequence
| ~( ' ' | '\r' | '\t' | '\u000C' | '\n' | '\\' | ':'
| '"' | '\u201C' | '\u201D' // DoubleQuote
| '\'' | '\u2018' | '\u2019' // SingleQuote
| '(' | ')' | '[' | ']' | '{' | '}' | '~'
| '&' | '|' | '!' | '^' | '?' | '*' | '/' | '+' | '-' | '$' )
;
fragment
UnquotedChar
: EscapeSequence
| ~( ' ' | '\r' | '\t' | '\u000C' | '\n' | '\\' | ':'
| '"' | '\u201C' | '\u201D' // DoubleQuote
| '\'' | '\u2018' | '\u2019' // SingleQuote
| '(' | ')' | '[' | ']' | '{' | '}' | '~'
| '&' | '|' | '!' | '^' | '?' | '*' )
;
QUOTED
: '"'
QuotedChar*
'"'
;
fragment
QuotedChar
: ~( '\\'
| '"' | '\u201C' | '\u201D' // DoubleQuote
| '\r' | '\n' | '?' | '*' )
;
WS : ( ' ' | '\r' | '\t' | '\u000C' | '\n' )+;
If I call the lexer myself directly:
CharStream input = CharStreams.fromString("A \"");
QueryLexer lexer = new QueryLexer(input);
lexer.removeErrorListeners();
CommonTokenStream tokens = new CommonTokenStream(lexer);
System.out.println(tokens.LT(0));
System.out.println(tokens.LT(1));
System.out.println(tokens.LT(2));
System.out.println(tokens.LT(3));
I get:
java.lang.StringIndexOutOfBoundsException: String index out of range: 4
at java.lang.String.checkBounds(String.java:385)
at java.lang.String.<init>(String.java:462)
at org.antlr.v4.runtime.CodePointCharStream$CodePoint8BitCharStream.getText(CodePointCharStream.java:160)
at org.antlr.v4.runtime.Lexer.notifyListeners(Lexer.java:360)
at org.antlr.v4.runtime.Lexer.nextToken(Lexer.java:144)
at org.antlr.v4.runtime.BufferedTokenStream.fetch(BufferedTokenStream.java:169)
at org.antlr.v4.runtime.BufferedTokenStream.sync(BufferedTokenStream.java:152)
at org.antlr.v4.runtime.CommonTokenStream.LT(CommonTokenStream.java:100)
This makes some kind of sense, though I think a proper ANTLR exception might have been better.
What I really don't get, though, is that when I feed this through the complete parser:
QueryParser parser = new QueryParser(tokens);
parser.removeErrorListeners();
parser.addErrorListener(LoggingErrorListener.get());
parser.setErrorHandler(new BailErrorStrategy());
// Performance hack as per the ANTLR v4 FAQ
parser.getInterpreter().setPredictionMode(PredictionMode.SLL);
ParseTree expression;
try
{
expression = parser.startExpression();
}
catch (Exception e)
{
// It catches a StringIndexOutOfBoundsException here.
parser.reset();
parser.getInterpreter().setPredictionMode(PredictionMode.LL);
expression = parser.startExpression();
}
I get:
tokens = {org.antlr.v4.runtime.CommonTokenStream#1811}
channel = 0
tokenSource = {MyQueryLexer#1810}
tokens = {java.util.ArrayList#1816} size = 3
0 = {org.antlr.v4.runtime.CommonToken#1818} "[#0,0:0='A',<13>,1:0]"
1 = {org.antlr.v4.runtime.CommonToken#1819} "[#1,1:1=' ',<32>,1:1]"
2 = {org.antlr.v4.runtime.CommonToken#1820} "[#2,3:2='<EOF>',<-1>,1:3]"
p = 2
fetchedEOF = true
expression = {MyQueryParser$StartExpressionContext#1813} "[]"
children = {java.util.ArrayList#1827} size = 3
0 = {MyQueryParser$ExpressionContext#1831} "[87]"
1 = {org.antlr.v4.runtime.tree.TerminalNodeImpl#1832} " "
2 = {org.antlr.v4.runtime.tree.TerminalNodeImpl#1833} "<EOF>"
Here I would have expected to get a RecognitionException, but somehow the parsing succeeds, and is missing the invalid bit of the token data at the end.
Questions are:
(1) Is this by design?
(2) If so, how can I detect this and have it treated as a syntax error?
Further investigation
When I went looking for the culprit that was catching the StringIndexOutOfBoundsException and eating it, it turned out that it comes all the way out to our catch block. So I guess ANTLR never got a chance to finish building that last invalid token?
I'm not entirely sure what's supposed to happen, but I guess I expected that ANTLR would have caught it, created an invalid token and continued.
I then drilled further in and found that Lexer#nextToken() was throwing an exception, and the docs made it seem like that wasn't supposed to happen, so I ended up filing a ticket about it.
Until very recent builds, ANTLR 4's adaptive mechanism had the "feature" of being able to recover from single-missing-token and single-extra-token parses if there was only one viable alternative in that part of the token stream. Apparently that behavior has recently changed, so if you're using an older build, as I am, you'll still see the adaptive parsing. Maybe Parr and Harwell will fix that.
Like you, I recognized the need for a perfect input stream and zero parse errors, "overlooked" or not. To create a "strict parser", follow these steps:
Make a class called, perhaps, StrictErrorStrategy that inherits from/extends DefaultErrorStrategy. You need to override the Recover, RecoverInline, and Sync methods. The bottom line here is that we throw exceptions for anything that goes wrong and make no attempt to re-sync after an extra/missing token. Here's my C# code; your Java will look very similar (see the sketch after it):
public class StrictErrorStrategy : DefaultErrorStrategy
{
public override void Recover(Parser recognizer, RecognitionException e)
{
IToken token = recognizer.CurrentToken;
string message = string.Format("parse error at line {0}, position {1} right before {2} ", token.Line, token.Column, GetTokenErrorDisplay(token));
throw new Exception(message, e);
}
public override IToken RecoverInline(Parser recognizer)
{
IToken token = recognizer.CurrentToken;
string message = string.Format("parse error at line {0}, position {1} right before {2} ", token.Line, token.Column, GetTokenErrorDisplay(token));
throw new Exception(message, new InputMismatchException(recognizer));
}
public override void Sync(Parser recognizer) { /* do nothing to resync */}
}
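The same strategy in Java might look like this (a sketch based on the C# above; ParseCancellationException is used as the unchecked exception type, and the Java runtime's method names are lower-case):
import org.antlr.v4.runtime.*;
import org.antlr.v4.runtime.misc.ParseCancellationException;
public class StrictErrorStrategy extends DefaultErrorStrategy {
    @Override
    public void recover(Parser recognizer, RecognitionException e) {
        Token token = recognizer.getCurrentToken();
        String message = String.format("parse error at line %d, position %d right before %s",
                token.getLine(), token.getCharPositionInLine(), getTokenErrorDisplay(token));
        throw new ParseCancellationException(message, e);
    }
    @Override
    public Token recoverInline(Parser recognizer) throws RecognitionException {
        Token token = recognizer.getCurrentToken();
        String message = String.format("parse error at line %d, position %d right before %s",
                token.getLine(), token.getCharPositionInLine(), getTokenErrorDisplay(token));
        throw new ParseCancellationException(message, new InputMismatchException(recognizer));
    }
    @Override
    public void sync(Parser recognizer) { /* do nothing to re-sync */ }
}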
Make a new lexer that implements a single method:
public class StrictLexer : <YourGeneratedLexerNameHere>
{
public override void Recover(LexerNoViableAltException e)
{
string message = string.Format("lex error after token {0} at position {1}", _lasttoken.Text, e.StartIndex);
throw new ParseCanceledException(BasicEnvironment.SyntaxError);
}
}
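A Java counterpart of the lexer might look like this (a sketch; QueryLexer stands in for whatever lexer ANTLR generated for you):
import org.antlr.v4.runtime.*;
import org.antlr.v4.runtime.misc.ParseCancellationException;
public class StrictLexer extends QueryLexer {
    public StrictLexer(CharStream input) {
        super(input);
    }
    @Override
    public void recover(LexerNoViableAltException e) {
        // throw instead of skipping the offending character and carrying on
        throw new ParseCancellationException("lex error at index " + e.getStartIndex(), e);
    }
}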
Use your lexer and strategy:
AntlrInputStream inputStream = new AntlrInputStream(stream);
StrictLexer lexer = new StrictLexer(inputStream);
CommonTokenStream tokenStream = new CommonTokenStream(lexer);
LISBASICParser parser = new LISBASICParser(tokenStream);
parser.RemoveErrorListeners();
parser.ErrorHandler = new StrictErrorStrategy();
This works great; it's actual code from one of my projects that has a "zero-tolerance rule" about syntax errors. I got the code and ideas from Terence Parr's great book on ANTLR 4.
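For the Java side, the equivalent wiring (a sketch that reuses the StrictLexer and StrictErrorStrategy sketches above together with the Query grammar from the question) would be:
CharStream input = CharStreams.fromString("A \"");
StrictLexer lexer = new StrictLexer(input);
lexer.removeErrorListeners();
CommonTokenStream tokenStream = new CommonTokenStream(lexer);
QueryParser parser = new QueryParser(tokenStream);
parser.removeErrorListeners();
parser.setErrorHandler(new StrictErrorStrategy());
parser.startExpression(); // lexer and parser errors now surface as unchecked exceptions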
I am trying to match the string "match 'match content'" and extract the match content within the single quotes, but it throws the following exception:
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.antlr.runtime.Lexer.emit(Lexer.java:160)
at org.antlr.runtime.Lexer.nextToken(Lexer.java:91)
at org.antlr.runtime.BufferedTokenStream.fetch(BufferedTokenStream.java:133)
at org.antlr.runtime.BufferedTokenStream.sync(BufferedTokenStream.java:127)
at org.antlr.runtime.CommonTokenStream.consume(CommonTokenStream.java:70)
at org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:106)
I don't know why it throws an OOM exception, and I cannot find the error in my .g file.
My .g file:
grammar Contains;
options {
language=Java;
output=AST;
ASTLabelType=CommonTree;
backtrack=false;
k=3;
}
match
:
KW_MATCH SINGLE_QUOTE ( ~(SINGLE_QUOTE|'\\') | ('\\' .) )+ SINGLE_QUOTE
;
regexp
:
KW_REGEXP SINGLE_QUOTE RegexComponent+ SINGLE_QUOTE
;
range
:
KW_RANGE SINGLE_QUOTE left=(LPAREN | LSQUARE) start=Number COMMA end = Number right=(RPAREN | RSQUARE) SINGLE_QUOTE
;
DOT : '.'; // generated as a part of Number rule
COLON : ':' ;
COMMA : ',' ;
LPAREN : '(' ;
RPAREN : ')' ;
LSQUARE : '[' ;
RSQUARE : ']' ;
LCURLY : '{';
RCURLY : '}';
PLUS : '+';
MINUS : '-';
STAR : '*';
BITWISEOR : '|';
BITWISEXOR : '^';
QUESTION : '?';
DOLLAR : '$';
KW_RANGE : 'RANGE';
KW_REGEXP : 'REGEXP';
KW_MATCH : 'MATCH';
DOUBLE_QUOTE : '\"';
SINGLE_QUOTE : '\'';
fragment
Digit
:
'0'..'9'
;
fragment
Exponent
:
('e' | 'E') ( PLUS|MINUS )? (Digit)+
;
fragment
RegexComponent
: 'a'..'z' | 'A'..'Z' | '0'..'9' | '_'
| PLUS | STAR | QUESTION | MINUS | DOT
| LPAREN | RPAREN | LSQUARE | RSQUARE | LCURLY | RCURLY
| BITWISEXOR | BITWISEOR | DOLLAR | '\u0080'..'\u00FF' | '\u0400'..'\u04FF'
| '\u0600'..'\u06FF' | '\u0900'..'\u09FF' | '\u4E00'..'\u9FFF' | '\u0A00'..'\u0A7F'
;
Number
:
(Digit)+ ( DOT (Digit)* (Exponent)? | Exponent)?
;
WS : (' '|'\r'|'\t'|'\n'|'\u000C')* {$channel=HIDDEN;}
;
You could start by changing:
WS : (' '|'\r'|'\t'|'\n'|'\u000C')* {$channel=HIDDEN;}
;
to:
WS : (' '|'\r'|'\t'|'\n'|'\u000C')+ {$channel=HIDDEN;}
;
Your version matches an empty string, which might produce an infinite number of tokens (and thus the OOME).
Using ANTLR 3, I want to parse strings such as:
name IS NOT empty AND age NOT IN (14, 15)
And for this input, I want to get the following AST:
n0 [label="QUERY"];
n1 [label="AND"];
n2 [label="IS NOT"];
n3 [label="name"];
n4 [label="empty"];
n5 [label="NOT IN"];
n6 [label="age"];
n7 [label="14"];
n8 [label="15"];
n0 -> n1 // "QUERY" -> "AND"
n1 -> n2 // "AND" -> "IS NOT"
n2 -> n3 // "IS NOT" -> "name"
n2 -> n4 // "IS NOT" -> "empty"
n1 -> n5 // "AND" -> "NOT IN"
n5 -> n6 // "NOT IN" -> "age"
n5 -> n7 // "NOT IN" -> "14"
n5 -> n8 // "NOT IN" -> "15"
But my n2 and n5 nodes appear like this:
n2 [label="IS"];
n5 [label="NOT"];
I.e., only the first word appears. How can I join both tokens into one?
My Grammar is:
query
: expr EOF -> ^(QUERY expr)
;
expr
: logical_expr
;
logical_expr
: equality_expr (logical_op^ equality_expr)*
;
equality_expr
: ID equality_op+ atom -> ^(equality_op ID atom)
| '(' expr ')' -> ^('(' expr)
;
atom
: ID
| id_list
| Int
| Number
| String
| '*'
;
id_list
: '(' ID (',' ID)+ ')' -> ID+
| '(' Number (',' Number)* ')' -> Number+
| '(' String (',' String)* ')' -> String+
;
equality_op
: 'IN'
| 'IS'
| 'NOT'
| 'in'
| 'is'
| 'not'
;
logical_op
: 'AND'
| 'OR'
| 'and'
| 'or'
;
Number
: Int ('.' Digit*)?
;
ID
: ('a'..'z' | 'A'..'Z' | '_' | '.' | '-' | '*' | '/' | ':' | Digit)*
;
String
@after {
setText(getText().substring(1, getText().length()-1).replaceAll("\\\\(.)", "$1"));
}
: '"' (~('"' | '\\') | '\\' ('\\' | '"'))* '"'
| '\'' (~('\'' | '\\') | '\\' ('\\' | '\''))* '\''
;
Comment
: '//' ~('\r' | '\n')* {skip();}
| '/*' .* '*/' {skip();}
;
Space
: (' ' | '\t' | '\r' | '\n' | '\u000C') {skip();}
;
fragment Int
: '1'..'9' Digit*
| '0'
;
fragment Digit
: '0'..'9'
;
indexes
: ('[' expr ']')+ -> ^(INDEXES expr+)
;
Do something like this instead (check the inline comments I added):
tokens {
IS_NOT; // added
NOT_IN; // added
QUERY;
INDEXES;
}
query
: expr EOF -> ^(QUERY expr)
;
expr
: logical_expr
;
logical_expr
: equality_expr (logical_op^ equality_expr)*
;
equality_expr
: ID equality_op atom -> ^(equality_op ID atom) // changed equality_op+ to equality_op
| '(' expr ')' -> ^('(' expr)
;
atom
: ID
| id_list
| Int
| Number
| String
| '*'
;
id_list
: '(' ID (',' ID)+ ')' -> ID+
| '(' Number (',' Number)* ')' -> Number+
| '(' String (',' String)* ')' -> String+
;
equality_op
: IS NOT -> IS_NOT // added
| NOT IN -> NOT_IN // added
| IN
| IS
| NOT
;
logical_op
: AND
| OR
;
IS : 'IS' | 'is'; // added
NOT : 'NOT' | 'not'; // added
IN : 'IN' | 'in'; // added
AND : 'AND' | 'and'; // added
OR : 'OR' | 'or'; // added
Number
: Int ('.' Digit*)?
;
ID
: ('a'..'z' | 'A'..'Z' | '_' | '.' | '-' | '*' | '/' | ':' | Digit)+
;
String
@after {
setText(getText().substring(1, getText().length()-1).replaceAll("\\\\(.)", "$1"));
}
: '"' (~('"' | '\\') | '\\' ('\\' | '"'))* '"'
| '\'' (~('\'' | '\\') | '\\' ('\\' | '\''))* '\''
;
Comment
: '//' ~('\r' | '\n')* {skip();}
| '/*' .* '*/' {skip();}
;
Space
: (' ' | '\t' | '\r' | '\n' | '\u000C') {skip();}
;
fragment Int
: '1'..'9' Digit*
| '0'
;
fragment Digit
: '0'..'9'
;
indexes
: ('[' expr ']')+ -> ^(INDEXES expr+)
;
which produces an AST in which IS NOT and NOT IN each become a single IS_NOT / NOT_IN node.
Also, lexer rules should always match at least 1 character (I've mentioned this before to you). Your lexer rule ID possibly matched 0 chars.
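A quick way to check the result is a small ANTLR 3 driver like the one below (a sketch; LogicLexer and LogicParser are hypothetical generated class names, and it assumes your grammar keeps options { output=AST; ASTLabelType=CommonTree; }):
import org.antlr.runtime.ANTLRStringStream;
import org.antlr.runtime.CommonTokenStream;
import org.antlr.runtime.tree.CommonTree;
public class Main {
    public static void main(String[] args) throws Exception {
        String src = "name IS NOT empty AND age NOT IN (14, 15)";
        LogicLexer lexer = new LogicLexer(new ANTLRStringStream(src));
        LogicParser parser = new LogicParser(new CommonTokenStream(lexer));
        CommonTree tree = (CommonTree) parser.query().getTree();
        // prints the AST in LISP notation, e.g. (QUERY (AND (IS_NOT name empty) (NOT_IN age 14 15)))
        System.out.println(tree.toStringTree());
    }
}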
The problem is that equality_op+ will only have the value of the first match.
I see different workarounds: create specific rules if it is just for NOT or NOT IN, create a subrule, or collect the matches into a list variable as I do here:
equality_expr
: ID (full_op+=equality_op) + atom -> ^(full_op ID atom)
| '(' expr ')' -> ^('(' expr)
;
The following problem is different but it gave me the idea:
Antlr AST generating (possible) madness
I have the following grammar:
rule: q=QualifiedName {System.out.println($q.text);};
QualifiedName
:
i=Identifier { $i.setText($i.text + "_");}
('[' (QualifiedName+ | Integer)? ']')*
;
Integer
: Digit Digit*
;
fragment
Digit
: '0'..'9'
;
fragment
Identifier
: ( '_'
| '$'
| ('a'..'z' | 'A'..'Z')
)
('a'..'z' | 'A'..'Z' | '0'..'9' | '_' | '$')*
;
and the Java code:
ANTLRStringStream stream = new ANTLRStringStream("array1[array2[array3[index]]]");
TestLexer lexer = new TestLexer(stream);
CommonTokenStream tokens = new TokenRewriteStream(lexer);
TestParser parser = new TestParser(tokens);
try {
parser.rule();
} catch (RecognitionException e) {
e.printStackTrace();
}
For the input array1[array2[array3[index]]], I want to modify each identifier. I was expecting to see the output array1_[array2_[array3_[index_]]], but the output was the same as the input.
So the question is: why doesn't the setText() method work here?
EDIT:
I modified Bart's answer in the following way:
rule: q=qualifiedName {System.out.println($q.modified);};
qualifiedName returns [String modified]
:
Identifier
('[' (qualifiedName+ | Integer)? ']')*
{
$modified = $text + "_";
}
;
Identifier
: ( '_'
| '$'
| ('a'..'z' | 'A'..'Z')
)
('a'..'z' | 'A'..'Z' | '0'..'9' | '_' | '$')*
;
Integer
: Digit Digit*
;
fragment
Digit
: '0'..'9'
;
I want to modify each token matched by the rule qualifiedName. I tried the code above, and for the input array1[array2[array3[index]]] I was expecting the output array1[array2[array3[index_]_]_]_, but instead only the outermost match was modified: array1[array2[array3[index]]]_.
How can I solve this?
You can only use setText(...) once a token has been created. You're recursively invoking the rule inside the token and setting some other text, which won't work. You'll need to make QualifiedName a parser rule instead of a lexer rule, and remove the fragment before Identifier.
rule: q=qualifiedName {System.out.println($q.text);};
qualifiedName
:
i=Identifier { $i.setText($i.text + "_");}
('[' (qualifiedName+ | Integer)? ']')*
;
Identifier
: ( '_'
| '$'
| ('a'..'z' | 'A'..'Z')
)
('a'..'z' | 'A'..'Z' | '0'..'9' | '_' | '$')*
;
Integer
: Digit Digit*
;
fragment
Digit
: '0'..'9'
;
Now, it will print: array1_[array2_[array3_[index_]]] on the console.
EDIT
I have no idea why you'd want to do that, but it seems you're simply trying to rewrite ] into ]_, which can be done in the same way as I showed above:
qualifiedName
:
Identifier
('[' (qualifiedName+ | Integer)? t=']' {$t.setText("]_");} )*
;