public void onStart() throws Exception
{
// start up code here
}
public void onExecute() throws Exception
{
// execute code (set executeOnChange flag on inputs)
String tmp = getInASCIIString().getValue();
setOutCSVString(new BStatusString(AsciiToBinary(tmp)));
}
public void onStop() throws Exception
{
// shutdown code here
}
public static String AsciiToBinary(String asciiString) throws Exception
{
String padding = "00000000";
StringBuffer dataString = new StringBuffer();
StringBuffer outCSV = new StringBuffer();
StringTokenizer values = new StringTokenizer(asciiString, ",");
while (values.hasMoreTokens())
{
// pad to at least 8 bits, keep the low 8 bits, then reverse the bit order
String bin = padding + Integer.toBinaryString(Integer.parseInt(values.nextToken()));
String reversedString = new StringBuffer(bin.substring(bin.length() - 8)).reverse().toString();
dataString.append(reversedString);
}
try
{
// emit the index of each '1' bit as a comma-separated list
// (the scan starts at index 1, so a set bit at index 0 is never reported)
char[] charArray = dataString.toString().toCharArray();
for (int i = 1; i < charArray.length; i++)
{
if (charArray[i] == '1')
{
outCSV.append(i);
outCSV.append(',');
}
}
String result = outCSV.toString();
if (result.length() > 1)
{
// drop the trailing comma
return result.substring(0, result.length() - 1);
}
return result;
}
catch(StringIndexOutOfBoundsException e)
{
return "";
}
}
We use a Tridium Niagara system, which uses Java as the backend. This program seems to be randomly and occasionally throwing an error. I'm limited to the packages that are pre-installed, including: java.util, java.baja.sys, javax.baja.status, javax.baja.util, com.tridium.program -- which is why some of the code is written using the logic/functions that it does. Anyway, I cannot figure out why this is throwing an error. Any thoughts?
java.lang.StringIndexOutOfBoundsException: String index out of range: 15 at java.lang.String.charAt(String.java:658)
Full stack trace:
java.lang.StringIndexOutOfBoundsException: String index out of range: 15
at java.lang.String.charAt(String.java:658)
at com.korsengineering.niagara.conversion.BStatusNumericToStatusBoolean.changed(BStatusNumericToStatusBoolean.java:38)
at com.tridium.sys.schema.ComponentSlotMap.fireComponentEvent(ComponentSlotMap.java:1000)
at com.tridium.sys.schema.ComponentSlotMap.modified(ComponentSlotMap.java:902)
at com.tridium.sys.schema.ComplexSlotMap.modified(ComplexSlotMap.java:1538)
at com.tridium.sys.schema.ComplexSlotMap.setDouble(ComplexSlotMap.java:1254)
at javax.baja.sys.BComplex.setDouble(BComplex.java:666)
at com.tridium.sys.schema.ComplexSlotMap.copyFrom(ComplexSlotMap.java:294)
at javax.baja.sys.BComplex.copyFrom(BComplex.java:246)
at javax.baja.sys.BLink.propagatePropertyToProperty(BLink.java:593)
at javax.baja.sys.BLink.propagate(BLink.java:523)
at com.tridium.sys.engine.SlotKnobs.propagate(SlotKnobs.java:56)
at com.tridium.sys.schema.ComponentSlotMap.modified(ComponentSlotMap.java:899)
at com.tridium.sys.schema.ComplexSlotMap.modified(ComplexSlotMap.java:1538)
at com.tridium.sys.schema.ComplexSlotMap.setDouble(ComplexSlotMap.java:1254)
at javax.baja.sys.BComplex.setDouble(BComplex.java:666)
at javax.baja.status.BStatusNumeric.setValue(BStatusNumeric.java:66)
at com.tridium.kitControl.conversion.BStatusStringToStatusNumeric.calculate(BStatusStringToStatusNumeric.java:161)
at com.tridium.kitControl.conversion.BStatusStringToStatusNumeric.changed(BStatusStringToStatusNumeric.java:155)
at com.tridium.sys.schema.ComponentSlotMap.fireComponentEvent(ComponentSlotMap.java:1000)
at com.tridium.sys.schema.ComponentSlotMap.modified(ComponentSlotMap.java:902)
at com.tridium.sys.schema.ComplexSlotMap.modified(ComplexSlotMap.java:1538)
at com.tridium.sys.schema.ComplexSlotMap.setString(ComplexSlotMap.java:1335)
at javax.baja.sys.BComplex.setString(BComplex.java:668)
at com.tridium.sys.schema.ComplexSlotMap.copyFrom(ComplexSlotMap.java:295)
at javax.baja.sys.BComplex.copyFrom(BComplex.java:246)
at javax.baja.sys.BLink.propagatePropertyToProperty(BLink.java:593)
at javax.baja.sys.BLink.propagate(BLink.java:523)
at com.tridium.sys.engine.SlotKnobs.propagate(SlotKnobs.java:56)
at com.tridium.sys.schema.ComponentSlotMap.modified(ComponentSlotMap.java:899)
at com.tridium.sys.schema.ComplexSlotMap.modified(ComplexSlotMap.java:1538)
at com.tridium.sys.schema.ComplexSlotMap.setString(ComplexSlotMap.java:1335)
at javax.baja.sys.BComplex.setString(BComplex.java:668)
at com.tridium.sys.schema.ComplexSlotMap.copyFrom(ComplexSlotMap.java:295)
at javax.baja.sys.BComplex.copyFrom(BComplex.java:238)
at javax.baja.control.BControlPoint.doExecute(BControlPoint.java:271)
at auto.javax_baja_control_BStringWritable.invoke(AutoGenerated)
at com.tridium.sys.schema.ComponentSlotMap.invoke(ComponentSlotMap.java:1599)
at com.tridium.sys.engine.EngineUtil.doInvoke(EngineUtil.java:49)
at com.tridium.sys.engine.EngineManager.checkAsyncActions(EngineManager.java:364)
at com.tridium.sys.engine.EngineManager.execute(EngineManager.java:209)
at com.tridium.sys.engine.EngineManager$EngineThread.run(EngineManager.java:691)
Something is happening outside your Program Object, once you wire your outCSVString to whatever other wire sheet logic you are linking it to. This is more than likely due to your AsciiToBinary method returning an empty string that the rest of your logic can't deal with.
Wire outCSVString to a StringWritable object that has a StringCov history extension on it, and look for what value the history records at the same timestamp where you see the exception, to make sure your Program Object is generating the output you expect.
Regarding your AsciiToBinary method: the Tridium framework is limited in the modules it provides and what you can import; however, it does come packaged with the Apache ORO Regular Expression Tools, version 2.0.8. Search for "oro" in the Niagara Help file for more information.
In my experience, you will be more assured that outCSVString will always follow a desired format if you use a regular expression with a substitution to build the string, rather than tokenizing and iterating through the string yourself.
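For example, here is a hedged sketch against the bundled ORO 2.0.8 API (class names from org.apache.oro.text.regex; not tested on a Niagara station): validating the incoming string before tokenizing guarantees AsciiToBinary only ever sees well-formed input, so the output format stays predictable.
import org.apache.oro.text.regex.MalformedPatternException;
import org.apache.oro.text.regex.Pattern;
import org.apache.oro.text.regex.Perl5Compiler;
import org.apache.oro.text.regex.Perl5Matcher;

// Returns true when the input is a comma-separated list of 1-3 digit
// numbers (e.g. "12,0,255"); reject anything else before converting.
public static boolean isCsvOfBytes(String input) throws MalformedPatternException
{
    Perl5Compiler compiler = new Perl5Compiler();
    Perl5Matcher matcher = new Perl5Matcher();
    Pattern pattern = compiler.compile("^\\d{1,3}(,\\d{1,3})*$");
    return matcher.matches(input, pattern);
}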
I am getting the strangest problem that I just can't wrap my head around. My web API, which uses Spring Boot and PostgreSQL/PostGIS, is getting inconsistent errors when trying to read geometries from the database. I have been using this code (with occasional modifications, of course) for many, many years, and this just started happening with my last release.
I am using openjdk 11.0.4 2019-07-16 on Ubuntu 18.04. Relevant pom.xml entries:
<dependency>
    <groupId>org.locationtech.jts</groupId>
    <artifactId>jts-core</artifactId>
    <version>1.16.1</version>
</dependency>
I am getting various errors of the following types from API calls ...
e.g. hexstring: 0101000020E6100000795C548B88184FC0206118B0E42750C0
org.locationtech.jts.io.ParseException: Unknown WKB type 0
at org.locationtech.jts.io.WKBReader.readGeometry(WKBReader.java:235)
at org.locationtech.jts.io.WKBReader.read(WKBReader.java:156)
at org.locationtech.jts.io.WKBReader.read(WKBReader.java:137)
at net.crowmagnumb.database.RecordSet.getGeom(RecordSet.java:1073)
e.g. hexstring: 0101000020E61000000080FB3F354F5AC0F3D30EF2C0773540
java.lang.ArrayIndexOutOfBoundsException: arraycopy: length -1 is negative
at java.base/java.lang.System.arraycopy(Native Method)
at org.locationtech.jts.io.ByteArrayInStream.read(ByteArrayInStream.java:59)
at org.locationtech.jts.io.ByteOrderDataInStream.readDouble(ByteOrderDataInStream.java:83)
at org.locationtech.jts.io.WKBReader.readCoordinate(WKBReader.java:378)
at org.locationtech.jts.io.WKBReader.readCoordinateSequence(WKBReader.java:345)
at org.locationtech.jts.io.WKBReader.readPoint(WKBReader.java:256)
at org.locationtech.jts.io.WKBReader.readGeometry(WKBReader.java:214)
at org.locationtech.jts.io.WKBReader.read(WKBReader.java:156)
at org.locationtech.jts.io.WKBReader.read(WKBReader.java:137)
at net.crowmagnumb.database.RecordSet.getGeom(RecordSet.java:1073)
e.g. hexstring: 0101000020E610000066666666669663C00D96D7371DD63440
org.locationtech.jts.io.ParseException: Unknown WKB type 326
at org.locationtech.jts.io.WKBReader.readGeometry(WKBReader.java:235)
at org.locationtech.jts.io.WKBReader.read(WKBReader.java:156)
at org.locationtech.jts.io.WKBReader.read(WKBReader.java:137)
at net.crowmagnumb.database.RecordSet.getGeom(RecordSet.java:1073)
The relevant parts of my RecordSet code are below (so line numbers will not match the stack traces above).
public class RecordSet {
private static final Logger logger = LoggerFactory.getLogger(RecordSet.class);
private static WKBReader wkbReader;
private static WKBReader getWKBReader() {
if (wkbReader == null) {
wkbReader = new WKBReader();
}
return wkbReader;
}
private static byte[] hexStringToByteArray(final String hex) {
if (StringUtils.isBlank(hex)) {
return null;
}
int len = hex.length();
byte[] data = new byte[len / 2];
for (int i = 0; i < len; i += 2) {
data[i / 2] = (byte) ((Character.digit(hex.charAt(i), 16) << 4) + Character.digit(hex.charAt(i + 1), 16));
}
return data;
}
public static Geometry getGeom(final String geomStr) {
byte[] byteArray = hexStringToByteArray(geomStr);
if (byteArray == null) {
return null;
}
try {
return getWKBReader().read(byteArray);
} catch (Throwable ex) {
logger.error(String.format("Error parsing geometry [%s]", geomStr), ex);
return null;
}
}
}
So the extreme weirdness is that:
It doesn't happen consistently. The exact same API call works fine when I retry it.
The reported hex strings in the exception messages are perfectly correct! If I run them through the same code in a test program, they give the correct answer and no exception.
Again, all of the above reported hex strings that led to errors in production API calls are valid representations of POINT geometries.
Is this some weird potential memory leak issue?
Maybe this should have been obvious, but in my defense I have been using the above code for many, many years (as I said) without issue, so I think I just overlooked the obvious. Anyway, it suddenly dawned on me: should I be reusing the same WKBReader over and over again in a multi-threaded environment? Well, turns out no!
If I just create a new WKBReader() with each call (instead of getting a single static WKBReader) it works fine. So there is the source of my "memory leak". Self-caused!
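For reference, a minimal sketch of the fix, keeping the rest of RecordSet as shown above: drop the shared static instance and construct a reader per call, since WKBReader keeps internal parse state and is not safe to share across threads.
import org.locationtech.jts.geom.Geometry;
import org.locationtech.jts.io.WKBReader;

public static Geometry getGeom(final String geomStr) {
    byte[] byteArray = hexStringToByteArray(geomStr);
    if (byteArray == null) {
        return null;
    }
    try {
        // a fresh reader per call: WKBReader is not thread-safe
        return new WKBReader().read(byteArray);
    } catch (Throwable ex) {
        logger.error(String.format("Error parsing geometry [%s]", geomStr), ex);
        return null;
    }
}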
My program needs to index, with Lucene (4.10), unstructured documents whose contents can be anything. So my custom Analyzer makes use of the ClassicTokenizer to first tokenize the documents.
Yet it does not completely fit my needs because, for example, I want to be able to search for parts of an email address or parts of a serial number (which can also be a telephone number or anything containing numbers) that can be written as 1234.5678.9012 or 1234-5678-9012, depending on who wrote the document being indexed.
Since the ClassicTokenizer recognizes email addresses and treats points followed by numbers as a single token, the generated index includes email addresses as a whole and serial numbers as a whole, whereas I would also like to break those tokens into pieces to enable the user to later search for those pieces.
Let me give a concrete example: if the input document features xyz@gmail.com, the ClassicTokenizer recognizes it as an email address and consequently tokenizes it as xyz@gmail.com. If the user searches for xyz they will find nothing, whereas a search for xyz@gmail.com will yield the expected result.
After reading lots of blog posts and SO questions I came to the conclusion that one solution could be to use a TokenFilter that would split the email into its pieces (one on each side of the @ sign). Please note that I don't want to create my own tokenizer with JFlex and co.
Dealing with email, I wrote the following code, inspired by the SynonymFilter from Lucene in Action, 2nd Edition:
public class SymbolSplitterFilter extends TokenFilter {
private final CharTermAttribute termAtt;
private final PositionIncrementAttribute posIncAtt;
private final Stack<String> termStack;
private AttributeSource.State current;
public SymbolSplitterFilter(TokenStream in) {
super(in);
termStack = new Stack<>();
termAtt = addAttribute(CharTermAttribute.class);
posIncAtt = addAttribute(PositionIncrementAttribute.class);
}
@Override
public boolean incrementToken() throws IOException {
if (!input.incrementToken()) {
return false;
}
final String currentTerm = termAtt.toString();
System.err.println("The original word was " + termAtt.toString());
final int bufferLength = termAtt.length();
if (bufferLength > 1 && currentTerm.indexOf("@") > 0) { // there must be something more than just @
// If this is the first pass we fill in the stack with the terms
if (termStack.isEmpty()) {
// We split the token abc@cd.com into abc and cd.com
termStack.addAll(Arrays.asList(currentTerm.split("@")));
// Now we have the constituting terms of the email in the stack
System.err.println("The terms on the stacks are ");
for (int i = 0; i < termStack.size(); i++) {
System.err.println(termStack.get(i));
/** The terms on the stacks are
* xyz
* gmail.com
*/
}
// I am not sure it is the right place for this.
current = captureState();
} else {
// This part seems to never be reached!
// We add the constituents terms as tokens.
String part = termStack.pop();
System.err.println("Current part is " + part);
restoreState(current);
termAtt.setEmpty().append(part);
posIncAtt.setPositionIncrement(0);
}
}
System.err.println("In the end we have " + termAtt.toString());
// In the end we have xyz@gmail.com
return true;
}
}
Please note: I just started with the email case, which is why I only showed that part of the code, but I'll have to enhance it to also manage serial numbers (as explained earlier).
However, the stack is never processed. Indeed, I can't figure out how the incrementToken method works (although I read this SO question) and when it processes the given token from the TokenStream.
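To see what actually comes out of the analysis chain, a small debugging helper like this sketch can help (Lucene 4.x API; the field name "f" is arbitrary):
import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// Prints every token the analyzer emits, one per line.
public static void printTokens(Analyzer analyzer, String text) throws IOException {
    try (TokenStream ts = analyzer.tokenStream("f", new StringReader(text))) {
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();                   // required before the first incrementToken()
        while (ts.incrementToken()) { // returns false once the stream is exhausted
            System.out.println(term.toString());
        }
        ts.end();
    }
}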
Finally, the goal I want to achieve is: for xyz@gmail.com as input text, I want to generate the following subtokens:
xyz@gmail.com
xyz
gmail.com
Any help appreciated,
Your problem is that the input TokenStream is already exhausted by the time your stack has been filled, so input.incrementToken() returns false on the next call.
You should check whether the stack is filled before incrementing the input. Like so:
public final class SymbolSplitterFilter extends TokenFilter {
private final CharTermAttribute termAtt;
private final PositionIncrementAttribute posIncAtt;
private final Stack<String> termStack;
private AttributeSource.State current;
private final TypeAttribute typeAtt;
public SymbolSplitterFilter(TokenStream in)
{
super(in);
termStack = new Stack<>();
termAtt = addAttribute(CharTermAttribute.class);
posIncAtt = addAttribute(PositionIncrementAttribute.class);
typeAtt = addAttribute(TypeAttribute.class);
}
@Override
public boolean incrementToken() throws IOException
{
if (!this.termStack.isEmpty()) {
String part = termStack.pop();
restoreState(current);
termAtt.setEmpty().append(part);
posIncAtt.setPositionIncrement(0);
return true;
} else if (!input.incrementToken()) {
return false;
} else {
final String currentTerm = termAtt.toString();
final int bufferLength = termAtt.length();
if (bufferLength > 1 && currentTerm.indexOf("@") > 0) { // there must be something more than just @
if (termStack.isEmpty()) {
termStack.addAll(Arrays.asList(currentTerm.split("#")));
current = captureState();
}
}
return true;
}
}
}
Note that you might also want to correct your offsets and change the order of your tokens, as the test shows for the resulting tokens:
public class SymbolSplitterFilterTest extends BaseTokenStreamTestCase {
@Test
public void testSomeMethod() throws IOException
{
Analyzer analyzer = this.getAnalyzer();
assertAnalyzesTo(analyzer, "hey xyz@example.com",
new String[]{"hey", "xyz@example.com", "example.com", "xyz"},
new int[]{0, 4, 4, 4},
new int[]{3, 19, 19, 19},
new String[]{"word", "word", "word", "word"},
new int[]{1, 1, 0, 0}
);
}
private Analyzer getAnalyzer()
{
return new Analyzer()
{
@Override
protected Analyzer.TokenStreamComponents createComponents(String fieldName)
{
Tokenizer tokenizer = new MockTokenizer(MockTokenizer.WHITESPACE, false);
SymbolSplitterFilter testFilter = new SymbolSplitterFilter(tokenizer);
return new Analyzer.TokenStreamComponents(tokenizer, testFilter);
}
};
}
}
I use Spark 2.0.1.
I am trying to find distinct values in a JavaRDD as below
JavaRDD<String> distinct_installedApp_Ids = filteredInstalledApp_Ids.distinct();
I see that this line is throwing the below exception
Exception in thread "main" java.lang.StackOverflowError
at org.apache.spark.rdd.RDD.checkpointRDD(RDD.scala:226)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:84)
at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:84)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at org.apache.spark.rdd.UnionRDD.getPartitions(UnionRDD.scala:84)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:84)
at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:84)
..........
The same stacktrace is repeated again and again.
The input filteredInstalledApp_Ids is large, with millions of records. Will the issue be the number of records, or is there an efficient way to find distinct values in a JavaRDD? Any help would be much appreciated. Thanks in advance. Cheers.
Edit 1:
Adding the filter method
JavaRDD<String> filteredInstalledApp_Ids = installedApp_Ids
.filter(new Function<String, Boolean>() {
@Override
public Boolean call(String v1) throws Exception {
return v1 != null;
}
}).cache();
Edit 2:
Added the method used to generate installedApp_Ids
public JavaRDD<String> getIdsWithInstalledApps(String inputPath, JavaSparkContext sc,
JavaRDD<String> installedApp_Ids) {
JavaRDD<String> appIdsRDD = sc.textFile(inputPath);
try {
JavaRDD<String> appIdsRDD1 = appIdsRDD.map(new Function<String, String>() {
@Override
public String call(String t) throws Exception {
String delimiter = "\t";
String[] id_Type = t.split(delimiter);
StringBuilder temp = new StringBuilder(id_Type[1]);
if ((temp.indexOf("\"")) != -1) {
String escaped = temp.toString().replace("\\", "");
escaped = escaped.replace("\"{", "{");
escaped = escaped.replace("}\"", "}");
temp = new StringBuilder(escaped);
}
// To remove empty character in the beginning of a
// string
JSONObject wholeventObj = new JSONObject(temp.toString());
JSONObject eventJsonObj = wholeventObj.getJSONObject("eventData");
int appType = eventJsonObj.getInt("appType");
if (appType == 1) {
try {
return (String.valueOf(appType));
} catch (JSONException e) {
return null;
}
}
return null;
}
}).cache();
if (installedApp_Ids != null)
return sc.union(installedApp_Ids, appIdsRDD1);
else
return appIdsRDD1;
} catch (Exception e) {
e.printStackTrace();
}
return null;
}
I assume the main dataset is in inputPath. It appears to be a tab-separated file (the original code splits lines on "\t") with JSON-encoded values.
I think you could make your code a bit simpler by combining Spark SQL's DataFrames with the from_json function. I'm using Scala and leave converting the code to Java as a home exercise :)
The lines where you load the inputPath text file, and the line parsing itself, can be as simple as the following:
import org.apache.spark.sql.SparkSession
val spark: SparkSession = ...
val dataset = spark.read.option("sep", "\t").csv(inputPath)
You can display the content using the show operator.
dataset.show(truncate = false)
You should see the JSON-encoded lines.
It appears that the JSON lines contain an eventData object with an appType field.
val jsons = dataset.withColumn("asJson", from_json(...))
See functions object for reference.
With JSON lines, you can select the fields of your interest:
val apptypes = jsons.select("asJson.eventData.appType")
And then union it with installedApp_Ids.
I'm sure the code gets easier to read (and hopefully to write, too). The migration will give you extra optimizations that you may or may not be able to write yourself using the assembler-like RDD API.
And the best part is that filtering out nulls is as simple as using the na operator, which gives DataFrameNaFunctions like drop. I'm sure you'll like them.
It does not necessarily answer your initial question, but this java.lang.StackOverflowError might go away just by doing the code migration, and the code gets easier to maintain, too.
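For reference, here is a rough Java rendering of the sketch above, under the same assumptions: tab-separated input with the JSON in the second column (which Spark names _c1 by default) and a from_json schema covering only eventData.appType. Note that from_json arrived in Spark 2.1, so this needs an upgrade from 2.0.1.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.from_json;

public class InstalledAppIds {
    public static Dataset<Row> appTypes(SparkSession spark, String inputPath) {
        // the schema is an assumption based on the fields the original map reads
        StructType schema = new StructType()
            .add("eventData", new StructType().add("appType", DataTypes.IntegerType));
        Dataset<Row> dataset = spark.read()
            .option("sep", "\t")  // the original code splits lines on tabs
            .csv(inputPath);
        // parse the JSON column; rows that fail to parse become null
        Dataset<Row> jsons = dataset.withColumn("asJson", from_json(col("_c1"), schema));
        // select the field of interest and drop nulls, like the original null filter
        return jsons.select(col("asJson.eventData.appType").as("appType")).na().drop();
    }
}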
The default behavior when the parser doesn't know what to do is to print messages to the terminal like:
line 1:23 missing DECIMAL at '}'
This is a good message, but in the wrong place. I'd rather receive this as an exception.
I've tried using the BailErrorStrategy, but this throws a ParseCancellationException without a message (caused by an InputMismatchException, also without a message).
Is there a way I can get it to report errors via exceptions while retaining the useful info in the message?
Here's what I'm really after--I typically use actions in rules to build up an object:
dataspec returns [DataExtractor extractor]
@init {
DataExtractorBuilder builder = new DataExtractorBuilder(layout);
}
@after {
$extractor = builder.create();
}
: first=expr { builder.addAll($first.values); } (COMMA next=expr { builder.addAll($next.values); })* EOF
;
expr returns [List<ValueExtractor> values]
: a=atom { $values = Arrays.asList($a.val); }
| fields=fieldrange { $values = values($fields.fields); }
| '%' { $values = null; }
| ASTERISK { $values = values(layout); }
;
Then when I invoke the parser I do something like this:
public static DataExtractor create(String dataspec) {
CharStream stream = new ANTLRInputStream(dataspec);
DataSpecificationLexer lexer = new DataSpecificationLexer(stream);
CommonTokenStream tokens = new CommonTokenStream(lexer);
DataSpecificationParser parser = new DataSpecificationParser(tokens);
return parser.dataspec().extractor;
}
All I really want is
for the dataspec() call to throw an exception (ideally a checked one) when the input can't be parsed
for that exception to have a useful message and provide access to the line number and position where the problem was found
Then I'll let that exception bubble up the call stack to wherever is best suited to present a useful message to the user -- the same way I'd handle a dropped network connection, reading a corrupt file, etc.
I did see that actions are now considered "advanced" in ANTLR4, so maybe I'm going about things in a strange way, but I haven't looked into what the "non-advanced" way to do this would be since this way has been working well for our needs.
Since I've had a little bit of a struggle with the two existing answers, I'd like to share the solution I ended up with.
First of all I created my own version of an ErrorListener like Sam Harwell suggested:
public class ThrowingErrorListener extends BaseErrorListener {
public static final ThrowingErrorListener INSTANCE = new ThrowingErrorListener();
@Override
public void syntaxError(Recognizer<?, ?> recognizer, Object offendingSymbol, int line, int charPositionInLine, String msg, RecognitionException e)
throws ParseCancellationException {
throw new ParseCancellationException("line " + line + ":" + charPositionInLine + " " + msg);
}
}
Note the use of a ParseCancellationException instead of a RecognitionException since the DefaultErrorStrategy would catch the latter and it would never reach your own code.
Creating a whole new ErrorStrategy like Brad Mace suggested is not necessary since the DefaultErrorStrategy produces pretty good error messages by default.
I then use the custom ErrorListener in my parsing function:
public static String parse(String text) throws ParseCancellationException {
MyLexer lexer = new MyLexer(new ANTLRInputStream(text));
lexer.removeErrorListeners();
lexer.addErrorListener(ThrowingErrorListener.INSTANCE);
CommonTokenStream tokens = new CommonTokenStream(lexer);
MyParser parser = new MyParser(tokens);
parser.removeErrorListeners();
parser.addErrorListener(ThrowingErrorListener.INSTANCE);
ParserRuleContext tree = parser.expr();
MyParseRules extractor = new MyParseRules();
return extractor.visit(tree);
}
(For more information on what MyParseRules does, see here.)
This will give you the same error messages as would be printed to the console by default, only in the form of proper exceptions.
When you use the DefaultErrorStrategy or the BailErrorStrategy, the ParserRuleContext.exception field is set for any parse tree node in the resulting parse tree where an error occurred. The documentation for this field reads (for people that don't want to click an extra link):
The exception which forced this rule to return. If the rule successfully completed, this is null.
Edit: If you use DefaultErrorStrategy, the parse context exception will not be propagated all the way out to the calling code, so you'll be able to examine the exception field directly. If you use BailErrorStrategy, the ParseCancellationException thrown by it will include a RecognitionException if you call getCause().
if (pce.getCause() instanceof RecognitionException) {
RecognitionException re = (RecognitionException)pce.getCause();
ParserRuleContext context = (ParserRuleContext)re.getCtx();
}
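Putting that together with the parser from the question, a minimal sketch (using the question's DataSpecificationParser; the choice of wrapping exception is yours, not something ANTLR prescribes):
import org.antlr.v4.runtime.*;
import org.antlr.v4.runtime.misc.ParseCancellationException;

public static DataExtractor create(String dataspec) {
    DataSpecificationLexer lexer = new DataSpecificationLexer(new ANTLRInputStream(dataspec));
    CommonTokenStream tokens = new CommonTokenStream(lexer);
    DataSpecificationParser parser = new DataSpecificationParser(tokens);
    parser.setErrorHandler(new BailErrorStrategy()); // bail on the first syntax error
    try {
        return parser.dataspec().extractor;
    } catch (ParseCancellationException pce) {
        if (pce.getCause() instanceof RecognitionException) {
            RecognitionException re = (RecognitionException) pce.getCause();
            Token offending = re.getOffendingToken();
            // surface line/position info in whatever exception type suits the caller
            throw new IllegalArgumentException("syntax error at line " + offending.getLine()
                + ":" + offending.getCharPositionInLine() + " near '" + offending.getText() + "'", re);
        }
        throw pce;
    }
}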
Edit 2: Based on your other answer, it appears that you don't actually want an exception, but what you want is a different way to report the errors. In that case, you'll be more interested in the ANTLRErrorListener interface. You want to call parser.removeErrorListeners() to remove the default listener that writes to the console, and then call parser.addErrorListener(listener) for your own special listener. I often use the following listener as a starting point, as it includes the name of the source file with the messages.
public class DescriptiveErrorListener extends BaseErrorListener {
public static DescriptiveErrorListener INSTANCE = new DescriptiveErrorListener();
// flag referenced below; declared here so the class compiles as shown
private static final boolean REPORT_SYNTAX_ERRORS = true;
@Override
public void syntaxError(Recognizer<?, ?> recognizer, Object offendingSymbol,
int line, int charPositionInLine,
String msg, RecognitionException e)
{
if (!REPORT_SYNTAX_ERRORS) {
return;
}
String sourceName = recognizer.getInputStream().getSourceName();
if (!sourceName.isEmpty()) {
sourceName = String.format("%s:%d:%d: ", sourceName, line, charPositionInLine);
}
System.err.println(sourceName+"line "+line+":"+charPositionInLine+" "+msg);
}
}
With this class available, you can wire it up as follows.
lexer.removeErrorListeners();
lexer.addErrorListener(DescriptiveErrorListener.INSTANCE);
parser.removeErrorListeners();
parser.addErrorListener(DescriptiveErrorListener.INSTANCE);
A much more complicated example of an error listener that I use to identify ambiguities which render a grammar non-SLL is the SummarizingDiagnosticErrorListener class in TestPerformance.
What I've come up with so far is based on extending DefaultErrorStrategy and overriding its reportXXX methods (though it's entirely possible I'm making things more complicated than necessary):
public class ExceptionErrorStrategy extends DefaultErrorStrategy {
@Override
public void recover(Parser recognizer, RecognitionException e) {
throw e;
}
@Override
public void reportInputMismatch(Parser recognizer, InputMismatchException e) throws RecognitionException {
String msg = "mismatched input " + getTokenErrorDisplay(e.getOffendingToken());
msg += " expecting one of "+e.getExpectedTokens().toString(recognizer.getTokenNames());
RecognitionException ex = new RecognitionException(msg, recognizer, recognizer.getInputStream(), recognizer.getContext());
ex.initCause(e);
throw ex;
}
@Override
public void reportMissingToken(Parser recognizer) {
beginErrorCondition(recognizer);
Token t = recognizer.getCurrentToken();
IntervalSet expecting = getExpectedTokens(recognizer);
String msg = "missing "+expecting.toString(recognizer.getTokenNames()) + " at " + getTokenErrorDisplay(t);
throw new RecognitionException(msg, recognizer, recognizer.getInputStream(), recognizer.getContext());
}
}
This throws exceptions with useful messages, and the line and position of the problem can be obtained from either the offending token, or, if that's not set, from the current token, using ((Parser) re.getRecognizer()).getCurrentToken() on the RecognitionException.
I'm fairly happy with how this is working, though having six reportX methods to override makes me think there's a better way.
For anyone interested, here's the ANTLR4 C# equivalent of Sam Harwell's answer:
using System;
using System.IO;
using Antlr4.Runtime;
public class DescriptiveErrorListener : BaseErrorListener, IAntlrErrorListener<int>
{
public static DescriptiveErrorListener Instance { get; } = new DescriptiveErrorListener();
public void SyntaxError(TextWriter output, IRecognizer recognizer, int offendingSymbol, int line, int charPositionInLine, string msg, RecognitionException e) {
if (!REPORT_SYNTAX_ERRORS) return;
string sourceName = recognizer.InputStream.SourceName;
// never ""; might be "<unknown>" == IntStreamConstants.UnknownSourceName
sourceName = $"{sourceName}:{line}:{charPositionInLine}";
Console.Error.WriteLine($"{sourceName}: line {line}:{charPositionInLine} {msg}");
}
public override void SyntaxError(TextWriter output, IRecognizer recognizer, IToken offendingSymbol, int line, int charPositionInLine, string msg, RecognitionException e) {
this.SyntaxError(output, recognizer, 0, line, charPositionInLine, msg, e);
}
static readonly bool REPORT_SYNTAX_ERRORS = true;
}
lexer.RemoveErrorListeners();
lexer.AddErrorListener(DescriptiveErrorListener.Instance);
parser.RemoveErrorListeners();
parser.AddErrorListener(DescriptiveErrorListener.Instance);
For people who use Python, here is the solution in Python 3 based on Mouagip's answer.
First, define a custom error listener:
from antlr4.error.ErrorListener import ErrorListener
from antlr4.error.Errors import ParseCancellationException
class ThrowingErrorListener(ErrorListener):
def syntaxError(self, recognizer, offendingSymbol, line, column, msg, e):
ex = ParseCancellationException(f'line {line}: {column} {msg}')
ex.line = line
ex.column = column
raise ex
Then set this to lexer and parser:
lexer = MyScriptLexer(script)
lexer.removeErrorListeners()
lexer.addErrorListener(ThrowingErrorListener())
token_stream = CommonTokenStream(lexer)
parser = MyScriptParser(token_stream)
parser.removeErrorListeners()
parser.addErrorListener(ThrowingErrorListener())
tree = parser.script()
I am trying to create a sort of simple GUI where I'm trying to save a couple of Strings, some doubles, and one int. I'm using the basic OOP property of inheritance: I created a class Autos which is essentially the superclass.
The problem seems to arise in a method called "cargarDatosAutos" in my GUI class; here is the code:
private void cargarDatosAutos()
{
regInt = at.numRegistros(); // number of registry
if (regInt != -1)
{
curInt = 0;
ats = new AutosRentables[regInt];
try
{
RandomAccessFile f = new RandomAccessFile("Autos.txt", "rw");
at.cargarDatos(f, ats, regInt); // method in subclass
f.close();
}
catch (IOException ex)
{
Logger.getLogger(Interfaz3.class.getName()).log(Level.SEVERE, null, ex);
}
this.mostrarAutos(ats[0]); // shows data
}
}
Here are the errors:
4-Dec-2011 11:35:20 PM rent_autos.Interfaz3 cargarDatosAutos
SEVERE: null
java.io.EOFException
at java.io.RandomAccessFile.readChar(RandomAccessFile.java:695)
at rent_autos.Autos.leerModelo(Autos.java:139)
at rent_autos.AutosRentables.cargarDatos(AutosRentables.java:84)
at rent_autos.Interfaz3.cargarDatosAutos(Interfaz3.java:6076)
at rent_autos.Interfaz3.<init>(Interfaz3.java:38)
at rent_autos.Interfaz3$159.run(Interfaz3.java:6107)
leerModelo is a method that reads strings:
public String leerModelo(RandomAccessFile file) throws IOException
{
char cadena[] = new char[25], temp;
for (int c = 0; c < cadena.length; c++)
{
temp = file.readChar();
cadena[c] = temp;
}
return new String(cadena).replace('\0', ' ');
}
And cargarDatos loads my data:
public void cargarDatos(RandomAccessFile file, AutosRentables[] lista, int reg) throws IOException
{
int cont = 0;
do
{
modelo = this.leerModelo(file);
color = this.leerColor(file);
tipoAM = this.leerTipoAM(file);
rendimientoGalon = file.readDouble();
placa = this.leerPlaca(file);
ACRISS = this.leerACRISS(file);
codigo = file.readInt();
costo = file.readDouble();
marca = this.leerMarca(file);
detalles = this.leerDetalles(file);
lista[cont] = new AutosRentables(modelo, color, tipoAM, rendimientoGalon, placa, ACRISS, codigo, costo, marca, detalles);
cont++;
System.out.println("Entra");
}
while (cont < reg);
}
And here's the ArrayIndexOutOfBounds error:
Exception in thread "AWT-EventQueue-0" java.lang.ArrayIndexOutOfBoundsException: 0
at rent_autos.Interfaz3.cargarDatosAutos(Interfaz3.java:6081)
at rent_autos.Interfaz3.<init>(Interfaz3.java:38)
at rent_autos.Interfaz3$159.run(Interfaz3.java:6107)
at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209)
So if anyone knows what's going on, please help me out here... is it the byte size of the file? I really don't know. HELP!
EOFException means you tried to read past the end of the stream; i.e. the end of the file in this case. Probably you aren't positioning yourself correctly in the file. Reading chars from a random access file is tricky as you can't know how many bytes they are encoded as. I suspect you need to redesign the file actually. Or else you should be reading bytes not chars if it is coming from an external system?
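To make that failure mode explicit, here is a small sketch: since readChar() always consumes exactly two bytes, a record written entirely with fixed-size writes has a known total size, and you can check the remaining bytes before reading each record instead of letting readChar() run past the end of the file. The record size passed in is an assumption computed from the fixed sizes of all fields in one record.
import java.io.IOException;
import java.io.RandomAccessFile;

// Returns true only if at least one whole record remains in the file;
// recordSize is a hypothetical constant (sum of all field sizes, e.g.
// 25 chars * 2 bytes for modelo, plus the doubles, int, and other fields).
public static boolean hasFullRecord(RandomAccessFile file, long recordSize) throws IOException {
    return file.length() - file.getFilePointer() >= recordSize;
}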
java.io.EOFException:
For java.io.EOFException check this link http://docs.oracle.com/javase/1.4.2/docs/api/java/io/EOFException.html.
ArrayOutOfBounds:
An out-of-bounds exception occurs when you try to access an array with an index that exceeds its length; the maximum index of a Java array is (length - 1). It means you are trying to read or write an array element that doesn't exist.
To handle it, make sure your program doesn't access an array with an index bigger than length - 1.
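Applied to the question's code, a minimal sketch of that guard (assuming numRegistros() can return 0 for an empty file, which would make ats a zero-length array and explain the ArrayIndexOutOfBoundsException: 0):
private void cargarDatosAutos()
{
    regInt = at.numRegistros();
    if (regInt > 0) // was: regInt != -1, which lets an empty registry through
    {
        curInt = 0;
        ats = new AutosRentables[regInt];
        try
        {
            RandomAccessFile f = new RandomAccessFile("Autos.txt", "rw");
            at.cargarDatos(f, ats, regInt);
            f.close();
        }
        catch (IOException ex)
        {
            Logger.getLogger(Interfaz3.class.getName()).log(Level.SEVERE, null, ex);
        }
        this.mostrarAutos(ats[0]); // safe: regInt > 0 guarantees at least one element
    }
}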