I am trying to compress the Inputmask.js file with the Closure Compiler. Here is my code:
import java.io.FileWriter;
import java.util.logging.Level;

import com.google.javascript.jscomp.CheckLevel;
import com.google.javascript.jscomp.CompilationLevel;
import com.google.javascript.jscomp.CompilerOptions;
import com.google.javascript.jscomp.JSError;
import com.google.javascript.jscomp.SourceFile;
import com.google.javascript.jscomp.WarningLevel;

public class JSFileMinifyTest {
    public static void main(String[] args) throws Exception {
        String sourceFileName = "D:\\temp\\jquery.inputmask.bundle.js";
        String outputFilename = "D:\\temp\\combined.min.js";

        com.google.javascript.jscomp.Compiler.setLoggingLevel(Level.INFO);
        com.google.javascript.jscomp.Compiler compiler = new com.google.javascript.jscomp.Compiler();

        CompilerOptions options = new CompilerOptions();
        CompilationLevel.WHITESPACE_ONLY.setOptionsForCompilationLevel(options);
        options.setAggressiveVarCheck(CheckLevel.OFF);
        options.setRuntimeTypeCheck(false);
        options.setCheckRequires(CheckLevel.OFF);
        options.setCheckProvides(CheckLevel.OFF);
        options.setReserveRawExports(false);
        WarningLevel.VERBOSE.setOptionsForWarningLevel(options);

        // To get the complete set of externs, the logic in
        // CompilerRunner.getDefaultExterns() should be used here.
        SourceFile extern = SourceFile.fromCode("externs.js", "function alert(x) {}");
        SourceFile jsFile = SourceFile.fromFile(sourceFileName);
        compiler.compile(extern, jsFile, options);

        for (JSError message : compiler.getWarnings()) {
            System.err.println("Warning message: " + message.toString());
        }
        for (JSError message : compiler.getErrors()) {
            System.err.println("Error message: " + message.toString());
        }

        FileWriter outputFile = new FileWriter(outputFilename);
        outputFile.write(compiler.toSource());
        outputFile.close();
    }
}
But errors occurred while compressing this JS file:
Apr 09, 2018 11:29:18 AM com.google.javascript.jscomp.parsing.ParserRunner parse
INFO: Error parsing D:\temp\jquery.inputmask.bundle.js: Compilation produced 3 syntax errors. (D:\temp\jquery.inputmask.bundle.js#1)
Apr 09, 2018 11:29:18 AM com.google.javascript.jscomp.LoggerErrorManager println
SEVERE: D:\temp\jquery.inputmask.bundle.js:1002: ERROR - Parse error. identifier is a reserved word
static || null !== test.fn && void 0 !== testPos.input ? static && null !== test.fn && void 0 !== testPos.input && (static = !1,
^
Apr 09, 2018 11:29:18 AM com.google.javascript.jscomp.LoggerErrorManager println
SEVERE: D:\temp\jquery.inputmask.bundle.js:1003: ERROR - Parse error. syntax error
maskTemplate += "</span>") : (static = !0, maskTemplate += "<span class='im-static''>");
^
Apr 09, 2018 11:29:18 AM com.google.javascript.jscomp.LoggerErrorManager println
SEVERE: D:\temp\jquery.inputmask.bundle.js:1010: ERROR - Parse error. missing variable name
var maskTemplate = "", static = !1;
^
Apr 09, 2018 11:29:18 AM com.google.javascript.jscomp.LoggerErrorManager printSummary
WARNING: 3 error(s), 0 warning(s)
Error message: JSC_PARSE_ERROR. Parse error. identifier is a reserved word at D:\temp\jquery.inputmask.bundle.js line 1002 : 16
Error message: JSC_PARSE_ERROR. Parse error. syntax error at D:\temp\jquery.inputmask.bundle.js line 1003 : 29
Error message: JSC_PARSE_ERROR. Parse error. missing variable name at D:\temp\jquery.inputmask.bundle.js line 1010 : 39
Is there any way to skip syntax validation, or to ignore the errors and continue compressing?
By default, the compiler parses code as strict mode ('use strict'), and 'static' is a reserved word in strict mode. You can either change the language mode or simply rename the variable from 'static' to something else.
The strict mode reserved words are documented here:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Strict_mode#Paving_the_way_for_future_ECMAScript_versions
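If you go the language-mode route, a minimal sketch (assuming a compiler version that has CompilerOptions.setLanguageIn) would be to set a non-strict input language before calling compile():

// Parse the input as non-strict ES5, where 'static' is still a legal identifier.
options.setLanguageIn(CompilerOptions.LanguageMode.ECMASCRIPT5);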
I have a PCollection of KV where the key is a GCS file pattern and the value is some additional info about the files (e.g., the "Source" system that generated the files). E.g.,
KV("gs://bucket1/dir1/*", "SourceX"),
KV("gs://bucket1/dir2/*", "SourceY")
I need a PTransform to expand the file patterns to all matching files in the GCS folders while keeping the "Source" field. E.g., if there are two files X1.dat, X2.dat under dir1 and one file (Y1.dat) under dir2, the output will be:
KV("gs://bucket1/dir1/X1.dat", "SourceX"),
KV("gs://bucket1/dir1/X2.dat", "SourceX")
KV("gs://bucket1/dir2/Y1.dat", "SourceY")
Could I use FileIO.matchAll() to achieve this? I am stuck on how to combine/join the "Source" field with the matching files. This is something I was trying, but it's not quite there yet:
public PCollection<KV<String, String>> expand(PCollection<KV<String, String>> filesAndSources) {
    return filesAndSources
        .apply("Get file names", Keys.create())
        .apply(FileIO.matchAll())
        .apply(FileIO.readMatches())
        .apply(ParDo.of(
            new DoFn<ReadableFile, KV<String, String>>() {
                @ProcessElement
                public void processElement(ProcessContext c) {
                    ReadableFile file = c.element();
                    String fileName = file.getMetadata().resourceId().toString();
                    c.output(KV.of(fileName, XXXXX)); // How to get the value field ("Source") from the input KV?
                }
            }));
}
My difficulty is the last line: for XXXXX, how do I get the value field ("Source") from the input KV? Is there any way to "join" or "combine" the input KV's value back onto the expanded keys, given that one key (file pattern) is expanded into multiple values?
Thank you!
MatchResult.Metadata contains the resourceId you are already using, but not the GCS path (with wildcards) that it matched.
You can achieve what you want using side inputs. To demonstrate this I created the following filesAndSources (as per your comment this could be an input parameter so it can't be hard-coded downstream):
PCollection<KV<String, String>> filesAndSources = p.apply("Create file pattern and source pairs",
    Create.of(KV.of("gs://" + Bucket + "/sales/*", "Sales"),
              KV.of("gs://" + Bucket + "/events/*", "Events")));
I materialize this into a side input (in this case as Map). The key will be the glob pattern converted into a regex one (thanks to this answer) and the value will be the source string:
final PCollectionView<Map<String, String>> regexAndSources =
    filesAndSources.apply("Glob pattern to RegEx", ParDo.of(new DoFn<KV<String, String>, KV<String, String>>() {
        @ProcessElement
        public void processElement(ProcessContext c) {
            String regex = c.element().getKey();
            StringBuilder out = new StringBuilder("^");
            for (int i = 0; i < regex.length(); ++i) {
                final char ch = regex.charAt(i);
                switch (ch) {
                    case '*': out.append(".*"); break;
                    case '?': out.append('.'); break;
                    case '.': out.append("\\."); break;
                    case '\\': out.append("\\\\"); break;
                    default: out.append(ch);
                }
            }
            out.append('$');
            c.output(KV.of(out.toString(), c.element().getValue()));
        }
    })).apply("Save as Map", View.asMap());
Then, after reading the filenames we can use the side input to parse each path to see which is the matching pattern/source pair:
filesAndSources
    .apply("Get file names", Keys.create())
    .apply(FileIO.matchAll())
    .apply(FileIO.readMatches())
    .apply(ParDo.of(new DoFn<ReadableFile, KV<String, String>>() {
        @ProcessElement
        public void processElement(ProcessContext c) {
            ReadableFile file = c.element();
            String fileName = file.getMetadata().resourceId().toString();
            Set<Map.Entry<String, String>> patternSet = c.sideInput(regexAndSources).entrySet();
            for (Map.Entry<String, String> pattern : patternSet) {
                if (fileName.matches(pattern.getKey())) {
                    String source = pattern.getValue();
                    c.output(KV.of(fileName, source));
                }
            }
        }
    }).withSideInputs(regexAndSources))
Note that the glob-to-regex conversion is done before materializing the side input, rather than in this second ParDo, to avoid duplicate work.
The output, as expected in my case:
Feb 24, 2019 10:44:05 PM org.apache.beam.sdk.io.FileIO$MatchAll$MatchFn process
INFO: Matched 2 files for pattern gs://REDACTED/events/*
Feb 24, 2019 10:44:05 PM org.apache.beam.sdk.io.FileIO$MatchAll$MatchFn process
INFO: Matched 2 files for pattern gs://REDACTED/sales/*
Feb 24, 2019 10:44:05 PM com.dataflow.samples.RegexFileIO$3 processElement
INFO: key=gs://REDACTED/sales/sales1.csv, value=Sales
Feb 24, 2019 10:44:05 PM com.dataflow.samples.RegexFileIO$3 processElement
INFO: key=gs://REDACTED/sales/sales2.csv, value=Sales
Feb 24, 2019 10:44:05 PM com.dataflow.samples.RegexFileIO$3 processElement
INFO: key=gs://REDACTED/events/events1.csv, value=Events
Feb 24, 2019 10:44:05 PM com.dataflow.samples.RegexFileIO$3 processElement
INFO: key=gs://REDACTED/events/events2.csv, value=Events
Full code.
I get a "java.lang.ClassCastException: Cannot cast org.apache.uima.jcas.tcas.Annotation to java.lang.String" exception after updating the Ruta version from 2.5.0 to 2.6.1 (installation details attached). How can I resolve this exception? It occurs while executing the script below. Initialization is also taking more time.
Script:
FOREACH(hlev) Headinglevel{}
{
    Document{->tagClass="", onlyClass=true};
    hlev{-> ASSIGN(tagClass, tagName + "." + className), Headinglevel.class = tagClass}
        <-{TagName{->MATCHEDTEXT(tagName)} # ClassName{->MATCHEDTEXT(className)};};
    CssDefinition{->onlyClass=false, family = CssDefinition.fontfamily, size = CssDefinition.fontsize, color = CssDefinition.fontcolor, bold = CssDefinition.bold, italic = CssDefinition.italic, underline = CssDefinition.underline, case = CssDefinition.case}
        <-{CssStyles{PARSE(cssStylesStr), IF(contains(cssStylesStr, tagClass))};};
}
Stacktrace:
Aug 02, 2018 12:07:25 PM org.apache.uima.analysis_engine.impl.PrimitiveAnalysisEngine_impl callAnalysisComponentProcess(434)
SEVERE: Exception occurred
org.apache.uima.analysis_engine.AnalysisEngineProcessException: Annotator processing failed.
at org.apache.uima.ruta.engine.RutaEngine.process(RutaEngine.java:563)
at org.apache.uima.analysis_component.JCasAnnotator_ImplBase.process(JCasAnnotator_ImplBase.java:48)
at org.apache.uima.analysis_engine.impl.PrimitiveAnalysisEngine_impl.callAnalysisComponentProcess(PrimitiveAnalysisEngine_impl.java:401)
at org.apache.uima.analysis_engine.impl.PrimitiveAnalysisEngine_impl.processAndOutputNewCASes(PrimitiveAnalysisEngine_impl.java:318)
at org.apache.uima.analysis_engine.impl.AnalysisEngineImplBase.process(AnalysisEngineImplBase.java:269)
at org.apache.uima.ruta.ide.launching.RutaLauncher.processFile(RutaLauncher.java:242)
at org.apache.uima.ruta.ide.launching.RutaLauncher.main(RutaLauncher.java:191)
Caused by: java.lang.ClassCastException: Cannot cast org.apache.uima.jcas.tcas.Annotation to java.lang.String
at java.lang.Class.cast(Class.java:3369)
at org.apache.uima.ruta.RutaEnvironment.getVariableValue(RutaEnvironment.java:866)
at org.apache.uima.ruta.expression.string.StringVariableExpression.getStringValue(StringVariableExpression.java:38)
at org.apache.uima.ruta.rule.RutaLiteralMatcher.getMatchingAnnotations(RutaLiteralMatcher.java:51)
at org.apache.uima.ruta.rule.RutaLiteralMatcher.getMatchingAnnotations(RutaLiteralMatcher.java:33)
at org.apache.uima.ruta.rule.RutaRuleElement.getAnchors(RutaRuleElement.java:51)
at org.apache.uima.ruta.rule.RutaRuleElement.startMatch(RutaRuleElement.java:59)
at org.apache.uima.ruta.rule.ComposedRuleElement.startMatch(ComposedRuleElement.java:76)
at org.apache.uima.ruta.rule.RutaRule.apply(RutaRule.java:63)
at org.apache.uima.ruta.rule.RutaRule.apply(RutaRule.java:54)
at org.apache.uima.ruta.rule.RutaRule.apply(RutaRule.java:36)
at org.apache.uima.ruta.block.ForEachBlock.apply(ForEachBlock.java:92)
at org.apache.uima.ruta.block.RutaScriptBlock.apply(RutaScriptBlock.java:67)
at org.apache.uima.ruta.RutaModule.apply(RutaModule.java:56)
at org.apache.uima.ruta.engine.RutaEngine.process(RutaEngine.java:561)
...
Instead of using the string variable, I used the annotation name directly (replacing hlev{...} with Headinglevel{...} in the second rule):
FOREACH(hlev) Headinglevel{}
{
    Document{->tagClass="", onlyClass=true};
    Headinglevel{-> ASSIGN(tagClass, tagName + "." + className), Headinglevel.class = tagClass}
        <-{TagName{->MATCHEDTEXT(tagName)} # ClassName{->MATCHEDTEXT(className)};};
    CssDefinition{->onlyClass=false, family = CssDefinition.fontfamily, size = CssDefinition.fontsize, color = CssDefinition.fontcolor, bold = CssDefinition.bold, italic = CssDefinition.italic, underline = CssDefinition.underline, case = CssDefinition.case}
        <-{CssStyles{PARSE(cssStylesStr), IF(contains(cssStylesStr, tagClass))};};
}
The error I am getting is as follows:
" Aug 25, 2015 1:47:41 PM com.orientechnologies.common.log.OLogManager log
INFO: OrientDB auto-config DISKCACHE=4,161MB (heap=1,776MB os=7,985MB disk=416,444MB)
Aug 25, 2015 1:47:41 PM com.orientechnologies.common.log.OLogManager log
WARNING: segment file 'database.ocf' was not closed correctly last time
Exception in thread "main" com.orientechnologies.common.exception.OException: Error on creation of shared resource
at com.orientechnologies.common.concur.resource.OSharedContainerImpl.getResource(OSharedContainerImpl.java:55)
at com.orientechnologies.orient.core.metadata.OMetadataDefault.init(OMetadataDefault.java:175)
at com.orientechnologies.orient.core.metadata.OMetadataDefault.load(OMetadataDefault.java:77)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.initAtFirstOpen(ODatabaseDocumentTx.java:2633)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.open(ODatabaseDocumentTx.java:254)
at arss.db.main(db.java:17)
Caused by: com.orientechnologies.orient.core.exception.ORecordNotFoundException: The record with id '#0:1' not found
at com.orientechnologies.orient.core.record.ORecordAbstract.reload(ORecordAbstract.java:266)
at com.orientechnologies.orient.core.record.impl.ODocument.reload(ODocument.java:665)
at com.orientechnologies.orient.core.type.ODocumentWrapper.reload(ODocumentWrapper.java:91)
at com.orientechnologies.orient.core.type.ODocumentWrapperNoClass.reload(ODocumentWrapperNoClass.java:73)
at com.orientechnologies.orient.core.metadata.schema.OSchemaShared.load(OSchemaShared.java:786)
at com.orientechnologies.orient.core.metadata.OMetadataDefault$1.call(OMetadataDefault.java:180)
at com.orientechnologies.orient.core.metadata.OMetadataDefault$1.call(OMetadataDefault.java:175)
at com.orientechnologies.common.concur.resource.OSharedContainerImpl.getResource(OSharedContainerImpl.java:53)
... 5 more
Caused by: com.orientechnologies.orient.core.exception.ODatabaseException: Error on retrieving record #0:1 (cluster: internal)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.executeReadRecord(ODatabaseDocumentTx.java:1605)
at com.orientechnologies.orient.core.tx.OTransactionNoTx.loadRecord(OTransactionNoTx.java:80)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.reload(ODatabaseDocumentTx.java:1453)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.reload(ODatabaseDocumentTx.java:117)
at com.orientechnologies.orient.core.record.ORecordAbstract.reload(ORecordAbstract.java:260)
... 12 more
Caused by: java.lang.NoSuchMethodError: com.orientechnologies.common.concur.lock.ONewLockManager.tryAcquireSharedLock(Ljava/lang/Object;J)Z
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.acquireReadLock(OAbstractPaginatedStorage.java:1301)
at com.orientechnologies.orient.core.tx.OTransactionAbstract.lockRecord(OTransactionAbstract.java:120)
at com.orientechnologies.orient.core.id.ORecordId.lock(ORecordId.java:282)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.lockRecord(OAbstractPaginatedStorage.java:1784)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.readRecord(OAbstractPaginatedStorage.java:1424)
at com.orientechnologies.orient.core.storage.impl.local.OAbstractPaginatedStorage.readRecord(OAbstractPaginatedStorage.java:697)
at com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.executeReadRecord(ODatabaseDocumentTx.java:1572)
... 16 more
The code that I use is:
package arss;

import com.orientechnologies.orient.core.config.OGlobalConfiguration;
import com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx;
import com.orientechnologies.orient.core.record.impl.ODocument;
import com.orientechnologies.orient.core.serialization.serializer.record.ORecordSerializerFactory;
import com.orientechnologies.orient.core.serialization.serializer.record.binary.ORecordSerializerBinary;
import com.orientechnologies.orient.core.serialization.serializer.record.string.ORecordSerializerSchemaAware2CSV;

public class db {
    public static void main(String[] args) {
        ODatabaseDocumentTx db = new ODatabaseDocumentTx("plocal:C:/AR/AR/Newfolder/orientdb-community-2.0.3_S/databases/GratefulDeadConcerts").open("admin", "admin");
        try {
            // CREATE A NEW DOCUMENT AND FILL IT
            ODocument doc = new ODocument("Person");
            doc.field("name", "Luke");
            doc.field("surname", "Skywalker");
            doc.field("city", new ODocument("City").field("name", "Rome").field("country", "Italy"));

            // SAVE THE DOCUMENT
            doc.save();
        } finally {
            // close once, in finally (the original also closed the database inside try, i.e. twice)
            db.close();
        }
    }
}
Not sure if you need .flush() with ODocuments; you should look that up (or whether save() is equivalent and that's okay).
From these two error lines:
WARNING: segment file 'database.ocf' was not closed correctly last time
and
com.orientechnologies.orient.core.db.document.ODatabaseDocumentTx.executeReadRecord(ODatabaseDocumentTx.java:1572) ... 16 more
I think it has something to do with the database.ocf file itself. I don't know if this helps, but try opening it manually, preferably both with and without admin rights, and close it again ("Have you tried turning it off and on again?").
If there is still an error, check whether it is a different one. One more thing worth checking: the root java.lang.NoSuchMethodError in your stack trace usually indicates mixed OrientDB jar versions on the classpath, so make sure all OrientDB libraries come from the same release.
I am trying to create a UDF, for importing into Pig, that matches a regex pattern on a date. The regex has been tested and works as expected, but I am having trouble with the following code:
package com.date.format;

import java.io.IOException;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

public class DATERANGE extends EvalFunc<String> {

    @Override
    public String exec(Tuple arg0) throws IOException {
        try {
            String pattern = "(Oct\\W(?:1[5-9]|2[0-3])\\W(?:(?:0?9|10):\\d{2}:\\d{2}|11:00:00))";
            Pattern pat = Pattern.compile(pattern);
            Matcher match = pat.matcher((String) arg0.get(0));
            if (match.find()) {
                return match.group(0);
            } else {
                return "none";
            }
        } catch (Exception e) {
            throw new IOException("Caught exception processing input row ", e);
        }
    }
}
After compiling the above Java code, exporting it as a jar, and running it inside Hadoop with the following Pig script:
register 'DATEFormat.jar';
ld = LOAD 'dates/date_data_three' AS (date:chararray);
loop = foreach ld generate com.date.format.DATERANGE(date) as d:chararray;
dump loop;
I get the following error:
ERROR 2078: Caught error from UDF: com.date.format.DATERANGE [Caught exception
processing input row ]
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator
for alias loop
at org.apache.pig.PigServer.openIterator(PigServer.java:912)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:752)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
at org.apache.pig.tools.grunt.GruntParser.loadScript(GruntParser.java:566)
at org.apache.pig.tools.grunt.GruntParser.processScript(GruntParser.java:513)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.Script(PigScriptParser.java:1014)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:550)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:228)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:203)
at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
at org.apache.pig.Main.run(Main.java:542)
at org.apache.pig.Main.main(Main.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: org.apache.pig.PigException: ERROR 1002: Unable to store alias loop
at org.apache.pig.PigServer.storeEx(PigServer.java:1015)
at org.apache.pig.PigServer.store(PigServer.java:974)
at org.apache.pig.PigServer.openIterator(PigServer.java:887)
... 16 more
The data file contains dates as shown below:
Wed Oct 15 09:26:09 BST 2014
Wed Oct 15 19:26:09 BST 2014
Wed Oct 18 08:26:09 BST 2014
Wed Oct 23 10:26:09 BST 2014
Sun Oct 05 09:26:09 BST 2014
Wed Nov 20 19:26:09 BST 2014
Does anybody know the correct way to implement a Java UDF for Pig that would work with the Regex I have provided?
Thanks
I recommend using the built-in REGEX_EXTRACT function; this is much easier than writing a UDF.
ld = LOAD 'input.txt' AS (date:chararray);
loop = foreach ld generate REGEX_EXTRACT(date,'(Oct\\W(?:1[5-9]|2[0-3])\\W(?:(?:0?9|10):\\d{2}:\\d{2}|11:00:00))',1) as d:chararray;
C = FILTER loop by d is not null;
D = FOREACH C GENERATE $0;
DUMP D;
Output:
(Oct 15 09:26:09)
(Oct 23 10:26:09)
Your regex UDF is also working fine for me. I just copied your input and Java code and executed it locally; it works perfectly. Please see the output below that I got from your UDF code. I guess you may need to check whether your classpath is set properly (see the registration sketch after the output).
(Oct 15 09:26:09)
(none)
(none)
(Oct 23 10:26:09)
(none)
(none)
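If the classpath is the culprit, one thing to try (a sketch; the jar path below is a placeholder for wherever your jar actually lives) is registering the jar by its absolute path so Pig cannot miss it:

register '/full/path/to/DATEFormat.jar';
ld = LOAD 'dates/date_data_three' AS (date:chararray);
loop = FOREACH ld GENERATE com.date.format.DATERANGE(date) AS d:chararray;
DUMP loop;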
Even better, you could use ToDate. Load your data into filtered_raw_financings_csvs with close_date as a chararray:
financings_csvs = FOREACH filtered_raw_financings_csvs
    GENERATE name,
             city,
             state,
             (close_date == '' ? NULL : ToDate(close_date, 'dd-MMM-yy')) AS close_date;
Build your date format string as described here:
http://docs.oracle.com/javase/6/docs/api/java/text/SimpleDateFormat.html
This snippet is shown in context here:
http://nathan.vertile.com/blog/2015/04/17/handling-dates-in-hadoop-pig/
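For the dates shown in the question ("Wed Oct 15 09:26:09 BST 2014"), the format string would be something like 'EEE MMM dd HH:mm:ss zzz yyyy'. A sketch, with the caveat that this pattern is my assumption and that parsing time zone names such as BST depends on the date library Pig delegates to, so the zone part may need adjusting:

parsed = FOREACH ld GENERATE
    (date == '' ? NULL : ToDate(date, 'EEE MMM dd HH:mm:ss zzz yyyy')) AS parsed_date;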
I initialize the logger like this:
public static void init() {
    ConsoleHandler handler = new ConsoleHandler();
    handler.setFormatter(new LogFormatter());
    Logger.getLogger(TrackerConfig.LOGGER_NAME).setUseParentHandlers(false);
    Logger.getLogger(TrackerConfig.LOGGER_NAME).addHandler(handler);
}
The LogFormatter's format function:
@Override
public String format(LogRecord record) {
    StringBuilder sb = new StringBuilder();
    sb.append(new SimpleDateFormat("yyyy-MM-dd HH:mm:ss Z").format(new Date(record.getMillis())))
      .append(" ")
      .append(record.getLevel().getLocalizedName()).append(": ")
      .append(formatMessage(record)).append(LINE_SEPARATOR);
    return sb.toString();
}
To log, I use the following method:
private static void log(Level level, String message) {
    Logger.getLogger(TrackerConfig.LOGGER_NAME).log(level, message);
    if (level.intValue() >= TrackerConfig.DB_LOGGER_LEVEL.intValue()) {
        DBLog.getInstance().log(level, message);
    }
}
The DBLog.log method:
public void log(Level level, String message) {
    try {
        this.logBatch.setTimestamp(1, new Timestamp(Calendar.getInstance().getTime().getTime()));
        this.logBatch.setString(2, level.getName());
        this.logBatch.setString(3, message);
        this.logBatch.addBatch();
    } catch (SQLException ex) {
        // if this happens the code will exit anyway, so it will not cause a loop
        Log.logError("SQL error: " + ex.getMessage());
    }
}
A normal log output looks like this:
2013-04-20 18:00:59 +0200 INFO: Starting up Tracker
It works for some time, but the LogFormatter seems to get reset for whatever reason. Sometimes only one log entry is displayed correctly, and after that the entries are displayed like this again:
Apr 20, 2013 6:01:01 PM package.util.Log log INFO:
Loaded 33266 database entries.
What I tried:
For debugging purposes I added a thread that outputs the memory usage of the JVM every x seconds.
The output kept the correct log format until the reserved memory value changed (a change in the free memory value did not reset the log format):
2013-04-20 18:16:24 +0200 WARNING: Memory usage: 23 / 74 / 227 MiB
2013-04-20 18:16:25 +0200 WARNING: Memory usage: 20 / 74 / 227 MiB
2013-04-20 18:16:26 +0200 WARNING: Memory usage: 18 / 74 / 227 MiB
Apr 20, 2013 6:16:27 PM package.util.Log log WARNING:
Memory usage: 69 / 96 / 227 MiB
Apr 20, 2013 6:16:27 PM package.util.Log log INFO:
Scheduler running
Apr 20, 2013 6:16:27 PM package.Log log WARNING:
Memory usage: 67 / 96 / 227 MiB
Also note that the log level seems to be reset from warning to info here.
Where the problem seems to be:
When I comment out the database log function like this:
private static void log(Level level, String message) {
    Logger.getLogger(TrackerConfig.LOGGER_NAME).log(level, message);
    if (level.intValue() >= TrackerConfig.DB_LOGGER_LEVEL.intValue()) {
        // DBLog.getInstance().log(level, message);
    }
}
the log is formatted properly.
Any ideas what could be wrong with the DBLog's log function or why the log suddenly resets?
I would not really call this a solution, but it works now.
The cause seemed to be the memory calculation itself: even when I just calculated the values without logging them, the log format was reset.
I have no idea why it worked when I just commented out the DBLog usage.
int mb = 1024 * 1024;
long freeMemory = Runtime.getRuntime().freeMemory() / mb;
long reservedMemory = Runtime.getRuntime().totalMemory() / mb;
long maxMemory = Runtime.getRuntime().maxMemory() / mb;
String memoryUsage = "Memory usage: " + freeMemory + " / " + reservedMemory + " / " + maxMemory + " MiB";
This is the code I used. As soon as I commented it out, the log format did not reset anymore, and now everything works as expected.
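A plausible explanation for this behavior (an assumption based on how java.util.logging works, not something confirmed above): the LogManager holds loggers only by weak reference, so if nothing in the application keeps a strong reference to the logger returned by Logger.getLogger(TrackerConfig.LOGGER_NAME), it can be garbage-collected and silently re-created with default settings, losing the custom handler and formatter. The memory calculation would then matter only because its allocations trigger a GC. If that is the cause, holding the logger in a static field should fix it; a minimal sketch:

public class Log {

    // Strong reference: prevents the logger (and with it the custom
    // ConsoleHandler/LogFormatter configured in init()) from being
    // garbage-collected and re-created with default settings.
    private static final Logger LOGGER = Logger.getLogger(TrackerConfig.LOGGER_NAME);

    public static void init() {
        ConsoleHandler handler = new ConsoleHandler();
        handler.setFormatter(new LogFormatter());
        LOGGER.setUseParentHandlers(false);
        LOGGER.addHandler(handler);
    }

    private static void log(Level level, String message) {
        LOGGER.log(level, message);
        if (level.intValue() >= TrackerConfig.DB_LOGGER_LEVEL.intValue()) {
            DBLog.getInstance().log(level, message);
        }
    }
}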