Custom logging to gather messages at runtime - Java

Is there a way to create a log4j Logger at runtime that will gather logging messages into a buffer?
I currently have a class that logs a number of events. For a remote application that needs to monitor the logged events, I'd like to just swap in a logger that logs to a buffer and then retrieve the buffer, rather than refactor the class. E.g. given something like:
class Foo {
    Logger log = ....;

    public void doSomething() {
        log.debug(...);
        // ... actual code
        log.debug(...);
    }
}
// What I'd like to do from some outside code:
String showFooLog() {
    Foo f = new Foo();
    f.log = new Logger(...);
    f.doSomething();
    return f.log.contents();
}
Is this possible?
Edit: Found a shorter solution, pointed to from Jared's posting (although it's still not thread-safe). Thanks for the help.
Logger l = Logger.getLogger( ... );
StringWriter writer = new StringWriter();
WriterAppender appender = new WriterAppender( new HTMLLayout(), writer );
l.addAppender( appender );
// ... run code here
writer.flush();
l.removeAppender( appender );
return writer.toString();

It's absolutely possible, although you're probably going to need to create your own Appender. This is really easy to do, though. Here's a rough example (thread-safety still needs thought, since this isn't thread-safe, and I'm not sure I like the statics, but this should be enough to push you in the right direction):
import java.util.HashMap;
import java.util.Map;

import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;

public class BufferAppender extends AppenderSkeleton {

    // One buffer per logger name (not thread-safe; see caveat above).
    private static Map<String, StringBuffer> buffers = new HashMap<String, StringBuffer>();

    @Override
    protected void append(LoggingEvent evt) {
        String toAppend = this.layout.format(evt);
        StringBuffer sb = getBuffer(evt.getLoggerName());
        sb.append(toAppend);
    }

    @Override
    public void close() {
        // nothing to release
    }

    @Override
    public boolean requiresLayout() {
        return true; // append() formats events through the configured layout
    }

    public static String getBufferContents(String loggerName) {
        StringBuffer sb = buffers.get(loggerName);
        if (sb == null) {
            return null;
        } else {
            return sb.toString();
        }
    }

    public static void clearBuffer(String loggerName) {
        createBuffer(loggerName);
    }

    private static StringBuffer getBuffer(String loggerName) {
        StringBuffer sb = buffers.get(loggerName);
        if (sb == null) {
            sb = createBuffer(loggerName);
        }
        return sb;
    }

    private static StringBuffer createBuffer(String loggerName) {
        StringBuffer sb = new StringBuffer();
        buffers.put(loggerName, sb);
        return sb;
    }
}

You could subclass org.apache.log4j.WriterAppender and provide it a ByteArrayOutputStream to store any messages. See the other subclasses of WriterAppender for inspiration. Then you can provide an instance of that object to your logger for appending via addAppender.
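A minimal sketch of that idea, here using the stock WriterAppender directly with a ByteArrayOutputStream (the class and logger names are just placeholders):
import java.io.ByteArrayOutputStream;

import org.apache.log4j.Logger;
import org.apache.log4j.SimpleLayout;
import org.apache.log4j.WriterAppender;

public class InMemoryLogDemo {
    public static void main(String[] args) {
        Logger logger = Logger.getLogger("test");

        // Route everything this logger writes into an in-memory buffer.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        WriterAppender appender = new WriterAppender(new SimpleLayout(), buffer);
        logger.addAppender(appender);

        logger.warn("captured message");

        logger.removeAppender(appender);
        System.out.println(buffer.toString()); // prints "WARN - captured message"
    }
}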

Related

Java: Logging: Custom Formatter

I wrote a simple custom Formatter for my logging that prints a DateTime in the specified format. Everything else works fine, but the datetime doesn't get printed in my log file. Below is my CustomFormatter.java:
import java.text.SimpleDateFormat;
import java.util.logging.Formatter;
import java.util.logging.Handler;
import java.util.logging.LogRecord;

public class CustomFormatter extends Formatter {

    SimpleDateFormat sdf;

    public CustomFormatter() {
        sdf = new SimpleDateFormat("MM-dd-yyyy HH:mm:ss");
    }

    public String format(LogRecord rec) {
        StringBuffer buf = new StringBuffer(1000);
        buf.append(formatMessage(rec));
        return buf.toString();
    }

    public String getHead(Handler h) {
        return (sdf.format(new java.util.Date()) + ": \t");
    }

    public String getTail(Handler h) {
        return "\n";
    }
}
In my main class, I initialize my logger as:
Main.java:
private static Logger logger = Logger.getLogger("org.somecompany.someproject");

public static void main(String[] args) {
    FileHandler fh = null;
    try {
        String pattern = "myLogger.log.%g";
        int numLogFiles = 10;
        int fileSize = 1000000;
        boolean appendToFile = true;
        fh = new FileHandler(pattern, fileSize, numLogFiles, appendToFile);
        fh.setFormatter(new CustomFormatter());
        logger.addHandler(fh);
        logger.setLevel(Level.ALL);
    } catch (IOException i) {
        System.out.println("Unable to init logger");
    }
    logger.info("Begin");
    logger.info("Line 1");
    logger.info("Line 2");
    logger.info("End");
    fh.close();
    fh = null;
}
The log file should have a datetime printed at the beginning of each line, but it doesn't. Any help is appreciated.
I think you misunderstand the purpose of getHead() and getTail().
Those methods are not called per log entry - getHead() is called once before the Handler (FileHandler) logs the first entry and getTail() is called once before the Handler (FileHandler) closes the log stream.
This is used for example in the XMLFormatter to create a valid XML file (where there must be an XML start tag before the first log entry and an XML end tag after the last log entry to be valid XML).
If you look closely at the created log file, this is exactly what happens: the file starts with a timestamp and ends with a newline.
Note that adding time stamps to your log entries doesn't require writing a CustomFormatter. Properly configuring a SimpleFormatter is enough for your purpose.
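For example, a minimal logging.properties sketch along those lines (JDK 7+; the format string here is just one plausible choice, with %1$ being the timestamp and %5$ the message):
# logging.properties
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
java.util.logging.SimpleFormatter.format = %1$tm-%1$td-%1$tY %1$tH:%1$tM:%1$tS: \t%5$s%n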

Spring batch FlatFileItemWriter write as csv from Object

I am using Spring batch and have an ItemWriter as follows:
public class MyItemWriter implements ItemWriter<Fixing> {

    private final FlatFileItemWriter<Fixing> writer;
    private final FileSystemResource resource;

    public MyItemWriter() {
        this.writer = new FlatFileItemWriter<>();
        this.resource = new FileSystemResource("target/output-teste.txt");
    }

    @Override
    public void write(List<? extends Fixing> items) throws Exception {
        this.writer.setResource(new FileSystemResource(resource.getFile()));
        this.writer.setLineAggregator(new PassThroughLineAggregator<>());
        this.writer.afterPropertiesSet();
        this.writer.open(new ExecutionContext());
        this.writer.write(items);
    }

    @AfterWrite
    private void close() {
        this.writer.close();
    }
}
When I run my spring batch job, the items are written to file as:
Fixing{id='123456', source='TEST', startDate=null, endDate=null}
Fixing{id='1234567', source='TEST', startDate=null, endDate=null}
Fixing{id='1234568', source='TEST', startDate=null, endDate=null}
1/ How can I write just the data so that the values are comma separated and where it is null, it is not written. So the target file should look like this:
123456,TEST
1234567,TEST
1234568,TEST
2/ Secondly, I am having an issue where only when I exit spring boot application, I am able to see the file get created. What I would like is once it has processed all the items and written, the file to be available without closing the spring boot application.
There are multiple options for writing the csv file. Regarding the second question, flushing the writer will solve the issue.
https://howtodoinjava.com/spring-batch/flatfileitemwriter-write-to-csv-file/
We prefer to use OpenCSV with Spring Batch, as we get more speed and control on huge files. An example snippet is below:
class DocumentWriter implements ItemWriter<BaseDTO>, Closeable {

    private static final Logger LOG = LoggerFactory.getLogger(DocumentWriter.class);

    private ColumnPositionMappingStrategy<Statement> strategy;
    private static final String[] columns = new String[] { "csvcolumn1", "csvcolumn2", "csvcolumn3",
            "csvcolumn4", "csvcolumn5", "csvcolumn6", "csvcolumn7" };
    private String filename;
    private BufferedWriter writer;
    private StatefulBeanToCsv<Statement> beanToCsv;

    public DocumentWriter() throws Exception {
        strategy = new ColumnPositionMappingStrategy<Statement>();
        strategy.setType(Statement.class);
        strategy.setColumnMapping(columns);
        // env and processCount come from the surrounding application
        filename = env.getProperty("globys.statement.cdf.path") + "-" + processCount + ".dat";
        File cdf = new File(filename);
        if (cdf.exists()) {
            writer = Files.newBufferedWriter(Paths.get(filename), StandardCharsets.UTF_8, StandardOpenOption.APPEND);
        } else {
            writer = Files.newBufferedWriter(Paths.get(filename), StandardCharsets.UTF_8, StandardOpenOption.CREATE_NEW);
        }
        beanToCsv = new StatefulBeanToCsvBuilder<Statement>(writer).withQuotechar(CSVWriter.NO_QUOTE_CHARACTER)
                .withMappingStrategy(strategy).withSeparator(',').build();
    }

    @Override
    public void write(List<? extends BaseDTO> items) throws Exception {
        List<Statement> settlementList = new ArrayList<Statement>();
        for (int i = 0; i < items.size(); i++) {
            BaseDTO baseDTO = items.get(i);
            settlementList.addAll(baseDTO.getStatementList());
        }
        beanToCsv.write(settlementList);
        writer.flush();
    }

    @PreDestroy
    @Override
    public void close() throws IOException {
        writer.close();
    }
}
Since you are using PassThroughLineAggregator, which calls item.toString() to write each object, overriding the toString() method of Fixing should fix it.
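A minimal sketch of such an override, assuming Fixing has id, source, startDate and endDate fields as the output above suggests (imports of java.util.Objects and java.util.stream.* assumed):
@Override
public String toString() {
    // Join only the non-null fields with commas, e.g. "123456,TEST"
    return Stream.of(id, source, startDate, endDate)
            .filter(Objects::nonNull)
            .map(Object::toString)
            .collect(Collectors.joining(","));
}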
1/ How can I write just the data so that the values are comma separated and where it is null, it is not written.
You need to provide a custom LineAggregator that filters out null fields.
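A minimal sketch of such an aggregator, assuming the same hypothetical getters on Fixing:
import java.util.Objects;
import java.util.stream.Collectors;
import java.util.stream.Stream;

import org.springframework.batch.item.file.transform.LineAggregator;

public class NonNullFieldLineAggregator implements LineAggregator<Fixing> {

    @Override
    public String aggregate(Fixing item) {
        // Emit only the non-null fields, comma separated.
        return Stream.of(item.getId(), item.getSource(), item.getStartDate(), item.getEndDate())
                .filter(Objects::nonNull)
                .map(Object::toString)
                .collect(Collectors.joining(","));
    }
}
It would then replace the pass-through aggregator: writer.setLineAggregator(new NonNullFieldLineAggregator());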
2/ Secondly, I am having an issue where only when I exit spring boot application, I am able to see the file get created
This is probably because you are calling this.writer.open in the write method, which is not correct. You need to make your item writer implement ItemStream and call this.writer.open and this.writer.close respectively in ItemStream#open and ItemStream#close, as sketched below.
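A sketch of that restructuring, reusing the writer setup from the question (the callback signatures come from the ItemStream contract):
public class MyItemWriter implements ItemWriter<Fixing>, ItemStream {

    private final FlatFileItemWriter<Fixing> writer;

    public MyItemWriter() {
        this.writer = new FlatFileItemWriter<>();
        this.writer.setResource(new FileSystemResource("target/output-teste.txt"));
        this.writer.setLineAggregator(new PassThroughLineAggregator<>());
    }

    @Override
    public void open(ExecutionContext executionContext) throws ItemStreamException {
        this.writer.open(executionContext); // open the file once, before the first chunk
    }

    @Override
    public void write(List<? extends Fixing> items) throws Exception {
        this.writer.write(items); // each chunk is written (and flushed) as it is processed
    }

    @Override
    public void update(ExecutionContext executionContext) throws ItemStreamException {
        this.writer.update(executionContext);
    }

    @Override
    public void close() throws ItemStreamException {
        this.writer.close(); // close the file when the step finishes
    }
}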

Java parallelStream spawns other parallelStreams and fails seldom

Considering the following function:
public void execute4() {
    File filePath = new File(filePathData);
    File[] files = filePath.listFiles((File f) -> f.getName().endsWith("CDR"));
    List<CDR> cdrs = new ArrayList<CDR>();
    Arrays.asList(files).parallelStream().forEach(file -> readCDRP(cdrs, file));
    cdrs.sort(cdrsorter);
}
which reads a list of Files containing CDR and executes the readCDRP() which is this:
private void readCDRP(List<CDR> cdrs, File file) {
    final CDR cdr = new CDR(file.getName());
    try (BufferedReader bfr = new BufferedReader(new FileReader(file))) {
        List<String> lines = bfr.lines().collect(Collectors.toList());
        lines.parallelStream().forEach(e -> {
            String[] data = e.split(",", -1);
            CDREntry entry = new CDREntry(file.getName());
            for (int i = 0; i < data.length; i++) {
                entry.setField(i, data[i]);
            }
            cdr.addEntry(entry);
        });
        if (cdr != null) {
            cdrs.add(cdr);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}
What I observe is that occasionally, and NOT all the time, I get an ArrayIndexOutOfBoundsException either in the readCDRP function at the following line (which is awkward, as the entry list inside CDR is an ArrayList):
cdr.addEntry(entry);
or at the last line in execute4(), where I apply the sorting.
I think the issue is that the first parallelStream from execute4 is not in a separate space in memory from the second parallelStream execution inside readCDRP(), and they also seem to wrongly share data. I say "seem" because I can't confirm it; it's just a hunch.
The questions are:
Is my code buggy to the bone from a JDK 8 perspective?
Is there a workaround using the same flow, something like a CountDownLatch for example?
Is it a limitation of the ForkJoinPool?
Thanks for any response.
EDIT(1):
The addEntry method is part of the CDR class itself:
class CDR {
    public final String fileName;
    private final List<CDREntry> entries = new ArrayList<CDREntry>();

    public CDR(String fileName) {
        super();
        this.fileName = fileName;
    }

    public List<CDREntry> getEntries() {
        return entries;
    }

    public List<CDREntry> addEntry(CDREntry e) {
        entries.add(e);
        return entries;
    }

    public String getFileName() {
        return this.fileName;
    }
}
Your code is broken from a thread-safety point of view. In readCDRP you add elements to the cdrs list, which is an ArrayList that does not support concurrent writes. That is why it breaks.
A better approach would be to have readCDR return a cdr object and do something like:
List<CDR> cdrs = Arrays.stream(files)
        .parallel()
        .map(this::readCDR)
        .collect(Collectors.toList());
Also, using parallel streams for IO related operations is generally a bad idea, but that is another discussion.
When you start programming in a functional style, you should prefer immutable objects that can be fully created via construction (or via a builder pattern or some factory method). So your CDREntry class may look like this:
class CDREntry {
    private final String[] fields;
    private final String name;

    public CDREntry(String name, String[] fields) {
        this.name = name;
        this.fields = fields;
    }

    // Add getters and whatever
}
And your CDR class may look like this:
class CDR {
    private final String fileName;
    private final List<CDREntry> entries;

    public CDR(String fileName, List<CDREntry> entries) {
        this.fileName = fileName;
        this.entries = entries;
    }

    public List<CDREntry> getEntries() {
        return entries;
    }

    public String getFileName() {
        return this.fileName;
    }
}
With such classes, things become easier. The rest of the code can be rewritten like this:
public void execute4() {
    File filePath = new File(filePathData);
    File[] files = filePath.listFiles((File dir, String name) ->
            name.endsWith("CDR")); // fixed this line: the original had a compilation error
    List<CDR> cdrs = Arrays.stream(files).parallel()
            .map(this::readCDRP)
            .sorted(cdrsorter)
            .collect(Collectors.toList());
}
private CDR readCDRP(File file) {
    try (BufferedReader bfr = new BufferedReader(new FileReader(file))) {
        // I'm not sure that collecting the lines into a list
        // before the main processing was actually necessary
        return bfr.lines().parallel()
                .map(e -> new CDREntry(file.getName(), e.split(",", -1)))
                .collect(Collectors.collectingAndThen(
                        Collectors.toList(), list -> new CDR(file.getName(), list)));
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
}
In general remember that forEach is usually not the cleanest way to solve the tasks. It may be helpful when you integrate the streams into legacy code, but in general should be avoided.
You are using a parallel stream with a lambda that has side effects (the lambda updates the ArrayList cdrs). Try to use a Collector or a reduction operation instead.
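A sketch of that change for the inner loop from the question (same CDREntry API as above, but collecting results instead of mutating a shared list):
// Inside readCDRP: build the entries without touching shared state.
List<CDREntry> entries = lines.parallelStream()
        .map(line -> {
            String[] data = line.split(",", -1);
            CDREntry entry = new CDREntry(file.getName());
            for (int i = 0; i < data.length; i++) {
                entry.setField(i, data[i]);
            }
            return entry;
        })
        .collect(Collectors.toList()); // the collector is thread-safe; forEach on an ArrayList is not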

Turn off date comment in properties file [duplicate]

Is it possible to force Properties not to add the date comment in front? I mean something like the first line here:
#Thu May 26 09:43:52 CEST 2011
main=pkg.ClientMain
args=myargs
I would like to get rid of it altogether. I need my config files to be diff-identical unless there is a meaningful change.
Guess not. This timestamp is printed in a private method of Properties, and there is no property to control that behaviour.
The only idea that comes to mind: subclass Properties, override store, and copy/paste the content of the store0 method so that the date comment is not printed.
Or provide a custom BufferedWriter that prints all but the first line (which will fail if you add real comments, because custom comments are printed before the timestamp...).
Given the source code of Properties, no, it's not possible. BTW, since Properties is in fact a hash table whose keys are not sorted, you can't rely on the properties always being written in the same order anyway.
I would use a custom algorithm to store the properties if I had this requirement. Use the source code of Properties as a starter.
Based on https://stackoverflow.com/a/6184414/242042 here is the implementation I have written that strips out the first line and sorts the keys.
public class CleanProperties extends Properties {

    private static class StripFirstLineStream extends FilterOutputStream {

        private boolean firstlineseen = false;

        public StripFirstLineStream(final OutputStream out) {
            super(out);
        }

        @Override
        public void write(final int b) throws IOException {
            if (firstlineseen) {
                super.write(b);
            } else if (b == '\n') {
                firstlineseen = true;
            }
        }
    }

    private static final long serialVersionUID = 7567765340218227372L;

    @Override
    public synchronized Enumeration<Object> keys() {
        return Collections.enumeration(new TreeSet<>(super.keySet()));
    }

    @Override
    public void store(final OutputStream out, final String comments) throws IOException {
        super.store(new StripFirstLineStream(out), null);
    }
}
Cleaning looks like this:
final Properties props = new CleanProperties();
try (final Reader inStream = Files.newBufferedReader(file, Charset.forName("ISO-8859-1"))) {
    props.load(inStream);
} catch (final MalformedInputException mie) {
    throw new IOException("Malformed on " + file, mie);
}
if (props.isEmpty()) {
    Files.delete(file);
    return;
}
try (final OutputStream os = Files.newOutputStream(file)) {
    props.store(os, "");
}
This variant is useful if you want to modify the given xxx.conf file in place. The write method skips the first line (e.g. #Thu May 26 09:43:52 CEST 2011) inside store: it swallows output until the end of the first line, and after that it writes normally.
public class CleanProperties extends Properties {

    private static class StripFirstLineStream extends FilterOutputStream {

        private boolean firstlineseen = false;

        public StripFirstLineStream(final OutputStream out) {
            super(out);
        }

        @Override
        public void write(final int b) throws IOException {
            if (firstlineseen) {
                super.write(b);
            } else if (b == '\n') {
                // go to the next line; without this you would get
                // continuous output from the given file
                super.write('\n');
                firstlineseen = true;
            }
        }
    }

    private static final long serialVersionUID = 7567765340218227372L;

    @Override
    public synchronized Enumeration<java.lang.Object> keys() {
        return Collections.enumeration(new TreeSet<>(super.keySet()));
    }

    @Override
    public void store(final OutputStream out, final String comments) throws IOException {
        super.store(new StripFirstLineStream(out), null);
    }
}
Can you not just flag up in your application somewhere when a meaningful configuration change takes place and only write the file if that is set?
You might want to look into Commons Configuration which has a bit more flexibility when it comes to writing and reading things like properties files. In particular, it has methods which attempt to write the exact same properties file (including spacing, comments etc) as the existing properties file.
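A minimal sketch with Commons Configuration 1.x (the file name is a placeholder; unlike java.util.Properties, saving this way does not prepend a timestamp comment):
import org.apache.commons.configuration.PropertiesConfiguration;

public class NoTimestampSave {
    public static void main(String[] args) throws Exception {
        // Load, modify, and save in place; existing layout and comments are preserved.
        PropertiesConfiguration config = new PropertiesConfiguration("app.properties");
        config.setProperty("main", "pkg.ClientMain");
        config.save();
    }
}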
You can handle this by following this Stack Overflow post to retain order:
Write in a standard order: How can I write Java properties in a defined order?
Then write the properties to a string, remove the comments as needed, and finally write to a file:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
properties.store(baos, null);
String propertiesData = baos.toString(StandardCharsets.UTF_8.name());
propertiesData = propertiesData.replaceAll("^#.*(\r|\n)+", ""); // remove all comments
FileUtils.writeStringToFile(fileTarget, propertiesData, StandardCharsets.UTF_8);

// you may want to validate the file is readable by reloading it and
// checking that the expected number of keys matches
InputStream is = new FileInputStream(fileTarget);
Properties testResult = new Properties();
testResult.load(is);

How to create my own Appender in log4j?

I am new to log4j. Can anyone explain how to create my own Appender? i.e., how to implement the classes and interfaces, and how to override them?
Update: the provided solution is valid for Log4J 1.x. If you're looking for 2.x versions, take a look at this article: How to create a custom appender in log4j2
You should extend the AppenderSkeleton class, which (quoting the javadoc) "provides the code for common functionality, such as support for threshold filtering and support for general filters."
If you read the code of AppenderSkeleton, you'll see that it handles almost everything, leaving to you just:
protected void append(LoggingEvent event)
public void close()
public boolean requiresLayout()
The core method is append. Remember that you don't need to implement the filtering logic in it, because it is already implemented in doAppend, which in turn calls append.
Here I made a (quite useless) class that stores the log entries in an ArrayList, just as a demo.
public /*static*/ class MyAppender extends AppenderSkeleton {

    ArrayList<LoggingEvent> eventsList = new ArrayList<>();

    @Override
    protected void append(LoggingEvent event) {
        eventsList.add(event);
    }

    public void close() {
    }

    public boolean requiresLayout() {
        return false;
    }
}
Ok, let's test it:
public static void main(String[] args) {
    Logger l = Logger.getLogger("test");
    MyAppender app = new MyAppender();
    l.addAppender(app);
    l.warn("first");
    l.warn("second");
    l.warn("third");
    l.trace("fourth shouldn't be printed");
    for (LoggingEvent le : app.eventsList) {
        System.out.println("***" + le.getMessage());
    }
}
You should have "first", "second", "third" printed; the fourth message shouldn't be printed, since the log level of the root logger is debug while the event level is trace. This proves that AppenderSkeleton implements level management correctly for us. So that definitely seems the way to go... now the question: why do you need a custom appender while there are many built-in ones that log to almost any destination? (BTW, a good place to start with log4j: http://logging.apache.org/log4j/1.2/manual.html)
If you would like to do some manipulation or make decisions based on the event, you can do it like this:
@Override
protected void append(LoggingEvent event) {
    String message = null;
    if (event.locationInformationExists()) {
        StringBuilder formatedMessage = new StringBuilder();
        formatedMessage.append(event.getLocationInformation().getClassName());
        formatedMessage.append(".");
        formatedMessage.append(event.getLocationInformation().getMethodName());
        formatedMessage.append(":");
        formatedMessage.append(event.getLocationInformation().getLineNumber());
        formatedMessage.append(" - ");
        formatedMessage.append(event.getMessage().toString());
        message = formatedMessage.toString();
    } else {
        message = event.getMessage().toString();
    }
    switch (event.getLevel().toInt()) {
        case Level.INFO_INT:
            // your decision
            break;
        case Level.DEBUG_INT:
            // your decision
            break;
        case Level.ERROR_INT:
            // your decision
            break;
        case Level.WARN_INT:
            // your decision
            break;
        case Level.TRACE_INT:
            // your decision
            break;
        default:
            // your decision
            break;
    }
}
I would like to extend @AgostinoX's answer to support properties-file configuration and the ability to start and stop the logging capture:
public class StringBufferAppender extends org.apache.log4j.AppenderSkeleton {

    StringBuffer logs = new StringBuffer();
    AtomicBoolean captureMode = new AtomicBoolean(false);

    public void close() {
        // nothing to clean up
    }

    public boolean requiresLayout() {
        return false;
    }

    @Override
    protected void append(LoggingEvent event) {
        if (captureMode.get())
            logs.append(event.getMessage());
    }

    public void start() {
        //System.out.println("[StringBufferAppender|start] - Start capturing logs");
        logs = new StringBuffer(); // reset the buffer (the original shadowed the field with a local variable here)
        captureMode.set(true);
    }

    public StringBuffer stop() {
        //System.out.println("[StringBufferAppender|stop] - Stop capturing logs");
        captureMode.set(false);
        StringBuffer data = new StringBuffer(logs);
        logs = null;
        return data;
    }
}
Now all you have to do is define it in the log4j.properties file:
log4j.rootLogger=...., myAppender # add your appender name here
log4j.appender.myAppender=com.roi.log.StringBufferAppender # point it to the implementation
Then, whenever you want to enable it at runtime:
Logger logger = Logger.getRootLogger();
StringBufferAppender appender = (StringBufferAppender)logger.getAppender("myAppender");
appender.start();
and when you want to stop it:
StringBuffer sb = appender.stop();
Alternatively, to create your own Appender you can implement the Appender interface directly and override its methods.
