I have made a nice UI with three different logs (a general log and two class specific ones).
Every log can print different lines with different colors.
I was thinking of doing this so I can show info/errors/warnings.
Now, the thing is that I'd like to have detailed debug output only when I set a variable (something like detailedDebug = true).
I'd like something like this:
Simple | Detailed
Error thrown in ... | Error thrown in ... + dump of all variables related to the error
Now, with if statements I can achieve that easily, but that seems overly complicated (complicating the code for debugging reasons, too).
How could I implement this (while making it easy to use and most importantly clean)?
Should I make a method in every class that uses the logging features that automatically checks for a variable and then does what is asked?
You should use the log level as the variable that controls the detail. When you want more detail, turn the level down to FINEST.
However, some operations that you wish to log might require considerable resources to compute the detail (for example, you may retrieve info from the DB). In that case you should use if statements, because the resources will be consumed even if the log level is at ERROR.
Example:
The following code will always execute:
logger.log(Level.FINEST, "Some detailed log info which shows the results from DB {0}",
        new Object[]{ getResults() });
If you only want to execute this code when FINEST is enabled, you need to wrap the statement in an if statement:
if (logger.isLoggable(Level.FINEST)) {
// Some intensive logging
}
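To see the difference concretely, here is a self-contained sketch using java.util.logging, where getResults() is a made-up stand-in for the expensive DB call: the unguarded call builds its argument array (and so hits the "database") even though nothing is logged, while the guarded call skips the expensive work entirely.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardDemo {
    static int dbCalls = 0;

    // Stand-in for an expensive lookup (e.g. hitting the database).
    static Object getResults() {
        dbCalls++;
        return "rows";
    }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("demo");
        logger.setLevel(Level.SEVERE); // FINEST is disabled

        // Unguarded: the Object[] argument is built before log() is called,
        // so getResults() runs even though the record is discarded.
        logger.log(Level.FINEST, "results: {0}", new Object[]{ getResults() });

        // Guarded: the level check fails, so getResults() never runs.
        if (logger.isLoggable(Level.FINEST)) {
            logger.log(Level.FINEST, "results: {0}", new Object[]{ getResults() });
        }

        System.out.println("dbCalls = " + dbCalls); // 1, not 2
    }
}
```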
Related
We implemented a small routine so that, for specific errors, we get an email letting us know something happened. It's pretty easy to include some error information in the email, but we would also like to include the previously logged information from when it happened.
At first I tried to retrieve the last lines from the file where the log4j output is saved. The problem is that many other threads are working at the same time, so I never get the relevant information: either the buffer doesn't have time to write the logs, or something else is faster at writing something else.
Is it possible to add an appender to a specific log4j logger at runtime, to retrieve only those specific logs and only when needed?
I would like to still log with the usual commands (log.error, log.warn, ...) but be able to retrieve the log content in a try/catch situation. Something like a memory-only buffer appender?
Would that work for all the child loggers?
Class 1 - call Class 2
Class 2 - call Class 3
Class 3 try/catch an error
log.getLogs returns all logged information from Class 1..3
Or am I dreaming here ?
We are using Jira, and the logs are in files, so I guess no query would work.
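In log4j 1.x this is exactly what a custom AppenderSkeleton (or a WriterAppender wrapping a StringWriter) attached at runtime gives you, and because child loggers propagate to their parents' appenders by default, one appender on a parent logger covers Class 1..3. As an illustration of the mechanism, here is the same idea sketched with java.util.logging, which ships in the JDK (the logger names are made up):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

public class MemoryLogDemo {
    // A handler that keeps formatted records in memory instead of a file.
    static class BufferHandler extends Handler {
        final List<String> lines = new ArrayList<>();

        @Override
        public void publish(LogRecord record) {
            if (isLoggable(record)) {
                lines.add(record.getLevel() + " " + record.getMessage());
            }
        }

        @Override public void flush() {}
        @Override public void close() {}
    }

    static final BufferHandler buffer = new BufferHandler();

    public static void main(String[] args) {
        // Attach the buffer to a parent logger at runtime; child loggers
        // reach it through the normal parent propagation.
        Logger parent = Logger.getLogger("com.example");
        parent.addHandler(buffer);

        Logger child = Logger.getLogger("com.example.service.Class3");
        child.warning("something happened");
        child.severe("boom");

        // In a catch block, these buffered lines can go into the alert email.
        System.out.println(buffer.lines);
    }
}
```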
I have a use case where I need to capture the data flow from one API to another. For example, my code reads data from a database using Hibernate, and during processing I convert one POJO to another, perform some more processing, and finally convert into the final Hibernate result object. In a nutshell, something like POJO1 to POJO2 to POJO3.
In Java, is there a way to deduce that an attribute of POJO3 was made/transformed from a given attribute of POJO1? I'm looking for something that can capture the data flow from one model to another. The tool can work at either compile time or runtime; I am OK with both.
I am looking for a tool which can run alongside the code and provide data-lineage details on each run.
Now, instead of POJOs I will call them states! You have a start position, and you iterate and transform your model through different states. At the end you have a final, terminal state that you would like to persist to the database:
stream(A).map(P1).map(P2).map(P3)....-> set of B
If you use a technique known as event sourcing, yes, you can deduce it. How would that look? Instead of mapping A directly to state P1 and state P1 to state P2, you queue all the operations that are necessary and sufficient to map A to P1, P1 to P2, and so on... If you want to recover P1 or P2 at any time, it is just the product of the queued operations. You can rewind forward or backward at any time, as long as you have not yet changed your DB state. P1, P2, P3 can act as snapshots.
This way you will be able to rebuild the exact mapping flow for this attribute. How finely you queue your operations, whether at the attribute level or more coarse-grained, is up to you.
Here is a good article that depicts event sourcing and how it works: https://kickstarter.engineering/event-sourcing-made-simple-4a2625113224
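A minimal sketch of the idea in plain Java, with trivial string transformations standing in for the real POJO mappings: the operations are queued rather than applied inline, every intermediate state becomes a reproducible snapshot, and replaying a prefix of the queue "rewinds" to P1 or P2.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

public class LineageDemo {
    static final List<String> snapshots = new ArrayList<>();

    public static void main(String[] args) {
        snapshots.clear();

        // Queue the operations (A -> P1 -> P2 -> P3) instead of applying them inline.
        List<UnaryOperator<String>> ops = List.of(
                s -> s.trim(),        // A  -> P1
                s -> s.toUpperCase(), // P1 -> P2
                s -> s + "!"          // P2 -> P3
        );

        String state = "  hello ";
        for (UnaryOperator<String> op : ops) {
            state = op.apply(state);
            snapshots.add(state); // P1, P2, P3 as snapshots
        }
        System.out.println(snapshots); // [hello, HELLO, HELLO!]

        // Rewind: replay only the first two operations to recover P2.
        String p2 = "  hello ";
        for (UnaryOperator<String> op : ops.subList(0, 2)) {
            p2 = op.apply(p2);
        }
        System.out.println(p2); // HELLO
    }
}
```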
UPDATE:
I can think of one more technique to capture the attribute changes. You can instrument your POJOs; it is pretty much the same technique Hibernate uses to enhance POJOs, and the same technique profilers use for tracing. Then you can capture and react to each setter invocation on Pojo1, Pojo2, Pojo3. Not sure I would have gone that way, though...
Here is some detailed reading about bytecode instrumentation: https://www.cs.helsinki.fi/u/pohjalai/k05/okk/seminar/Aarniala-instrumenting.pdf
I can imagine two reasons: either the code was not developed by you, and you want to understand the flow of data and the combinations that convert input to output, or your code is behaving in a way you are not expecting.
I think you need to log the values of all the POJOs, inputs and outputs, to some place you can inspect later for each run.
Example: a database table if you might need the data after hundreds of runs, or a log in an appropriate form if it's a one-off. Then you need to manually follow those data values layer by layer to map each one to the next. With the code available, that should be easy. If you have a different need, please explain.
There are "time travelling debuggers". For Java, a quick search only turned up this:
Chronon Time Travelling Debugger; see this screencast for how it might help you.
Since your transformations probably use setters and getters, this tool might also be interesting: Flow
Writing your own Java agent for tracking this is probably not what you want. You might be able to use AspectJ to add some stack-trace logging to getters and setters. See here for a quick introduction.
I was trying to use System.out.println to help with debugging and found that it wasn't printing to the console. On inspection I found that my program had created four output consoles (one for Java DB processes, one for the DB server, one for debugging the program, and one for program output). I found my expected println in an unexpected console: the DB server output.
I would like to get a handle on these outputs. I expected the System class to have a list of the active output consoles (PrintStreams), something like:
ArrayList<PrintStream> getActivePrintOutputs()
But I don't see one. How do I get them?
System has no concept of multiple output streams beyond those specified by out and err, and you can access those by just referencing System.out and System.err respectively.
If there are other consoles or output streams in use, they must have been created elsewhere in your code (or in the code of a library you're using).
Normally you have only one active System.out output stream, so there is no reason for the system to maintain a list.
If you want to trace all the PrintStreams created, you can use instrumentation to track their creation, or put a breakpoint in the constructor of the class and debug your program.
NOTE: It is normal for a program to create multiple log files for different purposes, and these might be the PrintStreams you are thinking of.
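One low-tech way to see which output actually goes through System.out is to wrap it at startup (a sketch using only the JDK; the tag and message are arbitrary): every line written through System.out gets recorded and tagged before being forwarded, so anything that shows up untagged in some console clearly went through a different stream.

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.io.PrintStream;
import java.util.ArrayList;
import java.util.List;

public class TeeOut {
    static final List<String> seen = new ArrayList<>();

    public static void main(String[] args) {
        PrintStream real = System.out;
        // Wrap System.out: record each completed line, then forward it tagged.
        System.setOut(new PrintStream(new OutputStream() {
            private final ByteArrayOutputStream buf = new ByteArrayOutputStream();

            @Override
            public void write(int b) {
                buf.write(b);
                if (b == '\n') {
                    String line = buf.toString().trim();
                    seen.add(line);
                    real.println("[out] " + line);
                    buf.reset();
                }
            }
        }, true));

        System.out.println("hello from System.out"); // forwarded as "[out] hello from System.out"
        System.setOut(real); // restore the original stream
    }
}
```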
I'm using the commoncrawl example code from their "Mapreduce for the Masses" tutorial. I'm trying to make modifications to the mapper, and I'd like to be able to log strings to some output. I'm considering setting up a NoSQL DB and just pushing my output to it, but it doesn't feel like a good solution. What's the standard way to do this kind of logging from Java?
While there is no special solution for logs aside from the usual logger (at least none I am aware of), I can suggest some approaches:
a) If the logs are for debugging purposes, indeed write the usual debug logs. For failed tasks you can find them via the UI and analyze them.
b) If these logs are a kind of output you want alongside the other output of your job, assign them a special key and write them to the context. Then, in the reducer, you will need some special logic to put them in the output.
c) You can create a directory on HDFS and make the mapper write there. This is not the classic way for MR, because it is a side effect, but in some cases it can be fine. Especially taking into account that each mapper will create its own file, you can use the command hadoop fs -getmerge ... to get all the logs as one file.
d) If you want to be able to monitor the progress of your job, the number of errors, etc., you can use counters.
Is there a cleaner way for me to write debug-level log statements? In some ways one could say the string literals are commenting the code and providing logging in one line, and that that is already quite clean. But after I add debug-level log statements, I find the code much harder to read up and down. Take this example (I may update with a real example when I get back to my home PC):
int i = 0;
logger.debug("Setting i to 0."); // just an example; would show something more complex
i++;
InputStream is = socket.getInputStream();
DataOutputStream dos = new DataOutputStream(socket.getOutputStream());
IOUtils.write(request, dos);
logger.debug("request written to output");
while (is.read(buffer) != -1)
    logger.debug("Reading into buffer");
logger.debug("Data read completely from socket");
CustomObject.doStuff(buffer);
logger.debug("Stuff has been done to buffer");
You could try using aspects, although these have the limitation that you can only put log statements "around" method calls, i.e. before entering and/or after leaving a specific method.
For more detailed logging, I am afraid there is no other way than hand-coded log messages.
I typically strive to remove the not-so-much-needed debug log statements once I have made sure the code works the way it should (for which unit tests are a must).
Ask yourself: if I run this on a different machine/country/planet, and things go wrong, and all I have is a log file, what information do I need to know what went wrong?
Use debug logs in a for loop or a while loop sparingly. For example, if you are reading 1000 records from a file and performing an operation for each record, you could log before the loop that the file exists, is readable, and is about to yield 1000 records, then print the status after the process is done. If it is, say, 1,000,000 records, you could print something every 100 or 1000 iterations.
In your code, everything except the logger call for setting i to 0 sort of makes sense to me. Also take care to use log.isDebugEnabled() if the string in the logger statement is hard to compute:
ex:
if (log.isDebugEnabled()) {
    logger.debug("Here " + obj.aMethodCallThatTakes5MinsToEvaluateToString());
}
UPDATE 1: SLF4J solves only half the problem.
if (slfLog.isDebugEnabled()) {
    slfLog.debug(obj.getObjectThatTakes5Mins());
}
Yes, the toString() is prevented, but if you are logging an actual object which is the result of some computation, that computation is not prevented.
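The other half can be closed with the lambda overloads most loggers have since grown: java.util.logging accepts a Supplier&lt;String&gt; (Log4j 2 and SLF4J 2.x offer similar lambda/fluent forms), so the expensive computation itself is deferred until the level check passes. A sketch with the JDK logger, where expensive() is a made-up stand-in:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyLogDemo {
    static int calls = 0;

    // Made-up stand-in for an expensive computation.
    static String expensive() {
        calls++;
        return "expensive result";
    }

    public static void main(String[] args) {
        Logger log = Logger.getLogger("demo.lazy");
        log.setLevel(Level.INFO); // FINE is disabled

        // Supplier form: the lambda body only runs if FINE is loggable,
        // so expensive() is never invoked here.
        log.fine(() -> "value = " + expensive());

        System.out.println("calls = " + calls); // 0
    }
}
```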
If you want very fine-grained debug instructions, I am not sure you can separate the actual code from the debug code.
If you want it at a higher level, adding your logging using AOP could help keep things easier to read; maybe use a proxy object?
But if you have debug instructions as fine-grained as in the example you provided, IMHO you could gain more by replacing the loggers with unit tests. Don't write in a log that something happened; test that it did.
You will not be able to do much if you dislike the log statements. The information needs to be there somehow.
What you CAN do is think carefully about what NEEDS to be there. You are basically writing for the log-file reader, who by definition does not know how your program works, so the information needs to be concise and correct. Personally, I very frequently add the method name to the log statement.
Also note that SLF4J allows you to use the {} syntax, which helps somewhat:
log.debug("main() date={}, args={}", new java.util.Date(), args);
Also note that having unit tests allows you to move much of this there, simply because you know that THAT works.