Why printStackTrace is not recommended in ATG? - java

Why is printStackTrace not recommended for use in Oracle ATG? If anyone knows, please tell me.
Thanks in advance.

Avoiding printStackTrace is not limited to ATG; it should be applied to Java development in general. The SonarQube validation rule explains it as follows:
Throwable.printStackTrace(...) prints a throwable and its stack trace
to some stream. Loggers should be used instead to print throwables, as
they have many advantages:
Users are able to easily retrieve the logs.
The format of log messages is uniform and allows users to browse the logs easily.
With the availability of logError you can easily spool your error messages, not only to the standard dynamo.log and error.log, but you can also see them in the context of the rest of your application logs.
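As a hedged illustration of the difference (java.util.logging is used here for portability; in an ATG component you would call the component's own logError instead, and the class and method names below are invented):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class OrderProcessor {
    private static final Logger LOG = Logger.getLogger(OrderProcessor.class.getName());

    public void process(String orderId) {
        try {
            parseId(orderId);
        } catch (NumberFormatException e) {
            // Avoid: e.printStackTrace() writes to System.err, bypassing the
            // logging configuration, so the trace never reaches the log files.
            // Prefer: hand the throwable to the logger, so the message and the
            // full stack trace land in the configured, uniformly formatted logs.
            LOG.log(Level.SEVERE, "Could not process order " + orderId, e);
        }
    }

    private long parseId(String orderId) {
        return Long.parseLong(orderId);
    }

    public static void main(String[] args) {
        new OrderProcessor().process("not-a-number");
    }
}
```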
There is another SO question touching the same topic available here.

Related

Exception Standard for different logging levels

Which is more suitable, the exception object (e) or e.getMessage(), for log.info(), log.error(), and log.debug()?
What rule of thumb should be followed for the different logging levels?
It really depends on why the exception is being thrown. I would add the WARN level to the consideration as well. If this is an unexpected error that should not happen, meaning something is wrong with the codebase, you should definitely log the whole exception object, especially to get the stack trace that allows a developer to find and potentially fix the issue faster. Such a situation should be logged at ERROR level if something is wrong with the system, or at WARN if something is wrong with the client's data.
The INFO level should really not contain exception details; it should keep information easy to read by non-developers (for example, testers) and describe the most important parts of the data-processing flow.
I think it is up to you whether to put the exception at DEBUG level, but I would still recommend not doing it, just to keep things clearer, or else using e.getMessage() to describe it.
P.S. In general I would redirect this question to this page, since it is a general SE question, but since you asked about a particular Java feature I wanted to keep things in the right place.
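A small sketch of the convention described above, using java.util.logging for illustration (WARNING/SEVERE standing in for WARN/ERROR; the class and messages are made up):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class ImportJob {
    private static final Logger LOG = Logger.getLogger(ImportJob.class.getName());

    public void importRecord(String record) {
        LOG.info("Importing record"); // flow information only, no exception details
        try {
            validate(record);
        } catch (IllegalArgumentException e) {
            // Bad client data: log the full throwable at WARN level.
            LOG.log(Level.WARNING, "Rejected record: " + record, e);
        } catch (RuntimeException e) {
            // Unexpected bug in the codebase: full throwable at ERROR level.
            LOG.log(Level.SEVERE, "Unexpected failure importing record", e);
        }
    }

    private void validate(String record) {
        if (record == null || record.isEmpty()) {
            throw new IllegalArgumentException("empty record");
        }
    }

    public static void main(String[] args) {
        new ImportJob().importRecord("");   // warned, not crashed
        new ImportJob().importRecord("ok"); // info only
    }
}
```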
Don't try to create a fixed rule about including or not including the stacktrace depending on the log level. Instead, when creating a log entry, ask yourself:
Who will read that entry? A user, a system administrator, or a developer?
What information will be useful to that reader for understanding the situation? It's better to add too much information than to omit important parts.
If the entry is to be read by a developer, is the text enough or should I include the stacktrace? Non-developers typically get confused when seeing a stacktrace, but developers very much appreciate it.
This will greatly improve the quality of your logging.
As a very rough rule of thumb, include the stacktrace whenever you log an exception. An exception means that something went wrong, which might involve analysis by a developer, who will be very unhappy if the log entry only reads "NullPointerException" without a hint where it came from.
Of the typical log levels, INFO might be the one not addressing developers (thus not asking for a stacktrace), but generally you don't want to use INFO for exceptions.

How can logs generated by my java applications be aggregated in Azure?

I am running a Java application on Azure Cloud Services.
I have seen this article which shows how to configure a java project to send logs to Azure insights using log4j: https://azure.microsoft.com/en-us/documentation/articles/app-insights-java-trace-logs/
However, for various reasons I do not want to do this. My java application already writes multiple log files in a log directory (application.log, error.log, etc). I want to point to this directory in Azure Insights so that it can aggregate these log files over multiple instances of my application running on Cloud Services and then present them to me. (In a similar way that AWS Cloudwatch presents logs). How can I accomplish this?
I think this is a deep question and would require a bit of custom coding to accomplish it.
The problem as I read it is: you have multiple log files being written to a location, and you just want to parse those files and send the log lines. Moreover, you don't want to add log appenders to your Java code, for various reasons.
The short answer is, no. There isn't a way to have AI monitor a directory of log files and then send them.
The next short answer is also no. AI can't do it out of the box, but Log Analytics can. This is a bit more heavy-handed, and I haven't read enough about it to say it would fit this scenario. However, since you're using a cloud service, you could more than likely install the agent and start collecting logs.
The next answer is the long answer, kinda. You can do this but it would require a lot of custom coding on your part. What I envision is something akin to how the Azure Diagnostics Sink works.
You would need to create an application that reads the log files and enumerates them line by line; it would then parse each line based on some format and call the TrackTrace() method to log it.
This option requires some considerable thought since you would be reading the file and then determining what to do with it.
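A rough, stdlib-only sketch of that approach. The `sendTrace` consumer below is a placeholder for whatever telemetry call you would use (e.g. the TrackTrace() call mentioned above); the class name, directory layout, and file suffix are all assumptions for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.function.Consumer;
import java.util.stream.Stream;

public class LogDirectoryShipper {

    private final Consumer<String> sendTrace; // stand-in for TrackTrace()

    public LogDirectoryShipper(Consumer<String> sendTrace) {
        this.sendTrace = sendTrace;
    }

    /** Read every *.log file under the directory and ship each line. */
    public void shipOnce(Path logDir) throws IOException {
        try (Stream<Path> files = Files.list(logDir)) {
            files.filter(p -> p.toString().endsWith(".log"))
                 .forEach(this::shipFile);
        }
    }

    private void shipFile(Path file) {
        try (Stream<String> lines = Files.lines(file)) {
            // A real implementation would remember the last offset per file
            // and stitch multi-line stack traces into a single trace record.
            lines.forEach(sendTrace);
        } catch (IOException e) {
            System.err.println("Could not read " + file + ": " + e.getMessage());
        }
    }
}
```

This is exactly the "considerable thought" part: offset tracking, multi-line entries, and rotation handling are what make a production version non-trivial.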

Java Applications Error/Crash reporting

I have seen how bugsense, sentry etc work. I like the way you can get the error/crash reports. What I want is a solution like those but for internal use. Using an external api like bugsense is out of the question.
Is there any similar open source solution that can be used internally?
If you are already logging properly and require more granular configuration, you could give Logstash a try. It is basically a log shipper with various input, filter, and output modules, including email as an output method.
The input can be configured to parse your existing log files, receive messages from a queue, and much more. For UDP/IP-based logging, take a look at logstash-gelf, which is basically an adapter for automated generation of well-formed logging metadata. If you plan on parsing your log files, look out for the "multiline codec" for parsing stack traces and "grok" as a filter for parsing the log entries. For grok, I found that the Grok debugger is a big help.
E.g., once you have your logs routed through the input, and your logging is configured to use a named logger for the emailed messages, you can tag them in the Logstash input and tell the output to send an email when a message with that tag comes through.
I think a specific logging framework (SLF4J, Log4j, ...) could also be used.
It can send an e-mail, for example for FATAL logs, with the stack trace and whatever else you want.
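As a hedged sketch of that idea, a Log4j 1.x properties fragment that mails FATAL entries via SMTPAppender; the host, addresses, and subject are placeholders:

```properties
# Console stays the normal output; 'email' fires only for FATAL entries.
log4j.rootLogger=INFO, console, email

log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d %-5p %c - %m%n

log4j.appender.email=org.apache.log4j.net.SMTPAppender
log4j.appender.email.Threshold=FATAL
log4j.appender.email.SMTPHost=smtp.example.com
log4j.appender.email.From=app@example.com
log4j.appender.email.To=oncall@example.com
log4j.appender.email.Subject=FATAL error in MyApp
log4j.appender.email.BufferSize=1
log4j.appender.email.layout=org.apache.log4j.PatternLayout
log4j.appender.email.layout.ConversionPattern=%d %-5p %c - %m%n
```

Because PatternLayout logs the throwable by default, the stack trace is included in the mail body.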

Repressing logging from another package

My code queries a DB to see if an object is present and then sets a value. The problem is that even when the object is present, my logs get flooded with the DB library's logging output. How can I suppress these logging statements?
Yes, it can be done! If you provide information about which logging system you use and other helpful details (like which DB it is and where the logs you want to suppress come from), then someone can tell you how to do it in your situation.
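As one common pattern (shown with java.util.logging; Log4j and Logback offer the same per-package control in their configuration files), you can raise the level of the noisy package's logger so only warnings and errors get through. The package name below is made up:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class QuietThirdParty {
    // Keep a strong reference so the JUL LogManager doesn't discard the setting.
    private static final Logger NOISY = Logger.getLogger("com.thirdparty.db");

    public static void main(String[] args) {
        // Raise the threshold for everything logged under com.thirdparty.db:
        // INFO and below from that package is now suppressed.
        NOISY.setLevel(Level.WARNING);

        System.out.println(NOISY.isLoggable(Level.INFO));    // false
        System.out.println(NOISY.isLoggable(Level.WARNING)); // true
    }
}
```

Loggers created later under that name (e.g. "com.thirdparty.db.ConnectionPool") inherit the level, so one line silences the whole package tree.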

What information to include at each log level? [duplicate]

Possible Duplicates:
Where/what level should logging code go?
Debug Levels
Is there a convention, a standard, or a widely used guide which will help with logging in Java? Specifically, what to include at each level (verbose, debug, ... etc.) rather than the actual mechanism of logging.
There are many guides out there of what to include at each log level, but none of them are specific; they're all vague, and this makes it hard to "follow orders".
Any tips would be appreciated.
It's subject to personal interpretation, but mine is (in order from finest to coarsest):
Trace - The finest logging level. Can be used to log very specific information that is only relevant in a true debugging scenario, e.g., log every database access or every HTTP call etc.
Debug - Information to primary help you to debug your program. E.g., log every time a batching routine empties its batch or a new file is created on disk etc.
Info - General application flow, such as "Starting app", "connecting to db", "registering ...". In short, information which should help any observer understand what the application is doing in general.
Warn - Warns of errors that can be recovered from, such as failing to parse a date or using an unsafe routine. Note, though, that we should still try to obey the fail-fast principle and not hide, e.g., configuration errors behind warning messages, even if the application can fall back to a default value.
Error - Denotes an often unrecoverable error. Such as failing to open a database connection.
Fatal/Critical - Used to log an error the application cannot recover from, which might lead to immediate program termination.
In the end it's up to you to define what suits you best. Personally, I run most production systems with a logging level of Info, where I'm mostly interested in following the application's main logic and, of course, catching all warnings and errors.
Apart from code clutter, there is no such thing as too much logging: any logging that helps you reproduce or understand problems better is good logging. On a performance note, most logging systems (e.g., log4j) allow you to configure which levels are actually appended to the physical log, which is a great thing.
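A minimal illustration of that last point with java.util.logging (mapping roughly Trace→FINEST, Debug→FINE, Info→INFO): the logger's configured level decides which calls actually reach the handlers, so verbose statements can stay in the code.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LevelDemo {
    public static void main(String[] args) {
        Logger log = Logger.getLogger("demo.app");
        log.setLevel(Level.INFO); // the production setting from the answer

        // Cheap guard: at INFO the FINE message string is never even built.
        if (log.isLoggable(Level.FINE)) {
            log.fine("batch emptied, size=" + 42); // skipped at INFO
        }
        log.info("Starting app"); // appended

        System.out.println(log.isLoggable(Level.FINE)); // false
        System.out.println(log.isLoggable(Level.INFO)); // true
    }
}
```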
For what it's worth, we're using the following log levels:
DEBUG level messages give highly-detailed and/or specific information, only useful for tracking down problems.
INFORMATION messages give general information about what the system is doing. (e.g. processing file X)
WARNING messages warn the user about things which are not ideal, but should not affect the system. (e.g. configuration X missed out, using default value)
ERROR messages inform the user that something has gone wrong, but the system should be able to cope. (e.g. connection lost, but will try again)
CRITICAL messages inform the user when an un-recoverable error occurs. (i.e. I am about to abort the current task or crash)
I think the most important thing with log levels is to figure out a scheme, document it, and stick with it. Although making log level consistent between programs would be nice, as long as you've used common sense in defining your log levels, users will tolerate a certain amount of variance between programs.
Simply log what you think would be important if you were to come back later and need to read the logs. This of course means that your Object.toString() implementations now need to be nice and readable, not an impossible-to-read dump. It also means you need to do sensible things like quoting strings, etc.
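For example, a readable toString() with quoted string fields (the class and its fields are invented for illustration):

```java
public class Customer {
    private final String name;
    private final int orders;

    public Customer(String name, int orders) {
        this.name = name;
        this.orders = orders;
    }

    @Override
    public String toString() {
        // Quote the string field so embedded spaces or empty values
        // are unambiguous when the object is dumped into a log line.
        return "Customer{name=\"" + name + "\", orders=" + orders + "}";
    }

    public static void main(String[] args) {
        System.out.println(new Customer("Ada Lovelace", 3));
        // → Customer{name="Ada Lovelace", orders=3}
    }
}
```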
