I've found solutions for clearing the log file on each boot, but that is not what I want.
I only want to clear my log4j log file when I explicitly choose to, i.e. upon clicking a "clear log" button that calls an API to clear the log.
Is there a built-in log4j function to clear the log, or how can I clear it manually from Java code?
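One idea I have (untested) is to reopen the appender's file with append=false, which should truncate it. A rough sketch for log4j 1.x, assuming the file appender is named "file" (both the appender name and the approach are assumptions on my part):

    import java.io.IOException;
    import org.apache.log4j.FileAppender;
    import org.apache.log4j.Logger;

    public static void clearLog() throws IOException {
        FileAppender appender = (FileAppender) Logger.getRootLogger().getAppender("file");
        // Reopening the same file with append=false truncates it;
        // the appender then continues writing to the fresh, empty file.
        appender.setFile(appender.getFile(), false, appender.getBufferedIO(), appender.getBufferSize());
    }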
I use logback as well as log4j2 in my Java web apps for logging. So far I've set up log rotation (and purging) from within logback and log4j2, but now I intend to use logrotate at the infrastructure level, since there are lots of services (in other languages as well) and it's easier to maintain one common way of handling log files.
While doing a POC, I set up the Java app to write logs to a file named application.log and also set up logrotate with a size criterion of 1 MB. As expected, when the file reached 1 MB, logrotate rotated it by renaming it to application.log.1. At this point I expected the Java app to continue writing new logs to application.log. However, the logs kept getting written to the rotated file, i.e. application.log.1.
This makes me wonder whether the component within logback/log4j2 that writes log content to the file tracks the file by its name or by something else, such as an inode number or a file handle, since the original active log file was not deleted but merely renamed.
I'm aware of the copytruncate option in logrotate, which copies the active log file and then truncates it, but I don't want to use it: the truncation can happen before log-shipping agents on the machine (which push the logs to systems like Elasticsearch and CloudWatch) have processed all the entries, which can lead to loss of log events.
How can I get the logging component to always write logs to a file named application.log even after the original file underneath gets moved?
The logging framework opens the file for writing and leaves the OutputStream open until the application exits, the file is rolled over, or similar. On a Unix system you can move the file, rename it, or delete it, but the application will keep writing to it; it has no way of knowing the file was externally manipulated.
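A tiny stand-alone demo of this effect (Unix only; the file names are arbitrary):

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class MovedFileDemo {
        public static void main(String[] args) throws IOException {
            try (FileOutputStream out = new FileOutputStream("application.log")) {
                out.write("before rotation\n".getBytes());
                out.flush();
                // Simulate logrotate: rename the file while the stream is open.
                Files.move(Paths.get("application.log"), Paths.get("application.log.1"));
                // This line ends up in application.log.1: the stream follows
                // the open file (the inode), not the name.
                out.write("after rotation\n".getBytes());
            }
        }
    }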
If you are using Log4j 2, you should use the RollingFileAppender with a Delete action to remove old files.
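For illustration, a hedged Log4j 2 configuration sketch along those lines (the file names, size trigger, number of kept files, and 30-day age limit are all placeholders, not values from the question):

    <RollingFile name="File" fileName="logs/application.log"
                 filePattern="logs/application.log.%i">
      <PatternLayout pattern="%d %p %c - %m%n"/>
      <SizeBasedTriggeringPolicy size="1 MB"/>
      <DefaultRolloverStrategy max="10">
        <!-- Delete rotated files older than 30 days -->
        <Delete basePath="logs">
          <IfFileName glob="application.log.*"/>
          <IfLastModified age="30d"/>
        </Delete>
      </DefaultRolloverStrategy>
    </RollingFile>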
I am trying to programmatically purge log files from a running(!) system consisting of several Java and non-Java servers. I use Java's File.delete() method and it usually works fine. I am also perfectly fine with log files that are currently in use not being deleted, so I just log a warning whenever File.delete() returns false.
However, log files that are currently still being written to by non-Java applications (Postgres, Apache HTTPD, etc.; Java applications might be affected too, but I haven't noticed it yet, and they all use the same logging framework anyway, which seems to be fine) are not actually deleted, which is what I expected, yet File.delete() returns true for them.
Not only do these files still exist on the file system (Windows Explorer and "dir" still show them), they are also inaccessible afterwards: when I try to open them with a text editor I get "access denied" or similar error messages; when I try to copy them with Explorer, it also claims that I do not have permissions; and when I check their "properties" in Explorer, it says "You do not have permission to view or edit this object's permissions".
Just to be clear: before I ran the File.delete() operation, I could access and delete these files without any problems; the delete operation "breaks" them. Once I stop the application, the file disappears, and on restart the application creates it from scratch and everything is back to normal.
The problem is that when the application is NOT restarted after the log file purge, it logs to nirvana.
This behavior reminds me a bit of file deletion on Linux: if you delete a file that an application still holds open, it disappears from the file system, but the application, still holding a file handle, will happily continue writing to it, and you will never be able to access it afterwards. The only difference is that here the files are still visible in the file system, yet otherwise inaccessible.
I should mention that both my Java program and the applications themselves run as the "system" user.
I also tried Files.delete(), which supposedly throws an IOException describing the error... but apparently there is no error.
To work around the problem, I tried checking whether the files are currently locked, using the method described in https://stackoverflow.com/a/1390669/5837050, but this only works for some of the files, not all of them.
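As I understand it, that check boils down to roughly this (simplified; as mentioned, it is not reliable for all of the files):

    import java.io.IOException;
    import java.nio.channels.FileChannel;
    import java.nio.channels.FileLock;
    import java.nio.channels.OverlappingFileLockException;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    static boolean seemsLocked(Path path) {
        try (FileChannel channel = FileChannel.open(path, StandardOpenOption.WRITE);
             FileLock lock = channel.tryLock()) {
            return lock == null; // another process holds a lock
        } catch (IOException | OverlappingFileLockException e) {
            return true; // cannot even open or lock it - treat as "in use"
        }
    }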
I basically need a reliable way (at least for Windows; if it also worked on Linux, that would be great) to determine whether a file is still being used by some program, so that I can simply skip deleting it.
Any hints appreciated.
I haven't reproduced it, but it seems like expected OS behaviour: normally different applications run as different users, each owning its own files of this type. I understand, though, that you want a master purge job in Java that finds the log files not in use and deletes them (running with sufficient privileges, of course).
So, given that the OS behaviour is not going to change, I would suggest configuring your logs with rolling-file-appender policies and then deleting only the files that match those policies.
Check the rolling policies for logback to get an idea:
http://logback.qos.ch/manual/appenders.html#onRollingPolicies
For example, if your appender's rolling policy is "older than one day or larger than 1 GB", then just delete files whose last modification date is more than one day old or whose size exceeds 1 GB. With this rule you can be sure to delete only log files that are no longer in use.
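A minimal sketch of such a purge rule (the one-day age and the 1 GB size are the example thresholds from above; the directory is whatever your services log into):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.attribute.FileTime;
    import java.time.Instant;
    import java.time.temporal.ChronoUnit;
    import java.util.stream.Stream;

    static void purgeRolledLogs(Path logDir) throws IOException {
        FileTime cutoff = FileTime.from(Instant.now().minus(1, ChronoUnit.DAYS));
        try (Stream<Path> files = Files.list(logDir)) {
            files.filter(Files::isRegularFile)
                 .filter(p -> {
                     try {
                         // older than one day, or larger than ~1 GB
                         return Files.getLastModifiedTime(p).compareTo(cutoff) < 0
                                 || Files.size(p) >= 1_000_000_000L;
                     } catch (IOException e) {
                         return false; // skip files we cannot inspect
                     }
                 })
                 .forEach(p -> {
                     try {
                         Files.delete(p);
                     } catch (IOException e) {
                         // log a warning instead of failing the whole purge
                         System.err.println("Could not delete " + p + ": " + e);
                     }
                 });
        }
    }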
Note that with a proper rolling policy you may not even need your purge method; look at this configuration example:
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
  <fileNamePattern>application.%d{yyyy-MM-dd}.log</fileNamePattern>
  <!-- keep 30 days' worth of history capped at 3GB total size -->
  <maxHistory>30</maxHistory>
  <totalSizeCap>3GB</totalSizeCap>
</rollingPolicy>
I hope this helps you a bit!
I have configured log4j properly and it works fine for writing logs from my application; I used log4j.xml in a Spring web application.
The problem is that if the current log directory crashes (becomes unavailable), I need to write logs to some other directory during that time.
Any suggestion on how to meet the above requirement is appreciated.
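One direction I'm wondering about: if switching to Log4j 2 is an option, would its FailoverAppender cover this? A hedged configuration sketch of what I mean (names and paths are placeholders; as I understand it, the primary appender needs ignoreExceptions="false" for the failover to trigger):

    <Appenders>
      <File name="Primary" fileName="/var/log/app/application.log"
            ignoreExceptions="false">
        <PatternLayout pattern="%d %p %c - %m%n"/>
      </File>
      <File name="Fallback" fileName="/tmp/app/application.log">
        <PatternLayout pattern="%d %p %c - %m%n"/>
      </File>
      <!-- Loggers reference "Failover"; it falls back when Primary fails. -->
      <Failover name="Failover" primary="Primary">
        <Failovers>
          <AppenderRef ref="Fallback"/>
        </Failovers>
      </Failover>
    </Appenders>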
I was asked to find a way to protect a log file from being edited by the user (not the root user) that runs an application's JBoss instance (Linux environment).
My first idea was to use chattr +a as the root user, so that only appending new rows to the log file is allowed.
But Log4j is configured to rotate the file each day, so I suppose I would have to repeat the chattr command for each file created every day.
I am also not sure whether the previous day's file, in its "append only" state, can be zipped by the rotation.
Any suggestion or alternative way to proceed is welcome.
One way is to create your own "daily rolling file appender". In a similar situation I created a file appender based on the CustodianDailyRollingFileAppender (see the answers to this question for more information). Put your custom version in a "log4j-custom.jar" and place that in the JBoss common lib directory. The last step is to update the log4j configuration file to use the custom file appender.
In your custom file appender you can execute commands (1) to change the file attributes before and after rolling the log files. Make sure to test your custom rolling file appender with corner cases like "there are no previous log files": I found a couple of (easy to solve) bugs in the original custodian appender.
(1) Or use the new Java 7 POSIX file system options.
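For the command-execution variant, a minimal sketch of what such a hook could run after a new log file is created (that the JVM user may run chattr via sudo is an assumption; before zipping the previous day's file, the rolling code would have to remove the flag again with chattr -a):

    import java.io.File;
    import java.io.IOException;

    // Hypothetical hook, called by the custom appender after creating a new log file.
    static void makeAppendOnly(File logFile) throws IOException, InterruptedException {
        // chattr +a allows only appends to the file, even for its owner.
        Process p = new ProcessBuilder("sudo", "chattr", "+a", logFile.getAbsolutePath())
                .inheritIO()
                .start();
        if (p.waitFor() != 0) {
            throw new IOException("chattr +a failed for " + logFile);
        }
    }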