Extending log4j/slf4j Logger - java

I am in a situation where multiple threads (from the same JVM) are writing to the same file (logging by using Logger).
I need to delete this file at some point, and the next use of the Logger should recreate the file and continue logging.
The logging library is synchronized, so I do not need to worry about concurrent logging to the same file.
But I want to add an external operation on this file, namely deleting it, so I have to somehow synchronize the logging (the Logger) with this delete operation, because I do not want to delete the file while the Logger is writing to it.
Things I thought of:
Use FileChannel.lock to lock the file (something the Logger would have to do as well). I decided against this because of the following note in the documentation:
File locks are held on behalf of the entire Java virtual machine. They
are not suitable for controlling access to a file by multiple threads
within the same virtual machine.
This means that in my case (same JVM, multiple threads) locking will not have the effect I want.
What are my options?
Am I missing something vital here?
Perhaps there is a way to do this using functionality that already exists in the Logger?

It seems you are looking for log rolling and log archiving functionality. Log rolling is a standard feature of Log4j and Logback (both commonly used behind SLF4J).
You can configure the logging library to create a new log file based on the size of the current file or the time of day. You can also configure the file name format for the rolled files and then have an external process archive or delete old rolled log files.
You can refer to the Log4j 2 configuration given in this answer.
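For illustration only, a minimal Log4j 2 rolling configuration might look roughly like this; the file names, layout pattern, and rollover limits below are made-up values, not the configuration from the linked answer:

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <!-- Roll the log daily and whenever it reaches 10 MB; keep at most 10 rolled files -->
    <RollingFile name="Rolling" fileName="logs/app.log"
                 filePattern="logs/app-%d{yyyy-MM-dd}-%i.log.gz">
      <PatternLayout pattern="%d{ISO8601} [%t] %-5level %logger{36} - %msg%n"/>
      <Policies>
        <TimeBasedTriggeringPolicy/>
        <SizeBasedTriggeringPolicy size="10 MB"/>
      </Policies>
      <DefaultRolloverStrategy max="10"/>
    </RollingFile>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Rolling"/>
    </Root>
  </Loggers>
</Configuration>

An external process can then archive or delete the rolled app-*.log.gz files without touching the file the appender is currently writing.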

Filesystems are generally synchronized by the OS, so you can simply delete the file without having to worry about locks. Depending on how log4j holds the file open, that delete might fail, though, so you need to add a retry loop:
// Retry the delete a few times in case log4j still has the file open.
int attempts = 3;
final File logfile = new File(theLogFilePath);
while ((attempts > 0) && logfile.exists() && !logfile.delete()) {
    --attempts;
    try {
        Thread.sleep(1000);   // give the logger a moment to release the file
    } catch (InterruptedException e) {
        attempts = 0;         // stop retrying if interrupted
    }
}
This isn't exactly clean code, but then what you are doing isn't clean anyway. ;)
You interfere with the logging process rather rudely, but since a user could also delete that file at any time, log4j should handle it gracefully. Worst case, my guess is that a message that was about to be logged gets lost, but that is probably not an issue considering that you are deleting the log file anyway.
For a cleaner implementation see this question.

A trick I've used in the past when there is no other option (see Saptarshi Basu's log-rolling suggestion https://stackoverflow.com/a/53011323/823393) is to just rename the current log file.
After the rename, any outstanding logging that is queued up for it continues into the renamed file. Usually, any new log requests will create a new file.
All that remains is to clean up the renamed file. You can usually manage this using some external process, or just delete any old renamed log files whenever this process triggers.
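A rough sketch of the rename step in plain Java (the naming scheme, class name, and the assumption that the appender reopens a fresh file are mine, not something the logging library guarantees):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// Illustrative sketch of the rename trick; paths and the naming scheme are made up.
public class LogRotator {
    public static void rotate(String logFilePath) {
        Path current = Paths.get(logFilePath);
        // Append a timestamp so successive renames don't collide.
        Path renamed = current.resolveSibling(
                current.getFileName() + "." + System.currentTimeMillis());
        try {
            // Writers holding the old file handle keep writing into the renamed file;
            // the appender usually starts a fresh file on the next log request.
            Files.move(current, renamed, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException e) {
            // The file may already be gone, or the move may not be atomic on this filesystem.
        }
        // An external job can later archive or delete the renamed files.
    }
}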

Related

Java File watcher with multiple JVM watching single directory for incoming files

I have a situation where two Java applications are watching a directory for incoming files. Say there is a directory DIR that is being watched by two JVM processes for any files with the extension .SGL.
The problem we face here is that sometimes both nodes are notified about new files and both try to process the same file.
Usually we handle these situations using a database: both nodes try to insert into a table with a unique file-name column, and only the one that succeeds continues processing.
But for this situation, we don't have database.
What is the best way to handle this kind of problem? Can we depend on a file-renaming solution? Is file renaming an atomic operation?
For such a situation Spring Integration suggests FileSystemPersistentAcceptOnceFileListFilter: https://docs.spring.io/spring-integration/reference/html/files.html#file-reading
Stores "seen" files in a MetadataStore to survive application restarts.
The default key is 'prefix' plus the absolute file name; value is the timestamp of the file.
Files are deemed as already 'seen' if they exist in the store and have the
same modified time as the current file.
When you have a shared, persistent MetadataStore for all your application instances, only one of them will process the file. All the others will just filter it out.
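A hedged wiring sketch (the exact API depends on your Spring Integration version; buildSource, the "sgl-" key prefix, and the directory name are illustrative, and any shared ConcurrentMetadataStore implementation, such as a Redis- or JDBC-backed one, should work):

import java.io.File;

import org.springframework.integration.file.FileReadingMessageSource;
import org.springframework.integration.file.filters.FileSystemPersistentAcceptOnceFileListFilter;
import org.springframework.integration.metadata.ConcurrentMetadataStore;

// Sketch: all instances must share the same persistent metadata store for the
// "accept once" guarantee to hold across JVMs.
public class SglDirectorySource {
    public static FileReadingMessageSource buildSource(ConcurrentMetadataStore sharedStore) {
        FileReadingMessageSource source = new FileReadingMessageSource();
        source.setDirectory(new File("DIR"));
        source.setFilter(new FileSystemPersistentAcceptOnceFileListFilter(sharedStore, "sgl-"));
        return source;
    }
}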
Every watcher (even two in the same JVM) should always be notified of the new File being added.
If you want to divide the work, you can either
use one JVM to run twice as many threads and divide the work via a queue.
use an operation which will only succeed for one JVM, e.g.
file rename
create a lock file
lock the file itself
Is file renaming an atomic operation?
Yes, only one process can successfully rename a file, even if both attempt to rename it to the same name.
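A minimal sketch of the rename-to-claim idea (FileClaimer and the work-directory layout are made up for illustration):

import java.io.File;

// Each JVM tries to move the incoming file into its own work directory;
// only one rename can succeed, so only one JVM processes the file.
public class FileClaimer {
    public static File tryClaim(File incoming, File workDir) {
        File claimed = new File(workDir, incoming.getName());
        // renameTo is atomic on the same filesystem; exactly one process wins.
        if (incoming.renameTo(claimed)) {
            return claimed;   // this JVM owns the file and may process it
        }
        return null;          // the other JVM claimed it first
    }
}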

Is file creation process-safe among different processes at the OS level (Ubuntu)?

I have two Java applications which rely on a file-existence check mechanism, where one application waits until the file is deleted and then creates a new file to manage concurrency. If this is not process-safe, my application fails.
The pseudocode:
if file exists:
    do something with it
It's not concurrency-safe, as nothing ensures the file does not get deleted between the first and the second line.
The safest way would be to use a FileLock. If you are planning to react to file creation/deletion events on Linux, I'd recommend using some inotify-based solution.
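A minimal FileLock sketch, assuming both processes agree to coordinate on the same file (the file name and the work done while holding the lock are illustrative):

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

// Cross-process coordination via an OS-level exclusive lock.
public class LockExample {
    public static void main(String[] args) throws IOException {
        try (FileChannel channel = FileChannel.open(Paths.get("shared.flag"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);
             FileLock lock = channel.lock()) {
            // Only one process at a time holds this exclusive lock, so the
            // exists/delete/create sequence can be performed safely here.
        }
    }
}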

How can I safely read Log4j logs while another thread is logging at the same time?

If I have multiple threads that use log4j to write to a single log file, and I want another thread to read it back out, is there a way to safely read those logs (line by line) such that I always read a full line?
EDIT:
The reason for this is that I need to upload all logs to a central location, and they might be logs that are days old or logs that are just being written.
You should use a read-write lock.
A read lock can be held by multiple readers as long as no one is writing to the file, but a write lock can only be held by one thread at a time, no matter what.
Just make sure that when your writing thread is done writing, it releases the write lock so the reading threads can read. Likewise, always release the read lock when the readers are done so log4j can continue to write.
Check out
http://docs.oracle.com/javase/1.5.0/docs/api/java/util/concurrent/locks/ReadWriteLock.html
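A minimal sketch of that idea, assuming every code path that writes to the log goes through the same gate; LogFileGate is a made-up helper, and log4j itself knows nothing about this lock:

import java.util.concurrent.locks.ReentrantReadWriteLock;

// Both the logging side and the upload/reader side must use this gate
// for the locking to mean anything.
public class LogFileGate {
    private static final ReentrantReadWriteLock LOCK = new ReentrantReadWriteLock();

    public static void write(Runnable logCall) {
        LOCK.writeLock().lock();
        try {
            logCall.run();        // e.g. () -> logger.info("...")
        } finally {
            LOCK.writeLock().unlock();
        }
    }

    public static void read(Runnable readCall) {
        LOCK.readLock().lock();   // many readers may hold this at once
        try {
            readCall.run();       // read the file line by line; writers are blocked
        } finally {
            LOCK.readLock().unlock();
        }
    }
}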
However, come to think of it, what is your purpose here? If you simply want to monitor your logs, you should use a different solution rather than having a monitor thread within the same application; that does not seem to make sense. If the data is available within the application/service, why pass it off to a file and read it right back in?
It is going to be a pain to implement what you are describing, especially since you have to deal with file rolling.
For your specific requirement, there are better choices:
If the location you are backing up to can be written directly (i.e. it is mounted in your file system), it is better to simply point your file rolling at that backup directory; or
Make use of log management tools like Splunk to monitor and manage your log files (so that you don't even need to copy them to that backup directory); or
Even if you need to do the backup all by yourself, you don't need to (and have no reason to) do it in a separate thread. Write a shell script that monitors your log directory and uses a tool like rsync (or similar logic you write yourself) to upload only the files that differ between the local and remote locations.

Locking and unlocking files using the java API

One of our clients is using some Novel security software that sometimes locks some .class files that our software creates. This causes some nasty problems for them when it occurs, and I am trying to research a workaround that we could add to our error handling to address this problem when it comes up. I am wondering: are there any calls in the Java API that can be used to detect whether a file is locked and, if it is, unlock it?
Before attempting to write to a file, you can check if the file is writable by your java application using File.canWrite(). However, you still might run into an issue if the 3rd party app locks the file in between your File.canWrite() check and when your application actually attempts to write. For this reason, I would code your application to simply go ahead and try to write to the file and catch whatever Exception gets thrown when the file is locked. I don't believe there is a native Java way to unlock a file that has been locked by another application. You could exec a shell command as a privileged user to force things along but that seems inelegant.
File.canWrite() has the race condition that Asaph mentioned. You could try FileChannel.lock() and get an exclusive lock on the file. As long as the .class is on your local disk, this should work fine (file locking can be problematic on networked disks).
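For example, a hedged probe using tryLock (LockProbe and canLockNow are made-up names, and the result can vary by OS and filesystem):

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Probe whether another process currently holds a lock on the file.
public class LockProbe {
    public static boolean canLockNow(Path classFile) {
        try (FileChannel channel = FileChannel.open(classFile, StandardOpenOption.WRITE)) {
            FileLock lock = channel.tryLock();   // null if another program holds a lock
            if (lock != null) {
                lock.release();
                return true;
            }
            return false;
        } catch (IOException e) {
            // On some platforms merely opening a locked file already fails here.
            return false;
        }
    }
}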
Alternatively, depending on how the .class name is discovered, you could create a new name for your .class each time; then if the anti-virus software locks your initial class, you can still create the new one.

Log4J able to recover from disk full?

We have several Java application servers running here, with several apps. They all log with Log4J to the same file system, which we created just for that purpose.
From time to time it happens that the file system runs out of space and the app gets
log4j:ERROR Failed to flush writer,
java.io.IOException
Unfortunately Log4J does not recover from this error, so even after space is freed in the file system, no more logs are written by that app. Are there any options, programming-wise or configuration-wise, to get Log4J going again besides restarting the app?
I didn't test this, but the website of logback states:
Graceful recovery from I/O failures
Logback's FileAppender and all its sub-classes, including
RollingFileAppender, can gracefully recover from I/O failures. Thus,
if a file server fails temporarily, you no longer need to restart your
application just to get logging working again. As soon as the file
server comes back up, the relevant logback appender will transparently
and quickly recover from the previous error condition.
I assume the same would be true for the above situation.
What do you see as an acceptable outcome here? I'd consider writing a new Appender that wraps whichever appender is accessing the disk and tries to do something sensible when it detects IOExceptions. Maybe have it wrap the underlying Appender's write methods in a try-catch block, and send you or a sysadmin an email.
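A very rough sketch of that wrapping idea for log4j 1.x (the class is illustrative; note that log4j's file appenders often report I/O failures through their ErrorHandler rather than throwing, so in practice you may also need to periodically reopen the delegate):

import org.apache.log4j.Appender;
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;

// Delegate to the real file appender and react to failures instead of
// letting logging die silently.
public class ResilientAppender extends AppenderSkeleton {
    private final Appender delegate;

    public ResilientAppender(Appender delegate) {
        this.delegate = delegate;
    }

    @Override
    protected void append(LoggingEvent event) {
        try {
            delegate.doAppend(event);
        } catch (RuntimeException e) {
            // Disk-full errors are often swallowed by the delegate's ErrorHandler,
            // so you might also close and reopen the delegate here, or notify
            // a sysadmin by email.
        }
    }

    @Override
    public void close() {
        delegate.close();
    }

    @Override
    public boolean requiresLayout() {
        return false;
    }
}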
Limit the size of your logs, and try using a custom appender to archive logs to a backup machine with lots of disk space.
