I was asked to find a way to protect a log file from being edited by the (non-root) user that runs the JBoss instance of an application (Linux environment).
My first idea was to use chattr +a as the root user, so that only appending new lines to the log file is allowed.
But Log4j is configured to rotate the file daily, so I suppose I would have to repeat the chattr command for each newly created file.
I am also not sure whether the previous day's file, in its "append only" state, can be zipped by the rotation.
Any suggestion or alternative approach is welcome.
One way is to create your own "daily rolling file appender". In a similar situation, I created a file appender based on the CustodianDailyRollingFileAppender (see the answers to this question for more information). Put your custom version in a "log4j-custom.jar" and place that in the JBoss common lib-directory. The last step is to update the Log4j configuration file to use the custom file appender.
In your custom file appender you can execute commands (1) to change the file attributes before and after rolling log files. Make sure to test your custom rolling file appender with corner cases like "there are no previous log files": I found a couple of (easy to solve) bugs in the original custodian appender.
(1) Or use the new Java 7 POSIX file system options.
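As a sketch of what such a hook could do, here is a minimal example using the Java 7 POSIX file-attribute API to make a rotated file read-only. Note that chattr +a itself has no Java API and shelling out to it needs root, so this only approximates the append-only idea; the class and method names are illustrative:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Hook a custom rolling appender could call right after closing a
// rolled-over log file: drop all write bits so the file becomes read-only.
class LogProtector {
    static void makeReadOnly(Path rotatedLog) throws IOException {
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString("r--r--r--");
        Files.setPosixFilePermissions(rotatedLog, perms);
    }
}
```

Running chattr +a via ProcessBuilder instead would require the JVM (or a sudo rule) to have root privileges, which is usually undesirable for an application server.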
I'm using the Tinylog library with the "rolling file" function to write logs to a file. Sometimes I need to clear the file.
I tried to do it with normal Java methods, but that breaks the file and makes it hard to read.
Is there a way to clear the log file at any point without breaking it?
You can use standard policies to start new log files at defined events. This is the recommended way.
If you need more flexibility, you can use the DynamicPolicy to start a new log file programmatically. This policy is part of tinylog 2.5; the first milestone is expected to be released this month and will include the new policy.
Configuration:
writer = rolling file
writer.file = foo.log
writer.policies = dynamic
Start new log file in Java:
DynamicPolicy.setReset();
I use logback as well as log4j2 in my Java web apps for logging. So far, I've set up log rotation (and purging) from within logback and log4j2, but now I intend to use logrotate at an infrastructure level, since there are lots of services (in other languages as well) and it's relatively easier to maintain one common way of handling log files.
While doing a POC, I set up the Java app to write logs to a file application.log and also set up logrotate with a size criterion (of 1 MB). As expected, when the file size reached 1 MB, the log file was rotated by moving it to another file named application.log.1. At this point, I expected the Java app to continue writing new logs to the application.log file. However, the logs kept getting written to the rotated file, i.e. application.log.1.
This makes me wonder whether the component within logback/log4j2 that writes log content to the file tracks the file by its name or by something else, like an inode number or a file handle, since the original active log file was not deleted but just given a new name.
I'm aware of the copytruncate option in logrotate, which creates a copy of the active log file and then truncates it, but I don't want to use this, as it can lead to loss of log events for agents running on the machines which push the logs to systems like Elasticsearch and CloudWatch: truncation can happen before the agents have processed all the log entries.
How can I get the logging component to always write logs to a file named application.log even after the original file underneath gets moved?
The logging framework opens the file for writing and leaves the OutputStream open until the application ends, the file is rolled over, or similar. On a Unix system you can move the file, rename it, or delete it, but the application will keep writing to it; it has no way of knowing the file was externally manipulated.
If you are using Log4j 2, you should use the RollingFileAppender with a DeleteAction to delete old files.
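The inode behaviour described above can be reproduced with plain JDK I/O (a minimal demonstration for Linux; file names are illustrative):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Demonstrates why logrotate's move confuses a running logger: the open
// stream follows the inode, so writes after the move land in the renamed file.
class MoveDemo {
    static String run(Path dir) throws IOException {
        Path log = dir.resolve("application.log");
        Path rotated = dir.resolve("application.log.1");
        try (OutputStream out = Files.newOutputStream(log)) {
            out.write("before rotation\n".getBytes(StandardCharsets.UTF_8));
            Files.move(log, rotated);                // what logrotate does
            out.write("after rotation\n".getBytes(StandardCharsets.UTF_8)); // still lands in .1
        }
        return new String(Files.readAllBytes(rotated), StandardCharsets.UTF_8);
    }
}
```

Both lines end up in application.log.1, and application.log is gone until the logging framework reopens it.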
I usually clear log files when I'm in development mode and need a fresh start to focus only on the things I have to test.
If I clear a log file on Linux (I have not tested Windows), logback stops writing to that file.
Maybe it's something about open handles and file descriptors on Linux.
How can I recover from this situation without restarting the application?
Is it possible to have an appender that can automatically recover from this situation?
While your application is running (and Logback within your application has an open handle to the log file) ...
You won't be able to delete the file on Windows
You will be able to delete the file on Linux, but as far as Logback is concerned the file still exists until Logback closes its open file handle. So Logback won't know that the file has been deleted, yet since it has been deleted, Logback cannot actually write anything to disk. This situation remains until Logback is re-initialised (and your FileAppender recreates the file), which is typically done on application startup.
There's an open issue against Logback requesting a change in Logback's behaviour in this situation.
If your goal here is to have log output which focuses on recent activity only, then you could define a rolling file appender with a minimal size and no history, just to retain (for example) the last 1 MB of data; this might help offer some focus on recent events only.
Alternatively, you'll have to:
Vote for this issue
Use grep/awk to find the relevant parts of your log file (you can easily grep on timestamps), even if they are in a log file which contains the last few hours of log events
Ease the burden of restarting your application in this scenario by writing a simple script which does something like: (1) stop application; (2) delete log file; (3) start application
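The size-capped appender suggested above might look like this in a Logback configuration (a sketch; names and sizes are illustrative):

```xml
<appender name="RECENT" class="ch.qos.logback.core.rolling.RollingFileAppender">
  <file>recent.log</file>
  <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
    <fileNamePattern>recent.%i.log</fileNamePattern>
    <minIndex>1</minIndex>
    <maxIndex>1</maxIndex>
  </rollingPolicy>
  <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
    <maxFileSize>1MB</maxFileSize>
  </triggeringPolicy>
  <encoder>
    <pattern>%d %-5level %logger - %msg%n</pattern>
  </encoder>
</appender>
```

With a window of one rolled file, you keep at most the last couple of megabytes of recent events.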
I have an issue with the Error.log file generated by Java.
It's too big (currently >10 GB): I can't open it with Notepad++/Sublime Text etc., and as it's on a dedicated computer, transferring it with TeamViewer makes TeamViewer crash.
I would like to know if there is a way to configure how the Error.log file is generated.
I want to have one file per day and keep only the last 7 days.
Can I configure Java to do that, or do I need to redirect System.err to a file?
Thanks.
There are some Java libraries you can use to manage log files, the most popular being Log4j. So if you can edit the source code, this library can help achieve what you want. Besides that, there are tools that can handle large log files and give you search functionality, reports and so on: try looking at Splunk, Elasticsearch, Kibana.
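For reference, a Log4j 2 appender configuration along these lines, with one file per day and deletion after 7 days, might look like this (a sketch; paths and names are illustrative):

```xml
<RollingFile name="ErrorLog" fileName="logs/error.log"
             filePattern="logs/error-%d{yyyy-MM-dd}.log">
  <PatternLayout pattern="%d %-5level %logger - %msg%n"/>
  <Policies>
    <TimeBasedTriggeringPolicy/>
  </Policies>
  <DefaultRolloverStrategy>
    <Delete basePath="logs">
      <IfFileName glob="error-*.log"/>
      <IfLastModified age="7d"/>
    </Delete>
  </DefaultRolloverStrategy>
</RollingFile>
```

The TimeBasedTriggeringPolicy rolls the file once per day (driven by the date pattern), and the Delete action removes rolled files older than 7 days.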
If you have the source code available, just change the Log4j configuration. If not, then try the following:
create a job which periodically checks the log file and renames it when its size exceeds some configurable value.
I am developing a Java desktop application. This app needs a configuration to start. For this, I want to ship a defaultConfig.properties or defaultConfig.xml file with the application, so that if the user doesn't select any configuration, the application will start with the help of the defaultConfig file.
But I am afraid my application will crash if the user accidentally edits the defaultConfig file. So is there any mechanism through which I can check, before the application starts, whether the config file has changed?
How other applications (out in the market) deal with this type of situation in which their application depends on a configuration file?
If the user edited the config file accidentally or intentionally, then the application won't run in the future unless he re-installs the application.
I agree with David in that using a MD5 hash is a good and simple way to accomplish what you want.
Basically you would use the MD5 hashing code provided by the JDK (or elsewhere) to generate a hash based on the default data in Config.xml, and save that hash to a file (or hardcode it into the function that does the checking). Then, each time your application starts, load the saved hash, load the Config.xml file, and again generate a hash from it. Compare the saved hash to the one generated from the loaded config file: if they are the same, the data has not changed; if they differ, the data has been modified.
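A minimal sketch of that check using the JDK's MessageDigest (class and method names are illustrative):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Compute an MD5 hex digest of the config file's bytes and compare it to
// the digest saved when the file was shipped.
class ConfigCheck {
    static String md5Hex(byte[] data) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("MD5").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    static boolean isUnmodified(Path config, String expectedMd5) throws Exception {
        return md5Hex(Files.readAllBytes(config)).equals(expectedMd5);
    }
}
```
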
However, as others are suggesting, if the file should not be editable by the user, then you should consider storing the configuration in a manner that the user cannot easily edit. The easiest thing I can think of would be to wrap the OutputStream that you use to write the Config.xml file in a GZIPOutputStream. Not only will this make it difficult for the user to edit the configuration file, it will also make Config.xml take up less space.
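A sketch of the GZIP-wrapping idea using the JDK's GZIPOutputStream/GZIPInputStream (class and method names are illustrative):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// The config is written and read through gzip streams, so it is no longer
// casually editable in a text editor.
class GzipConfig {
    static void write(Path file, String xml) throws IOException {
        try (Writer w = new OutputStreamWriter(
                new GZIPOutputStream(Files.newOutputStream(file)), StandardCharsets.UTF_8)) {
            w.write(xml);
        }
    }

    static String read(Path file) throws IOException {
        try (BufferedReader r = new BufferedReader(new InputStreamReader(
                new GZIPInputStream(Files.newInputStream(file)), StandardCharsets.UTF_8))) {
            StringBuilder sb = new StringBuilder();
            int c;
            while ((c = r.read()) != -1) sb.append((char) c);
            return sb.toString();
        }
    }
}
```
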
I am not at all sure that this is a good approach but if you want to go ahead with this you can compute a hash of the configuration file (say md5) and recompute and compare every time the app starts.
Come to think of it, if the user is forbidden to edit a file why expose it? Stick it in a jar file for example, far away from the user's eyes.
If the default configuration is not supposed to be edited, perhaps you don't really want to store it in a file in the first place? Could you not store the default values of the configuration in the code directly?
Remove write permissions from the file. This way the user gets a warning before trying to change the file.
Add a hash or checksum and verify it before loading the file.
For added security, you can replace the simple hash with a cryptographic signature.
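A sketch of the signature variant using the JDK's java.security API. In practice the private key stays with the vendor at build time and only the public key ships with the application; the names here are illustrative:

```java
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

// Sign the config bytes with the private key; verify with the public key
// at application startup before loading the file.
class ConfigSigner {
    static KeyPair newKeyPair() throws NoSuchAlgorithmException {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        return gen.generateKeyPair();
    }

    static byte[] sign(byte[] data, PrivateKey key) throws GeneralSecurityException {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(key);
        sig.update(data);
        return sig.sign();
    }

    static boolean verify(byte[] data, byte[] signature, PublicKey key)
            throws GeneralSecurityException {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initVerify(key);
        sig.update(data);
        return sig.verify(signature);
    }
}
```

Unlike a plain hash, the user cannot recompute a valid signature after editing the file, because the private key is not on their machine.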
From what I have found online so far, there seem to be different code-wise approaches; none appears to be a 100 percent fix. For example:
The DirectoryWatcher implements AbstractResourceWatcher to monitor a specified directory.
Code found here twit88.com develop-a-java-file-watcher
One problem encountered: if I copy a large file from a remote network source to the local directory being monitored, that file will already show up in the directory listing before the network copy has completed. If I try to do almost anything non-trivial to the file at that moment, like move it to another directory or open it for writing, an exception will be thrown, because the file is not really completely there yet and the OS still has a write lock on it.
found on the same site, further below.
How the program works: it accepts a ResourceListener class, which is FileListener. If a change is detected, an onAdd, onChange, or onDelete event will be fired, passing the affected file to the listener.
I will keep searching for more solutions.
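For illustration, the listener pattern described above can be sketched with a naive poll-based watcher (hypothetical names; note it shares the partially-copied-file problem quoted earlier, since a file still being copied is indistinguishable from a complete one; Java 7's java.nio.file.WatchService is the standard alternative):

```java
import java.io.File;
import java.util.HashMap;
import java.util.Map;

// Hypothetical listener interface matching the FileListener described above.
interface FileListener {
    void onAdd(File file);
    void onChange(File file);
    void onDelete(File file);
}

// Minimal poll-based directory watcher: each poll() compares the directory
// against the previous snapshot and fires the corresponding events.
class SimpleDirectoryWatcher {
    private final File dir;
    private final FileListener listener;
    private Map<String, Long> snapshot;

    SimpleDirectoryWatcher(File dir, FileListener listener) {
        this.dir = dir;
        this.listener = listener;
        this.snapshot = scan();   // prime the initial snapshot without firing events
    }

    private Map<String, Long> scan() {
        Map<String, Long> current = new HashMap<>();
        File[] files = dir.listFiles();
        if (files != null) {
            for (File f : files) current.put(f.getName(), f.lastModified());
        }
        return current;
    }

    void poll() {
        Map<String, Long> current = scan();
        for (Map.Entry<String, Long> e : current.entrySet()) {
            Long previous = snapshot.get(e.getKey());
            if (previous == null) listener.onAdd(new File(dir, e.getKey()));
            else if (!previous.equals(e.getValue())) listener.onChange(new File(dir, e.getKey()));
        }
        for (String name : snapshot.keySet()) {
            if (!current.containsKey(name)) listener.onDelete(new File(dir, name));
        }
        snapshot = current;
    }
}
```
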