I am looking for a simple daily file rolling utility that will create a file with the current date, as <filename>-yyyy-mm-dd.txt,
and will continue to write to it; when the date changes,
it will create a new file and continue to write to that.
On googling, most of the results I got were using logback or log4j.
Is there any way to get a daily file roller in Java without logback or log4j?
Assuming you're talking about java.util.logging - you will have to implement your own java.util.logging.Handler.
I haven't tried this myself, but take a look at Tomcat's JULI package; they've implemented a Handler that should be able to do rolling-file logging. The class is org.apache.juli.FileHandler and can be found on Maven Central (version 9.0.0.M4).
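If you do end up implementing your own java.util.logging.Handler, a minimal sketch could look like the following (DailyFileHandler is a hypothetical name; it reopens the file whenever the date changes and does no locking or error recovery):

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.time.LocalDate;
import java.util.logging.ErrorManager;
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.SimpleFormatter;

public class DailyFileHandler extends Handler {
    private final String prefix; // base path, e.g. "/var/log/myapp"
    private LocalDate currentDate;
    private PrintWriter writer;

    public DailyFileHandler(String prefix) {
        this.prefix = prefix;
        setFormatter(new SimpleFormatter());
    }

    @Override
    public synchronized void publish(LogRecord record) {
        if (!isLoggable(record)) {
            return;
        }
        try {
            rollIfNeeded();
            writer.print(getFormatter().format(record));
            writer.flush();
        } catch (IOException e) {
            reportError(null, e, ErrorManager.WRITE_FAILURE);
        }
    }

    // opens <prefix>-yyyy-MM-dd.txt in append mode, switching files when the date changes
    private void rollIfNeeded() throws IOException {
        LocalDate today = LocalDate.now(); // toString() is yyyy-MM-dd
        if (writer == null || !today.equals(currentDate)) {
            if (writer != null) {
                writer.close();
            }
            currentDate = today;
            writer = new PrintWriter(new FileWriter(prefix + "-" + today + ".txt", true));
        }
    }

    @Override
    public synchronized void flush() {
        if (writer != null) {
            writer.flush();
        }
    }

    @Override
    public synchronized void close() {
        if (writer != null) {
            writer.close();
        }
    }
}

You would then attach it with something like Logger.getLogger("myapp").addHandler(new DailyFileHandler("/var/log/myapp")).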
The simplest thing I've done is extending a Writer and using a date format. It could be optimized to keep the file open instead of reopening it on every write, but it works.
public class StackOverflowFileRolling extends Writer {
String baseFilePath;
String datePattern;
public StackOverflowFileRolling (String baseFilePath, String datePattern) {
super();
this.baseFilePath = baseFilePath;
this.datePattern = datePattern;
}
@Override
public void write (char[] cbuf, int off, int len) throws IOException {
String name = baseFilePath + new SimpleDateFormat(datePattern).format(new Date()) + ".log";
File file = new File(name);
file.getParentFile().mkdirs();
try (FileWriter fileWriter = new FileWriter(file, true)) {
fileWriter.write(cbuf, off, len);
}
}
@Override
public void flush () throws IOException {
// nothing to flush: the FileWriter is opened, written and closed inside write()
}
@Override
public void close () throws IOException {
// nothing is held open between writes, so there is nothing to close
}
}
Then use it by wrapping it in a PrintWriter:
try (
PrintWriter logger = new PrintWriter(
new StackOverflowFileRolling("/var/log/stackoverflowfilerolling", "yyyy-MM-dd"))
) {
//this will append 2 lines in the log, since the FileWriter is in append mode
logger.println("test");
logger.println("test2");
}
EDIT: some improvements to create the parent directories and to use a PrintWriter.
Related
I want to process files with a Flink stream in which two lines belong together: the first line is a header and the second line the corresponding text.
The files are located on my local file system. I am using the readFile(fileInputFormat, path, watchType, interval, pathFilter, typeInfo) method with a custom FileInputFormat.
My streaming job class looks like this:
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<Read> inputStream = env.readFile(new ReadInputFormatTest("path/to/monitored/folder"), "path/to/monitored/folder", FileProcessingMode.PROCESS_CONTINUOUSLY, 100);
inputStream.print();
env.execute("Flink Streaming Java API Skeleton");
and my ReadInputFormatTest like this:
public class ReadInputFormatTest extends FileInputFormat<Read> {
private transient FileSystem fileSystem;
private transient BufferedReader reader;
private final String inputPath;
private String headerLine;
private String readLine;
public ReadInputFormatTest(String inputPath) {
this.inputPath = inputPath;
}
@Override
public void open(FileInputSplit inputSplit) throws IOException {
FileSystem fileSystem = getFileSystem();
this.reader = new BufferedReader(new InputStreamReader(fileSystem.open(inputSplit.getPath())));
this.headerLine = reader.readLine();
this.readLine = reader.readLine();
}
private FileSystem getFileSystem() {
if (fileSystem == null) {
try {
fileSystem = FileSystem.get(new URI(inputPath));
} catch (URISyntaxException | IOException e) {
throw new RuntimeException(e);
}
}
return fileSystem;
}
@Override
public boolean reachedEnd() throws IOException {
return headerLine == null;
}
@Override
public Read nextRecord(Read r) throws IOException {
r.setHeader(headerLine);
r.setSequence(readLine);
headerLine = reader.readLine();
readLine = reader.readLine();
return r;
}
}
As expected, the headers and the text are stored together in one object. However, the file is read eight times. So the problem is the parallelization. Where and how can I specify that a file is processed only once, but several files in parallel?
Or do I have to change my custom FileInputFormat even further?
I would modify your source to emit the available filenames (instead of the actual file contents) and then add a new processor to read a name from the input stream and then emit pairs of lines. In other words, split the current source into a source followed by a processor. The processor can be made to run at any degree of parallelism and the source would be a single instance.
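A rough sketch of that split, reusing the Read type from the question (FileNameSource is hypothetical here - e.g. a SourceFunction<String> that watches the folder and emits each new file path once):

// single-instance source that only emits file paths
DataStream<String> fileNames = env
    .addSource(new FileNameSource("path/to/monitored/folder"))
    .setParallelism(1);

// parallel processor: opens each file and emits header/text pairs
DataStream<Read> reads = fileNames
    .flatMap(new FlatMapFunction<String, Read>() {
        @Override
        public void flatMap(String path, Collector<Read> out) throws Exception {
            try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
                String header;
                while ((header = reader.readLine()) != null) {
                    Read r = new Read();
                    r.setHeader(header);
                    r.setSequence(reader.readLine());
                    out.collect(r);
                }
            }
        }
    })
    .setParallelism(4); // files are distributed across instances, but each file is read once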
This is a followup to anonymous file streams reusing descriptors
As per my previous question, I can't depend on code like this (it happens to work in JDK 8, for now):
RandomAccessFile r = new RandomAccessFile(...);
FileInputStream f_1 = new FileInputStream(r.getFD());
// some io, not shown
f_1 = null;
FileInputStream f_2 = new FileInputStream(r.getFD());
// some io, not shown
f_2 = null;
FileInputStream f_3 = new FileInputStream(r.getFD());
// some io, not shown
f_3 = null;
However, to prevent accidental errors and as a form of self-documentation, I would like to invalidate each file stream after I'm done using it - without closing the underlying file descriptor.
Each FileInputStream is meant to be independent, with positioning controlled by the RandomAccessFile. I share the same FileDescriptor to prevent any race conditions arising from opening the same path multiple times. When I'm done with one FileInputStream, I want to invalidate it so as to make it impossible to accidentally read from it while using the second FileInputStream (which would cause the second FileInputStream to skip data).
How can I do this?
notes:
the libraries I use require compatibility with java.io.*
if you suggest a library (I prefer built-in Java semantics if at all possible), it must be commonly available (packaged) for Linux (the main target) and usable on Windows (experimental target)
but Windows support isn't absolutely required
Edit: in response to a comment, here is my workflow:
RandomAccessFile r = new RandomAccessFile(path, "r"); // path is a String
int header_read;
int header_remaining = 4; // header length, initially
byte[] ba = new byte[header_remaining];
ByteBuffer bb = ByteBuffer.allocate(header_remaining);
while ((header_read = r.read(ba, 0, header_remaining)) > 0) {
header_remaining -= header_read;
bb.put(ba, 0, header_read);
}
byte[] header = bb.array();
// process header, not shown
// the RandomAccessFile above reads only a small amount, so buffering isn't required
r.seek(0);
FileInputStream f_1 = new FileInputStream(r.getFD());
Library1Result result1 = library1.Main.entry_point(f_1);
// process result1, not shown
// Library1 reads the InputStream in large chunks, so buffering isn't required
// invalidate f_1 (this question)
r.seek(0);
int read;
byte[] buffer = new byte[4096];
while ((read = r.read(buffer)) > 0 && library1.keepGoing()) { // was 'library1.continue()'; renamed since 'continue' is a reserved word
library2.process(buffer, read);
}
// the RandomAccessFile above is read in large chunks, so buffering isn't required
// in a previous edit the RandomAccessFile was used to create a FileInputStream. Obviously that's not required, so ignore
r.seek(0);
Reader r_1 = new BufferedReader(new InputStreamReader(new FileInputStream(r.getFD())));
Library3Result result3 = library3.Main.entry_point(r_1);
// process result3, not shown
// I'm not sure how Library3 uses the reader, so I'm providing buffering
// invalidate r_1 (this question) - bonus: frees the buffer
r.seek(0);
FileInputStream f_2 = new FileInputStream(r.getFD());
Library1Result result1 = library1.Main.entry_point(f_2);
// process result1 (reassigned), not shown
// Yes, I actually have to call 'library1.Main.entry_point' *again* - the same comments apply as before
// invalidate f_2 (this question)
//
// I've been told to be careful when opening multiple streams from the same
// descriptor if one is buffered. This is very vague. I assume because I only
// ever use any stream once and exclusively, this code is safe.
//
A pure Java solution might be to create a forwarding decorator that checks on each method call whether the stream has been invalidated. For InputStream this decorator may look like this:
public final class CheckedInputStream extends InputStream {
final InputStream delegate;
boolean validated;
public CheckedInputStream(InputStream stream) throws FileNotFoundException {
delegate = stream;
validated = true;
}
public void invalidate() {
validated = false;
}
void checkValidated() {
if (!validated) {
throw new IllegalStateException("Stream is invalidated.");
}
}
@Override
public int read() throws IOException {
checkValidated();
return delegate.read();
}
@Override
public int read(byte b[]) throws IOException {
checkValidated();
return read(b, 0, b.length);
}
@Override
public int read(byte b[], int off, int len) throws IOException {
checkValidated();
return delegate.read(b, off, len);
}
@Override
public long skip(long n) throws IOException {
checkValidated();
return delegate.skip(n);
}
@Override
public int available() throws IOException {
checkValidated();
return delegate.available();
}
@Override
public void close() throws IOException {
checkValidated();
delegate.close();
}
@Override
public synchronized void mark(int readlimit) {
checkValidated();
delegate.mark(readlimit);
}
@Override
public synchronized void reset() throws IOException {
checkValidated();
delegate.reset();
}
@Override
public boolean markSupported() {
checkValidated();
return delegate.markSupported();
}
}
You can use it like:
CheckedInputStream f_1 = new CheckedInputStream(new FileInputStream(r.getFD()));
// some io, not shown
f_1.invalidate();
f_1.read(); // throws IllegalStateException
Under Unix you could generally avoid such problems by dup'ing a file descriptor.
Since Java does not offer such a feature, one option would be a native library which exposes it. jnr-posix does that, for example. On the other hand, jnr depends on a lot more JDK implementation properties than your original question does.
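For illustration only, a sketch of that route with jnr-posix; extracting the raw fd relies on the private "fd" field of java.io.FileDescriptor, which is an OpenJDK implementation detail and may be blocked on newer JDKs:

POSIX posix = POSIXFactory.getPOSIX(); // jnr.posix.POSIXFactory

// pull the raw int out of the FileDescriptor (private field, implementation-specific)
Field fdField = FileDescriptor.class.getDeclaredField("fd");
fdField.setAccessible(true);
int rawFd = fdField.getInt(r.getFD());

// dup gives an independent descriptor you can close without closing the original
// (but note the two descriptors still share one file offset)
int dupFd = posix.dup(rawFd);
FileDescriptor dupDescriptor = new FileDescriptor();
fdField.setInt(dupDescriptor, dupFd);
FileInputStream f = new FileInputStream(dupDescriptor);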
I just stumbled over a question about this subject.
Say I do it like this (note that I used "rw", not "rws" or "rwd"):
try(RandomAccessFile raw = new RandomAccessFile(file, "rw")) {
// write some
}
Does RandomAccessFile#close() behave as if I had explicitly called getFD().sync(), like this?
try(RandomAccessFile raw = new RandomAccessFile(file, "rw")) {
// write some
raw.getFD().sync();
}
I searched RandomAccessFile.java and the source just does this:
public void close() throws IOException {
// what happens here looks irrelevant to syncing
close0();
}
private native void close0() throws IOException;
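As far as I can tell, close() only releases the descriptor and makes no durability promise, so if you need the sync guarantee, the cautious pattern is to request it explicitly before the file is closed:

try (RandomAccessFile raw = new RandomAccessFile(file, "rw")) {
    // write some
    raw.getFD().sync(); // force buffered writes to the device; close() alone is not documented to do this
}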
I have a file with the name foo.txt. This file contains some text. I want to achieve the following functionality:
I launch the program
I write something to the file (for example, add one row: new string in foo.txt)
I want to get ONLY the NEW content of this file.
Can you suggest the best solution to this problem? I also want to resolve a related issue: if I modify foo.txt, I want to see a diff.
The closest tool I found in Java is WatchService, but if I understood correctly, it can only detect the type of event that happened on the filesystem (file created, deleted or modified).
Java Diff Utils is designed for that purpose.
final List<String> originalFileContents = new ArrayList<String>();
final String filePath = "C:/Users/BackSlash/Desktop/asd.txt";
FileListener fileListener = new FileListener() {
@Override
public void fileDeleted(FileChangeEvent paramFileChangeEvent)
throws Exception {
// use this to handle file deletion event
}
@Override
public void fileCreated(FileChangeEvent paramFileChangeEvent)
throws Exception {
// use this to handle file creation event
}
@Override
public void fileChanged(FileChangeEvent paramFileChangeEvent)
throws Exception {
System.out.println("File Changed");
//get new contents
List<String> newFileContents = new ArrayList<String>();
getFileContents(filePath, newFileContents);
//get the diff between the two files
Patch patch = DiffUtils.diff(originalFileContents, newFileContents);
//get single changes in a list
List<Delta> deltas = patch.getDeltas();
//print the changes
for (Delta delta : deltas) {
System.out.println(delta);
}
}
};
DefaultFileMonitor monitor = new DefaultFileMonitor(fileListener);
try {
FileObject fileObject = VFS.getManager().resolveFile(filePath);
getFileContents(filePath, originalFileContents);
monitor.addFile(fileObject);
monitor.start();
} catch (FileNotFoundException e) {
//handle
e.printStackTrace();
} catch (IOException e) {
//handle
e.printStackTrace();
}
Where getFileContents is:
void getFileContents(String path, List<String> contents) throws FileNotFoundException, IOException {
contents.clear();
// try-with-resources so the reader is closed even if reading fails
try (BufferedReader reader = new BufferedReader(new InputStreamReader(new FileInputStream(path), "UTF-8"))) {
String line = null;
while ((line = reader.readLine()) != null) {
contents.add(line);
}
}
}
What I did:
I loaded the original file contents in a List<String>.
I used Apache Commons VFS to listen for file changes, using FileMonitor. You may ask, why? Because WatchService is only available starting from Java 7, while FileMonitor works with at least Java 5 (personal preference, if you prefer WatchService you can use it). note: Apache Commons VFS depends on Apache Commons Logging, you'll have to add both to your build path in order to make it work.
I created a FileListener, then I implemented the fileChanged method.
That method loads the new contents from the file and uses DiffUtils.diff to retrieve all the differences, then prints them.
I created a DefaultFileMonitor, which basically listens for changes to a file, and I added my file to it.
I started the monitor.
After the monitor is started, it will begin listening for file changes.
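For comparison, if Java 7+ is acceptable, the monitoring part could be done with the built-in WatchService roughly like this (a sketch; the diff logic stays exactly the same):

Path dir = Paths.get("C:/Users/BackSlash/Desktop");
WatchService watcher = FileSystems.getDefault().newWatchService();
dir.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY);

while (true) {
    WatchKey key = watcher.take(); // blocks until something in dir changes
    for (WatchEvent<?> event : key.pollEvents()) {
        Path changed = (Path) event.context(); // file name, relative to dir
        if (changed.toString().equals("asd.txt")) {
            // reload the file and diff against originalFileContents, as above
        }
    }
    if (!key.reset()) {
        break; // directory is no longer watchable
    }
}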
Is it possible to force Properties not to add the date comment in front? I mean something like the first line here:
#Thu May 26 09:43:52 CEST 2011
main=pkg.ClientMain
args=myargs
I would like to get rid of it altogether. I need my config files to be diff-identical unless there is a meaningful change.
Guess not. The timestamp is printed in a private method of Properties and there is no property to control that behaviour.
The only idea that comes to my mind: subclass Properties, override store and copy/paste the content of the store0 method so that the date comment is not printed.
Or provide a custom BufferedWriter that prints all but the first line (which will fail if you add real comments, because custom comments are printed before the timestamp...).
Given the source code of Properties: no, it's not possible. BTW, since Properties is in fact a hash table, its keys are not sorted, so you can't rely on the properties always being written in the same order anyway.
I would use a custom algorithm to store the properties if I had this requirement. Use the source code of Properties as a starting point.
Based on https://stackoverflow.com/a/6184414/242042 here is the implementation I have written that strips out the first line and sorts the keys.
public class CleanProperties extends Properties {
private static class StripFirstLineStream extends FilterOutputStream {
private boolean firstlineseen = false;
public StripFirstLineStream(final OutputStream out) {
super(out);
}
@Override
public void write(final int b) throws IOException {
if (firstlineseen) {
super.write(b);
} else if (b == '\n') {
firstlineseen = true;
}
}
}
private static final long serialVersionUID = 7567765340218227372L;
@Override
public synchronized Enumeration<Object> keys() {
return Collections.enumeration(new TreeSet<>(super.keySet()));
}
@Override
public void store(final OutputStream out, final String comments) throws IOException {
super.store(new StripFirstLineStream(out), null);
}
}
Cleaning looks like this:
final Properties props = new CleanProperties();
try (final Reader inStream = Files.newBufferedReader(file, Charset.forName("ISO-8859-1"))) {
props.load(inStream);
} catch (final MalformedInputException mie) {
throw new IOException("Malformed on " + file, mie);
}
if (props.isEmpty()) {
Files.delete(file);
return;
}
try (final OutputStream os = Files.newOutputStream(file)) {
props.store(os, "");
}
This is useful if you want to modify a given xxx.conf file in place.
Here the write method skips the first line (the #Thu May 26 09:43:52 CEST 2011 timestamp) that store writes: it discards everything up to the end of the first line and writes normally after that.
public class CleanProperties extends Properties {
private static class StripFirstLineStream extends FilterOutputStream {
private boolean firstlineseen = false;
public StripFirstLineStream(final OutputStream out) {
super(out);
}
@Override
public void write(final int b) throws IOException {
if (firstlineseen) {
super.write(b);
} else if (b == '\n') {
// write the newline itself, so the timestamp line becomes a blank line
// and the rest of the file keeps its original line positions
super.write('\n');
firstlineseen = true;
}
}
}
private static final long serialVersionUID = 7567765340218227372L;
@Override
public synchronized Enumeration<java.lang.Object> keys() {
return Collections.enumeration(new TreeSet<>(super.keySet()));
}
@Override
public void store(final OutputStream out, final String comments)
throws IOException {
super.store(new StripFirstLineStream(out), null);
}
}
Can you not just flag up in your application somewhere when a meaningful configuration change takes place and only write the file if that is set?
You might want to look into Commons Configuration which has a bit more flexibility when it comes to writing and reading things like properties files. In particular, it has methods which attempt to write the exact same properties file (including spacing, comments etc) as the existing properties file.
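For instance, with Commons Configuration 1.x a round trip that tries to preserve the existing layout might look like this (a sketch; assumes the commons-configuration artifact is on the classpath, and the calls throw ConfigurationException):

PropertiesConfiguration config = new PropertiesConfiguration("app.properties");
config.setProperty("main", "pkg.ClientMain");
config.setProperty("args", "myargs");
config.save(); // rewrites the file, keeping layout and comments where possible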
You can handle this by following this Stack Overflow post to retain order.
Write in a standard order:
How can I write Java properties in a defined order?
Then write the properties to a string, remove the comments as needed, and finally write the result to a file.
ByteArrayOutputStream baos = new ByteArrayOutputStream();
properties.store(baos,null);
String propertiesData = baos.toString(StandardCharsets.UTF_8.name());
propertiesData = propertiesData.replaceAll("^#.*(\r|\n)+",""); // strip the leading timestamp comment line
FileUtils.writeStringToFile(fileTarget,propertiesData,StandardCharsets.UTF_8);
// you may want to validate that the file is readable by reloading it and checking that the expected number of keys matches
try (InputStream is = new FileInputStream(fileTarget)) {
Properties testResult = new Properties();
testResult.load(is);
}