I'm interested in writing some data I'm receiving into two different files (same data).
In my code, I'm using BufferedWriter and FileWriter to write the data to files, and I want, as a backup, to write the same data on the local storage and on the SD card.
My question is whether I need two FileWriters and two BufferedWriters, or whether there is a way to use the same BufferedWriter for both files.
Is there a more efficient way to implement this task?
Reusing the same writer isn't possible unless you spend the time to implement your own subclass of Writer that writes its output to multiple files at the same time (you would then pass an instance of such a CopyingWriter to the constructor of BufferedWriter).
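For illustration only, a minimal sketch of what such a writer might look like (the class name CopyingWriter is hypothetical, carried over from above):

import java.io.IOException;
import java.io.Writer;

// Forwards every write, flush and close to all delegate writers.
class CopyingWriter extends Writer {
    private final Writer[] targets;

    CopyingWriter(Writer... targets) {
        this.targets = targets;
    }

    @Override
    public void write(char[] cbuf, int off, int len) throws IOException {
        for (Writer w : targets) w.write(cbuf, off, len);
    }

    @Override
    public void flush() throws IOException {
        for (Writer w : targets) w.flush();
    }

    @Override
    public void close() throws IOException {
        for (Writer w : targets) w.close();
    }
}

You would then wrap it once: new BufferedWriter(new CopyingWriter(new FileWriter(localFile), new FileWriter(sdCardFile))).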
But I suggest not doing that. Instead: write the file once, then use other, existing technology to copy the output file.
Always aim for simplicity. You intend to create a very special solution where one writer writes to n files, but there is no need for that. Write your file once, then copy it n times. This approach doesn't require "innovation"; you just need to use what already exists (see here for example).
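For example, a minimal sketch using java.nio.file.Files.copy (the two paths are placeholders for your local and SD card locations):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

// After the writer is closed, copy the finished file to the backup location.
Files.copy(Paths.get("/data/local/output.log"),
           Paths.get("/sdcard/backup/output.log"),
           StandardCopyOption.REPLACE_EXISTING);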
Related
I am using several Java FileWriters to append data to output files. The files all remain open during processing and are periodically appended to. Occasionally, I hit a point in the logic where the contents of a currently open file need to be deleted (the file length must become zero) and I start appending again from the top.
Without closing and reopening the file, is there an efficient method to accomplish this using FileWriter?
No. You would have to use a RandomAccessFile, with its performance and character-set issues.
Create a new FileWriter.
You could use FileOutputStream instead of FileWriter and use stream.getChannel().truncate(0). But note it's byte-oriented instead of character-oriented, so I wouldn't recommend it if FileWriter is a better fit.
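A minimal sketch of that approach (the file name is a placeholder); opening in append mode keeps the existing contents until you explicitly truncate:

import java.io.FileOutputStream;

// Append mode: writes always go to the current end of the file.
FileOutputStream stream = new FileOutputStream("output.log", true);
stream.write("some data".getBytes());
stream.getChannel().truncate(0); // length becomes zero, no reopen needed
stream.write("starting over".getBytes()); // appending resumes from the top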
Is it possible to replace part of a file's content without rewriting the entire file to disk?
Say that I have a very large file of several gigabytes; how do I replace the bytes from, let's say, position 100 to 200 without rewriting the entire file?
As an added bonus, I need a solution that does not use any features newer than Java 1.4.
If you're positive that you're going to be writing exactly the same number of bytes, you can use a RandomAccessFile to accomplish this (available since Java 1.0). Just open the file, seek to wherever you need to be, and overwrite those bytes with whatever your new data is.
RandomAccessFile f = new RandomAccessFile(new File("C:\\test\\huge.txt"), "rw");
f.seek(100); // jump straight to offset 100; nothing before it is read or touched
f.write("here is some new stuff".getBytes()); // overwrites bytes in place
f.close();
You can also read from the file at arbitrary points in the same fashion, in case you don't know exactly how much data you need to replace (e.g. so you can pad/truncate whatever you're writing to avoid doing something awful by accident).
This is my understanding of reading a file using BufferedReader in Java. Please correct me if I am wrong somewhere...
Recently I had a requirement where we are required to read a file multiple times.
The usual way I do this is by setting a mark() and doing a reset(). But the parameter to mark() is an int, so it cannot accept a long. Is there a way to read the file a large number of times?
In C++ we can do a seekg on the fstream and read the contents again, irrespective of the number of times we want to do so. Is there anything of this nature in Java?
Just close the file and read it again.
But review your requirement. Why can't you process it in one pass?
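A minimal sketch of that approach (the path and pass count are whatever your application needs):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Re-reads the whole file N times by simply reopening it on each pass,
// so there is no mark()/reset() read-ahead limit to worry about.
static void readNTimes(String path, int passes) throws IOException {
    for (int i = 0; i < passes; i++) {
        try (BufferedReader r = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = r.readLine()) != null) {
                // handle the line here
            }
        }
    }
}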
Not much of a good answer, but if you want to do random reading and writing, you can use channels from the java.nio package.
BufferedReader is for reading a file when you logically see it as a series of records and records are generally accessed sequentially.
Channels allow you to view your file as a series of blocks. Blocks are meant to be read randomly. :)
Using a subclass of Channel, FileChannel, you can read what you want from wherever you want. You need to specify two things:
Where to read from.
How much to read.
It has a read(dst, pstn) method, where dst is a ByteBuffer and pstn is a long position.
Don't worry that FileChannel is abstract, because you obtain an instance via Files.newByteChannel() (or FileChannel.open()), which does all the voodoo needed to make it work :)
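A minimal sketch of a positioned read (assuming a file named test.txt and that the bytes of interest start at offset 100):

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

FileChannel ch = FileChannel.open(Paths.get("test.txt"), StandardOpenOption.READ);
ByteBuffer buf = ByteBuffer.allocate(64);
ch.read(buf, 100); // read up to 64 bytes starting at offset 100
buf.flip();
System.out.println(StandardCharsets.UTF_8.decode(buf));
// A positioned read leaves the channel's own position untouched, so you can
// re-read any region as many times as you like.
ch.close();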
I am trying to download a file from a server in a user-specified number of parts (n). So there is a file of x bytes divided into n parts, with each part downloading a piece of the whole file at the same time. I am using threads to implement this, but I have not worked with HTTP before and do not really understand how downloading a file works. I have read up on it, and it seems the "Range" header needs to be used, but I do not know how to download different parts and append them without corrupting the data.
(Since it's a homework assignment I will only give you a hint)
Appending to a single file will not help you at all, since this will mess up the data. You have two alternatives:
Download from each thread to a separate temporary file and then merge the temporary files in the right order to create the final file. This is probably easier to conceive, but it is a rather ugly and inefficient approach.
Do not stick to the usual stream-style semantics; use random access (1, 2) to write data from each thread straight to the right location within the output file, as sketched below.
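To make the second alternative concrete, a rough sketch of downloading one part (the method name, URL and offsets are illustrative; the threading, validation and error handling are deliberately left to you):

import java.io.InputStream;
import java.io.RandomAccessFile;
import java.net.HttpURLConnection;
import java.net.URL;

// Fetches bytes [start, end] of the resource and writes them at the same
// offset in the shared output file, so no merge step is needed afterwards.
static void downloadPart(String url, String outFile, long start, long end) throws Exception {
    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
    conn.setRequestProperty("Range", "bytes=" + start + "-" + end);
    try (InputStream in = conn.getInputStream();
         RandomAccessFile out = new RandomAccessFile(outFile, "rw")) {
        out.seek(start); // position the write at this part's own offset
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
    }
}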
Is there a way to force the temporary files created by a Java program to live in memory? Since I work with several large XML files, would I gain anything this way? Ideally I'd like a transparent approach that doesn't upset the existing application.
UPDATE: I'm looking at the source code and I noticed that it uses libraries (which I cannot change) that require the paths of those files ...
Thanks
The only way I can think of is to create a RAM disk and then point the system property java.io.tmpdir to that RAM disk.
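For example, assuming a RAM disk (e.g. tmpfs) is mounted at /mnt/ramdisk (the mount point is an assumption), you would launch the application like this:

java -Djava.io.tmpdir=/mnt/ramdisk MyApp

File.createTempFile(...) and similar calls will then allocate their temporary files on the RAM disk, with no change to the application code.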
XML is just a String, so why not just keep references to Strings in memory? I think the File interface is a distraction. Use StringBuilder if you need to manipulate the data, or StringBuffer if you need thread safety. Put them in a type-safe Map if you have a variable number of things that need to be looked up by key.
If you absolutely have to keep the File interface, then create an InMemoryFileWriter that wraps ByteArrayOutputStream and ByteArrayInputStream to keep the data in memory. But again, I think the whole file-in-memory thing is a bad decision if you just want to cache things in memory; that is a lot of overhead when a simple String would do.
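A minimal sketch of that wrapper idea (the class is hypothetical, reduced here to the two streams):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

// A "file" whose bytes never touch the disk.
class InMemoryFile {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    ByteArrayOutputStream outputStream() {
        return buffer; // write the contents here
    }

    InputStream inputStream() {
        // Each reader gets an independent snapshot of the bytes written so far.
        return new ByteArrayInputStream(buffer.toByteArray());
    }
}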
Don't use files if you don't have to. Consider com.google.common.io.FileBackedOutputStream from Guava:
An OutputStream that starts buffering to a byte array, but switches to file buffering once the data reaches a configurable size.
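A short example of how it is used (the 1 MB threshold and the xml variable are illustrative):

import com.google.common.io.FileBackedOutputStream;
import java.nio.charset.StandardCharsets;

// Buffers in memory up to 1 MB, then spills transparently to a temp file.
FileBackedOutputStream out = new FileBackedOutputStream(1024 * 1024);
out.write(xml.getBytes(StandardCharsets.UTF_8));
byte[] everything = out.asByteSource().read(); // reads back from either source
out.reset(); // frees the buffer or deletes the backing file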
You probably can force the default behaviour of java.io.File with some reflection magic, but I'm sure you don't want to do that, as it can lead to unpredictable behaviour. You're better off providing a mechanism that makes it possible to switch between the usual and in-memory behaviour, and routing all calls via this mechanism.
Look at this example; it shows how to use the file API to create in-memory files.
Assuming you have control over the streams that are being used to write to the file -
Do you absolutely need the in-memory behavior? If all you want is to reduce the number of system calls that write to the disk, you can wrap the FileOutputStream in a BufferedOutputStream (with an appropriately big buffer size) and write to that BufferedOutputStream (or BufferedWriter) instead of writing directly to the original FileOutputStream.
(This does require a change in the existing application)
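A minimal sketch (the file name and 1 MB buffer size are placeholders):

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.OutputStream;

// A large buffer batches many small writes into far fewer system calls.
OutputStream out = new BufferedOutputStream(new FileOutputStream("large.xml"), 1 << 20);
out.write(xmlChunk); // xmlChunk: your byte[] of data; mostly lands in the buffer
out.flush(); // forces buffered bytes down to the file when you need them on disk
out.close();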