I noticed something odd: an open FileChannel object keeps working even after the file it is linked to is deleted while the channel is in use. I created a 15 GB test file, and the following program reads the file sequentially at roughly 100 MB per second.
Path path = Paths.get("/home/elbek/tmp/file.txt");
FileChannel fileChannel = FileChannel.open(path, StandardOpenOption.READ);
ByteBuffer byteBuffer = ByteBuffer.allocate(1024 * 1024);
while (true) {
    int read = fileChannel.read(byteBuffer);
    if (read < 0) {
        break;
    }
    Thread.sleep(10);
    byteBuffer.clear();
    System.out.println(fileChannel.position());
}
fileChannel.close();
After the program has run for about 5 seconds (it has read roughly 0.5 GB), I delete the file from the file system and expect an error to be thrown after a few more reads, but the program carries on and reads the file to the end. I initially thought the reads might be served from the file cache, which is why I made the file so large; 15 GB should be big enough not to fit in it.
Anyway, how does the OS serve read requests while the file itself is no longer there? The OS I am testing on is Fedora.
Thanks.
I'm trying to write two programs, one that writes to a text file and one that reads from it. I tried using java.io but ran into concurrency problems. However, when I switched to java.nio, I ran into even bigger problems, probably not related to concurrency (since I lock the file in both programs before reading/writing) but to the actual way I read from and write to the file.
Writer program code (the part that is relevant):
Path filePath = Paths.get("map.txt");
FileChannel fileChannel;
ByteBuffer buffer;
StringBuilder existingObjects = new StringBuilder();
while (true) {
    for (FlyingObject fo : airbornUnitsList) {
        existingObjects.append(fo.toString() + System.lineSeparator());
    }
    if (existingObjects.length() > System.lineSeparator().length())
        existingObjects.setLength(existingObjects.length() - System.lineSeparator().length());
    buffer = ByteBuffer.wrap(existingObjects.toString().getBytes());
    fileChannel = FileChannel.open(filePath, StandardOpenOption.READ, StandardOpenOption.WRITE);
    fileChannel.lock();
    fileChannel.write(buffer);
    fileChannel.close();
    existingObjects.delete(0, existingObjects.length());
    sleep(100);
}
FlyingObject is a simple class with some fields and an overridden toString() method, and airbornUnitsList is a list of those objects. So I'm basically iterating through the list, appending each FlyingObject to the StringBuilder, removing the trailing line separator, wrapping the result in the buffer, and writing it to the file. As you can see, I lock the file prior to writing to it, and the lock is released afterwards when the channel is closed.
Reader program code (the part that is relevant):
Path filePath = Paths.get("map.txt");
FileChannel fileChannel;
ByteBuffer buffer;
StringBuilder readObjects = new StringBuilder();
while (true) {
    fileChannel = FileChannel.open(filePath, StandardOpenOption.READ, StandardOpenOption.WRITE);
    fileChannel.lock();
    buffer = ByteBuffer.allocate(100);
    numOfBytesRead = fileChannel.read(buffer);
    while (numOfBytesRead != -1) {
        buffer.flip();
        readObjects.append(new String(buffer.array()));
        buffer.clear();
        numOfBytesRead = fileChannel.read(buffer);
    }
    fileChannel.close();
    System.out.println(readObjects);
}
Even when I manually write a few lines in the file and then run the Reader program, it doesn't read it correctly. What could be the issue here?
EDIT: After playing with the buffer size a bit, I realized that the file is read incorrectly when the buffer size is smaller than the file's content. Could this be related to the file encoding?
I found out what the problem was.
Firstly, in the writer program, I needed to add fileChannel.truncate(0); after opening the file channel. That way the old content of the file is deleted and the new content is written from the beginning. Without that line, the new content just overwrites the old content in place, and if the new content is shorter than the old content, the bytes beyond the end of the new content remain in the file. The truncate call would only be unnecessary if the new content were always at least as long as the old content and rewrote it completely, which wasn't the case for me.
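A minimal sketch of the corrected write step, reusing the filePath and existingObjects variables from the writer snippet above:

fileChannel = FileChannel.open(filePath, StandardOpenOption.READ, StandardOpenOption.WRITE);
fileChannel.lock();      // exclusive lock; released when the channel is closed
fileChannel.truncate(0); // drop the old content so shorter new content leaves no stale bytes behind
fileChannel.write(ByteBuffer.wrap(existingObjects.toString().getBytes()));
fileChannel.close();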
Secondly, regarding the reader, the reason it wasn't reading the whole file is that the while loop would end before the last part of the file content was appended to the StringBuilder. After I modified the code and changed the order of operations a bit, like this:
numOfBytesRead = 0;
while (numOfBytesRead != -1) {
    numOfBytesRead = fileChannel.read(buffer);
    if (numOfBytesRead > 0) {
        buffer.flip();
        // append only the bytes actually read, not the whole backing array
        readObjects.append(new String(buffer.array(), 0, numOfBytesRead));
        buffer.clear();
    }
}
it worked without problems.
I am working on an Android app and trying to create a file of a certain size that won't be sparse. I literally want it to take up space on the Android device.
Here's what I have so far. I'm fairly new to Java and have tried a couple of different approaches: filling the file (which takes way too long if the file is big, e.g. 5 GB) and appending to the end (which doesn't seem to work; maybe I did it wrong).
File file = new File(dir, "testFile.txt");
try {
    RandomAccessFile f = new RandomAccessFile(file, "rw");
    f.setLength((long) userInputNum * 1048576 * 1024);
    f.close();
} catch (IOException e) {
    e.printStackTrace();
}
Currently, the file is created, and if I ask for, say, 5 GB, the file details say it is 5 GB, but it's not actually taking up space on the device (it is a sparse file, as I have found out). How can I create the file so that it is not sparse, or what's a quick way to fill the file? I could use a command on a PC/Mac to create the file and push it to the device, but I want the app to do it.
So this works:
byte[] x = new byte[1048576];
RandomAccessFile f = new RandomAccessFile(file, "rw");
// the (long) cast avoids int overflow for sizes of 2 GB and above
while (file.length() != ((long) userInputNum * 1048576 * 1024)) {
    f.write(x);
}
f.close();
Granted, it is pretty slow, but I believe it's still much faster to create a 10 GB file in the app than to push a 10 GB file to the device. If someone has an idea of how to optimize this or change it completely, please do post!
How it works:
It writes to the file until the file has reached the size the user wants. I believe I could use something other than a plain byte[], but I'll leave that to whoever wants to figure it out. I'll keep improving this on my own, but I hope it helps someone else!
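One possible optimization, sketched under the assumption that nothing else writes to the file concurrently: track the number of bytes written locally instead of calling file.length() on every iteration, and write in larger chunks.

// Sketch: count bytes written instead of polling file.length(); userInputNum and file as above.
final byte[] chunk = new byte[8 * 1048576];                // 8 MB per write
final long target = (long) userInputNum * 1048576 * 1024;  // requested size in bytes
try (RandomAccessFile raf = new RandomAccessFile(file, "rw")) {
    long written = 0;
    while (written < target) {
        final int toWrite = (int) Math.min(chunk.length, target - written);
        raf.write(chunk, 0, toWrite);
        written += toWrite;
    }
}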
I am using JSch to provide a utility that backs up an entire server's data for my company.
The application is developed using Java 8 and JavaFX 2.
My problem is that I believe my recursive download is at fault, because my program's RAM usage grows by the second and never seems to be freed.
This is the order of operations I perform:
Connecting to the remote server: OK
Opening the SFTP channel -> session.openChannel("sftp"): OK
Changing into the main directory -> sftpChannel.cd(MAIN_DIRECTORY): OK
Listing the directory content -> final Vector<ChannelSftp.LsEntry> entries = sftpChannel.ls(".");
Calling the recursive method:
if (entry.getAttrs().isDir()) -> calling the recursive method again
else -> it's a file, there are no more subfolders to go into;
process the download
Now, here is where I think the memory leak occurs, in the download part:
Starting the download and retrieving the InputStream:
final InputStream is = sftpChannel.get(remoteFilePath, new SftpProgressMonitor());
where SftpProgressMonitor() is an implementation of the progress-monitoring interface that I use to update the UI (a progress bar). To be clear, this implementation never references the InputStream internally, but it is a non-static anonymous class, so it does hold a reference to the enclosing download method's scope.
While it's downloading, I create the file to save to and open an OutputStream to write the downloaded content into it:
final BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(fileToSave));
This is where I write to the file as the remote content is downloaded:
int readCount;
final byte[] buffer = new byte[8 * 1024];
while ((readCount = is.read(buffer)) > 0) {
    bos.write(buffer, 0, readCount);
    bos.flush();
}
And of course, once this is completed, I don't forget to close both streams:
is.close(); //the inputstream from sftChannel.get()
bos.close(); //the FileOutputStream
So, as you can understand, I process these operations recursively, meaning (a consolidated sketch follows the list below):
List the current directory content
Check the first entry
if it's a directory, go inside and repeat from step 1
if it's a file, download it
Check the second entry
etc.
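For reference, here is a consolidated sketch of what the recursion and the download step described above amount to. The method name downloadDirectory and the localDir parameter are just placeholders, not my actual code; the progress monitor is omitted for brevity, and try-with-resources is used so both streams are closed even when an exception is thrown.

// Placeholder sketch of the traversal/download described above.
// sftpChannel is an open ChannelSftp; localDir is the local backup directory.
private void downloadDirectory(ChannelSftp sftpChannel, String remoteDir, File localDir) throws Exception {
    localDir.mkdirs();
    final Vector<ChannelSftp.LsEntry> entries = sftpChannel.ls(remoteDir);
    for (ChannelSftp.LsEntry entry : entries) {
        final String name = entry.getFilename();
        if (".".equals(name) || "..".equals(name)) {
            continue; // skip the pseudo entries returned by ls()
        }
        final String remotePath = remoteDir + "/" + name;
        if (entry.getAttrs().isDir()) {
            downloadDirectory(sftpChannel, remotePath, new File(localDir, name)); // recurse into the subfolder
        } else {
            // try-with-resources guarantees both streams are closed, even if read/write throws
            try (InputStream is = sftpChannel.get(remotePath);
                 BufferedOutputStream bos = new BufferedOutputStream(
                         new FileOutputStream(new File(localDir, name)))) {
                final byte[] buffer = new byte[8 * 1024];
                int readCount;
                while ((readCount = is.read(buffer)) > 0) {
                    bos.write(buffer, 0, readCount);
                }
            }
        }
    }
}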
Multiple tests show exactly the same behaviour (and the content to download remained exactly the same across these tests): my memory usage keeps growing, and at the same pace.
[UPDATE 1]
I tried a solution where I let JSch write to the FileOutputStream itself:
final BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(fileToSave));
sftpChannel.get(remoteFilePath, bos, new SftpProgressMonitor() { /* ... */ });
And in SftpProgressMonitor.end() I close the stream -> bos.close().
No change at all.
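For reference, the monitor used in that variant looks roughly like this (class and field names are placeholders; the real one also updates the progress bar):

import java.io.Closeable;
import java.io.IOException;
import com.jcraft.jsch.SftpProgressMonitor;

// Placeholder monitor that closes the target stream once JSch signals completion.
final class ClosingProgressMonitor implements SftpProgressMonitor {
    private final Closeable target; // e.g. the BufferedOutputStream passed to get()

    ClosingProgressMonitor(Closeable target) {
        this.target = target;
    }

    @Override
    public void init(int op, String src, String dest, long max) {
        // transfer starts; the real implementation resets the progress bar here
    }

    @Override
    public boolean count(long count) {
        return true; // keep transferring; the real implementation updates the progress bar here
    }

    @Override
    public void end() {
        try {
            target.close(); // close the output stream when the download finishes
        } catch (IOException ignored) {
        }
    }
}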
I also tried listing all the files (still recursively), only adding each file's byte length to a private long totalBytesToDownload, and my program's memory remained very stable: only about 20 MB used during the whole process (even as totalBytesToDownload kept increasing), which confirms that my download method really is at fault.
If I do close my streams, why won't the GC collect them?
I'm trying to export some files from a system and save them to my drive. The problem is that some of the files are pretty big and I get a Java out-of-memory error.
FileOutputStream fileoutstream = new FileOutputStream(filenameExtension);
fileoutstream.write(dataManagement.getContent(0).getData());
fileoutstream.flush();
fileoutstream.close();
Any recommendations I can try? I added the flush, but there is no difference. This code calls the export method, generates the file, and saves it. I'm using a cursor to run over the data I'm exporting, not an array, and I tried adding more memory, but the files are too big.
You are loading the whole file into memory before writing it. Instead, you should:
load only a chunk of data
write it
repeat the steps above until you have processed all data.
If the files are really big, you may need to read/write them in chunks. If they are small enough to fit in memory, you can instead increase the virtual machine's heap size, e.g.:
java -Xmx512M ...
FileInputStream fi = infile;
FileOutputStream fo = outfile;
byte[] buffer = new byte[5000];
int n;
while ((n = fi.read(buffer)) > 0) {
    fo.write(buffer, 0, n);
}
Hope this helps to get the idea.
You can use the Spring Batch framework to do the reading and writing of the file in chunks.
http://static.springsource.org/spring-batch/
I'm having a really strange problem. I'm trying to download a file and store it. My code is relatively simple and straightforward (see below) and works fine on my local machine.
But it is intended to run on a Windows Terminal Server accessed through Citrix and a VPN. The file is to be saved to a mounted network drive; the mount is the local C:\ drive mapped through the Citrix VPN, so there might be some lag involved. Unfortunately, I have no insight into how exactly the whole infrastructure is set up...
Now my problem is that the code below throws an IOException telling me there is no space left on the disk when it attempts the write() call. The directory structure is created alright and a zero-byte file is created, but content is never written.
There is more than a gigabyte space available on the drive, the Citrix client has been given "Full Access" permissions and copying/writing files on that mapped drive with Windows explorer or notepad works just fine. Only Java is giving me trouble here.
I also tried downloading to a temporary file first and then copying it to the destination, but since copying is basically the same stream operation as in my original code, there was no change in behavior. It still fails with an out-of-disk-space exception.
I have no idea what else to try. Can you give any suggestions?
public boolean downloadToFile(URL url, File file) {
    boolean ok = false;
    try {
        file.getParentFile().mkdirs();
        BufferedInputStream bis = new BufferedInputStream(url.openStream());
        byte[] buffer = new byte[2048];
        FileOutputStream fos = new FileOutputStream(file);
        BufferedOutputStream bos = new BufferedOutputStream(fos, buffer.length);
        int size;
        while ((size = bis.read(buffer, 0, buffer.length)) != -1) {
            bos.write(buffer, 0, size);
        }
        bos.flush();
        bos.close();
        bis.close();
        ok = true;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return ok;
}
Have a try with commons-io, especially the utility classes FileUtils and IOUtils.
After changing our code to use commons-io, all file operations went much more smoothly, even with mapped network drives.
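For example, the downloadToFile method above could be reduced to a call to FileUtils.copyURLToFile, which also creates the parent directories; the timeout values here are just example numbers:

import java.io.File;
import java.net.URL;
import org.apache.commons.io.FileUtils;

public boolean downloadToFile(URL url, File file) {
    try {
        // connection and read timeouts in milliseconds (example values)
        FileUtils.copyURLToFile(url, file, 10000, 30000);
        return true;
    } catch (Exception e) {
        e.printStackTrace();
        return false;
    }
}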