Android FileOutputStream creates corrupted file - java

I have an app that creates multiple files using a byte array it gets from a Socket InputStream. The file saves perfectly when I save just one file, but if I save one file, re-instantiate the file stream, and save a different file, the first file gets corrupted while the second file is saved perfectly. I opened the two files in a text editor: roughly the first 1/5 of the first file is blank spaces while the second file is complete, yet both show the same size (9,128,731 bytes). The following example reproduces the scenario, with the same corruption:
FileOutputStream outStream;
outStream = new FileOutputStream("/mnt/sdcard/testmp3.mp3");
File file = new File("/mnt/sdcard/test.mp3");
FileInputStream inStream = new FileInputStream(file);
byte[] buffer = new byte[9128731];
inStream.read(buffer);
outStream.write(buffer, 0, buffer.length);
inStream.close();
outStream.flush();
outStream.close();
outStream = null;
outStream = new FileOutputStream("/mnt/sdcard/testmp32.mp3");
outStream.write(buffer, 0, buffer.length);
inStream.close();
outStream.flush();
outStream.close();
outStream = null;
I tried this EXACT code in a regular Java application and both files were saved without a problem. Does anyone know why Android is doing this?
Any help would be GREATLY appreciated.

As jtahlborn mentioned, you cannot assume that InputStream.read(byte[]) will always read as many bytes as you want. You should also avoid writing out such a large byte array in one call, at least without buffering. You can handle both concerns and save some memory by copying the file like this:
File inFile = new File("/mnt/sdcard/test.mp3");
File outFile = new File("/mnt/sdcard/testmp3.mp3");
FileInputStream inStream = new FileInputStream(inFile);
FileOutputStream outStream = new FileOutputStream(outFile);
byte[] buffer = new byte[65536];
int len;
while ((len = inStream.read(buffer)) != -1) {
outStream.write(buffer, 0, len);
}
inStream.close();
outStream.close();

I see some potential issues that can get you started debugging:
You're writing to the first output stream before you close the input stream. This is a bit unusual.
You can't accurately gauge the similarity or difference between two binary files using a text editor. You need to look at the files in a hex editor (or better, Audacity).
I would use BufferedOutputStream as suggested by the Android docs:
out = new BufferedOutputStream(new FileOutputStream(file));
http://developer.android.com/reference/java/io/FileOutputStream.html
As a debugging technique, print the contents of buffer after the first write. Also, inStream.read() returns an int: compare it to buffer.length and make sure they match. Regardless, I would just call write(buffer) instead of write(buffer, 0, buffer.length) unless you have a really good reason.
-tjw

You are assuming that the read() call will read as many bytes as you want. That is incorrect: that method is free to read anywhere from 1 to buffer.length bytes, which is why you should always use the return value to determine how many bytes were actually read. There are plenty of stream tutorials out there that show how to read from a Java stream correctly (i.e. how to completely fill your buffer).
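The loop the answer describes can be sketched as a small helper that keeps calling read() until the buffer is actually full or the stream ends (the method name readFully here is illustrative; java.io.DataInputStream ships a similar method):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadFullyDemo {
    // Keep reading until the buffer is full or the stream ends;
    // a single read() call may return fewer bytes than requested.
    static int readFully(InputStream in, byte[] buffer) throws IOException {
        int total = 0;
        while (total < buffer.length) {
            int n = in.read(buffer, total, buffer.length - total);
            if (n == -1) break; // end of stream before buffer was filled
            total += n;
        }
        return total; // number of bytes actually read
    }

    public static void main(String[] args) throws IOException {
        byte[] data = new byte[1000];
        for (int i = 0; i < data.length; i++) data[i] = (byte) i;
        byte[] buffer = new byte[1000];
        int read = readFully(new ByteArrayInputStream(data), buffer);
        System.out.println(read);
    }
}
```

This matters most with socket streams, where read() routinely returns only the bytes that have arrived so far.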

If anyone's having the same problem and wondering how to fix it: I found out the problem was being caused by my SD card. I bought a 32 GB Kingston SD card, and just yesterday I decided to try running the same code again, except using internal storage instead, and everything worked perfectly. I also tried the stock 2 GB SD card the phone came with, and it also worked perfectly. I'm glad to know my code works, but a little frustrated that I spent 50 bucks on a defective memory card. Thanks for everyone's input.

Related

Writing to and reading from a text file using java.nio.FileChannel from two different JVMs concurrently

I'm trying to write two programs, one that writes to a text file, and another that reads from it. I've tried using java.io, but ran into concurrency problems. However, when I switched to java.nio, I ran into even bigger problems, probably not related to concurrency, since I lock the file in both programs when reading/writing, but to the actual way of reading from or writing to the file.
Writer program code (the part that is relevant):
Path filePath = Paths.get("map.txt");
FileChannel fileChannel;
ByteBuffer buffer;
StringBuilder existingObjects = new StringBuilder();
while (true) {
for (FlyingObject fo : airbornUnitsList) {
existingObjects.append(fo.toString() + System.lineSeparator());
}
if(existingObjects.length() > System.lineSeparator().length())
existingObjects.setLength(existingObjects.length() - System.lineSeparator().length());
buffer = ByteBuffer.wrap(existingObjects.toString().getBytes());
fileChannel = FileChannel.open(filePath, StandardOpenOption.READ, StandardOpenOption.WRITE);
fileChannel.lock();
fileChannel.write(buffer);
fileChannel.close();
existingObjects.delete(0, existingObjects.length());
sleep(100);
}
FlyingObject is a simple class with some fields and an overridden toString() method and airbornUnitsList is a list of those objects, so I'm basically iterating through the list, appending the FlyingObject objects to StringBuilder object, removing the last "new line" from StringBuilder, putting it into the buffer and writing to the file. As you can see, I have locked the file prior to writing to the file and then unlocked it afterwards.
Reader program code (the part that is relevant):
Path filePath = Paths.get("map.txt");
FileChannel fileChannel;
ByteBuffer buffer;
StringBuilder readObjects = new StringBuilder();
while (true) {
fileChannel = FileChannel.open(filePath, StandardOpenOption.READ, StandardOpenOption.WRITE);
fileChannel.lock();
buffer = ByteBuffer.allocate(100);
numOfBytesRead = fileChannel.read(buffer);
while (numOfBytesRead != -1) {
buffer.flip();
readObjects.append(new String(buffer.array()));
buffer.clear();
numOfBytesRead = fileChannel.read(buffer);
}
fileChannel.close();
System.out.println(readObjects);
}
Even when I manually write a few lines in the file and then run the Reader program, it doesn't read it correctly. What could be the issue here?
EDIT: After playing with the buffer size a bit, I realized that the file is read incorrectly when the buffer is smaller than the content of the file. Could this be related to file encoding?
I found out what the problem was.
Firstly, in the writer program, I needed to add fileChannel.truncate(0); after opening the file channel. That way I delete the old content of the file and write from the beginning. Without that line, I would just overwrite the old content starting at position zero, and if the new content is shorter than the old content, the old content would remain in the positions not covered by the new content. I could only skip the truncate call if I were sure the new content is at least as long as the old content and rewrites it completely, which wasn't the case for me.
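The truncate fix can be demonstrated in isolation (method and file names here are illustrative): without truncate(0), writing "short" over "a long first snapshot" would leave the stale tail " snapshot" in the file.

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class TruncateDemo {
    static void writeSnapshot(Path path, String content) throws Exception {
        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Drop old content; without this, a shorter write leaves stale trailing bytes.
            ch.truncate(0);
            ch.write(ByteBuffer.wrap(content.getBytes(StandardCharsets.UTF_8)));
        }
    }

    public static void main(String[] args) throws Exception {
        Path p = Files.createTempFile("map", ".txt");
        writeSnapshot(p, "a long first snapshot");
        writeSnapshot(p, "short");
        System.out.println(new String(Files.readAllBytes(p), StandardCharsets.UTF_8));
        Files.delete(p);
    }
}
```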
Secondly, regarding the reader, the reason it wasn't reading the whole file is because the while loop would end before the last part of the file content was appended to the StringBuilder. After I modified the code and changed the order of operations a bit, like this:
while ((numOfBytesRead = fileChannel.read(buffer)) != -1) {
buffer.flip();
readObjects.append(new String(buffer.array(), 0, numOfBytesRead));
buffer.clear();
}
it worked without problems.

Is there a better way to zip large files in Java?

I have around 5 to 6 large files, each 3 GB in size. My goal is to zip those files and then transfer them using a file servlet. My current code takes a great amount of time, resulting in a session timeout in the browser. Is there a better way to zip the files?
File zipFile=new File( downloadedFileLocation.getAbsolutePath()+"/Download.zip" );
FileOutputStream fos = new FileOutputStream(zipFile);
ZipOutputStream zos = new ZipOutputStream(fos);
for( File f:downloadedFileLocation.listFiles() ) {
byte[] buffer = new byte[1024];
ZipEntry ze= new ZipEntry(f.getName());
zos.putNextEntry(ze);
FileInputStream in = new FileInputStream(f.getAbsolutePath());
int len;
while ((len = in.read(buffer)) > 0) {
zos.write(buffer, 0, len);
}
in.close();
zos.closeEntry();
f.delete();
}
zos.close();
fos.close();
Will changing the buffer size make any difference?
Can anyone suggest a better way to make zipping faster?
Can anyone suggest a better way to make zipping faster?
No, you can't do zipping faster, but you can do it "live".
Don't write the zipped content to a temporary file before transmitting it. Write it straight to the OutputStream in the Servlet.
The result is that zipped content is transmitted as it is compressed, so the connection will not time out, and total response time is reduced.
You should also use try-with-resources for resource management, and the newer NIO file classes for ease of use and better error messages.
Something like this:
@Override
protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
resp.setContentType("application/zip");
try (ZipOutputStream zos = new ZipOutputStream(resp.getOutputStream())) {
for (File f : downloadedFileLocation.listFiles()) {
zos.putNextEntry(new ZipEntry(f.getName()));
Files.copy(f.toPath(), zos);
Files.delete(f.toPath());
}
}
}
I left the delete() in there, but depending on what you're doing, it is likely not appropriate when doing it this way. Or at the very least, you should not delete until download is complete, i.e. until after the for loop ends.
IMHO, there is always a better way of doing things. Recently (it was Java 7 NIO) I got to know about the NIO way of zipping files, and it's faster than any conventional method I've tried so far. I haven't worked out the exact numbers, but it's almost twice the speed of any conventional method.
It's worth a try. Refer this.
The FileOutputStream should be wrapped with a BufferedOutputStream. The ZipOutputStream writes many small chunks to its destination OutputStream when zipping the data. It should have a minimum buffer size of 16KB. This should speed it up by a factor of 10.
When reading file data, the buffer size should also be at least 16KB.
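The buffering advice above can be sketched like this (buffer sizes and the in-memory sink are illustrative; in the servlet version the sink would be the response's OutputStream):

```java
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class BufferedZipDemo {
    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[256 * 1024]; // stand-in for a file's data
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // 64 KB buffer between the zip stream and its destination
        try (ZipOutputStream zos = new ZipOutputStream(new BufferedOutputStream(sink, 65536))) {
            zos.putNextEntry(new ZipEntry("data.bin"));
            byte[] buffer = new byte[16384]; // at least 16 KB read buffer
            try (InputStream in = new BufferedInputStream(new ByteArrayInputStream(payload), 16384)) {
                int len;
                while ((len = in.read(buffer)) > 0) {
                    zos.write(buffer, 0, len);
                }
            }
            zos.closeEntry();
        }
        // Read the archive back to confirm the entry round-trips intact.
        try (ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(sink.toByteArray()))) {
            ZipEntry e = zis.getNextEntry();
            long total = 0;
            byte[] buf = new byte[8192];
            int n;
            while ((n = zis.read(buf)) > 0) total += n;
            System.out.println(e.getName() + " " + total);
        }
    }
}
```

Without the BufferedOutputStream, each small write from the deflater goes straight to the underlying file or socket, which is where most of the time is lost.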

ImageIO.read closes input stream

I write images and other data to a binary file. When I read an image via ImageIO.read(InputStream) from that file, it reads the image fine, but the method closes the given input stream and I can't proceed to read the other data.
Why is it made this way?
And how can I read an image without closing the stream?
EDIT: Here is simple code that writes an image and then a string into a file:
File f = new File("test.bin");
if(f.exists())
f.delete();
f.createNewFile();
DataOutputStream os = new DataOutputStream(new FileOutputStream(f));
BufferedImage img = ImageIO.read(new File("test.jpg"));
ImageIO.write(img, "jpg", os);
os.writeUTF("test string after image");
os.close();
And the code that reads everything back:
DataInputStream is = new DataInputStream(new FileInputStream(f));
BufferedImage img = ImageIO.read(is);
String s = is.readUTF(); // on this line EOFException occurs
System.out.println(s);
NetBeans output:
Exception in thread "main" java.io.EOFException
at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:340)
at java.io.DataInputStream.readUTF(DataInputStream.java:589)
at java.io.DataInputStream.readUTF(DataInputStream.java:564)
at mediamanager.Main.test(Main.java:105)
at mediamanager.Main.main(Main.java:44)
Maybe I'm doing something wrong?
Quote from the documentation of ImageIO.read(InputStream)
This method does not close the provided InputStream after the read operation has completed; it is the responsibility of the caller to close the stream, if desired.
Emphasis not mine.
The problem is elsewhere. Probably in your code.
I can see two possible causes of such behaviour:
The image reader uses a buffer to read data from the stream to improve performance, so it reads more data from the stream than the image itself occupies.
The image reader could also try to read EXIF metadata for the already-parsed image. Such information is usually appended at the end of the file, to avoid rewriting the whole file when just adding a couple of pieces of information about the image.
Try ImageIO.setUseCache(false); it could help.
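A robust way to store an image followed by other records in one file, regardless of how much the image reader buffers, is to length-prefix the image bytes. This is a sketch of that framing idea, not the asker's code; the byte array here stands in for real JPEG data you would get by writing the image to a ByteArrayOutputStream first:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class LengthPrefixDemo {
    public static void main(String[] args) throws IOException {
        byte[] imageBytes = new byte[1234]; // stand-in for ImageIO.write(...) output

        // Write: 4-byte length, then the image bytes, then further records.
        ByteArrayOutputStream file = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(file);
        out.writeInt(imageBytes.length);
        out.write(imageBytes);
        out.writeUTF("test string after image");
        out.close();

        // Read: length first, then exactly that many bytes, so readUTF()
        // starts at a known offset no matter how the image was decoded.
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(file.toByteArray()));
        byte[] img = new byte[in.readInt()];
        in.readFully(img);
        System.out.println(in.readUTF());
    }
}
```

On the reading side you can then decode the image from its own ByteArrayInputStream, so even if the decoder over-reads or closes that stream, the outer file stream is untouched.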

IOException insufficient disk space when accessing Citrix mounted drive

I'm having a really strange problem. I'm trying to download some file and store. My code is relatively simple and straight forward (see below) and works fine on my local machine.
But it is intended to run on a Windows Terminal Server accessed through Citrix and a VPN. The file is to be saved to a mounted network drive. This mount is the local C:\ drive mounted through the Citrix VPN, so there might be some lag involved. Unfortunately I have no inside detail about how exactly the whole infrastructure is set up...
Now my problem is that the code below throws an IOException telling me there is no space left on the disk, when attempting to execute the write() call. The directory structure is created alright and a zero byte file is created, but content is never written.
There is more than a gigabyte space available on the drive, the Citrix client has been given "Full Access" permissions and copying/writing files on that mapped drive with Windows explorer or notepad works just fine. Only Java is giving me trouble here.
I also tried downloading to a temporary file first and then copying it to the destination, but since copying is basically the same stream operation as in my original code, there was no change in behavior. It still fails with an out-of-disk-space exception.
I have no idea what else to try. Can you give any suggestions?
public boolean downloadToFile(URL url, File file){
boolean ok = false;
try {
file.getParentFile().mkdirs();
BufferedInputStream bis = new BufferedInputStream(url.openStream());
byte[] buffer = new byte[2048];
FileOutputStream fos = new FileOutputStream(file);
BufferedOutputStream bos = new BufferedOutputStream( fos , buffer.length );
int size;
while ((size = bis.read(buffer, 0, buffer.length)) != -1) {
bos.write(buffer, 0, size);
}
bos.flush();
bos.close();
bis.close();
ok = true;
}catch(Exception e){
e.printStackTrace();
}
return ok;
}
Have a try with commons-io, especially the util classes FileUtils and IOUtils.
After changing our code to use commons-io, all file operations went much smoother, even with mapped network drives.
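commons-io's FileUtils essentially does the stream copy from the question with careful resource handling; on Java 7+ the JDK's own NIO utilities give similar robustness without an extra dependency. A sketch (method name illustrative, using an in-memory stream where the real code would pass url.openStream()):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class NioDownloadDemo {
    static void downloadToFile(InputStream source, Path target) throws IOException {
        Files.createDirectories(target.getParent());
        try (InputStream in = source) {
            // Copies the whole stream to the target file, replacing it if present.
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("dl");
        Path target = dir.resolve("sub").resolve("file.bin");
        byte[] data = "hello".getBytes();
        downloadToFile(new ByteArrayInputStream(data), target);
        System.out.println(Files.size(target));
        Files.delete(target);
    }
}
```

Files.copy also surfaces much clearer exceptions than a hand-rolled loop, which can help when debugging environment problems like the Citrix-mounted drive here.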

Resources.openRawResource() issue Android

I have a database file in the res/raw/ folder. I am calling Resources.openRawResource() with the resource ID R.raw.FileName and I get an input stream, but I have another database file on the device, so to copy the contents of the resource db into the device db I use:
BufferedInputStream bi = new BufferedInputStream(is);
and a FileOutputStream, but I get an exception that the database file is corrupted. How can I proceed?
I also tried reading the file using File and FileInputStream with the path /res/raw/fileName, but that doesn't work either.
Yes, you should be able to use openRawResource to copy a binary across from your raw resource folder to the device.
Based on the example code in the API demos (content/ReadAsset), you should be able to use a variation of the following code snippet to read the db file data.
InputStream ins = getResources().openRawResource(R.raw.my_db_file);
ByteArrayOutputStream outputStream=new ByteArrayOutputStream();
int size = 0;
// Read the entire resource into a local byte buffer.
byte[] buffer = new byte[1024];
while ((size = ins.read(buffer, 0, 1024)) >= 0) {
outputStream.write(buffer, 0, size);
}
ins.close();
buffer = outputStream.toByteArray();
A copy of your file should now exist in buffer, so you can use a FileOutputStream to save the buffer to a new file.
FileOutputStream fos = new FileOutputStream("mycopy.db");
fos.write(buffer);
fos.close();
InputStream.available has severe limitations and should never be used to determine the length of the content available for streaming.
http://developer.android.com/reference/java/io/FileInputStream.html#available():
"[...]Returns an estimated number of bytes that can be read or skipped without blocking for more input. [...]Note that this method provides such a weak guarantee that it is not very useful in practice."
You have 3 solutions:
Go through the content twice, first just to compute content length, second to actually read the data
Since Android resources are prepared by you, the developer, hardcode its expected length
Put the file in the assets/ directory and read it through AssetManager, which gives you access to AssetFileDescriptor and its content-length methods. This may however give you UNKNOWN for the length, which isn't that useful.
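Option 1 above (go through the content twice) can be sketched with plain streams; the in-memory stream stands in for openRawResource(), which can simply be called again for the second pass since a raw resource is always reopenable:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class TwoPassDemo {
    // First pass: count the bytes without keeping them.
    static int contentLength(InputStream in) throws IOException {
        int total = 0, n;
        byte[] buf = new byte[1024];
        while ((n = in.read(buf)) >= 0) total += n;
        in.close();
        return total;
    }

    public static void main(String[] args) throws IOException {
        byte[] resource = new byte[4096]; // stand-in for the raw resource's content
        int len = contentLength(new ByteArrayInputStream(resource));
        // Second pass: read into an exactly-sized buffer,
        // looping because read() may return fewer bytes than requested.
        byte[] data = new byte[len];
        try (InputStream in = new ByteArrayInputStream(resource)) {
            int off = 0, n;
            while (off < len && (n = in.read(data, off, len - off)) >= 0) off += n;
        }
        System.out.println(len);
    }
}
```

This avoids relying on available(), at the cost of reading the resource twice.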
