I have a database file in the res/raw/ folder. I call Resources.openRawResource() with the resource ID R.raw.FileName and get an input stream, but I have another database file on the device, so to copy the contents of the bundled db into the device db I use:
BufferedInputStream bi = new BufferedInputStream(is);
and a FileOutputStream, but I get an exception saying the database file is corrupted. How can I proceed?
I also tried reading the file with File and FileInputStream and the path /res/raw/fileName, but that doesn't work either.
Yes, you should be able to use openRawResource to copy a binary across from your raw resource folder to the device.
Based on the example code in the API demos (content/ReadAsset), you should be able to use a variation of the following code snippet to read the db file data.
InputStream ins = getResources().openRawResource(R.raw.my_db_file);
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
int size = 0;
// Read the entire resource into a local byte buffer.
byte[] buffer = new byte[1024];
while ((size = ins.read(buffer, 0, 1024)) >= 0) {
    outputStream.write(buffer, 0, size);
}
ins.close();
buffer = outputStream.toByteArray();
A copy of your file should now exist in buffer, so you can use a FileOutputStream to save the buffer to a new file.
FileOutputStream fos = new FileOutputStream("mycopy.db");
fos.write(buffer);
fos.close();
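If the database is large, holding the whole file in memory is unnecessary. A minimal sketch of a streaming variant of the same copy (the resource ID and output file name are placeholders, and getDatabasePath() assumes this runs inside a Context such as an Activity):

InputStream in = getResources().openRawResource(R.raw.my_db_file);
FileOutputStream out = new FileOutputStream(getDatabasePath("mycopy.db"));
byte[] chunk = new byte[8192];
int n;
// Copy chunk by chunk so only 8 KB is held in memory at a time.
while ((n = in.read(chunk)) != -1) {
    out.write(chunk, 0, n);
}
in.close();
out.close();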
InputStream.available has severe limitations and should never be used to determine the length of the content available for streaming.
http://developer.android.com/reference/java/io/FileInputStream.html#available():
"[...]Returns an estimated number of bytes that can be read or skipped without blocking for more input. [...]Note that this method provides such a weak guarantee that it is not very useful in practice."
You have 3 solutions:
Go through the content twice: first just to compute the content length, then again to actually read the data.
Since Android resources are prepared by you, the developer, hardcode the expected length.
Put the file in the assets/ directory and read it through AssetManager, which gives you access to an AssetFileDescriptor and its content-length methods (see the sketch below). This may, however, report UNKNOWN_LENGTH, which isn't that useful.
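A minimal sketch of the assets/ approach (the file name is a placeholder, and this assumes it runs inside a Context such as an Activity; note that openFd() can throw for assets that were compressed at build time):

AssetFileDescriptor afd = getAssets().openFd("my_db_file.db");
long length = afd.getLength(); // may be AssetFileDescriptor.UNKNOWN_LENGTH
InputStream in = afd.createInputStream();
// ... read the stream, falling back to a read loop if the length is unknown ...
in.close();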
Related
The problem I am facing (see the title) is very similar to a question previously raised here (Azure storage: Uploaded files with size zero bytes), but that one was for .NET. The context for my Java scenario is that I am uploading small CSV files on a daily basis (less than 5 KB per file). Also, my code uses the latest version of the Azure API, in contrast to the 2010 version used in the other question.
I couldn't figure out what I have missed. The alternative would be to use File Storage, but the blob approach was recommended by a few of my peers.
So far, I have mostly based my code on the block-blob upload sample shown on the Azure Samples git page (https://github.com/Azure-Samples/storage-blob-java-getting-started/blob/master/src/BlobBasics.java). I have already done the container setup and file-renaming steps, which aren't a problem, but after uploading, the size of the file in the blob storage container on my Azure domain shows 0 bytes.
I've also tried converting the file to a FileInputStream and uploading it as a stream, but it produces the same result.
fileName = event.getFilename(); // fileName is e.g. eod1234.csv
String tempdir = System.getProperty("java.io.tmpdir");
file = new File(tempdir + File.separator + fileName);
try {
    PipedOutputStream pos = new PipedOutputStream();
    stream = new PipedInputStream(pos);
    buffer = new byte[stream.available()];
    stream.read(buffer);
    FileInputStream fils = new FileInputStream(file);
    int content = 0;
    while ((content = fils.read()) != -1) {
        System.out.println((char) content);
    }
    // OutputStream was written as a test previously but didn't work
    OutputStream outStream = new FileOutputStream(file);
    outStream.write(buffer);
    outStream.close();
    // container name is "testing1"
    CloudBlockBlob blob = container.getBlockBlobReference(fileName);
    if (fileName.length() > 0) {
        blob.upload(fils, file.length()); // this is testing with fileInputStream
        blob.uploadFromFile(fileName);    // preferred, just upload from file
    }
}
There are no error messages; the file reaches blob storage but shows a size of 0 bytes. It's a one-way process that only uploads CSV-format files. In the blob container, each uploaded file should show a size of 1-5 KB.
Instead of blob.uploadFromFile(fileName); you should use blob.uploadFromFile(file.getAbsolutePath()); because the uploadFromFile method requires an absolute path. And you don't need the blob.upload(fils, file.length()); call.
Refer to Microsoft Docs: https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-java#upload-blobs-to-the-container
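A minimal sketch of the corrected upload (container setup omitted; file and fileName are the variables from the question's code):

CloudBlockBlob blob = container.getBlockBlobReference(fileName);
// uploadFromFile expects an absolute path, not just the file name.
blob.uploadFromFile(file.getAbsolutePath());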
The Azure team replied to the same query I raised by email, and I have confirmed that the problem was not in the API but in the Upload component in Vaadin, which behaves differently than usual (https://vaadin.com/blog/uploads-and-downloads-inputs-and-outputs). Either the CloudBlockBlob or the BlobContainerUrl approach works.
The out-of-the-box Upload component requires manually implementing a FileOutputStream to a temporary object, unlike the usual servlet objects seen everywhere. Since time was limited, I used one of the add-ons, EasyUpload, because it has the Viritin UploadFileHandler incorporated into it, rather than figuring out how to stream the object from scratch. Had there been more time, I would definitely have tried the MultiFileUpload add-on, which has additional interesting features, in my sandbox workspace.
I had this same problem working with .png files (copied from multipart files). I was doing this:
File file = new File(multipartFile.getOriginalFilename());
and the blobs on Azure were 0 bytes, but when I changed it to this:
File file = new File("C://uploads//"+multipartFile.getOriginalFilename());
it started saving the files properly.
I have a very confusing problem and hope that I can get some ideas here.
My problem is very simple, but I didn't find a solution yet.
I want to create a simple ZIP file with ZipEntry objects in it. The entries are created from a given byte array (stored in a Postgres DB via Hibernate).
When I pass this byte array to ZipOutputStream.write(..), the resulting ZIP file is always corrupt. What am I doing wrong?
The ZIP file is transferred to an FTP server afterwards.
ByteArrayOutputStream bos = new ByteArrayOutputStream();
final ZipOutputStream zipOut = new ZipOutputStream(bos);
String filename = "test.zip";
for (final Attachment attachment : transportDoc.getAttachments()) {
    log.debug("Adding " + attachment.getFileName() + " to ZIP file /tmp/" + filename);
    ZipEntry ze = new ZipEntry(attachment.getFileName());
    zipOut.putNextEntry(ze);
    zipOut.write(attachment.getFileContent());
    zipOut.flush();
    zipOut.closeEntry();
}
zipOut.close();
org.apache.commons.io.FileUtils.writeByteArrayToFile(new File("/tmp/" + filename), bos.toByteArray());
I am confused, because when I replaced
zipOut.write(attachment.getFileContent()); //This is the byte array from db
with
zipOut.write("Bla bla".getBytes());
it worked!
But the byte array from the DB can't be corrupt, because it can be written to a file with
org.apache.commons.io.FileUtils.writeByteArrayToFile(new File("/tmp/test.png"), attachment.getFileContent());
with no problem. It is a correct file.
I hope you have some ideas left.
Thanks in advance.
EDIT:
I tried to repair the ZIP file offline, and this message appeared:
zip warning: no end of stream entry found: cglhnngplpmhipfg.png
(This png file is the byte-Array-File)
A simple unzip command outputs the following:
unzip created.zip
Archive: created.zip
error [created.zip]: missing 2 bytes in zipfile
(attempting to process anyway)
error [created.zip]: attempt to seek before beginning of zipfile
(please check that you have transferred or created the zipfile in the
appropriate BINARY mode and that you have compiled UnZip properly)
(attempting to re-compensate)
replace cglhnngplpmhipfg.png? [y]es, [n]o, [A]ll, [N]one, [r]ename: y
inflating: cglhnngplpmhipfg.png
error: invalid compressed data to inflate
file #2: bad zipfile offset (local header sig): 24709
(attempting to re-compensate)
inflating: created.xml
EDIT 2:
When I write the byte array to the filesystem and add that file to the ZIP via an InputStream, it doesn't work either! But the file on the filesystem is fine; I can open the image with no problem. It's very confusing.
File tmpAttachment = new File("/tmp/" + filename + attachment.getFileName());
FileUtils.writeByteArrayToFile(tmpAttachment, attachment.getFileContent());
FileInputStream inTmp = new FileInputStream(tmpAttachment);
int len;
byte[] buffer = new byte[1024];
while ((len = inTmp.read(buffer)) > 0) {
    zipOut.write(buffer, 0, len);
}
inTmp.close();
EDIT 3:
This problem only appears when I add "complex" files like PNG or PDF. If I put a txt file in, it works.
The problem was NOT in the ZIP library itself.
It was the transfer to an external FTP server in the wrong mode (ASCII instead of binary), which corrupted the binary data.
Thanks all for your help.
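For reference, a minimal sketch of forcing binary mode before the upload, assuming Apache Commons Net (the question doesn't say which FTP client was used; host, credentials, and paths are placeholders):

import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

FTPClient ftp = new FTPClient();
ftp.connect("ftp.example.com");
ftp.login("user", "password");
// Binary mode prevents the transfer from mangling bytes that look like line endings.
ftp.setFileType(FTP.BINARY_FILE_TYPE);
FileInputStream in = new FileInputStream("/tmp/test.zip");
ftp.storeFile("test.zip", in);
in.close();
ftp.logout();
ftp.disconnect();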
Try calling closeEntry() before flush(). You can also try explicitly specifying the entry size with ze.setSize(attachment.getFileContent().length).
How can I create a new File (from java.io) in memory, not on the hard disk?
I am using Java. I don't want to save the file on the hard drive.
I'm faced with a bad API (java.util.jar.JarFile). It expects a File file or a String filename. I have no file (only byte[] content), and while I can create a temporary file, that's not an elegant solution. I need to validate the digest of a signed jar.
byte[] archiveContent = getContent();
File tempFile = File.createTempFile("tmp", ".tmp");
FileOutputStream fos = new FileOutputStream(tempFile);
fos.write(archiveContent);
fos.close();
JarFile jarFile = new JarFile(tempFile);
Manifest manifest = jarFile.getManifest();
Any examples of how to achieve getting manifest without creating a temporary file would be appreciated.
How can I create a new File (from java.io) in memory, not on the hard disk?
Maybe you are confusing File and Stream:
A File is an abstract representation of file and directory pathnames. Using a File object, you can access the file metadata in a file system, and perform some operations on files on this filesystem, like delete or create the file. But the File class does not provide methods to read and write the file contents.
To read and write from a file, you are using a Stream object, like FileInputStream or FileOutputStream. These streams can be created from a File object and then be used to read from and write to the file.
You can create a stream based on a byte buffer which resides in memory, by using a ByteArrayInputStream and a ByteArrayOutputStream to read from and write to a byte buffer in a similar way you read and write from a file. The byte array contains the "File's" content. You do not need a File object then.
Both the File... and the ByteArray... streams inherit from java.io.OutputStream and java.io.InputStream, respectively, so that you can use the common superclass to hide whether you are reading from a file or from a byte array.
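A minimal sketch of the in-memory round trip (the payload is a placeholder):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Write into memory instead of into a file.
ByteArrayOutputStream out = new ByteArrayOutputStream();
out.write("hello".getBytes(StandardCharsets.UTF_8));

// Read the same bytes back; no File is involved at any point.
ByteArrayInputStream in = new ByteArrayInputStream(out.toByteArray());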
It is not possible to create a java.io.File that holds its content in (Java heap) memory *.
Instead, normally you would use a stream. To write to a stream, in memory, use:
OutputStream out = new ByteArrayOutputStream();
out.write(...);
But unfortunately, a stream can't be used as input for java.util.jar.JarFile, which as you mention can only use a File or a String containing the path to a valid JAR file. I believe using a temporary file like you currently do is the only option, unless you want to use a different API.
If you are okay using a different API, there is conveniently a class in the same package, named JarInputStream you can use. Simply wrap your archiveContent array in a ByteArrayInputStream, to read the contents of the JAR and extract the manifest:
try (JarInputStream stream = new JarInputStream(new ByteArrayInputStream(archiveContent))) {
Manifest manifest = stream.getManifest();
}
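Note that JarInputStream also has a two-argument constructor, JarInputStream(InputStream in, boolean verify). Signature verification happens as entries are read, so to validate the digests of a signed jar you need to read each entry to the end; a sketch (reusing the archiveContent array from above):

try (JarInputStream jis = new JarInputStream(new ByteArrayInputStream(archiveContent), true)) {
    JarEntry entry;
    byte[] buf = new byte[8192];
    while ((entry = jis.getNextJarEntry()) != null) {
        // Fully reading an entry triggers its digest check;
        // a tampered entry makes read() throw a SecurityException.
        while (jis.read(buf) != -1) { /* discard */ }
    }
}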
*) It's obviously possible to create a full file-system that resides in memory, like a RAM-disk, but that would still be "on disk" (and not in Java heap memory) as far as the Java process is concerned.
You could use an in-memory filesystem, such as Jimfs
Here's a usage example from their readme:
FileSystem fs = Jimfs.newFileSystem(Configuration.unix());
Path foo = fs.getPath("/foo");
Files.createDirectory(foo);
Path hello = foo.resolve("hello.txt"); // /foo/hello.txt
Files.write(hello, ImmutableList.of("hello world"), StandardCharsets.UTF_8);
I think a temporary file can be another solution:
File tempFile = File.createTempFile(prefix, suffix, null);
FileOutputStream fos = new FileOutputStream(tempFile);
fos.write(byteArray);
fos.close();
There is an answer about that here.
I have an app that creates multiple files using a byte array it gets from a Socket InputStream. The file saves perfectly when I save just one file, but if I save the one file, then re-instantiate the file stream and save a different file, the first file gets corrupted while the second file is saved perfectly. I opened the two files in a text editor, and roughly the first fifth of the first file is blank spaces while the second file is full, yet both have the same size properties (9,128,731 bytes). The following example duplicates the scenario, with the same corruption result:
FileOutputStream outStream;
outStream = new FileOutputStream("/mnt/sdcard/testmp3.mp3");
File file = new File("/mnt/sdcard/test.mp3");
FileInputStream inStream = new FileInputStream(file);
byte[] buffer = new byte[9128731];
inStream.read(buffer);
outStream.write(buffer, 0, buffer.length);
inStream.close();
outStream.flush();
outStream.close();
outStream = null;
outStream = new FileOutputStream("/mnt/sdcard/testmp32.mp3");
outStream.write(buffer, 0, buffer.length);
inStream.close();
outStream.flush();
outStream.close();
outStream = null;
I tried this EXACT code in a regular java application and both files were saved without a problem. Does anyone know why the android is doing this?
Any help would be GREATLY appreciated
As jtahlborn mentioned, you cannot assume that InputStream.read(byte[]) will always read as many bytes as you want. You should also avoid writing out such a large byte array in one go without buffering. You can handle both concerns and save memory by copying the file like this:
File inFile = new File("/mnt/sdcard/test.mp3");
File outFile = new File("/mnt/sdcard/testmp3.mp3");
FileInputStream inStream = new FileInputStream(inFile);
FileOutputStream outStream = new FileOutputStream(outFile);
byte[] buffer = new byte[65536];
int len;
while ((len = inStream.read(buffer)) != -1) {
    outStream.write(buffer, 0, len);
}
inStream.close();
outStream.close();
I see some potential issues that can get you started debugging:
You're writing to the first output stream before you close the input stream, which is a bit odd.
You can't accurately gauge the similarity/difference between two binary files using a text editor. You need to look at the files in a hex editor (or better, Audacity)
I would use BufferedOutputStream as suggested by the Android docs:
out = new BufferedOutputStream(new FileOutputStream(file));
http://developer.android.com/reference/java/io/FileOutputStream.html
As a debugging technique, print the contents of buffer after the first write. Also, inStream.read() returns an int. I would additionally compare this to buffer.length and make sure they are the same. Regardless, I would just call write(buffer) instead of write(buffer, 0, buffer.length) unless you have a really good reason.
You are assuming that the read() call will read as many bytes as you want. That is incorrect: the method is free to read anywhere from 1 to buffer.length bytes. That is why you should always use the return value to determine how many bytes were actually read. There are plenty of stream tutorials out there that show how to correctly read from a Java stream (i.e. how to completely fill your buffer, as sketched below).
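A minimal sketch of completely filling a buffer of known size (essentially what java.io.DataInputStream.readFully does; expectedSize and inStream stand in for the question's values):

byte[] buffer = new byte[expectedSize];
int off = 0;
while (off < buffer.length) {
    int n = inStream.read(buffer, off, buffer.length - off);
    if (n == -1) {
        throw new EOFException("stream ended after " + off + " bytes");
    }
    off += n; // advance past the bytes actually read
}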
If anyone's having the same problem and wondering how to fix it: I found out the problem was caused by my SD card. I bought a 32 GB Kingston SD card, and just yesterday I decided to try running the same code again, except using internal storage instead, and everything worked perfectly. I also tried the stock 2 GB SD card the device came with, and it also worked perfectly. I'm glad to know my code works, but a little frustrated that I spent 50 bucks on a defective memory card. Thanks for everyone's input.
I'm reading a bunch of files from an FTP server. Then I need to unzip those files and write them to a fileshare.
I don't want to write the files first and then read them back and unzip them. I want to do it all in one go. Is that possible?
This is my code
FTPClient fileclient = new FTPClient();
..
ByteArrayOutputStream out = new ByteArrayOutputStream();
fileclient.retrieveFile(filename, out);
??????? //How do I get my out-stream into a File-object?
File file = new File(?);
ZipFile zipFile = new ZipFile(file,ZipFile.OPEN_READ);
Any ideas?
You should use a ZipInputStream wrapped around the InputStream returned from FTPClient's retrieveFileStream(String remote).
You don't need to create the File object.
If you want to save the file, you should pipe the stream directly into a ZipOutputStream:
ByteArrayOutputStream out = new ByteArrayOutputStream();
ZipOutputStream zos = new ZipOutputStream(out);
// do whatever with your zip file
If, instead, you want to open the just-retrieved file, work with a ZipInputStream:
new ZipInputStream(fileClient.retrieveFileStream(String remote));
Just read the doc here and here
I think you want:
ZipInputStream zis = new ZipInputStream( new ByteArrayInputStream( out.toByteArray() ) );
Then read your data from the ZipInputStream.
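To then unzip each entry straight onto the fileshare without writing the downloaded ZIP itself to disk, one possible sketch (the target directory is a placeholder; out is the ByteArrayOutputStream from the question):

ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(out.toByteArray()));
ZipEntry entry;
byte[] buf = new byte[8192];
while ((entry = zis.getNextEntry()) != null) {
    // Write each decompressed entry to the share as a separate file.
    FileOutputStream fos = new FileOutputStream("/fileshare/" + entry.getName());
    int n;
    while ((n = zis.read(buf)) != -1) {
        fos.write(buf, 0, n);
    }
    fos.close();
    zis.closeEntry();
}
zis.close();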
As others have pointed out, for what you are trying to do, you don't need to write the downloaded ZIP "file" to the file system at all.
Having said that, I'd like to point out a misconception in your question, that is also reflected in some of the answers.
In Java, a File object does not really represent a file at all. Rather, it represents a file name or "path". While this name or path often corresponds to an actual file, that doesn't need to be the case.
This may sound a bit like hair-splitting, but consider this scenario:
File dir = new File("/tmp/foo");
boolean isDirectory = dir.isDirectory();
if (isDirectory) {
    // spend a long time computing some result
    ...
    // create an output file in 'dir' containing the result
}
Now if instances of the File class represented objects in the file system, you'd expect the code that creates the output file to succeed (modulo permissions). But in fact, the creation could fail because something deleted "/tmp/foo" or replaced it with a regular file.
It must be said that some of the methods on the File class do seem to assume that the File object does correspond to a real filesystem entity. Examples are the methods for getting a file's size or timestamps, or for listing the names in a directory. However, in each case, the method is specified to throw an exception if the actual file does not exist or has the wrong type for the operation requested.
Well, you could just create a FileOutputStream and then write the data from that:
FileOutputStream fos = new FileOutputStream(filename);
try {
out.writeTo(fos);
} finally {
fos.close();
}
Then just create the File object:
File file = new File(filename);
You need to understand that a File object doesn't represent any real data on disk - it's just a filename, effectively. The file doesn't even have to exist. If you want to actually write data, that's what FileOutputStream is for.
EDIT: I've just spotted that you didn't want to write the data out first - but that's what you've got to do if you're going to pass the file to something that expects a genuine file with data in it.
If you don't want to do that, you'll have to use a different API which doesn't expect a file to exist... as per Qwerky's answer.
Just change the ByteArrayOutputStream to a FileOutputStream.