Java out of memory using FileOutputStream

I'm trying to export some files from a system and save them to my drive. The problem is that some of the files are pretty big, and I get the Java out-of-memory error.
FileOutputStream fileoutstream = new FileOutputStream(filenameExtension);
fileoutstream.write(dataManagement.getContent(0).getData());
fileoutstream.flush();
fileoutstream.close();
Any recommendation that I can try? I added the flush but there's no difference. This code calls the export method, generates the file, and saves it. I'm using a cursor to run over the data that I'm exporting, not an array. I tried adding more memory, but the files are too big.

You are loading the whole file into memory before writing it. Instead you should:
load only a chunk of data
write it
repeat the steps above until you have processed all the data (see the sketch below).
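A minimal sketch of that loop, assuming your dataManagement API can expose the content as an InputStream rather than a full byte[] (getContentStream here is hypothetical; substitute whatever streaming accessor your API actually offers):
// Copy the export data to disk in fixed-size chunks, so only one small
// buffer is ever held in memory (getContentStream is a hypothetical API).
try (InputStream in = dataManagement.getContentStream(0);
     OutputStream out = new FileOutputStream(filenameExtension)) {
    byte[] buffer = new byte[8192];
    int n;
    while ((n = in.read(buffer)) > 0) {
        out.write(buffer, 0, n); // write only the bytes actually read
    }
}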

If the files are really big, you may need to read/write them in chunks. If the files are small enough to fit in memory, you can instead increase the size of the virtual machine's heap.
e.g.:
java -Xmx512M ...
FileInputStream fi = new FileInputStream(infile);
FileOutputStream fo = new FileOutputStream(outfile);
byte[] buffer = new byte[5000];
int n;
while ((n = fi.read(buffer)) > 0) {
    fo.write(buffer, 0, n); // write only the bytes read in this pass
}
fi.close();
fo.close();
Hope this helps you get the idea.

You can use the Spring Batch framework to read and write the file in chunks.
http://static.springsource.org/spring-batch/
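As a hedged sketch of what a chunk-oriented step could look like (Spring Batch 4.x style; Record and the reader/writer beans are hypothetical placeholders for your own types, and the usual imports and job configuration are omitted):
@Bean
public Step exportStep(StepBuilderFactory steps,
                       ItemReader<Record> reader,
                       ItemWriter<Record> writer) {
    // Read, then write, 100 records at a time instead of loading everything.
    return steps.get("exportStep")
                .<Record, Record>chunk(100)
                .reader(reader)
                .writer(writer)
                .build();
}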

Related

Azure Storage Blob: Uploaded CSV file shows zero bytes

The problem I am facing in the title is very similar to this question previously raised here (Azure storage: Uploaded files with size zero bytes), but that one was for .NET. The context for my Java scenario is that I am uploading small CSV files on a daily basis (less than about 5 KB per file). In addition, my code uses the latest version of the Azure API, in contrast to the 2010 version used in the other question.
I couldn't figure out what I have missed. The other alternative is to do it in File Storage, but of course the blob approach was recommended by a few of my peers.
So far, I have mostly based my code for uploading a file as a block blob on the sample shown on the Azure Samples git page (https://github.com/Azure-Samples/storage-blob-java-getting-started/blob/master/src/BlobBasics.java). I have already done the container setup and file renaming steps, which aren't a problem, but after uploading, the size of the file in the blob storage container on my Azure domain shows 0 bytes.
I've tried the alternative of converting the file into a FileInputStream and uploading it as a stream, but it still behaves the same way.
fileName = event.getFilename(); // fileName is e.g. eod1234.csv
String tempdir = System.getProperty("java.io.tmpdir");
file = new File(tempdir + File.separator + fileName);
try {
    PipedOutputStream pos = new PipedOutputStream();
    stream = new PipedInputStream(pos);
    buffer = new byte[stream.available()];
    stream.read(buffer);
    FileInputStream fils = new FileInputStream(file);
    int content = 0;
    while ((content = fils.read()) != -1) {
        System.out.println((char) content);
    }
    // OutputStream was written as a test previously but didn't work
    OutputStream outStream = new FileOutputStream(file);
    outStream.write(buffer);
    outStream.close();
    // container name is "testing1"
    CloudBlockBlob blob = container.getBlockBlobReference(fileName);
    if (fileName.length() > 0) {
        blob.upload(fils, file.length()); // this is testing with fileInputStream
        blob.uploadFromFile(fileName);    // preferred, just upload from file
    }
}
There are no error messages shown; we just know that the file reaches blob storage and shows a size of 0 bytes. It's a one-way process, only uploading CSV-format files. In the blob container, each uploaded file should show a size of 1-5 KB.
Instead of blob.uploadFromFile(fileName); you should use blob.uploadFromFile(file.getAbsolutePath()); because the uploadFromFile method requires an absolute path. And you don't need the blob.upload(fils, file.length()); call.
Refer to Microsoft Docs: https://learn.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-java#upload-blobs-to-the-container
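Put together, the corrected upload could look like this (a minimal sketch, assuming container is an already-initialized CloudBlobContainer as in the question):
// Upload straight from the file on disk; no intermediate streams needed.
CloudBlockBlob blob = container.getBlockBlobReference(fileName);
blob.uploadFromFile(file.getAbsolutePath()); // absolute path, not just the name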
The Azure team replied to the same query I raised by mail, and I have confirmed that the problem was not in the API but in the Upload component in Vaadin, which behaves differently than usual (https://vaadin.com/blog/uploads-and-downloads-inputs-and-outputs). Either the CloudBlockBlob or the BlobContainerUrl approach works.
The out-of-the-box Upload component requires a manual implementation of the FileOutputStream to a temporary object, unlike the usual servlet object that is seen everywhere. Since there was limited time, I used one of their add-ons, EasyUpload, because it has the Viritin UploadFileHandler incorporated into it, instead of figuring out how to stream the object from scratch. Had there been more time, I would definitely have tried out the MultiFileUpload add-on, which has additional interesting features, in my sandbox workspace.
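For reference, the manual receiver implementation this answer alludes to usually looks something like the following (a hedged sketch against the classic Vaadin Upload API; the uploadedFile field is made up for illustration):
// Write the uploaded bytes into a temp file via a FileOutputStream,
// then hand that file to the blob upload code above.
Upload upload = new Upload("Upload CSV", new Upload.Receiver() {
    @Override
    public OutputStream receiveUpload(String filename, String mimeType) {
        try {
            uploadedFile = new File(System.getProperty("java.io.tmpdir"), filename);
            return new FileOutputStream(uploadedFile);
        } catch (FileNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
});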
I had this same problem working with .png files (copied from multipart files). I was doing this:
File file = new File(multipartFile.getOriginalFilename());
and the blobs on Azure were 0 bytes, but when I changed it to this:
File file = new File("C://uploads//" + multipartFile.getOriginalFilename());
it started saving the files properly.

FileChannel works even after removing backing file

I noticed this weird thing: an open FileChannel object keeps working even after the linked file is deleted while the channel is in use. I created a 15 GB test file, and the following program reads about 100 MB of the file's content per second.
Path path = Paths.get("/home/elbek/tmp/file.txt");
FileChannel fileChannel = FileChannel.open(path, StandardOpenOption.READ);
ByteBuffer byteBuffer = ByteBuffer.allocate(1024 * 1024);
while (true) {
    int read = fileChannel.read(byteBuffer);
    if (read < 0) {
        break;
    }
    Thread.sleep(10);
    byteBuffer.clear();
    System.out.println(fileChannel.position());
}
fileChannel.close();
After the program has run for ~5 seconds (it has read 0.5 GB), I delete the file from the file system and expect an error to be thrown after a few more reads, but the program goes on and reads the file to the end. I initially thought it might be served from the file cache, so I made the file huge; 15 GB should be big enough not to fit in the cache.
Anyway, how is the OS serving read requests while the file itself is no longer there? The OS I am testing on is Fedora.
Thanks.

How to create a file in Java of certain size (not sparse)

I am working on an Android app and trying to create a file of a certain size that won't be sparse. I literally want it to take up space on the Android device.
Here's what I have so far. I'm fairly new to Java and have tried a couple of different things: filling the file (takes waaay too long if the file is big, like 5 GB) or appending to the end (doesn't work? maybe I did it wrong).
File file = new File(dir, "testFile.txt");
try {
    RandomAccessFile f = new RandomAccessFile(file, "rw");
    f.setLength((long) userInputNum * 1048576 * 1024);
    f.close();
} catch (IOException e) {
    e.printStackTrace();
}
Currently, the file is created and, say I want it to be 5 GB, the file details say it's 5 GB, but it's not actually taking up space on the device (it's a sparse file, as I have found out). How can I create the file non-sparse, or what's a quick way to fill the file? I could use a command on a PC/Mac to make the file and push it to the device, but I want the app to do it.
So this works:
byte[] x = new byte[1048576];
RandomAccessFile f = new RandomAccessFile(file, "rw");
// The (long) cast matters: userInputNum * 1048576 * 1024 overflows int for targets of 2 GB or more.
while (file.length() < (long) userInputNum * 1048576 * 1024)
{
    f.write(x);
}
f.close();
Granted, it is pretty slow, but I believe it's much faster to create a 10 GB file in the app than to push a 10 GB file to the device. If someone has an idea of how to optimize this or change it completely, please do post!
How it works:
It keeps writing to the file until the file has reached the size the user wants. I believe something better than a plain byte[] could be used, but I'll leave that to whoever wants to figure it out. I'll do this on my own for myself, but I hope this helps someone else!
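One possible optimization, as a hedged and untested sketch: track the number of bytes written locally instead of asking the file system for file.length() on every iteration, and use a larger buffer:
long target = (long) userInputNum * 1048576L * 1024L;
byte[] chunk = new byte[8 * 1048576]; // 8 MB of zeroes per write
try (RandomAccessFile f = new RandomAccessFile(file, "rw")) {
    long written = 0;
    while (written < target) {
        // Write at most the remaining number of bytes.
        int n = (int) Math.min(chunk.length, target - written);
        f.write(chunk, 0, n);
        written += n;
    }
}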

write file in memory with java.nio?

With nio it is possible to map an existing file into memory. But is it possible to create the mapping only in memory, without a file on the hard drive?
I want to mimic the Windows CreateFileMapping function, which allows you to write to memory.
Is there an equivalent mechanism in Java?
The goal is to write to memory so that another program (in C) can read it.
Have a look at the following. A file is created, but this might be as close as you're going to get.
MappedByteBuffer
MappedByteBuffer.load()
FileChannel
FileChannel.map()
Here is a snippet to try and get you started.
filePipe = new File(tempDirectory, namedPipe.getName() + ".pipe");
try {
    int pipeSize = 4096;
    randomAccessFile = new RandomAccessFile(filePipe, "rw");
    fileChannel = randomAccessFile.getChannel();
    mappedByteBuffer = fileChannel.map(FileChannel.MapMode.READ_WRITE, 0, pipeSize);
    mappedByteBuffer.load();
} catch (Exception e) {
    ...
Most libraries in Java deal with input and output streams, as opposed to java.io.File objects.
Examples: image reading, XML, audio, zip.
Where possible, when dealing with I/O, use streams.
This may not be what you want, however, if you need random access to the data.
With memory-mapped files you get a MappedByteBuffer from a FileChannel using FileChannel.map(). If you don't need a file at all, just use a ByteBuffer instead, which exists entirely in memory. Create one using ByteBuffer.allocate() or ByteBuffer.allocateDirect().
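A minimal sketch of the file-less variant (note this buffer is private to the JVM process, so by itself it won't be visible to a separate C program):
// An in-memory buffer with no backing file; allocateDirect places it
// outside the Java heap, allocate() would place it on the heap.
ByteBuffer buf = ByteBuffer.allocateDirect(4096);
buf.putInt(42);
buf.put("hello".getBytes(StandardCharsets.UTF_8));
buf.flip();               // switch from writing to reading
int value = buf.getInt(); // reads back 42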

Resources.openRawResource() issue Android

I have a database file in the res/raw/ folder. I call Resources.openRawResource() with the file name as R.raw.FileName and get an input stream, but I have another database file on the device, so to copy the contents of that db into the device db I use:
BufferedInputStream bi = new BufferedInputStream(is);
and a FileOutputStream, but I get an exception saying that the database file is corrupted. How can I proceed?
I tried to read the file using File and FileInputStream with the path /res/raw/fileName, but that doesn't work either.
Yes, you should be able to use openRawResource to copy a binary across from your raw resource folder to the device.
Based on the example code in the API demos (content/ReadAsset), you should be able to use a variation of the following code snippet to read the db file data.
InputStream ins = getResources().openRawResource(R.raw.my_db_file);
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
int size = 0;
// Read the entire resource into a local byte buffer.
byte[] buffer = new byte[1024];
while ((size = ins.read(buffer, 0, 1024)) >= 0) {
    outputStream.write(buffer, 0, size);
}
ins.close();
buffer = outputStream.toByteArray();
A copy of your file should now exist in buffer, so you can use a FileOutputStream to save the buffer to a new file.
FileOutputStream fos = new FileOutputStream("mycopy.db");
fos.write(buffer);
fos.close();
InputStream.available() has severe limitations and should never be used to determine the length of the content available for streaming.
http://developer.android.com/reference/java/io/FileInputStream.html#available():
"[...]Returns an estimated number of bytes that can be read or skipped without blocking for more input. [...]Note that this method provides such a weak guarantee that it is not very useful in practice."
You have 3 solutions:
Go through the content twice: first just to compute the content length, then a second time to actually read the data.
Since Android resources are prepared by you, the developer, hardcode the expected length.
Put the file in the assets/ directory and read it through AssetManager, which gives you access to an AssetFileDescriptor and its content-length methods (see the sketch below). This may, however, give you the UNKNOWN value for the length, which isn't that useful.
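A hedged sketch of that third option (it assumes the db file has been moved to the assets/ directory; note that openFd can fail for assets the build tools have compressed):
// Open the asset with a file descriptor so its length is known up front.
AssetFileDescriptor afd = getAssets().openFd("my_db_file.db");
long length = afd.getLength(); // may be AssetFileDescriptor.UNKNOWN_LENGTH
InputStream in = afd.createInputStream();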
