With NIO it is possible to map an existing file into memory, but is it possible to create a mapping purely in memory, without a file on the hard drive?
I want to mimic the Windows CreateFileMapping function, which allows you to write to memory.
Is there an equivalent mechanism in Java?
The goal is to write to memory so that another program (written in C) can read it.
Have a look at the following. A file is created, but this might be as close as you're going to get.
MappedByteBuffer
MappedByteBuffer.load()
FileChannel
FileChannel.map()
Here is a snippet to try and get you started.
File filePipe = new File(tempDirectory, namedPipe.getName() + ".pipe");
try {
    int pipeSize = 4096;
    RandomAccessFile randomAccessFile = new RandomAccessFile(filePipe, "rw");
    FileChannel fileChannel = randomAccessFile.getChannel();
    // Map the file read/write and pull its contents into memory
    MappedByteBuffer mappedByteBuffer = fileChannel.map(FileChannel.MapMode.READ_WRITE, 0, pipeSize);
    mappedByteBuffer.load();
} catch (Exception e) {
    // handle or log the exception
}
Most libraries in Java deal with input and output streams as opposed to java.io.File objects.
Examples: image reading, XML, audio, zip
Where possible, when dealing with I/O, use streams.
This may not be what you want, however, if you need random access to the data.
Memory-mapped files give you a MappedByteBuffer from a FileChannel via FileChannel.map(). If you don't need a file, just use a plain ByteBuffer instead, which exists entirely in memory; create one with ByteBuffer.allocate() or ByteBuffer.allocateDirect().
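For example, a purely in-memory buffer needs no backing file at all. A minimal sketch using only the standard API:

import java.nio.ByteBuffer;

public class InMemoryBufferDemo {
    public static void main(String[] args) {
        // 4 KB allocated outside the Java heap; no file is involved.
        ByteBuffer buffer = ByteBuffer.allocateDirect(4096);
        buffer.putInt(42);
        buffer.putDouble(3.14);
        buffer.flip(); // switch from writing to reading
        System.out.println(buffer.getInt());    // 42
        System.out.println(buffer.getDouble()); // 3.14
    }
}

Keep in mind, though, that a ByteBuffer is private to the JVM process; for sharing with a C program, a file-backed mapping is still needed.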
I'm writing a Java REST service to support parallel upload of parts of a large file. I write these parts to separate files and merge them using a FileChannel. I have a sample implementation in Golang that does the same thing, but when it merges the parts it takes almost no time. When I use a FileChannel, or read from one stream and write to the final file, it takes a long time. I think the difference is that Golang can keep the data on disk as it is and merge the parts without actually moving the data. Is there any way I can do the same in Java?
Here is my code that merges the parts; I loop over this method for all parts:
private void mergeFileUsingChannel(String destinationPath, String sourcePath, long partSize, long offset) throws Exception {
    FileChannel outputChannel = null;
    FileChannel inputChannel = null;
    try {
        outputChannel = new FileOutputStream(new File(destinationPath)).getChannel();
        outputChannel.position(offset);
        inputChannel = new FileInputStream(new File(sourcePath)).getChannel();
        inputChannel.transferTo(0, partSize, outputChannel);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (inputChannel != null)
            inputChannel.close();
        if (outputChannel != null) {
            outputChannel.close();
        }
    }
}
The documentation of FileChannel transferTo states:
"Many operating systems can transfer bytes directly from the filesystem cache to the target channel without actually copying them."
So the code you have written is correct, and the inefficiency you are seeing is probably related to the underlying file-system type.
One small optimization I could suggest would be to open the file in append mode (shown below). Note, though, what the documentation says:
"Whether the advancement of the position and the writing of the data are done in a single atomic operation is system-dependent"
Beyond that, you may have to think of a way to work around the problem. For example, by creating a large enough contiguous file as a first step.
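For example (a sketch; totalSize is an assumed value, known from the upload metadata), the destination could be preallocated once before any part is merged:

// Reserve the full destination size up front so each part can be
// written at its offset into already-contiguous space.
try (RandomAccessFile raf = new RandomAccessFile(destinationPath, "rw")) {
    raf.setLength(totalSize);
}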
EDIT: I also noticed that you are not explicitly closing your FileOutputStream. It would be best to hang on to that and close it, so that all the file descriptors are closed.
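A sketch of the same merge with try-with-resources so every descriptor is closed; RandomAccessFile is used here because new FileOutputStream(destinationPath) would truncate the destination on each call:

private void mergeFileUsingChannel(String destinationPath, String sourcePath,
                                   long partSize, long offset) throws IOException {
    try (RandomAccessFile dest = new RandomAccessFile(destinationPath, "rw");
         FileChannel outputChannel = dest.getChannel();
         FileInputStream source = new FileInputStream(sourcePath);
         FileChannel inputChannel = source.getChannel()) {
        outputChannel.position(offset);
        inputChannel.transferTo(0, partSize, outputChannel);
    }
}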
I am trying to open a file for reading or create the file if it was not there.
I use this code:
String location = "/test1/test2/test3/";
new File(location).mkdirs();
location += "fileName.properties";
Path confDir = Paths.get(location);
InputStream in = Files.newInputStream(confDir, StandardOpenOption.CREATE);
in.close();
And I get java.nio.file.NoSuchFileException
Considering that I am using the StandardOpenOption.CREATE option, the file should be created if it is not there.
Any idea why I am getting this exception?
It seems that you want one of two quite separate things to happen:
If the file exists, read it; or
If the file does not exist, create it.
The two things are mutually exclusive but you seem to have confusingly merged them. If the file did not exist and you've just created it, there's no point in reading it. So keep the two things separate:
Path confDir = Paths.get("/test1/test2/test3");
Files.createDirectories(confDir);
Path confFile = confDir.resolve("filename.properties");
if (Files.exists(confFile)) {
    try (InputStream in = Files.newInputStream(confFile)) {
        // Use the InputStream...
    }
} else {
    Files.createFile(confFile);
}
Notice also that it's better to use "try-with-resources" instead of manually closing the InputStream.
According to the JavaDocs, you should have used the newOutputStream() method instead; then the file will be created:
OutputStream out = Files.newOutputStream(confDir, StandardOpenOption.CREATE);
out.close();
JavaDocs:
// Opens a file, returning an input stream to read from the file.
static InputStream newInputStream(Path path, OpenOption... options)
// Opens or creates a file, returning an output stream that
// may be used to write bytes to the file.
static OutputStream newOutputStream(Path path, OpenOption... options)
The explanation is that the effect of the OpenOption constants depends on whether you use them with a write (output) stream or a read (input) stream. That is why StandardOpenOption.CREATE deliberately works only with an OutputStream and not with an InputStream.
NOTE: I agree with @EJP; you should take a look at Oracle's tutorials to learn how to create files properly.
I think you intended to create an OutputStream (for writing) instead of an InputStream (which is for reading).
Another handy way of creating an empty file is using apache-commons FileUtils like this
FileUtils.touch(new File("/test1/test2/test3/fileName.properties"));
I have some old code that was working until recently, but seems to barf now that it runs on a new server using OpenJDK 6 rather than Java SE 6.
The problem seems to revolve around JAI.create. I have jpeg files which I scale and convert to png files. This code used to work with no leaks, but now that the move has been made to a box running OpenJDK, the file descriptors seem to never close, and I see more and more tmp files accumulate in the tmp directory on the server. These are not files I create, so I assume it is JAI that does it.
Another reason might be the larger heap size on the new server. If JAI cleans up on finalize, but GC happens less frequently, then maybe the files pile up because of that. Reducing the heap size is not an option, and we seem to be having unrelated issues with increasing ulimit.
Here's an example of a file that leaks when I run this:
/tmp/imageio7201901174018490724.tmp
Some code:
// Processor is an internal class that aggregates operations
// performed on the image, like resizing
private byte[] processImage(Processor processor, InputStream stream) {
    byte[] bytes = null;
    SeekableStream s = null;
    try {
        // Read the file from the stream
        s = SeekableStream.wrapInputStream(stream, true);
        RenderedImage image = JAI.create("stream", s);
        BufferedImage img = PlanarImage.wrapRenderedImage(image).getAsBufferedImage();
        // Process image
        if (processor != null) {
            image = processor.process(img);
        }
        // Convert to bytes
        bytes = convertToPngBytes(image);
    } catch (Exception e) {
        // error handling
    } finally {
        // Clean up streams
        IOUtils.closeQuietly(stream);
        IOUtils.closeQuietly(s);
    }
    return bytes;
}

private static byte[] convertToPngBytes(RenderedImage image) throws IOException {
    ByteArrayOutputStream out = null;
    byte[] bytes = null;
    try {
        out = new ByteArrayOutputStream();
        ImageIO.write(image, "png", out);
        bytes = out.toByteArray();
    } finally {
        IOUtils.closeQuietly(out);
    }
    return bytes;
}
My questions are:
Has anyone run into this and solved it? Since the tmp files created are not mine, I don't know what their names are and thus can't really do anything about them.
What're some of the libraries of choice for resizing and reformatting images? I heard of Scalr - anything else I should look into?
I would rather not rewrite the old code at this time, but if there is no other choice...
Thanks!
Just a comment on the temp files/finalizer issue, now that you seem to have solved the root of the problem (too long for a comment, so I'll post it as an answer... :-P):
The temp files are created by ImageIO's FileCacheImageInputStream. These instances are created whenever you call ImageIO.createImageInputStream(stream) and the useCache flag is true (the default). You can set it to false to disable the disk caching, at the expense of in-memory caching. This might make sense as you have a large heap, but probably not if you are processing very large images.
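Disabling it is a one-liner (a sketch; it must run before any ImageInputStream is created):

import javax.imageio.ImageIO;

// With the cache disabled, createImageInputStream() returns a
// MemoryCacheImageInputStream, so no imageio*.tmp files are written.
ImageIO.setUseCache(false);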
I also think you are (almost) correct about the finalizer issue. You'll find the following finalize() method on FileCacheImageInputStream (Sun JDK 6/1.6.0_26):
protected void finalize() throws Throwable {
    // Empty finalizer: for performance reasons we instead use the
    // Disposer mechanism for ensuring that the underlying
    // RandomAccessFile is closed/deleted prior to garbage collection
}
There's some quite "interesting" code in the class's constructor that sets up automatic stream closing and disposal when the instance is finalized (should client code forget to do so). This might be different in the OpenJDK implementation; at least it seems kind of hacky. It's also unclear to me at the moment exactly what "performance reasons" we are talking about...
In any case, it seems calling close on the ImageInputStream instance, as you now do, will properly close the file descriptor and delete the temp file.
Found it!
So a stream gets wrapped by another stream in a different area in the code:
iis = ImageIO.createImageInputStream(stream);
And further down, stream is closed.
This doesn't seem to leak any resources when running with Sun Java, but does seem to cause a leak when running with Open JDK.
I'm not sure why that is (I have not looked at source code to verify, though I have my guesses), but that's what seems to be happening. Once I explicitly closed the wrapping stream, all was well.
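In code, the fix amounts to something like this (a sketch reusing the variables from the original snippet):

ImageInputStream iis = null;
try {
    iis = ImageIO.createImageInputStream(stream);
    // ... decode from iis ...
} finally {
    if (iis != null) {
        iis.close(); // releases the descriptor and deletes the imageio*.tmp cache file
    }
    IOUtils.closeQuietly(stream); // closing iis does not close the wrapped stream
}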
I'm trying to export some files from a system and save them to my drive. The problem is that some files are pretty big and I get the Java out-of-memory error.
FileOutputStream fileoutstream = new FileOutputStream(filenameExtension);
fileoutstream.write(dataManagement.getContent(0).getData());
fileoutstream.flush();
fileoutstream.close();
Any recommendations I can try? I added the flush() but it made no difference. This code calls the export method, generates the file, and saves it. I'm using a cursor to run over the data I'm exporting, not an array. I tried adding more memory, but the files are too big.
You are loading the whole file into memory before writing it. Instead you should (see the sketch after this list):
load only a chunk of data
write it
repeat the steps above until you have processed all data.
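A minimal sketch of that loop; the getDataStream() call is hypothetical and stands in for whatever streaming accessor the export API actually offers:

byte[] chunk = new byte[8192];
try (InputStream in = dataManagement.getContent(0).getDataStream(); // hypothetical
     OutputStream out = new FileOutputStream(filenameExtension)) {
    int n;
    while ((n = in.read(chunk)) > 0) {
        out.write(chunk, 0, n); // write only what was read; never the whole file at once
    }
}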
If the files are really big, you may need to read/write them in chunks. If the files are small enough to fit in memory, you can instead increase the size of the virtual machine's heap, e.g.:
java -Xmx512M ...
FileInputStream fi = new FileInputStream(inFile);
FileOutputStream fo = new FileOutputStream(outFile);
byte[] buffer = new byte[5000];
int n;
while ((n = fi.read(buffer)) > 0)
    fo.write(buffer, 0, n); // write only the n bytes actually read
Hope this helps you get the idea.
You can use the Spring Batch framework to do the reading and writing of the file in chunks.
http://static.springsource.org/spring-batch/
I'm working on an SC2Replay parsing tool. I built it on top of MPQLIB: http://code.google.com/p/mpqlib/
Unfortunately the tool uses FileChannels to read through the bzip files, and calls
map(MapMode.READ_ONLY, hashtablePosition, hashTableSize);
After calling that function, closing the FileChannel does not release the file in the process. To be specific, I cannot rename or move the file.
The problem occurs on Java 7; it works fine on Java 6.
Here is a simple code snippet to replicate it:
FileInputStream f = new FileInputStream("test.SC2Replay");
FileChannel fc = f.getChannel();
fc.map(MapMode.READ_ONLY, 0,1);
fc.close();
new File("test.SC2Replay").renameTo(new File("test1.SC2Replay"));
Commenting out the fc.map() call will allow you to rename the file.
P.S. From here: Should I close the FileChannel?
It states that you do not need to close both the FileChannel and the FileInputStream, because closing one closes the other. I tried closing either and both, and it still did not work.
Is there a workaround for renaming the file after reading its data with FileChannel.map on Java 7? Everyone seems to have Java 7 nowadays.
Good day,
It seems that FileChannel.map causes the problem on Java 7: if you use FileChannel.map, you can no longer fully release the file.
A quick workaround: instead of using FileChannel.map(MapMode.READ_ONLY, position, length), you can use
ByteBuffer b = ByteBuffer.allocate(length);
fc.read(b,position);
b.rewind();
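Wrapped up as a helper (a sketch), this can stand in for map() wherever a positioned read is needed:

// Reads length bytes starting at position without mmap,
// so the file can still be renamed or moved afterwards.
static ByteBuffer readBlock(FileChannel fc, long position, int length) throws IOException {
    ByteBuffer b = ByteBuffer.allocate(length);
    while (b.hasRemaining()) {
        if (fc.read(b, position + b.position()) < 0) {
            break; // end of file reached early
        }
    }
    b.flip();
    return b;
}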
It's a documented bug. The bug report refers to Java 1.4, and they consider it a documentation bug. Closing the FileChannel does not close the underlying stream.
If you're using the Sun JRE, you can cheat by casting to their implementation and telling it to release itself. I'd only recommend doing this if you're not reliant on the file being closed, or if you never plan to use another JRE.
At some point, I hope that something like this will make it into the proper public API.
// Note: DirectBuffer here is the internal class sun.nio.ch.DirectBuffer,
// so this compiles and runs only on a Sun/Oracle JRE.
try (FileInputStream stream = new FileInputStream("test.SC2Replay");
     FileChannel channel = stream.getChannel()) {
    MappedByteBuffer mappedBuffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, 1);
    try {
        // do stuff with it
    } finally {
        if (mappedBuffer instanceof DirectBuffer) {
            // Explicitly unmap so the OS releases the file handle.
            ((DirectBuffer) mappedBuffer).cleaner().clean();
        }
    }
}