Is there an easy (and therefore quick) way to accomplish this? Basically, take some input stream, which could be something like a socket.getInputStream(), and have the stream's buffer automatically redirect to standard out?
There are no easy ways to do it, because InputStream has a pull-style interface, while OutputStream has a push-style one. You need some kind of pumping loop to pull data from the InputStream and push it into the OutputStream. Something like this (run it in a separate thread if necessary):
int size = 0;
byte[] buffer = new byte[1024];
while ((size = in.read(buffer)) != -1) out.write(buffer, 0, size);
This is already implemented in Apache Commons IO as IOUtils.copy().
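For example, with Commons IO on the classpath (a minimal sketch; in is an already-opened InputStream):
import org.apache.commons.io.IOUtils;

IOUtils.copy(in, System.out);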
You need a simple thread which reads from the input stream and writes to standard output. Make sure it yields to other threads.
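For example, a minimal sketch of such a pump thread (in is the stream to drain; the blocking read() naturally yields the CPU while no data is available):
Thread pump = new Thread(() -> {
    byte[] buffer = new byte[1024];
    int size;
    try {
        while ((size = in.read(buffer)) != -1) {
            System.out.write(buffer, 0, size);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
});
pump.setDaemon(true); // don't keep the JVM alive just for the pump
pump.start();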
Since Java 9, you can use InputStream.transferTo
Example
try (InputStream stream = Application.class.getResourceAsStream("/test.txt")) {
stream.transferTo(System.out);
}
Related
Goal: Decrypt data from one source and write the decrypted data to a file.
try (FileInputStream fis = new FileInputStream(targetPath.toFile());
     ReadableByteChannel channel = newDecryptedByteChannel(path, associatedData))
{
    FileChannel fc = fis.getChannel();
    long position = 0;
    while (position < ???)
    {
        position += fc.transferFrom(channel, position, CHUNK_SIZE);
    }
}
The implementation of newDecryptedByteChannel(Path, byte[]) should not be of interest here; it just returns a ReadableByteChannel.
Problem: What is the condition to end the while loop? When is the "end of the byte channel" reached? Is transferFrom the right choice here?
This question might be related (the answer there is to just set the count to Long.MAX_VALUE). Unfortunately this doesn't help me, because the docs say that up to count bytes may be transferred, depending upon the natures and states of the channels.
Another thought was to just check whether the amount of bytes actually transferred is 0 (returned from transferFrom), but this condition may be true if the source channel is non-blocking and has fewer than count bytes immediately available in its input buffer.
It is one of the bizarre features of FileChannel.transferFrom() that it never tells you about end of stream. You have to know the input length independently.
I would just use streams for this: specifically, a CipherInputStream around a BufferedInputStream around a FileInputStream, and a FileOutputStream.
But the code you posted doesn't make any sense anyway. It can't work. You are transferring into the input file, and via a channel that was derived from a FileInputStream, so it is read-only, so transferFrom() will throw an exception.
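For example, a minimal sketch of that stream-based approach (assuming cipher is a javax.crypto.Cipher already initialized for decryption, and the file names are placeholders):
import java.io.*;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;

try (InputStream in = new CipherInputStream(
         new BufferedInputStream(new FileInputStream("encrypted.bin")), cipher);
     OutputStream out = new FileOutputStream("decrypted.bin")) {
    byte[] buffer = new byte[8192];
    int n;
    while ((n = in.read(buffer)) != -1) { // read() returns -1 at end of stream, unlike transferFrom()
        out.write(buffer, 0, n);
    }
}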
As commented by @user207421, since you are reading from a ReadableByteChannel, the target channel needs to be derived from a FileOutputStream rather than a FileInputStream. And the condition for ending the loop in your code would be the size of the file underlying the ReadableByteChannel, which you cannot get from it unless you can obtain the FileChannel and query its size method.
The way I could find to transfer the data is through a ByteBuffer, as below.
ByteBuffer buf = ByteBuffer.allocate(1024 * 8);
while (readableByteChannel.read(buf) != -1)
{
    buf.flip();
    fc.write(buf); // fc is a FileChannel derived from a FileOutputStream
    buf.compact(); // keep any bytes the write did not consume
}
// drain whatever is left in the buffer
buf.flip();
while (buf.hasRemaining())
{
    fc.write(buf);
}
I am reading DDS textures, but once the jar is built I can't access those textures through a URL or File, so I have to use an InputStream instead.
So I need to know how I can obtain a java.nio.ByteBuffer from a java.io.InputStream.
PS: third-party libraries are fine; I just need it working.
For me, the best option in this case is Apache commons-io to handle this and similar tasks.
The IOUtils type has a static method to read an InputStream and return a byte[].
InputStream is;
byte[] bytes = IOUtils.toByteArray(is);
Internally this creates a ByteArrayOutputStream and copies the bytes to the output, then calls toByteArray().
UPDATE: once you have the byte array, as @Peter pointed out, you have to convert it to a ByteBuffer:
ByteBuffer.wrap(bytes)
JAVA 9 UPDATE: as stated by @saka1029, if you're using Java 9+ you can use the standard InputStream API, which now includes the InputStream::readAllBytes method, so no external libraries are needed:
InputStream is;
byte[] bytes = is.readAllBytes();
What about:
ReadableByteChannel channel = Channels.newChannel(inputStream);
ByteBuffer buffer = ByteBuffer.allocate(bufferSize);
while (channel.read(buffer) != -1) {
    buffer.flip();   // switch the buffer to draining mode before consuming it
    // write/consume the buffer here
    buffer.clear();  // back to filling mode for the next read
}
A neat solution with no third-party library needed, provided you can rely on available() reporting the full remaining size (true for in-memory streams, but not for sockets), is
ByteBuffer byteBuffer = ByteBuffer.allocate(inputStream.available());
Channels.newChannel(inputStream).read(byteBuffer);
See ReadableByteChannel#read(ByteBuffer)
I'm dealing with some Java code in which there's an InputStream that I read once and then need to read again in the same method.
The problem is that I need to reset its position to the start in order to read it twice.
I've found a hack-ish solution to the problem:
is.mark(Integer.MAX_VALUE);
// Read the InputStream "is" fully
// { ... }
try
{
    is.reset();
}
catch (IOException e)
{
    e.printStackTrace();
}
Does this solution lead to some unexpected behaviour? Or will it work in its dumbness?
As written, you have no guarantees, because mark() is not required to report whether it was successful. To get a guarantee, you must first call markSupported(), and it must return true.
Also as written, the specified read limit is very dangerous. If you happen to be using a stream that buffers in-memory, it will potentially allocate a 2GB buffer. On the other hand, if you happen to be using a FileInputStream, you're fine.
A better approach is to use a BufferedInputStream with an explicit buffer.
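For example, a minimal sketch (READ_LIMIT is a hypothetical upper bound on how much data you need to re-read):
int READ_LIMIT = 64 * 1024;
BufferedInputStream in = new BufferedInputStream(rawStream, READ_LIMIT);
in.mark(READ_LIMIT); // safe as long as at most READ_LIMIT bytes are read before reset()
// ... first pass over the data ...
in.reset();          // rewind to the mark for the second pass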
It depends on the InputStream implementation. You might also consider whether it would be better to use a byte[] instead. The easiest way is to use Apache commons-io:
byte[] bytes = IOUtils.toByteArray(inputStream);
You can't do this reliably; some InputStreams (such as ones connected to terminals or sockets) don't support mark and reset (see markSupported). If you really have to traverse the data twice, you need to read it into your own buffer.
Instead of trying to reset the InputStream load it into a buffer like a StringBuilder or if it's a binary data stream a ByteArrayOutputStream. You can then process the buffer within the method as many times as you want.
ByteArrayOutputStream bos = new ByteArrayOutputStream();
int read = 0;
byte[] buff = new byte[1024];
while ((read = inStream.read(buff)) != -1) {
bos.write(buff, 0, read);
}
byte[] streamData = bos.toByteArray();
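You can then re-read the data as many times as you like, for example by wrapping it in a new ByteArrayInputStream(streamData) for each pass.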
For me, the easiest solution was to pass the object from which the InputStream could be obtained, and just obtain it again. In my case, it was from a ContentResolver.
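For example, a minimal sketch (assuming an Android ContentResolver and a content Uri):
InputStream first = contentResolver.openInputStream(uri);
// ... read it fully, then close it ...
first.close();
InputStream second = contentResolver.openInputStream(uri); // a fresh stream from the beginning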
I am downloading databases from the network, which are between 100 KB and 500 KB in size. Here is my code (irrelevant code removed):
URLConnection uConnection = downloadUrl.openConnection();
InputStream iS = uConnection.getInputStream();
BufferedInputStream bIS = new BufferedInputStream(iS);
byte[] buffer = new byte[1024];
FileOutputStream fOS = new FileOutputStream(db);
int bufferLength = 0;
while ((bufferLength = bIS.read(buffer)) > 0) {
fOS.write(buffer, 0, bufferLength);
}
fOS.close();
My problem is that the while loop takes a long time to finish. Have I messed up the code somewhere? It shouldn't take that long for such small files, should it? I'm talking about a minute for three files no larger than 1 MB altogether... Thanks in advance!
"Slow" is really rather ambiguous. That being said, considering what you're trying to do you shouldn't be using a BufferedInputStream and your buffer is way too small.
The buffered wrappers are for optimizing small reads/writes. Since all you're doing is trying to read a ton of data as fast as you can, you should just read directly from the InputStream, and use a large buffer (Say, 64k since the underlying native code is probably going to chunk at that size anyway).
byte[] buffer = new byte[65536];
...
while ((bufferLength = iS.read(buffer, 0, buffer.length)) > 0) {
...
I've found the real solution in JDK 1.7: it is reliable, fast, and simple, and will almost definitively draw a veil over the older java.io solutions. Although the web is still full of examples of copying files in Java using In/Out streams, I warmly suggest everyone use the simple method java.nio.file.Files.copy(Path source, Path target), with optional parameters for replacing the destination, migrating file metadata attributes, and even attempting a transactional move of files (where permitted by the underlying OS). That's a really good job, waited for so long!
You can easily convert code from copy(File file1, File file2) by appending .toPath() to the File instances (e.g. file1.toPath(), file2.toPath()). Note also that the boolean method isSameFile(file1.toPath(), file2.toPath()) is already used inside the above copy method, but it is easily usable in any case you want. For cases where you can't upgrade to 1.7, using community libraries from Apache (commons-io) or Google (Guava) is still suggested.
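For example, a minimal sketch with placeholder paths:
import java.nio.file.*;

Path source = Paths.get("source.dat");
Path target = Paths.get("target.dat");
Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);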
This is a newbie question, I know. Can you guys help?
I'm talking about big files, of course, above 100MB. I'm imagining some kind of loop, but I don't know what to use. Chunked stream?
One thing is for certain: I don't want something like this (pseudocode):
File file = new File(existing_file_path);
byte[] theWholeFile = new byte[file.length()]; //this allocates the whole thing into memory
File out = new File(new_file_path);
out.write(theWholeFile);
To be more specific, I have to rewrite an applet that downloads a base64 encoded file and decodes it into the "normal" file. Because it's done with byte arrays, it holds twice the file size in memory: once base64 encoded and once decoded. My question is not about base64; it's about saving memory.
Can you point me in the right direction?
Thanks!
From the question, it appears that you are reading the base64 encoded contents of a file into an array, decoding it into another array before finally saving it.
This is a bit of an overhead when considering memory. Especially given the fact that Base64 encoding is in use. It can be made a bit more efficient by:
Reading the contents of the file using a FileInputStream, preferably decorated with a BufferedInputStream.
Decoding on the fly. Base64 encoded characters can be read in groups of 4 characters, to be decoded on the fly (see the sketch after this list).
Writing the output to the file, using a FileOutputStream, again preferably decorated with a BufferedOutputStream. This write operation can also be done after every single decode operation.
The buffering of read and write operations is done to prevent frequent IO access. You could use a buffer size that is appropriate to your application's load; usually the buffer size is chosen to be some power of two, because such a number does not have an "impedance mismatch" with the physical disk buffer.
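For example, a minimal sketch that decodes on the fly using java.util.Base64 (available since Java 8; the file names are placeholders, and getMimeDecoder() may be needed instead if the encoded data contains line breaks):
import java.io.*;
import java.util.Base64;

try (InputStream in = Base64.getDecoder().wrap(
         new BufferedInputStream(new FileInputStream("encoded.txt")));
     OutputStream out = new BufferedOutputStream(new FileOutputStream("decoded.bin"))) {
    byte[] buffer = new byte[8192];
    int n;
    while ((n = in.read(buffer)) != -1) {
        out.write(buffer, 0, n); // only the decoded bytes are held in memory
    }
}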
Perhaps a FileInputStream on the file, reading off fixed length chunks, doing your transformation and writing them to a FileOutputStream?
Perhaps a BufferedReader? Javadoc: http://download-llnw.oracle.com/javase/1.4.2/docs/api/java/io/BufferedReader.html
Use this base64 encoder/decoder, which will wrap your file input stream and handle the decoding on the fly:
InputStream input = new Base64.InputStream(new FileInputStream("in.txt"));
OutputStream output = new FileOutputStream("out.txt");
try {
byte[] buffer = new byte[1024];
int readOffset = 0;
while(input.available() > 0) {
int bytesRead = input.read(buffer, readOffset, buffer.length);
readOffset += bytesRead;
output.write(buffer, 0, bytesRead);
}
} finally {
input.close();
output.close();
}
You can use org.apache.commons.io.FileUtils. This utility class provides other options too, besides what you are looking for. For example:
FileUtils.copyFile(final File srcFile, final File destFile)
FileUtils.copyFile(final File input, final OutputStream output)
FileUtils.copyFileToDirectory(final File srcFile, final File destDir)
And so on. You can also follow this tutorial.
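For example (a minimal sketch with placeholder file names):
import java.io.File;
import org.apache.commons.io.FileUtils;

FileUtils.copyFile(new File("source.dat"), new File("target.dat"));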