I read here that a big difference between Java IO and Java NIO is that in the former you can navigate through the data only after creating a buffer (for example, with a BufferedInputStream object).
In the latter, the data read from a channel is stored directly in a buffer.
Please, can anyone write some code snippets that show how to navigate back and forth with the old IO API, and the same translated to the new NIO API?
Thanks.
Example of skipping 1024 bytes, reading the next 1024, and seeking back to 0:
nio:
int i = 1024;
Path p = Paths.get("./", "file.txt");
try (SeekableByteChannel sbc = Files.newByteChannel(p, StandardOpenOption.READ)) {
    sbc.position(i);                        // skip the first 1024 bytes
    ByteBuffer bf = ByteBuffer.allocate(i);
    sbc.read(bf);                           // read the next 1024 bytes
    byte[] b = bf.array();
    sbc.position(0L);                       // seek back to the start
}
io:
int i = 1024;
File f = new File("./file.txt");
try (BufferedInputStream bis = new BufferedInputStream(new FileInputStream(f))) {
    bis.mark(i * 2);        // remember this position; valid while we read < 2048 bytes
    bis.skip(i);            // skip the first 1024 bytes
    byte[] b = new byte[i];
    bis.read(b);            // read the next 1024 bytes
    bis.reset();            // "seek" back to the mark, i.e. position 0
}
Using Java, I am trying to send some file data over a DatagramSocket. I need to read a file in 1000-byte chunks and send them over as packets. My code:
reads a file into a byte array wrapped in a byte buffer
places the data in a packet and sends it
has the receiver open the packet and re-write the contents to a new file.
I am having a problem with writing the byte array back to a file. Please see my code below.
Client/Sender:
byte[] data = new byte[1000];
ByteBuffer b = ByteBuffer.wrap(data);
DatagramPacket pkt;
File file = new File(sourceFile);
FileInputStream fis = new FileInputStream(file);
BufferedInputStream bis = new BufferedInputStream(fis);
CRC32 crc = new CRC32();
while (true) {
    b.clear();
    b.putLong(0); // I need to put the checksum at the beginning for easy retrieval
    bytesRead = bis.read(data);
    if (bytesRead == -1) { break; }
    crc.reset();
    crc.update(data, 8, data.length - 8);
    long chksum = crc.getValue();
    b.rewind();
    b.putLong(chksum);
    pkt = new DatagramPacket(data, 1000, addr); // addr is valid, works fine
    sk.send(pkt);
}
bis.close();
fis.close();
Server/Receiver:
DatagramSocket sk = new DatagramSocket(port);
File destfile = new File("hello.txt");
FileOutputStream fos = new FileOutputStream(destfile);
BufferedOutputStream bos = new BufferedOutputStream(fos);
PrintStream ps = new PrintStream(fos);
byte[] data = new byte[1000];
DatagramPacket pkt = new DatagramPacket(data, data.length);
ByteBuffer b = ByteBuffer.wrap(data);
CRC32 crc = new CRC32();
while (true) {
    pkt.setLength(data.length);
    sk.receive(pkt);
    b.rewind();
    // compare checksum, print error if checksum is different
    // if checksum is the same:
    bos.write(data); // Where the problem seems to be occurring.
    // send acknowledgement packet.
}
bos.close();
fos.close();
Here, I am mainly having issues with writing the file back. With a small text file that says Hello World!, I get a strange output that says vˇ]rld!. Also, the input file is only 12 bytes, but the file that the receiver creates is 1KB.
I think my issue is in how I'm using the byte buffer - I've written a program that copies files using file streams and buffered streams, which worked well. But I'm confused about how streams work in this sort of situation, and I would really appreciate any help. Thanks!
In the sender's data[] you overwrite the text that was read from the file with the CRC! You have to read the text into a position after the long. When this is corrected in the sender, it works:
//int bytesRead = bis.read(data); --old code
int bytesRead=bis.read(data,8,data.length-8);
Furthermore, you send 1000 bytes, so you will receive 1000 bytes, all of which go into the destfile - that is why a 12-byte input produces a 1KB output file.
BTW: you do not check the CRC in the server... so why send it?
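A minimal sketch of the receiver side under both fixes (it reuses the question's variables and assumes the sender also sets the real packet length, e.g. new DatagramPacket(data, 8 + bytesRead, addr)):
sk.receive(pkt);
int len = pkt.getLength();                      // 8 CRC header bytes + payload
long expected = ByteBuffer.wrap(data, 0, 8).getLong();
crc.reset();
crc.update(data, 8, len - 8);
if (crc.getValue() == expected) {
    bos.write(data, 8, len - 8);                // payload only: no header, no padding
} else {
    System.err.println("checksum mismatch, dropping packet");
    // skip the acknowledgement here so the sender retransmits
}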
We are using Apache Camel for compressing and decompressing our files.
We use the standard .marshal().gzip() and .unmarshall().gzip() APIs.
Our problem is that when we get really large files, say 800MB to more than 1GB in size, our application runs out of memory, since the entire file is loaded into memory for compression and decompression.
Are there any Camel APIs or Java libraries that will help zip/unzip the file without loading the entire file into memory?
There is a similar unanswered question here
Explanation
Use a different approach: Stream the file.
That is, don't load it into memory completely, but read it piece by piece and simultaneously write it back out.
Get an InputStream to the file and wrap a GzipCompressorInputStream around it. Read piece by piece and write to an OutputStream.
The opposite if you want to compress an archive: then you wrap the OutputStream in a GzipCompressorOutputStream.
Code
The example uses Apache Commons Compress but the logic of the code remains the same for all libraries.
Unpacking a gz archive:
Path inputPath = Paths.get("archive.tar.gz");
Path outputPath = Paths.get("archive.tar");
try (InputStream fin = Files.newInputStream(inputPath );
OutputStream out = Files.newOutputStream(outputPath);) {
GZipCompressorInputStream in = new GZipCompressorInputStream(
new BufferedInputStream(fin));
// Read and write byte by byte
final byte[] buffer = new byte[buffersize];
int n = 0;
while (-1 != (n = in.read(buffer))) {
out.write(buffer, 0, n);
}
}
Packing as gz archive:
Path inputPath = Paths.get("archive.tar");
Path outputPath = Paths.get("archive.tar.gz");
try (InputStream in = Files.newInputStream(inputPath);
OutputStream fout = Files.newOutputStream(outputPath);) {
GZipCompressorOutputStream out = new GZipCompressorOutputStream(
new BufferedOutputStream(fout));
// Read and write byte by byte
final byte[] buffer = new byte[buffersize];
int n = 0;
while (-1 != (n = in.read(buffer))) {
out.write(buffer, 0, n);
}
}
You could also wrap a BufferedReader and PrintWriter around the streams if you feel more comfortable with them. They manage the buffering themselves, and you can read and write lines instead of bytes. Note that this only works correctly if the file consists of lines of text and not some other (binary) format.
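If you want to avoid the extra dependency, the same streaming pattern works with the JDK's own java.util.zip classes; here is a minimal sketch of the decompression side (GZIPInputStream and the java.nio.file classes are standard JDK):
try (InputStream in = new GZIPInputStream(new BufferedInputStream(
                Files.newInputStream(Paths.get("archive.tar.gz"))));
        OutputStream out = Files.newOutputStream(Paths.get("archive.tar"))) {
    byte[] buffer = new byte[8192];
    int n;
    while ((n = in.read(buffer)) != -1) {
        out.write(buffer, 0, n);
    }
}
For compression, wrap the output side in a java.util.zip.GZIPOutputStream instead.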
I need to derive an InputStream from another InputStream. For example, InputStream A is 1024 bytes and I need an InputStream B over A from the 150th byte to the end - from a certain offset to a certain end. I searched on Google and Stack Overflow... Is there any solution available in Java?
You can use the method "skip" to skip the first 150 bytes.
Here is an example:
byte[] buf = {1, 2, 3, 4, 5, 6, 7, 8, 9};
InputStream is1 = new ByteArrayInputStream(buf);
long skipped = is1.skip(5);     // skips the bytes 1..5
System.out.println(is1.read()); // prints 6
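If you also need an upper bound (a stream that stops at a given end offset), the JDK has nothing ready-made, but a small helper can do it. A sketch, assuming the slice is small enough to buffer in memory (the helper name slice is mine, not a JDK API):
// Returns a stream over bytes [offset, offset + length) of 'in'
static InputStream slice(InputStream in, long offset, int length) throws IOException {
    long remaining = offset;
    while (remaining > 0) {                 // skip() may skip fewer bytes than asked
        long n = in.skip(remaining);
        if (n <= 0) throw new EOFException("could not skip to offset");
        remaining -= n;
    }
    byte[] window = new byte[length];
    new DataInputStream(in).readFully(window); // throws EOFException if 'in' is too short
    return new ByteArrayInputStream(window);
}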
If you know that you have a FileInputStream, you can use FileChannel.position() to set where in the file that stream will read from.
FileInputStream in = new FileInputStream("whatever");
FileChannel channel = in.getChannel();
channel.position(10);
This will not work with other types of streams.
I need help on my homework; any help will be much appreciated. I can send small files without a problem. But when I try to send, let's say, a 1GB file, the byte array throws an OutOfMemoryError, so I need a better solution to send files from the server to the client. How can I improve this code to send big files? Please help me.
Server Code:
FileInputStream fis = new FileInputStream(file);
byte[] fileByte = new byte[fis.available()]; //This causes the problem.
bytesRead = fis.read(fileByte);
oos = new ObjectOutputStream(sock.getOutputStream());
oos.writeObject(fileByte);
Client Code:
ois = new ObjectInputStream(sock.getInputStream());
byte[] fileBytes = (byte[]) ois.readObject(); // renamed so it doesn't shadow the destination File
fos = new FileOutputStream(file);
fos.write(fileBytes);
Don't read the whole file into memory; use a small buffer and write while you are reading the file:
BufferedOutputStream bos = new BufferedOutputStream(sock.getOutputStream());
File file = new File("asd");
FileInputStream fis = new FileInputStream(file);
BufferedInputStream bis = new BufferedInputStream(fis);
byte[] buffer = new byte[1024 * 1024 * 10];
int n;
while ((n = bis.read(buffer)) != -1) {
    bos.write(buffer, 0, n);
}
bos.flush();
Use the Buffered* classes to optimize reading from and writing to the streams.
Just split the array into smaller chunks so that you don't need to allocate any big array.
For example, you could split the array into 16KB chunks, e.g. new byte[16384], and send them one by one. On the receiving side you would have to wait until a chunk can be fully read, then store it somewhere and start on the next chunk.
But if you are not able to allocate a whole array of the size you need on the server side, you won't be able to store all the data you are going to receive anyway.
You could also compress the data before sending it to save bandwidth (and time); take a look at ZipOutputStream and ZipInputStream.
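For a plain socket stream, java.util.zip.GZIPOutputStream / GZIPInputStream are a simpler fit than the Zip* classes (those model zip archives with entries). A sketch of the sending side under that substitution, reusing sock and file from the code above:
try (BufferedInputStream bis = new BufferedInputStream(new FileInputStream(file));
        GZIPOutputStream gzos = new GZIPOutputStream(sock.getOutputStream())) {
    byte[] buffer = new byte[8192];
    int n;
    while ((n = bis.read(buffer)) != -1) {
        gzos.write(buffer, 0, n);
    }
    gzos.finish(); // write the gzip trailer before the socket closes
}
The receiver wraps sock.getInputStream() in a GZIPInputStream the same way.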
Here's how I solved it:
Client Code:
bis=new BufferedInputStream(sock.getInputStream());
fos = new FileOutputStream(file);
int n;
byte[] buffer = new byte[8192];
while ((n = bis.read(buffer)) > 0) {
    fos.write(buffer, 0, n);
}
Server Code:
bos= new BufferedOutputStream(sock.getOutputStream());
FileInputStream fis = new FileInputStream(file);
BufferedInputStream bis = new BufferedInputStream(fis);
int n;
byte[] buffer = new byte[8192];
while ((n = bis.read(buffer)) > -1) {
    bos.write(buffer, 0, n);
}
Depending on whether or not you have to write the code yourself, there are existing libraries which solve this problem, e.g. rmiio. If you are not using RMI, just plain java serialization, you can use the DirectRemoteInputStream, which is kind of like a Serializable InputStream. (this library also has support for things like auto-magically compressing the data).
Actually, if you are only sending file data, you would be better off ditching the Object streams and using DataInput/DataOutput streams. First write an integer indicating the file length, then copy the bytes directly to the stream. On the receiving side, read the integer file length, then read exactly that many bytes.
When you copy the data between streams, use a small, fixed-size byte[] to move chunks of data between the input and output streams in a loop. There are numerous examples of how to do this correctly available online (e.g. #ErikFWinter's answer).
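A sketch of that length-prefixed exchange; sock and file come from the snippets above, destFile is a hypothetical destination, and a long is used instead of an int so files over 2GB also fit:
// Sender: length header first, then the raw bytes
DataOutputStream out = new DataOutputStream(sock.getOutputStream());
out.writeLong(file.length());
try (BufferedInputStream in = new BufferedInputStream(new FileInputStream(file))) {
    byte[] buf = new byte[8192];
    int n;
    while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);
    }
}
out.flush();

// Receiver: read the header, then exactly that many bytes
DataInputStream in = new DataInputStream(sock.getInputStream());
long remaining = in.readLong();
try (BufferedOutputStream fileOut = new BufferedOutputStream(new FileOutputStream(destFile))) {
    byte[] buf = new byte[8192];
    while (remaining > 0) {
        int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
        if (n == -1) throw new EOFException("stream ended early");
        fileOut.write(buf, 0, n);
        remaining -= n;
    }
}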
I would like to send an image file from a Java server to an Android app using this code:
Server(Java):
File file = new File("./clique.jpg");
FileInputStream stream = new FileInputStream(file);
DataOutputStream writer = new DataOutputStream(socket.getOutputStream());
byte[] contextB = new byte[4096];
int n;
int i = 0;
while ((n = stream.read(contextB)) != -1) {
    writer.write(contextB, 0, n);
    writer.flush();
    System.out.println(n);
    i += n;
}
writer.flush();
stream.close();
Android app:
DataInputStream reader = new DataInputStream(socket.getInputStream());
byte[] buffer = new byte[4096];
ByteArrayOutputStream content = new ByteArrayOutputStream();
int n;
int i = 0;
reader = new DataInputStream(socket.getInputStream());
while ( (n=reader.read(buffer)) != null) {
    content.write(buffer, 0, n);
    content.flush();
}
Utility.CreateImageFile(content.toByteArray());
What I noticed is that in the Android app, n (the number of bytes read) is not 4096, even though the server sends blocks of 4096 bytes. Also, I never get n = -1 (the end of stream): the read blocks until I close the app, and only then do I get n = -1.
Regarding the number of bytes you read at a time: it has nothing to do with the number of bytes you write. It depends very much on the network conditions and will vary with every chunk (you basically get as many bytes as were transmitted in the short period between your reads).
Regarding the end of stream: in your server code you have forgotten to close the output stream (you only close the input stream). You should also close the writer, which in turn will close the underlying output stream.
Two comments:
1) I would really recommend using Buffered readers/writers wrapping the writers/readers - the resulting code will be nicer and you will not have to create and manage buffers yourself.
2) Use try {} finally and close your streams in the finally clause - this is the best practice that makes sure you close the streams and free resources even if a problem occurs while reading or writing.
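A sketch of point 2 applied to the server code from the question (try-with-resources is the modern shorthand for closing in a finally clause):
try (FileInputStream stream = new FileInputStream(new File("./clique.jpg"));
        DataOutputStream writer = new DataOutputStream(socket.getOutputStream())) {
    byte[] contextB = new byte[4096];
    int n;
    while ((n = stream.read(contextB)) != -1) {
        writer.write(contextB, 0, n);
    }
} // both streams are closed here, even if an exception was thrown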
You have a problem in your Android code:
while ( (n=reader.read(buffer)) != null) {
n is an int, so it can never be null (that comparison will not even compile) - test for -1 instead.
Also, use writer.close() instead of writer.flush() after your loop on the server, so the client actually sees the end of the stream.
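For reference, a sketch of the corrected client loop (read() returns -1 at end of stream, never null):
int n;
while ((n = reader.read(buffer)) != -1) {
    content.write(buffer, 0, n);
}
Utility.CreateImageFile(content.toByteArray());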