Transferring big data over a Socket - Java

I am currently trying to send a large message over a socket connection. The message is about 1.3 MB. Here is my code:
public void send(String message) throws IOException {
    byte[] bytes = message.getBytes(encoding);
    ByteBuffer byteBuffer = ByteBuffer.wrap(bytes);
    while (socketChannel.write(byteBuffer) > 0) {
    }
}
On the other side of the connection is a server, which just prints out the data it reads. It always stops reading after about 260 kB. It seems to me that some buffer is full?
What can I do to make this socket connection work?

while (socketChannel.write(byteBuffer) > 0) {
}
If you aren't in blocking mode, this loop will terminate as soon as the local socket send buffer fills up, which causes write() to return zero.
A simple but poor quality fix would be to change the loop condition from > to >=. However if write() returns zero you should really be using a Selector and OP_WRITE to detect when the channel becomes writable again.
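A rough sketch of that Selector-based approach, assuming a non-blocking channel that is already registered with a selector (the class and field names here are purely illustrative):
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;
import java.util.ArrayDeque;
import java.util.Queue;
class NonBlockingSender {
    private final Queue<ByteBuffer> pendingWrites = new ArrayDeque<>();
    // Queue the data and ask the selector to wake us when the channel is writable.
    void send(ByteBuffer buffer, SelectionKey key) {
        pendingWrites.add(buffer);
        key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
    }
    // Called from the selector loop when the key reports OP_WRITE.
    void handleWritable(SelectionKey key) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        while (!pendingWrites.isEmpty()) {
            ByteBuffer buffer = pendingWrites.peek();
            channel.write(buffer);
            if (buffer.hasRemaining()) {
                return; // send buffer is full again; stay registered for OP_WRITE
            }
            pendingWrites.remove();
        }
        // Everything has been flushed; stop selecting for OP_WRITE.
        key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
    }
}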

I think it would be better to allocate enough space for your ByteBuffer and flip it before you send it, because you want to read from it from now on. Maybe this works for you.
final byte[] bytesToSend = message.getBytes(encoding);
ByteBuffer buffer = ByteBuffer.allocate(bytesToSend.length);
buffer.clear();
buffer.put(bytesToSend);
buffer.flip();
while (buffer.hasRemaining()) {
    try {
        socketChannel.write(buffer);
    } catch (IOException e) {
        e.printStackTrace();
    }
}

Related

DataInputStream read not blocking

First off, forgive me if I am mistaken about how blocking works. To my understanding, blocking will pause the thread until it is ready; for example, when reading user input the program will wait until the user hits return.
My problem is that instead of waiting for data to become available it reads bytes with the value 0. Is there a way to block until data becomes available?
The method readBytes is called in a loop.
public byte[] readBytes() {
    try {
        // read the head int that will be 4 bytes telling the number of bytes that follow containing data
        byte[] rawLen = new byte[4];
        socketReader.read(rawLen);
        ByteBuffer bb = ByteBuffer.wrap(rawLen);
        int len = bb.getInt();
        byte[] data = new byte[len];
        if (len > 0) {
            socketReader.readFully(data);
        }
        return data;
    } catch (Exception e) {
        e.printStackTrace();
        logError("Failed to read data: " + socket.toString());
        return null;
    }
}
If read() returned -1, the peer has disconnected. You aren't handling that case. If you detect end of stream you must close the connection and stop reading. At present you have no way of doing so. You need to reconsider your method signature.
You should use readInt() instead of those four lines of code that read the length. At present you are assuming you have read four bytes without actually checking. readInt() will check for you.
This way you will also never get out of sync with the sender, which at present is a serious risk.
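A minimal sketch of that, assuming socketReader is the DataInputStream from the question (readInt() and readFully() throw EOFException if the peer disconnects mid-message, which the caller can use to close the connection):
public byte[] readBytes() throws IOException {
    int len = socketReader.readInt();    // reads exactly 4 bytes or throws
    byte[] data = new byte[len];
    socketReader.readFully(data);        // blocks until exactly len bytes have arrived
    return data;
}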

Incomplete File Copy Java NIO

I'm having a problem with reading my file. I'm quite new to NIO too. The actual size of the file I want to send to the server is almost 900 MB, but only 3 MB is received.
The server's side code for reading:
private void read(SelectionKey key) throws IOException {
    SocketChannel socket = (SocketChannel) key.channel();
    RandomAccessFile aFile = null;
    ByteBuffer buffer = ByteBuffer.allocate(300000000);
    try {
        aFile = new RandomAccessFile("D:/test2/test.rar", "rw");
        FileChannel inChannel = aFile.getChannel();
        while (socket.read(buffer) > 0) {
            buffer.flip();
            inChannel.write(buffer);
            buffer.compact();
        }
        System.out.println("End of file reached..");
    } catch (Exception e) {
        e.printStackTrace();
    }
}
This is my code for the write method of the client side:
private void write(SelectionKey key) throws IOException {
    SocketChannel socket = (SocketChannel) key.channel();
    RandomAccessFile aFile = null;
    try {
        File f = new File("D:/test.rar");
        aFile = new RandomAccessFile(f, "r");
        ByteBuffer buffer = ByteBuffer.allocate(300000000);
        FileChannel inChannel = aFile.getChannel();
        while (inChannel.read(buffer) > 0) {
            buffer.flip();
            socket.write(buffer);
            buffer.compact();
        }
        aFile.close();
        inChannel.close();
        key.interestOps(SelectionKey.OP_READ);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
You're opening a new file every time the socket channel becomes readable. For every TCP segment that arrives, you're recreating the target file, and therefore throwing away whatever was received before.
The simple fix to that would be to open the file for append on every OP_READ, but it would remain ridiculously inefficient. You should open the target file as soon as you know what it is, and close it when you read end of stream from the sender, or when you've read the entire contents if that isn't signalled by end of stream. You haven't disclosed your application protocol, so I can't be more specific.
read() returns zero when there is no data available to be read without blocking. You're treating that as an end of file. It isn't.
The canonical way to write between channels is as follows:
while (in.read(buffer) > 0 || buffer.position() > 0) {
    buffer.flip();
    out.write(buffer);
    buffer.compact();
}
However if the target is a non-blocking socket channel this gets considerably more complex: you have to manipulate whether you're selected for OP_WRITE or not depending on whether or not the last write() returned zero. You will find this explained in a large number of posts here, many by me.
I have never seen any cogent reason for non-blocking I/O on the client side, unless it connects to multiple servers (a web crawler, for example). I would use blocking mode or java.net.Socket at the client, which would obviate the write complexity referred to above.
NB you don't need to close both the RandomAccessFile and the FileChannel that was derived from it. Either will do.
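For example, a blocking-mode client could send the file with a plain loop like the following (a sketch only; the path is the one from the question, and socketChannel is assumed to be a connected, blocking SocketChannel):
try (RandomAccessFile file = new RandomAccessFile("D:/test.rar", "r");
     FileChannel fileChannel = file.getChannel()) {
    long position = 0;
    long size = fileChannel.size();
    while (position < size) {
        // transferTo() may transfer fewer bytes than requested, so loop until done.
        position += fileChannel.transferTo(position, size - position, socketChannel);
    }
}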

Socket InputStream doesn't return -1 at the end of stream

This is a code snippet where the problem is happening:
public static byte[] copyLargeExt(InputStream input) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] buffer = new byte[1024 * 8];
    int n = 0;
    while (-1 != (n = input.read(buffer))) {
        baos.write(buffer, 0, n);
        // i just append this pattern ({###END###}) to force the break
        /*if (baos.toString(UTF8.name()).endsWith("{###END###}")) {
            break;
        }*/
    }
    return baos.toByteArray();
}
Can someone help me?
The code in the question reads to the end of the socket stream. If the method is blocked in the read call, that can only mean that the other end has not closed its corresponding output stream yet.
You have two choices:
Change the other end to close its outputstream so that this code will see an EOF.
Change the "protocol" so that this code knows how many bytes of data to expect ... and reads exactly that number of bytes.
In Java, if the server only calls Socket.close() without calling OutputStream.close() first, the client's InputStream.read() will not return -1.
Instead, the client's InputStream.read() will block until it times out.
So the best practice on the server is:
OutputStream.close();
InputStream.close();
Socket.close();

Java -- How to read an unknown number of bytes from an inputStream (socket/socketServer)?

Looking to read in some bytes over a socket using an inputStream. The bytes sent by the server may be of variable quantity, and the client doesn't know in advance the length of the byte array. How may this be accomplished?
byte b[];
sock.getInputStream().read(b);
This causes a 'might not be initialized' error from NetBeans. Help.
You need to expand the buffer as needed, by reading in chunks of bytes, 1024 at a time, as in this example code I wrote some time ago:
byte[] resultBuff = new byte[0];
byte[] buff = new byte[1024];
int k = -1;
while ((k = sock.getInputStream().read(buff, 0, buff.length)) > -1) {
    byte[] tbuff = new byte[resultBuff.length + k]; // temp buffer size = bytes already read + bytes last read
    System.arraycopy(resultBuff, 0, tbuff, 0, resultBuff.length); // copy previous bytes
    System.arraycopy(buff, 0, tbuff, resultBuff.length, k); // copy current lot
    resultBuff = tbuff; // call the temp buffer as your result buff
}
System.out.println(resultBuff.length + " bytes read.");
return resultBuff;
Assuming the sender closes the stream at the end of the data:
ByteArrayOutputStream baos = new ByteArrayOutputStream();
byte[] buf = new byte[4096];
while (true) {
    int n = is.read(buf);
    if (n < 0) break;
    baos.write(buf, 0, n);
}
byte[] data = baos.toByteArray();
Read an int, which is the size of the next segment of data being received. Create a buffer with that size, or use a roomy pre-existing buffer. Read into the buffer, making sure it is limited to the size you just read. Rinse and repeat :)
If you really don't know the size in advance, as you said, read into an expanding ByteArrayOutputStream as the other answers have mentioned. However, the size-prefix method really is the most reliable.
Without re-inventing the wheel, using Apache Commons:
IOUtils.toByteArray(inputStream);
For example, complete code with error handling:
public static byte[] readInputStreamToByteArray(InputStream inputStream) {
    if (inputStream == null) {
        // normally, the caller should check for null after getting the InputStream object from a resource
        throw new FileProcessingException("Cannot read from InputStream that is NULL. The resource requested by the caller may not exist or was not looked up correctly.");
    }
    try {
        return IOUtils.toByteArray(inputStream);
    } catch (IOException e) {
        throw new FileProcessingException("Error reading input stream.", e);
    } finally {
        closeStream(inputStream);
    }
}
private static void closeStream(Closeable closeable) {
    try {
        if (closeable != null) {
            closeable.close();
        }
    } catch (Exception e) {
        throw new FileProcessingException("IO Error closing a stream.", e);
    }
}
where FileProcessingException is your app-specific, meaningful runtime exception that will travel uninterrupted to your proper handler without polluting the code in between.
The simple answer is:
byte b[] = new byte[BIG_ENOUGH];
int nosRead = sock.getInputStream().read(b);
where BIG_ENOUGH is big enough.
But in general there is a big problem with this. A single read call is not guaranteed to return all that the other end has written.
If the nosRead value is BIG_ENOUGH, your application has no way of knowing for sure whether there are more bytes to come; the other end may have sent exactly BIG_ENOUGH bytes ... or more than BIG_ENOUGH bytes. In the former case, your application will block (forever) if you try to read. In the latter case, your application has to do (at least) another read to get the rest of the data.
If the nosRead value is less than BIG_ENOUGH, your application still doesn't know. It might have received everything there is, part of the data may have been delayed (due to network packet fragmentation, network packet loss, network partition, etc), or the other end might have blocked or crashed part way through sending the data.
The best answer is that EITHER your application needs to know beforehand how many bytes to expect, OR the application protocol needs to somehow tell the application how many bytes to expect or when all bytes have been sent.
Possible approaches are:
the application protocol uses fixed message sizes (not applicable to your example)
the application protocol message sizes are specified in message headers
the application protocol uses end-of-message markers
the application protocol is not message based, and the other end closes the connection to say that is the end.
Without one of these strategies, your application is left to guess, and is liable to get it wrong occasionally.
Then you use multiple read calls and (maybe) multiple buffers.
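For example, if the protocol tells you the byte count up front, the read loop looks like this (a sketch only, with expected and sock as assumed names; DataInputStream.readFully() does the same job for you):
byte[] data = new byte[expected];
int offset = 0;
while (offset < expected) {
    int n = sock.getInputStream().read(data, offset, expected - offset);
    if (n < 0) {
        throw new EOFException("Stream ended after " + offset + " of " + expected + " bytes");
    }
    offset += n;
}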
Stream all input data into an output stream. Here is a working example:
InputStream inputStream = null;
byte[] tempStorage = new byte[1024]; // try to read 1 KB at a time
int bLength;
try {
    ByteArrayOutputStream outputByteArrayStream = new ByteArrayOutputStream();
    if (fileName.startsWith("http"))
        inputStream = new URL(fileName).openStream();
    else
        inputStream = new FileInputStream(fileName);
    while ((bLength = inputStream.read(tempStorage)) != -1) {
        outputByteArrayStream.write(tempStorage, 0, bLength);
    }
    outputByteArrayStream.flush();
    // Here is the byte array at the end
    byte[] finalByteArray = outputByteArrayStream.toByteArray();
    outputByteArrayStream.close();
    inputStream.close();
} catch (Exception e) {
    e.printStackTrace();
    if (inputStream != null) inputStream.close();
}
Either:
Have the sender close the socket after transferring the bytes. Then at the receiver just keep reading until EOS.
Have the sender prefix a length word as per Chris's suggestion, then read that many bytes.
Use a self-describing protocol such as XML, Serialization, ...
Use BufferedInputStream, and use the available() method, which returns the number of bytes available for reading, and then construct a byte[] with that size. Problem solved. :)
BufferedInputStream buf = new BufferedInputStream(is);
int size = buf.available();
Here is a simpler example using ByteArrayOutputStream...
socketInputStream = socket.getInputStream();
int expectedDataLength = 128; //todo - set accordingly/experiment. Does not have to be precise value.
ByteArrayOutputStream baos = new ByteArrayOutputStream(expectedDataLength);
byte[] chunk = new byte[expectedDataLength];
int numBytesJustRead;
while ((numBytesJustRead = socketInputStream.read(chunk)) != -1) {
    baos.write(chunk, 0, numBytesJustRead);
}
return baos.toString("UTF-8");
However, if the server does not return a -1, you will need to detect the end of the data some other way - e.g., maybe the returned content always ends with a certain marker, or you could possibly solve it using socket.setSoTimeout(). (Mentioning this as it seems to be a common problem.)
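A rough sketch of the setSoTimeout() approach, reusing the variable names from the snippet above (the 5-second value is arbitrary):
socket.setSoTimeout(5000); // read() will now throw SocketTimeoutException after 5 s of silence
try {
    int n;
    while ((n = socketInputStream.read(chunk)) != -1) {
        baos.write(chunk, 0, n);
    }
} catch (SocketTimeoutException e) {
    // No data arrived within the timeout; assume the response is complete.
}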
This is both a late answer and self-advertising, but anyone checking out this question may want to take a look here:
https://github.com/GregoryConrad/SmartSocket
This question is 7 years old, but I had a similar problem while making an NIO- and OIO-compatible system (client and server can be whatever they want, OIO or NIO).
This was quite the challenge because of the blocking InputStreams.
I found a way which makes it possible, and I want to post it to help people with similar problems.
Reading a byte array of dynamic size is done here with the DataInputStream, which can simply be wrapped around the socket InputStream. Also, I do not want to introduce a specific communication protocol (like first sending the number of bytes that will be sent), because I want to keep this as vanilla as possible. First off, I have a simple utility Buffer class, which looks like this:
import java.util.ArrayList;
import java.util.List;
public class Buffer {
    private byte[] core;
    private int capacity;
    public Buffer(int size) {
        this.capacity = size;
        clear();
    }
    public List<Byte> list() {
        final List<Byte> result = new ArrayList<>();
        for (byte b : core) {
            result.add(b);
        }
        return result;
    }
    public void reallocate(int capacity) {
        this.capacity = capacity;
    }
    public void teardown() {
        this.core = null;
    }
    public void clear() {
        core = new byte[capacity];
    }
    public byte[] array() {
        return core;
    }
}
This class only exists because of the dumb way byte <=> Byte autoboxing in Java works with this List. It is not really needed at all in this example, but I did not want to leave anything out of this explanation.
Next up, the two simple core methods. In these, a StringBuilder is used as a "callback". It will be filled with the result that has been read, and the number of bytes read will be returned. This might of course be done differently.
private int readNext(StringBuilder stringBuilder, Buffer buffer) throws IOException {
    // Attempt to read up to the buffer's size
    int read = in.read(buffer.array());
    // If EOF is reached (-1 read),
    // we disconnect, because the
    // other end disconnected.
    if (read == -1) {
        disconnect();
        return -1;
    }
    // Add the read byte[] as
    // a String to the stringBuilder.
    stringBuilder.append(new String(buffer.array()).trim());
    buffer.clear();
    return read;
}
private Optional<String> readBlocking() throws IOException {
    final Buffer buffer = new Buffer(256);
    final StringBuilder stringBuilder = new StringBuilder();
    // This call blocks. Therefore,
    // if we continue past this point,
    // we WILL have some sort of
    // result. This might be -1, which
    // means EOF (disconnect).
    if (readNext(stringBuilder, buffer) == -1) {
        return Optional.empty();
    }
    while (in.available() > 0) {
        buffer.reallocate(in.available());
        if (readNext(stringBuilder, buffer) == -1) {
            return Optional.empty();
        }
    }
    buffer.teardown();
    return Optional.of(stringBuilder.toString());
}
The first method, readNext, will fill the buffer with bytes from the DataInputStream and return the number of bytes read this way.
In the second method, readBlocking, I used the blocking nature so as not to worry about consumer-producer problems. readBlocking will simply block until a new byte array is received. Before we call this blocking method, we allocate a buffer size. Note that I called reallocate after the first read (inside the while loop). This is not needed; you can safely delete this line and the code will still work. I did it because of the uniqueness of my problem.
The two things I did not explain in more detail are:
1. in (the DataInputStream, and the only tersely named variable here, sorry for that)
2. disconnect (your disconnect routine)
All in all, you can now use it this way:
// The in has to be an attribute, or an parameter to the readBlocking method
DataInputStream in = new DataInputStream(socket.getInputStream());
final Optional<String> rawDataOptional = readBlocking();
rawDataOptional.ifPresent(string -> threadPool.execute(() -> handle(string)));
This will provide you with a way of reading byte arrays of any shape or form over a socket (or any InputStream, really). Hope this helps!

Connecting an input stream to an outputstream

Update in Java 9: https://docs.oracle.com/javase/9/docs/api/java/io/InputStream.html#transferTo-java.io.OutputStream-
I saw some similar, but not-quite-what-i-need threads.
I have a server, which will basically take input from a client, client A, and forward it, byte for byte, to another client, client B.
I'd like to connect my inputstream of client A with my output stream of client B. Is that possible? What are ways to do that?
Also, these clients are sending each other messages, which are somewhat time sensitive, so buffering won't do. I do not want a buffer of, say, 500 bytes where a client sends 499 bytes and my server then holds off on forwarding them because it hasn't received the last byte to fill the buffer.
Right now, I am parsing each message to find its length, then reading length bytes, then forwarding them. I figured (and tested) this would be better than reading a byte and forwarding a byte over and over because that would be very slow. I also did not want to use a buffer or a timer for the reason I stated in my last paragraph — I do not want messages waiting a really long time to get through simply because the buffer isn't full.
What's a good way to do this?
Just because you use a buffer doesn't mean the stream has to fill that buffer. In other words, this should be okay:
public static void copyStream(InputStream input, OutputStream output)
        throws IOException
{
    byte[] buffer = new byte[1024]; // Adjust if you want
    int bytesRead;
    while ((bytesRead = input.read(buffer)) != -1)
    {
        output.write(buffer, 0, bytesRead);
    }
}
That should work fine - basically the read call will block until there's some data available, but it won't wait until it's all available to fill the buffer. (I suppose it could, and I believe FileInputStream usually will fill the buffer, but a stream attached to a socket is more likely to give you the data immediately.)
I think it's worth at least trying this simple solution first.
How about just using
void feedInputToOutput(InputStream in, OutputStream out) {
    IOUtils.copy(in, out);
}
and be done with it?
It comes from the Apache Commons IO library, which is used by a huge number of projects, so you probably already have the jar in your classpath.
JDK 9 has added InputStream#transferTo(OutputStream out) for this functionality.
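A minimal usage sketch, with in and out being the InputStream and OutputStream to connect:
// Java 9+: copies everything from in to out and returns the number of bytes copied.
long copied = in.transferTo(out);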
For completeness, Guava also has a handy utility for this:
ByteStreams.copy(input, output);
You can use a circular buffer:
Code
// buffer all data in a circular buffer of infinite size
CircularByteBuffer cbb = new CircularByteBuffer(CircularByteBuffer.INFINITE_SIZE);
class1.putDataOnOutputStream(cbb.getOutputStream());
class2.processDataFromInputStream(cbb.getInputStream());
Maven dependency
<dependency>
    <groupId>org.ostermiller</groupId>
    <artifactId>utils</artifactId>
    <version>1.07.00</version>
</dependency>
More details
http://ostermiller.org/utils/CircularBuffer.html
An asynchronous way to achieve it:
void inputStreamToOutputStream(final InputStream inputStream, final OutputStream out) {
    Thread t = new Thread(new Runnable() {
        public void run() {
            try {
                int d;
                while ((d = inputStream.read()) != -1) {
                    out.write(d);
                }
            } catch (IOException ex) {
                // TODO make a callback on exception.
            }
        }
    });
    t.setDaemon(true);
    t.start();
}
BUFFER_SIZE is the size of the chunks to read in. It should be > 1 KB and < 10 MB.
private static final int BUFFER_SIZE = 2 * 1024 * 1024;
private void copy(InputStream input, OutputStream output) throws IOException {
    try {
        byte[] buffer = new byte[BUFFER_SIZE];
        int bytesRead = input.read(buffer);
        while (bytesRead != -1) {
            output.write(buffer, 0, bytesRead);
            bytesRead = input.read(buffer);
        }
        // If needed, close streams.
    } finally {
        input.close();
        output.close();
    }
}
Use org.apache.commons.io.IOUtils
InputStream inStream = new ...
OutputStream outStream = new ...
IOUtils.copy(inStream, outStream);
or copyLarge for size >2GB
This is a Scala version that is clean and fast (no stackoverflow):
import scala.annotation.tailrec
import java.io._
implicit class InputStreamOps(in: InputStream) {
  def >(out: OutputStream): Unit = pipeTo(out)
  def pipeTo(out: OutputStream, bufferSize: Int = 1 << 10): Unit = pipeTo(out, Array.ofDim[Byte](bufferSize))
  @tailrec final def pipeTo(out: OutputStream, buffer: Array[Byte]): Unit = in.read(buffer) match {
    case n if n > 0 =>
      out.write(buffer, 0, n)
      pipeTo(out, buffer)
    case _ =>
      in.close()
      out.close()
  }
}
This enables you to use the > symbol, e.g. inputstream > outputstream, and also to pass in custom buffers/sizes.
In case you are into functional programming, here is a function written in Scala showing how you could copy an input stream to an output stream using only vals (and not vars).
def copyInputToOutputFunctional(inputStream: InputStream, outputStream: OutputStream, bufferSize: Int) {
  val buffer = new Array[Byte](bufferSize);
  def recurse() {
    val len = inputStream.read(buffer);
    if (len > 0) {
      outputStream.write(buffer.take(len));
      recurse();
    }
  }
  recurse();
}
Note that this is not recommended for use in a Java application with little memory available, because with a recursive function you could easily get a stack overflow error.
