I am writing some Java TCP/IP networking code (client-server) in which I have to deal with scenarios where sends are much faster than receives, so the send operations at one end block (because the send and receive buffers fill up). In order to design my code, I first wanted to reproduce these kinds of situations and see how the client and server behave under varying load. But I am not able to set the parameters appropriately to achieve this back pressure. I tried setting Socket.setSendBufferSize(int size) and Socket.setReceiveBufferSize(int size) to small values, hoping they would fill up quickly, but I can see that the send operation completes without waiting for the client to consume the data already written (which means the small send and receive buffer sizes have no effect).
Another approach I took was to use Netty and set ServerBootstrap.setOption("child.sendBufferSize", 256);, but even this is of not much use. Can anyone help me understand what I am doing wrong?
The buffers have an OS-dependent minimum size; this is often around 8 KB.
public static void main(String... args) throws IOException, InterruptedException {
    ServerSocketChannel ssc = ServerSocketChannel.open();
    ssc.bind(new InetSocketAddress(0)); // open on a random port
    InetSocketAddress remote = new InetSocketAddress("localhost", ssc.socket().getLocalPort());
    SocketChannel sc = SocketChannel.open(remote);
    configure(sc);
    SocketChannel accept = ssc.accept();
    configure(accept);

    ByteBuffer bb = ByteBuffer.allocateDirect(16 * 1024 * 1024);
    // write as much as you can
    while (sc.write(bb) > 0)
        Thread.sleep(1);
    System.out.println("The socket write wrote " + bb.position() + " bytes.");
}

private static void configure(SocketChannel socketChannel) throws IOException {
    socketChannel.configureBlocking(false);
    socketChannel.socket().setSendBufferSize(8);
    socketChannel.socket().setReceiveBufferSize(8);
}
On my machine this prints

The socket write wrote 32768 bytes.

This is the sum of the send and receive buffers, but I suspect they are both 16 KB.
I think Channel.setReadable is what you need. setReadable tells Netty to temporarily pause reading data from the system socket buffer; when that buffer is full, the other end has to wait.
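For example, a minimal sketch assuming the Netty 3.x API (the handler class, queue, and its bound are hypothetical, not from the question):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

// Hypothetical handler: stop reading when the application falls behind,
// so the kernel buffers fill up and the sender eventually blocks.
public class BackPressureHandler extends SimpleChannelUpstreamHandler {
    private final BlockingQueue<Object> pending = new LinkedBlockingQueue<Object>(1024);

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        if (!pending.offer(e.getMessage())) {
            ctx.getChannel().setReadable(false); // pause reads; back pressure propagates
            pending.put(e.getMessage());         // block until a consumer frees space
            ctx.getChannel().setReadable(true);  // resume once there is room again
        }
    }
}

Note that blocking inside a handler like this stalls the I/O thread; it is only meant to show how setReadable() lets the kernel buffers do the throttling.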
Related
For my networked Java application, I wrote a thread-safe, non-blocking I/O wrapper over the system-level DatagramSocket. However, I'm confused about something: what happens if a new packet comes in while callback.onDataReceived() is executing? Is the packet dropped? Is it added to an OS-level queue and received on the next iteration of the loop? If so, does that queue have a maximum size?
/*
 * Receiving thread
 */
rxThread = new Thread(new Runnable()
{
    @Override
    public void run()
    {
        byte[] incomingBuffer = new byte[DataConstants.NUM_BYTES_IN_DATAGRAM_PACKET_BUF];
        while (!Thread.currentThread().isInterrupted())
        {
            DatagramPacket incomingPacket = new DatagramPacket(incomingBuffer, incomingBuffer.length);
            try
            {
                basicSocket.receive(incomingPacket);
                callback.onDataReceived(
                        DatatypeUtil.getNBytes(incomingPacket.getData(), incomingPacket.getLength()),
                        incomingPacket.getAddress());
            }
            catch (IOException e)
            {
                e.printStackTrace();
            }
        }
    }
});
You are correct about the OS-level buffering that occurs; you can manipulate the size of the buffer using the API methods DatagramSocket.setReceiveBufferSize() and DatagramSocket.getReceiveBufferSize().
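For example (the port is arbitrary, and the OS may silently cap the requested size):

import java.net.DatagramSocket;
import java.net.SocketException;

public class BufferSizeCheck {
    public static void main(String[] args) throws SocketException {
        DatagramSocket socket = new DatagramSocket(9999); // arbitrary port
        socket.setReceiveBufferSize(4 * 1024 * 1024);     // request 4 MB from the OS
        // the OS may grant less than requested, so check what you actually got
        System.out.println("Effective receive buffer: " + socket.getReceiveBufferSize());
    }
}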
But it is also possible to lose packets: UDP datagrams may be dropped if the buffer overflows or if they are lost in transit. After all, UDP is a connectionless protocol. If you need all your packets to arrive safe and intact, you could switch over to TCP.

Coming back to your question: if callback.onDataReceived() takes a little too long to execute while the sender keeps pumping packets and your UDP buffers fill up, then you may start losing packets.
My clients send UDP packets at a high rate. I'm sure that my Java application layer doesn't receive all the UDP packets the clients send, because the packet counts in Wireshark and in my Java app don't match. Wireshark sees more UDP packets, so I'm sure the packets weren't lost in the network.

The code is here: one thread receives packets and offers them to a LinkedBlockingQueue; another thread takes packets from the LinkedBlockingQueue and calls onNext on an RxJava subject.
socket = new DatagramSocket(this.port);
socket.setBroadcast(true);
socket.setReceiveBufferSize(2 * 1024 * 1024);

// thread-1
while (true) {
    byte[] bytes = new byte[532];
    DatagramPacket packet = new DatagramPacket(bytes, bytes.length);
    try {
        this.socket.receive(packet);
        queue.offer(
                new UdpPacket(
                        packet.getPort(), packet.getAddress().getHostAddress(), packet.getData()));
    } catch (IOException e) {
        e.printStackTrace();
    }
}

// thread-2
UdpPacket packet;
while ((packet = queue.take()) != null) {
    this.receiveMessageSubject.onNext(packet);
}
Host OS: Ubuntu 18.04
It is very difficult to give a straight answer, but from my experience with UDP message processing in Java, how you process the messages really matters for performance, especially with large volumes of data.

So here are some things I would consider:

1) You are correct to process UDP messages on a different queue. But the queue has a limited size. Do you manage to process messages fast enough? Otherwise the queue fills up and you end up blocking the receive loop. Some simple logging there could tell you whether this is the case (see the sketch after this list). Putting packets on a queue where they can be popped off in a different step is great, but you also need to make sure the processing side is fast enough that the queue does not fill up.

2) Are all your datagrams less than 532 bytes? Some loss may occur due to larger messages that don't fit in the buffer.
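For point 1, a minimal sketch of that logging/drop-counting idea (the class, queue bound, and names are hypothetical, not from the question):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// Use a bounded queue and count drops so you can see whether the
// consumer thread is keeping up with the receive thread.
public class QueueMonitor {
    private static final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<>(100_000);
    private static final AtomicLong dropped = new AtomicLong();

    static void onPacket(byte[] payload) {
        if (!queue.offer(payload)) {      // never block the receive loop
            dropped.incrementAndGet();    // a growing count means the consumer is too slow
        }
    }

    public static void main(String[] args) {
        // simulate a producer outpacing a (here: absent) consumer
        for (int i = 0; i < 1_000_000; i++)
            onPacket(new byte[532]);
        System.out.println("dropped=" + dropped.get() + " queued=" + queue.size());
    }
}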
Hope this helps,
I had a similar issue to this recently in a different language. I'm unsure if it works the same way in Java, but this may be helpful to you.

As data packets come into the socket, they are buffered. You have set your buffer size, but you are still only reading a single datagram per iteration, even though the buffer could be holding more. Because you process one datagram at a time, the buffer keeps filling, and once it is full, data can be lost because no more datagrams can be stored.
I checked the documentation for DatagramSocket.receive(), and it only says: "Receives a datagram packet from this socket."

I'm unsure of the exact functions you would need to call in Java, but here's a little snippet of what I am using.
while (!m_server->BufferEmpty()) {
    std::shared_ptr<Stream> inStream = std::make_shared<Stream>();
    std::vector<unsigned char>& buffer = inStream->GetBuffer();

    boost::asio::ip::udp::endpoint senderEndpoint = m_server->receive(boost::asio::buffer(buffer),
        boost::posix_time::milliseconds(-1), ec);

    if (ec)
    {
        std::cout << "Receive error: " << ec.message() << "\n";
    }
    else
    {
        std::unique_ptr<IPacketIn> incomingPacket = std::make_unique<IPacketIn>();
        incomingPacket->ReadHeader(inStream);
        m_packetProcessor->ProcessPacket(incomingPacket, senderEndpoint);
        incomingPacket.reset();
    }
    ++packetsRead;
    inStream.reset();
}
This basically says: while the socket has any data in its buffer, keep reading datagrams until the buffer is empty.
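In Java, a rough equivalent (a sketch, not the asker's code) would use a non-blocking DatagramChannel, whose receive() returns null once the OS buffer is drained:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

public class DrainingReceiver {
    public static void main(String[] args) throws Exception {
        DatagramChannel ch = DatagramChannel.open();
        ch.bind(new InetSocketAddress(5555)); // port is an assumption
        ch.configureBlocking(false);
        ByteBuffer buf = ByteBuffer.allocate(532);
        while (true) {
            buf.clear();
            if (ch.receive(buf) == null) {    // OS buffer drained for now
                Thread.sleep(1);              // or block on a Selector instead
                continue;
            }
            buf.flip();
            // hand the datagram off for processing here
        }
    }
}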
I'm unsure how the LinkedBlockingQueue works, but it could also be causing a bit of a problem if both threads are trying to access it at the same time: your UDP reading thread could be blocked on the queue for some time, and packets arriving during that window could be lost.
Awareness of the fact that the TCP checksum is actually a very weak checksum prompted me to include an additional checksum (SHA-256) in each data block to verify the integrity of the data on the server and, if the block is corrupted, to request it again. But adding the ACK greatly reduces the data transfer rate. In my case (the data is transmitted over WiFi) the speed dropped from ~90 Mbit/s to ~12 Mbit/s.
Client:
SocketChannel socketChannel = SocketChannel.open(new InetSocketAddress("192.168.31.30", 3333));
ByteBuffer byteBufferData = ByteBuffer.allocateDirect(1024 * 8);
ByteBuffer byteBufferACK = ByteBuffer.allocateDirect(1);

for (int i = 0; i < 1024; i++) {
    // write data (payload + checksum (SHA-256))
    socketChannel.write(byteBufferData);
    byteBufferData.clear();
    // read ACK
    socketChannel.read(byteBufferACK);
    byteBufferACK.clear();
    // if (byteBufferACK.get() == XXX)
    //     ... retransmit byteBufferData
}
Server:
ServerSocketChannel serverSocketChannel = ServerSocketChannel.open();
serverSocketChannel.socket().bind(new InetSocketAddress(3333));
SocketChannel socketChannel = serverSocketChannel.accept();
ByteBuffer byteBufferData = ByteBuffer.allocateDirect(1024 * 8);
ByteBuffer byteBufferACK = ByteBuffer.allocateDirect(1);

long startTime = System.currentTimeMillis();
while ((socketChannel.read(byteBufferData)) != -1) {
    // when 8192 bytes of data have been read
    if (!byteBufferData.hasRemaining()) {
        byteBufferData.clear();
        // write ACK
        socketChannel.write(byteBufferACK);
        byteBufferACK.clear();
    }
}
System.out.println(System.currentTimeMillis() - startTime);
Please note that this is test code and is not intended to convey any useful data; it is intended only for measuring the data transfer rate.

I have 2 questions:

Maybe I do not understand something or am doing it incorrectly, but why does sending one byte of data as a confirmation of receipt (ACK) affect the overall data transfer rate so much? How can I avoid this?

Is SHA-256 sufficient as a checksum for 8 KB blocks of data (on top of the existing TCP checksum)?
Because you're waiting for it. Let's say there's 200 ms of latency between you and the server. Without the ACK, you'd write packets as quickly as possible and saturate the bandwidth. With the ACK, each block looks like this:

t=0 send 1st 8 KB
t=200 server receives
t=205ish server sends ACK
t=405 client receives ACK
t=410ish client sends 2nd 8 KB

Almost all of your sending time is spent waiting. I'm actually surprised it wasn't worse.
TCP has a LOT of features in it that prevent these kinds of issues, including sliding windows of data: you don't send one packet and ACK it, you send N packets and the server ACKs the ones it receives, allowing missing packets to be resent and handled out of order. You're reimplementing TCP badly and almost certainly shouldn't be.

If you are going to do this, don't use TCP. Use UDP or raw sockets and write your new protocol on top of that. As it stands you're still paying for TCP's ACKs and checksums, so yours are redundant.
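If you do keep the TCP-based design for testing, a sliding-window version of the client recovers most of the lost throughput. A minimal sketch (the window size is arbitrary, and it assumes the server still sends one ACK byte per 8 KB block as in the test code above):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class WindowedSender {
    static final int WINDOW = 16; // max unacknowledged blocks in flight (arbitrary)

    public static void main(String[] args) throws Exception {
        SocketChannel ch = SocketChannel.open(new InetSocketAddress("192.168.31.30", 3333));
        ByteBuffer data = ByteBuffer.allocateDirect(1024 * 8);
        ByteBuffer ack = ByteBuffer.allocateDirect(1);
        int inFlight = 0;
        for (int i = 0; i < 1024; i++) {
            while (data.hasRemaining())
                ch.write(data);           // payload + checksum, as in the original test
            data.clear();
            inFlight++;
            if (inFlight == WINDOW) {     // only block for an ACK when the window is full
                ch.read(ack);
                ack.clear();
                inFlight--;
            }
        }
        while (inFlight-- > 0) {          // drain the remaining ACKs
            ch.read(ack);
            ack.clear();
        }
        ch.close();
    }
}

With 16 blocks in flight the per-block round trip is amortized, which is exactly what TCP's own window does for you already.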
I have to implement sending data with a specific source port and at the same time listen on that port: full duplex. Does anybody know how to implement this in Java? I tried to create a separate thread for listening on the socket input stream, but it doesn't work. I cannot bind a ServerSocket and a client Socket to the same source port, and the same goes for Netty.

Is there any solution for full duplex?
init() {
    socket = new Socket(InetAddress.getByName(Target.getHost()), Target.getPort(),
            InetAddress.getByName("localhost"), 250);
    in = new DataInputStream(socket.getInputStream());
    out = new DataOutputStream(socket.getOutputStream());
}

private static void writeAndFlush(OutputStream out, byte[] b) throws IOException {
    out.write(b);
    out.flush();
}
public class MessageReader implements Runnable {
    @Override
    public void run() {
        // this method throws an EOFException
        read(in);
    }

    private void read(DataInputStream in) {
        while (isConnectionAlive()) {
            StringBuffer strBuf = new StringBuffer();
            byte[] b = new byte[1000];
            try {
                int n;
                // read until EOF or until an ETX byte (3) arrives
                while ((n = in.read(b)) != -1 && b[0] != 3) {
                    strBuf.append(new String(b, 0, n));
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
            log.debug(strBuf.toString());
        }
    }
}
What you're trying to do is quite strange: a ServerSocket is a fully implemented socket that accepts connections; it handles its own messages, and you definitely cannot piggy-back another socket on top of it.
Full duplex is fairly simple to do with NIO:

1. Create a Channel for your Socket in non-blocking mode
2. Add read to the interest ops
3. Sleep on the Selector's select() method
4. Read any readable bytes, write any writable bytes
5. If writing is done, remove write from the interest ops
6. GOTO 3.

If you need to write, add the bytes to a buffer, add write to the interest ops and wake up the selector. (Slightly simplified, but I'm sure you can find your way around the Javadoc.)
This way you will be completely loading the outgoing buffer every time there is space and reading from the incoming one at the same time (well, in a single thread, but you don't have to finish writing before you start reading, etc.).
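A minimal sketch of those steps for the client side (the host, port, and outgoing message are placeholders; real code would re-add OP_WRITE whenever new data is queued, as described above):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class FullDuplexClient {
    public static void main(String[] args) throws Exception {
        SocketChannel ch = SocketChannel.open(new InetSocketAddress("localhost", 3333));
        ch.configureBlocking(false);                      // step 1
        Selector sel = Selector.open();
        ByteBuffer out = ByteBuffer.wrap("hello".getBytes());
        ByteBuffer in = ByteBuffer.allocate(4096);
        SelectionKey key = ch.register(sel,               // step 2 (plus write, since data is pending)
                SelectionKey.OP_READ | SelectionKey.OP_WRITE);
        while (key.isValid()) {
            sel.select();                                 // step 3
            sel.selectedKeys().clear();
            if (key.isReadable()) {                       // step 4: read what arrived
                in.clear();
                if (ch.read(in) == -1)
                    break;
                in.flip();
                // ... consume 'in' here
            }
            if (key.isWritable() && out.hasRemaining()) { // step 4: write what fits
                ch.write(out);
                if (!out.hasRemaining())
                    key.interestOps(SelectionKey.OP_READ); // step 5: done writing
            }
        }
        ch.close();
    }
}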
I ran into the same question and decided to answer it myself. I would like to share the code repo with you guys. It is really simple, but you can get the idea to make your own stuff work. It is an elaborate example, and the steps happen to mirror Ordous's solution.

https://github.com/khanhhua/full-duplex-chat

Feel free to clone! It's my weekend homework.
Main thread:

Create background thread(s) that will connect to any target machine(s). These threads connect to the target machines, transmit data, and die.
Create an infinite loop:
Listen for incoming connections.
Thread off each connection to handle its I/O.

Classes:

Server
Listens for incoming connections and threads off a Client object.
Client
Created when the server accepts an incoming connection; a TcpClient or NetClient (I forget what Java calls it) is used to send data. Upon completion it dies.
Target
Created during start-up; connects to a specific target and sends data. Once complete it dies.
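A bare-bones skeleton of that layout (class names and the port are placeholders):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class Server implements Runnable {
    @Override
    public void run() {
        try (ServerSocket server = new ServerSocket(3333)) { // port is an assumption
            while (true) {                                   // infinite accept loop
                Socket socket = server.accept();             // listen for incoming connections
                new Thread(new Client(socket)).start();      // thread off each connection
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

class Client implements Runnable {
    private final Socket socket;

    Client(Socket socket) {
        this.socket = socket;
    }

    @Override
    public void run() {
        // handle I/O on 'socket', then let the thread die
    }
}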
Currently I'm experimenting with the code below (I know it doesn't quite fit the purpose).
I tried sending from 3 sources simultaneously (using UDP Test Tool) and it seems OK, but I want to know how this would behave if, out of those 10K possible clients, 2K were sending at the same time. The packets are approximately 70 bytes in size. I'm supposed to do some simple operations on the contents and write the results to a database.
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class Test {
    public static void main(String[] args) {
        int PACKETSIZE = 1400;
        int port = 5555;
        byte[] bytes = new byte[PACKETSIZE];
        //ByteBuffer bb = ByteBuffer.allocate(4);
        //Byte lat = null;
        try {
            DatagramSocket socket = new DatagramSocket(port);
            System.out.println("The server is running on port " + port + "\n");
            while (true) {
                DatagramPacket packet = new DatagramPacket(bytes, bytes.length);
                socket.receive(packet);
                System.out.println("Packet length = " + packet.getLength());
                System.out.println("Sender IP = " + packet.getAddress() + " Port = " + packet.getPort());
                for (int i = 0; i < packet.getLength(); i++) {
                    System.out.print(" " + packet.getData()[i] + " ");
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Firstly, UDP sockets are not connection-oriented, so the number of "connections" is meaningless. The number that you actually care about is datagrams per second. The other issue that is normally overlooked is whether the datagrams span multiple IP packets, since that affects reassembly time and, ultimately, how expensive they are to receive. Your packet size of 1,400 bytes fits comfortably in a single Ethernet frame.
Now, what you need to do is limit your per-datagram processing time using multiple threads, queueing, or some other processing scheme. You want the receiving thread busy pulling datagrams off of the wire and putting them somewhere else for workers to process. This is a common processing idiom that has been in use for years, and it should scale to meet your needs provided that you can separate the processing of the data from the network I/O.
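A sketch of that idiom, assuming a fixed worker pool (the pool size, port, and process() method are placeholders):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// One thread drains the socket; a pool does the parsing and database work.
public class UdpServer {
    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(8); // size is an assumption
        try (DatagramSocket socket = new DatagramSocket(5555)) {
            byte[] buf = new byte[1400];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet);
                // copy now: the buffer is reused by the next receive()
                byte[] payload = Arrays.copyOf(packet.getData(), packet.getLength());
                workers.submit(() -> process(payload));
            }
        }
    }

    static void process(byte[] payload) {
        // parse the ~70-byte payload, then write the result to the database
    }
}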
You can also use asynchronous or event-driven I/O so that no thread is responsible for reading datagrams from the socket directly; see this question for a discussion of Java NIO.

I'm not sure if this is homework or not, but you should read The C10K Problem, Dan Kegel's excellent article on this very subject. I think you will find it enlightening, to say the least.
Check out these two open source projects:
http://mina.apache.org/
http://www.jboss.org/netty
Also check this blog post:
http://urbanairship.com/blog/2010/08/24/c500k-in-action-at-urban-airship/