I have TCP communication via socket code like:
public void openConnection() throws Exception
{
    socket = new Socket();
    InetAddress iNet = InetAddress.getByName("server");
    InetSocketAddress sock = new InetSocketAddress(iNet, Integer.parseInt(port));
    socket.connect(sock, 0);
    out = new PrintWriter(socket.getOutputStream(), true);
    in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
}
and a send method like:
synchronized void send(String message)
{
    try
    {
        out.println(message);
    }
    catch (Exception e)
    {
        throw new RuntimeException(this.getClass() + ": Error Sending Message: "
                + message, e);
    }
}
This writes the message to the socket, and it is communicated over TCP (a non-blocking call).
My question is: how can I determine through Java code whether this packet was successfully sent, or, if it was dropped, what the reason was?
A TCP acknowledgement indicates that the data has been pushed to the other end of the TCP/IP stack; it does not necessarily mean that the receiving application has processed the data. On Windows/Linux, successful completion of a send only indicates that the buffer has been copied to the kernel-mode socket buffer.
You can try setting the socket send buffer to zero, which makes the TCP/IP stack complete the send call only after receiving an acknowledgement for the buffer. This happens at least on Windows, and this behavior can't be assumed in Java.
TCP provides reliable communication with the endpoint. If there is no error, then the message was received. The Wikipedia page on TCP states that:
TCP provides reliable, ordered and error-checked delivery of a stream
of octets between programs running on computers connected to a local
area network, intranet or the public Internet.
https://en.wikipedia.org/wiki/Transmission_Control_Protocol
If the communication fails, you can inspect the specific exception that was thrown. You should review the API documentation to determine which exceptions are thrown and why. To help with this, it is useful to be more specific in your exception handling: handle specific types of exception separately rather than just catching Exception.
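For illustration, here is a minimal sketch of more specific handling around the connect and send code from the question (the 5-second connect timeout is an assumption). One subtlety worth showing: PrintWriter swallows IOExceptions, so the try/catch around println() in the original can never fire; you have to poll checkError() instead.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ConnectException;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.net.UnknownHostException;

public class Client {
    private Socket socket;
    private PrintWriter out;
    private BufferedReader in;

    public void openConnection(String host, int port) throws IOException {
        socket = new Socket();
        try {
            InetAddress addr = InetAddress.getByName(host); // may throw UnknownHostException
            socket.connect(new InetSocketAddress(addr, port), 5000);
            out = new PrintWriter(socket.getOutputStream(), true);
            in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        } catch (UnknownHostException e) {
            throw new IOException("DNS lookup for " + host + " failed", e);
        } catch (ConnectException e) {
            throw new IOException("nothing listening on " + host + ":" + port, e);
        } catch (SocketTimeoutException e) {
            throw new IOException("connect did not complete within 5s", e);
        }
    }

    synchronized void send(String message) {
        out.println(message);
        // PrintWriter sets an internal error flag instead of throwing.
        if (out.checkError()) {
            throw new RuntimeException("Error sending message: " + message);
        }
    }
}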
Related
I am currently trying to make an application that sends messages to a server on one port but receives messages on another port. However, based on the tutorials I have followed, it looks like the act of connecting to the server is where ports come into play, and my client is receiving and sending messages on the same port. How do I make it send on one port but receive on the other?
Here is the code that I think is relevant from the client side. (I have included some things that seem unrelated because I think they would be altered by receiving on one port but sending on another. Ignore the comment about replacing inetaddress; that is just me working on implementing this in a GUI.)
public void startRunning(){
    try{
        connectToServer();
        setupStreams();
        whileChatting();
    }catch(EOFException eofException){
        showMessage("\n Client terminated connection");
    }catch(IOException ioException){
        ioException.printStackTrace();
    }finally{
        closeStuff();
    }
}

//connect to server
private void connectToServer() throws IOException{
    showMessage("Attempting connection... \n");
    connection = new Socket(InetAddress.getByName(serverIP), 480);//replace serverIP with ipTextField.getText or set serverIP to equal ipTextField.getText? Same with port number.
    showMessage("Connected to: " + connection.getInetAddress().getHostName());
}

//set up streams to send and receive messages
private void setupStreams() throws IOException{
    output = new ObjectOutputStream(connection.getOutputStream());
    output.flush();
    input = new ObjectInputStream(connection.getInputStream());
    showMessage("\n Streams are good! \n");
}

//while talking with server
private void whileChatting() throws IOException{
    ableToType(true);
    do{
        try{
            message = (String) input.readObject();
            showMessage("\n" + message);
        }catch(ClassNotFoundException classNotfoundException){
            showMessage("\n Don't know that object type");
        }
    }while(!message.equals("SERVER - END"));
}

//send messages to server
private void sendMessage(String message){
    try{
        output.writeObject("CLIENT - " + message);
        output.flush();
        showMessage("\nCLIENT - " + message);
    }catch(IOException ioException){
        messageWindow.append("\n something messed up ");
    }
}

//change/update message window
private void showMessage(final String m){
    SwingUtilities.invokeLater(
        new Runnable(){
            public void run(){
                messageWindow.append(m);
            }
        }
    );
}
EDIT/UPDATE: To help clarify some things, here is some more information. The device that sends the first message is connected to a sensor, and when that sensor detects something it sends information to the other device. The receiving device sends a message back on a different port telling the original sender how to respond. Let's name these two devices the "reporter-action taker" and the "decision maker-commander".
If you want to use TCP/IP sockets, you can't use one socket to send and another to read. That's not what they are for.
If you use a centralized distributed algorithm (server/client communication), you have to set the server to listen on a single port with the ServerSocket class; the server then tries to accept clients through that socket.
Example:
ServerSocket listener = new ServerSocket(port);
while (true) {
    new ClientHandler(listener.accept());
}
The server will listen on that port, and when a client tries to connect to that port, if it is accepted, the server launches its handler. The handler's constructor receives the Socket object used by the client as an argument and can then use it to get the writers and readers. The reader on this handler class will be the writer on the client class and vice versa; maybe that's what you were looking for.
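To make that concrete, here is a hypothetical ClientHandler matching the loop above (the class name, the per-client thread, and the echo-style reply are illustrative assumptions, not part of the original code):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

class ClientHandler implements Runnable {
    private final Socket client;

    ClientHandler(Socket client) {
        this.client = client;
        new Thread(this).start(); // one thread per accepted client
    }

    public void run() {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                // Read and reply over the same socket: both directions
                // travel through the one connection the client opened.
                out.println("SERVER - " + line);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}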
Your question about using two ports in this manner is a bit strange. You state that you have a client and a server and that they should communicate on different ports.
Just to clarify, picture the server as a hanging rack for jackets with several hooks in a row. Each port the server listens on represents a hook. In the client-server relationship, the client (the jacket) knows where to find its hook, but the hook is blind and has no idea where to find jackets.
Now, the client selects a port (a hook) and connects to it. The connection is like a pipeline with two pipes: one for the client to deliver data to the server, and the other to send data from the server back to the client. Once the connection is established, data can be transferred both ways. This means we only need one port open on the server to send data both from the client to the server and in the opposite direction.
The reason for having only one open port on the server for clients to connect to is that holding a port open for incoming connections is hard to do on a regular client computer. A normal desktop user will be behind several firewalls blocking incoming connections; if that weren't the case, the client would probably be hacked senseless by malicious viruses.
Moving on to the two-port solution: we could not call this a client-server connection per se. It would be more like a peer-to-peer connection. But if this is what you want to do, the application connecting first would have to start by telling the other application which IP and port to use for connecting back; it should probably also supply some kind of token to be used to pair the new incoming connection when connecting back.
Note that such an implementation is not a good idea most of the time, as it complicates things a whole lot for simple data transfer between a client and a server application.
UDP in Java thinks that UDP has "connections". This surprised me, coming from a C background where I had always used UDP as a fire-and-forget type of protocol.
When testing UDP in Java, I noticed that if the remote UDP port is not listening, I get an error in Java before I attempt to send anything.
What does Java do (without me asking it to) in order to be able to tell whether a remote UDP port is listening?
(The code below is run in the receiving thread for the socket. Sending is done in a different thread.)
try {
    socket = new DatagramSocket(udpPort);
    socket.connect(udpAddr, udpPort);
} catch (SocketException e) {
    Log.d(TAG, "disconnected", e);
}
...
while (true) {
    // TODO: don't create a new datagram for each iteration
    DatagramPacket packet = new DatagramPacket(new byte[BUF_SIZE], BUF_SIZE);
    try {
        socket.receive(packet); // line 106
    } catch (IOException e) {
        Log.d(TAG, "couldn't recv", e);
    }
...
This produces the error below if the remote socket is not listening.
java.net.PortUnreachableException:
at libcore.io.IoBridge.maybeThrowAfterRecvfrom(IoBridge.java:556)
at libcore.io.IoBridge.recvfrom(IoBridge.java:516)
at java.net.PlainDatagramSocketImpl.doRecv(PlainDatagramSocketImpl.java:161)
at java.net.PlainDatagramSocketImpl.receive(PlainDatagramSocketImpl.java:169)
at java.net.DatagramSocket.receive(DatagramSocket.java:253)
at com.example.mypkg.MyClass.run(MyClass.java:106)
at java.lang.Thread.run(Thread.java:856)
Caused by: libcore.io.ErrnoException: recvfrom failed: ECONNREFUSED (Connection refused)
at libcore.io.Posix.recvfromBytes(Native Method)
at libcore.io.Posix.recvfrom(Posix.java:131)
at libcore.io.BlockGuardOs.recvfrom(BlockGuardOs.java:164)
...
First of all, it is clear that this is not implemented using real Java. The "libcore.io" packages are not part of the Java SE libraries; these are Android stack traces. (This doesn't change anything ... but it could.)
OK, so lets start with the exception. The javadoc for java.net.PortUnreachableException says:
"Signals that an ICMP Port Unreachable message has been received on a connected datagram."
And for DatagramSocket.connect(...):
"If the remote destination to which the socket is connected does not exist, or is otherwise unreachable, and if an ICMP destination unreachable packet has been received for that address, then a subsequent call to send or receive may throw a PortUnreachableException. Note, there is no guarantee that the exception will be thrown."
So here's what I think has happened: prior to creating the incoming socket, something on the client system sent a UDP packet to the server on that port, and the server responded with an ICMP Port Unreachable. Then your socket was created and connected, and you called receive. This performed a recvfrom syscall, and the network stack responded with an ECONNREFUSED error code, which Java turns into a PortUnreachableException.
So does this mean that UDP is connection oriented?
Not really, IMO. It is simply reporting the fact that it received an ICMP message in response to something that happened earlier.
What about the connect methods, and the "connected socket" / "connected datagram" phraseology?
IMO, this is just some clumsy wording. The "connection" really just refers to the fact that the datagram socket has been bound to a specific remote address and port, so that you can send and receive datagrams without specifying the IP and port1.
These "connections" are pretty tenuous and certainly don't amount to making UDP "connection oriented".
What does Java do (without me asking it to) in order to be able to tell whether a remote UDP port is listening?
It is not doing anything. Java is simply reporting information from a previous ICMP message.
1 - Actually, there is a bit more to it than that. For example, binding tells the client-side OS to buffer UDP packets from that host/port and route UDP packets (and ICMP notifications) to the application. It also tells it not to respond with an ICMP Port Unreachable.
UDP in Java thinks that UDP has "connections".
No it doesn't, but UDP (regardless of Java) does have connected sockets. Not the same thing.
This surprised me, coming from a C background where I had always used UDP as a fire-and-forget type of protocol.
You can connect() a UDP socket in C too. Look it up. What you describe has nothing to do with Java specifically.
When testing UDP in Java, I noticed that if the remote UDP port is not listening, I get an error in Java before I attempt to send anything.
That's because you connected the socket. One of the side-effects of that is that incoming ICMP messages can be routed back to the sending socket in the form of errors.
What does Java do (without me asking it to) in order to be able to tell whether a remote UDP port is listening?
It calls the BSD Sockets connect() method.
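For illustration, here is a minimal sketch of the difference (port 49999 is a placeholder assumed to have no listener; whether the ICMP error actually comes back depends on the OS and the network path):
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.PortUnreachableException;

public class ConnectedUdpDemo {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            // connect() sends nothing on the wire; it just fixes the remote
            // address and lets later ICMP errors be reported on this socket.
            socket.connect(InetAddress.getLoopbackAddress(), 49999);
            byte[] buf = "ping".getBytes();
            socket.send(new DatagramPacket(buf, buf.length));
            try {
                socket.receive(new DatagramPacket(new byte[512], 512));
            } catch (PortUnreachableException e) {
                // Reported only because the socket is connected; an
                // unconnected socket would ignore the ICMP message.
                System.out.println("remote port not listening: " + e);
            }
        }
    }
}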
The UDP server needs to listen on a local port.
Here's a code stub for a server.
int portNumber = 59123;
DatagramSocket server = new DatagramSocket(portNumber);
byte[] buffer = new byte[1024]; // receive buffer
// read incoming packets
DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
while (true)
{
    server.receive(packet);
    byte[] data = packet.getData();
    String text = new String(data, 0, packet.getLength());
    System.out.println(packet.getAddress().getHostAddress() + ":" + packet.getPort() + " received: '" + text + "'");
}
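A matching client sketch (the loopback address and message text are assumptions) that sends one datagram to that server:
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpClient {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket client = new DatagramSocket()) {
            byte[] data = "hello".getBytes();
            // Fire-and-forget: send() returns as soon as the datagram is
            // handed to the OS, whether or not anything is listening.
            client.send(new DatagramPacket(data, data.length,
                    InetAddress.getLoopbackAddress(), 59123));
        }
    }
}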
I have encountered a problem with socket communication on a Linux system. The communication process is as follows: the client sends a message asking the server to perform a compute task, and waits for the result message from the server after the task completes.
But the client hangs waiting for the result message if the task takes a long time, such as about 40 minutes, even though on the server side the result message has been written to the socket in response. It receives the result message normally if the task takes little time, such as one minute. Additionally, this problem only happens in the customer environment; the communication behaves normally in our testing environment.
I suspected the cause of this problem was that the default socket timeout value differs between the customer environment and the testing environment, but the following values are identical in the two environments, on both client and server:
getSoTimeout:0
getReceiveBufferSize:43690
getSendBufferSize:8192
getSoLinger:-1
getTrafficClass:0
getKeepAlive:false
getTcpNoDelay:false
The code on the client is like:
Message msg = null;
ObjectInputStream in = client.getClient().getInputStream();
// if no message, readObject() will hang here
while (true) {
    try {
        Object recObject = in.readObject();
        System.out.println("Client received msg.");
        msg = (Message) recObject;
        return msg;
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}
The code on the server is like:
ObjectOutputStream socketOutStream = getSocketOutputStream();
try {
    MessageJobComplete msgJobComplete = new MessageJobComplete(reportFile, outputFile);
    socketOutStream.writeObject(msgJobComplete);
} catch (Exception e) {
    e.printStackTrace();
}
In order to solve this problem, I added flush and reset calls, but the problem still exists:
ObjectOutputStream socketOutStream = getSocketOutputStream();
try {
    MessageJobComplete msgJobComplete = new MessageJobComplete(reportFile, outputFile);
    socketOutStream.flush();
    logger.debug("AbstractJob#reply to the socket");
    socketOutStream.writeObject(msgJobComplete);
    socketOutStream.reset();
    socketOutStream.flush();
    logger.debug("AbstractJob#after Flush Reply");
} catch (Exception e) {
    e.printStackTrace();
    logger.error("Exception when sending MessageJobComplete." + e.getMessage());
}
So does anyone know what steps I should take next to solve this problem?
I guess the cause is an environment setting, but I do not know which environment factors would affect the socket communication.
The sockets use the TCP/IP protocol to communicate, and the problem is related to the long-running task, so which TCP values would affect the timeout of the socket communication?
After analyzing the logs, I found that after the messages are written to the socket, no exceptions are thrown or caught. But always after 15 minutes, exceptions are thrown in the objectInputStream.readObject() code on the server side, which is used to accept requests from the client. However, the socket.getSoTimeout value is 0, so it is very strange that a timed-out exception was thrown.
{2012-01-09 17:44:13,908} ERROR java.net.SocketException: Connection timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:146)
at sun.security.ssl.InputRecord.readFully(InputRecord.java:312)
at sun.security.ssl.InputRecord.read(InputRecord.java:350)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:809)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:766)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:94)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:69)
at java.io.ObjectInputStream$PeekInputStream.peek(ObjectInputStream.java:2265)
at java.io.ObjectInputStream$BlockDataInputStream.peek(ObjectInputStream.java:2558)
at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2568)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1314)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:368)
So why are the Connection timed out exceptions thrown?
This problem is solved. Using tcpdump to capture the message flows, I found that while at the application level the ObjectOutputStream.writeObject() method was invoked, at the TCP level many [TCP Retransmission] segments were seen.
So I concluded that the connection had possibly died, although according to the netstat -an command the TCP connection state was still ESTABLISHED.
So I wrote a test application that periodically sent test messages as heartbeat messages from the server. Then this problem disappeared.
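For reference, here is a minimal sketch of such a heartbeat sender (the one-minute interval, the "HEARTBEAT" marker string, and the lock on the shared stream are assumptions; the receiving side would need to recognise and discard the marker):
import java.io.ObjectOutputStream;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class Heartbeat {
    // Sends a small application-level message at a fixed interval so the
    // connection never looks idle to a firewall or NAT device in between.
    static void start(final ObjectOutputStream out) {
        final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(new Runnable() {
            public void run() {
                try {
                    synchronized (out) {              // don't interleave with result messages
                        out.writeObject("HEARTBEAT"); // marker the client must skip
                        out.flush();
                    }
                } catch (Exception e) {
                    timer.shutdown();                 // write failed: connection is dead
                }
            }
        }, 1, 1, TimeUnit.MINUTES);
    }
}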
The read() methods of java.io.InputStream are blocking calls, which means they wait "forever" if they are called when there is no data in the stream to read.
If the server does not respond, this is completely expected behaviour, as per the published contract in the javadoc.
If you want a non-blocking read, use the java.nio.* classes.
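If blocking I/O is otherwise acceptable and you simply don't want to wait forever, another option (a sketch; the host, port, and 30-second value are placeholders) is a read timeout, which turns the indefinite block into a SocketTimeoutException:
import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimedRead {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket("server", 2002); // placeholder host/port
        socket.setSoTimeout(30_000); // read() now fails after 30s of silence
        try {
            int b = socket.getInputStream().read();
            System.out.println("read byte: " + b);
        } catch (SocketTimeoutException e) {
            // No data within 30s; the connection itself is still open.
        }
        socket.close();
    }
}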
I am currently debugging two Java applications that exchange data via a TCP connection.
One of the applications, the TCP client, periodically sends urgent data to the other, the TCP server, by calling Socket#sendUrgentData(int). On the 18th attempt to send the urgent data, the TCP client throws the following exception
java.io.IOException: Broken pipe
at java.net.PlainSocketImpl.socketSendUrgentData(Native Method)
at java.net.PlainSocketImpl.sendUrgentData(PlainSocketImpl.java:541)
at java.net.Socket.sendUrgentData(Socket.java:927)
The TCP server throws this exception
java.net.SocketException: Software caused connection abort: recv failed
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(Unknown Source)
at java.net.SocketInputStream.read(Unknown Source)
I believe the exceptions are caused by an attempt to write to or read from a closed connection/socket. What I don't understand is why the connection or socket becomes closed after calling sendUrgentData() 17 times. I am able to reproduce it, and it always occurs after 17 calls.
If I run the client and server on Windows, the issue occurs. If I run the client and server on Solaris the issue does not occur. If I run the client on Solaris and the server on Windows the issue occurs. If I run the client on Windows and the server on Solaris the issue does not occur. This makes me think it may be Windows related?
Using Wireshark I see the following traffic on the connection
--> = from TCP client to TCP server
<-- = from TCP server to TCP client
--> [PSH, ACK, URG] (Seq=1, Ack=1)
<-- [ACK] (Seq=1, Ack=2)
--> [PSH, ACK, URG] (Seq=2, Ack=1)
<-- [ACK] (Seq=1, Ack=3)
...
--> [PSH, ACK, URG] (Seq=17, Ack=1)
<-- [RST, ACK] (Seq=1, Ack=18)
I wrote some simple test classes which show the issue.
TCPServer.java IP_Address Port
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class TCPServer
{
    public static void main(String[] args) throws Exception
    {
        ServerSocket socket = new ServerSocket();
        socket.bind(new InetSocketAddress(args[0], Integer.parseInt(args[1])));
        System.out.println("BOUND/" + socket);
        Socket connection = socket.accept();
        System.out.println("CONNECTED/" + connection);
        int b;
        while ((b = connection.getInputStream().read()) != -1) {
            System.out.println("READ byte: " + b);
        }
        System.out.println("CLOSING ..");
        connection.close();
        socket.close();
    }
}
TCPClient.java IP_Address Port Interval_Between_Urgent_Data
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.Timer;
import java.util.TimerTask;

public class TCPClient
{
    public static void main(String[] args) throws Exception
    {
        final Socket socket = new Socket();
        socket.connect(new InetSocketAddress(InetAddress.getByName(args[0]), Integer.parseInt(args[1])));
        System.out.println("CONNECTED/" + socket);
        Timer urgentDataTimer = new Timer(true);
        urgentDataTimer.scheduleAtFixedRate(new TimerTask()
        {
            int n = 0;
            public void run() {
                try {
                    System.out.println("SENDING URGENT DATA (" + (++n) + ") ..");
                    socket.sendUrgentData(1);
                    System.out.println("SENT URGENT DATA");
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }, 1000, Integer.parseInt(args[2]));
        int b;
        while ((b = socket.getInputStream().read()) != -1) { // read until EOF
            System.out.println("READ byte: " + b);
        }
        System.out.println("CLOSING ..");
        urgentDataTimer.cancel();
        socket.close();
    }
}
Could someone explain what is happening here?
Thanks.
I assume that you are actually correctly receiving the urgent data in the application that's failing and that the data is as you expect it to be?
There are many reasons for this to fail, especially if you're attempting it in a cross-platform situation. In TCP there are two conflicting descriptions of how urgent data works: RFC 793, which defines TCP, says that the Urgent Pointer indicates the byte that follows the urgent data, but RFC 1122 corrects this and states that the Urgent Pointer indicates the final byte of urgent data. This leads to interoperability issues if one peer uses the RFC 793 definition and the other uses the RFC 1122 definition.
So, first confirm that your application is actually getting the correct byte of urgent data. Yes, I said byte: there's more compatibility complexity in that Windows only supports a single byte of out-of-band data, whereas RFC 1122 specifies that TCP MUST support sequences of urgent data bytes of any length. Windows also doesn't specify how, or whether, it will buffer subsequent out-of-band data, so if you are slow in reading a byte of urgent data and another byte arrives, one of the bytes may be lost; though our tests have shown that Windows does buffer urgent data. This all makes out-of-band signalling using urgent data somewhat unreliable on Windows with TCP.
And then there are all the other issues that come about if you happen to be using overlapped I/O.
I've covered this in a little more depth, albeit from a C++ perspective, here: http://www.serverframework.com/asynchronousevents/2011/10/out-of-band-data-and-overlapped-io.html
Urgent data is received in-line by Java (and only if you enable it with Socket#setOOBInline(true); otherwise it is silently discarded), which puts the data stream out of order. Probably the receiver didn't understand the out-of-order data and closed the connection. Then you kept writing to it, and that can cause 'connection reset by peer'. The moral is that you basically can't use urgent TCP data in Java unless the receiver is very carefully written.
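For illustration, here is a hypothetical receiver that opts in to seeing the urgent byte (port 4444 is a placeholder). Note that Java gives you no marker telling you which bytes were urgent, which is why the receiver must be written so carefully:
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class UrgentReceiver {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(4444);
             Socket connection = server.accept()) {
            // By default Java silently discards urgent data; opt in first.
            connection.setOOBInline(true);
            int b;
            while ((b = connection.getInputStream().read()) != -1) {
                // Urgent bytes now arrive mixed into the normal stream,
                // with nothing to distinguish them from ordinary data.
                System.out.println("READ byte: " + b);
            }
        }
    }
}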
Is there a way to have reliable communication (the sender gets informed that the message it sent has been received by the receiver) using the Java TCP/IP library in java.net.*? I understand that one of the advantages of TCP over UDP is its reliability. Yet I couldn't get that assurance in the experiment below:
I created two classes:
1) echo server => always sending back the data it received.
2) client => periodically send "Hello world" message to the echo server.
They were run on different computers (and worked perfectly). In the middle of the execution, I disconnected the network (unplugged the LAN cable). After disconnecting, the server kept waiting for data until a few seconds passed (it eventually raised an exception). Similarly, the client kept sending data until a few seconds passed (an exception was raised).
The problem is that objectOutputStream.writeObject(message) doesn't guarantee the delivery status of the message (I expected it to block the thread and keep resending the data until delivered), or at least to inform me which messages are missing.
Server Code:
import java.net.*;
import java.io.*;
import java.io.Serializable;
public class SimpleServer {
    public static void main(String args[]) {
        try {
            ServerSocket serverSocket = new ServerSocket(2002);
            Socket socket = serverSocket.accept();
            InputStream inputStream = socket.getInputStream();
            ObjectInputStream objectInputStream = new ObjectInputStream(
                    inputStream);
            while (true) {
                try {
                    String message = (String) objectInputStream.readObject();
                    System.out.println(message);
                    Thread.sleep(1000);
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
Client code:
import java.net.*;
import java.io.*;
public class SimpleClient {
    public static void main(String args[]) {
        try {
            String serverIpAddress = "localhost"; //change this
            Socket socket = new Socket(serverIpAddress, 2002);
            OutputStream outputStream = socket.getOutputStream();
            ObjectOutputStream objectOutputStream = new ObjectOutputStream(
                    outputStream);
            while (true) {
                String message = "Hello world!";
                objectOutputStream.writeObject(message);
                System.out.println(message);
                Thread.sleep(1000);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
If you need to know which messages have arrived in the peer application, the peer application has to send acknowledgements.
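As a hypothetical sketch of such an acknowledgement layered on the object streams from the question (the sequence-number framing and the "ACK:" prefix are invented for illustration):
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class AckingClient {
    private final ObjectOutputStream out;
    private final ObjectInputStream in;
    private long seq = 0;

    public AckingClient(ObjectOutputStream out, ObjectInputStream in) {
        this.out = out;
        this.in = in;
    }

    // Returns only once the peer application has confirmed receipt,
    // so a break in the connection surfaces as an exception here.
    public synchronized void sendReliably(String message) throws IOException, ClassNotFoundException {
        long id = ++seq;
        out.writeObject(id + ":" + message);
        out.flush();
        Object ack = in.readObject(); // blocks until the server's ACK arrives
        if (!("ACK:" + id).equals(ack)) {
            throw new IOException("expected ACK:" + id + " but got " + ack);
        }
    }
}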
If you want this level of guarantee, it sounds like you really want JMS. This can ensure not only that messages have been delivered but also that they have been processed correctly; i.e. there is no point in having very reliable delivery if the message can be discarded due to a bug.
You can monitor which messages are waiting and which consumers are falling behind. You can watch a producer to see what messages it is sending, and have messages saved while a consumer is down and made available when it restarts, i.e. reliable delivery even if the consumer is restarted.
TCP is always reliable; you don't need confirmations. However, to check that a client is up, you might also want to use a UDP stream with confirmations, like a PING? PONG! system. There might also be TCP settings you can adjust.
Your base assumption (and understanding of TCP) here is wrong. If you unplug and then re-plug, the message most likely will not be lost.
It boils down to how long you want the sender to wait. One hour? One day? If you made the timeout one day, you could unplug for two days and still say "it does not work".
So the delivery guarantee is that "either the data is delivered - or you get informed". In the second case you need to solve it at the application level.
You could consider using the SO_KEEPALIVE socket option, which causes keepalive probes to be sent once the connection has been idle for two hours (the usual OS default), closing the connection if the peer does not respond. However, in many cases this obviously doesn't offer the level of control typically needed by applications.
A second problem is that some TCP/IP stack implementations are poor and can leave your server with dangling open connections in the event of a network outage.
Therefore, I'd advise adding application level heartbeating between your client and server to ensure that both parties are still alive. This also offers the advantage of severing the connection if, for example a 3rd party client remains alive but becomes unresponsive and hence stops sending heartbeats.
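For completeness, enabling SO_KEEPALIVE from Java is a one-liner (the host and port below are placeholders), though the two-hour default makes it a blunt instrument compared to application-level heartbeats:
import java.io.IOException;
import java.net.Socket;

public class KeepAliveDemo {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket("server", 2002); // placeholder host/port
        // Ask the OS to probe an idle peer; with typical OS defaults the
        // first probe is only sent after ~2 hours of silence.
        socket.setKeepAlive(true);
        System.out.println("keep-alive enabled: " + socket.getKeepAlive());
        socket.close();
    }
}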