PULL socket not connecting fast enough to ROUTER - java

I'm using this program to test a PULL socket with a ROUTER. I create and bind a ROUTER, then connect a PULL socket with an identity to it; the ROUTER then sends a message addressed specifically to the client using its identity (basic ZeroMQ enveloping).
Test Program
import org.zeromq.ZContext;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;

public static void main(String[] o) {
    ZContext routerCtx = new ZContext();
    Socket rtr = routerCtx.createSocket(ZMQ.ROUTER);
    rtr.setRouterMandatory(true);
    rtr.bind("tcp://*:5500");

    ZContext clientCtx = new ZContext();
    Socket client1 = clientCtx.createSocket(ZMQ.PULL);
    client1.setIdentity("client1".getBytes());
    client1.connect("tcp://localhost:5500");

    try {
        //Thread.currentThread().sleep(2000);
        rtr.sendMore("client1"); // address frame
        rtr.sendMore("");        // empty delimiter frame
        rtr.send("Hello!");      // body
        System.out.println(client1.recvStr()); // empty delimiter
        System.out.println("Client Received: " + client1.recvStr());
    } catch (Exception e1) {
        System.out.println("Could not send to client1: " + e1.getMessage());
    }
    routerCtx.destroy();
    clientCtx.destroy();
}
Results
The expected result is to print "Client Received: Hello!", but instead the ROUTER throws an exception consistent with an unaddressable message. I'm using setRouterMandatory(true) to make it throw under such circumstances; however, the client explicitly sets an identity and the server sends to that identity, so I don't understand why the exception is raised.
Temporary Fix
If I add a slight delay by uncommenting Thread.currentThread().sleep(2000);, the message is delivered successfully. But I despise using sleeps and waits: they create messy, brittle code, and more importantly, they don't answer the "why?"
Questions
Why is this happening? It was my understanding that "late joining" applied only to PUB/SUB sockets.
Is PULL with ROUTER an invalid socket combination? I'm using it for a chat program, and aside from this issue, it works great.

Why is this happening?
You have a race condition. The client1.connect call starts the connection process, but there is no guarantee the actual connection is established by the time you call rtr.sendMore("client1"). Your sleep() workaround pretty much proves this.
Changing PULL to DEALER is a step in the right direction, because a DEALER can both send and receive. To avoid the need for sleeps and waits, you would have to change your protocol. A simple change to the code above would be to have the DEALER connect and then immediately send a "HELLO" message to the ROUTER (it could even be an empty message). The ROUTER code must be redesigned so that it does nothing until it receives a HELLO message from the DEALER. Once you have received the HELLO message, you know the connection is established and you can safely send your chat messages.
This protocol also eliminates the need for your ROUTER to know the client ID in advance; instead, you can extract it from the HELLO message. A message from a DEALER to a ROUTER is guaranteed to be a multi-part message whose first part is the client ID. A minimal sketch of the handshake follows.
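Something along these lines, assuming the same JeroMQ-style API as the question (the "HELLO" marker is illustrative, not a fixed protocol):
import org.zeromq.ZContext;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;

ZContext ctx = new ZContext();
Socket rtr = ctx.createSocket(ZMQ.ROUTER);
rtr.bind("tcp://*:5500");

Socket client1 = ctx.createSocket(ZMQ.DEALER);
client1.setIdentity("client1".getBytes());
client1.connect("tcp://localhost:5500");

// The DEALER announces itself as soon as it connects.
client1.send("HELLO");

// The ROUTER blocks here until the handshake arrives; the first frame
// is the client's identity, prepended by the ROUTER socket itself.
String id = rtr.recvStr();    // "client1"
String hello = rtr.recvStr(); // "HELLO"

// The connection is now known to be up, so reply using the identity
// extracted from the handshake (no empty delimiter frame is needed
// when talking to a DEALER).
rtr.sendMore(id);
rtr.send("Hello!");

System.out.println("Client Received: " + client1.recvStr());
ctx.destroy();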

Related

How do I make a client-server Java application to send messages on one port but receive them on another?

I am currently trying to make an application that will send messages to a server using one port, but will receive messages on another port. However, based on tutorials I have followed, it looks like the act of connecting to the server is where ports come into play, and my client is receiving and sending messages on the same port. How do I make it send on one port but receive on the other?
Here is the code that I think is relevant from the client side (I have included some things that seem unrelated because I think they would be altered by receiving on one port but sending on another; ignore the comment about replacing InetAddress, that is just me working on implementing this in a GUI):
public void startRunning() {
    try {
        connectToServer();
        setupStreams();
        whileChatting();
    } catch (EOFException eofException) {
        showMessage("\n Client terminated connection");
    } catch (IOException ioException) {
        ioException.printStackTrace();
    } finally {
        closeStuff();
    }
}

//connect to server
private void connectToServer() throws IOException {
    showMessage("Attempting connection... \n");
    connection = new Socket(InetAddress.getByName(serverIP), 480); //replace serverIP with ipTextField.getText or set serverIP to equal ipTextField.getText? Same with port number.
    showMessage("Connected to: " + connection.getInetAddress().getHostName());
}

//set up streams to send and receive messages
private void setupStreams() throws IOException {
    output = new ObjectOutputStream(connection.getOutputStream());
    output.flush();
    input = new ObjectInputStream(connection.getInputStream());
    showMessage("\n Streams are good! \n");
}

//while talking with server
private void whileChatting() throws IOException {
    ableToType(true);
    do {
        try {
            message = (String) input.readObject();
            showMessage("\n" + message);
        } catch (ClassNotFoundException classNotFoundException) {
            showMessage("\n Don't know that object type");
        }
    } while (!message.equals("SERVER - END"));
}

//send messages to server
private void sendMessage(String message) {
    try {
        output.writeObject("CLIENT - " + message);
        output.flush();
        showMessage("\nCLIENT - " + message);
    } catch (IOException ioException) {
        messageWindow.append("\n something messed up ");
    }
}

//change/update message window
private void showMessage(final String m) {
    SwingUtilities.invokeLater(
        new Runnable() {
            public void run() {
                messageWindow.append(m);
            }
        }
    );
}
EDIT/UPDATE: To help clarify some things, here is some more information. The device that sends the first message is connected to a sensor, and it sends information to the other device when that sensor detects something. The receiving device sends a message back on a different port telling the original sending device how to respond. Let's name these two devices the "reporter/action-taker" and the "decision-maker/commander".
If you want to use TCP/IP sockets, you can't use one socket to send and another to read. That's not what they are for.
If you use a centralized distributed algorithm (client/server communication), you have to set the server to listen on a single port with the ServerSocket class; the server then accepts clients through that socket.
Example:
ServerSocket listener = new ServerSocket(port);
while (true) {
    new ClientHandler(listener.accept()).start();
}
The server will listen on that port, and when a client tries to connect, if it is accepted the server launches its handler. The handler's constructor receives the Socket object used by the client as an argument and can then use it to get the writers and readers. The reader on this handler class will be paired with the writer on the client class and vice versa; maybe that's what you were looking for.
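For illustration, a minimal handler sketch under those assumptions (the class layout and the echo behavior are mine, not from the question):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

class ClientHandler extends Thread {
    private final Socket socket;

    ClientHandler(Socket socket) {
        this.socket = socket; // the socket returned by listener.accept()
    }

    public void run() {
        try (BufferedReader in = new BufferedReader(
                 new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            String line;
            // Both directions travel over the same connected socket: the
            // handler's reader is fed by the client's writer and vice versa.
            while ((line = in.readLine()) != null) {
                out.println("echo: " + line);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}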
Your question about using two ports in this manner is a bit strange. You state that you have a client and a server and that they should communicate on different ports.
Just to clarify, picture the server as a rack for jackets with several hooks in a row. Each port the server listens on represents a hook. In the client-server relationship, the client (or jacket) knows where to find its hook, but the hook is blind and has no idea where to find jackets.
Now, the client selects a port (a hook) and connects to it. The connection is like a pipeline with two pipes: one for the client to deliver data to the server, and the other to send data from the server back to the client. Once the connection is established, data can be transferred both ways. This means we only need one port open on the server to carry data both from the client to the server and in the opposite direction.
The reason for having only one open port on the server for clients to connect to is that holding a port open for incoming connections is hard to do on a regular client machine. The typical desktop user sits behind several firewalls that block incoming connections; if that weren't the case, clients would probably be hacked senseless by malicious viruses.
Moving on to the two-port solution: we could not really call this a client-server connection per se; it would be more like a peer-to-peer connection. If this is what you want to do, the application connecting first would have to start by telling the other application what IP and port to use for connecting back, and it should probably also provide some kind of token used to pair the new incoming connection when it connects back.
Note that such an implementation is usually not a good idea, as it complicates things a whole lot for what is simple data transfer between a client and a server application.

Trouble with UDP ports and DatagramSockets

I'm working on a project that is supposed to send a file from one machine to another using DatagramPackets and DatagramSockets. The implementation is supposed to mimic the TCP protocol: once the receiver gets a packet, it sends back an ACK to the sender, confirming the packet was delivered. My program works so far without making any checks for ACKs; it's implementing the ACK messages that I'm having trouble with. My receiver program shows that the ACKs are being sent, but the sender application is not getting them.
I keep getting an error when creating the socket: "java.net.BindException: Address already in use: Cannot bind". I'm confused, because nowhere else in the sender application have I specified the port. I simply use DatagramSocket socket = new DatagramSocket();
but I do use
DatagramPacket packet = new DatagramPacket(packetData, packetData.length, internetAddress, 49000);
socket.send(packet);
when sending packets.
I have tried removing the DatagramSocket declaration in my waitForACK() method and using the same DatagramSocket I used to send packets. But socket.receive(packet); will hang and never receive anything, because it hasn't been assigned a port to listen on.
This is my method to listen for ACKs:
public void waitForACK() {
    //listen for an ACK for a period of time
    //if ACK received, then break and send the next packet
    //if ACK not received, or timed out, resend the last packet
    //TODO: implement a timeout
    System.out.println("### Sender waiting for ACK");
    try {
        DatagramSocket receivingSocket = new DatagramSocket(49000);
        while (!ACKreceived) {
            byte[] buf = new byte[1500]; // actual Ethernet packet size is 1500 bytes
            // receive request
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            receivingSocket.receive(packet); //socket.receive(packet); <--
            byte[] packetData = Arrays.copyOf(packet.getData(), packet.getLength());
            ACKreceived = checkACK(packetData); //check whether the received packet contains an ACK message
        }
        System.out.println("### Sender received ACK");
    } catch (Exception e) {
        System.out.println("### never got ACK");
        System.out.println(e);
    }
}
I've also tried the version below, but the socket will hang and never actually receive anything, even though the application that receives the file reports successfully sending an ACK. I'm guessing that's because it does not know to receive the ACK on port 49000.
public void waitForACK() {
    //listen for an ACK for a period of time
    //if ACK received, then break and send the next packet
    //if ACK not received, or timed out, resend the last packet
    //TODO: implement a timeout
    System.out.println("### Sender waiting for ACK");
    try {
        while (!ACKreceived) {
            byte[] buf = new byte[1500]; // actual Ethernet packet size is 1500 bytes
            // receive request
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet); //<--- HANGS RIGHT HERE
            byte[] packetData = Arrays.copyOf(packet.getData(), packet.getLength());
            ACKreceived = checkACK(packetData); //check whether the received packet contains an ACK message
        }
        System.out.println("### Sender received ACK");
    } catch (Exception e) {
        System.out.println("### never got ACK");
        System.out.println(e);
    }
}
You're leaking sockets.
Don't create a new socket just to wait for an ACK. You should have exactly one DatagramSocket open for the life of the application.
Try using the netstat command to check whether another program (or even your own program) is active on the port. On Unix, netstat -lp run as root will show you; netstat exists on Windows too, with different command-line options.
Before we get into the problems with your code: Why is the client trying to listen on port 49000?
If you don't already realize this: the local port and the peer port do not have to be the same, and generally are not. When you call DatagramSocket(), you get an arbitrary local port assigned by the OS. The fact that you sent to 49000 doesn't change your local port. And if the other side of the connection just sends back to the tuple it received a packet from, the reply won't arrive at 49000; it will arrive at your local port.
If that's your problem, the fix is to use the second version (just use your existing socket to listen as well as to send), and then fix the other side (which you haven't shown us the code for) to send the ACK to the complete address tuple of the packet's sender, not to port 49000 on the sender's host, as sketched below.
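A hedged sketch of that receiver-side fix (the ACK payload is a placeholder; the key call is packet.getSocketAddress(), which captures the sender's actual host and ephemeral port):
// On the receiver: reply to the exact address tuple the packet came
// from, not to a hard-coded port on the sender's host.
byte[] buf = new byte[1500];
DatagramPacket packet = new DatagramPacket(buf, buf.length);
socket.receive(packet);

byte[] ack = "ACK".getBytes(); // placeholder ACK payload
DatagramPacket reply = new DatagramPacket(
        ack, ack.length, packet.getSocketAddress());
socket.send(reply);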
If you realize that, but think that both sides need to have local port 49000 for some reason… well, they probably don't. Generally, a protocol needs one side (the "server") to have a well-known port to connect to, but the other side (the "client") doesn't need that. That's why you can use DatagramSocket() instead of DatagramSocket(49000) on the client and things work.
Again, same fix.
In the rare cases where both sides really do need a well-known port (e.g., so you can explicitly open it in your company's internal firewalls), you almost certainly want the sending to also happen on that port.
So, instead of creating a DatagramSocket() to send from and a DatagramSocket(49000) to listen on, just create a DatagramSocket(49000) in the first place and use it for both.
However, note that this solution, like any solution that uses a fixed port, has two additional problems:
First, if the client and server both want to bind port 49000, they can't both run on the same machine. You can renumber one of them to 49001, or just accept that.
Second, if you expect to start and stop the client frequently, it's often going to try to bind port 49000 while the OS still has a socket for that port in TIME_WAIT state, so you're going to get a bind error. This is what SO_REUSEADDR is for; use it.
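To make the single-socket idea concrete, here is a minimal sketch under those assumptions (one fixed-port socket used for both directions, with SO_REUSEADDR set before binding):
import java.net.DatagramSocket;
import java.net.InetSocketAddress;

// One socket for the life of the application: it both sends data and
// receives ACKs, so replies to its fixed port always reach it.
DatagramSocket socket = new DatagramSocket(null); // create unbound
socket.setReuseAddress(true); // must be set before bind()
socket.bind(new InetSocketAddress(49000));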
What if you really do want to use an arbitrary-port sender on the client, but a fixed-port listener? There are some cases where that makes sense, but unless you can explain why you really need this, you don't have one.
If you do, then, and only then, could you use something like your first version. But you still probably want to create the listener socket once, not each time you listen for ACKs; it just should be a different attribute from the sending socket. (And of course you still need to deal with the same things as in the last section.)
And if you really do want to create a new listener socket for each ACK, then you have to make sure you close it immediately, rather than waiting for the Java GC and the OS to collectively get around to closing it for you; otherwise, the next time you wait for an ACK you're likely to get a bind error, because the old listener socket is still bound to the port.

How do I communicate with all threads on a Multithreaded server?

Ok. I'm trying to grasp some Java multithreading concepts. I know how to set up a multi-client/server solution: the server will start a new thread for every connected client.
Conceptually like this...
The loop in Server.java:
while (true) {
    Socket socket = serverSocket.accept();
    System.out.println(socket.getInetAddress().getHostAddress() + " connected");
    new ClientHandler(socket).start();
}
The ClientHandler.java loop is:
while (true) {
    try {
        myString = (String) objectInputStream.readObject();
    } catch (ClassNotFoundException | IOException e) {
        break;
    }
    System.out.println(myClientAddress + " sent " + myString);
    try {
        objectOutputStream.writeObject(someValueFromTheServer);
        objectOutputStream.flush();
    } catch (IOException e) {
        return;
    }
}
This is just a concept sketch. Now, I want the server to be able to send the same object or data, at the same time, to all clients.
So somehow I must get the Server to speak to every single thread. Let's say I want the server to generate random numbers with a certain time interval and send them to the clients.
Should I use properties in the Server that the threads can access? Is there a way to just call a method in the running threads from the main thread? I have no clue where to go from here.
Bonus question:
I have another problem too, which might be hard to see in this code: I want every client to be able to receive messages from the server AND send messages to the server independently. Right now the client waits for my GUI to give it something to send; after sending, the client waits for the server to send something back, which it then gives to the GUI. You can see that my ClientHandler has the same problem.
This means that while the client is waiting for the server to send something, it cannot send anything new to the server. Also, while the client is waiting for the GUI to give it something to send, it cannot receive from the server.
I have only ever made a server/client app where the server processes the data it receives from the client and then sends the processed data back.
Could anyone point me in a direction with this? I think I need help thinking about it conceptually. Should I have two different ClientHandlers, one for the in-stream and one for the out-stream? I'm fumbling in the dark here.
"Is there a way to just call a method in the running threads from the main thread?"
No.
One simple way to solve your problem would be to have the "server" thread send the broadcast to every client. Instead of simply creating new ClientHandler objects and letting them go (as in your example), it could keep all of the active handlers in a collection. When it's time to send a broadcast message, it could iterate over all of them and call a sendBroadcast() method on each one; a sketch follows this paragraph.
Of course, you would have to synchronize each client thread's use of its output stream with the server thread's use of the same stream. You might also have to deal with client connections that don't last forever (their handler objects must somehow be removed from the collection).
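A minimal sketch of that idea, assuming each handler owns the ObjectOutputStream for its client (sendBroadcast() and the collection are illustrative names, not an established API):
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

class Server {
    // Safe to iterate from the server thread while handlers add and
    // remove themselves as clients come and go.
    private final List<ClientHandler> handlers = new CopyOnWriteArrayList<>();

    void register(ClientHandler h)   { handlers.add(h); }
    void unregister(ClientHandler h) { handlers.remove(h); }

    // Called from the server's own thread, e.g. on a timer that
    // generates the random numbers.
    void broadcast(Object message) {
        for (ClientHandler h : handlers) {
            h.sendBroadcast(message);
        }
    }
}

class ClientHandler extends Thread {
    private ObjectOutputStream out; // created in run() from the socket

    // Synchronized so the server thread's broadcasts don't interleave
    // with this handler's own writes on the same stream.
    synchronized void sendBroadcast(Object message) {
        try {
            out.writeObject(message);
            out.flush();
        } catch (IOException e) {
            // connection is gone; the handler should unregister itself
        }
    }

    // run() would loop on readObject() as in the question, routing its
    // own replies through the same synchronized method.
}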

when to close and reopen socket after HL7 message sent

I am trying to open a basic connection to an HL7 server where I send a request and get the ACK response. This will be done continuously.
If this is being done continuously, when do I close the socket? Am I implementing this correctly, in this case?
If I close the socket, how do I open it again? The javadocs for ConnectionHub indicates the following:
attach(java.lang.String host, int port, Parser parser,
java.lang.Class<? extends LowerLayerProtocol> llpClass)
Returns a Connection to the given address, opening this Connection if necessary.
However, in real life, it will not open a new connection if it was already closed.
Patient patient = appt.getPatient();
Parser parser = new GenericParser();
Message hl7msg = parser.parse(wlp.getORMString(appt));

//Connect to listening servers
ConnectionHub connectionHub = ConnectionHub.getInstance();
// A Connection object represents a socket attached to an HL7 server
Connection connection = connectionHub.attach(serverIP, serverPort,
        new PipeParser(), MinLowerLayerProtocol.class);
if (!connection.isOpen()) {
    System.out.println("CONNECTION is CLOSED");
    connection = connectionHub.attach(serverIP, serverPort, new PipeParser(),
            MinLowerLayerProtocol.class);
    if (!connection.isOpen()) {
        System.out.println("CONNECTION is still CLOSED");
    }
}
Initiator initiator = connection.getInitiator();
Message response = initiator.sendAndReceive(hl7msg);
String responseString = parser.encode(response);
System.out.println("Received response:\n" + responseString);
connection.close();
Result:
The first pass goes through perfectly, with the request sent and an ACK received. Any subsequent call to this method results in a "java.net.SocketException: Socket closed" on the client side.
If I remove the connection.close() call, it will run fine for a certain amount of time, but then the socket will close itself.
If you are communicating via HL7 2.x, the expected behavior on the socket is to never disconnect: you allocate the connection and keep the socket active. Said another way, an HL7 application does not act like a web browser, which connects as needed and disconnects when done; rather, both ends work to keep the socket continuously connected. Most applications will be annoyed if you disconnect, and most integration engines have alerts that will fire if you are disconnected for too long.
Once the socket is connected, you need to use the HL7 Minimum Lower Layer Protocol (MLLP or MLP) to frame the HL7 2.x content. If you are sending data, you should wait for an HL7 acknowledgment before you send the next message. If you are receiving data, you should generate the HL7 ACK.
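In code terms, that suggests attaching once and reusing the Connection for every message, rather than closing it after each exchange. A rough sketch using the HAPI calls already shown above (error handling omitted; outboundMessages stands in for however you produce messages):
// Attach once, at startup; keep the Connection for the life of the app.
ConnectionHub connectionHub = ConnectionHub.getInstance();
Connection connection = connectionHub.attach(serverIP, serverPort,
        new PipeParser(), MinLowerLayerProtocol.class);
Initiator initiator = connection.getInitiator();

// Reuse the same connection for every message, waiting for each ACK
// before sending the next message.
for (Message msg : outboundMessages) {
    Message ack = initiator.sendAndReceive(msg);
    // inspect the ACK (e.g. MSA-1) before proceeding, if desired
}

// Close only when the application shuts down.
connection.close();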
References:
MLP - http://www.hl7standards.com/blog/2007/05/02/hl7-mlp-minimum-layer-protocol-defined
Acks - http://www.corepointhealth.com/resource-center/hl7-resources/hl7-acknowledgement

Java Sockets and Dropped Connections

What's the most appropriate way to detect if a socket has been dropped or not? Or whether a packet did actually get sent?
I have a library for sending Apple Push Notifications to iPhones through the Apple gateways (available on GitHub). Clients need to open a socket and send a binary representation of each message; unfortunately, Apple doesn't return any acknowledgement whatsoever. The connection can be reused to send multiple messages as well. I'm using plain Java Socket connections. The relevant code is:
Socket socket = socket(); // returns a reused open socket, or a new one
socket.getOutputStream().write(m.marshall());
socket.getOutputStream().flush();
logger.debug("Message \"{}\" sent", m);
In some cases, if a connection is dropped while a message is sent, or right before, Socket.getOutputStream().write() still finishes successfully. I expect this is because the TCP window isn't exhausted yet.
Is there a way I can tell for sure whether a packet actually got onto the network or not? I experimented with the following two solutions:
Insert an additional socket.getInputStream().read() operation with a 250ms timeout. This forces a read operation that fails when the connection was dropped, but otherwise hangs for 250ms (sketched below).
Set the TCP send buffer size (e.g. Socket.setSendBufferSize()) to the message's binary size.
Both methods work, but they significantly degrade the quality of service: throughput drops from 100 messages/second to about 10 messages/second at most.
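For concreteness, the first probe looks roughly like this (a sketch of the 250ms read described above, not code from the library):
// Probe the connection: a timed-out read means the peer hasn't closed
// or reset the connection, but it costs up to 250ms per message.
socket.setSoTimeout(250);
try {
    int b = socket.getInputStream().read();
    if (b == -1) {
        // peer closed the connection gracefully
    }
} catch (java.net.SocketTimeoutException e) {
    // no data and no error within 250ms: connection still looks alive
} // any other IOException indicates a dropped connection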
Any suggestions?
UPDATE:
Challenged by multiple answers questioning the possibility of the described behavior, I constructed "unit" tests demonstrating it. Check out the unit cases at Gist 273786.
Both unit tests have two threads, a server and a client. The server closes the connection while the client is sending data, without any IOException being thrown. Here is the main method:
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.Semaphore;

public static void main(String[] args) throws Throwable {
    final int PORT = 8005;
    final int FIRST_BUF_SIZE = 5;
    final Throwable[] errors = new Throwable[1];
    final Semaphore serverClosing = new Semaphore(0);
    final Semaphore messageFlushed = new Semaphore(0);

    class ServerThread extends Thread {
        public void run() {
            try {
                ServerSocket ssocket = new ServerSocket(PORT);
                Socket socket = ssocket.accept();
                InputStream s = socket.getInputStream();
                s.read(new byte[FIRST_BUF_SIZE]);
                messageFlushed.acquire();
                socket.close();
                ssocket.close();
                System.out.println("Closed socket");
                serverClosing.release();
            } catch (Throwable e) {
                errors[0] = e;
            }
        }
    }

    class ClientThread extends Thread {
        public void run() {
            try {
                Socket socket = new Socket("localhost", PORT);
                OutputStream st = socket.getOutputStream();
                st.write(new byte[FIRST_BUF_SIZE]);
                st.flush();
                messageFlushed.release();
                serverClosing.acquire(1);
                System.out.println("writing new packets");
                // sending more packets while the server has already
                // closed the connection
                st.write(32);
                st.flush();
                st.close();
                System.out.println("Finished writing");
            } catch (Throwable e) {
                errors[0] = e;
            }
        }
    }

    Thread thread1 = new ServerThread();
    Thread thread2 = new ClientThread();
    thread1.start();
    thread2.start();
    thread1.join();
    thread2.join();

    if (errors[0] != null)
        throw errors[0];
    System.out.println("Run without any errors");
}
[Incidentally, I also have a concurrency testing library that makes the setup a bit better and clearer. Check out the sample at the gist as well.]
When run I get the following output:
Closed socket
writing new packets
Finished writing
Run without any errors
This may not be of much help to you, but technically both of your proposed solutions are incorrect. OutputStream.flush() and whatever other API calls you can think of are not going to do what you need.
The only portable and reliable way to determine whether a packet has been received by the peer is to wait for a confirmation from the peer. This confirmation can either be an actual response or a graceful socket shutdown. End of story: there really is no other way, and this is not Java specific; it is fundamental network programming.
If this is not a persistent connection - that is, if you just send something and then close the connection - the way you do it is to catch all IOExceptions (any of them indicates an error) and perform a graceful socket shutdown (sketched after the steps below):
1. socket.shutdownOutput();
2. wait for inputStream.read() to return -1, indicating the peer has also shutdown its socket
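A minimal sketch of that shutdown sequence on the sending side, assuming socket is the connected Socket:
// 1. Tell the peer we are done writing; buffered data is still sent.
socket.shutdownOutput();

// 2. Drain until EOF: read() returning -1 means the peer has received
//    everything and has shut down its side gracefully.
InputStream in = socket.getInputStream();
while (in.read() != -1) {
    // discard any remaining bytes from the peer
}
socket.close();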
After much trouble with dropped connections, I moved my code to use the enhanced notification format, which pretty much means you change your packet to the layout sketched below.
This way, when an error happens, Apple will not simply drop the connection; it will write a feedback code to the socket.
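For reference, a hedged sketch of that enhanced frame as I understand Apple's legacy binary interface (field layout from memory, so verify against Apple's documentation; the method and parameter names are mine):
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

byte[] buildEnhancedFrame(int identifier, int expiryEpochSeconds,
                          byte[] deviceToken, byte[] payloadBytes) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(baos);
    out.writeByte(1);                    // command: 1 = enhanced notification
    out.writeInt(identifier);            // echoed back in error responses
    out.writeInt(expiryEpochSeconds);    // when APNs may discard the message
    out.writeShort(deviceToken.length);  // token length (32 bytes)
    out.write(deviceToken);
    out.writeShort(payloadBytes.length); // payload length
    out.write(payloadBytes);
    return baos.toByteArray();
}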
If you're sending information to Apple over TCP/IP, you have to be receiving acknowledgements. However, you stated:
Apple doesn't return any acknowledgement whatsoever
What do you mean by this? TCP/IP guarantees delivery; therefore the receiver MUST acknowledge receipt. It does not guarantee when the delivery will take place, however.
If you send a notification to Apple and you break your connection before receiving the ACK, there is no way to tell whether you were successful or not, so you simply must send it again. If pushing the same information twice is a problem, or is not handled properly by the device, then the solution is to fix the device's handling of duplicate push notifications: there's nothing you can do on the pushing side.
Comment Clarification/Question
Ok. The first part of what you understand answers the second part: only the packets that have received ACKs have been sent and received properly. I'm sure we could think of some very complicated scheme for keeping track of each individual packet ourselves, but TCP is supposed to abstract this layer away and handle it for you. On your end you simply have to deal with the multitude of failures that could occur (in Java, if any of these occur, an exception is raised). If there is no exception, the data you just sent is guaranteed to be delivered by the TCP/IP protocol.
Is there a situation where data is seemingly "sent" but not guaranteed to be received, and yet no exception is raised? The answer should be no.
Examples
Nice examples; this clarifies things quite a bit. I would have thought an error would be thrown. In the example posted, an error is thrown on the second write, but not the first. This is interesting behavior... and I wasn't able to find much information explaining why it behaves like this. It does, however, explain why we must develop our own application-level protocols to verify delivery.
It looks like you are correct that, without a protocol for confirmation, there is no guarantee the Apple device will receive the notification. Apple also only queues the last message. Looking a little at the service, I was able to determine that it is more a convenience for the customer; it cannot be used to guarantee delivery and must be combined with other methods. I read this from the following source:
http://blog.boxedice.com/2009/07/10/how-to-build-an-apple-push-notification-provider-server-tutorial/
It seems like the answer is no, you cannot tell for sure. You may be able to use a packet sniffer like Wireshark to tell whether the packet was sent, but that still won't guarantee it was received and delivered to the device, due to the nature of the service.
