Explanation
I'm revisiting the project I used to teach myself Java.
In this project I want to be able to stop the server from accepting new clients and then perform a few 'cleanup' operations before exiting the JVM.
Originally I used the following style for the client accept/handle loop:
//Exit loop by changing running to false and waiting up to 2 seconds
ServerSocket serverSocket = new ServerSocket(123);
serverSocket.setSoTimeout(2000);
Socket client;
while (running) { // 'running' is a private static boolean
    try {
        client = serverSocket.accept();
        createComms(client); // Handles connection in a new thread
    } catch (IOException ex) {
        // Do nothing
    }
}
In this approach a SocketTimeoutException will be thrown every 2 seconds if no clients are connecting, and I don't like relying on exceptions for normal operation unless it's necessary.
I've been experimenting with the following style to try and minimise relying on Exceptions for normal operation:
//Exit loop by calling serverSocket.close()
ServerSocket serverSocket = new ServerSocket(123);
Socket client;
try {
    while ((client = serverSocket.accept()) != null) {
        createComms(client); // Handles connection in a new thread
    }
} catch (IOException ex) {
    // Do nothing
}
In this case my intention is that an Exception will only be thrown when I call serverSocket.close() or if something goes wrong.
Question
Is there any significant difference in the two approaches, or are they both viable solutions?
I'm totally self-taught, so I have no idea if I've re-invented the wheel for no reason or if I've come up with something good.
I've been lurking on SO for a while; this is the first time I've not been able to find what I need already.
Please feel free to suggest completely different approaches =3
The problem with the second approach is that the server will die if an exception occurs inside the while loop.
The first approach is better, though you might want to log the exceptions, for example with Log4j.
while (running) {
    try {
        client = serverSocket.accept();
        createComms(client);
    } catch (IOException ex) {
        // Log errors
        LOG.warn(ex, ex);
    }
}
Non-blocking IO is what you're looking for. Instead of blocking until a SocketChannel (the non-blocking alternative to Socket) is returned, accept() will return null if there is currently no connection to accept.
This will allow you to remove the timeout, since nothing will be blocking.
You could also register a Selector, which informs you when there is a connection to accept or when there is data to read. I have a small example of that here, as well as a non-blocking ServerSocket that doesn't use a selector.
EDIT: In case something goes wrong with my link, here is the example of non-blocking IO, without a selector, accepting a connection:
class Server {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel ssc = ServerSocketChannel.open();
        ssc.socket().bind(new InetSocketAddress(123)); // bind to a port so accept() can work
        ssc.configureBlocking(false);
        while (true) {
            SocketChannel sc = ssc.accept(); // returns null if no connection is pending
            if (sc != null) {
                // handle channel
            }
        }
    }
}
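For reference, here is a minimal sketch of the Selector-based variant mentioned above; the port number is arbitrary and the read-handling is left as a comment, so treat this as an illustration rather than a complete server:
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

class SelectorServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel ssc = ServerSocketChannel.open();
        ssc.socket().bind(new InetSocketAddress(123));
        ssc.configureBlocking(false);
        ssc.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select(); // blocks until at least one registered channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = ssc.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // read from (SocketChannel) key.channel() here
                }
            }
        }
    }
}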
The second approach is better (for the reasons you mentioned: relying on exceptions in normal program flow is not a good practice), although your code suggests that serverSocket.accept() can return null, which it cannot. The method can throw all kinds of exceptions though (see the API docs). You might want to catch those exceptions: a server should not go down without a very good reason.
I have been using the second approach with good success, but added some more code to make it more stable/reliable: see my take on it here (unit tests here). One of the 'cleanup' tasks to consider is giving the threads that handle client communications some time to finish or to properly inform the client that the connection will be closed. This prevents situations where the client is not sure whether the server completed an important task before the connection was suddenly lost/closed.
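As a rough illustration of that cleanup phase (this is a sketch, not the linked implementation), closing the ServerSocket breaks the accept loop, and an assumed ExecutorService running the client handlers is then given a grace period to finish. The handlerPool and the 5-second timeout are placeholders:
// Requires java.io.IOException, java.net.ServerSocket and java.util.concurrent.*
void shutdown(ServerSocket serverSocket, ExecutorService handlerPool) {
    try {
        serverSocket.close(); // accept() now throws, ending the accept loop
    } catch (IOException e) {
        // already closed or failed to close; nothing more to do here
    }
    handlerPool.shutdown(); // stop accepting new handler tasks
    try {
        // give running handlers time to finish or to notify their clients
        if (!handlerPool.awaitTermination(5, TimeUnit.SECONDS)) {
            handlerPool.shutdownNow(); // give up and interrupt whatever is left
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        handlerPool.shutdownNow();
    }
}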
Related
I am using java.net.DatagramSocket to send UDP packets to a statsd server from a Google App Engine servlet. This generally works; however, we periodically see the following exception:
IOException - Socket is closed: Unknown socket_descriptor..
When these IOExceptions occur, calling DatagramSocket.isClosed() returns false.
This issue happens frequently enough that it is concerning, and although I've put in place some workarounds (allocate a new socket and use a DeferredTask queue to retry), it would be good to understand the underlying reason for these errors.
The Google docs mention, "Sockets may be reclaimed after 2 minutes of inactivity; any socket operation keeps the socket alive for a further 2 minutes." It is unclear to me how this would play into UDP datagrams; however, one suspicion I have is that this is related to GAE instance lifecycle in some way.
My code (sanitized and extracted) looks like:
DatagramSocket _socket;
void init() {
_socket = new DatagramSocket();
}
void send() {
DatagramPacket packet = new DatagramPacket(<BYTES>, <LENGTH>, <HOST>, <PORT>);
_socket.send(packet);
}
Appreciate any feedback on this!
The approach taken to work around this issue was simply to manage a single static DatagramSocket instance via a couple of helper methods, getSocket() and releaseSocket(): when a send throws an IOException, the socket is released, and the next call to getSocket() allocates a fresh one. Not shown in this code is the retry logic for the failed socket.send(). Under load testing, this seems to work reliably.
try {
    DatagramPacket packet = new DatagramPacket(<BYTES>, <LENGTH>, <HOST>, <PORT>);
    getSocket().send(packet);
} catch (IOException ioe) {
    releaseSocket();
}
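The helper methods themselves are not shown above; a minimal sketch of what they might look like (lazily allocating, and synchronized because the instance is shared) is:
private static DatagramSocket _socket;

static synchronized DatagramSocket getSocket() throws SocketException {
    if (_socket == null) {
        _socket = new DatagramSocket(); // fresh socket on first use or after a release
    }
    return _socket;
}

static synchronized void releaseSocket() {
    if (_socket != null) {
        _socket.close(); // discard the broken socket; the next getSocket() recreates it
        _socket = null;
    }
}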
I have implemented a socket with a server and a single client. The way it's structured currently, the server closes whenever the client closes. My intent is to have the server run until manual shutdown instead.
Here's the server:
public static void main(String args[])
{
    try
    {
        ServerSocket socket = new ServerSocket(17);
        System.out.println("connect...");
        Socket s = socket.accept();
        System.out.println("Client Connected.");
        while (true)
        {
            // work with server
        }
    }
    catch (IOException e)
    {
        e.printStackTrace();
    }
}
I've tried surrounding the entire try/catch block with another while(true) loop, but it does nothing; the same issue persists. Any ideas on how to keep the server running?
It looks like what's going to happen in your code is that you connect to a client, loop forever over interactions with that client, and then when something disrupts the connection (the client closes it cleanly, or it's interrupted rudely, e.g. someone unplugs the network cable) you get an IOException, which sends you down to the catch clause; that runs, and execution then continues after it (and I'm guessing "after that" is the end of your main())...
So what you need to do is, from that point, loop back to the accept() call so that you can accept another, new client connection. For example, here's some pseudocode:
create server socket
while (1) {
    try {
        accept client connection
        set up your I/O streams
        while (1) {
            interact with client until connection closes
        }
    } catch (...) {
        handle errors
    }
} // loop back to the accept call here
Also, notice how the try-catch block in this case is situated so that errors will be caught and handled within the accept-loop. That way an error on a single client connection will send you back to accept() instead of terminating the server.
Keep a single server socket outside of the loop -- the loop needs to start before accept(). Just put the ServerSocket creation into a separate try/catch block. Otherwise, you'll open a new socket that will try to listen on the same port, but only a single connection has been closed, not the serverSocket. A server socket can accept multiple client connections.
When that works, you probably want to start a new Thread on accept() to support multiple clients. The simplest way to do so is usually to add a ClientHandler class that implements the Runnable interface. And in the client you probably want to put reading from the socket into a separate thread, too.
Is this homework / some kind of assignment?
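A minimal sketch of the accept loop with a per-client handler thread, along the lines described above (the handler body is a placeholder, and port 17 is taken from the question):
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class Server {
    public static void main(String[] args) {
        try (ServerSocket serverSocket = new ServerSocket(17)) {
            while (true) {
                try {
                    Socket client = serverSocket.accept();          // wait for the next client
                    new Thread(new ClientHandler(client)).start();  // handle it on its own thread
                } catch (IOException e) {
                    e.printStackTrace(); // a failed accept should not kill the server
                }
            }
        } catch (IOException e) {
            e.printStackTrace(); // could not open the server socket at all
        }
    }
}

class ClientHandler implements Runnable {
    private final Socket client;

    ClientHandler(Socket client) {
        this.client = client;
    }

    @Override
    public void run() {
        try (Socket c = client) {
            // work with this client's streams here
        } catch (IOException e) {
            e.printStackTrace(); // an error on one client does not affect the others
        }
    }
}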
I'm currently involved in a project at school in which we are building a communication system to be used on Android phones. For this, we will be using a server which opens sockets towards all clients, making them communicate.
I've done several chat applications before without any problems with sockets or thread handling, but this time, for some reason, it boggles my mind.
The problem is that the application starts to listen as soon as I initiate the ServerSocket object, serverSocket = new ServerSocket(5000), and not at the serverSocket.accept().
Why is that?
As soon as I use the following method:
public void startListen(String port) {
    try {
        serverSocket = new ServerSocket(Integer.parseInt(port));
        portField.setEditable(false);
    } catch (IOException e) {
        printMessage("Failed to initiate serverSocket: " + e.toString());
    }
}
The port is showing up as Listening in the command prompt (using netstat). If I don't call it, the port is not listed as listening.
TCP 0.0.0.0:5000 Computer:0 LISTENING
So, is there anything I'm missing when using the ServerSocket object? My older programs using ServerSocket don't start listening until I call accept().
If you're talking about the Java ServerSocket, there's no listen method for it, presumably since it's distinct from a client-side socket. In that case, once it has a port number (either in the constructor or as part of a bind), it can just go ahead and listen automagically.
The reason "regular" sockets (a la BSD) have a listen is because the same type is used for client and server, so you need to decide yourself how you're going to use it. That's not the case with ServerSocket since, well, it's a server socket :-)
To be honest, I'm not sure why you'd care whether or not the listening is active before accept is called. It's the "listen" call (which is implicit in this class) that should mark your server open for business. At that point, the communication layers should start allowing incoming calls to be queued up waiting for you to call accept. That's generally the way they work, queuing the requests in case your program is a little slow in accepting them.
As to why it does this: it's actually supposed to, according to the source code. In the OpenJDK 6 source/share/classes/java/net/ServerSocket.java, the constructors all end up calling a single constructor:
public ServerSocket(int port, int backlog, InetAddress bindAddr)
        throws IOException {
    setImpl();
    if (port < 0 || port > 0xFFFF)
        throw new IllegalArgumentException(
            "Port value out of range: " + port);
    if (backlog < 1)
        backlog = 50;
    try {
        bind(new InetSocketAddress(bindAddr, port), backlog);
    } catch(SecurityException e) {
        close();
        throw e;
    } catch(IOException e) {
        close();
        throw e;
    }
}
And that call to bind (same file) follows:
public void bind(SocketAddress endpoint, int backlog) throws IOException {
    if (isClosed())
        throw new SocketException("Socket is closed");
    if (!oldImpl && isBound())
        throw new SocketException("Already bound");
    if (endpoint == null)
        endpoint = new InetSocketAddress(0);
    if (!(endpoint instanceof InetSocketAddress))
        throw new IllegalArgumentException("Unsupported address type");
    InetSocketAddress epoint = (InetSocketAddress) endpoint;
    if (epoint.isUnresolved())
        throw new SocketException("Unresolved address");
    if (backlog < 1)
        backlog = 50;
    try {
        SecurityManager security = System.getSecurityManager();
        if (security != null)
            security.checkListen(epoint.getPort());
        getImpl().bind(epoint.getAddress(), epoint.getPort());
        getImpl().listen(backlog);
        bound = true;
    } catch(SecurityException e) {
        bound = false;
        throw e;
    } catch(IOException e) {
        bound = false;
        throw e;
    }
}
The relevant bit there is:
getImpl().bind(epoint.getAddress(), epoint.getPort());
getImpl().listen(backlog);
meaning that both the bind and listen are done at the lower level when you create the socket.
So the question is not so much "why is it suddenly appearing in netstat?" but "why wasn't it appearing in netstat before?"
I'd probably put that down to a mis-read on your part, or a not-so-good implementation of netstat. The former is more likely unless you were specifically testing for a socket you hadn't called accept on, which would be unlikely.
I think you have a slightly wrong idea of the purpose of accept. Liken a ServerSocket to a queue and accept to a blocking dequeue operation. The socket enqueues incoming connections as soon as it is bound to a port and the accept method dequeues them at its own pace. So yes, they could have named accept better, something less confusing.
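A small experiment illustrating the queue behaviour (the port number is arbitrary): the client connects before accept() is ever called, and the connection still completes because the OS queues it in the listen backlog.
import java.net.ServerSocket;
import java.net.Socket;

public class BacklogDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(5000); // bound and listening immediately
        // Connect before accept() has ever been called: the handshake completes
        // because the OS queues the connection in the listen backlog.
        Socket client = new Socket("localhost", 5000);
        Thread.sleep(2000); // no accept() running during this time
        Socket accepted = server.accept(); // dequeues the already-established connection
        System.out.println("Accepted: " + accepted.getRemoteSocketAddress());
        client.close();
        accepted.close();
        server.close();
    }
}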
A key reason may be myServerSocket.setSoTimeout(). The accept() call blocks - except if you define a timeout before calling it, then it only blocks for that duration and afterwards, harmlessly (i.e. ServerSocket is still valid), throws a SocketTimeoutException.
This way, the thread stays under your control... but what happens in those milliseconds while you're not in your temporarily blocking accept() call? Will clients find a listening port or not? That's why it's a good thing the accept() call is not required for the port to be listening.
The problem is that the application starts to listen as soon as I initiate the ServerSocket object, serverSocket = new ServerSocket(5000), and not at the serverSocket.accept().
I'm thankful for this question (-> upvote) because it's exactly this behavior I wanted to know about, without having to do the experiment. The Google term was 'java serversocket what happens if trying to connect without accept'; this question was in the link list of the first hit.
This question has no doubt been asked in various forms in the past, but not so much for a specific scenario.
What is the most correct way to stop a Thread that is blocked waiting to receive a network message over UDP?
For example, say I have the following Thread:
public class ClientDiscoveryEngine extends Thread {

    private final int PORT;
    private DatagramSocket socket;

    public ClientDiscoveryEngine(final int portNumber) {
        PORT = portNumber;
    }

    @Override
    public void run() {
        try {
            socket = new DatagramSocket(PORT);
            while (true) {
                final byte[] data = new byte[256];
                final DatagramPacket packet = new DatagramPacket(data, data.length);
                socket.receive(packet);
            }
        } catch (SocketException e) {
            // do stuff 1
        } catch (IOException e) {
            // do stuff 2
        }
    }
}
Now, would the more correct way be using the interrupt() method? For example adding the following method:
@Override
public void interrupt() {
    super.interrupt();
    // flip some state?
}
My only concern is: isn't socket.receive() a non-interruptible blocking method? One way I have thought of would be to implement the interrupt method as above, call socket.close() in that method, and then cater for it in the run method in the catch block for SocketException. Or maybe instead of while(true), use some state that gets flipped in the interrupt method. Is this the best way? Or is there a more elegant way?
Thanks
The receive method doesn't seem to be interruptible. You could close the socket; the javadoc says:
Any thread currently blocked in receive(java.net.DatagramPacket) upon this socket will throw a SocketException.
You could also use setSoTimeout to make the receive method block only for a small amount of time. After the method has returned, your thread can check whether it has been interrupted, and then retry receiving for another small amount of time.
Read this answer Interrupting a thread that waits on a blocking action?
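A minimal sketch of the close-based approach, adapted from the class in the question (the packet handling is left as a comment):
import java.io.IOException;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.SocketException;

public class ClientDiscoveryEngine extends Thread {

    private final int port;
    private volatile DatagramSocket socket;

    public ClientDiscoveryEngine(int port) {
        this.port = port;
    }

    @Override
    public void run() {
        try {
            socket = new DatagramSocket(port);
            while (!isInterrupted()) {
                byte[] data = new byte[256];
                DatagramPacket packet = new DatagramPacket(data, data.length);
                socket.receive(packet); // blocks until a packet arrives or the socket is closed
                // handle packet here
            }
        } catch (SocketException e) {
            // expected when interrupt() closes the socket while receive() is blocked
        } catch (IOException e) {
            // other I/O failure
        }
    }

    @Override
    public void interrupt() {
        super.interrupt();
        DatagramSocket s = socket;
        if (s != null) {
            s.close(); // forces the blocked receive() to throw a SocketException
        }
    }
}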
To stop a thread, you should not use either interrupt or stop in Java. The best way, as you suggested at the end of your question, is to have the loop inside the run method controlled by a flag that you can set as needed.
Here is an old link about this:
http://download.oracle.com/javase/1.4.2/docs/guide/misc/threadPrimitiveDeprecation.html
Other ways of stopping a thread are deprecated and don't provide as much control as this one. Also, this may have changed a bit with executor services; I haven't had time to learn much about them yet.
Also, if you want to avoid your thread being blocked in some IO state waiting on a socket, you should give your socket a connection and read timeout (method setSoTimeout).
Regards,
Stéphane
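A sketch of the flag-plus-setSoTimeout pattern described above; PORT is a placeholder, and the flag is volatile so the change made by another thread is visible to the receiving thread:
private volatile boolean running = true;

public void run() {
    DatagramSocket socket = null;
    try {
        socket = new DatagramSocket(PORT); // PORT is a placeholder
        socket.setSoTimeout(500); // wake up twice a second to re-check the flag
        byte[] data = new byte[256];
        while (running) {
            DatagramPacket packet = new DatagramPacket(data, data.length);
            try {
                socket.receive(packet);
                // handle packet here
            } catch (SocketTimeoutException e) {
                // no packet within the timeout; loop around and re-check 'running'
            }
        }
    } catch (IOException e) {
        // socket could not be created or another I/O error occurred
    } finally {
        if (socket != null) {
            socket.close();
        }
    }
}

public void shutdown() {
    running = false; // the loop exits within one timeout period
}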
This is one of the easier ones. If it's blocked on a UDP socket, send the socket a UDP message that instructs the receiving thread to 'stop'.
Rgds,
Martin
What's the most appropriate way to detect if a socket has been dropped or not? Or whether a packet did actually get sent?
I have a library for sending Apple Push Notifications to iPhones through the Apple gateways (available on GitHub). Clients need to open a socket and send a binary representation of each message; unfortunately, Apple doesn't return any acknowledgement whatsoever. The connection can be reused to send multiple messages as well. I'm using plain Java Socket connections. The relevant code is:
Socket socket = socket(); // returns an reused open socket, or a new one
socket.getOutputStream().write(m.marshall());
socket.getOutputStream().flush();
logger.debug("Message \"{}\" sent", m);
In some cases, if a connection is dropped while a message is being sent, or right before, Socket.getOutputStream().write() still finishes successfully. I expect this is because the TCP send window isn't exhausted yet.
Is there a way that I can tell for sure whether a packet actually got in the network or not? I experimented with the following two solutions:
Insert an additional socket.getInputStream().read() operation with a 250ms timeout. This forces a read operation that fails when the connection was dropped, but hangs otherwise for 250ms.
set the TCP sending buffer size (e.g. Socket.setSendBufferSize()) to the message binary size.
Both of these methods work, but they significantly degrade the quality of service; throughput drops from around 100 messages/second to about 10 messages/second at most.
Any suggestions?
UPDATE:
Challenged by multiple answers questioning the possibility of the described behavior, I constructed "unit" tests of what I'm describing. Check out the test cases at Gist 273786.
Both unit tests have two threads, a server and a client. The server closes its end while the client is still sending data, yet no IOException is thrown. Here is the main method:
public static void main(String[] args) throws Throwable {
    final int PORT = 8005;
    final int FIRST_BUF_SIZE = 5;
    final Throwable[] errors = new Throwable[1];
    final Semaphore serverClosing = new Semaphore(0);
    final Semaphore messageFlushed = new Semaphore(0);

    class ServerThread extends Thread {
        public void run() {
            try {
                ServerSocket ssocket = new ServerSocket(PORT);
                Socket socket = ssocket.accept();
                InputStream s = socket.getInputStream();
                s.read(new byte[FIRST_BUF_SIZE]);
                messageFlushed.acquire();
                socket.close();
                ssocket.close();
                System.out.println("Closed socket");
                serverClosing.release();
            } catch (Throwable e) {
                errors[0] = e;
            }
        }
    }

    class ClientThread extends Thread {
        public void run() {
            try {
                Socket socket = new Socket("localhost", PORT);
                OutputStream st = socket.getOutputStream();
                st.write(new byte[FIRST_BUF_SIZE]);
                st.flush();
                messageFlushed.release();
                serverClosing.acquire(1);
                System.out.println("writing new packets");
                // sending more packets while server already
                // closed connection
                st.write(32);
                st.flush();
                st.close();
                System.out.println("Sent");
            } catch (Throwable e) {
                errors[0] = e;
            }
        }
    }

    Thread thread1 = new ServerThread();
    Thread thread2 = new ClientThread();
    thread1.start();
    thread2.start();
    thread1.join();
    thread2.join();

    if (errors[0] != null)
        throw errors[0];
    System.out.println("Run without any errors");
}
[Incidentally, I also have a concurrency testing library that makes the setup a bit better and clearer. Check out the sample at the gist as well.]
When run I get the following output:
Closed socket
writing new packets
Finished writing
Run without any errors
This may not be of much help to you, but technically both of your proposed solutions are incorrect. OutputStream.flush() and whatever other API calls you can think of are not going to do what you need.
The only portable and reliable way to determine if a packet has been received by the peer is to wait for a confirmation from the peer. This confirmation can either be an actual response or a graceful socket shutdown. End of story - there really is no other way, and this is not Java specific: it is fundamental network programming.
If this is not a persistent connection - that is, if you just send something and then close the connection - the way you do it is you catch all IOExceptions (any of them indicate an error) and you perform a graceful socket shutdown:
1. socket.shutdownOutput();
2. wait for inputStream.read() to return -1, indicating the peer has also shutdown its socket
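A sketch of that shutdown sequence (the host and port are placeholders, 'payload' stands in for the message bytes, and IOExceptions would be caught around all of it):
Socket socket = new Socket("gateway.example.com", 2195); // placeholder host/port
OutputStream out = socket.getOutputStream();
out.write(payload); // 'payload' is the marshalled message bytes
out.flush();
socket.shutdownOutput(); // step 1: signal the peer that we are done sending

InputStream in = socket.getInputStream();
while (in.read() != -1) {
    // step 2: drain any response; -1 means the peer has shut down its side too
}
socket.close();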
After much trouble with dropped connections, I moved my code to use the enhanced notification format, which pretty much means changing the layout of the packet you send (the enhanced format adds an identifier and an expiry to each notification).
This way Apple will not drop a connection if an error happens, but will write a feedback code to the socket.
If you're sending information using the TCP/IP protocol to apple you have to be receiving acknowledgements. However you stated:
Apple doesn't return any acknowledgement whatsoever
What do you mean by this? TCP/IP guarantees delivery, therefore the receiver MUST acknowledge receipt. It does not guarantee when the delivery will take place, however.
If you send a notification to Apple and you break your connection before receiving the ACK, there is no way to tell whether you were successful or not, so you simply must send it again. If pushing the same information twice is a problem or isn't handled properly by the device, then there is a problem. The solution is to fix the device's handling of the duplicate push notification: there's nothing you can do on the pushing side.
Comment Clarification/Question
Ok. The first part of what you understand is your answer to the second part. Only the packets that have received ACKs have been sent and received properly. I'm sure we could think of some very complicated scheme for keeping track of each individual packet ourselves, but TCP is supposed to abstract this layer away and handle it for you. On your end you simply have to deal with the multitude of failures that could occur (in Java, if any of these occur, an exception is raised). If there is no exception, the data you just tried to send is guaranteed to be sent by the TCP/IP protocol.
Is there a situation where data is seemingly "sent" but not guaranteed to be received where no exception is raised? The answer should be no.
Examples
Nice examples, this clarifies things quite a bit. I would have thought an error would be thrown. In the example posted, an error is thrown on the second write, but not the first. This is interesting behavior... and I wasn't able to find much information explaining why it behaves like this. It does, however, explain why we must develop our own application-level protocols to verify delivery.
Looks like you are correct that without a protocol for confirmation there is no guarantee the Apple device will receive the notification. Apple also only queues the last message. Looking a little at the service, I was able to determine that it is more of a convenience for the customer; it cannot be used to guarantee delivery and must be combined with other methods. I read this from the following source.
http://blog.boxedice.com/2009/07/10/how-to-build-an-apple-push-notification-provider-server-tutorial/
Seems like the answer is no as to whether you can tell for sure. You may be able to use a packet sniffer like Wireshark to tell whether it was sent, but this still won't guarantee it was received and delivered to the device, due to the nature of the service.