ServerSocket listens without accept() - java

I'm currently involved in a school project in which we are building a communication system to be used on Android phones. For this we will be using a server that opens sockets to all clients, letting them communicate.
I've done several chat applications before without any problems with sockets or thread handling, but this time, for some reason, it boggles my mind.
The problem is that the application starts to listen as soon as I initiate the ServerSocket object, serverSocket = new ServerSocket(5000), and not at the serverSocket.accept().
Why is that?
As soon as I use the following method:
public void startListen(String port) {
    try {
        serverSocket = new ServerSocket(Integer.parseInt(port));
        portField.setEditable(false);
    } catch (IOException e) {
        printMessage("Failed to initiate serverSocket: " + e.toString());
    }
}
The port is showing up as Listening in the command prompt (using netstat). If I don't call it, the port is not listed as listening.
TCP 0.0.0.0:5000 Computer:0 LISTENING
So, is there anything I'm missing when using the ServerSocket object? My older programs using ServerSocket don't start listening until I call accept().

If you're talking about the Java ServerSocket, there's no listen method for it, presumably since it's distinct from a client-side socket. In that case, once it has a port number (either in the constructor or as part of a bind), it can just go ahead and listen automagically.
The reason "regular" sockets (a la BSD) have a listen is because the same type is used for client and server, so you need to decide yourself how you're going to use it. That's not the case with ServerSocket since, well, it's a server socket :-)
To be honest, I'm not sure why you'd care whether or not the listening is active before accept is called. It's the "listen" call (which is implicit in this class) that should mark your server open for business. At that point, the communication layers should start allowing incoming calls to be queued up waiting for you to call accept. That's generally the way they work, queuing the requests in case your program is a little slow in accepting them.
As to why it does this: it's actually supposed to, according to the source code. In the OpenJDK6 source/share/classes/java/net/ServerSocket.java, the constructors all end up calling a single constructor:
public ServerSocket(int port, int backlog, InetAddress bindAddr)
        throws IOException {
    setImpl();
    if (port < 0 || port > 0xFFFF)
        throw new IllegalArgumentException(
                   "Port value out of range: " + port);
    if (backlog < 1)
        backlog = 50;
    try {
        bind(new InetSocketAddress(bindAddr, port), backlog);
    } catch (SecurityException e) {
        close();
        throw e;
    } catch (IOException e) {
        close();
        throw e;
    }
}
And that call to bind (same file) follows:
public void bind(SocketAddress endpoint, int backlog) throws IOException {
    if (isClosed())
        throw new SocketException("Socket is closed");
    if (!oldImpl && isBound())
        throw new SocketException("Already bound");
    if (endpoint == null)
        endpoint = new InetSocketAddress(0);
    if (!(endpoint instanceof InetSocketAddress))
        throw new IllegalArgumentException("Unsupported address type");
    InetSocketAddress epoint = (InetSocketAddress) endpoint;
    if (epoint.isUnresolved())
        throw new SocketException("Unresolved address");
    if (backlog < 1)
        backlog = 50;
    try {
        SecurityManager security = System.getSecurityManager();
        if (security != null)
            security.checkListen(epoint.getPort());
        getImpl().bind(epoint.getAddress(), epoint.getPort());
        getImpl().listen(backlog);
        bound = true;
    } catch (SecurityException e) {
        bound = false;
        throw e;
    } catch (IOException e) {
        bound = false;
        throw e;
    }
}
The relevant bit there is:
getImpl().bind(epoint.getAddress(), epoint.getPort());
getImpl().listen(backlog);
meaning that both the bind and listen are done at the lower level when you create the socket.
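For comparison, here's a minimal sketch (the class name and the sleep-based netstat windows are mine) showing that the no-argument constructor defers all of this until an explicit bind():
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class DeferredBind {
    public static void main(String[] args) throws Exception {
        // Unbound socket: nothing is listening yet, so netstat shows no entry.
        ServerSocket serverSocket = new ServerSocket();
        Thread.sleep(10000); // window to check netstat: port 5000 is absent
        // bind() performs both the OS-level bind and listen, per the source above.
        serverSocket.bind(new InetSocketAddress(5000), 50);
        Thread.sleep(10000); // check netstat again: 0.0.0.0:5000 LISTENING
        serverSocket.close();
    }
}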
So the question is not so much "why is it suddenly appearing in netstat?" but "why wasn't it appearing in netstat before?"
I'd probably put that down to a misread on your part, or a not-so-good implementation of netstat. The former is more likely, unless you were specifically testing for a socket you hadn't called accept on, which would be unusual.

I think you have a slightly wrong idea of the purpose of accept. Liken a ServerSocket to a queue and accept to a blocking dequeue operation. The socket enqueues incoming connections as soon as it is bound to a port and the accept method dequeues them at its own pace. So yes, they could have named accept better, something less confusing.
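Here's a minimal sketch of that queue behavior (the port number is arbitrary): the client connects successfully before accept() is ever called, because the OS has already done the listen and parks the connection in the backlog.
import java.net.ServerSocket;
import java.net.Socket;

public class BacklogDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(5000);
        // Connect before accept() is called: this succeeds because the
        // implicit listen() already happened and the OS queues the connection.
        Socket client = new Socket("localhost", 5000);
        System.out.println("connected: " + client.isConnected()); // true
        Socket accepted = server.accept(); // dequeues the pending connection
        System.out.println("accepted from " + accepted.getRemoteSocketAddress());
        accepted.close();
        client.close();
        server.close();
    }
}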

A key reason may be myServerSocket.setSoTimeout(). The accept() call normally blocks indefinitely, but if you define a timeout before calling it, it only blocks for that duration and then harmlessly (i.e. the ServerSocket is still valid) throws a SocketTimeoutException.
This way, the thread stays under your control. But what happens in those milliseconds while you're not inside your temporarily blocking accept() call? Will clients find a listening port or not? That's why it's a good thing the accept() call is not required for the port to be listening.
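A minimal sketch of that pattern (the port, the 500 ms timeout, and the running flag are illustrative):
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class TimeoutAcceptLoop {
    private static volatile boolean running = true; // flipped elsewhere to stop

    public static void main(String[] args) throws IOException {
        ServerSocket serverSocket = new ServerSocket(5000);
        serverSocket.setSoTimeout(500); // accept() now blocks at most 500 ms
        while (running) {
            try {
                Socket client = serverSocket.accept();
                // hand 'client' off to a worker thread here
            } catch (SocketTimeoutException e) {
                // Harmless: the ServerSocket is still valid and the port is
                // still LISTENING, so clients connecting right now are queued.
            }
        }
        serverSocket.close();
    }
}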
The problem is that the application starts to listen as soon as I initiate the ServerSocket object, serverSocket = new ServerSocket(5000), and not at the serverSocket.accept().
I'm thankful for this question (hence the upvote) because it's exactly this behavior I wanted to know about, without having to run the experiment myself. My Google search term was 'java serversocket what happens if trying to connect without accept', and this question was in the link list of the first hit.

Related

IOException "Socket is closed: Unknown socket_descriptor" using java.net.DatagramSocket with Google App Engine

I am using java.net.DatagramSocket to send UDP packets to a statsd server from a Google App Engine servlet. This generally works; however, we periodically see the following exception:
IOException - Socket is closed: Unknown socket_descriptor..
When these IOExceptions occur, calling DatagramSocket.isClosed() returns false.
This issue happens frequently enough that it is concerning, and although I've put in place some workarounds (allocate a new socket and use a DeferredTask queue to retry), it would be good to understand the underlying reason for these errors.
The Google docs mention, "Sockets may be reclaimed after 2 minutes of inactivity; any socket operation keeps the socket alive for a further 2 minutes." It is unclear to me how this would play into UDP datagrams; however, one suspicion I have is that this is related to GAE instance lifecycle in some way.
My code (sanitized and extracted) looks like:
DatagramSocket _socket;

void init() {
    _socket = new DatagramSocket();
}

void send() {
    DatagramPacket packet = new DatagramPacket(<BYTES>, <LENGTH>, <HOST>, <PORT>);
    _socket.send(packet);
}
Appreciate any feedback on this!
The approach taken to work around this issue was simply to manage a single static DatagramSocket instance with a couple of helper methods, getSocket() and releaseSocket(): a socket that throws an IOException is discarded via the release method, and a fresh one is allocated on the next access through the get method. Not shown in this code is the retry logic for the failed socket.send(). Under load testing, this seems to work reliably.
try {
    DatagramPacket packet = new DatagramPacket(<BYTES>, <LENGTH>, <HOST>, <PORT>);
    getSocket().send(packet);
} catch (IOException ioe) {
    releaseSocket();
}
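A minimal sketch of what those helpers might look like (the names getSocket and releaseSocket come from the description above; the synchronization details are my assumption):
import java.net.DatagramSocket;
import java.net.SocketException;

public class SocketHolder {
    private static DatagramSocket _socket;

    static synchronized DatagramSocket getSocket() throws SocketException {
        if (_socket == null) {
            _socket = new DatagramSocket(); // allocate on first use / after a release
        }
        return _socket;
    }

    static synchronized void releaseSocket() {
        if (_socket != null) {
            _socket.close(); // discard the socket that threw the IOException
            _socket = null;  // the next getSocket() call allocates a fresh one
        }
    }
}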

'ServerSocket.accept()' Loop Without SocketTimeoutException (Java) (Alternative Solution)

Explanation
I'm revisiting the project I used to teach myself Java.
In this project I want to be able to stop the server from accepting new clients and then perform a few 'cleanup' operations before exiting the JVM.
In that project I used the following style for a client accept/handle loop:
//Exit loop by changing 'running' to false and waiting up to 2 seconds
ServerSocket serverSocket = new ServerSocket(123);
serverSocket.setSoTimeout(2000);
Socket client;
while (running) { // 'running' is a private static boolean
    try {
        client = serverSocket.accept();
        createComms(client); // Handles connection in new thread
    } catch (IOException ex) {
        // Do nothing
    }
}
In this approach a SocketTimeoutException will be thrown every 2 seconds if no clients are connecting, and I don't like relying on exceptions for normal operation unless it's necessary.
I've been experimenting with the following style to try and minimise relying on Exceptions for normal operation:
//Exit loop by calling serverSocket.close()
ServerSocket serverSocket = new ServerSocket(123);
Socket client;
try {
    while ((client = serverSocket.accept()) != null) {
        createComms(client); // Handles connection in new thread
    }
} catch (IOException ex) {
    // Do nothing
}
In this case my intention is that an Exception will only be thrown when I call serverSocket.close() or if something goes wrong.
Question
Is there any significant difference in the two approaches, or are they both viable solutions?
I'm totally self-taught, so I have no idea if I've re-invented the wheel for no reason or if I've come up with something good.
I've been lurking on SO for a while, this is the first time I've not been able to find what I need already.
Please feel free to suggest completely different approaches =3
The problem with the second approach is that the server will die if an exception occurs in the while loop.
The first approach is better, though you might want to log the exceptions, e.g. using Log4j.
while (running) {
    try {
        client = serverSocket.accept();
        createComms(client);
    } catch (IOException ex) {
        // Log errors
        LOG.warn(ex, ex);
    }
}
Non-blocking IO is what you're looking for. Instead of blocking until a SocketChannel (non-blocking alternative to Socket) is returned, it'll return null if there is currently no connection to accept.
This will allow you to remove the timeout, since nothing will be blocking.
You could also register a Selector, which informs you when there is a connection to accept or when there is data to read. I have a small example of that here, as well as a non-blocking ServerSocket that doesn't use a selector.
EDIT: In case something goes wrong with my link, here is the example of non-blocking IO, without a selector, accepting a connection:
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

class Server {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel ssc = ServerSocketChannel.open();
        ssc.bind(new InetSocketAddress(123)); // bind before accepting (missing in the original snippet)
        ssc.configureBlocking(false);
        while (true) {
            SocketChannel sc = ssc.accept(); // returns null when no connection is pending
            if (sc != null) {
                // handle channel
            }
        }
    }
}
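And in case the selector link breaks too, here is a minimal sketch of the Selector variant (same arbitrary port as above; handler logic omitted):
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

class SelectorServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel ssc = ServerSocketChannel.open();
        ssc.bind(new InetSocketAddress(123));
        ssc.configureBlocking(false); // required before registering with a Selector
        ssc.register(selector, SelectionKey.OP_ACCEPT);
        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel sc = ssc.accept(); // non-null: we were told it's ready
                    // handle channel, e.g. register it for OP_READ
                }
            }
        }
    }
}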
The second approach is better (for the reasons you mentioned: relying on exceptions in normal program flow is not good practice), although your code suggests that serverSocket.accept() can return null, which it cannot. The method can throw all kinds of exceptions though (see the API docs). You might want to catch those exceptions: a server should not go down without a very good reason.
I have been using the second approach with good success, but added some more code to make it more stable/reliable: see my take on it here (unit tests here). One of the 'cleanup' tasks to consider is to give some time to the threads that are handling the client communications so that these threads can finish or properly inform the client the connection will be closed. This prevents situations where the client is not sure if the server completed an important task before the connection was suddenly lost/closed.
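For instance, if the client handlers run on an ExecutorService, that cleanup might look like this sketch (the class and method names, and the 10-second grace period, are mine):
import java.io.IOException;
import java.net.ServerSocket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

class ServerShutdown {
    static void shutdown(ServerSocket serverSocket, ExecutorService handlers) throws IOException {
        serverSocket.close();  // a blocked accept() now throws, ending the accept loop
        handlers.shutdown();   // accept no new tasks; running handlers may finish
        try {
            if (!handlers.awaitTermination(10, TimeUnit.SECONDS)) {
                handlers.shutdownNow(); // interrupt handlers that didn't finish in time
            }
        } catch (InterruptedException e) {
            handlers.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}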

What constitutes the condition "a server is up"

I'm looking for verification on the following:
In order to find out whether a server is up, I'm supposed to establish a TCP connection to the host:port combination of the server given to me. And in that case, "if a connection is established, then the service is up; otherwise, if the connection is refused, the service is down".
So, should I be satisfied that the server is up when getRemoteSocketAddress() of Socket returns an object and not null? That is, does the following code always print the accurate info to the console?
Socket clientSocket = new Socket(hostName, port);
System.out.println("To console: The server is " + (clientSocket.getRemoteSocketAddress() == null ? "down." : "up."));
To me, it does. However, I have no practical experience with these things and won't be sure without a second opinion.
Note: I'm aware that the server being up doesn't necessarily mean that it is accepting and processing requests. That requires exchanging some greetings to establish who's who and going from there based on the protocol in between. However, that isn't relevant to this question.
TIA
You would not even need to call
clientSocket.getRemoteSocketAddress();
because the constructor call from:
Socket clientSocket = new Socket(hostName, port);
will try to connect to the socket and will throw an IOException if it fails to do so. So I would rather do this:
public boolean hostUp(String hostName, int port) {
    try {
        Socket clientSocket = new Socket(hostName, port);
        return true;
    } catch (IOException e) {
        return false;
    }
}
That should do the trick.
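One refinement to consider (my addition, not part of the answer above): the plain constructor blocks for the platform's default connect timeout, and the probe socket is never closed. An explicit connect timeout plus try-with-resources avoids both:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public static boolean hostUp(String hostName, int port, int timeoutMillis) {
    try (Socket probe = new Socket()) {
        probe.connect(new InetSocketAddress(hostName, port), timeoutMillis);
        return true;  // connection established: something is listening
    } catch (IOException e) {
        return false; // refused, timed out, or unreachable
    }
}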
Establishing a TCP connection is a health check at layer 4 of the OSI model (the transport layer). It tells you that the service is up, running, and listening on the port. However, it doesn't tell you anything about the upper layers. For instance, if you use the server to serve HTTP objects, you could follow with an HTTP GET /sample.file on top of the established TCP connection. Alternatively, if you use this server for a REST API, then you would want not only a 200 OK response from the HTTP layer, but maybe something more sophisticated in the response body.
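A minimal sketch of such an application-level check (the URL and timeout values are illustrative):
import java.net.HttpURLConnection;
import java.net.URL;

public static boolean httpUp(String url) {
    try {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(2000);
        conn.setReadTimeout(2000);
        conn.setRequestMethod("GET");
        return conn.getResponseCode() == 200; // 200 OK: the HTTP layer answered
    } catch (Exception e) {
        return false; // TCP failure, timeout, or malformed URL
    }
}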

Keep Socket Server Open After Client Closes

I have implemented a socket with a server and single client. The way it's structured currently, the server closes whenever the client closes. My intent is have the server run until manual shutdown instead.
Here's the server:
public static void main(String args[]) {
    try {
        ServerSocket socket = new ServerSocket(17);
        System.out.println("connect...");
        Socket s = socket.accept();
        System.out.println("Client Connected.");
        while (true) {
            // work with server
        }
    } catch (IOException e) {
        e.getStackTrace();
    }
}
I've tried surrounding the entire try/catch block with another while(true) loop, but it does nothing; the same issue persists. Any ideas on how to keep the server running?
It looks like what's going to happen in your code is that you connect to a client, loop forever over interactions with that client, and then, when someone disrupts the connection (closes it cleanly, or interrupts it rudely, e.g. by unplugging the network cable), you get an IOException, sending you down to the catch clause, which runs and then continues after that (and I'm guessing "after that" is the end of your main()?).
So what you need to do is, from that point, loop back to the accept() call so that you can accept another, new client connection. For example, here's some pseudocode:
create server socket
while (1) {
    try {
        accept client connection
        set up your I/O streams
        while (1) {
            interact with client until connection closes
        }
    } catch (...) {
        handle errors
    }
} // loop back to the accept call here
Also, notice how the try-catch block in this case is situated so that errors will be caught and handled within the accept-loop. That way an error on a single client connection will send you back to accept() instead of terminating the server.
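Translated into actual Java, that pseudocode might look like this sketch (port 17 follows the question; the per-client work is elided):
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class PersistentServer {
    public static void main(String[] args) throws IOException {
        ServerSocket serverSocket = new ServerSocket(17); // created once, outside the loop
        while (true) {
            try {
                Socket client = serverSocket.accept(); // wait for the next client
                System.out.println("Client Connected.");
                // ... set up streams and interact until the connection closes ...
            } catch (IOException e) {
                // An error on one client connection sends us back to accept()
                // instead of terminating the server.
                e.printStackTrace();
            }
        }
    }
}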
Keep a single server socket outside of the loop -- the loop needs to start before accept(). Just put the ServerSocket creation into a separate try/catch block. Otherwise you'll open a new socket on each iteration that tries to listen on the same port, even though only a single client connection was closed, not the ServerSocket. A server socket can accept multiple client connections.
When that works, you probably want to start a new thread on accept() to support multiple clients. The simplest way to do so is usually to add a ClientHandler class that implements the Runnable interface, as sketched below. And in the client you probably want to put reading from the socket into a separate thread, too.
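A minimal sketch of such a handler (the name ClientHandler and the echo logic are illustrative; the accept loop just calls new Thread(new ClientHandler(client)).start() and goes back to accept()):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

class ClientHandler implements Runnable {
    private final Socket client;

    ClientHandler(Socket client) {
        this.client = client;
    }

    @Override
    public void run() {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println("echo: " + line); // placeholder for real per-client work
            }
        } catch (IOException e) {
            // Log and drop this client; the server keeps running.
        }
    }
}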
Is this homework / some kind of assignment?

Java Threadpool TCP Server (Port keeps changing!!)

Good Day,
I was taking a look at this tutorial to do a TCP Threadpool server.
http://tutorials.jenkov.com/java-multithreaded-servers/thread-pooled-server.html
It works great for listening to and receiving from clients, processing, and returning a response. There is a class inside that I pass a WorkerRunnable into, and it basically prints out the remote socket address (who it was sent from):
public void run() {
    synchronized (this) {
        this.runningThread = Thread.currentThread();
    }
    openServerSocket();
    while (!isStopped()) {
        Socket clientSocket = null;
        try {
            clientSocket = this.serverSocket.accept();
        } catch (IOException e) {
            if (isStopped()) {
                System.out.println("Server Stopped.");
                return;
            }
            throw new RuntimeException("Error accepting client connection", e);
        }
        this.threadPool.execute(new WorkerRunnable(clientSocket, "Thread Pooled Server"));
    }
    this.threadPool.shutdown();
    System.out.println("Server Stopped.");
}
The problem is: the remote address is supposed to stay fixed (I am working within my own home Wi-Fi router). However, while the IP address of the sender stays the same, the port keeps changing!
This is a big problem for me, as I need to be able to return a response to the user for future tasks, and I actually save this address to use again to send data. When I ran this as a single-threaded TCP server, the port stayed fixed.
Why does the thread pool cause the TCP remote address port to keep changing?
With TCP, the client socket port is most of the time (almost 99%, except for specific protocols) randomly chosen. But you don't have to know it: the only thing you have to do is keep the clientSocket reference and write data back to the client through it, as sketched below. If you want to send data to the other host after the connection is closed, you have to run a ServerSocket on both sides with a fixed port.
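As a sketch of that first point (the constructor signature follows the tutorial's WorkerRunnable; the reply logic is illustrative), writing back through the accepted socket means the client's ephemeral port never matters:
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

public class WorkerRunnable implements Runnable {
    private final Socket clientSocket;

    public WorkerRunnable(Socket clientSocket, String serverText) {
        this.clientSocket = clientSocket;
    }

    @Override
    public void run() {
        try {
            // ... read the request from clientSocket.getInputStream() ...
            OutputStream out = clientSocket.getOutputStream();
            out.write("response".getBytes()); // routed back over the same connection
            out.flush();
            clientSocket.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}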
Even if you test from the same machine, the client port will be random by default. I am not sure if there is any way to set the client source port. However, if you use netstat or capture the packets, you can confirm the source port is different for every connection.
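For completeness (my addition, not part of this answer): java.net.Socket does let you pin the source port by binding before connecting, though a fixed port can collide with a previous connection still in TIME_WAIT:
import java.net.InetSocketAddress;
import java.net.Socket;

class FixedSourcePortClient {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        socket.bind(new InetSocketAddress(40000)); // fixed source port (assumes 40000 is free)
        // host name and remote port are placeholders
        socket.connect(new InetSocketAddress("server.example", 17), 2000);
        System.out.println("local port: " + socket.getLocalPort()); // prints 40000
        socket.close();
    }
}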
