When I declare in Java
Socket s = new Socket((String)null, 12345);
Does this actually open a socket and use system and network resources, or is that deferred until I attach an input/output buffer? I would like to create a Socket object at the start of my program that is all set up to connect to the server, and just open/close it as necessary, instead of having to pass an address and port around (it seems cleaner), but not if it means the port will be open the entire time.
EDIT
It seems from the answers that this will not work like I wanted. How can I create a closed socket that is all set up with address and only needs to connect?
http://docs.oracle.com/javase/6/docs/api/java/net/Socket.html#Socket(java.net.InetAddress,%20int) <- it depends on the constructor you use. For the constructor you have specified, it connects.
According to http://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#Socket(java.lang.String,int), the way you are initializing your object, it will be connected.
For your edit: you will have to write your own class that holds all the setup information and can be opened later. Maybe you'll just store the data in there and add a method that returns a connected socket. It's up to you; there are plenty of ways to do that. But make sure all sockets are correctly closed at the end ;)
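A minimal sketch of such a holder class (the class and method names here are made up for illustration): the address is stored up front, and network resources are used only when open() is called.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Holds the target address but opens nothing until open() is called.
public class SocketFactory {
    private final InetSocketAddress address;

    public SocketFactory(String host, int port) {
        this.address = new InetSocketAddress(host, port); // no socket yet
    }

    // Creates and connects a fresh Socket; the caller is responsible for closing it.
    public Socket open() throws IOException {
        Socket s = new Socket();   // no-args constructor: unconnected
        s.connect(address);        // network resources are used only here
        return s;
    }
}
```

Each call to open() yields a new connected Socket; close it when done (try-with-resources works well for that).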
Every constructor of Socket creates an underlying socket, which uses system resources, and all but the no-args constructor connect it as well, which uses network resources. There is no such operation as 'attach[ing] an input/output buffer' to a Socket.
Related
I've searched here and found a similar article but I didn't really get the answer I'm looking for. I'm learning Networking with Java through some examples and some pseudo-reverse engineering. Oracle's documentation is helping quite a bit too but I've got a few questions.
Why exactly do you bind an IP address to a Socket? Is it necessary? When would you use said binding?
Here is part of the code that raised the question to me:
ServerSocket myServerSocket = new ServerSocket(1337);
System.out.println("Server is waiting for an incoming connection from client...");
Socket recievingSocket = myServerSocket.accept();
Now, from what I understand, if I were to bind a Socket, it would be right after running accept(), correct?
Why exactly do you bind an IP address to a Socket?
To determine which outbound interface it will connect via.
Is it necessary?
In theory, no. In practice it is sometimes required when connecting via a VPN.
Now, from what I understand, if I were to bind a Socket, it would be right after running accept(), correct?
Incorrect. An accepted or connected Socket is already bound. The only ways to bind a Socket are:
Create it with new Socket() with no arguments and then call bind(), or
Create it with the four-argument constructor, where the first two arguments are the target address and the second two are the bind-address.
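As a sketch of the no-args route (the local address passed in is a placeholder for whichever interface you want to go out through):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BindDemo {
    // Bind first (choosing the local interface), then connect to the target.
    static Socket boundConnect(String localAddr, String host, int port) throws IOException {
        Socket s = new Socket();                        // no-args: unconnected, unbound
        s.bind(new InetSocketAddress(localAddr, 0));    // local port 0 = let the OS pick
        s.connect(new InetSocketAddress(host, port));   // connects via the bound interface
        return s;
    }
}
```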
The major use of bind() is in conjunction with ServerSocket. For instance, in your example, calling new ServerSocket(1337) creates a socket, binds it to 0.0.0.0:1337, and puts it into the LISTEN state.
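The one-argument constructor is shorthand for creating, binding, and listening in one step; a sketch of the explicit form makes the steps visible:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class ExplicitBind {
    // Equivalent to new ServerSocket(port): create, bind to 0.0.0.0:port, listen.
    static ServerSocket listen(int port) throws IOException {
        ServerSocket server = new ServerSocket();   // created, but not yet bound
        server.bind(new InetSocketAddress(port));   // binds to the wildcard address
        return server;
    }
}
```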
You bind a socket to an address in order to restrict the addresses on which the socket will listen. It is not necessary if you want the default behavior, which IIRC is to listen on ANY (all local interfaces).
You would bind before you call accept, because accept tells the socket to start listening, but bind tells it where to listen. The socket needs to know where to listen before it can start listening.
A socket is essentially an IP address plus a port.
So yes, you need an IP address to create a socket. The process is termed binding because you may bind multiple ports to the same address, each listening for its own incoming connections.
The above pretty much answers your question of whether it is necessary, but to add another point: say you create a client to connect to your server. How will it connect if it does not know the server's IP address and the port to which it is supposed to connect?
I am trying to write code to hot-swap sockets in Java.
Here is the gist of the code I am currently using:
// initial server initialization
ServerSocket server1 = new ServerSocket(port);
// ... code to accept and process requests
// new server initialization
ServerSocket server2 = new ServerSocket();
// attempt at hotswap
server1.close();
server2.bind(port);
// .. more code
The code works as above but I am wondering about the possibility of dropped messages between the time the first socket is closed and the second one is opened.
Two questions:
Is there a way to ensure that no connections are dropped?
If there is a way to ensure that no connections are dropped does it still work if the instances of the ServerSocket class are in different virtual machines?
Thanks in advance.
The closing of a ServerSocket means that server1's handler no longer accepts new incoming connections; these are taken care of by server2. So far so good. You can let server1 be garbage collected once it no longer has any connected Sockets left.
There will be a (shorter or longer) period of time where the port is marked as "not open" in the OS networking driver after the first ServerSocket is closed and the second one is opened (since the OS cannot know our intention to start a new socket directly after closing the first one).
An incoming TCP connection attempt during this time will be refused by the OS (a connection-refused error), and the client will likely not retry, since it received a definite answer that the port was not open.
A Possible work-around
Use the Java NIO constructs, which let a single thread service many connections without blocking; see ServerSocketChannel, and be sure to check out the library http://netty.io/, which has several constructs for this.
Make sure that you can set the handler for the incoming requests dynamically (and thread-safely :); this makes it possible to seamlessly change how incoming requests are handled. You will not be able to exchange the ServerSocket itself, but that's likely not exactly what you want, either.
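One way to sketch that swappable-handler idea (all names here are illustrative): keep the accept loop and the ServerSocket fixed, and route each accepted connection through an AtomicReference that can be replaced at runtime.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

public class SwappableServer {
    // The handler can be replaced at runtime; the ServerSocket itself never changes,
    // so no connections are lost to a close/bind gap.
    private final AtomicReference<Consumer<Socket>> handler;
    private final ServerSocket server;

    public SwappableServer(int port, Consumer<Socket> initial) throws IOException {
        this.server = new ServerSocket(port);
        this.handler = new AtomicReference<>(initial);
    }

    public void setHandler(Consumer<Socket> h) { handler.set(h); }  // the "hot swap"

    public int port() { return server.getLocalPort(); }

    public void run() throws IOException {
        while (!server.isClosed()) {
            Socket client = server.accept();
            handler.get().accept(client);   // always dispatches to the current handler
        }
    }

    public void close() throws IOException { server.close(); }
}
```

In a real server the handler would hand the socket off to a worker rather than process it on the accept thread.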
I'm trying to load test a Java server by opening a large number of socket connections to the server, authenticating, closing the connection, then repeating. My app runs great for awhile but eventually I get:
java.net.BindException: Address already in use: connect
According to documentation I read, the reason for this is that closed sockets still occupy the local address assigned to them for a period of time after close() was called. This is OS dependent but can be on the order of minutes. I tried calling setReuseAddress(true) on the socket with the hopes that its address would be reusable immediately after close() was called. Unfortunately this doesn't seem to be the case.
My code for socket creation is:
Socket socket = new Socket();
socket.setReuseAddress(true);
socket.connect(new InetSocketAddress(m_host, m_port));
But after a while I still get the same error:
java.net.BindException: Address already in use: connect
Is there any other way to accomplish what I'm trying to do? I would like to for instance: open 100 sockets, close them all, open 200 sockets, close them all, open 300, etc. up to a max of 2000 or so sockets.
Any help would be greatly appreciated!
You are exhausting the space of outbound ports by opening that many outbound sockets within the TIME_WAIT period of two minutes. The first question you should ask yourself is: does this represent a realistic load test at all? Is a real client really going to do that? If not, you just need to revise your testing methodology.
BTW, SO_LINGER is the number of seconds the application will wait during close() for data to be flushed; by default it is disabled. The port will hang around for the TIME_WAIT interval anyway if this is the end that issued the close. This is not the same thing. It is possible to abuse the SO_LINGER option to patch the problem, but that will also cause exceptional behaviour at the peer, and again this is not the purpose of a test.
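For completeness, the SO_LINGER "abuse" mentioned above looks like the sketch below: with linger enabled and a zero timeout, close() aborts the connection with an RST instead of the normal FIN handshake, so this end skips TIME_WAIT entirely. The peer sees "connection reset", which is exactly why it distorts a load test.

```java
import java.io.IOException;
import java.net.Socket;

public class LingerDemo {
    // Abortive close: RST instead of FIN, no TIME_WAIT on this end.
    static void abortiveClose(Socket s) throws IOException {
        s.setSoLinger(true, 0);   // linger enabled, timeout zero
        s.close();                // sends RST; the peer gets "connection reset"
    }
}
```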
Calling setReuseAddress(true) without bind() is just weird; I hope you understand the implications of setReuseAddress (and the point of it). 100-2000 is not a great number of sockets to open; however, the server you are attempting to connect to (since it looks like the same addr/port pair) may just drop them with a default backlog of 50.
Edit:
If you need to open multiple sockets quickly (ermm, a port scan?), I'd very strongly recommend using NIO and connect()/finishConnect() plus a Selector. Opening 1000 sockets in the same thread is just plain slow.
Note that you may need finishConnect() either way in your code.
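A minimal sketch of that non-blocking connect pattern (the target list and timeout are placeholders): every connect is started up front, and a single Selector collects the completions.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioConnect {
    // Starts every connect without blocking, then finishes them via one Selector.
    static int connectAll(InetSocketAddress[] targets, long timeoutMillis) throws IOException {
        Selector selector = Selector.open();
        int connected = 0;
        int pending = 0;
        for (InetSocketAddress target : targets) {
            SocketChannel ch = SocketChannel.open();
            ch.configureBlocking(false);
            if (ch.connect(target)) {               // may complete immediately on loopback
                connected++;
                ch.close();
            } else {
                ch.register(selector, SelectionKey.OP_CONNECT);
                pending++;
            }
        }
        while (pending > 0 && selector.select(timeoutMillis) > 0) {
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                SocketChannel ch = (SocketChannel) key.channel();
                try {
                    if (ch.finishConnect()) {       // completes the handshake
                        connected++;
                    }
                } catch (IOException refused) {
                    // connection failed; just close below
                }
                pending--;
                ch.close();
            }
        }
        selector.close();
        return connected;
    }
}
```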
I think that you should plan on the port you want to use to connect to be in use. By that I mean try to connect using the given port. If the connect fails (or in your case throws an exception), try to open the connection using the next port number.
Try wrapping the connect statement in a try/catch.
Here's some pseudo-code that conveys what I think will work:
portNumber = x          // where x is the first port number you will try
numConnections = 200    // or however many connections you want to open
while (numConnections > 0) {
    try {
        connect(host, portNumber)
        numConnections--
    } catch () {
        // port in use; fall through and try the next one
    }
    portNumber++
}
This code doesn't cover corner cases such as "what happens when all ports are in use?"
I have a socket tcp connection between two java applications. When one side closes the socket the other side remains open. but I want it to be closed. And also I can't wait on it to see whether it is available or not and after that close it. I want some way to close it completely from one side.
What can I do?
TCP doesn't work like this. The OS won't release the resources, namely the file descriptor and thus the port, until the application explicitly closes the socket or dies, even if the TCP stack knows that the other side closed it. There's no callback from kernel to user application on receipt of the FIN from the peer. The OS acknowledges it to the other side but waits for the application to call close() before sending its FIN packet. Take a look at the TCP state transition diagram - you are in the passive close box.
One way to detect a situation like this without dedicating a thread to each socket is to use the select/poll/epoll/kqueue family of functions. The socket being passively closed will be signaled as readable and read attempt will return the EOF.
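In Java, the same idea maps onto a Selector: a passively closed channel selects as readable, and read() then returns -1. A sketch (names illustrative):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class EofWatcher {
    // Polls the selector; returns true once the peer has closed its end
    // (the channel becomes readable and read() signals EOF with -1).
    static boolean peerClosed(SocketChannel ch, Selector selector) throws IOException {
        if (selector.selectNow() == 0) {
            return false;                      // nothing to report yet
        }
        boolean closed = false;
        ByteBuffer buf = ByteBuffer.allocate(1024);
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
            SelectionKey key = it.next();
            it.remove();
            if (key.channel() == ch && key.isReadable() && ch.read(buf) == -1) {
                closed = true;                 // FIN received: passive close
            }
            buf.clear();                       // this demo discards any real data
        }
        return closed;
    }
}
```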
Hope this helps.
Both sides have to read from the connection, so they can detect when the peer has closed. When read returns -1 it will mean the other end closed the connection and that's your clue to close your end.
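A sketch of that pattern on the blocking API (method name is illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

public class ReadUntilClosed {
    // Reads until the peer closes; read() returning -1 is the close signal.
    static int drain(Socket socket) throws IOException {
        InputStream in = socket.getInputStream();
        byte[] buf = new byte[4096];
        int total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {   // -1 means the peer closed
            total += n;                      // process the bytes here
        }
        socket.close();                      // close our end in response
        return total;
    }
}
```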
If you are still reading from your socket, then you will detect the -1 when it closes.
If you are no longer reading from your socket, go ahead and close it.
If it's neither of these, you are probably having a thread wait on an event. This is NOT the way you want to handle thousands of ports! Java will start to choke at around 3000 threads on Windows, and at far fewer on Linux (I don't know why).
Make sure you are using NIO. Use a single thread to manage all your ports (a selector loop). It should just grab the data from a socket and forward it to a queue. At that point I'd have a thread pool take the data out of the queues and process it, because actually processing the data from a port will take some time.
Attaching a thread to each port will NOT work, and is the biggest reason NIO was needed.
Also, having some kind of a "Close" message as part of your stream to trigger closing the port may make things work faster, but you'll still need to handle the -1 to cover the case of broken streams.
The usual solution is to let the other side know you are going to close the connection, before actually closing it. For instance, in the case of the SMTP protocol, the server will send '221 Bye' before it closes the connection.
You probably want to have a connection pool.
This question already has answers here:
Java socket API: How to tell if a connection has been closed?
(9 answers)
Closed 1 year ago.
Hey all. I have a server written in Java using the ServerSocket and Socket classes.
I want to be able to detect and handle disconnects, and then reconnect a new client if necessary.
What is the proper procedure to detect client disconnections, close the socket, and then accept new clients?
Presumably, you're reading from the socket, perhaps using a wrapper over the input stream, such as a BufferedReader. In this case, you can detect the end-of-stream when the corresponding read operation returns -1 (for raw read() calls), or null (for readLine() calls).
Certain operations will cause a SocketException when performed on a closed socket, which you will also need to deal with appropriately.
The only safe way to detect the other end has gone is to send heartbeats periodically and have the other end to timeout based on a lack of a heartbeat.
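A sketch of the heartbeat idea (interval, timeout, and the heartbeat byte are arbitrary choices): the sender writes a byte on a schedule, and the receiver uses a read timeout so a missing beat surfaces as a SocketTimeoutException.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class Heartbeat {
    // Sender: write one byte every 5 seconds so the peer knows we are alive.
    static ScheduledExecutorService startHeartbeat(Socket socket) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            try {
                OutputStream out = socket.getOutputStream();
                out.write(0);            // arbitrary heartbeat byte
                out.flush();
            } catch (IOException e) {
                timer.shutdown();        // connection is gone; stop beating
            }
        }, 0, 5, TimeUnit.SECONDS);
        return timer;
    }

    // Receiver: if no byte arrives within 15 s (3 missed beats), read() throws
    // SocketTimeoutException and the peer can be treated as dead.
    static void expectHeartbeats(Socket socket) throws IOException {
        socket.setSoTimeout(15_000);
    }
}
```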
Is it just me, or has nobody noticed that the Javadoc lists a method on the ServerSocket API that returns a boolean for the closed state of the server socket? (Note that isClosed() only reports whether you have called close() locally; it does not detect the peer going away.)
You can just loop every few seconds to check its state:
if(!serverSocket.isClosed()){
// whatever you want to do if the serverSocket is connected
}else{
// treat a disconnected serverSocket
}
EDIT: Just reading your question again, it seems that you require the server to continually listen for connections, and if the client disconnects, it should be able to detect when the client attempts to reconnect. Shouldn't that just be your solution in the first place?
Have a server that is listening; once it picks up a client connection, it should pass the connection to a worker thread object and launch it to operate asynchronously. The server can then just loop back to listening for new connections. If the client disconnects, the launched thread should die, and when the client reconnects, a new thread is launched to handle the new connection.
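A sketch of that accept-and-dispatch loop (the echo handler is a placeholder for real per-client work):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class DispatchingServer {
    public static void serve(ServerSocket server) throws IOException {
        while (!server.isClosed()) {
            Socket client = server.accept();              // blocks until a client connects
            new Thread(() -> handle(client)).start();     // hand off; loop back to accept
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client) {
            InputStream in = c.getInputStream();
            OutputStream out = c.getOutputStream();
            int b;
            while ((b = in.read()) != -1) {   // -1 = client disconnected
                out.write(b);                 // placeholder work: echo the byte back
            }
        } catch (IOException ignored) {
            // Client dropped abruptly; this worker simply ends.
            // A reconnect is accepted by the main loop and gets a fresh thread.
        }
    }
}
```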
Jenkov provides a great example of this implementation.