I am trying to set up my MessageServer class so that it services each client in a separate request (you'll see below that it's pretty linear right now).
How should I go about it?
import java.net.*;
import java.io.*;
public class MessageServer {
    public static final int PORT = 6100;

    public static void main(String[] args) {
        Socket client = null;
        ServerSocket sock = null;
        BufferedReader reader = null;
        try {
            sock = new ServerSocket(PORT);
            // now listen for connections
            while (true) {
                client = sock.accept();
                reader = new BufferedReader(new InputStreamReader(client.getInputStream()));
                Message message = new MessageImpl(reader.readLine());
                // set the appropriate character counts
                message.setCounts();
                // now serialize the object and write it to the socket
                ObjectOutputStream soos = new ObjectOutputStream(client.getOutputStream());
                soos.writeObject(message);
                System.out.println("wrote message to the socket");
                client.close();
            }
        }
        catch (IOException ioe) {
            System.err.println(ioe);
        }
    }
}
Sorry, but your question doesn't make much sense.
If we are using the term "request" in the normal way, a client sends a request to the server and the server processes each request. It simply makes no sense for a server to not service the requests separately (in some sense).
Perhaps you are asking something different. (Do you mean, "service each client request in a separate thread"?) Whatever you mean, please review your terminology.
Given that you are talking about executing requests on different threads, the ExecutorService API is a good choice. Use an implementation class that allows you to put an upper bound on the number of worker threads. If you don't, you open yourself up to problems where overload results in the allocation of large numbers of threads, which only makes the server slower. (Besides, creating new threads is not cheap. It pays to recycle them.)
You should also consider configuring your executor so that it doesn't have a request queue. You want the executor service to block the thread that is trying to submit the job if there isn't a worker available. Let the operating system queue incoming connections / requests at the ServerSocket level. If you queue requests internally, you can run into the situation where you are wasting time by processing requests that the client-side has already timed out / abandoned.
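For illustration, a minimal sketch of such a setup might look like the code below; the pool size of 10, the use of CallerRunsPolicy and the RequestHandler class are assumptions made up for the example, not requirements:

import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedMessageServer {
    public static final int PORT = 6100;

    public static void main(String[] args) throws Exception {
        // At most 10 workers and no internal request queue: the SynchronousQueue
        // hands each task straight to a free worker, and CallerRunsPolicy makes
        // the accepting thread run the task itself when every worker is busy,
        // which effectively stops it from accepting more connections until
        // capacity frees up.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10, 10, 60, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>(),
                new ThreadPoolExecutor.CallerRunsPolicy());

        try (ServerSocket sock = new ServerSocket(PORT)) {
            while (true) {
                Socket client = sock.accept();
                pool.execute(new RequestHandler(client)); // hypothetical Runnable that services one client
            }
        }
    }
}

With a SynchronousQueue there is no internal queue at all, so overload pressure is pushed back onto the accept loop and, ultimately, onto the operating system's connection backlog at the ServerSocket.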
I have to implement sending data from a specific source port while at the same time listening on that port. Full duplex. Does anybody know how to implement this in Java? I tried to create a separate thread for listening on the socket's input stream, but it doesn't work. I cannot bind a ServerSocket and a client socket to the same source port, and the same goes for Netty.
Is there any solution for full duplex?
void init() throws IOException {
    socket = new Socket(InetAddress.getByName(Target.getHost()), Target.getPort(),
            InetAddress.getByName("localhost"), 250);
    in = new DataInputStream(socket.getInputStream());
    out = new DataOutputStream(socket.getOutputStream());
}

private static void writeAndFlush(OutputStream out, byte[] b) throws IOException {
    out.write(b);
    out.flush();
}
public class MessageReader implements Runnable {
    @Override
    public void run() {
        // this method throws EOFException
        read(in);
    }

    private void read(DataInputStream in) {
        while (isConnectionAlive()) {
            StringBuffer strBuf = new StringBuffer();
            byte[] b = new byte[1000];
            try {
                while ((b[0] = (byte) in.read(b)) != 3) {
                    strBuf.append(new String(b));
                }
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
            log.debug(strBuf.toString());
        }
    }
}
What you're trying to do is quite strange: a ServerSocket is a fully implemented socket that accepts connections; it handles its own messages, and you definitely cannot piggy-back another socket on top of it.
Full duplex is fairly simple to do with NIO:
Create a Channel for your Socket in non-blocking mode
Add read to the interest OPs
Sleep with a Selector's select() method
Read any readable bytes, write any writable bytes
If writing is done, remove write from interest OPs
GOTO 3.
If you need to write, add bytes to a buffer, add write to interest OPs and wake up selector. (slightly simplified, but I'm sure you can find your way around the Javadoc)
This way you will be completely loading the outgoing buffer every time there is space and reading from the incoming one at the same time (well, single thread, but you don't have to finish writing to start reading etc).
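For what it's worth, a rough sketch of that loop could look like the code below; the host/port parameters and the shared outgoing queue are illustrative assumptions, not part of any particular API:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class FullDuplexClient {
    // Other threads enqueue outgoing data here, then call selector.wakeup().
    private final Queue<ByteBuffer> outgoing = new ConcurrentLinkedQueue<>();
    private volatile Selector selector;

    public void run(String host, int port) throws IOException {
        selector = Selector.open();
        SocketChannel channel = SocketChannel.open(new InetSocketAddress(host, port));
        channel.configureBlocking(false);
        SelectionKey key = channel.register(selector, SelectionKey.OP_READ);

        ByteBuffer readBuf = ByteBuffer.allocate(4096);
        while (channel.isOpen()) {
            // If there is data waiting to be written, express interest in OP_WRITE as well.
            if (!outgoing.isEmpty()) {
                key.interestOps(SelectionKey.OP_READ | SelectionKey.OP_WRITE);
            }
            selector.select(); // sleep until the socket is readable and/or writable

            if (key.isReadable()) {
                readBuf.clear();
                int n = channel.read(readBuf);
                if (n == -1) { channel.close(); break; }
                readBuf.flip();
                // ... hand readBuf off to whatever consumes incoming data ...
            }
            if (key.isValid() && key.isWritable()) {
                ByteBuffer buf = outgoing.peek();
                if (buf != null) {
                    channel.write(buf);
                    if (!buf.hasRemaining()) outgoing.poll();
                }
                if (outgoing.isEmpty()) {
                    key.interestOps(SelectionKey.OP_READ); // writing done, drop OP_WRITE
                }
            }
            selector.selectedKeys().clear();
        }
    }
}

The single thread never blocks on a half-finished write: it only writes when the socket is writable and only reads when data is available, which is what gives you full-duplex behaviour.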
I ran into the same question and decided to answer it myself. I would like to share the code repo with you. It is really simple, yet elaborate enough that you can get the idea to make your own stuff work. The steps happen to resemble Ordous's solution.
https://github.com/khanhhua/full-duplex-chat
Feel free to clone! It's my weekend homework.
Main thread:
Create background thread(s) that will connect to any target machine(s).
These threads will connect to target machines and transmit data and die
Create an infinite loop
Listen for incoming connections.
Thread off any connection to handle I/O
Classes:
Server
Listens for incoming connections and threads off a Client object
Client
This class is created when the server accepts an incoming connection; the TcpClient or NetClient (I forget what Java calls it) is used to send data. Upon completion it dies.
Target
Is created during start-up, connects to a specific target and sends data.
Once complete, it dies. A rough sketch of this structure follows below.
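If it helps, here is a bare-bones sketch of that structure; the class names (FullDuplexNode, Target, Client) and the host/port values simply mirror the description above and are purely illustrative:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class FullDuplexNode {
    public static void main(String[] args) throws IOException {
        // Background thread that connects out to a target machine,
        // transmits its data and then dies.
        new Thread(new Target("other-host", 6000)).start(); // illustrative host/port

        // Main thread: listen forever and thread off each incoming connection.
        try (ServerSocket server = new ServerSocket(6000)) {
            while (true) {
                Socket incoming = server.accept();
                new Thread(new Client(incoming)).start(); // Client handles the I/O, then dies
            }
        }
    }
}

class Target implements Runnable {
    private final String host;
    private final int port;
    Target(String host, int port) { this.host = host; this.port = port; }
    public void run() {
        try (Socket s = new Socket(host, port)) {
            s.getOutputStream().write("hello from target thread\n".getBytes());
        } catch (IOException e) { e.printStackTrace(); }
    }
}

class Client implements Runnable {
    private final Socket socket;
    Client(Socket socket) { this.socket = socket; }
    public void run() {
        try (Socket s = socket) {
            int b;
            while ((b = s.getInputStream().read()) != -1) System.out.write(b);
            System.out.flush();
        } catch (IOException e) { e.printStackTrace(); }
    }
}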
We are making a somewhat RTS-style networked game in Java. I have a main server, which holds the ServerSocket and accepts other players. In our game, when you create your own game room,
I filter all the players that have joined my room. When the game starts, the creator of the room should be the host. Should I still be using my main server, or should I establish a new ServerSocket for those who are connected to my room? And one more thing: should an inputStream.readObject() wait for a message before going around the loop again, or does it loop continuously? Here is the sample code snippet for the input stream.
public void run() {
    while (running) {
        try {
            inStream = new ObjectInputStream(client.getInputStream());
            command = (String) inStream.readObject();
            Thread.sleep(10);
        }//try
        catch (Exception e) {
            e.printStackTrace();
        }//catch
    }//while
}//run

//// accepting new client
while (running) {
    try {
        clientConnecting = serverSocket.accept();
        new TCPServerHandle(clientConnecting).start();
        Thread.sleep(10);
    }//try
    catch (Exception e) {
        e.printStackTrace();
    }//catch
}//while
You could definitely create a second ServerSocket for the "party" to communicate with the host. Incoming packets are demultiplexed to the correct process using port numbers, and you can set up multiple TCP server sockets to listen for and accept incoming connection requests on different ports.
ServerSocket welcomeSocket = new ServerSocket(portNumber);
Socket clientSocket = welcomeSocket.accept();
And yes, it is in many cases more efficient to use a combination of TCP and UDP, because, as you mention, some data is more critical than other data. UDP only provides a best-effort service, where packets can get lost. If you want to set up a UDP socket:
DatagramSocket UDPSocket = new DatagramSocket();
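To actually send something over that socket you wrap the bytes in a DatagramPacket; the host, port and payload below are placeholders:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpSendExample {
    public static void main(String[] args) throws Exception {
        DatagramSocket udpSocket = new DatagramSocket();
        byte[] data = "player position: 10,42".getBytes(); // illustrative payload
        DatagramPacket packet = new DatagramPacket(
                data, data.length, InetAddress.getByName("localhost"), 9876);
        udpSocket.send(packet); // best effort: nothing retransmits this if it is lost
        udpSocket.close();
    }
}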
Blocking I/O with object streams isn't ideal for an RTS, because you don't want other clients to wait while another client is logging in. You might think you could just multi-thread everything to avoid the wait, but it wouldn't make much of a difference because there are still blocking read/write operations. Also, with object streams, every object sent has to be serialized first, which can be a fairly lengthy process depending on what you send and how many users you have. If there are a lot of players, literally every millisecond counts. You should use non-blocking I/O (such as NIO) along with ByteBuffers. I would suggest looking at an NIO tutorial instead; there are very detailed tutorials on how to make a simple server-client application.
I have been learning about sockets for some time now (I'm quite young) and I think I have a good grip on Java sockets. I have decided to create a simple multiplayer Java 2D social game. My goal is to have the server output players' X,Y coordinates and chat every 10 milliseconds. From what I have read, my very average logic tells me that only one user at a time can connect to a socket, so I will need a separate thread and socket for each player that connects.
Is it necessary to have one ServerSocket and thread per player?
You should have just one ServerSocket listening on a port that is known to the client. When a client connects to the server, a new Socket object is created and the original ServerSocket goes back to listening again. You should then spin off a new Thread or hand over to an Executor the actual work of talking to the client, otherwise your server will stop listening for client connections.
Here is a very basic sketch of the code you will need.
import java.net.*;
import java.util.concurrent.*;
public class CoordinateServer {
    public static void main(String... argv) throws Exception {
        // 'port' is known to the server and the client
        int port = Integer.valueOf(argv[0]);
        ServerSocket ss = new ServerSocket(port);

        // You should decide what the best type of service is here
        ExecutorService es = Executors.newCachedThreadPool();

        // How will you decide to shut the server down?
        while (true) {
            // Blocks until a client connects, returns the new socket
            // to use to talk to the client
            Socket s = ss.accept();

            // CoordinateOutputter is a class that implements Runnable
            // and sends co-ordinates to a given socket; it's also
            // responsible for cleaning up the socket and any other
            // resources when the client leaves
            es.submit(new CoordinateOutputter(s));
        }
    }
}
I have put sockets here since they are easier to get started with, but once you have this working well and want to boost your performance you will probably want to investigate the java.nio.channels package. There's a good tutorial over at IBM.
Yes.
A Socket is a connection between two points (the client and the server). This means that each player would require their own socket connection on the server end.
If you want your application to be responsive in any meaningful way, then each incoming connection on the server should be processed in its own thread.
This prevents clients with a slow connection from becoming a bottleneck for others. It also means that if a client connection is lost, you don't hold up other updates while waiting for time-outs.
Is there a way to have reliable communication (where the sender is informed that the message it sent has actually been received by the receiver) using the Java TCP/IP library in java.net.*? I understand that one of the advantages of TCP over UDP is its reliability. Yet, I couldn't get that assurance in the experiment below:
I created two classes:
1) echo server => always sending back the data it received.
2) client => periodically send "Hello world" message to the echo server.
They were run on different computers (and worked perfectly). In the middle of the execution, I disconnected the network (unplugged the LAN cable). After the disconnection, the server kept waiting for data until a few seconds passed (it eventually raised an exception). Similarly, the client also kept sending data until a few seconds passed (an exception was raised).
The problem is that objectOutputStream.writeObject(message) doesn't guarantee the delivery status of the message. I expected it to block the thread and keep resending the data until delivered, or at least to inform me which messages are missing.
Server Code:
import java.net.*;
import java.io.*;
import java.io.Serializable;
public class SimpleServer {
    public static void main(String args[]) {
        try {
            ServerSocket serverSocket = new ServerSocket(2002);
            Socket socket = new Socket();
            socket = serverSocket.accept();
            InputStream inputStream = socket.getInputStream();
            ObjectInputStream objectInputStream = new ObjectInputStream(
                    inputStream);
            while (true) {
                try {
                    String message = (String) objectInputStream.readObject();
                    System.out.println(message);
                    Thread.sleep(1000);
                } catch (Exception ex) {
                    ex.printStackTrace();
                }
            }
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}
Client code:
import java.net.*;
import java.io.*;
public class SimpleClient {
    public static void main(String args[]) {
        try {
            String serverIpAddress = "localhost"; //change this
            Socket socket = new Socket(serverIpAddress, 2002);
            OutputStream outputStream = socket.getOutputStream();
            ObjectOutputStream objectOutputStream = new ObjectOutputStream(
                    outputStream);
            while (true) {
                String message = "Hello world!";
                objectOutputStream.writeObject(message);
                System.out.println(message);
                Thread.sleep(1000);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
If you need to know which messages have arrived in the peer application, the peer application has to send acknowledgements.
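As a rough sketch of what such an application-level acknowledgement could look like with the object streams from the question (the "ACK" marker and the strict message/ack ordering are assumptions, not a standard protocol):

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

public class AckExample {
    // Sender side: write the message, then block until the peer confirms it.
    static void sendWithAck(ObjectOutputStream out, ObjectInputStream in, String message)
            throws IOException, ClassNotFoundException {
        out.writeObject(message);
        out.flush();
        String ack = (String) in.readObject();
        if (!"ACK".equals(ack)) {
            throw new IOException("message was not acknowledged: " + ack);
        }
    }

    // Receiver side: read the message, process it, then confirm.
    static String receiveWithAck(ObjectOutputStream out, ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        String message = (String) in.readObject();
        out.writeObject("ACK");
        out.flush();
        return message;
    }
}

Note that the sender now also needs an ObjectInputStream on the same socket, and the receiver an ObjectOutputStream, so both sides can talk in both directions.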
If you want this level of guarantee, it sounds like you really want JMS. This can ensure not only that messages have been delivered but also that they have been processed correctly; there is no point in having very reliable delivery if the message can then be discarded due to a bug.
You can monitor which messages are waiting and which consumers are falling behind. You can watch a producer to see what messages it is sending, and have messages saved while a consumer is down so they are available when it restarts, i.e. reliable delivery even if the consumer is restarted.
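A minimal JMS producer might look like the sketch below; the ConnectionFactory has to come from a concrete provider (ActiveMQ, Artemis, etc.), and the queue name is made up:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class JmsSendExample {
    public static void send(ConnectionFactory factory, String text) throws Exception {
        Connection connection = factory.createConnection(); // factory comes from your JMS provider
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("messages"); // illustrative queue name
            MessageProducer producer = session.createProducer(queue);
            producer.setDeliveryMode(DeliveryMode.PERSISTENT); // survive broker restarts
            producer.send(session.createTextMessage(text));
        } finally {
            connection.close();
        }
    }
}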
TCP is reliable at the transport level, so you don't normally need confirmations. However, to check that a client is up, you might also want to use a UDP-based confirmation scheme, like a PING/PONG system. There might also be TCP settings you can adjust.
Your base assumption (and understanding of TCP) here is wrong. If you unplug and then re-plug, the message most likely will not be lost.
It boils down to how long you want the sender to wait. One hour, one day? If you made the timeout one day, you could unplug the cable for two days and still say "it does not work".
So the delivery guarantee is: either the data is delivered, or you get informed. In the second case you need to handle it at the application level.
You could consider using the SO_KEEPALIVE socket option, which causes probes to be sent once the connection has been idle for a long period (typically 2 hours) so that a dead connection is eventually detected and closed. However, in many cases this obviously doesn't offer the level of control typically needed by applications.
A second problem is that some TCP/IP stack implementations are poor and can leave your server with dangling open connections in the event of a network outage.
Therefore, I'd advise adding application level heartbeating between your client and server to ensure that both parties are still alive. This also offers the advantage of severing the connection if, for example a 3rd party client remains alive but becomes unresponsive and hence stops sending heartbeats.
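A heartbeat can be as simple as a scheduled task that periodically writes a small marker object, plus a reader that records when it last heard from the peer; the 5-second interval and the "PING" marker below are arbitrary choices for the sketch:

import java.io.ObjectOutputStream;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class Heartbeat {
    private final AtomicLong lastSeen = new AtomicLong(System.currentTimeMillis());

    // Sender: periodically push a marker through the existing object stream.
    public void startSending(ObjectOutputStream out) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> {
            try {
                synchronized (out) { // don't interleave with normal messages
                    out.writeObject("PING");
                    out.flush();
                }
            } catch (Exception e) {
                timer.shutdown(); // stream is broken; let the owner tear the connection down
            }
        }, 5, 5, TimeUnit.SECONDS);
    }

    // Receiver: call this whenever anything (data or "PING") arrives.
    public void touch() {
        lastSeen.set(System.currentTimeMillis());
    }

    // Receiver: poll this to decide whether the peer should be considered dead.
    public boolean isPeerDead(long timeoutMillis) {
        return System.currentTimeMillis() - lastSeen.get() > timeoutMillis;
    }
}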
I have the start of a very basic multi-threaded web server; it can receive all GET requests as long as they come one at a time.
However, when multiple GET requests come in at the same time, sometimes they are all received, and other times some are missing.
I tested this by creating an HTML page with multiple image tags pointing to my web server and opening the page in Firefox. I always use shift+refresh.
Here is my code, I must be doing something fundamentally wrong.
public final class WebServer
{
    public static void main(String argv[]) throws Exception
    {
        int port = 6789;
        ServerSocket serverSocket = null;
        try
        {
            serverSocket = new ServerSocket(port);
        }
        catch(IOException e)
        {
            System.err.println("Could not listen on port: " + port);
            System.exit(1);
        }
        while(true)
        {
            try
            {
                Socket clientSocket = serverSocket.accept();
                new Thread(new ServerThread(clientSocket)).start();
            }
            catch(IOException e)
            {
            }
        }
    }
}
public class ServerThread implements Runnable
{
    static Socket clientSocket = null;

    public ServerThread(Socket clientSocket)
    {
        this.clientSocket = clientSocket;
    }

    public void run()
    {
        String headerline = null;
        DataOutputStream out = null;
        BufferedReader in = null;
        int i;
        try
        {
            out = new DataOutputStream(clientSocket.getOutputStream());
            in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
            while((headerline = in.readLine()).length() != 0)
            {
                System.out.println(headerline);
            }
        }
        catch(Exception e)
        {
        }
    }
}
First, @skaffman's comment is spot on. You should not catch and ignore exceptions the way your code is currently doing. In general, it is a terrible practice. In this case, you could well be throwing away the evidence that would tell you what the real problem is.
Second, I think you might be suffering from a misapprehension of what a server is capable of. No matter how you implement it, a server can only handle a certain number of requests per second. If you throw more requests at it than that, some have to be dropped.
What I suspect is happening is that you are sending too many requests in a short period of time, and overwhelming the operating system's request buffer.
When your code binds to a server socket, the operating system sets up a request queue to hold incoming requests on the bound IP address/port. This queue has a finite size, and if the queue is full when a new request comes, the operating system will drop requests. This means that if your application is not able to accept requests fast enough, some will be dropped.
What can you do about it?
There is an overload of ServerSocket.bind(...) that allows you to specify the backlog of requests to be held in the OS-level queue. You could use this ... or use a larger backlog.
You could change your main loop to pull requests from the queue faster. One issue with your current code is that you are creating a new Thread for each request. Thread creation is expensive, and you can reduce the cost by using a thread pool to recycle threads used for previous requests.
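Both changes are small. A rough sketch of what they might look like applied to the code above (the backlog of 100 and the pool size of 50 are made-up numbers you would tune for your own load):

import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public final class WebServer {
    public static void main(String[] argv) throws Exception {
        int port = 6789;
        // Second argument is the backlog: how many pending connections the OS will queue.
        ServerSocket serverSocket = new ServerSocket(port, 100);

        // Bounded pool: threads are reused instead of being created once per request.
        ExecutorService pool = Executors.newFixedThreadPool(50);
        while (true) {
            Socket clientSocket = serverSocket.accept();
            pool.submit(new ServerThread(clientSocket));
        }
    }
}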
CAVEATS
You need to be a bit careful. It is highly likely that you can modify your application to accept (not drop) more requests in the short term. But in the long term, you should only accept requests as fast as you can actually process them. If you accept them faster than you can process them, a number of bad things can happen:
You will use a lot of memory with all of the threads trying to process requests. This will increase CPU overheads in various ways.
You may increase contention for internal Java data structures, databases and so on, tending to reduce throughput.
You will increase the time taken to process and reply to individual GET requests. If the delay is too long, the client may time out the request ... and send it again. If this happens, the work done by the server will be wasted.
To defend yourself against this, it is actually best to NOT eagerly accept as many requests as you can. Instead, use a bounded thread pool, and tune the pool size (etc) to optimize the throughput rate while keeping the time to process individual requests within reasonable limits.
I actually discovered the problem was this:
static Socket clientSocket = null;
Once I removed the static, it works perfectly now. With the field declared static, every ServerThread instance shared the same Socket reference, so concurrent connections overwrote each other.