I am load testing a chat application to see how many users it can handle. What I am trying is as follows:
I run my chat application and log in to the chat as a single user 1000 times in a for loop. Here is the relevant part of my code:
public void LoginChatConnect() {
    try {
        for (int i = 0; i < 1000; i++) {
            System.out.println("inside loginChatLogin");
            String userId = "Rahul";
            String password = "rahul";
            sockChatListen = new Socket("localhost", 5004);
            dosChatListen = new DataOutputStream(
                    sockChatListen.getOutputStream());
            disChatListen = new DataInputStream(sockChatListen.getInputStream());
            dosChatListen.writeUTF(userId);
            dosChatListen.writeUTF(password);
            dosChatListen.flush();
            boolean b = sockChatListen.isClosed();
            System.out.println("connection open**********************" + b);
            sockChatListen.close();
            System.out.println("connection closed**********************" + b);
            count++;
            System.out.println("count" + count);
        }
    } catch (UnknownHostException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
In the code above I am logging in as a single user 1000 times. But after a certain number of logins it gives me this socket error:
java.net.SocketException: Connection reset by peer: socket write error
at java.net.SocketOutputStream.socketWrite0(Native Method)
Here I am creating every connection to the same port, 5004. Why am I getting this error after 100+ successful connections (logins)?
How can I fix this problem?
Any suggestions will be helpful.
What I understand from your post is that you want to simulate 1000 users logging in to the chat server concurrently; I believe you are trying to load test your chat server.
However, from your code I see that you establish and close the socket connection on every iteration of the loop. That is similar to 1000 users waiting in a queue and attempting to log in to the server one after the other. It does not simulate a concurrent load but 1000 sequential calls to the server, and would not be an appropriate way to load test your server.
My comments are based on the above understanding. Please clarify if this is not the case.
Regarding the exception you get, I have no idea why it should fail after 100+ attempts. Maybe you need to check your server-side code to figure out the problem.
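If the goal is a concurrent load, each simulated user needs its own thread. Here is a minimal sketch of that idea, reusing the host, port and credentials from your code; the login exchange is reduced to the two writeUTF calls from your loop body:

import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

public class ConcurrentLoginTest {

    public static void main(String[] args) throws InterruptedException {
        Thread[] users = new Thread[1000];
        for (int i = 0; i < users.length; i++) {
            users[i] = new Thread(ConcurrentLoginTest::loginOnce);
            users[i].start();
        }
        for (Thread t : users) {
            t.join(); // wait for all simulated users to finish
        }
    }

    private static void loginOnce() {
        // try-with-resources closes the socket even if a write fails
        try (Socket socket = new Socket("localhost", 5004);
             DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
            out.writeUTF("Rahul");
            out.writeUTF("rahul");
            out.flush();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Note that 1000 raw threads can strain the client machine itself; a fixed-size thread pool (ExecutorService) is a gentler variant of the same idea.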
I am experiencing an error which I am at a loss to explain. I feel like I'm so close but I just can't seem to get a connection.
I have set up two RMI server objects on a remote server and I want to connect to them and use them. I am able to connect to the RMIRegistry on the server on port 1099, and with the call Registry.list() I get the correct names of the stubs which are set up on the server.
Now for my code...
Server object 1
Registry registry = null;
try {
    registry = LocateRegistry.createRegistry(1099);
} catch (RemoteException e) {
    System.out.println("Registry already exists - connecting...");
    try {
        registry = LocateRegistry.getRegistry(1099);
        String[] objects = registry.list();
        for (int n = 0; n < objects.length; n++) {
            System.out.println(objects[n]);
        }
    } catch (RemoteException ex) {
        ex.printStackTrace();
        System.out.println("RMI registry connection fail.");
        System.exit(1);
    }
}

BioBingoLogic bb = new BioBingoLogic();
BioBingoInterface bbStub = null;
try {
    bbStub = (BioBingoInterface)
            UnicastRemoteObject.exportObject(bb, 9753);
} catch (RemoteException e) {
    e.printStackTrace();
    System.out.println("RemoteServer export fail.");
    System.exit(1);
}
try {
    registry.rebind("BioBingoServer", bbStub);
} catch (RemoteException e) {
    e.printStackTrace();
    System.out.println("Registry rebind fail.");
    System.exit(1);
}
Server object 2
Completely the same as Server object 1 only exported on port 9754 and called "DatabaseServer".
Output
My output from these two objects is in the following picture:
[Image: output from running the two server objects]
Client side
The server objects work as I expect them to. It is the Client which doesn't seem to be able to connect to the individual server objects.
System.out.println("Creating RMI Registry stub.");
Registry remoteRegistry = null;
try {
remoteRegistry = LocateRegistry.getRegistry("biobingo", 1099);
String[] objects = remoteRegistry.list();
System.out.println("\nObjects in stub:");
for (int n = 0; n < objects.length; n++) {
System.out.println(objects[n]
}
System.out.println();
System.out.println("Connecting to BioBingoServer object.");
try {
game = (BioBingoInterface) remoteRegistry.lookup("BioBingoServer");
db = (DatabaseInterface) remoteRegistry.lookup("DatabaseServer");
} catch (Exception e) {
e.printStackTrace();
System.out.println("Stub not found.");
System.exit(1);
}
} catch (RemoteException e) {
e.printStackTrace();
System.out.println("Registry not found.");
System.exit(1);
}
System.out.println("Connected to BioBingoServer.");
System.out.println("Connected to DatabaseServer");
biobingo is the IP of the remote server, registered under that alias in my hosts file.
Output
This is where the problem arises....
I get the output in the following picture:
[Image: output from the client-side application]
It should be understood from the picture that I never get an exception of any kind. The client just hangs on the Registry.lookup() call until, I suppose, it gets a timeout from the server; then it executes the next part of the client code, and the calls to the server object throw a RemoteException.
It should be noted that the remote server is behind a NAT; however, the NAT and its firewall are set up to allow all incoming TCP traffic, from all IPs, on all the specified ports: 1099, 9753 and 9754.
I have also verified that the ports are indeed open with a port scanner.
This is where I am at a loss...
Any suggestions to what is preventing me from connecting to the server objects, when I am entirely able to connect to the RMIRegistry?
Any help is greatly appreciated - thank you!
---------------------------------------------
EDIT
---------------------------------------------
I tried running the server objects and the client with the Java VM option:
-Dsun.rmi.transport.tcp.logLevel=VERBOSE
The following picture shows the output and includes description of the flow and where the possible error occurs:
[Image: output with the Java VM option]
Client output
No output, just a 3 minute delay on Registry.lookup(). Afterwards the following code is executed, and then on function calls to the RMI stub there is a 3 minute delay followed by a ConnectException saying connection timed out to 10.230.56.71 (which is the local IP of the server, although I'm connecting to its global IP; so it seems that my call does find its way to the NAT which the server is behind).
Server output
Nothing, really.
I have a client that should keep trying to connect to a server until a connection is established (i.e. until I start the server).
clientSocket = new Socket();
while (!clientSocket.isConnected()) {
    try {
        clientSocket.connect(new InetSocketAddress(serverAddress, serverPort));
    } catch (IOException e) {
        e.printStackTrace();
    }
    // sleep prevents a billion SocketExceptions from being printed,
    // and hopefully stops the server from thinking it's getting DOS'd
    try {
        Thread.sleep(1500);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
After the first attempt, I get a ConnectException; that is expected, since there is nothing to connect to yet. After that, however, I start getting SocketException: Socket closed, which doesn't make sense to me, since clientSocket.isClosed() always returns false, both before and after the connect() call.
How should I change my code to get the functionality I need?
You can't reconnect a Socket, even if the connect attempt failed. You have to close it and create a new one.
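A minimal sketch of that fix, reusing the serverAddress and serverPort variables from the question:

Socket clientSocket = null;
while (clientSocket == null) {
    Socket attempt = new Socket(); // a fresh Socket for every attempt
    try {
        attempt.connect(new InetSocketAddress(serverAddress, serverPort));
        clientSocket = attempt; // success: keep this one
    } catch (IOException e) {
        try {
            attempt.close(); // discard the failed socket
        } catch (IOException ignored) {
        }
        try {
            Thread.sleep(1500); // back off before retrying
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            break;
        }
    }
}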
I've created a client-server connection, something like a chat system. Previously I was using a while loop on the client side, which waited to read a message from the console each time (of course the server has a while loop as well, to serve forever). Now I'm trying to create the connection once at the beginning of the session and then occasionally send a message during the session, i.e. to maintain a permanent, persistent connection.
Currently, without the while loop, the client closes the connection, and I don't know how to work around that.
Here is the client code:
import java.net.*;
import java.io.*;

public class ControlClientTest {
    private Socket socket = null;
    // private BufferedReader console = null;
    private DataOutputStream streamOut = null;

    public static void main(String args[]) throws InterruptedException {
        String IP = "127.0.0.1";
        ControlClientTest client = new ControlClientTest(IP, 5555);
    }

    public ControlClientTest(String serverName, int serverPort) throws InterruptedException {
        System.out.println("Establishing connection. Please wait ...");
        try {
            socket = new Socket(serverName, serverPort);
            System.out.println("Connected: " + socket);
            start();
        } catch (UnknownHostException uhe) {
            System.out.println("Host unknown: " + uhe.getMessage());
        } catch (IOException ioe) {
            System.out.println("Unexpected exception: " + ioe.getMessage());
        }
        String line = "";
        // while (!line.equals(".bye")) {
        try {
            Thread.sleep(1000);
            // TODO get data from input
            // line = console.readLine();
            line = "1";
            if (line.equals("1"))
                line = "1,123";
            streamOut.writeUTF(line);
            streamOut.flush();
        } catch (IOException ioe) {
            System.out.println("Sending error: " + ioe.getMessage());
        }
        // }
    }

    public void start() throws IOException {
        // console = new BufferedReader(new InputStreamReader(System.in));
        streamOut = new DataOutputStream(socket.getOutputStream());
    }
}
And here is the Server code:
import java.awt.*;
import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;

public class ControlServer {
    private Socket socket = null;
    private ServerSocket server = null;
    private DataInputStream streamIn = null;

    public static void main(String args[]) {
        ControlServer server = new ControlServer(5555);
    }

    public ControlServer(int port) {
        try {
            System.out.println("Binding to port " + port + ", please wait ...");
            server = new ServerSocket(port);
            System.out.println("Server started: " + server);
            System.out.println("Waiting for a client ...");
            socket = server.accept();
            System.out.println("Client accepted: " + socket);
            open();
            boolean done = false;
            while (!done) {
                try {
                    String line = streamIn.readUTF();
                    // TODO get the data and do something
                    System.out.println(line);
                    done = line.equals(".bye");
                } catch (IOException ioe) {
                    done = true;
                }
            }
            close();
        } catch (IOException ioe) {
            System.out.println(ioe);
        }
    }

    public void open() throws IOException {
        streamIn = new DataInputStream(new BufferedInputStream(
                socket.getInputStream()));
    }

    public void close() throws IOException {
        if (socket != null)
            socket.close();
        if (streamIn != null)
            streamIn.close();
    }
}
I would like to summarize some good practices regarding the stability of TCP/IP connections which I apply on a daily basis.
Good practice 1 : Built-in Keep-Alive
socket.setKeepAlive(true);
It automatically sends a signal after a period of inactivity and checks for a reply. The keep-alive interval is operating-system dependent, though, and has some shortcomings, but all in all it can improve the stability of your connection.
Good practice 2 : SoTimeout
Whenever you perform a read (or readUTF in your case), your thread will block forever. In my experience this is bad practice, because it makes your application difficult to close; just calling socket.close() is dirty.
A cleaner solution is a simple read timeout (e.g. 200 ms). You can set one with the setSoTimeout method. When the read() method times out, it throws a SocketTimeoutException (which is a subclass of IOException).
socket.setSoTimeout(timeoutInterval);
Here is an example of how to implement the read loop. Note the shutdown condition: just set it to true, and your thread will die peacefully.
while (!shutdown)
{
    try
    {
        // some method that calls your read and parses the message.
        code = readData();
        if (code == null) continue;
    }
    catch (SocketTimeoutException ste)
    {
        // A SocketTimeoutException is a simple read timeout, just ignore it.
        // Other IOExceptions will not be stopped here.
    }
}
Good practice 3 : Tcp No-Delay
Use the following setting when you often exchange small commands that need to be handled quickly.
try
{
    socket.setTcpNoDelay(true);
}
catch (SocketException e)
{
    // ignore: the connection still works without this optimization
}
Good practice 4 : A heartbeat
Actually, there are a lot of side scenarios that are not covered yet.
One of them, for example, is server applications that are designed to communicate with only one client at a time. Sometimes they accept connections, and even accept messages, but never reply to them.
Another one: sometimes when you lose your connection, it can actually take a long time before your OS notices. This is possibly due to the shortcomings described in good practice 1, but it also happens often in more complex network situations (e.g. with RS232-to-Ethernet converters, VMware servers, etc.).
The solution here is a heartbeat: a thread that sends a message every x seconds (e.g. every 15 seconds), while the read loop checks that replies keep arriving. For this you need to expand the code of good practice 2 a little bit.
try
{
    code = readData();
    if (code == null) continue;
    lastRead = System.currentTimeMillis();

    // whenever you receive the heartbeat reply, just ignore it.
    if (MSG_HEARTBEAT.equals(code)) continue;

    // todo: handle other messages
}
catch (SocketTimeoutException ste)
{
    // in a typical situation the soTimeout is about 200ms,
    // the heartbeat interval is usually a couple of seconds,
    // and the heartbeat timeout interval a couple of seconds more.
    if ((heartbeatTimeoutInterval > 0) &&
        ((System.currentTimeMillis() - lastRead) > heartbeatTimeoutInterval))
    {
        // no reply to heartbeat received.
        // end the loop and perform a reconnect.
        break;
    }
}
You need to decide whether your client or your server sends the heartbeat; that decision is not so important. If, for example, your client sends it, then your client needs an additional thread to send the message, your server should send a reply when it receives one, and your client should just continue when it receives the answer (i.e. see the code above). Both parties should check "how long has it been since I last read something?" in a very similar way. A minimal sender thread is sketched below.
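This sketch assumes the same shutdown flag as in the read loop above, plus two hypothetical helpers: a MSG_HEARTBEAT constant and a writeData(String) method that wraps your output stream.

Thread heartbeatSender = new Thread(() -> {
    while (!shutdown) {
        try {
            writeData(MSG_HEARTBEAT); // hypothetical write wrapper
            Thread.sleep(15000);      // heartbeat interval: 15 seconds
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            break;
        } catch (IOException e) {
            break; // writing failed: let the read loop reconnect
        }
    }
});
heartbeatSender.setDaemon(true);
heartbeatSender.start();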
You could wrap a thread around the connection and have it periodically send a status message to keep the line open, say every 30 seconds or whatever. Then, when it actually has data to send, it would reset the keep-alive to 30 seconds after the last transmission. The status message is also useful for checking whether the client is still alive, so at least it can serve as a ping.
Also, you should change your server code: you appear to handle only one connection at the moment. You should loop, and when a socket connection comes in, spawn a thread to handle the client request and go back to listening, as sketched below. I may be reading too much into what may just be your test code, though.
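For reference, a minimal sketch of such an accept loop; handleClient is a hypothetical method that would contain the per-client read loop from the question:

ServerSocket server = new ServerSocket(port);
while (true) {
    Socket client = server.accept(); // blocks until a client connects
    new Thread(() -> handleClient(client)).start(); // one thread per client
}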
Wrap the client socket connection in a thread. Use a blocking queue to wait for messages. There should be only a single sender queue throughout your application, so use a singleton pattern.
e.g.
QueueSingleton queue = QueueSingleton.getSenderQueue();
Message message = queue.take(); // blocks the thread
send(message); // send message to server
When you need to send a message to the server, you can use the blocking queue to send the message.
QueueSingleton queue = QueueSingleton.getSenderQueue();
queue.put(message);
The client thread will wake up and process the message.
For maintaining the connection, use a timer task. This is a special type of thread that calls a run method repeatedly at a specified period. You can use it to post a ping message every so often.
For processing the received messages, you could have another thread waiting on another blocking queue (the receiver queue); the client thread puts each received message on this queue. Both pieces are sketched below.
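Putting those pieces together, here is a minimal sketch; QueueSingleton, the send() helper and the "PING" message are illustrative names, not a fixed API:

import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Singleton holding the single sender queue for the whole application.
final class QueueSingleton {
    private static final BlockingQueue<String> SENDER_QUEUE = new LinkedBlockingQueue<>();

    private QueueSingleton() {
    }

    static BlockingQueue<String> getSenderQueue() {
        return SENDER_QUEUE;
    }
}

class SenderSetup {

    static void startSender() {
        // Sender thread: wakes up whenever a message is queued and writes it out.
        Thread sender = new Thread(() -> {
            BlockingQueue<String> queue = QueueSingleton.getSenderQueue();
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    String message = queue.take(); // blocks until a message arrives
                    send(message);                 // hypothetical socket write
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        sender.start();

        // Timer task: posts a ping every 30 seconds to keep the connection alive.
        Timer timer = new Timer(true); // daemon timer
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                QueueSingleton.getSenderQueue().offer("PING");
            }
        }, 30000, 30000);
    }

    private static void send(String message) {
        // hypothetical: write the message to the socket's output stream
    }
}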
I would like help creating a relay bot with PircBot. For each message the bot receives, I want it to send that message to a channel, and I want this to work across multiple networks. I made a command:
if (split[0].equalsIgnoreCase(commandPrefix + "addnet")) {
    // sendRawLine("QUIT : joining " + split[1]);
    BrookieBot bot = new BrookieBot();
    bot.setVerbose(true);
    addnet = addnet + " " + split[1];
    try {
        bot.connect(split[1]);
        sendMessage("nickserv", "identify pass");
        bot.joinChannel("#brookies-use-of-bot");
    } catch (NickAlreadyInUseException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } catch (IrcException e) {
        e.printStackTrace();
    }
    bot = new BrookieBot();
    quit = 6;
    this.joinChannel("#brookies-use-of-bot");
}
That is how I made it connect to multiple networks, but I want it to recognise the network and relay each message it receives to that channel, no matter which network it came from.
The message should be in this format: [<net>] [<sender>] [<message>]. Thank you for all your help! I have this version: http://www.jibble.org/pircbot.php
I've made one such relay bot before. Let's discuss what you need.
When a bot receives a message in a channel that is to be relayed, the message is sent using an array of bots to their respective channels, based on the server and channel combination, provided that:
The channel selected in the loop is not the current channel
The channel is in the list of to be synchronized channels
The sender is not part of the bot names to prevent infinite loops
The same goes for quit, part, join and anything else you want to relay; the message case is sketched below.
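Here is a minimal sketch of those checks in PircBot's onMessage hook; relayBots, relayChannel and isOneOfOurBots are hypothetical names for your own bookkeeping:

@Override
protected void onMessage(String channel, String sender, String login,
                         String hostname, String message) {
    // never relay messages sent by one of our own bots (prevents infinite loops)
    if (isOneOfOurBots(sender)) return;

    // only relay messages from the synchronized channel
    if (!relayChannel.equalsIgnoreCase(channel)) return;

    String line = "[" + getServer() + "] [" + sender + "] [" + message + "]";
    for (BrookieBot other : relayBots) {
        if (other == this) continue; // skip the bot that received the message
        other.sendMessage(relayChannel, line);
    }
}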
1) PircBot isn't good for multiple networks. Also, it has several design problems.
I would recommend PircBotX.
2) If you really have to use PircBot:
Create one PircBot object per connection.
That will create one thread per PircBot.
Then, create a bus which will distribute the messages amongst PircBot instances.
Be careful with synchronization.
For an example of how to send messages from outside a PircBot object, see JawaBot, which is based on it. A sketch of such a bus follows.
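For illustration, a minimal sketch of such a bus; the class and method names are made up, and only sendMessage comes from PircBot itself:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// A tiny message bus: each bot reports incoming messages here, and the bus
// fans them out to every other connection. CopyOnWriteArrayList keeps
// iteration safe while bots are registered from other threads.
final class RelayBus {
    private final List<BrookieBot> bots = new CopyOnWriteArrayList<>();

    void register(BrookieBot bot) {
        bots.add(bot);
    }

    void relay(BrookieBot source, String channel, String line) {
        for (BrookieBot bot : bots) {
            if (bot != source) {
                bot.sendMessage(channel, line); // PircBot's own send method
            }
        }
    }
}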
First of all, thanks for reading. This is my first time on Stack Overflow as a user, although I've always read it and found useful solutions :D. By the way, sorry if I'm not clear enough explaining myself; I know my English isn't very good.
My socket-based program is showing some strange behaviour and some performance issues. The client and server communicate with each other by reading/writing serialized objects to object input and output streams, in a multi-threaded way. Let me show you the code basics. I have simplified it to be more readable; complete exception handling, for example, is intentionally omitted. The server works like this:
Server:
// (...)
public void serve() {
    if (serverSocket == null) {
        try {
            serverSocket = (SSLServerSocket) SSLServerSocketFactory
                    .getDefault().createServerSocket(port);
            serving = true;
            System.out.println("Waiting for clients...");
            while (serving) {
                SSLSocket clientSocket = (SSLSocket) serverSocket.accept();
                System.out.println("Client accepted.");
                // LjServerThread class is below
                new LjServerThread(clientSocket).start();
            }
        } catch (Exception e) {
            // Exception handling code (...)
        }
    }
}

public void stop() {
    serving = false;
    serverSocket = null;
}

public boolean isServing() {
    return serving;
}
LjServerThread class, one instance created per client:
private SSLSocket clientSocket;
private String IP;
private long startTime;

public LjServerThread(SSLSocket clientSocket) {
    this.clientSocket = clientSocket;
    startTime = System.currentTimeMillis();
    this.IP = clientSocket.getInetAddress().getHostAddress();
}

public synchronized String getClientAddress() {
    return IP;
}

@Override
public void run() {
    ObjectInputStream in = null;
    ObjectOutputStream out = null;
    // This is my protocol handling object, and as you will see below,
    // it works by processing the object received and returning another as response.
    LjProtocol protocol = new LjProtocol();
    try {
        try {
            in = new ObjectInputStream(new BufferedInputStream(
                    clientSocket.getInputStream()));
            out = new ObjectOutputStream(new BufferedOutputStream(
                    clientSocket.getOutputStream()));
            out.flush();
        } catch (Exception ex) {
            // Exception handling code (...)
        }
        LjPacket output;
        while (true) {
            output = protocol.processMessage((LjPacket) in.readObject());
            // When the object received is the finish mark,
            // protocol.processMessage() returns null.
            if (output == null) {
                break;
            }
            out.writeObject(output);
            out.flush();
            out.reset();
        }
        System.out.println("Client " + IP + " finished successfully.");
    } catch (Exception ex) {
        // Exception handling code (...)
    } finally {
        try {
            out.close();
            in.close();
            clientSocket.close();
        } catch (Exception ex) {
            // Exception handling code (...)
        } finally {
            long stopTime = System.currentTimeMillis();
            long runTime = stopTime - startTime;
            System.out.println("Run time: " + runTime);
        }
    }
}
And, the client, is like this:
private SSLSocket socket;

@Override
public void run() {
    LjProtocol protocol = new LjProtocol();
    try {
        socket = (SSLSocket) SSLSocketFactory.getDefault()
                .createSocket(InetAddress.getByName("here-goes-hostIP"),
                        4444);
    } catch (Exception ex) {
    }
    ObjectOutputStream out = null;
    ObjectInputStream in = null;
    try {
        out = new ObjectOutputStream(new BufferedOutputStream(
                socket.getOutputStream()));
        out.flush();
        in = new ObjectInputStream(new BufferedInputStream(
                socket.getInputStream()));
        LjPacket output;
        // As the client is the one which starts the connection, it sends
        // the first object.
        out.writeObject(/* First object */);
        out.flush();
        while (true) {
            output = protocol.processMessage((LjPacket) in.readObject());
            out.writeObject(output);
            out.flush();
            out.reset();
        }
    } catch (EOFException ex) {
        // If all goes OK, EOF should happen when the server disconnects.
        System.out.println("suceed!");
    } catch (Exception ex) {
        // (...)
    } finally {
        try {
            // FIRST STRANGE BEHAVIOUR:
            // I have to comment out the "out.close()" line, else an
            // exception is ALWAYS thrown.
            out.close();
            in.close();
            socket.close();
        } catch (Exception ex) {
            System.out.println("This shouldn't happen!");
        }
    }
}
Well, as you can see, the LjServerThread class, which handles accepted clients on the server side, measures the time each client takes... Normally, it takes between 75 and 120 ms (where x is the IP):
Client x finished successfully.
Run time: 82
Client x finished successfully.
Run time: 80
Client x finished successfully.
Run time: 112
Client x finished successfully.
Run time: 88
Client x finished successfully.
Run time: 90
Client x finished successfully.
Run time: 84
But suddenly, and with no predictable pattern (at least for me):
Client x finished successfully.
Run time: 15426
Sometimes it reaches 25 seconds!
Occasionally a small group of threads runs a little slower, but that doesn't worry me much:
Client x finished successfully.
Run time: 239
Client x finished successfully.
Run time: 243
Why is this happening? Is it perhaps because my server and my client are on the same machine, with the same IP? (For these tests I run the server and the client on the same machine, but they connect over the internet, using my public IP.)
This is how I test it: I make requests to the server like this in main():
for (int i = 0; i < 400; i++) {
    try {
        new LjClientThread().start();
        Thread.sleep(100);
    } catch (Exception ex) {
        // (...)
    }
}
If I run the loop without "Thread.sleep(100)", I get some connection reset exceptions (7 or 8 connections reset out of 400, more or less), but I think I understand why that happens: when serverSocket.accept() accepts a connection, a very small amount of time has to pass before serverSocket.accept() is reached again, and during that time the server cannot accept connections. Could it be because of that? If not, why? It would be rare for 400 connections to arrive at my server at exactly the same time, but it could happen. Without "Thread.sleep(100)", the timing issues are worse as well.
Thanks in advance!
UPDATED:
How stupid, I tested it on localhost... and it doesn't give any problems! With or without "Thread.sleep(100)", it doesn't matter, it works fine! Why?! So, as far as I can see, my theory about why the connection reset is being thrown is not correct. This makes things even stranger! I hope somebody can help me... Thanks again! :)
UPDATED (2):
I have found slightly different behaviour on different operating systems. I usually develop on Linux, and the behaviour I described is what happens on my Ubuntu 10.10. On Windows 7, when I pause 100 ms between connections, everything is fine and all threads are lightning fast; none takes more than 150 ms or so (no slow connection issues!). That is not what happens on Linux. However, when I remove the "Thread.sleep(100)", instead of only some of the connections getting the connection reset exception, all of them fail and throw the exception (on Linux only some of them, 6 or so out of 400, were failing).
Phew! I've just found out that not only the OS but also the JVM environment has a little impact! Not a big deal, but noteworthy. I was using OpenJDK on Linux, and now, with the Oracle JDK, I see that as I reduce the sleep time between connections, it starts failing earlier (with a 50 ms sleep OpenJDK works fine and no exceptions are thrown, while with Oracle's JDK quite a lot fail with a 50 ms sleep, although it works fine with 100 ms).
The server socket has a queue (the backlog) that holds incoming connection attempts. A client encounters a connection reset error when that queue is full. Without the Thread.sleep(100) statement, all of your clients try to connect more or less simultaneously, which results in some of them encountering the connection reset error.
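If that is the cause, one thing to try is the backlog argument of createServerSocket, which requests a longer queue (the operating system may still cap it). Applied to the server code from the question, it would look like this:

// Request room for up to 400 pending connections instead of the default
// (the default backlog is typically around 50).
serverSocket = (SSLServerSocket) SSLServerSocketFactory
        .getDefault().createServerSocket(port, 400);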
Two points I think you may want to research further. Sorry for being a bit vague here, but this is what I think.
1) Under the hood, at the TCP level, there are a few platform-dependent settings that control how long it takes to send/receive data across a socket. The inconsistent delay could be caused by settings such as tcp_syn_retries. You may be interested in looking at http://www.frozentux.net/ipsysctl-tutorial/chunkyhtml/tcpvariables.html#AEN370
2) Your calculated execution time is not only the time it took to complete the work; it also includes the time until finalization is done, which is not guaranteed to happen immediately when an object becomes eligible for finalization.