Java Multithreaded Web Server - Not receiving multiple GET requests

I have the start of a very basic multi-threaded web server. It can receive all GET requests as long as they come one at a time. However, when multiple GET requests come in at the same time, sometimes they are all received, and other times some are missing.
I tested this by creating an HTML page with multiple image tags pointing to my web server and opening the page in Firefox. I always use shift+refresh.
Here is my code; I must be doing something fundamentally wrong.
import java.io.*;
import java.net.*;

public final class WebServer
{
    public static void main(String argv[]) throws Exception
    {
        int port = 6789;
        ServerSocket serverSocket = null;
        try
        {
            serverSocket = new ServerSocket(port);
        }
        catch (IOException e)
        {
            System.err.println("Could not listen on port: " + port);
            System.exit(1);
        }
        while (true)
        {
            try
            {
                Socket clientSocket = serverSocket.accept();
                new Thread(new ServerThread(clientSocket)).start();
            }
            catch (IOException e)
            {
            }
        }
    }
}

public class ServerThread implements Runnable
{
    static Socket clientSocket = null;

    public ServerThread(Socket clientSocket)
    {
        this.clientSocket = clientSocket;
    }

    public void run()
    {
        String headerline = null;
        DataOutputStream out = null;
        BufferedReader in = null;
        int i;
        try
        {
            out = new DataOutputStream(clientSocket.getOutputStream());
            in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
            while ((headerline = in.readLine()).length() != 0)
            {
                System.out.println(headerline);
            }
        }
        catch (Exception e)
        {
        }
    }
}

First, @skaffman's comment is spot on. You should not catch-and-ignore exceptions like your code is currently doing. In general, it is a terrible practice. In this case, you could well be throwing away the evidence that would tell you what the real problem is.
Second, I think you might be suffering from a misapprehension of what a server is capable of. No matter how you implement it, a server can only handle a certain number of requests per second. If you throw more requests at it than that, some have to be dropped.
What I suspect is happening is that you are sending too many requests in a short period of time, and overwhelming the operating system's request buffer.
When your code binds to a server socket, the operating system sets up a request queue to hold incoming requests on the bound IP address/port. This queue has a finite size, and if the queue is full when a new request arrives, the operating system will drop requests. This means that if your application is not able to accept requests fast enough, some will be dropped.
What can you do about it?
There is an overload of ServerSocket.bind(...) that allows you to specify the backlog of requests to be held in the OS-level queue. You could use this to ask for a larger backlog.
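For example, a minimal sketch (the backlog value of 200 is an arbitrary example, and the OS may silently clamp it):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class BacklogExample {
    public static void main(String[] args) throws IOException {
        // Create an unbound socket, then bind with an explicit backlog.
        ServerSocket serverSocket = new ServerSocket();
        // Ask the OS to queue up to 200 pending connections instead of
        // the implementation-specific default (often around 50).
        serverSocket.bind(new InetSocketAddress(6789), 200);
    }
}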
You could change your main loop to pull requests from the queue faster. One issue with your current code is that you are creating a new Thread for each request. Thread creation is expensive, and you can reduce the cost by using a thread pool to recycle threads used for previous requests.
CAVEATS
You need to be a bit careful. It is highly likely that you can modify your application to accept (not drop) more requests in the short term. But in the long term, you should only accept requests as fast as you can actually process them. If it accepts them faster than you can process them, a number of bad things can happen:
You will use a lot of memory with all of the threads trying to process requests. This will increase CPU overheads in various ways.
You may increase contention for internal Java data structures, databases and so on, tending to reduce throughput.
You will increase the time taken to process and reply to individual GET requests. If the delay is too long, the client may timeout the request ... and send it again. If this happens, the work done by the server will be wasted.
To defend yourself against this, it is actually best to NOT eagerly accept as many requests as you can. Instead, use a bounded thread pool, and tune the pool size (etc) to optimize the throughput rate while keeping the time to process individual requests within reasonable limits.
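For illustration, here is a minimal sketch of the accept loop using a bounded pool, reusing the ServerThread Runnable from the question; the pool size of 16 is an arbitrary placeholder you would tune:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PooledWebServer {
    public static void main(String[] args) throws IOException {
        int port = 6789;
        // A fixed-size pool recycles threads and puts an upper bound on
        // concurrent request processing. 16 is a placeholder; tune it.
        ExecutorService pool = Executors.newFixedThreadPool(16);
        ServerSocket serverSocket = new ServerSocket(port);
        while (true) {
            Socket clientSocket = serverSocket.accept();
            pool.execute(new ServerThread(clientSocket)); // reuses the Runnable above
        }
    }
}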

I actually discovered the problem was this:
static Socket clientSocket = null;
Once I removed the static modifier, it worked perfectly. With static there is only one clientSocket field shared by every ServerThread, so each newly accepted connection overwrote the socket that earlier threads were still serving.

Related

Optimizing a Java multithreaded server that uses an InputStreamReader

I'm currently working on a project where I have to host a server which fetches an input stream, parses the data and sends it to the database. Every client that connects to my server sends an input stream which never stops once it is connected. Every client is assigned a socket and its own parser thread object so the server can deal with the data stream coming from the client. The parser object just deals with the incoming data and sends it to the database.
Server / parser generator:
public void generateParsers() {
    while (keepRunning) {
        try {
            Socket socket = s.accept();
            // new connection
            t = new Thread(new Parser(socket));
            t.start();
        } catch (IOException e) {
            appLog.severe(e.getMessage());
        }
    }
}
Parser thread:
@Override
public void run() {
    while (!socket.isClosed() && socket.isConnected()) {
        try {
            BufferedReader bufReader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            String line = bufReader.readLine();
            String data = "";
            if (line == null) {
                socket.close();
            } else if (Objects.equals(line, "<DATA")) {
                while (!Objects.equals(line, "</DATA>")) {
                    data += line;
                    line = bufReader.readLine();
                }
                /*
                 Send the string that was built
                 from the client's data stream to the database
                 using the parse() function.
                */
                parse(data);
            }
        } catch (IOException e) {
            System.out.println("ERROR : " + e);
        }
    }
}
My setup is functional, but the problem is that it puts too much stress on my server when too many clients are connected and thus too many threads are parsing data concurrently. The parsing of the incoming data and the sending of the data to the database are hardly affecting the performance at all. The bottleneck is mostly the concurrent reading of the connected clients' data streams.
Is there any way that I can optimize my current setup? I was thinking of capping the number of connections and, once a full data file is received, parsing it and moving to the next client in the connection queue, or something similar.
The bottleneck is mostly the concurrent reading
No. The bottleneck is string concatenation. Use a StringBuffer or StringBuilder.
And probably improper behaviour when a client disconnects. It's hard to believe this works at all. It shouldn't:
You should use the same BufferedReader for the life of the socket, otherwise you can lose data.
Socket.isClosed() and Socket.isConnected() don't do what you think they do: the correct loop termination condition is readLine() returning null, or throwing an IOException:
while ((line = bufReader.readLine()) != null)
Capping the number of concurrent connections can't possibly achieve anything if the clients never disconnect. All you'll accomplish is never listening to clients beyond the first N to connect, which can't possibly be what you want. 'Move to the next client' will never happen.
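Putting those two fixes together, the Parser's run() might look like this sketch; it assumes the question's socket field, parse() method, and imports (java.io.*, java.util.Objects), and skips the framing lines rather than appending them:

@Override
public void run() {
    try (BufferedReader bufReader = new BufferedReader(
            new InputStreamReader(socket.getInputStream()))) {
        String line;
        // One BufferedReader for the life of the socket; readLine() returning
        // null is the only reliable end-of-stream signal.
        while ((line = bufReader.readLine()) != null) {
            if (Objects.equals(line, "<DATA")) {
                StringBuilder data = new StringBuilder(); // avoids O(n^2) string concatenation
                while ((line = bufReader.readLine()) != null
                        && !Objects.equals(line, "</DATA>")) {
                    data.append(line);
                }
                parse(data.toString());
            }
        }
    } catch (IOException e) {
        System.out.println("ERROR : " + e);
    }
}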
If your problem is indeed that whatever you are doing while a client is connected is expensive, you will have to use a client queue. The simplest way to do this is to use an ExecutorService with a maximum of N threads.
For example
private ExecutorService pool = Executors.newFixedThreadPool(N);
...
and then
Socket socket = s.accept();
pool.submit(new Parser(socket));
This will limit concurrent client handling to N at a time, and queue any additional clients that exceed N.
Also, depending on what you are doing with the data, you could split the process into phases, for example:
Read raw data from the client and enqueue it for processing; close the socket etc. so you can save resources.
Process the data in a separate thread (possibly a thread pool) and enqueue the result.
Do something with the result (check for validity, persist into the DB etc.) in another pool.
This is especially helpful if you have some blocking or expensive operations, like network I/O.
It looks like in your case the client does not have to wait for the whole backend process to complete. It only needs to deliver the data, so splitting data reading and parsing/persisting into separate phases (subtasks) sounds like a reasonable approach, as the sketch below illustrates.
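A minimal sketch of that split, with hypothetical enqueue() and parseAndPersist() methods; the socket-reading threads call enqueue() once a full payload has been read, and a small pool does the parsing and DB writes:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class StagedPipeline {
    // Phase 1 output: raw payloads waiting to be parsed/persisted.
    private final BlockingQueue<String> rawData = new LinkedBlockingQueue<>();
    // Phase 2: a small pool parses and writes to the database.
    private final ExecutorService parserPool = Executors.newFixedThreadPool(4);

    public void start() {
        for (int i = 0; i < 4; i++) {
            parserPool.execute(() -> {
                try {
                    while (true) {
                        String payload = rawData.take(); // blocks until work arrives
                        parseAndPersist(payload);        // hypothetical parse + DB write
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
    }

    // Called by the socket-reading threads once a full payload has been read.
    public void enqueue(String payload) {
        rawData.add(payload);
    }

    private void parseAndPersist(String payload) {
        // parse the payload and store it in the database (elided)
    }
}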

Java: Single socket for read and write operations. Full duplex

I have to implement sending data from a specific source port and at the same time listen on that port. Full duplex. Does anybody know how to implement this in Java? I tried to create a separate thread for listening on the socket's input stream, but it doesn't work. I cannot bind a ServerSocket and a client socket to the same source port, and the same goes for Netty.
Is there any solution for full duplex?
void init() {
    socket = new Socket(InetAddress.getByName(Target.getHost()), Target.getPort(),
            InetAddress.getByName("localhost"), 250);
    in = new DataInputStream(socket.getInputStream());
    out = new DataOutputStream(socket.getOutputStream());
}

private static void writeAndFlush(OutputStream out, byte[] b) throws IOException {
    out.write(b);
    out.flush();
}

public class MessageReader implements Runnable {
    @Override
    public void run() {
        // this method throws an EOF exception
        read(in);
    }

    private void read(DataInputStream in) {
        while (isConnectionAlive()) {
            StringBuffer strBuf = new StringBuffer();
            byte[] b = new byte[1000];
            while ((b[0] = (byte) in.read(b)) != 3) {
                strBuf.append(new String(b));
            }
            log.debug(strBuf.toString());
        }
    }
}
What you're trying to do is quite strange: a ServerSocket is a fully implemented socket that accepts connections; it handles its own messages, and you definitely cannot piggy-back another socket on top of it.
Full duplex is fairly simple to do with NIO:
Create a Channel for your Socket in non-blocking mode
Add read to the interest OPs
Sleep with a Selector's select() method
Read any readable bytes, write any writable bytes
If writing is done, remove write from interest OPs
GOTO 3.
If you need to write, add bytes to a buffer, add write to interest OPs and wake up selector. (slightly simplified, but I'm sure you can find your way around the Javadoc)
This way you will be completely loading the outgoing buffer every time there is space and reading from the incoming one at the same time (well, single thread, but you don't have to finish writing to start reading etc).
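For illustration, a condensed sketch of that selector loop; the host, port, and payload are placeholders, and real code needs more error handling:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class FullDuplexSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        SocketChannel channel = SocketChannel.open();
        channel.configureBlocking(false); // 1. non-blocking mode
        boolean connected = channel.connect(new InetSocketAddress("localhost", 8000));
        channel.register(selector, connected
                ? SelectionKey.OP_READ | SelectionKey.OP_WRITE // 2. read/write interest
                : SelectionKey.OP_CONNECT);

        ByteBuffer readBuf = ByteBuffer.allocate(1024);
        ByteBuffer writeBuf = ByteBuffer.wrap("hello".getBytes()); // placeholder payload

        while (true) {
            selector.select(); // 3. sleep until the channel is ready for something
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isConnectable() && channel.finishConnect()) {
                    key.interestOps(SelectionKey.OP_READ | SelectionKey.OP_WRITE);
                }
                if (key.isReadable()) { // 4. read whatever has arrived...
                    readBuf.clear();
                    if (channel.read(readBuf) == -1) {
                        channel.close();
                        return;
                    }
                }
                if (key.isValid() && key.isWritable()) { // ...and write whatever fits
                    channel.write(writeBuf);
                    if (!writeBuf.hasRemaining()) {
                        // 5. done writing: drop write from the interest ops
                        key.interestOps(SelectionKey.OP_READ);
                    }
                }
            }
            selector.selectedKeys().clear(); // 6. GOTO 3
        }
    }
}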
I ran into the same question and decided to answer it myself. I would like to share the code repo with you guys. It is really simple, but you can get the idea and make your own stuff work from this elaborate example. The steps coincidentally look like Ordous's solution.
https://github.com/khanhhua/full-duplex-chat
Feel free to clone! It's my weekend homework.
Main thread:
Create background thread(s) that will connect to any target machine(s).
These threads will connect to the target machines, transmit data, and die.
Create an infinite loop:
Listen for incoming connections.
Thread off any connection to handle I/O.
Classes:
Server
Listens for incoming connections and threads off a Client object.
Client
This class is created upon the server accepting the incoming connection; the TcpClient or NetClient (I forget what Java calls it) is used to send data. Upon completion it dies.
Target
Is created during start-up, connects to a specific target and sends data.
Once complete it dies.

Java - Server that services each client in a separate thread?

I am trying to set up my MessageServer class so that it services each client in a separate request (you'll see below that it's pretty linear right now).
How should I go about it?
import java.net.*;
import java.io.*;

public class MessageServer {
    public static final int PORT = 6100;

    public static void main(String[] args) {
        Socket client = null;
        ServerSocket sock = null;
        BufferedReader reader = null;
        try {
            sock = new ServerSocket(PORT);
            // now listen for connections
            while (true) {
                client = sock.accept();
                reader = new BufferedReader(new InputStreamReader(client.getInputStream()));
                Message message = new MessageImpl(reader.readLine());
                // set the appropriate character counts
                message.setCounts();
                // now serialize the object and write it to the socket
                ObjectOutputStream soos = new ObjectOutputStream(client.getOutputStream());
                soos.writeObject(message);
                System.out.println("wrote message to the socket");
                client.close();
            }
        }
        catch (IOException ioe) {
            System.err.println(ioe);
        }
    }
}
Sorry, but your question doesn't make much sense.
If we are using the term "request" in the normal way, a client sends a request to the server and the server processes each request. It simply makes no sense for a server to not service the requests separately (in some sense).
Perhaps you are asking something different. (Do you mean, "service each client request in a separate thread"?) Whatever you mean, please review your terminology.
Given that you are talking about executing requests in different threads, then using the ExecutorService API is a good choice. Use an implementation class that allows you to put an upper bound on the number of worker threads. If you don't, you open yourself up for problems where overload results in the allocation of large numbers of threads, which only makes the server slower. (Besides, creating new threads is not cheap. It pays to recycle them.)
You should also consider configuring your executor so that it doesn't have a request queue. You want the executor service to block the thread that is trying to submit the job if there isn't a worker available. Let the operating system queue incoming connections / requests at the ServerSocket level. If you queue requests internally, you can run into the situation where you are wasting time by processing requests that the client-side has already timed out / abandoned.
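As a hedged sketch of such a configuration: a ThreadPoolExecutor over a SynchronousQueue holds no queued requests, and CallerRunsPolicy makes the submitting (accept) thread execute the task itself when every worker is busy, which is one way to approximate blocking the submitter:

import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedExecutorFactory {
    public static ThreadPoolExecutor newBoundedExecutor(int workers) {
        // SynchronousQueue holds no requests: a task is either handed
        // directly to an idle worker thread or rejected.
        // CallerRunsPolicy makes the accept loop run the task itself when
        // all workers are busy, so the OS-level ServerSocket backlog does
        // the queueing instead of the JVM.
        return new ThreadPoolExecutor(
                workers, workers,
                0L, TimeUnit.MILLISECONDS,
                new SynchronousQueue<>(),
                new ThreadPoolExecutor.CallerRunsPolicy());
    }
}

Combined with the code above, each accepted client would then be handed off with executor.execute(...) instead of being processed inline in the accept loop.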

What is a suitable architecture?

I have tested a socket connection program designed so that each socket connection is handled in its own thread and enqueues its data, while another thread for the DB processor picks items from the queue and runs through a number of SQL statements. I notice that the bottleneck is the DB processing. I would like to get some ideas: is what I am doing the right architecture, or should I change or improve my design flow?
The requirement is to capture data via socket connections, run it through a DB process, and then store it accordingly.
public class cServer
{
    // fields such as message, connCreated and dbconn are declared elsewhere in the original class
    private LinkedBlockingQueue<String> databaseQueue = new LinkedBlockingQueue<String>();

    class ConnectionHandler implements Runnable {
        private Socket receivedSocketConn1;

        ConnectionHandler(Socket receivedSocketConn1) {
            this.receivedSocketConn1 = receivedSocketConn1;
        }

        // gets data from an inbound connection and queues it for database update
        public void run() {
            databaseQueue.add(message); // put to db queue (reading of message elided)
        }
    }

    class DatabaseProcessor implements Runnable {
        public void run() {
            // open database connection
            createConnection();
            try {
                while (true) {
                    // keep taking messages from the queue added by ConnectionHandler;
                    // here I will have a number of queries to run in terms of
                    // selects, inserts and updates.
                    message = databaseQueue.take();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        void createConnection() {
            System.out.println("Create Connection");
            connCreated = new Date();
            try {
                dbconn = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/test1?" + "user=user1&password=*******");
                dbconn.setAutoCommit(false);
            }
            catch (Throwable ex) {
                ex.printStackTrace(System.out);
            }
        }
    }

    public void main()
    {
        new Thread(new DatabaseProcessor()).start(); // starts the DatabaseProcessor
        try
        {
            final ServerSocket serverSocketConn = new ServerSocket(8000);
            while (true) {
                try {
                    Socket socketConn1 = serverSocketConn.accept();
                    new Thread(new ConnectionHandler(socketConn1)).start();
                }
                catch (Exception e) {
                    e.printStackTrace(System.out);
                }
            }
        }
        catch (Exception e) {
            e.printStackTrace(System.out);
        }
    }
}
It's hard (read 'impossible') to judge an architecture without the requirements. So I will just make some up:
Maximum Throughput:
Don't use a database; write to a flat file, possibly stored on something fast like a solid-state disk.
Guaranteed Persistence (if the user gets an answer not consisting of an error, the data must be stored securely):
Make the whole thing single-threaded and save everything in a database with redundant disks. Make sure you have a competent DBA who knows about backup and recovery. Test those at regular intervals.
Minimum time for finishing the user request:
Your approach seems reasonable.
Minimum time for finishing the user request + Maximizing Throughput + Good Persistence (whatever that means):
Your approach seems good. You might plan for multiple threads processing the DB requests. But test how much (more) throughput you actually get and where precisely the bottleneck is (Network, DB CPU, IO, Lock contention ...). Make sure you don't introduce bugs by using a concurrent approach.
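For illustration, a hedged sketch of that multi-threaded DB option, reusing the question's queue and connection string; the worker count of 4 is a placeholder to be tuned against measurements:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.LinkedBlockingQueue;

public class MultiWorkerDb {
    private final LinkedBlockingQueue<String> databaseQueue = new LinkedBlockingQueue<>();

    // Several workers drain the same queue, each holding its own JDBC connection.
    public void startWorkers(int workers) { // e.g. 4; tune against measurements
        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/test1?user=user1&password=*******")) {
                    while (true) {
                        String message = databaseQueue.take(); // blocks until work arrives
                        // run the selects/inserts/updates for this message on conn (elided)
                    }
                } catch (SQLException | InterruptedException e) {
                    e.printStackTrace(System.out);
                }
            }).start();
        }
    }
}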
Generally, your architecture sounds correct. You need to make sure that your two threads are synchronised correctly when reading/writing from/to the queue.
I am not sure what you mean by the bottleneck being the DB processing. If DB processing takes a long time and you end up with a long queue, there's not much you can do apart from having multiple threads perform the DB processing (assuming the processing can be parallelised, of course) or doing some performance tuning in the DB thread.
If you post some specific code that you believe is causing the problem, we can have another look.
You don't need two threads for this simple task. Just read the socket and execute the statements.

Java Sockets and Dropped Connections

What's the most appropriate way to detect if a socket has been dropped or not? Or whether a packet did actually get sent?
I have a library for sending Apple Push Notifications to iPhones through the Apple gateways (available on GitHub). Clients need to open a socket and send a binary representation of each message; but unfortunately Apple doesn't return any acknowledgement whatsoever. The connection can be reused to send multiple messages as well. I'm using simple Java Socket connections. The relevant code is:
Socket socket = socket(); // returns a reused open socket, or a new one
socket.getOutputStream().write(m.marshall());
socket.getOutputStream().flush();
logger.debug("Message \"{}\" sent", m);
In some cases, if a connection is dropped while a message is being sent or right before, Socket.getOutputStream().write() still finishes successfully. I expect this is because the TCP window isn't exhausted yet.
Is there a way that I can tell for sure whether a packet actually got onto the network or not? I experimented with the following two solutions:
Insert an additional socket.getInputStream().read() operation with a 250ms timeout. This forces a read operation that fails when the connection was dropped, but otherwise hangs for 250ms.
Set the TCP send buffer size (e.g. Socket.setSendBufferSize()) to the message's binary size.
Both methods work, but they significantly degrade the quality of service: throughput goes from 100 messages/second to about 10 messages/second at most. A sketch of the first workaround is below.
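For reference, the first workaround looks roughly like this sketch (probe() and its 250ms timeout are illustrative, not part of the library):

import java.io.IOException;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class ProbeExample {
    // Returns true if the connection still looks alive after a short read probe.
    static boolean probe(Socket socket) throws IOException {
        socket.setSoTimeout(250); // cap the probe at 250 ms
        try {
            int b = socket.getInputStream().read();
            return b != -1;       // -1 means the peer closed the stream
        } catch (SocketTimeoutException e) {
            return true;          // nothing to read: connection presumed alive
        }
    }
}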
Any suggestions?
UPDATE:
Since multiple answers questioned whether the described behavior is possible, I constructed "unit" tests of the behavior I'm describing. Check out the unit cases at Gist 273786.
Both unit tests have two threads, a server and a client. The server closes while the client is sending data, yet no IOException is thrown. Here is the main method:
public static void main(String[] args) throws Throwable {
    final int PORT = 8005;
    final int FIRST_BUF_SIZE = 5;
    final Throwable[] errors = new Throwable[1];
    final Semaphore serverClosing = new Semaphore(0);
    final Semaphore messageFlushed = new Semaphore(0);

    class ServerThread extends Thread {
        public void run() {
            try {
                ServerSocket ssocket = new ServerSocket(PORT);
                Socket socket = ssocket.accept();
                InputStream s = socket.getInputStream();
                s.read(new byte[FIRST_BUF_SIZE]);
                messageFlushed.acquire();
                socket.close();
                ssocket.close();
                System.out.println("Closed socket");
                serverClosing.release();
            } catch (Throwable e) {
                errors[0] = e;
            }
        }
    }

    class ClientThread extends Thread {
        public void run() {
            try {
                Socket socket = new Socket("localhost", PORT);
                OutputStream st = socket.getOutputStream();
                st.write(new byte[FIRST_BUF_SIZE]);
                st.flush();
                messageFlushed.release();
                serverClosing.acquire(1);
                System.out.println("writing new packets");
                // sending more packets while server already
                // closed connection
                st.write(32);
                st.flush();
                st.close();
                System.out.println("Sent");
            } catch (Throwable e) {
                errors[0] = e;
            }
        }
    }

    Thread thread1 = new ServerThread();
    Thread thread2 = new ClientThread();
    thread1.start();
    thread2.start();
    thread1.join();
    thread2.join();

    if (errors[0] != null)
        throw errors[0];
    System.out.println("Run without any errors");
}
[Incidentally, I also have a concurrency testing library that makes the setup a bit better and clearer. Check out the sample at the gist as well.]
When run I get the following output:
Closed socket
writing new packets
Finished writing
Run without any errors
This may not be of much help to you, but technically both of your proposed solutions are incorrect. OutputStream.flush() and whatever other API calls you can think of are not going to do what you need.
The only portable and reliable way to determine if a packet has been received by the peer is to wait for a confirmation from the peer. This confirmation can either be an actual response, or a graceful socket shutdown. End of story - there really is no other way, and this is not Java specific - it is fundamental network programming.
If this is not a persistent connection - that is, if you just send something and then close the connection - the way you do it is you catch all IOExceptions (any of them indicate an error) and you perform a graceful socket shutdown:
1. socket.shutdownOutput();
2. wait for inputStream.read() to return -1, indicating the peer has also shut down its socket.
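A small sketch of that sequence, assuming an already-connected socket; the 5-second timeout is a placeholder:

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;

public class GracefulClose {
    // Send everything, then confirm the peer saw it by waiting for its FIN.
    static void closeGracefully(Socket socket) throws IOException {
        socket.getOutputStream().flush();
        socket.shutdownOutput();   // step 1: half-close our side
        socket.setSoTimeout(5000); // placeholder: don't wait forever
        InputStream in = socket.getInputStream();
        while (in.read() != -1) {
            // step 2: drain until read() returns -1, i.e. the peer
            // has also shut down its side of the connection
        }
        socket.close();
    }
}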
After much trouble with dropped connections, I moved my code to use the enhanced notification format, which pretty much means changing the packet layout you send. This way Apple will not drop a connection if an error happens, but will write a feedback code to the socket.
If you're sending information to Apple using the TCP/IP protocol, you have to be receiving acknowledgements. However, you stated:
Apple doesn't return any acknowledgement whatsoever
What do you mean by this? TCP/IP guarantees delivery, therefore the receiver MUST acknowledge receipt. It does not guarantee when the delivery will take place, however.
If you send a notification to Apple and break your connection before receiving the ACK, there is no way to tell whether you were successful or not, so you simply must send it again. If pushing the same information twice is a problem or is not handled properly by the device, then there is a problem. The solution is to fix the device's handling of the duplicate push notification: there's nothing you can do on the pushing side.
Comment Clarification/Question
OK. The first part of what you understand is your answer to the second part. Only the packets that have received ACKs have been sent and received properly. I'm sure we could think of some very complicated scheme for keeping track of each individual packet ourselves, but TCP is supposed to abstract this layer away and handle it for you. On your end you simply have to deal with the multitude of failures that could occur (in Java, if any of these occur, an exception is raised). If there is no exception, the data you just tried to send is guaranteed to be sent by the TCP/IP protocol.
Is there a situation where data is seemingly "sent" but not guaranteed to be received, and where no exception is raised? The answer should be no.
Examples
Nice examples; this clarifies things quite a bit. I would have thought an error would be thrown. In the example posted, an error is thrown on the second write, but not the first. This is interesting behavior... and I wasn't able to find much information explaining why it behaves like this. It does, however, explain why we must develop our own application-level protocols to verify delivery.
It looks like you are correct that without a protocol for confirmation there is no guarantee that the Apple device will receive the notification. Apple also only queues the last message. Looking a little at the service, I was able to determine that it is more of a convenience for the customer; it cannot be used to guarantee delivery and must be combined with other methods. I read this from the following source:
http://blog.boxedice.com/2009/07/10/how-to-build-an-apple-push-notification-provider-server-tutorial/
It seems like the answer is no: you cannot tell for sure. You may be able to use a packet sniffer like Wireshark to tell if it was sent, but this still won't guarantee it was received and delivered to the device, due to the nature of the service.
