Many thread connections making the system unstable - Java

Friends,
I am building a Java TCP listener that must handle 6000 incoming requests at a time. I create a socket connection and accept data; after accepting the data, I do some processing on it in a thread that I create. I am not killing this thread, since each device will send data every two minutes, so I only put the thread to sleep for 30 seconds.
But after running the system for five minutes, my application, which runs under Tomcat 6.0, gives the error: "The web application appears to have started a thread named [Thread-214] but has failed to stop it. This is very likely to create a memory leak."
Please help me understand where I am going wrong.
Thanks in advance.

If you have many sockets, instead of using a thread per channel, try using one thread that iterates over all the sockets.
Have a look at the Java Selector:
http://www.exampledepot.com/egs/java.nio/NbClient.html
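For reference, a minimal single-threaded Selector skeleton might look like the sketch below (the port, the buffer size and the missing error/partial-read handling are simplifying assumptions, not something taken from your setup):

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    public class SelectorServer {
        public static void main(String[] args) throws Exception {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(9000));        // example port
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buffer = ByteBuffer.allocate(1024);
            while (true) {
                selector.select();                           // blocks until some channel is ready
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buffer.clear();
                        int read = client.read(buffer);
                        if (read == -1) {                    // device disconnected
                            key.cancel();
                            client.close();
                            continue;
                        }
                        buffer.flip();
                        // process this device's data here (no extra thread needed)
                    }
                }
            }
        }
    }

One thread handles accepts and reads for all devices; the processing itself should stay short, or be handed off to a small worker pool.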

You should be aware that your operating system cannot handle that many threads. Moreover, memory is allocated for each thread's stack, so you will run out of memory very quickly.
As I don't know what you're trying to achieve, I can only guess that you have a design flaw in your application; usually, threads are reused to handle requests.

I think a Selector may help. You might want to read a short introduction to selectors at this link http://tutorials.jenkov.com/java-nio/selectors.html:
"A Selector is a Java NIO component which can examine one or more NIO Channels, and determine which channels are ready for e.g. reading or writing. This way a single thread can manage multiple channels, and thus multiple network connections."

If you need to handle many TCP connections in Java, you should use NIO. But programming bare NIO (the Selector API) is hard, so use Netty; it is designed specifically for such tasks. Netty also works fine inside Tomcat.
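Purely as an illustration (assuming Netty 4.x; the port and the no-op handler are placeholders), a minimal Netty bootstrap looks roughly like this:

    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.buffer.ByteBuf;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.SimpleChannelInboundHandler;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;

    public class NettyListener {
        public static void main(String[] args) throws Exception {
            EventLoopGroup boss = new NioEventLoopGroup(1);     // accepts connections
            EventLoopGroup workers = new NioEventLoopGroup();   // handles I/O for all channels
            try {
                ServerBootstrap bootstrap = new ServerBootstrap()
                    .group(boss, workers)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast(new SimpleChannelInboundHandler<ByteBuf>() {
                                @Override
                                protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) {
                                    // process the device's data here
                                }
                            });
                        }
                    });
                bootstrap.bind(9000).sync().channel().closeFuture().sync();
            } finally {
                boss.shutdownGracefully();
                workers.shutdownGracefully();
            }
        }
    }

A small, fixed number of event-loop threads serves all connections, so nothing has to sleep between device transmissions.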

Maybe you should use a thread pool:
http://docs.oracle.com/javase/tutorial/essential/concurrency/pools.html
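For example, a rough sketch of the listener with a fixed pool instead of one sleeping thread per device (the port, the pool size and the line-based read are assumptions):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PooledListener {
        public static void main(String[] args) throws Exception {
            // A bounded pool reuses a small number of threads instead of creating
            // (and parking) one thread per device.
            ExecutorService pool = Executors.newFixedThreadPool(100);
            try (ServerSocket listener = new ServerSocket(9000)) {
                while (true) {
                    Socket socket = listener.accept();
                    pool.submit(() -> {
                        try (BufferedReader in = new BufferedReader(
                                new InputStreamReader(socket.getInputStream()))) {
                            String line = in.readLine();   // read the device's payload
                            // ... process the data, then the thread goes back to the pool
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    });
                }
            }
        }
    }

Each task reads and processes one payload and then returns its thread to the pool, so nothing is left sleeping between transmissions.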

Related

Handling client -> server connections (maximum number and best practices)

I am designing the architecture for Android app -> server communication via TCP. It will be a custom application protocol built on top of TCP ... I am familiar with network programming, but I have not worked much with Java (and this is really a more general question than a Java question), nor with applications that have an undefined number of clients (here, depending on how many users the Android app has). Hence I have a few doubts and questions.
Scenario:
Users are going to install the Android application and log in
After login, they will establish a TCP connection with the server
Obviously, the server side needs to process requests in parallel
Consider that both the client and the server side will be implemented in Java
Let's assume the application is really successful and has 3 million+ installations, which means a lot of users (yeah, right :) )
Question:
What is the best way (best practice) to implement the server side in order to handle client connections for this type of application?
Based on my research, there are only three possible approaches:
Using a thread pool
Using one thread per TCP connection
Using a thread pool and a non-blocking async approach in Java (similar to what Node.js does with libuv)
EDIT:
Elaboration:
1. Using just a thread pool here seems "weird", because we would need a huge pool in order to assign a thread per TCP connection. Using a thread pool to serve e.g. HTTP requests (where the TCP connection is closed once the request is completed) seems like a great idea, but not for TCP connections that are going to stay open for a long time.
2. Creating one thread per TCP connection every single time seems limited as well (Why is creating a Thread said to be expensive?):
Java threads are implemented as native threads, and huge numbers of threads are the wrong way to write a practical Java application.
I suppose this depends, of course, on what type of application you have in general.
3. Using a thread pool and a non-blocking async approach in Java (similar to what Node.js does with libuv).
Based on what I have read (and what has been suggested so far), this seems like the best approach. Maybe it's just because I have more experience with this type of application (Node.js non-blocking, single-threaded workers), but it looks like the best solution.
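For reference only, a minimal sketch of approach 3 using the JDK's NIO.2 asynchronous channels (the port, buffer size and echo-style handling are placeholder assumptions):

    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.AsynchronousServerSocketChannel;
    import java.nio.channels.AsynchronousSocketChannel;
    import java.nio.channels.CompletionHandler;

    public class AsyncEchoServer {
        public static void main(String[] args) throws Exception {
            AsynchronousServerSocketChannel server =
                    AsynchronousServerSocketChannel.open().bind(new InetSocketAddress(9000));

            server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
                @Override
                public void completed(AsynchronousSocketChannel client, Void att) {
                    server.accept(null, this);               // keep accepting further clients
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    client.read(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
                        @Override
                        public void completed(Integer bytes, ByteBuffer b) {
                            if (bytes == -1) { return; }     // client closed the connection
                            b.flip();
                            client.write(b);                 // echo back; real code would re-arm the read
                        }
                        @Override
                        public void failed(Throwable exc, ByteBuffer b) { /* log and close */ }
                    });
                }
                @Override
                public void failed(Throwable exc, Void att) { /* log */ }
            });

            Thread.currentThread().join();                   // block forever so the handlers keep running
        }
    }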
Maybe there are some ways or practices I am not familiar with which could make this process more efficient?
Can you suggest any resources (books or similar) for this type of application?
NOTE: Please note that I understand I can make this more efficient with a couple of techniques I am already familiar with - for example, closing the TCP connection when the app goes into the background and reconnecting/re-establishing it when the user is using the application again (but this depends on the application itself, of course).
I am wondering whether I am missing something here, or whether it is simply as it is: if you want a lot of users, and a lot of TCP connections, you will need one thread for every single user (or one of the other approaches I mentioned above).
Other resources I went through:
Max number of threads - Linux
Increase number of threads - JVM
Max number of threads allowed to run
Runnable vs Thread
... and other external resources

Java Multi-thread networking, Good or bad?

I run multiple game servers and I want to develop a custom application to manage them. Basically, all the game servers will connect to the application to exchange data. I don't want any of this data getting lost, so I think it would be best to use TCP. I have looked into networking and understand how it works; however, I have a question about CPU usage. More servers are being added, and in the next few months it could potentially reach around 100-200 and will continue to grow as needed. Will new threads for each server use a lot of CPU, and is it a good idea to do this? Does anyone have any suggestions on how to go about this? Thanks.
You should have a look at non-blocking I/O. With blocking I/O, each socket consumes one thread, and the number of threads in a system is limited. Even if you can create 1000+, it is a questionable approach.
With non-blocking I/O, you can serve multiple sockets with a single thread. This is a more scalable approach, and you control how many threads are running at any given moment.
More servers are being added and in the next few months it could potentially reach around 100 - 200 and will continue to grow as needed. Will new threads for each server use a lot of cpu and is it a good idea to do this?
It is the standard answer to caution against hundreds of threads and to point to the NIO solution. However, it is important to note that the NIO approach has a significantly more complex implementation. Isolating the interaction with a server connection in a single thread has its advantages from a code standpoint.
Modern OSes can fork thousands of threads with little overhead aside from the stack memory. If you are sure of your scaling factors (i.e. you're not going to reach 10k connections or something) and you have the memory, then I would say that a thread per TCP connection can work very well. I have very successfully run applications with thousands of threads and have not seen the fall-off in performance due to context switching that used to occur with earlier processors/kernels.
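As a rough illustration of that thread-per-connection style (the port and the line-based protocol are arbitrary assumptions):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class ThreadPerConnectionServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket listener = new ServerSocket(9000)) {
                while (true) {
                    Socket gameServer = listener.accept();
                    Thread handler = new Thread(() -> {
                        try (BufferedReader in = new BufferedReader(
                                new InputStreamReader(gameServer.getInputStream()))) {
                            String line;
                            while ((line = in.readLine()) != null) {
                                // exchange data with this game server
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    });
                    handler.setDaemon(true);   // one dedicated handler thread per game server
                    handler.start();
                }
            }
        }
    }

At 100-200 connections this is simple and cheap; the threads spend nearly all their time blocked in read() and use almost no CPU.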

Best practice for android client to communicate with a server using threads

I am building an Android app that communicates with a server on a regular basis for as long as the app is running.
I do this by initiating a connection to the server when the app starts. I then have a separate thread for receiving messages, called ReceiverThread; this thread reads a message from the socket, analyzes it, and forwards it to the appropriate part of the application.
This thread runs in a loop, reading whatever it has to read and then blocking on the read() call until new data arrives, so it spends most of its time blocked.
I handle sending messages through a different thread, called SenderThread. What I am wondering is: should I structure the SenderThread in a similar fashion? That is, should I maintain some form of queue for this thread, let it send all the messages in the queue and then block until new messages enter the queue, or should I just start a new instance of the thread every time a message needs to be sent, let it send the message and then "die"? I am leaning towards the first approach, but I do not know what is actually better, both in terms of performance (keeping a blocked thread in memory versus initializing new threads) and in terms of code correctness.
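For what it's worth, the first approach usually looks something like this sketch: one long-lived sender thread draining a BlockingQueue (the String message type and the class shape are just assumptions for illustration):

    import java.io.PrintWriter;
    import java.net.Socket;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class SenderThread extends Thread {
        private final BlockingQueue<String> outbox = new LinkedBlockingQueue<>();
        private final Socket socket;

        public SenderThread(Socket socket) {
            this.socket = socket;
        }

        /** Called from any part of the app that wants to send a message. */
        public void send(String message) {
            outbox.offer(message);
        }

        @Override
        public void run() {
            try (PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                while (!isInterrupted()) {
                    String message = outbox.take();   // blocks until a message is queued
                    out.println(message);
                }
            } catch (Exception e) {
                // socket closed or thread interrupted: let the thread end
            }
        }
    }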
Also, since all of my activities need to be able to send and receive messages, I am holding a reference to both threads in my Application class; is that an acceptable approach, or should I implement it differently?
One problem I have encountered with this is that sometimes, if I close my application and run it again, I actually have two instances of ReceiverThread, so I get some messages twice.
I am guessing that this is because my application did not actually close and the previous thread was still active (blocked on the read() operation), and when I opened the application again a new thread was initialized, but both were connected to the server, so the server sent the message to both. Any tips on how to get around this problem, or on how to re-organize the whole thing so it will be correct?
I tried looking up these questions, but found some conflicting examples for my first question, and nothing useful enough that applies to my second question...
1. Your approach is OK if you really need to keep an open connection between the server and the client at all times, at all cost. However, I would use an asynchronous connection, like sending an HTTP request to the server and then getting a reply whenever the server feels like it.
If you need the server to reply to the client at some later time, but you don't know when, you could also look into the Google Cloud Messaging framework, which gives you a transparent and consistent way of sending small messages to your clients from your server.
You need to consider some things when you're developing a mobile application.
A smartphone doesn't have an endless amount of battery.
A smartphone's Internet connection is somewhat volatile, and you will lose the connection at various times.
When you keep a direct connection to the server all the time, your app keeps sending keep-alive packets, which means you'll suck the phone dry pretty fast.
Because the Internet connection is as unstable as it gets on mobile broadband, you will sometimes lose the connection and need to recover from it. So if you use TCP because you want to make sure your packets are received, you end up resending the same packets many times and thus generate a lot of overhead.
You might also run into threading problems on the server side if you open threads on the server yourself, which it sounds like you do. Let's say you have 200 clients connected to the server at the same time, each with one thread open on the server. If the server needs to serve 200 different threads at the same time, that can become quite a performance-consuming task, and you will need to do a lot of work on your own as well.
2. When you exit your application, you need to clean up after yourself. This should be done in the onPause method of the Activity that is active.
That means killing off all active threads (or at least interrupting them), saving the state of your UI (if you need it), and flushing and closing whatever open connections to the server you have.
As far as using threads goes, I would recommend using some of the built-in threading tools, like Handlers, or implementing an AsyncTask.
If you really think Thread is the way to go, I would definitely recommend using a Singleton pattern as a "manager" for your threading.
This manager would control your threads, so you don't end up with more than one thread talking to the server at any given time, even when you're in another part of the application.
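A rough sketch of such a singleton manager (the class and method names are hypothetical; SenderThread and ReceiverThread stand for the question's own thread classes and are assumed to take the shared socket in their constructors):

    import java.net.Socket;

    // Hypothetical singleton "manager"; SenderThread and ReceiverThread are the
    // question's own classes, assumed to accept the shared Socket.
    public final class ConnectionManager {
        private static ConnectionManager instance;

        private Socket socket;
        private Thread sender;
        private Thread receiver;

        private ConnectionManager() { }

        public static synchronized ConnectionManager getInstance() {
            if (instance == null) {
                instance = new ConnectionManager();
            }
            return instance;
        }

        public synchronized void connect(String host, int port) throws Exception {
            if (socket != null && !socket.isClosed()) {
                return;                          // already connected: never start a second ReceiverThread
            }
            socket = new Socket(host, port);
            sender = new SenderThread(socket);   // assumed constructor
            receiver = new ReceiverThread(socket);
            sender.start();
            receiver.start();
        }

        public synchronized void disconnect() throws Exception {
            if (receiver != null) receiver.interrupt();
            if (sender != null) sender.interrupt();
            if (socket != null) socket.close();  // unblocks a thread stuck in read()
        }
    }

Because connect() refuses to open a second connection while one is alive, the "two ReceiverThreads after restart" symptom goes away as long as every part of the app goes through the manager.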
As far as the Application class implementation goes, take a look at the Application class documentation:
Base class for those who need to maintain global application state. You can provide your own implementation by specifying its name in your AndroidManifest.xml's <application> tag, which will cause that class to be instantiated for you when the process for your application/package is created.
There is normally no need to subclass Application. In most situations, static singletons can provide the same functionality in a more modular way.
So keeping away from implementing your own Application class is recommended. However, if you let one of your Activities initialize your singleton class for managing the threads and connections, you might (just might) run into trouble, because the initialization of the singleton may "bind" to that specific Activity, and if that Activity is removed from the screen and paused, it might be killed, and the singleton might be killed along with it. So initializing the singleton inside your own Application implementation might prove useful.
Sorry for the wall of text, but your question is quite open-ended, so I've tried to give you a somewhat open-ended answer - hope it helps ;-)

Thread-per-request tcp server

I am just trying to understand how to write a thread-per-request TCP server in Java.
I have already written a thread-per-connection server that calls serverSocket.accept() and creates a new thread each time a new connection comes in.
How could this be modified into a thread-per-request server?
I suppose the incoming connections could be put into some sort of queue, but how would you know which one has issued a request and is ready for service?
I suspect that NIO is necessary here, but I'm not sure.
Thanks.
[edit]
To be clear - the original "server" is just a loop I have written that waits for a connection and then passes it to a new thread.
The lecturer mentioned a "thread-per-request" architecture, and I was wondering how it works "under the hood".
My first idea of how it works may be completely wrong.
You can use a Selector to achieve your goal. Here is a good example you can refer to.
You can use plain IO, blocking NIO, non-blocking NIO, or async NIO2. You can have multiple threads per connection (or a shared worker thread pool), but unless those threads are waiting for slow services like databases, this might not be any faster (and it can be much slower if you want low latency).
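One hedged way to read "thread-per-request" with plain blocking I/O is a lightweight reader thread per connection that hands each individual request to a shared pool; a rough sketch (line-delimited requests, arbitrary port and pool size assumed):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ThreadPerRequestServer {
        public static void main(String[] args) throws Exception {
            ExecutorService requestPool = Executors.newFixedThreadPool(50); // shared worker pool
            try (ServerSocket server = new ServerSocket(9000)) {
                while (true) {
                    Socket client = server.accept();
                    // One lightweight reader thread per connection; each *request* is
                    // handed to the shared pool, so a slow request does not block the reader.
                    new Thread(() -> {
                        try (BufferedReader in = new BufferedReader(
                                new InputStreamReader(client.getInputStream()));
                             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                            String request;
                            while ((request = in.readLine()) != null) {
                                String req = request;
                                requestPool.submit(() -> out.println("handled: " + req));
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }).start();
                }
            }
        }
    }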

How to pass created sockets to another Java process

We have an application which creates many sockets, each belonging to its own thread. By design, if this application somehow fails, all threads stop, which is not what we want. To overcome this issue, each thread must be separated from the main application, so that if one of the threads fails, the others keep running. One idea we have is to pass the created socket to another Java process - what is the correct way to do that?
Other approaches are also welcome.
Waiting for your suggestions...
Forking:
You can't pass a socket handle between Java processes using the normal API, as far as I can tell. However, it does seem to be possible on Windows using the Winsock 2 API. On POSIX you should be able to fork a child process with access to the parent's socket, since forked processes inherit the parent's sockets.
You could, I think, implement a new SocketImpl class which supports moving a socket handle to another process, but you'd need to write some JNI code to do it.
Sounds pretty hairy to me; I doubt forking a new process from within Java is a good idea!
Listeners:
Another approach might be to spawn a new 'listener' process, which is essentially a new pre-forked worker. Each worker could then take turns listening on the socket for connections.
The workers would then need to coordinate with a control process which manages spawning new processes as needed.
I agree with @Bozho: if an error in one thread can take them all down (I guess it would have to be a JVM-level failure killing the whole app), you have a bigger problem. You should look at isolating the threads if possible.
It isn't. (Sockets can't be serialized.)
When one thread fails, its exception should be caught and logged, and this should not interfere with other threads.
So either design it to stop completely, or design it not to stop completely.
Or pass all the information about the socket (address/port) to another application, which could itself open a similar socket.
See this similar question: socket passing between processes.
Unfortunately, the address-space barrier cannot be crossed.
I rather agree with Bozho: you need to redesign your application / critical threads so that an Exception or an Error does not kill your whole VM.
To help you with that, I suggest you have a look at the following (a small sketch combining both appears after the links below):
Thread.setDefaultUncaughtExceptionHandler(...) and Thread.setUncaughtExceptionHandler(...) (see hyperlink below), which help to catch unforeseen problems (such as runtime exceptions)
Runtime.addShutdownHook(...) (see hyperlink below), which helps to close things down cleanly (for example when an OutOfMemoryError occurs)
Regards
Cerber
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Thread.html#setUncaughtExceptionHandler(java.lang.Thread.UncaughtExceptionHandler)
http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Runtime.html#addShutdownHook(java.lang.Thread)
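A tiny sketch combining both suggestions (Java 8+ lambda syntax; the log messages and the deliberate exception are placeholders):

    public class SafetyNets {
        public static void main(String[] args) {
            // Log anything that would otherwise silently kill a thread.
            Thread.setDefaultUncaughtExceptionHandler((thread, throwable) ->
                System.err.println("Thread " + thread.getName() + " died: " + throwable));

            // Run clean-up (close sockets, flush logs) when the JVM shuts down.
            Runtime.getRuntime().addShutdownHook(new Thread(() ->
                System.err.println("JVM shutting down, closing resources...")));

            throw new RuntimeException("boom");   // caught by the default handler above
        }
    }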
Use a class that is shared between threads to hold the sockets. You can use a HashMap to label each socket so other threads can look up the one they need.
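A minimal sketch of such a shared holder; note that I am using a ConcurrentHashMap rather than a plain HashMap so the map itself is safe to access from several threads (the class and method names are made up for illustration):

    import java.net.Socket;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    /** Shared registry: any thread can look up a socket by its label. */
    public class SocketRegistry {
        private static final Map<String, Socket> SOCKETS = new ConcurrentHashMap<>();

        public static void register(String label, Socket socket) {
            SOCKETS.put(label, socket);
        }

        public static Socket get(String label) {
            return SOCKETS.get(label);
        }

        public static void unregister(String label) {
            SOCKETS.remove(label);
        }
    }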
I want to respond to those who say 'just catch the exceptions and exit the thread'.
You cannot catch all the exceptions. The following cause the JVM to exit:
assertions inside the JVM due to bugs in the JVM implementation
some failures in JNI code (SIGSEGV, SIGABRT)
OutOfMemoryError
