Multithreaded Chatting Program Architecture [closed] - java

I am currently building a chat program (in Java, for those who are wondering) and I'm at the point where I need to come up with a good architecture for the whole thing. This is my current outline, but feel free to provide any feedback (I have no professional training, just some reading).
Client Side:
The lifecycle will be as follows:
Connect with the server- First I plan to have the client establish a connection with the server, prompt the user to log in or create a new account, and send these credentials to the server, which will then send back information like the user's list of friends and any other relevant data I can think of (a rough sketch of this step is shown below).
Waiting for the user to connect with someone- There will be a list of friends who are currently online and whom the user can connect with, plus a button to find other users and request that they become friends.
Chatting- I'll go into more detail in the server section, but the user will send text and images to the server, which will route them to the other user.
Rinse and repeat- After the user is done chatting, the client will go back to step 2 until the user exits the program, at which point it closes all connections with the server.
It seems that the client side only needs to be single-threaded. Also, if you think of any useful (major) features that many IM-type programs have, please share them. I'd be happy to hear about them (if you want to make me very happy, you could include a general outline of the implementation as well :).
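For reference, the connect-and-login step (step 1) might look roughly like this. The wire protocol here (a LOGIN line, then one friend per line, ending with a blank line) is just an assumption I'm toying with, not anything final:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

public class ChatClient {
    // Connects, logs in, and returns the friend list the server sends back.
    // The socket is kept open afterwards for the chatting phase (steps 2-4).
    static List<String> connectAndLogin(Socket socket, String user, String pass) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);

        out.println("LOGIN " + user + " " + pass);   // step 1: send credentials
        List<String> friends = new ArrayList<String>();
        String line;
        while ((line = in.readLine()) != null && !line.isEmpty()) {
            friends.add(line);                       // server replies with the friend list
        }
        return friends;                              // step 2: display this and wait for a choice
    }
}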
Server Side:
Now this is where it gets rather messy. I think I am creating far more threads than I need. Let me explain.
The server stores information on each client using a JSON representation of each user. It also maintains a list of the users who are currently online.
Current idea for what chatting looks like server-side:
Connection established with one client- The server and client communicate to get the client logged in (or registered), the server tells the client who its friends are, and the client is added to the list of users currently online. At this point I also start a new thread to listen for input from this client.
A client chooses to start a conversation with someone- At this point I think I need to create a new thread for this conversation, since many of these need to be going on at the same time. My current idea is to start a new thread on the server side that handles all of the routing and communicates with just the two clients in that conversation.
Wait for more users to connect- Although I don't ever anticipate more than two people connecting to my server, I would like to make it so that, in theory, the server can handle multiple conversations. This, I believe, would be my main thread: it waits for someone to connect, then creates a listener thread for them. Once a conversation is set up, this thread would give that conversation its own thread and then go back to what it was doing. This should all be doable in just one thread (at least according to my logic).

And that's it. Now, of course there are things like graphics and whatnot that I didn't include. Also, with what I have so far in Java I should be able to make conversations between more than two people. Nonetheless, this seems like an excessive number of threads: the main thread, one thread per user, and one thread per conversation. That means that with 1000 users chatting I've started 1501 new threads. Is this excessive? Could I use some type of thread pool? What other suggestions do you have? If I missed anything, just ask (if it's something that I haven't thought of then I'll say that too). Finally, if you have any ideas for the actual features of the program, I'd be glad to hear them.
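To make the thread layout concrete, here is a stripped-down sketch of the main accept loop I have in mind (ClientHandler is just a placeholder for the per-user listener described above):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ChatServer {
    public static void main(String[] args) throws IOException {
        ServerSocket serverSocket = new ServerSocket(5000);
        while (true) {
            Socket client = serverSocket.accept();            // main thread just waits for connections
            new Thread(new ClientHandler(client)).start();    // one listener thread per connected user
        }
    }
}

// Placeholder for the per-user listener: it would log the user in, add them to the
// online list, and hand a started conversation off to its own thread.
class ClientHandler implements Runnable {
    private final Socket socket;
    ClientHandler(Socket socket) { this.socket = socket; }
    public void run() { /* login, then listen for input from this client */ }
}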

Neither one thread per conversation nor one thread for all conversations is going to scale. You need something in between.
Use a thread pool with some maximum number of allowed threads, and for each message received in a conversation, queue the processing of that message onto the pool.
As long as there are threads available (i.e., you don't have too many messages to process at once), the message should be processed immediately.
If there are more messages to process than threads in the pool, some messages will be delayed. That isn't ideal, but as noted in the comments, a chat program needs fairly little processing and bandwidth per message, and capping the size of the thread pool means you won't thrash the processor or run out of memory. That's exactly what you want in a solution that has to scale.
As you increase the hardware size, the number of concurrent threads can increase, though that doesn't sound like it'll be a problem here.
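A minimal sketch of that idea with a fixed-size ExecutorService (the pool size and the Conversation type here are placeholders, not recommendations):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MessageDispatcher {
    // Placeholder type standing in for whatever the server already uses.
    interface Conversation { void route(String message); }

    // Cap the worker threads; extra messages wait in the pool's internal queue.
    private final ExecutorService pool = Executors.newFixedThreadPool(32);

    // Called by whichever thread read the message off a client's socket.
    public void dispatch(Conversation conversation, String message) {
        pool.submit(() -> conversation.route(message));
    }
}

Executors.newFixedThreadPool gives you the bounded pool; anything submitted while all workers are busy simply waits in the pool's queue instead of spawning yet another thread.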

I advise you to use www.netty.io.
There are some tutorials on developing a chat client/server here:
http://www.allreadable.com/6b8c8U4g
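To give a feel for the library, a bare-bones Netty 4 text server looks roughly like this (Netty 4.x API; the actual message routing is left as a stub):

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.LineBasedFrameDecoder;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

public class NettyChatServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup workers = new NioEventLoopGroup();
        try {
            ServerBootstrap b = new ServerBootstrap()
                .group(boss, workers)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        ch.pipeline().addLast(
                            new LineBasedFrameDecoder(8192),
                            new StringDecoder(),
                            new StringEncoder(),
                            new SimpleChannelInboundHandler<String>() {
                                @Override
                                protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                                    // route msg to the other participant(s) here
                                }
                            });
                    }
                });
            b.bind(5000).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}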

Related

Multi-threaded java server used for a buzzer beater?

In Java, is there a way to connect multiple clients to a server at once without creating multiple threads? I want to create a buzzer-beater for a trivia game I'm making for my friends and me. However, I don't want the server to be listening to only one thread (or contestant) at a time. I'd like the server to be constantly listening to all clients and track the order in which they each hit the button on the client GUI I made.
Not looking for specific code, just ideas. Is a multi-threaded server even the right approach for this sort of problem? Is there a better networking solution in Java?
Thanks!
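One way to listen to every contestant with a single thread is java.nio's Selector; a rough sketch of the accept/read loop (game logic omitted, port number arbitrary):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;

public class BuzzerServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(5000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buf = ByteBuffer.allocate(64);
        while (true) {
            selector.select();                            // one thread waits on every client at once
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {                 // a new contestant connected
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {            // someone hit the buzzer
                    buf.clear();
                    ((SocketChannel) key.channel()).read(buf);
                    // record the arrival order here
                }
            }
            selector.selectedKeys().clear();
        }
    }
}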

Java networking - Look for packets without blocking the thread?

I'm creating a networking utilities package to make it easier to create network applications like chat applications, games, etc. I wonder if it's possible, in the server thread, to look for packets all the time without blocking the thread?
I want to do this because, for example, when I create a multiplayer server, I don't want the whole server to be blocked and unplayable while it looks for the packets that tell it someone is connecting.
What's the best way of solving this?
To put the joining detection in a separate thread?
Also: how many threads can you run in a single application? Should you try to keep the number of threads down as much as possible? Are 4 threads too many?
Edit: I put the join detection in a separate thread and class. While the join detector was active, it checked for packets and added them to a list of requests. Then, from the server class, on every update, I checked whether any requests had been gathered in the join detection class.
Sorry for the wrong answer before.
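The pattern described in that edit boils down to something like this (names are made up; the detector thread is allowed to block, but the game loop never does):

import java.net.ServerSocket;
import java.net.Socket;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class JoinDetector implements Runnable {
    private final ServerSocket serverSocket;
    private final Queue<Socket> pendingJoins = new ConcurrentLinkedQueue<Socket>();

    public JoinDetector(ServerSocket serverSocket) { this.serverSocket = serverSocket; }

    @Override
    public void run() {                       // runs on its own thread, so blocking is fine here
        try {
            while (true) {
                pendingJoins.add(serverSocket.accept());
            }
        } catch (java.io.IOException e) {
            // socket closed, detector stops
        }
    }

    // Called from the game loop every update; never blocks.
    public Socket pollJoinRequest() {
        return pendingJoins.poll();           // null if nobody joined since the last check
    }
}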

Best practice for android client to communicate with a server using threads

I am building an android app that communicates with a server on a regular basis as long as the app is running.
I do this by initiating a connection to the server when the app starts. Then I have a separate thread for receiving messages, called ReceiverThread; this thread reads each message from the socket, analyzes it, and forwards it to the appropriate part of the application.
This thread runs in a loop, reading whatever it has to read and then blocking on the read() command until new data arrives, so it spends most of its time blocked.
I handle sending messages through a different thread, called SenderThread. What I am wondering about is: should I structure the SenderThread in a similar fashion? Meaning, should I maintain some form of queue for this thread, let it send all the messages in the queue and then block until new messages enter the queue, or should I just start a new instance of the thread every time a message needs to be sent, let it send the message and then "die"? I am leaning towards the first approach, but I do not know what is actually better, both in terms of performance (keeping a blocked thread in memory versus initializing new threads) and in terms of code correctness.
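Roughly, the first option would look something like this (names are just what I'm considering, not final):

import java.io.PrintWriter;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SenderThread extends Thread {
    private final BlockingQueue<String> outgoing = new LinkedBlockingQueue<String>();
    private final PrintWriter out;

    public SenderThread(PrintWriter out) { this.out = out; }

    // Any activity can call this; it never blocks for long.
    public void send(String message) {
        outgoing.offer(message);
    }

    @Override
    public void run() {
        try {
            while (!isInterrupted()) {
                out.println(outgoing.take());   // blocks until there is something to send
            }
        } catch (InterruptedException e) {
            // interrupted on shutdown, let the thread die
        }
    }
}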
Also, since all of my activities need to be able to send and receive messages, I am holding a reference to both threads in my Application class. Is that an acceptable approach, or should I implement it differently?
One problem I have encountered with this is that sometimes if I close my application and run it again I actually have two instances of ReceiverThread, so I get some messages twice.
I am guessing that this is because my application did not actually close and the previous thread was still active (blocked on the read() operation), and when I opened the application again a new thread was initialized, but both were connected to the server so the server sent the message to both. Any tips on how to get around this problem, or on how to completely re-organize it so it will be correct?
I tried looking up these questions but found some conflicting examples for my first question, and nothing that is useful enough and applies to my second question...
1. Your approach is OK, if you really need to keep an open connection between the server and client at all times at all costs. However, I would use an asynchronous connection, like sending an HTTP request to the server and then getting a reply whenever the server feels like it.
If you need the server to reply to the client at some later time, but you don't know when, you could also look into the Google Cloud Messaging framework, which gives you a transparent and consistent way of sending small messages to your clients from your server.
You need to consider some things, when you're developing a mobile application.
A smartphone doesn't have an endless amount of battery.
A smartphone's Internet connection is somewhat volatile and you will lose Internet connection at different times.
When you keep a direct connection to the server open all the time, your app keeps sending keep-alive packets, which means you'll suck the phone dry pretty fast.
When the Internet connection is as unstable as it gets on mobile broadband, you will lose the connection sometimes and need to recover from this. So if you use TCP, because you want to make sure your packets are received, you end up resending the same packets a lot of times and get a lot of overhead.
Also, you might run into threading problems on the server side if you open threads on the server on your own, which it sounds like you do. Let's say you have 200 clients connecting to the server at the same time. Each client has one thread open on the server. If the server needs to serve 200 different threads at the same time, this could end up being quite a performance-consuming task for the server, and you will need to do a lot of work on your own as well.
2. When you exit your application, you'll need to clean up after yourself. This should be done in the onPause method of the Activity which is active.
This means killing off all active threads (or at least interrupting them), saving the state of your UI (if you need this), and flushing and closing whatever open connections to the server you have.
As far as using Threads goes, I would recommend using some of the built-in threading tools like Handlers or implementing an AsyncTask.
If you really think Thread is the way to go, I would definitely recommend using a Singleton pattern as a "manager" for your threading.
This manager would control your threads, so you don't end up with more than one Thread talking to the server at any given time, even though you're in another part of the application.
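A bare-bones version of such a manager might look like this (just a sketch of the idea; the actual receiver/sender logic is passed in as Runnables):

public final class ConnectionManager {
    private static final ConnectionManager INSTANCE = new ConnectionManager();

    private Thread receiverThread;   // the single ReceiverThread talking to the server
    private Thread senderThread;     // the single SenderThread

    private ConnectionManager() { }

    public static ConnectionManager getInstance() { return INSTANCE; }

    // Start the threads only if they are not already running,
    // so reopening the app never ends up with two receivers.
    public synchronized void start(Runnable receiver, Runnable sender) {
        if (receiverThread == null || !receiverThread.isAlive()) {
            receiverThread = new Thread(receiver, "ReceiverThread");
            receiverThread.start();
        }
        if (senderThread == null || !senderThread.isAlive()) {
            senderThread = new Thread(sender, "SenderThread");
            senderThread.start();
        }
    }

    public synchronized void stop() {
        if (receiverThread != null) receiverThread.interrupt();
        if (senderThread != null) senderThread.interrupt();
    }
}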
As far as the Application class implementation goes, take a look at the Application class documentation:
Base class for those who need to maintain global application state. You can provide your own implementation by specifying its name in your AndroidManifest.xml's <application> tag, which will cause that class to be instantiated for you when the process for your application/package is created.
There is normally no need to subclass Application. In most situations, static singletons can provide the same functionality in a more modular way.
So keeping away from implementing your own Application class is recommended. However, if you let one of your Activities initialize your own singleton class for managing the threads and connections, you might (just might) run into trouble: the initialization of the singleton might be tied to that specific Activity, so if that Activity is removed from the screen and paused, it might be killed, and the singleton might be killed along with it. So initializing the singleton inside your Application implementation might prove useful.
Sorry for the wall of text, but your question is quite "open-ended", so I've tried to give you a somewhat open-ended answer - hope it helps ;-)

Preventing multiple users from doing the same action

I have a swing desktop application that is installed on many desktops within a LAN. I have a mysql database that all of them talk to. At precisely 5 PM everyday, there is a thread that will wake up in each of these applications and try to back up files to a remote server. I would like to prevent all the desktop applications from doing the same thing.
The way I was thinking to do this was:
After waking up at 5 PM, all the applications will try to write a row onto a MySQL table. They will write the same information. Only one will succeed, and the others will get a duplicate row exception. Whoever succeeds then goes on to run the backup program.
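In code, each client would attempt something like this (the table and column names are just placeholders; backup_lock is assumed to have a unique key on backup_date):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLIntegrityConstraintViolationException;
import java.time.LocalDate;

public class BackupLock {
    // Returns true only for the one client whose INSERT succeeds.
    static boolean tryAcquire(String jdbcUrl, String user, String pass) throws Exception {
        try (Connection con = DriverManager.getConnection(jdbcUrl, user, pass);
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO backup_lock (backup_date) VALUES (?)")) {
            ps.setDate(1, java.sql.Date.valueOf(LocalDate.now()));
            ps.executeUpdate();
            return true;                                     // we won: run the backup
        } catch (SQLIntegrityConstraintViolationException e) {
            return false;                                    // someone else already inserted today's row
        }
    }
}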
My questions are:
Is this right way of doing things? Is there any better (easier) way?
I know we can do this using sockets as well, but I don't want to go down that route: too much coding, and I would also need to ensure that all the systems can talk to each other first (ping).
Will MySQL support such a feature? My DB is InnoDB, so I am thinking it does. Typically I will have about 20-30 users on the LAN. Will this cause a huge overhead for the DB to handle?
If you could put an intermediate class in between the applications and the database that would queue up the results and allow them to proceed in an orderly manner you'd have it knocked.
It sounds like the applications all go directly against the database. You'll have to modify the applications to avoid this issue.
I have a lot of questions about the design:
Why are they all writing "the same row"? Aren't they writing information for their own individual instance?
Why would every one of them have exactly the same primary key? If there was an auto increment or timestamp, you wouldn't have this problem.
What's the isolation set to on the database connection? If it's set to SERIALIZABLE, you'll force each one to wait until the previous one is done, at the cost of performance.
Could you have them all write files to a common directory and pick them up later in an orderly way?
I'm just brainstorming now.
It seems you want to back up server data, not client data.
I recommend to use a 3-tier architecture using Java EE.
You could use a Timer Service then to trigger the backup.
Though usually a backup program is an independent program e.g. started by a cron job on the server. But again: you'll need a server to do this properly, not just a shared folder.
Here is what I would suggest. Instead of having all clients wake up at the same time and trying to perform the backup, stagger the time at which they wake up.
So when a client wakes up
- It will check some table in your DB (MySQL) to see if a backup job has completed or is currently running. If the job has completed, the client will go on with its normal duties. You can decide how to handle the case when the job is running.
- If the client finds that the backup job has not been run for the day, it will start the backup job. At the same time it will modify the row to indicate that the backup job has started. Once the backup has completed, the client will modify the table to indicate that the backup has completed.
This approach will prevent a spurt in network activity and can also provide a rudimentary form of failover. So if one client fails, another client at a later time can attempt the backup. (This is a bit more involved, though. Basically it comes down to what a client should do when it sees that a backup job is ongoing.)

Critically efficient server

I am developing a client-server based application for financial alerts, where the client can set a value as the alert for a chosen financial instrument, and when this value is reached the monitoring server will somehow alert the client (email, SMS... not important). The server will monitor updates that come from a data generator program. Now, the server has to be very efficient, as it has to handle many clients (possibly over 50,000-100,000 alerts, with updates coming every 1-2 seconds). I've written servers before, but never with such imposed performance requirements, and I'm simply afraid that a basic approach (like before) just won't do it. So how should I design the server? What kinds of data structures are best suited? What about multithreading? In general, what should I do (and what should I not do) to squeeze every drop of performance out of it?
Thanks.
I've worked on servers like this before. They were all written in C (or fairly simple C++). But they were even higher performance -- handling 20K updates per second (all updates from most major stock exchanges).
We would focus on not copying memory around. We were very careful in which STL classes we used. As for updates, each financial instrument was an object, and any clients that wanted to hear about that instrument would subscribe to it (i.e. get added to a list).
The server was multi-threaded, but not heavily so -- maybe a thread handling incoming updates, one handling outgoing client updates, one handling client subscribe/release notifications (I don't remember that part exactly -- just that it had fewer threads than I would have expected, but not just one).
EDIT: Oh, and before I forget, the number of financial transactions happening is growing at an exponential rate. That 20K/sec server was just barely keeping up and the architects were getting stressed about what to do next year. I hear all major financial firms are facing similar problems.
You might want to look into using a proven message queue system, as it sounds like this is basically what you are doing in your application.
Projects like Apache's ActiveMQ or RabbitMQ are already widely used and highly tuned, and should be able to support the type of load you are talking about out of the box.
I would think that squeezing every drop of performance out of it is not what you want to do, as you really never want that server to be under load significant enough to take it out of a real-time response scenario.
Instead, I would use a separate machine to handle messaging clients, and let that main, critical server focus directly on processing input data in "real time" to watch for alert criteria.
Best advice is to design your server so that it scales horizontally.
This means distributing your input events to one or more servers (on the same or different machines), that individually decide whether they need to handle a particular message.
Will you be supporting 50,000 clients on day 1? Then that should be your focus: how easily can you define a single client's needs, and how many clients can you support on a single server?
Second-best advice is not to artificially constrain yourself. If you say "we can't afford to have more than one machine," then you've already set yourself up for failure.
Beware of any architecture that needs clustered application servers to get a reasonable degree of performance. London Stock Exchange had just such a problem recently when they pulled an existing Tandem-based system and replaced it with clustered .Net servers.
You will have a lot of trouble getting this type of performance from a single Java or .Net server - really you need to consider C or C++. A clustered architecture is much more error prone to build and deploy and harder to guarantee uptime from.
For really high volumes you need to think in terms of using asynchronous I/O for networking (i.e. poll(), select() and asynchronous writes or their Windows equivalents), possibly with a pool of worker threads. Read up about the C10K problem for some more insight into this.
There is a very mature C++ framework called ACE (Adaptive Communications Environment) which was designed for high volume server applications in telecommunications. It may be a good foundation for your product - it has support for quite a variety of concurrency models and deals with most of the nuts and bolts of synchronisation within the framework. You might find that the time spent learning how to drive this framework pays you back in less development and easier implementation and testing.
One thread for receiving instrument updates, which will process each update and put it on a BlockingQueue.
One Thread to take the update from the BlockingQueue and hand it off to the process that handles that instrument, or set of instruments. This process will need to serialize the events to an instrument so the customer will not receive notices out-of-order.
This process (thread) will need to iterate through the list of customers registered to receive notifications and build a list of the customers who should be notified based on their criteria. The process should then hand that list off to another process that will notify the customers of the change.
The notification process should iterate through the list and send each notification event to another process that handles how the customer wants to be notified (email, etc.).
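A skeleton of that pipeline, with the hand-offs done through BlockingQueues (all types here are placeholders):

import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AlertPipeline {
    // Placeholder types; your real ones will carry more data.
    static class Update { String instrument; double value; }
    static class Notification { String customer; Update update; }
    interface AlertIndex { List<String> matches(Update u); }
    interface Notifier { void send(Notification n); }

    private final BlockingQueue<Update> updates = new LinkedBlockingQueue<Update>();
    private final BlockingQueue<Notification> notifications = new LinkedBlockingQueue<Notification>();

    // Thread 1 (the receiver) calls this for every incoming update.
    public void onUpdate(Update u) { updates.offer(u); }

    // Thread 2: drains updates in order and decides who must be notified.
    public void matchLoop(AlertIndex index) throws InterruptedException {
        while (true) {
            Update u = updates.take();                    // preserves per-instrument ordering
            for (String customer : index.matches(u)) {
                Notification n = new Notification();
                n.customer = customer;
                n.update = u;
                notifications.offer(n);
            }
        }
    }

    // Thread 3: drains notifications and hands them to the email/SMS senders.
    public void notifyLoop(Notifier notifier) throws InterruptedException {
        while (true) {
            notifier.send(notifications.take());
        }
    }
}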
One of the problems will be that, with 100,000 customers, you have to synchronize access to the list of customers and the criteria they want monitored.
You should try to find a way to organize the alerts as a tree and be able to quickly decide what alerts can be triggered by an update.
For example, let's assume that the alert is the level of a certain indicator. Said indicator can have a range of 0 to n. I would group the clients who want to be notified about the level of that indicator in a sort of binary tree. That way you can scale it properly (you can actually implement a subtree as a process on a different machine), and the number of matches required to find the proper subset of clients will always be logarithmic.
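For simple threshold alerts ("tell me when the indicator reaches X"), a sorted map already gives you that logarithmic lookup; a sketch, with client IDs as plain strings:

import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class ThresholdAlerts {
    // threshold -> clients who asked to be alerted at that level
    private final NavigableMap<Double, List<String>> byLevel =
            new ConcurrentSkipListMap<Double, List<String>>();

    public void register(double threshold, String clientId) {
        byLevel.computeIfAbsent(threshold, k -> new ArrayList<String>()).add(clientId);
    }

    // Called on every update: locating the range is O(log n),
    // then you only touch the clients whose alerts actually fire.
    public List<String> triggeredBy(double newValue) {
        List<String> result = new ArrayList<String>();
        for (List<String> clients : byLevel.headMap(newValue, true).values()) {
            result.addAll(clients);
        }
        return result;
    }
}

An update that triggers nothing costs only the single lookup, since headMap walks just the entries whose thresholds were actually crossed.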
The Apache MINA network application framework, as well as Apache Camel for message routing, are probably good starting points. The Kilim message-passing framework also looks very promising.
