Advice for writing a client-server based game - Java

I'm thinking about writing a game which is based around a server, and several client programs connect to it. The game (very) basically consists of a list of items which a user can 'accept', which would remove it from the list on all connected computers (this needs to update very quickly).
I'm thinking about using a Java applet for the client since I would like this to be portable and run from a browser (mostly in Windows), as well as updating fast, and either a C++ or Java server running on Linux (currently just a home server, but possibly to go on a VPS).
A previous 'incarnation' of this game ran in a browser, and used PHP+mySQL for the backend, but this swamped the server quite a bit when several people connected (that was with about 8 people, this would eventually need to handle a lot more).
The users would probably all be in the same physical location (with the same public IP address), and the system would get several requests per second, all of which would require sending the list back to the clients.
Some computers may have firewall restrictions on them, so would you recommend using HTTP traffic, a custom port, or perhaps through SSH or some existing protocol?
Could anyone suggest some tips (threading, multiple requests of one item?), tools, databases (mySQL?), or APIs which would help me get started on this project? I would prefer C++ for the backend as it would be faster, but using Java would allow me to reuse code.
Thanks!

I wouldn't choose C++ because of speed alone. It is highly unlikely that the difference in performance will make a real difference to your game (your network is likely to cloud any performance difference, unless you have 10 GigE between the client and server). I would choose between C++ and Java based on which language you will get it working in first.

For anyone looking for a good networking API for C++, I always suggest Boost.Asio. It has the advantage of being platform independent, so you can compile a server for Linux, Windows, etc. However, if you are not too familiar with C++ templates/Boost, the code can be a little overwhelming. Have a look, give it a try.
In terms of general advice: given the description above, you seem to need a relatively simple server. I would suggest keeping it very basic: a single-threaded polling loop. Read a message from your connected clients (wait on multiple sockets), and respond appropriately. This eliminates any issues around multiple accesses to your list and other synchronization problems.
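A minimal sketch of what such a single-threaded loop could look like with Java NIO; the port number and the message handling are placeholders, not part of the original answer:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.*;
    import java.util.Iterator;

    public class SingleThreadedGameServer {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(5000));   // placeholder port
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buffer = ByteBuffer.allocate(1024);
            while (true) {
                selector.select();                      // wait on all connected sockets at once
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buffer.clear();
                        int read = client.read(buffer);
                        if (read == -1) { key.cancel(); client.close(); continue; }
                        buffer.flip();
                        // Placeholder: parse the request and mutate the item list here.
                        // No locking is needed, since only this one thread ever touches the list.
                    }
                }
            }
        }
    }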
I might also suggest, before you rewrite your initial incarnation, that you try improving it. As you have stated:
"and the system would get several requests per second, all of which would require sending the list back to the clients."
Given that each request removes an item from this list, why not just inform your users which item was removed, rather than sending the entire list over the network time and time again? If the list is of any significant size, this minor change will result in a large improvement.
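A tiny sketch of the idea; the ClientConnection type and the wire format are invented for illustration:

    import java.util.List;

    public class ItemListBroadcaster {
        /** Hypothetical client handle; in a real server this would wrap a socket. */
        public interface ClientConnection {
            void send(String message);
        }

        /**
         * Instead of serializing the whole list on every change, broadcast a small
         * delta message naming the item that was just accepted/removed.
         */
        public void broadcastItemRemoved(int itemId, List<ClientConnection> clients) {
            String message = "REMOVED " + itemId + "\n";   // made-up wire format
            for (ClientConnection client : clients) {
                client.send(message);
            }
        }
    }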


Multi-client multi instance centralised server using TCP protocol gaming system in Java

I'm not a newbie in Java programming. I would like to know how I can proceed with my project.
I want to develop a centralized gaming system in Java using TCP/IP sockets. It should get the player details and display the information in the gaming window.
These are my criteria:
What are the maximum and minimum numbers of players that can participate?
How should the server behave for a given state of the game board: should it invite one or more players to make their moves, notify a player of an opponent's move, or let a player declare the game over?
How do I update the game when a move is submitted by a player?
I'm not looking for a straight answer here. I'm looking for some guidance which would be helpful for me to start the project. Are there any tools for a multi-client, multi-instance centralised server using TCP?
First, the network layer:
There are several networking libraries for Java, such as MINA and Netty.
With the help of one of those libraries, you can solve the networking problems fairly easily.
Then, the logic layer:
You should maintain all the user_context objects in your server's memory and bind each of them to its corresponding TCP connection. Most of the time, the user_context objects are kept in a hashmap/dictionary or an RB-tree.
So, when an event happens, you can find the corresponding user/client and send the message to them.
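A rough sketch of that kind of registry; the Connection interface is a stand-in for whatever handle your networking library gives you (a MINA IoSession, a Netty Channel, ...), and the class and method names are invented:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class UserRegistry {
        /** Stand-in for the library-specific connection object. */
        public interface Connection {
            void write(String message);
        }

        /** Hypothetical per-player state kept in server memory. */
        public static class UserContext {
            final String playerId;
            final Connection connection;   // the TCP connection this user is bound to
            UserContext(String playerId, Connection connection) {
                this.playerId = playerId;
                this.connection = connection;
            }
        }

        // playerId -> context; a hash map is the usual choice, a tree map if you need ordering.
        private final Map<String, UserContext> users = new ConcurrentHashMap<>();

        public void onLogin(String playerId, Connection connection) {
            users.put(playerId, new UserContext(playerId, connection));
        }

        public void onDisconnect(String playerId) {
            users.remove(playerId);
        }

        /** When a game event happens, look up the affected player and push the message down their connection. */
        public void notify(String playerId, String message) {
            UserContext ctx = users.get(playerId);
            if (ctx != null) {
                ctx.connection.write(message);
            }
        }
    }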
I would think the minimum number of players is 0.
The maximum is likely to be dependent on:
- your bandwidth; you need a significant upload speed if you want thousands of users.
- how much work there is in managing each user. You can connect 10,000 users to a single server if they are not doing much, but as you add functionality, the number of users per server will drop to 1,000, possibly only 100.
The choice of IO framework makes a big difference when you have unlimited bandwidth and trivial work per connection (usually copying byte[]s of zeros). For real applications, it's less likely to matter. I suggest that whatever solution you pick, you make it easy to replace, should you find a better one later.
"Are there any tools for a multi-client, multi-instance centralised server using TCP?"
A common tool for this is JMS, but games are one area where you might not use it. I would start with ActiveMQ, as this will get you up and running quickly; just make sure you can replace it easily later.
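One way to keep the broker replaceable is to hide it behind a thin interface of your own. The names below are invented for illustration, and a JMS/ActiveMQ-backed implementation is only hinted at in the comment:

    /** Invented abstraction so the broker (ActiveMQ/JMS, or anything else) can be swapped later. */
    public interface GameMessageBus {
        void publish(String topic, String payload);
        void subscribe(String topic, MessageHandler handler);

        interface MessageHandler {
            void onMessage(String payload);
        }
    }

    // The rest of the game code only ever talks to GameMessageBus. One implementation could
    // wrap a JMS ConnectionFactory/Session/MessageProducer pointed at ActiveMQ; another could
    // be a plain in-memory queue for tests. Replacing the broker then touches a single class.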

Hard time choosing ... IO vs NIO

I would like to ask what would be more appropriate to choose when developing a server similar to SmartFoxServer. I intend to develop a similar yet different server. In the benchmarks made by the developers of the above server, they had something like 10,000 concurrent clients.
I did a bit of research regarding the cost of using too many threads (>500) but cannot decide which way to go. I once wrote a server in Java, but that was for a small application and had nothing to do with heavy loads.
Thanks
Take a look at Apache MINA. They've done a lot of the heavy lifting required to use NIO effectively in a networking application. Whether or not NIO increases your ability to process concurrent connections really depends on your implementation, but the performance improvements seen in Tomcat, JBoss and Jetty are already good evidence in its favour.
I'm not familiar with SmartFoxServer, so I can only speak generically (which is not always good, but here I go).
I think those are two different questions. On one hand, there is the I/O performance of native Java sockets vs. native sockets written in C (as Tomcat can use).
The other question is how to scale up to that kind of concurrency level. Other than that, I'd always choose native sockets (i.e. C).
Now, how to scale: it's not a good idea to have a lot of threads running at the same time (OS constraints, etc.), so I'd choose to scale horizontally, meaning adding a load balancer that can send requests to different servers, which can be linked by passing messages (via brokers like ActiveMQ or RabbitMQ, or even a protocol like STOMP or AMQP).
Another solution is a cloud environment that lets you grow your installation as you need.
In most benchmarks which test 10K or 100K connections, the server is doing no work, and unless your server does next to nothing, these tests are unrealistic.
You need to have a clear idea of how many concurrent connections you want to support.
If you have fewer than 1K connections, using a thread per connection will work fine; this is the simplest approach to take. Using a dispatcher model with NIO will work better if your requests are very simple; otherwise it won't matter much.
If you have more than 1K connections, it is likely you will want to use more than one server, as each connection is getting less than 1% of a core, and the cost of a basic server is relatively cheap these days.
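For the under-1K case, a thread-per-connection server really is about this simple; the port and the echo-style request handling are placeholders:

    import java.io.*;
    import java.net.ServerSocket;
    import java.net.Socket;

    /** Minimal thread-per-connection sketch -- fine for well under ~1K concurrent clients. */
    public class ThreadPerConnectionServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(5000)) {     // placeholder port
                while (true) {
                    Socket socket = server.accept();
                    new Thread(() -> handle(socket)).start();        // one thread per client
                }
            }
        }

        private static void handle(Socket socket) {
            try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) {
                    out.println("ECHO " + line);    // placeholder: real request handling goes here
                }
            } catch (IOException e) {
                // client dropped; nothing to do for this sketch
            }
        }
    }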

Producer Consumer in PHP and Java

I have a system wherein I get requests via HTTP calls to my PHP code (the producer). This code adds the request parameters to a table in MySQL (the queue). They are then taken and processed by a Java program (the consumer). In my first implementation, both producer and consumer were in PHP (with the MySQL queue). Then, as load increased, this proved inefficient, so I made the consumer Java. Now I think polling the MySQL table for the queue from my Java app is getting inefficient (very high CPU usage for the MySQL process). Is there a better way to implement this queue (sharing memory between the PHP code and the Java app, or something)?
Yes, you've got many options. The first is obviously to convert this into a client-server service and pass either text or binary messages between them. You might want to look into web services if you're a masochist, or a simpler REST service, or CORBA/COM+ and others for binary serialization. And then there are various queues, like MQSeries, RabbitMQ, etc. Sometimes the middle man is fast enough and efficient enough, or a direct call would suffice.
The next is a more direct link if your platforms are within the same server or cluster, something like JavaBridge and others (do a search for "java php bridge" and several will crop up). There is even a PHP interpreter written in Java for the JVM, which gives you full compatibility between the two and might do the trick for you.
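For the queue route, a hedged sketch of what the Java consumer side could look like if the queue moved from MySQL into a broker such as ActiveMQ (the PHP producer could publish via STOMP); the broker URL, queue name and process() method are assumptions, not part of the original answer:

    import javax.jms.*;   // jakarta.jms in newer releases
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class RequestConsumer {
        public static void main(String[] args) throws JMSException {
            // Assumed broker URL and queue name -- adjust to your setup.
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Destination queue = session.createQueue("requests");
            MessageConsumer consumer = session.createConsumer(queue);

            // Push-based delivery: no polling loop, no repeated SELECTs against MySQL.
            consumer.setMessageListener(message -> {
                try {
                    if (message instanceof TextMessage) {
                        process(((TextMessage) message).getText());
                    }
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            });
            connection.start();
        }

        private static void process(String requestPayload) {
            // placeholder for the existing consumer logic
        }
    }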

Better to build or buy a compute grid platform?

I am looking to do some quite processor-intensive brute-force processing for string matching. I have run my prototype in a multi-threaded environment and compared the performance to an implementation using GridGain with a couple of nodes (also multithreaded).
The performance I observed was that my GridGain implementation was slower than my multithreaded implementation. It could be that there was a flaw in my GridGain implementation, but it was only a prototype, and I thought the results were indicative. So my question is this:
What are the advantages of having to learn and then build an implementation for a particular grid platform (Hadoop, GridGain, or EC2 if going hosted - other suggestions welcome), when one could fairly easily put together a lightweight compute grid platform with a much shallower learning curve? In other words, what do we get for free with these cloud/grid platforms that is worth having or tricky to implement?
(Please note, I don't have any need for a data grid)
Cheers,
-James
(P.S. Happy to make this community wiki if need be)
What kind of grid are you dealing with? A dozen hosts running the same OS would be pretty straightforward to run as a grid - all you really have to deal with is sending work to each host, maybe a little load balancing, maybe taking into account what to do if a host goes down, maybe distributing new service code to the hosts when you update your service; and if you don't deal with any of those, it's not a big deal, since the grid is a manageable size. If you're dealing with thousands of hosts, or with a service that should never be down or have errors due to single hosts going down, then you suddenly have to worry about:
- not overloading any single host
- distributing new service code
- detecting when a host isn't responding, not sending it new work, and resending whatever it was working on
- possibly working across different OSes and architectures (little vs. big endian)
- energy savings - shutting down hosts during low load and bringing them back up for high load
- scaling - if you add 100 hosts to your grid tomorrow, how long does it take to get them connected and working?
- reliability - some services may actually perform calculations on 2-3 different hosts and only return an answer that all the hosts agree on
That's a short list of things that most grid software will do for you if you need them. If you're working on something small or non-critical, then by all means roll your own. If you're working on something that has to work, or is big enough that having any manual steps in a deployment process would be a maintenance nightmare, then you probably want to go with something that already exists.

Critically efficient server

I am developing a client-server based application for financial alerts, where the client can set a value as the alert for a chosen financial instrument, and when this value is reached the monitoring server will somehow alert the client (email, SMS... not important). The server will monitor updates that come from a data generator program. Now, the server has to be very efficient, as it has to handle many clients (possibly over 50,000-100,000 alerts, with updates coming every 1-2 seconds). I've written servers before, but never with such performance requirements imposed, and I'm simply afraid that a basic approach (like before) just won't do it. So how should I design the server? What kind of data structures are best suited? What about multithreading? In general, what should I do (and what should I not do) to squeeze every drop of performance out of it?
Thanks.
I've worked on servers like this before. They were all written in C (or fairly simple C++). But they were even higher performance -- handling 20K updates per second (all updates from most major stock exchanges).
We focused on not copying memory around, and we were very careful about which STL classes we used. As for updates, each financial instrument was an object, and any client that wanted to hear about that instrument would subscribe to it (i.e. get added to a list).
The server was multi-threaded, but not heavily so - maybe one thread handling incoming updates, one handling outgoing client updates, and one handling client subscribe/release notifications (I don't remember that part exactly - just that it had fewer threads than I would have expected, but not just one).
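A rough Java sketch of that subscription model (the original servers were C/C++, and the Subscriber type here is invented for illustration):

    import java.util.Set;
    import java.util.concurrent.CopyOnWriteArraySet;

    /** One object per financial instrument; clients subscribe to the instruments they care about. */
    public class Instrument {
        /** Stand-in for a connected client session. */
        public interface Subscriber {
            void onUpdate(String symbol, double price);
        }

        private final String symbol;
        // Fan-outs of updates vastly outnumber subscribe/unsubscribe calls,
        // so a copy-on-write set keeps the hot path lock-free.
        private final Set<Subscriber> subscribers = new CopyOnWriteArraySet<>();

        public Instrument(String symbol) {
            this.symbol = symbol;
        }

        public void subscribe(Subscriber s)   { subscribers.add(s); }
        public void unsubscribe(Subscriber s) { subscribers.remove(s); }

        /** Called by the thread that handles incoming market updates. */
        public void onPrice(double price) {
            for (Subscriber s : subscribers) {
                s.onUpdate(symbol, price);   // in a real server this would enqueue, not block
            }
        }
    }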
EDIT: Oh, and before I forget, the number of financial transactions happening is growing at an exponential rate. That 20K/sec server was just barely keeping up and the architects were getting stressed about what to do next year. I hear all major financial firms are facing similar problems.
You might want to look into using a proven message queue system, as it sounds like this is basically what you are doing in your application.
Projects like Apache ActiveMQ or RabbitMQ are already widely used and highly tuned, and should be able to support the type of load you are talking about out of the box.
I would think that squeezing every drop of performance out of it is not what you want to do, as you really never want that server to be under load significant enough to take it out of a real-time response scenario.
Instead, I would use a separate machine to handle messaging clients, and let that main, critical server focus directly on processing input data in "real time" to watch for alert criteria.
Best advice is to design your server so that it scales horizontally.
This means distributing your input events to one or more servers (on the same or different machines), that individually decide whether they need to handle a particular message.
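A toy sketch of that kind of partitioning, with invented names: each server instance decides for itself whether an incoming instrument event is its responsibility. A consistent-hashing scheme would reshuffle less work when servers are added, but this shows the idea.

    /** Toy partitioning: each server instance only handles the instruments that hash to it. */
    public class EventPartitioner {
        private final int serverCount;
        private final int myIndex;   // which of the N servers this process is

        public EventPartitioner(int serverCount, int myIndex) {
            this.serverCount = serverCount;
            this.myIndex = myIndex;
        }

        /** Decide whether this server is responsible for a given instrument's events. */
        public boolean isMine(String instrumentSymbol) {
            int owner = Math.floorMod(instrumentSymbol.hashCode(), serverCount);
            return owner == myIndex;
        }
    }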
Will you be supporting 50,000 clients on day 1? Then that should be your focus: how easily can you define a single client's needs, and how many clients can you support on a single server?
Second-best advice is not to artificially constrain yourself. If you say "we can't afford to have more than one machine," then you've already set yourself up for failure.
Beware of any architecture that needs clustered application servers to get a reasonable degree of performance. The London Stock Exchange had just such a problem recently when they pulled an existing Tandem-based system and replaced it with clustered .NET servers.
You will have a lot of trouble getting this type of performance from a single Java or .NET server - really you need to consider C or C++. A clustered architecture is much more error-prone to build and deploy, and it is harder to guarantee uptime for.
For really high volumes you need to think in terms of using asynchronous I/O for networking (i.e. poll(), select() and asynchronous writes or their Windows equivalents), possibly with a pool of worker threads. Read up about the C10K problem for some more insight into this.
There is a very mature C++ framework called ACE (Adaptive Communications Environment) which was designed for high volume server applications in telecommunications. It may be a good foundation for your product - it has support for quite a variety of concurrency models and deals with most of the nuts and bolts of synchronisation within the framework. You might find that the time spent learning how to drive this framework pays you back in less development and easier implementation and testing.
One thread for receiving instrument updates, which will process each update and put it on a BlockingQueue.
One thread to take updates from the BlockingQueue and hand them off to the process that handles that instrument, or set of instruments. This process will need to serialize the events for an instrument so the customer does not receive notifications out of order.
This process (thread) will need to iterate through the list of customers registered to receive notifications and build a list of customers who should be notified based on their criteria. The process should then hand that list off to another process that will notify each customer of the change.
The notification process should iterate through the list and send each notification event to another process that handles how the customer wants to be notified (email, etc.).
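A minimal sketch of the first two stages, assuming a made-up Update record; the single dispatcher thread draining the queue is what preserves per-instrument ordering:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    /** Two-stage pipeline: a receiver thread enqueues raw updates, a dispatcher thread drains them in order. */
    public class UpdatePipeline {
        /** Hypothetical update record. */
        public static class Update {
            final String instrument;
            final double value;
            Update(String instrument, double value) { this.instrument = instrument; this.value = value; }
        }

        private final BlockingQueue<Update> queue = new ArrayBlockingQueue<>(10_000);

        /** Called by the thread receiving instrument updates off the wire. */
        public void onRawUpdate(String instrument, double value) throws InterruptedException {
            queue.put(new Update(instrument, value));   // blocks if the dispatcher falls behind
        }

        /** Starts the single dispatcher thread. */
        public void startDispatcher() {
            Thread dispatcher = new Thread(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        dispatch(queue.take());   // hand off to the process handling this instrument
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }, "update-dispatcher");
            dispatcher.start();
        }

        private void dispatch(Update u) {
            // placeholder: look up the customers registered for u.instrument and notify them
        }
    }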
One of the problems will be synchronizing access to the list of 100,000 customers and the criteria they want monitored.
You should try to find a way to organize the alerts as a tree and be able to quickly decide which alerts can be triggered by an update.
For example, let's assume the alert is on the level of a certain indicator, and that the indicator has a range of 0 to n. I would group the clients who want to be notified about the level of that indicator into a sort of binary tree. That way you can scale it properly (you can actually implement a subtree as a process on a different machine), and the number of comparisons required to find the proper subset of clients will always be logarithmic.
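Along the same lines, a sorted map gives the same logarithmic lookup as a hand-rolled binary tree; a hedged sketch with invented names:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.NavigableMap;
    import java.util.TreeMap;

    /** Alerts for one indicator, keyed by the threshold level each client asked about. */
    public class ThresholdIndex {
        private final NavigableMap<Double, List<String>> clientsByThreshold = new TreeMap<>();

        /** Register clientId to be alerted when the indicator reaches 'level' or above. */
        public synchronized void register(double level, String clientId) {
            clientsByThreshold.computeIfAbsent(level, k -> new ArrayList<>()).add(clientId);
        }

        /**
         * On each update, only the thresholds at or below the new value are visited --
         * an O(log n) descent into the tree plus the matches themselves, never a scan
         * of every registered client.
         */
        public synchronized List<String> triggeredBy(double newValue) {
            List<String> triggered = new ArrayList<>();
            for (List<String> clients : clientsByThreshold.headMap(newValue, true).values()) {
                triggered.addAll(clients);
            }
            return triggered;
        }
    }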
The Apache MINA network application framework, as well as Apache Camel for message routing, is probably a good starting point. The Kilim message-passing framework also looks very promising.
