How can two (or more) JVMs running on the same machine communicate without RMI?
Thanks
If you are concerned about the security of JVM-to-JVM communication against snooping with Wireshark, etc., you could consider doing your RMI communication over an SSL-secured channel, or the equivalent.
However, if someone is able to run Wireshark on the same machine as your two JVMs, this would probably not be sufficient to solve your problem. And using an alternative to RMI is not going to make you any more secure either.
For a start, if the bad guys have sufficient privilege to run Wireshark, they almost certainly have the privilege to interfere with the JVMs in ways that would subvert your use of a secured channel. For example, they could probably attach a debugger to the JVMs, or hack the application code (via the file system) to leak the information you are trying to protect.
In short, you would be better off just using RMI, and spending your time making sure that the bad guys cannot get into your machine to run Wireshark (etc) in the first place.
Communicate or invoke methods?
You could always open sockets and communicate directly via an arbitrary protocol, or even pass objects back and forth in serialized forms. On most operating systems, socket connections between processes on the same machine are faster and more efficient than connections between machines.
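For illustration, here is a minimal sketch of two JVMs talking over a loopback socket; the class names, port, and line-based protocol are all arbitrary choices:

    // Server JVM: listens on a local port and prints one line per client.
    import java.io.*;
    import java.net.*;

    public class LocalServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket server = new ServerSocket(9000)) {
                while (true) {
                    try (Socket client = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(client.getInputStream()))) {
                        System.out.println("received: " + in.readLine());
                    }
                }
            }
        }
    }

    // Client JVM (separate file): connects to the server and sends a message.
    import java.io.*;
    import java.net.*;

    public class LocalClient {
        public static void main(String[] args) throws IOException {
            try (Socket socket = new Socket("localhost", 9000);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                out.println("hello from another JVM");
            }
        }
    }

Run the two classes in separate JVMs and the traffic never leaves the machine.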
A good place to start would be to look at a JMS tutorial. JMS requires an extra broker process, but it makes communication between JVMs a piece of cake.
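As a rough sketch of what the sending side looks like with the JMS API (this assumes an ActiveMQ broker running locally on its default port; the queue name is arbitrary):

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class JmsSender {
        public static void main(String[] args) throws JMSException {
            // Assumes an ActiveMQ broker is already running on localhost:61616.
            ConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Destination queue = session.createQueue("jvm.to.jvm");
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("hello from JVM 1"));
            connection.close();
        }
    }

The second JVM would create a MessageConsumer on the same queue to receive.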
There are also things like J2EE and even HTTP-based XML-RPC, but these might be overkill for your needs.
Sockets, remote EJB, web services ... off the top of my head. What is your specific case?
I need to transfer data from one process to another.
I'm quite familiar with the subject when the two processes originate from C code; more than once I have used files, signals, and pipes in C code to accomplish it. But I have never tried it between two processes where one originates from Java code and the other from C code.
Since all the above methods require the (Linux) native API, and the JVM is in the way, I have decided to use a socket to communicate between these two processes, and I have a couple of questions:
How common is it to use socket to communicate between two processes on the same machine?
Does the fact that there is no designated "server" and "client" pose any obstacles (implementation-wise)?
The reason I'm asking is that everywhere I read online, there is always one process defined as the 'server' and one as the 'client'. That is not the case in my situation. Plus, I have never used a socket for this purpose before.
How common is it to use socket to communicate between two processes on the same machine?
It's quite common for a certain class of interaction patterns: when two independently-launched programs need a bidirectional communication channel. You cannot easily use pipes for that ("independently-launched" interferes). You can use FIFOs, but you need two, someone needs to set them up, and there are other quirks.
Does the fact that there is no designated "server" and "client" pose any obstacles (implementation-wise)?
The distinction between "client" vs. "server" is first and foremost about roles in establishing communication: the server sets up the communication interface and waits for one or more clients to open a connection to it. Being a "server" does not necessarily imply supporting multiple clients (neither concurrently nor serially), nor does it necessarily imply anything about communication passing over the socket connection after it is established. If you use sockets then you do have client and server, but if you have no other way to designate which process should have which role then you can choose arbitrarily.
One trick with sockets in Java is that although the Java standard library supports them, it supports only network sockets, not UNIX-domain sockets. The latter are more commonly used in UNIX and Linux applications where communication is inherently restricted to processes running on the same machine, but network sockets listening to (only) the loopback interface can nevertheless serve that purpose, too.
On modern systems, local TCP connections over the loopback interface are nearly as fast as UNIX-domain sockets, so using them is not a problem in practice.
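For instance, to make sure the server side accepts only local connections, you can bind it explicitly to the loopback interface (the port and backlog here are arbitrary):

    import java.io.IOException;
    import java.net.InetAddress;
    import java.net.ServerSocket;

    public class LoopbackServer {
        public static void main(String[] args) throws IOException {
            // Bind to 127.0.0.1 only, so no remote host can connect.
            ServerSocket server = new ServerSocket(
                    9000,                                  // arbitrary port
                    50,                                    // connection backlog
                    InetAddress.getByName("127.0.0.1"));
            System.out.println("listening on " + server.getLocalSocketAddress());
        }
    }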
Connecting two processes together in a language- and platform-agnostic way can easily be achieved with a socket. Sockets are supported by all languages and platforms, and can easily be replaced with another method later if desired.
From your explanation I gathered that the Java process would be the server. Sockets are quite risk-free, since they don't require special permissions (for ports over 1024 at least) or any other special handling.
Just pay attention when designing the (application level) protocol your processes will be communicating through.
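For example, one simple, language-agnostic convention is to length-prefix every message, so neither side has to guess where a message ends. A sketch of the Java half (the method names are mine; note that DataOutputStream writes big-endian, which the C side must match, e.g. with ntohl):

    import java.io.*;

    public class Framing {
        // Write one message: 4-byte big-endian length, then the payload.
        static void writeMessage(DataOutputStream out, byte[] payload) throws IOException {
            out.writeInt(payload.length);
            out.write(payload);
            out.flush();
        }

        // Read one message framed the same way.
        static byte[] readMessage(DataInputStream in) throws IOException {
            int length = in.readInt();
            byte[] payload = new byte[length];
            in.readFully(payload);
            return payload;
        }
    }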
You might want to use the Java Native Interface (JNI) instead; it might be exactly what you want, as opposed to your approach of using sockets in both programs.
You might want to look into shared memory on linux.
BUT: using sockets is not a bad thing here in general; I just doubt it is common practice*.
*Though I lack proof that it is not common practice.
I want to create Java network servers which share one IP address, something like a Piranha cluster.
Is there any solution similar to this?
P.S. They have to work as a cluster: if one server goes down, the second one should handle the traffic.
Well, the obvious solution would be to build your Java servers behind the Piranha layer; i.e., implement the application services on "real server 1", "real server 2", and so on in Java.
I'm pretty sure that you can't implement a Piranha-like solution in (pure) Java. The IP level load balancing is implemented in the network stack in the OS kernel (I think) of the "director". That rules out (pure) Java for two reasons:
It is impractical to put Java code inside the kernel.
To do it in user space in Java would entail using native code to read and write raw network packets. That is not possible in pure Java.
Besides, the chances are that you'd get better network throughput if the director layer was not implemented in Java, pure or otherwise.
Of course, there are other ways to do load balancing as well ...
Just create your standalone TCP/IP servers to listen on different ports (and of course the IP address would be the same, as this is your requirement).
I have to monitor the ports in use under the server (i.e. all the clients accessing the network), and in particular the monitoring must be based on the bandwidth utilized.
It should report anonymous ports (uncommon ports not tied to any specific application or protocol) that are being used beyond some threshold value (for example, 200 KB or 2000 KB). Can this be implemented easily?
I'd check out jNetPcap. I used it years ago to do something similar. How 'easy' it would be for you to implement the solution you're looking for with it is hard for me to answer.
Note, I believe it does require installation at one of the communication endpoints. If that doesn't suit you, maybe look into getting data directly from routers or other network hardware.
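To give a feel for the approach, here is a rough sketch in the style of jNetPcap 1.x's tutorial API; the device name, threshold, and counting-by-destination-port logic are all illustrative assumptions, and you should check the method names against the version you install:

    import java.util.HashMap;
    import java.util.Map;
    import org.jnetpcap.Pcap;
    import org.jnetpcap.packet.PcapPacket;
    import org.jnetpcap.packet.PcapPacketHandler;
    import org.jnetpcap.protocol.tcpip.Tcp;

    public class PortTraffic {
        public static void main(String[] args) {
            StringBuilder errbuf = new StringBuilder();
            // The device name is machine-specific, e.g. "eth0" on Linux.
            Pcap pcap = Pcap.openLive("eth0", 64 * 1024,
                    Pcap.MODE_PROMISCUOUS, 1000, errbuf);
            if (pcap == null) {
                System.err.println("pcap open failed: " + errbuf);
                return;
            }
            final Map<Integer, Long> bytesPerPort = new HashMap<Integer, Long>();
            final Tcp tcp = new Tcp();
            pcap.loop(-1, new PcapPacketHandler<Object>() {  // -1 = loop forever
                public void nextPacket(PcapPacket packet, Object user) {
                    if (packet.hasHeader(tcp)) {
                        int port = tcp.destination();
                        Long seen = bytesPerPort.get(port);
                        long total = (seen == null ? 0L : seen) + packet.size();
                        bytesPerPort.put(port, total);
                        if (total > 200 * 1024) {  // example threshold: 200 KB
                            System.out.println("port " + port + " exceeded threshold");
                        }
                    }
                }
            }, null);
        }
    }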
I would like to ask what would be more appropriate to choose when developing a server similar to SmartFoxServer. I intend to develop a similar yet different server. In benchmarks published by the developers of that server, they handled something like 10,000 concurrent clients.
I did a bit of research regarding the cost of using many threads (>500) but cannot decide which way to go. I once wrote a server in Java, but that was for a small application and had nothing to do with heavy loads.
Thanks
Take a look at Apache MINA. They've done a lot of the heavy lifting required to use NIO effectively in a networking application. Whether or not NIO increases your ability to handle concurrent connections really depends on your implementation, but the performance boosts in Tomcat, JBoss, and Jetty are already plenty of evidence in the positive.
I'm not familiar with SmartFoxServer, so I can only speak generically (which is not always good :P, but here I go).
I think those are two different questions. On one hand there is the I/O performance of plain Java sockets vs. native sockets written in C (which Tomcat, for example, can use). Other things being equal, I'd always choose native sockets (i.e. C).
The other question is how to scale up to that kind of concurrency level. It's not a good idea to have a huge number of threads running at the same time (OS constraints, etc.), so I'd scale horizontally: add a load balancer that can send the requests to different servers, linked together by messaging (with JMS brokers like ActiveMQ, with RabbitMQ, or even with a protocol like STOMP or AMQP).
Another solution is a cloud environment that allows you to grow your installation as you need.
In most benchmarks which test 10K or 100K connections, the server is doing no real work, and unless your server does next to nothing, these tests are unrealistic.
You need to have a clear idea of how many concurrent connections you want to support.
If you have fewer than 1K connections, using a thread per connection will work fine, and it is the simplest approach to take. A dispatcher model with NIO will work better if your requests are very simple; otherwise it won't matter much.
If you have more than 1K connections, it is likely you want to use more than one server, as each connection is then getting less than 1% of a core, and a basic server is relatively cheap these days.
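For reference, the thread-per-connection model mentioned above takes only a few lines with an ExecutorService; this is just a skeleton, with the per-connection handling left as a placeholder:

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ThreadPerConnectionServer {
        public static void main(String[] args) throws IOException {
            ExecutorService pool = Executors.newCachedThreadPool();
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    final Socket client = server.accept();
                    // One task (effectively one thread) per connection.
                    pool.execute(new Runnable() {
                        public void run() {
                            handle(client);
                        }
                    });
                }
            }
        }

        static void handle(Socket client) {
            // Placeholder: read the request, write a response, then close.
            try { client.close(); } catch (IOException ignored) {}
        }
    }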
I am designing an application where many clients connect to a central server. This server keeps these connections open, sending keep-alives every half hour. The server has an embedded HTTP server, which provides an interface to the client connections (e.g. http://server/isClientConnected?id=id). I was wondering what the best way to go about this is. My current implementation is in Java, and I just have a Map with IDs as the keys, but a thread is started for each connection, and I don't know if this is really the best way to do it. Any pointers would be appreciated.
Thanks,
Isaac Waller
Use the java.nio package, as described on this page: Building Highly Scalable Servers with Java NIO. Also read this page very carefully: Architecture of a Highly Scalable NIO-Based Server.
Personally I'd not bother with the NIO internals and use a framework like Apache MINA or xSocket. NIO is complicated and easy to get wrong in very obscure ways. If you want it to "just work", then use a framework.
With a single thread per connection you can usually scale up to about 10,000 connections on a single machine. On a 32-bit Windows machine, you will probably hit a limit around 1,000 connections.
To avoid this, you can either change the design of your program, or you can scale out (horizontally). You have to weigh the cost of development against the cost of hardware.
The single thread per user, with a single continuous connection is usually the easiest programming model. I would stick with this model until you reach the limits of your current hardware. At that point, I would decide to either change the code, or add more hardware.
If the clients will be connected for long periods of time, allocating a thread per client can be problematic. Each thread on the server requires a certain amount of resources (memory for the stack, for example).
You could use Jetty Continuations to handle the client request with fewer threads by using asynchronous servlets.
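Jetty's Continuations API has since been superseded by the standard Servlet 3.0 asynchronous API, which achieves the same effect; a minimal sketch (the servlet path and response body are placeholders):

    import java.io.IOException;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.*;

    @WebServlet(urlPatterns = "/isClientConnected", asyncSupported = true)
    public class StatusServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            // Detach the request from the container thread ...
            final AsyncContext ctx = req.startAsync();
            // ... and complete it later from some other thread.
            ctx.start(new Runnable() {
                public void run() {
                    try {
                        ctx.getResponse().getWriter().print("connected");
                    } catch (IOException ignored) {
                    } finally {
                        ctx.complete();
                    }
                }
            });
        }
    }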
Read more about the Reactor pattern. There is an implementation of it in Java (it uses channels instead of a thread per client).
It is easy to implement and very efficient.
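A bare-bones version of that pattern with java.nio looks roughly like this (the port is arbitrary and the actual request handling is omitted):

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.*;
    import java.util.Iterator;

    public class Reactor {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(8080));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buffer = ByteBuffer.allocate(4096);
            while (true) {
                selector.select();  // one thread waits on all channels
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) {
                        // New client: register its channel with the same selector.
                        SocketChannel client = server.accept();
                        client.configureBlocking(false);
                        client.register(selector, SelectionKey.OP_READ);
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buffer.clear();
                        if (client.read(buffer) == -1) {
                            client.close();  // peer disconnected
                        }
                        // ... dispatch buffer contents to a handler here ...
                    }
                }
            }
        }
    }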