In my understanding java.rmi is a specification.
How do I know which particular implementation of java.rmi I'm using when developing against that API?
Also, is there a limit on the maximum number of threads that are started on an RMI server?
You use the implementation provided by the vendor of the JDK. There isn't a provider architecture like there is in JNDI, NIO, JCA, etc.
The RMI Specification doesn't mention any limit on threads.
I would hardly call RMI a "specification". It is a quite Java-specific serialization implementation, and it's whatever implementation matches the JRE version you are running. I would advise against randomly mixing and matching JRE versions between client and server when RMI is in use, and against serializing POJOs using exotic features that don't exist in older VMs.
Though it's not "specified", I once reverse-engineered an almost complete RMI implementation for C# (for use with Spring/Hibernate-based servers from WPF), but I don't know of anybody who uses such non-JRE implementations in the real world. In some cases you get stuck with RMI just to communicate with J2EE systems. But if at all possible, you should use something more reasonable like Protocol Buffers/Thrift/Avro/Hessian/Parquet, etc. Those have real "specifications" with a versioned wire protocol and IDL compilers for multiple platforms, which was absolutely not the case at the time I did that for RMI.
RMI has other problems that I would categorize as security issues (e.g. you spell out a class name in the serialization stream and it will call a no-arg constructor to create a class of that name). Its design also isn't very good for situations where object graphs can get large. (In particular, you can make the stack grow really large during deserialization.)
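On newer JDKs (9 and later) this particular risk can be mitigated with a deserialization filter (JEP 290); a minimal sketch, using only the JDK, where the filter rejects java.util.Date as a stand-in for any class you don't want instantiated from the stream:

```java
import java.io.*;
import java.util.Date;

public class FilterDemo {
    // Serialize an object to bytes so we can replay it through a filter.
    public static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    // Returns true if the deserialization filter rejected the stream.
    public static boolean rejectedByFilter(byte[] bytes) throws IOException {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            // Reject java.util.Date; classes not matched stay undecided (allowed).
            ois.setObjectInputFilter(
                    ObjectInputFilter.Config.createFilter("!java.util.Date"));
            ois.readObject();
            return false;
        } catch (InvalidClassException e) {
            return true;            // filter status: REJECTED
        } catch (ClassNotFoundException e) {
            throw new IOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(rejectedByFilter(serialize(new Date()))); // prints true
    }
}
```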
Related
In attempting to design, implement & test a distributed capabilities system, Remote Promises[1][2][3], bit-identical between Squeak & Java, there are shortcomings. I am seeking workarounds.
With Remote Promises, proxies can change state, which changes the class implementing the proxy. In Squeak this is done with #becomeForward:, while in Java it requires a secondary proxy, one that can change its implementation. This does work.
Exceptions should be non-blocking to allow the event loop to continue, yet should also display the problem stack for debugging, out of a quarantine. This works well in Squeak but is an open issue with Java. I suppose the answer is to do all your logging and then close the exception, allowing the event loop to proceed: it is server-style log debugging.
Using a meta-repository, it should be possible to demand-load consumers of a particular event type: dynamically load the latest released code into the consumer servers and spread out the load to increase throughput, updating the system at runtime for continuous, seamless operation. I suppose the solution here is to build a dynamic jar classLoader system. Are there any examples of this? An Apache project, perhaps?
Remote Promises in Squeak
Cryptography in Squeak
Remote Promises in Java, called Raven
Use cloud technologies made for that kind of use case
I would say that in today's world, to get the latest version of some code, you don't use a class loader or any advanced capability of your programming language. You would more likely use some kind of cloud service.
That may be a serverless cloud implementation or a container/Kubernetes (https://kubernetes.io/) implementation. When the new release is loaded, you can then control exactly whether you want to do a canary, blue/green or progressive rollout, or even implement your own strategy.
Because it works with containers, this is fine whatever the language: C++, Java, Python, shell, Squeak or anything else.
That layer also provides auto-scaling of your various services, redundancy, load balancing, and distribution of the workload across your cluster.
You can go to the next step with GitOps: a PR merged in Git automatically triggers the load of the new version in production (https://www.weave.works/technologies/gitops/).
Dynamically loading of jars in Java
Still, Java, thanks to its class loader API, does allow classes to be loaded dynamically. This is what web servers do, and several implementations of it exist, such as OSGi; see also the answer by dimo414.
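A minimal sketch of that mechanism using only the JDK: compile a tiny class at runtime and pull it in through a fresh URLClassLoader, the same way a plugin jar would be loaded (the Plugin class and its version() method are purely illustrative):

```java
import javax.tools.ToolProvider;
import java.io.File;
import java.io.FileWriter;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;

public class DynamicLoadDemo {
    // Compile an illustrative class at runtime, then load it with a fresh
    // URLClassLoader and invoke it reflectively, as a plugin host would.
    public static String loadAndRun() throws Exception {
        File dir = Files.createTempDirectory("plugins").toFile();
        File src = new File(dir, "Plugin.java");
        try (FileWriter w = new FileWriter(src)) {
            w.write("public class Plugin { public String version() { return \"v2\"; } }");
        }
        // Requires a JDK (not a bare JRE); compiles Plugin.class next to the source.
        int rc = ToolProvider.getSystemJavaCompiler()
                .run(null, null, null, src.getPath());
        if (rc != 0) throw new IllegalStateException("compile failed");
        try (URLClassLoader loader =
                 URLClassLoader.newInstance(new URL[]{dir.toURI().toURL()})) {
            Object plugin = loader.loadClass("Plugin")
                    .getDeclaredConstructor().newInstance();
            return (String) plugin.getClass().getMethod("version").invoke(plugin);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadAndRun()); // prints v2
    }
}
```

Closing the URLClassLoader when the plugin is retired is what lets a long-running server drop old versions and load new ones.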
Conclusion
It would seem that the Java route makes more sense for a generic plugin system like the one in Eclipse (OSGi), and that the container solution makes more sense for a globally distributed system with auto-scaling and resilience in clusters.
Kubernetes scales to thousands of nodes and provides a whole ecosystem for dealing with distributed systems; it can scale and operate any Linux or Windows process. It is the de-facto standard pushed by Google and used by thousands of companies around the world.
demand load consumers of a particular event type.
This is typically done via the ServiceLoader API. See the AutoService project to simplify working with services.
This may not be what you need; your question is still very broad, and there are many plausible approaches. A search for [dynamically load jars] finds existing posts like Load jar dynamically at runtime? that may be of interest.
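To make the ServiceLoader suggestion concrete, here is a hedged sketch; the EventConsumer interface and the event-type string are hypothetical, and since no providers are registered in this standalone example, the loader finds none:

```java
import java.util.ServiceLoader;

public class ConsumerRegistry {
    // Hypothetical service interface for "consumers of a particular event type".
    public interface EventConsumer {
        String eventType();
        void consume(String event);
    }

    // ServiceLoader discovers implementations listed on the classpath in
    // META-INF/services/ConsumerRegistry$EventConsumer (or declared with
    // `provides` in module-info.java). Each jar dropped on the classpath can
    // contribute consumers without any code change here.
    public static int countConsumersFor(String type) {
        int n = 0;
        for (EventConsumer c : ServiceLoader.load(EventConsumer.class)) {
            if (c.eventType().equals(type)) n++;
        }
        return n;
    }

    public static void main(String[] args) {
        // Nothing is registered in this standalone sketch, so the count is 0;
        // with provider jars present, each implementation is instantiated lazily.
        System.out.println(countConsumersFor("order.created")); // prints 0 here
    }
}
```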
The architecture of our system is such that there is a set of functionally subdivided RESTful subsystems. Many of these subsystems not only have to respond to requests from browsers but also to other subsystems. The inter-subsystem traffic is relatively heavy and needs to scale, so the decision was made to use serialized Java beans as the representation for this type of communication (due to the speed of serialization/deserialization). This in turn introduces a binary dependency between subsystems which have a client/server relationship. Changing the internal structure of the Java beans that are exposed via the RESTful API can have version compatibility consequences with client subsystems. Of course changing the structure of a representation of any content-type will have compatibility issues, but this is obviously worse.
Since one API can service many clients, coordinating releases of every set of dependent subsystems is an unattractive option.
This must be a common problem, and I wonder how other people solve or mitigate it?
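For what it's worth, one common mitigation when both sides stay on Java serialization is to pin serialVersionUID on the exposed beans: adding or removing fields then remains a compatible change (missing fields deserialize to default values) rather than breaking old clients with InvalidClassException. A round-trip sketch with an illustrative bean:

```java
import java.io.*;

public class VersionTolerantBean implements Serializable {
    // Pinning serialVersionUID makes adding/removing fields a compatible
    // change: old streams still deserialize, missing fields get defaults.
    // Without it, the JVM derives the UID from the class shape, so any
    // structural change breaks clients built against the old bean.
    private static final long serialVersionUID = 1L;

    public String name;
    public int quantity;     // imagine this field was added in a later release

    public static byte[] toBytes(VersionTolerantBean b) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(b);
        }
        return bos.toByteArray();
    }

    public static VersionTolerantBean fromBytes(byte[] bytes)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (VersionTolerantBean) ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        VersionTolerantBean b = new VersionTolerantBean();
        b.name = "widget";
        b.quantity = 3;
        VersionTolerantBean copy = fromBytes(toBytes(b));
        System.out.println(copy.name + " " + copy.quantity); // prints widget 3
    }
}
```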
One option might be to communicate between subsystems using something such as protocol buffers. From what I understand, they were designed for just the sort of thing you're describing, particularly with regard to making compatible version changes.
I'm not sure I correctly understand your question. I guess it's about interface versioning, e.g. one operation and/or object may exist in different versions and be used by different client systems, let's say:
ClientA uses InterfaceA
ClientB uses InterfaceA
...
In the SOA world it was solved by namespacing the different (WSDL, XSD) versions, so that you can implement some governance around the interfaces:
Time t0
ClientA uses InterfaceA.v1
ClientB uses InterfaceA.v1
Time t1 (new version of InterfaceA)
ClientA uses InterfaceA.v2
ClientB uses InterfaceA.v1
Now you can implement processes to force ClientB to migrate to InterfaceA.v2 at a certain point in time. In general these concepts were developed for the WS-* world, but you can apply them to the RESTful world as well (I have done this several times). A nice MSFT article: http://msdn.microsoft.com/en-us/library/ms954726.aspx.
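The governance idea above can be sketched in plain Java: both versions stay deployed side by side under namespaced keys, and v1 is retired on your own schedule (all names and contracts here are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class InterfaceRegistry {
    // Both versions of the operation remain deployed side by side;
    // each client binds to an explicit, namespaced version key.
    private final Map<String, Function<String, String>> operations = new HashMap<>();

    public InterfaceRegistry() {
        operations.put("InterfaceA.v1", name -> "Hello " + name);
        operations.put("InterfaceA.v2", name -> "Hello, " + name + "!"); // changed contract
    }

    public String invoke(String version, String arg) {
        Function<String, String> op = operations.get(version);
        if (op == null) throw new IllegalArgumentException("retired or unknown: " + version);
        return op.apply(arg);
    }

    // Governance step: at time t1 you retire v1, forcing stragglers to migrate.
    public void retire(String version) {
        operations.remove(version);
    }

    public static void main(String[] args) {
        InterfaceRegistry r = new InterfaceRegistry();
        System.out.println(r.invoke("InterfaceA.v1", "ClientB")); // prints Hello ClientB
        System.out.println(r.invoke("InterfaceA.v2", "ClientA")); // prints Hello, ClientA!
    }
}
```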
I'm developing a Java application that consists of a server and a client (possibly multiple clients in future) which may run on different hosts.
For communication between these two I currently use a custom protocol consisting of JSON messages that are sent over network sockets and converted back to Java bean objects on both sides. However, as the application grows more complex, I notice that this approach doesn't meet my standards and is too complicated.
I'm looking for a well established, possibly standardized alternative.
I've looked at Remote Method Invocation (RMI) but read that the protocol is slow (big network overhead).
The technology I'm looking for should be lightweight (protocol- and library-wise), robust, maybe support compression (a big plus if it does!), maybe support encryption, be well documented and well established (e.g. an Apache project). It should be as easy as calling a method on a remote object with RMI, but without its disadvantages.
What can you recommend?
Avro is an Apache project that is designed for cross-language RPC (see Thrift for its spiritual predecessor). It is fairly new (less than two years old), so it isn't as well-established as RMI, for example. You should still give it a chance, though; large projects like Cassandra are moving to Avro. Avro is also a sub-project under Hadoop and has been receiving healthy support from that community.
It is designed to be fast and to support multiple languages, so you will probably need to introduce another step during compilation in which you translate an Avro IDL file into Java, although this isn't strictly necessary. The rest is typical RPC.
One nice thing about Avro is that its transport layers are independent of how data is represented. For example, it comes with various "transceivers" (their base communication class) for raw sockets, HTTP, and even local intra-process calls. HTTPS and SASL transceivers can provide security.
For representing data, there are encoders and decoders of various types, although the default BinaryEncoder generally suffices, since Hadoop, Cassandra, etc. focus on efficiency. There is also a JsonEncoder in case you find that useful.
This really all depends on what kind of compatibility you require between client and server. CORBA is a well established and standardized way of communicating between different languages, but it requires a bit more effort to use than Java RMI. If the clients are running from some external, untrusted source, then an HTTP based protocol makes more sense. If you follow a REST approach, then it becomes easier to scale out later as you need to add more servers.
If both client and server are Java, and they are running within a trusted network, RMI meets your requirements for being "well established". Performance overhead of RMI is exaggerated, but very early versions did not pool connections.
If you're willing to toss away both "well established" and "standardized", you can use Dirmi as a substitute for RMI. It's faster, easier, has more features, and it doesn't have the firewall problems RMI has. Like RMI, it supports TLS (encryption), but neither supports built-in compression.
Whatever you choose, beware of lock-in. Try to design your server such that the remote access layer is a thin layer over the core code. This allows you to easily support multiple protocols, perhaps at the same time.
Maybe CORBA?
Would you consider HTTP/REST?
If so, you can leverage something like Tomcat/Spring and still support all the requirements you listed (robust, lightweight, well documented, well established).
The RPC based protocols are simply antiquated.
Seriously, unless you're doing a web app that already requires the web baggage, you really do want RMI or, even better, CORBA. I recommend JacORB (www.jacorb.org).
Ignore general claims of slow/fast and perform your own performance tests.
Keep in mind that a software project is successful because it performs the useful function for which it was designed and intended, not because it uses the latest cool buzzword tech.
Good luck.
The Apache MINA library for client-server communication together with EJB3 will suit best.
Is it possible to communicate with a non-Java entity using the RMI protocol?
What is special about RMI IIOP?
Thx
It's technically possible. You will need to implement an RMI server on the non-Java side.
I would not recommend it, though. Try exploring the possibility of using web services, which are commonly used for exactly that: communication between entities on (probably) different platforms.
RMI is a protocol intended to be used purely by Java applications. It puts requirements on the communicating parties that depend on the Java implementation (e.g. serialization). RMI-IIOP, on the other hand, is the protocol used by EJB implementations in order to add more functionality to the communication (e.g. transaction context propagation).
IIOP is originally from CORBA and could be used to communicate with components written in other languages.
I wouldn't go the Web Services route if you do need features available in IIOP. Unless, of course, you'd use the respective WS-* specifications to get them.
Old question, but answered because of its high Google ranking.
I don't think you could do this easily.
As an alternative to Java-RMI I would recommend XML-RPC.
You can then communicate with Python, C++, Objective-C, Erlang, Groovy, Java, JavaScript, PHP and many more.
On the java side you can use the Apache XML-RPC library.
Pro: many implementations for different languages
Con: XML-RPC only knows primitives and base64-encoded binaries. It will not handle your complex Java objects, but will give you a Map instead; you need to map that onto your objects yourself.
I have a really simple Java class that effectively decorates a Map with input validation, with the obvious void set() and String get() methods.
I'd like to be able to effectively call those methods and handle return values and exceptions from outside the JVM, but still on the same machine. Update: the caller I have in mind is not another JVM; thanks @Dave Ray.
My implementation considerations are the typical ones:
performance
ease of implementation and maintenance (simplicity?)
reliability
flexibility (i.e. can I call from a remote machine, etc.)
Is there a 'right way?' If not, what are my options, and what are the pro/cons for each?
(Stuff people have actually done and can provide real-life feedback on would be great!)
Ok. Here's another try now that I know the client is not Java. Since you want out-of-process access and possibly remote machine access, I don't think JNI is what you want since that's strictly in-process (and a total hassle). Here are some other options:
Raw Sockets : just set up a listener socket in Java and accept connections. When you get a connection read the request and send back a response. Almost every language can use sockets so this is a pretty universal solution. However, you'll have to define your own marshalling scheme, parsing, etc.
XML-RPC : this isn't as hip these days, but it's simple and effective. There are Java libraries as well as libraries in most other languages.
CORBA : as mentioned above, CORBA is an option, but it's pretty complicated and experts are getting harder to come by.
Web Server : set up an embedded web server in your app and handle requests. I've heard good things about Jetty, or you can use the one provided with the JDK. I've used the latter successfully to serve KML files to Google Earth from a simulation written in Java. Most other languages have libraries for making HTTP requests. How you encode the data (XML, text, etc.) is up to you.
Web Services : this would be more complicated, I think, but you could use JAX-WS to expose your objects as web services. NetBeans has pretty nice tools for building web services, but this may be overkill.
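For the raw-sockets option, here is a minimal sketch of a line-based request/response exchange; the one-command protocol is invented purely for illustration:

```java
import java.io.*;
import java.net.ServerSocket;
import java.net.Socket;

public class LineProtocolServer {
    // Minimal request/response over a raw socket with a line-based
    // "marshalling scheme": one request per line, one reply per line.
    // Real code would loop over many requests and handle errors properly.
    public static String roundTrip(String request) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) {   // ephemeral port
            Thread handler = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    String line = in.readLine();
                    out.println("GET".equals(line) ? "42" : "ERR"); // toy protocol
                } catch (IOException ignored) {
                }
            });
            handler.start();
            try (Socket client = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println(request);
                return in.readLine();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip("GET")); // prints 42
    }
}
```

Almost every language can open a socket and write a line of text, which is what makes this the most universal (if most manual) of the options above.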
Will you be calling from another JVM-based system, or is the client language arbitrary? If you're calling from another JVM, one of the simplest approaches is to expose your object as an MBean through JMX. The canonical Hello World MBean is shown here. The pros are:
Really easy to implement
Really easy to call from other JVMs
Support for remote machines
jconsole allows you to manually test your MBean without writing a client
Cons:
Client has to be on a JVM (I think)
Not great for more complicated data structures and interactions. For example, I don't think an MBean can return a reference to another MBean. It will serialize and return a copy.
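A minimal sketch of the MBean approach, loosely modeled on the validating map in the question (all names are illustrative); here it is registered and called in-process, whereas a remote JVM or jconsole would reach it through an RMI connector:

```java
import javax.management.*;
import java.lang.management.ManagementFactory;

public class JmxDemo {
    // Standard MBean naming convention: interface FooMBean, class Foo.
    public interface ValidatingMapMBean {
        void set(String key, String value);
        String get(String key);
    }

    public static class ValidatingMap implements ValidatingMapMBean {
        private final java.util.Map<String, String> map = new java.util.HashMap<>();
        public void set(String key, String value) {
            if (key == null || key.isEmpty())
                throw new IllegalArgumentException("empty key");  // input validation
            map.put(key, value);
        }
        public String get(String key) { return map.get(key); }
    }

    public static String registerAndCall() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=ValidatingMap"); // illustrative name
        server.registerMBean(new ValidatingMap(), name);
        // A remote client (or jconsole) would reach the same MBean over an
        // RMI connector; here we go through a local proxy for brevity.
        ValidatingMapMBean proxy = JMX.newMBeanProxy(server, name, ValidatingMapMBean.class);
        proxy.set("greeting", "hello");
        return proxy.get("greeting");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(registerAndCall()); // prints hello
    }
}
```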
Since your callers are not Java apps and you're already foreseeing networked callers, RMI-IIOP (CORBA) might be an option. Though it's definitely not easy to implement, it has the advantage of being a widely-recognized standard.
Since your caller is not JVM-based, this is a question of inter-process communication with JVM. The options I have in mind are:
Communicate over a socket: make your JVM listen to incoming connections and caller send commands
Communicate using shared files (caller writes to file, JVM polls and updates)
Using JNI, start a JVM inside the caller's process and then use RMI/MBeans to communicate with the first ("server") JVM. The caller will have access to the results via JNI.
Option 3 is IMO the most "Java" way of doing this, and also the most complex/error-prone.
Option 2 is ugly but simple.
Option 1 is moderately easy (the Java part) and otherwise fine.
For ease of use, I would use Spring Remoting. If you are already using Spring in your project, that's a no-brainer. If you aren't... well, you should have a look anyway.
Spring provides an abstraction that allows you to switch remoting protocols easily. It supports the most widely deployed protocols (SOAP, Hessian, Burlap, RMI, ...). If you are calling from non-Java code, Hessian has support in a number of other languages, is known to be more efficient than SOAP, and is easier than CORBA.
Beanshell is a shell-like java interpreter that can be exposed over a network socket. Basically you do this from java:
i = new bsh.Interpreter();
i.set( "myapp", this ); // Provide a reference to your app
i.eval("server(7000)");
and then you do this from anywhere else (the telnet listener runs on the port given to server() plus one):
telnet localhost 7001
myapp.someMethod();
This little utility makes remote Java invocations much easier than JNI or RMI ever did.
For more, start at: http://www.beanshell.org/manual/remotemode.html
JNI (the Java Native Interface) allows access to Java code from C or C++.
I have an Inno Setup script (installing a Java program) which calls some Java methods to perform some operations or check some conditions.
I (actually my predecessor) just launch java.exe on each call. This is obviously costly, although not critical in my case (and the Windows cache kicks in, I suppose).
An alternative is to use some inter-language communication/messaging, with your Java program acting as a server. CORBA comes to mind, as it is language-agnostic, though it is a bit heavyweight, perhaps. You can use sockets. RPC is another buzzword too, but I haven't much experience in the field.
What you want is the Java Native Interface (JNI), despite the difficulties that it may present. There is no other equivalent technology that will be as easy to implement.
As mentioned in the comments on the preceding answer, JNI is optimized for calling native code from Java, but it can also be used for the reverse with a little work. In your native code you'll need to implement a JNI entry point, something like SetMapPointer(), then call that function from the Java code once the Map is built. The implementation of SetMapPointer() should save the Java object pointer somewhere accessible; the native code can then invoke Java methods on it as needed.
You'll need to make sure that this happens in the right order (i.e. the native code doesn't try to access the Map before it's been built and passed to native code), but that shouldn't be an especially hard problem.
Another alternative to consider if the other process will be on the same machine and the OS is POSIX-compliant (not Windows) is Named Pipes.
The outside process writes the operations, as strings or some other agreed-upon byte encoding, to the named pipe while the Java application is reading from the pipe, parsing up the incoming operations and executing them against your object.
This is the same strategy that you would use for socket connections, just instead of a SocketInputStream you'd be reading from a FileInputStream that is attached to a named pipe.
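A sketch of that strategy on a POSIX system, using mkfifo via ProcessBuilder and a writer thread standing in for the outside process (the one-line command format is invented for illustration):

```java
import java.io.*;
import java.nio.file.*;

public class NamedPipeDemo {
    // POSIX-only sketch: create a FIFO with mkfifo, have a "client" thread
    // write one command into it, and parse the command on the Java side
    // exactly as you would with a SocketInputStream.
    public static String readOneCommand() throws Exception {
        Path fifo = Files.createTempDirectory("ipc").resolve("cmds");
        new ProcessBuilder("mkfifo", fifo.toString()).start().waitFor();

        Thread writer = new Thread(() -> {
            // Opening a FIFO for write blocks until the reader opens its end.
            try (PrintWriter out = new PrintWriter(new FileOutputStream(fifo.toFile()))) {
                out.println("set name widget");
            } catch (IOException ignored) {
            }
        });
        writer.start();

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new FileInputStream(fifo.toFile())))) {
            String[] op = in.readLine().split(" ");   // ["set", "name", "widget"]
            return op[0] + ":" + op[1] + "=" + op[2]; // execute against your object here
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readOneCommand()); // prints set:name=widget
    }
}
```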
An alternative to CORBA is ICE, unless the licence is a problem (it's GPL, but you can also buy a commercial licence).
It has pretty much all the benefits of CORBA, but ZeroC, the vendor, provides bindings for many different languages. CORBA vendors tend to provide only one or two language bindings, and then you start finding compatibility problems.
The documentation is also excellent. I wouldn't have said it was particularly easy to pick up, but probably easier than CORBA.
Otherwise, another option I don't think has been mentioned is the new middleware/RPC framework developed by Cisco and now donated to Apache, called Etch. It's still pretty new, though, and documentation is sparse.