Suppose I have a remote class that has a method with a POJO parameter:
interface MyRemote extends Remote {
    void service(Param param) throws RemoteException;
}
The client retrieves a stub and does:
// DerivedParam is defined by the client
// and is derived from Param
DerivedParam dparam = getDerivedParam();
myService.service(dparam);
It fails because the server knows nothing about the DerivedParam class (or any interfaces it may implement).
The question: is it somehow possible to pass those classes from client to server to make such an invocation possible?
I am no expert on the subject, but I did pull this trick off some time ago. The magic lies in code mobility, by means of setting the java.rmi.server.codebase property.
You make this property point to a URL, or a space-separated list of URLs, where your shared classes reside. This could be, for instance, an FTP or HTTP server hosting a jar file with the common classes.
Once set up, the codebase annotation is included in every object marshalled by the server and the client, and when either party cannot find a class locally, it looks it up at the URLs provided in the codebase and loads it dynamically.
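For instance, a minimal sketch of setting the property in server code (the URL here is hypothetical; it is just as common to pass it on the command line with -Djava.rmi.server.codebase=...):

// Must be set before any remote object is exported or any object is marshalled.
System.setProperty("java.rmi.server.codebase",
        "http://example.com/classes/shared.jar");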
Please read Dynamic Code Downloading with Java RMI.
I am in the process of creating a game using Java. It is requested of me that the player of the game can choose to connect either through an RMI connection or a Socket one. Until now I have created all the necessary components for the game, but when it comes to creating the RMI connection, I'm having a bit of a problem. From what I have read in regards of RMI all the objects used to create the connection need to be declared Remote (for example implement the Serializable interface). Seeing that I have to create both types of connections, I don't see it reasonable to serialize all the objects created so far. At this point I can think of two possible solutions:
Create a remote version of the necessary objects for the connection (for example by creating a class that extends said object and implements the Serializable interface to make the object remote). After doing that, I can define the methods applicable to the remote objects that can be invoked by the clients.
Create this new type of remote objects that are just messages that take the requests from the client and "translate" them to the non remote objects and then proceed to do what was requested.
I am new to Java and I would appreciate your time and patience on this question.
From what I have read in regards of RMI all the objects used to create the connection need to be declared Remote (for example implement the Serializable interface).
You didn't read that anywhere, and it doesn't even make sense: implementing Remote doesn't make an object Serializable. You have to (a sketch follows this list):
1. Design a remote interface that extends Remote.
2. Ensure that every object that will be passed or returned via this interface implements Serializable or, in rare cases, a remote interface.
3. Write an implementation of the interface, typically one that extends UnicastRemoteObject.
4. If you have any remote objects at (2), repeat.
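A minimal sketch of steps 1-3 (all of these class names are hypothetical, not from the question):

import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;

// Step 1: the remote interface.
interface GameService extends Remote {
    MoveResult makeMove(Move move) throws RemoteException;
}

// Step 2: everything passed or returned through it is Serializable.
class Move implements Serializable { /* fields describing the move */ }
class MoveResult implements Serializable { /* fields describing the outcome */ }

// Step 3: the implementation, exported by extending UnicastRemoteObject.
class GameServiceImpl extends UnicastRemoteObject implements GameService {
    GameServiceImpl() throws RemoteException { super(); }
    public MoveResult makeMove(Move move) throws RemoteException {
        return new MoveResult(); // executes on the server
    }
}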
Seeing that I have to create both types of connections, I don't see it reasonable to serialize all the objects created so far.
You don't have any choice about (2), although that is unlikely to include all the objects created so far. In any case you would already have had to do it for objects you were planning to send over a socket.
At this point I can think of two possible solutions:
Create a remote version of the necessary objects for the connection (for example by creating a class that extends said object and implements the Serializable interface to make the object remote).
Again, this is just nonsense: implementing Serializable makes an object pass-by-value (a copy is sent), it does not make it remote.
After doing that, I can define the methods applicable to the remote objects that can be invoked by the clients.
That corresponds to my step 1.
Create this new type of remote objects that are just messages that take the requests from the client and "translate" them to the non remote objects and then proceed to do what was requested.
This also is nonsense.
I'm reading the two introductory articles about building and consuming Spring REST web services.
What's weird: they create a Greeting representation class in the client app (second link ref) for storing the GET response (the greeting method on the server side returns a Greeting object). But the Greeting classes on the server and the client side are two distinct classes with identical names, identical field names and types (the client's just doesn't have a constructor).
Does that mean I have to similarly rewrite the class from scratch when building the client app? In order to do that, I'd need a spec of the field types of the JSON-packed objects the server passes. The server serializes an object of class ABCClass to JSON and sends it to the client. Even if some field called 'abc' has the value 10, that doesn't make it an integer; next time it might contain a string.
My question is: how much information from the server app's devs do I need in order to create a client application? How is it usually done?
It all depends on your deserializer and on your needs. With Jackson, for example, you might use mix-ins (wiki ref) and custom deserializers (wiki ref) that build your object with the field names and structure you require.
Keeping the same field names and structure is simply the easiest way, not the only one.
Of course, you still need to know the structure of the server's reply in order to deserialize it at all.
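As a minimal sketch, assuming the server's Greeting serializes to something like {"id":1,"content":"Hello"} (as in the Spring guide), the client-side class only has to mirror that JSON structure; Jackson populates it through the default constructor and the setters:

import com.fasterxml.jackson.databind.ObjectMapper;

public class Greeting {
    private long id;
    private String content;
    public long getId() { return id; }
    public void setId(long id) { this.id = id; }
    public String getContent() { return content; }
    public void setContent(String content) { this.content = content; }
}

// Elsewhere in the client, with json holding the raw GET response body:
Greeting greeting = new ObjectMapper().readValue(json, Greeting.class);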
I'm currently learning about RMI.
I don't really understand the concept of the codebase. Every paper I read suggests that the client, which calls the remote object, can load the method definitions from the codebase.
The problem is: don't I need the descriptions/interfaces in my classpath anyway? How can I call methods on the remote object if I only know them at runtime? That wouldn't even compile.
Am I completely missing the point here? What exactly is the point of the codebase then? It seems like a lot of extra work and requirements to provide a codebase.
Well, let's say you provide your client with only the interfaces, while the implementations live in a given codebase. The client asks the server for a given object and expects to receive something that implements a given interface, but the actual implementation is unknown to the client. When it deserializes the object it received, it has to go to the codebase and download the implementing class of the actual object being passed.
This makes the client very thin, and you can update your classes in the codebase without having to update every single client.
EDIT
Let's say you have an RMI server with the following interface:
public interface MiddleEarth extends Remote {
    List<Creature> getAllCreatures() throws RemoteException;
}
The client has only the interfaces for MiddleEarth and Creature on its class path, but none of the implementations.
The implementations of Creature are serializable objects of type Elf, Man, Dwarf, and Hobbit. These implementations are located in your codebase, but not in your client's class path.
When you ask your RMI server for the list of all creatures in Middle Earth, it sends objects that implement Creature, that is, any of the classes listed above.
When the client receives the serialized objects, it has to find the class files in order to deserialize them, but they are not on the local class path. Every object in the stream comes tagged with the codebase, which can be used to look up missing classes. The client therefore resorts to the codebase, and there it finds the actual creature classes being used.
The codebase works in both directions, so if you send your server a Creature (e.g. an Ent), the server will look for its class in the codebase as well.
This means that when client and server need to publish new types of creatures, all they have to do is update the creaturesImpl.jar in the codebase, and nothing in the server or client applications themselves.
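A minimal sketch of what lives where (the describe method is hypothetical, not part of the original example):

import java.io.Serializable;

// On the client's class path: only the interface.
interface Creature extends Serializable {
    String describe();
}

// In creaturesImpl.jar at the codebase URL, downloaded on demand:
class Elf implements Creature {
    public String describe() { return "An elf"; }
}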
I've got a large Java-based API, and for security reasons I'm trying to divide it into a client-to-application-server architecture. I've already determined that there are no so-called "Java application servers" (frameworks) extant that can help me here, but if I'm wrong, please point me at one that's not restricted to web-oriented applications. That is, I'm "rolling my own" application server.
The existing API is already accessed via method calls on an instance of a single "object" that implements what needs to be done.
IIUC (if I understand correctly), I can set up an RMI server that instantiates individual instances of the API object - maybe a pool of them - and then "hands them" as object instances to inbound RMI calls from clients who ask for an instance. They can then call any methods of that instance, and all the actual processing of those methods happens on the server's side, with any results returned through the RMI mechanism.
So far so good, I think. Now for the tricky part I'd like clarification on, please:
If I've got it right, I further understand that either all the methods and attributes are exposed (via "extends UnicastRemoteObject"), or I can restrict the attributes and methods I'd like to have available remotely by creating an intermediary class definition whose methods are all defined in an interface.
Am I correct in understanding that using this approach I can then have my original API as-was, and only need to create this one "encapsulating class" which exposes what needs to be exposed?
And moving to a more advanced design: as instantiation is expensive, I'd like to have a pool of pre-instantiated instances. Would I need yet another class that instantiates a bunch of these exposable objects and then "returns" them to a calling client? Or can I do that somehow within the existing RMI machinery, or within my encapsulating API-server class itself?
When you extend UnicastRemoteObject (or export a Remote object) and implement an interface that extends Remote, the methods declared in that Remote interface are exposed for remote invocation. When these methods are invoked by a client, the execution of the method takes place on the server. Only data contained in the method result, if any, is exposed to the client.
If you want multiple instances of the remote object, you can bind them in a registry under distinct names, or you can create another remote type that returns instances of your Remote service. Here is a simple sketch:
interface MyService extends Remote {
    void doStuff() throws RemoteException;
}

interface MyServiceManager extends Remote {
    MyService getService() throws RemoteException;
}
You would then bind a single MyServiceManager instance in an RMI registry so that clients can find it. The MyService instances should not be bound in the registry; anonymous instances will be returned via MyServiceManager. Since these objects are also Remote, only a stub will be returned to the client, and when the client invokes a method on it, the method will be executed on the server.
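A minimal sketch of that wiring (the implementation class, names, host, and port are hypothetical, and exception handling is omitted):

import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

// Server side: bind only the manager.
MyServiceManager manager = new MyServiceManagerImpl(); // extends UnicastRemoteObject
Registry registry = LocateRegistry.createRegistry(1099);
registry.rebind("MyServiceManager", manager);

// Client side: look up the manager, then obtain anonymous MyService stubs.
Registry remote = LocateRegistry.getRegistry("server-host", 1099);
MyServiceManager mgr = (MyServiceManager) remote.lookup("MyServiceManager");
MyService service = mgr.getService(); // a stub; calls execute on the server
service.doStuff();

MyServiceManagerImpl.getService() is also the natural place to hand out instances from a pre-instantiated pool, which addresses the instantiation-cost concern.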
I have a class that implements an interface
class TopLevel implements TopLevelOperations
inside TopLevel the operations are implemented in two different ways: some of the operations in TopLevelOperations need to be made as SOAP client calls and some as RESTful calls.
What would be the best way to model this? Create two additional interfaces, SOAPOperations and RESTOperations, to specify the responsibilities of REST and SOAP respectively, and then use two other classes internally that implement those interfaces? The motivation is that I may one day want to swap out SOAP for some other approach.
Better way?
Edit: I also don't want different client code jumbled together in TopLevel as it currently is.
What you need to do is separate the transport layer from the payload.
Both requests go over HTTP, but the payload is different: it is wrapped in a SOAP envelope in one case and is plain XML data in REST.
So for the higher-level code you should have an interface that just sends a message, encapsulated in an object.
The implementation of this interface converts the message to XML (via DOM or JAXB etc.).
Then this XML is passed to a transport layer to be sent over HTTP, or is wrapped in a SOAP message before being passed to the transport layer.
The transport layer can be just a concrete class that is as simple as:
import java.net.URI;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpClient {
    public String sendMsg(String endpoint, String xml) throws Exception {
        // The input and the returned response could each be SOAP or plain XML app data.
        var client = java.net.http.HttpClient.newHttpClient();
        var request = HttpRequest.newBuilder(URI.create(endpoint))
                .header("Content-Type", "text/xml")
                .POST(HttpRequest.BodyPublishers.ofString(xml))
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
So the user configures your objects to use SOAP or not.
The application code is only aware of the interface to send message.
Because the XML transformation and HTTP transport layers are separated in your implementation, you can swap implementations or add new ones, as the sketch below illustrates.
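A minimal sketch of that layering (the names, URLs, and the handwritten SOAP envelope are all hypothetical, for illustration only):

interface MessageSender {
    String send(String xml) throws Exception; // what the application code sees
}

class PlainXmlSender implements MessageSender {
    public String send(String xml) throws Exception {
        // REST style: the XML app data is the payload as-is.
        return new HttpClient().sendMsg("http://example.com/api", xml);
    }
}

class SoapSender implements MessageSender {
    public String send(String xml) throws Exception {
        // SOAP style: wrap the same payload in an envelope first.
        String envelope = "<soapenv:Envelope xmlns:soapenv="
                + "\"http://schemas.xmlsoap.org/soap/envelope/\"><soapenv:Body>"
                + xml + "</soapenv:Body></soapenv:Envelope>";
        return new HttpClient().sendMsg("http://example.com/soap", envelope);
    }
}

The application code holds a MessageSender reference and never knows which implementation it got.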
Not sure the solution needs to be that complex (assuming I understand the problem of course...):
1. Make sure TopLevelOperations declares all the methods a client can use. Do so in a protocol-independent way, e.g. TopLevelOperations.doFoo(), not TopLevelOperations.doFooOverSOAP().
2. Implement TopLevel as your first version of the interface, using SOAP/REST as appropriate.
3. Ensure clients only ever reference TopLevelOperations when declaring references - never the implementing class.
4. Use whatever mechanism is appropriate for your app to inject the appropriate implementation into the clients (Dependency Injection / Factory / ...).
5. If/when you want to re-implement the methods using a different transport/protocol, just create another class (TopLevelNew) that implements TopLevelOperations. Then inject it into clients instead of TopLevel in step 4 above.
Deciding which implementation to use is then an application-level configuration decision, not something the clients have to be aware of.
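A minimal sketch of step 4 with a factory (OperationsFactory and the system property name are hypothetical):

class OperationsFactory {
    static TopLevelOperations create() {
        // An application-level configuration switch picks the implementation.
        boolean useNew = Boolean.getBoolean("app.useNewTransport");
        return useNew ? new TopLevelNew() : new TopLevel();
    }
}

// Client code depends only on the interface:
TopLevelOperations ops = OperationsFactory.create();
ops.doFoo();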
hth.
[You may also want/need to use some helper classes for the implementation, e.g. separating content from payload as per #user384706's answer. But that's complementary to the above (i.e. how to design the implementation vs. how to keep the interface consistent for clients).]