RMI: serializable and remote objects - java

I have a certain problem: I'm using RMI to communicate between server and client.
public class RemoteMap
        extends java.rmi.server.UnicastRemoteObject
        implements RemoteMapInterface {

    private TreeMap<String, GeneralSprite> sprites;
    ...
This is my remote object, but I want the client to be able to change this object's content, and after the change the server should be able to execute some operation based on it.
Example at the client side:
map = (RemoteMapInterface) (registry.lookup("map"));
map.getSprites().get("object1").setDx(-1);
I'm using Serializable on the GeneralSprite, but I guess it is passed by value, so when I made some changes to the GeneralSprite they weren't transported to the server. Do I have to make GeneralSprite a Remote object too? Is that even possible?
Thanks in advance, and sorry for my bad English, I hope you can understand.

Everything which does not implement the Remote interface, whether directly or indirectly, gets serialized for the remote method invocation, so you get "call-by-copy" behavior. You can implement a new Map which implements Remote, but you can also add a method like setDx(String spriteName, int value) to your RemoteMapInterface and implement it as sprites.get(spriteName).setDx(value); on the server side.
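For instance, a minimal sketch of that second approach (reusing the RemoteMapInterface, RemoteMap and GeneralSprite names from the question) could look like this:

import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.UnicastRemoteObject;
import java.util.TreeMap;

public interface RemoteMapInterface extends Remote {
    void setDx(String spriteName, int dx) throws RemoteException;
}

public class RemoteMap extends UnicastRemoteObject implements RemoteMapInterface {

    private final TreeMap<String, GeneralSprite> sprites = new TreeMap<String, GeneralSprite>();

    public RemoteMap() throws RemoteException {
    }

    // Runs on the server, so the change is applied to the server-side sprite.
    public void setDx(String spriteName, int dx) throws RemoteException {
        sprites.get(spriteName).setDx(dx);
    }
}

On the client the call then becomes map.setDx("object1", -1);, which executes on the server instead of mutating a serialized local copy.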

Related

GWT different interface implementation for client and server

Assume that we have some interface my.gwt.shared.Facade in the shared package of our GWT project (it exists on both server and client) and two implementations of it: class my.gwt.client.ClientFacadeImpl (exists only on the client) and class my.gwt.server.ServerFacadeImpl (exists only on the server).
Is there any way to write a piece of code or annotation that substitutes ClientFacadeImpl on the client side and ServerFacadeImpl on the server side?
Thanks all for the answers and discussion. I've found a simple and elegant solution for my needs.
So, I have the interface my.gwt.shared.Facade and two classes: class my.gwt.client.ClientFacadeImpl and class my.gwt.server.ServerFacadeImpl.
interface Facade {
    Map<Boolean, Facade> FACADES = new HashMap<Boolean, Facade>();
}
Now we should fill the FACADES map. This is done like this:
public class MyEntry implements EntryPoint {
    static {
        Facade.FACADES.put(true, ClientFacadeImpl.INSTANCE); // client side
    }
And
@Startup
@Singleton
public class Initializer {

    @PostConstruct
    private void init() {
        Facade.FACADES.put(false, ServerFacadeImpl.INSTANCE); // server side
        // other things
    }
}
Now, when I need to get the appropriate Facade, I just write
Facade facade = Facade.FACADES.get(GWT.isClient());
Also, with this approach the map only ever contains the implementation corresponding to the side it is running on (server or client).
P. S. The goal of this question was to allow handling of some GwtEvents fired on the client directly on the server and vice versa. This solution removed a large set of DTOs (data transfer objects) and simplified the code a lot.
There's no answer to your question other than "it depends". Or rather, of course there are ways of doing what you ask, but would you accept the tradeoffs?
Given that you tagged the question with dependency-injection, let's start with that. If you use a DI tool with GWT, it's likely GIN (Dagger 2 would work, but it's still under development). In that case, just use distinct modules for GIN client-side and Guice server-side that bind() the appropriate implementation.
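As a rough sketch (module and class names are placeholders, not an existing API), the client-side GIN module and the server-side Guice module could each bind Facade to the right implementation:

// Client side: GIN module, compiled by the GWT compiler.
public class ClientFacadeModule extends com.google.gwt.inject.client.AbstractGinModule {
    @Override
    protected void configure() {
        bind(Facade.class).to(ClientFacadeImpl.class);
    }
}

// Server side: plain Guice module.
public class ServerFacadeModule extends com.google.inject.AbstractModule {
    @Override
    protected void configure() {
        bind(Facade.class).to(ServerFacadeImpl.class);
    }
}

Wherever a Facade is needed it is then injected rather than looked up, so each side only ever sees its own implementation.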
For a few releases now, GWT.create() can be made to work outside a GWT (client) environment (i.e. on the server side). You have to register a ClassInstantiator on the ServerGwtBridge as an alternative to the rebind rules from gwt.xml files. So you could have a <replace-with class="my.gwt.client.ClientFacadeImpl"> rule in your gwt.xml, and a ClassInstantiator returning a ServerFacadeImpl on the server side.
Finally, you can also use a static factory and replace it with a client-side specific version by way of <super-source>.
One last option, though I'm unsure whether it'd work: you could use an if/else on GWT.isClient(), and annotate your ServerFacadeImpl with @GwtIncompatible to tell the GWT compiler that you know it's not client-compatible.

Sending an Object over a network using Java

I am trying to send an object in Java over a physical network (not over localhost), but it seems I have something wrong.
The interface to the object (client and server have this):
public interface distributable extends Serializable {
    public void test();
}
The Object I am trying to send (only server has this):
class ObjectToSend implements distributable {

    public ObjectToSend() {
    }

    public void test() {
        System.out.println("worked!");
    }
}
Server:
private ObjectToSend obj = new ObjectToSend();
obj_out_stream.writeObject(obj);
obj_out_stream.flush();
Client:
private distributable ReceivedObj = null;
try {
    ReceivedObj = (distributable) obj_in_stream.readObject();
} catch (ClassNotFoundException e) {
    System.err.println("Error<w_console>: Couldn't receive application code!");
}
ReceivedObj.test();
Everything was working when the ObjectToSend class implemented Serializable and I wasn't using an interface, because all my classes were in one directory so the client 'knew' about the object. Now I want it to work across a physical network, so the client only has the interface to the object. It seems that the client cannot receive the object, as the exception is thrown every time.
As the other answers suggest, the Client also has to know the class of the object you want to send.
Usually, one creates three packages/projects for such a classic client-server example:
Common: Code that is used by both client and server; the class definition of the objects you want to send from the server to the client belongs here (see the sketch after this list)
Client: All code only the client needs to know about
Server: All code only the server needs to know about
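Applied to the code in the question, a minimal sketch of that layout (package names are just examples, class names follow the question) would be:

// common project - on the classpath of BOTH client and server
package common;

import java.io.Serializable;

public interface distributable extends Serializable {
    void test();
}

// also in the common project, so the client can deserialize it
package common;

public class ObjectToSend implements distributable {
    public void test() {
        System.out.println("worked!");
    }
}

The client and server projects then both depend on the common project; only code that really is client- or server-specific stays out of it.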
To be able to serialize and deserialize objects with ObjectInputStream/ObjectOutputStream, the classes must implement Serializable.
Also, the deserializer must be able to find the classes it is deserializing on its classpath, since the class name is embedded in the serialized form.
If you want the client to have only the interface -- at compile time -- then you'll need to download the actual class from the server at run-time.
Jini (aka Apache River) makes this easy.
It's supposed to be like this. What can you do with a class whose code you don't have?
Have a look here: stackoverflow.com/questions/8175052/java-polymorphism-my-teacher-claims-you-can-distribute-an-executable-object-thr

Is it ok to use multiple objects for RMI communication

Scenario:
Client C connects to Server S via RMI
C asks S to create a handler H, S returns H to C
C then talks to H
Now I could do this in two ways:
Make the handler a Remote and let S return the stub to it, so C can directly talk to that (h.say(String msg);)
Give the handler an ID and return that to C. C will talk to H via S (s.sayToHandler(int id, String msg);)
The first is nicer OO, but what about the performance? Will an extra TCP connection be opened, or is the existing connection between S and H used?
I don't know about the implementation, but I don't think a new connection is made. What I do know is that the more objects you share remotely, the more objects depend on remote dereferencing before they can be garbage collected (so there will be more objects living longer, which is not good).
Alternative approach
I'd recommend a mixed approach: use the nice approach for the client but implement it the not-so-nice way internally:
interface Server {
    public Handler getHandler(...);
}

interface Handler extends Serializable {
    // it gets copied!
    public X doThis(...);
    public Y doThat(...);
}

class HandlerImpl implements Handler {

    public X doThis(...) {
        backDoor.doThis(this, ...);
    }

    public Y doThat(...) {
        backDoor.doThat(this, ...);
    }

    private BackDoor backDoor;
}

interface BackDoor {
    public X doThis(Handler h, ...);
    public Y doThat(Handler h, ...);
}

class ServerImpl implements Server, BackDoor {

    public Handler getHandler(...) {
        return /* a handler with a self reference as backdoor, only ONE remote obj shared via TWO interfaces */
    }

    ...
    // it does everything
    // it receives the handler
}
BackDoor and Handler are synchronized interfaces: the former has the methods with a Handler argument, the latter has the plain methods. I don't think it's a big deal, and the two different interfaces let you work cleanly, with the client knowing nothing about it and the thin serializable Handlers doing the dirty work.
Hope you like it!
The RMI specification does not really say whether there should be a new connection or the existing one reused. The wire protocol allows both, using either multiplexing or one TCP connection per call (which might be reused for subsequent calls to the same server object, I'm not sure). If you need tunneling through HTTP, then only one message per connection is allowed.
I did not find anything on how to configure which type of protocol to use (other than disabling HTTP tunneling).
If you want to make sure that only one TCP connection is used, use custom client and server socket factories that do the tunneling of multiple connections through one connection themselves. This will probably be less efficient than what the RMI runtime system would be doing there.
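As a sketch of where such factories plug in (the multiplexing logic itself is omitted; the class names and the EchoService interface are made up for this example):

import java.io.IOException;
import java.io.Serializable;
import java.net.ServerSocket;
import java.net.Socket;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.server.RMIClientSocketFactory;
import java.rmi.server.RMIServerSocketFactory;
import java.rmi.server.UnicastRemoteObject;

// The client socket factory travels to the client inside the stub, so it must be Serializable.
class SharedConnectionClientFactory implements RMIClientSocketFactory, Serializable {
    public Socket createSocket(String host, int port) throws IOException {
        // A real implementation would reuse/multiplex one existing connection here;
        // this placeholder just opens a plain socket.
        return new Socket(host, port);
    }
}

class SharedConnectionServerFactory implements RMIServerSocketFactory {
    public ServerSocket createServerSocket(int port) throws IOException {
        return new ServerSocket(port);
    }
}

// Made-up remote interface, only to show the export call.
interface EchoService extends Remote {
    String echo(String msg) throws RemoteException;
}

class EchoServiceImpl implements EchoService {
    public String echo(String msg) { return msg; }
}

class ExportWithCustomFactories {
    public static void main(String[] args) throws Exception {
        // Export the implementation with the custom socket factories attached to its stub.
        EchoService stub = (EchoService) UnicastRemoteObject.exportObject(
                new EchoServiceImpl(), 0,
                new SharedConnectionClientFactory(),
                new SharedConnectionServerFactory());
        System.out.println("Exported: " + stub);
    }
}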

readObject method throws ClassNotFoundException

I'm trying to pick up Java and wanted to experiment with Java's client/server facilities by making the client send a simple object of a self-defined class (Message) over to the server. The problem is that I keep getting a ClassNotFoundException on the server side.
I think the rest of the code is alright, because other objects such as String go through without problems.
I had two different NetBeans projects in different locations for the client and the server.
Each of them has its own copy of the Message class under its respective package.
Message class implements Serializable.
On the client side, I attempt to send a Message object through.
On the server side, upon calling the readObject method, it seems to be looking for the Message class from the client's package instead of its own. printStackTrace showed: "java.lang.ClassNotFoundException: client.Message" on the server side.
I have not even tried to cast or store the object received yet. Is there something I left out?
The package name and class name must be exactly the same on both sides. I.e. write once, compile once and then give both sides the same copy. Don't have separate server.Message and client.Message classes, but a single shared.Message class or something like that.
If you can guarantee the same package/class name, but cannot always guarantee that it's exactly the same copy, then you need to add a serialVersionUID field with the same value to the class(es) in question.
package shared;

import java.io.Serializable;

public class Message implements Serializable {

    private static final long serialVersionUID = 1L;

    // ...
}
The reason is that readObject() in ObjectInputStream is practically implemented like this:
String s = readClassName();
Class c = Class.forName(s); // Here your code breaks
Object o = c.newInstance();
...populate o...

Help with proper design for generic Socket Server

I've created a generic 'SocketServer' class in java which takes the following arguments:
String address, int port, Class socketProtocol, String encryptionType, int backlogSize
Basically, I want other developers to be able to instantiate this class in their projects and set some simple options for encryption, backlog, address and port, at which point they have to design the protocol. SocketProtocol is an interface which enables sendMessage and receiveMessage (along with a few others).
At this point, the user of the class should just implement SocketProtocol and pass the class (e.g. MySocketProto.class) to the SocketServer instance, which will in turn instantiate a copy of the protocol for each incoming connection via .newInstance().
Does this make sense? Is there an easier way to establish this type of functionality? I don't like the idea of passing the class type to the server, it just seems odd.
Thanks all,
Chris
I would use the Factory pattern in this situation. The linked Wikipedia example is a bit verbose, but it can be really simple:
public interface ISocketProtocolFactory {
    ISocketProtocol buildProtocol();
}
Your SocketServer constructor would then take an instance of something implementing ISocketProtocolFactory, and ask it for new ISocketProtocols as it goes.
This will give your users a lot more flexibility in constructing the ISocketProtocol instances, and takes care of the 'nastiness' of having a class parameter.
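A minimal sketch of how the SocketServer could consume such a factory (class and method names here are assumptions, not the original API):

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class SocketServer {

    private final int port;
    private final ISocketProtocolFactory protocolFactory;

    public SocketServer(int port, ISocketProtocolFactory protocolFactory) {
        this.port = port;
        this.protocolFactory = protocolFactory;
    }

    public void serve() throws IOException {
        ServerSocket serverSocket = new ServerSocket(port);
        while (true) {
            Socket client = serverSocket.accept();
            // One fresh protocol instance per connection, without reflection.
            ISocketProtocol protocol = protocolFactory.buildProtocol();
            new Thread(() -> handleConnection(client, protocol)).start();
        }
    }

    private void handleConnection(Socket client, ISocketProtocol protocol) {
        // hand the socket's streams to the protocol's sendMessage/receiveMessage logic
    }
}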
I would assume that each port would have its own protocol. With that in mind you would specify it per port.
The way I have done it in the past is to have the implementors pass in a class that implements:
public interface ProtocolInterface
{
    public void serve(InputStream in, OutputStream out) throws IOException;
    ...
where the InputStream and OutputStream are the input and output streams of the Socket.
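For instance, a hypothetical implementation an API user could pass in, echoing each line back to the client:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.PrintWriter;

public class EchoProtocol implements ProtocolInterface {

    public void serve(InputStream in, OutputStream out) throws IOException {
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        PrintWriter writer = new PrintWriter(out, true); // auto-flush on println
        String line;
        while ((line = reader.readLine()) != null) {
            writer.println(line); // echo each received line back over the socket
        }
    }
}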
