Help with proper design for generic Socket Server - java

I've created a generic 'SocketServer' class in Java which takes the following arguments:
String address, int port, Class socketProtocol, String encryptionType, int backlogSize
Basically, I want other developers to be able to instantiate this class in their projects and set some simple options for encryption, backlog, address, and port, at which point they have to design the protocol. SocketProtocol is an interface which provides sendMessage and receiveMessage (along with a few others).
At this point, the user of the class should just implement SocketProtocol and pass the class (e.g. MySocketProto.class) to the SocketServer instance, which will in turn instantiate a copy of the protocol for each incoming connection via .newInstance();
Does this make sense? Is there an easier way to establish this type of functionality? I don't like the idea of passing the class type to the server; it just seems odd.
Thanks all,
Chris

I would use the Factory pattern in this situation. The linked Wikipedia example is a bit verbose, but it can be really simple:
public interface ISocketProtocolFactory {
    ISocketProtocol buildProtocol();
}
Your SocketServer constructor would then take an instance of something implementing ISocketProtocolFactory, and ask it for new ISocketProtocols as it goes.
This will give your users a lot more flexibility in constructing the ISocketProtocol instances, and it takes care of the 'nastiness' of having a class parameter.
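As an illustration only, here is a minimal sketch of how the server side might consume such a factory. The SocketServer and ISocketProtocol names come from the question and the answer above; the constructor shape and the handleConnection method are assumptions:
public class SocketServer {

    private final ISocketProtocolFactory protocolFactory;

    // Hypothetical constructor: the other options from the question
    // (address, port, encryption type, backlog) are omitted for brevity.
    public SocketServer(ISocketProtocolFactory protocolFactory) {
        this.protocolFactory = protocolFactory;
    }

    private void handleConnection(java.net.Socket socket) {
        // One fresh protocol instance per incoming connection, no reflection needed.
        ISocketProtocol protocol = protocolFactory.buildProtocol();
        // ... hand the socket's streams over to the protocol ...
    }
}
On Java 8+ a caller could simply pass a lambda, e.g. new SocketServer(() -> new MySocketProto()); on older versions an anonymous ISocketProtocolFactory works just as well.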

I would assume that each port would have its own protocol. With that in mind, you would specify it per port.
The way I have done it in the past is to have the implementors pass in a class that implements:
public interface ProtocolInterface
{
    public void serve(InputStream in, OutputStream out) throws IOException;
    ...
where the InputStream and OutputStream are the input and output streams of the Socket.
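For illustration, a minimal sketch of what an implementor might write under that approach (EchoProtocol is a hypothetical example, not from the original post):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.PrintWriter;

// Hypothetical protocol that echoes each line back to the client.
public class EchoProtocol implements ProtocolInterface {
    @Override
    public void serve(InputStream in, OutputStream out) throws IOException {
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        PrintWriter writer = new PrintWriter(out, true); // auto-flush on println
        String line;
        while ((line = reader.readLine()) != null) {
            writer.println(line); // echo the line back
        }
    }
}
The server would then accept the connection and simply call protocol.serve(socket.getInputStream(), socket.getOutputStream()).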

How to subclass SubethaSmtp SMTPClient class

I am trying to develop a simple SMTP client for testing purposes using the SubethaSmtp client package. I want to use the SMTPClient class instead of the SmartClient class for more control, but I have not been able to figure out how to write mail data using SMTPClient: the only OutputStream exposed to public or external subclasses is the one for sending commands; the ones for sending data (after the DATA command) are exposed only to classes in the same package (such as SmartClient).
Am I missing something here? I would like to know how a direct subclass of SmartClient can be written to work around this problem.
Looks like you are correct, you cannot simply extend the SMTPClient and get access similar to the one that SmartClient has, being a same-package class.
At this point you can either:
1) Fork your own version of the app from https://github.com/voodoodyne/subethasmtp and do whatever the hell you like with it, or
2) Go all the way and implement your own version of SMTPClient, as the package-protected SMTPClient.dotTerminatedOutput field, used by SmartClient.dataWrite(), is actually just instantiated like so:
...
this.rawOutput = this.socket.getOutputStream();
this.dotTerminatedOutput = new DotTerminatedOutputStream(this.rawOutput);
...

GWT different interface implementation for client and server

Assume that we have some interface my.gwt.shared.Facade in the shared package of our GWT project (visible to both server and client) and two implementations of it: class my.gwt.client.ClientFacadeImpl (client only) and class my.gwt.server.ServerFacadeImpl (server only).
Is there any way to write a piece of code or an annotation that substitutes ClientFacadeImpl on the client side and ServerFacadeImpl on the server side?
Thanks all for the answers and discussion. I've found a simple and elegant solution for my needs.
So, I have the interface my.gwt.shared.Facade and the two classes my.gwt.client.ClientFacadeImpl and my.gwt.server.ServerFacadeImpl.
interface Facade {
    Map<Boolean, Facade> FACADES = new HashMap<Boolean, Facade>();
}
Now we need to fill the FACADES map. This is done like this:
public class MyEntry implements EntryPoint {
    static {
        Facade.FACADES.put(true, ClientFacadeImpl.INSTANCE); // client side
    }
    // onModuleLoad() and the rest of the entry point ...
}
And
@Startup
@Singleton
public class Initializer {

    @PostConstruct
    private void init() {
        Facade.FACADES.put(false, ServerFacadeImpl.INSTANCE); // server side
        // other things
    }
}
Now, when I need to get appropriate Facade, I just write
Facade facade = Facade.FACADES.get(GWT.isClient());
Also, with this approach the map only ever contains the implementation corresponding to the side (client or server) the code is running on.
P.S. The goal of this question was to allow handling of some GwtEvents fired on the client directly on the server and vice versa. This solution removed a large set of DTOs (data transfer objects) and simplified the code a lot.
There's no answer to your question other than "it depends". Or rather, of course there are ways of doing what you ask, but would you accept the tradeoffs?
Given that you tagged the question with dependency-injection, let's start with that. If you use a DI tool with GWT, it's likely GIN (Dagger 2 would work, but it's still under development). In that case, just use distinct modules for GIN client-side and Guice server-side that bind() the appropriate implementation.
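For illustration, a hedged sketch of what those modules might look like (the module names are made up; the client module would live in client-side source, the server module in server-side source):
import com.google.gwt.inject.client.AbstractGinModule;
import com.google.inject.AbstractModule;

// Client side (GIN) -- hypothetical module name.
class ClientFacadeModule extends AbstractGinModule {
    @Override
    protected void configure() {
        bind(Facade.class).to(ClientFacadeImpl.class);
    }
}

// Server side (Guice) -- hypothetical module name.
class ServerFacadeModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(Facade.class).to(ServerFacadeImpl.class);
    }
}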
For a few releases now, GWT.create() can be made to work outside a GWT (client) environment (i.e. on the server side). You have to register a ClassInstantiator on the ServerGwtBridge as an alternative to the rebind rules from gwt.xml files. So you could have a <replace-with class="my.gwt.client.ClientFacadeImpl"> rule in your gwt.xml, and a ClassInstantiator returning a ServerFacadeImpl on the server side.
Finally, you can also use a static factory and replace it with a client-side specific version by way of <super-source>.
A last one, but I'm unsure whether it'd work: you could use an if/else using GWT.isClient(), and annotate your ServerFacadeImpl with @GwtIncompatible to tell the GWT compiler that you know it's not client-compatible.

RMI: serializable and remote objects

I have a certain problem: I'm using RMI to communicate between server and client.
public class RemoteMap
        extends java.rmi.server.UnicastRemoteObject
        implements RemoteMapInterface {

    private TreeMap<String, GeneralSprite> sprites;
    ...
This is my remote object. But I want the client to be able to change this object's content. And after the change the server can execute some operation based on this.
Example at the client side:
map = (RemoteMapInterface) (registry.lookup("map"));
map.getSprites().get("object1").setDx(-1);
I'm using Serializable on GeneralSprite, but I guess it is passed by value. So when I made some changes to the GeneralSprite, they weren't transported to the server. Do I have to make GeneralSprite a Remote object too? Is that even possible?
Thanks in advance, and sorry for my bad English; I hope you can understand.
Everything which does not implement the Remote interface, whether directly or indirectly, will get serialized for the remote method invocation, so it's a "call-by-copy" behavior. You can implement a new Map which implements Remote, but you can also add a method like setDx(String spriteName, int value) to your RemoteMapInterface and implement it as sprites.get(spriteName).setDx(value); on the server side.
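A minimal sketch of that second suggestion, showing only the added method (RemoteMapInterface and the sprites field come from the question; the exact signature is an assumption):
import java.rmi.Remote;
import java.rmi.RemoteException;

public interface RemoteMapInterface extends Remote {
    // Only the new method is shown; getSprites() etc. stay as they are.
    void setDx(String spriteName, int value) throws RemoteException;
}

// In RemoteMap (server side), the mutation then happens where the real objects live:
// public void setDx(String spriteName, int value) throws RemoteException {
//     sprites.get(spriteName).setDx(value);
// }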

Desktop program communicating with server

I am in the process of moving the business logic of my Swing program onto the server.
What would be the most efficient way to communicate client-server and server-client?
The server will be responsible for authentication and for fetching and storing data, so the program will have to communicate frequently.
It depends on a lot of things. If you want a real answer, you should clarify exactly what your program will be doing and exactly what falls under your definition of "efficient".
If rapid productivity falls under your definition of efficient, a method that I have used in the past involves serialization to send plain old Java objects down a socket. Recently I have found that, in combination with the Netty API, I am able to rapidly prototype fairly robust client/server communication.
The guts are fairly simple: the client and server both run Netty with an ObjectDecoder and ObjectEncoder in the pipeline. A class is made for each kind of message that carries data, for example a HandshakeRequest class and a HandshakeResponse class.
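For context, a minimal sketch of such a pipeline, assuming the Netty 3.x API that the rest of this answer uses (channelConnected, MessageEvent); the factory and handler class names here are made up:
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.codec.serialization.ClassResolvers;
import org.jboss.netty.handler.codec.serialization.ObjectDecoder;
import org.jboss.netty.handler.codec.serialization.ObjectEncoder;

public class MessagePipelineFactory implements ChannelPipelineFactory {
    @Override
    public ChannelPipeline getPipeline() {
        return Channels.pipeline(
                new ObjectEncoder(),                                   // serializes outgoing Message objects
                new ObjectDecoder(ClassResolvers.cacheDisabled(null)), // deserializes incoming objects
                new MessageDispatchHandler());                         // hypothetical handler containing the methods shown below
    }
}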
A handshake request could look like:
public class HandshakeRequest extends Message {
    private static final long serialVersionUID = 1L;
}
And a handshake response may look like:
public class HandshakeResponse extends Message {

    private static final long serialVersionUID = 1L;

    private final HandshakeResult handshakeResult;

    public HandshakeResponse(HandshakeResult handshakeResult) {
        this.handshakeResult = handshakeResult;
    }

    public HandshakeResult getHandshakeResult() {
        return handshakeResult;
    }
}
In Netty, the server would send a handshake request when a client connects, like so:
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    Channel ch = e.getChannel();
    ch.write(new HandshakeRequest());
}
The client receives the HandshakeRequest object, but it needs a way to tell what kind of message the server just sent. For this, a Map<Class<?>, Method> can be used. When your program is run, it should iterate through the methods of a class with reflection and place them in the map. Here is an example:
public HashMap<Class<?>, Method> populateMessageHandler() {
    HashMap<Class<?>, Method> temp = new HashMap<Class<?>, Method>();
    for (Method method : getClass().getMethods()) {
        if (method.getAnnotation(MessageHandler.class) != null) {
            // Parameter 0 is the ChannelHandlerContext, parameter 1 is the message type.
            Class<?>[] methodParameters = method.getParameterTypes();
            temp.put(methodParameters[1], method);
        }
    }
    return temp;
}
This code iterates through the current class and looks for methods marked with an @MessageHandler annotation, then takes the message parameter of each method (handlers being declared like public void handleHandshakeRequest(ChannelHandlerContext ctx, HandshakeRequest request)) and places that class into the map as a key, with the actual method as its value.
With this map in place, it is very easy to receive a message and dispatch it directly to the method that should handle it:
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
    try {
        Message message = (Message) e.getMessage();
        Method method = messageHandlers.get(message.getClass());
        if (method == null) {
            System.out.println("No handler for message!");
        } else {
            method.invoke(this, ctx, message);
        }
    } catch (Exception exception) {
        exception.printStackTrace();
    }
}
There's not really anything left to it. Netty handles all of the messy stuff, allowing us to send serialized objects back and forth with ease. If you decide that you do not want to use Netty, you can wrap your own protocol around Java's ObjectOutputStream; you will have to do a little more work overall, but the simplicity of communication remains intact.
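For reference, a minimal sketch of that plain-socket variant, reusing the HandshakeRequest class from above; the port number is made up, and error handling and per-client threading are omitted:
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class PlainObjectServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket serverSocket = new ServerSocket(12345)) {
            while (true) {
                Socket client = serverSocket.accept(); // handles one client at a time in this sketch
                // Create the output stream first to avoid the well-known stream-header deadlock.
                ObjectOutputStream out = new ObjectOutputStream(client.getOutputStream());
                ObjectInputStream in = new ObjectInputStream(client.getInputStream());
                out.writeObject(new HandshakeRequest());  // same handshake as in the Netty example
                Object reply = in.readObject();           // e.g. a HandshakeResponse
                // ... dispatch 'reply' with the same Map<Class<?>, Method> technique as above ...
            }
        }
    }
}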
It's a bit hard to say which method is "most efficient" in terms of what, and I don't know your use cases, but here are a couple of options:
The most basic way is to simply use "raw" TCP sockets. The upside is that there's nothing extra moving across the network and you create your protocol yourself; the latter is also a downside, since you have to design and implement your own protocol for the communication, plus the basic framework for handling multiple connections on the server end (if there is a need for such).
Using UDP sockets, you'll probably save a little latency and bandwidth (not much; unless you're using something like mobile data, you probably won't notice any difference from TCP in terms of latency), but the networking code is a somewhat harder task: UDP sockets are "connectionless", meaning all the clients' messages will end up in the same handler and must be distinguished from one another. If the server needs to keep up with client state, this can be somewhat troublesome to implement correctly.
MadProgrammer brought up RMI (remote method invocation). I've personally never used it, and it seems a bit cumbersome to set up, but it might be pretty good in the long run in terms of implementation.
Probably one of the most common ways is to use HTTP for the communication, for example via a REST interface for web services. There are multiple frameworks (I personally prefer Spring MVC) to help with the implementation, but learning a new framework might be out of your scope for now. Also, complex HTTP queries or long URLs could eat your bandwidth a bit more, but unless we're talking about very large numbers of simultaneous clients, this usually isn't a problem (assuming you run your server(s) in a datacenter with something like 100/100 Mbit connections). This is probably the easiest solution to scale, if it ever comes to that, as there are lots of load-balancing solutions available for web servers.

Is it ok to use multiple objects for RMI communication

Scenario:
Client C connects to Server S via RMI
C asks S to create a handler H, S returns H to C
C then talks to H
Now I could do this in two ways:
Make the handler a Remote and let S return the stub to it, so C can directly talk to that (h.say(String msg);)
Give the handler an ID and return that to C. C will talk to H via S (s.sayToHandler(int id, String msg);)
The first is nicer OO, but what about the performance? Will an extra TCP connection be opened, or is the existing connection between C and S reused?
I don't know about the implementation, and I don't think a new connection is made. But what I do know is that the more objects you share remotely, the more objects depend on the distributed garbage collector (remote references being released) before they can be collected, so there will be more objects living longer, which is not good.
Alternative approach
I'd recommend a mixed approach: use the nice approach for the client, but implement it in the not-so-nice way internally:
interface Server {
    public Handler getHandler(...);
}

interface Handler extends Serializable {
    // it gets copied!
    public X doThis(...);
    public Y doThat(...);
}

class HandlerImpl implements Handler {
    public X doThis(...) {
        backDoor.doThis(this, ...);
    }
    public Y doThat(...) {
        backDoor.doThat(this, ...);
    }
    private BackDoor backDoor;
}

interface BackDoor {
    public X doThis(Handler h, ...);
    public Y doThat(Handler h, ...);
}

class ServerImpl implements Server, BackDoor {
    public Handler getHandler(...) {
        return /* a handler with a self reference as backDoor; only ONE remote obj shared via TWO interfaces */
    }
    ...
    // it does everything
    // it receives the handler
}
BackDoor and Handler are kept in sync: the first has the methods with a Handler as argument, the latter has the pure methods. I don't think it's a big deal. And the two different interfaces let you work cleanly, with the client knowing nothing about the back door and the thin serializable Handlers doing the dirty work.
Hope you like it!
The RMI specification does not really say whether there should be a new connection or the existing one reused. The wire protocol allows both, using either multiplexing or one TCP connection per call (which might be reused for following calls to the same server object, I'm not sure). If you need tunneling through HTTP, then only one message per connection is allowed.
I did not find anything on how to configure which type of protocol to use (other than disabling HTTP tunneling).
If you want to make sure that only one TCP connection is used, use custom client and server socket factories doing the tunneling of multiple connections through one themselves. This will probably be less efficient than what the RMI runtime system would be doing there.
