I'm trying to create a system that will, on every RMI call,
1) gather some data about the current local system state (let's say the system time)
2) serialize it and transparently add it to the data sent over the wire with the call (i.e., without changing the signature of the stub method being called)
3) deserialize it on the other side and take some action (let's say logging it to a file)
4) do the same thing in reverse when the method returns
I was trying at first to do this with AspectJ, adding a pointcut at java.rmi.server.RemoteRef's invoke method that would allow me to add the metadata to the params Object array, but I've now discovered that AspectJ can't advise already-compiled code, which makes a lot of sense.
So, what's the right way to do this?
Well, I'm not sure I have enough context from what you've described, but I think you could write the metadata during serialization/deserialization of the objects passed to and received from the server.
For instance, say your server returns Jedi instances, and Jedi is a Serializable class. You could then use the writeObject() and readObject() methods (as explained in the Java Serialization Specification) to write whatever additional information you may need on the client/server side.
For instance:
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Date;

public class Jedi implements Serializable {
    // ...

    private void writeObject(ObjectOutputStream stream) throws IOException {
        // Write the extra metadata (here, the current system time) before the regular state.
        stream.writeObject(new Date());
        stream.defaultWriteObject();
    }

    private void readObject(ObjectInputStream stream) throws IOException, ClassNotFoundException {
        // Read the metadata back in the same order it was written.
        Date sysDate = (Date) stream.readObject();
        System.out.println(sysDate);
        stream.defaultReadObject();
    }
}
The only problem is that you would be forced to do this for every serializable object you exchange with your server.
You could also investigate RMI/JERI in the Jini 2 project. JERI stands for Java Extensible Remote Invocation; as the name suggests, you can customize it in numerous ways.
I have a service that saves a tree-like structure to a database. Before persisting the tree, the tree gets validated, and during validation, a number of things can go wrong. The tree can have duplicate nodes, or a node can be missing an important field (such as its abbreviation, full name, or level).
In order to communicate to the service what went wrong, I'm using exceptions. When the validateTree() method encounters a problem, it throws the appropriate exception. The HttpService class then uses this exception to form the appropriate response (e.g. in response to an AJAX call).
public class HttpService {
private Service service;
private Logger logger;
// ...
public HttpServiceResponse saveTree(Node root) {
try {
service.saveTree(root);
return HttpServiceResponse.success(); // assumed success factory, analogous to failure(...); the original snippet never returns on the success path
} catch (DuplicateNodeException e) {
return HttpServiceResponse.failure(DUPLICATE_NODE);
} catch (MissingAbbreviationException e) {
return HttpServiceResponse.failure(MISSING_ABBREV);
} catch (MissingNameException e) {
return HttpServiceResponse.failure(MISSING_NAME);
} catch (MissingLevelException e) {
return HttpServiceResponse.failure(MISSING_LEVEL);
} catch (Exception e) {
logger.log(e.getMessage(), e, Logger.ERROR);
return HttpServiceResponse.failure(INTERNAL_SERVER_ERROR);
}
}
}
public class Service {
private TreeDao dao;
public void saveTree(Node root)
throws DuplicateNodeException, MissingAbbreviationException, MissingNameException, MissingLevelException {
validateTree(root);
dao.saveTree(root);
}
private void validateTree(Node root)
throws DuplicateNodeException, MissingAbbreviationException, MissingNameException, MissingLevelException {
// validate and throw checked exceptions if needed
}
}
I want to know, is this a good use of exceptions? Essentially, I'm using them to convey error messages. An alternative would be for my saveTree() method to return an integer, and that integer would convey the error. But in order to do this, I would have to document what each return value means. That seems to be more in the style of C/C++ than Java. Is my current use of exceptions a good practice in Java? If not, what's the best alternative?
No, exceptions aren't a good fit for the validation you need to do here. You will likely want to display multiple validation error messages, so that the user can see all the validation errors at once, and throwing a separate exception for each invalid input won't allow that.
Instead, create a list and put the errors in it. Then you can show the user all the validation errors at once.
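A minimal sketch of that idea, assuming hypothetical names (TreeValidator and the individual check methods) that do not appear in the question:
import java.util.ArrayList;
import java.util.List;

public class TreeValidator {

    // Collects every problem found instead of throwing on the first one.
    public List<String> validate(Node root) {
        List<String> errors = new ArrayList<String>();
        if (hasDuplicateNodes(root)) {
            errors.add("The tree contains duplicate nodes.");
        }
        if (hasNodeMissingAbbreviation(root)) {
            errors.add("A node is missing its abbreviation.");
        }
        if (hasNodeMissingName(root)) {
            errors.add("A node is missing its full name.");
        }
        if (hasNodeMissingLevel(root)) {
            errors.add("A node is missing its level.");
        }
        return errors; // an empty list means the tree is valid
    }

    // Placeholders for the validation logic described in the question.
    private boolean hasDuplicateNodes(Node root) { return false; }
    private boolean hasNodeMissingAbbreviation(Node root) { return false; }
    private boolean hasNodeMissingName(Node root) { return false; }
    private boolean hasNodeMissingLevel(Node root) { return false; }
}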
Waiting until your request has gotten all the way to the DAO seems like the wrong time to do this validation. A server-side front controller should be doing validation on these items before they get passed along any farther, as protection against attacks such as injection or cross-site scripting.
TL;DR The Java-side parts you showed us are nearly perfect. But you could add an independent validation check and use that from the client side before trying to save.
There are many software layers involved, so let's have a look at each of them - there's no "one size fits all" answer here.
For the Service object, it's the perfect solution to have it throw exceptions from the saveTree() method if it wasn't able to save the tree (for whatever reason, not limited to validation). That's what exceptions are meant for: to communicate that some method couldn't do its job. And the Service object shouldn't rely on some external validation, but make sure itself that only valid data are saved.
The HttpService.saveTree() method should also communicate to its caller if it couldn't save the tree (typically indicated by an exception from the Service). But because it's an HTTP service, it can't throw exceptions; it has to return a result code plus a text message, just the way you do it. This can never carry the full information from the Java exception, so it's a good decision to log any unclear errors here (but make sure the stack trace gets logged too!) before you pass an error result to the HTTP client.
The web client UI software should of course present detailed error lists to the user and not just a translated single exception. So, I'd create an HttpService.validateTree(...) method that returns a list of validation errors and call that from the client before trying to save. This gives you the additional possibility to check for validity independent of saving.
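A rough sketch of what that could look like; HttpServiceResponse.success(), validationFailure(), and a Service.validateTree() that returns the collected error list are assumptions here, not code from the question:
import java.util.List;

public class HttpService {
    private Service service;
    // ...

    // Separate endpoint: validate only, never save.
    public HttpServiceResponse validateTree(Node root) {
        List<String> errors = service.validateTree(root); // assumed to return the collected errors
        if (errors.isEmpty()) {
            return HttpServiceResponse.success();              // hypothetical factory method
        }
        return HttpServiceResponse.validationFailure(errors);  // hypothetical factory carrying all messages
    }
}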
Why do it this way?
You never have control over what happens in the client: inside some browser, you don't even know whether the request is coming from your app or from something like curl. So you can't rely on any validation that your JavaScript (?) application might implement. All of your service methods should reject invalid data by doing the validation themselves.
Implementing the validation checks in a JavaScript client application still requires the same validation inside the Java service (see above), so you'd have to maintain two pieces of code in different languages implementing exactly the same business logic - don't repeat yourself! Only if the additional round trip isn't tolerable would I regard that as an acceptable trade-off.
From Jakob Nielsen's error-message guidelines (https://www.nngroup.com/articles/error-message-guidelines/), error messages should be:
"Visible and highly noticeable, both in terms of the message itself and how it indicates which dialogue element users must repair."
I am in the process of moving the business logic of my Swing program onto the server.
What would be the most efficient way to communicate client-server and server-client?
The server will be responsible for authentication and for fetching and storing data, so the program will have to communicate frequently.
It depends on a lot of things. If you want a real answer, you should clarify exactly what your program will be doing and exactly what falls under your definition of "efficient".
If rapid productivity falls under your definition of efficient, a method that I have used in the past involves serialization to send plain old Java objects down a socket. Recently I have found that, in combination with the Netty API, I am able to rapidly prototype fairly robust client/server communication.
The guts are fairly simple; the client and server both run Netty with an ObjectDecoder and ObjectEncoder in the pipeline. A class is made for each kind of message to be exchanged, for example a HandshakeRequest class and a HandshakeResponse class.
A handshake request could look like:
public class HandshakeRequest extends Message {
private static final long serialVersionUID = 1L;
}
And a handshake response may look like:
public class HandshakeResponse extends Message {
private static final long serialVersionUID = 1L;
private final HandshakeResult handshakeResult;
public HandshakeResponse(HandshakeResult handshakeResult) {
this.handshakeResult = handshakeResult;
}
public HandshakeResult getHandshakeResult() {
return handshakeResult;
}
}
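The snippets above assume a shared Message base class and a pipeline that knows how to (de)serialize it; neither is shown in the answer, so the following is only a sketch written against the Netty 3.x API that channelConnected/messageReceived below belong to (ServerHandler stands in for your own SimpleChannelHandler subclass):
import java.io.Serializable;

import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.handler.codec.serialization.ClassResolvers;
import org.jboss.netty.handler.codec.serialization.ObjectDecoder;
import org.jboss.netty.handler.codec.serialization.ObjectEncoder;

// Common base type for everything sent over the wire.
public abstract class Message implements Serializable {
    private static final long serialVersionUID = 1L;
}

// Pipeline factory used by both the client and server bootstraps.
class MessagePipelineFactory implements ChannelPipelineFactory {
    @Override
    public ChannelPipeline getPipeline() {
        return Channels.pipeline(
                new ObjectEncoder(),   // serializes outgoing Message objects
                new ObjectDecoder(ClassResolvers.cacheDisabled(getClass().getClassLoader())), // deserializes incoming ones
                new ServerHandler());  // your own handler with channelConnected/messageReceived
    }
}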
In Netty, the server would send a handshake request when a client connects, like so:
@Override
public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e) {
    Channel ch = e.getChannel();
    ch.write(new HandshakeRequest());
}
The client receives the HandshakeRequest object, but it needs a way to tell what kind of message the server just sent. For this, a Map<Class<?>, Method> can be used. When your program starts, it should iterate through the methods of a class with reflection and place them in the map. Here is an example:
public HashMap<Class<?>, Method> populateMessageHandler() {
    HashMap<Class<?>, Method> temp = new HashMap<Class<?>, Method>();
    for (Method method : getClass().getMethods()) {
        if (method.getAnnotation(MessageHandler.class) != null) {
            // Handler methods take (ChannelHandlerContext ctx, SomeMessage msg),
            // so index 1 is the message type that keys the map.
            Class<?>[] methodParameters = method.getParameterTypes();
            temp.put(methodParameters[1], method);
        }
    }
    return temp;
}
This code iterates through the current class with reflection, looks for methods marked with the @MessageHandler annotation, then looks at the message parameter of each handler (the second parameter, after the ChannelHandlerContext, as in public void handleHandshakeRequest(ChannelHandlerContext ctx, HandshakeRequest request)) and places that class into the map as a key, with the actual method as its value.
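The @MessageHandler annotation itself is not part of any library; a minimal definition could look like this:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker annotation: tags a method as the handler for one message type.
@Retention(RetentionPolicy.RUNTIME) // must survive to runtime so reflection can see it
@Target(ElementType.METHOD)
public @interface MessageHandler {
}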
With this map in place, it is very easy to receive a message and dispatch it directly to the method that should handle it:
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
try {
Message message = (Message) e.getMessage();
Method method = messageHandlers.get(message.getClass());
if (method == null) {
System.out.println("No handler for message!");
} else {
method.invoke(this, ctx, message);
}
} catch(Exception exception) {
exception.printStackTrace();
}
}
There's not really anything left to it. Netty handles all of the messy stuff, allowing us to send serialized objects back and forth with ease. If you decide that you do not want to use Netty, you can wrap your own protocol around Java's ObjectOutputStream. You will have to do a little more work overall, but the simplicity of the communication remains intact.
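If you go the plain-socket route, a bare-bones sketch (no framing, threading, or error handling; HandshakeRequest is the class from above) could look like this:
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class PlainSocketExample {

    // Server side: accept one client and immediately send a HandshakeRequest.
    static void serve(int port) throws IOException {
        try (ServerSocket server = new ServerSocket(port);
             Socket client = server.accept();
             ObjectOutputStream out = new ObjectOutputStream(client.getOutputStream())) {
            out.writeObject(new HandshakeRequest());
            out.flush();
        }
    }

    // Client side: connect and read whatever object the server sends.
    static Object connect(String host, int port) throws IOException, ClassNotFoundException {
        try (Socket socket = new Socket(host, port);
             ObjectInputStream in = new ObjectInputStream(socket.getInputStream())) {
            return in.readObject();
        }
    }
}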
It's a bit hard to say which method is "most efficient" without knowing efficient in terms of what, and I don't know your use cases, but here are a couple of options:
The most basic way is to simply use "raw" TCP-sockets. The upside is that there's nothing extra moving across the network and you create your protocol yourself, the latter being also a downside; you have to design and implement your own protocol for the communication, plus the basic framework for handling multiple connections in the server end (if there is a need for such).
Using UDP sockets, you'll probably save a little latency and bandwidth (not much; unless you're using something like mobile data, you probably won't notice any difference from TCP in terms of latency), but the networking code is a harder task: UDP sockets are "connectionless", meaning all the clients' messages end up in the same handler and must be distinguished from one another. If the server needs to keep track of client state, this can be somewhat troublesome to implement correctly.
MadProgrammer brought up RMI (remote method invocation). I've personally never used it, and it seems a bit cumbersome to set up, but it might be pretty good in the long run in terms of implementation effort.
Probably one of the most common ways is to use HTTP for the communication, for example via a REST interface for web services. There are multiple frameworks (I personally prefer Spring MVC) to help with the implementation, but learning a new framework might be out of scope for now. Also, complex HTTP queries or long URLs can eat your bandwidth a bit more, but unless we're talking about very large numbers of simultaneous clients, this usually isn't a problem (assuming you run your server(s) in a datacenter with something like 100/100 Mbit connections). This is probably the easiest solution to scale, if it ever comes to that, as there are lots of load-balancing solutions available for web servers.
Recently I found a function like this in a generic JSR 245 portlet class:
import java.lang.reflect.Method;

import javax.portlet.ActionRequest;
import javax.portlet.ActionResponse;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;

public class MyGenericPortlet extends GenericPortlet {

    @Override
    public void processAction(ActionRequest rq, ActionResponse rs) throws PortletException {
        String actParam = rq.getParameter("myAction");
        if ((actParam != null) && !"".equals(actParam)) {
            try {
                Method m = this.getClass().getMethod(actParam, new Class[]{ActionRequest.class, ActionResponse.class});
                m.invoke(this, new Object[]{rq, rs});
            } catch (Exception e) {
                // setRequestAttribute(...) is assumed to be defined elsewhere in this class
                setRequestAttribute(rq.getPortletSession(), "error", "Error in method: " + actParam);
                e.printStackTrace();
            }
        } else {
            setRequestAttribute(rq.getPortletSession(), "error", "Error in method: " + actParam);
        }
    }
}
How safe is such code? As far as I can see the following problems might occur:
A parameter transmitted from the client is used unchecked to call a function. This allows anyone who can send data to the corresponding portlet to call any matching function. On the other hand, the function to be called must have a specific signature, and such functions are usually rare.
A programmer might accidentally add a function with a matching signature. As only public functions seem to be found, this is no problem as long as the function is private or protected.
The error message can reveal information about the software to the client. This shouldn't be a problem as the software itself is Open Source.
Obviously there is some room for programming errors that can be exploited. Are there other unwanted side effects that might occur? How should I (or the developers) judge the risk that comes from this function?
If you think it is safe, I'd like to know why.
The fact that only public methods with a specific signature can be invoked remotely is good. However, it could be made more secure by, for example, requiring a special annotation on action methods. This would indicate the developer specifically intended the method to be an invokable action.
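A minimal sketch of that idea; the @PortletAction annotation name and the dispatcher are invented here, not taken from the code in the question:
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

import javax.portlet.ActionRequest;
import javax.portlet.ActionResponse;

// Marks a method as explicitly intended to be reachable via the "myAction" parameter.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface PortletAction {
}

class ActionDispatcher {

    // Invokes the named action only if the target method has opted in with @PortletAction.
    void dispatch(Object portlet, String actionName, ActionRequest rq, ActionResponse rs) throws Exception {
        Method m = portlet.getClass().getMethod(actionName, ActionRequest.class, ActionResponse.class);
        if (!m.isAnnotationPresent(PortletAction.class)) {
            throw new IllegalArgumentException("Not an allowed action: " + actionName);
        }
        m.invoke(portlet, rq, rs);
    }
}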
A realistic scenario where the current implementation could be dangerous is when the developer adds an action that validates that the information in the request is safe, then passes the request and response to another method for actual processing. If an attacker could learn the name of the delegate method, he could invoke it directly, bypassing the parameter safety validation.
Currently our application uses GWT-RPC for most client-server communication. Where this breaks down is when we need to auto generate images. We generate images based on dozens of parameters so what we do is build large complex urls and via a get request retrieve the dynamically built image.
If we could find a way to serialize Java objects in gwt client code and deserialize it on the server side we could make our urls much easier to work with. Instead of
http://host/page?param1=a&param2=b&param3=c....
we could have
http://host/page?object=?JSON/XML/Something Magical
and on the server just have
MagicDeserializer.deserialize(request.getParameter("object"), AwesomeClass.class);
I do not care what the intermediate format is (JSON/XML/whatever); I just really want to be able to stop manually marshalling/unmarshalling parameters in my GWT client code as well as in my servlets.
Use the AutoBean framework. What you need is simple, and it's all described here: http://code.google.com/p/google-web-toolkit/wiki/AutoBean
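A rough sketch of the round trip, based on the AutoBean documentation; the Person interface and the factory/class names are made up for illustration, and client and server parts are shown in one listing only for brevity (AutoBeanFactorySource is server-side only):
import com.google.gwt.core.client.GWT;
import com.google.web.bindery.autobean.shared.AutoBean;
import com.google.web.bindery.autobean.shared.AutoBeanCodex;
import com.google.web.bindery.autobean.shared.AutoBeanFactory;
import com.google.web.bindery.autobean.shared.AutoBeanUtils;
import com.google.web.bindery.autobean.vm.AutoBeanFactorySource;

// Shared between client and server: AutoBeans work on bean-style interfaces.
interface Person {
    String getName();
    void setName(String name);
}

interface BeanFactory extends AutoBeanFactory {
    AutoBean<Person> person();
}

// Client side (GWT): create a Person via the GWT.create'd factory and encode it to JSON for the URL.
class ClientCodec {
    private static final BeanFactory FACTORY = GWT.create(BeanFactory.class);

    static String toJson(String name) {
        Person p = FACTORY.person().as();
        p.setName(name);
        AutoBean<Person> bean = AutoBeanUtils.getAutoBean(p);
        return AutoBeanCodex.encode(bean).getPayload();
    }
}

// Server side (plain JVM): same interfaces, but the factory comes from AutoBeanFactorySource.
class ServerCodec {
    private static final BeanFactory FACTORY = AutoBeanFactorySource.create(BeanFactory.class);

    static Person fromJson(String json) {
        return AutoBeanCodex.decode(FACTORY, Person.class, json).as();
    }
}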
I've seen the most success and least amount of code using this library:
https://code.google.com/p/gwtprojsonserializer/
Along with the standard toString() you should have for all your classes, I also have a method called toJsonString() inside each class I want to be "JSONable". Note that each class must extend JsonSerializable, which comes with the library:
public String toJsonString()
{
Serializer serializer = (Serializer) GWT.create(Serializer.class);
return serializer.serializeToJson(this).toString();
}
To turn the JSON string back into an object, I put a static method inside of the same class, that recreates the class itself:
public static ClassName recreateClassViaJson(String json)
{
Serializer serializer = (Serializer) GWT.create(Serializer.class);
return (ClassName) serializer.deSerialize(json, "full.package.name.ClassName");
}
Very simple!
I am using GWT-RPC to call an ANTLR grammar.
If the grammar fails, I create an object containing the errors/exceptions that were thrown by the grammar and return it to the client.
When I do this I get the exception:
com.google.gwt.user.client.rpc.SerializationException: Type 'org.antlr.runtime.NoViableAltException' was not included in the set of types which can be serialized by this SerializationPolicy or its Class object could not be loaded.
I have found that there is an identical class with the addition of a public no argument constructor (needed for GWT-RPC serialization) in the com.google.appengine.repackaged.org.antlr.runtime package.
How do I convert the org.antlr.runtime.NoViableAltException into a com.google.appengine.repackaged.org.antlr.runtime.NoViableAltException?
Do you need the exceptions themselves? I'd think not - you probably need the message or at most the stack trace. Since you're repackaging the exceptions anyway, just repack the needed strings and send those over the wire.
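For example, a small GWT-RPC friendly container (the class name here is made up) that carries only the strings the client actually needs:
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Serializable, with a public no-arg constructor, as GWT-RPC requires.
public class GrammarErrors implements Serializable {

    private List<String> messages = new ArrayList<String>();

    public GrammarErrors() {
    }

    public void addMessage(String message) {
        messages.add(message);
    }

    public List<String> getMessages() {
        return messages;
    }
}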
As an alternative to creating new exceptions that can be serialized, I have made my parser override the emitErrorMessage() method from BaseRecognizer.
@members {
    @Override
    public void emitErrorMessage(String msg) {
        // The original prints to stdout.
        // You can do what you like with the message.
    }
}
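Building on that, instead of just printing you could collect the messages in the parser and copy them into whatever serializable result object your service returns; a sketch (names invented here):
@members {
    private final java.util.List<String> errorMessages = new java.util.ArrayList<String>();

    @Override
    public void emitErrorMessage(String msg) {
        // Collect instead of printing to stdout.
        errorMessages.add(msg);
    }

    public java.util.List<String> getErrorMessages() {
        return errorMessages;
    }
}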
As Tassos suggested in his answer, I did not actually need the exception, just the message from it.