Servlet and Command pattern, compile vs runtime?

I'm writing a Java servlet that acts as a Front Controller. To carry out functions I'm using the Domain Command pattern. Currently, I'm initializing all my commands and storing them in a map, with the name (string) of the command as the key and the object as the value. Whenever the servlet receives a request, I get the command from the map by passing the command parameter from the URL:
// at init
HashMap<String, DomainCommand> commands = new HashMap<String, DomainCommand>();
commands.put("someCommand", new SomeCommand());
// at request
String command = request.getParameter("command");
DomainCommand c = commands.get(command);
c.execute();
This works well and does what I want since my DomainCommands have no class attributes to be shared between threads. An alternative to this is using reflection to create the object like so:
String command = request.getParameter("command");
DomainCommand c = (DomainCommand) Class.forName(command).newInstance(); // assuming the class is in the same (default) package
c.execute();
Both of these work. Which is better from a performance/memory saving point of view?

Performance
When using a Map the only cost is accessing a HashMap (negligible). Reflection, on the other hand, might take much more time and is less safe - remember you have to make sure the user is not passing a bogus command, allowing them to run arbitrary code.
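As a minimal sketch of that safety concern (the ALLOWED set and the command names in it are hypothetical), the reflective variant could be guarded by a whitelist before any class is ever loaded:

```java
import java.util.Set;

public class CommandGuard {
    // Hypothetical whitelist; in a real application this would mirror
    // the set of registered command class names.
    private static final Set<String> ALLOWED = Set.of("SomeCommand", "OtherCommand");

    // Returns the name unchanged if it is a known command, so that only
    // whitelisted names ever reach Class.forName.
    public static String validate(String command) {
        if (!ALLOWED.contains(command)) {
            throw new IllegalArgumentException("Unknown command: " + command);
        }
        return command;
    }
}
```

Without a check like this, the request parameter effectively names an arbitrary class on the classpath.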
Memory
When creating the DomainCommands at startup, they will end up in the old generation after some time, thus not being subject to garbage collection most of the time. On the other hand, when created per request, they will most likely be garbage collected immediately. So overall the memory footprint will be comparable, except that the second approach requires more GC runs.
All in all, a map of commands is a much better approach. BTW, if you use DI frameworks like Spring or Guice (unless that is overkill for you) or web frameworks like Struts/Spring MVC, they will do precisely the same work for you.

The first approach, storing the commands in a HashMap, is better. The problem with the second approach is that you have to look up the class and create a new instance every time you execute the command.
In fact, frameworks like Struts are built precisely on the command pattern, with a controller servlet as the front controller and individual action classes as the commands.

From a performance perspective, the first approach you mentioned is definitely faster.
How about the following options?
using the Visitor pattern for commands
storing your command beans in JNDI and looking up a command bean by its name (from the request), via a service that retrieves the command from JNDI
using an IoC framework (Spring) where all the command beans are initialized at container startup, and the lookup for a command is done on the application context
Performance-wise I would prefer the third option.

You asked for an answer specifically from a performance/memory saving point of view, and the other answers answer that. I agree that the Map approach is probably better in this regard.
However, you should be sure that this is even a concern before worrying about it at this point; I'm assuming the network overhead of one call to your servlet far outweighs a single HashMap lookup of a short string.
A larger concern should be clarity and maintainability. In this regard as well, I would say that the Map approach is much superior, as it:
Doesn't tie the API (legal values of the command parameter) to the implementation (names of classes)
Makes it clear which classes are intended to be used as commands and which are not (very important if you later want to make a change)
Allows the API to be more flexible (for example, you could allow the command parameter to be case-insensitive, or to have more than one command map to the same class)
To quote the Zen of Python: "Explicit is better than implicit".

How about merging the 2 options together?
Struts does the exact same thing. It contains a Map that caches all the commands that have been requested by the servlet. If a command doesn't exist yet, it creates a newInstance() of it (just like your option 2).
The advantage of this is quicker execution: retrieve the command from the cache, or else create a new one and store the newly created command in the cache. It's definitely faster than option 2 alone.
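A minimal sketch of that merged approach (DomainCommand and SomeCommand are stand-ins for the question's types, and the lookup-by-class-name convention is assumed):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-ins for the question's command types.
interface DomainCommand {
    String execute();
}

class SomeCommand implements DomainCommand {
    public String execute() { return "some"; }
}

public class CommandCache {
    private final Map<String, DomainCommand> cache = new ConcurrentHashMap<>();

    // Returns the cached instance, creating it reflectively on first use.
    // Reusing one instance per name assumes the commands are stateless,
    // as stated in the question.
    public DomainCommand get(String name) {
        return cache.computeIfAbsent(name, n -> {
            try {
                return (DomainCommand) Class.forName(n)
                        .getDeclaredConstructor().newInstance();
            } catch (ReflectiveOperationException e) {
                throw new IllegalArgumentException("Unknown command: " + n, e);
            }
        });
    }
}
```

computeIfAbsent keeps the lookup-or-create step atomic, so concurrent requests for the same command name share one instance.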

Related

Dynamically build java/scala method body at runtime and execute it

Suppose I have the following Interface in java:
public interface DynamicMethod {
    String doit();
}
I would like to build an object at runtime which conforms to the above interface, such that I inject the doit method body into it and then execute it. Is this possible with the Java Reflection API, or any other way? Or perhaps in some way in Scala?
Note that the doit bodies for my objects are dynamic and not known a priori. You can assume that at runtime an array CodeArray[1..10] of Strings is provided, and each entry of this array holds the code for one doit method. I would appreciate it if you could answer with sample code.
The context:
I try to explain the context of the problem; nonetheless, the above question still remains independent from the context.
I have some commands, say C1, C2, ...; each command has certain parameters. Based on a command and its parameters, the system needs to perform a certain task (which is expressible as Java code). These commands need to be stored for future execution on user demand (so CodeArray[1..10] above holds this list of Java code snippets). For example, a user chooses a command from the list (i.e., from the array) and demands its execution.
My thought is that I build an engine that based on the user selection, loads the corresponding command code from the array and executes it.
With the context you added, it sounds to me like you have an Interpreter.
For example, SQL takes input like "SELECT * FROM users", parses and builds a tree of tokens that it then interprets.
Another example: Java's regex is an interpreter. A string like "[abc]+" is compiled into tokens, and then interpreted when executed. You can see the tokens (called Nodes) it uses in the source code.
I'll try to post a simple example later, but the Interpreter pattern doesn't use dynamically generated code. All of the tokens are concrete classes. You do have to define all possible (valid) user input so that you can make a token to execute it, however. SQL and regex have a defined syntax; you will need one also.
I think Byte Buddy would be helpful in your case. It's an open source project maintained by a very well respected Java developer.
Take a look at the Learn section, they have a very detailed example there:
http://bytebuddy.net/#/tutorial
Currently it's not very clear what your aim is. There are many approaches, depending on your requirements.
In some cases it would be enough to create a Proxy and an InvocationHandler. Sometimes it's reasonable to generate Java source, then invoke JavaCompiler in runtime and load the generated class using URLClassLoader (probably that's your case if you're speaking about strings of code). Sometimes it's better to directly create a bytecode using libraries like ASM, cglib or BCEL.
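As a sketch of the Proxy route mentioned above (all names are illustrative, and the "body" here is a lambda standing in for behaviour chosen at runtime, not a compiled string of source code - for actual source strings you would need the JavaCompiler approach instead):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.function.Supplier;

interface DynamicMethod {
    String doit();
}

public class DynamicProxyDemo {
    // Builds a DynamicMethod whose doit() behaviour is supplied at runtime.
    public static DynamicMethod build(Supplier<String> body) {
        InvocationHandler handler = (proxy, method, args) -> {
            if ("doit".equals(method.getName())) {
                return body.get();
            }
            throw new UnsupportedOperationException(method.getName());
        };
        return (DynamicMethod) Proxy.newProxyInstance(
                DynamicMethod.class.getClassLoader(),
                new Class<?>[] { DynamicMethod.class },
                handler);
    }
}
```

A caller would write `DynamicMethod m = DynamicProxyDemo.build(...)` and then invoke `m.doit()` like any other implementation of the interface.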

Loading Settings - Best Practices

I'm at the point in my first real application where I am adding in the user settings. I'm using Java and being very OO (and trying to keep it that way) so here are my ideas:
1. Load everything in main() and pass it all 'down the line' to the required objects (array)
2. Same as above, but just pass the object that contains the data down the line
3. Load each individual setting as needed within the various classes.
I understand some of the basic pros and cons to each method (i.e. time vs. size) but I'm looking for some outside input as to what practices they've successfully used in the past.
Someone should stand up for the purported Java standard, the Preferences API... and its most recent incarnation in JDK 6. Edited to add: since the author seems to be savvy with XML, this is more appropriate than before. Though I believe you can work XML juju with Properties too, should the spirit take you.
Related on SO: Preferences API vs. Apache solution, Is a master preferences class a good idea?
(well, that's about all the standing up I'm willing to do.)
Use a SettingsManager class or something similar that is used to abstract getting all settings data. At each point in the code where you need a setting you query the SettingsManager class - something like:
int timeout = Integer.parseInt(settingsManager.getSetting("TimeoutSetting"));
You then delegate all of the logic for how settings are fetched to this single manager class, whose implementation you can change / optimize as needed. For instance, you could implement the SettingsManager to fetch settings from a config file, a database, or some other data store; periodically refresh the settings; handle caching of settings that are expensive to retrieve; etc. The code using the settings remains blissfully unaware of all of these implementation decisions.
For maximum flexibility you can use an interface instead of an actual class, and have different setting managers implement the interface: you can swap them in and out as needed at some central point without having to change the underlying code at all.
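A minimal sketch of that interface idea in Java (the class and key names are illustrative), with one implementation backed by java.util.Properties:

```java
import java.util.Properties;

interface SettingsManager {
    String getSetting(String key);
}

// One possible implementation; a database- or file-backed one could be
// swapped in at a central point without touching the calling code.
class PropertiesSettingsManager implements SettingsManager {
    private final Properties props;

    PropertiesSettingsManager(Properties props) {
        this.props = props;
    }

    public String getSetting(String key) {
        return props.getProperty(key);
    }
}
```

A caller then codes only against the interface, e.g. `int timeout = Integer.parseInt(settings.getSetting("TimeoutSetting"));`.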
In .NET there is a fairly rich set of existing configuration classes (in the System.Configuration) namespace that provide this sort of thing, and it works out quite well.
I'm not sure of the Java equivalent, but it's a good pattern.
Since configuration / settings are typically loaded once (at startup, or maybe a few times during the program's runtime; in any case, we're not talking about a very frequent or time-consuming process), I would prefer simplicity over efficiency.
That rules out option number (3): configuration-loading would be scattered all over the place.
I'm not entirely sure what the difference is between (1) and (2) in your list. Does (1) mean "passing discrete parameters" and (2) mean "passing an object containing the entire configuration"? If so, I'd prefer (2) over (1).
The rule of thumb here is that you should keep things simple and concentrated. The advantage of reading configuration in one place is that it gives you better control in case the source of the configuration changes at some point.
Here is a tutorial on the Properties class. From the Javadocs (Properties):
The Properties class represents a persistent set of properties. The Properties can be saved to a stream or loaded from a stream. Each key and its corresponding value in the property list is a string.
A property list can contain another property list as its "defaults"; this second property list is searched if the property key is not found in the original property list.
The tutorial gives the following example instantiation for a typical usage:
. . .
// create and load default properties
Properties defaultProps = new Properties();
FileInputStream in = new FileInputStream("defaultProperties");
defaultProps.load(in);
in.close();
// create application properties with default
Properties applicationProps = new Properties(defaultProps);
// now load properties from last invocation
in = new FileInputStream("appProperties");
applicationProps.load(in);
in.close();
. . .
You could, of course, also roll your own system fairly directly using a file-based store and an XML or YAML parser. Good luck!
We have recently started using JSR-330 dependency injection (using Guice from SVN) and found that it was possible to read in a Properties file (or any other map) and bind it inside Guice in the module in the startup code, so that the
@Inject @Named("key") String value
string was injected with the value corresponding to the key when that particular code was called. This is the most elegant way I have ever seen for solving this problem!
You do not have to haul configuration objects around your code or sprinkle all kinds of magic method calls in each and every corner of the code to get the values - you just mention to Guice you need it, and it is there.
Note: I've had a look at Guice, Weld (Seam-based) and Spring, which all provide injection, because we want JSR-330 in our own code, and I currently like Guice the best. I think the reason is that Guice is the clearest in its bindings, as opposed to the under-the-hood magic happening in Weld.

Why use the command pattern in GWT (or any web app)?

According to this video here [# 7:50] Google is recommending the use of the Command pattern on top of its request handling API. There is also a helpful looking project gwt-dispatch that implements that pattern.
According to the gwt-dispatch documentation, I need to create four classes for each command:
an action (e.g. command)
a result (e.g. response)
an action handler
a module
Assume my service API has 100 methods across 8 BSOs, can somebody explain to me why I want to create nearly 400 new classes? What awesomeness does this pattern buy?
One good reason to use the command pattern is when you want to pass the command object on to further delegates - instead of copying all the arguments, it's easier to just pass the command object around. It's also useful for gwt-dispatch's rollback functionality (or the undo/redo functionality, e.g. in Eclipse's UndoableOperations).
It helps to provide several variations of commands by using different constructors, and subclasses of commands.
I would not suggest always using the pattern, but you don't save as much as you think when you don't use it: you will often need result objects anyway, and it's possible to reuse the same result objects. In other cases, you can use the same object for the command and for the result.
The module can be used for multiple commands.
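A plain-Java sketch of the action/result/handler shape (the names are illustrative, not the actual gwt-dispatch API):

```java
// The action carries the request data...
class GetUserAction {
    final String userId;
    GetUserAction(String userId) { this.userId = userId; }
}

// ...the result carries the response...
class GetUserResult {
    final String displayName;
    GetUserResult(String displayName) { this.displayName = displayName; }
}

// ...and a handler maps one to the other.
interface ActionHandler<A, R> {
    R execute(A action);
}

public class DispatchDemo {
    public static GetUserResult dispatch(GetUserAction action) {
        ActionHandler<GetUserAction, GetUserResult> handler =
                a -> new GetUserResult("User-" + a.userId);
        return handler.execute(action);
    }
}
```

Because the action and result are plain objects, they can be passed to delegates, queued, or kept around for undo, which is the advantage described above.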

Which options do I have for Java process communication?

We have a place in a code of such form:
void processParam(Object param) {
    wrapperForComplexNativeObject result = jniCallWhichMayCrash(param);
    processResult(result);
}
processParam - method which is called with many different arguments.
jniCallWhichMayCrash - a native method which is intended to do some complex processing of its parameter and create some complex object. It can crash in some cases.
wrapperForComplexNativeObject - wrapper type generated by SWIG
processResult - a method written in pure Java which processes its parameter by creating several kinds of objects (by "kinds" I don't mean classes, but something like hierarchies):
1 - Some non-unique objects referencing each other (from the same hierarchy); these objects can have duplicates created by invocations of the processParam() method with different parameter values. Since it's costly to keep all the duplicates, it's necessary to cache them.
2 - Some unique objects referencing each other (from the same hierarchy) and some of the objects of the 1st kind.
After processParam is executed for each argument from some set, the data created in processResult will be processed together. The problem is that the jniCallWhichMayCrash method may crash the entire JVM, and this would be very bad. The crash may happen for one argument value and not for another. We've decided that it's better to ignore such crashes and just skip some chunks of data when they occur. In order to do this, we should run the processParam function inside a separate process and pass the result somehow (HOW? HOW?! This is the question) to the main process; in case of a crash we will only lose part of the data (that's OK) without losing everything else. So for now the main problem is the implementation of transport between the different processes. Which options do I have? I can think of serialization and transmitting binary data over streams, but serialization may not be very fast due to object complexity. Maybe I have some other options for implementing this?
Let us assume that the processes are on the same machine. Your options include:
Use Runtime.exec() to launch a new process for each request, passing the parameter object as a command-line argument or via the process's standard input, and reading the result from the process's standard output. The process exits after completing a single request.
Use Runtime.exec() to launch a long-running process, using the process's standard input / output for sending the requests and replies. The process instance handles multiple requests.
Use a "named pipe" to send requests / replies to an existing local (or possibly remote) process.
Use raw TCP/IP Sockets or Unix Domain Sockets to send requests / replies to an existing local (or possibly remote) process.
For each of the above, you will need to design your own request formats and deal with parameter / result encoding and decoding on both sides.
Implement the process as a web service and use JSON or XML (or something else) to encode the parameters and results. Depending on your chosen encoding scheme, there will be existing libraries that deal with encoding / decoding and (possibly) mapping to Java types.
SOAP / WSDL - with these, you typically design the application protocol at a higher level of abstraction, and the framework libraries take care of encoding / decoding, dispatching requests and so on.
CORBA or an equivalent like ICE. These options are like SOAP / WSDL, but using more efficient wire representations, etc.
Message queuing systems like MQ-series.
Note that the last four are normally used in systems where the client and server are on separate machines, but they work just as well (and maybe faster) when client and server are colocated.
I should perhaps add that an alternative approach is to get rid of the problematic JNI code. Either replace it with pure Java code, or run it as an external command or service without a Java wrapper around it.
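As a sketch of the encoding step shared by the stream-based options above (the Request type is hypothetical), plain Java serialization can round-trip a request through the kind of byte stream that would connect the two processes:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class StreamTransport {
    // Hypothetical request type; anything sent this way must be Serializable.
    static class Request implements Serializable {
        private static final long serialVersionUID = 1L;
        final String param;
        Request(String param) { this.param = param; }
    }

    // Encode a request to bytes, as would be written to the child's stdin or a socket.
    static byte[] encode(Request request) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(request);
        }
        return bytes.toByteArray();
    }

    // Decode a request from bytes, as would be read on the other side.
    static Request decode(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (Request) in.readObject();
        }
    }
}
```

The same pair of methods works for the replies; whether serialization is fast enough for the question's complex objects would need to be measured.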
Have you thought about using web-inspired methods? In your case, web services could/would typically be a solution in all their diversity:
REST invocation
WSDL and all the heavy-weight mechanism
Even XML-RPC over http, like the one used by Spring remoting or JSPF net export could inspire you
If you can isolate the responsibilities of the processes, i.e. P1 is a producer of data and P2 is a consumer, the most robust answer is to use a file to communicate your data. There is overhead (read: CPU cycles) involved in serialization/deserialization, however your processes will not crash and it is very easy to debug/synchronize.

Save object in debug and then use it as stub in tests

My application connects to a db and gets a tree of categories from it. In a debug session I can see this big tree object, and I thought of the ability to save this object somewhere on disk to use in test stubs. Like this:
mockedDao = mock(MyDao.class);
when(mockedDao.getCategoryTree()).thenReturn(mySavedObject);
Assuming mySavedObject is huge enough that I don't want to generate it manually or write special generation code, I just want to be able to serialize and save it somewhere during a debug session, then deserialize it and pass it to thenReturn in tests.
Is there is a standard way to do so? If not how is better to implement such approach?
I do love your idea, it's awesome!
I am not aware of a library that offers that feature out of the box. You can try using ObjectOutputStream and ObjectInputStream (i.e. the standard Java serialization) if your objects all implement Serializable. Typically they do not. In that case, you might have more luck using XStream or one of its friends.
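A minimal sketch of the standard-serialization route (the class and method names are illustrative), assuming the captured object implements Serializable:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class StubSnapshot {
    // Call this from a breakpoint (e.g. via the debugger's "evaluate expression")
    // to capture the live object to disk during the debug session.
    public static void save(Serializable obj, String path) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(path))) {
            out.writeObject(obj);
        }
    }

    // Call this from the test to rebuild the captured object for thenReturn(...).
    @SuppressWarnings("unchecked")
    public static <T> T load(String path) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(path))) {
            return (T) in.readObject();
        }
    }
}
```

In the test this becomes `when(mockedDao.getCategoryTree()).thenReturn(StubSnapshot.load("tree.ser"));`.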
We usually mock the entire DB in such scenarios, reusing (and implicitly testing) the code that loads the categories from the DB.
Specifically, our unit tests run against an in-memory database (hsqldb), which we initialize prior to each test run by importing test data.
Have a look at Dynamic Managed Beans - this offers a way to change values of a running Java application. Maybe there's a way to define an MBean that holds your tree, read the tree, store it somewhere and inject it again later.
I've run into this same problem and considered possible solutions. A few months ago I wrote custom code to print a large binary object as hex encoded strings. My toJava() method returns a String which is source code for a field definition of the object required. This wasn't hard to implement. I put log statements in to print the result to the log file, and then cut and paste from the log file to a test class. New unit tests reference that file, giving me the ability to dig into operations on an object that would be very hard to build another way.
This has been extremely useful but I quickly hit the limit on the size of bytecode in a compilation unit.
