Struts2 Basic doubts - java

I'm new to Struts2 and have a few doubts:
a) In Struts2, how does the application find struts.xml? We don't declare struts.xml in web.xml, unlike Struts1 where we declare struts-config.xml in web.xml, so how does the framework locate struts.xml?
b) Why don't we write "extends ActionSupport" for our Action class? I have seen many examples that don't extend any predefined Action class. How does the framework find the execute() or populate() method in our Action class if we don't extend a predefined action class or implement the Action interface methods?
c) In what cases do we extend ActionSupport?

a) If you don't override the configuration file name ("config" parameter to the Struts filter in web.xml), then it will default to "struts.xml". This is simply a hard-coded default, hence "configuration by convention".
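For reference, the filter declaration in web.xml typically looks something like the sketch below (the filter class's package differs between Struts versions, and the "config" init-param is only needed if you want to override the default struts-default.xml,struts-plugin.xml,struts.xml list):

<filter>
    <filter-name>struts2</filter-name>
    <filter-class>org.apache.struts2.dispatcher.ng.filter.StrutsPrepareAndExecuteFilter</filter-class>
    <init-param>
        <param-name>config</param-name>
        <param-value>struts-default.xml,struts-plugin.xml,my-struts.xml</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>struts2</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>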
b) The framework allows "plain old Java objects" (POJOs) for actions. Just tell it what method to call (in struts.xml), and it will use reflection to find such a method (it must be no-args and return a String) and call it. On the other hand, some interfaces are used for additional functionality; for example, if your class implements Preparable then the prepare() method will automatically be called prior to execution (perhaps similar to "populate" in Struts1?).
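As a minimal illustration (class, package, method and result names are all made up), a POJO action plus a struts.xml mapping that names the method might look like:

public class RegisterAction {              // no base class, no interface
    public String save() {                 // found by reflection: no-args, returns a String
        return "success";
    }
}

<action name="register" class="com.example.RegisterAction" method="save">
    <result name="success">/registered.jsp</result>
</action>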
c) Extending ActionSupport is entirely optional, but gives access to some functionality that might be useful, such as default implementations for some action methods such as "input", convenient methods for internationalization, etc.

+1 to Todd's answer.
To b): note that there is no need to specify a method (though you can); by default ("convention") the execute() method will be called.
To c): extending ActionSupport is optional, and IMO quite frequent. Sometimes it's also advisable to implement your own (say) BaseAction (which frequently extends ActionSupport) to factor out the common functionality of your webapp, and make all (or nearly all) your actions extend it.
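A rough sketch of that BaseAction idea (names and helpers are made up; in real code the classes go in separate files):

public abstract class BaseAction extends ActionSupport {
    // shared helpers for the whole webapp, e.g. access to the logged-in user
    protected boolean isLoggedIn() {
        return false;   // placeholder for a real session/user lookup
    }
}

public class OrderAction extends BaseAction {
    @Override
    public String execute() {
        return isLoggedIn() ? SUCCESS : LOGIN;   // result constants inherited from ActionSupport
    }
}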

Related

Automatically call static block without explicitly calling Class.forName

Assume the following code:
import java.util.ArrayList;
import java.util.List;

public class Main {
    public static final List<Object> configuration = new ArrayList<>();

    public static void main(String[] args) {
        System.out.println(configuration);
    }
}
I now want to be able to provide "self-configuring" classes. This means they should be able to simply provide something like a static block that gets called automatically, like this:
public class Custom {
    static {
        Main.configuration.add(Custom.class);
    }
}
If you execute this code, the configuration list is empty (because of the way static blocks are executed). The class is "reachable", but not "loaded". You could add the following to the Main class before the System.out
Class.forName("Custom");
and the list would now contain the Custom class object (since the class is not initialized yet, this call initializes it). But because the control should be inverted (Custom should know Main and not the other way around), this is not a usable approach. Custom should never be called directly from Main or any class that is associated with Main.
What would be possible, though, is the following: you could add an annotation to the class, collect all classes with said annotation using something like the ClassGraph framework, and call Class.forName on each of them.
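A rough sketch of that ClassGraph-based approach (the annotation and package names are made up, and the exact ClassGraph method names should be checked against the version you use):

import io.github.classgraph.ClassGraph;
import io.github.classgraph.ClassInfo;
import io.github.classgraph.ScanResult;

public class SelfConfigurationLoader {
    public static void loadAll() throws ClassNotFoundException {
        try (ScanResult scan = new ClassGraph()
                .enableAnnotationInfo()
                .acceptPackages("com.example")               // limit the scan (assumed package)
                .scan()) {
            for (ClassInfo info : scan.getClassesWithAnnotation("com.example.SelfConfiguring")) {
                // initialize = true runs the static initializer exactly once
                Class.forName(info.getName(), true,
                        Thread.currentThread().getContextClassLoader());
            }
        }
    }
}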
TL;DR
Is there a way to automatically call the static block without having to analyze all classes and without knowing the concrete "self-configuring" class? Ideal would be an approach that, upon starting the application, automatically initializes classes if they are annotated with a certain annotation. I thought about custom ClassLoaders, but from what I understand they are lazy and therefore not usable for this approach.
The background of this is that I want to incorporate it into an annotation processor which creates "self-configuring" code.
Example (warning: design-talk and in depth)
To make this a little less abstract, imagine the following:
You develop a framework. Let's call it Foo. Foo has the classes GlobalRepository and Repository. GlobalRepository follows the Singleton design pattern (only static methods). Both Repository and GlobalRepository have the methods "void add(Object)" and "<T> T get(Class<T>)". If you call get on the Repository and the class cannot be found, it calls GlobalRepository.get(Class).
For convenience, you want to provide an annotation called @Add. This annotation can be placed on type declarations (i.e. classes). An annotation processor creates some configurations, which automatically add all annotated classes to the GlobalRepository and therefore reduce boilerplate code. This should (in all cases) happen only once. Therefore the generated code has a static initializer in which the GlobalRepository is filled, just like you would do with the local repository. Because your configurations have names that are designed to be as unique as possible and for some reason even contain the date of creation (this is a bit arbitrary, but stay with me), they are nearly impossible to guess.
So you also add an annotation to those configurations, which is called @AutoLoad. You require the using developer to call GlobalRepository.load(), after which all classes are analyzed and all classes with this annotation are initialized, and therefore their respective static blocks are called.
This is not a very scalable approach. The bigger the application, the bigger the realm to search, the longer the time, and so on. A better approach would be that, upon starting the application, all classes are automatically initialized, like through a ClassLoader. Something like this is what I am looking for.
First, don’t hold Class objects in your registry. These Class objects would require you to use reflection to get to the actual operation, like instantiating them or invoking certain methods, whose signatures you need to know beforehand anyway.
The standard approach is to use an interface to describe the operations which the dynamic components ought to support. Then, have a registry of implementation instances. These still allow you to defer expensive operations, if you separate them into an operational interface and a factory interface.
E.g. a CharsetProvider is not the actual Charset implementation, but provides access to them on demand. So the existing registry of providers does not consume much memory as long as only common charsets are used.
Once you have defined such a service interface, you may use the standard service discovery mechanism. In the case of jar files or directories containing class files, you create a subdirectory META-INF/services/ containing a file whose name is the qualified name of the interface and whose content is the qualified names of the implementation classes, one per line. Each class path entry may have such a resource.
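For example (interface and implementation names assumed), a jar could ship a plain-text file like this:

# META-INF/services/com.example.MyService  -- one implementation class per line
com.example.CustomImpl
com.example.AnotherImpl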
In the case of Java modules, you can declare such an implementation even more robustly, using
provides service.interface.name with actual.implementation.class;
statements in your module declaration.
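For instance, a complete module declaration could look like this (module and class names assumed):

// module-info.java of the module that ships the implementation
module com.example.custom {
    requires com.example.api;                       // the module containing MyService
    provides com.example.api.MyService
        with com.example.custom.CustomImpl;
}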
Then, the main class may look up the implementations, only knowing the interface, as
List<MyService> registered = new ArrayList<>();
for (Iterator<MyService> i = ServiceLoader.load(MyService.class).iterator(); i.hasNext(); ) {
    registered.add(i.next());
}
or, starting with Java 9
List<MyService> registered = ServiceLoader.load(MyService.class)
        .stream().map(ServiceLoader.Provider::get).collect(Collectors.toList());
The class documentation of ServiceLoader contains a lot more details about this architecture. When you go through the package list of the standard API looking for packages whose names end with .spi, you get an idea of how often this mechanism is already used within the JDK itself. The interfaces are not required to be in packages with such names, though; e.g. implementations of java.sql.Driver are also discovered through this mechanism.
Starting with Java 9, you could even use this to do something like “finding the Class objects for all classes having a certain annotation”, e.g.
List<Class<?>> configuration = ServiceLoader.load(MyService.class)
        .stream()
        .map(ServiceLoader.Provider::type)
        .filter(c -> c.isAnnotationPresent(MyAnnotation.class))
        .collect(Collectors.toList());
but since this still requires the classes to implement a service interface and to be declared as implementations of that interface, it's preferable to use the methods declared by the interface for interacting with the modules.
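Applied to the original question, a sketch under assumed names could look like this: instead of Custom registering itself from a static block, it (or a small companion class) implements a service interface that Main discovers and invokes.

// Service interface known to Main (name assumed)
public interface ConfigurationContributor {
    void contribute(java.util.List<Object> configuration);
}

// A self-registering implementation, declared in
// META-INF/services/ConfigurationContributor (or via a 'provides ... with ...' clause)
public class CustomContributor implements ConfigurationContributor {
    @Override
    public void contribute(java.util.List<Object> configuration) {
        configuration.add(Custom.class);
    }
}

// In Main.main(), instead of relying on static initializers:
for (ConfigurationContributor c : java.util.ServiceLoader.load(ConfigurationContributor.class)) {
    c.contribute(Main.configuration);
}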

Java - Creating a class to dynamically determine if user has access to the calling method

I have tried doing a search for this but I fear I may not be wording what I want to do very well.
Currently, we have about a hundred action classes in our application with each determining if a user has access to it. I would like to make a class that can figure out the calling method, what permissions are required for it, and if the user has those permissions. Unfortunately, I don't really know how to even get started with this as each class may have slightly different requirements.
I'm happy to add more explanation if needed, but as I said, I'm not sure I'm wording this very well. If anyone has a better way of putting it that gets me some Google results, or a link to a related question here that's already been answered, I'd appreciate it.
The current permission checks look like the code below. This is a simple example; there are usually multiple profile checks in one if block.
if (scc.getUser().getCurrentProfile().getSystemAdmin() != 1) {
    logIllegalAccess(log);
    break;
}
IMHO the most elegant solution would make use of annotation processing. The idea is that you would annotate action classes with a custom annotation, something like:
@RequiredPermission(Permissions.SYSADM)
class ActionA {
    public static ActionA newInstance() {
        return new ActionA_Gen(new ActionA());
    }

    private ActionA() {...}
    ...
}
Action classes would have to provide a newInstance() method that is used to create instances instead of calling new directly. The method would create an instance of a class with the same name plus a _Gen suffix. This class would have one method for each method in the original action class, which would perform a permission check and then call the corresponding method on the original class instance that was passed to its constructor.
The _Gen class would be generated by an annotation processor.
Note that by using reflection it might be possible to move the newInstance() method into a common superclass.
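One possible shape of the generated class (purely illustrative: it assumes ActionA's constructor is relaxed to package-private so the generated subclass can extend it, that ActionA declares an execute() method, and that helpers like currentUserHasPermission() and logIllegalAccess() exist in your code base):

class ActionA_Gen extends ActionA {
    private final ActionA delegate;

    ActionA_Gen(ActionA delegate) {
        this.delegate = delegate;
    }

    @Override
    public String execute() {
        // enforce the permission declared by @RequiredPermission on ActionA
        if (!currentUserHasPermission(Permissions.SYSADM)) {   // helper assumed
            logIllegalAccess(log);                             // helper assumed
            return "accessDenied";                             // result name assumed
        }
        return delegate.execute();
    }
}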

Java Interface Design - Helper Methods

I have an interface like so
public interface Manager {
    public void manage();
}
Now, all Managers will need to load work to manage; however, I have mixed feelings about adding public void loadWork() to the interface...
On one hand, all Managers will do this, but on the other hand, users of a Manager class will not need to know about loadWork().
Question: Is it bad practice to add "helper" or "setup" type methods to an interface?
It's not always a bad idea to add "setup" methods to an interface. For example, Java EE has an interface called ServletContextListener that is purely meant for setup and shutdown.
It's even sometimes acceptable to make interfaces with methods you should never call directly yourself, such as the Runnable or Callable interfaces.
That being said, it seems that you want to force your developers to implement a loadWork() method in Manager, but you also want to hide it from the class's users.
As you say, one option is adding the method to the interface, but then the method will be accessible (which you don't want). If you don't want the method to be visible, I see two options:
Make Manager an abstract class and add a protected loadWork() method.
Create an interface called LoadWorker with a method loadWork(). Then create an abstract class AbstractManager that implements Manager and has a private/protected LoadWorker field. This way, even though loadWork() is public, it's not accessible to AbstractManager's users, as it is called through a protected/private field (see the sketch below).
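A rough sketch of the second option (Manager comes from the question; LoadWorker and AbstractManager are the added pieces):

public interface LoadWorker {
    void loadWork();
}

public abstract class AbstractManager implements Manager {
    private final LoadWorker loadWorker;

    protected AbstractManager(LoadWorker loadWorker) {
        this.loadWorker = loadWorker;
    }

    @Override
    public void manage() {
        loadWorker.loadWork();   // loadWork() stays hidden from users of Manager
        doManage();
    }

    protected abstract void doManage();
}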
In the end it comes down to a balance between overengineering and good design. It's up to you to make the decision based on your specific needs; there is no 'perfect solution'.

Struts2 letting an interceptor not run for certain classes

I have a custom interceptor. I would like this interceptor to run on all action invocations except a few. I would like to set this up (for extensibility/clarity) rather than using if/else statements that check the action's name inside the interceptor's intercept() method itself.
I think it might be done with the "exclude method" capabilities of Struts2, but I'm stuck on the exact details. I think my interceptor needs to extend MethodFilterInterceptor, but it has two intercept methods and the API documentation is not very helpful in saying what each should do:
protected abstract String doIntercept(ActionInvocation invocation)
Subclasses must override to implement the interceptor logic.
String intercept(ActionInvocation invocation)
Override to handle interception
You are thinking about it the other way around:
instead of checking the action name (or better, using instanceof to check for a specific action interface) to see whether the interceptor should do its work, simply tell that action to use a different interceptor stack.
For example, you can make your custom stack (the stack containing your interceptor) the default, applied to all actions, but have ActionA, ActionB and ActionX run with the defaultStack instead.
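A sketch of that configuration in struts.xml (package, class and action names are made up):

<package name="default" extends="struts-default">
    <interceptors>
        <interceptor name="myCustom" class="com.example.MyCustomInterceptor"/>
        <interceptor-stack name="customStack">
            <interceptor-ref name="myCustom"/>
            <interceptor-ref name="defaultStack"/>
        </interceptor-stack>
    </interceptors>

    <!-- customStack applies to every action in this package by default -->
    <default-interceptor-ref name="customStack"/>

    <!-- ActionA opts out by referencing the plain defaultStack explicitly -->
    <action name="actionA" class="com.example.ActionA">
        <interceptor-ref name="defaultStack"/>
        <result>/actionA.jsp</result>
    </action>
</package>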
Interceptors shouldn't extend ActionSupport, yuck. They're not actions.
Mark the actions it should (or should not) run for with an interface or annotation and check for that in the interceptor.
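For example, a minimal sketch of the annotation-based check (the annotation name and the placement of the interceptor's real logic are made up):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import com.opensymphony.xwork2.ActionInvocation;
import com.opensymphony.xwork2.interceptor.AbstractInterceptor;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface SkipMyInterceptor {}

public class MyCustomInterceptor extends AbstractInterceptor {

    @Override
    public String intercept(ActionInvocation invocation) throws Exception {
        if (invocation.getAction().getClass().isAnnotationPresent(SkipMyInterceptor.class)) {
            return invocation.invoke();   // excluded action: just pass through
        }
        // ... the interceptor's real work for all other actions goes here ...
        return invocation.invoke();
    }
}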

Implementing an interface from a framework vs simple java interface

This concept is unclear to me.
I have worked with several frameworks, for instance Spring.
To implement a feature we always implement some interfaces provided by the framework.
For instance, if I have to create a custom scope in Spring, my class implements the org.springframework.beans.factory.config.Scope interface, which has some predefined low-level functionality that helps in defining a custom scope for a bean.
Whereas in plain Java, I read that an interface is just a declaration that classes can implement and define their own functionality for. The methods of an interface have no predefined functionality.
interface Car {
    int topSpeed();
    void accelerate();
    void decelerate();
}
The methods here don't have any functionality. They are just declared.
Can anyone explain this discrepancy in the concept? How does the framework put some predefined functionality with interface methods?
It doesn't put predefined functionality in the methods. But when you implement some interface (say I) in your class C, the framework knows that your object (of type C) implements the I interface, and can call certain methods (defined in I) on your object, thus sending some signals/events to your object. These events can be, e.g., 'app initialized', 'app started', 'app stopped', 'app destroyed'. So usually this is what frameworks do. I am talking about frameworks in general here, not Spring in particular.
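A toy illustration of that idea (all names are made up): the framework owns the interface and decides when to call it; your class only supplies the behavior.

interface LifecycleListener {
    void onStart();
    void onStop();
}

class MyComponent implements LifecycleListener {
    @Override public void onStart() { System.out.println("component started"); }
    @Override public void onStop()  { System.out.println("component stopped"); }
}

class TinyFramework {
    void run(LifecycleListener listener) {
        listener.onStart();   // the "app started" signal
        // ... the framework does its own work here ...
        listener.onStop();    // the "app stopped" signal
    }
}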
There is no conceptual difference, actually. Each Java interface method has a very clear responsibility (usually described in its Javadoc). Take Collection.size() as an example. It is defined to return the number of elements in your collection. Having it return a random number is possible, but will cause no end of grief for any caller. Interface methods have defined semantics ;)
As I mentioned in the comments, to some extent, implementing interfaces provided by the framework is replaced by the use of stereotype annotations. For example, you might annotate a class as @Entity to let Spring know to manage it and weave a transaction manager into it.
I have a suspicion that what you are seeing relates to how Spring and other frameworks make use of dynamic proxies to inject functionality.
For an example of Spring injecting functionality, if you annotate a method as @Transactional, then the framework will attempt to create a dynamic proxy which wraps access to your method. That is, when something calls your "save()" method, the call actually goes to the proxy, which might do things like starting a transaction before passing the call to your implementation, and then closing the transaction after your method has completed.
Spring is able to do this at runtime if you have defined an interface, because it is able to create a dynamic proxy which implements the same interface as your class. So where you have:
@Autowired
MyServiceInterface myService;
That is injected with SpringDynamicProxyToMyServiceImpl instead of MyServiceImpl.
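To make the mechanism concrete, here is a toy JDK dynamic proxy (not Spring's actual code; MyServiceInterface is the interface from the snippet above, and the "transaction" handling is just a stand-in):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

class TransactionalProxy implements InvocationHandler {
    private final Object target;

    private TransactionalProxy(Object target) {
        this.target = target;
    }

    static MyServiceInterface wrap(MyServiceInterface impl) {
        return (MyServiceInterface) Proxy.newProxyInstance(
                impl.getClass().getClassLoader(),
                new Class<?>[] { MyServiceInterface.class },
                new TransactionalProxy(impl));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        System.out.println("begin transaction");            // stand-in for real TX handling
        try {
            return method.invoke(target, args);
        } finally {
            System.out.println("commit/rollback transaction");
        }
    }
}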
However, with Spring you may have noticed that you don't always need to use interfaces. This is because it also permits AspectJ compile-time weaving. Using AspectJ actually injects the functionality into your class at compile-time, so that you are no longer forced to use an interface and implementation. You can read more about Spring AOP here:
http://docs.spring.io/spring/docs/4.0.0.RELEASE/spring-framework-reference/htmlsingle/#aop-introduction-defn
I should point out that although Spring does generally enable you to avoid defining both interface and implementation for your beans, it's not such a good idea to take advantage of it. Using separate interface and implementation is very valuable for unit testing, as it enables you to do things like inject a stub which implements an interface, instead of a full-blown implementation of something which needs database access and other rich functionality.
