I am working on a legacy system where a remote bean has become too big and monolithic, and I would like to keep the new functionality that I need to add separate.
My initial idea was, instead of adding my new methods to the existing interface, to create a new interface with all my stuff and add a single method to the old interface that returns a remote object implementing my new interface.
The problem I am facing now is that when I'm invoking the method that returns my object, the runtime tries to serialize it instead of sending the stub.
The code layout is more or less like this:
@Stateless
public class OldBean implements OldRemoteInterface {
    // lots of the old unrelated methods here

    public MyNewStuff getMyNewStuff() {
        return new MyNewStuff();
    }
}
@Remote
public interface OldRemoteInterface {
    // lots of the old unrelated methods declared here

    MyNewStuff getMyNewStuff();
}
public class MyNewStuff implements NewRemoteInterface {
    // methods implemented here
}
@Remote
public interface NewRemoteInterface {
    // new methods declared here
}
And the exception I am getting is:
"IOP00810267: (MARSHAL) An instance of class MyNewStuff could not be marshalled:
the class is not an instance of java.io.Serializable"
I have tried to do it "the old way", extending the java.rmi.Remote interface instead of using the EJB @Remote annotation, and the exception I get is:
"IOP00511403: (INV_OBJREF) Class MyNewStuff not exported, or else is actually
a JRMP stub"
I know I must be missing something that should be obvious... :-/
Your approach here is a bit confusing. When you created the new interface, the next step should have been to have the old bean implement the new interface, like so:
public class OldBean implements OldRemoteInterface, NewRemoteInterface {
Your old bean would get larger, yes, but this is the only way you can expand the functionality of your old bean without creating a new bean or touching the old interface.
The object returned by getMyNewStuff() is just a plain object -- it is not remote. That's why you're getting serialization errors: RMI is trying to transfer your NewRemoteInterface instance across the network by value. Annotating the interface with @Remote doesn't do anything until you actually use that interface on a bean, deploy the bean, and then retrieve it using DI or a context lookup.
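A minimal sketch of that fix, assuming you want to expose both interfaces on the same bean (listing them in @Remote on the bean class is one standard EJB 3 way to do it):

@Stateless
@Remote({ OldRemoteInterface.class, NewRemoteInterface.class })
public class OldBean implements OldRemoteInterface, NewRemoteInterface {
    // old unrelated methods here

    // new methods declared in NewRemoteInterface implemented here
}

Clients that only care about the new functionality can then look the bean up through NewRemoteInterface and never see the old methods.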
I need to dynamically inject a variable group of classes in my application. The purpose is that, as the application grows, I only have to add more classes implementing the same interface. This is easy to do with traditional Java, as I just need to search for all classes in a package and loop over them to instantiate them. I want to do it in CDI. For example:
public interface MyValidatorInterface {
    public boolean validate();
}
@Named
public class MyValidator1 implements MyValidatorInterface
...
@Named
public class MyValidator2 implements MyValidatorInterface
...
Now, some ugly, non-real Java code, just to give the idea of what I want to do:
public MyValidatorFactory {
    for (String className : classNames) {
        @Inject
        MyValidatorInterface<className> myValidatorInstance;
        myValidatorInstance.validate();
    }
}
I want to loop over all implementations found in the classNames list (all will be in the same package, BTW) and inject them dynamically, so that if next week I add a new validator, MyValidator3, I just have to code the new class and add it to the project. The loop in MyValidatorFactory will find it, inject it, and execute the validate() method on the new class too.
I have read about dynamic injection, but I can't find a way to loop over a group of class names and inject them just like I used to instantiate them the old way.
Thanks
What you are describing is what Instance<T> does.
For your sample above, you would do:
`@Inject Instance<MyValidatorInterface> allInstances`
Now, the allInstances variable contains all your beans that have the given type (MyValidatorInterface). You can further narrow down the set by calling select(..) with qualifiers and/or a bean class; this again returns an Instance, but with only the subset of previously matching beans. Finally, you call get(), which retrieveses the bean instance for you.
NOTE: if you call get() straight away (without select) in the above case, you will get an exception, because you have two beans of the given type and CDI cannot determine which one should be used. This is implied by the rules of type-safe resolution.
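For instance, a minimal sketch of narrowing by bean class, reusing the validator names from your question:

@Inject
Instance<MyValidatorInterface> allInstances;

public boolean runFirstValidator() {
    // select(..) narrows the set, so get() is now unambiguous
    MyValidatorInterface validator = allInstances.select(MyValidator1.class).get();
    return validator.validate();
}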
What you most likely want to know is that Instance<T> also implements Iterable, so that's how you get to iterate over the beans. You will want to do something like this:
@Inject
Instance<MyValidatorInterface> allInstances;

public void validateAll() {
    Iterator<MyValidatorInterface> iterator = allInstances.iterator();
    while (iterator.hasNext()) {
        iterator.next().validate();
    }
}
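And since Instance<T> implements Iterable, the enhanced for loop works just as well:

for (MyValidatorInterface validator : allInstances) {
    validator.validate();
}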
I am working on a GWT project with JDK7. It has two entry points (two clients) that are located in separate packages of the project. The clients share some code that is located in a /common package, which is universal and accessible to both by having the following line in their respective XML build files:
<source path='ui/common' />
Both clients have their own specific implementations of the Callback class, which serve their running environments and perform various actions in case of failure or success. I have the following abstract class that implements the AsyncCallback interface and then gets extended by its respective client.
public abstract class AbstractCallback<T> implements AsyncCallback<T> {
    public void handleSuccess(T result) {}
    ...
}
Here are the clients' classes:
public class Client1Callback<T> extends AbstractCallback<T> {...}
and
public class Client2Callback<T> extends AbstractCallback<T> {...}
In the common package, which also contains these callback classes, I am working on implementing the service layer that serves both clients. The clients use the same back-end services and just handle the results differently. Based on the type of the client, I want to build a corresponding instance of an AbstractCallback child without duplicating the anonymous class creation for each call. I am going to have many declarations that will look like the following:
AsyncCallback<MyVO> nextCallback = isClient1 ?
    new Client1Callback<MyVO>("ABC") {
        public void handleSuccess(MyVO result) {
            doThatSameAction(result);
        }
    }
    :
    new Client2Callback<MyVO>("DEF") {
        public void handleSuccess(MyVO result) {
            doThatSameAction(result);
        }
    };
That will result in very verbose code.
The intent (in pseudo-code) is to have the below instead:
AsyncCallback<MyVO> nextCallback = new CallbackTypeResolver.ACallback<MyVO>(clientType, "ABC") {
    public void handleSuccess(MyVO result) {
        doThatSameAction(result);
    }
};
I was playing with the factory pattern to get the right child instance, but quickly realized that I am not able to override the handleSuccess() method after the instance is created.
I think the solution may come from one of two sources:
A different GWT way of dealing with custom Callback implementations; let's call it an alternative existing solution.
Java generics/type-juggling magic.
I may be missing something obvious, and would appreciate any advice.
I've read some articles here and on Oracle's site about type erasure for generics, so I understand that my question may have no direct answer.
Refactor out the handleSuccess behavior into its own class.
The handleSuccess behavior is a separate concern from what else is going on in the AsyncCallback classes; therefore, separate it out into a more useful form. See Why should I prefer composition over inheritance?
Essentially, by doing this refactoring, you are transforming an overridden method into injected behavior that you have more control over. Specifically, you would have instead:
public interface SuccessHandler<T> {
    public void handleSuccess(T result);
}
Your callback would look something like this:
public abstract class AbstractCallback<T> implements AsyncCallback<T> {
    private final SuccessHandler<T> handler;

    protected AbstractCallback(SuccessHandler<T> handler) {
        this.handler = handler; // injected in the constructor
    }
    // etc.

    // not abstract anymore
    public void handleSuccess(T result) {
        handler.handleSuccess(result);
    }
}
Then your pseudocode callback creation statement would be something like:
AsyncCallback<MyVO> nextCallback = new CallbackTypeResolver.ACallback<MyVO>(
    clientType,
    "ABC",
    new SuccessHandler<MyVO>() {
        public void handleSuccess(MyVO result) {
            doThatSameAction(result);
        }
    });
The implementations of SuccessHandler don't have to be anonymous; they can be top-level classes or even inner classes, based on your needs. There's a lot more you can do once you're using this injection-based approach, including creating these handlers with automatically injected dependencies using Gin and Guice Providers. (Gin is a project that integrates Guice, a dependency injection framework, with GWT.)
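For example, a minimal sketch of a reusable top-level handler; SomeService here is a hypothetical collaborator that owns the shared logic:

public class DoThatSameActionHandler implements SuccessHandler<MyVO> {
    private final SomeService service; // hypothetical dependency, e.g. provided by Gin

    public DoThatSameActionHandler(SomeService service) {
        this.service = service;
    }

    @Override
    public void handleSuccess(MyVO result) {
        service.doThatSameAction(result);
    }
}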
In the project I'm working on (not my project, just working on it), there are many structures like this:
project.priv.logic.MyServiceImpl.java
project.priv.service.MyServiceFactoryImpl.java
project.pub.logic.MyServiceIF.java
project.pub.service.MyServiceFactoryIF.java
project.pub.service.MyServiceFactorySupplier.java
And the Service is called like this:
MyServiceFactorySupplier.getMyServiceFactory().getMyService()
I understand that a factory is used to hide the implementation of MyServiceImpl if the location or content of MyServiceImpl changes. But why is there another factory for my factory (the supplier)? I think the probability of my factory changing and of my factory supplier changing are roughly equal. Additionally, I have not found a single case where the factory is created dynamically (I think that would be the case in the abstract factory pattern); it only returns MyServiceFactoryImpl.getInstance(). Is it common practice to implement a FactorySupplier? What are the benefits?
I can think of a couple of examples (some of them quite contrived) where this pattern may be useful. Generally, you have two or more implementations of your services, e.g.:
one for production use / one for testing
one implementation for services accessing a database, another one for accessing a file base storage
different implementations for different locales (translations, formatting of dates and numbers etc)
one implementation for each type of database you want to access
In each of these examples, an initialization of your FactorySupplier is needed at startup of the application; e.g., the FactorySupplier is parametrized with the locale or the database type and produces the respective factories based on these parameters.
If I understand you correctly, you don't have any code of this kind in your application, and the FactorySupplier always returns the same kind of factory.
Maybe this was done to program for extensibility that isn't needed yet, but IMHO this looks more like guessing what the application might need at some time in the future than like a conscious architecture choice.
Suppose you have a hierarchy of classes implementing MyServiceIF.
Suppose you have a matching hierarchy of factory classes to create each of the instances in the original hierarchy.
In that case, MyServiceFactorySupplier could have a registry of available factories, and you might have a call to getMyServiceFactory(parameter), where the parameter determines which factory will be instantiated (and therefore an instance of which class would be created by the factory).
I don't know if that's the use case in your project, but it's a valid use case.
Here's a code sample of what I mean:
public class MyServiceImpl implements MyServiceIF
{
    ....
}
public class MyServiceImpl2 implements MyServiceIF
{
    ....
}
public class MyServiceFactoryImpl implements MyServiceFactoryIF
{
    ....
    public MyServiceIF getMyService ()
    {
        return new MyServiceImpl ();
    }
    ....
}
public class MyServiceFactoryImpl2 implements MyServiceFactoryIF
{
    ....
    public MyServiceIF getMyService ()
    {
        return new MyServiceImpl2 ();
    }
    ....
}
public class MyServiceFactorySupplier
{
    // registry mapping a type key to the factory class that serves it
    private static Map<String, Class<? extends MyServiceFactoryIF>> _registry = new HashMap<> ();
    ....
    public static MyServiceFactoryIF getMyServiceFactory ()
    {
        return new MyServiceFactoryImpl (); // default factory
    }
    public static MyServiceFactoryIF getMyServiceFactory (String type)
    {
        Class<? extends MyServiceFactoryIF> factoryClass = _registry.get (type);
        if (factoryClass != null) {
            try {
                return factoryClass.newInstance ();
            } catch (InstantiationException | IllegalAccessException e) {
                return getMyServiceFactory (); // fall back to the default factory
            }
        } else {
            return getMyServiceFactory (); // default factory
        }
    }
    ....
}
I have a related hierarchy of classes that are instantiated by a hierarchy of factories. While I don't have a FactorySupplier class, I do have, in the base class of the factory hierarchy, a static method BaseFactory.getInstance(parameter), which returns a factory instance that depends on the passed parameter.
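A minimal sketch of that variant (the concrete factory names here are hypothetical):

public abstract class BaseFactory
{
    public static BaseFactory getInstance (String parameter)
    {
        // choose the concrete factory based on the passed parameter
        if ("file".equals (parameter)) {
            return new FileServiceFactory (); // hypothetical concrete factory
        }
        return new DatabaseServiceFactory (); // hypothetical default factory
    }

    public abstract MyServiceIF getMyService ();
}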
I have created an OSGi service with declarative services to inject an object that implements an interface. If I inject the object in a class that is attached to the application model (handler, part, ...) it works fine. If I inject it in a class that is not attached to the application model, it always returns null.
Is it possible to use DI in classes that are not attached to the application model? I looked in the vogella tutorials, but somehow I can't find a solution.
I know of three ways in which Eclipse 4 can inject objects into your classes:
1. During start-up, the Eclipse runtime looks for relevant annotations in the classes it instantiates.
2. Objects injected in 1. are tracked and will be re-injected if changed.
3. Manually triggering injection using ContextInjectionFactory and IEclipseContext.
What you want may be possible with the third option. Here is a code example:
ManipulateModelhandler man = new ManipulateModelhandler();
// inject the context into an object;
// IEclipseContext iEclipseContext was injected into this class
ContextInjectionFactory.inject(man, iEclipseContext);
man.execute();
The problem is, however, that the IEclipseContext already needs to be injected into a class that can access the object needing injection. Depending on the number of necessary injections, it might be more useful to use delegation instead (testability would be one argument):
@Inject
public void setFoo(Foo foo) {
    // Bar is not attached to the e4 application model
    bar.setFoo(foo);
}
Therefore, a better solution is probably to use the @Creatable annotation.
Simply annotate your class, and give it a no-argument constructor.
@Creatable
public class Foo {
    public Foo() {}
}
Using @Inject on that type, as in the method above, will let Eclipse instantiate and inject it.
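In other words, roughly (a sketch; the class receiving the injection must itself be instantiated by Eclipse for this to work):

public class SomePart { // hypothetical class that is attached to the application model
    @Inject
    Foo foo; // Eclipse creates Foo via its no-arg constructor and injects it
}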
The disadvantage is that you cannot control the object creation anymore, as you would with ContextInjectionFactory.inject(..).
I refactored out some part of e(fx)clipse in order to achieve that. Have a look at this. Sorry for the shameless plug...
I am interested in getting the class being proxied from Spring, rather than the proxy.
I.e.:
public class FooImpl extends AbstractFoo<KittyCat> {
    @Transactional
    public void doStuff() {
        getBar();
        // java.lang.ClassCastException: $Proxy26 cannot be cast to
        // com.my.foo.Bar
    }
}
public abstract class AbstractFoo<T extends AbstractBar> {
    public String barBeanName;

    protected T getBar() {
        // java.lang.ClassCastException: $Proxy26 cannot be cast to
        // com.my.foo.Bar
        return (T) appContext.getBean(barBeanName);
    }
}
public class KittyCat extends AbstractBar {
    ...
}
public abstract class AbstractBar {
    ...
}
Are you trying to get the proxied bean only because of the ClassCastException? If you could cast to Bar, would you be happy with that?
When Spring creates a proxy, it checks to see if the bean class implements any interfaces. If it does, then the generated proxy will also implement those interfaces, but it will not extend the target bean's class. It does this using a standard java.lang.reflect.Proxy. This seems to be the case in your example.
If the target bean's class does not implement any interfaces, then Spring will use CGLIB to generate a proxy class which is a subclass of the target bean's class. This is sort of a stop-gap measure for proxying non-interface beans.
You can force Spring to always proxy the target class, but how you do that depends on how you created the Bar proxy to begin with, and you haven't told us that.
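For example, if the proxy comes from annotation-driven transaction management (which the @Transactional in your question suggests), forcing CGLIB class-based proxies would look roughly like this; treat it as a sketch, since your actual configuration may differ:

@Configuration
@EnableTransactionManagement(proxyTargetClass = true) // subclass proxies instead of JDK interface proxies
public class AppConfig {
    // bean definitions ...
}

With XML configuration, the equivalent switch is <tx:annotation-driven proxy-target-class="true"/>.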
The generally preferred solution is to refer to your proxied beans by their interfaces, and then everything works nicely. If your Bar class implements interfaces, could your Foo not refer to one of those interfaces?
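A sketch of that approach, with a hypothetical interface named BarIF:

public interface BarIF {
    void doBarThings(); // hypothetical method
}

public class KittyCat extends AbstractBar implements BarIF {
    ...
}

// in AbstractFoo, ask Spring for the interface type instead of casting:
protected BarIF getBar() {
    return appContext.getBean(barBeanName, BarIF.class);
}

The JDK proxy implements BarIF, so the ClassCastException goes away.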