I'm attempting to write a framework to handle an interface with an external library and its API. As part of that, I need to populate a header field that exists with the same name and type in each of many (70ish) possible message classes. Unfortunately, instead of having each message class derive from a common base class that would contain the header field, each one is entirely separate.
As a toy example:
public class A
{
    public Header header;
    public Integer aData;
}

public class B
{
    public Header header;
    public Long bData;
}
If they had been designed sanely, with A and B deriving from some base class containing the header, I could just do:
public boolean sendMessage(BaseType b)
{
    b.header = populateHeader();
    stuffNecessaryToSendMessage();
    return true;
}
But as it stands, Object is the only common class. The various options I've thought of would be:
A separate method for each type. This would work, and be fast, but the code duplication would be depressingly wasteful.
I could subclass each of the types and have them implement a common Interface. While this would work, creating 70+ subclasses and then modifying the code to use them instead of the original messaging classes is a bridge too far.
Reflection. Workable, but I'd expect it to be too slow (performance is a concern here). A sketch of what I mean is below.
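For illustration, the reflective version I have in mind would look something like this sketch (it assumes every message class exposes a public field named header, as in the toy example above):

import java.lang.reflect.Field;

public boolean sendMessage(Object msg) throws Exception
{
    // look up the like-named public field on whatever message class we got
    Field headerField = msg.getClass().getField("header");
    headerField.set(msg, populateHeader());
    stuffNecessaryToSendMessage();
    return true;
}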
Given these, the separate method for each seems like my best bet, but I'd love to have a better option.
I'd suggest the following. Create a set of interfaces you'd like to have. For example:
public interface HeaderHolder {
    public void setHeader(Header header);
    public Header getHeader();
}
You'd like your classes to implement it, i.e. you'd like your class B to be defined as:
class B implements HeaderHolder {...}
Unfortunately it is not. No problem!
Create a facade:
public class InterfaceWrapper {
    public <T> T wrap(Object obj, Class<T> api) {...}
}
You can implement it at this stage using a dynamic proxy. Yes, a dynamic proxy uses reflection, but set that aside for now.
Once you are done, you can use your InterfaceWrapper as follows:
B b = new B();
new InterfaceWrapper().wrap(b, HeaderHolder.class).setHeader(myHeader);
As you can see, you can now set headers on any class you want (provided it has the appropriate property). Once you are done, check your performance. If and only if the use of reflection in the dynamic proxy is a bottleneck, change the implementation to code generation (e.g. based on a custom annotation, package name, etc.). There are a lot of tools that can help you do this, or alternatively you can implement such logic yourself. The point is that you can always change the implementation of InterfaceWrapper without changing other code.
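For reference, a minimal sketch of such a wrapper built on java.lang.reflect.Proxy might look like this (one possible implementation; it assumes the wrapped class declares public methods matching the interface's signatures):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class InterfaceWrapper {
    @SuppressWarnings("unchecked")
    public <T> T wrap(final Object obj, Class<T> api) {
        return (T) Proxy.newProxyInstance(
                api.getClassLoader(),
                new Class<?>[] { api },
                new InvocationHandler() {
                    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                        // forward each interface call to the like-named
                        // public method on the wrapped object
                        Method target = obj.getClass().getMethod(method.getName(), method.getParameterTypes());
                        return target.invoke(obj, args);
                    }
                });
    }
}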
But avoid premature optimization. Reflection works very efficiently these days; Sun/Oracle worked hard to achieve this. For example, they create classes on the fly and cache them to make reflection faster. So, taking the full flow into consideration, the reflective call probably does not add much overhead.
How about dynamically generating those 70+ subclasses at build time? That way you won't need to maintain 70+ source files while keeping the benefits of the approach from your second bullet.
The only library I know of that can do this is Dozer. It does use reflection, but the good news is that it'll be easier to test whether it's slow than to write your own reflection code only to discover that it's slow.
By default, Dozer will call the same getters/setters on two objects even if the objects are of completely different types. You can configure it in much more complex ways, though. For example, you can tell it to access the fields directly, or give it a custom converter to convert a Map to a List, things like that.
You can just take one populated instance, or perhaps even your own BaseType, and say dozer.map(baseType, SubType.class);
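As a rough sketch of that usage (assuming the classic org.dozer.DozerBeanMapper API and a hypothetical populated template object; note that since the toy classes use public fields rather than getters/setters, field-level access would need to be enabled in Dozer's configuration, as mentioned above):

import org.dozer.DozerBeanMapper;
import org.dozer.Mapper;

public static B toB(A populatedTemplate) {
    Mapper mapper = new DozerBeanMapper();
    // copies the like-named header property (and any other matching
    // properties) onto a freshly created B
    return mapper.map(populatedTemplate, B.class);
}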
Perhaps I have completely fallen short in my search, but I cannot locate any documentation or discussions related to how to write a unit test for a Java class/method that in turn calls other non-private methods. Seemingly, Mockito takes the position that there is perhaps something wrong with the design (not truly OO) if a spy has to be used in order to test a method where mocking internal method calls is necessary. I'm not certain this is always true. But using a spy seems to be the only way to accomplish this. For example, why could you not have a "wrapper" style method that in turn relies on other methods for primitive functionality but additionally provides functionality, error handling, logging, or different branches dependent on results of the other methods, etc.?
So my question is two-fold:
Is it poorly designed and implemented code to have a method that internally calls other methods?
What is the best practice and/or approach in writing a unit test for such a method (assuming it is itself a good idea) if one has chosen Mockito as their mocking framework?
This might be a difficult request, but I would prefer that those who decide to answer not merely re-publish the Mockito verbiage and/or stance on spies, as I am already aware of that approach and ideology. I've also used PowerMockito. To me, the issue is that Mockito developed this framework, and then additional workarounds had to be created to support this need. So I suppose the question I want answered is: if spies are "bad", and PowerMockito were not available, how is one supposed to unit test a method that calls other non-private methods?
Is it poorly designed and implemented code to have a method that internally calls other methods?
Not really. But I'd say that, in this situation, the method that calls the others should be tested as if the others were not already tested separately.
That is, it protects you from situations where your public method stops calling the other ones without you noticing it.
Yes, it makes for (sometimes) a lot of test code. I believe that this is the point: the pain in writing the tests is a good clue that you might want to consider extracting those sub-methods into a separate class.
If I can live with those tests, then I consider that the sub-methods are not to be extracted yet.
What is the best practice and/or approach in writing a unit test for such a method (assuming it is itself a good idea) if one has chosen Mockito as their mocking framework?
I'd do something like that:
public class Blah {
    public int publicMethod() {
        return innerMethod();
    }

    int innerMethod() {
        return 0;
    }
}

// BlahTest.java
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.spy;

import org.junit.Test;

public class BlahTest {
    @Test
    public void blah() throws Exception {
        Blah spy = spy(new Blah());
        doReturn(1).when(spy).innerMethod();
        assertThat(spy.publicMethod()).isEqualTo(1);
    }
}
To me, this question relates strongly to the concept of cohesion.
My answer would be:
It is ok to have methods (public) that call other methods (private) in a class; in fact, very often that is what I think of as good code. There is a caveat to this, however, in that your class should still be strongly cohesive. To me that means the 'state' of your class should be well defined, and the methods (think behaviours) of your class should be involved in changing your class's state in predictable ways.
Is this the case with what you are trying to test? If not, you may be looking at one class when you should be looking at two (or more).
What are the state variables of the class you're trying to test?
You might find that after considering the answers to these types of questions, your code becomes much easier to test in the way you think it should be.
If you really need (or want) to avoid calling the lower-level methods again, you can stub them out instead of mocking them. For example, if method A calls B and C, you can do this:
MyClass classUnderTest = new MyClass() {
    @Override
    public boolean B() { return true; }

    @Override
    public int C() { return 0; }
};
doOtherCommonSetUp(classUnderTest);
String result = classUnderTest.A("whatever");
assertEquals("whatIWant", result);
I've used this quite a bit with legacy code, where extensive refactoring could easily lead to the software version of shipwright's disease: isolate something difficult to test into a small method, and then stub that out.
But if the methods being called are fairly innocuous and don't require mocking, I just let them be called again without worrying about covering every path within them.
The real question should be:
What do I really want to test?
And actually the answer should be:
The behaviour of my object in response to outside changes
That is, depending on the ways one can interact with your object, you want to test every possible scenario, each in its own test. This way, you can make sure that your class reacts according to your expectations in the scenario each test provides.
Is it poorly designed and implemented code to have a method that internally calls other methods?
Not really, not at all! These so-called private methods that are called from public members are simply helper methods, and it is totally correct to have helper methods!
Helper methods are there to break more complex behaviour into smaller pieces of reusable code within the class itself. Only the class knows how it should behave and expose its state through its public members.
It is not rare to see a class with helper methods; normally they encapsulate internal behaviour that the outside world shouldn't interact with directly.
What is the best practice and/or approach in writing a unit test for such a method (assuming it is itself a good idea) if one has chosen Mockito as their mocking framework?
In my humble opinion, you don't test those methods. They get tested when the public members are tested, through the state that you expect of your object upon a public member call. For example, using the MVP pattern, if you want to test user authentication, you should not test every private method, since private methods might as well call other public methods of an object on which the object under test depends, and so forth. Instead, test your view:
public class TestView {
    @Test
    public void test() {
        // arrange
        String expected = "Invalid login or password";
        String login = "SomeLogin";
        String password = "SomePassword";

        // act
        viewUnderTest.connect(login, password);
        String actual = viewUnderTest.getErrorMessage();

        // assert
        assertEquals(expected, actual);
    }
}
This test method describes the expected behaviour of your view once the, let's say, connectButton is clicked. If the ErrorMessage property doesn't contain the expected value, this means that either your view or presenter doesn't behave as expected. You might check whether the presenter subscribed to your view's Connect event, or if your presenter sets the right error message, etc.
The point is that you never need to test what is going on in your private methods. You adjust and correct them while debugging, which in turn exercises the behaviour of your internal methods at the same time, but no special test method should be written expressly for those helper methods.
I have a common jar that does some unmarshalling of a String object. The method should act differently depending on which application it is called from. How can I do that, other than identifying the application by trying to load some unique class it has (which I don't like)? Is there a design pattern that solves this issue?
As I alluded to in my comment, the best thing to do is to break that uber-method up into different methods that encapsulate the specific behaviors, and likely also another method (used by all of the app-specific ones) that deals with the common behaviors.
The most important thing to remember is that behavior matters. If something is behaving differently in different scenarios, a calling application effectively cannot use that method because it doesn't have any control over what happens.
If you still really want to have a single method that all of your applications call that behaves differently in each one, you can do it, using a certain design pattern, in a way that makes sense and is maintainable. The pattern is called "Template Method".
The general idea of it is that the calling application passes in a chunk of logic that the called method wraps around and calls when it needs to. This is very similar to functional programming or programming using closures, where you are passing around chunks of logic as if it were data. While Java proper doesn't support closures, other JVM-based languages like Groovy, Scala, Clojure, JRuby, etc. do support closures.
This same general idea is very powerful in certain circumstances and may apply in your case, but such a question requires very intimate knowledge of the application domain and architecture, and there really isn't enough information in your posted question to dig much deeper.
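As a rough sketch of that idea (all names here are hypothetical, invented for illustration):

// In the common jar: the skeleton algorithm with a hook for app-specific logic.
public abstract class Unmarshaller {
    // Template method: common steps wrap the application-specific hook.
    public final Object unmarshal(String raw) {
        String prepared = stripEnvelope(raw); // common behaviour
        return convert(prepared);             // app-specific behaviour
    }

    private String stripEnvelope(String raw) {
        return raw.trim();
    }

    // Each calling application supplies its own chunk of logic.
    protected abstract Object convert(String prepared);
}

// In a calling application:
Unmarshaller unmarshaller = new Unmarshaller() {
    protected Object convert(String prepared) {
        return Integer.valueOf(prepared);
    }
};
Object result = unmarshaller.unmarshal(" 42 ");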
Actually, I think a good OO solution is, in the common jar, to have one base class and several derived classes. The base class would contain the common logic for the method being called, and each derived class would contain specific behavior.
So, in your jar, you might have the following:
public abstract class JarClass {
    public void jarMethod() {
        // common code here
    }
}

public class JarClassVersion1 extends JarClass {
    public void jarMethod() {
        // initialization code specific to JarClassVersion1
        super.jarMethod();
        // wrap-up code specific to JarClassVersion1
    }
}

public class JarClassVersion2 extends JarClass {
    public void jarMethod() {
        // initialization code specific to JarClassVersion2
        super.jarMethod();
        // wrap-up code specific to JarClassVersion2
    }
}
As to how the caller works, if you are willing to design your code so that the knowledge of which derived class to use resides with the caller, then you obviously just have the caller create the appropriate derived class and call jarMethod.
However, I take it from your question, you want the knowledge of which class to use to reside in the jar. In that case, there are several solutions. But a fairly easy one is to define a factory method inside the jar which creates the appropriate derived class. So, inside the abstract JarClass, you might define the following method:
public static JarClass createJarClass(Class callerClass) {
    if (callerClass.equals(CallerClassType1.class)) {
        return new JarClassVersion1();
    } else if (callerClass.equals(CallerClassType2.class)) {
        return new JarClassVersion2();
    }
    // etc. for all derived classes
    throw new IllegalArgumentException("No JarClass variant for " + callerClass);
}
And then the caller would simply do the following:
JarClass.createJarClass(this.getClass()).jarMethod();
I'm writing a library that needs to have some code if a particular library is included. Since this code is scattered all around the project, it would be nice if users didn't have to comment/uncomment everything themselves.
In C, this would be easy enough with a #define in a header, and then code blocks surrounded with #ifdefs. Of course, Java doesn't have the C preprocessor...
To clarify: several external libraries will be distributed with mine. To keep my executable size down, I do not want to have to include them all. If a developer does include a library, I need to be able to use it; if not, then it can just be ignored.
What is the best way to do this in Java?
There's no way to do what you want from within Java. You could preprocess the Java source files, but that's outside the scope of Java.
Can you not abstract the differences and then vary the implementation?
Based on your clarification, it sounds like you might be able to create a factory method that will return either an object from one of the external libraries or a "stub" class whose functions will do what you would have done in the "not-available" conditional code.
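For instance, a sketch of such a factory (all names hypothetical; the real adapter class, not shown, would delegate to the optional library, and is only instantiated when a probe for that library succeeds):

interface Compressor {
    byte[] compress(byte[] data);
}

class NoOpCompressor implements Compressor {
    public byte[] compress(byte[] data) {
        return data; // the "not-available" behaviour: pass data through
    }
}

public static Compressor newCompressor() {
    try {
        // probe for the optional library on the class path
        Class.forName("com.example.fastcompress.FastCompressor");
        return new FastCompressorAdapter(); // wraps the real library
    } catch (ClassNotFoundException e) {
        return new NoOpCompressor(); // stub fallback
    }
}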
As others have said, there is no such thing as #define/#ifdef in Java. But regarding your problem of having optional external libraries, which you would use if present and not use if not, using proxy classes might be an option (if the library interfaces aren't too big).
I had to do this once for the Mac OS X specific extensions for AWT/Swing (found in com.apple.eawt.*). The classes are, of course, only on the class-path if the application is running on Mac OS. To be able to use them but still allow the same app to be used on other platforms, I wrote simple proxy classes, which just offered the same methods as the original EAWT classes. Internally, the proxies used some reflection to determine if the real classes were on the class-path and would pass through all method calls. By using the java.lang.reflect.Proxy class, you can even create and pass around objects of a type defined in the external library, without having it available at compile time.
For example, the proxy for com.apple.eawt.ApplicationListener looked like this:
public abstract class ApplicationListener {
    private static Class<?> nativeClass;

    static Class<?> getNativeClass() {
        try {
            if (ApplicationListener.nativeClass == null) {
                ApplicationListener.nativeClass = Class.forName("com.apple.eawt.ApplicationListener");
            }
            return ApplicationListener.nativeClass;
        } catch (ClassNotFoundException ex) {
            throw new RuntimeException("This system does not support the Apple EAWT!", ex);
        }
    }

    private Object nativeObject;

    public ApplicationListener() {
        Class<?> nativeClass = ApplicationListener.getNativeClass();
        this.nativeObject = Proxy.newProxyInstance(nativeClass.getClassLoader(), new Class<?>[] {
            nativeClass
        }, new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                String methodName = method.getName();
                ApplicationEvent event = new ApplicationEvent(args[0]);
                if (methodName.equals("handleReOpenApplication")) {
                    ApplicationListener.this.handleReOpenApplication(event);
                } else if (methodName.equals("handleQuit")) {
                    ApplicationListener.this.handleQuit(event);
                } else if (methodName.equals("handlePrintFile")) {
                    ApplicationListener.this.handlePrintFile(event);
                } else if (methodName.equals("handlePreferences")) {
                    ApplicationListener.this.handlePreferences(event);
                } else if (methodName.equals("handleOpenFile")) {
                    ApplicationListener.this.handleOpenFile(event);
                } else if (methodName.equals("handleOpenApplication")) {
                    ApplicationListener.this.handleOpenApplication(event);
                } else if (methodName.equals("handleAbout")) {
                    ApplicationListener.this.handleAbout(event);
                }
                return null;
            }
        });
    }

    Object getNativeObject() {
        return this.nativeObject;
    }

    // followed by abstract definitions of all handle...(ApplicationEvent) methods
}
All this only makes sense, if you need just a few classes from an external library, because you have to do everything via reflection at runtime. For larger libraries, you probably would need some way to automate the generation of the proxies. But then, if you really are that dependent on a large external library, you should just require it at compile time.
Comment by Peter Lawrey (sorry to edit; it's very hard to put code into a comment):
The following example is generic by method, so you don't need to know all the methods involved. You can also make this generic by class, so you only need one InvocationHandler class to cover all cases.
public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
    ApplicationEvent event = new ApplicationEvent(args[0]);
    // look up the like-named handler on this listener and invoke it
    Method target = ApplicationListener.class.getMethod(method.getName(), ApplicationEvent.class);
    return target.invoke(ApplicationListener.this, event);
}
In Java one could use a variety of approaches to achieve the same result:
Dependency Injection
Annotations
Reflection
The Java way is to put behaviour that varies into a set of separate classes abstracted through an interface, then plug in the required class at run time. See also:
Factory pattern
Builder pattern
Strategy pattern
Well, Java syntax is close enough to C that you could simply use the C preprocessor, which is usually shipped as a separate executable.
But Java isn't really about doing things at compile time anyway. The way I've handled similar situations before is with reflection. In your case, since your calls to the possibly-non-present library are scattered throughout the code, I would make a wrapper class, replace all the calls to the library with calls to the wrapper class, and then use reflection inside the wrapper class to invoke the library if it is present.
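A bare-bones sketch of such a wrapper might look like this (the library class and method names are hypothetical; when the class isn't on the class path, the call quietly becomes a no-op):

import java.lang.reflect.Method;

public class LibraryFacade {
    private static final Method DO_WORK;

    static {
        Method m = null;
        try {
            Class<?> c = Class.forName("com.example.optional.OptionalLib");
            m = c.getMethod("doWork", String.class); // assumed static method
        } catch (ClassNotFoundException e) {
            // library not present: leave m null, calls become no-ops
        } catch (NoSuchMethodException e) {
            // unexpected library version: same fallback
        }
        DO_WORK = m;
    }

    public static void doWork(String arg) {
        if (DO_WORK == null) {
            return; // library absent
        }
        try {
            DO_WORK.invoke(null, arg); // null receiver: static method
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}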
Use a constant:
"This week we create some constants that have all of the benefits of using the C preprocessor's facilities to define compile-time constants and conditionally compiled code. Java has gotten rid of the entire notion of a textual preprocessor (if you take Java as a 'descendent' of C/C++). We can, however, get the best benefits of at least some of the C preprocessor's features in Java: constants and conditional compilation."
I don't believe that there really is such a thing. Most true Java users will tell you that this is a Good Thing, and that relying on conditional compilation should be avoided at almost all costs.
I don't really agree with them...
You CAN use constants that can be defined from the compile line, and that will have some of the effect, but not really all. (For example, you can't keep code that doesn't compile but that you still want, as you could inside #if 0... and no, comments don't always solve that problem, because nested comments can be tricky...)
I think that most people will tell you to use some form of inheritance to do this, but that can be very ugly as well, with lots of repeated code...
That said, you CAN always just set up your IDE to run your Java source through the preprocessor before sending it to javac...
"to minimize my executable size"
What do you mean by "executable size"?
If you mean the amount of code loaded at runtime, then you can conditionally load classes through the classloader. So you distribute your alternative code no matter what, but it's only actually loaded if the library that it stands in for is missing. You can use an Adapter (or similar) to encapsulate the API, to make sure that almost all of your code is exactly the same either way, and one of two wrapper classes is loaded according to your case. The Java security SPI might give you some ideas how this can be structured and implemented.
If you mean the size of your .jar file, then you can do the above, but tell your developers how to strip the unnecessary classes out of the jar, in the case where they know they aren't going to be needed.
Here is one more way to do it.
What you need is a final constant:
public static final boolean LIBRARY_INCLUDED = false; // or true: set this manually
Then, inside the code:
if (LIBRARY_INCLUDED) {
    // do what you want to do if the library is included
} else {
    // do what you want to do if the library is not included
}
This works like #ifdef: only one of the blocks will be present in the executable code. The other is eliminated at compile time itself, because the compiler drops branches guarded by a constant condition.
Use properties to do this kind of thing.
Use things like Class.forName to identify the class.
Do not use if-statements when you can trivially translate a property directly to a class.
Depending on what you are doing (there's not quite enough information), you could do something like this:
interface Foo
{
    void foo();
}

class FakeFoo
    implements Foo
{
    public void foo()
    {
        // do nothing
    }
}

class RealFoo
    implements Foo
{
    public void foo()
    {
        // do something
    }
}
and then provide a class to abstract the instantiation:
class FooFactory
{
    public static Foo makeFoo() throws Exception
    {
        final String name;
        final Class<?> fooClass;
        final Foo foo;

        name = System.getProperty("foo.class");
        fooClass = Class.forName(name);
        foo = (Foo)fooClass.newInstance();

        return (foo);
    }
}
Then run java with -Dfoo.class=RealFoo or -Dfoo.class=FakeFoo (the property name must match the one read in makeFoo).
I ignored the exception handling in the makeFoo method, and you can do it other ways... but the idea is the same.
That way you compile both versions of the Foo subclasses and let the developer choose at runtime which they wish to use.
I see you specifying two mutually exclusive problems here (or, more likely, you have chosen one and I'm just not understanding which choice you've made).
You have to make a choice: Are you shipping two versions of your source code (one if the library exists, and one if it does not), or are you shipping a single version and expecting it to work with the library if the library exists.
If you want a single version to detect the library's existence and use it if available, then you MUST ship all the code that accesses it; you cannot trim it out. Since you are equating your problem with using a #define, I assumed this was not your goal; you want to ship two versions (the only way #define can work).
So, with two versions, you can define a LibraryInterface. This can either be an object that wraps your library and forwards all the calls to it, or an interface; in either case this type MUST exist at compile time in both modes.
public LibraryInterface getLibrary()
{
    if (LIBRARY_EXISTS) // final boolean
    {
        // instantiate your wrapper class or reflectively create an instance
        return library;
    }
    return null;
}
Now, when you want to USE your library (cases where you would have had an #ifdef in C), you have this:
if (LIBRARY_EXISTS)
    library.doFunc();
Library is an interface that exists in both cases. Since the call is always guarded by LIBRARY_EXISTS, it will compile out (it should never even load into your class loader, but that's implementation dependent).
If your library is a pre-packaged library provided by a third party, you may have to make Library a wrapper class that forwards its calls to your library. Since your library wrapper is never instantiated if LIBRARY_EXISTS is false, it shouldn't even be loaded at runtime (heck, it shouldn't even be compiled in if the JVM is smart enough, since it's always guarded by a final constant), but remember that the wrapper MUST be available at compile time in both cases.
If it helps, have a look at J2ME Polish or "Using preprocessor directives in BlackBerry JDE plugin for eclipse?".
These are for mobile apps, but the approach can be reused, no?
Assuming I have a class like
public class FooImpl
{
    public void bar() {}
}
Is there a way to create its interface at runtime?
e.g.
public interface Foo
{
    public void bar();
}
I have been looking into Javassist, and the truth is it's reflection that I'm interested in using the interface for (as Esko Luontola and Yishai stated).
So I want an interface that specifies a subset of the original class' methods to make a proxy from.
I came to realize there are more things to be concerned about, like:
Should you reuse that interface or create a new one each time?
The proxy class is effectively a new instance of type java.lang.reflect.Proxy, which may have implications depending on the use case.
The last point made me wonder how some frameworks manage to handle this. Do they deep-copy the object? Do they encapsulate the proxy inside the original instance?
So maybe it's just easier (though maybe not as elegant) to require the client code to create the interface for the class.
You can do it with some bytecode manipulation/generation during class loading, for example using ASM, Javassist or similar, maybe also AspectJ.
The important question is, why would you need to do that? No normal code can use the class through its interface, because the interface does not exist at compile time. You would either need to generate the code that uses the interface or use reflection - but in that case you might as well use the original class. And for the interface to be useful, you should probably also modify the original class so that it implements the generated interface (this can be done with the libraries I mentioned).
You can look at something like Javassist to create the interface. You would go over the class with Class.getMethods(), generate the interface's bytecode at runtime, and then use the Proxy class to bridge the interface and the implementation.
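As a rough sketch of that approach (assuming Javassist's ClassPool/CtNewMethod API; here the class's own declared methods are mirrored, and error handling is left to the caller):

import java.lang.reflect.Method;
import javassist.ClassPool;
import javassist.CtClass;
import javassist.CtNewMethod;

public static Class<?> makeInterfaceFor(Class<?> impl, String interfaceName) throws Exception {
    ClassPool pool = ClassPool.getDefault();
    CtClass intf = pool.makeInterface(interfaceName);
    // mirror each declared method of the class as an abstract interface method
    for (Method m : impl.getDeclaredMethods()) {
        CtClass returnType = pool.get(m.getReturnType().getName());
        CtClass[] params = new CtClass[m.getParameterTypes().length];
        for (int i = 0; i < params.length; i++) {
            params[i] = pool.get(m.getParameterTypes()[i].getName());
        }
        intf.addMethod(CtNewMethod.abstractMethod(returnType, m.getName(), params, new CtClass[0], intf));
    }
    return intf.toClass();
}

// usage, given the FooImpl from the question:
Class<?> foo = makeInterfaceFor(FooImpl.class, "Foo");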