I'm having great fun with the method delegation described here:
http://www.javacodegeeks.com/2015/01/make-agents-not-frameworks.html
This works nicely:
.intercept(MethodDelegation.to(LogInterceptor.class)
    .andThen(SuperMethodCall.INSTANCE))
I can intercept calls and capture arguments passed to methods, which is half of what I want to achieve. However, I haven't found an equally succinct way of capturing the return value. I know I can get a Callable passed to the interceptor which performs the call, but going down that road seems like a sure way to mess up my stacktraces.
It seems to me there should be an easy and canonical-ish way to implement the "around-method" pattern.
Before I start digging into the APIs for reals: Am I missing something?
No, you are not missing anything.
Whenever you manipulate code with Byte Buddy, the manipulation will be reflected in the stack traces of your application. This is intentional, as it makes debugging much easier in case something goes wrong. Think of your log interceptor throwing a runtime exception; if the interceptor were somehow merged into your original method, this would be quite confusing for other developers to figure out. With Byte Buddy's approach, you can simply navigate to the causing source, as your interceptor is in fact reachable from there. With Byte Buddy, no exception is ever thrown from generated code, so any problem can be traced back to source code.
Also, merging stack frames can have strange side effects on caller-sensitive code. For example, a security manager might grant higher permissions to an interceptor than to the intercepted code. Merging stack frames would revert these permissions.
Writing an interceptor with a @SuperCall Callable injected is the canonical way of implementing around advice. Do not worry about performance either. Byte Buddy is written in a way that makes it very easy for the JIT compiler to inline code, so the super method call is most likely executed with zero overhead. (There is even a benchmark demonstrating that.) For your example, generic around advice would look like the following:
import java.util.concurrent.Callable;

import net.bytebuddy.implementation.bind.annotation.RuntimeType;
import net.bytebuddy.implementation.bind.annotation.SuperCall;

public class TimingInterceptor {

    @RuntimeType
    public static Object intercept(@SuperCall Callable<?> zuper) throws Exception {
        long before = System.currentTimeMillis();
        try {
            // Invoke the original (super) method and pass its return value through.
            return zuper.call();
        } finally {
            System.out.println("Took: " + (System.currentTimeMillis() - before));
        }
    }
}
For every method, the time it takes to execute is now printed to the console. You delegate to this code using MethodDelegation.to(TimingInterceptor.class).
Make sure that you use the @RuntimeType annotation. This way, Byte Buddy attempts a cast at runtime, making this generic interception possible.
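For completeness, here is a minimal sketch of wiring the delegation into a generated subclass; MyService and doWork() are hypothetical names, not from the question, and the class loading strategy is just one common choice:
// Sketch: generate and load a subclass of a hypothetical MyService whose methods are timed.
Class<? extends MyService> timedType = new ByteBuddy()
    .subclass(MyService.class)
    .method(ElementMatchers.isDeclaredBy(MyService.class))
    .intercept(MethodDelegation.to(TimingInterceptor.class))
    .make()
    .load(MyService.class.getClassLoader(), ClassLoadingStrategy.Default.WRAPPER)
    .getLoaded();
MyService service = timedType.getDeclaredConstructor().newInstance();
service.doWork(); // prints "Took: ..." around the original implementation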
I have made a call to the Log4j 2 API like this:
Logger logger = LogManager.getLogger("MyLogger");
/**
* syntactic sugar as part of a facade
*/
private void logAtLevel(String level, Supplier<String> messageSupplier, Throwable thrown){
Level priority = Level.toLevel(level, Level.ALL);
if (null == thrown) {
logger.log(priority, messageSupplier);
} else {
logger.log(priority, messageSupplier, thrown);
}
}
/**
* method calling logger
*/
private void callLogging(Object value){
logAtLevel("Debug", ()->String.format("something or other: %s", value), null);
}
My expectation was that the above call would create a log entry "something or other: <object.toString()>"; however, I got "lambda#1223454" instead.
This suggests to me that the method that was executed is log(level, object) rather than the expected log(level, supplier).
Why are the methods resolved the way they are and how can I prevent this (preferably without casting)?
Your Supplier is not Log4j2's Supplier!
java.util.function.Supplier was introduced in Java 8. At around the same time (version 2.4, cf. LOG4J2-599) Log4j2 introduced org.apache.logging.log4j.util.Supplier so that the library can be used on Java 7.
That is why your code does not call log(Level, Supplier<?>, Throwable) but log(Level, Object, Throwable) and you end up with Supplier#toString() being logged instead of Supplier#get().
This is probably something that should change and I filed a wish list bug about it (cf. #1262).
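For reference, a sketch of the fix within your facade, assuming you keep the same logger field: import Log4j2's own Supplier so that the intended overload is chosen at compile time. Only the parameter type changes; the lambda at the call site stays the same.
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.util.Supplier; // Log4j2's Supplier, not java.util.function.Supplier

private void logAtLevel(String level, Supplier<String> messageSupplier, Throwable thrown) {
    Level priority = Level.toLevel(level, Level.ALL);
    if (null == thrown) {
        logger.log(priority, messageSupplier);         // now resolves to log(Level, Supplier<?>)
    } else {
        logger.log(priority, messageSupplier, thrown); // now resolves to log(Level, Supplier<?>, Throwable)
    }
}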
Remark: Wrapping well-established logging APIs like the Log4j 2 API and SLF4J in custom wrappers is not a good idea (cf. this question). While it is not rocket science to write a good wrapper, there are many details you should consider. For example, your wrapper breaks location information.
Doing a level lookup for every call, when your intent is to save every nanosecond for levels that don't matter, is probably something you should reconsider.
At any rate, there is nothing immediately obvious about your snippet that would explain what you observe. So let's get into debugging it and the likely causes.
The code you are running isn't the code you wrote, for example because you haven't (re)compiled.
logger.log is overloaded, and older versions of Log4j 2 exist that contain only the 'Object' variant, which just calls toString(), which would lead to precisely the behaviour you are now witnessing.
You can do some debugging on both of these problems by applying the following trick:
Add System.out.println(TheClassThisCodeIsIn.class.getResource("TheClassThisCodeIsIn.class")); - this will tell you exactly where the class file is at.
Then use javap -v -c path/to/that/file.class and check the actual call to logger.log - is the 'signature' that it links to the Supplier variant or the Object variant?
The different outcomes and what they mean:
The sysout call isn't showing up: then you aren't running the code you are looking at, and you need to investigate your tool stack and how you are compiling, because that part isn't working.
The javap output shows that the 'Object' variant is invoked: in that case the compilation step is using an old version of Log4j 2 on its classpath. It's not about the Log4j 2 version that exists at runtime; it's what the compiler is using, as that determines which overload is chosen.
Everything looks fine and yet you are still seeing lambda#...: then I would start suspecting that this is the actual output of your lambda code somehow, or something really, really wonky is going on. The only things I can think of that get you here are ridiculously exotic, not worth mentioning.
I have classes called say CalculationOutcome and FileHashOutcome. Their constructors have (ActualResult, Throwable) arguments, and at the end of a chain of CompletionStages I have handle(XxxOutcome::new).
It might make intentions clearer and save some boilerplate if I could write say PossiblyWithError<FileHash>.
Edit: People asking for example code...
class FileHashOutcome {
    private final String hash;
    private final Throwable throwable;

    FileHashOutcome(String hash, Throwable throwable) {
        // Usual assignments
        this.hash = hash;
        this.throwable = throwable;
    }
}
CompletionStage<FileHashOutcome> future =
SomeExternalLibrary.calculateHash(file)
// ...It's a CompletionStage<String> at this stage...
.handle(FileHashOutcome::new);
// Then I pass `future` to a service that
// will execute it and pass back the result asynchronously
To be specific, it's within an Akka actor, and I'm asking the infrastructure to pipe the result back into the actor as a message.
Edit: suggestions that CompletableFuture might do the job... Yes, good point; it represents exactly what needs to be represented. And it's a bit of a code smell to implement your own rough-and-ready future and then wrap it in the Java 8 future.
There are some hoops to jump through depending on whether you have access to the "envelope" of the future. If you're inside a thenApply, you're "inside" the envelope and you don't know it. And the Akka infrastructure "throws away" the envelope and only returns the result (and it returns nothing if there was an exception, which is one of the main reasons I wanted to catch and wrap that exception at the end of the CompletionStage chain). Of course, since I created the CompletionStage, I could hold onto it in a member variable until the infrastructure nudged me that it had completed.
Per @marstran, the usual name for this class is Try.
Scala has such a class in its standard library: https://www.scala-lang.org/api/2.9.3/scala/util/Try.html
The same can be done in Java, but it is not a part of the standard library: https://dzone.com/articles/why-try-better-exception-handling-in-java-with-try
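A minimal sketch of such a type in plain Java (the name Outcome and its methods are illustrative, not from any library):
import java.util.concurrent.CompletionStage;

public final class Outcome<T> {
    private final T value;          // set on success, otherwise null
    private final Throwable error;  // set on failure, otherwise null

    public Outcome(T value, Throwable error) {
        this.value = value;
        this.error = error;
    }

    public boolean isSuccess() { return error == null; }
    public T value() { return value; }
    public Throwable error() { return error; }

    // Replaces per-type constructors such as FileHashOutcome::new.
    public static <T> CompletionStage<Outcome<T>> of(CompletionStage<T> stage) {
        return stage.handle(Outcome::new);
    }
}
A CompletionStage<Outcome<FileHash>> then expresses the "possibly with error" intent directly, much like the PossiblyWithError<FileHash> you had in mind.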
In my understanding, code testing is about checking whether results are correct; for a calculator, for example, I would write a test case to verify that the result of 1 + 1 is 2.
But I have read many test cases about verifying the number of times a method is called. I'm very confused about that. The best example is what I just saw in Spring in Action:
public class BraveKnight implements Knight {

    private Quest quest;

    public BraveKnight(Quest quest) {
        this.quest = quest;
    }

    public void embarkOnQuest() {
        quest.embark();
    }
}

public class BraveKnightTest {

    @Test
    public void knightShouldEmbarkOnQuest() {
        Quest mockQuest = mock(Quest.class);
        BraveKnight knight = new BraveKnight(mockQuest);
        knight.embarkOnQuest();
        verify(mockQuest, times(1)).embark();
    }
}
I really have no idea why they need to verify that the embark() function is called one time. Don't you think that embark() will certainly be invoked after embarkOnQuest() is called? And if some error occurs, I will notice error messages in the logs, showing the error line number, which helps me quickly locate the wrong code.
So what's the point of verifying like above?
The need is simple: to verify that the correct number of invocations was made. There are scenarios in which method calls should not happen at all, and others in which they should happen more or less often than the default of exactly once.
Consider the following modified version of embarkOnQuest:
public void embarkOnQuest() {
quest.embark();
quest.embarkAgain();
}
And suppose you are testing error cases for quest.embark():
@Test
public void knightShouldEmbarkOnQuest() {
Quest mockQuest = mock(Quest.class);
Mockito.doThrow(RuntimeException.class).when(mockQuest).embark();
...
}
In this case you want to make sure that quest.embarkAgain is NOT invoked (or is invoked 0 times):
verify(mockQuest, times(0)).embarkAgain(); // or verify(mockQuest, never()).embarkAgain()
Of course, this is just one simple example. There are many other examples that could be added:
A database connector that should cache entries on first fetch: one can make multiple calls and verify that the connection to the database was used just once (per test query), as in the sketch after this list.
A singleton object that does initialization on load (or lazily): one can test that initialization-related calls are made just once.
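As referenced in the first item, a possible sketch of the caching test (CachingConnector, Database and their methods are hypothetical names, not from the question):
@Test
public void fetchesFromTheDatabaseOnlyOnce() {
    Database db = mock(Database.class);                 // hypothetical dependency
    when(db.load("key")).thenReturn("value");

    CachingConnector connector = new CachingConnector(db);
    connector.get("key");
    connector.get("key"); // the second call should be served from the cache

    verify(db, times(1)).load("key"); // the database was queried exactly once
}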
Consider the following code:
public void saveFooIfFlagTrue(Foo foo, boolean flag) {
if (flag) {
fooRepository.save(foo);
}
}
If you don't check the number of times that fooRepository.save() is invoked, then how can you know whether this method is doing what you want it to?
This applies to void methods in general: if a method returns nothing, and there is therefore no response to validate, checking which other methods it calls is a good way of validating that it behaves correctly.
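For instance, a possible test for the method above might look like this (FooService, FooRepository and the constructor wiring are assumptions based on the snippet):
@Test
public void savesOnlyWhenFlagIsTrue() {
    FooRepository fooRepository = mock(FooRepository.class);
    FooService service = new FooService(fooRepository); // assumed constructor injection
    Foo foo = new Foo();

    service.saveFooIfFlagTrue(foo, true);
    verify(fooRepository).save(foo);         // saved exactly once when the flag is true

    service.saveFooIfFlagTrue(foo, false);
    verifyNoMoreInteractions(fooRepository); // nothing further happens when the flag is false
}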
Good question. You raise a good point that mocking can be overly circuitous when you can just check the results. However, there are contexts where this does lead to more robust tests.
For example, if a method needs to make a call to an external API, there are several problems with simply testing the result:
Network I/O is slow. If you have many checks like this, it will slow down your test case
Any round-trip like this would have to rely on the code making the request, the API, and the code interpreting the API's response all to work correctly. This is a lot of failure points for a single test.
If something stupid happens and you accidentally make multiple requests, this could cause performance issues with your program.
To address your sub-questions:
Don't you think that embark() will certainly be invoked after embarkOnQuest() is called?
Tests also have value in letting you refactor without worrying about breaking things. This is obvious now, yes. Will it be obvious in 6 months?
I really have no idea why they need to verify that the embark() function is called one time
Verifying that an invocation on a mock happened a specific number of times is simply how Mockito works when you call Mockito.verify().
In fact, this:
verify(mockQuest, times(1)).embark();
is just a verbose way of writing:
verify(mockQuest).embark();
Generally, verifying a single call on the mock is what you need.
In some less common scenarios you may want to verify that a method was invoked a specific number of times (more than one).
But you should avoid such overly specific verifications, and in fact you should use verification as little as possible.
If you find yourself needing to verify the exact number of invocations on a mock, it generally means one of two things: the mocked dependency is too tightly coupled to the class under test, and/or the method under test performs too many unitary tasks that produce only side effects.
Such a test is not necessarily straightforward to read and maintain; it is as if you had re-coded the mock's control flow in the verification calls.
As a consequence it also makes the test more brittle, since it checks invocation details rather than the overall logic and state.
In most cases a refactoring is the remedy and removes the need to specify a number of invocations.
I am not saying it is never required, but use it only when it is the single decent choice for the class under test.
I have the following code:
when(mockedOperation.getResult(anyDouble(), anyDouble())).thenCallRealMethod();
when(mockedOperation.division(anyDouble(), not(eq(0d)))).thenCallRealMethod();
Where Operation is something like the Command pattern: it encapsulates some concrete action, in this simplified case a division operation. The result is retrieved not directly but by means of a contract method, say getResult(arg1, arg2). So I call
mockedOperation.division(10d, 3d);
But (from debugging info in my concrete implementation of Operation) I can see that division() receives not (10, 3) but (0, 0).
As far as I understand, the arguments are lost somewhere between the thenCallRealMethod() triggered by getResult() and the subsequent call to the real division().
What is the reason for that behavior and how should I implement partial mocks correctly in case I really need it?
UPD: Maybe I should put it another way: simply, how do you create mocks that call real methods in such a way that the arguments are correctly delivered to the endpoint?
OK, the problem is solved now. It turns out I just encountered another undocumented feature/bug in Mockito (or a feature I haven't found the docs for yet). The problem was that in my @Before I also mocked that very operation, and, as it appears, when one redefines a mock, something black-magical happens and the result is as I've already described: the arguments are somehow lost.
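As an aside, when a partial mock really is needed, a spy is often simpler than stubbing with thenCallRealMethod(); a rough sketch, assuming Operation has a no-argument constructor and getResult() returns a double:
// Unstubbed calls on a spy run the real code with the real arguments,
// so nothing is lost between getResult() and division().
Operation operation = Mockito.spy(new Operation());
double result = operation.getResult(10d, 3d);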
I'm wondering if it is an accepted practice or not to avoid multiple calls on the same line with respect to possible NPEs, and if so in what circumstances. For example:
anObj.doThatWith(myObj.getThis());
vs
Object o = myObj.getThis();
anObj.doThatWith(o);
The latter is more verbose, but if there is an NPE, you immediately know what is null. However, it also requires creating a name for the variable and more import statements.
So my questions around this are:
1. Is this problem something worth designing around? Is it better to go for the first or second possibility?
2. Is the creation of a variable name something that would have an effect performance-wise?
3. Is there a proposal to change the exception message to be able to determine what object is null in future versions of Java?
Is this problem something worth designing around? Is it better to go for the first or second possibility?
IMO, no. Go for the version of the code that is most readable.
If you get an NPE that you cannot diagnose then modify the code as required. Alternatively, run it using the debugger and use breakpoints and single stepping to find out where the null pointer is coming from.
Is the creation of a variable name something that would have an effect performance-wise?
Adding an extra variable may increase the stack frame size, or may extend the time that some objects remain reachable. But both effects are unlikely to be significant.
Is there a proposal to change the exception message to be able to determine what object is null in future versions of Java?
Not that I am aware of. Implementing such a feature would probably have significant performance downsides.
The Law of Demeter explicitly says not to do this at all.
If you are sure that getThis() cannot return a null value, the first variant is fine. You can use contract annotations in your code to check such conditions; for instance, Parasoft JTest uses an annotation like @post $result != null and flags any unchecked use of the return value of methods that lack the annotation.
If the method can return null your code should always use the second variant, and check the return value. Only you can decide what to do if the return value is null, it might be ok, or you might want to log an error:
Object o = getThis();
if (null == o) {
log.error("mymethod: Could not retrieve this");
} else {
o.doThat();
}
Personally I dislike the one-liner code "design pattern", so I side with all those who say to keep your code readable. Although I have seen much worse lines of code in existing projects, similar to this:
someMap.put(
someObject.getSomeThing().getSomeOtherThing().getKey(),
someObject.getSomeThing().getSomeOtherThing())
I think no one would argue that this is the way to write maintainable code.
As for using annotations: unfortunately not all developers use the same IDE, and Eclipse users would not benefit from the @Nullable and @NotNull annotations. Without IDE integration these do not have much benefit (apart from some extra documentation). However, I do recommend using assert. While it only helps at run time, it helps find most NPE causes, has no performance impact when assertions are disabled, and makes the assumptions your code relies on clearer.
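A quick sketch of the assert approach applied to the earlier example (only active when the JVM is started with -ea):
Object o = myObj.getThis();
assert o != null : "myObj.getThis() returned null"; // pinpoints the null immediately
anObj.doThatWith(o);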
If it were me, I would change the code to your latter version, but I would also add logging (maybe print) statements with a framework like Log4j, so that if something did go wrong I could check the log files to see what was null.