For a few days I have been stuck on a problem that is quite challenging (for me).
In my current project we have a large SOA-based architecture. Our goal is to monitor and log all incoming requests, the invoked services, the invoked DAOs, and their results. For certain reasons we can't use aspects, so our idea is to connect directly to the JVM and observe what is going on.
In our research we found Byteman and Byte Buddy, which both use the JVM Tool Interface (JVMTI) to connect to the VM and inject code.
Looking closer at Byteman, we discovered that we would have to specify a Byteman rule for every class we want to observe, which in our case is simply not feasible.
Would there be a better, more efficient way to log all incoming requests, the invoked services, the invoked DAOs, and their results? Should we write our own agent that connects via JVMTI? What would you guys recommend?
I think this approach of tracing every specific service method call can quickly become overkill. Wouldn't it be simpler and smarter to use an APM tool?
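If you do end up writing your own agent, note that Byte Buddy's AgentBuilder can match whole packages with a single matcher instead of naming each class, which avoids the per-class rules that made Byteman unattractive here. Below is a minimal sketch, not a drop-in solution: the package prefix com.example.myapp is a placeholder, logging goes to stdout instead of your logging framework, and the exact AgentBuilder API may differ slightly between Byte Buddy versions.

```java
import java.lang.instrument.Instrumentation;

import net.bytebuddy.agent.builder.AgentBuilder;
import net.bytebuddy.asm.Advice;
import net.bytebuddy.matcher.ElementMatchers;

public class MonitoringAgent {

    // Registered via the Premain-Class manifest entry and started with -javaagent.
    public static void premain(String arguments, Instrumentation instrumentation) {
        new AgentBuilder.Default()
                // placeholder: match your own service/DAO packages here
                .type(ElementMatchers.nameStartsWith("com.example.myapp"))
                .transform(new AgentBuilder.Transformer.ForAdvice()
                        // make the advice class visible to the instrumented classes
                        .include(MonitoringAgent.class.getClassLoader())
                        .advice(ElementMatchers.isMethod(), LoggingAdvice.class.getName()))
                .installOn(instrumentation);
    }

    public static class LoggingAdvice {

        @Advice.OnMethodEnter
        public static long enter(@Advice.Origin String method) {
            System.out.println("enter " + method);
            return System.nanoTime();
        }

        @Advice.OnMethodExit(onThrowable = Throwable.class)
        public static void exit(@Advice.Origin String method, @Advice.Enter long start) {
            System.out.println("exit  " + method + " took "
                    + (System.nanoTime() - start) / 1_000 + " µs");
        }
    }
}
```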
I have a Java/Jersey API that is called from the front end. I need to write tests for the Java code. The code is structured like this:
1. The API call executes the resource method; this calls a separate method that gets data from the DB and returns it to the resource method, which then returns a javax.ws.rs.core.Response to the client.
This is going to be my first time writing tests, so please answer as if I know nothing. What is the best way to start here, and what types of tests should I write? Unit tests are what I'm aiming for here.
Now I have done a lot of research here and I’m leaning towards using JUnit + Mockito to do this. But how do I check for the data in a Response object?
And how should I test the other class that gets the data from the DB? I found DBUnit, which can do that, but do I need it?
Another framework I came across was REST Assured. Do I need to include that as well, or can the same things be done with JUnit/Mockito?
I just want some direction from people who have tested Jersey APIs, and to know what the most common way to do this is.
I do not think there is a single best way to do this; what you need to test is often subjective and depends on the context.
However, you can structure your code so that the most important parts are easy to test, and what is left (integration) can be covered later and/or with different tools.
What I suggest is to follow the principles of hexagonal architecture. The idea is to keep all the business rules at the center of your application, without any dependencies (imports, ...) on any framework (JAX-RS, JPA, etc.). These rules can easily be designed with TDD, and the resulting tests run very fast. It may be necessary to use Mockito to mock implementations of the SPI interfaces.
As a second step, you can use this "core" by wiring adapters to the outside world (HTTP, databases, AMQP, etc.), using the API and implementing the SPI interfaces.
If you want to test these adapters, you leave the scope of unit tests and write integration tests: integration with a framework, a protocol, anything really.
These kinds of tests can use a wide variety of tools, from a framework-specific harness (like the Jersey Test Framework) or an in-memory database (like H2) to a fully operational middleware instance run with tools like Testcontainers.
What is important to remember when writing integration tests is that they are slow compared to unit tests. To keep the feedback loop as short as possible, you will want to limit the number of integration tests to a minimum.
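To make the Response question from above concrete, here is a minimal sketch of a JUnit 4 + Mockito unit test for a resource method. UserResource, UserService and getUsers() are made-up stand-ins for your own classes; the point is only to show how to mock the data-access dependency and assert on the javax.ws.rs.core.Response status and entity.

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Arrays;
import java.util.List;

import javax.ws.rs.core.Response;

import org.junit.Test;

public class UserResourceTest {

    // Hypothetical stand-ins for your own service and resource classes.
    interface UserService {
        List<String> findAll();
    }

    static class UserResource {
        private final UserService service;

        UserResource(UserService service) {
            this.service = service;
        }

        Response getUsers() {
            return Response.ok(service.findAll()).build();
        }
    }

    @Test
    public void returnsUsersFromService() {
        // mock the class that talks to the database, so no real DB is needed
        UserService service = mock(UserService.class);
        when(service.findAll()).thenReturn(Arrays.asList("alice", "bob"));

        Response response = new UserResource(service).getUsers();

        // status code and entity can be asserted directly on the Response
        assertEquals(200, response.getStatus());
        assertEquals(Arrays.asList("alice", "bob"), response.getEntity());
    }
}
```

With this split, DBUnit and REST Assured usually only become relevant once you move on to integration tests against a real (or in-memory) database and the actual HTTP layer; for plain unit tests, JUnit + Mockito as above is generally enough.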
Hoping this will help you!
Hi, I am using the Spring 4 AsyncRestTemplate to make 10k REST API calls to a web service. I have a method that creates the request object and a method that calls the web service. I am using the ListenableFuture classes, and the two methods to create and call are enclosed in another method where the response is handled in the future. Any useful links for such a task would be very helpful.
First, set up your testing environment.
Then benchmark what you have.
Then adjust your code and compare
(repeat as necessary).
Whatever you do, there is a cost associated with it. You need to be sure that your costs are measured and understood, every step of the way.
A simple Tomcat application might outperform a Spring application or be equivalent depending on what aspects of Spring's inversion of control are being leveraged. Using a Future might be fast or slow, depending on what it is being compared to. Using non-NIO might be faster or slower, depending on the implementation and the data being processed.
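As a concrete starting point for such a benchmark, here is a hedged sketch of firing the calls through AsyncRestTemplate with a ListenableFutureCallback; the URL is a placeholder. Note that the default constructor uses a thread-per-request executor, so for 10k calls you will want to measure it against a pooled or Netty-based AsyncClientHttpRequestFactory as well.

```java
import org.springframework.http.ResponseEntity;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;
import org.springframework.web.client.AsyncRestTemplate;

public class BulkCaller {

    public static void main(String[] args) {
        AsyncRestTemplate template = new AsyncRestTemplate();

        // placeholder endpoint; replace with the real web service URL
        String url = "http://localhost:8080/api/items/{id}";

        for (int i = 0; i < 10_000; i++) {
            ListenableFuture<ResponseEntity<String>> future =
                    template.getForEntity(url, String.class, i);

            future.addCallback(new ListenableFutureCallback<ResponseEntity<String>>() {
                @Override
                public void onSuccess(ResponseEntity<String> response) {
                    // the response is handled here, "in the future"
                    System.out.println("status: " + response.getStatusCode());
                }

                @Override
                public void onFailure(Throwable t) {
                    System.err.println("call failed: " + t.getMessage());
                }
            });
        }
        // in real code, wait for all callbacks (e.g. with a CountDownLatch) before exiting
    }
}
```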
I was trying to find out whether there is any performance difference between calling a service through its local interface and through its remote interface within the same JVM.
Based on this article:
http://www.onjava.com/pub/a/onjava/2004/11/03/localremote.html?page=last&x-showcontent=text
Local calls should be a bit faster, especially when collections of objects are passed around.
Based on my testing I could not find a big difference between the two, but maybe I was trying it with too small an amount of data.
Anyway, I would like to know whether there is any downside to calling a service through its remote interface when we are in the same JVM. In my project we generate both local and remote interfaces, yet there are no real remote calls; the client and the service live in the same JVM. I am thinking about cleaning up the mess and removing the unnecessary generated remote views, because people have started to use both for no particular reason.
Thanks!
How remote interfaces perform varies between containers; you cannot rely on them performing similarly to local interfaces (though most containers will realize you are actually accessing a 'local' remote interface). There can be differences, like spawning a new thread for the remote call, passing values by reference (you can, for example, turn this on in JBoss for in-VM remote calls), etc.
Serialization is always slow and should be avoided whenever possible.
Basically, just don't do it: there is absolutely no reason to use the remote interfaces unless you plan on splitting your application into multiple EARs.
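If you do clean up and keep only the local views, a bean ends up looking roughly like the following sketch (the names are made up): local-only, so in-VM calls are plain pass-by-reference method calls with no serialization involved.

```java
// OrderService.java
import javax.ejb.Local;

// Hypothetical business interface, local-only because all callers live in the same JVM.
@Local
public interface OrderService {
    String findOrder(long id);
}
```

```java
// OrderServiceBean.java
import javax.ejb.Stateless;

@Stateless
public class OrderServiceBean implements OrderService {

    @Override
    public String findOrder(long id) {
        // local calls pass arguments and return values by reference,
        // so collections of objects are not serialized on every call
        return "order-" + id;
    }
}
```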
I have a number of low-level methods in my Play! 2.0 application (Java) that call an external web service (Neo4j via REST, to be specific). Each of them returns a Promise&lt;WS.Response&gt;. To test these methods I am currently installing a Callback&lt;WS.Response&gt; on their return values via onRedeem. The callbacks contain the assertions to perform on the individual WS.Responses. Each test relies on some specific fixtures that I install and remove via setUpClass and tearDownClass, respectively.
The problem that I am facing is that due to my test code being fully asynchronous, the tear-down logic ends up getting called before all of the Callbacks have had a chance to run. As a result, not all fixtures are being removed, and the database is left in a state that is different from the state it was in before running the tests.
One way to fix this problem would be to call get() with some arbitrary timeout on the Promise objects returned by the functions that are being tested, but that solution seems fairly brittle and unreliable to me. (What if, for some reason not under my application's control, the Web Service calls do not complete within the timeout? In that case, my tests would fail or error out even though my code is actually correct.)
So my question is: Is there a way of testing code that calls external Web Services that is non-blocking and still ensures database consistency? And if there isn't, which of the two approaches outlined above is the "canonical"/accepted way of testing this kind of code?
What if, for some reason not under my application's control, the Web Service calls do not complete within the timeout?
That is a problem for any test that calls external web services, whether asynchronous or not. That is why you should mock out your web service calls in some way, either using a fake web service or a fake implementation of the code that accesses the web service.
You can use e.g. Betamax for that.
I have written tests for asynchronous code before, and I believe your "brittle" approach is actually the right one.
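For the blocking variant, the usual trick is to gate the test on a latch with a generous timeout so that the tear-down logic cannot run while callbacks are still pending. A generic sketch, using CompletableFuture as a stand-in for the Promise&lt;WS.Response&gt; returned by your methods:

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

import org.junit.Test;

public class AsyncCallbackTest {

    // Stand-in for a method that calls the external web service asynchronously.
    private CompletableFuture<String> callWebService() {
        return CompletableFuture.supplyAsync(() -> "200 OK");
    }

    @Test
    public void callbackRunsBeforeTearDown() throws Exception {
        CountDownLatch done = new CountDownLatch(1);
        AtomicReference<String> result = new AtomicReference<>();

        callWebService().whenComplete((response, error) -> {
            result.set(response);
            done.countDown(); // signal that the callback has finished
        });

        // block until the callback has run, or fail after a generous timeout,
        // so that the fixture tear-down cannot race with pending callbacks
        assertTrue("callback did not complete in time", done.await(30, TimeUnit.SECONDS));
        assertEquals("200 OK", result.get());
    }
}
```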
I've been scratching my head over developing a simple plugin-based architecture on top of Spring for one of my current apps. No matter how much separation one achieves using patterns like MVC, one always reaches a point where coupling is inevitable.
So I started weighing my options. At first I thought filters were a good one: every plugin I'd write would be a filter, which I would then simply insert into the filter map. Of course, this creates a bit of overhead when enumerating and checking all the filters, but at least controllers won't have to care what happened to the data before it reached them, or what happens afterwards; they will just fetch the models (through a DAO or whatnot) and return them.
The problem with this is that not all of my app's requests are HTTP-based. Some are triggered by emails, others are internally scheduled (timed), so filters won't help much unless I try to adapt every type of incoming request to an HTTP request, which would be too much.
Another option I thought about was annotation-based AOP, where I annotate every method and the plugins intercept methods based on certain conventions. My problem with this is that, first, I am not very experienced with AOP in general, and second, simply writing all those conventions already implies a bit of coupling.
By far the option that most appeals to my way of thinking is using Spring-based events. Every type of request handler within my app (web controller, email handler, etc.) would be a sort of event dispatcher that dispatches Spring events on every major action. Plugins, in turn, would simply listen for a particular event and run some logic when it happens. This would also let me use option #1, as some of those plugins could be filters as well: when they receive a notification that a certain controller action is done, they may just decide to do nothing and wait until they get called by the filter chain. I see this as a fairly nice approach. Of course, here comes the overhead again of dispatching events, plus the fact that every involved class will be coupled to Spring forever, but I see this as a necessary evil.
My main concern regarding Spring events is performance, both in terms of latency, and memory footprint.
I am still not an expert, so a bunch of feedback here would be of tremendous help. Are Spring events the best fit for this type of architecture, or is there another solution I've missed? I am aware that there may already be third-party solutions out there, so I'd be glad if someone could point out one or two tried and proven ones.
Thanks.
The concept of a plugin can be achieved with the Spring bean factory. If you create a common interface, you can define multiple beans that implement it and inject them where needed. Or you can use a FactoryBean to deliver the right plugin for the job.
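A minimal sketch of that idea, with made-up names: Spring collects every bean that implements the plugin interface into a list, so adding a plugin means adding one more bean and nothing else.

```java
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

// Hypothetical plugin contract; every plugin is just another Spring bean implementing it.
public interface RequestPlugin {
    void onRequest(String request);
}

@Component
class AuditPlugin implements RequestPlugin {
    @Override
    public void onRequest(String request) {
        System.out.println("audit: " + request);
    }
}

@Component
class RequestHandler {

    private final List<RequestPlugin> plugins;

    @Autowired
    RequestHandler(List<RequestPlugin> plugins) {
        // Spring injects all beans implementing RequestPlugin into this list
        this.plugins = plugins;
    }

    public void handle(String request) {
        for (RequestPlugin plugin : plugins) {
            plugin.onRequest(request);
        }
    }
}
```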
Your idea of using events is called an event-driven architecture. This goes a lot further than just plugins, because it not only decouples you from the implementation but also makes it possible to decouple from which instance handles a request (multiple handlers), from the location where it is handled (multiple machines), and from the time at which it is handled (asynchronous handling). The trade-off is increased overall complexity, reduced component-level complexity, and the need for a messaging infrastructure. Often JMS is used, but if you just want a single-node setup, both Spring and Mule offer simple in-memory modes as well.
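For the single-node, in-memory case you describe, plain Spring application events are enough. A hedged sketch with made-up event and bean names: the controller only publishes an event, and each plugin is a listener that Spring wires up by type.

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.ApplicationListener;
import org.springframework.stereotype.Component;

// Hypothetical event fired after a major controller/handler action.
public class ActionCompletedEvent extends ApplicationEvent {

    private final String action;

    public ActionCompletedEvent(Object source, String action) {
        super(source);
        this.action = action;
    }

    public String getAction() {
        return action;
    }
}

// The request handler acts as the event dispatcher.
@Component
class OrderController {

    private final ApplicationEventPublisher publisher;

    @Autowired
    OrderController(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    public void placeOrder(String orderId) {
        // ... business logic ...
        publisher.publishEvent(new ActionCompletedEvent(this, "placeOrder:" + orderId));
    }
}

// A plugin is just a listener; it never touches the controller directly.
@Component
class LoggingPlugin implements ApplicationListener<ActionCompletedEvent> {

    @Override
    public void onApplicationEvent(ActionCompletedEvent event) {
        System.out.println("plugin saw: " + event.getAction());
    }
}
```

Regarding your performance concern: by default these events are dispatched synchronously on the caller's thread, so the overhead is essentially a method call plus a listener lookup; if that ever becomes a bottleneck, you can configure an asynchronous ApplicationEventMulticaster.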
To help you further you should expand a bit on the requirements you are trying to meet and the architectural improvements you want. So far you have mentioned that you want to use plugins and described some possible solutions, but you have not really described what you are trying to achieve.