I have tried unsuccessfully to implement the circuit breaker pattern in Java using the Spring framework.
How can you implement the circuit breaker pattern with Java and Spring?
For a simple, straightforward circuit breaker implementation, check out Failsafe (which I authored). For example:
CircuitBreaker breaker = new CircuitBreaker()
    .withFailureThreshold(5)
    .withSuccessThreshold(3)
    .withDelay(1, TimeUnit.MINUTES);

Failsafe.with(breaker).run(() -> connect());
Doesn't get much simpler.
Apache Commons provides implementations of several types of lightweight circuit breakers; here's a link to the docs.
The project provides the EventCountCircuitBreaker and ThresholdCircuitBreaker classes, as well as an abstract AbstractCircuitBreaker so you can implement your own.
The code is open source and hosted on GitHub, so anyone attempting to implement the pattern should at least take a peek.
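For illustration, here is a minimal sketch of how EventCountCircuitBreaker can be used (the threshold and interval values are arbitrary, and callBackend() is a placeholder for your own code):

import java.util.concurrent.TimeUnit;
import org.apache.commons.lang3.concurrent.EventCountCircuitBreaker;

public class CommonsBreakerExample {

    // Opens once 5 failure events are recorded within one minute
    private final EventCountCircuitBreaker breaker =
        new EventCountCircuitBreaker(5, 1, TimeUnit.MINUTES);

    public String call() {
        if (breaker.checkState()) {
            try {
                return callBackend(); // placeholder for the guarded operation
            } catch (RuntimeException e) {
                breaker.incrementAndCheckState(); // record the failure event
                throw e;
            }
        }
        return "fallback"; // breaker is open: fail fast
    }

    private String callBackend() {
        return "result";
    }
}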
Spring Cloud provides some interesting integration with Hystrix. You should probably have a look into it.
Regarding the pattern itself
You can find a lot of useful information about this pattern on Martin Fowler's blog. It contains a Ruby implementation as well as references to implementations in other languages.
Regarding the Java/Spring implementation
Please check the JRugged library.
It contains a circuit breaker implementation for Spring as well as other design patterns.
You don't actually need to use Spring Cloud or Spring Boot to use Hystrix.
Using hystrix-javanica makes it easy to use Hystrix with plain old Spring too.
Here is an example of fallback methods (both methods, getMessageTimeout and getMessageException, fail by default):
@Configuration
@ComponentScan
@EnableAspectJAutoProxy
public class CircuitBreakingWithHystrix {

    @Bean
    public HystrixCommandAspect hystrixAspect() {
        return new HystrixCommandAspect();
    }

    public static void main(String[] args) throws Throwable {
        ApplicationContext ctx
            = new AnnotationConfigApplicationContext(CircuitBreakingWithHystrix.class);
        ExampleService ex = ctx.getBean(ExampleService.class);
        for (int i = 0; i < 1000; i++) {
            System.out.println(ex.getMessageException());
            System.out.println(ex.getMessageTimeout());
        }
    }

    @Service
    static class ExampleService {

        /*
         * The default Hystrix timeout is 1 second, so the default
         * version of this method will always fail.
         * Uncommenting the @HystrixProperty will cause
         * the method to succeed.
         */
        @HystrixCommand(
            commandProperties = {
                // @HystrixProperty(name = EXECUTION_ISOLATION_THREAD_TIMEOUT_IN_MILLISECONDS,
                //                  value = "5000")
            },
            fallbackMethod = "messageFallback"
        )
        public String getMessageTimeout() {
            try {
                // Pause for 4 seconds
                Thread.sleep(4000);
            } catch (InterruptedException ex) {
                // Do something clever with this
            }
            return "result";
        }

        @HystrixCommand(fallbackMethod = "messageFallback")
        public String getMessageException() {
            throw new RuntimeException("Bad things happened");
        }

        private String messageFallback(Throwable hre) {
            return "fallback";
        }
    }
}
You can also examine the throwable sent to the fallback method to identify why the method call failed.
You can have a look at JCircuitBreaker. It implements a circuit-breaker-like approach.
Please note that this is not a 1:1 implementation of the pattern, because it does not define fixed states like "half-open". Instead it decides whether the breaker should be open or closed based on the current application state (using a so-called "break strategy"). Nevertheless, it should be possible to define a break strategy that evaluates failure thresholds, so the original pattern could also be implemented using JCircuitBreaker.
resilience4j is also an implementation of a circuit breaker for Java.
You can use its 'circuit breaker' as well as its 'retry' module.
Guide: https://resilience4j.readme.io/docs/circuitbreaker
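For illustration, a minimal sketch of the resilience4j circuit breaker API (the breaker name and backendCall() are placeholders):

import java.util.function.Supplier;
import io.github.resilience4j.circuitbreaker.CircuitBreaker;

public class Resilience4jExample {

    public String call() {
        // Circuit breaker with the default configuration
        CircuitBreaker circuitBreaker = CircuitBreaker.ofDefaults("backendService");

        // Decorate the call; when the breaker is open, invoking the
        // supplier is rejected with an exception instead of calling through
        Supplier<String> decorated =
            CircuitBreaker.decorateSupplier(circuitBreaker, this::backendCall);

        return decorated.get();
    }

    private String backendCall() {
        return "result"; // placeholder for the protected call
    }
}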
Related
I have a working production Spring Boot application, and part of it is getting a do-over. It would be very beneficial for me to delete my old @RequestMapping from the ResponseEntity<String> foo()s of my world, keeping the old code as a duplicate while we try to roll out the new functionality behind a feature gate. All production tenants go through my no-longer-declarative foo() function, while all my test and automation tenants can start to tinker with a brand new EntityResponse<String> bar().
The way to implement the change was so clear in my mind:
class Router {

    @Bean
    RouterFunction<ServerResponse> helloWorldRouterFunction(OldHelloWorldService oldHelloWorldService) {
        return RouterFunctions.route()
            .route(RequestPredicates.path("/helloWorld/{option}"), x -> {
                String option = x.pathVariable("option");
                if (FeatureManager.isActive()) {
                    return ServerResponse.ok().body(String.format("New implementation of Hello World! your option is: %s", option));
                } else {
                    // FutureServerResponse is my own bad implementation of the ServerResponse interface
                    return FutureServerResponse.from(oldHelloWorldService.futureFoo(Integer.parseInt(option)));
                }
            })
            .build();
    }
}
Here's the implementation for OldHelloWorldService::futureFoo
@RestController
static class OldHelloWorldService {

    @RequestMapping("/specialCase")
    ResponseEntity<String> specialCase() {
        // some business logic
        return ResponseEntity.ok().body("Special case for Hello World with option 2");
    }

    /**
     * Old declarative implementation, routed via functional {@link ServerRouteConfiguration}
     * to allow dynamic choice based on {@link FeatureManager#isActive()}.
     * <p>
     * As you can see, before the change this function was a {@link RequestMapping} and it handled the
     * completable future; we could return both concrete OK responses with a body and FOUND responses with a location.
     */
    // @RequestMapping("/helloWorld/{option}")
    CompletableFuture<ResponseEntity<String>> futureFoo(
            // @PathVariable
            int option) {
        return CompletableFuture.supplyAsync(() -> {
            if (option == 2) {
                return ResponseEntity.status(HttpStatus.FOUND)
                    .location(URI.create("/specialCase"))
                    .build();
            } else {
                return ResponseEntity.ok().body(String.format("Old implementation of Hello World! your option is: %s", option));
            }
        });
    }
}
This feature lets my backend code decide, in the future, what kind of ResponseEntity it will send. As you can see, a smart function might for instance decide to either show a String message with an OK status, or give a Location header with a FOUND status and no String body at all. Because the result type was a fully fluid ResponseEntity, I had the power to do what I wanted.
Now with an EntityResponse you may still use a CompletionStage, but only as the actual entity. While building the EntityResponse I am required to give it a definitive final status. If it was OK, I can't decide it will be FOUND once my CompletionStage has run its course.
The only problem with the above code is that org.springframework.web.servlet.function does not contain the FutureServerResponse implementation I need. I created my own, and it works, but it feels hacky, and I wouldn't want it in my production code.
Now I feel like the functionality should still be there somewhere. Why isn't there a FutureServerResponse that can decide in the future what it is? Is there a workaround to this problem, maybe somehow (ab)using views?
To state the maybe not-so-obvious: I am contemplating a move to reactive and WebFlux, but changing the entire runtime would have more dramatic implications for current production tenants, and making that move behind a feature gate would be impossible because the URLs would have to be shared between MVC and WebFlux.
It's a niche problem, and functional Web MVC has few resources, so I will greatly appreciate any help I can get.
I have created a GitHub companion repository for this question.
I had to patch Spring Web to get what I needed. I pushed a pull request to Spring Web with my patch; in the end something similar was created and shipped in the 5.3 release of Spring.
If anybody else is looking for the async behavior described in the question, the ServerResponse.async function in Spring 5.3.0+ (Spring Boot 2.4.0+) solves the issue.
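For reference, a minimal sketch of the 5.3+ approach, assuming the futureFoo method from the question and mapping the ResponseEntity onto a ServerResponse by hand (the empty-string body for the body-less FOUND case is a simplification):

.route(RequestPredicates.path("/helloWorld/{option}"), request -> {
    int option = Integer.parseInt(request.pathVariable("option"));
    // Status, headers and body are decided only when the future completes
    return ServerResponse.async(
        oldHelloWorldService.futureFoo(option)
            .thenApply(entity -> ServerResponse.status(entity.getStatusCode())
                .headers(headers -> headers.addAll(entity.getHeaders()))
                .body(entity.getBody() == null ? "" : entity.getBody())));
})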
I was looking at the project https://github.com/MSzturc/cdi-async-events-extension/,
which provides async events in CDI 1.x (built-in async events arrived with 2.0).
Now I'm questioning this piece of code inside the custom Extension:
public <X> void processAnnotatedType(@Observes ProcessAnnotatedType<X> event, final BeanManager beanManager) {
    final AnnotatedType<X> type = event.getAnnotatedType();
    for (AnnotatedMethod<?> method : type.getMethods()) {
        for (final AnnotatedParameter<?> param : method.getParameters()) {
            if (param.isAnnotationPresent(Observes.class) && param.isAnnotationPresent(Async.class)) {
                asyncObservers.add(ObserverMethodHolder.create(this.pool, beanManager, type, method, param));
            }
        }
    }
}

public void afterBeanDiscovery(@Observes AfterBeanDiscovery event) {
    for (ObserverMethod<?> om : this.asyncObservers) {
        event.addObserverMethod(om);
    }
}
Basically, while each bean is being registered, it looks at each method to see if a parameter has the @Async annotation.
Then, after the discovery step, it registers the @Observes @Async methods.
Looking inside the addObserverMethod() method, provided by JBoss Weld 2, I see:
additionalObservers.add(observerMethod);
My question then is: wouldn't those methods be called twice? I mean, they may be registered twice, first by the container itself, then by the call to addObserverMethod().
I am not familiar with the project, but at first glance it seems pretty outdated and unmaintained.
As for the extension: it basically adds the "same" observer method (OM) again, with its own OM implementation. So I would say the behaviour depends on the CDI implementation, as the spec does not guarantee what happens when you register "the same" OM again: is it replaced, or is it just added, like you say?
And by "the same" I mean the exact same underlying Java method, although wrapped in a fancier coat.
Ultimately, you can easily try it and see for yourself, but I would advise against using that project, as any problems you bump into are unlikely to be resolved on the project side.
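For example, a quick way to try it is to fire a single event at an observer that counts deliveries (a hypothetical sketch; Ping is a placeholder event class and @Async is the extension's own annotation):

import java.util.concurrent.atomic.AtomicInteger;
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;

@ApplicationScoped
public class PingObserver {

    public static final AtomicInteger DELIVERIES = new AtomicInteger();

    public void onPing(@Observes @Async Ping event) {
        DELIVERIES.incrementAndGet();
    }
}

// Fire one Ping event, then check whether DELIVERIES reads 1 or 2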
I have been using Togglz for the last few days.
I am trying to find out whether there is an annotation-based approach available in the Togglz API.
I want to do something like this:
public class Application {

    public static void main(String[] args) {
        Application application = new Application();
        boolean first = false;
        first = application.validate1();
        System.out.println(first);
    }

    @Togglz(feature = "FEATURE_01")
    public boolean validate1() {
        System.out.println("validate1");
        return false;
    }
}
Is there anything like this available in Togglz?
I could not find it anywhere; if you know of such an annotation, please help.
My requirement is to skip the method execution based on the feature value passed into it.
No, there is no such annotation in Togglz. You will need a framework that supports interceptors for that (like Spring, CDI, or EJB). Then you can implement such an interceptor yourself.
However, to be honest, I'm not sure such an annotation would make sense. What should the result be if the feature is off? What does the method return? null? Explicit feature checks using a simple if statement are more straightforward in these cases. But that's just my opinion. ;-)
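For reference, a minimal sketch of such an explicit check, assuming a feature enum as described in the Togglz documentation:

import org.togglz.core.Feature;
import org.togglz.core.context.FeatureContext;

public enum MyFeatures implements Feature {
    FEATURE_01;

    public boolean isActive() {
        return FeatureContext.getFeatureManager().isActive(this);
    }
}

The guarded method then checks the feature explicitly:

public boolean validate1() {
    if (!MyFeatures.FEATURE_01.isActive()) {
        return false; // feature off: skip the method body
    }
    System.out.println("validate1");
    return true;
}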
I have written some code which I thought was quite well designed, but then I started writing unit tests for it and stopped being so sure.
It turned out that in order to write some reasonable unit tests, I need to change some of my variables' access modifiers from private to default, i.e. expose them (only within the package, but still...).
Here is a rough overview of the code in question. It is supposed to be a sort of address validation framework that enables address validation by different means, e.g. validating addresses via some external web service, or against data in a DB, or by any other source. So I have the notion of a Module, which is just this: a separate way to validate addresses. I have an interface:
interface Module {
    public void init(InitParams params);
    public ValidationResponse validate(Address address);
}
There is a sort of factory that, based on the request or session state, chooses a proper module:
class ModuleFactory {
    Module selectModule(HttpRequest request) {
        Module module = chooseModule(request); // analyze request and choose a module
        module.init(createInitParams(request)); // init module
        return module;
    }
}
And then I have written a Module that uses an external web service for validation, implemented like this:
class WebServiceModule implements Module {
    private WebServiceFacade webservice;

    public void init(InitParams params) {
        webservice = new WebServiceFacade(createParamsForFacade(params));
    }

    public ValidationResponse validate(Address address) {
        WebService wsResponse = webservice.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
So basically I have this WebServiceFacade, which is a wrapper over the external web service, and my module calls this facade, processes its response, and returns some framework-standard response.
I want to test whether WebServiceModule processes responses from the external web service correctly. Obviously, I can't call the real web service in unit tests, so I'm mocking it. But then again, in order for the module to use my mocked web service, the field webservice must be accessible from the outside. This breaks my design, and I wonder if there is anything I can do about it. Obviously, the facade cannot be passed in the init parameters, because ModuleFactory does not and should not know that it is needed.
I have read that dependency injection might be the answer to such problems, but I can't see how. I have not used any DI framework before, like Guice, so I don't know if it could easily be used in this situation. But maybe it could?
Or maybe I should just change my design?
Or screw it and make this unfortunate field package-private (but leaving a sad comment like // default visibility to allow testing (oh well...) doesn't feel right)?
Bah! While I was writing this, it occurred to me that I could create a WebServiceProcessor that takes a WebServiceFacade as a constructor argument and then test just the WebServiceProcessor. That would be one solution to my problem. What do you think about it? My one problem with it is that my WebServiceModule would then be sort of useless, just delegating all its work to other components; I would say: one layer of abstraction too far.
Yes, your design is wrong. You should use dependency injection instead of new ... inside your class (which is also called a "hardcoded dependency"). The inability to easily write a test is a perfect indicator of a wrong design (read about the "Listen to your tests" paradigm in Growing Object-Oriented Software, Guided by Tests).
BTW, using reflection or a dependency-breaking framework like PowerMock is a very bad practice in this case and should be your last resort.
I agree with what yegor256 said, and I would like to suggest that the reason you ended up in this situation is that you have assigned multiple responsibilities to your modules: creation and validation. This goes against the Single Responsibility Principle and effectively limits your ability to test creation separately from validation.
Consider constraining the responsibility of your "modules" to creation alone. When they only have this responsibility, the naming can be improved as well:
interface ValidatorFactory {
    public Validator createValidator(InitParams params);
}
The validation interface becomes separate:
interface Validator {
    public ValidationResponse validate(Address address);
}
You can then start by implementing the factory:
class WebServiceValidatorFactory implements ValidatorFactory {
    public Validator createValidator(InitParams params) {
        return new WebServiceValidator(new ProdWebServiceFacade(createParamsForFacade(params)));
    }
}
This factory code becomes hard to unit test, since it explicitly references prod code, so keep this impl very concise. Put any logic (like createParamsForFacade) to the side, so that you can test it separately.
The web service validator itself only gets the responsibility of validation, and takes in the façade as a dependency, following the Inversion of Control (IoC) principle:
class WebServiceValidator implements Validator {
    private final WebServiceFacade facade;

    public WebServiceValidator(WebServiceFacade facade) {
        this.facade = facade;
    }

    public ValidationResponse validate(Address address) {
        WebService wsResponse = facade.validate(address);
        ValidationResponse response = processWsResponse(wsResponse);
        return response;
    }
}
Since WebServiceValidator is not controlling the creation of its dependencies anymore, testing becomes a breeze:
@Test
public void aTest() {
    WebServiceValidator validator = new WebServiceValidator(new MockWebServiceFacade());
    ...
}
This way you have effectively inverted the control of the creation of the dependencies: Inversion of Control (IoC)!
Oh, and by the way, write your tests first. That way you will naturally gravitate towards a testable solution, which is usually also the best design. I think this is because testing requires modularity, and modularity is the hallmark of good design.
I'm experimenting with Java annotation processors. I'm able to write integration tests using the JavaCompiler API (in fact I'm using hickory at the moment). I can run the compile process and analyse the output. The problem: a single test runs for about half a second, even without any code in my annotation processor. This is way too long to use it TDD-style.
Mocking away the dependencies seems very hard to me (I would have to mock out the entire javax.lang.model.element package). Has someone succeeded in writing unit tests for an annotation processor (Java 6)? If not, what would your approach be?
This is an old question, but it seems that the state of annotation processor testing hadn't gotten any better, so we released Compile Testing today. The best docs are in package-info.java, but the general idea is that there is a fluent API for testing compilation output when run with an annotation processor. For example,
ASSERT.about(javaSource())
    .that(JavaFileObjects.forResource("HelloWorld.java"))
    .processedWith(new MyAnnotationProcessor())
    .compilesWithoutError()
    .and().generatesSources(JavaFileObjects.forResource("GeneratedHelloWorld.java"));
tests that the processor generates a file that matches GeneratedHelloWorld.java (a golden file on the class path). You can also test that the processor produces error output:
JavaFileObject fileObject = JavaFileObjects.forResource("HelloWorld.java");
ASSERT.about(javaSource())
    .that(fileObject)
    .processedWith(new NoHelloWorld())
    .failsToCompile()
    .withErrorContaining("No types named HelloWorld!").in(fileObject).onLine(23).atColumn(5);
This is obviously a lot simpler than mocking and unlike typical integration tests, all of the output is stored in memory.
You're right, mocking the annotation processing API (with a mock library like EasyMock) is painful. I tried this approach and it broke down pretty rapidly. You have to set up too many method call expectations. The tests become unmaintainable.
A state-based test approach worked reasonably well for me. I had to implement the parts of the javax.lang.model.* API I needed for my tests. (That was only < 350 lines of code.)
This is the part of a test that initializes the javax.lang.model objects. After the setup, the model should be in the same state as in the Java compiler implementation:
DeclaredType typeArgument = declaredType(classElement("returnTypeName"));
DeclaredType validReturnType = declaredType(interfaceElement(GENERATOR_TYPE_NAME), typeArgument);
TypeParameterElement typeParameter = typeParameterElement();
ExecutableElement methodExecutableElement = Model.methodExecutableElement(name, validReturnType, typeParameter);
The static factory methods are defined in the class Model, which implements the javax.lang.model.* classes. For example, declaredType. (All unsupported operations throw exceptions.)
public static DeclaredType declaredType(final Element element, final TypeMirror... argumentTypes) {
    return new DeclaredType() {
        @Override public Element asElement() {
            return element;
        }
        @Override public List<? extends TypeMirror> getTypeArguments() {
            return Arrays.asList(argumentTypes);
        }
        @Override public String toString() {
            return format("DeclareTypeModel[element=%s, argumentTypes=%s]",
                element, Arrays.toString(argumentTypes));
        }
        @Override public <R, P> R accept(TypeVisitor<R, P> v, P p) {
            return v.visitDeclared(this, p);
        }
        @Override public boolean equals(Object obj) { throw new UnsupportedOperationException(); }
        @Override public int hashCode() { throw new UnsupportedOperationException(); }
        @Override public TypeKind getKind() { throw new UnsupportedOperationException(); }
        @Override public TypeMirror getEnclosingType() { throw new UnsupportedOperationException(); }
    };
}
The rest of the test verifies the behavior of the class under test.
Method actual = new Method(environment(), methodExecutableElement);
Method expected = new Method(..);
assertEquals(expected, actual);
You can have a look at the source code of the Quickcheck @Samples and @Iterables source code generator tests. (The code is not optimal yet. The Method class has too many parameters, and the Parameter class is not tested in its own test but as part of the Method test. It should illustrate the approach nevertheless.)
Good luck!
jOOR is a small Java reflection library that also provides simplified access to the in-memory Java compilation API in javax.tools.JavaCompiler. We added support for this to unit test jOOQ's annotation processors. You can easily write unit tests like this:
@Test
public void testCompileWithAnnotationProcessors() {
    AProcessor p = new AProcessor();

    try {
        Reflect.compile(
            "org.joor.test.FailAnnotationProcessing",
            "package org.joor.test; " +
            "@A " +
            "public class FailAnnotationProcessing { " +
            "}",
            new CompileOptions().processors(p)
        ).create().get();
        Assert.fail();
    }
    catch (ReflectException expected) {
        assertFalse(p.processed);
    }
}
The above example was taken from this blog post.
I was in a similar situation, so I created the Avatar library. It won't give you the performance of a pure unit test with no compilation, but if used correctly you shouldn't see much of a performance hit.
Avatar lets you write a source file, annotate it, and convert it to elements in a unit test. This allows you to unit test methods and classes which consume Element objects, without manually invoking javac.
I ran into the same problem a while ago and found this question. Although the other answers provided are decent, I felt there was still room for improvement. Based on the other answers to this question, I created Elementary, a suite of JUnit 5 extensions that provide a real annotation processing environment for unit tests.
Most libraries test annotation processors by running them. However, most annotation processors are pretty complex and broken into finer-grained components. It is not feasible to test individual components by running the annotation processor. Instead, we make the annotation processing environment available to these tests.
The following code snippet illustrates how to test a Lint component:
import com.karuslabs.elementary.junit.Cases;
import com.karuslabs.elementary.junit.Tools;
import com.karuslabs.elementary.junit.ToolsExtension;
import com.karuslabs.elementary.junit.annotations.Case;
import com.karuslabs.elementary.junit.annotations.Introspect;
import com.karuslabs.utilitary.type.TypeMirrors;

@ExtendWith(ToolsExtension.class)
@Introspect
class ToolsExtensionExampleTest {
    Lint lint = new Lint(Tools.typeMirrors());

    @Test
    void lint_string_variable(Cases cases) {
        var first = cases.one("first");
        assertTrue(lint.lint(first));
    }

    @Test
    void lint_method_that_returns_string(Cases cases) {
        var second = cases.get(1);
        assertFalse(lint.lint(second));
    }

    @Case("first") String first;
    @Case String second() { return ""; }
}

class Lint {
    final TypeMirrors types;
    final TypeMirror expectedType;

    Lint(TypeMirrors types) {
        this.types = types;
        this.expectedType = types.type(String.class);
    }

    public boolean lint(Element element) {
        if (!(element instanceof VariableElement)) {
            return false;
        }

        var variable = (VariableElement) element;
        return types.isSameType(expectedType, variable.asType());
    }
}
By annotating the test class with @Introspect and test cases with @Case, we can declare test cases in the same file as the tests. The corresponding Element representation of a test case can be retrieved in a test using Cases.
If anyone is interested, I wrote an article, The Problem with Annotation Processors, that details the problems with unit testing annotation processors.
I have used http://hg.netbeans.org/core-main/raw-file/default/openide.util.lookup/test/unit/src/org/openide/util/test/AnnotationProcessorTestUtils.java though this is based on java.io.File for simplicity and so has the performance overhead you complain about.
Thomas's suggestion of mocking the whole JSR 269 environment would lead to a pure unit test. You might instead want to write more of an integration test which checks how your processor actually runs inside javac, giving more assurance that it is correct, while merely avoiding disk files. Doing this would require you to write a mock JavaFileManager, which is unfortunately not as easy as it seems, and I have no examples handy, but you should not need to mock other things like the Element interfaces.
One option is to bundle all tests into one class. The half second for compiling etc. then becomes a constant for the whole set of tests, and the real time per test is negligible, I assume.