I am trying to find an example of using Hystrix with a SOAP call; all I could find were examples with REST.
From the Hystrix documentation it seems this is possible. If you could point me to an example, that would be helpful.
Also, are there any better ways of having a consistent circuit breaker across REST and SOAP calls (maybe extensible to EJBs)?
You can do this by creating an inner class which extends HystrixCommand and then overriding the run() method.
public class WebServiceClient extends WebServiceGatewaySupport {

    public Response callSoap(Request request) {
        SoapCommand sfc = new SoapCommand(getWebServiceTemplate(), request,
                soapRequestHeaderModifier, configuration);
        return sfc.execute();
    }

    class SoapCommand extends HystrixCommand<Response> {
        private final WebServiceTemplate webServiceTemplate;
        private final Request request;
        // type inferred from marshalSendAndReceive's signature
        private final WebServiceMessageCallback soapRequestHeaderModifier;
        private final Configuration configuration; // application-specific config holding the URI

        SoapCommand(WebServiceTemplate webServiceTemplate, Request request,
                    WebServiceMessageCallback soapRequestHeaderModifier, Configuration configuration) {
            super(HystrixCommandGroupKey.Factory.asKey("example"));
            this.webServiceTemplate = webServiceTemplate;
            this.request = request;
            this.soapRequestHeaderModifier = soapRequestHeaderModifier;
            this.configuration = configuration;
        }

        @Override
        protected Response run() {
            return (Response) webServiceTemplate.marshalSendAndReceive(configuration.getUri(),
                    request, soapRequestHeaderModifier);
        }

        // fallback method goes here (see sketch below)
    }
}
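The fallback is supplied by overriding getFallback(); a minimal sketch (what a sensible default Response looks like depends on your schema, so the body here is only a placeholder):

@Override
protected Response getFallback() {
    // Invoked when run() throws, times out, or the circuit is open.
    return new Response(); // placeholder default, assuming a no-arg constructor
}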
The code I'm working with has the following structure.
public interface SomeService {
    Optional<SomeClass> getThing();
    // more methods
}

public abstract class SomeServiceBase implements SomeService {
    @Override
    public Optional<SomeClass> getThing() {
        // logic
        return this.onGetThing();
    }

    protected abstract Optional<SomeClass> onGetThing();
}
Additionally, there are three different classes that extend SomeServiceBase; each one calls a different third-party external API to get some results, and they all implement their own version of onGetThing().
class FooService extends SomeServiceBase { @Override protected Optional<SomeClass> onGetThing() { ... } }
class DooService extends SomeServiceBase { @Override protected Optional<SomeClass> onGetThing() { ... } }
class RooService extends SomeServiceBase { @Override protected Optional<SomeClass> onGetThing() { ... } }
There's a factory service that wires up all three of the above services and returns the right one based on a "Provider" that is passed in from the client to the API.
Optional<SomeClass> myThing = SomeServiceFactory.getService(provider).getThing();
What I need to do is: if FooService doesn't return a result, I want to retry with DooService. But I am struggling to find a good way to implement this in a somewhat generic, reusable way. Any help is appreciated. Let me know if I need to provide more details.
Maybe you could take a look at the Circuit Breaker pattern.
It allows you to use a "fallback" if the original call raises an exception.
To summarize with your sample:
A circuit breaker is provided/developed around the FooService
If everything is fine on the FooService, the original response will be given back
Else, if the FooService does not provide a response or throws an exception, you will go to the linked fallback
In your fallback you will implement the call to the DooService
You can give Resilience4J a try (it has samples with different kinds of implementations), or Netflix's circuit breaker, Hystrix (now deprecated)
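A minimal sketch with Resilience4J, assuming the SomeService/FooService/DooService types from your question; the CircuitBreaker calls are the plain resilience4j-circuitbreaker API, while the "empty result also falls back" rule is added to match your requirement:

import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import java.util.Optional;
import java.util.function.Supplier;

public class FallbackExample {
    public static Optional<SomeClass> getThingWithFallback(SomeService foo, SomeService doo) {
        CircuitBreaker circuitBreaker = CircuitBreaker.ofDefaults("fooService");

        // Decorate the primary call; failures are recorded by the breaker.
        Supplier<Optional<SomeClass>> decorated =
                CircuitBreaker.decorateSupplier(circuitBreaker, foo::getThing);
        try {
            Optional<SomeClass> result = decorated.get();
            // Your question also wants a retry when Foo returns no result:
            return result.isPresent() ? result : doo.getThing();
        } catch (Exception e) {
            // Thrown call (or open circuit): fall back to the secondary service.
            return doo.getThing();
        }
    }
}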
The controller is sending the success response; after sending the response, it should call another method.
I need to call the m1() method after returning the response.
@RequestMapping(value = {"/hello"}, method = RequestMethod.POST, produces = { "application/json" })
public ResponseEntity<String> getAllData(@RequestBody String request) {
    return new ResponseEntity<>("Hello World", HttpStatus.OK);
}

public void m1() {
}
The simple trick is to use try...finally.
try{
return new Response();
} finally {
//do after actions
}
'finally' will always execute after the try block, no matter whether there is a return statement in it.
Example for Spring AspectJ using @AfterReturning advice:
@Aspect
@Component
public class A {
    /*
     * AfterReturning advice
     */
    @AfterReturning("execution(* com.package.ClassName.getAllData(..))")
    public void m1(JoinPoint joinPoint) {
    }
}
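For the advice to fire, aspect auto-proxying has to be enabled; a minimal configuration sketch (the class name is illustrative, and Spring Boot with spring-boot-starter-aop enables this automatically):

@Configuration
@EnableAspectJAutoProxy
public class AopConfig {
    // No beans needed here; this just turns on aspect weaving via proxies.
}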
You need to add @EnableAsync in your configuration class or main class,
then create another service class encapsulating an @Async method m1().
In your class, add the statement below:
asyncServiceImpl.m1();
@Service
public class AsyncServiceImpl {
    @Async
    public CompletableFuture<String> m1() {
        // Your logic here
        return CompletableFuture.completedFuture(String.valueOf(Boolean.FALSE));
    }
}
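A sketch of the wiring described above (the class names are illustrative; @EnableAsync is what makes @Async take effect):

@SpringBootApplication
@EnableAsync
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

@RestController
class MyController {
    @Autowired
    private AsyncServiceImpl asyncServiceImpl;

    @PostMapping("/hello")
    public ResponseEntity<String> getAllData(@RequestBody String request) {
        asyncServiceImpl.m1(); // runs on a separate executor thread; the response is not blocked
        return new ResponseEntity<>("Hello World", HttpStatus.OK);
    }
}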
You can use an event listener: publish an event after sending the response,
and catch it in a public method annotated with @EventListener.
https://www.baeldung.com/spring-events
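For example (the event type and names are illustrative; note that @EventListener methods run synchronously by default, so combine with @Async if m1() must not delay the caller):

// Hypothetical event carrying whatever m1() needs.
public class ResponseSentEvent {
    private final String request;
    public ResponseSentEvent(String request) { this.request = request; }
    public String getRequest() { return request; }
}

@RestController
class MyController {
    private final ApplicationEventPublisher publisher;

    MyController(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    @PostMapping("/hello")
    public ResponseEntity<String> getAllData(@RequestBody String request) {
        publisher.publishEvent(new ResponseSentEvent(request));
        return new ResponseEntity<>("Hello World", HttpStatus.OK);
    }
}

@Component
class AfterResponseListener {
    @EventListener
    public void onResponseSent(ResponseSentEvent event) {
        // m1() logic goes here
    }
}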
The most simple and reliable way is to run a thread; try...finally isn't good at all.
But the best solution is to throw out Spring Boot and use pure JEE servlets to invoke all that you need (JSON, JPA, bean-managed transactions) in the client request's thread, so you will never get stuck like that.
tldr: Is there a way to make an internal request (using the method's path) without going to the internet?
--
Why do I need it? I have a project which receives many events. The decision of who will handle each event is made by a Controller. So I have something similar to this:
@RestController
@RequestMapping("/events")
public class EventHandlerAPI {
    @Autowired
    private EventAHandler eventAhandler;
    @Autowired
    private EventBHandler eventBhandler;

    @PostMapping("/a")
    public void handleEventA(@RequestBody EventA event) {
        eventAhandler.handle(event);
    }

    @PostMapping("/b")
    public void handleEventB(@RequestBody EventB event) {
        eventBhandler.handle(event);
    }
}
We recently added support for receiving events through a queue service. It sends us the payload and the event class. Our decision is to keep both interfaces working (REST and queue). The solution to avoid code duplication was to keep the controller choosing which handler will take care of each event. The code nowadays is similar to this:
@Configuration
public class EventHandlerQueueConsumer {
    @Autowired
    private EventHandlerAPI eventHandlerAPI;

    private Map<Class, EventHandler> eventHandlers;

    @PostConstruct
    public void init() {
        /* start listening to the queue */
        declareEventHandlers();
    }

    private void declareEventHandlers() {
        eventHandlers = new HashMap<>();
        // keyed by event class, since lookup below uses event.getClass()
        eventHandlers.put(EventA.class, (EventHandler<EventA>) eventHandlerAPI::handleEventA);
        eventHandlers.put(EventB.class, (EventHandler<EventB>) eventHandlerAPI::handleEventB);
    }

    private void onEventReceived(AbstractEvent event) {
        EventHandler eventHandler = eventHandlers.get(event.getClass());
        eventHandler.handle(event);
    }

    private interface EventHandler<T extends AbstractEvent> {
        void handle(T event);
    }
}
This code works, but it doesn't let the controller choose who will handle the event (our intention). The decision is actually being made by the map.
What I would like to do is to invoke the controller method through its request mapping without going to the internet. Something like this:
@Configuration
public class EventHandlerQueueConsumer {
    // MADE-UP CLASS TO SHOW WHAT I WANT
    @Autowired
    private ControllerInvoker controllerInvoker;

    @PostConstruct
    public void init() { /* start listening to the queue */ }

    private void onEventReceived(AbstractEvent event) {
        controllerInvoker.post(event.getPath(), new Object[] { event });
    }
}
This way is much cleaner and lets all the decisions be made by the controller.
I've researched a lot and didn't find a way to implement it. Debugging Spring, I found how it routes the request after the DispatcherServlet, but all the Spring internals use HttpServletRequest and HttpServletResponse :(
Is there a way to make an internal request (using the method's path) without going to the internet?
They are classes of the same application
Then it should be easy enough.
1) You can call your own API on http(s)://localhost:{port}/api/{path} using the RestTemplate utility class. This is the preferred way, since you'll follow the standard MVC pattern. Something like:
restTemplate.exchange(uri, HttpMethod.POST, httpEntity, ResponseClass.class);
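Fleshed out a little (the URL, port, and payload types are placeholders):

RestTemplate restTemplate = new RestTemplate();
HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.APPLICATION_JSON);
HttpEntity<AbstractEvent> httpEntity = new HttpEntity<>(event, headers);

ResponseEntity<Void> response = restTemplate.exchange(
        "http://localhost:8080/events" + event.getPath(), // e.g. "/a" or "/b"
        HttpMethod.POST, httpEntity, Void.class);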
2) If you don't want to invoke a network connection at all, then you can either use Spring's internals to find the mapping/method map, or use some reflection to build a custom map upon the controller's startup. Then you can pass your event/object to the method from the map in the way shown in your mock-up class. Something like:
#RequestMapping("foo")
public void fooMethod() {
System.out.println("mapping = " + getMapping("fooMethod")); // you can get all methods/mapping in #PostContruct initialization phase
}
private String getMapping(String methodName) {
Method methods[] = this.getClass().getMethods();
for (int i = 0; i < methods.length; i++) {
if (methods[i].getName() == methodName) {
String mapping[] = methods[i].getAnnotation(RequestMapping.class).value();
if (mapping.length > 0) {
return mapping[mapping.length - 1];
}
}
}
return null;
}
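As for using Spring's internals instead of hand-rolled reflection: Spring MVC exposes the full mapping registry through the RequestMappingHandlerMapping bean, which you could query once at startup. A sketch (the bean and getHandlerMethods() are standard Spring MVC; the surrounding class is illustrative):

@Component
public class MappingInspector {
    @Autowired
    private RequestMappingHandlerMapping handlerMapping;

    @PostConstruct
    public void dumpMappings() {
        // Map of RequestMappingInfo (paths, HTTP methods, ...) to HandlerMethod
        handlerMapping.getHandlerMethods().forEach((info, method) ->
                System.out.println(info + " -> " + method));
    }
}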
I am trying to follow the REST client implementation pattern described in the Google I/O Dobjanschi video here and am using Retrofit2 for the REST API calls.
Based on the REST client pattern described above, I introduced a ServiceHelper layer that calls the actual API method via Retrofit. However, I don't have a clean way to call the interface methods from the ServiceHelper layer.
I currently have an enum of the available API calls and pass that from the ServiceHelper. In my ApiProcessor I introduced a function that uses a giant if...else if ladder, which returns the appropriate Retrofit API interface call based on the enum passed in. I haven't really found a better/cleaner approach to this.
Is there a better / cleaner way to map these? Or any other ideas to do this?
You should throw away that monolithic ServiceHelper and create several repositories following the Repository pattern, in order to encapsulate and distribute responsibilities between classes.
Actually, the Retrofit API itself favors composition over inheritance, so you can easily create as many interfaces as needed and use them in the right repository.
Without the code it is a bit hard to "inspect" your solution. :)
As you describe it, this is not really the best way to solve the problem (in my opinion), although there are a ton of approaches along the lines of "if it works, it is OK".
In my opinion a slightly cleaner solution would be the following: your helper is a good thing. It should be used to hide all the details of the API you are using.
It is a good thing to hide that API-specific stuff, because if it changes you are only forced to change your helper/adapter. My recommendation is to use multiple methods in the ApiProcessor, and not enums. It is a bit easier to maintain and fix if something changes, plus you do not have to take care of your enum; see the sketch below.
TL;DR: If it works, it is probably OK. You do not have to write million-dollar production code to test something, but if you would like to build good habits you should consider refactoring that code into separate processor methods.
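A hypothetical sketch of that refactoring: one processor method per API call instead of an enum plus an if...else ladder (UserApi and the method names are assumptions, not your actual code):

public class ApiProcessor {
    private final UserApi userApi; // e.g. retrofit.create(UserApi.class)

    public ApiProcessor(UserApi userApi) {
        this.userApi = userApi;
    }

    // One dedicated method per API call; callers pick the method, not an enum constant.
    public Call<List<User>> fetchUsers() {
        return userApi.getUsers();
    }

    public Call<User> fetchUser(long id) {
        return userApi.getUser(id);
    }
}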
You can follow the service pattern:
a) Create a resource interface containing your exposed REST methods, e.g.:
public interface AppResource {
    @Headers({"Accept: application/json", "Content-Type: application/json"})
    @GET(ApiConstants.API_VERSION_V1 + "/users")
    Call<List<User>> getUsers();
}
b) Create a RetrofitFactory:
public class RetrofitFactory {
    private static Retrofit userRetrofit;

    @NonNull
    private static Retrofit initRetrofit(String serverUrl) {
        final HttpLoggingInterceptor logging = new HttpLoggingInterceptor();
        // set your desired log level
        logging.setLevel(HttpLoggingInterceptor.Level.BODY);
        final OkHttpClient okHttpClient = new OkHttpClient.Builder()
                .connectTimeout(10, TimeUnit.SECONDS)
                .readTimeout(30, TimeUnit.SECONDS)
                .addInterceptor(new Interceptor() {
                    @Override
                    public Response intercept(Chain chain) throws IOException {
                        final Request original = chain.request();
                        final Request request = original.newBuilder()
                                .method(original.method(), original.body())
                                .build();
                        return chain.proceed(request);
                    }
                })
                .addInterceptor(logging)
                .build();
        return new Retrofit.Builder()
                .baseUrl(serverUrl)
                .addConverterFactory(JacksonConverterFactory.create())
                .client(okHttpClient)
                .build();
    }

    public static Retrofit getUserRetrofit() {
        if (userRetrofit == null) {
            final String serverUrl = context.getString(R.string.server_url); // Get context
            userRetrofit = initRetrofit(serverUrl);
        }
        return userRetrofit;
    }
}
c) Create an abstract BaseService which every service will extend:
public abstract class BaseService<Resource> {
    protected final Resource resource;
    final Retrofit retrofit;

    public BaseService(Class<Resource> clazz) {
        retrofit = RetrofitFactory.getUserRetrofit();
        resource = retrofit.create(clazz);
    }

    protected <T> void handleResponse(Call<T> call, final ResponseHandler<T> responseHandler) {
        call.enqueue(new Callback<T>() {
            @Override
            public void onResponse(final Call<T> call, final Response<T> response) {
                if (response.isSuccessful()) {
                    if (responseHandler != null) {
                        responseHandler.onResponse(response.body());
                    }
                } else {
                    // parseError(...) is assumed to convert the raw response into an ErrorResponse
                    final ErrorResponse errorResponse = parseError(response);
                    if (responseHandler != null) {
                        responseHandler.onError(errorResponse);
                    }
                }
            }

            @Override
            public void onFailure(final Call<T> call, final Throwable throwable) {
                if (responseHandler != null) {
                    responseHandler.onFailure(throwable);
                }
            }
        });
    }
}
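The ResponseHandler callback used above is not shown in the answer; presumably it looks something like this (method names inferred from the calls in BaseService):

public interface ResponseHandler<T> {
    void onResponse(T response);
    void onError(ErrorResponse errorResponse);
    void onFailure(Throwable throwable);
}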
d) Now your user service with its response handler:
public interface UserService {
    void getUsers(ResponseHandler<List<User>> userListResponse) throws UserServiceException;
}
e) Now your user service implementation class, which extends BaseService:
public class UserServiceImpl extends BaseService<AppResource> implements UserService {
    public UserServiceImpl() {
        super(AppResource.class);
    }

    @Override
    public void getUsers(ResponseHandler<List<User>> userListResponse) throws UserServiceException {
        final Call<List<User>> response = resource.getUsers();
        handleResponse(response, userListResponse);
    }
}
f) Create a service factory which you will reuse to call services, e.g.:
public class ServiceFactory {
    private static UserService userService;

    public static UserService getUserService() {
        if (userService == null) {
            userService = new UserServiceImpl();
        }
        return userService;
    }
}
g) Now simply call the service and pass your response handler:
try {
    ServiceFactory.getUserService().getUsers(getUserListResponseHandler());
} catch (UserServiceException e) {
    // log your exception
}
Please look at the code I posted below. FYI, this is from the Oracle website's websocket sample:
https://netbeans.org/kb/docs/javaee/maven-websocketapi.html
My question is, how does this work?! -- especially the broadcastFigure function of MyWhiteboard. It is not an abstract function that is overridden, and it is not "registered" with another class in the traditional sense. The only way I see it: when the compiler sees the @OnMessage annotation, it inserts the broadcastFigure call into the compiled code for when a new message is received. But before calling this function, it flows the received data through the FigureDecoder class, based on this decoder being specified in the @ServerEndpoint annotation. Within broadcastFigure, when sendObject is called, the compiler inserts a reference to FigureEncoder, based on what's specified in @ServerEndpoint. Is this accurate?
If so, why did this implementation do things this way, using annotations? Before looking at this, I would have expected there to be an abstract OnMessage function which needs to be overridden, and explicit registration functions for the Encoder and Decoder. Instead of such a "traditional" approach, why does the websocket implementation do it via annotations?
Thank you.
MyWhiteboard.java:
@ServerEndpoint(value = "/whiteboardendpoint", encoders = {FigureEncoder.class}, decoders = {FigureDecoder.class})
public class MyWhiteboard {
    private static Set<Session> peers = Collections.synchronizedSet(new HashSet<Session>());

    @OnMessage
    public void broadcastFigure(Figure figure, Session session) throws IOException, EncodeException {
        System.out.println("broadcastFigure: " + figure);
        for (Session peer : peers) {
            if (!peer.equals(session)) {
                peer.getBasicRemote().sendObject(figure);
            }
        }
    }

    @OnError
    public void onError(Throwable t) {
    }

    @OnClose
    public void onClose(Session peer) {
        peers.remove(peer);
    }

    @OnOpen
    public void onOpen(Session peer) {
        peers.add(peer);
    }
}
FigureEncoder.java
public class FigureEncoder implements Encoder.Text<Figure> {
    @Override
    public String encode(Figure figure) throws EncodeException {
        return figure.getJson().toString();
    }

    @Override
    public void init(EndpointConfig config) {
        System.out.println("init");
    }

    @Override
    public void destroy() {
        System.out.println("destroy");
    }
}
FigureDecoder.java:
public class FigureDecoder implements Decoder.Text<Figure> {
    @Override
    public Figure decode(String string) throws DecodeException {
        JsonObject jsonObject = Json.createReader(new StringReader(string)).readObject();
        return new Figure(jsonObject);
    }

    @Override
    public boolean willDecode(String string) {
        try {
            Json.createReader(new StringReader(string)).readObject();
            return true;
        } catch (JsonException ex) {
            ex.printStackTrace();
            return false;
        }
    }

    @Override
    public void init(EndpointConfig config) {
        System.out.println("init");
    }

    @Override
    public void destroy() {
        System.out.println("destroy");
    }
}
Annotations have their advantages and disadvantages, and there is a lot to say about choosing an annotation-based API versus a (how you say) "traditional" API using interfaces. I won't go into that, since you'll find plenty of wars online.
Used correctly, annotations provide better information about what a class's or method's responsibility is. Many prefer annotations, and as such they have become a trend and are used everywhere.
With that out of the way, let's get back to your question:
Why did this implementation do things this way using annotations? Before looking at this, I would have expected there to be an abstract OnMessage function which needs to be overridden and explicit registration functions for Encoder and Decoder. Instead of such a "traditional" approach, why does the websocket implementation do it via annotations?
Actually, they don't. Annotations are just one provided way of using the API. If you don't like them, you can do it the old way. Here is from the JSR-356 spec:
There are two main means by which an endpoint can be created. The first means is to implement certain of the API classes from the Java WebSocket API with the required behavior to handle the endpoint lifecycle, consume and send messages, publish itself, or connect to a peer. Often, this specification will refer to this kind of endpoint as a programmatic endpoint. The second means is to decorate a Plain Old Java Object (POJO) with certain of the annotations from the Java WebSocket API. The implementation then takes these annotated classes and creates the appropriate objects at runtime to deploy the POJO as a websocket endpoint. Often, this specification will refer to this kind of endpoint as an annotated endpoint.
Again, people prefer using annotations and that's what you'll find most tutorials using, but you can do without them if you want it badly enough.
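For illustration, a minimal programmatic (non-annotated) endpoint sketch: the Endpoint, MessageHandler, and ServerEndpointConfig.Builder pieces are the standard javax.websocket API, while the whiteboard-specific logic is only outlined:

public class MyWhiteboardEndpoint extends Endpoint {
    @Override
    public void onOpen(final Session session, EndpointConfig config) {
        // Explicit registration instead of @OnMessage:
        session.addMessageHandler(new MessageHandler.Whole<Figure>() {
            @Override
            public void onMessage(Figure figure) {
                // broadcast to peers here, as broadcastFigure does
            }
        });
    }
}

// Deployed with the encoders/decoders registered explicitly, e.g.:
// ServerEndpointConfig.Builder.create(MyWhiteboardEndpoint.class, "/whiteboardendpoint")
//         .encoders(Arrays.asList(FigureEncoder.class))
//         .decoders(Arrays.asList(FigureDecoder.class))
//         .build();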