OpenFeign + Hystrix - Different timeout for different clients

I have Hystrix working with Feign in my Spring Boot Application.
I have set the default timeout for Hystrix to 10000ms with:
feign:
  hystrix:
    enabled: true

hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 10000
The problem is that I have this one client, let's call it HeavyClient, whose calls are heavy and sometimes take longer, causing the circuit to break.
I would like to increase the Hystrix timeout cap for this one client only. Is that possible?
I have tried with Feign properties like:
feign:
  client:
    config:
      HeavyClient:
        connectTimeout: 30000
        readTimeout: 30000
But this doesn't work. I guess Hystrix does not read the Feign properties: feign.client.config sets Feign's own connect/read timeouts, which are independent of the Hystrix command timeout.
I'm using:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
    <version>2.2.1.RELEASE</version>
</dependency>
Any help is appreciated, thanks!

After tweaking around and searching more deeply, I found in a GitHub issue that the default naming for Hystrix commands is something ugly like InterfaceName#methodNameSignature().
So, for instance, given the following @FeignClient:
@FeignClient(name = "heavyClient", url = "${heavyclient.url}")
public interface HeavyClient {

    @RequestMapping("/process/{operation}")
    public ResponseEntity<Response> process(@PathVariable("operation") String operation);

}
It would be configured as:
hystrix:
  command:
    default:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 5000
    HeavyClient#process(String):
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 30000
Unfortunately, this did not work for me, and I'm not sure why.
So to solve it, I registered a bean of type SetterFactory that creates the command keys from the @FeignClient name:
@Bean
public SetterFactory setterFactory() {
    // Use the @FeignClient name as both the Hystrix group key and the command key
    return (target, method) -> HystrixCommand.Setter
            .withGroupKey(HystrixCommandGroupKey.Factory.asKey(target.name()))
            .andCommandKey(HystrixCommandKey.Factory.asKey(target.name()));
}
And then I can use the configuration simply like:
hystrix:
  command:
    heavyClient:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 30000
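Note that with this SetterFactory every method of the Feign client shares the same command key (the client name), so the 30s timeout applies to all calls made through heavyClient.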

In newer versions of Spring Cloud OpenFeign, a CircuitBreakerNameResolver names the circuit (the HystrixCommandKey). There is a default implementation, DefaultCircuitBreakerNameResolver, whose output key pattern is HardCodedTarget#methodName(ParamClass), so you can register a custom CircuitBreakerNameResolver in place of the default one.
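As a minimal sketch (assuming a Spring Cloud OpenFeign version that ships CircuitBreakerNameResolver; the configuration class name here is hypothetical), a bean like the following would key each circuit by the Feign client name alone, so it can be configured per client as above:

import java.lang.reflect.Method;

import org.springframework.cloud.openfeign.CircuitBreakerNameResolver;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import feign.Target;

@Configuration
public class FeignCircuitBreakerConfig {

    // Name circuits by the Feign client name instead of the default
    // HardCodedTarget#methodName(ParamClass) pattern
    @Bean
    public CircuitBreakerNameResolver circuitBreakerNameResolver() {
        return (String feignClientName, Target<?> target, Method method) -> feignClientName;
    }
}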

Related

NotEnoughReplicasException: Messages are rejected since there are fewer in-sync replicas than required

I have 5 Kafka brokers, and in my producer app my config is as below. The interesting part is that, after getting this exception twice, on the third attempt the message is produced to the topic.
spring:
  kafka:
    streams:
      replication-factor: 3
      properties:
        min.insync.replicas: 2
    producer:
      acks: "all"
      batch-size: 1
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
      compression-type: "lz4"
      retries: 2
      properties:
        linger.ms: 1
        request:
          timeout:
            ms: 60000
So I tried to assign different values to replication-factor and min.insync.replicas.
min.insync.replicas and replication.factor are both topic configs, not Kafka (Streams) client properties.
With Spring Kafka, you can use a NewTopic @Bean to define Kafka topic resources along with their configurations.
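A minimal sketch, assuming spring-kafka 2.3+ (which provides TopicBuilder); the topic name and values here are illustrative, not from the question:

import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class TopicConfiguration {

    // Declares the topic with its replication factor and min ISR,
    // which are topic-level settings rather than producer properties
    @Bean
    public NewTopic myTopic() {
        return TopicBuilder.name("my-topic")
                .partitions(3)
                .replicas(3)
                .config(TopicConfig.MIN_IN_SYNC_REPLICAS_CONFIG, "2")
                .build();
    }
}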
Your error probably went away because the cluster had healed itself while your app was retrying the request. You should look at the broker server logs rather than your app's.

Zuul: automatic rerouting incoming requests to other service instance in case of unavailable service

I have configured Zuul with Eureka so that 3 identical instances of a service work in parallel. I call the gateway on port 8400, which routes incoming requests to ports 8420, 8430 and 8440 in a round-robin manner. It works smoothly. Now, if I switch off one of the 3 services, a small number of incoming requests fail with the following exception:
com.netflix.zuul.exception.ZuulException: Filter threw Exception
=> 1: java.util.concurrent.FutureTask.report(FutureTask.java:122)
=> 3: hu.perit.spvitamin.core.batchprocessing.BatchProcessor.process(BatchProcessor.java:106)
caused by: com.netflix.zuul.exception.ZuulException: Filter threw Exception
=> 1: com.netflix.zuul.FilterProcessor.processZuulFilter(FilterProcessor.java:227)
caused by: org.springframework.cloud.netflix.zuul.util.ZuulRuntimeException: com.netflix.zuul.exception.ZuulException: Forwarding error
=> 1: org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.run(RibbonRoutingFilter.java:124)
caused by: com.netflix.zuul.exception.ZuulException: Forwarding error
=> 1: org.springframework.cloud.netflix.zuul.filters.route.RibbonRoutingFilter.handleException(RibbonRoutingFilter.java:198)
caused by: com.netflix.client.ClientException: com.netflix.client.ClientException
=> 1: com.netflix.client.AbstractLoadBalancerAwareClient.executeWithLoadBalancer(AbstractLoadBalancerAwareClient.java:118)
caused by: java.lang.RuntimeException: org.apache.http.NoHttpResponseException: scalable-service-2:8430 failed to respond
=> 1: rx.exceptions.Exceptions.propagate(Exceptions.java:57)
caused by: org.apache.http.NoHttpResponseException: scalable-service-2:8430 failed to respond
=> 1: org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:141)
My Zuul routing looks like this:
### Zuul routes
zuul.routes.scalable-service.path=/scalable/**
#Authorization header will be forwarded to scalable-service
zuul.routes.scalable-service.sensitiveHeaders=Cookie,Set-Cookie
zuul.routes.scalable-service.serviceId=template-scalable-service
It takes a while for Eureka to discover that the service is no longer available.
My question is: is it possible to configure Zuul so that, in case of a NoHttpResponseException, it forwards the request to another available instance in the pool?
By default, Eureka evicts a service instance only if its lease has not been renewed within 90s. In your case the instance had not been evicted yet: the renew window for the instance was still valid.
You can decrease these intervals through configuration on both the Eureka client and the Eureka server, as described here.
Note: If you hit the actuator /shutdown endpoint, the instance is immediately evicted
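A minimal sketch of the relevant properties (the values are illustrative; the accepted answer below uses similar settings):

# client side: renew the lease more often, and let it expire sooner
eureka.instance.lease-renewal-interval-in-seconds=5
eureka.instance.lease-expiration-duration-in-seconds=10
# server side: run the eviction task more frequently
eureka.server.eviction-interval-timer-in-ms=2000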
Finally I found the solution to the problem. The appropriate search phrase was 'fault tolerance'. The key is the auto-retry config in the following application.properties file. The value of template-scalable-service.ribbon.MaxAutoRetriesNextServer must be set to at least 6 for a pool of 3 services to achieve full fault tolerance. With that setup I can kill 2 of the 3 services at any time and no incoming request will fail. I finally set it to 10; there is no unnecessary increase of the timeout, Hystrix will break the circuit.
### Eureka config
eureka.instance.hostname=${hostname:localhost}
eureka.instance.instanceId=${eureka.instance.hostname}:${spring.application.name}:${server.port}
eureka.instance.non-secure-port-enabled=false
eureka.instance.secure-port-enabled=true
eureka.instance.secure-port=${server.port}
eureka.instance.lease-renewal-interval-in-seconds=5
eureka.instance.lease-expiration-duration-in-seconds=10
eureka.datacenter=perit.hu
eureka.environment=${EUREKA_ENVIRONMENT_PROFILE:dev}
eureka.client.serviceUrl.defaultZone=${EUREKA_SERVER:https://${server.fqdn}:${server.port}/eureka}
eureka.client.server.waitTimeInMsWhenSyncEmpty=0
eureka.client.registry-fetch-interval-seconds=5
eureka.dashboard.path=/gui
eureka.server.enable-self-preservation=false
eureka.server.expected-client-renewal-interval-seconds=10
eureka.server.eviction-interval-timer-in-ms=2000
### Ribbon
ribbon.IsSecure=true
ribbon.NFLoadBalancerPingInterval=5
ribbon.ConnectTimeout=30000
ribbon.ReadTimeout=120000
### Zuul config
zuul.host.connectTimeoutMillis=30000
zuul.host.socketTimeoutMillis=120000
zuul.host.maxTotalConnections=2000
zuul.host.maxPerRouteConnections=200
zuul.retryable=true
### Zuul routes
#template-scalable-service
zuul.routes.scalable-service.path=/scalable/**
#Authorization header will be forwarded to scalable-service
zuul.routes.scalable-service.sensitiveHeaders=Cookie,Set-Cookie
zuul.routes.scalable-service.serviceId=template-scalable-service
# Autoretry config for template-scalable-service
template-scalable-service.ribbon.MaxAutoRetries=0
template-scalable-service.ribbon.MaxAutoRetriesNextServer=10
template-scalable-service.ribbon.OkToRetryOnAllOperations=true
#template-auth-service
zuul.routes.auth-service.path=/auth/**
#Authorization header will be forwarded to scalable-service
zuul.routes.auth-service.sensitiveHeaders=Cookie,Set-Cookie
zuul.routes.auth-service.serviceId=template-auth-service
# Autoretry config for template-auth-service
template-auth-service.ribbon.MaxAutoRetries=0
template-auth-service.ribbon.MaxAutoRetriesNextServer=0
template-auth-service.ribbon.OkToRetryOnAllOperations=false
### Hystrix
hystrix.command.default.execution.timeout.enabled=false
Besides this, I have a profile-specific setup in application-discovery.properties:
#Microservice environment
eureka.client.registerWithEureka=false
eureka.client.fetchRegistry=true
spring.cloud.loadbalancer.ribbon.enabled=true
I start my server in a Docker container like this:
services:
  discovery:
    container_name: discovery
    image: template-eureka
    environment:
      # agentlib for remote debugging
      - JAVA_OPTS=-DEUREKA_SERVER=https://discovery:8400/eureka -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005
      - TEMPLATE_EUREKA_OPTS=-Dspring.profiles.active=default,dev,discovery
      - EUREKA_ENVIRONMENT_PROFILE=dev
    ports:
      - '8400:8400'
      - '5500:5005'
    networks:
      - back-tier-net
      - monitoring
    hostname: 'discovery'
See the complete solution in GitHub.

Why am I always getting "{"status":504,"error":"Gateway Timeout","message":"com.netflix.zuul.exception.ZuulException: Hystrix Readed time out"}"?

I always get "2019-04-09 07:24:23.389 WARN 11676 --- [nio-9095-exec-5] o.s.c.n.z.filters.post.SendErrorFilter : Error during filtering" for any request which takes more than 1 second.
I have already tried to increase the timeout, but none of my attempts worked.
2019-04-09 07:24:23.389 WARN 11676 --- [nio-9095-exec-5] o.s.c.n.z.filters.post.SendErrorFilter : Error during filtering
com.netflix.zuul.exception.ZuulException:
at org.springframework.cloud.netflix.zuul.filters.post.SendErrorFilter.findZuulException(SendErrorFilter.java:114) ~[spring-cloud-netflix-zuul-2.1.0.RELEASE.jar:2.1.0.RELEASE]
at org.springframework.cloud.netflix.zuul.filters.post.SendErrorFilter.run(SendErrorFilter.java:76) ~[spring-cloud-netflix-zuul-2.1.0.RELEASE.jar:2.1.0.RELEASE]
at com.netflix.zuul.ZuulFilter.runFilter(ZuulFilter.java:117) ~[zuul-core-1.3.1.jar:1.3.1]
at com.netflix.zuul.FilterProcessor.processZuulFilter(FilterProcessor.java:193) ~[zuul-core-1.3.1.jar:1.3.1]
at com.netflix.zuul.FilterProcessor.runFilters(FilterProcessor.java:157) ~[zuul-core-1.3.1.jar:1.3.1]
at com.netflix.zuul.FilterProcessor.error(FilterProcessor.java:105) ~[zuul-core-1.3.1.jar:1.3.1]
at com.netflix.zuul.ZuulRunner.error(ZuulRunner.java:112) ~[zuul-core-1.3.1.jar:1.3.1]
at com.netflix.zuul.http.ZuulServlet.error(ZuulServlet.java:145) ~[zuul-core-1.3.1.jar:1.3.1]
at com.netflix.zuul.http.ZuulServlet.service(ZuulServlet.java:83) ~[zuul-core-1.3.1.jar:1.3.1]
at org.springframework.web.servlet.mvc.ServletWrappingController.handleRequestInternal(ServletWrappingController.java:165) ~[spring-webmvc-5.1.5.RELEASE.jar:5.1.5.RELEASE]
at java.lang.Thread.run(Thread.java:834) ~[na:na]
You can check my answer here.
The Hystrix read timeout is 1 second by default, and you can change it in your application.yaml file, either globally or per service.
The above issue is caused by the Hystrix timeout.
It can be solved by disabling the Hystrix timeout, or by increasing it, as below:
# Disable the Hystrix timeout globally (for all services)
hystrix.command.default.execution.timeout.enabled: false
# Disable the timeout for a particular service
hystrix.command.<serviceName>.execution.timeout.enabled: false
# Increase the Hystrix timeout to 60s (globally)
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds: 60000
# Increase the Hystrix timeout to 60s (per service)
hystrix.command.<serviceName>.execution.isolation.thread.timeoutInMilliseconds: 60000
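Note (an assumption based on how Zuul names its Hystrix commands, not stated explicitly in this thread): when routing through service discovery, <serviceName> is the serviceId of the route, since Zuul uses the serviceId as the Hystrix command key.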
The above solution will work if you are using a discovery service for service lookup and routing.
Here is the detailed explanation: spring-cloud-netflix-issue-321
If you are timing out on H2 console testing with Postman or another HTTP tester, it may be because, through Zuul and Hystrix, you are sending the exact same object to the H2 database repeatedly. This can happen when you also have validators on your models. To resolve it, make sure the JSON, XML, or other payloads are relatively unique: re-edit them and then send the request again.

How to configure Ribbon/Hystrix per route or per endpoint using Zuul Proxy

I have an ft-admin microservice which exposes 2 endpoints, app-config and app-analytic. In my @EnableZuulProxy gateway project, I can define the routing rules and specify other Ribbon and Hystrix configurations at the microservice level using the serviceId, as follows:
zuul:
  routes:
    admin-services:
      path: /admin/**
      serviceId: ft-admin
      stripPrefix: true

ft-admin:
  ribbon:
    ActiveConnectionsLimit: 2

hystrix:
  command:
    ft-admin:
      execution:
        isolation:
          thread:
            timeoutInMilliseconds: 10000
I'm wondering if there's a way to bring the above configurations down to the endpoint level, for app-config and app-analytic individually. The goal is to be able to give each endpoint a different setting, as follows:
zuul:
  routes:
    app-config-endpoint:
      path: /app-config/**
      serviceId: ft-admin
      stripPrefix: false
    app-analytic-endpoint:
      path: /app-analytic/**
      serviceId: ft-admin
      stripPrefix: false

app-config-endpoint:
  ribbon:
    ActiveConnectionsLimit: 5

app-analytic-endpoint:
  ribbon:
    ActiveConnectionsLimit: 2

...
When I run my gateway project in Debug mode with ft-admin.ribbon.ActiveConnectionsLimit: 2, I can see the following lines in the log.
c.netflix.config.ChainedDynamicProperty : Property changed: 'ft-admin.ribbon.ActiveConnectionsLimit = 2'
c.netflix.config.ChainedDynamicProperty : Flipping property: ft-admin.ribbon.ActiveConnectionsLimit to use it's current value:2
However, when I run my project with app-config-endpoint.ribbon.ActiveConnectionsLimit: 5, I see the following lines.
c.netflix.config.ChainedDynamicProperty : Property changed: 'ft-admin.ribbon.ActiveConnectionsLimit = -2147483648'
c.netflix.config.ChainedDynamicProperty : Flipping property: ft-admin.ribbon.ActiveConnectionsLimit to use NEXT property: niws.loadbalancer.availabilityFilteringRule.activeConnectionsLimit = 2147483647
I've searched through a ton of posts, but it seems the configuration always stops at the microservice level. The endpoint/route-level configurations are completely ignored.
I'd be very grateful if you could point me in the right direction or tell me your story if you've tried this before.

Spring JpaRepository findAll, Java 8 Stream, Connection has been abandoned

I'm using Spring Boot and trying to do a findAll query in my repository which should return a Stream of results:
public interface MyThingRepository extends JpaRepository<MyThing, String> {
    Stream<MyThing> findAll();
}

public class MyScheduledJobRunner {

    @Autowired
    private MyThingRepository myThingRepository;

    public void run() {
        try (Stream<MyThing> myThingsStream = myThingRepository.findAll()) {
            myThingsStream.forEach(myThing -> {
                // do some stuff
            });
            // myThingsStream.close(); // <- at one point I even tried that, though the stream is wrapped in an auto-closing block; it did not help
            System.out.println("All my things processed.");
        }
        System.out.println("Exited the auto-closing block.");
    }
}
Output that I get is:
All my things processed.
Exited the auto-closing block.
o.a.tomcat.jdbc.pool.ConnectionPool : Connection has been abandoned PooledConnection[com.mysql.jdbc.JDBC4Connection#2865b7d5]:java.lang.Exception
| at org.apache.tomcat.jdbc.pool.ConnectionPool.getThreadDump(ConnectionPool.java:1061)
...
at MyScheduledJobRunner.run(MyScheduledJobRunner:52)
MyScheduledJobRunner:52 is:
try (Stream<MyThing> myThingsStream = myThingRepository.findAll()) {
As per the documentation, when using Streams in JpaRepositories you should always close them after usage. Since they implement AutoCloseable, you can use a try-with-resources block.
http://docs.spring.io/spring-data/jpa/docs/current/reference/html/#repositories.query-streaming
A Stream potentially wraps underlying data store specific resources and must therefore be closed after usage. You can either manually close the Stream using the close() method or by using a Java 7 try-with-resources block.
There's even an example in the documentation that does it exactly the same way I do. As far as I can tell I am doing everything the documentation says, but I still get an exception 30 seconds after the operation. So apparently the connection is not closed and is left hanging. Why is that, and how can I overcome it?
I've tried with Postgres 9.5 and MariaDB as the database. I am using the newest connectors/drivers and Tomcat connection pooling, configured through Spring Boot properties like this:
spring:
  datasource:
    driverClassName: com.mysql.jdbc.Driver
    url: jdbc:mysql://localhost/mydb?useSSL=false
    username: user
    password: password
    initial-size: 10
    max-active: 100
    max-idle: 50
    min-idle: 10
    max-wait: 15000
    test-while-idle: true
    test-on-borrow: true
    validation-query: SELECT 1
    validation-query-timeout: 5
    validationInterval: 30000
    time-between-eviction-runs-millis: 30000
    min-evictable-idle-time-millis: 60000
    removeAbandonedTimeout: 60
    remove-abandoned: true
    log-abandoned: true
In the pool configuration you just need to add the following JDBC interceptor:
org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer
