My application is getting the error below when it uses a FeignClient to consume a service that runs queries against SQL Server.
ERROR:
Exception in thread "pool-10-thread-14" feign.RetryableException: Read timed out executing GET
http://127.0.0.1:8876/processoData/search/buscaProcessoPorCliente?cliente=ELEKTRO+-+TRABALHISTA&estado=SP
My Consumer Service:
@FeignClient(url = "http://127.0.0.1:8876")
public interface ProcessoConsumer {

    @RequestMapping(method = RequestMethod.GET, value = "/processoData/search/buscaProcessoPorCliente?cliente={cliente}&estado={estado}")
    public PagedResources<ProcessoDTO> buscaProcessoClienteEstado(@PathVariable("cliente") String cliente, @PathVariable("estado") String estado);
}
My YML:
server:
  port: 8874
endpoints:
  restart:
    enabled: true
  shutdown:
    enabled: true
  health:
    sensitive: false
eureka:
  client:
    serviceUrl:
      defaultZone: ${vcap.services.eureka-service.credentials.uri:http://xxx.xx.xxx.xx:8764}/eureka/
  instance:
    preferIpAddress: true
ribbon:
  eureka:
    enabled: true
spring:
  application:
    name: MyApplication
  data:
    mongodb:
      host: xxx.xx.xxx.xx
      port: 27017
      uri: mongodb://xxx.xx.xxx.xx/recortesExtrator
      repositories.enabled: true
    solr:
      host: http://xxx.xx.xxx.xx:8983/solr
      repositories.enabled: true
Does anyone know how to solve this?
Thanks.
Add the following properties to the application.properties file; the values are in milliseconds.
feign.client.config.default.connectTimeout=160000000
feign.client.config.default.readTimeout=160000000
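If you only need to change the timeouts for one specific Feign client, the same properties can also be scoped by the client's name instead of default; processoConsumer below is just a hypothetical name and the values are illustrative.

feign.client.config.processoConsumer.connectTimeout=5000
feign.client.config.processoConsumer.readTimeout=5000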
I'm using Feign.builder() to instantiate my Feign clients.
In order to set connectTimeout and readTimeout, I use the following:
Feign.builder()
    ...
    .options(new Request.Options(connectTimeout, readTimeout))
    .target(MyApiInterface.class, url);
Using this, I can configure different timeouts for different APIs.
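For reference, here is a fuller sketch of that approach. MyApiInterface, the URL, and the timeout values are placeholders; the two-int Options constructor takes milliseconds (newer Feign versions also offer a TimeUnit-based overload).

import feign.Feign;
import feign.Request;

public class MyApiClientFactory {

    // Builds a client for one API with its own connect/read timeouts (in milliseconds).
    public static MyApiInterface create(String url, int connectTimeoutMillis, int readTimeoutMillis) {
        return Feign.builder()
                .options(new Request.Options(connectTimeoutMillis, readTimeoutMillis))
                .target(MyApiInterface.class, url);
    }
}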
I just ran into this issue as well. As suggested by @spencergibb, here is the workaround I'm using. See the link.
Add these in the application.properties.
# Disable Hystrix timeout globally (for all services)
hystrix.command.default.execution.timeout.enabled: false
# Increase the Hystrix timeout to 60s (globally)
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds: 60000
Add this in the Java configuration class.
import feign.Request;
// The Spring imports below assume a recent Spring Cloud release; on older releases
// EnableFeignClients lives in org.springframework.cloud.netflix.feign instead.
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.openfeign.EnableFeignClients;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.ConfigurableEnvironment;

@Configuration
@EnableDiscoveryClient
@EnableFeignClients(basePackageClasses = { ServiceFeignClient.class })
@ComponentScan(basePackageClasses = { ServiceFeignClient.class })
public class FeignConfig {

    /**
     * Creates a bean that raises the timeout values;
     * it is used to overcome the RetryableException thrown while invoking the Feign client.
     *
     * @param env a {@link ConfigurableEnvironment}
     * @return a {@link Request.Options}
     */
    @Bean
    public static Request.Options requestOptions(ConfigurableEnvironment env) {
        int ribbonReadTimeout = env.getProperty("ribbon.ReadTimeout", int.class, 70000);
        int ribbonConnectionTimeout = env.getProperty("ribbon.ConnectTimeout", int.class, 60000);
        return new Request.Options(ribbonConnectionTimeout, ribbonReadTimeout);
    }
}
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=6000
ribbon.ReadTimeout=60000
ribbon.ConnectTimeout=60000
Make sure Ribbon's timeout is bigger than Hystrix's.
You can add an 'Options' argument to your methods and control the timeouts dynamically.
@FeignClient(url = "http://127.0.0.1:8876")
public interface ProcessoConsumer {

    @RequestMapping(method = RequestMethod.GET, value = "/processoData/search/buscaProcessoPorCliente?cliente={cliente}&estado={estado}")
    PagedResources<ProcessoDTO> buscaProcessoClienteEstado(@PathVariable("cliente") String cliente, @PathVariable("estado") String estado,
            Request.Options options);
}
Use it like this:
processoConsumer.buscaProcessoClienteEstado(..., new Request.Options(100, TimeUnit.MILLISECONDS,
100, TimeUnit.MILLISECONDS, true));
Add the properties below to the application.properties file; the value 5000 is in milliseconds.
feign.client.config.default.connectTimeout: 5000
feign.client.config.default.readTimeout: 5000
Look at this answer. It did the trick for me. I also did a bit of research and I've found the properties documentation here:
https://github.com/Netflix/Hystrix/wiki/Configuration#intro
eureka:
  client:
    eureka-server-read-timeout-seconds: 30
Add these in the application.properties
feign.hystrix.enabled=false
hystrix.command.default.execution.isolation.thread.timeoutInMilliseconds=5000
Related
I have a Spring Cloud project with a module that binds to the message bus for both Kafka and RabbitMQ.
In this module I have a test for Kafka:
@ActiveProfiles("test")
@DirtiesContext
@ExtendWith(SpringExtension.class)
@ContextConfiguration(classes = MessageReceiverTestConfiguration.class,
        initializers = ConfigFileApplicationContextInitializer.class)
@EnableBinding(MessageReceivingChannel.class)
public class MessageReceiverITest {

    @Autowired
    private MessageReceivingChannel messageReceivingChannel;

    @MockBean
    private MessageConsumerService messageConsumerService;

    @Autowired
    private MessageConverter messageConverter;

    @Autowired
    private MessageReceiverTestConfiguration receiverTestConfiguration;

    @Captor
    private ArgumentCaptor<ImportantMessage> captorMessage;

    @Captor
    private ArgumentCaptor<MessageHeaders> captorHeaders;

    @Test
    public void testLoanApplicationChannelInput() throws Throwable {
        final ImportantMessage sentMessage = new ImportantMessage("qwer124asdf");
        final Map<String, Object> headerMap = new HashMap<>(1);
        headerMap.put(MessageHeaders.CONTENT_TYPE, receiverTestConfiguration.getContentType());
        MessageHeaders sentHeaders = new MessageHeaders(headerMap);

        final Message<?> message = messageConverter.toMessage(sentMessage, sentHeaders);
        messageReceivingChannel.input().send(message);
        TimeUnit.SECONDS.sleep(1);

        verify(messageConsumerService).takeActionOn(captorMessage.capture(), captorHeaders.capture());

        final Object receivedMessage = captorMessage.getValue();
        Assertions.assertThat(receivedMessage).isNotNull();
        Assertions.assertThat(receivedMessage).isEqualTo(sentMessage);

        MessageHeaders receivedHeaders = captorHeaders.getValue();
        Assertions.assertThat(receivedHeaders).isNotNull();
        Assertions.assertThat(receivedHeaders.get(MessageHeaders.CONTENT_TYPE).toString())
                .isEqualTo(sentHeaders.get(MessageHeaders.CONTENT_TYPE));
    }
}
This runs just fine in the IDE (IntelliJ IDEA).
The problem is that when I try to install the Maven artifact, it doesn't pass the verify phase because of:
org.springframework.context.ApplicationContextException: Failed to start bean 'inputBindingLifecycle'; nested exception is java.lang.IllegalStateException: A default binder has been requested, but there is more than one binder available for 'org.springframework.cloud.stream.messaging.DirectWithAttributesChannel' : kafka,rabbit, and no default binder has been set.
This is how I set the default binder in test/resources/application-test.yml:
logging:
  config: classpath:logback-local.xml
spring:
  cloud:
    stream:
      default:
        contentType: application/*+avro
        producer:
          headerMode: embeddedHeaders
      bindings:
        messagereceived:
          binder: kafka
          contentType: "application/json"
      default-binder: kafka
      kafka:
        binder:
          configuration:
            security:
              protocol: SSL
            ssl:
              truststore:
                location: ${JAVA_HOME}\lib\security\cacerts
                password: ***
                type: JKS
  kafka:
    properties:
      max.in.flight.requests.per.connection: 1
      request.timeout.ms: 30000
      max.block.ms: 3000
    producer:
      retries: 3
So my question is: how do I properly set the default binder for spring-cloud-starter-parent Hoxton.SR9?
Thanks for any advice!
I'm trying to do JUnit 5 E2E functional testing using the Micronaut declarative HTTP client.
public interface IProductOperation {

    @Get(value = "/search/{text}")
    @Secured(SecurityRule.IS_ANONYMOUS)
    Maybe<?> freeTextSearch(@NotBlank String text);
}
Declarative Micronaut HTTP client:
@Client(
        id = "feteBirdProduct",
        path = "/product"
)
public interface IProductClient extends IProductOperation {
}
JUnit 5 test:
@MicronautTest
public record ProductControllerTest(IProductClient iProductClient) {

    @Test
    @DisplayName("Should search the item based on the name")
    void shouldSearchTheItemBasedOnTheName() {
        var value = iProductClient.freeTextSearch("test").blockingGet();
        System.out.println(value);
    }
}
Controller
@Controller("/product")
public class ProductController implements IProductOperation {

    private final IProductManager iProductManager;

    public ProductController(IProductManager iProductManager) {
        this.iProductManager = iProductManager;
    }

    @Override
    public Maybe<List> freeTextSearch(String text) {
        LOG.info("Controller --> Finding all the products");
        return iProductManager.findFreeText(text);
    }
}
When I run the test, I get a 500 internal server error. I think that when I run the test, the application is also running. I'm not sure what the reason for the 500 internal server error is.
Any help will be appreciated.
Is @Get(value = "/search/{text}") causing the issue? If yes, how can I solve it with the declarative client?
Service discovery
application.yml
consul:
  client:
    defaultZone: ${CONSUL_HOST:localhost}:${CONSUL_PORT:8500}
    registration:
      enabled: true
application-test.yml
micronaut:
  server:
    port: -1
  http:
    services:
      feteBirdProduct:
        urls:
          - http://product
consul:
  client:
    registration:
      enabled: false
I am trying to implement a custom Kafka Partitioner using Spring Cloud Stream bindings. I would like to custom-partition only the user topic and do nothing with the company topic (Kafka will use the DefaultPartitioner in that case).
My bindings configuration:
spring:
  cloud:
    stream:
      bindings:
        comp-out:
          destination: company
          contentType: application/json
        user-out:
          destination: user
          contentType: application/json
As per the reference documentation: https://cloud.spring.io/spring-cloud-static/spring-cloud-stream-binder-kafka/2.1.0.RC4/single/spring-cloud-stream-binder-kafka.html#_partitioning_with_the_kafka_binder
I modified the configuration to this:
spring:
  cloud:
    stream:
      bindings:
        comp-out:
          destination: company
          contentType: application/json
        user-out:
          destination: user
          contentType: application/json
          producer:
            partitioned: true
            partitionSelectorClass: config.UserPartitioner
I post the message onto the stream using this:
public void postUserStream(User user) throws ServiceException {
    try {
        LOG.info("Posting User {} into Kafka stream...", user);
        MessageChannel messageChannel = messageStreams.outboundUser();
        messageChannel
                .send(MessageBuilder.withPayload(user)
                        .setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.APPLICATION_JSON).build());
    } catch (Exception ex) {
        LOG.error("Error while populating User stream into Kafka.. ", ex);
        throw ex;
    }
}
My UserPartitioner Class:
public class UserPartitioner extends DefaultPartitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes,
            Cluster cluster) {
        String partitionKey = null;
        if (Objects.nonNull(value)) {
            User user = (User) value;
            partitionKey = String.valueOf(user.getCompanyId()) + "_" + String.valueOf(user.getId());
            keyBytes = partitionKey.getBytes();
        }
        return super.partition(topic, partitionKey, keyBytes, value, valueBytes, cluster);
    }
}
I end up receiving the following exception:
Description:
Failed to bind properties under 'spring.cloud.stream.bindings.user-out.producer' to org.springframework.cloud.stream.binder.ProducerProperties:
Property: spring.cloud.stream.bindings.user-out.producer.partitioned
Value: true
Origin: "spring.cloud.stream.bindings.user-out.producer.partitioned" from property source "bootstrapProperties"
Reason: No setter found for property: partitioned
Action:
Update your application's configuration
Any reference link on how to set up a custom partition using message binders would be helpful.
Edit: Based on the documentation, I tried the steps below as well:
user-out:
  destination: user
  contentType: application/json
  producer:
    partitionKeyExtractorClass: config.SimpleUserPartitioner
@Component
public class SimpleUserPartitioner implements PartitionKeyExtractorStrategy {

    @Override
    public Object extractKey(Message<?> message) {
        if (message.getPayload() instanceof BaseUser) {
            BaseUser user = (BaseUser) message.getPayload();
            return user.getId();
        }
        return 10;
    }
}
Update 2: The solution that worked for me was to add partition-count to the bindings and set autoAddPartitions to true in the binder:
spring:
  logging:
    level: info
  cloud:
    stream:
      bindings:
        user-out:
          destination: user
          contentType: application/json
          producer:
            partition-key-expression: headers['partitionKey']
            partition-count: 4
spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: localhost:9092
          autoAddPartitions: true
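With the partition-key-expression above, the producing side needs to put a partitionKey header on each message. A minimal sketch of how the postUserStream method shown earlier could set it (the header name must match the expression; using user.getId() as the key is just one possible choice):

import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.support.MessageBuilder;

public void postUserStream(User user) {
    // messageStreams is the same injected binding interface used in the original method
    MessageChannel messageChannel = messageStreams.outboundUser();
    messageChannel.send(MessageBuilder.withPayload(user)
            // must match producer.partition-key-expression: headers['partitionKey']
            .setHeader("partitionKey", user.getId())
            .build());
}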
There is no property partitioned; the getter depends on other properties...
public boolean isPartitioned() {
    return this.partitionKeyExpression != null
            || this.partitionKeyExtractorName != null;
}
partitionSelectorClass: config.UserPartitioner
The UserPartitioner is a Kafka Partitioner - it determines which consumers get which partitions (on the consumer side)
The partitionSelectorClass has to be a PartitionSelectorStrategy - it determines which partition a record is sent to (on the producer side).
These are completely different objects.
If you really want to customize the way partitions are distributed across consumer instances, that is a Kafka concern and has nothing to do with Spring.
Furthermore, all consumer bindings in the same binder will use the same Partitioner. You would have to configure multiple binders to have different Partitioners.
Given your question, I think you are simply confusing Partitioner with PartitionSelectorStrategy and you need the latter.
Also note: partitionSelectorClass has been deprecated for a while now and has been removed in the current master (it won't be available in 3.0.0) in favor of partitionSelectorName - https://cloud.spring.io/spring-cloud-static/spring-cloud-stream/3.0.0.M1/spring-cloud-stream.html#spring-cloud-stream-overview-partitioning
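For illustration, a minimal sketch of a producer-side PartitionSelectorStrategy; the class name and the modulo-on-hash logic are placeholders, and whether you reference it via partitionSelectorClass (older versions) or partition-selector-name (newer versions) depends on your Spring Cloud Stream release:

import org.springframework.cloud.stream.binder.PartitionSelectorStrategy;
import org.springframework.stereotype.Component;

@Component("userPartitionSelector")
public class UserPartitionSelector implements PartitionSelectorStrategy {

    @Override
    public int selectPartition(Object key, int partitionCount) {
        // key is whatever the partition key expression/extractor produced for the message
        return Math.floorMod(key.hashCode(), partitionCount);
    }
}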
I'm developing an application with an event-driven architecture.
I'm trying to model the following flow of events:
UserAccountCreated (user-management-events) -> sending an e-mail -> MailNotificationSent (notification-service-events)
The notification-service application executes the whole flow. It waits for the UserAccountCreated event by listening to the user-management-events topic. When the event is received, the application sends the email and publishes a new event - MailNotificationSent - to the notification-service-events topic.
I have no problems with listening to the first event (UserAccountCreated) - the application receives it and performs the rest of the flow. I also have no problem with publishing the MailNotificationSent event. Unfortunately, for development purposes, I want to listen to the MailNotificationSent event in the notification service, so the application has to listen to both UserAccountCreated and MailNotificationSent. Here I'm not able to make it work.
Let's take a look at the implementation:
NotificationStreams:
public interface NotificationStreams {

    String INPUT = "notification-service-events-in";
    String OUTPUT = "notification-service-events-out";

    @Input(INPUT)
    SubscribableChannel inboundEvents();

    @Output(OUTPUT)
    MessageChannel outboundEvents();
}
NotificationEventsListener:
@Slf4j
@Component
@RequiredArgsConstructor
public class NotificationEventsListener {

    @StreamListener(NotificationStreams.INPUT)
    public void notificationServiceEventsIn(Flux<ActivationLinkSent> input) {
        input.subscribe(event -> {
            log.info("Received event ActivationLinkSent: " + event.toString());
        });
    }
}
UserManagementEvents:
public interface UserManagementEvents {

    String INPUT = "user-management-events";

    @Input(INPUT)
    SubscribableChannel inboundEvents();
}
UserManagementEventsListener:
@Slf4j
@Component
@RequiredArgsConstructor
public class UserManagementEventsListener {

    private final Gate gate;

    @StreamListener(UserManagementEvents.INPUT)
    public void userManagementEvents(Flux<UserAccountCreated> input) {
        input.subscribe(event -> {
            log.info("Received event UserAccountCreated: " + event.toString());
            gate.dispatch(SendActivationLink.builder()
                    .email(event.getEmail())
                    .username(event.getUsername())
                    .build()
            );
        });
    }
}
KafkaStreamsConfig:
@EnableBinding(value = {NotificationStreams.class, UserManagementEvents.class})
public class KafkaStreamsConfig {
}
EventPublisher:
@Slf4j
@RequiredArgsConstructor
@Component
public class EventPublisher {

    private final NotificationStreams eventsStreams;
    private final AvroMessageBuilder messageBuilder;

    public void publish(Event event) {
        MessageChannel messageChannel = eventsStreams.outboundEvents();
        AvroActivationLinkSent activationLinkSent = new AvroActivationLinkSent();
        activationLinkSent.setEmail(((ActivationLinkSent) event).getEmail());
        activationLinkSent.setUsername(((ActivationLinkSent) event).getUsername() + "-domain");
        activationLinkSent.setTimestamp(System.currentTimeMillis());

        messageChannel.send(messageBuilder.buildMessage(activationLinkSent));
    }
}
application config:
spring:
  devtools:
    restart:
      enabled: true
  cloud:
    stream:
      default:
        contentType: application/*+avro
      kafka:
        binder:
          brokers: localhost:9092
      schemaRegistryClient:
        endpoint: http://localhost:8990
  kafka:
    consumer:
      group-id: notification-group
      auto-offset-reset: earliest
kafka:
  bootstrap:
    servers: localhost:9092
The application seems to ignore the notification-service-events listener. It works when listening to only one stream.
I'm almost 100% sure that this is not an issue with publishing the event, because I've connected manually to Kafka and verified that messages are published properly:
kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic notification-service-events-out --from-beginning
Do you have any ideas what else I should check? Is there any additional configuration on the Spring side?
I've found where the problem was.
I was missing bindings configuration. In the application properties, I should have added the following lines:
cloud:
  stream:
    bindings:
      notification-service-events-in:
        destination: notification-service-events
      notification-service-events-out:
        destination: notification-service-events
      user-management-events-in:
        destination: user-management-events
In the user-management-service I didn't have such a problem because I used a different property:
cloud:
  stream:
    default:
      contentType: application/*+avro
      destination: user-management-events
I am trying to configure Ribbon for my Zuul routes, so that when I go to http://host1:port1/restful-service/app1 it routes me to http://host2:port2/rest-example/app1.
It works properly when I define the route with the "url" property, without using Ribbon:
zuul:
  routes:
    restful-service:
      path: /restful-service/**
      url: http://host2:port2/rest-example
But when I try to use ribbon, which looks like this:
zuul:
  routes:
    restful-service:
      path: /restful-service/**
      serviceId: restful-service
ribbon:
  eureka:
    enabled: false
restful-service:
  ribbon:
    listOfServers: host2:port2/rest-example
It only allows me to route to http://host2:port2/rest-example, but not directly to the chosen service http://host2:port2/rest-example/app1 (it returns a 404 status code).
Change your configuration properties to the following:
zuul:
  routes:
    restful-service:
      serviceId: restful-service
      stripPrefix: false
ribbon:
  eureka:
    enabled: false
restful-service:
  ribbon:
    listOfServers: host2:port2
Then you will need to write a Zuul pre-filter to change the requestURI:
import javax.servlet.http.HttpServletRequest;

import com.netflix.zuul.ZuulFilter;
import com.netflix.zuul.context.RequestContext;
import org.springframework.cloud.netflix.zuul.filters.pre.PreDecorationFilter;

// Remember to register this filter as a Spring bean so Zuul picks it up.
public class CustomPreFilter extends ZuulFilter {

    @Override
    public Object run() {
        RequestContext context = RequestContext.getCurrentContext();
        // Rewrite the request URI so the backing service sees /rest-example/... instead of /restful-service/...
        String oldRequestURI = (String) context.get("requestURI");
        String newRequestURI = oldRequestURI.replace("restful-service", "rest-example");
        context.put("requestURI", newRequestURI);
        return null;
    }

    @Override
    public boolean shouldFilter() {
        HttpServletRequest httpServletRequest = RequestContext.getCurrentContext().getRequest();
        return httpServletRequest.getRequestURI().contains("/restful-service");
    }

    @Override
    public int filterOrder() {
        // Run right after Spring Cloud's PreDecorationFilter
        return PreDecorationFilter.FILTER_ORDER + 1;
    }

    @Override
    public String filterType() {
        return "pre";
    }
}
Now make a request; this will do what you want.
The listOfServers property only supports host and port, not a path.