Is WebClient.Builder.codecs() ignored in Spring Boot 2.7.*?

I use a pretty straightforward configuration of my `WebClient`:
@Configuration
class Config {

    @Value("${client.baseUrl}")
    private String baseUrl;

    @Bean
    public WebClient webClient() {
        return WebClient.builder()
                .codecs(this::configureCodec)
                .baseUrl(baseUrl)
                .defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
                .build();
    }

    private void configureCodec(ClientCodecConfigurer configurer) {
        configurer
                .defaultCodecs()
                .maxInMemorySize(16 * 1024 * 1024);
    }
}
And it works with spring-boot-starter-parent:2.6.7. However, as of spring-boot-starter-parent:2.7.8, for huge payloads I get `DataBufferLimitException: Exceeded limit on max bytes to buffer : 262144`, which is in turn fixed by adding this line to application.properties:
spring.codec.max-in-memory-size=16777216
Neither `ClientCodecConfigurer` nor `WebClient.Builder.codecs()` is deprecated, and their JavaDoc as of 2.7.8 says nothing about `spring.codec.max-in-memory-size`, so my question is: is this a bug or expected behavior?

Related

Update: Spring Boot JMS static reply queue on IBM MQ Series

In my use case I need to do a request-reply call to a remote system via managed queues. Using Spring Boot and IBM's MQ starter, I have the problem that the application wants to create dynamic/temporary reply queues instead of using the already existing managed queue.
The configuration is set up here:
@EnableJms
@Configuration
public class QueueConfiguration {

    // assumed to be injected; the field is not shown in the original snippet
    @Autowired
    private QueueProperties queueProperties;

    @Bean
    public MQQueueConnectionFactory connectionFactory() throws JMSException {
        MQQueueConnectionFactory factory = new MQQueueConnectionFactory();
        factory.setTransportType(CT_WMQ); // is 1
        factory.setHostName(queueProperties.getHost());
        factory.setPort(queueProperties.getPort());
        factory.setChannel(queueProperties.getChannel()); // combo of ${queueManager}%${channel}
        return factory;
    }

    @Bean
    public JmsMessagingTemplate messagingTemplate(ConnectionFactory connectionFactory) {
        JmsMessagingTemplate jmt = new JmsMessagingTemplate(connectionFactory);
        jmt.setDefaultDestinationName(queueProperties.getQueueName());
        return jmt;
    }

    @Bean
    public Jaxb2Marshaller marshaller() {
        Jaxb2Marshaller marshaller = new Jaxb2Marshaller();
        marshaller.setPackagesToScan("com.foo.model");
        return marshaller;
    }

    @Bean
    public MessageConverter messageConverter(Jaxb2Marshaller marshaller) {
        MarshallingMessageConverter converter = new MarshallingMessageConverter();
        converter.setMarshaller(marshaller);
        converter.setUnmarshaller(marshaller);
        return converter;
    }
}
Usage is pretty straightforward: take the object, convert it, and send it; then wait for the response, receive it, and convert it.
@Component
public class ExampleSenderReceiver {

    @Autowired
    private JmsMessagingTemplate jmsMessagingTemplate;

    @Override
    @SneakyThrows
    public ResponseExample sendAndReceive(RequestExample request, String correlationId) {
        MessagePostProcessor mpp = message -> {
            message = MessageBuilder.fromMessage(message)
                    .setHeader(JmsHeaders.CORRELATION_ID, correlationId)
                    // .setHeader(JmsHeaders.REPLY_TO, "DEV.QUEUE.3") this triggers queue creation
                    .build();
            return message;
        };
        String destination = Objects.requireNonNull(jmsMessagingTemplate.getDefaultDestinationName());
        return jmsMessagingTemplate.convertSendAndReceive(destination, request, ResponseExample.class, mpp);
    }
}
I have already read a lot of IBM documentation and think I need to set the message type to "MQMT_REQUEST", but I cannot find the right spot to do so.
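For reference, the closest thing I found so far is IBM's JMS_IBM_MsgType message property. At the plain-JMS level (org.springframework.jms.core.MessagePostProcessor, which exposes the javax.jms.Message) I would expect it to look roughly like the sketch below, but I could not verify that this is the right spot; the constant names are taken from IBM's MQ client jar.

import com.ibm.mq.constants.CMQC;
import com.ibm.msg.client.wmq.WMQConstants;
import org.springframework.jms.core.MessagePostProcessor;

// Unverified sketch: set the MQ message type on the raw JMS message.
MessagePostProcessor jmsLevelMpp = message -> {
    message.setIntProperty(WMQConstants.JMS_IBM_MSGTYPE, CMQC.MQMT_REQUEST);
    return message;
};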
Update
I added Spring Integration as Gary proposed, with a configuration for a JmsOutboundGateway:
@Bean
public MessageChannel requestChannel() {
    return new DirectChannel();
}

@Bean
public QueueChannel responseChannel() {
    return new QueueChannel();
}

@Bean
@ServiceActivator(inputChannel = "requestChannel")
public JmsOutboundGateway jmsOutboundGateway(ConnectionFactory connectionFactory) {
    JmsOutboundGateway gateway = new JmsOutboundGateway();
    gateway.setConnectionFactory(connectionFactory);
    gateway.setRequestDestinationName("REQUEST");
    gateway.setReplyDestinationName("RESPONSE");
    gateway.setReplyChannel(responseChannel());
    gateway.setCorrelationKey("JMSCorrelationID*");
    gateway.setIdleReplyContainerTimeout(2, TimeUnit.SECONDS);
    return gateway;
}
And adapted my ExampleSenderReceiver class:
@Autowired
@Qualifier("requestChannel")
private MessageChannel requestChannel;

@Autowired
@Qualifier("responseChannel")
private QueueChannel responseChannel;

@Override
@SneakyThrows
public ResponseExample sendAndReceive(RequestExample request, String correlationId) {
    String xmlContent = "the marshalled request object";
    Map<String, Object> header = new HashMap<>();
    header.put(JmsHeaders.CORRELATION_ID, correlationId);
    GenericMessage<String> message1 = new GenericMessage<>(xmlContent, header);
    requestChannel.send(message1);
    log.info("send done");
    Message<?> receive = responseChannel.receive(1500);
    if (null != receive) {
        log.info("incoming: {}", receive.toString());
    }
    return null; // conversion of the reply is omitted in this snippet
}
The important part is gateway.setCorrelationKey("JMSCorrelationID*");
Without that line the correlationId was not propagated correctly (as far as I understand, the trailing * tells the gateway to use the correlation ID already set on the message instead of generating its own).
The next step is re-adding the MessageConverters and making it nice again.
Thank you.
The default JmsTemplate (used by the JmsMessagingTemplate) always uses a temporary reply queue. You can subclass it and override doSendAndReceive(Session session, Destination destination, MessageCreator messageCreator) to use your managed queue instead.
However, it will only work if you have one request outstanding at a time (e.g. all run on a single thread). You will also have to add code for discarding "late" arrivals by checking the correlation id.
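A rough sketch of that approach, using DEV.QUEUE.3 from your commented-out line as the managed reply queue and assuming the responder copies the request's message id into the reply's correlation id (adjust as needed):

import javax.jms.*;

import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.core.MessageCreator;
import org.springframework.jms.support.JmsUtils;

// Sketch only: one outstanding request at a time; late arrivals are
// filtered out by the selector but otherwise not handled.
public class FixedReplyQueueJmsTemplate extends JmsTemplate {

    @Override
    protected Message doSendAndReceive(Session session, Destination destination,
            MessageCreator messageCreator) throws JMSException {

        Queue replyQueue = session.createQueue("DEV.QUEUE.3"); // the managed reply queue
        Message request = messageCreator.createMessage(session);
        request.setJMSReplyTo(replyQueue);

        MessageProducer producer = session.createProducer(destination);
        try {
            producer.send(request);
        }
        finally {
            JmsUtils.closeMessageProducer(producer);
        }

        // Correlate on the message id we just sent; the selector discards everything else.
        String selector = "JMSCorrelationID = '" + request.getJMSMessageID() + "'";
        MessageConsumer consumer = session.createConsumer(replyQueue, selector);
        try {
            return receiveFromConsumer(consumer, getReceiveTimeout());
        }
        finally {
            JmsUtils.closeMessageConsumer(consumer);
        }
    }
}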
You can use async sends instead and handle replies on a listener container and correlate the replies to the requests.
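A sketch of that async variant, with a map of futures keyed by correlation id; the queue names REQUEST/RESPONSE are taken from the update above, the rest is illustrative:

// Illustrative sketch: async request/reply over fixed queues with manual correlation.
@Component
public class AsyncExampleSenderReceiver {

    private final Map<String, CompletableFuture<Message>> pending = new ConcurrentHashMap<>();

    @Autowired
    private JmsTemplate jmsTemplate;

    public CompletableFuture<Message> send(String payload, String correlationId) {
        CompletableFuture<Message> reply = new CompletableFuture<>();
        pending.put(correlationId, reply);
        jmsTemplate.send("REQUEST", session -> {
            TextMessage message = session.createTextMessage(payload);
            message.setJMSCorrelationID(correlationId);
            message.setJMSReplyTo(session.createQueue("RESPONSE"));
            return message;
        });
        return reply;
    }

    // Replies arrive on a listener container; match them back to the waiting future.
    @JmsListener(destination = "RESPONSE")
    public void onReply(Message message) throws JMSException {
        CompletableFuture<Message> reply = pending.remove(message.getJMSCorrelationID());
        if (reply != null) {
            reply.complete(message);
        }
        // else: a "late" arrival; drop it
    }
}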
Consider using spring-integration-jms and its outbound gateway instead - it has much more flexibility in reply queue handling (and does all the correlation for you).
https://docs.spring.io/spring-integration/reference/html/jms.html#jms-outbound-gateway
You are missing the queue manager:
ibm:
  mq:
    queueManager: QM1
    channel: chanel
    connName: localhost(1414)
    user: admin
    password: admin

How to properly configure a TCP inboundAdapter with QueueChannel and ServiceActivator

I am trying to configure a TCP socket that receives data in the format name,value in distinct messages. Those messages arrive on average every second, sometimes faster or sometimes slower.
I was able to set up a working configuration but I am lacking a basic understanding of what actually is happening in Spring Integration.
My configuration file looks like this:
@Configuration
@EnableIntegration
public class TCPSocketServerConfig {

    @Bean
    public IntegrationFlow server(
            final CSVProcessingService csvProcessingService,
            @Value("${tcp.socket.server.port}") final int port) {
        return IntegrationFlows.from(
                Tcp.inboundAdapter(
                        Tcp.nioServer(port)
                                .deserializer(serializer())
                                .leaveOpen(true))
                        .autoStartup(true)
                        .outputChannel(queueChannel()))
                .transform(new ObjectToStringTransformer())
                .handle(csvProcessingService)
                .get();
    }

    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerMetadata defaultPoller() {
        return Pollers.fixedDelay(50, TimeUnit.MILLISECONDS).get();
    }

    @Bean
    public MessageChannel queueChannel() {
        return MessageChannels.queue("queue", 50).get();
    }

    @Bean
    public ByteArrayLfSerializer serializer() {
        final ByteArrayLfSerializer serializer = new ByteArrayLfSerializer();
        serializer.setMaxMessageSize(10240);
        return serializer;
    }
}
And the CSVProcessingService looks like this (abbreviated):
@Slf4j
@Service
public class CSVProcessingService {

    @ServiceActivator
    public void process(final String message) {
        log.debug("DATA RECEIVED: \n" + message);
        final CsvMapper csvMapper = new CsvMapper();
        final CsvSchema csvSchema = csvMapper.schemaFor(CSVParameter.class);
        if (StringUtils.contains(message, StringUtils.LF)) {
            processMultiLineInput(message, csvMapper, csvSchema);
        } else {
            processSingleLineInput(message, csvMapper, csvSchema);
        }
    }
}
My goals for this configuration are the following:
- receive messages on the configured port
- withstand a higher load without losing messages
- deserialize the messages
- put them into the queue channel
- (ideally also log errors)
- the queue channel is polled every 50 ms and the message from the queue channel is passed to the ObjectToStringTransformer
- after the transformer, the converted message is passed to the CSVProcessingService for further processing
Did I achieve all those goals correctly, or did I make a mistake because I misunderstood Spring Integration? Would it be possible to combine the Poller and the @ServiceActivator somehow?
Furthermore, I have a problem visualizing how my configured IntegrationFlow actually "flows"; maybe somebody can help me to better understand it.
EDIT:
I reworked my configuration after Artem's comment. It now looks like this:
@Configuration
@EnableIntegration
public class TCPSocketServerConfig {

    @Value("${tcp.socket.server.port}")
    int port;

    @Bean
    public IntegrationFlow server(final CSVProcessingService csvProcessingService) {
        return IntegrationFlows.from(
                Tcp.inboundAdapter(tcpNioServer())
                        .autoStartup(true)
                        .errorChannel(errorChannel()))
                .transform(new ObjectToStringTransformer())
                .handle(csvProcessingService)
                .get();
    }

    @Bean
    public AbstractServerConnectionFactory tcpNioServer() {
        return Tcp.nioServer(port)
                .deserializer(serializer())
                .leaveOpen(true)
                .taskExecutor(
                        new ThreadPoolExecutor(0, 20,
                                30L, TimeUnit.SECONDS,
                                new SynchronousQueue<>(),
                                new DefaultThreadFactory("TCP-POOL")))
                .get();
    }

    @Bean
    public MessageChannel errorChannel() {
        return MessageChannels.direct("errors").get();
    }

    @Bean
    public IntegrationFlow errorHandling() {
        return IntegrationFlows.from(errorChannel()).log(LoggingHandler.Level.DEBUG).get();
    }

    @Bean
    public ByteArrayLfSerializer serializer() {
        final ByteArrayLfSerializer serializer = new ByteArrayLfSerializer();
        serializer.setMaxMessageSize(10240);
        return serializer;
    }
}
I also removed the @ServiceActivator annotation from the CSVProcessingService#process method.
Not sure what confuses you, but your configuration and logic look good.
You may be missing the fact that you don't need a QueueChannel in between, since AbstractConnectionFactory.processNioSelections() is already multi-threaded and schedules a task to read a message from the socket. So all you need is to configure an appropriate Executor for Tcp.nioServer(). Although it is an Executors.newCachedThreadPool() by default anyway.
On the other hand, with an in-memory QueueChannel you may indeed lose messages, because they have already been read from the network.
When you use the Java DSL, you should consider using the poller() option on the endpoint. The @Poller will work on the @ServiceActivator if you have an inputChannel attribute over there, but referencing the same service in handle() will override that inputChannel, so your @Poller won't be applied. Don't confuse yourself by mixing Java DSL and annotation configuration!
Everything else is good in your configuration.
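For illustration, applying the poller on the DSL endpoint itself (a sketch based on the original queue-channel variant of the question) might look like this:

return IntegrationFlows.from(
        Tcp.inboundAdapter(tcpNioServer())
                .autoStartup(true)
                .outputChannel(queueChannel()))
        .transform(new ObjectToStringTransformer())
        .handle(csvProcessingService, "process",
                e -> e.poller(p -> p.fixedDelay(50))) // poll the queue channel every 50 ms
        .get();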

DataBufferLimitException: Exceeded limit on max bytes to buffer webflux error

While sending a file I receive an array of bytes. I always have a problem in WebFlux with receiving a byte array.
The error thrown is as below:
org.springframework.core.io.buffer.DataBufferLimitException: Exceeded limit on max bytes to buffer : 262144
    at org.springframework.core.io.buffer.LimitedDataBufferList.raiseLimitException(LimitedDataBufferList.java:101)
    Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException
Do you know how to resolve that in WebFlux?
This worked for me:
Create a @Bean in one of your configuration classes or the main @SpringBootApplication class:
@Bean
public WebClient webClient() {
    final int size = 16 * 1024 * 1024;
    final ExchangeStrategies strategies = ExchangeStrategies.builder()
            .codecs(codecs -> codecs.defaultCodecs().maxInMemorySize(size))
            .build();
    return WebClient.builder()
            .exchangeStrategies(strategies)
            .build();
}
Next, go to your desired class where you want to use the WebClient:
@Service
public class TestService {

    @Autowired
    private WebClient webClient;

    public void test() {
        String out = webClient
                .get()
                .uri("/my/api/endpoint")
                .retrieve()
                .bodyToMono(String.class)
                .block();
        System.out.println(out);
    }
}
I suppose this issue is about the spring.codec.max-in-memory-size configuration property in Spring Boot. Add it to the application.yml file like:
spring:
  codec:
    max-in-memory-size: 10MB
Set the maximum buffer size (here 20 megabytes) in your Spring Boot application.properties configuration file like below:
spring.codec.max-in-memory-size=20MB
I was getting this error for a simple RestController (I post a large JSON string).
Here is how I successfully changed the maxInMemorySize:
import org.springframework.context.annotation.Configuration;
import org.springframework.http.codec.ServerCodecConfigurer;
import org.springframework.web.reactive.config.ResourceHandlerRegistry;
import org.springframework.web.reactive.config.WebFluxConfigurer;

@Configuration
public class WebfluxConfig implements WebFluxConfigurer {

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/swagger-ui.html**")
                .addResourceLocations("classpath:/META-INF/resources/");
        registry.addResourceHandler("/webjars/**")
                .addResourceLocations("classpath:/META-INF/resources/webjars/");
    }

    @Override
    public void configureHttpMessageCodecs(ServerCodecConfigurer configurer) {
        configurer.defaultCodecs().maxInMemorySize(16 * 1024 * 1024);
    }
}
This was surprisingly hard to find.
This worked for me:
webTestClient.mutate()
        .codecs(configurer -> configurer
                .defaultCodecs()
                .maxInMemorySize(16 * 1024 * 1024))
        .build()
        .get()
        .uri("/u/r/l")
        .exchange()
        .expectStatus()
        .isOk();
Instead of retrieving the data all at once, you can stream it:
Flux<String> strings = webClient.get()
        .uri("end point of an API")
        .retrieve()
        .bodyToFlux(DataBuffer.class)
        .map(buffer -> {
            String chunk = buffer.toString(StandardCharsets.UTF_8);
            DataBufferUtils.release(buffer);
            return chunk;
        });
Alternatively, convert to an InputStream:
.map(b -> b.asInputStream(true))
.reduce(SequenceInputStream::new)
.map(stream -> {
    try (InputStream in = stream) {
        return consume(in); // consume(...) stands in for your own processing
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
});
In most cases you don't really want to aggregate the stream, but rather process it directly. The need to load a huge amount of data into memory is mostly a sign to switch to a more reactive approach; JSON and XML parsers have streaming interfaces.
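For example, if the endpoint returns a (possibly huge) JSON array, WebFlux can decode it element by element instead of buffering the whole body; MyItem and the URI below are placeholders:

// Sketch: stream a large JSON array element by element instead of buffering it.
Flux<MyItem> items = webClient.get()
        .uri("/my/api/endpoint") // placeholder
        .accept(MediaType.APPLICATION_JSON)
        .retrieve()
        .bodyToFlux(MyItem.class); // elements are decoded as they arrive, one at a time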
This worked for me:
val exchangeStrategies = ExchangeStrategies.builder()
    .codecs { configurer: ClientCodecConfigurer -> configurer.defaultCodecs().maxInMemorySize(16 * 1024 * 1024) }
    .build()
return WebClient.builder().exchangeStrategies(exchangeStrategies).build()
Another alternative could be creating a custom CodecCustomizer, which is applied to both WebFlux and WebClient at the same time:
@Configuration
class MyAppConfiguration {

    companion object {
        private const val MAX_MEMORY_SIZE = 50 * 1024 * 1024 // 50 MB
    }

    @Bean
    fun codecCustomizer(): CodecCustomizer {
        return CodecCustomizer {
            it.defaultCodecs()
                .maxInMemorySize(MAX_MEMORY_SIZE)
        }
    }
}
As of Spring Boot 2.3.0, there is now a dedicated configuration property for the Reactive Elasticsearch REST client.
You can use the following configuration property to set a specific memory limit for the client.
spring.data.elasticsearch.client.reactive.max-in-memory-size=
The already existing spring.codec.max-in-memory-size property is separate and only affects other WebClient instances in the application.
For those who had no luck with the myriad of beans, customizers, and properties that can be added to solve this problem: check whether you have defined a bean extending WebFluxConfigurationSupport. If you have, it disables the autoconfigured version of the same bean (my personal experience, Boot 2.7.2), which is where Spring applies properties such as the suggested spring.codec.max-in-memory-size. For this solution to work, you also need to have configured that property correctly.
To test whether this is the cause of your problems, temporarily remove your WebFluxConfigurationSupport implementation. The long-term fix that worked for me was to use configuration beans to override attributes of the autoconfigured bean. In my case, WebFluxConfigurer had all of the same methods available and was a drop-in replacement for WebFluxConfigurationSupport. Large JSON messages are now decoded as configured.
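In code, the swap amounts to something like this (the class name is illustrative):

// Before: extending WebFluxConfigurationSupport switches off the WebFlux
// autoconfiguration, so spring.codec.max-in-memory-size is never applied.
// public class MyWebFluxConfig extends WebFluxConfigurationSupport { ... }

// After: implementing WebFluxConfigurer cooperates with the autoconfiguration.
@Configuration
public class MyWebFluxConfig implements WebFluxConfigurer {
    // override only the callbacks you actually need
}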
If you don't want to change the default settings for the WebClient globally, you can use the following approach to manually merge multiple DataBuffers:
webClient
        .method(GET)
        .uri("<uri>")
        .exchangeToMono(response -> {
            return response.bodyToFlux(DataBuffer.class)
                    .switchOnFirst((firstBufferSignal, responseBody$) -> {
                        assert firstBufferSignal.isOnNext();
                        return responseBody$
                                .collect(
                                        () -> requireNonNull(firstBufferSignal.get()).factory().allocateBuffer(),
                                        (accumulator, curr) -> {
                                            accumulator.ensureCapacity(curr.readableByteCount());
                                            accumulator.write(curr);
                                            DataBufferUtils.release(curr);
                                        })
                                .map(accumulator -> {
                                    final var responseBodyAsStr = accumulator.toString(UTF_8);
                                    DataBufferUtils.release(accumulator);
                                    return responseBodyAsStr;
                                });
                    })
                    .single();
        });
The code above aggregates all the DataBuffers into a single DataBuffer and then converts that final DataBuffer into a string. Note that converting the intermediate DataBuffers into strings won't work, because a received DataBuffer might not contain all the bytes needed to construct a character (with UTF-8, a single character can take up to 4 bytes), so the bytes toward the end of a buffer might form only part of a valid character.
Note that this still loads all the response DataBuffers into memory; but unlike changing the global settings for the WebClient across the whole application, you can narrow this option down and use it only where you expect large responses.
As of Spring Boot 2.7.x, use the property below to set the in-memory buffer size for the WebClient used internally by the reactive Elasticsearch client:
spring.elasticsearch.webclient.max-in-memory-size=512MB
Just add the code below to your Spring Boot main class:
@Bean
public WebClient getWebClient() {
    return WebClient.builder()
            .baseUrl("Your_SERVICE_URL")
            .codecs(configurer -> configurer
                    .defaultCodecs()
                    .maxInMemorySize(16 * 1024 * 1024))
            .defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
            .build();
}
This works for me.

Set WebClient.Builder.exchangeStrategies() without losing Spring Jackson configuration

I'm using the following code (from this answer) to configure headers to be logged on WebClient requests:
ExchangeStrategies exchangeStrategies = ExchangeStrategies.withDefaults();
exchangeStrategies
        .messageWriters().stream()
        .filter(LoggingCodecSupport.class::isInstance)
        .forEach(writer -> ((LoggingCodecSupport) writer).setEnableLoggingRequestDetails(true));

client = WebClient.builder()
        .exchangeStrategies(exchangeStrategies)
This works, but causes my Jackson configuration to be lost. In my application.properties I have:
spring.jackson.default-property-inclusion=non-null
spring.jackson.deserialization.accept-empty-string-as-null-object=true
which gets overwritten by the above code. Here is my workaround:
@Autowired
ObjectMapper objectMapper;

@Bean
WebClientCustomizer webClientCustomizer() {
    return (WebClient.Builder builder) -> {
        builder.exchangeStrategies(createExchangeStrategiesWhichLogHeaders());
    };
}
private ExchangeStrategies createExchangeStrategiesWhichLogHeaders() {
    ExchangeStrategies exchangeStrategies =
            ExchangeStrategies.builder()
                    .codecs(
                            clientDefaultCodecsConfigurer -> {
                                clientDefaultCodecsConfigurer
                                        .defaultCodecs()
                                        .jackson2JsonEncoder(
                                                new Jackson2JsonEncoder(objectMapper, MediaType.APPLICATION_JSON));
                                clientDefaultCodecsConfigurer
                                        .defaultCodecs()
                                        .jackson2JsonDecoder(
                                                new Jackson2JsonDecoder(objectMapper, MediaType.APPLICATION_JSON));
                            })
                    .build();
    exchangeStrategies
            .messageWriters()
            .stream()
            .filter(LoggingCodecSupport.class::isInstance)
            .forEach(writer -> ((LoggingCodecSupport) writer).setEnableLoggingRequestDetails(true));
    return exchangeStrategies;
}
This works, but feels a bit strange. The question is: do I need to include the jackson/objectMapper configuration like this, or is there a simpler way to avoid the Spring objectMapper configuration being overwritten?
As of Spring Boot 2.1.0, you can achieve this by enabling the following property:
spring.http.log-request-details=true
If you're on a previous Spring Boot version, you should be able to customize this without overwriting or rebuilding the whole configuration, like this:
@Configuration
static class LoggingCodecConfig {

    @Bean
    @Order(0)
    public CodecCustomizer loggingCodecCustomizer() {
        return (configurer) -> configurer.defaultCodecs()
                .enableLoggingRequestDetails(true);
    }
}

Spring Data Solr ConverterNotFoundException

I'm trying to configure Solr (with multicore support) in my application, and I get a ConverterNotFoundException whenever I try to register converters.
I've stepped through and can see the query being executed and documents being returned; it's just the converters that are not being found.
I followed the example from the official docs here.
Hopefully someone can shed some light on what's going on, as examples are hard to find and the docs aren't overly clear about adding converters when using multicoreSupport=true.
@Configuration
@EnableSolrRepositories(
        multicoreSupport = true,
        basePackages = {"uk.co.foo.bar.repository"})
public class SolrConfig {

    @Resource
    private Environment environment;

    @Bean
    public SolrClient solrClient(HttpClient httpClient) {
        String solrHost = environment.getRequiredProperty("solr.host");
        return new HttpSolrClient(solrHost, httpClient);
    }

    @Bean
    public HttpClient httpClient() {
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set(HttpClientUtil.PROP_BASIC_AUTH_USER, "user");
        params.set(HttpClientUtil.PROP_BASIC_AUTH_PASS, "pass");
        return HttpClientUtil.createClient(params);
    }

    @Bean
    public SolrConverter solrConverter(CustomConversions customConversions) {
        MappingSolrConverter mappingSolrConverter = new MappingSolrConverter(new SimpleSolrMappingContext());
        mappingSolrConverter.setCustomConversions(customConversions);
        return mappingSolrConverter;
    }

    @Bean
    public CustomConversions customConversions() {
        return new CustomConversions(Arrays.asList(new fooConverter(), new barConverter()));
    }

    @Bean
    public SolrTemplate solrTemplate(SolrClient solrClient, SolrConverter solrConverter) {
        SolrTemplate solrTemplate = new SolrTemplate(solrClient);
        solrTemplate.setSolrConverter(solrConverter);
        return solrTemplate;
    }
}
Having multicore support enabled currently does not allow registering global custom converters. Unfortunately there's no workaround available. I'll take care of DATASOLR-173 to get this fixed.
