While sending a file I receive an array of bytes, and I always have a problem receiving that array with WebFlux. The error thrown is shown below:
org.springframework.core.io.buffer.DataBufferLimitException: Exceeded limit on max bytes to buffer : 262144
at org.springframework.core.io.buffer.LimitedDataBufferList.raiseLimitException(LimitedDataBufferList.java:101)
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException
Do you know how to resolve this in WebFlux?
This worked for me:
Create a @Bean in one of your configuration classes or the main SpringBootApplication class:
@Bean
public WebClient webClient() {
    final int size = 16 * 1024 * 1024;
    final ExchangeStrategies strategies = ExchangeStrategies.builder()
            .codecs(codecs -> codecs.defaultCodecs().maxInMemorySize(size))
            .build();
    return WebClient.builder()
            .exchangeStrategies(strategies)
            .build();
}
Next, go to your desired class where you want to use the WebClient:
@Service
public class TestService {

    @Autowired
    private WebClient webClient;

    public void test() {
        String out = webClient
                .get()
                .uri("/my/api/endpoint")
                .retrieve()
                .bodyToMono(String.class)
                .block();
        System.out.println(out);
    }
}
I suppose this issue is about the new spring.codec.max-in-memory-size configuration property in Spring Boot. Add it to the application.yml file like this:
spring:
  codec:
    max-in-memory-size: 10MB
Set the maximum buffer size (here in megabytes) in your Spring Boot application.properties configuration file as shown below:
spring.codec.max-in-memory-size=20MB
I was getting this error for a simple RestController (I post a large JSON string).
Here is how I successfully changed the maxInMemorySize:
import org.springframework.context.annotation.Configuration;
import org.springframework.http.codec.ServerCodecConfigurer;
import org.springframework.web.reactive.config.ResourceHandlerRegistry;
import org.springframework.web.reactive.config.WebFluxConfigurer;

@Configuration
public class WebfluxConfig implements WebFluxConfigurer {

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        registry.addResourceHandler("/swagger-ui.html**")
                .addResourceLocations("classpath:/META-INF/resources/");
        registry.addResourceHandler("/webjars/**")
                .addResourceLocations("classpath:/META-INF/resources/webjars/");
    }

    @Override
    public void configureHttpMessageCodecs(ServerCodecConfigurer configurer) {
        configurer.defaultCodecs().maxInMemorySize(16 * 1024 * 1024);
    }
}
This was surprisingly hard to find.
This worked for me:
webTestClient.mutate()
        .codecs(configurer -> configurer
                .defaultCodecs()
                .maxInMemorySize(16 * 1024 * 1024))
        .build()
        .get()
        .uri("/u/r/l")
        .exchange()
        .expectStatus()
        .isOk();
Instead of retrieving the data all at once, you can stream it:
Flux<String> body = webClient.get()
        .uri("end point of an API")
        .retrieve()
        .bodyToFlux(DataBuffer.class)
        .map(buffer -> {
            String chunk = buffer.toString(StandardCharsets.UTF_8);
            DataBufferUtils.release(buffer);
            return chunk;
        });
Alternatively, convert to an InputStream:
.map(b -> b.asInputStream(true))
.reduce(SequenceInputStream::new)
.map(stream -> {
    try (InputStream in = stream) {
        // consume the stream, e.g. read it fully into a String
        return new String(in.readAllBytes(), StandardCharsets.UTF_8);
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
});
In most cases you don't really want to aggregate the stream; you want to process it directly instead. Having to load a huge amount of data into memory is usually a sign that the approach should be made more reactive. JSON and XML parsers have streaming interfaces.
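For example, if a large response is a top-level JSON array, WebFlux's Jackson decoder can emit it element by element instead of buffering the whole document. A minimal sketch, where Item, the /items endpoint and process() are hypothetical placeholders:
// Decode a large JSON array element by element; only one element at a time
// needs to fit within the codec's in-memory limit, not the whole array.
Flux<Item> items = webClient.get()
        .uri("/items")
        .retrieve()
        .bodyToFlux(Item.class);

items.subscribe(item -> process(item)); // process() stands in for your own handling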
This worked for me
val exchangeStrategies = ExchangeStrategies.builder()
    .codecs { configurer: ClientCodecConfigurer -> configurer.defaultCodecs().maxInMemorySize(16 * 1024 * 1024) }
    .build()
return WebClient.builder().exchangeStrategies(exchangeStrategies).build()
Another alternative could be creating a custom CodecCustomizer, which is going to be applied to both WebFlux and WebClient at the same time:
@Configuration
class MyAppConfiguration {

    companion object {
        private const val MAX_MEMORY_SIZE = 50 * 1024 * 1024 // 50 MB
    }

    @Bean
    fun codecCustomizer(): CodecCustomizer {
        return CodecCustomizer {
            it.defaultCodecs()
                .maxInMemorySize(MAX_MEMORY_SIZE)
        }
    }
}
As of Spring Boot 2.3.0, there is now a dedicated configuration property for the Reactive Elasticsearch REST client.
You can use the following configuration property to set a specific memory limit for the client.
spring.data.elasticsearch.client.reactive.max-in-memory-size=
The already existing spring.codec.max-in-memory-size property is separate and only affects other WebClient instances in the application.
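For illustration (the sizes are just example values), the two properties can sit side by side in application.properties, each limiting its own client:
# limit for the application's general WebFlux/WebClient codecs
spring.codec.max-in-memory-size=16MB
# limit for the reactive Elasticsearch REST client only (Spring Boot 2.3+)
spring.data.elasticsearch.client.reactive.max-in-memory-size=50MB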
For those who had no luck with the myriad of beans, customizers, and properties that can be added to solve this problem: check whether you have defined a bean extending WebFluxConfigurationSupport. If you have, it disables the auto-configured version of the same bean (my personal experience, Boot 2.7.2), somewhere under which Spring loads properties such as the suggested spring.codec.max-in-memory-size. For this solution to work, you also need to have configured that property correctly.
To test whether this is the cause of your problem, temporarily remove your WebFluxConfigurationSupport implementation. The long-term fix that worked for me was to use configuration beans to override attributes of the auto-configured bean. In my case, WebFluxConfigurer offered all of the same methods and was a drop-in replacement for WebFluxConfigurationSupport. Large JSON messages now decode for me as configured.
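A minimal sketch of such a drop-in replacement, assuming the codec limit was the only customization you needed (the class name WebConfig is just an example):
// Implement WebFluxConfigurer instead of extending WebFluxConfigurationSupport,
// so Spring Boot's WebFlux auto-configuration (and spring.codec.max-in-memory-size) stays active.
@Configuration
public class WebConfig implements WebFluxConfigurer {

    @Override
    public void configureHttpMessageCodecs(ServerCodecConfigurer configurer) {
        // optional programmatic limit, if you prefer it over the property
        configurer.defaultCodecs().maxInMemorySize(16 * 1024 * 1024);
    }
}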
If you don't want to change the default settings for the WebClient globally, you can use the following approach to manually merge multiple DataBuffers:
webClient
    .method(GET)
    .uri("<uri>")
    .exchangeToMono(response -> {
        return response.bodyToFlux(DataBuffer.class)
            .switchOnFirst((firstBufferSignal, responseBody$) -> {
                assert firstBufferSignal.isOnNext();
                return responseBody$
                    .collect(() -> requireNonNull(firstBufferSignal.get()).factory().allocateBuffer(),
                            (accumulator, curr) -> {
                                accumulator.ensureCapacity(curr.readableByteCount());
                                accumulator.write(curr);
                                DataBufferUtils.release(curr);
                            })
                    .map(accumulator -> {
                        final var responseBodyAsStr = accumulator.toString(UTF_8);
                        DataBufferUtils.release(accumulator);
                        return responseBodyAsStr;
                    });
            })
            .single();
    });
The above code aggregates all the DataBuffers into a single DataBuffer and then converts that final DataBuffer into a String. Note that converting each intermediate DataBuffer into a String on its own would not work: a received DataBuffer might not contain all the bytes needed to construct a character (a UTF-8 character can take up to 4 bytes), so the bytes towards the end of a buffer might hold only part of the bytes required for a valid character.
Note that this still loads all the response DataBuffers into memory, but unlike changing the global settings for the WebClient across the whole application, you can narrow it down and use this option only where you expect large responses.
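As a possibly simpler variant (my own sketch, not part of the answer above), DataBufferUtils.join performs the same aggregation into a single buffer, with the same memory caveat:
Mono<String> body = webClient
        .method(HttpMethod.GET)
        .uri("<uri>")
        .exchangeToMono(response ->
                DataBufferUtils.join(response.bodyToFlux(DataBuffer.class))
                        .map(buffer -> {
                            // the whole response body is now in one DataBuffer
                            String result = buffer.toString(StandardCharsets.UTF_8);
                            DataBufferUtils.release(buffer);
                            return result;
                        }));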
As of Spring Boot 2.7.x, the property below sets the in-memory size limit for the WebClient that is used internally for reactive Elasticsearch:
spring.elasticsearch.webclient.max-in-memory-size=512MB
Just add the code below to your Spring Boot main class:
@Bean
public WebClient getWebClient() {
    return WebClient.builder()
            .baseUrl("Your_SERVICE_URL")
            .codecs(configurer -> configurer
                    .defaultCodecs()
                    .maxInMemorySize(16 * 1024 * 1024))
            .defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
            .build();
}
This works for me.
I use a pretty straightforward configuration of my WebClient:
@Configuration
class Config {

    @Value("${client.baseUrl}")
    private String baseUrl;

    @Bean
    public WebClient webClient() {
        return WebClient.builder()
                .codecs(this::configureCodec)
                .baseUrl(baseUrl)
                .defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
                .build();
    }

    private void configureCodec(ClientCodecConfigurer configurer) {
        configurer
                .defaultCodecs()
                .maxInMemorySize(16 * 1024 * 1024);
    }
}
And it works with spring-boot-starter-parent:2.6.7. However, as of spring-boot-starter-parent:2.7.8, for huge payloads I get DataBufferLimitException: Exceeded limit on max bytes to buffer : 262144, which is in turn fixed by adding this line to application.properties:
spring.codec.max-in-memory-size=16777216
Neither ClientCodecConfigurer nor WebClient.Builder.codecs() is deprecated, and their Javadoc as of 2.7.8 says nothing about spring.codec.max-in-memory-size, so my question is whether this is a bug or expected behavior.
I'm experiencing strange behaviour when using filters with Spring Cloud Gateway, given the following configuration sample:
@Configuration
public class SpringCloudConfig {

    @Bean
    public RouteLocator gatewayRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route(r -> r.path("/sample/v1/api")
                        .filters(f -> f.rewritePath("/sample", "")
                                .addRequestHeader("route-random", (int) Math.floor(Math.random() * 100) + "")
                                .filter(new AddHeaderCustomFilter().apply(new HeaderConfig(
                                        "filter-random", (int) Math.floor(Math.random() * 100) + ""))))
                        .uri("http://localhost:8085"))
                .build();
    }
}
If I perform two or more distinct requests to "/sample/v1/api", the "route-random" and "filter-random" headers will always have the same value, i.e. the random value generated for the first request.
Using a Global filter instead:
@Component
public class CustomGlobalFilter {

    @Bean
    public GlobalFilter globalFilter() {
        return (exchange, chain) -> {
            exchange.getRequest().mutate()
                    .header("global-random", (int) Math.floor(Math.random() * 100) + "").build();
            return chain.filter(exchange);
        };
    }
}
The "global-random" header seems to be effectly random for each request.
Can someone explain why the value seems to be cached when using route level filters and a possible solution instead of using global filters?
Thanks in advance.
I am not 100% sure, but I think this is because the gateway matches the same requests based on params and URL.
AddRequestHeader is aware of URI variables used to match a path or host. URI variables may be used in the value and will be expanded at runtime.
Ref: https://docs.spring.io/spring-cloud-gateway/docs/current/reference/html/#the-addrequestheader-gatewayfilter-factory
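As a possible workaround (my own sketch, not taken from the referenced docs), you can compute the value inside a route-scoped filter lambda so that it runs per exchange, much like the global filter but limited to this route (ServerHttpRequest is org.springframework.http.server.reactive.ServerHttpRequest):
// Fragment of the gatewayRoutes bean above: the lambda body executes on every request,
// so the random value is generated per exchange rather than once when the route is built.
.route(r -> r.path("/sample/v1/api")
        .filters(f -> f.rewritePath("/sample", "")
                .filter((exchange, chain) -> {
                    ServerHttpRequest mutated = exchange.getRequest().mutate()
                            .header("route-random", (int) Math.floor(Math.random() * 100) + "")
                            .build();
                    return chain.filter(exchange.mutate().request(mutated).build());
                }))
        .uri("http://localhost:8085"))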
Small question regarding Spring Boot Webflux 2.5.0 and how to deal with a http response without body.
By "without body" I mean:
For instance, a web application whose REST API I consume (and have no control over) returns:
HTTP status code 200
HTTP body {"foo": "bar"}
With Spring Webflux, we can easily write something like:
public Mono<FooBar> sendRequest(SomeRequest someRequest) {
    return webClient.mutate()
            .baseUrl("https://third-party-rest-api.com:443")
            .build()
            .post()
            .uri("/someroute")
            .body(BodyInserters.fromValue(someRequest))
            .retrieve()
            .bodyToMono(FooBar.class);
}

public class FooBar {
    private String foo;
    // getters and setters
}
This gives us the POJO corresponding to the HTTP body.
Now, another third-party API I am consuming only returns HTTP 200 as the status response.
I would like to emphasize that there is no HTTP body. It is not the empty JSON {}.
Hence, I am a bit lost and do not know what to put here, especially with the goal of avoiding the empty Mono.
public Mono<WhatToPutHerePlease> sendRequest(SomeRequest someRequest) {
    return webClient.mutate()
            .baseUrl("https://third-party-rest-api.com:443")
            .build()
            .post()
            .uri("/someroute-with-no-http-body-response")
            .body(BodyInserters.fromValue(someRequest))
            .retrieve()
            .bodyToMono(WhatToPutHerePlease.class);
}
Any help please?
Thank you
Hence, I am a bit lost, and do not know what to put here.
The response is empty, so there's nothing for your WebClient to parse into a return value. The resulting Mono is thus always going to be empty, whatever generic type you use.
We have a special type that essentially says "this will always be empty" - Void (note the capital V.) So if you want to return an empty Mono, keeping the rest of the code the same, that's the type you should use.
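A minimal sketch of that first option, reusing the method from the question and changing only the target type (the Mono will simply complete without emitting a value):
public Mono<Void> sendRequest(SomeRequest someRequest) {
    return webClient.mutate()
            .baseUrl("https://third-party-rest-api.com:443")
            .build()
            .post()
            .uri("/someroute-with-no-http-body-response")
            .body(BodyInserters.fromValue(someRequest))
            .retrieve()
            .bodyToMono(Void.class); // completes empty; chain with then(), doOnSuccess(), etc.
}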
Alternatively, if you don't want to return an empty publisher, you might consider using .retrieve().toBodilessEntity() instead of .retrieve().bodyToMono() - this returns a Mono<ResponseEntity<Void>>. The body will obviously still be empty, but the response entity will let you extract information such as the response code and headers, should that be useful.
toBodilessEntity() seems to suit your needs:
It returns a Mono<ResponseEntity<Void>>.
With a (void rest) controller like:
@RestController
@SpringBootApplication
public class Demo {

    public static void main(String[] args) {
        SpringApplication.run(Demo.class, args);
        // ...
    }

    @GetMapping("/")
    public void empty() {
    }
}
and a:
public class ReactiveClient {

    Mono<ResponseEntity<Void>> mono = WebClient.create("http://localhost:8080")
            .get()
            .retrieve()
            .toBodilessEntity();

    // blocking/synchronous
    public ResponseEntity<Void> get() {
        return mono.block();
    }
}
We can:
ReactiveClient reactiveClient = new ReactiveClient();
System.out.println(reactiveClient.get()); // or something else
I am new to the Spring Integration project, and I now need to create a flow with the Java DSL and test it.
I came up with these flows. The first one should run on a cron schedule and invoke the second one, which invokes an HTTP endpoint and translates the XML response to a POJO:
@Bean
IntegrationFlow pollerFlow() {
    return IntegrationFlows
            .from(() -> new GenericMessage<>(""),
                    e -> e.poller(p -> p.cron(this.cron)))
            .channel("pollingChannel")
            .get();
}

@Bean
IntegrationFlow flow(HttpMessageHandlerSpec bulkEndpoint) {
    return IntegrationFlows
            .from("pollingChannel")
            .enrichHeaders(authorizationHeaderEnricher(user, password))
            .handle(bulkEndpoint)
            .transform(xmlTransformer())
            .channel("httpResponseChannel")
            .get();
}

@Bean
HttpMessageHandlerSpec bulkEndpoint() {
    return Http
            .outboundGateway(uri)
            .httpMethod(HttpMethod.POST)
            .expectedResponseType(String.class)
            .errorHandler(new DefaultResponseErrorHandler());
}
Now I want to test the flow and mock the HTTP call, but I am struggling to mock the HTTP handler. I tried to do it like this:
@ExtendWith(SpringExtension.class)
@SpringIntegrationTest(noAutoStartup = {"pollerFlow"})
@ContextConfiguration(classes = FlowConfiguration.class)
public class FlowTests {

    @Autowired
    private MockIntegrationContext mockIntegrationContext;

    @Autowired
    public DirectChannel httpResponseChannel;

    @Autowired
    public DirectChannel pollingChannel;

    @Test
    void test() {
        final MockMessageHandler mockHandler = MockIntegration.mockMessageHandler()
                .handleNextAndReply(message -> new GenericMessage<>(xml, message.getHeaders()));

        mockIntegrationContext.substituteMessageHandlerFor("bulkEndpoint", mockHandler);

        httpResponseChannel.subscribe(message -> {
            assertThat(message.getPayload(), is(notNullValue()));
            assertThat(message.getPayload(), instanceOf(PartsSalesOpenRootElement.class));
        });

        pollingChannel.send(new GenericMessage<>(""));
    }
}
But I am always getting an error on this line:
mockIntegrationContext.substituteMessageHandlerFor("bulkEndpoint", mockHandler);
org.springframework.beans.factory.BeanNotOfRequiredTypeException: Bean named 'bulkEndpoint' is expected to be of type 'org.springframework.integration.endpoint.IntegrationConsumer' but was actually of type 'org.springframework.integration.http.outbound.HttpRequestExecutingMessageHandler'
Am I doing something wrong here? I am assuming there is a problem with the IntegrationFlow itself, or maybe my testing approach is the problem.
The error is correct. The bulkEndpoint is not an endpoint by itself; it is really a MessageHandler. The endpoint is created from the .handle(bulkEndpoint).
See docs: https://docs.spring.io/spring-integration/docs/current/reference/html/overview.html#finding-class-names-for-java-and-dsl-configuration and https://docs.spring.io/spring-integration/docs/current/reference/html/testing.html#testing-mocks.
So, to make it working you need to do something like this:
.handle(bulkEndpoint, e -> e.id("actualEndpoint"))
And then in the test:
mockIntegrationContext.substituteMessageHandlerFor("actualEndpoint", mockHandler);
You probably also need to make sure that pollerFlow is not started when you test, since you send the message into pollingChannel manually; that way there are no conflicts with what you'd like to test. For this reason you should also add an id() to your e.poller(p -> p.cron(this.cron)) and use @SpringIntegrationTest(noAutoStartup) to have it stopped before your test. I see you tried noAutoStartup = {"pollerFlow"}, but this is not going to help for static flows. You indeed need to stop the actual endpoint in this case.
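A sketch of what that could look like; the ids "pollerEndpoint" and "actualEndpoint" are just example names:
// Give the polling endpoint an explicit id so it can stay stopped during the test.
@Bean
IntegrationFlow pollerFlow() {
    return IntegrationFlows
            .from(() -> new GenericMessage<>(""),
                    e -> e.poller(p -> p.cron(this.cron)).id("pollerEndpoint"))
            .channel("pollingChannel")
            .get();
}

// In the test class, stop that endpoint (instead of the flow bean) and substitute the handler by its id:
// @SpringIntegrationTest(noAutoStartup = {"pollerEndpoint"})
// mockIntegrationContext.substituteMessageHandlerFor("actualEndpoint", mockHandler);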
I have a Spring Boot 1.3.6 application, built out of the box and using the embedded Tomcat server. The application has a single endpoint doing a very simple echo request.
Later I defined a corresponding client invoking that simple endpoint using AsyncRestTemplate; however, if my client uses the Netty4ClientHttpRequestFactory the request fails, otherwise it succeeds.
My example below is in Kotlin, but it fails just the same in Java, so it does not have to do with the language I use to implement it.
The Server
@SpringBootApplication
open class EchoApplication {

    companion object {
        @JvmStatic fun main(args: Array<String>) {
            SpringApplication.run(EchoApplication::class.java, *args)
        }
    }

    @Bean
    open fun objectMapper(): ObjectMapper {
        return ObjectMapper().apply {
            dateFormat = SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssXXX")
            registerModule(KotlinModule())
        }
    }

    @Bean
    open fun customConverters(): HttpMessageConverters {
        return HttpMessageConverters(listOf(MappingJackson2HttpMessageConverter(objectMapper())))
    }
}
My endpoint looks like this:
@RestController
class EchoController {

    @RequestMapping(value = "/foo", method = arrayOf(RequestMethod.PUT))
    fun echo(@RequestBody order: Order): Order {
        return order
    }
}
And the order data class is
data class Order(val orderId: String)
Note: Since I use Kotlin, I also added the Kotlin Jackson Module to ensure proper constructor deserialization.
The Client
I then proceeded to create a client that invokes this endpoint.
If I do something like the following in my client, it works perfectly and I get a successful echo response.
val executor = TaskExecutorAdapter(Executors.newFixedThreadPool(2))
val restTemplate = AsyncRestTemplate(executor)
restTemplate.messageConverters = listOf(MappingJackson2HttpMessageConverter(mapper))

val promise = restTemplate.exchange(URI.create("http://localhost:8080/foo"), HttpMethod.PUT,
        HttpEntity(Order("b1254"), headers), Order::class.java)

promise.addCallback(object : ListenableFutureCallback<ResponseEntity<Order>> {
    override fun onSuccess(result: ResponseEntity<Order>) {
        println(result.body)
    }

    override fun onFailure(ex: Throwable) {
        ex.printStackTrace()
        if (ex is HttpStatusCodeException) {
            println(ex.responseBodyAsString)
        }
    }
})
As mentioned above, the code above runs perfectly and prints a successful echo response.
The Problem
But if I decide to use the Netty client, then I get a 400 Bad Request reporting that I did not pass the body:
val nettyFactory = Netty4ClientHttpRequestFactory()
val restTemplate = AsyncRestTemplate(nettyFactory)
When I do this, I get an HttpMessageNotReadableException with the message "Required request body is missing".
I debugged the Spring Boot code and I can see that when the ServletInputStream is read, it always returns -1, as if it were empty.
In my gradle I added runtime('io.netty:netty-all:4.1.2.Final'), so I am using what, as of today, is the latest version of Netty. This Netty version has worked just fine when interacting with endpoints in other projects that I have that use regular Spring (i.e. not Spring Boot).
How come the SimpleClientHttpRequestFactory works fine, but the Netty4ClientHttpRequestFactory fails?
I thought it might have had to do with the embedded Tomcat server; however, if I package this application as a war and deploy it to an existing Tomcat server (i.e. not the embedded one), the problem persists. So I'm guessing it is something related to Spring/Spring Boot.
Am I missing any configuration in my Spring Boot app? Any suggestions on how to make the Netty client work with Spring Boot?
It looks like there is a problem with serialization on the client's side, because this code works perfectly:
restTemplate.exchange(
        URI.create("http://localhost:8080/foo"),
        HttpMethod.PUT,
        HttpEntity("""{"orderId":"1234"}""", HttpHeaders().apply {
            setContentType(MediaType.APPLICATION_JSON)
        }),
        Order::class.java
).addCallback(object : ListenableFutureCallback<ResponseEntity<Order>> {
    override fun onSuccess(result: ResponseEntity<Order>) {
        println("Result: ${result.body}")
    }

    override fun onFailure(ex: Throwable) {
        ex.printStackTrace()
        if (ex is HttpStatusCodeException) {
            println(ex.responseBodyAsString)
        }
    }
})
I would need a closer look at the restTemplate and its converters, but for now you can write this part like this:
val mapper = ObjectMapper()

restTemplate.exchange(
        URI.create("http://localhost:8080/foo"),
        HttpMethod.PUT,
        HttpEntity(mapper.writeValueAsString(Order("HELLO")), HttpHeaders().apply {
            setContentType(MediaType.APPLICATION_JSON)
        }),
        Order::class.java
).addCallback(object : ListenableFutureCallback<ResponseEntity<Order>> {
    override fun onSuccess(result: ResponseEntity<Order>) {
        println("Result: ${result.body}")
    }

    override fun onFailure(ex: Throwable) {
        ex.printStackTrace()
        if (ex is HttpStatusCodeException) {
            println(ex.responseBodyAsString)
        }
    }
})
As you can see, I don't use the KotlinModule and this code works perfectly, so the problem is obviously in the configuration of the AsyncRestTemplate itself.
My 2 cents. This is surely not the solution: I configured the AsyncRestTemplate with an AsyncClientHttpRequestInterceptor and it magically worked. No explanation, period!
public class AsyncClientLoggingInterceptor implements AsyncClientHttpRequestInterceptor {

    @Override
    public ListenableFuture<ClientHttpResponse> intercept(HttpRequest request, byte[] body,
            AsyncClientHttpRequestExecution execution) throws IOException {
        return execution.executeAsync(request, body);
    }
}