Why does blocking inside thenApplyAsync work but not inside thenApply? (Java)

We saw some interesting behavior in our application, and the Spock spec below captures it. I am trying to understand why the second test passes but the first one throws a TimeoutException.
Summary:
There is a mock server with a mock endpoint that responds with a success after a 10ms delay.
We use AsyncHttpClient to make a non-blocking call to this mock endpoint. The first call is chained with a second, blocking call to the same endpoint. The first call succeeds, but the second fails with a timeout when thenApply is used, yet succeeds when thenApplyAsync is used. In both cases the mock server appears to respond within 10ms.
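To make the thread-dispatch difference concrete, here is a minimal standalone sketch (my own illustration, plain CompletableFuture with no HTTP; a named thread stands in for the AHC I/O thread that completes the response future):

import java.util.concurrent.CompletableFuture;

public class ThenApplyThreads {
    public static void main(String[] args) throws Exception {
        CompletableFuture<String> cf = new CompletableFuture<>();

        cf.thenApply(v -> {
            // runs on whichever thread calls cf.complete(...)
            System.out.println("thenApply on: " + Thread.currentThread().getName());
            return v;
        });
        cf.thenApplyAsync(v -> {
            // runs on ForkJoinPool.commonPool() by default
            System.out.println("thenApplyAsync on: " + Thread.currentThread().getName());
            return v;
        });

        // stand-in for the AsyncHttpClient I/O thread
        Thread io = new Thread(() -> cf.complete("done"), "fake-ahc-io-thread");
        io.start();
        io.join();
        Thread.sleep(200); // give the async stage time to print before the JVM exits
    }
}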
Dependencies:
implementation 'com.google.guava:guava:29.0-jre'
implementation 'org.asynchttpclient:async-http-client:2.12.1'
// Use the latest Groovy version for Spock testing
testImplementation 'org.codehaus.groovy:groovy-all:2.5.11'
// Use the awesome Spock testing and specification framework even with Java
testImplementation 'org.spockframework:spock-core:1.3-groovy-2.5'
testImplementation 'org.objenesis:objenesis:1.4'
testImplementation "cglib:cglib:2.2"
testImplementation 'junit:junit:4.13'
testImplementation 'org.mock-server:mockserver-netty:5.11.1'
Spock Spec:
package com.switchcase.asyncthroughput
import com.google.common.base.Charsets
import org.asynchttpclient.DefaultAsyncHttpClient
import org.asynchttpclient.RequestBuilder
import org.mockserver.integration.ClientAndServer
import org.mockserver.model.HttpResponse
import spock.lang.Shared
import spock.lang.Specification
import java.util.concurrent.CompletableFuture
import java.util.concurrent.CompletionException
import java.util.concurrent.ExecutorService
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
import java.util.concurrent.TimeoutException
import static org.mockserver.integration.ClientAndServer.startClientAndServer
import static org.mockserver.model.HttpRequest.request
class CompletableFutureThreadsTest extends Specification {

    @Shared
    ClientAndServer mockServer

    def asyncHttpClient = new DefaultAsyncHttpClient();

    def setupSpec() {
        mockServer = startClientAndServer(9192);
        // create a mock server which responds with "done" after a 10ms delay.
        mockServer.when(request()
                .withMethod("POST")
                .withPath("/validate"))
                .respond(HttpResponse.response().withBody("done")
                        .withStatusCode(200)
                        .withDelay(TimeUnit.MILLISECONDS, 10));
    }
def "Calls external using AHC with a blocking call with 1sec timeout results in TimeoutException."() {
when:
callExternal().thenApply({ resp -> callExternalBlocking() }).join()
then:
def exception = thrown(CompletionException)
exception instanceof CompletionException
exception.getCause() instanceof TimeoutException
exception.printStackTrace()
}
def "Calls external using AHC with a blocking call on ForkJoinPool with 1sec timeout results in success."() {
when:
def value = callExternal().thenApplyAsync({ resp -> callExternalBlocking() }).join()
then:
value == "done"
}
def cleanupSpec() {
mockServer.stop(true)
}
private CompletableFuture<String> callExternal(def timeout = 1000) {
RequestBuilder requestBuilder = RequestBuilder.newInstance();
requestBuilder.setMethod("POST").setUrl("http://localhost:9192/validate").setRequestTimeout(timeout)
def cf = asyncHttpClient.executeRequest(requestBuilder).toCompletableFuture()
return cf.thenApply({ response ->
println("CallExternal Succeeded.")
return response.getResponseBody(Charsets.UTF_8)
})
}
private String callExternalBlocking(def timeout = 1000) {
RequestBuilder requestBuilder = RequestBuilder.newInstance();
requestBuilder.setMethod("POST").setUrl("http://localhost:9192/validate").setRequestTimeout(timeout)
def cf = asyncHttpClient.executeRequest(requestBuilder).toCompletableFuture()
return cf.thenApply({ response ->
println("CallExternalBlocking Succeeded.")
return response.getResponseBody(Charsets.UTF_8)
}).join()
}
}
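For illustration, here is a self-contained analogue of what I suspect happens (my assumption: the thenApply callback runs on the single AHC I/O thread, so blocking inside it starves the thread that would complete the second future):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SingleThreadDeadlock {
    public static void main(String[] args) throws Exception {
        // a single thread standing in for the AHC/Netty event loop
        ExecutorService loop = Executors.newSingleThreadExecutor();

        CompletableFuture<String> first = CompletableFuture.supplyAsync(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            return "first";
        }, loop);

        CompletableFuture<String> result = first.thenApply(v -> {
            // this callback runs on "loop" and blocks it, so the task below never starts
            CompletableFuture<String> second = CompletableFuture.supplyAsync(() -> "second", loop);
            try {
                return second.get(1, TimeUnit.SECONDS); // times out, like callExternalBlocking
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        });

        System.out.println(result.handle((v, t) -> v != null ? v : t.toString()).join());
        loop.shutdownNow();
    }
}

With thenApplyAsync the callback would run on ForkJoinPool.commonPool() instead, leaving "loop" free to complete the second future.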
EDIT:
Debug log and stack trace for timeout: (The timeout happens on the remote call in callExternalBlocking)
17:37:38.885 [AsyncHttpClient-timer-2-1] DEBUG org.asynchttpclient.netty.timeout.TimeoutTimerTask - Request timeout to localhost/127.0.0.1:9192 after 1000 ms for NettyResponseFuture{currentRetry=0,
isDone=0,
isCancelled=0,
asyncHandler=org.asynchttpclient.AsyncCompletionHandlerBase@478251c9,
nettyRequest=org.asynchttpclient.netty.request.NettyRequest@4945b749,
future=java.util.concurrent.CompletableFuture@4d7a3ab9[Not completed, 1 dependents],
uri=http://localhost:9192/validate,
keepAlive=true,
redirectCount=0,
timeoutsHolder=org.asynchttpclient.netty.timeout.TimeoutsHolder@878bd72,
inAuth=0,
touch=1622248657866} after 1019 ms
17:37:38.886 [AsyncHttpClient-timer-2-1] DEBUG org.asynchttpclient.netty.channel.ChannelManager - Closing Channel [id: 0x5485056c, L:/127.0.0.1:58076 - R:localhost/127.0.0.1:9192]
17:37:38.886 [AsyncHttpClient-timer-2-1] DEBUG org.asynchttpclient.netty.request.NettyRequestSender - Aborting Future NettyResponseFuture{currentRetry=0,
isDone=0,
isCancelled=0,
asyncHandler=org.asynchttpclient.AsyncCompletionHandlerBase@478251c9,
nettyRequest=org.asynchttpclient.netty.request.NettyRequest@4945b749,
future=java.util.concurrent.CompletableFuture@4d7a3ab9[Not completed, 1 dependents],
uri=http://localhost:9192/validate,
keepAlive=true,
redirectCount=0,
timeoutsHolder=org.asynchttpclient.netty.timeout.TimeoutsHolder@878bd72,
inAuth=0,
touch=1622248657866}
java.util.concurrent.CompletionException: java.util.concurrent.TimeoutException: Request timeout to localhost/127.0.0.1:9192 after 1000 ms
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:607)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
at org.asynchttpclient.netty.NettyResponseFuture.abort(NettyResponseFuture.java:273)
at org.asynchttpclient.netty.request.NettyRequestSender.abort(NettyRequestSender.java:473)
at org.asynchttpclient.netty.timeout.TimeoutTimerTask.expire(TimeoutTimerTask.java:43)
at org.asynchttpclient.netty.timeout.RequestTimeoutTimerTask.run(RequestTimeoutTimerTask.java:50)
at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:672)
at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:747)
at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:472)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException: Request timeout to localhost/127.0.0.1:9192 after 1000 ms
... 7 more

Related

Testcontainers MS SQL Server Module stays in a loop and never enters the test

I'm writing a JUnit 4 test case with the following:
@Rule
public MSSQLServerContainer mssqlserver = new MSSQLServerContainer().acceptLicense();

@Before
public void setUp() throws Exception {
    url = mssqlserver.getJdbcUrl();
}

@Test
public void someTestMethod() {
    ...
But it hangs for a long time and then this exception is thrown:
java.lang.IllegalStateException: Container is started, but cannot be accessed by (JDBC URL: jdbc:sqlserver://localhost:51772), please check container logs
What's wrong?
I'm using these dependencies:
testImplementation "org.testcontainers:testcontainers:1.16.3"
testImplementation "org.testcontainers:mssqlserver:1.16.3"
It looks like there's a fix on the way [1]. I had to use the workaround of
.withUrlParam("trustServerCertificate", "true")
mentioned there with Testcontainers 1.16.3.
[1] https://github.com/testcontainers/testcontainers-java/issues/5032
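For reference, a minimal sketch of the rule with the workaround applied (the image tag here is only an example):

import org.junit.Rule;
import org.testcontainers.containers.MSSQLServerContainer;

public class MssqlContainerTest {
    // trust the container's self-signed certificate so the JDBC handshake succeeds
    @Rule
    public MSSQLServerContainer<?> mssqlserver =
            new MSSQLServerContainer<>("mcr.microsoft.com/mssql/server:2017-CU12")
                    .acceptLicense()
                    .withUrlParam("trustServerCertificate", "true");
}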

Sentry doesn't capture exceptions from Spark job

My problem is that the code below throws an exception, and although I capture it with Sentry, when I go to the Sentry UI the exception does not appear at all. I want to find a way to use Sentry across the Spark driver and the executors as well. Any ideas?
Also, I'm not sure what additional information is required, so feel free to let me know and I'll provide it.
Versions:
Spark: 2.12-3.0.0
Sentry: 3.1.1
import io.sentry.Sentry
import org.apache.spark.api.java.JavaSparkContext
import org.apache.spark.sql.SparkSession
fun main(args: Array<String>) {
    SparkSession.builder()
        .appName("myApp")
        .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
        .config("spark.hadoop.fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
        .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .getOrCreate()
        .use { sparkSession ->
            JavaSparkContext.fromSparkContext(sparkSession.sparkContext()).let { sc ->
                Sentry.init { options ->
                    options.dsn = "********"
                }
                try {
                    throw Exception("exception before executors start working")
                } catch (e: Exception) {
                    Sentry.captureException(e)
                }
                // start some executors and see if Sentry receives exceptions from them:
                sc.parallelize((0..100).toList(), 10).map { i ->
                    try {
                        throw Exception("exception $i from worker")
                    } catch (e: Exception) {
                        Sentry.captureException(e)
                    }
                }
            }
        }
}
I also tried getting the inner HubAdapter from Sentry and using that as a broadcast variable, but no luck.
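One thing I have not ruled out (just an assumption on my part): the executors are separate JVMs, so the Sentry.init call on the driver never runs on the workers, and map is lazy, so without an action the worker lambdas may never execute at all. A Java sketch of initializing Sentry inside the task and using an action (assumes Sentry.isEnabled() from sentry-java 3.x):

import io.sentry.Sentry;
import org.apache.spark.api.java.JavaSparkContext;

import java.util.ArrayList;
import java.util.List;

public class SentryOnExecutors {
    public static void run(JavaSparkContext sc, String dsn) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i <= 100; i++) {
            data.add(i);
        }
        sc.parallelize(data, 10).foreach(i -> { // foreach is an action, so the tasks really run
            if (!Sentry.isEnabled()) {          // initialize once per executor JVM
                Sentry.init(options -> options.setDsn(dsn));
            }
            try {
                throw new Exception("exception " + i + " from worker");
            } catch (Exception e) {
                Sentry.captureException(e);
            }
        });
    }
}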
UPDATE #1
Added debug option to Sentry init and tried starting and closing a session:
import io.sentry.Sentry
import org.apache.spark.api.java.JavaSparkContext
import org.apache.spark.sql.SparkSession
fun main(args: Array<String>) {
    SparkSession.builder()
        .appName("myApp")
        .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
        .config("spark.hadoop.fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
        .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
        .getOrCreate()
        .use { sparkSession ->
            JavaSparkContext.fromSparkContext(sparkSession.sparkContext()).let { sc ->
                Sentry.init { options ->
                    options.dsn = "********"
                    options.isDebug = true
                }
                Sentry.startSession()
                try {
                    throw Exception("exception before executors start working")
                } catch (e: Exception) {
                    Sentry.captureException(e)
                }
                // start some executors and see if Sentry receives exceptions from them:
                sc.parallelize((0..100).toList(), 10).map { i ->
                    try {
                        throw Exception("exception $i from worker")
                    } catch (e: Exception) {
                        Sentry.captureException(e)
                    }
                }
                Sentry.endSession()
                Sentry.close()
            }
        }
}
Here's what I see in the logs:
INFO: Initializing SDK with DSN: '**********'
INFO: No outbox dir path is defined in options.
INFO: GlobalHubMode: 'false'
20/11/03 18:42:01 INFO BlockManagerMasterEndpoint: Registering block manager 10.36.62.15:42123 with 4.6 GiB RAM, BlockManagerId(1, 10.36.62.15, 42123, None)
DEBUG: UncaughtExceptionHandlerIntegration enabled: true
DEBUG: UncaughtExceptionHandlerIntegration installed.
WARNING: Sessions can't be captured without setting a release.
DEBUG: Capturing event: 9b17170bdaf841cbb764969f653f99b5
ERROR: Request failed, API returned 400
ERROR: {"detail":"invalid event envelope","causes":["invalid item header","EOF while parsing an object at line 1 column 49"]}
WARNING: Sessions can't be captured without setting a release.
INFO: Closing SentryClient.
DEBUG: Shutting down
UPDATE #2:
Created an issue on GitHub for better visibility.

Error while streaming data from Twitter using Spark Streaming

I am writing a Twitter connector to get data from Twitter, but I get the following exception while running.
I have created an application that prints tweets, to learn how to do Spark Streaming with Twitter.
20/09/25 05:53:18 ERROR ReceiverTracker: Deregistered receiver for stream 0: Restarting receiver with delay 2000ms: Error starting Twitter stream - java.lang.IllegalStateException: Authentication credentials are missing. See http://twitter4j.org/en/configuration.html for details. See and register at http://apps.twitter.com/
at twitter4j.TwitterBaseImpl.ensureAuthorizationEnabled(TwitterBaseImpl.java:219)
at twitter4j.TwitterStreamImpl.sample(TwitterStreamImpl.java:161)
at org.apache.spark.streaming.twitter.TwitterReceiver.onStart(TwitterInputDStream.scala:93)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:149)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.$anonfun$restartReceiver$1(ReceiverSupervisor.scala:198)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.concurrent.Future$.$anonfun$apply$1(Future.scala:659)
at scala.util.Success.$anonfun$map$1(Try.scala:255)
at scala.util.Success.map(Try.scala:213)
at scala.concurrent.Future.$anonfun$map$1(Future.scala:292)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Below is the code for this Application
package SparkStreaming

import org.apache.log4j.{Level, Logger}
import org.apache.spark.streaming.twitter.TwitterUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

import scala.io.Source

object Tweets {
  Logger.getLogger("org").setLevel(Level.ERROR)

  def main(args: Array[String]): Unit = {
    setTweeter()
    val ssc = new StreamingContext("local[*]", "Tweets", Seconds(3))
    val tweets = TwitterUtils.createStream(ssc, None)
    val statuses = tweets.map(status => status.getText)
    statuses.print()
    ssc.start()
    ssc.awaitTermination()
  }

  def setTweeter(): Unit = {
    for (line <- Source.fromFile("src/data/tweeter.txt").getLines()) {
      val fields = line.split(" ")
      if (fields.length == 2) {
        System.setProperty("tweeter4j.oauth." + fields(0), fields(1))
      }
    }
  }
}
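For reference, Twitter4J reads its OAuth credentials from system properties with the twitter4j.oauth. prefix; a minimal Java sketch with placeholder values (not my actual keys):

public class TwitterAuthSetup {
    public static void main(String[] args) {
        // Twitter4J looks up these exact keys (note the "twitter4j" spelling)
        System.setProperty("twitter4j.oauth.consumerKey", "<your-consumer-key>");
        System.setProperty("twitter4j.oauth.consumerSecret", "<your-consumer-secret>");
        System.setProperty("twitter4j.oauth.accessToken", "<your-access-token>");
        System.setProperty("twitter4j.oauth.accessTokenSecret", "<your-access-token-secret>");
    }
}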
Can anyone assist me in resolving this problem?

Exception: org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel

I have a sandbox for exploring the newly added features of Spring Cloud Stream, but I've hit a problem using a Function and a Supplier in one Spring Cloud Stream application.
In the code I used the examples described in the docs.
First I added a Function<String, String> to the project, with the corresponding spring.cloud.stream.bindings and spring.cloud.stream.function.definition properties in application.yml. Everything works fine: I post a message to the my-fun-in Kafka topic, the application executes the function and sends the result to the my-fun-out topic.
Then I added a Supplier<Flux<String>> to the same project, with the corresponding spring.cloud.stream.bindings, and updated the spring.cloud.stream.function.definition value to fun;sup. And here the weird things start to happen. When I try to start the application I receive the following error:
2020-01-15 01:45:16.608 ERROR 10128 --- [oundedElastic-1] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel 'application.sup-out-0'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage [payload=byte[20], headers={contentType=application/json, id=89301e00-b285-56e0-cb4d-8133555c8905, timestamp=1579045516603}], failedMessage=GenericMessage [payload=byte[20], headers={contentType=application/json, id=89301e00-b285-56e0-cb4d-8133555c8905, timestamp=1579045516603}]
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:453)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:403)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:187)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:166)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109)
at org.springframework.integration.router.AbstractMessageRouter.doSend(AbstractMessageRouter.java:206)
at org.springframework.integration.router.AbstractMessageRouter.handleMessageInternal(AbstractMessageRouter.java:188)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:170)
at org.springframework.integration.handler.AbstractMessageHandler.onNext(AbstractMessageHandler.java:219)
at org.springframework.integration.handler.AbstractMessageHandler.onNext(AbstractMessageHandler.java:57)
at org.springframework.integration.endpoint.ReactiveStreamsConsumer$DelegatingSubscriber.hookOnNext(ReactiveStreamsConsumer.java:165)
at org.springframework.integration.endpoint.ReactiveStreamsConsumer$DelegatingSubscriber.hookOnNext(ReactiveStreamsConsumer.java:148)
at reactor.core.publisher.BaseSubscriber.onNext(BaseSubscriber.java:160)
at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onNext(FluxDoFinally.java:123)
at reactor.core.publisher.EmitterProcessor.drain(EmitterProcessor.java:426)
at reactor.core.publisher.EmitterProcessor.onNext(EmitterProcessor.java:268)
at reactor.core.publisher.FluxCreate$BufferAsyncSink.drain(FluxCreate.java:793)
at reactor.core.publisher.FluxCreate$BufferAsyncSink.next(FluxCreate.java:718)
at reactor.core.publisher.FluxCreate$SerializedSink.next(FluxCreate.java:153)
at org.springframework.integration.channel.FluxMessageChannel.doSend(FluxMessageChannel.java:63)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:453)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:403)
at org.springframework.integration.channel.FluxMessageChannel.lambda$subscribeTo$2(FluxMessageChannel.java:83)
at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:189)
at reactor.core.publisher.FluxPublishOn$PublishOnSubscriber.runAsync(FluxPublishOn.java:398)
at reactor.core.publisher.FluxPublishOn$PublishOnSubscriber.run(FluxPublishOn.java:484)
at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:84)
at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:37)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage [payload=byte[20], headers={contentType=application/json, id=89301e00-b285-56e0-cb4d-8133555c8905, timestamp=1579045516603}]
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:139)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:106)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:73)
... 34 more
After that I tried several things:
Reverted spring.cloud.stream.function.definition to fun (disabling the sup bean's binding to the external destination). The application started, the function worked, the supplier didn't work. Everything as expected.
Changed spring.cloud.stream.function.definition to sup (disabling the fun bean's binding to the external destination). The application started, the function didn't work, the supplier worked (produced a message to the my-sup-out topic every second). Everything as expected as well.
Updated the spring.cloud.stream.function.definition value to fun;sup. The application didn't start, and I got the same MessageDeliveryException.
Swapped the spring.cloud.stream.function.definition value to sup;fun. The application started, the supplier worked, but the function didn't work (didn't send messages to the my-fun-out topic).
The last one confused me even more than the error. So now I need someone's help to sort things out.
Did I miss something in the configuration? Why does changing the order of the beans separated by ; in spring.cloud.stream.function.definition lead to different results?
The full project is uploaded to GitHub and added below:
StreamApplication.java:
package com.kaine;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import reactor.core.publisher.Flux;

import java.util.function.Function;
import java.util.function.Supplier;

@SpringBootApplication
public class StreamApplication {

    public static void main(String[] args) {
        SpringApplication.run(StreamApplication.class);
    }

    @Bean
    public Function<String, String> fun() {
        return value -> value.toUpperCase();
    }

    @Bean
    public Supplier<Flux<String>> sup() {
        return () -> Flux.from(emitter -> {
            while (true) {
                try {
                    emitter.onNext("Hello from Supplier!");
                    Thread.sleep(1000);
                } catch (Exception e) {
                    // ignore
                }
            }
        });
    }
}
application.yml
spring:
  cloud:
    stream:
      function:
        definition: fun;sup
      bindings:
        fun-in-0:
          destination: my-fun-in
        fun-out-0:
          destination: my-fun-out
        sup-out-0:
          destination: my-sup-out
build.gradle.kts:
plugins {
    java
}

group = "com.kaine"
version = "1.0-SNAPSHOT"

repositories {
    mavenCentral()
}

dependencies {
    implementation(platform("org.springframework.cloud:spring-cloud-dependencies:Hoxton.SR1"))
    implementation("org.springframework.cloud:spring-cloud-starter-stream-kafka")
    implementation(platform("org.springframework.boot:spring-boot-dependencies:2.2.2.RELEASE"))
}

configure<JavaPluginConvention> {
    sourceCompatibility = JavaVersion.VERSION_11
}
Actually this is a problem with our documentation, as I believe we provide a bad example of a reactive Supplier for this case. The issue is that your Supplier is in an infinite blocking loop: it basically never returns.
So please change it to something like:
// requires: import java.util.stream.Stream; and import reactor.core.scheduler.Schedulers;
@Bean
public Supplier<Flux<String>> sup() {
    return () -> Flux.fromStream(Stream.generate(new Supplier<String>() {
        @Override
        public String get() {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // restore the interrupt flag instead of swallowing it
            }
            return "Hello from Supplier"; // always return, so the method compiles even after an interrupt
        }
    })).subscribeOn(Schedulers.elastic()).share();
}
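An equivalent non-blocking alternative (just a suggestion along the same lines), using Flux.interval so no thread ever sleeps:

import java.time.Duration;
import java.util.function.Supplier;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import reactor.core.publisher.Flux;

@Configuration
public class SupplierConfig {
    // Flux.interval emits on a timer scheduler, so the supplier returns immediately
    @Bean
    public Supplier<Flux<String>> sup() {
        return () -> Flux.interval(Duration.ofSeconds(1))
                .map(i -> "Hello from Supplier")
                .share();
    }
}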

LEAK: ByteBuf.release() was not called before it's garbage-collected. Spring Reactor TcpServer

I am using reactor-core [1.1.0.RELEASE], reactor-net [1.1.0.RELEASE] (which uses netty-all [4.0.18.Final]), reactor-spring-context [1.1.0.RELEASE] and Spring Reactor TcpServer [Spring 4.0.3.RELEASE].
I have created a simple REST API in Netty for a health check: /health. I followed the gs-reactor-thumbnailer code.
Please see the code as follows:
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
import org.springframework.stereotype.Service;
import reactor.function.Consumer;
import reactor.net.NetChannel;

@Service
public class HealthCheckNettyRestApi {

    public Consumer<FullHttpRequest> getResponse(NetChannel<FullHttpRequest, FullHttpResponse> channel, int portNumber) {
        return req -> {
            if (req.getMethod() != HttpMethod.GET) {
                channel.send(badRequest(req.getMethod() + " not supported for this URI"));
            } else {
                DefaultFullHttpResponse resp = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
                resp.content().writeBytes("Hello World".getBytes());
                resp.headers().set(HttpHeaders.Names.CONTENT_TYPE, "text/plain");
                resp.headers().set(HttpHeaders.Names.CONTENT_LENGTH, resp.content().readableBytes());
                //resp.release();
                channel.send(resp);
            }
        };
    }
}
In Spring Boot Application I am wiring it as:
@Bean
public ServerSocketOptions serverSocketOptions() {
    return new NettyServerSocketOptions()
            .pipelineConfigurer(pipeline -> pipeline.addLast(new HttpServerCodec())
                    .addLast(new HttpObjectAggregator(16 * 1024 * 1024)));
}

@Autowired
private HealthCheckNettyRestApi healthCheck;

@Value("${netty.port:5555}")
private Integer nettyPort;

@Bean
public NetServer<FullHttpRequest, FullHttpResponse> restApi(Environment env, ServerSocketOptions opts) throws InterruptedException {
    NetServer<FullHttpRequest, FullHttpResponse> server =
            new TcpServerSpec<FullHttpRequest, FullHttpResponse>(NettyTcpServer.class)
                    .env(env)
                    .dispatcher("sync")
                    .options(opts)
                    .listen(nettyPort)
                    .consume(ch -> {
                        Stream<FullHttpRequest> in = ch.in();
                        log.info("netty server is humming.....");
                        in.filter((FullHttpRequest req) -> req.getUri().matches(NettyRestConstants.HEALTH_CHECK))
                          .when(Throwable.class, NettyHttpSupport.errorHandler(ch))
                          .consume(healthCheck.getResponse(ch, nettyPort));
                    }).get();
    server.start().await(); // this works since Tomcat is also deployed due to the Spring JPA & web dependencies
    return server;
}
When I run a benchmark using wrk:
wrk -t6 -c100 -d30s --latency 'http://localhost:8087/health'
Then I get the following stack trace:
2015-04-22 17:23:21.072 - 16497 ERROR [reactor-tcp-io-22] --- i.n.u.ResourceLeakDetector: LEAK: ByteBuf.release() was not called before it's garbage-collected. Enable advanced leak reporting to find out where the leak occurred. To enable advanced leak reporting, specify the JVM option '-Dio.netty.leakDetectionLevel=advanced' or call ResourceLeakDetector.setLevel()
2015-04-22 23:09:26.354 ERROR 4308 --- [actor-tcp-io-13] io.netty.util.ResourceLeakDetector : LEAK: ByteBuf.release() was not called before it's garbage-collected.
Recent access records: 0
Created at:
io.netty.buffer.CompositeByteBuf.<init>(CompositeByteBuf.java:59)
io.netty.buffer.Unpooled.compositeBuffer(Unpooled.java:355)
io.netty.handler.codec.http.HttpObjectAggregator.decode(HttpObjectAggregator.java:144)
io.netty.handler.codec.http.HttpObjectAggregator.decode(HttpObjectAggregator.java:49)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:341)
io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:327)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:155)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:148)
io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:341)
io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:327)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:785)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:116)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:494)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:461)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350)
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
java.lang.Thread.run(Thread.java:745)
2015-04-22 23:09:55.217 INFO 4308 --- [actor-tcp-io-13] r.n.netty.NettyNetChannelInboundHandler : [id: 0x260faf6d, /127.0.0.1:50275 => /127.0.0.1:8087] Connection reset by peer
2015-04-22 23:09:55.219 ERROR 4308 --- [actor-tcp-io-13] reactor.core.Reactor : Connection reset by peer
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at io.netty.buffer.UnpooledUnsafeDirectByteBuf.setBytes(UnpooledUnsafeDirectByteBuf.java:446)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:871)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:224)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:108)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:494)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:461)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at java.lang.Thread.run(Thread.java:745)
My analysis: I think that since I am forwarding the DefaultFullHttpResponse to the Spring implementation, the Spring APIs should take care of calling the release() method. BTW, I also tried calling the release() method from my implementation, but I still get the same error.
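Another possibility I considered (unconfirmed): the "Created at" trace points at the CompositeByteBuf built by HttpObjectAggregator, i.e. the aggregated request rather than the response, and the consumer of the FullHttpRequest would own its reference count. A minimal sketch of releasing the request once the response has been sent:

import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.util.ReferenceCountUtil;

public final class RequestReleaseHelper {

    private RequestReleaseHelper() {
    }

    // call after the response has been written; the outbound pipeline
    // releases the response buffer itself when it is flushed
    public static void releaseRequest(FullHttpRequest req) {
        ReferenceCountUtil.release(req);
    }
}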
Could someone tell me what is wrong with the implementation?
