Does anyone have an idea why the exception listed below is thrown after invoking the /user URL? It's quite strange, because everything works as expected (the upstream service handles the response from the downstream service and sends a response to the client). I'm using Ratpack 1.4.1. The full code is available at: https://github.com/peterjurkovic/ratpack-demo
Edit:
I've just tried downgrading to version 1.3.3, and with that version of Ratpack it does not happen. A GitHub issue has been created.
Edit 2:
The issue should be resolved in the next version 1.4.2.
public class DownstreamUserService {

    Logger log = LoggerFactory.getLogger(DownstreamUserService.class);

    private HttpClient httpClient;
    private ObjectMapper mapper;
    private URI downstreamServerUri;

    @Inject
    public DownstreamUserService(HttpClient httpClient, Config config, ObjectMapper mapper) {
        this.httpClient = httpClient;
        this.mapper = mapper;
        try {
            downstreamServerUri = new URI("http://" + config.getHost() + ":" + config.getPort() + "/endpoint");
        } catch (URISyntaxException e) {
            log.error("", e);
            throw new RuntimeException(e);
        }
    }

    public Promise<User> load() {
        return httpClient.get(downstreamServerUri)
                .onError(e -> log.info("Error", e))
                .map(res -> mapper.readValue(res.getBody().getBytes(), User.class));
    }
}
Server
public class App {

    static Logger log = LoggerFactory.getLogger(App.class);

    public static void main(String[] args) throws Exception {
        RatpackServer.start(s -> s
            // bindings..
            .handlers(chain -> chain
                .get("user", c -> {
                    DownstreamUserService service = c.get(DownstreamUserService.class);
                    service.load().then(user -> c.render(json(user)));
                })
            )
        );
    }
}
Stacktrace:
[2016-08-28 22:58:24,979] WARN [ratpack-compute-1-2] i.n.c.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.PrematureChannelClosureException: channel gone inactive with 1 missing response(s)
at io.netty.handler.codec.http.HttpClientCodec$Decoder.channelInactive(HttpClientCodec.java:261)
at io.netty.channel.CombinedChannelDuplexHandler.channelInactive(CombinedChannelDuplexHandler.java:220)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:255)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:241)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:234)
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelInactive(DefaultChannelPipeline.java:1329)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:255)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:241)
at io.netty.channel.DefaultChannelPipeline.fireChannelInactive(DefaultChannelPipeline.java:908)
at io.netty.channel.AbstractChannel$AbstractUnsafe$7.run(AbstractChannel.java:744)
at io.netty.util.concurrent.SingleThreadEventExecutor.safeExecute(SingleThreadEventExecutor.java:451)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:418)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:306)
at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:877)
at ratpack.exec.internal.DefaultExecController$ExecControllerBindingThreadFactory.lambda$newThread$0(DefaultExecController.java:136)
at ratpack.exec.internal.DefaultExecController$ExecControllerBindingThreadFactory$$Lambda$129/1240843015.run(Unknown Source)
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
at java.lang.Thread.run(Thread.java:745)
Related
I"m doing a stress test on vert.x application and send ~10K RPS.
My application send an http async request from a dedicated verticle.
I"m using vert.x http client, and see that around 20 seconds my application sent the http requests successfully.
After 20 seconds i"m starting to get a lot of "Cannot assign requested address" errors.
I tried to deploy more verticles, to set different values to the http client thread pool and nothing helped to solve the issue.
I guess that the issue related to the high throughput in a short time around 1 minute.
Main Class:
public static void main(String[] args) {
    final VertxOptions vertxOptions = new VertxOptions()
            .setMaxEventLoopExecuteTime(1)
            .setMaxEventLoopExecuteTimeUnit(TimeUnit.MILLISECONDS);

    final Vertx vertx = Vertx.vertx(vertxOptions);
    final Injector injector = Guice.createInjector(new Dependencies(vertx));
    CustomCodecRegister.register(vertx.eventBus());

    final Stream<Future<String>> deploymentFutures = Stream.of(
            deployWorker(vertx, injector, StatsHttpVerticle.class, 10)
    ).flatMap(stream -> stream);

    CompositeFuture.all(deploymentFutures.collect(Collectors.toList()))
            .onSuccess(successfulCompositeFuture -> { });
}

private static <T> Stream<Future<String>> deployWorker(Vertx vertx, Injector injector, Class<T> workerVerticleClass, int numVerticles) {
    final String poolName = workerVerticleClass.getSimpleName()
            .toLowerCase()
            .replace("verticle", "-worker-pool");
    final int numberOfThreads = 50;
    final DeploymentOptions options = new DeploymentOptions()
            .setWorker(true)
            .setWorkerPoolName(poolName)
            .setWorkerPoolSize(numberOfThreads);

    return IntStream.range(0, numVerticles)
            .mapToObj(ignore -> Future.future((Promise<String> promise) ->
                    vertx.deployVerticle((Verticle) injector.getInstance(workerVerticleClass), options, promise)));
}
EventBusAdapter:
public void send(Map<String, Object> queryParams, HashMap<String, String> headers, boolean followRedirect, Event eventToFire) {
    StatsRequest statsRequest = new StatsRequest(queryParams, headers, eventToFire, followRedirect);
    eventBus.request(FIRE_GET_METRIC_TO_STATS, statsRequest);
}
WorkerVerticle:
@Override
public void start(Promise<Void> startPromise) throws Exception {
    vertx.eventBus().consumer(FIRE_GET_METRIC_TO_STATS, this::fire);
    startPromise.complete();
}

private void fire(Message<StatsRequest> message) {
    StatsRequest body = message.body();
    MultiMap multimapHeader = MultiMap.caseInsensitiveMultiMap();

    WebClientOptions webClientOptions = new WebClientOptions();
    webClientOptions.setMaxPoolSize(1000);
    WebClient httpClient = WebClient.create(vertx, webClientOptions);

    httpClient.request(HttpMethod.GET, port, "example.com", "/1x1.gif" + "?" + "queryParamsString")
            .followRedirects(false)
            .putHeaders(multimapHeader)
            .timeout(120000)
            .send()
            .onSuccess(response -> {
                logger.info("All good");
            })
            .onFailure(err -> {
                logger.error("Exception: " + err.getMessage());
            });
}
How can I solve this issue?
I'm a beginner in Java coding.
Below is the code:
public class AddJIRATicketWatcherCommandHandler {

    private final JiraFactory jiraFactory;

    public void handle(String jiraIssueKey, String watcher) {
        log.debug("Adding {} watcher to JIRA issue: {}", watcher, jiraIssueKey);
        final Issue issue = jiraFactory.createClient().getIssueClient().getIssue(jiraIssueKey).claim();
        log.debug("Found JIRA issue: {}", issue.getKey());
        Promise<Void> addWatcherPromise = jiraFactory.createClient().getIssueClient().addWatcher(issue.getWatchers().getSelf(), watcher);
        addWatcherPromise.claim();
    }
}
public JiraRestClient createClient() {
    log.debug("Creating JIRA rest client for remote environment");
    URI jiraServerUri = URI.create(StringUtils.removeEnd(jiraConfig.getJiraURI(), "/rest"));
    JiraRestClient restClient = new AsynchronousJiraRestClientFactory().createWithBasicHttpAuthentication(jiraServerUri,
            jiraConfig.getJiraUsername(),
            jiraConfig.getJiraPassword());
    JIRA_LOGGER.info("url=[{}], username=[{}], password=[{}]", jiraServerUri.toString(), jiraConfig.getJiraUsername(), jiraConfig.getJiraPassword());
    log.debug("JIRA rest client created successfully for remote environment");
    return restClient;
}
However, when I ran SonarQube, I received this error:
Use try-with-resources or close this "JiraRestClient" in a "finally" clause.
My understanding is that the connection should be closed once done, but I'm unsure how to do that.
I tried to implement finally with close(), but the result still shows the same error.
Try with resources:
public void handle(String jiraIssueKey, String watcher) {
    try (JiraRestClient restClient = jiraFactory.createClient()) {
        log.debug("Adding {} watcher to JIRA issue: {}", watcher, jiraIssueKey);
        final Issue issue = restClient.getIssueClient().getIssue(jiraIssueKey).claim();
        log.debug("Found JIRA issue: {}", issue.getKey());
        Promise<Void> addWatcherPromise = restClient.getIssueClient().addWatcher(issue.getWatchers().getSelf(), watcher);
        addWatcherPromise.claim();
    }
}
try finally:
public void handle(String jiraIssueKey, String watcher) {
    JiraRestClient restClient = jiraFactory.createClient();
    try {
        log.debug("Adding {} watcher to JIRA issue: {}", watcher, jiraIssueKey);
        final Issue issue = restClient.getIssueClient().getIssue(jiraIssueKey).claim();
        log.debug("Found JIRA issue: {}", issue.getKey());
        Promise<Void> addWatcherPromise = restClient.getIssueClient().addWatcher(issue.getWatchers().getSelf(), watcher);
        addWatcherPromise.claim();
    } finally {
        restClient.close();
    }
}
I am consuming batches in Kafka. Retry is not supported by the Spring Cloud Stream Kafka binder in batch mode; the documented option is that you can configure a SeekToCurrentBatchErrorHandler (using a ListenerContainerCustomizer) to achieve functionality similar to binder retry.
I tried that with SeekToCurrentBatchErrorHandler, but it retries more than the configured limit of 3 times.
How can I do that? I would like to retry the whole batch.
How can I send the whole batch to a DLQ topic? For a record listener I used to check whether deliveryAttempt (retry) reached 3 and then send to the DLQ topic; see the check in the listener below.
I have checked this link, which is exactly my issue, but an example would be a great help. Can I achieve that with the spring-cloud-stream-kafka-binder library? Please explain with an example; I am new to this.
Currently I have the code below.
@Configuration
public class ConsumerConfig {

    @Bean
    public ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer() {
        return (container, dest, group) -> {
            container.getContainerProperties().setAckOnError(false);
            SeekToCurrentBatchErrorHandler seekToCurrentBatchErrorHandler = new SeekToCurrentBatchErrorHandler();
            seekToCurrentBatchErrorHandler.setBackOff(new FixedBackOff(0L, 2L));
            container.setBatchErrorHandler(seekToCurrentBatchErrorHandler);
            //container.setBatchErrorHandler(new BatchLoggingErrorHandler());
        };
    }
}
Listener:
@StreamListener(ActivityChannel.INPUT_CHANNEL)
public void handleActivity(List<Message<Event>> messages,
                           @Header(name = KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment,
                           @Header(name = "deliveryAttempt", defaultValue = "1") int deliveryAttempt) {
    try {
        log.info("Received activity message with message length {}", messages.size());
        nodeConfigActivityBatchProcessor.processNodeConfigActivity(messages);
        acknowledgment.acknowledge();
        log.debug("Processed activity message {} successfully!!", messages.size());
    } catch (MessagePublishException e) {
        if (deliveryAttempt == 3) {
            log.error(
                    String.format("Exception occurred, sending the message=%s to DLQ due to: ", "message"),
                    e);
            publisher.publishToDlq(EventType.UPDATE_FAILED, "message", e.getMessage());
        } else {
            throw e;
        }
    }
}
After seeing Gary's response I added the ListenerContainerCustomizer @Bean with a RetryingBatchErrorHandler, but I am not able to import the class. Attaching screenshots:
not able to import RetryingBatchErrorHandler
my spring cloud dependencies
Use a RetryingBatchErrorHandler to send the whole batch to the DLT
https://docs.spring.io/spring-kafka/docs/current/reference/html/#retrying-batch-eh
Use a RecoveringBatchErrorHandler where you can throw a BatchListenerFailedException to tell it which record in the batch failed.
https://docs.spring.io/spring-kafka/docs/current/reference/html/#recovering-batch-eh
In both cases provide a DeadLetterPublishingRecoverer to the error handler; disable DLTs in the binder.
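For the second option, a minimal sketch of the customizer (assuming spring-kafka 2.5+ is on the classpath, and reusing the "errors.<destination>.<group>" DLT naming from the example below):

@Bean
ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> recoveringCustomizer(
        KafkaTemplate<byte[], byte[]> template) {
    return (container, dest, group) -> {
        // Assumption: publish failed records to "errors.<destination>.<group>", as in the example below
        DeadLetterPublishingRecoverer recoverer = new DeadLetterPublishingRecoverer(template,
                (rec, ex) -> new TopicPartition("errors." + dest + "." + group, rec.partition()));
        // Retry the failing record twice, 5 seconds apart, then hand it to the recoverer
        container.setBatchErrorHandler(new RecoveringBatchErrorHandler(recoverer, new FixedBackOff(5000L, 2L)));
    };
}

In the batch listener, throw new BatchListenerFailedException("processing failed", indexOfFailedRecord) so the error handler knows which record in the batch failed; the records before that index are treated as processed, and the failed record is what ends up on the DLT once the retries are exhausted.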
EDIT
Here's an example; it uses the newer functional style rather than the deprecated @StreamListener, but the same concepts apply (but you should consider moving to the functional style).
@SpringBootApplication
public class So69175145Application {

    public static void main(String[] args) {
        SpringApplication.run(So69175145Application.class, args);
    }

    @Bean
    ListenerContainerCustomizer<AbstractMessageListenerContainer<?, ?>> customizer(
            KafkaTemplate<byte[], byte[]> template) {
        return (container, dest, group) -> {
            container.setBatchErrorHandler(new RetryingBatchErrorHandler(new FixedBackOff(5000L, 2L),
                    new DeadLetterPublishingRecoverer(template,
                            (rec, ex) -> new TopicPartition("errors." + dest + "." + group, rec.partition()))));
        };
    }

    /*
     * DLT topic won't be auto-provisioned since enableDlq is false
     */
    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("errors.so69175145.grp").partitions(1).replicas(1).build();
    }

    /*
     * Functional equivalent of @StreamListener
     */
    @Bean
    public Consumer<List<String>> input() {
        return list -> {
            System.out.println(list);
            throw new RuntimeException("test");
        };
    }

    /*
     * Not needed here - just to show we sent them to the DLT
     */
    @KafkaListener(id = "so69175145", topics = "errors.so69175145.grp")
    public void listen(String in) {
        System.out.println("From DLT: " + in);
    }
}
spring.cloud.stream.bindings.input-in-0.destination=so69175145
spring.cloud.stream.bindings.input-in-0.group=grp
spring.cloud.stream.bindings.input-in-0.content-type=text/plain
spring.cloud.stream.bindings.input-in-0.consumer.batch-mode=true
# for DLT listener
spring.kafka.consumer.auto-offset-reset=earliest
[foo]
2021-09-14 09:55:32.838ERROR...
...
[foo]
2021-09-14 09:55:37.873ERROR...
...
[foo]
2021-09-14 09:55:42.886ERROR...
...
From DLT: foo
We have a Spring Integration DSL pipeline connected to GCP Pub/Sub and things "work": the data is received and processed as defined in the pipeline, using a collection of Function implementations and .handle().
The problem we have (and why I put "work" in quotes) is that, in some handlers, when some of the data isn't found in the companion database, we raise an IllegalStateException, which forces the data to be reprocessed (in the meantime, another service may complete the companion database, and the function will then work). This exception is never shown anywhere.
We tried to capture the content of errorHandler, but we really can't find the proper way to do it programmatically (no XML).
Our Functions have something like this:
Record record = recordRepository.findById(incomingData)
        .orElseThrow(() -> new IllegalStateException("Missing information: " + incomingData));
This IllegalStateException is the one that is not appearing anywhere in the logs.
Also, maybe it's worth mentioning that we have our channels defined as
@Bean
public DirectChannel cardInputChannel() {
    return new DirectChannel();
}

@Bean
public PubSubInboundChannelAdapter cardChannelAdapter(
        @Qualifier("cardInputChannel") MessageChannel inputChannel,
        PubSubTemplate pubSubTemplate) {
    PubSubInboundChannelAdapter adapter = new PubSubInboundChannelAdapter(pubSubTemplate, SUBSCRIPTION_NAME);
    adapter.setOutputChannel(inputChannel);
    adapter.setAckMode(AckMode.AUTO);
    adapter.setPayloadType(CardDto.class);
    return adapter;
}
I am not familiar with the adapter, but I just looked at the code and it looks like they just nack the message and don't log anything.
You can add an Advice to the handler's endpoint to capture and log the exception:
.handle(..., e -> e.advice(exceptionLoggingAdvice))
@Bean
public MethodInterceptor exceptionLoggingAdvice() {
    return invocation -> {
        try {
            return invocation.proceed();
        }
        catch (Exception thrown) {
            // log it
            throw thrown;
        }
    };
}
EDIT
@SpringBootApplication
public class So57224614Application {

    public static void main(String[] args) {
        SpringApplication.run(So57224614Application.class, args);
    }

    @Bean
    public IntegrationFlow flow(MethodInterceptor myAdvice) {
        return IntegrationFlows.from(() -> "foo", endpoint -> endpoint.poller(Pollers.fixedDelay(5000)))
                .handle("crasher", "crash", endpoint -> endpoint.advice(myAdvice))
                .get();
    }

    @Bean
    public MethodInterceptor myAdvice() {
        return invocation -> {
            try {
                return invocation.proceed();
            }
            catch (Exception e) {
                System.out.println("Failed with " + e.getMessage());
                throw e;
            }
        };
    }
}

@Component
class Crasher {

    public void crash(Message<?> msg) {
        throw new RuntimeException("test");
    }
}
and
Failed with nested exception is java.lang.RuntimeException: test
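Applied to the Pub/Sub flow in the question, the advice would go on the endpoint that consumes from cardInputChannel; a rough sketch (the "cardService"/"process" names are placeholders, not beans from the question):

@Bean
public IntegrationFlow cardFlow(DirectChannel cardInputChannel, MethodInterceptor myAdvice) {
    return IntegrationFlows.from(cardInputChannel)
            // "cardService" and "process" stand in for one of the question's Function-based handlers
            .handle("cardService", "process", e -> e.advice(myAdvice))
            .get();
}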
I have implemented a RESTful web interface using Jersey for sending messages received from an internal JMS publisher to external clients via HTTP. I have managed to get a test message out to a Java client, but the thread throws a NullPointerException before completing the write() execution, closing the connection and preventing further communication.
Here is my resource class:
@GET
@Path("/stream_data")
@Produces(SseFeature.SERVER_SENT_EVENTS)
public EventOutput getServerSentEvents(@Context ServletContext context) {
    final EventOutput eventOutput = new EventOutput();
    new Thread(new ObserverThread(eventOutput, (MService) context.getAttribute("instance"))).start();
    return eventOutput;
}
And here is my thread's run method:
public class ObserverThread implements Observer, Runnable {

    // constructor sets eventOutput & mService objects
    // mService notifyObservers() called when JMS message received
    // text added to Thread's message queue to await sending to client

    public void run() {
        try {
            String message = "{'symbol':'test','entryType'='0','price'='test'}";
            Thread.sleep(1000);
            OutboundEvent.Builder builder = new OutboundEvent.Builder();
            builder.mediaType(MediaType.APPLICATION_JSON_TYPE);
            builder.data(String.class, message);
            OutboundEvent event = builder.build();
            eventOutput.write(event);
            System.out.println(">>>>>>SSE CLIENT HAS BEEN REGISTERED!");
            mService.addObserver(this);
            while (!eventOutput.isClosed()) {
                if (!updatesQ.isEmpty()) {
                    pushUpdate(updatesQ.dequeue());
                }
            }
            System.out.println("<<<<<<<SSE CLIENT HAS BEEN DEREGISTERED!");
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
Here is my client code:
Client client = ClientBuilder.newBuilder().register(SseFeature.class).build();
WebTarget target = client.target(url);
EventInput eventInput = target.request().get(EventInput.class);
try {
    while (!eventInput.isClosed()) {
        eventInput.setChunkType(MediaType.WILDCARD_TYPE);
        final InboundEvent inboundEvent = eventInput.read();
        if (inboundEvent != null) {
            String theString = inboundEvent.readData();
            System.out.println(theString + "\n");
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
I am getting the "{'symbol':'test','entryType'='0','price'='test'}" test message printed to the client console, but the server then prints a NullPointerException before it can print the ">>>>SSE Client registered" message. This closes the connection, so the client exits the while loop and stops listening for updates.
I converted the project to a webapp 3.0 version facet in order to add an async-supported tag to the web.xml, but I am receiving the same null pointer error. I am inclined to think that it is caused by the servlet ending the Request/Response objects once the first message is returned; evidence is shown in the stack trace:
Exception in thread "Thread-20" java.lang.NullPointerException
at org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:741)
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:434)
at org.apache.coyote.http11.InternalOutputBuffer.flush(InternalOutputBuffer.java:299)
at org.apache.coyote.http11.Http11Processor.action(Http11Processor.java:981)
at org.apache.coyote.Response.action(Response.java:183)
at org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:314)
at org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:288)
at org.apache.catalina.connector.CoyoteOutputStream.flush(CoyoteOutputStream.java:98)
at org.glassfish.jersey.message.internal.CommittingOutputStream.flush(CommittingOutputStream.java:292)
at org.glassfish.jersey.server.ChunkedOutput$1.call(ChunkedOutput.java:241)
at org.glassfish.jersey.server.ChunkedOutput$1.call(ChunkedOutput.java:192)
at org.glassfish.jersey.internal.Errors.process(Errors.java:315)
at org.glassfish.jersey.internal.Errors.process(Errors.java:242)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:345)
at org.glassfish.jersey.server.ChunkedOutput.flushQueue(ChunkedOutput.java:192)
at org.glassfish.jersey.server.ChunkedOutput.write(ChunkedOutput.java:182)
at com.bpc.services.service.ObserverThread.run(MarketObserverThread.java:32)
at java.lang.Thread.run(Thread.java:745)
<<<<<<<SSE CLIENT HAS BEEN DEREGISTERED!
I have attempted to test an SSE broadcaster as well. In that case I am not seeing any exceptions thrown, but the connection is closed once the first message has been received, leading me to believe something in the servlet is forcing the connection to close. Can anyone advise me on how to debug this on the server side?
I had a similar issue from what seems to be a long-standing bug in Jersey's @Context injection for ExecutorService instances. In their current implementation of Sse (version 2.27),
class JerseySse implements Sse {

    @Context
    private ExecutorService executorService;

    @Override
    public OutboundSseEvent.Builder newEventBuilder() {
        return new OutboundEvent.Builder();
    }

    @Override
    public SseBroadcaster newBroadcaster() {
        return new JerseySseBroadcaster(executorService);
    }
}
the executorService field is never initialized, so the JerseySseBroadcaster raises a NullPointerException in my case. I worked around the bug by explicitly triggering the injection.
If you're using HK2 for CDI (Jersey's default), a rough sketch of a solution to the question above could look similar to the following:
@Singleton
@Path("...")
public class JmsPublisher {

    private Sse sse;
    private SseBroadcaster broadcaster;

    private final ExecutorService executor;
    private final BlockingQueue<String> jmsMessageQueue;

    ...

    @Context
    public void setSse(Sse sse, ServiceLocator locator) {
        locator.inject(sse); // Inject sse.executorService
        this.sse = sse;
        this.broadcaster = sse.newBroadcaster();
    }

    ...

    @GET
    @Path("/stream_data")
    @Produces(MediaType.SERVER_SENT_EVENTS)
    public void register(SseEventSink eventSink) {
        broadcaster.register(eventSink);
    }

    ...

    @PostConstruct
    private void postConstruct() {
        executor.submit(() -> {
            try {
                while (true) {
                    String message = jmsMessageQueue.take();
                    broadcaster.broadcast(sse.newEventBuilder()
                            .mediaType(MediaType.APPLICATION_JSON_TYPE)
                            .data(String.class, message)
                            .build());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }

    @PreDestroy
    private void preDestroy() {
        executor.shutdownNow();
    }
}