How to handle Kafka producer exceptions - Java

I am trying to understand how Spring Boot's KafkaTemplate works with an async producer and how to handle exceptions. I want to handle all kinds of errors, including network errors. I tried the retry configs, but the producer retries more times than the number I provided:
@Service
public class UserInfoService {

    private static final Logger LOGGER = LoggerFactory.getLogger(UserInfoService.class);

    @Autowired
    private KafkaTemplate kafkaTemplate;

    public void sendUserInfo(UserInfo data) {
        final ProducerRecord<String, UserInfo> record = new ProducerRecord<>("usr-test-data", "test-app", data);
        try {
            ListenableFuture<SendResult<String, UserInfo>> future = kafkaTemplate.send(record);
            future.addCallback(new ListenableFutureCallback<SendResult<String, UserInfo>>() {
                @Override
                public void onFailure(Throwable ex) {
                    handleFailure(ex);
                }

                @Override
                public void onSuccess(SendResult<String, UserInfo> result) {
                    handleSuccess(result);
                }
            });
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private void handleSuccess(SendResult<String, UserInfo> result) {
        LOGGER.info("Message sent successfully with offset: {}", result.getRecordMetadata().offset());
    }

    private void handleFailure(Throwable ex) {
        LOGGER.info("Unable to send message- Error: {}", ex.getMessage());
    }
}
I tried to limit the number of retries with configProps.put(ProducerConfig.RETRIES_CONFIG, "3");, hoping the producer would eventually throw an exception that I could catch. But it still retries more than 3 times, so this seems not to be working. Here is my complete config class:
@Configuration
public class KafkaProducerConfig {

    @Value(value = "${spring.kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        // Note: the original used ENABLE_IDEMPOTENCE_DOC, which is the documentation
        // string, not the config key; ENABLE_IDEMPOTENCE_CONFIG is the correct constant.
        configProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        configProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, CountingProducerInterceptor.class.getName());
        configProps.put(ProducerConfig.ACKS_CONFIG, "all");
        configProps.put(ProducerConfig.RETRIES_CONFIG, "3");
        configProps.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 10000);
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
I would like to know what I can catch in future.onFailure and in the outer try-catch block.

A Future does not complete within the lifecycle of a try-catch: the surrounding try block only catches exceptions that send() throws synchronously (for example, serialization failures or a full buffer). Asynchronous failures such as network or broker errors arrive later on the callback thread, so you need to handle (or rethrow) the exception within the body of onFailure.
In my experience, Kafka network errors cannot easily be caught in the calling thread.
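If the caller genuinely needs to observe a send failure synchronously, one option is to block on the returned future with a timeout. This is only a sketch (the 30-second timeout is illustrative, blocking gives up the producer's asynchrony, and the usual java.util.concurrent imports are assumed), reusing the service from the question:
public void sendUserInfoBlocking(UserInfo data) {
    try {
        // Blocks until the broker acks or the send fails; asynchronous errors
        // (network, broker, delivery timeout) surface here as ExecutionException.
        SendResult<String, UserInfo> result = kafkaTemplate
                .send(new ProducerRecord<String, UserInfo>("usr-test-data", "test-app", data))
                .get(30, TimeUnit.SECONDS);
        LOGGER.info("Message sent successfully with offset: {}", result.getRecordMetadata().offset());
    } catch (ExecutionException e) {
        handleFailure(e.getCause()); // the real producer exception is the cause
    } catch (TimeoutException | InterruptedException e) {
        handleFailure(e);
    }
}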

Related

kafka multi-threaded consumer with manual commit offset: KafkaConsumer is not safe for multi-threaded access

I use an ArrayBlockingQueue to decouple the Kafka consumers from the sinks:
Kafka is consumed by multiple threads, one KafkaConsumer per thread;
each Kafka consumer manages its offsets manually;
the Kafka consumer wraps the message content and a callback function containing the offset into a Record object and puts it on the ArrayBlockingQueue;
the sink takes records from the ArrayBlockingQueue and processes them; only after the sink successfully processes a record does it invoke the record's callback (notifying the Kafka consumer to commitSync).
While running this I hit an error that puzzled me for several days; I don't understand which part is wrong:
11:44:10.794 [pool-2-thread-1] ERROR com.alibaba.kafka.source.KafkaConsumerRunner - [pool-2-thread-1] ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access
java.util.ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access
at org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:1824)
at org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:1808)
at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1255)
at com.alibaba.kafka.source.KafkaConsumerRunner$1.call(KafkaConsumerRunner.java:75)
at com.alibaba.kafka.source.KafkaConsumerRunner$1.call(KafkaConsumerRunner.java:71)
at com.alibaba.kafka.sink.Sink.run(Sink.java:25)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Source Code:
Queues.java
public class Queues {

    public static volatile BlockingQueue[] queues;

    /**
     * Create multiple queues.
     * @param count    The number of queues created.
     * @param capacity The capacity of each queue.
     */
    public static void createQueues(final int count, final int capacity) {
        Queues.queues = new BlockingQueue[count];
        for (int i = 0; i < count; ++i) {
            Queues.queues[i] = new ArrayBlockingQueue(capacity, true);
        }
    }
}
Record
@Builder
@Getter
public class Record {
    private final String value;
    private final Callable<Boolean> ackCallback;
}
Sink.java
public class Sink implements Runnable {

    private final int queueId;

    public Sink(int queueId) {
        this.queueId = queueId;
    }

    @Override
    public void run() {
        while (true) {
            try {
                Record record = (Record) Queues.queues[this.queueId].take();
                // (1) Handler: write to database
                Thread.sleep(10);
                // (2) ACK: notify the Kafka consumer to commit the offset manually
                record.getAckCallback().call();
            } catch (Exception e) {
                e.printStackTrace();
                System.exit(1);
            }
        }
    }
}
KafkaConsumerRunner
@Slf4j
public class KafkaConsumerRunner implements Runnable {

    private final String topic;
    private final KafkaConsumer<String, String> consumer;

    public KafkaConsumerRunner(String topic, Properties properties) {
        this.topic = topic;
        this.consumer = new KafkaConsumer<>(properties);
    }

    @Override
    public void run() {
        // offsets to commit
        Map<TopicPartition, OffsetAndMetadata> offsetsToCommit = new HashMap<>();
        // subscribe to the topic
        this.consumer.subscribe(Collections.singletonList(this.topic));
        // consume Kafka messages
        while (true) {
            try {
                ConsumerRecords<String, String> consumerRecords = this.consumer.poll(10000L);
                for (TopicPartition topicPartition : consumerRecords.partitions()) {
                    for (ConsumerRecord<String, String> consumerRecord : consumerRecords.records(topicPartition)) {
                        // (1) Store the [partition -> offset] mapping
                        offsetsToCommit.put(topicPartition, new OffsetAndMetadata(consumerRecord.offset()));
                        // (2) Put the record into the queue
                        int queueId = topicPartition.partition() % Queues.queues.length;
                        Queues.queues[queueId].put(Record.builder()
                                .value(consumerRecord.value())
                                .ackCallback(this.getAckCallback(offsetsToCommit))
                                .build());
                    }
                }
            } catch (ConcurrentModificationException | InterruptedException e) {
                log.error("[{}] {}", Thread.currentThread().getName(), ExceptionUtils.getMessage(e), e);
                System.exit(1);
            }
        }
    }

    private Callable<Boolean> getAckCallback(Map<TopicPartition, OffsetAndMetadata> offsets) {
        return new AckCallback<Boolean>(this.consumer, new HashMap<>(offsets)) {
            @Override
            public Boolean call() throws Exception {
                try {
                    this.getConsumer().commitSync(this.getOffsets());
                    return true;
                } catch (Exception e) {
                    log.error(String.format("[%s] %s", Thread.currentThread().getName(), ExceptionUtils.getMessage(e)), e);
                    return false;
                }
            }
        };
    }

    @Getter
    @AllArgsConstructor
    abstract class AckCallback<T> implements Callable<T> {
        private final KafkaConsumer<String, String> consumer;
        private final Map<TopicPartition, OffsetAndMetadata> offsets;
    }
}
Application.java
public class Application {

    private static final String TOPIC = "YEWEI_TOPIC";
    private static final int QUEUE_COUNT = 1;
    private static final int QUEUE_CAPACITY = 4;

    private static void createQueues() {
        Queues.createQueues(QUEUE_COUNT, QUEUE_CAPACITY);
    }

    private static void startupSource() {
        if (null == System.getProperty("java.security.auth.login.config")) {
            System.setProperty("java.security.auth.login.config", "jaas.conf");
        }
        Properties properties = new Properties();
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "ConsumerGroup1");
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "cdh1:9092,cdh2:9092,cdh3:9092");
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
        properties.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 2);
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        properties.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        properties.put(SaslConfigs.SASL_MECHANISM, "PLAIN");

        ExecutorService executorService = Executors.newFixedThreadPool(QUEUE_COUNT);
        for (int queueId = 0; queueId < QUEUE_COUNT; ++queueId) {
            executorService.execute(new KafkaConsumerRunner(TOPIC, properties));
        }
    }

    private static void startupSinks() {
        ExecutorService executorService = Executors.newFixedThreadPool(QUEUE_COUNT);
        for (int queueId = 0; queueId < QUEUE_COUNT; ++queueId) {
            executorService.execute(new Sink(queueId));
        }
    }

    public static void main(String[] args) {
        Application.createQueues();
        Application.startupSource();
        Application.startupSinks();
    }
}
I figured out the problem. The Kafka consumer runs in its own thread but was also being called back from the Sink thread. KafkaConsumer's poll and commitSync methods may only be called from the single thread that owns the consumer; see org.apache.kafka.clients.consumer.KafkaConsumer#acquireAndEnsureOpen.
The fix: the Sink callback no longer uses the consumer object directly; instead it puts the ACK onto a LinkedTransferQueue. KafkaConsumerRunner drains the LinkedTransferQueue on each loop iteration and commits the ACKs in batches:
@Slf4j
public class KafkaConsumerRunner implements Runnable {

    private final String topic;
    private final BlockingQueue ackQueue;
    private final KafkaConsumer<String, String> consumer;

    public KafkaConsumerRunner(String topic, Properties properties) {
        this.topic = topic;
        this.ackQueue = new LinkedTransferQueue<Map<TopicPartition, OffsetAndMetadata>>();
        this.consumer = new KafkaConsumer<>(properties);
    }

    @Override
    public void run() {
        // subscribe to the topic
        this.consumer.subscribe(Collections.singletonList(this.topic));
        // consume Kafka messages
        while (true) {
            // commit any ACKs queued by the Sink threads, from the consumer's own thread
            while (!this.ackQueue.isEmpty()) {
                try {
                    Map<TopicPartition, OffsetAndMetadata> offsets = (Map<TopicPartition, OffsetAndMetadata>) this.ackQueue.take();
                    this.consumer.commitSync(offsets);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            ...
        }
    }

    private Callable<Boolean> getAckCallback(Map<TopicPartition, OffsetAndMetadata> offsets) {
        return new AckCallback<Boolean>(new HashMap<>(offsets)) {
            @Override
            public Boolean call() throws Exception {
                try {
                    ackQueue.put(offsets);
                    return true;
                } catch (Exception e) {
                    log.error(String.format("[%s] %s", Thread.currentThread().getName(), ExceptionUtils.getMessage(e)), e);
                    System.exit(1);
                    return false;
                }
            }
        };
    }

    ...
}
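One detail worth noting in both versions (my observation, not from the original post): commitSync expects the offset of the next record to be consumed, so the map should be populated with offset + 1; otherwise the last processed record of each partition is re-read after a restart:
// the committed offset should point one past the record just processed
offsetsToCommit.put(topicPartition, new OffsetAndMetadata(consumerRecord.offset() + 1));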

In Java, How to close Kafka connection manually?

My code is in Java + Spring Boot:
@Autowired
private KafkaTemplate<String, String> kafkaTemplate;

public void produce(String message) {
    logger.info("Producer : Kafka Topic -> {}, Kafka Message -> {}", TOPIC, message);
    kafkaTemplate.send(TOPIC, message);
}

@KafkaListener(topics = TOPIC, groupId = GROUP_ID)
public void consume(String message) {
    System.out.println("Kafka consume value ->" + message);
    logger.info("Consumer : Kafka Message -> {}", message);
    try {
        setKafkaStatus(Integer.parseInt(message.trim()));
    } catch (Exception e) {
        logger.info("Kafka message is not Integer");
        setKafkaStatus(0);
    }
}

public void closeConnection() {
    // code for closing the connection
}
@Autowired
private KafkaListenerEndpointRegistry registry;

public void closeConnection() {
    this.registry.stop();
}
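registry.stop() stops every @KafkaListener container the registry manages, which shuts down the underlying consumer threads and closes their connections. If you only want to stop a single listener, you can give it an id and stop just that container; a sketch, where the id "myListener" is illustrative:
@KafkaListener(id = "myListener", topics = TOPIC, groupId = GROUP_ID)
public void consume(String message) {
    ...
}

public void closeOneListener() {
    // stops only this listener's container (it can later be restarted with start())
    this.registry.getListenerContainer("myListener").stop();
}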

Unable to catch thrown exception from async method in Spring

I am unable to catch exceptions thrown from an async method in Spring. I wrote an uncaught exception handler to catch them, but was unsuccessful.
The application can start any number of forever-running asynchronous jobs.
I think my async method needs to return a Future so that I can store it in a HashMap and later check a job's status or stop it; storing the futures also lets me list all running jobs.
I don't think I can use the future's get method, because if the input is correct it blocks and my job runs forever; I need to respond with a "started" status as soon as the input is fine. Whenever an exception occurs in the async method it is thrown, but I am unable to catch it. How can I do that?
Here is my complete code.
Application.java
@SpringBootApplication
@EnableAsync
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
AsyncConfig.java
@EnableAsync
@Configuration
public class AsyncConfig implements AsyncConfigurer {

    @Override
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);
        executor.setMaxPoolSize(5);
        executor.setQueueCapacity(100);
        executor.setThreadNamePrefix("MyExecutor-");
        executor.initialize();
        return executor;
    }

    @Override
    public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
        return new AsyncExceptionHandler();
    }
}
AsyncExceptionHandler.java
public class AsyncExceptionHandler implements AsyncUncaughtExceptionHandler {
    @Override
    public void handleUncaughtException(Throwable throwable, Method method, Object... obj) {
        System.out.println("Exception Cause - " + throwable.getMessage());
        System.out.println("Method name - " + method.getName());
        for (Object param : obj) {
            System.out.println("Parameter value - " + param);
        }
    }
}
createBucket.java
@Service
public class createBucket {

    @Async
    public Future<String> start(String config) {
        try {
            JSONObject map = new JSONObject(config);
            Jedis jedis = new Jedis(map.getString("jedisip"));
            jedis.auth(map.getString("password"));
            // code to make a Kafka consumer subscribe to a topic given in the config input
            while (true) {
                // forever-running code which polls using a Kafka consumer
            }
        } catch (JedisException j) {
            throw new JedisException("Some msg");
        }
    }
}
Endpoint.java
@Controller
public class Endpoint {

    @Autowired
    private createBucket service;

    private Future<String> out;
    private HashMap<String, Future<String>> maps = new HashMap<>();

    @PostMapping(value = "/start", consumes = "application/json", produces = "application/json")
    public ResponseEntity<String> starttask(@RequestBody String conf) {
        try {
            out = service.start(conf);
            maps.put(conf, out);
        } catch (Exception e) {
            return new ResponseEntity<>("exception", HttpStatus.BAD_REQUEST);
        }
        return new ResponseEntity<>("{\"started\":\"true\"}", HttpStatus.CREATED);
    }
}
As stated in the official documentation, AsyncUncaughtExceptionHandler is only invoked for async methods with a void return value:
https://docs.spring.io/spring/docs/5.1.10.RELEASE/spring-framework-reference/integration.html#spring-integration
In your scenario, I recommend using CompletableFuture and DeferredResult:
@Async
public CompletableFuture<String> start(String config) {
    CompletableFuture<String> completableFuture = new CompletableFuture<>();
    try {
        JSONObject map = new JSONObject(config);
        Jedis jedis = new Jedis(map.getString("jedisip"));
        jedis.auth(map.getString("password"));
        completableFuture.complete("started!");
    } catch (JedisException j) {
        completableFuture.completeExceptionally(j);
    }
    return completableFuture;
}

@PostMapping(value = "/start", consumes = "application/json", produces = "application/json")
public DeferredResult<ResponseEntity> starttask(@RequestBody String conf) {
    CompletableFuture<String> start = service.start(conf);
    DeferredResult<ResponseEntity> deferredResult = new DeferredResult<>();
    start.whenComplete((res, ex) -> {
        if (ex == null) {
            ResponseEntity<String> successEntity = new ResponseEntity<>("{\"started\":\"true\"}", HttpStatus.CREATED);
            deferredResult.setResult(successEntity);
        } else {
            // handle ex here!
            ResponseEntity<String> exEntity = new ResponseEntity<>("exception", HttpStatus.BAD_REQUEST);
            deferredResult.setResult(exEntity);
        }
    });
    return deferredResult;
}
There is another serious problem: the following code is not thread safe, because a single controller instance is shared across concurrent requests.
private Future<String> out;
private HashMap<String, Future<String>> maps = new HashMap<>();
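A minimal thread-safe alternative (a sketch, keeping the question's choice of the raw config string as the map key): drop the shared out field entirely and store the future straight into a ConcurrentHashMap.
private final Map<String, Future<String>> maps = new ConcurrentHashMap<>();

@PostMapping(value = "/start", consumes = "application/json", produces = "application/json")
public ResponseEntity<String> starttask(@RequestBody String conf) {
    // no shared 'out' field: concurrent requests each write their own entry
    maps.put(conf, service.start(conf));
    return new ResponseEntity<>("{\"started\":\"true\"}", HttpStatus.CREATED);
}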

Spring JMS onException retry

I have a requirement where I need to send an email, but if the email server is down, or any error occurs while sending, the send needs to be retried a specific number of times.
Below are my bean definitions:
@Bean(destroyMethod = "")
public JndiTemplate jndiTemplate() {
    Properties environment = new Properties();
    environment.put(Context.INITIAL_CONTEXT_FACTORY, env.getProperty("XXXXXX"));
    environment.put(Context.PROVIDER_URL, env.getProperty("XXXXXX"));
    JndiTemplate jndiTemplate = new JndiTemplate();
    jndiTemplate.setEnvironment(environment);
    return jndiTemplate;
}

@Bean(destroyMethod = "")
public JndiObjectFactoryBean jmsConnFactory() {
    JndiObjectFactoryBean jmsConnFactory = new JndiObjectFactoryBean();
    jmsConnFactory.setJndiTemplate(jndiTemplate());
    jmsConnFactory.setJndiName(env.getProperty("XXXXX"));
    return jmsConnFactory;
}

@Bean(destroyMethod = "")
public JndiObjectFactoryBean jmsDestanation() {
    JndiObjectFactoryBean destination = new JndiObjectFactoryBean();
    destination.setJndiTemplate(jndiTemplate());
    destination.setJndiName(env.getProperty("XXXXXX"));
    return destination;
}

@Bean
public JmsTemplate jmsTemplate() {
    JmsTemplate jmsTemplate = new JmsTemplate();
    jmsTemplate.setDefaultDestination(jmsDestanation());
    jmsTemplate.setConnectionFactory(jmsConnFactory());
    return jmsTemplate;
}

@Bean
public JmsReceiver jmsReciver() {
    return new JmsReceiver();
}

@Bean
public JmsExceptionListener jmsExceptionListener() {
    return new JmsExceptionListener();
}

@Bean
public JmsErrorHandleListener jmsErrorHandleListener() {
    return new JmsErrorHandleListener();
}

@Bean
public DefaultMessageListenerContainer jmsQueueListner() {
    DefaultMessageListenerContainer listner = new DefaultMessageListenerContainer();
    listner.setDestination(jmsDestanation());
    listner.setConnectionFactory(jmsConnFactory());
    listner.setMessageListener(jmsReciver());
    listner.setExceptionListener(jmsExceptionListener());
    listner.setErrorHandler(jmsErrorHandleListener());
    return listner;
}
And below are my listener class and error-handler class:
public class JmsReceiver implements MessageListener {

    @Autowired
    JavaMailSender jMailsender;

    @Override
    public void onMessage(Message message) {
        TextMessage text = (TextMessage) message;
        ObjectMapper objectMapper = new ObjectMapper();
        MimeMessage mimeMessage = jMailsender.createMimeMessage();
        try {
            JmsMessage inMessage = objectMapper.readValue(text.getText(), JmsMessage.class);
            // this is failing and goes to the JmsErrorHandleListener
            jMailsender.send(mimeMessage);
        } catch (JMSException | IOException | MessagingException ex) {
            logger.error("Exception on reading message ", ex);
        }
    }
}

public class JmsErrorHandleListener implements ErrorHandler {
    @Override
    public void handleError(Throwable t) {
        // not sure how to retry from here, because the message was already read;
        // somehow I need to inform WebLogic that this message was not consumed yet
    }
}
When the message arrives at onMessage it throws an error and then the JmsErrorHandleListener executes, but since the message has already been consumed, I am not sure how to call the send method again.
Try the config below. Spring's DefaultMessageListenerContainer (DMLC) uses exceptions to retry the MessageListener execution: if jMailsender.send(mimeMessage) fails, JmsReceiver.onMessage will be retried 5 seconds later, indefinitely; see the DMLC backOff property.
@Bean
public org.springframework.jms.listener.adapter.MessageListenerAdapter jmsReciver() {
    return new org.springframework.jms.listener.adapter.MessageListenerAdapter(receiver());
}

@Bean
public JmsReceiver receiver() {
    return new JmsReceiver();
}

public class JmsReceiver {

    @Autowired
    JavaMailSender jMailsender;

    // no @Override: this POJO no longer implements MessageListener;
    // the MessageListenerAdapter delegates to it
    public void onMessage(Message message) throws JMSException {
        TextMessage text = (TextMessage) message;
        ObjectMapper objectMapper = new ObjectMapper();
        MimeMessage mimeMessage = jMailsender.createMimeMessage();
        try {
            JmsMessage inMessage = objectMapper.readValue(text.getText(), JmsMessage.class);
            jMailsender.send(mimeMessage);
        } catch (Throwable ex) {
            logger.error("Exception on sending message ", ex);
            // rethrow so the container sees the failure and triggers redelivery
            throw new JMSException(ex.getMessage());
        }
    }
}
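For the rethrown exception to trigger redelivery, the container's session must be transacted so that the failed message is rolled back to the broker (which then controls the redelivery delay and any retry limit). A sketch reusing the question's bean names; setSessionTransacted(true) is my assumption about the intended setup:
@Bean
public DefaultMessageListenerContainer jmsQueueListner() {
    DefaultMessageListenerContainer listner = new DefaultMessageListenerContainer();
    listner.setDestination(jmsDestanation());
    listner.setConnectionFactory(jmsConnFactory());
    listner.setMessageListener(jmsReciver());
    // roll back the JMS session when onMessage throws, so the broker redelivers
    listner.setSessionTransacted(true);
    return listner;
}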

single instance of java.nio AsynchronousSocketChannel for multiple read and write

I read in the documentation that AsynchronousSocketChannel is thread safe, so a single instance can safely be shared by multiple threads, but when I try to implement this single-instance concept (in a client-side application) I cannot get the write() method to send data to the server.
Previously I had success by calling shutdownOutput() or close() on the channel after calling write(byteBuffer, attachment, completionHandler). But when I use only a single instance, without calling close() or shutdownOutput(), the message never reaches the server (I can see this in the server log).
Do we need to close the channel to make the message reach the server? I use Spring Boot to build this project.
Here is my code:
@Component
public class AgentStatusService {

    private static final Logger log = LoggerFactory.getLogger(AgentStatusService.class);

    @Autowired
    private SocketAddress serverAddress;

    @Autowired
    private AsynchronousSocketChannel channel;

    public void consumeMessage() throws IOException {
        try {
            log.info("trying to connect to {}", serverAddress.toString());
            channel.connect(serverAddress, channel, new SocketConnectCompletionHandler());
            log.info("success connect to {}", channel.getRemoteAddress());
        } catch (final AlreadyConnectedException ex) {
            final ByteBuffer writeBuffer = ByteBuffer.wrap("__POP ".getBytes());
            final Map<String, Object> attachment = new HashMap<>();
            attachment.put("buffer", writeBuffer);
            attachment.put("channel", channel);
            writeBuffer.flip();
            channel.write(writeBuffer, attachment, new SocketWriteCompletionHandler());
        } catch (final Exception e) {
            log.error("an error occured with message : {}", e.getMessage());
            e.printStackTrace();
        }
    }
}
This is my socket connect completion handler class:
public class SocketConnectCompletionHandler
        implements CompletionHandler<Void, AsynchronousSocketChannel> {

    private static Logger log = LoggerFactory.getLogger(SocketConnectCompletionHandler.class);

    @Override
    public void completed(Void result, AsynchronousSocketChannel channel) {
        try {
            log.info("connection to {} established", channel.getRemoteAddress());
            final ByteBuffer writeBuffer = ByteBuffer.wrap("__POP ".getBytes());
            final Map<String, Object> attachment = new HashMap<>();
            attachment.put("buffer", writeBuffer);
            attachment.put("channel", channel);
            writeBuffer.flip();
            channel.write(writeBuffer, attachment, new SocketWriteCompletionHandler());
        } catch (final IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void failed(Throwable exc, AsynchronousSocketChannel attachment) {
        exc.printStackTrace();
        try {
            log.error("connection to {} was failed", attachment.getRemoteAddress());
        } catch (final Exception e) {
            log.error("error occured with message : {}", e.getCause());
        }
    }
}
This is my socket write completion handler class:
public class SocketWriteCompletionHandler
        implements CompletionHandler<Integer, Map<String, Object>> {

    private static final Logger log = LoggerFactory.getLogger(SocketWriteCompletionHandler.class);

    @Override
    public void completed(Integer result, Map<String, Object> attachment) {
        try {
            final AsynchronousSocketChannel channel = (AsynchronousSocketChannel) attachment.get("channel");
            final ByteBuffer buffer = (ByteBuffer) attachment.get("buffer");
            log.info("write {} request to : {}", new String(buffer.array()), channel.getRemoteAddress());
            buffer.clear();
            readResponse(channel, buffer);
        } catch (final Exception ex) {
            ex.printStackTrace();
            log.error("an error occured with message : {}", ex.getMessage());
        }
    }

    @Override
    public void failed(Throwable exc, Map<String, Object> attachment) {
        log.error("an error occured : {}", exc.getMessage());
    }

    public void readResponse(AsynchronousSocketChannel channel, ByteBuffer writeBuffer) {
        final ByteBuffer readBuffer = ByteBuffer.allocate(2 * 1024);
        final Map<String, Object> attachment = new HashMap<>();
        attachment.put("writeBuffer", writeBuffer);
        attachment.put("readBuffer", readBuffer);
        attachment.put("channel", channel);
        readBuffer.flip();
        channel.read(readBuffer, attachment, new SocketReadCompletionHandler());
    }
}
If the server thinks it didn't receive the message, yet it did receive it when you were previously shutting down or closing the socket, then the server must be trying to read to end of stream; it blocks, or at least never completes its read, and so never logs anything.
Why you are using multiple threads in conjunction with asynchronous I/O, or indeed with any socket, remains a mystery.
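If the intent is for the server to recognize complete messages without the client ever closing the stream, the usual fix is application-level framing. A sketch (the 4-byte length prefix and the helper name are illustrative, not part of the original protocol):
// frame each message with a length prefix so the server can read a complete
// message without waiting for end-of-stream
static ByteBuffer frame(final String payload) {
    final byte[] body = payload.getBytes(StandardCharsets.UTF_8);
    final ByteBuffer buf = ByteBuffer.allocate(4 + body.length);
    buf.putInt(body.length);
    buf.put(body);
    buf.flip(); // correct here: the buffer was just written to, flip it before write()
    return buf;
}

// usage: channel.write(frame("__POP "), attachment, new SocketWriteCompletionHandler());

As an aside, note that flipping a buffer created with ByteBuffer.wrap(...), as the question's code does, sets its limit to 0 so the write sends zero bytes; wrap() already returns a buffer ready for reading.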
