How to solve network and memory issues in Kafka brokers? - java

While using Kafka, I intermittently get two network-related errors:
1. Error in fetch kafka.server.ReplicaFetcherThread$FetchRequest: Connection to broker was disconnected before the response was read
2. Error in fetch kafka.server.ReplicaFetcherThread$FetchRequest: Connection to broker1 (id: 1 rack: null) failed
[Configuration environment]
Brokers: 5 (server.properties: "kafka_manager_heap_s=1g", "kafka_manager_heap_x=1g", "offsets.commit.required.acks=1", "offsets.commit.timeout.ms=5000"; most settings are the defaults)
Zookeepers: 3
Servers: 5
Kafka: 0.10.1.2
Zookeeper: 3.4.6
Both of these errors are caused by a loss of network communication.
When these errors occur, Kafka expands or shrinks the ISR for the affected partition several times:
expanding-ex) INFO Partition [my-topic,7] on broker 1: Expanding ISR for partition [my-topic,7] from 1,2 to 1,2,3
shrinking-ex) INFO Partition [my-topic,7] on broker 1: Shrinking ISR for partition [my-topic,7] from 1,2,3 to 1,2
I understand that these errors are caused by network problems, but I am not sure why the network drops are happening in the first place.
And if the disconnection persists, I get the following additional error:
Error when handling request (topics=null) java.lang.OutOfMemoryError: Java heap space
What causes these errors, and how can I improve the situation?

The network errors tell you that one of the brokers is unreachable, which means the fetcher cannot connect to it. In my experience, the minimum heap size you should assign to a broker is 2 GB.
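If the brokers are also hitting OutOfMemoryError, raising the broker heap is the usual first step. A minimal sketch, assuming the stock Kafka startup scripts; the paths and the 2 GB value are illustrative, not taken from the original post:
# Assumption: default Kafka distribution layout; adjust paths to your installation.
export KAFKA_HEAP_OPTS="-Xms2g -Xmx2g"
bin/kafka-server-start.sh config/server.properties
The broker heap is set through the KAFKA_HEAP_OPTS environment variable read by the startup scripts, not through server.properties.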

Related

How to manage RecordTooLargeException avoiding Flink job restarting

Is there any way to ignore oversized messages without the Flink job restarting?
If I try to produce (using KafkaSink) a message that is too large (greater than max.message.bytes), a RecordTooLargeException occurs, the Flink job restarts, and this "exception & restart" cycle repeats endlessly!
I don't need to increase the size limits such as max.message.bytes (Kafka topic config) and max.request.size (Flink producer config); they are fine, they are already big. I just want to handle the situation when an unrealistically large message is about to be produced. In that case the big message should be ignored, an error should be logged, no runtime exception should be thrown, and the endless restart loop should NOT start.
I tried to use a ProducerInterceptor -> it cannot intercept/reject a message, it can only modify it.
I tried to ignore oversized messages in SerializationSchema (implemented a custom wrapper of SerializationSchema) -> it cannot discard the record from being produced either.
I am trying to override the KafkaWriter and KafkaSink classes, but that seems challenging.
I will be grateful for any advice!
A few quick environment details:
Kafka version is 2.8.1
Flink code is Java code based on the newer KafkaSource/KafkaSink API, not the older KafkaConsumer/KafkaProducer API.
The flink-clients and flink-connector-kafka version is 1.15.0
Code sample which throws the RecordTooLargeException:
int numberOfRows = 1;
int rowsPerSecond = 1;
DataStream<String> stream = environment.addSource(
        new DataGeneratorSource<>(
                RandomGenerator.stringGenerator(1050000), // max.message.bytes=1048588
                rowsPerSecond,
                (long) numberOfRows),
        TypeInformation.of(String.class))
        .setParallelism(1)
        .name("string-generator");

KafkaSinkBuilder<String> builder = KafkaSink.<String>builder()
        .setBootstrapServers("localhost:9092")
        .setDeliverGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
        .setRecordSerializer(
                KafkaRecordSerializationSchema.builder()
                        .setTopic("test.output")
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build());
KafkaSink<String> sink = builder.build();

stream.sinkTo(sink).setParallelism(1).name("output-producer");
Exception Stack Trace:
2022-06-02/14:01:45.066/PDT [flink-akka.actor.default-dispatcher-4] INFO output-producer: Writer -> output-producer: Committer (1/1) (a66beca5a05c1c27691f7b94ca6ac025) switched from RUNNING to FAILED on 271b1b90-7d6b-4a34-8116-3de6faa8a9bf @ 127.0.0.1 (dataPort=-1).
org.apache.flink.util.FlinkRuntimeException: Failed to send data to Kafka null with FlinkKafkaInternalProducer{transactionalId='null', inTransaction=false, closed=false}
    at org.apache.flink.connector.kafka.sink.KafkaWriter$WriterCallback.throwException(KafkaWriter.java:440) ~[flink-connector-kafka-1.15.0.jar:1.15.0]
    at org.apache.flink.connector.kafka.sink.KafkaWriter$WriterCallback.lambda$onCompletion$0(KafkaWriter.java:421) ~[flink-connector-kafka-1.15.0.jar:1.15.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50) ~[flink-streaming-java-1.15.0.jar:1.15.0]
    at org.apache.flink.streaming.runtime.tasks.mailbox.Mail.run(Mail.java:90) ~[flink-streaming-java-1.15.0.jar:1.15.0]
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMailsNonBlocking(MailboxProcessor.java:353) ~[flink-streaming-java-1.15.0.jar:1.15.0]
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.processMail(MailboxProcessor.java:317) ~[flink-streaming-java-1.15.0.jar:1.15.0]
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:201) ~[flink-streaming-java-1.15.0.jar:1.15.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:804) ~[flink-streaming-java-1.15.0.jar:1.15.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:753) ~[flink-streaming-java-1.15.0.jar:1.15.0]
    at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948) ~[flink-runtime-1.15.0.jar:1.15.0]
    at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:927) ~[flink-runtime-1.15.0.jar:1.15.0]
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741) ~[flink-runtime-1.15.0.jar:1.15.0]
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563) ~[flink-runtime-1.15.0.jar:1.15.0]
    at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_292]
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 1050088 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration.
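For illustration only (this is an assumption on my part, not something from the original post): one way to keep such records away from the sink is to pre-filter the stream by serialized size, using a threshold slightly below max.request.size, so the producer never sees them. Continuing the snippet above, with a made-up "oversize-filter" operator name and an assumed 1,000,000-byte limit:
import java.nio.charset.StandardCharsets;

// Hypothetical pre-filter: drop and log records whose serialized size would exceed
// the producer limit, so they never reach the KafkaSink. The threshold is an assumed
// value kept safely below max.request.size (1048576).
final int maxRecordBytes = 1_000_000;
stream.filter(value -> {
            int size = value.getBytes(StandardCharsets.UTF_8).length;
            if (size > maxRecordBytes) {
                // Use a proper logger or a side output in a real job.
                System.err.println("Dropping oversized record of " + size + " bytes");
                return false;
            }
            return true;
        })
        .name("oversize-filter")
        .sinkTo(sink)
        .setParallelism(1)
        .name("output-producer");
This does not answer how to reject inside the sink itself, but it keeps the RecordTooLargeException from ever being raised for stream elements of a known serialized size.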

Redisson client ; RedisTimeoutException issue

I am using a Google Cloud managed Redis cluster (v5) via Redisson (3.12.5).
The following is my single-server configuration in the YAML file:
singleServerConfig:
  idleConnectionTimeout: 10000
  connectTimeout: 10000
  timeout: 3000
  retryAttempts: 3
  retryInterval: 1500
  password: null
  subscriptionsPerConnection: 5
  clientName: null
  address: "redis://127.0.0.1:6379"
  subscriptionConnectionMinimumIdleSize: 1
  subscriptionConnectionPoolSize: 50
  connectionMinimumIdleSize: 40
  connectionPoolSize: 250
  database: 0
  dnsMonitoringInterval: 5000
threads: 0
nettyThreads: 0
codec: !<org.redisson.codec.JsonJacksonCodec> {}
I am getting the following exceptions when I increase the load on my application:
org.redisson.client.RedisTimeoutException: Unable to acquire connection! Increase connection pool size and/or retryInterval settings Node source: NodeSource
org.redisson.client.RedisTimeoutException: Command still hasn't been written into connection! Increase nettyThreads and/or retryInterval settings. Payload size in bytes: 34. Node source: NodeSource
It seems there is no issue on the Redis cluster side, so I think I need to tweak my client-side connection pool settings (mentioned above) to make this work.
Please suggest the changes I need to make to my configuration.
I am also curious whether I should close the Redis connection after making get/set calls. I have tried to find this out, but found nothing conclusive on how to close Redisson connections.
One last thing I want to ask: is there any mechanism to get Redis connection pool stats (active connections, idle connections, etc.) in Redisson?
Edit 1:
I have tried changing the following values in three different iterations:
Iteration 1:
idleConnectionTimeout: 30000
connectTimeout: 30000
timeout: 30000
Iteration 2:
nettyThreads: 0
Iteration 3:
connectionMinimumIdleSize: 100
connectionPoolSize: 750
I have tried these things but nothing has worked for me
Any help is appreciated.
Thanks in advance
Assuming you are getting low-memory alerts on your cache JVM, you may have to analyze the traffic and determine two things:
1. Too many parallel cache persist operations.
2. Huge chunks of data being persisted.
Both can be determined from the traffic on your server.
For case 1, configuring the pool size should solve your issue; for case 2, you may have to refactor your code to persist data in smaller chunks.
Also try setting nettyThreads to 64.
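A minimal sketch of applying that tuning through the Redisson API (an illustration, not the poster's code; the nettyThreads value comes from the suggestion above and the other values from the YAML in the question). Note that Redisson manages its own connection pool, so you do not close connections after individual get/set calls; you call shutdown() once when the application stops.
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonTuningSketch {
    public static void main(String[] args) {
        Config config = new Config();
        // Suggested tuning: more Netty threads for writing commands under load.
        config.setNettyThreads(64);
        config.useSingleServer()
                .setAddress("redis://127.0.0.1:6379")   // address from the YAML above
                .setConnectionMinimumIdleSize(40)
                .setConnectionPoolSize(250)
                .setRetryInterval(1500)
                .setTimeout(3000);

        RedissonClient client = Redisson.create(config);
        // get/set calls borrow and return pooled connections internally.
        client.getBucket("demo-key").set("value");
        // Close the pool once, on application shutdown.
        client.shutdown();
    }
}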

Azure App Service - Spring Boot - Hikari Errors

I have deployed a Spring Boot application on App Service that has a database-based queue of jobs.
Yesterday I performed a few scale-out and scale-in operations while the application was running, to see how it would behave.
At some point (not necessarily related to the scaling operations) the application started to throw Hikari errors.
com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Failed to validate connection org.postgresql.jdbc.PgConnection@1ae66f34 (This connection has been closed.). Possibly consider using a shorter maxLifetime value.
com.zaxxer.hikari.pool.ProxyConnection : HikariPool-1 - Connection org.postgresql.jdbc.PgConnection@1ef85079 marked as broken because of SQLSTATE(08006), ErrorCode(0)
The following are stack traces from my scheduled job in Spring, plus other information:
org.postgresql.util.PSQLException: An I/O error occurred while sending to the backend.
Caused by: javax.net.ssl.SSLException: Connection reset by peer (Write failed)
Suppressed: java.net.SocketException: Broken pipe (Write failed)
Caused by: java.net.SocketException: Connection reset by peer (Write failed)
Next the following stack of errors:
WARN 1 --- [ scheduling-1] com.zaxxer.hikari.pool.PoolBase : HikariPool-1 - Failed to validate connection org.postgresql.jdbc.PgConnection@48d0d6da (This connection has been closed.). Possibly consider using a shorter maxLifetime value.
org.springframework.jdbc.support.MetaDataAccessException: Error while extracting DatabaseMetaData; nested exception is java.sql.SQLException: Connection is closed
Caused by: java.sql.SQLException: Connection is closed
The code that is invoked periodically (every 500 milliseconds) is here:
@Scheduled(fixedDelayString = "${worker.delay}")
@Transactional
public void execute() {
    jobManager.next(jobClass).ifPresent(this::handleJob);
}
Update.
The above code does nothing almost all the time, since there was no traffic on the website.
Update 2. I've checked the Postgres logs and found this:
2020-07-11 22:48:09 UTC-5f0866f0.f0-LOG: checkpoint starting: immediate force wait
2020-07-11 22:48:10 UTC-5f0866f0.f0-LOG: checkpoint complete (240): wrote 30 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.046 s, sync=0.046 s, total=0.437 s; sync files=13, longest=0.009 s, average=0.003 s; distance=163 kB, estimate=13180 kB
2020-07-11 22:48:10 UTC-5f0866ee.68-LOG: received immediate shutdown request
2020-07-11 22:48:10 UTC-5f0a3f41.8914-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f0a3f41.8914-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
// Same text about 10 times
2020-07-11 22:48:10 UTC-5f0866f2.7c-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f0866ee.68-LOG: src/port/kill.c(84): Process (272) exited OOB of pgkill.
2020-07-11 22:48:10 UTC-5f0866f1.fc-WARNING: terminating connection because of crash of another server process
2020-07-11 22:48:10 UTC-5f0866f1.fc-DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2020-07-11 22:48:10 UTC-5f0866f1.fc-HINT: In a moment you should be able to reconnect to the database and repeat your command.
2020-07-11 22:48:10 UTC-5f0866ee.68-LOG: archiver process (PID 256) exited with exit code 1
2020-07-11 22:48:11 UTC-5f0866ee.68-LOG: database system is shut down
It looks like a problem with the Azure PostgreSQL server, and it shut itself down. Am I reading this right?
As mentioned in your logs, have you tried setting the maxLifetime property for HikariCP? I think the issue should be resolved once that property is set.
Based on Hikari doc (https://github.com/brettwooldridge/HikariCP) --
maxLifetime
This property controls the maximum lifetime of a connection in the pool. An in-use connection will never be retired, only when it is closed will it then be removed. On a connection-by-connection basis, minor negative attenuation is applied to avoid mass-extinction in the pool. We strongly recommend setting this value, and it should be several seconds shorter than any database or infrastructure imposed connection time limit. A value of 0 indicates no maximum lifetime (infinite lifetime), subject of course to the idleTimeout setting. The minimum allowed value is 30000ms (30 seconds). Default: 1800000 (30 minutes)
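A minimal sketch of setting maxLifetime programmatically with a plain HikariConfig (in a Spring Boot application you would more typically set the spring.datasource.hikari.max-lifetime property). The placeholders and the 5-minute value are illustrative; pick a value shorter than whatever connection limit Azure or your network imposes:
import java.util.concurrent.TimeUnit;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolConfigSketch {
    public static HikariDataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://<host>:5432/<database>"); // placeholder connection details
        config.setUsername("<user>");
        config.setPassword("<password>");
        // Retire pooled connections before the server side closes them.
        config.setMaxLifetime(TimeUnit.MINUTES.toMillis(5)); // assumed value; minimum allowed is 30 seconds
        config.setIdleTimeout(TimeUnit.MINUTES.toMillis(2)); // keep shorter than maxLifetime
        return new HikariDataSource(config);
    }
}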

Understanding commit failure from the consumer when leader has changed

Consider the following real (obfuscated) logs:
19:33:48,409 99733391 (pool-6-thread-11) ERROR [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [] [Consumer clientId=app2.maria1.mcdonalnds_service_msg, groupId=mcdonalnds_service_msg] Offset commit failed on partition service_megaman_mt-mcdonalnds_service_msg-1 at offset 75796: This is not the correct coordinator.
19:33:48,410 99733392 (pool-6-thread-11) INFO [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [] [Consumer clientId=app2.maria1.mcdonalnds_service_msg, groupId=mcdonalnds_service_msg] Group coordinator kafka1.maria4.internal:9092 (id: 2147483646 rack: null) is unavailable or invalid, will attempt rediscovery
19:33:48,414 99733396 (kafka-producer-network-thread | producer-1) WARN [org.apache.kafka.clients.producer.internals.Sender] [] [Producer clientId=producer-1] Got error produce response with correlation id 16386 on topic-partition service_megaman_mo-mcdonalnds_service_msg-1, retrying (99 attempts left). Error: NOT_LEADER_FOR_PARTITION
19:33:48,510 99733492 (pool-6-thread-11) INFO [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [] [Consumer clientId=app2.maria1.mcdonalnds_service_msg, groupId=mcdonalnds_service_msg] Discovered group coordinator kafka3.maria4.internal:9092 (id: 2147483644 rack: null)
19:33:48,528 99733510 (pool-6-thread-11) ERROR [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [] [Consumer clientId=app2.maria1.mcdonalnds_service_msg, groupId=mcdonalnds_service_msg] Offset commit failed on partition service_megaman_mt-mcdonalnds_service_msg-1 at offset 75796: The coordinator is not aware of this member.
19:33:48,528 99733510 (pool-6-thread-11) ERROR [com.bob.kafka.consumer.ListenableKafkaConsumer] [] Aborting consumer [mcdonalnds_service_msg] for topics [[service_megaman_mt-mcdonalnds_service_msg]] operation due to failure! Cause:
org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
As far as I understand, the exception message about poll() is not really the cause. So what happened:
1. The coordinator was not available.
2. The consumer found a new coordinator.
3. The new coordinator did not recognise the offset, so it rejected the commit.
What I am trying to figure out are the options for recovering from this situation. This is not an intermittent issue; it happened once in a year, so tuning the poll settings would not have helped when the leader died.
What happens now: the original application code was simply closing the consumers, which is wrong; it caused alerts and woke up just about everyone because the application stopped consuming messages :-)
What I want to happen:
The consumer is restarted and does not die if it loses the connection to the coordinator.
What I am not sure about:
Why the coordinator is not aware of this member (if I understand the issue correctly :-)).
On the service side, using the Java Kafka library's KafkaConsumer class, should I call close and subscribe, or unsubscribe and subscribe, to fulfil my consumer-recovery scenario?
What is going to happen to the processed offset that was rejected by the new coordinator? Since the offset was not committed, I assume the consumer will re-read the same messages?
The following post for Spring Kafka looks like a very similar issue, but the service does not use Spring, so it is of limited use to me.
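For illustration, a minimal sketch of the recovery behaviour described above (not code from the post; the topic, group id, and broker address are taken from the obfuscated logs and otherwise assumed): catch CommitFailedException around the commit instead of closing the consumer, let the next poll() rejoin the group, and accept that the uncommitted batch will be redelivered, so processing must be idempotent.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.CommitFailedException;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ResilientConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka1.maria4.internal:9092"); // assumed broker address
        props.put("group.id", "mcdonalnds_service_msg");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("enable.auto.commit", "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("service_megaman_mt-mcdonalnds_service_msg"));
            while (true) { // sketch: loop forever instead of closing on failure
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                records.forEach(record -> {
                    // Process the record; must be idempotent because of possible redelivery.
                });
                try {
                    consumer.commitSync();
                } catch (CommitFailedException e) {
                    // The group rebalanced (e.g. the coordinator changed). Do NOT close the
                    // consumer: the next poll() rejoins the group, and the records whose
                    // offsets were not committed are simply read again.
                }
            }
        }
    }
}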

So many "Broken pipe IOE" in ZooKeeper log

Can anyone help explain why I get so many "Broken pipe" IOExceptions in the ZooKeeper log?
ZooKeeper throws this exception almost every minute. I don't think we use the four-letter command to dump watches that frequently, so what does this mean?
This may be caused by the wchc command, because our ZooKeeper has more than ten thousand znodes. I have also found that this command is executed from the same server but from different ports. Would ZooKeeper itself call this command frequently?
2014-09-17,10:52:09,179 ERROR org.apache.zookeeper.server.NIOServerCnxn: [myid:0] Error sending data synchronously
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcher.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:69)
at sun.nio.ch.IOUtil.write(IOUtil.java:40)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:336)
at org.apache.zookeeper.server.NIOServerCnxn.sendBufferSync(NIOServerCnxn.java:138)
at org.apache.zookeeper.server.NIOServerCnxn$SendBufferWriter.checkFlush(NIOServerCnxn.java:453)
at org.apache.zookeeper.server.NIOServerCnxn$SendBufferWriter.write(NIOServerCnxn.java:474)
at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:111)
at java.io.BufferedWriter.write(BufferedWriter.java:212)
at java.io.PrintWriter.write(PrintWriter.java:412)
at java.io.PrintWriter.write(PrintWriter.java:429)
at java.io.PrintWriter.print(PrintWriter.java:559)
at java.io.PrintWriter.println(PrintWriter.java:695)
at org.apache.zookeeper.server.WatchManager.dumpWatches(WatchManager.java:166)
at org.apache.zookeeper.server.DataTree.dumpWatches(DataTree.java:1240)
at org.apache.zookeeper.server.NIOServerCnxn$WatchCommand.commandRun(NIOServerCnxn.java:722)
at org.apache.zookeeper.server.NIOServerCnxn$CommandThread.run(NIOServerCnxn.java:496)
2014-09-17,10:52:09,179 INFO org.apache.zookeeper.server.NIOServerCnxn: [myid:0] Closed socket connection for client /10.20.201.234:53756 which had sessionid 0x34840357f664081
2014-09-17,10:52:09,179 ERROR org.apache.zookeeper.server.NIOServerCnxn: [myid:0] Error sending data synchronously
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcher.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:69)
at sun.nio.ch.IOUtil.write(IOUtil.java:40)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:336)
at org.apache.zookeeper.server.NIOServerCnxn.sendBufferSync(NIOServerCnxn.java:138)
at org.apache.zookeeper.server.NIOServerCnxn$SendBufferWriter.checkFlush(NIOServerCnxn.java:453)
at org.apache.zookeeper.server.NIOServerCnxn$SendBufferWriter.write(NIOServerCnxn.java:474)
at java.io.BufferedWriter.flushBuffer(BufferedWriter.java:111)
at java.io.BufferedWriter.flush(BufferedWriter.java:235)
at java.io.PrintWriter.flush(PrintWriter.java:276)
at org.apache.zookeeper.server.NIOServerCnxn.cleanupWriterSocket(NIOServerCnxn.java:424)
at org.apache.zookeeper.server.NIOServerCnxn.access$000(NIOServerCnxn.java:60)
at org.apache.zookeeper.server.NIOServerCnxn$CommandThread.run(NIOServerCnxn.java:500)
I have found the reason. We're using TaoKeeper to monitor the status of ZooKeeper, and it periodically sends wchc and other commands to check the status. When ZooKeeper receives wchc, the exception occurs because we have too many znodes.
I think TaoKeeper should use mntr rather than wchc, which can hurt ZooKeeper's performance when there is a large number of znodes.
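For reference, a minimal sketch of issuing the lighter mntr four-letter command over a plain socket, the way a monitoring tool would (the host and port are assumptions; on newer ZooKeeper versions the command may also need to be allowed via 4lw.commands.whitelist):
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ZkMntrSketch {
    public static void main(String[] args) throws Exception {
        // Assumed ZooKeeper client address; four-letter words are served on the client port.
        try (Socket socket = new Socket("localhost", 2181)) {
            OutputStream out = socket.getOutputStream();
            out.write("mntr".getBytes(StandardCharsets.US_ASCII));
            out.flush();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // e.g. zk_znode_count, zk_num_alive_connections, ...
                }
            }
        }
    }
}
Unlike wchc, mntr does not walk the full watch table, so it stays cheap even with tens of thousands of znodes.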
