I have a simple Kafka producer:
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.LongSerializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class JavaKafkaProducerExample {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        String server = "localhost:9092";
        String topicName = "test.topic";
        final Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, server);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (final Producer<Long, String> producer = new KafkaProducer<>(props)) {
            RecordMetadata recordMetadata = producer
                    .send(new ProducerRecord<>(topicName, "example message"))
                    .get(1000, TimeUnit.MILLISECONDS);
            if (recordMetadata.hasOffset()) System.out.println("Message sent successfully");
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}
Dependencies:
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>2.1.0</version>
</dependency>
I expected that if Kafka is unavailable, send().get(timeout) would be interrupted by the timeout, but I only get the error java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms. after 60 seconds. Why doesn't get(timeout) work? How can I reduce the time to error? Is it possible to do this programmatically, or only by changing the producer parameters?
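As far as I know, the reason is that send() itself blocks while the producer fetches topic metadata, for up to max.block.ms (60 seconds by default), before the Future is even returned, so get(timeout) never gets a chance to run. A minimal sketch of lowering that bound, reusing the props object from the snippet above (the values are illustrative, not recommendations):

// Assumption: we only want a faster failure, not different delivery semantics.
// send() blocks internally for up to max.block.ms while waiting for metadata;
// lowering it makes send() (and therefore get()) fail sooner when Kafka is down.
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "1000");       // fail the metadata wait after 1 s
props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "1000"); // bound individual broker requests too

This is a producer parameter, but it can be set programmatically like any other entry in props before the KafkaProducer is constructed.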
I am trying to understand how the Spring Boot KafkaTemplate works with the async producer and how to handle exceptions. I want to handle all kinds of errors, including network errors. I tried the retry configs, but the producer retries more times than the number I provided:
@Service
public class UserInfoService {
    private static final Logger LOGGER = LoggerFactory.getLogger(UserInfoService.class);

    @Autowired
    private KafkaTemplate kafkaTemplate;

    public void sendUserInfo(UserInfo data) {
        final ProducerRecord<String, UserInfo> record = new ProducerRecord<>("usr-test-data", "test-app", data);
        try {
            ListenableFuture<SendResult<String, UserInfo>> future = kafkaTemplate.send(record);
            future.addCallback(new ListenableFutureCallback<SendResult<String, UserInfo>>() {
                @Override
                public void onFailure(Throwable ex) {
                    handleFailure(ex);
                }

                @Override
                public void onSuccess(SendResult<String, UserInfo> result) {
                    handleSuccess(result);
                }
            });
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private void handleSuccess(SendResult<String, UserInfo> result) {
        LOGGER.info("Message sent successfully with offset: {}", result.getRecordMetadata().offset());
    }

    private void handleFailure(Throwable ex) {
        LOGGER.info("Unable to send message - Error: {}", ex.getMessage());
    }
}
I tried to limit the number of retries with configProps.put(ProducerConfig.RETRIES_CONFIG, "3");, hoping this would eventually throw an exception that I could catch, but it still retries more than 3 times. Here is my complete config class:
@Configuration
public class KafkaProducerConfig {
    @Value(value = "${spring.kafka.bootstrap-servers}")
    private String bootstrapAddress;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> configProps = new HashMap<>();
        configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        // Note: this must be ENABLE_IDEMPOTENCE_CONFIG; ENABLE_IDEMPOTENCE_DOC also
        // compiles, but it is the documentation string, not the config key.
        configProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        configProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, CountingProducerInterceptor.class.getName());
        configProps.put(ProducerConfig.ACKS_CONFIG, "all");
        configProps.put(ProducerConfig.RETRIES_CONFIG, "3");
        configProps.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 10000);
        configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        return new DefaultKafkaProducerFactory<>(configProps);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
I would like to know what I can catch in the future's onFailure and in the parent try-catch block.
A Future does not complete within the lifecycle of a try-catch: the catch block only sees exceptions thrown synchronously by send() itself (serialization errors, a full buffer, and so on). Network errors surface asynchronously through the callback, so you need to handle or rethrow the exception within the body of onFailure.
In my experience, Kafka network errors cannot easily be caught in the sending thread.
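If the goal is a bounded, catchable failure rather than an exact retry count, one option (a sketch only, reusing names from the classes above and not verified against this exact setup) is to cap the total send time with delivery.timeout.ms, which since kafka-clients 2.1 bounds send plus all retries overall, and then block on the future:

// Assumption: bounding total delivery time instead of counting retries.
// delivery.timeout.ms caps send + all retries; the future then fails with a
// TimeoutException cause no matter what 'retries' is set to.
configProps.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 15000);
configProps.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 5000);
configProps.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 1000);

// Blocking variant: converts the async failure into a catchable exception.
try {
    kafkaTemplate.send(record).get(20, TimeUnit.SECONDS);
} catch (ExecutionException e) {
    handleFailure(e.getCause()); // the Kafka error, e.g. a TimeoutException after the delivery timeout
} catch (java.util.concurrent.TimeoutException | InterruptedException e) {
    handleFailure(e);
}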
I am trying to find a bug in some RabbitMQ client code that was developed six or seven years ago and later modified to allow for delayed messages. It seems that connections are created to the RabbitMQ server and then never destroyed; each one lives in a separate thread, so I end up with thousands of threads. I am sure the problem is very obvious and simple, but I am having trouble seeing it. I have been looking at the exchangeDeclare method (the commented-out version is from the original code, which seemed to work), but I have been unable to find the default values for autoDelete and durable that the modified code now sets explicitly. The method below is within a Spring service class. Any help, advice, guidance, and pointing out of huge obvious errors is appreciated!
private void send(String routingKey, String message) throws Exception {
    String exchange = applicationConfiguration.getAMQPExchange();
    Map<String, Object> args = new HashMap<String, Object>();
    args.put("x-delayed-type", "fanout");
    Map<String, Object> headers = new HashMap<String, Object>();
    headers.put("x-delay", 10000); // delay in milliseconds, i.e. 10 s
    AMQP.BasicProperties.Builder props = new AMQP.BasicProperties.Builder().headers(headers);
    Connection connection = null;
    Channel channel = null;
    try {
        connection = myConnection.getConnection();
    }
    catch (Exception e) {
        log.error("AMQP send method Exception. Unable to get connection.");
        e.printStackTrace();
        return;
    }
    try {
        if (connection != null) {
            log.debug(" [CORE: AMQP] Sending message with key {} : {}", routingKey, message);
            channel = connection.createChannel();
            // channel.exchangeDeclare(exchange, exchangeType);
            channel.exchangeDeclare(exchange, "x-delayed-message", true, false, args);
            // channel.basicPublish(exchange, routingKey, null, message.getBytes());
            channel.basicPublish(exchange, routingKey, props.build(), message.getBytes());
        }
        else {
            log.error("Total AMQP melt down. This should never happen!");
        }
    }
    catch (Exception e) {
        log.error("AMQP send method Exception. Unable to send.");
        e.printStackTrace();
    }
    finally {
        if (channel != null) { // guard against an NPE when createChannel() was never reached
            channel.close();
        }
    }
}
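For the question about the defaults: in the RabbitMQ Java client, the two-argument exchangeDeclare(exchange, type) declares a non-durable, non-auto-delete exchange with no extra arguments, so the modified call only changed durable from false to true. A small sketch of the same publish with the channel managed by try-with-resources (assuming amqp-client 5.x, where Channel is AutoCloseable):

// channel.exchangeDeclare(exchange, type) is equivalent to
// channel.exchangeDeclare(exchange, type, /*durable*/ false, /*autoDelete*/ false, /*args*/ null)
try (Channel channel = connection.createChannel()) {
    channel.exchangeDeclare(exchange, "x-delayed-message", true, false, args);
    channel.basicPublish(exchange, routingKey, props.build(), message.getBytes());
} // the channel is closed even if publish throws; the shared connection stays open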
This is the connection class:
@Service
public class PersistentConnection {
    private static final Logger log = LoggerFactory.getLogger(PersistentConnection.class);
    private static Connection myConnection = null;
    private Boolean blocked = false;

    @Autowired ApplicationConfiguration applicationConfiguration;

    @PreDestroy
    private void destroy() {
        try {
            myConnection.close();
        } catch (IOException e) {
            log.error("Unable to close AMQP Connection.");
            e.printStackTrace();
        }
    }

    public Connection getConnection() {
        if (myConnection == null) {
            start();
        }
        return myConnection;
    }

    private void start() {
        log.debug("Building AMQP Connection");
        ConnectionFactory factory = new ConnectionFactory();
        String ipAddress = applicationConfiguration.getAMQPHost();
        String user = applicationConfiguration.getAMQPUser();
        String password = applicationConfiguration.getAMQPPassword();
        String virtualHost = applicationConfiguration.getAMQPVirtualHost();
        String port = applicationConfiguration.getAMQPPort();
        try {
            factory.setUsername(user);
            factory.setPassword(password);
            factory.setVirtualHost(virtualHost);
            factory.setPort(Integer.parseInt(port));
            factory.setHost(ipAddress);
            myConnection = factory.newConnection();
            // Register the listener inside the try block: if newConnection() threw,
            // myConnection would still be null here and this call would NPE.
            myConnection.addBlockedListener(new BlockedListener() {
                public void handleBlocked(String reason) throws IOException {
                    // Connection is now blocked
                    log.warn("Message Server has blocked. It may be resource limited.");
                    blocked = true;
                }

                public void handleUnblocked() throws IOException {
                    // Connection is now unblocked
                    log.warn("Message server is unblocked.");
                    blocked = false;
                }
            });
        }
        catch (Exception e) {
            log.error("Unable to initialise AMQP Connection.");
            e.printStackTrace();
        }
    }

    public Boolean isBlocked() {
        return blocked;
    }
}
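One possible source of the leaked connections, assuming concurrent callers (a guess from the code shown, not a confirmed diagnosis): getConnection() is not synchronized, so two threads that both observe myConnection == null will each run start() and open a connection, and whichever assignment loses the race is never closed, yet its I/O threads keep running. A minimal guard:

// Serialize the lazy initialization so only one connection is ever created.
public synchronized Connection getConnection() {
    if (myConnection == null) {
        start();
    }
    return myConnection;
}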
I have code that reads from an IBM MQ queue manager, but I want to read from IBM MQ without removing the message from the queue; the message should only be removed after I send an acknowledgement back to IBM MQ.
This is my IBM reader code:
public class IBMReaderStub extends AbstractReader {
    private JMSContext context = null;
    JMSConsumer consumer;
    Destination destination;

    public IBMReaderStub(String queueName) {
        this(queueName, new IBMListener());
    }

    public IBMReaderStub(String queueName, IBMListener onMessage) {
        super(ConfigurationManager.getString(HOST), ConfigurationManager.getInt(PORT, DEFAULT_IBM_PORT), queueName, new QueueWithThreadPool(), onMessage);
    }

    @Override
    protected void initializeConsumer() {
        try {
            JmsConnectionFactory jmsConnectionFactory = createJmsConnectionFactory();
            context = jmsConnectionFactory.createContext();
            destination = context.createQueue("queue:///" + getQueueName()); // Set the producer and consumer destination to be the same... not true in general
            consumer = context.createConsumer(destination);
        } catch (Exception e) {
            System.out.println(e);
        }
        listen();
    }

    @Override
    public void listen() {
        consumer.setMessageListener(getOnMessage());
    }

    private JmsConnectionFactory createJmsConnectionFactory() throws Exception {
        JmsFactoryFactory jmsFactory = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
        JmsConnectionFactory jmsConnectionFactory = jmsFactory.createConnectionFactory();
        jmsConnectionFactory.setStringProperty(WMQConstants.WMQ_HOST_NAME, this.getHost());
        jmsConnectionFactory.setIntProperty(WMQConstants.WMQ_PORT, getPort());
        jmsConnectionFactory.setStringProperty(WMQConstants.WMQ_CHANNEL, ConfigurationManager.getString(CHANNEL_NAME));
        jmsConnectionFactory.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, ConfigurationManager.getString(QUEUE_MANAGER_NAME));
        jmsConnectionFactory.setStringProperty(WMQConstants.WMQ_APPLICATIONNAME, ConfigurationManager.getString(APPLICATION_NAME));
        jmsConnectionFactory.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
        return jmsConnectionFactory;
    }

    public static void main(String[] args) {
        try {
            IBMReaderStub reader = new IBMReaderStub("hey");
            IBMReaderStub reader2 = new IBMReaderStub("hey");
            reader.listen();
            reader2.listen();
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}
IBM MQ provides transactional access to messages: create a transacted session, and you can then commit or roll back message gets and puts as needed.
https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.1.0/com.ibm.mq.dev.doc/q032210_.html
https://www.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.dev.doc/q032220_.htm
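A minimal sketch of what that looks like with the JMS 2.0 API, reusing the createJmsConnectionFactory() from the reader above; the synchronous receive loop and the process() handler are illustrative, not part of the original code:

JmsConnectionFactory cf = createJmsConnectionFactory();
// SESSION_TRANSACTED: the get is provisional until commit() is called.
try (JMSContext context = cf.createContext(JMSContext.SESSION_TRANSACTED)) {
    JMSConsumer consumer = context.createConsumer(context.createQueue("queue:///" + getQueueName()));
    Message message = consumer.receive(5000); // delivered to us, but not yet removed from the queue
    try {
        process(message);   // hypothetical application handler
        context.commit();   // acknowledge: the message is now removed from the queue
    } catch (Exception e) {
        context.rollback(); // the message goes back onto the queue for redelivery
    }
}

Note that combining a transacted session with the asynchronous setMessageListener() used above is more involved; a synchronous receive() loop is the simpler way to get per-message commit/rollback.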
I have a Java class that, upon a certain action from the GUI, initiates a connection with the RabbitMQ server (using the pub/sub pattern) and listens for new events.
I want to add a new feature where I will allow the user to set an "end time" that will stop my application from listening to new events (stop consuming from the queue without closing it).
I tried to utilise the basicCancel method, but I can't find a way to make it work for a predefined date.
Would it be a good idea to initiate a new thread inside my Subscribe class that will call the basicCancel upon reaching the given date or is there a better way to do that?
Listen to new events
private void listenToEvents(String queueName) {
    try {
        logger.info(" [*] Waiting for events. Subscribed to : " + queueName);
        Consumer consumer = new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) throws IOException {
                TypeOfEvent event = null;
                String message = new String(body);
                // process the payload
                InteractionEventManager eventManager = new InteractionEventManager();
                event = eventManager.toCoreMonitorFormatObject(message);
                if (event != null) {
                    String latestEventOpnName = event.getType().getOperationMessage().getOperationName();
                    if (latestEventOpnName.equals("END_OF_PERIOD"))
                        event.getMessageArgs().getContext().setTimestamp(++latestEventTimeStamp);
                    latestEventTimeStamp = event.getMessageArgs().getContext().getTimestamp();
                    ndaec.receiveTypeOfEventObject(event);
                }
            }
        };
        channel.basicConsume(queueName, true, consumer);
        // Should I add the basicCancel here?
    }
    catch (Exception e) {
        logger.info("The Monitor could not reach the EventBus. " + e.toString());
    }
}
Initiate Connection
public String initiateConnection(Timestamp endTime) {
    Properties props = new Properties();
    try {
        props.load(new FileInputStream(everestHome + "/monitoring-system/rabbit.properties"));
    } catch (IOException e) {
        e.printStackTrace();
    }
    RabbitConfigure config = new RabbitConfigure(props, props.getProperty("queuName").trim());
    ConnectionFactory factory = new ConnectionFactory();
    exchangeTopic = new HashMap<String, String>();
    String exchangeMerged = config.getExchange();
    logger.info("Exchange=" + exchangeMerged);
    String[] couples = exchangeMerged.split(";");
    for (String couple : couples) {
        String[] infos = couple.split(":");
        if (infos.length == 2) {
            exchangeTopic.put(infos[0], infos[1]);
        }
        else {
            logger.error("Invalid Exchange Detail: " + couple);
        }
    }
    for (Entry<String, String> entry : exchangeTopic.entrySet()) {
        String exchange = entry.getKey();
        String topic = entry.getValue();
        factory.setHost(config.getHost());
        factory.setPort(Integer.parseInt(config.getPort()));
        factory.setUsername(config.getUsername());
        factory.setPassword(config.getPassword());
        try {
            connection1 = factory.newConnection();
            channel = connection1.createChannel();
            channel.exchangeDeclare(exchange, EXCHANGE_TYPE);
            /*Map<String, Object> args = new HashMap<String, Object>();
            args.put("x-expires", endTime.getTime());*/
            channel.queueDeclare(config.getQueue(), false, false, false, null);
            channel.queueBind(config.getQueue(), exchange, topic);
            logger.info("Connected to RabbitMQ.\n Exchange: " + exchange + " Topic: " + topic + "\n Queue Name is: " + config.getQueue());
            return config.getQueue();
        } catch (IOException e) {
            logger.error(e.getMessage());
            e.printStackTrace();
        } catch (TimeoutException e) {
            logger.error(e.getMessage());
            e.printStackTrace();
        }
    }
    return null;
}
You can create a delayed queue, setting the message time-to-live (TTL) so that the message you push there is dead-lettered exactly when you want your consumer to stop.
Then bind the dead-letter exchange to a queue whose consumer stops the other one as soon as it receives the message.
There is no need for extra threads when you have RabbitMQ; you can do a lot of interesting things with delayed messages!
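A minimal sketch of that setup against the code above (the queue and exchange names are illustrative; queueName and consumer are the ones from listenToEvents, endTime the one from initiateConnection):

// 1) A timer queue: the message expires after the TTL and is dead-lettered.
Map<String, Object> args = new HashMap<>();
args.put("x-message-ttl", (int) (endTime.getTime() - System.currentTimeMillis()));
args.put("x-dead-letter-exchange", "stop.exchange");
channel.queueDeclare("stop.timer.queue", false, false, true, args);

// 2) The dead-letter exchange and the queue that receives the expired message.
channel.exchangeDeclare("stop.exchange", "fanout");
channel.queueDeclare("stop.queue", false, false, true, null);
channel.queueBind("stop.queue", "stop.exchange", "");

// 3) Publish the timer message; nobody consumes it, so it expires on schedule.
channel.basicPublish("", "stop.timer.queue", null, "stop".getBytes());

// 4) Keep the tag of the main consumer and cancel it when the timer fires.
String mainTag = channel.basicConsume(queueName, true, consumer);
channel.basicConsume("stop.queue", true, new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String tag, Envelope env,
                               AMQP.BasicProperties props, byte[] body) throws IOException {
        channel.basicCancel(mainTag); // stop consuming without deleting the queue
    }
});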
I am building an Apache Kafka producer that is consumed by a Flink Kafka consumer. I need to generate 1 to 10 million messages per second, but right now I am getting a very small number of records per second (up to 2,000 per second per partition). I have a cluster with 3 brokers and 30 GB of memory on each, and the topic has 10 partitions. Any recommendations?
Here is my producer code:
public class TempDataGenerator implements Runnable {
    private String topic = "try";
    private String bootStrap_Servers = "kafka-node-01:9092,kafka-node-02:9092,kafka-node-03:9092";

    public static void main(String[] args) {
        ExecutorService executor = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        executor.execute(new TempDataGenerator());
    }

    public TempDataGenerator() {
    }

    private Producer<String, String> createProducer() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootStrap_Servers);
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "KafkaExampleProducer");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "0");
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, "5000000000");
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "100000");
        return new KafkaProducer<>(props);
    }

    public void run() {
        final Producer<String, String> producer = createProducer();
        try {
            boolean active = true;
            int generatedCount = 0, tempUserID = 1; // the minimum tuple that any thread can generate
            while (active) {
                generatedCount = 0;
                /**
                 * generate per second
                 */
                for (long stop = Instant.now().getMillis() + 1000; stop > Instant.now().getMillis(); ) { // generate tps
                    String msg = "{ID:" + generatedCount + ", msg: " + Instant.now().getMillis() + "}";
                    final ProducerRecord<String, String> record = new ProducerRecord<>(topic, null, msg);
                    // Blocks until the broker responds, then flushes: one round trip per record.
                    RecordMetadata metadata = producer.send(record).get();
                    producer.flush();
                    generatedCount++;
                }
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
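The send(record).get() plus flush() inside the loop waits for a full broker round trip per record, which by itself explains a few thousand records per second. A hedged sketch of the asynchronous pattern (same topic, servers, and props assumed as above; the linger/batch/compression values are illustrative, not tuned for this cluster):

// Assumption: throughput-oriented variant of the loop above.
props.put(ProducerConfig.LINGER_MS_CONFIG, "5");          // let batches fill for up to 5 ms
props.put(ProducerConfig.BATCH_SIZE_CONFIG, "131072");    // 128 KB batches
props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // cheaper network and disk I/O

for (long stop = Instant.now().getMillis() + 1000; stop > Instant.now().getMillis(); ) {
    String msg = "{ID:" + generatedCount + ", msg: " + Instant.now().getMillis() + "}";
    // Fire-and-forget with a callback: no get(), no flush() per record, so the
    // producer can batch thousands of records into each broker request.
    producer.send(new ProducerRecord<>(topic, null, msg), (metadata, exception) -> {
        if (exception != null) exception.printStackTrace();
    });
    generatedCount++;
}
producer.flush(); // drain once per interval, not once per record

With acks=0 the callback's metadata carries no offset, but the shape of the loop is the same for acks=1 or acks=all.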