Dropping a table using the DataStax driver for Cassandra doesn't appear to be working. Creating the table works, but dropping it does not, and no exception is thrown. 1) Am I doing the drop correctly? 2) Has anyone else seen this behavior?
In the output you can see that the table gets created and apparently dropped, since it is not in the second table listing of the first run. However, when I reconnect (second run) the table is still there, resulting in an exception.
import java.util.Collection;
import com.datastax.driver.core.*;

public class Fail {
    SimpleStatement createTableCQL = new SimpleStatement("create table test_table(testfield varchar primary key)");
    SimpleStatement dropTableCQL = new SimpleStatement("drop table test_table");
    Session session = null;
    Cluster cluster = null;

    public Fail()
    {
        System.out.println("First Run");
        this.run();
        System.out.println("Second Run");
        this.run();
    }

    private void run()
    {
        try
        {
            cluster = Cluster.builder().addContactPoints("10.48.8.43 10.48.8.47 10.48.8.53")
                .withCredentials("394016","394016")
                .withQueryOptions(new QueryOptions().setConsistencyLevel(ConsistencyLevel.ALL))
                .build();
            session = cluster.connect("gid394016");
        }
        catch(Exception e)
        {
            System.err.println(e.toString());
            System.exit(1);
        }

        //create the table
        System.out.println("createTableCQL");
        this.session.execute(createTableCQL);

        //list tables in the keyspace
        System.out.println("Table list:");
        Collection<TableMetadata> results1 = cluster.getMetadata().getKeyspace("gid394016").getTables();
        for (TableMetadata tm : results1)
        {
            System.out.println(tm.toString());
        }

        //drop the table
        System.out.println("dropTableCQL");
        this.session.execute(dropTableCQL);

        //list tables in the keyspace
        System.out.println("Table list:");
        Collection<TableMetadata> results2 = cluster.getMetadata().getKeyspace("gid394016").getTables();
        for (TableMetadata tm : results2)
        {
            System.out.println(tm.toString());
        }

        session.close();
        cluster.close();
    }

    public static void main(String[] args) {
        new Fail();
    }
}
Console output:
First Run
[main] INFO com.datastax.driver.core.NettyUtil - Did not find Netty's native epoll transport in the classpath, defaulting to NIO.
[main] INFO com.datastax.driver.core.policies.DCAwareRoundRobinPolicy - Using data-center name 'Cassandra' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.51:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.47:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.53:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.49:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host 10.48.8.43 10.48.8.47 10.48.8.53/10.48.8.43:9042 added
createTableCQL
Table list:
CREATE TABLE gid394016.test_table (testfield text, PRIMARY KEY (testfield)) WITH read_repair_chance = 0.0 AND dclocal_read_repair_chance = 0.1 AND gc_grace_seconds = 864000 AND bloom_filter_fp_chance = 0.01 AND caching = { 'keys' : 'ALL', 'rows_per_partition' : 'NONE' } AND comment = '' AND compaction = { 'class' : 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy' } AND compression = { 'sstable_compression' : 'org.apache.cassandra.io.compress.LZ4Compressor' } AND default_time_to_live = 0 AND speculative_retry = '99.0PERCENTILE' AND min_index_interval = 128 AND max_index_interval = 2048;
dropTableCQL
Table list:
Second Run
[main] INFO com.datastax.driver.core.policies.DCAwareRoundRobinPolicy - Using data-center name 'Cassandra' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.51:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.47:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.53:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.49:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host 10.48.8.43 10.48.8.47 10.48.8.53/10.48.8.43:9042 added
createTableCQL
Exception in thread "main" com.datastax.driver.core.exceptions.AlreadyExistsException: Table gid394016.test_table already exists
at com.datastax.driver.core.exceptions.AlreadyExistsException.copy(AlreadyExistsException.java:111)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:217)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:54)
at com.bdcauto.cassandrachecks.Fail.run(Fail.java:38)
at com.bdcauto.cassandrachecks.Fail.<init>(Fail.java:17)
at com.bdcauto.cassandrachecks.Fail.main(Fail.java:65)
Caused by: com.datastax.driver.core.exceptions.AlreadyExistsException: Table gid394016.test_table already exists
at com.datastax.driver.core.exceptions.AlreadyExistsException.copy(AlreadyExistsException.java:130)
at com.datastax.driver.core.Responses$Error.asException(Responses.java:118)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:151)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:175)
at com.datastax.driver.core.RequestHandler.access$2500(RequestHandler.java:44)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:801)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:617)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1014)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:937)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:263)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.AlreadyExistsException: Table gid394016.test_table already exists
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:69)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:230)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:221)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
... 14 more
You are running this code while the table is still present in the database, and that is why you get the "already exists" error. Connect to the database with cqlsh and check this for yourself.
Create, alter and drop table statements are propagated through the cluster asynchronously. Even though you receive a response from the coordinator, you still need to wait for schema agreement before the change is visible everywhere.
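As a minimal sketch (assuming the DataStax Java driver 3.x, where Metadata.checkSchemaAgreement() is available; the helper name below is mine), you could poll for schema agreement after each DDL statement before reading the keyspace metadata:
private static void waitForSchemaAgreement(Cluster cluster) throws InterruptedException
{
    // Sketch only: poll until all reachable nodes report the same schema version, or give up.
    for (int i = 0; i < 50; i++)
    {
        if (cluster.getMetadata().checkSchemaAgreement())
        {
            return;
        }
        Thread.sleep(200);
    }
}
Calling something like this after session.execute(createTableCQL) and session.execute(dropTableCQL), before listing the tables, avoids reading a stale schema.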
Related
I have a requirement to process messages from Kafka without losing any message and while maintaining message order. I therefore used transactions and enabled the 'exactly_once' processing guarantee in my Kafka Streams topology, assuming that topology processing would be 'all or nothing', i.e. that the message offset is committed only after the last node has successfully processed the message.
However, consider a failure scenario: the database is down, so the processor fails to store the message and throws an exception. At this point the topology dies as intended and is recreated automatically on rebalance. I assumed that the topology would either re-consume the original message from the Kafka topic, or re-consume it on application restart. However, it seems that the original message disappears and is never consumed or processed again after the topology died.
What do I need to do to reprocess the original message sent to the Kafka topic? Or which Kafka configuration needs to change? Do I need to manually assign a state store and keep track of processed messages on a changelog topic?
Topology:
@Singleton
public class EventTopology extends Topology {

    private final Deserializer<String> deserializer = Serdes.String().deserializer();
    private final Serializer<String> serializer = Serdes.String().serializer();

    private final EventLogMessageSerializer eventLogMessageSerializer;
    private final EventLogMessageDeserializer eventLogMessageDeserializer;
    private final EventLogProcessorSupplier eventLogProcessorSupplier;

    @Inject
    public EventTopology(EventsConfig eventsConfig,
                         EventLogMessageSerializer eventLogMessageSerializer,
                         EventLogMessageDeserializer eventLogMessageDeserializer,
                         EventLogProcessorSupplier eventLogProcessorSupplier) {
        this.eventLogMessageSerializer = eventLogMessageSerializer;
        this.eventLogMessageDeserializer = eventLogMessageDeserializer;
        this.eventLogProcessorSupplier = eventLogProcessorSupplier;
        init(eventsConfig);
    }

    private void init(EventsConfig eventsConfig) {
        var topics = eventsConfig.getTopicConfig().getTopics();
        String eventLog = topics.get("eventLog");
        addSource("EventsLogSource", deserializer, eventLogMessageDeserializer, eventLog)
            .addProcessor("EventLogProcessor", eventLogProcessorSupplier, "EventsLogSource");
    }
}
Processor:
@Singleton
@Slf4j
public class EventLogProcessor implements Processor<String, EventLogMessage> {

    private final EventLogService eventLogService;
    private ProcessorContext context;

    @Inject
    public EventLogProcessor(EventLogService eventLogService) {
        this.eventLogService = eventLogService;
    }

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
    }

    @Override
    public void process(String key, EventLogMessage value) {
        log.info("Processing EventLogMessage={}", value);
        try {
            eventLogService.storeInDatabase(value);
            context.commit();
        } catch (Exception e) {
            log.warn("Failed to process EventLogMessage={}", value, e);
            throw e;
        }
    }

    @Override
    public void close() {
    }
}
Configuration:
eventsConfig:
saveTopicsEnabled: false
topologyConfig:
environment: "LOCAL"
broker: "localhost:9093"
enabled: true
initialiseWaitInterval: 3 seconds
applicationId: "eventsTopology"
config:
auto.offset.reset: latest
session.timeout.ms: 6000
fetch.max.wait.ms: 7000
heartbeat.interval.ms: 5000
connections.max.idle.ms: 7000
security.protocol: SSL
key.serializer: org.apache.kafka.common.serialization.StringSerializer
value.serializer: org.apache.kafka.common.serialization.StringSerializer
max.poll.records: 5
processing.guarantee: exactly_once
metric.reporters: com.simple.metrics.kafka.DropwizardReporter
default.deserialization.exception.handler: org.apache.kafka.streams.errors.LogAndContinueExceptionHandler
enable.idempotence: true
request.timeout.ms: 8000
acks: all
batch.size: 16384
linger.ms: 1
enable.auto.commit: false
state.dir: "/tmp"
topicConfig:
topics:
eventLog: "EVENT-LOG-LOCAL"
kafkaTopicConfig:
partitions: 18
replicationFactor: 1
config:
retention.ms: 604800000
Test:
Feature: Feature covering the scenarios to process event log messages produced by external client.
Background:
Given event topology is healthy
Scenario: event log messages produced are successfully stored in the database
Given database is down
And the following event log messages are published
| deptId | userId | eventType | endDate | eventPayload_partner |
| dept-1 | user-1234 | CREATE | 2021-04-15T00:00:00Z | PARTNER-1 |
When database is up
And database is healthy
Then event log stored in the database as follows
| dept_id | user_id | event_type | end_date | event_payload |
| dept-1 | user-1234 | CREATE | 2021-04-15T00:00:00Z | {"partner":"PARTNER-1"} |
Logs:
INFO [data-plane-kafka-request-handler-1] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Preparing to rebalance group eventsTopology in state PreparingRebalance with old generation 0 (__consumer_offsets-0) (reason: Adding new member eventsTopology-57fdac0e-09fb-4aa0-8b0b-7e01809b31fa-StreamThread-1-consumer-96a3e980-4286-461e-8536-5f04ccb2c778 with group instance id None)
INFO [executor-Rebalance] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Stabilized group eventsTopology generation 1 (__consumer_offsets-0)
INFO [data-plane-kafka-request-handler-2] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Assignment received from leader for group eventsTopology for generation 1
INFO [data-plane-kafka-request-handler-1] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-0_0 with producerId 0 and producer epoch 0 on partition __transaction_state-4
INFO [data-plane-kafka-request-handler-6] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-0_1 with producerId 1 and producer epoch 0 on partition __transaction_state-3
...
INFO [data-plane-kafka-request-handler-0] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-0_16 with producerId 17 and producer epoch 0 on partition __transaction_state-37
INFO [data-plane-kafka-request-handler-4] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-1_1 with producerId 18 and producer epoch 0 on partition __transaction_state-42
INFO [data-plane-kafka-request-handler-6] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-1_0 with producerId 19 and producer epoch 0 on partition __transaction_state-43
...
INFO [data-plane-kafka-request-handler-3] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-1_17 with producerId 34 and producer epoch 0 on partition __transaction_state-45
INFO [data-plane-kafka-request-handler-5] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-1_16 with producerId 35 and producer epoch 0 on partition __transaction_state-46
INFO [pool-26-thread-1] ManagerClient - Manager request {uri:http://localhost:8081/healthcheck, method:GET, body:'', headers:{}}
INFO [pool-26-thread-1] ManagerClient - Manager response from with body {"Database":{"healthy":true},"eventsTopology":{"healthy":true}}
INFO [dw-admin-130] KafkaConnectionCheck - successfully connected to kafka broker: localhost:9093
INFO [kafka-producer-network-thread | EVENT-LOG-LOCAL-test-client-id] LocalTestEnvironment - Message: ProducerRecord(topic=EVENT-LOG-LOCAL, partition=null, headers=RecordHeaders(headers = [], isReadOnly = true), key=null, value={"endDate":1618444800000,"deptId":"dept-1","userId":"user-1234","eventType":"CREATE","eventPayload":{"previousEndDate":null,"partner":"PARTNER-1","info":null}}, timestamp=null) pushed onto topic: EVENT-LOG-LOCAL
INFO [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] EventLogProcessor - Processing EventLogMessage=EventLogMessage(endDate=Thu Apr 15 01:00:00 BST 2021, deptId=dept-1, userId=user-1234, eventType=CREATE, eventPayload=EventLogMessage.EventPayload(previousEndDate=null, partner=PARTNER-1, info=null))
WARN [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] EventLogProcessor - Failed to process EventLogMessage=EventLogMessage(endDate=Thu Apr 15 01:00:00 BST 2021, deptId=dept-1, userId=user-1234, eventType=CREATE, eventPayload=EventLogMessage.EventPayload(previousEndDate=null, partner=PARTNER-1, info=null))
exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)
at manager.service.EventLogService.storeInDatabase(EventLogService.java:24)
at manager.topology.processor.EventLogProcessor.process(EventLogProcessor.java:47)
at manager.topology.processor.EventLogProcessor.process(EventLogProcessor.java:19)
at org.apache.kafka.streams.processor.internals.ProcessorNode.lambda$process$2(ProcessorNode.java:142)
at org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:836)
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:142)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:236)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:216)
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:168)
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:96)
at org.apache.kafka.streams.processor.internals.StreamTask.lambda$process$1(StreamTask.java:679)
at org.apache.kafka.streams.processor.internals.metrics.StreamsMetricsImpl.maybeMeasureLatency(StreamsMetricsImpl.java:836)
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:679)
at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:1033)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:690)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:551)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:510)
ERROR [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] org.apache.kafka.streams.processor.internals.TaskManager - stream-thread [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] Failed to process stream task 0_8 due to the following error:
org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=0_8, processor=EventsLogSource, topic=EVENT-LOG-LOCAL, partition=8, offset=0, stacktrace=exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)
ERROR [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] org.apache.kafka.streams.processor.internals.StreamThread - stream-thread [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] Encountered the following exception during processing and the thread is going to shut down:
org.apache.kafka.streams.errors.StreamsException: Exception caught in process. taskId=0_8, processor=EventsLogSource, topic=EVENT-LOG-LOCAL, partition=8, offset=0, stacktrace=exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)
ERROR [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1] org.apache.kafka.streams.KafkaStreams - stream-client [eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3] All stream threads have died. The instance will be in error state and should be closed.
Exception: java.lang.IllegalStateException thrown from the UncaughtExceptionHandler in thread "eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1"
INFO [executor-Heartbeat] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Member eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1-consumer-f11ca299-2a68-4317-a559-dd1b96cd431f in group eventsTopology has failed, removing it from the group
INFO [executor-Heartbeat] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Preparing to rebalance group eventsTopology in state PreparingRebalance with old generation 1 (__consumer_offsets-0) (reason: removing member eventsTopology-b21df600-cd39-4c9d-9e7a-f55f53ac9fd3-StreamThread-1-consumer-f11ca299-2a68-4317-a559-dd1b96cd431f on heartbeat expiration)
INFO [data-plane-kafka-request-handler-2] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Stabilized group eventsTopology generation 2 (__consumer_offsets-0)
INFO [data-plane-kafka-request-handler-6] kafka.coordinator.group.GroupCoordinator - [GroupCoordinator 0]: Assignment received from leader for group eventsTopology for generation 2
INFO [data-plane-kafka-request-handler-0] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-0_0 with producerId 0 and producer epoch 1 on partition __transaction_state-4
...
INFO [data-plane-kafka-request-handler-0] kafka.coordinator.transaction.TransactionCoordinator - [TransactionCoordinator id=0] Initialized transactionalId eventsTopology-1_16 with producerId 35 and producer epoch 1 on partition __transaction_state-46
INFO [main] Cluster - New databse host localhost/127.0.0.1:59423 added
com.jayway.awaitility.core.ConditionTimeoutException: Condition defined as a lambda expression in steps.EventLogSteps
Expecting:
<0>
to be equal to:
<1>
but was not. within 20 seconds.
I am using Apache Pulsar v2.4.2. Taking pulsar-io-jdbc as a reference, I created a custom sink connector packaged as a NAR.
I am running the following commands to upload the schema and create the connector.
1. Upload the AVRO schema:
bin/pulsar-admin schemas upload myjdbc --filename ./connectors/pulsar-io-jdbc-schema.conf
Here is the AVRO schema:
{
"type": "AVRO",
"schema": "{\"type\":\"record\",\"name\":\"Test\",\"fields\":[{\"name\":\"id\",\"type\":[\"null\",\"int\"]},{\"name\":\"name\",\"type\":[\"null\",\"string\"]}]}",
"properties": {}
}
2. Create the connector:
bin/pulsar-admin sinks localrun --archive ./connectors/pulsar-io-jdbc-2.4.2.nar --inputs myjdbc --name myjdbc --sink-config-file ./connectors/pulsar-io-jdbc.yaml
Then I send messages to the topic "myjdbc" via producer.SendAsync(message), where the message is built as follows:
for (int i = 0; i < count; i++)
{
    JObject obj = new JObject();
    obj.Add("id", i);
    obj.Add("name", "testing topic" + i);
    var str = obj.ToString(Formatting.None);
    var messageId = await producer.SendAsync(Encoding.UTF8.GetBytes(str));
    Console.WriteLine($"MessageId is: '{messageId}'");
}
The Pulsar localrun connector gets the following exception:
19:48:37.437 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ClientCnx - [id: 0x557fb218, L:/127.0.0.1:39488 - R:localhost/127.0.0.1:6650] Connected through proxy to target broker at 192.168.1.97:6650
19:48:37.445 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerImpl - [myjdbc][public/default/myjdbc] Subscribing to topic on cnx [id: 0x557fb218, L:/127.0.0.1:39488 - R:localhost/127.0.0.1:6650]
19:48:37.503 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerImpl - [myjdbc][public/default/myjdbc] Subscribed to topic on localhost/127.0.0.1:6650 -- consumer: 0
19:48:37.557 [pulsar-client-io-1-1] INFO com.scurrilous.circe.checksum.Crc32cIntChecksum - SSE4.2 CRC32C provider initialized
19:48:37.715 [pulsar-client-io-1-1] INFO org.apache.pulsar.client.impl.ConsumerImpl - [myjdbc] [public/default/myjdbc] Closed consumer
19:48:37.942 [main] INFO org.apache.pulsar.functions.LocalRunner - RuntimeSpawner quit because of
java.lang.NullPointerException: null
at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877) ~[com.google.guava-guava-25.1-jre.jar:?]
at com.google.common.cache.LocalCache.get(LocalCache.java:3950) ~[com.google.guava-guava-25.1-jre.jar:?]
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3973) ~[com.google.guava-guava-25.1-jre.jar:?]
at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4957) ~[com.google.guava-guava-25.1-jre.jar:?]
at org.apache.pulsar.client.impl.schema.StructSchema.decode(StructSchema.java:94) ~[org.apache.pulsar-pulsar-client-original-2.4.2.jar:2.4.2]
at org.apache.pulsar.client.impl.schema.AutoConsumeSchema.decode(AutoConsumeSchema.java:72) ~[org.apache.pulsar-pulsar-client-original-2.4.2.jar:2.4.2]
at org.apache.pulsar.client.impl.schema.AutoConsumeSchema.decode(AutoConsumeSchema.java:36) ~[org.apache.pulsar-pulsar-client-original-2.4.2.jar:2.4.2]
at org.apache.pulsar.client.api.Schema.decode(Schema.java:97) ~[org.apache.pulsar-pulsar-client-api-2.4.2.jar:2.4.2]
at org.apache.pulsar.client.impl.MessageImpl.getValue(MessageImpl.java:268) ~[org.apache.pulsar-pulsar-client-original-2.4.2.jar:2.4.2]
at org.apache.pulsar.functions.source.PulsarRecord.getValue(PulsarRecord.java:74) ~[org.apache.pulsar-pulsar-functions-instance-2.4.2.jar:2.4.2]
at org.apache.pulsar.functions.instance.JavaInstanceRunnable.readInput(JavaInstanceRunnable.java:473) ~[org.apache.pulsar-pulsar-functions-instance-2.4.2.jar:2.4.2]
at org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:246) ~[org.apache.pulsar-pulsar-functions-instance-2.4.2.jar:2.4.2]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_232]
19:49:03.105 [function-timer-thread-5-1] ERROR org.apache.pulsar.functions.runtime.RuntimeSpawner - public/default/myjdbc-java.lang.NullPointerException Function Container is dead with exception.. restarting
Also, the sink's write() method is never triggered.
I am working on something where I need to pull data from MariaDB (using HikariCP) and then send it through Redis. Eventually, when I try to pull from the database, connections start leaking. This only happens after the application has been running for a while, and then suddenly.
Here is the full log from when the leak started happening: https://hastebin.com/sekiximehe.makefile
Here is some debug info:
21:04:40 [INFO] 21:04:40.680 [HikariPool-1 housekeeper] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Before cleanup stats (total=6, active=2, idle=4, waiting=0)
21:04:40 [INFO] 21:04:40.680 [HikariPool-1 housekeeper] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - After cleanup stats (total=6, active=2, idle=4, waiting=0)
21:04:40 [INFO] 21:04:40.682 [HikariPool-1 connection adder] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Added connection org.mariadb.jdbc.MariaDbConnection#4b7a5e97
21:04:40 [INFO] 21:04:40.682 [HikariPool-1 connection adder] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - After adding stats (total=7, active=2, idle=5, waiting=0)
21:05:05 [INFO] 21:05:05.323 [HikariPool-1 housekeeper] WARN com.zaxxer.hikari.pool.ProxyLeakTask - Connection leak detection triggered for org.mariadb.jdbc.MariaDbConnection#52ede989 on thread Thread-272, stack trace follows
java.lang.Exception: Apparent connection leak detected
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:123)
at us.survivewith.bungee.database.FetchPlayerInfo.run(FetchPlayerInfo.java:29)
at java.lang.Thread.run(Thread.java:748)
21:05:10 [INFO] 21:05:10.681 [HikariPool-1 housekeeper] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Before cleanup stats (total=7, active=2, idle=5, waiting=0)
21:05:10 [INFO] 21:05:10.681 [HikariPool-1 housekeeper] DEBUG com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - After cleanup stats (total=7, active=2, idle=5, waiting=0)
21:05:39 [INFO] 21:05:39.352 [HikariPool-1 housekeeper] WARN com.zaxxer.hikari.pool.ProxyLeakTask - Connection leak detection triggered for org.mariadb.jdbc.MariaDbConnection#3cba7850 on thread Thread-274, stack trace follows
java.lang.Exception: Apparent connection leak detected
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:123)
at us.survivewith.bungee.database.FetchPlayerInfo.run(FetchPlayerInfo.java:29)
at java.lang.Thread.run(Thread.java:748)
Here is the FetchPlayerInfo.run() method:
@Override
public void run()
{
    String select = "SELECT `Rank`,`Playtime` FROM `Players` WHERE PlayerUUID=?;";

    // This is line 29. How can this possibly be causing a leak?
    try(Connection connection = Database.getHikari().getConnection())
    {
        // Get the data by querying the Players table
        try(PreparedStatement serverSQL = connection.prepareStatement(select))
        {
            serverSQL.setString(1, player);

            // Execute statement
            try(ResultSet serverRS = serverSQL.executeQuery())
            {
                // If a row exists
                if(serverRS.next())
                {
                    String rank = serverRS.getString("Rank");
                    Jedis jPublisher = Redis.getJedis().getResource();
                    jPublisher.publish("playerconnections", player + "~" + serverRS.getInt("Playtime") + "~" + rank);
                }
                else
                {
                    Jedis jPublisher = Redis.getJedis().getResource();
                    jPublisher.publish("playerconnections", player + "~" + 0 + "~DEFAULT");
                }
            }
        }
    }
    catch(SQLException e)
    {
        // Print out any exception while trying to prepare statement
        e.printStackTrace();
    }
}
This is how I've setup my Database class:
/**
 * This class is used to connect to the database
 */
public class Database
{
    private static HikariDataSource hikari;

    /**
     * Connects to the database
     */
    public static void connectToDatabase(String address,
                                         String db,
                                         String user,
                                         String password,
                                         int port)
    {
        // Setup main Hikari instance
        hikari = new HikariDataSource();
        hikari.setMaximumPoolSize(20);
        hikari.setLeakDetectionThreshold(60 * 1000);
        hikari.setDataSourceClassName("org.mariadb.jdbc.MariaDbDataSource");
        hikari.addDataSourceProperty("serverName", address);
        hikari.addDataSourceProperty("port", port);
        hikari.addDataSourceProperty("databaseName", db);
        hikari.addDataSourceProperty("user", user);
        hikari.addDataSourceProperty("password", password);
    }

    /**
     * Returns an instance of Hikari.
     * This instance is connected to the database that contains all data.
     * The stats table is only used in this database every other day
     *
     * @return The main HikariDataSource
     */
    public static HikariDataSource getHikari()
    {
        return hikari;
    }
}
And this is how I am calling the FetchPlayerInfo class:
new Thread(new FetchPlayerInfo(player.getUniqueId().toString())).start();
EDIT:
The problem still persists after using a synchronized getConnection() method from the Database class.
Jedis is also a resource from JedisPool that you should close:
// Jedis implements Closeable. Hence, the jedis instance will be auto-closed after the last statement.
try (Jedis jedis = pool.getResource()) {
    // ... use jedis here; it is returned to the pool when the block exits ...
}
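Applied to the run() method above, that would look roughly like this (a sketch, assuming Redis.getJedis() returns a JedisPool, as the question's usage suggests):
// The Jedis instance is returned to the pool automatically when the try block exits.
try (Jedis jPublisher = Redis.getJedis().getResource())
{
    jPublisher.publish("playerconnections", player + "~" + serverRS.getInt("Playtime") + "~" + rank);
}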
What version of HikariCP? It is possible that the leak is not actually a leak. A leak is reported when a connection has been out of the pool for longer than the threshold, but it may actually be returned later. Newer versions of HikariCP will log "unleaked" connections.
EDIT: I am as close to 100% certain as I can be that there is no race condition in HikariCP. This scenario is far too simple, and HikariCP is used by far too many users (millions) for such a fundamental flaw not to have surfaced before.
The only thing that makes sense, looking at the code above and the logs generated, is that one of the calls inside the outer try-catch is hanging (blocking). I suggest getting a stack dump when the condition occurs to find out whether a thread is blocked inside FetchPlayerInfo.run().
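As a minimal sketch of one way to capture such a dump from inside the application (the class and method names here are illustrative, not part of the original code), you could log every live thread's stack trace when the leak warning appears:
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper
{
    // Sketch only: print each live thread's stack so a blocked
    // FetchPlayerInfo.run() (e.g. stuck on a Redis call) becomes visible.
    public static void dumpAllThreads()
    {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (ThreadInfo info : threads.dumpAllThreads(true, true))
        {
            System.err.print(info.toString());
        }
    }
}
Alternatively, running jstack <pid> against the JVM produces the same information without any code changes.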
I am struggling to resolve this. I have tried running a small test that connects to my MySQL database and it works, but when I try to go through quartz.properties it cannot access MySQL.
I am sure my MySQL JDBC driver is fine, since I can access my MySQL database when I am not using quartz.properties.
I need help; I have been working on this for days now.
Here is the part I suspect has the problem:
#The details of the datasource specified previously
org.quartz.dataSource.myDS.driver =com.mysql.jdbc.Driver
org.quartz.dataSource.myDS.URL =jdbc:mysql://10.1.1.111:3306/somedatabase
org.quartz.dataSource.myDS.user =username
org.quartz.dataSource.myDS.password =somepassword
org.quartz.dataSource.myDS.maxConnections = 20
error output:
207 [main] INFO org.quartz.impl.StdSchedulerFactory - Quartz scheduler 'MyScheduler' initialized from default file in current working dir: 'quartz.properties'
208 [main] INFO org.quartz.impl.StdSchedulerFactory - Quartz scheduler version: 2.2.1
267 [main] DEBUG com.mchange.v2.resourcepool.BasicResourcePool - incremented pending_acquires: 1
267 [main] DEBUG com.mchange.v2.resourcepool.BasicResourcePool - incremented pending_acquires: 2
268 [main] DEBUG com.mchange.v2.resourcepool.BasicResourcePool - incremented pending_acquires: 3
268 [main] DEBUG com.mchange.v2.resourcepool.BasicResourcePool - com.mchange.v2.resourcepool.BasicResourcePool#42af94c4 config: [start -> 3; min -> 1; max -> 20; inc -> 3; num_acq_attempts -> 30; acq_attempt_delay -> 1000; check_idle_resources_delay -> 0; mox_resource_age -> 0; max_idle_time -> 0; excess_max_idle_time -> 0; destroy_unreturned_resc_time -> 0; expiration_enforcement_delay -> 0; break_on_acquisition_failure -> false; debug_store_checkout_exceptions -> false]
269 [com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread-#2] WARN com.mchange.v2.c3p0.DriverManagerDataSource - Could not load driverClass com.mysql.jdbc.Driver
java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:190)
Here is my quartz code
public class QuartzDaily {

    static Logger log = Logger.getLogger("QuartzDaily");

    public static void main(String[] args) throws ParseException, SchedulerException, IOException, NoSuchAlgorithmException
    {
        // lines of code to configure the log
        DateFormat format3 = new SimpleDateFormat("MM_dd_yyyy_HH_mm");
        Date dateToday = new Date();
        String strToday = format3.format(dateToday);

        BasicConfigurator.configure();
        PatternLayout pattern = new PatternLayout("%r [%t] %-5p %c %x - %m%n");
        FileAppender fileappender = new FileAppender(pattern, "log\\QuartzScheduler_" + strToday + ".txt");
        log.addAppender(fileappender);
        log.info("QuartzReport; main(): [** Starting scheduler services **]");

        // run quartz job
        quartzWeekly();
    }

    public static void quartzWeekly() throws SchedulerException {
        // some quartz job code
    }

    public static class TestJob implements Job {
        @Override
        public void execute(JobExecutionContext context) throws JobExecutionException {
            System.out.println("hello world");
        }
    }
}
Below is the full quartz.properties file. It is not mine; it was taken from a random tutorial site.
#Skip Update Check
#Quartz contains an "update check" feature that connects to a server to
#check if there is a new version of Quartz available
org.quartz.scheduler.skipUpdateCheck: true
org.quartz.scheduler.instanceName =MyScheduler
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 4
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread = true
org.quartz.threadPool.threadPriority = 5
#specify the jobstore used
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.useProperties = false
#specify the TerracottaJobStore
#org.quartz.jobStore.class = org.terracotta.quartz.TerracottaJobStore
#org.quartz.jobStore.tcConfigUrl = localhost:9510
#The datasource for the jobstore that is to be used
org.quartz.jobStore.dataSource = myDS
#quartz table prefixes in the database
org.quartz.jobStore.tablePrefix = qrtz_
org.quartz.jobStore.misfireThreshold = 60000
org.quartz.jobStore.isClustered = false
#The details of the datasource specified previously
org.quartz.dataSource.myDS.driver =com.mysql.jdbc.Driver
org.quartz.dataSource.myDS.URL =jdbc:mysql://10.1.1.11:3306/somedatabase
org.quartz.dataSource.myDS.user =username
org.quartz.dataSource.myDS.password =somepassword
org.quartz.dataSource.myDS.maxConnections = 20
#Clustering
#clustering currently only works with the JDBC-Jobstore (JobStoreTX
#or JobStoreCMT). Features include load-balancing and job fail-over
#(if the JobDetail's "request recovery" flag is set to true). It is important to note that
# When using clustering on separate machines, make sure that their clocks are synchronized
#using some form of time-sync service (clocks must be within a second of each other).
#See http://www.boulder.nist.gov/timefreq/service/its.htm.
# Never fire-up a non-clustered instance against the same set of tables that any
#other instance is running against.
# Each instance in the cluster should use the same copy of the quartz.properties file.
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
#Each server must have the same copy of the configuration file.
#auto-generate instance ids.
org.quartz.scheduler.instanceId = AUTO
#Note :
#If a job throws an exception, Quartz will typically immediately
#re-execute it (and it will likely throw the same exception again).
#It's better if the job catches all exception it may encounter, handle them,
#and reschedule itself, or other jobs. to work around the issue.
I had the exact same issue and fixed it by adding the MySQL connector JAR to WEB-INF/lib. Please make sure that WEB-INF is marked as a source folder in the build path.
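As a quick sanity check (a sketch, not part of the original answer), you can verify at startup that the driver class Quartz's c3p0 data source needs is actually visible on the runtime classpath:
// Sketch only: fails fast if the MySQL driver JAR is missing from the
// classpath that the application (and therefore Quartz/c3p0) runs with.
try {
    Class.forName("com.mysql.jdbc.Driver");
    System.out.println("MySQL driver found on the classpath");
} catch (ClassNotFoundException e) {
    System.err.println("MySQL driver NOT on the classpath: " + e);
}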
I'm trying to connect to a remote HBase 0.94.8 instance installed on an Ubuntu VM. I'm getting a TableNotFoundException; this is my Java code:
Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "192.168.56.101");
HTableInterface usersTable = new HTable(config, "users");
Here is the full exception trace:
14/06/24 15:59:48 WARN client.HConnectionManager$HConnectionImplementation: Encountered problems when prefetch META table:
org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for table: users, row=users,,99999999999999
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:158)
at org.apache.hadoop.hbase.client.MetaScanner.access$000(MetaScanner.java:52)
at org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:130)
at org.apache.hadoop.hbase.client.MetaScanner$1.connect(MetaScanner.java:127)
at org.apache.hadoop.hbase.client.HConnectionManager.execute(HConnectionManager.java:360)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:127)
at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:103)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:876)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:930)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:818)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:782)
at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:249)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:213)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:171)
at com.heavenize.samples.hbase.UsersTool.main(UsersTool.java:37)
Exception in thread "main" org.apache.hadoop.hbase.TableNotFoundException: users
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:952)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:818)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:782)
at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:249)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:213)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:171)
at com.heavenize.samples.hbase.UsersTool.main(UsersTool.java:37)
You can check whether the table exists before using it. To do that, you can use HBaseAdmin: create an HBaseAdmin instance and then use its isTableAvailable(String tableName) method.
Configuration config = HBaseConfiguration.create();
config.set("hbase.zookeeper.quorum", "192.168.56.101");
HBaseAdmin admin = new HBaseAdmin(config);
if (admin.isTableAvailable("users"))
{
    HTableInterface usersTable = new HTable(config, "users");
}
I hope it will help you.