I'm trying to receive some events on an Ignite cache. My config.xml:
<bean abstract="true" id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<property name="workDirectory" value="/opt/apache-ignite-2.8.1-bin"/>
<!-- Set to true to enable distributed class loading for examples, default is false. -->
<property name="peerClassLoadingEnabled" value="true"/>
<!-- Enable task execution events for examples. -->
<property name="includeEventTypes">
<util:constant static-field="org.apache.ignite.events.EventType.EVTS_CACHE"/>
</property>
</bean>
The code is partially taken from the Ignite examples:
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);
cfg.setPeerClassLoadingEnabled(true);
TcpDiscoveryMulticastIpFinder ipFinder = new TcpDiscoveryMulticastIpFinder();
ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500..47509"));
cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
Ignite ignite = Ignition.start(cfg);
IgnitePredicate<CacheEvent> locLsnr = evt -> {
    System.out.println("Received event [evt=" + evt.name() + ", key=" + evt.key() +
        ", oldVal=" + evt.oldValue() + ", newVal=" + evt.newValue() + ']');
    return true; // Continue listening.
};
ignite.events().localListen(locLsnr,
    EventType.EVT_CACHE_OBJECT_PUT,
    EventType.EVT_CACHE_OBJECT_READ,
    EventType.EVT_CACHE_OBJECT_REMOVED);
final IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
for (int i = 0; i < 20; i++)
    cache.put(i, Integer.toString(i));
System.out.println(">> Created the cache and added the values.");
ignite.close();
I can't print the received events. I probably have an issue in my config. I also tried configuring the events in code:
cfg.setIncludeEventTypes(EVTS_CACHE);
but it had no effect.
Cache-related events can't be captured locally on a client node. For example, a CACHE_PUT event is fired on the server node that actually stores the primary partition for the particular key, not on the node from which the #put method is called.
You can either listen for local events on a server node:
IgniteEvents events = server.events();
events.localListen(locLsnr, EVTS_CACHE);
Or listen for remote events from a client:
IgniteEvents events = ignite.events();
events.remoteListen(new IgniteBiPredicate<UUID, CacheEvent>() {
    @Override
    public boolean apply(UUID uuid, CacheEvent e) {
        // Local listener: process the event on the client.
        return true; // Continue listening.
    }
}, evt -> {
    // Remote filter: runs on the server nodes.
    System.out.println("remote event: " + evt.name());
    return true;
}, EVTS_CACHE);
Note that it's required to explicitly enable events on your server:
Ignite server = Ignition.start(new IgniteConfiguration()
    .setIgniteInstanceName("server")
    .setIncludeEventTypes(EVTS_CACHE));
Please check the documentation for more details:
https://www.gridgain.com/docs/latest/developers-guide/events/listening-to-events
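For reference, here is a minimal end-to-end sketch combining the two snippets above. Assumptions: both nodes run in the same JVM (so the filter class is on the server's classpath), default discovery is used, the class and instance names are illustrative, and the one-second sleep is only there to let the asynchronous notifications arrive before shutdown.

import java.util.UUID;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgniteBiPredicate;

public class CacheEventsExample {
    public static void main(String[] args) throws Exception {
        // Server node: cache events must be enabled here, because this is where they fire.
        Ignite server = Ignition.start(new IgniteConfiguration()
            .setIgniteInstanceName("server")
            .setPeerClassLoadingEnabled(true)
            .setIncludeEventTypes(EventType.EVTS_CACHE));

        // Client node that performs the puts.
        Ignite client = Ignition.start(new IgniteConfiguration()
            .setIgniteInstanceName("client")
            .setPeerClassLoadingEnabled(true)
            .setClientMode(true));

        // Local listener runs on the client; the remote filter runs on the server nodes.
        client.events().remoteListen(
            (IgniteBiPredicate<UUID, CacheEvent>) (nodeId, evt) -> {
                System.out.println("Received event [evt=" + evt.name() + ", key=" + evt.key() + ']');
                return true; // Continue listening.
            },
            evt -> true, // Pass every cache event through to the client.
            EventType.EVT_CACHE_OBJECT_PUT,
            EventType.EVT_CACHE_OBJECT_READ,
            EventType.EVT_CACHE_OBJECT_REMOVED);

        IgniteCache<Integer, String> cache = client.getOrCreateCache("myCache");
        for (int i = 0; i < 20; i++)
            cache.put(i, Integer.toString(i));

        // Give the asynchronous event notifications a moment to arrive before shutting down.
        Thread.sleep(1000);

        client.close();
        server.close();
    }
}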
I am trying to make it possible to redeploy a JBoss 7.1.0 cluster with a WAR that embeds Apache Ignite.
I am starting the cache like this:
System.setProperty("IGNITE_UPDATE_NOTIFIER", "false");
igniteConfiguration = new IgniteConfiguration();
int failureDetectionTimeout = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_FAILURE_DETECTION_TIMEOUT", "60000"));
igniteConfiguration.setFailureDetectionTimeout(failureDetectionTimeout);
String igniteVmIps = getProperty("IGNITE_VM_IPS");
List<String> addresses = Arrays.asList("127.0.0.1:47500");
if (StringUtils.isNotBlank(igniteVmIps)) {
    addresses = Arrays.asList(igniteVmIps.split(","));
}
int networkTimeout = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_NETWORK_TIMEOUT", "60000"));
boolean failureDetectionTimeoutEnabled = Boolean.parseBoolean(getProperty("IGNITE_TCP_DISCOVERY_FAILURE_DETECTION_TIMEOUT_ENABLED", "true"));
int tcpDiscoveryLocalPort = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_LOCAL_PORT", "47500"));
int tcpDiscoveryLocalPortRange = Integer.parseInt(getProperty("IGNITE_TCP_DISCOVERY_LOCAL_PORT_RANGE", "0"));
TcpDiscoverySpi tcpDiscoverySpi = new TcpDiscoverySpi();
tcpDiscoverySpi.setLocalPort(tcpDiscoveryLocalPort);
tcpDiscoverySpi.setLocalPortRange(tcpDiscoveryLocalPortRange);
tcpDiscoverySpi.setNetworkTimeout(networkTimeout);
tcpDiscoverySpi.failureDetectionTimeoutEnabled(failureDetectionTimeoutEnabled);
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(addresses);
tcpDiscoverySpi.setIpFinder(ipFinder);
igniteConfiguration.setDiscoverySpi(tcpDiscoverySpi);
Ignite ignite = Ignition.start(igniteConfiguration);
ignite.cluster().active(true);
Then I am stopping the cache when the application undeploys:
ignite.close();
When I try to redeploy, I get the following error during initialization:
org.apache.ignite.spi.IgniteSpiException: Failed to marshal custom event: StartRoutineDiscoveryMessage [startReqData=StartRequestData [prjPred=org.apache.ignite.internal.cluster.ClusterGroupAdapter$CachesFilter@7385a997, clsName=null, depInfo=null, hnd=org.apache.ignite.internal.GridEventConsumeHandler@2aec6952, bufSize=1, interval=0, autoUnsubscribe=true], keepBinary=false, deserEx=null, routineId=bbe16e8e-2820-4ba0-a958-d5f644498ba2]
If I fully restart the server, it starts up fine.
Am I missing some magic in the shutdown process?
I see what I did wrong, and it was code I omitted from the ticket.
ignite.events(ignite.cluster().forCacheNodes(cacheConfig.getKey())).remoteListen(locLsnr, rmtLsnr,
EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_REMOVED);
When this listener registration ran twice, it caused that strange error.
I put a try-catch that ignores the exception around it for now, and things seem to be OK.
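A cleaner fix than swallowing the exception is possible, since remoteListen returns an operation ID that can be used to deregister the listener later. A sketch, assuming the ID can be kept for the lifetime of the deployment:

// remoteListen returns an operation ID identifying the listener routine.
UUID listenerId = ignite.events(ignite.cluster().forCacheNodes(cacheConfig.getKey()))
        .remoteListen(locLsnr, rmtLsnr,
                EVT_CACHE_OBJECT_PUT, EVT_CACHE_OBJECT_READ, EVT_CACHE_OBJECT_REMOVED);

// On undeploy, stop the remote listener before closing the node,
// so a redeploy can register it again cleanly.
ignite.events().stopRemoteListen(listenerId);
ignite.close();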
I'm using Java 11 and kafka-client 2.0.0.
I'm using the following code to create a consumer:
public Consumer createConsumer(Properties properties, String regex) {
    log.info("Creating consumer and listener..");
    Consumer consumer = new KafkaConsumer<>(properties);
    ConsumerRebalanceListener listener = new ConsumerRebalanceListener() {
        @Override
        public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            log.info("The following partitions were revoked from consumer : {}", Arrays.toString(partitions.toArray()));
        }

        @Override
        public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            log.info("The following partitions were assigned to consumer : {}", Arrays.toString(partitions.toArray()));
        }
    };
    consumer.subscribe(Pattern.compile(regex), listener);
    log.info("consumer subscribed");
    return consumer;
}
My poll loop is in a different place in the code:
public <K, V> void startWorking(Consumer<K, V> consumer) {
    try {
        while (true) {
            ConsumerRecords<K, V> records = consumer.poll(600);
            if (records.count() > 0) {
                log.info("Polled {} records", records.count());
            } else {
                log.info("polled 0 records.. going to sleep..");
                Thread.sleep(200);
            }
        }
    } catch (WakeupException | InterruptedException e) {
        log.error("Consumer is shutting down", e);
    } finally {
        consumer.close();
    }
}
When I run the code and use this function, the consumer is created and the log contains the following messages:
Creating consumer and listener..
consumer subscribed
polled 0 records.. going to sleep..
polled 0 records.. going to sleep..
polled 0 records.. going to sleep..
The log doesn't contain any info regarding the partition assignment/revocation.
In addition, I can see in the log the properties that the consumer uses (group.id is set):
2020-07-09 14:31:07.959 DEBUG 7342 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [server1:9092]
check.crcs = true
client.id =
group.id = mygroupid
key.deserializer = ..
value.deserializer = ..
So I tried to use kafka-console-consumer with the same configuration in order to consume from one of the topics that the regex (mytopic.*) should catch (in this case I used the topic mytopic-1):
/usr/bin/kafka-console-consumer.sh --bootstrap-server server1:9092 --topic mytopic-1 --property print.timestamp=true --consumer.config /data/scripts/kafka-consumer.properties --from-beginning
I have a poll loop in another part of my code that times out every 10 minutes. So the bottom line: the problem is that partitions aren't assigned to the Java consumer. The prints inside the listener never happen, and the consumer doesn't have any partitions to listen to.
It turned out I was missing the SSL properties in my properties file. Don't forget to specify security.protocol=SSL if your cluster uses SSL. It seems that the kafka-client API doesn't throw an exception if Kafka uses SSL and you try to access it without the SSL properties configured.
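For reference, a minimal sketch of what the consumer properties might look like with SSL configured (the broker address, file paths, and passwords are placeholders):

import java.util.Properties;

Properties properties = new Properties();
properties.put("bootstrap.servers", "server1:9092"); // placeholder broker
properties.put("group.id", "mygroupid");
properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
// Without this line the client defaults to PLAINTEXT and, as described above,
// fails against an SSL listener without throwing an exception.
properties.put("security.protocol", "SSL");
properties.put("ssl.truststore.location", "/path/to/truststore.jks"); // placeholder
properties.put("ssl.truststore.password", "changeit");                // placeholder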
I'm using the Kafka Java client version 0.10.2.1. I am able to produce simple messages to Kafka for a "heartbeat" test, but I cannot consume a message from that same topic using the SDK. I am able to consume the message when I go into the Kafka CLI, so I have confirmed the message is there. Here's the function I'm using to consume from my Kafka server, with the props. I pass in the message I produced to the topic only after I have confirmed the produce() was successful; I can post that function later if requested:
private def consumeFromKafka(topic: String, expectedMessage: String): Boolean = {
  val props: Properties = initProps("consumer")
  val consumer = new KafkaConsumer[String, String](props)
  consumer.subscribe(List(topic).asJava)
  var readExpectedRecord = false
  try {
    val records = {
      val firstPollRecs = consumer.poll(MAX_POLLTIME_MS)
      // increase timeout and try again if nothing comes back the first time in case system is busy
      if (firstPollRecs.count() == 0) firstPollRecs else {
        logger.info("KafkaHeartBeat: First poll had 0 records- trying again - doubling timeout to "
          + (MAX_POLLTIME_MS * 2) / 1000 + " sec.")
        consumer.poll(MAX_POLLTIME_MS * 2)
      }
    }
    records.forEach(rec => {
      if (rec.value() == expectedMessage) readExpectedRecord = true
    })
  } catch {
    case e: Throwable => // log error
  } finally {
    consumer.close()
  }
  readExpectedRecord
}
private def initProps(propsType: String): Properties = {
  val prop = new Properties()
  prop.put("bootstrap.servers", kafkaServer + ":" + kafkaPort)
  propsType match {
    case "producer" => {
      prop.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      prop.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      prop.put("acks", "1")
      prop.put("producer.type", "sync")
      prop.put("retries", "3")
      prop.put("linger.ms", "5")
    }
    case "consumer" => {
      prop.put("group.id", groupId)
      prop.put("enable.auto.commit", "false")
      prop.put("auto.commit.interval.ms", "1000")
      prop.put("session.timeout.ms", "30000")
      prop.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      prop.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      prop.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
      // poll just once, should only be one record for the heartbeat
      prop.put("max.poll.records", "1")
    }
  }
  prop
}
Now when I run the code, here's what it outputs in the console:
13:04:21 - Discovered coordinator serverName:9092 (id: 2147483647 rack: null) for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e.
13:04:23 INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:24 INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:25 INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e with generation 1
13:04:26 INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [HeartBeat_Topic.Service_5.2018-08-03.13_04_10.377-0] for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:27 INFO c.p.p.l.util.KafkaHeartBeatUtil - KafkaHeartBeat: First poll had 0 records- trying again - doubling timeout to 60 sec.
And then nothing else: no errors are thrown, and no records are polled. Does anyone have any idea what's preventing the consume from happening? The subscribe seems to be successful, as I'm able to successfully call listTopics and list partitions with no problem.
Your code has a bug. Your line:
if (firstPollRecs.count() == 0)
should say this instead:
if (firstPollRecs.count() > 0)
Otherwise, when the first poll returns nothing, you hand back the empty firstPollRecs and iterate over that, which obviously yields nothing; the retry poll only runs when the first poll already had records.
I am new to Apache Ignite. I am trying to populate a cache and read from it. I have created two Java projects: one populates the Apache Ignite caches and the other prints the cached data; however, the printing project throws an error.
Here is the code that I use to populate the caches:
public void run(String... arg0) throws Exception {
    try (Ignite ignite = Ignition.start("ignite.xml")) {
        int iteration = 0;
        while (true) {
            iteration++;
            IgniteCache<Object, Object> cache = ignite.getOrCreateCache("test cache " + iteration);
            System.out.println("" + 100);
            System.out.println("Caching started for iteration " + iteration);
            printMemory();
            for (int i = 0; i < 100; i++) {
                cache.put(i, new CacheObject(i, "Cached integer " + i));
                System.out.println(i);
                Thread.sleep(100);
            }
            //cache.destroy();
            System.out.println("**************************************" + cache.size());
        }
    }
}
This is the code that I use to print the cached data:
Ignition.setClientMode(true);
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setPeerClassLoadingEnabled(true);
TcpDiscoveryMulticastIpFinder discoveryMulticastIpFinder = new TcpDiscoveryMulticastIpFinder();
Set<String> set = new HashSet<>();
set.add("serverhost:47500..47509");
discoveryMulticastIpFinder.setAddresses(set);
TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
discoverySpi.setIpFinder(discoveryMulticastIpFinder);
cfg.setDiscoverySpi(discoverySpi);
cfg.setIncludeEventTypes(EVTS_CACHE);
Ignite ignite = Ignition.start(cfg);
System.out.println("***************************************************\n"+ignite.cacheNames()+"\n****************************");
CacheConfiguration<String, BinaryObject> cacheConfiguration = new CacheConfiguration<>(CACHE_NAME);
IgniteCache<String, BinaryObject> cache = ignite.getOrCreateCache(cacheConfiguration).withKeepBinary();
These two pieces of code are in different projects, so while the first project is populating the cache, I am trying to reach the cache from the other one.
The error below occurs when I try to reach the cached data. This is the error returned by the code that reads the cache and prints it:
Aug 01, 2018 9:25:25 AM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to start manager: GridManagerAdapter [enabled=true, name=o.a.i.i.managers.discovery.GridDiscoveryManager]
class org.apache.ignite.IgniteCheckedException: Failed to start SPI: TcpDiscoverySpi [addrRslvr=null, sockTimeout=5000, ackTimeout=5000, reconCnt=10, maxAckTimeout=600000, forceSrvMode=false, clientReconnectDisabled=false]
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:258)
at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:660)
at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1505)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:917)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1688)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1547)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1003)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:534)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:515)
at org.apache.ignite.Ignition.start(Ignition.java:322)
at test.App.main(App.java:76)
Caused by: class org.apache.ignite.spi.IgniteSpiException: Local node's marshaller differs from remote node's marshaller (to make sure all nodes in topology have identical marshaller, configure marshaller explicitly in configuration) [locMarshaller=org.apache.ignite.internal.binary.BinaryMarshaller, rmtMarshaller=org.apache.ignite.marshaller.optimized.OptimizedMarshaller, locNodeAddrs=[192.168.1.71/0:0:0:0:0:0:0:1%lo, /127.0.0.1, /192.168.1.71], locPort=0, rmtNodeAddr=[192.168.1.71/0:0:0:0:0:0:0:1%lo, /127.0.0.1, /192.168.1.71], locNodeId=b41f0d09-5a7f-424b-b3b5-420a5e1acdf6, rmtNodeId=ff436f20-5d4b-477e-aade-837d59b1eaa7]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.checkFailedError(TcpDiscoverySpi.java:1647)
at org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.body(ClientImpl.java:1460)
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
Caused by: class org.apache.ignite.spi.IgniteSpiException: Local node's marshaller differs from remote node's marshaller (to make sure all nodes in topology have identical marshaller, configure marshaller explicitly in configuration) [locMarshaller=org.apache.ignite.internal.binary.BinaryMarshaller, rmtMarshaller=org.apache.ignite.marshaller.optimized.OptimizedMarshaller
It looks like you've explicitly set a marshaller in ignite.xml. Check the <property name="marshaller"> entry in the XML config file: you must have the same marshaller configured on all nodes in the cluster, or they won't be able to communicate.
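For example, if the server's ignite.xml pins the optimized marshaller, here is a sketch of matching it programmatically on the client. This assumes an Ignite version where OptimizedMarshaller is still public, as the error message above suggests; alternatively, remove the property everywhere so every node falls back to the default BinaryMarshaller:

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.marshaller.optimized.OptimizedMarshaller;

IgniteConfiguration cfg = new IgniteConfiguration();
// Must match the marshaller configured in ignite.xml on the server nodes.
cfg.setMarshaller(new OptimizedMarshaller());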
We've performed a performance test with Oracle Advanced Queuing in our Oracle DB environment. We created the queue and the queue table with the following script:
BEGIN
  DBMS_AQADM.create_queue_table(
    queue_table        => 'verisoft.qt_test',
    queue_payload_type => 'SYS.AQ$_JMS_MESSAGE',
    sort_list          => 'ENQ_TIME',
    multiple_consumers => false,
    message_grouping   => 0,
    comment            => 'POC Authorizations Queue Table - KK',
    compatible         => '10.0',
    secure             => true);

  DBMS_AQADM.create_queue(
    queue_name     => 'verisoft.q_test',
    queue_table    => 'verisoft.qt_test',
    queue_type     => dbms_aqadm.NORMAL_QUEUE,
    max_retries    => 10,
    retry_delay    => 0,
    retention_time => 0,
    comment        => 'POC Authorizations Queue - KK');

  DBMS_AQADM.start_queue('q_test');
END;
/
We published 1,000,000 messages at 2,380 TPS using a PL/SQL client, and we consumed the 1,000,000 messages at 292 TPS using the Oracle JMS API client.
The consumer rate is almost 10 times slower than the publisher's, and that speed does not meet our requirements.
Below is the piece of Java code that we use to consume messages:
if (q == null) initializeQueue();
System.out.println(listenerID + ": Listening on queue " + q.getQueueName() + "...");
MessageConsumer consumer = sess.createConsumer(q);
for (Message m; (m = consumer.receive()) != null;) {
    new Timer().schedule(new QueueExample(m), 0);
}
sess.close();
con.close();
Do you have any suggestions on how we can improve performance on the consumer side?
Your use of Timer may be the primary issue. The Timer documentation reads:
Corresponding to each Timer object is a single background thread that is used to execute all of the timer's tasks, sequentially. Timer tasks should complete quickly. If a timer task takes excessive time to complete, it "hogs" the timer's task execution thread. This can, in turn, delay the execution of subsequent tasks, which may "bunch up" and execute in rapid succession when (and if) the offending task finally completes.
I would suggest you use a thread pool instead:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// My executor.
ExecutorService executor = Executors.newCachedThreadPool();

public void test() throws InterruptedException {
    for (int i = 0; i < 1000; i++) {
        final int n = i;
        // Instead of using Timer, create a Runnable and pass it to the Executor.
        executor.submit(new Runnable() {
            @Override
            public void run() {
                System.out.println("Run " + n);
            }
        });
    }
    executor.shutdown();
    executor.awaitTermination(1, TimeUnit.DAYS);
}
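Applied to your receive loop, that could look like the sketch below. It assumes QueueExample extends TimerTask (which implements Runnable), since you were passing it to Timer.schedule, and the pool size is a placeholder to tune for your workload:

ExecutorService executor = Executors.newFixedThreadPool(8); // pool size: placeholder to tune

for (Message m; (m = consumer.receive()) != null;) {
    // TimerTask implements Runnable, so QueueExample can be submitted as-is.
    executor.submit(new QueueExample(m));
}

executor.shutdown();
executor.awaitTermination(1, TimeUnit.DAYS);
sess.close();
con.close();

This way messages are processed concurrently by a bounded pool instead of one short-lived Timer thread per message.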