Consider the following code:
@Test(singleThreaded = true)
public class KafkaConsumerTest
{
private KafkaTemplate<String, byte[]> template;
private DefaultKafkaConsumerFactory<String, byte[]> consumerFactory;
private static final KafkaEmbedded EMBEDDED_KAFKA;
static {
EMBEDDED_KAFKA = new KafkaEmbedded(1, true, "topic");
try { EMBEDDED_KAFKA.before(); } catch (final Exception e) { e.printStackTrace(); }
}
@BeforeMethod
public void setUp() throws Exception {
final Map<String, Object> senderProps = KafkaTestUtils.senderProps(EMBEDDED_KAFKA.getBrokersAsString());
senderProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
senderProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
final ProducerFactory<String, byte[]> pf = new DefaultKafkaProducerFactory<>(senderProps);
this.template = new KafkaTemplate<>(pf);
this.template.setDefaultTopic("topic");
final Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("sender", "false", EMBEDDED_KAFKA);
this.consumerFactory = new DefaultKafkaConsumerFactory<>(consumerProps);
this.consumerFactory.setValueDeserializer(new ByteArrayDeserializer());
this.consumerFactory.setKeyDeserializer(new StringDeserializer());
}
@Test
public void testSendToKafka() throws InterruptedException, ExecutionException, TimeoutException {
final String message = "42";
final Message<byte[]> msg = MessageBuilder.withPayload(message.getBytes(StandardCharsets.UTF_8)).setHeader(KafkaHeaders.TOPIC, "topic").build();
this.template.send(msg).get(10, TimeUnit.SECONDS);
final Consumer<String, byte[]> consumer = this.consumerFactory.createConsumer();
consumer.subscribe(Collections.singleton("topic"));
final ConsumerRecords<String, byte[]> records = consumer.poll(10000);
Assert.assertTrue(records.count() > 0);
Assert.assertEquals(new String(records.iterator().next().value(), StandardCharsets.UTF_8), message);
consumer.commitSync();
}
}
I am trying to send a message to a KafkaTemplate and read it again using Consumer.poll(). The test framework I am using is TestNG.
Sending works; I have verified that using the "usual" code found online (registering a message listener on a KafkaMessageListenerContainer).
However, I never receive anything in the consumer. I have tried the same sequence (create a Consumer, poll()) against a "real" Kafka installation, and it works.
Hence it looks like there is something wrong with the way I set up my ConsumerFactory. Any help would be greatly appreciated!
You need to use
EMBEDDED_KAFKA.consumeFromAnEmbeddedTopic(consumer, "topic");
before publishing records via KafkaTemplate.
Then, at the end of the test, use something like this for verification:
ConsumerRecord<String, String> record = KafkaTestUtils.getSingleRecord(consumer, "topic");
You can also keep the approach you are using; what you are missing is ConsumerConfig.AUTO_OFFSET_RESET_CONFIG set to earliest, because the default is latest. With latest, a consumer that subscribes to the topic only after the records were published won't see them.
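For example, a minimal sketch of that change in your setUp(), using the same KafkaTestUtils helpers as in your test:
final Map<String, Object> consumerProps = KafkaTestUtils.consumerProps("sender", "false", EMBEDDED_KAFKA);
// the default auto.offset.reset is "latest"; "earliest" lets a late subscriber read records sent before it joined
consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
this.consumerFactory = new DefaultKafkaConsumerFactory<>(consumerProps);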
Related
I have created a messaging component that will be called by other services to consume and send messages from Kafka. The producer part is working fine, but I am not sure what is wrong with the consumer listener part below: it does not print messages, and in debug mode control never reaches the @KafkaListener method. Yet the GUI-based Kafka manager app shows that the offset got committed, even though it is a manual offset commit.
Here is my message listener class code; I have checked that the topic and group id are set and fetched properly.
@Component
public class SpringKafkaMessageListner {
public CountDownLatch latch = new CountDownLatch(1);
@KafkaListener(topics = "#{consumerFactory.getConfigurationProperties().get(\"topic-name\")}",
groupId = "#{consumerFactory.getConfigurationProperties().get(\"group.id\")}",
containerFactory = "springKafkaListenerContainerFactory")
public void listen(ConsumerRecord<?, ?> consumerRecord, Acknowledgment ack) {
System.out.println("listening...");
System.out.println("Received Message in group : "
+ " and message: " + consumerRecord.value());
System.out.println("current offsetId : " + consumerRecord.offset());
ack.acknowledge();
latch.countDown();
}
}
Consumer config class-
@Configuration
@EnableKafka
public class KafkaConsumerBeanConfig<T> {
@Autowired
@Lazy
private KafkaConsumerConfigDTO kafkaConsumerConfigDTO;
@Bean
public ConsumerFactory<Object, T> consumerFactory() {
return new DefaultKafkaConsumerFactory<>(kafkaConsumerConfigDTO.getConfigs());
}
//for spring kafka with manual offset commit
@Bean
public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Object, T>>
springKafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<Object, T> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
//manual commit
factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
return factory;
}
@Bean
SpringKafkaMessageListner consumerListner(){
return new SpringKafkaMessageListner();
}
}
The code snippet below is the consumer interface implementation, which exposes a subscribe() method; all other bean creation is done through the ConfigurableApplicationContext.
public class SpringKafkaConsumer<T> implements Consumer<T> {
public SpringKafkaConsumer(ConsumerConfig<T> consumerConfig,
ConfigurableApplicationContext context) {
this.consumerConfig = consumerConfig;
this.context = context;
this.consumerFactory = context.getBean("consumerFactory", ConsumerFactory.class);
this.springKafkaContainer = context.getBean("springKafkaListenerContainerFactory",
ConcurrentKafkaListenerContainerFactory.class);
}
// Here it is just simple code to initialize the SpringKafkaMessageListner class and invoke the listening part.
@Override
public void subscribe() {
consumerListner = context.getBean("consumerListner", SpringKafkaMessageListner.class);
try {
consumerListner.latch.await(30, TimeUnit.SECONDS);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
Test class with my local docker kafka setup
@RunWith(SpringRunner.class)
@DirtiesContext
@ContextConfiguration(classes = QueueManagerSpringConfig.class)
public class SpringKafkaTest extends AbstractJUnit4SpringContextTests {
@Autowired
private QueueManager queueManager;
private Consumer<KafkaMessage> consumer;
// test method
@Test
public void testSubscribeWithLocalBroker() {
String topicName = "topic1";
String brokerServer = "127.0.0.1:9092";
String groupId = "grp1";
Map<String, String> additionalProp = new HashMap<>();
additionalProp.put(KafkaConsumerConfig.GROUP_ID, groupId);
additionalProp.put(KafkaConsumerConfig.AUTO_COMMIT, "false");
additionalProp.put(KafkaConsumerConfig.AUTO_COMMIT_INTERVAL, "100");
ConsumerConfig<KafkaMessage> consumerConfig =
new ConsumerConfig.Builder<>(topicName, new KafkaSuccessMessageHandler(new
KafkaMessageSerializerTest()),
new KafkaMessageDeserializerTest())
.additionalProperties(additionalProp)
.enableSpringKafka(true)
.offsetPositionStrategy(new EarliestPositionStrategy())
.build();
consumer = queueManager.getConsumer(consumerConfig);
System.out.println("start subscriber");
// calling the subscribe method of the consumer, which will invoke the Kafka listener
consumer.subscribe();
}
@Configuration
public class QueueManagerSpringConfig {
@Bean
public QueueManager queueManager() {
Map<String, String> kafkaProperties = new HashMap<>();
kafkaProperties.put(KafkaPropertyNamespace.NS_PREFIX +
KafkaPropertyNamespace.BOOTSTRAP_SERVERS,
"127.0.0.1:9092");
return QueueManagerFactory.getInstance(new KafkaPropertyNamespace(kafkaProperties));
}
}
I have to implement functionality to (re-)set a listener for a certain topic/partition to any given offset. So if events are committed up to offset 5 and the admin decides to reset the offset to 2, then events 3, 4 and 5 should be reprocessed.
We are using Spring for Kafka 2.3, and I was trying to follow the documentation on ConsumerSeekAware, which seems to be exactly what I am looking for.
The problem, however, is that we also use topics that are created at runtime. We use a KafkaMessageListenerContainer through a DefaultKafkaConsumerFactory for that purpose, and I don't know where to put the registerSeekCallback or something similar.
Is there any way to achieve this? I have trouble understanding how the class using the @KafkaListener annotations maps to the way listeners are created in the factory.
Any help would be appreciated. Even if it is only an explanation on how these things work together.
This is how the KafkaMessageListenerContainers are basically created:
public KafkaMessageListenerContainer<String, Object> createKafkaMessageListenerContainer(String topicName,
ContainerPropertiesStrategy containerPropertiesStrategy) {
MessageListener<String, String> messageListener = getMessageListener(topicName);
ConsumerFactory<String, Object> consumerFactory = new DefaultKafkaConsumerFactory<>(getConsumerFactoryConfiguration());
KafkaMessageListenerContainer<String, Object> kafkaMessageListenerContainer = createKafkaMessageListenerContainer(topicName, messageListener, bootstrapServers, containerPropertiesStrategy, consumerFactory);
return kafkaMessageListenerContainer;
}
public MessageListener<String, String> getMessageListener(String topic) {
MessageListener<String, String> messageListener = new MessageListener<String, String>() {
@Override
public void onMessage(ConsumerRecord<String, String> message) {
try {
consumerService.consume(topic, message.value());
} catch (IOException e) {
log.log(Level.WARNING, "Message couldn't be consumed", e);
}
}
};
return messageListener;
}
public static KafkaMessageListenerContainer<String, Object> createKafkaMessageListenerContainer(
String topicName, MessageListener<String, String> messageListener, String bootstrapServers, ContainerPropertiesStrategy containerPropertiesStrategy,
ConsumerFactory<String, Object> consumerFactory) {
ContainerProperties containerProperties = containerPropertiesStrategy.getContainerPropertiesForTopic(topicName);
containerProperties.setMessageListener(messageListener);
KafkaMessageListenerContainer<String, Object> kafkaMessageListenerContainer = new KafkaMessageListenerContainer<>(
consumerFactory, containerProperties);
kafkaMessageListenerContainer.setBeanName(topicName);
return kafkaMessageListenerContainer;
}
Hope that helps.
The key component is the AbstractConsumerSeekAware. Hopefully this will provide enough to get you started...
@SpringBootApplication
public class So59682801Application {
public static void main(String[] args) {
SpringApplication.run(So59682801Application.class, args).close();
}
@Bean
public ApplicationRunner runner(ListenerCreator creator,
KafkaTemplate<String, String> template, GenericApplicationContext context) {
return args -> {
System.out.println("Hit enter to create a listener");
System.in.read();
ConcurrentMessageListenerContainer<String, String> container =
creator.createContainer("so59682801group", "so59682801");
// register the container as a bean so that all the "...Aware" interfaces are satisfied
context.registerBean("so59682801", ConcurrentMessageListenerContainer.class, () -> container);
context.getBean("so59682801", ConcurrentMessageListenerContainer.class); // re-fetch to initialize
container.start();
// send some messages
IntStream.range(0, 10).forEach(i -> template.send("so59682801", "test" + i));
System.out.println("Hit enter to reseek");
System.in.read();
((MyListener) container.getContainerProperties().getMessageListener())
.reseek(new TopicPartition("so59682801", 0), 5L);
System.out.println("Hit enter to exit");
System.in.read();
};
}
}
@Component
class ListenerCreator {
private final ConcurrentKafkaListenerContainerFactory<String, String> factory;
ListenerCreator(ConcurrentKafkaListenerContainerFactory<String, String> factory) {
factory.getContainerProperties().setIdleEventInterval(5000L);
this.factory = factory;
}
ConcurrentMessageListenerContainer<String, String> createContainer(String groupId, String... topics) {
ConcurrentMessageListenerContainer<String, String> container = factory.createContainer(topics);
container.getContainerProperties().setGroupId(groupId);
container.getContainerProperties().setMessageListener(new MyListener());
return container;
}
}
class MyListener extends AbstractConsumerSeekAware implements MessageListener<String, String> {
@Override
public void onMessage(ConsumerRecord<String, String> data) {
System.out.println(data);
}
public void reseek(TopicPartition partition, long offset) {
getSeekCallbackFor(partition).seek(partition.topic(), partition.partition(), offset);
}
}
Calling reseek() on the listener queues the seek for the consumer thread when it wakes from the poll() (actually before the next one).
I think you can use an annotation for Spring Kafka like this, although it might be awkward to set the offset in the annotation at runtime:
@KafkaListener(topicPartitions =
@TopicPartition(topic = "${kafka.consumer.topic}", partitionOffsets = {
@PartitionOffset(partition = "0", initialOffset = "2")}),
containerFactory = "filterKafkaListenerContainerFactory", id = "${kafka.consumer.groupId}")
public void receive(ConsumedObject event) {
log.info(String.format("Consumed message with correlationId: %s", event.getCorrelationId()));
consumerHelper.start(event);
}
Alternatively, here is some code I wrote to consume from a given offset; I simulated the consumer failing on a message. Note this uses KafkaConsumer rather than the KafkaMessageListenerContainer.
private static void ConsumeFromOffset(KafkaConsumer<String, Customer> consumer, boolean flag, String topic) {
Scanner scanner = new Scanner(System.in);
System.out.print("Enter offset: ");
int offsetInput = scanner.nextInt();
while (true) {
ConsumerRecords<String, Customer> records = consumer.poll(500);
for (ConsumerRecord<String, Customer> record : records) {
Customer customer = record.value();
System.out.println(customer + " has offset ->" + record.offset());
if (record.offset() == 7 && flag) {
System.out.println("simulating consumer failing after offset 7..");
break;
}
}
consumer.commitSync();
if (flag) {
// consumer.seekToBeginning(Stream.of(new TopicPartition(topic, 0)).collect(Collectors.toList())); // consume from the beginning
consumer.seek(new TopicPartition(topic, 0), 3); // seek back to offset 3 and reconsume from there
flag = false;
}
}
}
Currently, I have a Flink cluster that consumes Kafka topics by a pattern; this way, we don't need to maintain a hard-coded Kafka topic list.
import java.util.regex.Pattern;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer010;
...
private static final Pattern topicPattern = Pattern.compile("DC_TEST_([A-Z0-9_]+)");
...
FlinkKafkaConsumer010<KafkaMessage> kafkaConsumer = new FlinkKafkaConsumer010<>(
topicPattern, deserializerClazz.newInstance(), kafkaConsumerProps);
DataStream<KafkaMessage> input = env.addSource(kafkaConsumer);
I just want to know: using the approach above, how can I get the real Kafka topic name during processing?
Thanks.
--Update--
The reason I need to know the topic information is that we need the topic name as a parameter in the downstream Flink sink part.
You can implement your own custom KafkaDeserializationSchema, like this:
public class CustomKafkaDeserializationSchema implements KafkaDeserializationSchema<Tuple2<String, String>> {
@Override
public boolean isEndOfStream(Tuple2<String, String> nextElement) {
return false;
}
@Override
public Tuple2<String, String> deserialize(ConsumerRecord<byte[], byte[]> record) throws Exception {
return new Tuple2<>(record.topic(), new String(record.value(), "UTF-8"));
}
@Override
public TypeInformation<Tuple2<String, String>> getProducedType() {
return new TupleTypeInfo<>(BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO);
}
}
With the custom KafkaDeserializationSchema, you can create a DataStream whose elements contain the topic info. In my demo case the element type is Tuple2<String, String>, so you can access the topic name via Tuple2#f0.
FlinkKafkaConsumer010<Tuple2<String, String>> kafkaConsumer = new FlinkKafkaConsumer010<>(
topicPattern, new CustomKafkaDeserializationSchema(), kafkaConsumerProps);
DataStream<Tuple2<String, String>> input = env.addSource(kafkaConsumer);
input.process(new ProcessFunction<Tuple2<String,String>, String>() {
@Override
public void processElement(Tuple2<String, String> value, Context ctx, Collector<String> out) throws Exception {
String topicName = value.f0;
// your processing logic here.
out.collect(value.f1);
}
});
There are two ways to do that.
Option 1:
You can use the kafka-clients library to access the Kafka metadata and get the topic list. Add the Maven dependency or equivalent.
<!-- https://mvnrepository.com/artifact/org.apache.kafka/kafka-clients -->
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>2.3.0</version>
</dependency>
You can fetch topics from the Kafka cluster and filter them with the regex as shown below:
private static final Pattern topicPattern = Pattern.compile("DC_TEST_([A-Z0-9_]+)");
Properties properties = new Properties();
properties.put("bootstrap.servers","localhost:9092");
properties.put("client.id","java-admin-client");
try (AdminClient client = AdminClient.create(properties)) {
ListTopicsOptions options = new ListTopicsOptions();
options.listInternal(false);
Collection<TopicListing> listings = client.listTopics(options).listings().get();
List<String> allTopicsList = listings.stream().map(TopicListing::name)
.collect(Collectors.toList());
List<String> matchedTopics = allTopicsList.stream()
.filter(topicPattern.asPredicate())
.collect(Collectors.toList());
} catch (Exception e) {
e.printStackTrace();
}
Once you have the matchedTopics list, you can pass it to the FlinkKafkaConsumer, as sketched below.
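For instance, a minimal sketch, assuming a plain String stream with SimpleStringSchema (substitute your own deserializer for KafkaMessage):
FlinkKafkaConsumer010<String> kafkaConsumer = new FlinkKafkaConsumer010<>(
        matchedTopics, new SimpleStringSchema(), kafkaConsumerProps);
DataStream<String> input = env.addSource(kafkaConsumer);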
Option 2:
FlinkKafkaConsumer011 in Flink release 1.8 supports dynamic topic and partition discovery based on a pattern. Below is an example:
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
private static final Pattern topicPattern = Pattern.compile("DC_TEST_([A-Z0-9_]+)");
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", "localhost:9092");
properties.setProperty("group.id", "test");
FlinkKafkaConsumer011<String> myConsumer = new FlinkKafkaConsumer011<>(
topicPattern ,
new SimpleStringSchema(),
properties);
Link : https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/connectors/kafka.html#kafka-consumers-topic-and-partition-discovery
In your case, option 2 suits best.
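Note that picking up topics that match the pattern but are created only after the job has started additionally requires enabling partition discovery, which is off by default; a sketch (the property key is flink.partition-discovery.interval-millis):
// re-check the pattern for new topics/partitions every 30 seconds
properties.setProperty(FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS, "30000");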
Since you want to access topic metadata as part of KafkaMessage, you need to implement the KafkaDeserializationSchema interface as shown below:
public class CustomKafkaDeserializationSchema implements KafkaDeserializationSchema<KafkaMessage> {
/**
* Deserializes the Kafka record.
*
* @param record the ConsumerRecord, giving access to the key, value, topic, partition and offset.
*
* @return The deserialized message as an object (null if the message cannot be deserialized).
*/
@Override
public KafkaMessage deserialize(ConsumerRecord<byte[], byte[]> record) throws IOException {
//You can access record.key(), record.value(), record.topic(), record.partition(), record.offset() to get topic information.
KafkaMessage kafkaMessage = new KafkaMessage();
kafkaMessage.setTopic(record.topic());
// Make your kafka message here and assign the values like above.
return kafkaMessage ;
}
@Override
public boolean isEndOfStream(KafkaMessage nextElement) {
return false;
}
}
And then call :
FlinkKafkaConsumer010<KafkaMessage> kafkaConsumer = new FlinkKafkaConsumer010<>(
topicPattern, new CustomKafkaDeserializationSchema(), kafkaConsumerProps);
I'm unit testing a very simple wrapper class for a KafkaProducer whose send method is simply like this
public class EntityProducer {
private final KafkaProducer<byte[], byte[]> kafkaProducer;
private final String topic;
EntityProducer(KafkaProducer<byte[], byte[]> kafkaProducer, String topic)
{
this.kafkaProducer = kafkaProducer;
this.topic = topic;
}
public void send(String id, BusinessEntity entity) throws Exception
{
ProducerRecord<byte[], byte[]> record = new ProducerRecord<>(
this.topic,
Transformer.HexStringToByteArray(id),
entity.serialize()
);
kafkaProducer.send(record);
kafkaProducer.flush();
}
}
The unit test reads as follows:
@Test public void send() throws Exception
{
@SuppressWarnings("unchecked")
KafkaProducer<byte[], byte[]> mockKafkaProducer = Mockito.mock(KafkaProducer.class);
String topic = "mock topic";
EntityProducer producer = new EntityProducer(mockKafkaProducer, topic);
BusinessEntity mockedEntity = Mockito.mock(BusinessEntity.class);
byte[] serialized = new byte[]{1,2,3};
when(mockedEntity.serialize()).thenReturn(serialized);
String id = "B441B675-294E-4C25-A4B1-122CD3A60DD2";
producer.send(id, mockedEntity);
verify(mockKafkaProducer).send(
new ProducerRecord<>(
topic,
Transformer.HexStringToByteArray(id),
mockedEntity.serialize()
)
);
verify(mockKafkaProducer).flush();
}
The first verify fails, hence the test fails, with the following message:
Argument(s) are different! Wanted:
kafkaProducer.send(
ProducerRecord(topic=mock topic, partition=null, key=[B@181e731e, value=[B@35645047, timestamp=null)
);
-> at xxx.EntityProducerTest.send(EntityProducerTest.java:33)
Actual invocation has different arguments:
kafkaProducer.send(
ProducerRecord(topic=mock topic, partition=null, key=[B@6f44a157, value=[B@35645047, timestamp=null)
);
What stands out is that the key of the ProducerRecord is not the same, while the value appears to be the same.
Is the unit test properly oriented? How can I make the test pass?
Kind regards.
I would suggest capturing the argument and verifying it. Please see the code below:
ArgumentCaptor<ProducerRecord> captor = ArgumentCaptor.forClass(ProducerRecord.class);
verify(mockKafkaProducer).send(captor.capture());
ProducerRecord actualRecord = captor.getValue();
assertThat(actualRecord.topic()).isEqualTo("mock topic");
assertThat(actualRecord.key()).isEqualTo("...");
...
This is more readable (in my view), and it serves as a kind of documentation of what is happening in the method.
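Since the key and value here are byte arrays, content-based assertions avoid reference-equality surprises; a minimal sketch, assuming JUnit's assertArrayEquals and the Transformer helper from the question:
// byte[] equals() is reference equality, so compare the contents explicitly
Assert.assertArrayEquals(Transformer.HexStringToByteArray(id), (byte[]) actualRecord.key());
Assert.assertArrayEquals(serialized, (byte[]) actualRecord.value());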
This code:
verify(mockKafkaProducer).send(
new ProducerRecord<>(
topic,
Transformer.HexStringToByteArray(id),
mockedEntity.serialize()
)
);
Means:
"Verify that 'send' was called on 'mockKafkaProducer' with the following arguments: ..."
This assertion fails, since send was actually called with different arguments.
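The arguments most likely differ because the record key is a byte[]: two arrays with identical content are not equal() to each other, so a freshly built ProducerRecord never matches the one the wrapper created. One way to verify by content instead is a custom matcher; a sketch, assuming Mockito's argThat and java.util.Arrays:
verify(mockKafkaProducer).send(ArgumentMatchers.<ProducerRecord<byte[], byte[]>>argThat(rec ->
        rec.topic().equals(topic)
        && Arrays.equals(rec.key(), Transformer.HexStringToByteArray(id))
        && Arrays.equals(rec.value(), serialized)));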
I have an OSGi framework in which I accept REST calls in one bundle, and the data received in the REST call is sent to a Kafka broker. There is another bundle that consumes the messages from the broker.
If I initialize the Kafka consumer bundle before the REST bundle, the REST BundleActivator is never called, because the code runs in the while loop of the Kafka consumer. And if I initialize the REST bundle before the consumer bundle, the consumer bundle never starts.
Following is the code for the Activator of the Kafka bundle:
public class KafkaConsumerActivator implements BundleActivator {
private static final String ZOOKEEPER_CONNECT = "zookeeper.connect";
private static final String GROUP_ID = "group.id";
private static final String BOOTSTRAP_SERVERS = "bootstrap.servers";
private static final String KEY_DESERIALIZER = "key.deserializer";
private ConsumerConnector consumerConnector;
private KafkaConsumer<String, String> consumer;
private static final String VALUE_DESERIALIZER = "value.deserializer";
public void start(BundleContext context) throws Exception {
Properties properties = new Properties();
properties.put(ZOOKEEPER_CONNECT,
MosaicThingsConstant.KAFKA_BROCKER_IP + ":" + MosaicThingsConstant.ZOOKEEPER_PORT);
properties.put(GROUP_ID, MosaicThingsConstant.KAFKA_GROUP_ID);
properties.put(BOOTSTRAP_SERVERS,
MosaicThingsConstant.KAFKA_BROCKER_IP + ":" + MosaicThingsConstant.KAFKA_BROCKER_PORT);
properties.put(KEY_DESERIALIZER, StringDeserializer.class.getName());
properties.put(VALUE_DESERIALIZER, StringDeserializer.class.getName());
consumer = new KafkaConsumer<>(properties);
try {
consumer.subscribe(Arrays.asList(MosaicThingsConstant.KAFKA_TOPIC_NAME));
while (true) {
ConsumerRecords<String, String> records = consumer.poll(Long.MAX_VALUE);
for (ConsumerRecord<String, String> record : records) {
Map<String, Object> data = new HashMap<>();
data.put("partition", record.partition());
data.put("offset", record.offset());
data.put("value", record.value());
System.out.println(": " + data);
}
}
} catch (WakeupException e) {
// ignore for shutdown
} finally {
consumer.close();
}
}
}
You should never do anything long-running in the start method of an Activator; it will block the whole OSGi framework.
It is best to execute the whole connect-and-poll loop in a separate thread. In the stop method you can then tell this thread to exit.
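A minimal sketch of that shape, reusing the consumer setup from the question (the thread handling shown here is illustrative, not a fixed API):
public class KafkaConsumerActivator implements BundleActivator {

    private KafkaConsumer<String, String> consumer;
    private Thread pollingThread;

    public void start(BundleContext context) throws Exception {
        Properties properties = new Properties();
        // ... same consumer properties as in the question ...
        consumer = new KafkaConsumer<>(properties);

        pollingThread = new Thread(() -> {
            try {
                consumer.subscribe(Arrays.asList(MosaicThingsConstant.KAFKA_TOPIC_NAME));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Long.MAX_VALUE);
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println(record.value());
                    }
                }
            } catch (WakeupException e) {
                // expected on shutdown
            } finally {
                consumer.close();
            }
        }, "kafka-consumer");
        pollingThread.start();
        // start() returns immediately, so the framework can continue starting other bundles
    }

    public void stop(BundleContext context) throws Exception {
        // wakeup() is the thread-safe way to break out of a blocking poll()
        consumer.wakeup();
        pollingThread.join(5000);
    }
}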