Remove RabbitMQ consumers and see it in the browser's RabbitMQ console - Java

Disclaimer: I'm a newbie with RabbitMQ, Spring Integration, and Spring Cloud Stream.
I have the following class:
@Component
public class RabbitMQChannelBindingFactory {
    ...
    private org.springframework.cloud.stream.binder.rabbit.RabbitMessageChannelBinder binder;
    private org.springframework.cloud.stream.config.BindingServiceProperties bindingServiceProperties;
    private org.springframework.cloud.stream.binding.BindingService bindingService;
    private org.springframework.beans.factory.config.ConfigurableListableBeanFactory beanFactory;
    private org.springframework.cloud.stream.binding.SubscribableChannelBindingTargetFactory bindingTargetFactory;
    private org.springframework.amqp.rabbit.connection.ConnectionFactory rabbitConnectionFactory;
    ...
}
What is needed?
I have a mechanism that creates an Exchange+Queue+Consumer and I have a mechanism that destroys these.
The exchange and the queue both have auto-delete set to true.
What is the problem?
The inherited mechanism that destroys these three elements does not work. It deletes just the exchange. The queue doesn't get deleted because it still has a consumer, which I can also see in my application.
What has been tried?
I tried using JVisualVM to get to the String instance of the consumer tag, and then walked up the object hierarchy to remove the consumers.
I changed org.springframework.amqp.rabbit.listener.BlockingQueueConsumer inside my application so that my copy would be loaded first by the class loader. Inside it I added something like the following, in order to keep track of all the consumers created in my application:
public class BlockingQueueConsumer {
    ...
    public static List<BlockingQueueConsumer> all = new ArrayList<>();

    public BlockingQueueConsumer(...) {
        ...
        all.add(this);
        ...
    }
    ...
}
Once that was done, I added another method to the RabbitMQChannelBindingFactory class that calls the cancel method for all the consumers, something like this:
class RabbitMQChannelBindingFactory {
    public void disconnect(...) {
        BlockingQueueConsumer lastBlockingQueueConsumer =
            BlockingQueueConsumer.all.get(BlockingQueueConsumer.all.size() - 1);
        lastBlockingQueueConsumer.getConsumerTags()
            .forEach(consumerTag -> basicCancel(lastBlockingQueueConsumer, consumerTag));
    }
}
At this point, in the browser with the RabbitMQ console loaded, we can see that the queue is deleted (in addition to the exchange and the consumer).
What is the problem?
I cannot find a way to connect the BlockingQueueConsumer to the autowired properties.
For example, I have tried:
public void deleteRabbitMQConsumer() {
    RabbitTemplate rabbitTemplate = new RabbitTemplate(rabbitConnectionFactory);
    rabbitTemplate.execute(channel -> {
        if (channel instanceof ChannelN) {
            ChannelN channelN = (ChannelN) channel;
            return true;
        }
        return false;
    });
}
but it seems that there are no consumers inside the ChannelN.
Can you please give me a direction on what needs to be understood first?
Or what are some sources that could help me?
Has anybody tried cancelling a consumer using these autowired properties?
Or do I need to add other autowired properties?
I have tried the https://stackoverflow.com/a/27633771/13622666 solution.
The solution
@Component
public class RabbitMQChannelBindingFactory {
    ...
    private org.springframework.cloud.stream.binder.rabbit.RabbitMessageChannelBinder binder;

    private void connectAndDisconnectConsumer(...) {
        ...
        Binding<MessageChannel> messageChannelBinding =
            binder.bindConsumer(exchangeName, "", channel, consumerProperties);
        ... // receive messages
        messageChannelBinding.stop();
        ...
    }
}
And the stacktrace:
messageChannelBinding.stop();
DefaultBinding#stop
AbstractEndpoint#stop()
AmqpInboundChannelAdapter#doStop
AbstractMessageListenerContainer#stop()
AbstractMessageListenerContainer#doStop
AbstractMessageListenerContainer#shutdown
SimpleMessageListenerContainer#doShutdown
BlockingQueueConsumer#basicCancel(boolean)

Voted to close. You must not abuse the Java class system like that; concentrate instead on learning the library you use. Probably someone has already asked about the solution you are looking for. As Gary said: there is just stop() on the Spring Cloud Stream binding, which stops the MessageListenerContainer, which, in turn, cancels all the consumers it has on the queue. Your auto-deleted queue is then removed from RabbitMQ. There is no reason to destroy exchanges, although you can do that via AmqpAdmin.deleteExchange().
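For illustration, a minimal sketch of that approach, assuming the Binding from the snippet above and an AmqpAdmin bean are at hand (the method and parameter names here are illustrative, not from the original post):

import org.springframework.amqp.core.AmqpAdmin;
import org.springframework.cloud.stream.binder.Binding;
import org.springframework.messaging.MessageChannel;

public void tearDown(Binding<MessageChannel> binding, AmqpAdmin amqpAdmin, String exchangeName) {
    // Stopping the binding stops the underlying message listener container,
    // which cancels its consumers; the auto-delete queue is then removed by the broker.
    binding.stop();
    // Optional: only if you really want the exchange gone as well.
    amqpAdmin.deleteExchange(exchangeName);
}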

Related

Class with datastructure reused by multiple other classes SpringBoot

I am new to Spring Boot and have implemented a normal Spring Boot application with HTTP endpoints that receive data and put it in a database. Now I want some data to go both into the database and into a class holding a data structure. Since there will be continuous operations on this data, I need to work with it in a separate background process.
@Service
public class RulesManager {
    private HashMap<Integer, Rule> rules = new HashMap<>();

    public void addRule(Rule rule) {
        // Add rule to the database
    }

    // should be running in the background
    public void updateRules() {
        // Continuous check of rules and update of this.rules HashMap
    }
}
@SpringBootApplication
public class RulesApplication {
    public static void main(String... args) {
        SpringApplication.run(RulesApplication.class, args);
        // How do I call RulesManager.updateRules() to run in the background
        // and make changes to the rules HashMap?
    }
}
So while listening to HTTP requests, I want my application to run a background process that never stops and repeats itself. I am not sure how to call that class from the main RulesApplication class so that both HTTP requests and the background process are able to make changes to the this.rules HashMap. I would be grateful for any tip or advice.
If you are just looking to start an always-on process when the app starts (even better, when RulesManager gets initialized), then you can simply create a new thread in the constructor of RulesManager:
methodCalledByConstructor() {
    new Thread(() -> {
        // loop start
        //   access and check the hashmap
        //   do what is necessary
        //   sleep for some time
        // loop end
    }).start();
}
But if the work is only required when some event occurs, then use the observer pattern for a more elegant solution.
Try defining a new Thread subclass, for example LocalRulesHandling, annotate it with @Component, and put your rules-HashMap handling inside it. In the RulesApplication class, get the Spring context, fetch the thread bean from it, and start the thread:
ApplicationContext context = SpringApplication.run(RulesApplication.class, args);
LocalRulesHandling handling = context.getBean(LocalRulesHandling.class);
handling.start();
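As a hedged alternative sketch (not from the answers above): Spring's own scheduling support avoids managing threads by hand, and a concurrent map lets both HTTP threads and the scheduler mutate the rules safely. The 5-second interval is an assumption for illustration:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;

@Service
public class RulesManager {
    // ConcurrentHashMap so HTTP threads and the scheduler can update it safely.
    private final Map<Integer, Rule> rules = new ConcurrentHashMap<>();

    @Scheduled(fixedDelay = 5000) // requires @EnableScheduling on a configuration class
    public void updateRules() {
        // continuous check of rules and update of this.rules
    }
}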

Weird (Loop) behavior when using Spring @TransactionalEventListener to publish event

I have a weird issue in which @TransactionalEventListener does not fire correctly or behave as expected when triggered by another @TransactionalEventListener.
The general flow is:
AccountService publishes an Event (to AccountEventListener)
AccountEventListener listens for the Event
Performs some processing and then publishes another Event (to MailEventListener)
MailEventListener listens for the Event and performs some processing
So here are the classes (excerpt).
public class AccountService {
    @Transactional
    public User createAccount(Form registrationForm) {
        // Some processing
        // Persist the entity
        this.accountRepository.save(userAccount);
        // Publish the Event
        this.applicationEventPublisher.publishEvent(new RegistrationEvent());
    }
}
public class AccountEventListener {
    @TransactionalEventListener
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public MailEvent onAccountCreated(RegistrationEvent registrationEvent) {
        // Some processing
        // Persist the entity
        this.accountRepository.save(userAccount);
        return new MailEvent();
    }
}
public class MailEventListener {
    private final MailService mailService;

    @Async
    @EventListener
    public void onAccountCreated(MailEvent mailEvent) {
        this.mailService.prepareAndSend(mailEvent);
    }
}
This code works, but my intention is to use @TransactionalEventListener in my MailEventListener class. However, the moment I change from @EventListener to @TransactionalEventListener in the MailEventListener class, the MailEvent no longer gets triggered.
public class MailEventListener {
    private final MailService mailService;

    @Async
    @TransactionalEventListener
    public void onAccountCreated(MailEvent mailEvent) {
        this.mailService.prepareAndSend(mailEvent);
    }
}
MailEventListener was never triggered. So I went to the Spring documentation, which states that @Async @EventListener is not supported for an event that is published by the return value of another event listener. So I changed to using ApplicationEventPublisher in my AccountEventListener class.
public class AccountEventListener {
    @TransactionalEventListener
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void onAccountCreated(RegistrationEvent registrationEvent) {
        // Some processing
        this.accountRepository.save(userAccount);
        this.applicationEventPublisher.publishEvent(new MailEvent());
    }
}
Once I changed to the above, my MailEventListener picks up the event sent from AccountEventListener, but the webpage hangs when the form is submitted, throws some exception after a while, and then sends me about 9 copies of the same email.
I added some logging and found that my AccountEventListener (this.accountRepository.save()) actually ran 9 times before hitting the exception, which I believe caused my MailEventListener to execute 9 times, and that is why I received 9 mails in my inbox.
Here are the logs in Pastebin.
I'm not sure what is causing it to run 9 times. There is no loop or anything in my methods, be it in AccountService, AccountEventListener, or MailEventListener.
Thanks!
So I went to view the Spring documentation, and it states that @Async @EventListener is not supported for an event that is published by the return of another event. And so I changed to using ApplicationEventPublisher in my AccountEventListener class.
Your understanding is incorrect.
The documentation says:
This feature is not supported for asynchronous listeners.
It does not mean:
@Async @EventListener is not supported for an event that is published by the return of another event.
It means:
This feature does not support events returned from an @Async @EventListener.
Your setup:
@Async
@TransactionalEventListener
public void onAccountCreated(MailEvent mailEvent) {
    this.mailService.prepareAndSend(mailEvent);
}
does not work because, as stated in the documentation:
If the event is not published within the boundaries of a managed transaction, the event is discarded unless the fallbackExecution() flag is explicitly set. If a transaction is running, the event is processed according to its TransactionPhase.
If you use the debugger, you can see that if your event is returned from an event listener, the publication happens after the transaction commit, hence the event is discarded.
So if you set fallbackExecution = true as stated in the documentation, your event will be handled correctly:
@Async
@TransactionalEventListener(fallbackExecution = true)
public void onAccountCreated(MailEvent mailEvent) {
    this.mailService.prepareAndSend(mailEvent);
}
The repeated behavior looks like some retry behavior: connections queue up, exhaust the pool, and throw the exception. Unless you provide minimal source code to reproduce the problem, I can't identify it.
Update
Reading your code, the root cause is clear now.
Look at your setup for POST /registerPublisherCommon:
1. MailPublisherCommonEvent and AccountPublisherCommonEvent are subevents of BaseEvent.
2. createUserAccountPublisherCommon publishes an event of type AccountPublisherCommonEvent.
3. MailPublisherCommonEventListener is registered to handle MailPublisherCommonEvent.
4. AccountPublisherCommonEventListener is registered to handle BaseEvent and ALL SUB-EVENTS of it.
5. AccountPublisherCommonEventListener also publishes MailPublisherCommonEvent (which is also a BaseEvent).
Reading points 4 and 5 together, you can see the root cause: AccountPublisherCommonEventListener publishes MailPublisherCommonEvent, which is also handled by itself, hence the infinite event processing.
To resolve it, simply narrow down the type of event it can handle, as you did.
Note
Your setup for MailPublisherCommonEvent works regardless of the fallbackExecution flag because you're publishing it INSIDE A TRANSACTION, not OUTSIDE A TRANSACTION (by returning from an event listener) as you specified in your question.
For what it's worth, I found out what is causing the looping and how to resolve it, but I still cannot understand why it happens this way.
And correct me if I'm wrong, but setting fallbackExecution = true isn't really the answer to the issue.
Based on the Spring documentation, the event is processed according to its TransactionPhase. I had @Transactional(propagation = Propagation.REQUIRES_NEW) in my AccountEventListener class, which should be a transaction by itself, and MailEventListener should only execute at the configured phase, which by default is AFTER_COMMIT for @TransactionalEventListener.
I set up a Git repository to reproduce the issue, and doing so allowed me to discover what really went wrong. Having said that, I still do not understand the root cause of it.
Before I get to it, there are some things that I am not 100% sure about; it's just my guess/understanding at this moment.
As mentioned in the Spring Documentation,
If the event is not published within the boundaries of a managed transaction, the event is discarded unless the fallbackExecution() flag is explicitly set. If a transaction is running, the event is processed according to its TransactionPhase.
My guess as to why the MailEventListener class did not pick up the event when it was published via the return type (letting Spring publish it automatically) is that it is published outside the boundaries of a managed transaction. Which is why, if you set (fallbackExecution = true) in MailEventListener, it will work: it no longer matters whether the event is published within a transaction or not.
Note: the classes mentioned above are taken from my initial post. The classes below are named slightly differently, but essentially they are all still the same, just with different names.
Now, back to the point where I said I found the answer as to why it is causing the loop.
Basically, it happens when the parameter declared in the listener is a base event of sorts.
So assuming that I have the following classes:
public class BaseEvent {
    private final User userAccount;
}

public class AccountPublisherCommonEvent extends BaseEvent {
    public AccountPublisherCommonEvent(User userAccount) {
        super(userAccount);
    }
}

public class MailPublisherCommonEvent extends BaseEvent {
    public MailPublisherCommonEvent(User userAccount) {
        super(userAccount);
    }
}
And the listener classes (notice that the parameter is the BaseEvent):
public class AccountPublisherCommonEventListener {
    private final AccountRepository accountRepository;
    private final ApplicationEventPublisher eventPublisher;

    // Notice that the parameter is the BaseEvent
    @TransactionalEventListener
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void onAccountPublisherCommonEvent(BaseEvent accountEvent) {
        User userAccount = accountEvent.getUserAccount();
        userAccount.setUserFirstName("common");
        this.accountRepository.save(userAccount);
        this.eventPublisher.publishEvent(new MailPublisherCommonEvent(userAccount));
    }
}
public class MailPublisherCommonEventListener {
    @Async
    @TransactionalEventListener
    public void onMailPublisherCommonEvent(MailPublisherCommonEvent mailEvent) {
        log.info("Sending common email ...");
    }
}
Basically, if the setup of the listener is as above, then you enter a loop and hit an exception, as mentioned by the previous poster:
The repeated behavior looks like some retry behavior: connections queue up, exhaust the pool, and throw the exception.
And to resolve the issue, simply change the parameter and define the classes to listen for (notice the addition of ({AccountPublisherCommonEvent.class})):
public class AccountPublisherCommonEventListener {
    private final AccountRepository accountRepository;
    private final ApplicationEventPublisher eventPublisher;

    // Notice the addition of ({AccountPublisherCommonEvent.class})
    @TransactionalEventListener({AccountPublisherCommonEvent.class})
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void onAccountPublisherCommonEvent(BaseEvent accountEvent) {
        User userAccount = accountEvent.getUserAccount();
        userAccount.setUserFirstName("common");
        this.accountRepository.save(userAccount);
        this.eventPublisher.publishEvent(new MailPublisherCommonEvent(userAccount));
    }
}
An alternative would be changing the parameter type to the actual event class instead of the BaseEvent class, I suppose. And no changes are required to the MailPublisherCommonEventListener.
By doing so, it no longer loops or hits the exception. The behavior is what I wanted and expected.
I would appreciate it if anyone could explain exactly why placing the BaseEvent as the parameter causes a loop to occur. Here's the link to the Git repository for the POC. Hope I'm making some sense here.
Thank you.

Reactor / WebFlux implement a reactive http news ticker

I have a request that is rather simple to formulate, but I cannot pull it off without leaking resources.
I want to return a response of type application/stream+json, featuring news events someone posted. I do not want to use WebSockets; not because I don't like them, I just want to know how to do it with a stream.
For this I need to return a Flux<News> from my restcontroller, that is continuously fed with news, once someone posts any.
My attempt at this was to create a Publisher:
public class UpdatePublisher<T> implements Publisher<T> {
    private List<Subscriber<? super T>> subscribers = new ArrayList<>();

    @Override
    public void subscribe(Subscriber<? super T> s) {
        subscribers.add(s);
    }

    public void pushUpdate(T message) {
        subscribers.forEach(s -> s.onNext(message));
    }
}
And a simple News Object:
public class News {
    String message;
    // Constructor, getters, some properties omitted for readability...
}
And endpoints to publish news and to get the stream of news:
// ...
private UpdatePublisher<String> updatePublisher = new UpdatePublisher<>();

@GetMapping(value = "/news/ticker", produces = "application/stream+json")
public Flux<News> getUpdateStream() {
    return Flux.from(updatePublisher).map(News::new);
}

@PutMapping("/news")
public void putNews(@RequestBody News news) {
    updatePublisher.pushUpdate(news.getMessage());
}
This WORKS, but I cannot unsubscribe or access any given subscription again. So once a client disconnects, the updatePublisher will just continue to push onto a growing number of dead channels, as I have no way to call the onComplete() handler on the subscriptions.
TL;DR:
Can one push messages onto a possibly endless Flux from a different thread and still terminate the Flux on demand, without relying on a connection-reset-by-peer exception or something along those lines?
You should never try to implement the Publisher interface yourself, as it boils down to getting the Reactive Streams implementation right. This is exactly the issue you're facing here.
Instead you should use one of the generator operators provided by Reactor itself (this is actually a Reactor question, nothing specific to Spring WebFlux).
In this case, Flux.create or Flux.push are probably the best candidates, given that your code uses some type of event listener to push events down the stream. See the Reactor project reference documentation on that.
Without more details, it's hard to give you a concrete code sample that solves your problem. Here are a few pointers though:
you might want to .share() the stream of events for all subscribers if you'd like a multicast-like communication pattern
pay attention to the push/pull/push+pull model that you'd like to have here; how is backpressure supposed to work? What if we produce more events than the subscribers can handle?
this model would only work on a single application instance. If you'd like this to work on multiple application instances, you might want to look into messaging patterns using a broker
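For illustration, a minimal Flux.create sketch along those lines, assuming a simple in-memory listener registry (the class, registry, and method names are illustrative, not part of the original code):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;
import reactor.core.publisher.Flux;

public class NewsStream {
    private final List<Consumer<News>> listeners = new CopyOnWriteArrayList<>();

    public Flux<News> stream() {
        return Flux.<News>create(sink -> {
            Consumer<News> listener = sink::next;
            listeners.add(listener);
            // Deregister the listener when the client disconnects or the Flux is cancelled,
            // so no more pushes go to dead subscriptions.
            sink.onDispose(() -> listeners.remove(listener));
        }).share(); // multicast the same stream to all subscribers
    }

    public void push(News news) {
        listeners.forEach(l -> l.accept(news));
    }
}

The onDispose hook is what the hand-rolled Publisher was missing: it gives you the cleanup callback per subscription.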

Preventing race conditions for message processing

I have a J2EE application that receives messages (events) via a web service. The messages are of varying types (requiring different processing depending on type) and are sent in a specific sequence. I have identified a problem where some message types take longer to process than others. The result is that a message received second in a sequence may be processed before the first in the sequence. I have tried to address this problem by placing a synchronized block around the method that processes the messages. This seems to work, but I am not confident that this is the "correct" approach. Is there perhaps an alternative that may be more appropriate, or is this "acceptable"? I have included a small snippet of code to try to explain more clearly. Any advice / guidance appreciated.
public class EventServiceImpl implements EventService {
    public String submit(String msg) {
        if (msg == null)
            return ("NAK");
        EventQueue.getInstance().submit(msg);
        return "ACK";
    }
}
public class EventQueue {
    private static EventQueue instance = null;
    private static int QUEUE_LENGTH = 10000;
    protected boolean done = false;
    BlockingQueue<String> myQueue = new LinkedBlockingQueue<String>(QUEUE_LENGTH);

    protected EventQueue() {
        new Thread(new Consumer(myQueue)).start();
    }

    public static EventQueue getInstance() {
        if (instance == null) {
            instance = new EventQueue();
        }
        return instance;
    }

    public void submit(String event) {
        try {
            myQueue.put(event);
        } catch (InterruptedException ex) {
        }
    }

    class Consumer implements Runnable {
        protected BlockingQueue<String> queue;

        Consumer(BlockingQueue<String> theQueue) { this.queue = theQueue; }

        public void run() {
            try {
                while (true) {
                    Object obj = queue.take();
                    process(obj);
                    if (done) {
                        return;
                    }
                }
            } catch (InterruptedException ex) {
            }
        }

        void process(Object obj) {
            Event event = new Event((String) obj);
            EventHandler handler = EventHandlerFactory.getInstance(event);
            handler.execute();
        }
    }

    // Close queue gracefully
    public void close() {
        this.done = true;
    }
}
I am not sure which framework (EJB (MDB)/JMS) you are working with. Generally, using synchronization inside a managed environment like that of EJB/JMS should be avoided (it's not good practice). One way to get around this is for
the client to wait for the acknowledgement from the server before it sends the next message.
This way the client itself controls the sequence of events.
Please note this won't work if there are multiple clients submitting messages.
EDIT:
You have a situation wherein the client of the web service sends messages in sequence without taking into account the message processing time. It simply dumps the messages one after another. This is a good case for a queue (first in, first out) based solution. I suggest the following three ways to accomplish this:
Use JMS. This will have the additional overhead of adding a JMS provider and writing some plumbing code.
Use a multithreading pattern like Producer-Consumer, wherein your web service handler dumps the incoming message into a queue and a single-threaded consumer consumes one message at a time, using the java.util.concurrent package (see the sketch after this list).
Use a database. Dump the incoming messages into a database, and use a separate scheduler-based program to scan the database (based on sequence number) and process the messages accordingly.
The first and third solutions are very standard for these types of problems. The second approach would be quick and won't need any additional libraries in your code.
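A minimal sketch of the second approach, reusing the processing logic from the question (the class name and the process helper are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SerializedEventService {
    // A single worker thread drains tasks in FIFO order, so messages are
    // processed one at a time in the order they were submitted.
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public String submit(final String msg) {
        if (msg == null) {
            return "NAK";
        }
        worker.submit(() -> process(msg));
        return "ACK";
    }

    private void process(String msg) {
        // e.g. EventHandlerFactory.getInstance(new Event(msg)).execute();
    }
}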
If the events are to be processed in a specific sequence, then why not try adding 'eventID' and 'orderID' fields to the messages? That way your EventServiceImpl class can sort, order, and then execute in the proper order (regardless of the order in which they are created and/or delivered to the handler).
Synchronizing the handler.execute() block will not get the desired results, I expect. All the synchronized keyword does is prevent multiple threads from executing that block at the same time. It does nothing in the realm of properly ordering which thread goes next.
If the synchronized block does seem to make things work, then I assert you are getting very lucky: the messages are being created, delivered, and acted upon in the proper order. In a multithreaded environment, this is not assured! I'd take steps to assure you are controlling this, rather than relying on good fortune.
Example:
Messages are created in the order 'client01-A', 'client01-C', 'client01-B', 'client01-D'.
Messages arrive at the handler in the order 'client01-D', 'client01-B', 'client01-A', 'client01-C'.
The EventHandler can distinguish messages from one client to another and starts to cache 'client01's messages.
The EventHandler receives the 'client01-A' message, knows it can process it, and does so.
The EventHandler looks in the cache for message 'client01-B', finds it, and processes it.
The EventHandler cannot find 'client01-C' because it hasn't arrived yet.
The EventHandler receives 'client01-C' and processes it.
The EventHandler looks in the cache for 'client01-D', finds it, processes it, and considers the 'client01' interaction complete.
Something along these lines would assure proper processing and would promote good use of multiple threads; a minimal sketch of the idea follows.
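For illustration, a resequencer sketch along the lines of the example above, assuming each message carries a per-client sequence number (all names are illustrative, not from the original answer):

import java.util.HashMap;
import java.util.Map;

public class Resequencer {
    private final Map<Integer, String> pending = new HashMap<>();
    private int nextSeq = 0; // sequence number we are waiting to process next

    // Called from handler threads; buffers out-of-order messages and
    // releases them in sequence order as the gaps are filled.
    public synchronized void onMessage(int seq, String msg) {
        pending.put(seq, msg);
        String ready;
        while ((ready = pending.remove(nextSeq)) != null) {
            process(ready);
            nextSeq++;
        }
    }

    private void process(String msg) {
        // hand off to the real EventHandler here
    }
}

Here the synchronized keyword only guards the shared map and counter; the ordering guarantee comes from the sequence numbers, not from the lock.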

Timer Service in ejb 3.1 - schedule calling timeout issue

I have created a simple example with the @Singleton, @Schedule, and @Timeout annotations to try to solve my problem.
The scenario is this: the EJB calls a 'check' function every 5 seconds, and if certain conditions are met it creates a single-action timer that invokes some long-running process in an asynchronous fashion (it's a sort of queue implementation). It then continues to check, but while the long-running process is there it won't start another one.
Below is the code I came up with, but this solution does not work, because it looks like the asynchronous call I'm making is in fact blocking my @Schedule method.
@Singleton
@Startup
public class GenerationQueue {
    private Logger logger = Logger.getLogger(GenerationQueue.class.getName());
    private List<String> queue = new ArrayList<String>();
    private boolean available = true;

    @Resource
    TimerService timerService;

    @Schedule(persistent=true, minute="*", second="*/5", hour="*")
    public void checkQueueState() {
        logger.log(Level.INFO, "Queue state check: " + available + " size: " + queue.size() + ", " + new Date());
        if (available) {
            timerService.createSingleActionTimer(new Date(), new TimerConfig(null, false));
        }
    }

    @Timeout
    private void generateReport(Timer timer) {
        logger.info("!!--timeout invoked here " + new Date());
        available = false;
        try {
            Thread.sleep(1000 * 60 * 2); // something that lasts for a bit
        } catch (Exception e) {}
        available = true;
        logger.info("New report generation complete");
    }
}
What am I missing here, or should I try a different approach? Any ideas most welcome :)
Testing with GlassFish 3.0.1, latest build (forgot to mention).
The default @ConcurrencyManagement for singletons is ConcurrencyManagementType.CONTAINER with a default @Lock of LockType.WRITE. Basically, that means every method (including generateReport) is effectively marked with the synchronized keyword, which means that checkQueueState will block while generateReport is running.
Consider using @ConcurrencyManagement(ConcurrencyManagementType.BEAN) or @Lock(LockType.READ). If neither suggestion helps, I suspect you've found a GlassFish bug.
As an aside, you probably want persistent=false, since you probably don't need to guarantee that the checkQueueState method fires every 5 seconds even when your server is offline. In other words, you probably don't need the container to fire "catch ups" when you bring your server back online.
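A minimal sketch of the first suggestion applied to the class from the question (with bean-managed concurrency the container no longer serializes calls, so the shared flag is marked volatile for visibility between threads; this is an assumption added here, not part of the original answer):

import javax.ejb.ConcurrencyManagement;
import javax.ejb.ConcurrencyManagementType;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
@ConcurrencyManagement(ConcurrencyManagementType.BEAN)
public class GenerationQueue {
    // volatile: checkQueueState and generateReport may now run concurrently
    // on different container threads.
    private volatile boolean available = true;

    // ... @Schedule and @Timeout methods unchanged from the question ...
}

Alternatively, keep container-managed concurrency and annotate the class (or the two methods) with @Lock(LockType.READ).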
