I am very new to Camel and have been struggling to understand how to use it in a specific scenario.
In this scenario, there is a (Java-based) agent that generates actions from time to time. I need an event-driven consumer to be notified of these events, which will then be routed to a 'file' producer (for the time being).
In the Camel book, the example is for a polling consumer; I could not find a generic solution for an event-driven consumer.
I came across a similar implementation for JMX:
public class JMXConsumer extends DefaultConsumer implements NotificationListener {

    JMXEndpoint jmxEndpoint;

    public JMXConsumer(JMXEndpoint endpoint, Processor processor) {
        super(endpoint, processor);
        this.jmxEndpoint = endpoint;
    }

    public void handleNotification(Notification notification, Object handback) {
        try {
            getProcessor().process(jmxEndpoint.createExchange(notification));
        } catch (Throwable e) {
            handleException(e);
        }
    }
}
Here, handleNotification is invoked whenever a JMX notification arrives.
I believe I have to do something similar to get my consumer notified whenever the agent generates an action. However, the handleNotification method above is specific to JMX. The web page says: "When implementing your own event-driven consumer, you must identify an analogous event listener method to implement in your custom consumer."
I want to know: how can I identify an analogous event listener method, so that my consumer will be notified whenever my agent generates an action?
Any advice/link to a web page is very much appreciated.
I know this is an old question, but I've been struggling with it and just thought I would document my findings for anyone else searching for an answer.
When you create an Endpoint class (extending DefaultEndpoint), you override the following method to create a consumer:
public Consumer createConsumer(Processor processor)
In your consumer you then have access to a Processor; calling 'process' on this processor creates an event and triggers the route.
For example, say you have some Java API that listens for messages and has some sort of listener. In my case, the listener puts incoming messages onto a LinkedBlockingQueue, and my consumer's doStart method looks like this (add your own error handling):
@Override
protected void doStart() throws Exception {
    super.doStart();
    // Spawn a new thread that submits exchanges to the Processor
    Runnable runnable = new Runnable() {
        @Override
        public void run() {
            while (true) {
                try {
                    IMessage incomingMessage = myLinkedBlockingQueue.take();
                    Exchange exchange = getEndpoint().createExchange();
                    exchange.getIn().setBody(incomingMessage);
                    myProcessor.process(exchange);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return; // stop the loop on shutdown
                } catch (Exception e) {
                    getExceptionHandler().handleException(e);
                }
            }
        }
    };
    new Thread(runnable).start();
}
Now I can put the Component that creates the Endpoint that creates this Consumer in my CamelContext, and use it like this:
from("mycomponent:incoming").to("log:messages");
And the log message fires every time a new message arrives from the Java API.
Hope that helps someone!
Event-driven is what Camel is.
Any route is actually an event listener.
Given the route:
from("activemq:SomeQueue").bean(MyBean.class);

public class MyBean {
    // Invoked when a MyEventObject is sent to "SomeQueue".
    public void handleEvent(MyEventObject eventPayload) {
        // whatever processing.
    }
}
That would set up an event-driven consumer. How do you send events, then? If you have Camel embedded in your app and access to the CamelContext from your event-generating agent, you can grab a ProducerTemplate from it and just fire off your event to whatever endpoint you defined in Camel, such as "seda:SomeQueue".
Otherwise, if your Camel instance runs in another server or JVM than your application, you should use some transport other than SEDA. Preferably JMS, but others will do as well; pick and choose. ActiveMQ is my favourite. You can start an embedded ActiveMQ instance (intra-JVM) easily and connect it to Camel with:
camelContext.addComponent("activemq", activeMQComponent("vm://localhost"));
I have a few SQS listeners consuming from some standard SQS queues.
These listeners are responsible for calling a service method, which in turn talks to some 3rd-party data provider.
When several messages are consumed within a short span of time, the load from the service calls to the 3rd party crosses its rate limit.
1st listener
@SqsListener(value = "${cloud.aws.queue-1}", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
public void handleQueue1(final String message, @Header RequestType type, @Header("MessageId") String messageId, Acknowledgment acknowledgment) throws JsonProcessingException {
    ..
    ...
    synchronized (this) {
        // call to some common service method
    }
    ...
    ..
}
2nd listener
@SqsListener(value = "${cloud.aws.queue-2}", deletionPolicy = SqsMessageDeletionPolicy.NEVER)
public void handleQueue2(final String message, @Header RequestType type, @Header("MessageId") String messageId, Acknowledgment acknowledgment) throws JsonProcessingException {
    ..
    ...
    synchronized (this) {
        // call to some common service method
    }
    ...
    ..
}
My question is: how can I make sure that each SQS listener reaches the service call one after another, given that each requires data from the 3rd-party call?
I tried adding a synchronized block, but I'm unable to figure out whether it is okay to have it.
While it is possible to do what you're trying to do with Java synchronization, it's a really bad idea for a number of reasons.
Suggested solutions:
Use the listener concurrency configuration to restrict the number of worker threads running for each listener. @JmsListener
Alternatively, just let the 3rd-party API fail and throw an exception; SQS will automatically retry. You will want to configure your SQS queue's visibility timeout window and retry limits with your 3rd-party API limits in mind. SQS Visibility Timeout
Work with the 3rd party to increase your limits.
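If the calls really must be serialized in-process, a plain JDK `Semaphore` held in one shared object is a safer gate than `synchronized (this)`, which only works when both listener methods live on the same bean instance. Below is a minimal, hedged sketch; `ThirdPartyGate` and `callSerialized` are hypothetical names, not part of any of the libraries mentioned above:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Hypothetical shared gate: at most one thread talks to the 3rd party at a time.
class ThirdPartyGate {
    // Fair semaphore so waiting listeners are served in FIFO order.
    private static final Semaphore PERMIT = new Semaphore(1, true);

    static <T> T callSerialized(Supplier<T> thirdPartyCall) throws InterruptedException {
        PERMIT.acquire();                 // blocks until the previous call has finished
        try {
            return thirdPartyCall.get();  // the actual rate-limited service call
        } finally {
            PERMIT.release();
        }
    }
}
```

Each listener would then wrap its service call in `ThirdPartyGate.callSerialized(...)` instead of a `synchronized` block. Note this still parks an SQS consumer thread while waiting, so restricting listener concurrency (option 1 above) remains the cleaner fix.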
I'm currently trying to build a TCP server with Netty. The server should then be part of my main program.
My application needs to send messages to the connected clients. I know I can keep track of the channels using a ConcurrentHashMap or a ChannelGroup inside a handler. To avoid blocking my application, the server itself has to run in a separate thread. From my point of view, the corresponding run method would look like this:
public class Server implements Runnable {

    @Override
    public void run() {
        EventLoopGroup bossEventGroup = new NioEventLoopGroup();
        EventLoopGroup workerEventGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap
                .group(bossEventGroup, workerEventGroup)
                .channel(NioServerSocketChannel.class)
                .childHandler(new MyServerInitializer());
            ChannelFuture future = bootstrap.bind(8080).sync().channel().closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            workerEventGroup.shutdownGracefully();
            bossEventGroup.shutdownGracefully();
        }
    }
}
But now I have no idea how to integrate e.g. a sendMessage(Message message) method which can be used by my main application. I believe the method itself has to be defined in the handler to have access to the stored connected channels. But can someone give me an idea how to make such a method usable from the outside? Do I have to implement some sort of message queue which is checked in a loop after the bind? I could imagine the method invocation would then look like this:
ServerHandlerTest t = (ServerHandlerTest) future.channel().pipeline().last();
if (newMessageInQueue) {
    t.sendMessage(...);
}
Maybe someone is able to explain to me what the preferred implementation approach is for this use case.
I would suggest creating your own application handler to manage the business behavior within your own Netty handler, because that is where the main (event-based) logic lives.
Your own (last) handler takes care of all your application behavior, so that each client is served correctly, directly within the handler, using the ChannelHandlerContext ctx.
Of course, you can still think of a particular application handler that would do something like:
Creation of the handler (in the pipeline creation within MyServerInitializer) initializes the handler to look at a messageQueue of messages to send,
then polls the messageQueue and writes each message to the right client, looked up via a HashMap.
But I believe that is far more complicated (which queue for which client, or one global queue; how to handle the queue without blocking the server thread, which you must not do; ...).
Moreover, by sendMessage, do you mean Netty's write (or writeAndFlush) method?
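For the queue variant discussed above, the hand-off between the main application and the server-side writer can be sketched with JDK types alone. This is a sketch under assumptions: the names `OutboundBridge`, `sendMessage`, and `nextMessage` are hypothetical, and in real Netty code the draining side would call writeAndFlush on a stored Channel rather than return the message:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical bridge: the main application enqueues outgoing messages,
// and a writer loop owned by the server drains them.
class OutboundBridge {
    private final BlockingQueue<String> outbound = new LinkedBlockingQueue<>();

    // Called from the main application thread; never blocks the caller.
    boolean sendMessage(String message) {
        return outbound.offer(message);
    }

    // Called by the server's writer loop; blocks until a message is available.
    String nextMessage() throws InterruptedException {
        return outbound.take();
    }
}
```

The server would run the drain loop on its own thread (never on an event-loop thread), so the main application's sendMessage call returns immediately.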
I need to listen to a RabbitMQ queue in a custom Flume source which I have developed. This requirement may seem awkward in Flume, but this is how it's needed.
As I am using Spring AMQP to listen to the queue for simplicity, I just can't understand how to invoke the onMessage() method within the Flume lifecycle start() method, so that the messages can be posted onto the Flume channel.
I have looked at the Spring MessageListenerAdapter concept, but I have not been able to find any example implementing it.
onMessage() is part of the MessageListener pattern. It is an active component, initiated by the external system, and it is invoked each time a remote message arrives, so you can't use it as a passive component to be initiated by a user call.
Since you have the "Flume lifecycle start()" on one side and SimpleMessageListenerContainer has its own lifecycle on the other, I'd say you have to correlate their lifecycles to work in tandem.
From there you should provide the SimpleMessageListenerContainer with some inline MessageListener implementation which invokes the desired method to "post onto the Flume channel".
HTH
UPDATE
SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
....
container.setMessageListener(new MessageListener() {

    public void onMessage(Message message) {
        sendMessageToFlumeChannel(message);
    }
});
Where sendMessageToFlumeChannel is a method of the enclosing class.
Of course it can be any POJO instead of a MessageListener implementation, but the main goal is to delegate the listener result to some method.
I'm developing a server based on the Netty library and I'm having a problem with how to structure the application with regard to business logic.
Currently I have the business logic in the last handler, and that's where I access the database. The thing I can't wrap my head around is the latency of accessing the database (blocking code). Is it advisable to do it in the handler, or is there an alternative? Code below:
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    super.channelRead(ctx, msg);
    Msg message = (Msg) msg;
    switch (message.messageType) {
        case MType.SIGN_UP:
            userReg.signUp(message.user); // blocking database access
            break;
    }
}
You should execute the blocking calls in a DefaultEventExecutorGroup, or in your own thread pool, which can be attached when the handler is added:
pipeline.addLast(new DefaultEventExecutorGroup(50), "BUSINESS_LOGIC_HANDLER", new BHandler());
ctx.executor().execute(new Runnable() {
    @Override
    public void run() {
        // Blocking call
    }
});
Your custom handler is initialized by Netty every time the server receives a connection, hence one instance of the handler is responsible for handling one client.
So it is perfectly fine to issue blocking calls in your handler. It will not affect other clients, as long as you don't block indefinitely (or at least not for a very long time), thereby not tying up a Netty thread for long, and as long as you don't put too much load on your server instance.
However, if you want to go for an asynchronous design, there are more than a few design patterns you can use.
For example, with Netty, if you implement WebSockets, you could make the blocking calls in a separate thread and, when the results are available, push them to the client through the already-established WebSocket.
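The "blocking calls in a separate thread" idea can be sketched with plain JDK concurrency. This is a hedged sketch: `BlockingOffload`, `signUpAsync`, and the pool size of 4 are hypothetical, and in real Netty code the completion callback would call `ctx.writeAndFlush(...)` rather than just return a string:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical offload: run the blocking database call on a dedicated pool
// so the Netty I/O thread is never blocked.
class BlockingOffload {
    private final ExecutorService dbPool = Executors.newFixedThreadPool(4);

    CompletableFuture<String> signUpAsync(String user) {
        return CompletableFuture.supplyAsync(() -> {
            // the blocking database access would happen here
            return "registered:" + user;
        }, dbPool);
    }

    void shutdown() {
        dbPool.shutdown();
    }
}
```

In channelRead you would then call something like signUpAsync(message.user).thenAccept(result -> ctx.writeAndFlush(result)) and return immediately, leaving the event loop free.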
I would like to make a kind of logging proxy in Netty. The goal is to have a web browser make HTTP requests to a Netty server, have them passed on to a back-end web server, but also be able to take certain actions based on HTTP-specific things.
There are a couple of useful Netty examples: HexDumpProxy (which does the proxying part, agnostic to the protocol), and I've taken just a bit of code from HttpSnoopServerHandler.
My code looks like this right now:
HexDumpProxyInboundHandler can be found at http://docs.jboss.org/netty/3.2/xref/org/jboss/netty/example/proxy/HexDumpProxyInboundHandler.html
//in HexDumpProxyPipelineFactory
public ChannelPipeline getPipeline() throws Exception {
    ChannelPipeline p = pipeline(); // Note the static import.
    p.addLast("handler", new HexDumpProxyInboundHandler(cf, remoteHost, remotePort));
    p.addLast("decoder", new HttpRequestDecoder());
    p.addLast("handler2", new HttpSnoopServerHandler());
    return p;
}
//HttpSnoopServerHandler
public class HttpSnoopServerHandler extends SimpleChannelUpstreamHandler {

    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        HttpRequest request = (HttpRequest) e.getMessage();
        System.out.println(request.getUri());
        // going to do things based on the URI
    }
}
Unfortunately, messageReceived in HttpSnoopServerHandler never gets called; it seems HexDumpProxyInboundHandler consumes all the events.
How can I have two handlers, where one of them requires a decoder but the other doesn't? (I'd rather keep HexDumpProxy as it is, where it doesn't need to understand HTTP and just proxies all connections, but my HttpSnoopServerHandler needs an HttpRequestDecoder in front of it.)
I've not tried it, but you could extend HexDumpProxyInboundHandler and override messageReceived with something like:
super.messageReceived(ctx, e);
ctx.sendUpstream(e);
Alternatively, you could modify HexDumpProxyInboundHandler directly so that the last thing messageReceived does is call super.messageReceived(ctx, e).
This would only work for inbound data from the client; data from the service you're proxying would still be passed through without your code seeing it.