Business Logic in Netty? - java

I'm developing a server based on the Netty library and I'm having a problem with how to structure the application with regard to business logic.
Currently I have the business logic in the last handler, and that's where I access the database. The thing I can't wrap my head around is the latency of accessing the database (blocking code). Is it advisable to do it in the handler, or is there an alternative? Code below:
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    super.channelRead(ctx, msg);
    Msg message = (Msg) msg;
    switch (message.messageType) {
        case MType.SIGN_UP:
            userReg.signUp(message.user); // blocking database access
            break;
    }
}

You should execute the blocking calls on a DefaultEventExecutorGroup or on your own custom thread pool. A separate executor group can be supplied when the handler is added to the pipeline:

pipeline.addLast(new DefaultEventExecutorGroup(50), "BUSINESS_LOGIC_HANDLER", new BHandler());

Then, inside the handler, submit the blocking work to that executor:

ctx.executor().execute(new Runnable() {
    @Override
    public void run() {
        // blocking call
    }
});

Your custom handler is initialized by Netty every time the server receives a request, hence one instance of the handler is responsible for handling one client.
So it can be fine to issue blocking calls in your handler: it will not affect other clients as long as you don't block indefinitely (or at least not for a very long time), thereby not tying up Netty's thread for long, and you don't put too much load on your server instance.
However, if you want to go for an asynchronous design, there are more than a few design patterns you can use.
For example, with Netty you can implement WebSockets: make the blocking calls in a separate thread, and when the results are available, push them to the client through the already-established WebSocket.
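To make the offloading pattern concrete, here is a minimal, pure-JDK sketch of the same idea with no Netty dependency: a worker pool performs the blocking database call, and the result is handed back to a single "event loop" thread, analogous to what a DefaultEventExecutorGroup does for a handler. All names here (OffloadSketch, blockingDatabaseCall) are made up for illustration.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OffloadSketch {
    // Single thread standing in for Netty's event loop.
    static final ExecutorService eventLoop = Executors.newSingleThreadExecutor();
    // Worker pool standing in for a DefaultEventExecutorGroup.
    static final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Stand-in for the blocking database access (userReg.signUp in the question).
    static String blockingDatabaseCall(String user) {
        try {
            Thread.sleep(50); // simulate database latency
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "signed-up:" + user;
    }

    public static void main(String[] args) throws Exception {
        String result = CompletableFuture
                // 1. Run the blocking call on the worker pool, off the event loop.
                .supplyAsync(() -> blockingDatabaseCall("alice"), workers)
                // 2. Hand the result back to the event-loop thread, e.g. to write a response.
                .thenApplyAsync(r -> r, eventLoop)
                .get();
        System.out.println(result);
        workers.shutdown();
        eventLoop.shutdown();
    }
}
```

The key point is that the event-loop thread never sleeps on the database; it only ever sees completed results.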

Related

How can I acknowledge a JMS message on another thread and request for redelivery of unacknowledged messages?

Step 1: I need to receive messages on one thread.
Step 2: Process, then send the ack or a redelivery request (by throwing an exception) on another thread.
Sample code:
List<Message> list = new ArrayList<>();

@JmsListener(destination = "${jms.queue-name}", concurrency = "${jms.max-thread-count}")
public void receiveMessage(Message message) throws JMSException, UnsupportedEncodingException {
    list.add(message);
}

void run() {
    for (Message message : list) {
        // need to send ack, or throw an exception on error to trigger redelivery
    }
}
Now another thread will start and process the list containing the data; how can I send an acknowledgement, or throw an exception for redelivery?
Typically you'd let your framework (e.g. Spring) deal with concurrent message processing; this is, in fact, one of the benefits of such frameworks. I don't see any explicit benefit to dumping all the messages into a List and then manually spawning a thread to process it. Spring is already doing this for you via @JmsListener, by invoking receiveMessage on a pooled thread with configurable concurrency.
Furthermore, if you want to trigger redelivery then you'll need to use a transacted JMS session and invoke rollback(), but JMS sessions are not thread-safe, so you'll have to control access to the session somehow. This will almost certainly make your code needlessly complex.
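Since the sessions are not thread-safe, one common way to "control access to it somehow" is to confine the session to a single owner thread and have every other thread submit work to that thread. The sketch below is pure JDK; FakeSession is a made-up stand-in for a JMS Session, used only to illustrate the confinement pattern, not real JMS API.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SessionConfinement {
    // Not thread-safe, like a javax.jms.Session (hypothetical stand-in).
    static class FakeSession {
        int commits = 0;
        void commit()   { commits++; }
        void rollback() { /* in a transacted JMS session this would trigger redelivery */ }
    }

    final FakeSession session = new FakeSession();
    // Every access to the session goes through this single owner thread.
    final ExecutorService owner = Executors.newSingleThreadExecutor();

    public Future<?> acknowledge()       { return owner.submit(session::commit); }
    public Future<?> requestRedelivery() { return owner.submit(session::rollback); }

    public static void main(String[] args) throws Exception {
        SessionConfinement s = new SessionConfinement();
        s.acknowledge().get();        // safe to call from any thread
        s.requestRedelivery().get();
        System.out.println("commits=" + s.session.commits);
        s.owner.shutdown();
    }
}
```

This is exactly the kind of extra machinery the answer warns about: workable, but needless complexity compared with letting the listener container own the session.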

interacting with JavaFX network service in a GUI

I am transitioning from writing Java Swing applications to JavaFX in order to write a modern Java-based GUI application.
I would like to know the best approach to create a network-based reusable threading service. The way I coded up the network service was to use a controller class (generated from the FXML via the NetBeans GUI). I put the threading logic here via a private Service member named 'transmitter', and I wired up the start/stop logic via the Start/Stop button's event callback.
The network-based thread is implemented as a JavaFX Service - I did this since I would like to restart the service/thread whenever the destination address changes. This seems to be the recommended approach in place of a standalone Task.
The network service is very simple right now: all it does is use some GUI widgets to configure a packet to transmit to a host/port once a second. I need to restart the service only if the host/port widget changes; however, if the network service is running, I would like to modify the packet without interrupting/restarting the DatagramSocket. The places where I have questions and require some guidance are:
What is the recommended approach to threading a network thread in an FXML-based application? An example would be greatly appreciated.
How do I safely communicate changes from GUI widgets (via their action-performed callbacks) to the running service class?
Shown below are the most relevant parts of my controller class:
/**
 * FXML Controller class
 *
 * @author johnc
 */
public class OpMessageServerController implements Initializable {

    @FXML
    private Text mCurrentDateTimeText;
    @FXML
    private Label mApplicationStatus;
    @FXML
    private ComboBox<DiscreteStatus> mPofDS;
    @FXML
    private ComboBox<PhaseOfFlightFMS> mPofFMS;
    @FXML
    private ComboBox<DiscreteStatus> mTailNumberDS;
    @FXML
    private ComboBox<DiscreteStatus> mConfigTableDS;
    @FXML
    private ComboBox<DiscreteStatus> mDateTimeDS;
    @FXML
    private TextField mEpicPN;
    @FXML
    private TextField mConfigTablePNHash;
    @FXML
    private TextField mTailNumber;
    @FXML
    private ComboBox<DiscreteStatus> mTopLevelPNDS;
    @FXML
    private Button mStartStopButton;
    @FXML
    private ComboBox<String> mDLMUHostSpec;
    @FXML
    private CheckBox connectionStatusC1;
    @FXML
    private CheckBox wsuConnectionStatus;
    @FXML
    private CheckBox connectionStatusC4;
    @FXML
    private CheckBox connectionStatusC3;
    @FXML
    private CheckBox connectionStatusC2;
    @FXML
    private CheckBox dlmuwConnectionStatus;

    private Service<Void> transmitter;

    /**
     * Initializes the controller class.
     * @param url
     * @param rb
     */
    @Override
    public void initialize(URL url, ResourceBundle rb) {
        mPofDS.setItems(FXCollections.observableArrayList(DiscreteStatus.values()));
        mPofDS.getSelectionModel().selectFirst();
        mPofFMS.setItems(FXCollections.observableArrayList(PhaseOfFlightFMS.values()));
        mPofFMS.getSelectionModel().selectFirst();
        mTailNumberDS.setItems(FXCollections.observableArrayList(DiscreteStatus.values()));
        mTailNumberDS.getSelectionModel().selectFirst();
        mConfigTableDS.setItems(FXCollections.observableArrayList(DiscreteStatus.values()));
        mConfigTableDS.getSelectionModel().selectFirst();
        mDateTimeDS.setItems(FXCollections.observableArrayList(DiscreteStatus.values()));
        mDateTimeDS.getSelectionModel().selectFirst();
        mTopLevelPNDS.setItems(FXCollections.observableArrayList(DiscreteStatus.values()));
        mTopLevelPNDS.getSelectionModel().selectFirst();
        // mDLMUHostSpec.setItems(FXCollections.observableArrayList(
        //     FXCollections.observableArrayList("localhost:1234", "192.168.200.2:1234")));

        // Add an event handler here to update the current date/time label;
        // this should also update the transmit data structure.
        final Timeline timeline = new Timeline(new KeyFrame(
                Duration.seconds(1), (ActionEvent event) -> {
                    LocalDateTime currentDateTime = LocalDateTime.now();
                    mCurrentDateTimeText.setText(currentDateTime.format(
                            DateTimeFormatter.ofPattern("kk:mm:ss uuuu")));
                }));
        timeline.setCycleCount(Animation.INDEFINITE);
        timeline.play();

        // Create the service.
        transmitter = new Service() {
            @Override
            protected Task createTask() {
                return new Task<Void>() {
                    @Override
                    protected Void call() throws InterruptedException {
                        updateMessage("Running...");
                        updateProgress(0, 10);
                        DatagramSocket sock = null;
                        while (!isCancelled()) {
                            try {
                                if (sock == null) {
                                    sock = new DatagramSocket(); // was a shadowing local redeclaration
                                }
                            } catch (SocketException ex) {
                                Logger.getLogger(OpMessageServerController.class.getName()).log(Level.SEVERE, null, ex);
                            }
                            // Block the thread for a short time, but be sure
                            // to check InterruptedException for cancellation.
                            OpSupportMessage opSupportMessage = new OpSupportMessage(
                                    DiscreteStatus.NormalOperation,
                                    PhaseOfFlightFMS.Cruise,
                                    DiscreteStatus.NormalOperation,
                                    "TAILNUM",
                                    DiscreteStatus.NormalOperation);
                            ByteArrayOutputStream bos = new ByteArrayOutputStream();
                            String[] specParts = mDLMUHostSpec.getValue().split(":");
                            if (specParts.length == 2) {
                                try {
                                    opSupportMessage.write(bos);
                                    byte[] buff = bos.toByteArray();
                                    DatagramPacket packet = new DatagramPacket(
                                            buff, buff.length, InetAddress.getByName(
                                            specParts[0]), Integer.parseInt(specParts[1]));
                                    sock.send(packet); // was mSocket, an undeclared field
                                    Thread.sleep(1000);
                                } catch (IOException ex) {
                                } catch (InterruptedException interrupted) {
                                    if (isCancelled()) {
                                        updateMessage("Cancelled");
                                        break;
                                    }
                                }
                            }
                        }
                        updateMessage("Cancelled");
                        return null;
                    }
                    @Override
                    protected void succeeded() {
                        System.out.println("Scanning completed.");
                    }

                    @Override
                    protected void failed() {
                        System.out.println("Scanning failed.");
                    }

                    @Override
                    protected void running() {
                        System.out.println("Scanning started.");
                    }

                    @Override
                    protected void cancelled() {
                        System.out.println("Scanning cancelled.");
                    }

                    private void DatagramSocket() {
                        throw new UnsupportedOperationException("Not supported yet.");
                    }
                };
            }
        };
        mApplicationStatus.textProperty().bind(transmitter.messageProperty());
    }

    @FXML
    private void startStopButtonAction(ActionEvent event) {
        if (!transmitter.isRunning()) {
            transmitter.reset();
            transmitter.start();
        }
    }
    …
}
Background
This answer is based upon a collection of comments on the question; it rambles a bit, does not provide a solution targeted at the code in the question, and does not address some of the concepts in the question, such as a low-level UDP-socket-based communication system - apologies for that.
Sample Solution Project
I did a proof of concept of a JavaFX app using web socket based communication: javafx-websocket-test. Perhaps some of the concepts from there might help you, in particular the client JavaFX Task and Service code and the sample client application and controller that uses it.
The project does demonstrate, in an executable implementation, some of the communication principles outlined in Adam Bien's article on JavaFX Integration Strategies that James_D linked, for example:
Setting up a web socket endpoint within a JavaFX service.
Wrapping each communication interaction in an async JavaFX Task.
Using async event callbacks to shunt success and failure results back to the UI.
Additionally the example shows interaction between the network service and the JavaFX UI, with the JavaFX UI making async requests to the service and processing async responses from it.
I do recall the seemingly simple Java web socket API did contain a few gotchas. It is just a proof of concept, so be careful of using it as the basis for a robust network service.
Commentary and thoughts
This is actually a tricky question to answer, IMO, for these reasons:
There are many forms of network communication, some of which are suited to different applications.
There is (currently) no standard or best practice of integrating network services with JavaFX applications.
Providing a robust network connection with UI status monitoring and exception handling is often not as straight-forward as it might seem and is easy to get wrong.
There are many subtleties to be dealt with, such as:
What to do in the event of a communication failure?
What to do if the application issues requests at a faster rate than the network or server can process?
What happens if the user shuts down the application while messages are outstanding?
How to ensure that the UI is not frozen while lengthy communication processes occur?
How to provide UI feedback that lengthy network processing is on-going?
What underlying communication technology is being used?
Is the underlying communication stateful or stateless?
Is the communication non-blocking and event driven or blocking?
How to serialize and deserialize data for transmission?
Even though a one-size-fits-all communication model would be difficult, a "standard" communication model could be adopted which fits many needs - for example, something similar to HTTP Ajax calls in the browser-based network model, or NetConnection for Flash. Those seem to function well enough for a wide variety of needs. Though of course they aren't optimal for everything, otherwise alternative systems such as web sockets or HTTP live streaming would not have been created.
Ideally, there would be a single, standardized API like jQuery.ajax() for JavaFX client => server communication, but I haven't yet seen anybody create a JavaFX equivalent of that kind of API.
Unlike the rest of the core JavaFX APIs, such standardized high-level interfaces for network communication don't exist in an off-the-shelf form at the moment. However, there are plenty of libraries and functions available to act as the basic building blocks for developing your own service; perhaps even too many to reasonably process.
Note that most higher-level network protocol libraries, such as the Tyrus web socket implementation or the Apache HTTP components underlying a JAX-RS provider, have their own internal thread pools for communication. Systems like Netty are based upon NIO and are event driven rather than thread managed. Your JavaFX network client service does one of two things:
For non-blocking I/O, it issues async calls, hooks into the response events, and relays them back to JavaFX via Platform.runLater.
For blocking I/O, it spawns a thread with a Task or Service, with either an implicit or explicit executor service pool, to manage the UI interaction but not the actual network comms.
A key and confusing thing is that the JavaFX application code should always perform the network communication in an async manner. For non-blocking I/O the call is already async, so no wrapper task is necessarily required. For blocking I/O, you don't want to block the UI thread, so the Task wrapper running in its own thread prevents that from occurring.
One would think this would make the non-blocking I/O calls simpler, but it doesn't really, as the JDK's non-blocking I/O API is quite low level and is pretty tricky to code to. It isn't really appropriate for high level application code.
Generally, application code is better off using a higher level library such as JAX-RS, web sockets or akka (or, preferably, a layer on top of them) which internally manage the details of the communication in either a blocking or non-blocking fashion AND provide an event driven API for sending and receiving messages. The individual message events can be wrapped in a JavaFX Task for async processing. So, from the JavaFX application point of view, everything is event driven, nothing is blocking, and the same application API works regardless of the underlying communication protocol and blocking/non-blocking communication infrastructure.
Thanks for the proof-of-concept application; this will be quite useful. However, one thing that is a bit obscure is how one can safely communicate GUI changes to the running service thread. It appears that the HelloService uses a simple 'name' string property to communicate changes from the GUI to the service before it is started. I wonder how one might communicate UI changes to a running background service in a thread-safe manner - via some sort of message API, perhaps?
A BlockingQueue with a fixed max-size which rejects additional requests when the queue is full can be used for communication from JavaFX thread based code to a consumer service. It is a reasonably elegant solution to the classic producer-consumer problem.
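To make the rejection behavior concrete, here is a short sketch assuming ArrayBlockingQueue as the fixed-size implementation: offer returns false instead of blocking once the capacity is reached, so a flooded producer finds out immediately.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedQueueSketch {
    public static void main(String[] args) {
        // Capacity-2 queue: offer() returns false rather than blocking when full.
        BlockingQueue<String> requests = new ArrayBlockingQueue<>(2);
        System.out.println(requests.offer("req-1")); // true
        System.out.println(requests.offer("req-2")); // true
        System.out.println(requests.offer("req-3")); // false: queue full, request rejected
    }
}
```

The JavaFX-thread producer can react to a false return (e.g. disable a button or show a "busy" status) instead of ever blocking the UI thread.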
Of course, you could just skip the blocking queue and keep creating async tasks ad nauseam, which is fine for low-volume communication but could lead to starvation of limited thread resources for high-volume communication. One standard way to handle that is to use an ExecutorService from Executors, which manages a thread pool. The thread pool for the executor service can be bounded to a maximum number of threads and internally use an unbounded queue where messages pile up if all threads are busy. That way you don't need to define your own blocking queue; you just issue async service requests, and they are either handled immediately in threads or pile up in the internal queue.
This is actually the way that a JavaFX Service works:
The Service by default uses a thread pool Executor with some unspecified default or maximum thread pool size. This is done so that naive code will not completely swamp the system by creating thousands of Threads.
and:
If an Executor is specified on the Service, then it will be used to actually execute the service. Otherwise, a daemon thread will be created and executed. If you wish to create non-daemon threads, then specify a custom Executor (for example, you could use a ThreadPoolExecutor with a custom ThreadFactory).
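The bounded-pool-plus-internal-queue arrangement quoted above can be sketched with a plain ThreadPoolExecutor. This is an analogy to what a Service does internally, not JavaFX code: two threads serve ten submitted tasks, with the surplus waiting in the unbounded LinkedBlockingQueue.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolSketch {
    public static void main(String[] args) throws Exception {
        // At most 2 threads; the other submitted tasks wait in the unbounded queue.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        CountDownLatch done = new CountDownLatch(10);
        for (int i = 0; i < 10; i++) {
            pool.submit(() -> {
                try {
                    Thread.sleep(20); // simulate a network request
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                done.countDown();
            });
        }
        done.await(); // all 10 tasks eventually ran on just 2 threads
        System.out.println("largest pool size: " + pool.getLargestPoolSize());
        pool.shutdown();
    }
}
```

Passing such an executor to a Service via setExecutor gives the same behavior inside JavaFX.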
More sophisticated solutions for which a simple BlockedQueue messaging is not appropriate would use a topic based message queue style solution, e.g., a Java based STOMP client such as this kaazing example.
Getting the message info to the service is just part of the requirement, that is essentially doing an async message send. You also need to process the response that comes back. To do that, there are two alternatives:
You model each request as a separate Task, and the onSucceeded and onFailed handlers process the task response. Running the task within a service ensures that it is handled by an executor with a fixed thread pool backed by an internal queue for overflow.
You write your own long-running service interface with its own API and encapsulate a blocking queue for requests, using Platform.runLater to communicate results back to the UI.
To make the response handler logic dynamic and adjustable by the caller, you could pass the handler function as a lambda function to be executed on success for the original call using Platform.runLater.
If you wrap the call in a Task or Service, and use the onSucceeded function, you don't need the runLater call, because the implementation will ensure that the onSucceeded handler is called on the JavaFX thread once the task completes.
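A pure-JDK analogue of this thread-hopping may help. Platform.runLater and onSucceeded are JavaFX-specific, so in this sketch a named single-thread executor stands in for the FX Application Thread; the point is only that the success handler runs on the designated thread, not the worker.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CallbackSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the JavaFX Application Thread.
        ExecutorService uiThread = Executors.newSingleThreadExecutor(
                r -> new Thread(r, "fake-fx-thread"));
        // Background worker, like the thread a Task runs on.
        ExecutorService worker = Executors.newSingleThreadExecutor();

        String handledOn = CompletableFuture
                .supplyAsync(() -> "server-reply", worker)   // the "network call"
                .thenApplyAsync(r -> Thread.currentThread().getName() + ":" + r,
                        uiThread)                            // the "onSucceeded" handler
                .get();

        System.out.println(handledOn); // handler ran on the designated thread
        worker.shutdown();
        uiThread.shutdown();
    }
}
```

With a real Task, the framework does this hop for you: onSucceeded is always invoked on the FX Application Thread.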
Note that the network request and response often require some marshaling and unmarshaling of data to and from a serializable stream. Some of the higher-level network APIs, such as the JAX-RS or web socket providers, supply interfaces and utilities to do some of this work for you, often using specific libraries for different types of conversion, such as JAXB for XML serialization or Jackson for JSON serialization.
Slightly related info and further thoughts
This next item is probably a bit off-topic, but here is an example of BlockingQueue and Task interaction; it is not a network service, but it does demonstrate the use of queues in a producer/consumer situation, with a reactive UI and progress monitoring.
One other thing that would be interesting to see (at least for me), is an Akka based solution for JavaFX client->server communication. That seems like a nice alternative to traditional http/rest/soap/rmi calls or message queue based processing. Akka is inherently an event based solution for fault-tolerant asynchronous concurrent communication, so it would seem a good match-up for a UI based framework such as JavaFX, allowing a developer to process at an appropriate layer of abstraction. But I have yet to see a JavaFX based messaging client that relies on Akka.
I would like to know the best approach to create a network-based reusable threading service. The way I coded up the network service was to use a controller class (generated from the FXML via the NetBeans GUI). I put the threading logic here via a private Service member named 'transmitter' and I wired up the start/stop logic via the Start/Stop button's event callback.
I humbly suggest that you develop your network service and your GUI controller as separate projects.
I would have the network service running in its own container or virtual machine as a daemon/background thread. The advantage of this organization is that it keeps your server away from the vagaries of the JavaFX event loop and application thread. You'll want to design your service to recognize administration commands and/or interrupt requests from your controller. You can develop your network services as REST or whatever you want without wondering how to roll this into the JavaFX application thread.
I would then have the GUI controller running as a separate GUI application either in the same process or, if remote administration is desired, in a separate JVM (and use IPC to send/receive administration messages).
TL;DR: if it were me, I would resist the temptation to program the network service as a JavaFX application.

jersey ws 2.0 @Suspended AsyncResponse, what does it do?

I am analyzing some Jersey 2.0 code and I have a question on how the following method works:
@Stateless
@Path("/mycoolstuff")
public class MyEjbResource {
    …
    @GET
    @Asynchronous // does this mean the method executes on a child thread?
    public void longRunningOperation(@Suspended AsyncResponse ar) {
        final String result = executeLongRunningOperation();
        ar.resume(result);
    }

    private String executeLongRunningOperation() { … }
}
Let's say I'm at a web browser and I type in www.mysite/mycoolstuff.
This will execute the method, but I'm not understanding what the AsyncResponse is used for, nor the @Asynchronous annotation. From the browser, how would I notice that it's asynchronous? What would be the difference if the annotation were removed? Also, having read the documentation, I'm not clear on the purpose of the @Suspended annotation.
Is the @Asynchronous annotation simply telling the program to execute this method on a new thread? Is it a convenience for doing "new Thread(...)"?
Update: this annotation relieves the server from hanging onto the request-processing thread, so throughput can be better. Anyway, from the official docs:
Request processing on the server works by default in a synchronous processing mode, which means that a client connection of a request is processed in a single I/O container thread. Once the thread processing the request returns to the I/O container, the container can safely assume that the request processing is finished and that the client connection can be safely released including all the resources associated with the connection. This model is typically sufficient for processing of requests for which the processing resource method execution takes a relatively short time. However, in cases where a resource method execution is known to take a long time to compute the result, server-side asynchronous processing model should be used. In this model, the association between a request processing thread and client connection is broken. I/O container that handles incoming request may no longer assume that a client connection can be safely closed when a request processing thread returns. Instead a facility for explicitly suspending, resuming and closing client connections needs to be exposed. Note that the use of server-side asynchronous processing model will not improve the request processing time perceived by the client. It will however increase the throughput of the server, by releasing the initial request processing thread back to the I/O container while the request may still be waiting in a queue for processing or the processing may still be running on another dedicated thread. The released I/O container thread can be used to accept and process new incoming request connections.
@Suspended makes a more definite difference if you use it; otherwise it changes nothing here.
Let's talk about its benefits:
@Suspended will pause/suspend the current request until it gets a response; by default no suspend timeout is set (NO_TIMEOUT). So by itself it doesn't mean your request-processing (I/O) thread will get free and be available for other requests.
Now assume you want your service to respond within some specific time, but the method you are calling from the resource does not guarantee its response time; how will you manage your service's response time? In that case you can set a suspend timeout for your service using @Suspended, and even provide a fallback response when the time is exceeded.
Below is a sample of setting the suspend/pause timeout:
public void longRunningOperation(@Suspended AsyncResponse ar) {
    ar.setTimeoutHandler(customHandler);
    ar.setTimeout(10, TimeUnit.SECONDS);
    final String result = executeLongRunningOperation();
    ar.resume(result);
}
For more details, refer to the documentation.
The @Suspended annotation is added before an AsyncResponse parameter on the resource method to tell the underlying web server not to expect this thread to return a response for the remote caller:
@POST
public void asyncPost(@Suspended final AsyncResponse ar, ... <args>) {
    someAsyncMethodInYourServer(<args>, new AsyncMethodCallback() {
        @Override
        void completed(<results>) {
            ar.resume(Response.ok(<results>).build());
        }

        @Override
        void failed(Throwable t) {
            ar.resume(t);
        }
    });
}
Rather, the AsyncResponse object is used by the thread that calls completed or failed on the callback object to return an 'ok' or throw an error to the client.
Consider using such asynchronous resources in conjunction with an async Jersey client. If you're trying to implement a REST service that exposes a fundamentally async API, these patterns allow you to project the async API through the REST interface.
We don't create async interfaces because we have a process that takes a long time (minutes or hours) to run, but rather because we don't want our threads to ever sleep. We send the request and register a callback handler to be called later when the result is ready - from milliseconds to seconds later. In a synchronous interface, the calling thread would be sleeping during that time rather than doing something useful. One of the fastest web servers ever written is single-threaded and completely asynchronous: that thread never sleeps, and because there is only one thread, there's no context switching going on under the covers (at least within that process).
The @Suspended annotation makes the caller actually wait until your work is done. Let's say you have a lot of work to do on another thread. When you use @Suspended, the caller just sits there and waits (so on a web browser they just see a spinner) until your AsyncResponse object returns data to it.
Imagine you had a really long operation to do and you want to do it on another thread (or multiple threads). Now we can have the user wait until we are done. Don't forget that in Jersey you'll need to add <async-supported>true</async-supported> to the Jersey servlet definition in web.xml to get it to work.

Netty Server - how to invoke methods from the outside?

I'm currently trying to build a TCP server with Netty. The server will then be part of my main program.
My application needs to send messages to the connected clients. I know I can keep track of the channels using a ConcurrentHashMap or a ChannelGroup inside a handler. To avoid blocking my application, the server itself has to run in a separate thread. From my point of view, the corresponding run method would look like this:
public class Server implements Runnable {
    @Override
    public void run() {
        EventLoopGroup bossEventGroup = new NioEventLoopGroup();
        EventLoopGroup workerEventGroup = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap
                .group(bossEventGroup, workerEventGroup)
                .channel(NioServerSocketChannel.class)
                .childHandler(new MyServerInitializer());
            ChannelFuture future = bootstrap.bind(8080).sync().channel().closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            workerEventGroup.shutdownGracefully();
            bossEventGroup.shutdownGracefully();
        }
    }
}
But now I have no idea how to integrate e.g. a sendMessage(Message message) method which can be used by my main application. I believe the method itself has to be defined in the handler to have access to the stored connected channels. But can someone give me an idea how to make such a method usable from the outside? Do I have to implement some sort of message queue which is checked in a loop after the bind? I could imagine the method invocation then looking like this:
ServerHandlerTest t = (ServerHandlerTest) future.channel().pipeline().last();
if (newMessageInQueue) {
    t.sendMessage(...);
}
Maybe someone is able to explain what the preferred implementation for this use case is.
I would create your own application handler to manage the business behavior within your own Netty handler, because that is where the main (event-based) logic lives.
Your own (last) handler takes care of all your application behavior, such that each client is served correctly, directly within the handler, using the ChannelHandlerContext ctx.
Of course, you can still think of a particular application handler that would do something like this:
Creation of the handler (in the pipeline creation within MyServerInitializer) initiates the handler to look for a message queue to send from.
The handler then polls the message queue and sends to the right client, using a hash map.
But I believe this is far more complicated (which queue for which client, or a global queue; how to handle the queue without blocking the server thread - which you must not do; ...).
Moreover, by a sendMessage method, do you mean the write (or writeAndFlush) method?
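For what it's worth, Netty's Channel.write and writeAndFlush are safe to call from any thread, so an externally visible sendMessage can simply delegate to the stored channels. If you do want the queue-based variant from the question, the pure-JDK sketch below shows its general shape: callers enqueue from any thread, and a single server-owned thread drains the queue. All names here are invented for illustration, and the "write" is just a list append standing in for channel.writeAndFlush.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

public class ServerFacadeSketch {
    final BlockingQueue<String> outbound = new LinkedBlockingQueue<>();
    final List<String> delivered = new CopyOnWriteArrayList<>();
    private final Thread serverThread = new Thread(() -> {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                String msg = outbound.take();       // blocks until a message arrives
                delivered.add("client<-" + msg);    // stand-in for channel.writeAndFlush(msg)
            }
        } catch (InterruptedException e) {
            // shutting down
        }
    });

    public void start()               { serverThread.start(); }
    public void sendMessage(String m) { outbound.add(m); } // callable from any thread
    public void stop()                { serverThread.interrupt(); }

    public static void main(String[] args) throws Exception {
        ServerFacadeSketch server = new ServerFacadeSketch();
        server.start();
        server.sendMessage("hello");
        Thread.sleep(100);
        System.out.println(server.delivered);
        server.stop();
    }
}
```

In real Netty code the drain loop is unnecessary: keep a ChannelGroup in the handler and call group.writeAndFlush(msg) directly from the outside.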

how to implement an event-driven consumer in Camel

I am very new to Camel, and have been struggling to understand how to use Camel in a specific scenario.
In this scenario, there is a (Java-based) agent that generates actions from time to time. I need an event-driven consumer to get notified of these events. These events will be routed to a 'file' producer (for the time being).
In the Camel book, the example is for a polling consumer. I could not find a generic solution for an event-driven consumer.
I came across a similar implementation for JMX :
public class JMXConsumer extends DefaultConsumer implements NotificationListener {

    JMXEndpoint jmxEndpoint;

    public JMXConsumer(JMXEndpoint endpoint, Processor processor) {
        super(endpoint, processor);
        this.jmxEndpoint = endpoint;
    }

    public void handleNotification(Notification notification, Object handback) {
        try {
            getProcessor().process(jmxEndpoint.createExchange(notification));
        } catch (Throwable e) {
            handleException(e);
        }
    }
}
Here, the handleNotification is invoked whenever a JMX notification arrives.
I believe I have to do something similar to get my consumer notified whenever the agent generates an action. However, the above handleNotification method is specific to JMX. The web page says: " When implementing your own event-driven consumer, you must identify an analogous event listener method to implement in your custom consumer."
I want to know: How can I identify an analogous event listener, so that my consumer will be notified whenever my agent has an action.
Any advice/link to a web page is very much appreciated.
I know this is an old question, but I've been struggling with it and just thought I would document my findings for anyone else searching for an answer.
When you create an Endpoint class (extending DefaultEndpoint) you override the following method for creating a consumer:
public Consumer createConsumer(Processor processor)
In your consumer then, you have access to a Processor - calling 'process' on this processor will create an event and trigger the route.
For example, say you have some Java API that listens for messages, and has some sort of Listener. In my case, the Listener puts incoming messages onto a LinkedBlockingQueue, and my Consumer 'doStart' method looks like this (add your own error handling):
@Override
protected void doStart() throws Exception {
    super.doStart();
    // Spawn a new thread that submits exchanges to the Processor.
    Runnable runnable = new Runnable() {
        @Override
        public void run() {
            try {
                while (true) {
                    IMessage incomingMessage = myLinkedBlockingQueue.take();
                    Exchange exchange = getEndpoint().createExchange();
                    exchange.getIn().setBody(incomingMessage);
                    myProcessor.process(exchange);
                }
            } catch (Exception e) {
                // add your own error handling
            }
        }
    };
    new Thread(runnable).start();
}
Now I can put the Component that creates the Endpoint that creates this Consumer in my CamelContext, and use it like this:
from("mycomponent:incoming").to("log:messages");
And the log message fires every time a new message arrives from the Java API.
Hope that helps someone!
Event-driven is what Camel is. Any route is actually an event listener.
Given the route:
from("activemq:SomeQueue").
    bean(MyBean.class);

public class MyBean {
    // Given a MyEventObject was sent to "SomeQueue".
    public void handleEvent(MyEventObject eventPayload) {
        // whatever processing.
    }
}
That would set up an event-driven consumer. How to send events, then? If you have Camel embedded in your app and access to the CamelContext from your event-action generator, then you could grab a ProducerTemplate from it and just fire off your event to whatever endpoint you defined in Camel, such as "seda:SomeQueue".
Otherwise, if your Camel instance is running in another server or instance than your application, then you should use some other transport than SEDA. Preferably JMS, but others will do as well; pick and choose. ActiveMQ is my favourite. You can start an embedded ActiveMQ instance (intra-JVM) easily and connect it to Camel with:
camelContext.addComponent("activemq", activeMQComponent("vm://localhost"));
