Netty and multiple handlers - java

How to do the following with Netty:
if the URI starts with "/static/*", use StaticHttpHandler
for any other URI, use HttpHandler
if the URI is "/ws", use WebSocketHandler
Now I have this code:
public class HttpHelloWorldServerInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    public void initChannel(SocketChannel ch) {
        ChannelPipeline p = ch.pipeline();
        p.addLast(new HttpServerCodec());
        p.addLast(new HttpHandler());
        // other pipeline handlers?
    }
}
Can I use something like a "switcher" in the pipeline? Or does that not make sense, so that I need to handle the request URI inside a handler? And how do I detect the WebSocket protocol?

In your own HttpHandler, you have to check the URI first and then decide which "real" handler to run. There are two ways of doing it:
either your business code (HttpHandler) already holds the necessary objects, allocated up front or created on demand (StaticHttpHandler and WebSocketHandler), and you pass the request to them manually by calling them explicitly (so they are no longer "standard" Netty handlers);
or you have one specific handler (an HttpRouteHandler, for instance) that decides which handler to add to the pipeline for this request and passes the current request on to it.
The first is the simplest but not easy to extend.
The second is a bit harder, and you have to make sure each request reaches the right handler. For instance:
once a channel is connected, are all requests coming through it of the same nature? If so, you can safely add the necessary handler and even remove the HttpRouteHandler;
if not, then for each request you have to add/remove the necessary handlers as needed, keeping the HttpRouteHandler in the pipeline to handle the new context.
In short, you're implementing route resolution in a web service.
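A minimal sketch of the second approach (Netty 4.x). The HttpRouteHandler name comes from above, the StaticHttpHandler/WebSocketHandler/HttpHandler classes are the ones from your question, and it assumes all requests on a channel are of the same nature, so the router removes itself after the first request:
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelPipeline;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.websocketx.WebSocketServerProtocolHandler;
import io.netty.util.ReferenceCountUtil;

public class HttpRouteHandler extends SimpleChannelInboundHandler<HttpRequest> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, HttpRequest request) {
        ChannelPipeline p = ctx.pipeline();
        String uri = request.uri(); // getUri() on older Netty 4.0

        if (uri.startsWith("/static/")) {
            p.addLast(new StaticHttpHandler());
        } else if (uri.startsWith("/ws")) {
            // the handshake handler needs the aggregated request; after the upgrade,
            // WebSocketHandler only sees WebSocketFrames
            p.addLast(new HttpObjectAggregator(65536));
            p.addLast(new WebSocketServerProtocolHandler("/ws"));
            p.addLast(new WebSocketHandler());
        } else {
            p.addLast(new HttpHandler());
        }

        p.remove(this); // route decided for this channel
        ctx.fireChannelRead(ReferenceCountUtil.retain(request)); // hand the request to the new handler
    }
}
In the initializer you would then add this handler right after the HttpServerCodec instead of adding HttpHandler directly.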

Related

Unable to create MockEndpoint for camel-timer

I am using the Camel Timer component to read blobs from an Azure storage container. A route is created which polls for blobs every 10 seconds and is processed by the CloudBlobProcessor.
from("timer://testRoute?fixedRate=true&period=10s")
.to("azure-blob://storageAccountName/storageContainerName?credentials=#credentials")
.to(CloudBlobProcessor)
.to("mock:result");
I want to write a test case by creating a mock endpoint, something like this:
MockEndpoint timerMockEndpoint = context.getEndpoint("timer://testRoute?fixedRate=true&period=10s", MockEndpoint.class);
But I receive the below exception while creating the above mock endpoint.
java.lang.IllegalArgumentException: The endpoint is not of type:
class org.apache.camel.component.mock.MockEndpoint but is: org.apache.camel.component.timer.TimerEndpoint
Below is the code where I am trying to skip sending to the original endpoint:
@Override
protected RoutesBuilder createRouteBuilder() throws Exception {
    return new AdviceWithRouteBuilder() {
        @Override
        public void configure() throws Exception {
            interceptSendToEndpoint("timer://testRoute?fixedRate=true&period=10s").skipSendToOriginalEndpoint()
                .log("Original Batch Endpoint skipped")
                .to("azure-blob://*")
                .to(CloudBlobProcessor).to("mock:result");
            from("timer://testRoute?fixedRate=true&period=10s").to("mock:result");
        }
    };
}
As I understand it, we're trying to solve two different problems here:
MockEndpoint != TimerEndpoint
Interceptions
The answer to the first one is simple: MockEndpoints follow the syntax mock:name. TimerEndpoint is a different endpoint and a totally different object. I don't know what you're aiming to do with the MockEndpoint here, but we simply can't treat a TimerEndpoint object as a MockEndpoint object; that's how object-oriented programming and Java work.
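For reference, a MockEndpoint is obtained and used like this (mock:result being the endpoint already at the end of your route):
// mock endpoints always use the mock: scheme
MockEndpoint mock = context.getEndpoint("mock:result", MockEndpoint.class);
mock.expectedMessageCount(1);
// ... let the route run ...
mock.assertIsSatisfied();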
Let's take a look at the second problem. I have less than a year of experience with Camel and have only used interception once, but I hope I can point you in a helpful direction.
The point of interception is to say "don't do that, do this instead". In this use case, it seems we're only trying to skip sending a request to the azure-blob endpoint, so I'd try intercepting azure-blob://storageAccountName/storageContainerName?credentials=#credentials.
So instead of your interception, I'd try writing an interception like this:
interceptSendToEndpoint("azure-blob://storageAccountName/storageContainerName?credentials=#credentials")
.skipSendToOriginalEndpoint()
.log("Intercepted!");
In this case, instead of sending the request to azure-blob, we intercept it. We're telling Camel to skip the send to the original endpoint, which means nothing will be sent to azure-blob://storageAccountName/storageContainerName?credentials=#credentials. Instead, we'll log "Intercepted!".
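Putting that together with the existing mock:result at the end of the route, a rough test sketch (Camel 2.x, assuming a CamelTestSupport-style test where the route from the question is already in the context) could look like this:
// advise the existing route (index 0 here): skip the real azure-blob endpoint
context.getRouteDefinitions().get(0).adviceWith(context, new AdviceWithRouteBuilder() {
    @Override
    public void configure() throws Exception {
        interceptSendToEndpoint("azure-blob://storageAccountName/storageContainerName?credentials=#credentials")
            .skipSendToOriginalEndpoint()
            .log("Intercepted!");
    }
});
context.start();

// the route still ends in mock:result, so put the assertions there
MockEndpoint result = context.getEndpoint("mock:result", MockEndpoint.class);
result.expectedMinimumMessageCount(1);
result.assertIsSatisfied();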

Decide at runtime for sync or async response using Jersey

Is it possible to decide at runtime whether a Jersey REST request to a resource endpoint should be handled synchronously or asynchronously? Let's take a simple example.
The synchronous version:
#Path("resource")
public class Resource {
#GET
#Produces({MediaType.TEXT_PLAIN})
public Response get() {
return Response.ok("Hello there!").build();
}
}
The asynchronous version:
#Path("resource")
public class Resource {
#GET
#Produces({MediaType.TEXT_PLAIN})
public void get(#Suspended final AsyncResponse r) {
r.resume(Response.ok("Hello there!").build()); // usually called somewhere from another thread
}
}
Depending on certain parameters, I would like to decide at runtime whether the GET request should be handled synchronously or asynchronously. The URL of the resource endpoint (http://server/resource) must be the same in both cases. Is this possible?
Of course, as you can see in the example above, the synchronous version can be faked in an asynchronous manner by simply calling AsyncResponse.resume(...). However, I would like to avoid the overhead of creating the asynchronous response.
A step back
The JAX-RS Asynchronous Server API is all about how the container manages the request, but the request will still be held open and the client experience won't change.
Quoting the Jersey documentation about the Asynchronous Server API:
Note that the use of server-side asynchronous processing model will not improve the request processing time perceived by the client. It will however increase the throughput of the server, by releasing the initial request processing thread back to the I/O container while the request may still be waiting in a queue for processing or the processing may still be running on another dedicated thread. The released I/O container thread can be used to accept and process new incoming request connections.
The approaches described below won't bring any benefits to your client.
Using a custom header
You could have different URLs for sync and async methods and create a pre-matching filter, which is executed before the request matching is started.
To do it, implement ContainerRequestFilter, annotate it with @PreMatching and, based on your conditions (headers, parameters, etc.), change the requested URI:
@Provider
@PreMatching
public class PreMatchingFilter implements ContainerRequestFilter {
    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        if (requestContext.getHeaders().get("X-Use-Async") != null) {
            requestContext.setRequestUri(yourNewURI);
        }
    }
}
Have a look at the ContainerRequestContext API.
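For illustration, a rough sketch of how the pieces could fit together; the /resource/sync and /resource/async sub-paths are my own placeholders, and X-Use-Async is the header used above. The client keeps calling http://server/resource, and the filter rewrites the URI internally:
@Path("resource")
public class Resource {

    @GET
    @Path("sync")
    @Produces(MediaType.TEXT_PLAIN)
    public Response getSync() {
        return Response.ok("Hello there!").build();
    }

    @GET
    @Path("async")
    @Produces(MediaType.TEXT_PLAIN)
    public void getAsync(@Suspended final AsyncResponse r) {
        r.resume(Response.ok("Hello there!").build()); // usually resumed from another thread
    }
}

// inside PreMatchingFilter.filter(...): rewrite /resource to one of the sub-paths
String target = requestContext.getHeaders().containsKey("X-Use-Async")
        ? "resource/async" : "resource/sync";
requestContext.setRequestUri(
        requestContext.getUriInfo().getBaseUriBuilder().path(target).build());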
Using a custom media type
I haven't tested the following solution, but it should work. You can keep the same URL for both sync and async methods, just accepting a different content type for each method.
For example:
Sync method: @Consumes("application/vnd.example.sync+text")
Async method: @Consumes("application/vnd.example.async+text")
And use the PreMatchingFilter to change the Content-Type header based on your conditions, like the following:
if (useSync) {
    requestContext.getHeaders().putSingle(
        HttpHeaders.CONTENT_TYPE, "application/vnd.example.sync+text");
} else {
    requestContext.getHeaders().putSingle(
        HttpHeaders.CONTENT_TYPE, "application/vnd.example.async+text");
}
According to the documentation, ContainerRequestContext#getHeaders() returns a mutable map with the request headers.
You could use a custom MediaType... for example, put @Produces("simple") on your simple get method and @Produces("asynch") on your asynchronous get method. In your client, you can then set the Accept header of your call to "simple" or "asynch", depending on what you need.
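A rough sketch of that idea, using full type/subtype media type names (here the vendor types from the previous answer) rather than the bare "simple"/"asynch" tokens:
@Path("resource")
public class Resource {

    @GET
    @Produces("application/vnd.example.sync+text")
    public Response getSync() {
        return Response.ok("Hello there!").build();
    }

    @GET
    @Produces("application/vnd.example.async+text")
    public void getAsync(@Suspended final AsyncResponse r) {
        r.resume(Response.ok("Hello there!").build()); // usually resumed from another thread
    }
}
The client then picks the method by sending Accept: application/vnd.example.sync+text or Accept: application/vnd.example.async+text to the same /resource URL.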

Business Logic in Netty?

I'm developing a server based on the Netty library and I'm having a problem with how to structure the application with regard to business logic.
Currently I have the business logic in the last handler, and that's where I access the database. The thing I can't wrap my head around is the latency of accessing the database (blocking code). Is it advisable to do this in the handler, or is there an alternative? Code below:
public void channelRead(ChannelHandlerContext ctx, Object msg)
        throws Exception {
    super.channelRead(ctx, msg);
    Msg message = (Msg) msg;
    switch (message.messageType) {
        case MType.SIGN_UP:
            userReg.signUp(message.user); // blocking database access
            break;
    }
}
You should execute the blocking calls on a DefaultEventExecutorGroup or on your own custom thread pool, which can be specified when the handler is added to the pipeline:
pipeline.addLast(new DefaultEventExecutorGroup(50), "BUSINESS_LOGIC_HANDLER", new BHandler());
Inside the handler, work can also be submitted to that executor explicitly:
ctx.executor().execute(new Runnable() {
    @Override
    public void run() {
        // blocking call
    }
});
Your custom handler is initialized by Netty every time the server accepts a new connection, hence one instance of the handler is responsible for handling one client.
So it is perfectly fine to issue blocking calls in your handler. It will not affect other clients, as long as you don't block indefinitely (or at least not for a very long time), so that you don't tie up Netty's threads for long and you don't put too much load on your server instance.
However, if you want to go for an asynchronous design, there is more than one design pattern you can use.
For example, with Netty, if you implement WebSockets, you could make the blocking calls on a separate thread and, when the results are available, push them to the client through the already established WebSocket.
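Putting the two fragments above together, a rough sketch (Netty 4.x; Msg, MType and userReg are the types and field from the question, and the group size of 50 is arbitrary):
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

public class BusinessLogicHandler extends ChannelInboundHandlerAdapter {

    // shared pool for blocking work so the event loop threads stay free
    private static final EventExecutorGroup BLOCKING_GROUP = new DefaultEventExecutorGroup(50);

    @Override
    public void channelRead(final ChannelHandlerContext ctx, Object msg) throws Exception {
        final Msg message = (Msg) msg;
        switch (message.messageType) {
            case MType.SIGN_UP:
                BLOCKING_GROUP.submit(new Runnable() {
                    @Override
                    public void run() {
                        userReg.signUp(message.user); // blocking database access, now off the event loop
                        // write any response back with ctx.writeAndFlush(...) from here
                    }
                });
                break;
        }
    }
}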

how to implement an event-driven consumer in camel

I am very new to Camel, and have been struggling to understand how to use camel in a specific scenario.
In this scenario, there is a (Java-based) agent that generates actions from time to time. I need an event-driven consumer to get notified of these events. These events will be routed to a 'file' producer (for the time being).
In the camel book, the example is for a polling consumer. I could not find a generic solution for an event-driven consumer.
I came across a similar implementation for JMX:
public class JMXConsumer extends DefaultConsumer implements NotificationListener {

    JMXEndpoint jmxEndpoint;

    public JMXConsumer(JMXEndpoint endpoint, Processor processor) {
        super(endpoint, processor);
        this.jmxEndpoint = endpoint;
    }

    public void handleNotification(Notification notification, Object handback) {
        try {
            getProcessor().process(jmxEndpoint.createExchange(notification));
        } catch (Throwable e) {
            handleException(e);
        }
    }
}
Here, the handleNotification is invoked whenever a JMX notification arrives.
I believe I have to do something similar to get my consumer notified whenever the agent generates an action. However, the above handleNotification method is specific to JMX. The web page says: "When implementing your own event-driven consumer, you must identify an analogous event listener method to implement in your custom consumer."
I want to know: how can I identify an analogous event listener, so that my consumer is notified whenever my agent generates an action?
Any advice/link to a web page is very much appreciated.
I know this is an old question, but I've been struggling with it and just thought I would document my findings for anyone else searching for an answer.
When you create an Endpoint class (extending DefaultEndpoint) you override the following method for creating a consumer:
public Consumer createConsumer(Processor processor)
In your consumer then, you have access to a Processor - calling 'process' on this processor will create an event and trigger the route.
For example, say you have some Java API that listens for messages, and has some sort of Listener. In my case, the Listener puts incoming messages onto a LinkedBlockingQueue, and my Consumer 'doStart' method looks like this (add your own error handling):
@Override
protected void doStart() throws Exception {
    super.doStart();
    // Spawn a new thread that submits exchanges to the Processor
    Runnable runnable = new Runnable() {
        @Override
        public void run() {
            while (true) {
                try {
                    IMessage incomingMessage = myLinkedBlockingQueue.take();
                    Exchange exchange = getEndpoint().createExchange();
                    exchange.getIn().setBody(incomingMessage);
                    myProcessor.process(exchange);
                } catch (Exception e) {
                    // add your own error handling here
                }
            }
        }
    };
    new Thread(runnable).start();
}
Now I can put the Component that creates the Endpoint that creates this Consumer in my CamelContext, and use it like this:
from("mycomponent:incoming").to("log:messages");
And the log message fires every time a new message arrives from the Java API.
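For completeness, here is a rough sketch of the Component and Endpoint boilerplate that sits around that Consumer (Camel 2.x API; the My* class names are just placeholders):
import java.util.Map;

import org.apache.camel.Component;
import org.apache.camel.Consumer;
import org.apache.camel.Endpoint;
import org.apache.camel.Processor;
import org.apache.camel.Producer;
import org.apache.camel.impl.DefaultComponent;
import org.apache.camel.impl.DefaultEndpoint;

public class MyComponent extends DefaultComponent {
    @Override
    protected Endpoint createEndpoint(String uri, String remaining, Map<String, Object> parameters) throws Exception {
        return new MyEndpoint(uri, this);
    }
}

public class MyEndpoint extends DefaultEndpoint {
    public MyEndpoint(String uri, Component component) {
        super(uri, component);
    }

    @Override
    public Producer createProducer() throws Exception {
        throw new UnsupportedOperationException("this endpoint only supports consuming");
    }

    @Override
    public Consumer createConsumer(Processor processor) throws Exception {
        return new MyConsumer(this, processor); // the Consumer with the doStart() shown above
    }

    @Override
    public boolean isSingleton() {
        return true;
    }
}
Register it with context.addComponent("mycomponent", new MyComponent()); and the "mycomponent:incoming" URI above resolves to it.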
Hope that helps someone!
Event-driven is what Camel is.
Any route is actually an event listener.
Given the route:
from("activemq:SomeQueue").
bean(MyClass.class);
public class MyBean{
public void handleEvent(MyEventObject eventPayload){ // Given MyEventObject was sent to this "SomeQueue".
// whatever processing.
}
}
That sets up an event-driven consumer. How to send events, then? If you have Camel embedded in your app and access to the CamelContext from your event/action generator, you can grab a ProducerTemplate from it and just fire off your event to whatever endpoint you defined in Camel, such as "seda:SomeQueue".
Otherwise, if your Camel instance is running in a different server or JVM than your application, you should use some other transport than SEDA; preferably JMS, but others will do as well, so pick and choose. ActiveMQ is my favourite. You can start an embedded ActiveMQ instance (intra-JVM) easily and connect it to Camel with:
camelContext.addComponent("activemq", activeMQComponent("vm://localhost"));
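Either way, firing an event from your own code then looks roughly like this (using the embedded "seda:SomeQueue" case from above; with the ActiveMQ route it would be "activemq:SomeQueue"):
// grab a template from the running context and send the event payload to the queue
ProducerTemplate template = camelContext.createProducerTemplate();
template.sendBody("seda:SomeQueue", eventPayload); // eventPayload being your MyEventObject instance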

Multiple handlers in netty

I would like to make a kind of logging proxy in netty. The goal is to be able to have a web browser make HTTP requests to a netty server, have them be passed on to a back-end web server, but also be able to take certain actions based on HTTP specific things.
There are a couple of useful Netty examples, HexDumpProxy (which does the proxying part, agnostic of the protocol), and I've taken just a bit of code from HttpSnoopServerHandler.
My code looks like this right now:
HexDumpProxyInboundHandler can be found at http://docs.jboss.org/netty/3.2/xref/org/jboss/netty/example/proxy/HexDumpProxyInboundHandler.html
//in HexDumpProxyPipelineFactory
public ChannelPipeline getPipeline() throws Exception {
    ChannelPipeline p = pipeline(); // Note the static import.
    p.addLast("handler", new HexDumpProxyInboundHandler(cf, remoteHost, remotePort));
    p.addLast("decoder", new HttpRequestDecoder());
    p.addLast("handler2", new HttpSnoopServerHandler());
    return p;
}
//HttpSnoopServerHandler
public class HttpSnoopServerHandler extends SimpleChannelUpstreamHandler {
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        HttpRequest request = (HttpRequest) e.getMessage();
        System.out.println(request.getUri());
        // going to do things based on the URI
    }
}
Unfortunately messageReceived in HttpSnoopServerHandler never gets called - it seems like HexDumpProxyInboundHandler consumes all the events.
How can I have two handlers, where one of them requires a decoder but the other doesn't (I'd rather have HexDumpProxy as it is, where it doesn't need to understand HTTP, it just proxies all connections, but my HttpSnoopHandler needs to have HttpRequestDecoder in front of it)?
I've not tried it, but you could extend HexDumpProxyInboundHandler and override messageReceived with something like:
super.messageReceived(ctx, e);
ctx.sendUpstream(e);
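Spelled out as a full class, that might look roughly like this (untested; Netty 3.x, with the constructor arguments from the HexDumpProxy example):
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.socket.ClientSocketChannelFactory;

public class SnoopingProxyInboundHandler extends HexDumpProxyInboundHandler {

    public SnoopingProxyInboundHandler(ClientSocketChannelFactory cf, String remoteHost, int remotePort) {
        super(cf, remoteHost, remotePort);
    }

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        super.messageReceived(ctx, e); // still proxy the raw bytes to the backend
        ctx.sendUpstream(e);           // and also pass the event on to HttpRequestDecoder / HttpSnoopServerHandler
    }
}
You would add this in place of HexDumpProxyInboundHandler in the pipeline factory.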
Alternatively, you could modify HexDumpProxyInboundHandler directly so that the last thing messageReceived does is call super.messageReceived(ctx, e).
This would only work for inbound data from the client. Data from the service you're proxying would still be passed through without your code seeing it.
