In my project I'd like to write the same FullHttpResponse to many clients for a performance boost.
Previously I have written the same ByteBuf for a custom protocol, and used retain to prevent the buf from being released after writing.
Unfortunately with FullHttpResponse (DefaultFullHttpResponse) my technique doesn't seem to work. The first time I write the response clients receive the response correctly, but the next write doesn't go through.
I did a simple System.out.println test to make sure nothing was blocking and that my code was executed entirely; it confirmed that nothing blocks and the write call is indeed issued.
I am using the Netty 4.1.0.Final release from Maven Central.
The pipeline only has an HttpServerCodec(256, 512, 512, false, 64) and my SimpleChannelInboundHandler<HttpRequest>, which I send the FullHttpResponse from.
Here's a simplified version of my inbound handler:
class HTTPHandler extends SimpleChannelInboundHandler<HttpRequest> {

    private static final FullHttpResponse response =
            new DefaultFullHttpResponse(HttpVersion.HTTP_1_1,
                                        HttpResponseStatus.OK,
                                        Unpooled.buffer(8).writeLong(0),
                                        false);

    @Override
    public void channelRead0(ChannelHandlerContext ctx, HttpRequest msg) {
        ctx.writeAndFlush(response.retain(), ctx.voidPromise());
    }
}
You need to use response.duplicate().retain(), or if you are on Netty 4.1.x you can also use response.retainedDuplicate().
This is needed to ensure each write gets separate reader/writer indices.
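For example, in the handler above, the write would become (a minimal sketch of the fix just described):

@Override
public void channelRead0(ChannelHandlerContext ctx, HttpRequest msg) {
    // retainedDuplicate() shares the response content but gives this write
    // its own reader/writer indices, so every client receives the full body
    ctx.writeAndFlush(response.retainedDuplicate(), ctx.voidPromise());
}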
I have a Netty 3 codebase with an HTTP Message decoder which extends ReplayingDecoder, and I need to migrate code which analyzes the chunks of the message, which means I need to get the chunks.
protected Object decode(ChannelHandlerContext ctx, Channel channel,
                        ByteBuf buffer, State state) {
    //...
    HttpChunk chunk = new DefaultHttpChunk(buffer.readBytes(toRead));
    //...
}
From what I've gathered, I need to use HttpChunkedInput instead, but creating one is surprisingly difficult.
// requires an InputStream...
HttpChunkedInput hc = new HttpChunkedInput(new ChunkedStream(...));
// but this seems really clunky/too awkward to be right.
HttpChunkedInput hc = new HttpChunkedInput(
        new ChunkedStream(new ByteArrayInputStream(buffer.array())));
ByteBuf doesn't seem to have a way to dump out a stream directly. Am I missing a util class API that would be better? I did find the new EmbeddedChannel class, which can simply readInbound(), but I'm not sure I should be changing the types just for that, or casting the bare Channel to an EmbeddedChannel to get out of the problem.
Netty 4.x onwards comes with an out-of-the-box HTTP codec which, unless you use an HTTP aggregation handler, will give you HTTP chunks as HttpContent objects.
This example shows how to write a handler that receives such chunks:
https://github.com/netty/netty/blob/a329857ec20cc1b93ceead6307c6849f93b3f101/example/src/main/java/io/netty/example/http/snoop/HttpSnoopServerHandler.java#L60
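For reference, a minimal sketch of such a handler (the class name is illustrative, not taken from the snoop example):

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.handler.codec.http.HttpContent;
import io.netty.handler.codec.http.HttpObject;
import io.netty.handler.codec.http.HttpRequest;
import io.netty.handler.codec.http.LastHttpContent;

// Without an HttpObjectAggregator in the pipeline, the codec emits an
// HttpRequest followed by one or more HttpContent chunks.
class ChunkAnalyzingHandler extends SimpleChannelInboundHandler<HttpObject> {
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, HttpObject msg) {
        if (msg instanceof HttpRequest) {
            // inspect headers, URI, etc.
        }
        if (msg instanceof HttpContent) {
            HttpContent chunk = (HttpContent) msg;
            // analyze chunk.content() (a ByteBuf), analogous to the old HttpChunk
            if (msg instanceof LastHttpContent) {
                // end of the message
            }
        }
    }
}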
I'm looking for an example like this but with a synchronous call. My program needs data from an external source and should wait until the response returns (or until a timeout).
The Play WS library is meant for asynchronous requests and this is good!
Using it ensures that your server is not going to be blocked and wait for some response (your client might be blocked but that is a different topic).
Whenever possible you should always opt for the async WS call. Keep in mind that you still get access to the result of the WS call:
public static Promise<Result> index() {
    final Promise<Result> resultPromise = WS.url(feedUrl).get().map(
        new Function<WS.Response, Result>() {
            public Result apply(WS.Response response) {
                return ok("Feed title:" + response.asJson().findPath("title"));
            }
        }
    );
    return resultPromise;
}
You just need to handle it a bit differently - you provide a mapping function - basically you are telling Play what to do with the result when it arrives. And then you move on and let Play take care of the rest. Nice, isn't it?
Now, if you really really really want to block, then you would have to use another library to make the synchronous request. There is a sync variant of the Apache HTTP Client - https://hc.apache.org/httpcomponents-client-ga/index.html
I also like the Unirest library (http://unirest.io/java.html) which actually sits on top of the Apache HTTP Client and provides a nicer and cleaner API - you can then do stuff like:
Unirest.post("http://httpbin.org/post")
    .queryString("name", "Mark")
    .field("last", "Polo")
    .asJson();
As both are publicly available, you can add them as dependencies to your project by declaring them in the build.sbt file.
Alternatively, you can simply block on the call and wait until you get the response, with a timeout if you want:
WS.Response response = WS.url(url)
        .setHeader("Authorization", "BASIC base64str")
        .setContentType("application/json")
        .post(requestJsonNode)
        .get(20000); // 20 sec
JsonNode resNode = response.asJson();
In newer versions of Play, the response no longer has an asJson() method. Instead, Jackson (or any other JSON mapper) must be applied to the body string:
final WSResponse r = ...;
Json.mapper().readValue(r.getBody(), Type.class);
I'm trying to write a non-blocking proxy with netty 4.1. I have a "FrontHandler" which handles incoming connections, and then a "BackHandler" which handles outgoing ones. I'm following the HexDumpProxyHandler (https://github.com/netty/netty/blob/ed4a89082bb29b9e7d869c5d25d6b9ea8fc9d25b/example/src/main/java/io/netty/example/proxy/HexDumpProxyFrontendHandler.java#L67)
In this code I have found:
@Override
public void channelRead(final ChannelHandlerContext ctx, Object msg) {
    if (outboundChannel.isActive()) {
        outboundChannel.writeAndFlush(msg).addListener(new ChannelFutureListener() {
            // ... resumes reading on success, closes the channel on failure
        });
    }
}
This means that the incoming message is only forwarded if the outbound client connection is already ready. That is obviously not ideal in an HTTP proxy case, so I am thinking about the best way to handle it.
I am wondering if disabling auto-read on the front-end connection (and only triggering reads manually once the outgoing client connection is ready) is a good option. I could then re-enable autoRead on the child socket in the channelActive event of the backend handler. However, I am not sure how many messages I would get in the handler for each read() invocation: using HttpDecoder, I assume I would get the initial HttpRequest, but I'd really like to avoid getting the subsequent HttpContent / LastHttpContent messages until I manually trigger the read() again and re-enable autoRead on the channel.
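A minimal sketch of what I have in mind (assuming the server bootstrap sets childOption(ChannelOption.AUTO_READ, false), and that the backend handler keeps a reference to the front-end channel; names are illustrative):

import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Backend handler: once the outbound connection is active, resume reads on
// the accepted front-end channel.
class BackendHandler extends ChannelInboundHandlerAdapter {
    private final Channel inboundChannel; // the front-end channel

    BackendHandler(Channel inboundChannel) {
        this.inboundChannel = inboundChannel;
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        inboundChannel.config().setAutoRead(true);
        inboundChannel.read();
    }
}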
Another option would be to use a Promise to get the Channel from the client ChannelPool:
private void setCurrentBackend(HttpRequest request) {
    pool.acquire(request, backendPromise);
    backendPromise.addListener((FutureListener<Channel>) future -> {
        Channel c = future.get();
        if (!currentBackend.compareAndSet(null, c)) {
            pool.release(c);
            throw new IllegalStateException();
        }
    });
}
and then do the copying from input to output through that promise, e.g.:
private void handleLastContent(ChannelHandlerContext frontCtx, LastHttpContent lastContent) {
    doInBackend(c -> {
        c.writeAndFlush(lastContent).addListener((ChannelFutureListener) future -> {
            if (future.isSuccess()) {
                future.channel().read();
            } else {
                pool.release(c);
                frontCtx.close();
            }
        });
    });
}

private void doInBackend(Consumer<Channel> action) {
    Channel c = currentBackend.get();
    if (c == null) {
        backendPromise.addListener((FutureListener<Channel>) future -> action.accept(future.get()));
    } else {
        action.accept(c);
    }
}
but I'm not sure how good it is to keep the promise there forever and do all the writes from "front" to "back" by adding listeners to it. I'm also not sure how to instantiate the promise so that the operations are performed on the right thread... right now I'm using:
backendPromise = group.next().<Channel> newPromise(); // bad
// or
backendPromise = frontCtx.channel().eventLoop().newPromise(); // OK?
(where group is the same EventLoopGroup as used in the ServerBootstrap of the frontend).
If they're not handled on the right thread, I assume it could be problematic to keep the "else { }" optimization in the doInBackend method, which skips the Promise and writes to the channel directly.
The no-autoread approach doesn't work by itself, because the HttpRequestDecoder creates several messages even if only one read() was performed.
I have solved it by using chained CompletableFutures.
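A minimal sketch of that chaining (the names are illustrative, not the actual code):

import java.util.concurrent.CompletableFuture;

import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

class FrontHandler extends ChannelInboundHandlerAdapter {
    // completed once the backend channel has been acquired and is active;
    // channelRead always runs on this channel's event loop, so plain field
    // access is safe here
    private CompletableFuture<Channel> backendReady = new CompletableFuture<>();

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // chain each write behind the previous one, so messages are forwarded
        // in arrival order even if they arrive before the backend is ready
        backendReady = backendReady.thenApply(ch -> {
            ch.writeAndFlush(msg);
            return ch;
        });
    }
}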
I have worked on a similar proxy application based on the MQTT protocol, which was basically used to create a real-time chat application. That application, however, was asynchronous in nature, so I naturally did not face any such problem. In case
outboundChannel.isActive() == false
I can simply keep the messages in a queue or a persistent DB and then process them once the outboundChannel is up. However, since you are talking about an HTTP application, the application is synchronous in nature, meaning the client cannot keep on sending packets until the outboundChannel is up and running. So the option you suggest is that the packet will only be read once the channel is active, and you can manually handle the message reads by disabling auto-read in ChannelConfig.
However, what I would like to suggest is that you check whether the outboundChannel is active. If the channel is active, send the packet forward for processing. If the channel is not active, reject the packet by sending back an error response (similar to a 404).
Along with this, you should configure your client to keep retrying the packets at certain intervals, and decide what to do if the channel takes too long to become active and readable. Manually handling channelRead is generally not preferred and is an anti-pattern. You should let Netty handle that for you in the most efficient way.
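A minimal sketch of that check (the status code and field names are illustrative):

import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.util.ReferenceCountUtil;

class FrontHandler extends ChannelInboundHandlerAdapter {
    private Channel outboundChannel; // set when the backend connects

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (outboundChannel != null && outboundChannel.isActive()) {
            outboundChannel.writeAndFlush(msg);
        } else {
            // reject instead of buffering; the client is expected to retry
            ReferenceCountUtil.release(msg);
            ctx.writeAndFlush(new DefaultFullHttpResponse(
                    HttpVersion.HTTP_1_1, HttpResponseStatus.SERVICE_UNAVAILABLE));
        }
    }
}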
I am using the Oracle Jersey Client, and am trying to cancel a long running get or put operation.
The Client is constructed as:
JacksonJsonProvider provider = new JacksonJsonProvider(new ObjectMapper());
ClientConfig clientConfig = new DefaultClientConfig();
clientConfig.getSingletons().add(provider);
Client client = Client.create(clientConfig);
The following code is executed on a worker thread:
File bigZipFile = new File("/home/me/everything.zip");
WebResource resource = client.resource("https://putfileshere.com");
Builder builder = resource.getRequestBuilder();
builder.type("application/zip").put(bigZipFile); //This will take a while!
I want to cancel this long-running put. When I try to interrupt the worker thread, the put operation continues to run. From what I can see, the Jersey Client makes no attempt to check for Thread.interrupted().
I see the same behavior when using an AsyncWebResource instead of WebResource and using Future.cancel(true) on the Builder.put(..) call.
So far, the only solution I have come up with to interrupt this is throwing a RuntimeException in a ContainerListener:
client.addFilter(new ConnectionListenerFilter(
        new OnStartConnectionListener() {
            public ContainerListener onStart(ClientRequest cr) {
                return new ContainerListener() {
                    public void onSent(long delta, long bytes) {
                        // If the thread has been interrupted, stop the operation
                        if (Thread.interrupted()) {
                            throw new RuntimeException("Upload or Download canceled");
                        }
                        // Report progress otherwise
                    }
                }...
I am wondering if there is a better solution (perhaps when creating the Client) that correctly handles interruptible I/O without using a RuntimeException.
Yeah, interrupting the thread will only work if the code is watching for interrupts or calling other methods (such as Thread.sleep(...)) that watch for them.
Throwing an exception out of the listener doesn't sound like a bad idea. I would certainly create your own RuntimeException class, such as TimeoutRuntimeException or something, so you can specifically catch and handle it.
Another thing to do would be to close the underlying IO stream being written to, which would cause an IOException, but I'm not familiar enough with Jersey to know whether you can get access to the connection.
Ah, here's an idea. Instead of putting the File, how about putting some sort of extension of a BufferedInputStream that reads from the File but also checks for cancellation. Jersey would read from the buffer, and at some point it would throw an IOException if the operation should stop.
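Something like this (a rough sketch; the class name is made up for illustration):

import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FilterInputStream;
import java.io.IOException;

// Wraps the file stream so the transfer can be aborted: each read checks the
// worker thread's interrupt flag and fails with an IOException if it is set.
class InterruptibleFileStream extends FilterInputStream {
    InterruptibleFileStream(File file) throws IOException {
        super(new BufferedInputStream(new FileInputStream(file)));
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        if (Thread.currentThread().isInterrupted()) {
            throw new IOException("Upload or download canceled");
        }
        return super.read(b, off, len);
    }
}

// Usage: put the stream instead of the File, e.g.
// builder.type("application/zip").put(new InterruptibleFileStream(bigZipFile));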
As of Jersey 2.35, the above API has changed. A timeout has been introduced in the client builder, which can set a read timeout. If the server takes too long to respond, the underlying socket will time out. However, once the server starts sending the response, it will not time out. This can be utilized if the server does not start sending a partial response, which depends on the server implementation.
client = (JerseyClient) JerseyClientBuilder
        .newBuilder()
        .connectTimeout(1 * 1000, TimeUnit.MILLISECONDS)
        .readTimeout(5 * 1000, TimeUnit.MILLISECONDS)
        .build();
The current filters and interceptors are for data only and the solution posted in the original question will not work with filters and interceptors (though I admit I may have missed something there).
Another way is to get hold of the underlying HttpUrlConnection (for standard Jersey client configuration) and it seems to be possible with org.glassfish.jersey.client.HttpUrlConnectorProvider
HttpUrlConnectorProvider httpConProvider = new HttpUrlConnectorProvider();
httpConProvider.connectionFactory(new CustomHttpUrlConnectionfactory());

public static class CustomHttpUrlConnectionfactory
        implements HttpUrlConnectorProvider.ConnectionFactory {

    @Override
    public HttpURLConnection getConnection(URL url) throws IOException {
        System.out.println("CustomHttpUrlConnectionfactory ..... called");
        return (HttpURLConnection) url.openConnection();
    } // getConnection closing
} // inner-class closing
I did try the connection provider approach; however, I could not get it working. The idea would be to keep a reference to the connection by some means (thread id, etc.) and close it if the communication takes too long. The primary problem was that I could not find a way to register the provider with the client. The standard
.register(httpConProvider)
mechanism does not seem to work (or perhaps it is not supposed to work like that), and the documentation is a bit sketchy in that direction.
I want to use a single instance of Netty to serve both WebSocket (socket.io) and raw TCP connections. What I am doing now is to have ONLY a RoutingHandler at start, which checks the first byte: if it is '[', it removes the RoutingHandler and adds the TCP handlers to the channel pipeline; otherwise, it adds the WebSocket handlers. The code looks like:
public class RoutingHandler extends SimpleChannelInboundHandler<ByteBuf> {

    private final ServerContext context;

    public RoutingHandler(final ServerContext context) {
        this.context = context;
    }

    @Override
    protected void channelRead0(final ChannelHandlerContext ctx, final ByteBuf in) throws Exception {
        if (in.isReadable()) {
            ctx.pipeline().remove(this);
            final byte firstByte = in.readByte();
            in.readerIndex(0);
            if (firstByte == 0x5B) { // '['
                this.context.routeChannelToTcp(ctx.channel());
            } else {
                // websocket
                this.context.routeChannelToSocketIO(ctx.channel());
            }
            ctx.pipeline().fireChannelActive();
            final byte[] copy = new byte[in.readableBytes()];
            in.readBytes(copy);
            ctx.pipeline().fireChannelRead(Unpooled.wrappedBuffer(copy));
        }
    }
}
The code seems to be working, but it does not seem like the best way to do it. In particular, I am hacking the channel lifecycle by manually calling fireChannelActive(), because adding extra handlers does not trigger the active event again, so some initialization code is not run.
Is there anything wrong with my solution? What is a better way to do it?
Thanks
This is referred to as Port Unification. There is a good example of it here, although it demonstrates switching between TCP and HTTP (with SSL and/or GZip detection), and not websockets, but the principles are the same.
Basically, you read in the first five bytes to sniff the protocol (more or less as you did), and once the protocol is identified, you modify the handlers in the pipeline accordingly.
Since you need to initiate a websocket through HTTP anyway, the example should work for you if you add the websocket upgrade procedure as outlined in this example.
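A minimal sketch of the sniffing step (TcpProtocolHandler is a placeholder for your own handlers). A ByteToMessageDecoder replays any bytes still buffered when it is removed from the pipeline, which avoids the manual fireChannelActive()/fireChannelRead() calls:

import java.util.List;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import io.netty.handler.codec.http.HttpServerCodec;

class PortUnificationHandler extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (in.readableBytes() < 5) {
            return; // not enough data to sniff yet; wait for more
        }
        if (in.getByte(in.readerIndex()) == '[') {
            ctx.pipeline().addLast(new TcpProtocolHandler()); // your raw TCP handlers
        } else {
            ctx.pipeline().addLast(new HttpServerCodec());    // plus websocket upgrade handlers
        }
        // removing this decoder forwards the buffered bytes to the new handlers
        ctx.pipeline().remove(this);
    }
}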
To see this in action, take a look at the following game server, which does exactly this. It works much the same way as mentioned in Nicholas' answer.
The relevant files are ProtocolMux and LoginProtocol.