I'm trying to create a Netty-based server that uses the SSE specification on the client.
First I created a handler (NotifyHandler) that extends SimpleChannelInboundHandler and also hooks into my own Pub system; when a notification arrives at onNotificationRecibed, it is written to the context's output channel.
private ChannelHandlerContext context = null;
private Publisher p = null;
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
super.channelRead(ctx, msg);
this.context = ctx;
HttpResponse response = new DefaultHttpResponse(HttpVersion.HTTP_1_1,
HttpResponseStatus.OK);
HttpHeaders headers = response.headers();
headers.set(HttpHeaders.Names.CONTENT_TYPE, "text/event-stream");
headers.set(HttpHeaders.Names.CACHE_CONTROL, "no-cache, no-store, max-age=0, must-revalidate");
headers.set(HttpHeaders.Names.PRAGMA, HttpHeaders.Values.NO_CACHE);
headers.set(HttpHeaders.Names.TRANSFER_ENCODING, HttpHeaders.Values.CHUNKED);
ctx.writeAndFlush(response);
Pub.getInstance().suscribe(this);
}
@Override
public void onNotificationRecibed(String type, Map<String, Object> data) {
context.writeAndFlush("event:"+type);
context.writeAndFlush("data:"+data.toString());
context.flush();
}
In the initializer:
public void initChannel(SocketChannel ch) {
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast(new HttpRequestDecoder());
pipeline.addLast(new HttpResponseEncoder());
pipeline.addLast(new NotifyHandler());
}
I can't make it work; I've tried to find some examples or usages of these streams but nothing seems to work. Can anyone point me in the right direction? Sorry for my English, and thanks for your time.
I had the same problem. As described in the specification, each field has to be separated by a new line and each message by an additional new line.
@Override
public void onNotificationRecibed(String type, Map<String, Object> data) {
context.writeAndFlush("event:" + type + "\n");
context.writeAndFlush("data:" + data.toString() + "\n\n");
context.flush();
}
In addition to @MPazik's suggestion, as you're using HttpResponseEncoder, you will need to write HttpContent to the ChannelHandlerContext. For example:
@Override
public void onNotificationRecibed(String type, Map<String, Object> data) {
final StringBuilder msg = new StringBuilder(1024); // 1kb initial capacity
msg.append("event:").append(type).append("\n");
msg.append("data:").append(data.toString()).append("\n\n");
final ByteBuf buffer = Unpooled.copiedBuffer(msg.toString(), StandardCharsets.UTF_8);
context.writeAndFlush(new DefaultHttpContent(buffer));
}
All of the classes above are either standard JDK or Netty.
I am using the Apache async HTTP client to stream objects from Azure storage.
I only need to return the HttpResponse object which has the stream associated with it. My clients will actually have to read from that stream to store the file locally.
The Apache async client uses a BasicAsyncResponseConsumer, which buffers the entire file in local memory before calling the completed callback.
I am trying to create my own implementation of AbstractAsyncResponseConsumer so that I can stream the response body instead of storing it first, but I have been unsuccessful so far.
Here is the bare-bones consumer class for reference:
public class MyConsumer extends AbstractAsyncResponseConsumer<HttpResponse> {
@Override
protected void onResponseReceived(HttpResponse response) throws HttpException, IOException {
}
@Override
protected void onContentReceived(ContentDecoder decoder, IOControl ioctrl) throws IOException {
}
@Override
protected void onEntityEnclosed(HttpEntity entity, ContentType contentType) throws IOException {
}
@Override
protected HttpResponse buildResult(HttpContext context) throws Exception {
return null;
}
@Override
protected void releaseResources() {
}
}
And here is the code to send the request and return the response:
public CompletableFuture<HttpResponse> getFile(HttpRequestBase request) {
MyConsumer myConsumer = new MyConsumer();
HttpAsyncRequestProducer producer =
HttpAsyncMethods.create(request);
CompletableFuture<HttpResponse> future = new CompletableFuture<>();
Future<HttpResponse> responseFuture =
httpclient.execute(producer, myConsumer,
new FutureCallback<HttpResponse>() {
@Override
public void completed(HttpResponse result) {
//This is called only when all the response body has been read
//future.complete(result)
}
@Override
public void failed(Exception ex) {
}
@Override
public void cancelled() {
}
});
return future;
}
I will be returning a CompletableFuture of the HttpResponse object to my clients.
They shouldn't have to wait for my HTTP client to read the entire response body into a local buffer first.
Ideally they should start copying directly from the stream provided in the response object.
What should I add in my implementation of the consumer to get the desired result?
I don't know if you still have this problem, but if what you want is an InputStream that actually streams data, then you'll want to use the blocking version of Apache HttpClient.
Java's built-in InputStream and OutputStream are inherently blocking, so returning a CompletableFuture of InputStream essentially defeats the purpose. BasicAsyncResponseConsumer buffering the entire response in memory is actually the right thing to do, because that's the only way of making it truly non-blocking.
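For example, a minimal sketch with the blocking client (org.apache.http.impl.client; the URL and file name here are just placeholders) that copies the body straight from the connection to disk without buffering it all in memory:
try (CloseableHttpClient client = HttpClients.createDefault();
     CloseableHttpResponse response = client.execute(new HttpGet("https://example.com/some-blob"))) {
    // getContent() reads directly from the connection, so nothing is buffered in memory.
    try (InputStream in = response.getEntity().getContent();
         OutputStream out = new FileOutputStream("some-blob.bin")) {
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
    }
}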
Another option you can take a look at is HttpAsyncMethods.createZeroCopyConsumer. What it does is that it stores the content to a file in a completely non-blocking way.
Here's an example:
try (CloseableHttpAsyncClient client = HttpAsyncClients.createDefault()) {
client.start();
final CompletableFuture<HttpResponse> cf = new CompletableFuture<>();
client.execute(
HttpAsyncMethods.createGet("https://example.com"),
HttpAsyncMethods.createZeroCopyConsumer(new File("foo.html")),
new FutureCallback<HttpResponse>() {
@Override
public void completed(HttpResponse result) {
cf.complete(result);
}
@Override
public void failed(Exception ex) {
cf.completeExceptionally(ex);
}
@Override
public void cancelled() {
cf.cancel(true);
}
});
// When cf completes, the file will be ready.
// The InputStream inside the HttpResponse will be the FileInputStream of the created file.
}
I am using Netty 4.1 embedded in Java and trying to retrieve data from a client's POST request in the pipeline. I tried several options I found online but nothing works...
Maybe someone has a useful thought on this.
Regards and thanks to everyone who helps.
Pipeline:
p.addLast ("codec", new HttpServerCodec ());
p.addLast("decoder", new HttpRequestDecoder());
p.addLast("encoder", new HttpRequestEncoder());
p.addLast("handler",new InboundHandlerA());
Handler:
private static class InboundHandlerA extends ChannelInboundHandlerAdapter{
@Override
public void channelActive(ChannelHandlerContext ctx) {
System.out.println("Connected!");
ctx.fireChannelActive();
}
public void channelRead (ChannelHandlerContext channelHandlerCtxt, Object msg) throws Exception {
System.out.println(msg);
}
}
Receiving HTTP requests using Netty is simple; you can do this with the following pipeline:
// Provides support for http objects:
p.addLast("codec", new HttpServerCodec());
// Deals with fragmentation in http traffic:
p.addLast("aggregator", new HttpObjectAggregator(Short.MAX_VALUE));
// Deals with optional compression of responses:
// p.addLast("compressor", new HttpContentCompressor());
p.addLast("handler",new InboundHandlerA());
This can be used with a custom SimpleChannelInboundHandler<FullHttpRequest>:
public class InboundHandlerA extends SimpleChannelInboundHandler<FullHttpRequest> {
@Override
public void channelActive(ChannelHandlerContext ctx) {
super.channelActive(ctx);
System.out.println("Connected!");
}
// Please keep in mind that this method will be renamed to
// messageReceived(ChannelHandlerContext, I) in 5.0.
@Override
public void channelRead0 (ChannelHandlerContext ctx,
FullHttpRequest msg) throws Exception {
// Check for invalid http data:
if(msg.getDecoderResult() != DecoderResult.SUCCESS ) {
ctx.close();
return;
}
System.out.println("Recieved request!");
System.out.println("HTTP Method: " + msg.getMethod());
System.out.println("HTTP Version: " + msg.getProtocolVersion());
System.out.println("URI: " + msg.getUri());
System.out.println("Headers: " + msg.headers());
System.out.println("Trailing headers: " + msg.trailingHeaders());
ByteBuf data = msg.content();
System.out.println("POST/PUT length: " + data.readableBytes());
System.out.println("POST/PUT as string: ");
System.out.println("-- DATA --");
System.out.println(data.toString(StandardCharsets.UTF_8));
System.out.println("-- DATA END --");
// Send response back so the browser won't timeout
ByteBuf responseBytes = ctx.alloc().buffer();
responseBytes.writeBytes("Hello World".getBytes());
FullHttpResponse response = new DefaultFullHttpResponse(
HttpVersion.HTTP_1_1, HttpResponseStatus.OK, responseBytes);
response.headers().set(HttpHeaders.Names.CONTENT_TYPE,
"text/plain");
response.headers().set(HttpHeaders.Names.CONTENT_LENGTH,
response.content().readableBytes());
response.headers().set(HttpHeaders.Names.CONNECTION,
HttpHeaders.Values.KEEP_ALIVE);
ctx.writeAndFlush(response);
}
}
The code above prints out all the details of an incoming message, including the POST data. If you require only the POST data, you can add a simple if-statement to filter on the POST request method.
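A minimal sketch of such a check, placed at the top of channelRead0 and using the same FullHttpRequest API as above:
// Only handle POST requests; reject everything else with 405 Method Not Allowed.
if (!HttpMethod.POST.equals(msg.getMethod())) {
    FullHttpResponse notAllowed = new DefaultFullHttpResponse(
            HttpVersion.HTTP_1_1, HttpResponseStatus.METHOD_NOT_ALLOWED);
    ctx.writeAndFlush(notAllowed).addListener(ChannelFutureListener.CLOSE);
    return;
}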
It is written in the Akka documentation that
... Actors should not block (i.e. passively wait while occupying a Thread) on some external entity, which might be a lock, a network socket, etc. The blocking operations should be done in some special-cased thread which sends messages to the actors which shall act on them.
source http://doc.akka.io/docs/akka/2.0/general/actor-systems.html#Actor_Best_Practices
This is the information I have found so far:
I read Sending outbound HTTP request from Akka / Scala and checked the example at https://github.com/dsciamma/fbgl1
I found the following article, http://nurkiewicz.blogspot.de/2012/11/non-blocking-io-discovering-akka.html, explaining how to use the non-blocking HTTP client https://github.com/AsyncHttpClient/async-http-client with Akka, but it is written in Scala.
How can I write an actor that makes non-blocking HTTP requests?
It must download a remote URL as a file and then send the generated file object to the master actor. The master actor then sends this request to a parser actor to parse the file...
In the last response, Koray is using the wrong reference for the sender; the correct way to do it is:
public class ReduceActor extends UntypedActor {
@Override
public void onReceive(Object message) throws Exception {
if (message instanceof URI) {
URI url = (URI) message;
AsyncHttpClient asyncHttpClient = new AsyncHttpClient();
final ActorRef sender = getSender();
asyncHttpClient.prepareGet(url.toURL().toString()).execute(new AsyncCompletionHandler<Response>() {
@Override
public Response onCompleted(Response response) throws Exception {
File f = new File("e:/tmp/crawler/" + UUID.randomUUID().toString() + ".html");
// Do something with the Response
// ...
// System.out.println(response1.getStatusLine());
FileOutputStream fao = new FileOutputStream(f);
IOUtils.copy(response.getResponseBodyAsStream(), fao);
System.out.println("File downloaded " + f);
sender.tell(new WordCount(f));
return response;
}
@Override
public void onThrowable(Throwable t) {
// Something wrong happened.
}
});
} else
unhandled(message);
}
}
Check out this other Akka thread: https://stackoverflow.com/a/11899690/575746
I have implemented it this way:
public class ReduceActor extends UntypedActor {
@Override
public void onReceive(Object message) throws Exception {
if (message instanceof URI) {
URI url = (URI) message;
AsyncHttpClient asyncHttpClient = new AsyncHttpClient();
asyncHttpClient.prepareGet(url.toURL().toString()).execute(new AsyncCompletionHandler<Response>() {
@Override
public Response onCompleted(Response response) throws Exception {
File f = new File("e:/tmp/crawler/" + UUID.randomUUID().toString() + ".html");
// Do something with the Response
// ...
// System.out.println(response1.getStatusLine());
FileOutputStream fao = new FileOutputStream(f);
IOUtils.copy(response.getResponseBodyAsStream(), fao);
System.out.println("File downloaded " + f);
getSender().tell(new WordCount(f));
return response;
}
@Override
public void onThrowable(Throwable t) {
// Something wrong happened.
}
});
} else
unhandled(message);
}
}
I have a Jersey client that needs to upload a file big enough to require a progress bar.
The problem is that, for an upload that takes several minutes, I see the bytes transferred go to 100% as soon as the application has started. It then takes several minutes to print the "on finish" string. It is as if the bytes were sent to a buffer and I was reading the transfer-to-the-buffer speed instead of the actual upload speed. This makes the progress bar useless.
This is the very simple code:
ClientConfig config = new DefaultClientConfig();
Client client = Client.create(config);
WebResource resource = client.resource("www.myrestserver.com/uploads");
WebResource.Builder builder = resource.type(MediaType.MULTIPART_FORM_DATA_TYPE);
FormDataMultiPart multiPart = new FormDataMultiPart();
FileDataBodyPart fdbp = new FileDataBodyPart("data.zip", new File("data.zip"));
BodyPart bp = multiPart.bodyPart(fdbp);
String response = builder.post(String.class, multiPart);
To get the progress state I've added a ContainerListener filter, obviously before calling builder.post:
final ContainerListener containerListener = new ContainerListener() {
@Override
public void onSent(long delta, long bytes) {
System.out.println(delta + " : " + bytes);
}
@Override
public void onFinish() {
super.onFinish();
System.out.println("on finish");
}
};
OnStartConnectionListener connectionListenerFactory = new OnStartConnectionListener() {
#Override
public ContainerListener onStart(ClientRequest cr) {
return containerListener;
}
};
resource.addFilter(new ConnectionListenerFilter(connectionListenerFactory));
In Jersey 2.x, I've used a WriterInterceptor to wrap the output stream with a subclass of Apache Commons IO's CountingOutputStream that tracks the writing and notifies my upload progress code (not shown).
public class UploadMonitorInterceptor implements WriterInterceptor {
@Override
public void aroundWriteTo(WriterInterceptorContext context) throws IOException, WebApplicationException {
// the original outputstream jersey writes with
final OutputStream os = context.getOutputStream();
// you can use Jersey's target/builder properties or
// special headers to set identifiers of the source of the stream
// and other info needed for progress monitoring
String id = (String) context.getProperty("id");
long fileSize = (long) context.getProperty("fileSize");
// subclass of counting stream which will notify my progress
// indicators.
context.setOutputStream(new MyCountingOutputStream(os, id, fileSize));
// proceed with any other interceptors
context.proceed();
}
}
I then registered this interceptor with the client, or with specific targets where you want to use the interceptor.
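The counting stream itself isn't shown above; a minimal sketch of what it could look like, built on Apache Commons IO's CountingOutputStream (the UploadProgressTracker call is a made-up placeholder for whatever notification mechanism you use):
public class MyCountingOutputStream extends CountingOutputStream {
    private final String id;
    private final long fileSize;

    public MyCountingOutputStream(OutputStream out, String id, long fileSize) {
        super(out);
        this.id = id;
        this.fileSize = fileSize;
    }

    @Override
    protected void afterWrite(int n) throws IOException {
        super.afterWrite(n);
        // getByteCount() is maintained by CountingOutputStream as bytes pass through.
        // Hypothetical helper; replace with whatever notifies your progress UI.
        UploadProgressTracker.update(id, getByteCount(), fileSize);
    }
}
Registering the interceptor with a JAX-RS 2.x client is then a one-liner: client.register(UploadMonitorInterceptor.class);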
It should be enough to provide your own MessageBodyWriter for java.io.File which fires some events or notifies some listeners as the progress changes:
@Provider()
@Produces(MediaType.APPLICATION_OCTET_STREAM)
public class MyFileProvider implements MessageBodyWriter<File> {
public boolean isWriteable(Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
return File.class.isAssignableFrom(type);
}
public void writeTo(File t, Class<?> type, Type genericType, Annotation annotations[], MediaType mediaType, MultivaluedMap<String, Object> httpHeaders, OutputStream entityStream) throws IOException {
InputStream in = new FileInputStream(t);
try {
int read;
final byte[] data = new byte[ReaderWriter.BUFFER_SIZE];
while ((read = in.read(data)) != -1) {
entityStream.write(data, 0, read);
// fire some event as progress changes
}
} finally {
in.close();
}
}
@Override
public long getSize(File t, Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
return t.length();
}
}
and to make your client application use this new provider, simply:
ClientConfig config = new DefaultClientConfig();
config.getClasses().add(MyFileProvider.class);
or
ClientConfig config = new DefaultClientConfig();
MyFileProvider myProvider = new MyFileProvider();
config.getSingletons().add(myProvider);
You would also have to include some algorithm to recognize which file is being transferred when receiving progress events.
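One simple way to do that (a sketch; the header name is an arbitrary choice, not part of any API) is to have the client tag each request with a custom header and read it back from the httpHeaders argument inside writeTo:
// Client side: tag the request so the provider can tell uploads apart.
builder.header("X-Upload-Id", "data.zip");

// Inside MyFileProvider.writeTo(...): recover the identifier from the headers
// and attach it to the progress events you fire.
Object uploadId = httpHeaders.getFirst("X-Upload-Id");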
Edited:
I just found that by default HttpURLConnection uses buffering. To disable buffering you can do a couple of things:
httpUrlConnection.setChunkedStreamingMode(chunklength) - disables buffering and uses chunked transfer encoding to send the request
httpUrlConnection.setFixedLengthStreamingMode(contentLength) - disables buffering but adds a constraint to streaming: the exact number of bytes must be sent
So I suggest the final solution to your problem use the first option; it would look like this:
ClientConfig config = new DefaultClientConfig();
config.getClasses().add(MyFileProvider.class);
URLConnectionClientHandler clientHandler = new URLConnectionClientHandler(new HttpURLConnectionFactory() {
@Override
public HttpURLConnection getHttpURLConnection(URL url) throws IOException {
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setChunkedStreamingMode(1024);
return connection;
}
});
Client client = new Client(clientHandler, config);
I have successfully used David's answer. However, I would like to expand on it:
The following aroundWriteTo implementation of my WriterInterceptor shows how a panel (or similar) can also be passed to the CountingOutputStream:
@Override
public void aroundWriteTo(WriterInterceptorContext context)
throws IOException, WebApplicationException
{
final OutputStream outputStream = context.getOutputStream();
long fileSize = (long) context.getProperty(FILE_SIZE_PROPERTY_NAME);
context.setOutputStream(new ProgressFileUploadStream(outputStream, fileSize,
(ProgressPanel) context
.getProperty(PROGRESS_PANEL_PROPERTY_NAME)));
context.proceed();
}
The afterWrite of the CountingOutputStream can then set the progress:
@Override
protected void afterWrite(int n)
{
double percent = ((double) getByteCount() / fileSize);
progressPanel.setValue((int) (percent * 100));
}
The properties can be set on the Invocation.Builder object:
Invocation.Builder invocationBuilder = webTarget.request();
invocationBuilder.property(
UploadMonitorInterceptor.FILE_SIZE_PROPERTY_NAME, newFile.length());
invocationBuilder.property(
UploadMonitorInterceptor.PROGRESS_PANEL_PROPERTY_NAME,
progressPanel);
Perhaps the most important addition to David's answer and the reason why I decided to post my own is the following code:
client.property(ClientProperties.CHUNKED_ENCODING_SIZE, 1024);
client.property(ClientProperties.REQUEST_ENTITY_PROCESSING, "CHUNKED");
The client object is the javax.ws.rs.client.Client.
It is essential to disable buffering with the WriterInterceptor approach as well. The code above is a straightforward way to do this with Jersey 2.x.
I'm trying to create a simple audio streaming server as a proof of concept, but I'm having some difficulties.
I'm streaming a single file to start. I searched but didn't find enough information on creating an audio streaming server, so I just created a simple server based on my little knowledge about servers. I've created it with Netty, passing the stream to a ChunkedStream object and writing it to the channel:
public class CastServerHandler extends SimpleChannelHandler {
@Override
public void messageReceived(ChannelHandlerContext ctx, MessageEvent e)
throws Exception {
HttpRequest request = (HttpRequest) e.getMessage();
if (request.getMethod() != GET) {
sendError(ctx, METHOD_NOT_ALLOWED);
return;
}
HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK);
System.out.println(response.toString());
Channel channel = e.getChannel();
channel.write(response);
ChannelFuture writeFuture;
StreamSource source = StreamSource.getInstance();
ChunkedStream stream = new ChunkedStream(source.getLiveStream());
writeFuture = channel.write(stream);
writeFuture.addListener(new ChannelFutureProgressListener() {
public void operationComplete(ChannelFuture future) {
System.out.println("terminou");
future.getChannel().close();
}
public void operationProgressed(ChannelFuture future, long amount,
long current, long total) {
System.out.println("Transferido: " + current + " de " + total);
}
});
if (!isKeepAlive(request)) {
writeFuture.addListener(ChannelFutureListener.CLOSE);
}
}
private void sendError(ChannelHandlerContext ctx, HttpResponseStatus status) {
HttpResponse response = new DefaultHttpResponse(HTTP_1_1, status);
response.setHeader(CONTENT_TYPE, "text/plain; charset=UTF-8");
response.setContent(ChannelBuffers.copiedBuffer(
"Failure: " + status.toString() + "\r\n", CharsetUtil.UTF_8));
// Close the connection as soon as the error message is sent.
ctx.getChannel().write(response)
.addListener(ChannelFutureListener.CLOSE);
}
private void writeLiveStream(Channel channel) {
StreamSource source = StreamSource.getInstance();
ChunkedStream stream = new ChunkedStream(source.getLiveStream());
channel.write(stream);
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e)
throws Exception {
e.getCause().printStackTrace();
e.getChannel().close();
}
}
Unfortunately, I couldn't successfully stream the audio directly to the web browser, so I tried to figure out what Icecast returns as a response to the web browser, and it returns these properties in the header:
Cache-Control:no-cache
Content-Type:application/ogg
Server:Icecast 2.3.2
ice-audio-info:samplerate=44100;channels=2;quality=3%2e00
icy-description:Stream de teste
icy-genre:Rock
icy-name:Radio teste Brevleq
icy-pub:0
Is there a simple way in Netty to put this content in the HttpResponse header (especially Content-Type: application/ogg)? I hope this is the problem...
See the API of HttpResponse.
It has a setHeader method.
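For example, with the Netty 3.x API already used in the handler above (CONTENT_TYPE and CACHE_CONTROL are assumed to be statically imported from HttpHeaders.Names, as in the question's code; the icy-* names are plain strings copied from the Icecast capture):
HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK);
response.setHeader(CONTENT_TYPE, "application/ogg");
response.setHeader(CACHE_CONTROL, "no-cache");
// Icecast-specific metadata headers have no predefined constants, so plain strings are used.
response.setHeader("icy-name", "Radio teste Brevleq");
response.setHeader("icy-genre", "Rock");
response.setHeader("icy-pub", "0");
channel.write(response);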
I'd consider going with a straight binary protocol and creating an HTTP interface only for a proxy. There's no reason to deal with a text-based protocol for something like this.