I have some code like their example, like so (untested as of yet):
Promise<Object> promise = new Promise<Object>();
response.contentType = "application/json";
JsonStreamer streamer = new JsonStreamer(columns, promise);
while (streamer.hasMoreData()) {
    await(promise);
    response.writeChunk(streamer.nextDataChunk());
}
What I don't get is: how do I release the socket that the client opened? I am streaming some very large data back as JSON. I need some kind of response.releaseSocket() after writing the last chunk. I see WebSockets have that, but what about when I am using the await stuff?
thanks,
Dean
Ah, I think it notices that I never called await and in that case closes the socket. If I call await, it knows to keep the socket open. That makes sense.
Context: I've recently started using java.nio for my project which leverages Android's VpnService. In my implementation, I've wrapped the FileDescriptor that is returned by the establish() method of the VpnService into a java.nio.FileChannel as shown below.
private val outboundNetworkChannel = FileInputStream(fd).channel
After that, I have a Kotlin coroutine which reads from the FileChannel indefinitely and processes the outbound IPv4/IPv6 packets.
Issue: The snippet below works, but I see a lot of empty reads happening from the FileChannel, which in turn spins the while loop unnecessarily.
fun reader() = scope.launch(handler) {
    while (isActive) {
        val pkt = read()
        if (pkt !== DUMMY) {
            // Send the read IPv4/IPv6 packet for processing
        }
    }
}
private suspend fun read(): IPDatagram =
    withContext(Dispatchers.IO) {
        val bytes = ByteBufferPool.acquire()
        outboundChannel.read(bytes) // Returns a lot of empty reads with return value 0
        return@withContext marshal(bytes) // Read IPv4/IPv6 headers and wrap the packet
    }
What I'm looking for: I know for a fact that FileChannel is a blocking channel, and in this case, since the channel is backed by a network interface, it might not have packets ready to be read. Is there a better approach, with or without FileChannel, that would lead to a more efficient implementation without wasting precious CPU cycles? I'm open to new ideas as well :)
I managed to figure this out after digging through the Android docs for VpnService. By default, when a VPN connection is established using VpnService.Builder, the fd is in non-blocking mode. Starting with API level 21, one can call setBlocking(true).
As stated in the docs for public VpnService.Builder setBlocking (boolean blocking):
Sets the VPN interface's file descriptor to be in blocking/non-blocking mode. By default, the file descriptor returned by establish() is non-blocking.
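For illustration, here is a minimal sketch of that fix in Java (the service class, session name, and address are placeholders of mine; the only point is the setBlocking(true) call before establish()):
import android.net.VpnService;
import android.os.ParcelFileDescriptor;
import java.io.FileInputStream;
import java.nio.channels.FileChannel;

// Hypothetical service: addresses and session name are placeholders.
public class BlockingTunService extends VpnService {
    private FileChannel openBlockingTun() {
        ParcelFileDescriptor tunFd = new Builder()
                .setSession("example-vpn")
                .addAddress("10.0.0.2", 32)
                .setBlocking(true)   // API 21+: establish() now hands back a blocking fd
                .establish();
        // Reads on this channel now block until a packet arrives instead of returning 0.
        return new FileInputStream(tunFd.getFileDescriptor()).getChannel();
    }
}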
I am new here so forgive me if I am not familiar with standard operating procedure, but I have researched this topic at length and haven't found a lot of info.
I am trying to implement a client in a Java Http Servlet that can subscribe to a server-sent-event stream and parse data from that stream. Every time I have a client POST a request to my Http servlet, I need to pass on some data from that client to another server and then open an SSE listener, as that is how the other server will notify me it has data for me to hand back to the client.
It needs to be asynchronous and probably multi-threaded because I will have many requests from the client happening in a short time frame and I need to catch every event coming back from the server. The data I pass back from the server to the client can be large so I need threading so I don't miss new events coming in.
I am at a loss for where to start. I have tried implementing some of the example code using the Jersey SSE API (https://jersey.java.net/documentation/latest/sse.html), but when I implement their asynchronous SSE event-handling example, the events come in too fast for my handler to stream all the data back to the client, and the function gets called again for a new event before the previous call finishes, or at least that's what seems to be happening.
Here is a synopsis of what I have written so far:
Client client = ClientBuilder.newBuilder().register(SseFeature.class).build();
WebTarget target = client.target("Target URL");
EventSource eventSource = new EventSource(target) {
    @Override
    public void onEvent(InboundEvent inboundEvent) {
        if ("in".equals(inboundEvent.getName())) {
            // Check if the event is of the type we care about
            // If it is, open an input stream to read the payload and store it in a byte array via an HttpURLConnection object
            // Open an output stream and stream the payload to the client via an HttpServletResponse object - this never seems to happen
        }
    }
};
I know it's sloppy; I'm not that familiar with Java, so I am just piecing things together, and I apologize for that.
This gets called from within my servlet class, but it never makes it to the point where I write to the output stream; I think it's getting interrupted by another event coming in. If anyone has insight into how I can make this work, or another way to do it, I would greatly appreciate it. Thanks.
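One pattern that can help here (a sketch of my own, not something the original post or answer contains; the thread-pool size and the processEvent helper are hypothetical) is to hand each event off to an ExecutorService inside onEvent, so the handler returns immediately and slow processing never blocks the next event:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical wiring: 'target' is the WebTarget from the snippet above,
// and processEvent(...) stands in for the HttpURLConnection/HttpServletResponse work.
final ExecutorService pool = Executors.newFixedThreadPool(4);
EventSource eventSource = new EventSource(target) {
    @Override
    public void onEvent(InboundEvent inboundEvent) {
        if ("in".equals(inboundEvent.getName())) {
            final byte[] payload = inboundEvent.getRawData(); // copy the payload out first
            pool.submit(new Runnable() {
                public void run() {
                    processEvent(payload); // slow work happens off the event thread
                }
            });
        }
    }
};
Note that writing to the HttpServletResponse from another thread generally requires the servlet request to be in asynchronous mode first.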
I recommend the JEaSSE library (Java Easy Server-Sent Events): http://mvnrepository.com/artifact/info.macias/jeasse
You can find some usage examples here:
https://github.com/mariomac/jeasse
I am using the Oracle Jersey Client, and am trying to cancel a long running get or put operation.
The Client is constructed as:
JacksonJsonProvider provider = new JacksonJsonProvider(new ObjectMapper());
ClientConfig clientConfig = new DefaultClientConfig();
clientConfig.getSingletons().add(provider);
Client client = Client.create(clientConfig);
The following code is executed on a worker thread:
File bigZipFile = new File("/home/me/everything.zip");
WebResource resource = client.resource("https://putfileshere.com");
Builder builder = resource.getRequestBuilder();
builder.type("application/zip").put(bigZipFile); //This will take a while!
I want to cancel this long-running put. When I try to interrupt the worker thread, the put operation continues to run. From what I can see, the Jersey Client makes no attempt to check for Thread.interrupted().
I see the same behavior when using an AsyncWebResource instead of WebResource and using Future.cancel(true) on the Builder.put(..) call.
So far, the only solution I have come up with to interrupt this is throwing a RuntimeException in a ContainerListener:
client.addFilter(new ConnectionListenerFilter(
    new OnStartConnectionListener() {
        public ContainerListener onStart(ClientRequest cr) {
            return new ContainerListener() {
                public void onSent(long delta, long bytes) {
                    // If the thread has been interrupted, stop the operation
                    if (Thread.interrupted()) {
                        throw new RuntimeException("Upload or Download canceled");
                    }
                    // Report progress otherwise
                }
            }...
I am wondering if there is a better solution (perhaps when creating the Client) that correctly handles interruptible I/O without using a RuntimeException.
Yeah, interrupting the thread will only work if the code is watching for the interrupts or calling other methods (such as Thread.sleep(...)) that watch for it.
Throwing an exception out of the listener doesn't sound like a bad idea. I would certainly create your own RuntimeException class, such as TimeoutRuntimeException, so you can specifically catch and handle it.
Another thing to do would be to close the underlying IO stream that is being written to, which would cause an IOException, but I'm not familiar with Jersey so I'm not sure whether you can get access to the connection.
Ah, here's an idea. Instead of putting the File, how about putting some sort of extension of a BufferedInputStream that reads from the File but also has a timeout? Jersey would be reading from the buffer, and at some point it would throw an IOException if the timeout expires.
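As a rough illustration of that stream-wrapping idea (a sketch of my own; the class name and the cancellation flag are not from the answer, and it cancels on an explicit flag rather than a timer), you could pass Jersey an InputStream that starts failing once cancel() is called, so the put aborts with an IOException:
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical wrapper: builder.put(new CancellableInputStream(new FileInputStream(bigZipFile)))
// instead of builder.put(bigZipFile); calling cancel() makes the next read fail.
public class CancellableInputStream extends FilterInputStream {
    private final AtomicBoolean cancelled = new AtomicBoolean(false);

    public CancellableInputStream(InputStream in) {
        super(in);
    }

    public void cancel() {
        cancelled.set(true);
    }

    @Override
    public int read() throws IOException {
        failIfCancelled();
        return super.read();
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        failIfCancelled();
        return super.read(b, off, len);
    }

    private void failIfCancelled() throws IOException {
        if (cancelled.get()) {
            throw new IOException("Upload or download canceled");
        }
    }
}
The thread that wants to stop the transfer then simply calls cancel() on the wrapper instead of interrupting the worker.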
As of Jersey 2.35, the above API has changed. A timeout has been introduced in the client builder which can set a read timeout. If the server takes too long to respond, the underlying socket will time out. However, if the server has started sending the response, it will not time out. This can be utilized if the server does not start sending a partial response, which depends on the server implementation.
client = (JerseyClient) JerseyClientBuilder
        .newBuilder()
        .connectTimeout(1 * 1000, TimeUnit.MILLISECONDS)
        .readTimeout(5 * 1000, TimeUnit.MILLISECONDS)
        .build();
The current filters and interceptors are for data only and the solution posted in the original question will not work with filters and interceptors (though I admit I may have missed something there).
Another way is to get hold of the underlying HttpURLConnection (for the standard Jersey client configuration), and that seems to be possible with org.glassfish.jersey.client.HttpUrlConnectorProvider:
HttpUrlConnectorProvider httpConProvider = new HttpUrlConnectorProvider();
httpConProvider.connectionFactory(new CustomHttpUrlConnectionfactory());

public static class CustomHttpUrlConnectionfactory implements
        HttpUrlConnectorProvider.ConnectionFactory {

    @Override
    public HttpURLConnection getConnection(URL url) throws IOException {
        System.out.println("CustomHttpUrlConnectionfactory ..... called");
        return (HttpURLConnection) url.openConnection();
    } // getConnection closing
} // inner-class closing
I did try the connection provider approach; however, I could not get it working. The idea would be to keep a reference to the connection by some means (thread id, etc.) and close it if the communication is taking too long. The primary problem was that I could not find a way to register the provider with the client. The standard .register(httpConProvider) mechanism does not seem to work (or perhaps it is not supposed to work like that), and the documentation is a bit sketchy in that direction.
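For what it's worth, a possible way to wire the provider in Jersey 2.x (my own assumption based on ClientConfig.connectorProvider(); the original poster did not confirm this works for their case):
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import org.glassfish.jersey.client.ClientConfig;

// Assumption: the connector provider is wired through ClientConfig rather than register().
ClientConfig config = new ClientConfig();
config.connectorProvider(httpConProvider); // httpConProvider from the snippet above
Client client = ClientBuilder.newClient(config);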
The application that I am working on has two parts. The server part runs on a Linux machine. The client part, an Android application, queries the server and gets the necessary response. Both parts are written in Java, use socket-based communication, and transfer textual data.
Right after sending the request, here is how the client receives the response:
public static String ReadAvailableTextFromSocket(BufferedReader input) throws IOException {
    if (input.ready() == false) {
        return null;
    }

    StringBuilder retVal = new StringBuilder();
    while (input.ready()) {
        char ch = (char) input.read();
        retVal.append(ch);
    }
    return retVal.toString();
}
However, this doesn't seem to be all that reliable. The input is not always ready because of server response time or transmission delays.
It looks like input.ready() is not the right way to wait for data.
I am wondering if there is a better way to accomplish this. Perhaps there is some standard practice that I could use.
Perhaps you should use threads. Keep a listener thread in a while(true) loop that reads more data as it comes in and simply buffers the data in a data structure (let's say a queue) shared with the main thread, as sketched below. That way, the main thread can simply dequeue data as needed; if the queue is empty, it can infer that no new data was received.
Edit: see this multithreaded chat server/client code as an example.
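A minimal sketch of that pattern (class and method names are mine, not from the answer, and it assumes a line-oriented protocol): the listener thread blocks on readLine(), so there is no polling of ready(), and the main thread drains the shared queue whenever it needs data.
import java.io.BufferedReader;
import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SocketListener implements Runnable {
    private final BufferedReader input;
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();

    public SocketListener(BufferedReader input) {
        this.input = input;
    }

    public BlockingQueue<String> getQueue() {
        return queue;
    }

    public void run() {
        try {
            String line;
            // readLine() blocks until data arrives, so the loop never spins on ready()
            while ((line = input.readLine()) != null) {
                queue.put(line);
            }
        } catch (IOException e) {
            // socket closed or broken; let the thread end
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
The main thread can then call getQueue().take() to block for the next message, or poll() to check without blocking.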
Here is how I solved this problem. Since I am responsible for writing both the client side and the server side, when a request comes to the server, the first line of information I send in the response is the number of bytes the client can expect. This way, the client first waits to read a line; once the line is read, the client knows how many bytes of data to expect from the server.
Hope this helps others.
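For illustration, a sketch of the client side of that scheme (the method name is mine; for simplicity it treats the announced count as a character count read through the BufferedReader, whereas a byte-exact version would read the raw InputStream):
import java.io.BufferedReader;
import java.io.IOException;

public class LengthPrefixedReader {
    // The first line from the server announces the size; then read exactly that much.
    public static String readResponse(BufferedReader input) throws IOException {
        int expected = Integer.parseInt(input.readLine().trim());
        char[] buf = new char[expected];
        int read = 0;
        while (read < expected) {
            int n = input.read(buf, read, expected - read);
            if (n == -1) {
                throw new IOException("Connection closed before the full response arrived");
            }
            read += n;
        }
        return new String(buf, 0, read);
    }
}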
Regards, Peter
How can I do long-polling using the Netty framework? Say, for example, I fetch http://localhost/waitforx,
but waitforx is asynchronous because it has to wait for an event. Say, for example, it fetches something from a blocking queue (it can only fetch when there is data in the queue). When an item is taken from the queue, I would like to send the data back to the client. Hopefully somebody can give me some tips on how to do this.
Many thanks
You could write the response header first, and then send the body (content) later from another thread.
void messageReceived(...) {
    HttpResponse res = new DefaultHttpResponse(...);
    res.setHeader(...);
    ...
    channel.write(res);
}

// In a different thread..
ChannelBuffer partialContent = ...;
channel.write(partialContent);
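To flesh that out, here is a hedged sketch against the old Netty 3.x (org.jboss.netty) API; the blocking queue, the extra thread, and the wrapper class are my own assumptions, not part of the answer. The header goes out immediately with chunked transfer encoding, and once the queue yields an item the body is written as a chunk followed by the last chunk, which also closes the channel:
import java.util.concurrent.BlockingQueue;
import org.jboss.netty.buffer.ChannelBuffers;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFutureListener;
import org.jboss.netty.handler.codec.http.*;
import org.jboss.netty.util.CharsetUtil;

public class LongPollWriter {
    // 'channel' is the client's channel captured in messageReceived();
    // 'events' is the blocking queue mentioned in the question.
    void startLongPoll(final Channel channel, final BlockingQueue<String> events) {
        // 1) Send the header right away, announcing a chunked body.
        HttpResponse res = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
        res.setHeader(HttpHeaders.Names.TRANSFER_ENCODING, HttpHeaders.Values.CHUNKED);
        res.setChunked(true);
        channel.write(res);

        // 2) Wait for the event on another thread, then send the body and close.
        new Thread(new Runnable() {
            public void run() {
                try {
                    String data = events.take(); // blocks until something is queued
                    channel.write(new DefaultHttpChunk(
                            ChannelBuffers.copiedBuffer(data, CharsetUtil.UTF_8)));
                    channel.write(HttpChunk.LAST_CHUNK)
                           .addListener(ChannelFutureListener.CLOSE);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    channel.close();
                }
            }
        }).start();
    }
}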
You can use the netty-socketio project. It's an implementation of a Socket.IO server with long-polling support. On the web side you can use the Socket.IO client JavaScript lib.
You could also do the following in sfnrpc (http://code.google.com/p/sfnrpc):
Object object = RPCClient.getInstance().invoke("#URN1", "127.0.0.1:6878", "echo", true, 60, "", objArr, classArr, sl);
The true argument causes the communication to be synchronous.