I'd like to do something similar to the example posted on Restlet's site (the "first application"), with one difference:
I want to stream data, not primitive types, through an interface.
I want to define some kind of interface between the client and the server, stream data between them, and let Restlet handle the transmission seamlessly.
Example of what I have in mind:
interface Streaming {
    InputStream startStream(String streamId);
}
When the client invokes a call, it starts reading from the InputStream. The server receives the call and starts providing the stream by creating an InputStream (for example, from a video file, or just some raw data). Restlet should read from the InputStream on the server side and expose the data as an InputStream on the client side.
Any idea how I can achieve this? A code sample or a link to one would be great. Thanks.
Below is example code showing what I have learned so far: an interface with streaming capabilities and a client-server streaming example.
I haven't yet added parameters to the interface, and it only supports download, no upload yet.
Interface:
public interface DownloadResource {
    public ReadableRepresentation download();
}
Interface with protocol (separating the logic from the technology):
public interface DownloadResourceProtocol extends DownloadResource {
    @Get
    @Override
    public ReadableRepresentation download();
}
Client:
ClientResource cr = new ClientResource("http://10.0.2.2:8888/download/");
cr.setRequestEntityBuffering(true);
DownloadResource downloadResource = cr.wrap(DownloadResourceProtocol.class);

// Remote invocation - seamless:
Representation representation = downloadResource.download();

// Using the data:
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
IOUtils.copy(representation.getStream(), byteArrayOutputStream);
byte[] byteArray = byteArrayOutputStream.toByteArray();
Log.i("Client", "Byte array: " + Arrays.toString(byteArray)); // Android's Log.i needs a tag as first argument
Server:
public class DownloadResourceImpl extends ServerResource implements DownloadResourceProtocol {
    @Override
    public ReadableRepresentation download() {
        try {
            // Wrap any InputStream (here, ten sample bytes) in a channel for streaming:
            InputStreamChannel inputStreamChannel =
                    new InputStreamChannel(new ByteArrayInputStream(new byte[]{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}));
            return new ReadableRepresentation(inputStreamChannel, MediaType.ALL);
        } catch (IOException e) {
            e.printStackTrace();
            return null;
        }
    }
}
Configuration:
public class SampleApplication extends Application {
    @Override
    public Restlet createInboundRoot() {
        Router router = new Router(getContext());
        router.attach("/download/", DownloadResourceImpl.class);
        return router;
    }
}
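For completeness, the application can be served by attaching it to a Restlet Component. A minimal bootstrap sketch (the main method is mine; the port matches the client URL above):

public static void main(String[] args) throws Exception {
    Component component = new Component();
    component.getServers().add(Protocol.HTTP, 8888);            // same port as in the client URL
    component.getDefaultHost().attach(new SampleApplication()); // route requests to the router above
    component.start();
}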
Not sure this addresses your problem completely, but one approach is to create a thread that streams data back to the client using ReadableRepresentation and a Pipe.
Create a pipe:
Pipe pipe = Pipe.open();
Create a representation like this:
ReadableRepresentation r = new ReadableRepresentation(pipe.source(), mediatype);
Start a separate thread that writes batches of bytes to the pipe like this:
pipe.sink().write(ByteBuffer.wrap(someBytes));
Finally, return the representation to the client.
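Putting those steps together, a minimal sketch (the media type, the payload bytes, and the inline thread are illustrative; assume this runs inside a resource method that declares IOException):

// Stream bytes to the client through a java.nio.channels.Pipe.
Pipe pipe = Pipe.open();
ReadableRepresentation representation =
        new ReadableRepresentation(pipe.source(), MediaType.APPLICATION_OCTET_STREAM);

// Writer thread: push batches of bytes into the pipe, then close the sink
// so the client sees end-of-stream.
new Thread(() -> {
    try (Pipe.SinkChannel sink = pipe.sink()) {
        for (int i = 0; i < 10; i++) {
            byte[] someBytes = new byte[]{(byte) i}; // illustrative payload
            sink.write(ByteBuffer.wrap(someBytes));
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
}).start();

return representation;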
I am trying to find a way to create a hot stream where I can push data in one method and a subscriber can receive the data in another method.
I have succeeded using a WorkQueueProcessor, but I am not sure this is the right way of doing it. Is it possible to do the same thing using Flux.create?
Here's my working snippet:
Call connect(), then send byte data to the server; the client receives a response from the TCP server, and the workQueueProcessor emits the data.
@Component
@RequiredArgsConstructor
public class TcpCli {

    // LOGGER and tcpConfig are assumed to be defined/injected elsewhere in the class.
    @Setter
    private Connection connection;
    private NettyOutbound out;

    // Creation of the WorkQueueProcessor; could a Flux.create do the same job here?
    private WorkQueueProcessor<String> workQueueProcessor = WorkQueueProcessor.<String>builder().build();

    public Mono<? extends Connection> connect() {
        return TcpClient.create()
                .host(tcpConfig.getHost())
                .port(tcpConfig.getPort())
                .handle(this::handleConnection)
                .connect();
    }

    public Mono<String> sendData(byte[] data) {
        out.sendByteArray(Mono.just(data)).then().subscribe();
        // Get the emitted data from the workQueueProcessor
        return workQueueProcessor.next();
    }

    private Publisher<Void> handleConnection(NettyInbound in, NettyOutbound out) {
        this.out = out;
        in.receive().asString()
                .log("In received")
                .subscribe(str -> {
                    LOGGER.info(String.format("Inbound: %s", str));
                    // Emit the data to the workQueueProcessor
                    workQueueProcessor.onNext(str);
                });
        return out
                .neverComplete() // keep the connection alive
                .log("Never close");
    }
}
If you have multiple input sources and more than one subscriber, I think you are on the right track.
If the NettyInbound is the only input source of your stream, then you don't need any Processor at all; just subscribe to it.
If you have multiple input sources for your stream and only one subscriber (in your case the NettyOutbound), you may try UnicastProcessor, which is much more lightweight.
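As for the Flux.create part of the question: a hot stream can be built by capturing the FluxSink and sharing the flux. A minimal sketch, with hypothetical class and method names (not taken from the code above):

import java.util.concurrent.atomic.AtomicReference;

import reactor.core.publisher.Flux;
import reactor.core.publisher.FluxSink;

public class HotStream {
    private final AtomicReference<FluxSink<String>> sinkRef = new AtomicReference<>();
    private final Flux<String> flux = Flux.<String>create(sinkRef::set)
            .publish()
            .autoConnect(); // becomes hot on the first subscription

    // Producer side (e.g. the Netty inbound handler) pushes values here.
    public void emit(String value) {
        FluxSink<String> sink = sinkRef.get();
        if (sink != null) { // values emitted before the first subscriber are dropped
            sink.next(value);
        }
    }

    // Consumer side (e.g. sendData) subscribes here.
    public Flux<String> stream() {
        return flux;
    }
}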
Can you explain what the @Setter does here? I tried the same code and it always gives the out
@Setter
private Connection connection;
private NettyOutbound out;
I'm new to Apache Camel. On HP NonStop there is a Receiver that receives events generated by the event manager, roughly like a stream. My goal is to set up a consumer endpoint which receives the incoming messages and processes them through Camel.
At the other endpoint I simply need to write them to the logs. From my study I understood that for the consumer endpoint I need to create my own component, with a configuration like:
from("myComp:receive").to("log:net.javaforge.blog.camel?level=INFO")
Here is my code snippet which receives messages from the event system.
Receive receive = com.tandem.ext.guardian.Receive.getInstance();
byte[] maxMsg = new byte[500]; // holds largest possible request
short errorReturn = 0;
int countRead;                 // number of bytes read (declaration added for completeness)
boolean moreOpeners = true;
do { // read messages from $receive until last close
    try {
        countRead = receive.read(maxMsg, maxMsg.length);
        String receivedMessage = new String(maxMsg, "UTF-8");
        // Here I need to hand receivedMessage over to Camel
    } catch (ReceiveNoOpeners ex) {
        moreOpeners = false;
    } catch (Exception e) {
        moreOpeners = false;
    }
} while (moreOpeners);
Can someone give me some hints on how to turn this into a consumer?
The 10,000-foot view is this:
You need to start out by implementing a component. The easiest way to get started is to extend org.apache.camel.impl.DefaultComponent. The only thing you have to do is override DefaultComponent::createEndpoint(..). Quite obviously, what it does is create your endpoint.
So the next thing you need is to implement your endpoint. Extend org.apache.camel.impl.DefaultEndpoint for this. Override at minimum DefaultEndpoint::createConsumer(Processor) to create your own consumer.
Last but not least, you need to implement the consumer. Again, it is best to extend org.apache.camel.impl.DefaultConsumer. The consumer is where the code that generates your messages has to go. Through the constructor you receive a reference to your endpoint. Use the endpoint reference to create a new Exchange, populate it, and send it on its way along the route. Something along the lines of:
Exchange ex = endpoint.createExchange(ExchangePattern.InOnly);
setMyMessageHeaders(ex.getIn(), myMessageMetaData);
setMyMessageBody(ex.getIn(), myMessage);
getAsyncProcessor().process(ex, new AsyncCallback() {
    @Override
    public void done(boolean doneSync) {
        LOG.debug("Message was processed " + (doneSync ? "synchronously" : "asynchronously"));
    }
});
I recommend you pick a simple component (DirectComponent ?) as an example to follow.
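For reference, a rough skeleton of the component and endpoint pieces described above; the class names are chosen to match the MessageConsumer posted below, and this is a sketch rather than a complete component:

import java.util.Map;

import org.apache.camel.Component;
import org.apache.camel.Consumer;
import org.apache.camel.Endpoint;
import org.apache.camel.Processor;
import org.apache.camel.Producer;
import org.apache.camel.impl.DefaultComponent;
import org.apache.camel.impl.DefaultEndpoint;

public class MessageComponent extends DefaultComponent {
    @Override
    protected Endpoint createEndpoint(String uri, String remaining, Map<String, Object> parameters) throws Exception {
        return new MessageEndpoint(uri, this);
    }
}

public class MessageEndpoint extends DefaultEndpoint {
    public MessageEndpoint(String uri, Component component) {
        super(uri, component);
    }

    @Override
    public Producer createProducer() throws Exception {
        throw new UnsupportedOperationException("This endpoint only supports consumers");
    }

    @Override
    public Consumer createConsumer(Processor processor) throws Exception {
        return new MessageConsumer(this, processor);
    }

    @Override
    public boolean isSingleton() {
        return true;
    }
}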
Adding my own consumer component here; it may help someone.
public class MessageConsumer extends DefaultConsumer {
    private final MessageEndpoint endpoint;
    private boolean moreOpeners = true;

    public MessageConsumer(MessageEndpoint endpoint, Processor processor) {
        super(endpoint, processor);
        this.endpoint = endpoint;
    }

    @Override
    protected void doStart() throws Exception {
        super.doStart();
        int countRead = 0; // number of messages read
        do {
            countRead++;
            String msg = String.valueOf(countRead) + " " + System.currentTimeMillis();
            Exchange ex = endpoint.createExchange(ExchangePattern.InOnly);
            ex.getIn().setBody(msg);
            getAsyncProcessor().process(ex, new AsyncCallback() {
                @Override
                public void done(boolean doneSync) {
                    log.info("Message was processed " + (doneSync ? "synchronously" : "asynchronously"));
                }
            });
            // This is an echo server so echo the request back to the requester
        } while (moreOpeners);
    }

    @Override
    protected void doStop() throws Exception {
        moreOpeners = false;
        log.debug("Message processor is shutdown");
        super.doStop();
    }
}
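To wire it up, the component would be registered under the "myComp" scheme used in the route from the question; a hypothetical setup:

// Illustrative wiring: register the custom component and reuse the route above.
CamelContext context = new DefaultCamelContext();
context.addComponent("myComp", new MessageComponent());
context.addRoutes(new RouteBuilder() {
    @Override
    public void configure() {
        from("myComp:receive").to("log:net.javaforge.blog.camel?level=INFO");
    }
});
context.start();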
I have an OutputStream which can be initialized as a chain of OutputStreams. There could be any level of chaining. The only thing guaranteed is that at the end of the chain is a FileOutputStream.
I need to recreate this chained OutputStream with a modified filename in the FileOutputStream. This would have been possible if the out variable (which stores the underlying chained OutputStream) were accessible, as shown below.
public OutputStream recreateChainedOutputStream(OutputStream os) throws IOException {
    if (os instanceof FileOutputStream) {
        return new FileOutputStream("somemodified.filename");
    } else if (os instanceof FilterOutputStream) {
        // only possible if the protected 'out' field were accessible:
        return recreateChainedOutputStream(((FilterOutputStream) os).out);
    }
    return os;
}
Is there any other way of achieving the same?
You can use reflection to access the out field of the FilterOutputStream, but this has some drawbacks:
If the other OutputStream is also a kind of RolloverOutputStream, you can have a hard time reconstructing it,
If the other OutputStream has custom settings, like a GZip compression parameter, you cannot reliably read them back.
If there is a
A quick and dirty implementation of recreateChainedOutputStream(...) might be:
private static final Field OUT_FIELD;
static {
    try {
        // 'out' is protected, so getDeclaredField + setAccessible is required
        OUT_FIELD = FilterOutputStream.class.getDeclaredField("out");
        OUT_FIELD.setAccessible(true);
    } catch (Exception e) {
        throw new RuntimeException(e);
    }
}

public OutputStream recreateChainedOutputStream(OutputStream os) throws IOException {
    if (os instanceof FilterOutputStream) {
        try {
            // rebuild this wrapper around the recreated inner stream
            Constructor<?> con = os.getClass().getConstructor(OutputStream.class);
            OutputStream inner = (OutputStream) OUT_FIELD.get(os);
            return (OutputStream) con.newInstance(recreateChainedOutputStream(inner));
        } catch (ReflectiveOperationException e) {
            throw new IOException(e);
        }
    } else if (os instanceof FileOutputStream) {
        return new FileOutputStream("somemodified.filename");
    } else {
        // Other output streams...
        return os;
    }
}
While this may be OK in your current application, it is a big no-no in the production world because of the large number of different kinds of OutputStream your application may receive.
A better way to solve this would be a kind of Function<String, OutputStream> that works as a factory to create OutputStreams for the named file. This way the external API keeps its control over the OutputStreams while your API can address multiple file names. An example of this would be:
public class MyApi {
    private final Function<String, OutputStream> fileProvider;
    private OutputStream current;

    public MyApi(Function<String, OutputStream> fileProvider, String defaultFile) throws IOException {
        this.fileProvider = fileProvider;
        selectNewOutputFile(defaultFile);
    }

    public void selectNewOutputFile(String name) throws IOException {
        OutputStream previous = this.current;
        this.current = fileProvider.apply(name);
        if (previous != null) previous.close();
    }
}
This can then be used in other applications as:
MyApi api = new MyApi(name -> new FileOutputStream(name), "default.file"); // second argument: initial file name (placeholder)
for simple FileOutputStreams (note that in real code the lambda has to handle the checked FileNotFoundException, e.g. by wrapping it in an UncheckedIOException), or as:
MyApi api = new MyApi(name ->
        new GZIPOutputStream(
            new CipherOutputStream(
                new CheckedOutputStream(
                    new FileOutputStream(name),
                    new CRC32()),
                cipher),
            1024,
            true),
        "default.file");
for a file stream that is checksummed with a CRC32, encrypted with the cipher, and gzipped with a 1024-byte buffer in sync-flush mode.
I'm writing a Play 2.0 Java application that allows users to upload files. Those files are stored on a third-party service I access using a Java library; the method I use in this API has the following signature:
void store(InputStream stream, String path, String contentType)
I've managed to make uploads work using the following simple controller:
public static Result uploadFile(String path) {
    MultipartFormData body = request().body().asMultipartFormData();
    FilePart filePart = body.getFile("files[]");
    InputStream is = new FileInputStream(filePart.getFile());
    myApi.store(is, path, filePart.getContentType());
    return ok();
}
My concern is that this solution is not efficient, because by default the Play framework stores all the data uploaded by the client in a temporary file on the server and only then calls my uploadFile() method in the controller.
In a traditional servlet application I would have written a servlet behaving this way:
myApi.store(request.getInputStream(), ...)
I have been searching everywhere and didn't find any solution. The closest example I found is "Why makes calling error or done in a BodyParser's Iteratee the request hang in Play Framework 2.0?", but I didn't find how to modify it to fit my needs.
Is there a way in Play 2 to achieve this behavior, i.e. having the data uploaded by the client go "through" the web application directly to another system?
Thanks.
I've been able to stream data to my third-party API using the following Scala controller code:
def uploadFile() =
  Action( parse.multipartFormData(myPartHandler) )
  {
    request => Ok("Done")
  }

def myPartHandler: BodyParsers.parse.Multipart.PartHandler[MultipartFormData.FilePart[Result]] = {
  parse.Multipart.handleFilePart {
    case parse.Multipart.FileInfo(partName, filename, contentType) =>
      // Still dirty: the path of the file is in the partName...
      val path = partName

      // Set up the PipedOutputStream here, give the input stream to a worker thread
      val pos: PipedOutputStream = new PipedOutputStream()
      val pis: PipedInputStream = new PipedInputStream(pos)
      val worker: UploadFileWorker = new UploadFileWorker(path, pis)
      worker.contentType = contentType.get
      worker.start()

      // Fold the incoming chunks into the PipedOutputStream
      Iteratee.fold[Array[Byte], PipedOutputStream](pos) { (os, data) =>
        os.write(data)
        os
      }.mapDone { os =>
        os.close()
        Ok("upload done")
      }
  }
}
The UploadFileWorker is a really simple Java class that contains the call to the third-party API.
public class UploadFileWorker extends Thread {
    String path;
    PipedInputStream pis;
    public String contentType = "";

    public UploadFileWorker(String path, PipedInputStream pis) {
        super();
        this.path = path;
        this.pis = pis;
    }

    public void run() {
        try {
            myApi.store(pis, path, contentType);
            pis.close();
        } catch (Exception ex) {
            ex.printStackTrace();
            try { pis.close(); } catch (Exception ex2) {}
        }
    }
}
It's not completely perfect, because I would have preferred to receive the path as a parameter to the Action, but I haven't been able to do so. I have therefore added a piece of JavaScript that updates the name of the input field (and thus the partName), and it does the trick.
In both the client and the server classes, I have exactly the same inner class called Data. This Data object is sent from the server using:
ObjectOutputStream output = new ObjectOutputStream(socket.getOutputStream());
output.writeObject(d);
(where d is a Data object)
This object is received on the client side and cast to a Data object:
ObjectInputStream input = new ObjectInputStream(socket.getInputStream());
Object receiveObject = input.readObject();
if (receiveObject instanceof Data) {
    Data receiveData = (Data) receiveObject;
    // some code here...
}
I'm getting a java.lang.ClassNotFoundException: TCPServer$Data on the line Object receiveObject = input.readObject();
My guess is that the client is trying to look for the server-side Data class and can't find it, but I'm not sure. How do I fix this?
What you were trying to do is something along the lines of the following:
class TCPServer {
    /* some code */
    class Data {
    }
}

class TCPClient {
    /* some code */
    class Data {
    }
}
Then you are serializing a TCPServer$Data and trying to deserialize it as a TCPClient$Data. Instead, you want to do this:
class TCPServer {
    /* some code */
}

class TCPClient {
    /* some code */
}

class Data {
    /* some code */
}
Then make sure the Data class is available to both the client and the server programs.
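For example, a minimal shared Data class that both programs compile against (the payload field is illustrative), typically packaged in a common jar:

import java.io.Serializable;

public class Data implements Serializable {
    // A fixed serialVersionUID keeps both sides agreeing on the class version.
    private static final long serialVersionUID = 1L;

    private final String payload; // example field

    public Data(String payload) {
        this.payload = payload;
    }

    public String getPayload() {
        return payload;
    }
}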
When you use a class in two different JVMs and you are marshalling/unmarshalling it, the class should be extracted into a common library and shared between the server and the client. Having two different classes won't work.
What you are trying to do is marshal a TCPServer$Data and unmarshal it as a TCPClient$Data. Those are incompatible.