Custom-sized Flux DataBuffer within WebSocket - Java

I'm trying to build a live video stream where the backend receives chunks, stores them in a DataBuffer, and continuously writes them to a file. But instead of a DataBuffer and file that keep growing, I want a DataBuffer that is limited in size and (hopefully) works like a FIFO, so that when new chunks are pushed in, older chunks are pushed out.
What I tried is creating a DataBuffer with the DataBufferFactory and allocateBuffer(), but this always gives me type mismatch errors, because on the one hand I have the Flux<DataBuffer> videoDataFlux that contains the chunk data, and on the other hand a DataBufferFactory, and the two somehow can't be combined.
So, what's the way to go?
What is working so far:
@Override
public Mono<Void> handle(WebSocketSession webSocketSession) {
    String filename = "strm";
    Path path = FileSystems.getDefault().getPath(
            "C:\\Program Files (x86)\\Apache Software Foundation\\Tomcat 9.0\\webapps\\stream\\videos");
    Flux<DataBuffer> videoDataFlux = webSocketSession.receive()
            .map(WebSocketMessage::getPayload);
    try {
        Path file = Files.createTempFile(path, filename, ".webm");
        AsynchronousFileChannel channel = AsynchronousFileChannel.open(file, StandardOpenOption.WRITE);
        return (DataBufferUtils.write(videoDataFlux, channel, 0)
                .then()
                .doOnNext(s -> {
                    if ((!videoDataFlux.equals(null)) && (!webSocketSession.equals(null))) {
                        DataBufferUtils.write(videoDataFlux, channel, 0).subscribe();
                    }
                }));
    } catch (IOException e) {
    }
    return null;
}
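For reference, a minimal sketch of the FIFO idea described in the question, assuming a hypothetical chunk limit maxChunks and a plain ArrayDeque (neither appears in the code above): keep only the most recent chunks and release evicted buffers so they don't leak.

int maxChunks = 64; // assumed limit, tune to how much video you want to retain
Deque<DataBuffer> recentChunks = new ArrayDeque<>();

videoDataFlux
        .doOnNext(buffer -> {
            recentChunks.addLast(DataBufferUtils.retain(buffer)); // keep a reference to the newest chunk
            if (recentChunks.size() > maxChunks) {
                DataBufferUtils.release(recentChunks.removeFirst()); // drop the oldest chunk (FIFO)
            }
        })
        .subscribe();

This only shows how to bound the retained window; writing the retained chunks to the file would be a separate step.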

Related

How to copy S3 object from one region to another when VPC endpoint is enabled

Recently I was unable to copy files using s3.copyObject(sourceBucket, sourceKey, destBucket, destKey); for two reasons:
1) The source and destination buckets are in two different regions (us-east-1 and us-east-2 in my case).
2) The server resides in a VPC that has an S3 endpoint enabled. An S3 endpoint is an internal connection to S3, but only within the same region.
Given that we are moving large files, we could not download and then re-upload them, even temporarily. We also wanted to keep the S3 endpoint in place, because the application makes heavy use of S3 assets once in-region.
The solution is to stream-copy the files from one bucket to the other. I wrote this simple function which handles it.
ZipException is just a custom exception. Throw whatever you want.
Hopefully this helps somebody.
public static void copyObject(AmazonS3 sourceClient, AmazonS3 destClient, String sourceBucket,
        String sourceKey, String destBucket, String destKey) throws IOException {
    S3ObjectInputStream inStream = null;
    try {
        // Open a stream to the source object using the source-region client
        GetObjectRequest request = new GetObjectRequest(sourceBucket, sourceKey);
        S3Object object = sourceClient.getObject(request);
        inStream = object.getObjectContent();
        // Stream the content straight into the destination bucket using the destination-region client
        destClient.putObject(destBucket, destKey, inStream, object.getObjectMetadata());
    } catch (SdkClientException e) {
        throw new ZipException("Unable to copy file.", e);
    } finally {
        if (inStream != null) {
            inStream.close();
        }
    }
}

Upload file in Vert.x and convert it into a byte array to insert into a database

I need to write file-upload code using Vert.x and then save the file into a PostgreSQL table, but since the file is uploaded in multipart form and asynchronously, I am unable to get the complete byte array. Following is my code:
public static void uploadLogo(RoutingContext routingContext) {
    HttpServerRequest request = routingContext.request();
    HttpServerResponse response = routingContext.response();
    request.setExpectMultipart(true);
    request.uploadHandler(upload -> {
        upload.handler(chunk -> {
            byte[] fileBytes = chunk.getBytes();
        });
        upload.endHandler(endHandler -> {
            System.out.println("uploaded successfully");
        });
        upload.exceptionHandler(cause -> {
            request.response().setChunked(true).end("Upload failed");
        });
    });
}
Here I get a byte array in fileBytes, but only one part at a time. I don't understand how to append the next byte array to it, since it all works asynchronously. Is there any way to get the byte array of the entire file?
Hi, I was able to extract the bytes by using the code below:
router.post("/upload").handler(ctx -> {
    ctx.request().setExpectMultipart(true);
    ctx.request().bodyHandler(buffer -> {
        byte[] bytes = buffer.getBytes();
        // transfer bytes to whichever service you want from here
    });
    ctx.response().end();
});
Request context has .fileUploads() method for that.
See here for the full example: https://github.com/vert-x3/vertx-examples/blob/master/web-examples/src/main/java/io/vertx/example/web/upload/Server.java#L42
If you want to access uploaded files:
Vertx vertx = Vertx.vertx();
Router router = Router.router(vertx);

router.post("/upload").handler(ctx -> {
    for (FileUpload fu : ctx.fileUploads()) {
        vertx.fileSystem().readFile(fu.uploadedFileName(), fileHandler -> {
            // Do something with buffer
        });
    }
});
To get the uploaded files you have to use the fileUploads() method; after that you can get the byte array:
JsonArray attachments = new JsonArray();
for (FileUpload f : routingContext.fileUploads()) {
    Buffer fileUploaded = routingContext.vertx().fileSystem().readFileBlocking(f.uploadedFileName());
    attachments.add(new JsonObject()
            .put("body", fileUploaded.getBytes())
            .put("contentType", f.contentType())
            .put("fileName", f.fileName()));
}
You need to build it manually by appending the incoming parts to a Buffer inside upload.handler. Once upload.endHandler is called, the upload has finished and you can get the resulting buffer and its byte array.
request.uploadHandler(upload -> {
    // Accumulate all incoming chunks into a single Buffer
    // (the variable must be effectively final to be captured by the lambdas)
    Buffer cache = Buffer.buffer();
    upload.handler(chunk -> cache.appendBuffer(chunk));
    upload.endHandler(end -> {
        // Upload finished: the complete file content is in the buffer
        byte[] result = cache.getBytes();
    });
});

Heap exception when writing a file on my server

I have a problem writing files on my server. I have two approaches to file upload: one with a Spring MultipartFile, which works fine, and one with a byte array, which throws java.lang.OutOfMemoryError: Java heap space when the file is big (I can see the occupied RAM increase with this method). I use a byte array because I pass the file between REST web services, wrapping the content in an object that also carries the file name.
This is my code for file write:
@Override
public void storeAcquisition(Response response, String toStorePath) throws Exception {
    @SuppressWarnings("unchecked")
    LinkedHashMap<String, String> result = (LinkedHashMap<String, String>) response.getResult();
    Files.write(Paths.get(toStorePath + "/" + result.get("name")),
            DatatypeConverter.parseBase64Binary(result.get("content")));
}
This is the sender method
byte[] file = getFile(filePath);
if (file != null) {
    FileTransfer fileTransfer = new FileTransfer(file, Paths.get(filePath).getFileName().toString());
    // TODO POST TO SERVER
    Response responseSend = new Response(true, true, fileTransfer, null);
    RestTemplate restTemplate = new RestTemplate();
    Response responseStatus = restTemplate.postForObject(serverIp + "ATS/client/file/?toStorePath={toStorePath}",
            responseSend, Response.class, toStorePath);
    return responseStatus;
}
and
public byte[] getFile(String path) throws Exception {
    if (Files.exists(Paths.get(path)))
        return Files.readAllBytes(Paths.get(path));
    else
        return null;
}
Is there a problem in my code, or is the only solution to use a server with more memory?
The problem seems to occur at restTemplate.postForObject: if I put a breakpoint on the LinkedHashMap<String,String> result line, it never gets hit.
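As an aside, one way to avoid holding the whole file (plus its Base64 copy) on the heap is to stream it as a multipart request instead of embedding a byte array in the Response object. Here is a rough sketch with Spring's RestTemplate, reusing the serverIp and toStorePath variables from the sender code; the part name "file" is an assumption, and the receiving endpoint would have to accept a MultipartFile:

// Hypothetical sketch: stream the file from disk instead of base64-encoding it in memory.
RestTemplate restTemplate = new RestTemplate();

MultiValueMap<String, Object> body = new LinkedMultiValueMap<>();
body.add("file", new FileSystemResource(filePath)); // streamed from disk, no full byte[] on the heap

HttpHeaders headers = new HttpHeaders();
headers.setContentType(MediaType.MULTIPART_FORM_DATA);

Response responseStatus = restTemplate.postForObject(
        serverIp + "ATS/client/file/?toStorePath={toStorePath}",
        new HttpEntity<>(body, headers), Response.class, toStorePath);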

Problems with Facebook's Conceal library

I'm having issues reading decrypted data from Conceal. It looks like I can't correctly finish streaming.
I suspect there is some issue with Conceal, because when I switch my proxy stream (just the encryption part) to not run through Conceal, everything works as expected. I'm also assuming that writing is OK: there is no exception whatsoever and I can find the encrypted file on disk.
I'm proxying my data through a ContentProvider to allow other apps to read the decrypted data when the user wants it (sharing, ...).
In my content provider I'm using the openFile method to let ContentResolvers read the data:
@Override
public ParcelFileDescriptor openFile(Uri uri, String mode) throws FileNotFoundException {
    try {
        ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
        String name = uri.getLastPathSegment();
        File file = new File(name);
        InputStream fileContents = mStorageProxy.getDecryptInputStream(file);
        ParcelFileDescriptor.AutoCloseOutputStream stream = new ParcelFileDescriptor.AutoCloseOutputStream(pipe[1]);
        PipeThread pipeThread = new PipeThread(fileContents, stream);
        pipeThread.start();
        return pipe[0];
    } catch (IOException e) {
        e.printStackTrace();
    }
    return null;
}
I guess that in the Facebook app the Android team is probably using a standard query() method with a byte array sent in MediaStore.MediaColumns, which is not suitable for me because I'm not only encrypting media files, and I also prefer the stream-based approach.
This is how I'm reading from the InputStream. It's basically a pipe between two ParcelFileDescriptors. The InputStream comes from Conceal; originally it is a FileInputStream wrapped in a BufferedInputStream.
static class PipeThread extends Thread {
    InputStream input;
    OutputStream out;

    PipeThread(InputStream inputStream, OutputStream out) {
        this.input = inputStream;
        this.out = out;
    }

    @Override
    public void run() {
        byte[] buf = new byte[1024];
        int len;
        try {
            while ((len = input.read(buf)) > 0) {
                out.write(buf, 0, len);
            }
            input.close();
            out.flush();
            out.close();
        } catch (IOException e) {
            Log.e(getClass().getSimpleName(), "Exception transferring file", e);
        }
    }
}
I've tried other ways of reading the stream, so that really shouldn't be the issue.
Finally, here's the exception I constantly end up with. Do you know what the issue could be? It points to native calls, which I got lost in.
Exception transferring file
com.facebook.crypto.cipher.NativeGCMCipherException: decryptFinal
at com.facebook.crypto.cipher.NativeGCMCipher.decryptFinal(NativeGCMCipher.java:108)
at com.facebook.crypto.streams.NativeGCMCipherInputStream.ensureTagValid(NativeGCMCipherInputStream.java:126)
at com.facebook.crypto.streams.NativeGCMCipherInputStream.read(NativeGCMCipherInputStream.java:91)
at com.facebook.crypto.streams.NativeGCMCipherInputStream.read(NativeGCMCipherInputStream.java:76)
EDIT:
It looks like the stream is working OK, but what fails is the last iteration of reading from it. As I'm using a buffer, it seems that the buffer being bigger than the amount of remaining data is causing the issue. I've been looking into the Conceal sources and they seem fine in this regard. Could it be failing somewhere in the native layer?
Note: I've managed to get the decrypted file except for its final chunk of bytes, so I end up with, for example, an incomplete image file (with the last few thousand pixels not being displayed).
From my limited experience with Conceal, I have noticed that only the same application that encrypted a file can decrypt it successfully, regardless of whether another app has the same package name. Keep this in mind.
This was resolved in https://github.com/facebook/conceal/issues/24. For posterity's sake, the problem here is that the author forgot to call close() on the output stream.
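In other words, the encrypting output stream itself must be closed, because with GCM the authentication tag is only written on close, and decryptFinal then fails if the tag is missing. A minimal sketch of the writing side, assuming a hypothetical helper getEncryptingOutputStream() that wraps the file stream with Conceal (the exact Conceal calls are not shown here):

// Hypothetical: cipherOut is Conceal's encrypting stream wrapped around the file output stream.
try (InputStream plainIn = new BufferedInputStream(new FileInputStream(sourceFile));
     OutputStream cipherOut = getEncryptingOutputStream(new FileOutputStream(encryptedFile))) {
    byte[] buf = new byte[1024];
    int len;
    while ((len = plainIn.read(buf)) > 0) {
        cipherOut.write(buf, 0, len);
    }
} // closing cipherOut here is what writes the final GCM authentication tag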

Java data object for bidirectional I/O

I am developing an interface that takes as input an encrypted byte stream -- probably a very large one -- and generates output of more or less the same format.
The input format is this:
{N byte envelope}
- encryption key IDs &c.
{X byte encrypted body}
The output format is the same.
Here's the usual use case (heavily pseudocoded, of course):
Message incomingMessage = new Message(inputStream);
ProcessingResults results = process(incomingMessage);

MessageEnvelope messageEnvelope = new MessageEnvelope();
// set message encryption options &c. ...
Message outgoingMessage = new Message();
outgoingMessage.setEnvelope(messageEnvelope);
writeProcessingResults(results, outgoingMessage);
outgoingMessage.writeToOutput(outputStream);
To me, it seems to make sense to use the same object to encapsulate this behaviour, but I'm at a bit of a loss as to how I should go about it. It isn't practical to load the entire encrypted body at once; I need to be able to stream it (so I'll be using some kind of input stream filter to decrypt it), but at the same time I need to be able to write out new instances of this object. What's a good approach to making this work? What should Message look like internally?
I wouldn't create one class to handle both input and output - one class, one responsibility. I would use two filter streams, one for input/decryption and one for output/encryption:
InputStream decrypted = new DecryptingStream(inputStream, decryptionParameters);
...
OutputStream encrypted = new EncryptingStream(outputSream, encryptionOptions);
They may have something like a lazy-init mechanism: reading the envelope before the first read() call, and writing the envelope before the first write() call. You can still use classes like Message or MessageEnvelope inside the filter implementations, but they may stay package-private non-API classes.
The processing then knows nothing about encryption or decryption; it just works on a stream. You may also use both streams at the same time during processing, streaming the processing input and output.
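A rough sketch of the lazy-init idea, assuming a hypothetical DecryptingStream class and the DecryptionParameters type from the snippet above (none of this comes from an actual library): the envelope is parsed the first time the caller reads.

// Hypothetical sketch of a decrypting filter stream with lazy envelope parsing.
class DecryptingStream extends FilterInputStream {

    private boolean initialized;

    DecryptingStream(InputStream in, DecryptionParameters params) {
        super(in);
        // params would configure key lookup etc.; omitted in this sketch
    }

    private void ensureInitialized() throws IOException {
        if (!initialized) {
            // Read the N-byte envelope from the underlying stream and set up the cipher here.
            initialized = true;
        }
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        ensureInitialized();
        // A real implementation would decrypt here; this sketch just passes bytes through.
        return super.read(b, off, len);
    }

    @Override
    public int read() throws IOException {
        ensureInitialized();
        return super.read();
    }
}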
Can you split the body at arbitrary locations?
If so, I would have two threads, input thread and output thread and have a concurrent queue of strings that the output thread monitors. Something like:
ConcurrentLinkedQueue<String> outputQueue = new ConcurrentLinkedQueue<String>();
...

private void readInput(Stream stream) {
    String str;
    while ((str = stream.readLine()) != null) {
        outputQueue.add(processStream(str));
    }
}

private String processStream(String input) {
    // do something
    return output;
}

private void writeOutput(Stream out) {
    while (true) {
        while (outputQueue.peek() == null) {
            sleep(100);
        }
        String msg = outputQueue.poll();
        out.write(msg);
    }
}
Note: This will definitely not work as-is. Just a suggestion of a design. Someone is welcome to edit this.
If you need to read and write at the same time, you either have to use threads (different threads reading and writing) or asynchronous I/O (the java.nio package). Using input and output streams from different threads is not a problem.
If you want to make a streaming API in Java, you should usually provide an InputStream for reading and an OutputStream for writing. This way they can be passed to other APIs, so you can chain things and keep the data flowing as streams all the way through.
Input example:
Message message = new Message(inputStream);
results = process(message.getInputStream());
Output example:
Message message = new Message(outputStream);
writeContent(message.getOutputStream());
The Message needs to wrap the given streams with classes that do the needed encryption and decryption.
Note that reading or writing multiple messages at the same time would need support from the protocol too. You need to get the synchronization right.
You should check the Wikipedia article on block cipher modes that support encryption of streams. Each encryption algorithm may support only a subset of these.
Buffered streams will allow you to read, encrypt/decrypt and write in a loop.
Examples demonstrating ZipInputStream and ZipOutputStream could provide some guidance on how you may solve this. See example.
What you need are cipher streams (CipherInputStream). Here is an example of how to use them.
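For illustration, a minimal sketch of wrapping an encrypted input with javax.crypto.CipherInputStream; the AES/CBC transformation, secretKey, iv and encryptedBodyStream are assumptions here, since the question doesn't specify the algorithm (the envelope would have to supply them):

// Assumed: secretKey (SecretKey) and iv (byte[]) come from the message envelope.
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
cipher.init(Cipher.DECRYPT_MODE, secretKey, new IvParameterSpec(iv));

// encryptedBodyStream is the raw encrypted body (assumed)
try (InputStream decrypted = new CipherInputStream(encryptedBodyStream, cipher)) {
    byte[] buf = new byte[8192];
    int n;
    while ((n = decrypted.read(buf)) > 0) {
        // process the decrypted bytes incrementally, without loading the whole body
    }
}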
I agree with Arne: the data processor shouldn't know about encryption; it just needs to read the decrypted body of the message and write out the results, while stream filters take care of the encryption. However, since this all logically operates on the same piece of information (a Message), I think it should be packaged inside one class which handles the message format, even though the encryption/decryption streams are independent of it.
Here's my idea for the structure, flipping the architecture around somewhat, and moving the Message class outside the encryption streams:
class Message {
    InputStream input;
    Envelope envelope;

    public Message(InputStream input) {
        assert input != null;
        this.input = input;
    }

    public Message(Envelope envelope) {
        assert envelope != null;
        this.envelope = envelope;
    }

    public Envelope getEnvelope() {
        if (envelope == null && input != null) {
            // Read envelope from beginning of stream
            envelope = new Envelope(input);
        }
        return envelope;
    }

    public InputStream read() {
        assert input != null;
        // Initialise the decryption stream
        return new DecryptingStream(input, getEnvelope().getEncryptionParameters());
    }

    public OutputStream write(OutputStream output) {
        // Write envelope header to output stream
        getEnvelope().write(output);
        // Initialise the encryption
        return new EncryptingStream(output, getEnvelope().getEncryptionParameters());
    }
}
Now you can use it by creating a new message for the input, and one for the output:
OutputStream output; // This is the stream for sending the message
Message inputMessage = new Message(input);
Message outputMessage = new Message(inputMessage.getEnvelope());
process(inputMessage.read(), outputMessage.write(output));
Now the process method just needs to read chunks of data as required from the input, and write results to the output:
public void process(InputStream input, OutputStream output) throws IOException {
    byte[] buffer = new byte[1024];
    int read;
    while ((read = input.read(buffer)) > 0) {
        // Process buffer, writing to output as you go.
    }
}
This all now works in lockstep, and you don't need any extra threads. You can also abort early without having to process the whole message (if the output stream is closed for example).
