How to convert Flux of ByteBuffer to Spring BodyInserter - java

I have a use case where I need to read a file from S3 and publish it to a REST service in Java.
For the implementation, I am using the AWS SDK S3 API to read the file, which returns a Flux<ByteBuffer>, and then publishing it to the REST service using the Spring WebClient.
From my exploration, the Spring WebClient requires a BodyInserter, which can be prepared using BodyInserters.fromDataBuffers. I am unable to figure out how to properly convert Flux<ByteBuffer> to Flux<DataBuffer> and then call the WebClient exchange:
Flux<ByteBuffer> byteBufferFlux = getS3File(key);
Flux<DataBuffer> dataBufferFlux = byteBufferFlux.map(byteBuffer -> {
    ????????????? Convert ByteBuffer to DataBuffer ??????
    return dataBuffer;
});
BodyInserter<Flux<DataBuffer>, ReactiveHttpOutputMessage> inserter = BodyInserters.fromDataBuffers(dataBufferFlux);
Any suggestions on how to achieve this?

You can convert it using a DefaultDataBuffer, which you can create via the DefaultDataBufferFactory:
DataBufferFactory dataBufferFactory = new DefaultDataBufferFactory();
Flux<DataBuffer> buffer = getS3File(key).map(dataBufferFactory::wrap);
BodyInserter<Flux<DataBuffer>, ReactiveHttpOutputMessage> inserter =
        BodyInserters.fromDataBuffers(buffer);
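To actually perform the exchange the question asks about, that inserter can then be handed to WebClient. A minimal sketch, using the same placeholder URL and path as the example further down; retrieve()/bodyToMono(Void.class) is just one way to trigger the request and consume the response:
WebClient.create("http://someUrl")
        .post()
        .uri("/someUri")
        .body(inserter)
        .retrieve()
        .bodyToMono(Void.class);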
You don't actually need a BodyInserter at all, though: if you are using WebClient, you can use the following method signature for body():
<T, P extends Publisher<T>> RequestHeadersSpec<?> body(P publisher, Class<T> elementClass);
You can then pass your Flux<ByteBuffer> directly into it, whilst specifying the Class to use:
WebClient.create("http://someUrl")
        .post()
        .uri("/someUri")
        .body(getS3File(key), ByteBuffer.class)

You may not need the dataBufferFlux at all and should be able to write the Flux<ByteBuffer> to your REST endpoint directly. Try this:
Flux<ByteBuffer> byteBufferFlux = getS3File(key);
BodyInserter<Flux<ByteBuffer>, ReactiveHttpOutputMessage> inserter = BodyInserters.fromPublisher(byteBufferFlux, ByteBuffer.class);

Related

Handle error in Spring Webflux MultipartFile.transferTo

I'm using spring-webflux and I wonder if someone knows how to handle errors in Mono<Void>. I'm using MultipartFile's transferTo method, which returns Mono.empty() on success and wraps exceptions in Mono.error() in other cases.
public Mono<UploadedFile> create(final User user, final FilePart file) {
    final UploadedFile uploadedFile = new UploadedFile(file.filename(), user.getId());
    final Path path = Paths.get(fileUploadConfig.getPath(), uploadedFile.getId());
    file.transferTo(path);
    uploadedFile.setFilePath(path.toString());
    return repo.save(uploadedFile);
}
I want to save UploadedFile only in case transferTo ended successfully. But I can't use map/flatMap because empty Mono obviously doesn't emit value. onErrorResume only accepts Mono with same type (Void).
Hi, try to chain your operators like this:
...
return Mono.just(file)
        .flatMap(f -> f.transferTo(path))   // flatMap (not map) so the Mono<Void> is actually subscribed
        .then(Mono.just(uploadedFile))
        .flatMap(uF -> {
            uF.setFilePath(path.toString());
            return repo.save(uF);
        });
}
If your transferTo finishes successfully, the then operator is called.
P.S. If I'm not mistaken, FilePart is blocking; try to avoid it.
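Addressing the onErrorResume concern in the question: once the chain continues with then(Mono.just(uploadedFile)), its element type is UploadedFile rather than Void, so the usual error operators apply. A minimal sketch using the question's own types; FileUploadException is a hypothetical application exception, not something Spring provides:
return file.transferTo(path)                        // Mono<Void>: completes empty or signals an error
        .then(Mono.defer(() -> {                    // runs only if transferTo completed successfully
            uploadedFile.setFilePath(path.toString());
            return repo.save(uploadedFile);
        }))
        .onErrorMap(e -> new FileUploadException(   // hypothetical wrapper exception
                "Could not store " + file.filename(), e));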

Why does my Spring WebFlux application make a temp file on every request?

Why does Spring WebFlux (or Java NIO) make multipartData DataBuffer tmp files?
In my case, on macOS, files like /private/var/folders/v6/vtrxqpbd4lb3pq8v_sbm10hc0000gn/T/nio-file-upload/nio-body-1-82f11dbe-61b3-4e5d-8c43-92e02aa38481.tmp are created on each request and then deleted.
Is it possible to improve performance by preventing the disk write?
This is my code:
public class FileHandler {
    public Mono<ServerResponse> postFile(ServerRequest req) {
        val file = req.multipartData()
                .map(map -> map.getFirst("file"))
                .ofType(FilePart.class);
        val buffer = file.flatMap(part -> part.content().next());
        val hash = buffer.map(d -> {
            try {
                val md = MessageDigest.getInstance("SHA-1");
                md.update(d.asByteBuffer());
                return Base64Utils.encodeToString(md.digest());
            } catch (NoSuchAlgorithmException e) {
                // does not reach here!
                return "";
            }
        });
        val name = file.map(FilePart::filename);
        return ok().body(hash, String.class);
    }
}
The multipart file support in Spring WebFlux is using the Synchronoss NIO Multipart library. The downside of that implementation is that it's not fully reactive and, as a result, it can create temporary files so that it does not have to load the whole content in memory.
What makes you think that this behavior is a performance problem? Do you have a sample or benchmark results that show that this is an issue?
The Spring Framework team has already worked on this, and a fully reactive implementation will be available as the default in Spring Framework 5.2 (see spring-framework#21659).

AWS S3 Event notification using Lambda function in Java

I am trying to use a Lambda function for S3 Put event notifications. My Lambda function should be called whenever I put/add a new JSON file to my S3 bucket.
The challenge I have is that there is not enough documentation on implementing such a Lambda function in Java. Most of the docs I found are for Node.js.
I want my Lambda function to be called, and then, inside that Lambda function, I want to consume the added JSON and send it to the AWS ES service.
But which classes should I use for this? Does anyone have any idea? S3 and ES are all set up and running. The auto-generated code for the Lambda is:
@Override
public Object handleRequest(S3Event input, Context context) {
    context.getLogger().log("Input: " + input);
    // TODO: implement your handler
    return null;
}
What next??
Handling S3 events in Lambda can be done, but you have to keep in mind that the S3Event object only transports a reference to the object and not the object itself. To get to the actual object, you have to invoke the AWS SDK yourself.
Requesting an S3 object within a Lambda function would look like this:
public Object handleRequest(S3Event input, Context context) {
    AmazonS3Client s3Client = new AmazonS3Client(new DefaultAWSCredentialsProviderChain());
    for (S3EventNotificationRecord record : input.getRecords()) {
        String s3Key = record.getS3().getObject().getKey();
        String s3Bucket = record.getS3().getBucket().getName();
        context.getLogger().log("found id: " + s3Bucket + " " + s3Key);
        // retrieve s3 object
        S3Object object = s3Client.getObject(new GetObjectRequest(s3Bucket, s3Key));
        InputStream objectData = object.getObjectContent();
        // insert object into elasticsearch
    }
    return null;
}
Now comes the rather difficult part: inserting this object into Elasticsearch. Sadly, the AWS SDK does not provide any functions for this. The default approach would be to do a REST call against the AWS ES endpoint. There are various samples out there on how to call an Elasticsearch instance.
Some people seem to go with the following project:
Jest - Elasticsearch Java Rest Client
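For illustration only, reusing s3Key and objectData from the snippet above: the endpoint, index, type, and the buildJsonFromS3Object helper are placeholders, and this ignores request signing for a secured AWS ES domain. Indexing the JSON with Jest might look roughly like this:
// Build a Jest client pointing at the ES endpoint (placeholder URL).
JestClientFactory factory = new JestClientFactory();
factory.setHttpClientConfig(new HttpClientConfig.Builder("https://my-es-domain.example.com")
        .multiThreaded(true)
        .build());
JestClient client = factory.getObject();

// Index the JSON string read from the S3 object; index/type/id are placeholders.
String json = buildJsonFromS3Object(objectData);    // hypothetical helper that reads the stream
Index indexRequest = new Index.Builder(json)
        .index("my-index")
        .type("my-type")
        .id(s3Key)
        .build();
client.execute(indexRequest);                       // throws IOException; handle as needed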
Finally, here are the steps for S3 --> Lambda --> ES integration using Java.
Have your S3, Lambda and ES created on AWS. Steps are here.
Use the Java code below in your Lambda function to fetch a newly added object from S3 and send it to the ES service.
public Object handleRequest(S3Event input, Context context) {
    AmazonS3Client s3Client = new AmazonS3Client(new DefaultAWSCredentialsProviderChain());
    for (S3EventNotificationRecord record : input.getRecords()) {
        String s3Key = record.getS3().getObject().getKey();
        String s3Bucket = record.getS3().getBucket().getName();
        context.getLogger().log("found id: " + s3Bucket + " " + s3Key);
        // retrieve s3 object
        S3Object object = s3Client.getObject(new GetObjectRequest(s3Bucket, s3Key));
        InputStream objectData = object.getObjectContent();
        // Start putting your objects in AWS ES Service
        String esInput = "Build your JSON string here using S3 objectData";
        HttpClient httpClient = new DefaultHttpClient();
        HttpPut putRequest = new HttpPut(AWS_ES_ENDPOINT + "/{Index_name}/{product_name}/{unique_id}");
        // named "entity" to avoid shadowing the S3Event parameter "input"
        StringEntity entity = new StringEntity(esInput);
        entity.setContentType("application/json");
        putRequest.setEntity(entity);
        httpClient.execute(putRequest);
        httpClient.getConnectionManager().shutdown();
    }
    return "success";
}
Use either Postman or Sense to create the actual index & corresponding mapping in ES.
Once done, download and run proxy.js on your machine. Make sure you set up the ES security steps suggested in this post.
Test the setup and Kibana by opening the http://localhost:9200/_plugin/kibana/ URL from your machine.
All is set. Go ahead and set up your dashboard in Kibana. Test it by adding new objects to your S3 bucket.

Spring Integration : Transformer : file to Object

I am new to Spring Integration and I am trying to read a file and transform it into a custom object which has to be sent to a JMS queue wrapped in a jms.Message.
It all has to be done using annotations.
I am reading the files from the directory using the code below.
@Bean
@InboundChannelAdapter(value = "filesChannel", poller = @Poller(fixedRate = "5000", maxMessagesPerPoll = "1"))
public MessageSource<File> fileReadingMessageSource() {
    FileReadingMessageSource source = new FileReadingMessageSource();
    source.setDirectory(new File(INBOUND_PATH));
    source.setAutoCreateDirectory(false);
    /*source.setFilter(new AcceptOnceFileListFilter());*/
    source.setFilter(new CompositeFileListFilter<File>(getFileFilters()));
    return source;
}
The next step is transforming the file content into an Invoice object (assume).
I want to know what the incoming message type for my transformer would be and how I should transform it. Could you please help here? I am not sure what the incoming data type would be and what the transformed object type should be (should it be wrapped inside a Message?).
@Transformer(inputChannel = "filesChannel", outputChannel = "jmsOutBoundChannel")
public ? convertFiletoInvoice(? fileMessage) {
}
The payload is a File (java.io.File).
You can read the file and output whatever you want (String, byte[], Invoice etc).
Or you could use some of the standard transformers (e.g. FileToStringTransformer, JsonToObjectTransformer etc).
The JMS adapter will convert the object to TextMessage, ObjectMessage etc.
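A minimal sketch of what the transformer could look like if the files contain JSON, assuming the Invoice class from the question and Jackson's ObjectMapper for the mapping (both are illustrative choices, not something Spring Integration mandates):
import java.io.File;
import java.io.IOException;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.integration.annotation.Transformer;

@Transformer(inputChannel = "filesChannel", outputChannel = "jmsOutBoundChannel")
public Invoice convertFileToInvoice(File fileMessage) throws IOException {
    // Spring Integration extracts the File payload from the Message for us;
    // the returned Invoice is wrapped in a new Message and sent to jmsOutBoundChannel.
    return new ObjectMapper().readValue(fileMessage, Invoice.class);
}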

How to send an InputStream in Play Framework in a Java-only project without chunked responses?

In a Java (only) Play 2.3 project we need to send a non-chunked response of an InputStream directly to the client. The InputStream comes from a remote service from which we want to stream directly to the client, without blocking or buffering to a local file. Since we know the size before reading the input stream, we do not want a chunked response.
What is the best way to return a result for an input stream with a known size (preferably without using Scala)?
When looking at the default ok(file, ...) method for returning File objects, it goes deep into Play internals which are only accessible from Scala, and it uses the Play-internal execution context which can't even be accessed from outside. It would be nice if it worked identically, just with an InputStream.
FWIW, I have now found a way to serve an InputStream, which basically duplicates the logic of the Results.ok(File) method to allow directly passing in an InputStream.
The key is to use the Scala call that creates an Enumerator from an InputStream: play.api.libs.iteratee.Enumerator$.MODULE$.fromStream.
private final MessageDispatcher fileServeContext = Akka.system().dispatchers().lookup("file-serve-context");

protected Result serveInputStream(InputStream inputStream, String fileName, long contentLength) {
    response().setHeader(
            HttpHeaders.CONTENT_DISPOSITION,
            "attachment; filename=\"" + fileName + "\"");
    // Set Content-Type header based on file extension.
    scala.Option<String> contentType = MimeTypes.forFileName(fileName);
    if (contentType.isDefined()) {
        response().setHeader(CONTENT_TYPE, contentType.get());
    } else {
        response().setHeader(CONTENT_TYPE, ContentType.DEFAULT_BINARY.getMimeType());
    }
    response().setHeader(CONTENT_LENGTH, Long.toString(contentLength));
    return new WrappedScalaResult(new play.api.mvc.Result(
            new ResponseHeader(StatusCode.OK, toScalaMap(response().getHeaders())),
            // Enumerator.fromStream() will also close the input stream once it is done.
            play.api.libs.iteratee.Enumerator$.MODULE$.fromStream(
                    inputStream,
                    FILE_SERVE_CHUNK_SIZE,
                    fileServeContext),
            play.api.mvc.HttpConnection.KeepAlive()));
}
/**
* A simple Result which wraps a scala result so we can call it from our java controllers.
*/
private static class WrappedScalaResult implements Result {
    private play.api.mvc.Result scalaResult;

    public WrappedScalaResult(play.api.mvc.Result scalaResult) {
        this.scalaResult = scalaResult;
    }

    @Override
    public play.api.mvc.Result toScala() {
        return scalaResult;
    }
}
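For completeness, a hedged usage sketch from a controller action; remoteService, its methods, and the file name are hypothetical stand-ins for wherever the stream and its known length come from:
public Result download(String id) {
    // Hypothetical remote service exposing the stream and its known length.
    InputStream in = remoteService.openStream(id);
    long length = remoteService.contentLength(id);
    return serveInputStream(in, "report.pdf", length);
}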
