Jersey client upload progress - java

I have a Jersey client that needs to upload a file big enough to require a progress bar.
The problem is that, for an upload that takes some minutes, I see the bytes transferred go to 100% as soon as the application has started. Then it takes some minutes to print the "on finish" string. It is as if the bytes were sent to a buffer, and I was reading the transfer-to-the-buffer speed instead of the actual upload speed. This makes the progress bar useless.
This is the very simple code:
ClientConfig config = new DefaultClientConfig();
Client client = Client.create(config);
WebResource resource = client.resource("www.myrestserver.com/uploads");
WebResource.Builder builder = resource.type(MediaType.MULTIPART_FORM_DATA_TYPE);
FormDataMultiPart multiPart = new FormDataMultiPart();
FileDataBodyPart fdbp = new FileDataBodyPart("data.zip", new File("data.zip"));
BodyPart bp = multiPart.bodyPart(fdbp);
String response = builder.post(String.class, multiPart);
To get the progress state I've added a ContainerListener filter, obviously before calling builder.post:
final ContainerListener containerListener = new ContainerListener() {
@Override
public void onSent(long delta, long bytes) {
System.out.println(delta + " : " + bytes);
}
@Override
public void onFinish() {
super.onFinish();
System.out.println("on finish");
}
};
OnStartConnectionListener connectionListenerFactory = new OnStartConnectionListener() {
@Override
public ContainerListener onStart(ClientRequest cr) {
return containerListener;
}
};
resource.addFilter(new ConnectionListenerFilter(connectionListenerFactory));

In Jersey 2.x, I've used a WriterInterceptor to wrap the output stream with a subclass of Apache Commons IO CountingOutputStream that tracks the writing and notifies my upload progress code (not shown in the original; a rough sketch follows below).
public class UploadMonitorInterceptor implements WriterInterceptor {
@Override
public void aroundWriteTo(WriterInterceptorContext context) throws IOException, WebApplicationException {
// the original outputstream jersey writes with
final OutputStream os = context.getOutputStream();
// you can use Jersey's target/builder properties or
// special headers to set identifiers of the source of the stream
// and other info needed for progress monitoring
String id = (String) context.getProperty("id");
long fileSize = (long) context.getProperty("fileSize");
// subclass of counting stream which will notify my progress
// indicators.
context.setOutputStream(new MyCountingOutputStream(os, id, fileSize));
// proceed with any other interceptors
context.proceed();
}
}
I then registered this interceptor with the client, or with the specific targets where the interceptor should be used.
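Since the counting stream itself was not shown, here is a minimal sketch of what MyCountingOutputStream might look like, assuming Apache Commons IO is on the classpath; the println is a placeholder for your own progress notification code:
import java.io.OutputStream;
import org.apache.commons.io.output.CountingOutputStream;

public class MyCountingOutputStream extends CountingOutputStream {

    private final String id;
    private final long fileSize;

    public MyCountingOutputStream(OutputStream out, String id, long fileSize) {
        super(out);
        this.id = id;
        this.fileSize = fileSize;
    }

    @Override
    protected void afterWrite(int n) {
        // getByteCount() is maintained by CountingOutputStream as bytes pass through
        double percent = 100.0 * getByteCount() / fileSize;
        // placeholder: notify your progress listener / UI here
        System.out.printf("upload %s: %.1f%%%n", id, percent);
    }
}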

It should be enough to provide your own MessageBodyWriter for java.io.File which fires some events or notifies some listeners as the progress changes:
@Provider
@Produces(MediaType.APPLICATION_OCTET_STREAM)
public class MyFileProvider implements MessageBodyWriter<File> {
public boolean isWriteable(Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
return File.class.isAssignableFrom(type);
}
public void writeTo(File t, Class<?> type, Type genericType, Annotation annotations[], MediaType mediaType, MultivaluedMap<String, Object> httpHeaders, OutputStream entityStream) throws IOException {
InputStream in = new FileInputStream(t);
try {
int read;
final byte[] data = new byte[ReaderWriter.BUFFER_SIZE];
while ((read = in.read(data)) != -1) {
entityStream.write(data, 0, read);
// fire some event as progress changes
}
} finally {
in.close();
}
}
@Override
public long getSize(File t, Class<?> type, Type genericType, Annotation[] annotations, MediaType mediaType) {
return t.length();
}
}
And to make your client application use this new provider, simply:
ClientConfig config = new DefaultClientConfig();
config.getClasses().add(MyFileProvider.class);
or
ClientConfig config = new DefaultClientConfig();
MyFileProvider myProvider = new MyFileProvider ();
config.getSingletons().add(myProvider);
You would also have to include some way to recognize which file is being transferred when receiving progress events; one possible approach is sketched below.
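For example, a minimal sketch of a listener registry keyed by the File being written (the UploadProgressRegistry and its Listener interface are hypothetical names of mine, not part of Jersey); MyFileProvider.writeTo would call UploadProgressRegistry.fire(t, bytesWrittenSoFar, t.length()) inside its copy loop:
import java.io.File;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class UploadProgressRegistry {

    public interface Listener {
        void onProgress(File file, long bytesWritten, long totalBytes);
    }

    private static final Map<File, Listener> LISTENERS = new ConcurrentHashMap<>();

    public static void register(File file, Listener listener) {
        LISTENERS.put(file, listener);
    }

    // called from MyFileProvider.writeTo after each buffer is written
    public static void fire(File file, long bytesWritten, long totalBytes) {
        Listener listener = LISTENERS.get(file);
        if (listener != null) {
            listener.onProgress(file, bytesWritten, totalBytes);
        }
    }
}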
Edited:
I just found that by default HttpURLConnection uses buffering. To disable buffering you could do a couple of things:
httpUrlConnection.setChunkedStreamingMode(chunklength) - disables buffering and uses chunked transfer encoding to send the request
httpUrlConnection.setFixedLengthStreamingMode(contentLength) - disables buffering but adds some constraints to streaming: the exact number of bytes must be sent
So I suggest the final solution to your problem use the 1st option; it would look like this:
ClientConfig config = new DefaultClientConfig();
config.getClasses().add(MyFileProvider.class);
URLConnectionClientHandler clientHandler = new URLConnectionClientHandler(new HttpURLConnectionFactory() {
@Override
public HttpURLConnection getHttpURLConnection(URL url) throws IOException {
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setChunkedStreamingMode(1024);
return connection;
}
});
Client client = new Client(clientHandler, config);

I have successfully used David's answer. However, I would like to extend on it:
The following aroundWriteTo implementation of my WriterInterceptor shows how a panel (or similar) can also be passed to the CountingOutputStream:
@Override
public void aroundWriteTo(WriterInterceptorContext context)
throws IOException, WebApplicationException
{
final OutputStream outputStream = context.getOutputStream();
long fileSize = (long) context.getProperty(FILE_SIZE_PROPERTY_NAME);
context.setOutputStream(new ProgressFileUploadStream(outputStream, fileSize,
(ProgressPanel) context
.getProperty(PROGRESS_PANEL_PROPERTY_NAME)));
context.proceed();
}
The afterWrite of the CountingOutputStream can then set the progress:
@Override
protected void afterWrite(int n)
{
double percent = ((double) getByteCount() / fileSize);
progressPanel.setValue((int) (percent * 100));
}
The properties can be set on the Invocation.Builder object:
Invocation.Builder invocationBuilder = webTarget.request();
invocationBuilder.property(
UploadMonitorInterceptor.FILE_SIZE_PROPERTY_NAME, newFile.length());
invocationBuilder.property(
UploadMonitorInterceptor.PROGRESS_PANEL_PROPERTY_NAME,
progressPanel);
Perhaps the most important addition to David's answer and the reason why I decided to post my own is the following code:
client.property(ClientProperties.CHUNKED_ENCODING_SIZE, 1024);
client.property(ClientProperties.REQUEST_ENTITY_PROCESSING, "CHUNKED");
The client object is the javax.ws.rs.client.Client.
It is essential to disable buffering with the WriterInterceptor approach as well. The above code is a straightforward way to do this with Jersey 2.x.

Related

@FormParam data becomes null after reading and setting the same data in ContainerRequestContext entityStream

I have implemented a filter in which I call getEntityStream of ContainerRequestContext and set the exact value back using setEntityStream. If I use this filter then the @FormParam data becomes null, and if I don't use the filter then everything is fine (as I am not calling getEntityStream), but I have to use the filter to capture the request data.
Note: I am getting the form params from MultivaluedMap formParams but not from @FormParam.
Environment: RESTEasy API with JBoss WildFly 8 server.
@Provider
@Priority(Priorities.LOGGING)
public class CustomLoggingFilter implements ContainerRequestFilter, ContainerResponseFilter{
final static Logger log = Logger.getLogger(CustomLoggingFilter.class);
@Context
private ResourceInfo resourceInfo;
@Override
public void filter(ContainerRequestContext requestContext)
throws IOException {
MDC.put("start-time", String.valueOf(System.currentTimeMillis()));
String entityParameter = readEntityStream(requestContext);
log.info("Entity Parameter :"+entityParameter);
}
private String readEntityStream(ContainerRequestContext requestContext){
ByteArrayOutputStream outStream = new ByteArrayOutputStream();
final InputStream inputStream = requestContext.getEntityStream();
final StringBuilder builder = new StringBuilder();
int read=0;
final byte[] data = new byte[4096];
try {
while ((read = inputStream.read(data)) != -1) {
outStream.write(data, 0, read);
}
} catch (IOException e) {
e.printStackTrace();
}
byte[] requestEntity = outStream.toByteArray();
if (requestEntity.length == 0) {
builder.append("");
} else {
builder.append(new String(requestEntity));
}
requestContext.setEntityStream(new ByteArrayInputStream(requestEntity) );
return builder.toString();
}
@Override
public void filter(ContainerRequestContext requestContext, ContainerResponseContext responseContext) throws IOException {
// response-side logging omitted
}
}
class customResource
{
//// This code is not working
@POST
@Path("voiceCallBack")
@ApiOperation(value = "Voice call back from Twilio")
public void voiceCallback(@FormParam("param") String param)
{
log.info("param:" + param);
}
// This code is working
@POST
@Path("voiceCallBackMap")
@ApiOperation(value = "Voice call back from Twilio")
public void voiceCallbackMap(final MultivaluedMap<String, String> formParams)
{
String param = formParams.getFirst("param");
}
}
Please suggest a solution. Thanks in advance.
I found at runtime that the instance of the entity stream (from the HTTP request) is of type org.apache.catalina.connector.CoyoteInputStream (I am using jboss-as-7.1.1.Final), but we are setting the entity stream to an instance of java.io.ByteArrayInputStream. So RESTEasy is unable to bind the individual form parameters.
There are two solutions for this; you can use either one:
Use this approach: How to read JBoss Resteasy's servlet request twice while maintaining @FormParam binding?
Get form parameters like this:
@POST
@Path("voiceCallBackMap")
@ApiOperation(value = "Voice call back from Twilio")
public void voiceCallbackMap(final MultivaluedMap<String, String> formParams)
{
String param = formParams.getFirst("param");
}

Receiving Multipart Response on client side (CloseableHttpResponse)

I have a Java controller which has to send me some text data and different byte arrays. So I am building a multipart request and writing it to the stream from HttpServletResponse.
Now my problem is how to parse the response at client side and extract the multiple parts.
SERVER CODE SNIPPET:-
MultipartEntityBuilder builder = MultipartEntityBuilder.create();
// Prepare payload
builder.addBinaryBody("document1", file);
builder.addBinaryBody("document2", file2);
builder.addPart("stringData", new StringBody(jsonData, ContentType.TEXT_PLAIN));
// Set to request body
HttpEntity entity = builder.build();
postRequest.setEntity(entity);
CLIENT CODE SNIPPET:-
HttpPost httpPost = new HttpPost(finalUrl);
StringEntity entity = new StringEntity(json);
httpPost.setEntity(entity);
httpPost.setHeader("Content-type", APPLICATION_JSON_TYPE);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
CloseableHttpResponse response = httpClient.execute(httpPost);
InputStream in = new BufferedInputStream(response.getEntity().getContent());
I checked CloseableHttpResponse and HttpEntity but neither of them provides a method to parse the multipart response.
EDIT 1:
This is a sample response I am receiving in the client-side stream:
--bvRi5oZum37DUldtLgQGSbc5RRVZxKpjZMO4SYDe
Content-Disposition: form-data; name="numeric"
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 8bit
01010110
--bvRi5oZum37DUldtLgQGSbc5RRVZxKpjZMO4SYDe
Content-Disposition: form-data; name="stringmessage"
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding:8bit
testmessage
--bvRi5oZum37DUldtLgQGSbc5RRVZxKpjZMO4SYDe
Content-Disposition: form-data; name="binarydata"; filename="file1"
Content-Type: application/octet-stream
Content-Transfer-Encoding: binary
HI, THIS IS MY BINARY DATA
--bvRi5oZum37DUldtLgQGSbc5RRVZxKpjZMO4SYDe
Content-Disposition: form-data; name="ending"
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 8bit
ending
--bvRi5oZum37DUldtLgQGSbc5RRVZxKpjZMO4SYDe--
I have finally got a workaround for it.
I will be using javax.mail's MimeMultipart.
Below is a code snippet for the solution:
ByteArrayDataSource datasource = new ByteArrayDataSource(in, "multipart/form-data");
MimeMultipart multipart = new MimeMultipart(datasource);
int count = multipart.getCount();
log.debug("count " + count);
for (int i = 0; i < count; i++) {
BodyPart bodyPart = multipart.getBodyPart(i);
if (bodyPart.isMimeType("text/plain")) {
log.info("text/plain " + bodyPart.getContentType());
processTextData(bodyPart.getContent());
} else if (bodyPart.isMimeType("application/octet-stream")) {
log.info("application/octet-stream " + bodyPart.getContentType());
processBinaryData(bodyPart.getInputStream());
} else {
log.warn("default " + bodyPart.getContentType());
}
}
Please let me know if anybody else has a standard solution.
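One more option (an assumption of mine, not from the original answers): the low-level MultipartStream from Apache Commons FileUpload can split a raw multipart body without pulling in javax.mail. A minimal sketch, where the boundary is extracted from the response's Content-Type header:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.commons.fileupload.MultipartStream;

public class MultipartResponseParser {

    public static void parse(InputStream in, String contentTypeHeader) throws IOException {
        // e.g. "multipart/form-data; boundary=bvRi5oZum37..." -> "bvRi5oZum37..."
        String boundary = contentTypeHeader.replaceAll(".*boundary=", "").trim();
        MultipartStream multipart = new MultipartStream(in, boundary.getBytes(), 4096, null);
        boolean hasNext = multipart.skipPreamble();
        while (hasNext) {
            String partHeaders = multipart.readHeaders();    // Content-Disposition, Content-Type, ...
            ByteArrayOutputStream body = new ByteArrayOutputStream();
            multipart.readBodyData(body);                    // raw bytes of this part
            System.out.println(partHeaders + "part size: " + body.size());
            hasNext = multipart.readBoundary();              // advance to the next part, if any
        }
    }
}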
TL;DR
If you know what HttpMessageConverters are, then skip the "In General" part.
If your client is reactive, then you most likely don't have a big problem. Go to the part named "Reactive Client (Spring Webflux)" for details.
If your client is non-reactive, i.e. you're using Spring MVC and RestTemplate, then the last section is for you. In short, it is not possible out of the box and you need to write custom code.
In General
When we want to read multipart data, then we are at the serialization/marshalling layer of our application. This is basically the same layer as when we transform a JSON or an XML document to a POJO via Jackson for example. What I want to emphasize here is that the logic for parsing multipart data should not take place in a service but rather much earlier.
Our hook to transform multipart data comes as early as the point when an HTTP response enters our application in the form of an HttpInputMessage. Out of the box, Spring provides a set of HttpMessageConverters that are able to transform our HTTP response to an object with which we can work. For example, the MappingJackson2HttpMessageConverter is used to read and write all requests/responses that have the MediaType "application/json".
If the application is reactive, then Spring uses HttpMessageReader
and HttpMessageWriter instead of HttpMessageConverters. They serve the same purpose.
The following two sections show how to read (download) a multipart response via the two different paradigms.
Reactive Client (Spring Webflux)
This would be the easiest use case, and the only thing we need is already available in Spring Webflux out of the box.
The class MultipartHttpMessageReader would be doing all the heavy lifting. In case it does not behave exactly how you need it to, you can easily extend it and override the methods to your liking. Your custom Reader can then be registered as a bean like so:
@Configuration
public class MultipartMessageConverterConfiguration {
@Bean
public CodecCustomizer myCustomMultipartHttpMessageWriter() {
return configurer -> configurer.customCodecs()
.register(new MyCustomMultipartHttpMessageWriter());
}
}
Non-Reactive Client (Spring MVC/RestTemplate)
If you have a "classic" application that uses RestTemplate to communicate via HTTP, then you need to rely on the aforementioned HttpMessageConverters. Unfortunately, the MessageConverter that is responsible for reading multipart data does not support reading/downloading it:
Implementation of HttpMessageConverter to read and write 'normal' HTML forms and also to write (but not read) multipart data (e.g. file uploads)
Source: FormHttpMessageConverter Documentation
So what we need to do is write our own MessageConverter which is able to download multipart data. An easy way to do that would be to make use of the DefaultPartHttpMessageReader that is internally used by MultipartHttpMessageReader. We don't even need Webflux for that, as it is already shipped with spring-web.
First let us define two classes in which we save the parts that we read:
public class MyCustomPart {
public MyCustomPart(byte[] content, String filename, MediaType contentType) {
//assign to corresponding member variables; here omitted
}
}
/**
* Basically a container for a list of objects of the class above.
*/
public class MyCustomMultiParts {
public MyCustomMultiParts(List<MyCustomPart> parts){
//assign to corresponding member variable; here omitted
}
}
Later on, you can always take each Part and convert it to whatever is appropriate. The MyCustomPart represents a single block of your multipart data response. The MyCustomMultiParts represents the whole multipart data.
Now we come to the meaty stuff:
public class CustomMultipartHttpMessageConverter implements HttpMessageConverter<MyCustomMultiParts> {
private final List<MediaType> supportedMediaTypes = new ArrayList<>();
private final DefaultPartHttpMessageReader defaultPartHttpMessageReader;
public CustomMultipartHttpMessageConverter() {
this.supportedMediaTypes.add(MediaType.MULTIPART_FORM_DATA);
this.defaultPartHttpMessageReader = new DefaultPartHttpMessageReader();
}
@Override
public boolean canRead(final Class<?> clazz, @Nullable final MediaType mediaType) {
if (!MyCustomMultiParts.class.isAssignableFrom(clazz)) {
return false;
}
if (mediaType == null) {
return true;
}
for (final MediaType supportedMediaType : getSupportedMediaTypes()) {
if (supportedMediaType.includes(mediaType) && mediaType.getParameter("boundary") != null) {
return true;
}
}
return false;
}
/**
* This wraps the input message into a "reactive" input message, that the reactive DefaultPartHttpMessageReader uses.
*/
private ReactiveHttpInputMessage wrapHttpInputMessage(final HttpInputMessage message) {
return new ReactiveHttpInputMessage() {
@Override
public HttpHeaders getHeaders() {
return message.getHeaders();
}
@SneakyThrows //Part of lombok. Just use a try catch block if you're not using it
@Override
public Flux<DataBuffer> getBody() {
final DefaultDataBuffer wrappedBody = new DefaultDataBufferFactory()
.wrap(message.getBody().readAllBytes());
return Flux.just(wrappedBody);
}
};
}
@Override
public MyCustomMultiParts read(@Nullable final Class<? extends MyCustomMultiParts> clazz,
final HttpInputMessage message) throws IOException, HttpMessageNotReadableException {
final ReactiveHttpInputMessage wrappedMessage = wrapHttpInputMessage(message);
final ResolvableType resolvableType = ResolvableType.forClass(byte[].class); //plays no role
List<Part> rawParts = defaultPartHttpMessageReader.read(resolvableType, wrappedMessage, Map.of())//
.buffer()//
.blockFirst();
//You can check here whether the result exists or just continue
final List<MyCustomPart> customParts = rawParts.stream()// Now we convert to our customPart
.map(part -> {
//Part consists of a DataBuffer, we make a byte[] so we can convert it to whatever we want later
final byte[] content = Optional.ofNullable(part.content().blockFirst())//
.map(DataBuffer::asByteBuffer)//
.map(ByteBuffer::array)//
.orElse(new byte[]{});
final HttpHeaders headers = part.headers();
final String filename = headers.getContentDisposition().getFilename();
final MediaType contentType = headers.getContentType();
return new MyCustomPart(content, filename, contentType);
}).collect(Collectors.toList());
return new MyCustomMultiParts(customParts);
}
@Override
public void write(final MyCustomMultiParts parts, final MediaType contentType,
final HttpOutputMessage outputMessage) {
// we're just interested in reading
throw new UnsupportedOperationException();
}
@Override
public boolean canWrite(final Class<?> clazz, final MediaType mediaType) {
// we're just interested in reading
return false;
}
@Override
public List<MediaType> getSupportedMediaTypes() {
return this.supportedMediaTypes;
}
}
From here on, you know best what to do with your "CustomPart", whether it is JSON, a bitmap or a PDF. From the byte array you can convert it into anything; a small sketch follows.
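For example, a minimal sketch (the MyCustomPart getters and the MyDto target class are hypothetical names of mine, since the classes above omit their accessors):
import java.nio.file.Files;
import java.nio.file.Path;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.http.MediaType;

public class PartConsumer {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // MyCustomPart getters (getContentType/getContent/getFilename) and MyDto are hypothetical
    static void consume(MyCustomPart part) throws Exception {
        if (MediaType.APPLICATION_JSON.includes(part.getContentType())) {
            // JSON part -> POJO via Jackson
            MyDto dto = MAPPER.readValue(part.getContent(), MyDto.class);
            System.out.println(dto);
        } else {
            // binary part -> file on disk
            Files.write(Path.of(part.getFilename()), part.getContent());
        }
    }
}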
Now if you want to test it, you only have to add your CustomConverter to a RestTemplate and then "await" the MyCustomMultiParts that we defined:
// This could also be inside your #Bean definition of RestTemplate of course
final RestTemplate restTemplate = new RestTemplate();
final List<HttpMessageConverter<?>> messageConverters = restTemplate.getMessageConverters();
messageConverters.add(new CustomMultipartHttpMessageConverter());
String url = "http://server.of.choice:8080/whatever-endpoint-that-sends-multiparts/";
final HttpHeaders headers = new HttpHeaders();
headers.setAccept(List.of(MediaType.MULTIPART_FORM_DATA));
final HttpEntity<Void> requestEntity = new HttpEntity<>(headers);
//here we await our MyCustomMultiParts
final MyCustomMultiParts entity = restTemplate.exchange(url, HttpMethod.GET, requestEntity, MyCustomMultiParts.class).getBody();
Mime4j from Apache is one way to parse the response on the client side. It's common practice to use a tool like this.
You can refer to this link - http://www.programcreek.com/java-api-examples/index.php?api=org.apache.james.mime4j.MimeException
You can download the jar from this link -
http://james.apache.org/download.cgi#Apache_Mime4J

How to zip-compress an HTTP request with Spring RestTemplate?

How do I gzip an HTTP request created by org.springframework.web.client.RestTemplate?
I am using Spring 4.2.6 with Spring Boot 1.3.5 (Java SE, not Android or JavaScript in the web browser).
I am making some really big POST requests, and I want the request body to be compressed.
I propose two solutions, one simpler without streaming and one that supports streaming.
If you don't require streaming, use a custom ClientHttpRequestInterceptor, a Spring feature.
RestTemplate rt = new RestTemplate();
rt.setInterceptors(Collections.singletonList(interceptor));
Where interceptor could be:
ClientHttpRequestInterceptor interceptor = new ClientHttpRequestInterceptor() {
@Override
public ClientHttpResponse intercept(HttpRequest request, byte[] body, ClientHttpRequestExecution execution)
throws IOException {
request.getHeaders().add("Content-Encoding", "gzip");
byte[] gzipped = getGzip(body);
return execution.execute(request, gzipped);
}
};
The getGzip method I copied:
private byte[] getGzip(byte[] body) throws IOException {
ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
try {
GZIPOutputStream zipStream = new GZIPOutputStream(byteStream);
try {
zipStream.write(body);
} finally {
zipStream.close();
}
} finally {
byteStream.close();
}
byte[] compressedData = byteStream.toByteArray();
return compressedData;
}
After configuring the interceptor all requests will be zipped.
The disadvantage of this approach is that it does not support streaming, as the ClientHttpRequestInterceptor receives the content as a byte[].
If you require streaming, create a custom ClientHttpRequestFactory, say GZipClientHttpRequestFactory, and use it like this:
SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
requestFactory.setBufferRequestBody(false);
ClientHttpRequestFactory gzipRequestFactory = new GZipClientHttpRequestFactory(requestFactory);
RestTemplate rt = new RestTemplate(gzipRequestFactory);
Where GZipClientHttpRequestFactory is:
public class GZipClientHttpRequestFactory extends AbstractClientHttpRequestFactoryWrapper {
public GZipClientHttpRequestFactory(ClientHttpRequestFactory requestFactory) {
super(requestFactory);
}
@Override
protected ClientHttpRequest createRequest(URI uri, HttpMethod httpMethod, ClientHttpRequestFactory requestFactory)
throws IOException {
ClientHttpRequest delegate = requestFactory.createRequest(uri, httpMethod);
return new ZippedClientHttpRequest(delegate);
}
}
And ZippedClientHttpRequest is:
public class ZippedClientHttpRequest extends WrapperClientHttpRequest
{
private GZIPOutputStream zip;
public ZippedClientHttpRequest(ClientHttpRequest delegate) {
super(delegate);
delegate.getHeaders().add("Content-Encoding", "gzip");
// here or in getBody could add content-length to avoid chunking
// but is it available ?
// delegate.getHeaders().add("Content-Length", "39");
}
@Override
public OutputStream getBody() throws IOException {
final OutputStream body = super.getBody();
zip = new GZIPOutputStream(body);
return zip;
}
@Override
public ClientHttpResponse execute() throws IOException {
if (zip!=null) zip.close();
return super.execute();
}
}
And finally WrapperClientHttpRequest is:
public class WrapperClientHttpRequest implements ClientHttpRequest {
private final ClientHttpRequest delegate;
protected WrapperClientHttpRequest(ClientHttpRequest delegate) {
super();
if (delegate==null)
throw new IllegalArgumentException("null delegate");
this.delegate = delegate;
}
protected final ClientHttpRequest getDelegate() {
return delegate;
}
@Override
public OutputStream getBody() throws IOException {
return delegate.getBody();
}
@Override
public HttpHeaders getHeaders() {
return delegate.getHeaders();
}
@Override
public URI getURI() {
return delegate.getURI();
}
@Override
public HttpMethod getMethod() {
return delegate.getMethod();
}
@Override
public ClientHttpResponse execute() throws IOException {
return delegate.execute();
}
}
This approach creates a request with chunked transfer encoding; this can be changed by setting the content length header, if the size is known.
The advantage of the ClientHttpRequestInterceptor and/or custom ClientHttpRequestFactory approach is that it works with any method of RestTemplate. An alternate approach, passing a RequestCallback, is possible only with the execute methods, because the other methods of RestTemplate internally create their own RequestCallback(s) that produce the content. A sketch of that alternative follows.
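A rough sketch of the RequestCallback alternative (my own illustration, not part of the original answer): gzip the body and write it inside execute(); the URL and body are placeholders.
import java.util.zip.GZIPOutputStream;
import org.springframework.http.HttpMethod;
import org.springframework.web.client.RequestCallback;
import org.springframework.web.client.ResponseExtractor;
import org.springframework.web.client.RestTemplate;

public class GzipExecuteExample {

    public static String postGzipped(RestTemplate rt, String url, byte[] body) {
        RequestCallback gzipBody = request -> {
            request.getHeaders().add("Content-Encoding", "gzip");
            // streams if the underlying request factory has buffering disabled
            try (GZIPOutputStream zip = new GZIPOutputStream(request.getBody())) {
                zip.write(body);
            }
        };
        ResponseExtractor<String> readBody = response ->
                new String(response.getBody().readAllBytes());
        return rt.execute(url, HttpMethod.POST, gzipBody, readBody);
    }
}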
BTW it seems that there is little support for decompressing gzip requests on the server. Also related: Sending gzipped data in WebRequest? which points to the Zip Bomb issue. I think you will have to write some code for it; a sketch of a servlet filter that does the decompression follows.
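A hedged sketch of that server-side code (my own assumption, using the javax.servlet 3.1 API and blocking IO only):
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

public class GzipRequestFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpReq = (HttpServletRequest) req;
        String encoding = httpReq.getHeader("Content-Encoding");
        if (encoding == null || !encoding.contains("gzip")) {
            chain.doFilter(req, res);
            return;
        }
        // wrap the request so downstream code reads the decompressed body
        chain.doFilter(new HttpServletRequestWrapper(httpReq) {
            @Override
            public ServletInputStream getInputStream() throws IOException {
                final GZIPInputStream gzip = new GZIPInputStream(httpReq.getInputStream());
                return new ServletInputStream() {
                    @Override
                    public int read() throws IOException {
                        return gzip.read();
                    }
                    @Override
                    public boolean isFinished() {
                        try { return gzip.available() == 0; } catch (IOException e) { return true; }
                    }
                    @Override
                    public boolean isReady() {
                        return true;
                    }
                    @Override
                    public void setReadListener(ReadListener listener) {
                        // blocking IO only in this sketch
                    }
                };
            }
        }, res);
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}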
Further to the above answer from @TestoTestini, if we take advantage of Java 7+'s try-with-resources syntax (since both ByteArrayOutputStream and GZIPOutputStream implement Closeable) then we can shrink the getGzip function into the following:
private byte[] getGzip(byte[] body) throws IOException {
try (ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
GZIPOutputStream zipStream = new GZIPOutputStream(byteStream)) {
zipStream.write(body);
byte[] compressedData = byteStream.toByteArray();
return compressedData;
}
}
(I couldn't find a way of commenting on @TestoTestini's original answer and retaining the above code format, hence this Answer).
Since I cannot comment on @roj's post I'm writing an answer here.
@roj's snippet, although neat, does not actually do the same job as @Testo Testini's snippet.
Testo is closing the streams before:
byteStream.toByteArray();
whereas in @roj's answer, this occurs before the stream.close(), since the streams are in the try-with-resources block.
If you need to use try-with-resources, zipStream should be closed before the byteStream.toByteArray();
The complete snippet should be:
private byte[] getGzip(byte[] body) throws IOException {
try (ByteArrayOutputStream byteStream = new ByteArrayOutputStream();
GZIPOutputStream zipStream = new GZIPOutputStream(byteStream)) {
zipStream.write(body);
zipStream.close();
byte[] compressedData = byteStream.toByteArray();
return compressedData;
}
}
I was getting an error ("Compressed file ended before the end-of-stream marker was reached") and the above fixed the error in my case, so I thought that I should share this.

How to download a file using a Java REST service and a data stream

I have 3 machines:
server where the file is located
server where the REST service is running (Jersey)
client (browser) with access to the 2nd server but no access to the 1st server
How can I directly (without saving the file on the 2nd server) download the file from the 1st server to the client's machine?
From the 2nd server I can get a ByteArrayOutputStream to get the file from the 1st server; can I pass this stream further to the client using the REST service?
Will it work this way?
So basically what I want to achieve is to allow the client to download a file from the 1st server using the REST service on the 2nd server (since there is no direct access from the client to the 1st server) using only data streams (so no data touching the file system of the 2nd server).
What I'm trying now with the EasyStream library:
final FTDClient client = FTDClient.getInstance();
try {
final InputStreamFromOutputStream <String> isOs = new InputStreamFromOutputStream <String>() {
@Override
public String produce(final OutputStream dataSink) throws Exception {
return client.downloadFile2(location, Integer.valueOf(spaceId), URLDecoder.decode(filePath, "UTF-8"), dataSink);
}
};
try {
String fileName = filePath.substring(filePath.lastIndexOf("/") + 1);
StreamingOutput output = new StreamingOutput() {
@Override
public void write(OutputStream outputStream) throws IOException, WebApplicationException {
int length;
byte[] buffer = new byte[1024];
while ((length = isOs.read(buffer)) != -1) {
outputStream.write(buffer, 0, length);
}
outputStream.flush();
}
};
return Response.ok(output, MediaType.APPLICATION_OCTET_STREAM)
.header("Content-Disposition", "attachment; filename=\"" + fileName + "\"")
.build();
}
}
UPDATE2
So my code now with the custom MessageBodyWriter looks simple:
ByteArrayOutputStream baos = new ByteArrayOutputStream(2048) ;
client.downloadFile(location, spaceId, filePath, baos);
return Response.ok(baos).build();
But I get the same heap error when trying with large files.
UPDATE3
Finally managed to get it working !
StreamingOutput did the trick.
Thank you @peeskillet! Many thanks!
"How can I directly (without saving the file on 2nd server) download the file from 1st server to client's machine?"
Just use the Client API and get the InputStream from the response
Client client = ClientBuilder.newClient();
String url = "...";
final InputStream responseStream = client.target(url).request().get(InputStream.class);
There are two flavors to get the InputStream. You can also use
Response response = client.target(url).request().get();
InputStream is = (InputStream)response.getEntity();
Which one is more efficient? I'm not sure, but the returned InputStreams are different classes, so you may want to look into that if you care to.
From 2nd server I can get a ByteArrayOutputStream to get the file from 1st server, can I pass this stream further to the client using the REST service?
So most of the answers you'll see in the link provided by @GradyGCooper seem to favor the use of StreamingOutput. An example implementation might be something like
final InputStream responseStream = client.target(url).request().get(InputStream.class);
System.out.println(responseStream.getClass());
StreamingOutput output = new StreamingOutput() {
@Override
public void write(OutputStream out) throws IOException, WebApplicationException {
int length;
byte[] buffer = new byte[1024];
while((length = responseStream.read(buffer)) != -1) {
out.write(buffer, 0, length);
}
out.flush();
responseStream.close();
}
};
return Response.ok(output).header(
"Content-Disposition", "attachment, filename=\"...\"").build();
But if we look at the source code for StreamingOutputProvider, you'll see in writeTo that it simply writes the data from one stream to another. So with our implementation above, we have to write twice.
How can we get only one write? Simply return the InputStream as the Response:
final InputStream responseStream = client.target(url).request().get(InputStream.class);
return Response.ok(responseStream).header(
"Content-Disposition", "attachment, filename=\"...\"").build();
If we look at the source code for InputStreamProvider, it simply delegates to ReadWriter.writeTo(in, out), which simply does what we did above in the StreamingOutput implementation
public static void writeTo(InputStream in, OutputStream out) throws IOException {
int read;
final byte[] data = new byte[BUFFER_SIZE];
while ((read = in.read(data)) != -1) {
out.write(data, 0, read);
}
}
Asides:
Client objects are expensive resources. You may want to reuse the same Client for requests. You can extract a WebTarget from the client for each request.
WebTarget target = client.target(url);
InputStream is = target.request().get(InputStream.class);
I think the WebTarget can even be shared. I can't find anything in the Jersey 2.x documentation (only because it is a larger document, and I'm too lazy to scan through it right now :-), but in the Jersey 1.x documentation, it says the Client and WebResource (which is equivalent to WebTarget in 2.x) can be shared between threads. So I'm guessing Jersey 2.x would be the same, but you may want to confirm that for yourself.
You don't have to make use of the Client API. A download can be easily achieved with the java.net package APIs (a minimal sketch follows after these asides). But since you're already using Jersey, it doesn't hurt to use its APIs.
The above is assuming Jersey 2.x. For Jersey 1.x, a simple Google search should get you a bunch of hits for working with the API (or the documentation I linked to above)
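A minimal sketch of that plain java.net alternative (the URL and target file name are placeholders of mine):
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PlainDownload {

    public static void main(String[] args) throws Exception {
        HttpURLConnection connection =
                (HttpURLConnection) new URL("http://localhost:8080/api/test").openConnection();
        try (InputStream in = connection.getInputStream();
             OutputStream out = new FileOutputStream("downloaded.bin")) {
            byte[] buffer = new byte[4096];
            int len;
            while ((len = in.read(buffer)) != -1) {
                out.write(buffer, 0, len);
            }
        } finally {
            connection.disconnect();
        }
    }
}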
UPDATE
I'm such a dufus. While the OP and I were contemplating ways to turn a ByteArrayOutputStream into an InputStream, I missed the simplest solution, which is simply to write a MessageBodyWriter for the ByteArrayOutputStream:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyWriter;
import javax.ws.rs.ext.Provider;
@Provider
public class OutputStreamWriter implements MessageBodyWriter<ByteArrayOutputStream> {
@Override
public boolean isWriteable(Class<?> type, Type genericType,
Annotation[] annotations, MediaType mediaType) {
return ByteArrayOutputStream.class == type;
}
@Override
public long getSize(ByteArrayOutputStream t, Class<?> type, Type genericType,
Annotation[] annotations, MediaType mediaType) {
return -1;
}
@Override
public void writeTo(ByteArrayOutputStream t, Class<?> type, Type genericType,
Annotation[] annotations, MediaType mediaType,
MultivaluedMap<String, Object> httpHeaders, OutputStream entityStream)
throws IOException, WebApplicationException {
t.writeTo(entityStream);
}
}
Then we can simply return the ByteArrayOutputStream in the response
return Response.ok(baos).build();
D'OH!
UPDATE 2
Here are the tests I used:
Resource class
#Path("test")
public class TestResource {
final String path = "some_150_mb_file";
@GET
@Produces(MediaType.APPLICATION_OCTET_STREAM)
public Response doTest() throws Exception {
InputStream is = new FileInputStream(path);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
int len;
byte[] buffer = new byte[4096];
while ((len = is.read(buffer, 0, buffer.length)) != -1) {
baos.write(buffer, 0, len);
}
System.out.println("Server size: " + baos.size());
return Response.ok(baos).build();
}
}
Client test
public class Main {
public static void main(String[] args) throws Exception {
Client client = ClientBuilder.newClient();
String url = "http://localhost:8080/api/test";
Response response = client.target(url).request().get();
String location = "some_location";
FileOutputStream out = new FileOutputStream(location);
InputStream is = (InputStream)response.getEntity();
int len = 0;
byte[] buffer = new byte[4096];
while((len = is.read(buffer)) != -1) {
out.write(buffer, 0, len);
}
out.flush();
out.close();
is.close();
}
}
UPDATE 3
So the final solution for this particular use case was for the OP to simply pass the OutputStream from the StreamingOutput's write method. It seems the third-party API required an OutputStream as an argument.
StreamingOutput output = new StreamingOutput() {
@Override
public void write(OutputStream out) {
thirdPartyApi.downloadFile(.., .., .., out);
}
};
return Response.ok(output).build();
Not quite sure, but it seems the reading/writing within the resource method, using a ByteArrayOutputStream, materialized everything into memory.
The point of the downloadFile method accepting an OutputStream is so that it can write the result directly to the OutputStream provided. For instance with a FileOutputStream: if you wrote to a file, the download would be streamed directly to the file as it comes in.
It's not meant for us to keep a reference to the OutputStream, as you were trying to do with the baos, which is where the in-memory materialization comes in.
So with the way that works, we are writing directly to the response stream provided for us. The method write doesn't actually get called until the writeTo method (in the MessageBodyWriter), where the OutputStream is passed to it.
You can get a better picture looking at the MessageBodyWriter I wrote. Basically in the writeTo method, replace the ByteArrayOutputStream with StreamingOutput, then inside the method, call streamingOutput.write(entityStream). You can see the link I provided in the earlier part of the answer, where I link to the StreamingOutputProvider. This is exactly what happens
See example here: Input and Output binary streams using JERSEY?
Pseudo code would be something like this (there are a few other similar options in the above-mentioned post):
#Path("file/")
#GET
#Produces({"application/pdf"})
public StreamingOutput getFileContent() throws Exception {
public void write(OutputStream output) throws IOException, WebApplicationException {
try {
//
// 1. Get Stream to file from first server
//
while(<read stream from first server>) {
output.write(<bytes read from first server>)
}
} catch (Exception e) {
throw new WebApplicationException(e);
} finally {
// close input stream
}
}
}
Refer to this:
@RequestMapping(value="download", method=RequestMethod.GET)
public void getDownload(HttpServletResponse response) {
// Get your file stream from wherever.
InputStream myStream = someClass.returnFile();
// Set the content type and attachment header.
response.addHeader("Content-disposition", "attachment;filename=myfilename.txt");
response.setContentType("text/plain");
// Copy the stream to the response's output stream.
IOUtils.copy(myStream, response.getOutputStream());
response.flushBuffer();
}
Details at: https://twilblog.github.io/java/spring/rest/file/stream/2015/08/14/return-a-file-stream-from-spring-rest.html

Get request body of my request

How do I get the actual body of the request I am about to make?
Invocation i = webTarget.path("somepath")
.request(MediaType.APPLICATION_JSON)
.buildPut(Entity.entity(account, MediaType.APPLICATION_JSON));
log.debug(i.... ); // I want to log the request
You could try to wrap the OutputStream for the entity. First, use a javax.ws.rs.client.ClientRequestFilter to set a custom OutputStream on the ClientRequestContext.
Client client = ClientBuilder.newClient().register(MyLoggingFilter.class);
public class MyLoggingOutputStreamWrapper extends OutputStream{
static final Logger logger = Logger.getLogger(...);
ByteArrayOutputStream myBuffer = new ...
private OutputStream target;
public MyLoggingOutputStreamWrapper(OutputStream target){ ...
// will be smarter to implement write(byte [], int, int) and call it from here
public void write(byte [] data){
myBuffer.write(data);
target.write(data);
}
... // other methods to delegate to target, especially the other write method
public void close(){
// not sure, if converting the buffer to a string is enough. may be in a different encoding than the platform default
logger.log(myBuffer.toString());
target.close();
}
}
@Provider
public class MyLoggingFilter implements ClientRequestFilter{
// implement the ClientRequestFilter.filter method
@Override
public void filter(ClientRequestContext requestContext) throws IOException {
requestContext.setEntityStream(new MyLoggingOutputStreamWrapper(requestContext.getEntityStream()));
}
}
I'm not sure at which point the OutputStream is used to serialize the data. It could be at the moment you invoke buildPut(), but more likely it happens on the fly when the request is actually executed.
Another approach would be getting the underlying HttpClient and registering some listener there to get the body; a rough sketch follows.
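A sketch of that idea (my own illustration, not from the original answer), using Apache HttpClient 4.x directly; note it only sees the body for repeatable entities, and plugging this client into Jersey requires the Apache connector:
import java.nio.charset.StandardCharsets;
import org.apache.http.HttpEntityEnclosingRequest;
import org.apache.http.HttpRequestInterceptor;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class LoggingHttpClientFactory {

    public static CloseableHttpClient create() {
        HttpRequestInterceptor logBody = (request, context) -> {
            if (request instanceof HttpEntityEnclosingRequest) {
                HttpEntityEnclosingRequest withEntity = (HttpEntityEnclosingRequest) request;
                if (withEntity.getEntity() != null && withEntity.getEntity().isRepeatable()) {
                    // safe to consume here, the entity can be written again afterwards
                    System.out.println(EntityUtils.toString(withEntity.getEntity(), StandardCharsets.UTF_8));
                }
            }
        };
        return HttpClients.custom().addInterceptorLast(logBody).build();
    }
}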
I had a similar problem. I couldn't use the Jersey LoggingFilter (and the new LoggingFeature in 2.23) because I needed to customize the output. For using the other options you can see this post: Jersey: Print the actual request
I've simplified what I did for brevity. It is pretty similar to the original answer, but I adapted the Jersey LoggingStream (it is an internal class you can't access) and took out the ability to log up to a maximum size.
You have a class that extends the OutputStream so you can capture the entity in it. It will write to your OutputStream as well as the original.
public class MyLoggingStream extends FilterOutputStream
{
private final ByteArrayOutputStream baos = new ByteArrayOutputStream();
public MyLoggingStream(final OutputStream inner)
{
super(inner);
}
public String getString(final Charset charset)
{
final byte[] entity = baos.toByteArray();
return new String(entity, charset);
}
@Override
public void write(final int i) throws IOException
{
baos.write(i);
out.write(i);
}
}
Then you have a filter class. It was important for my use case that I was able to grab the entity and log it separately (I've put it as println below for simplicity). In Jersey's LoggingFilter and LoggingFeature the entity gets logged by the Interceptor, so you can't capture it.
@Provider
public class MyLoggingClientFilter implements ClientRequestFilter, ClientResponseFilter, WriterInterceptor
{
protected static String HTTPCLIENT_START_TIME = "my-http-starttime";
protected static String HTTPCLIENT_LOG_STREAM = "my-http-logging-stream";
@Context
private ResourceInfo resourceInfo;
public void filter(final ClientRequestContext requestContext) throws IOException
{
requestContext.setProperty(HTTPCLIENT_START_TIME, System.nanoTime());
final OutputStream stream = new MyLoggingStream(requestContext.getEntityStream());
requestContext.setEntityStream(stream);
requestContext.setProperty(HTTPCLIENT_LOG_STREAM, stream);
}
public void filter(final ClientRequestContext requestContext, final ClientResponseContext responseContext)
{
StringBuilder builder = new StringBuilder("--------------------------").append(System.lineSeparator());
long startTime = (long)requestContext.getProperty(HTTPCLIENT_START_TIME);
final double duration = (System.nanoTime() - startTime) / 1_000_000.0;
builder.append("Response Time: ").append(duration);
if(requestContext.hasEntity())
{
final MyLoggingStream stream = (MyLoggingStream)requestContext.getProperty(HTTPCLIENT_LOG_STREAM);
String body = stream.getString(MessageUtils.getCharset(requestContext.getMediaType()));
builder.append(System.lineSeparator()).append("Entity: ").append(body);
}
builder.append(System.lineSeparator()).append("--------------------------");
System.out.println(builder.toString());
requestContext.removeProperty(HTTPCLIENT_START_TIME);
requestContext.removeProperty(HTTPCLIENT_LOG_STREAM);
}
@Override
public void aroundWriteTo(WriterInterceptorContext context) throws IOException, WebApplicationException
{
// This forces the data to be written to the output stream
context.proceed();
}
}
