I'm using Jersey on both the server and client of a web application. On the server I have interceptors as described in https://jersey.java.net/documentation/latest/filters-and-interceptors.html to handle GZIP compression going out and coming in. On the server side it's easy enough to select which resource methods are compressed using the @Compress annotation. However, if I also want to selectively compress entities from the client to the server, what's the best way to do that?
I had started adding a Content-Encoding: x-gzip header to the request, but my client-side interceptor does not see that header (presumably because it's not an official client-side header).
Before you point to section 10.6 of the Jersey documentation, note that this works for the server side. Although I could do something similar on the client, I don't want to restrict it by URL. I'd rather control the compression flag as close to the request as possible (i.e. via a header).
Here's what I have so far, but it does not work since my header is removed:
class GzipWriterClientInterceptor implements WriterInterceptor {

    private static final Set<String> supportedEncodings =
            new GZipEncoder().getSupportedEncodings(); // supports gzip and x-gzip

    @Override
    public void aroundWriteTo(WriterInterceptorContext context)
            throws IOException, WebApplicationException {
        if (supportedEncodings.contains(context.getHeaders().getFirst(HttpHeaderConstants.CONTENT_ENCODING_HEADER))) {
            System.out.println("ZIPPING DATA");
            final OutputStream outputStream = context.getOutputStream();
            context.setOutputStream(new GZIPOutputStream(outputStream));
        } else {
            // remove it since we won't actually be compressing the data
            context.getHeaders().remove(HttpHeaderConstants.CONTENT_ENCODING_HEADER);
        }
        context.proceed();
    }
}
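For context, the interceptor is registered on the client roughly like this (a sketch, assuming a plain ClientBuilder setup; GZipEncoder is only needed if you also want gzip-compressed responses decoded):

Client client = ClientBuilder.newClient()
        .register(GzipWriterClientInterceptor.class) // compresses entities when the header is set
        .register(GZipEncoder.class);                // decodes gzip-compressed responses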
Sample Request:
Response response = getBaseTarget().path(getBasePath()).path(graphic.uuid.toString())
        .request(DEFAULT_MEDIA_TYPE)
        .header(HttpHeaderConstants.CONTENT_ENCODING_HEADER, MediaTypeConstants.ENCODING_GZIP)
        .put(Entity.entity(graphic, DEFAULT_MEDIA_TYPE));
I also have a logging filter that shows all the request headers. I've simplified the above, but all other headers I add are logged.
Related
I'm trying to reproduce some behavior provided by my front-end app by using a Java HTTP client.
I'm trying to send (stream) binary data from the HTTP client to the server over a PUT request, so the content type is application/octet-stream. I have to send an unknown amount of incoming data.
Firstly, I used Apache HttpClient because it can handle digest authentication easily (that's a requirement). With it, I use a ContentProducer, which enables writing directly to the OutputStream.
Below is an example:
HttpPut sendDataReq = new HttpPut(HTTP_URI);

ContentProducer myContentProducer = new ContentProducer() {
    @Override
    public void writeTo(OutputStream out) throws IOException {
        out.write("ContentProducer rocks!".getBytes());
        out.write(("Time requested: " + new Date()).getBytes());
    }
};

HttpEntity myEntity = new EntityTemplate(myContentProducer);
sendDataReq.setEntity(myEntity);
HttpResponse response = httpClient.execute(sendDataReq);
I expect this piece of code to stream the request (AND NOT THE RESPONSE) from client to server.
Using Wireshark, I'm able to see my PUT request, but it is sent over the TCP protocol and then nothing. When I listen with my front-end web app, I can see that the PUT request is sent over HTTP with a content length of 0, and the data is then sent byte by byte (packets of a few bytes) over HTTP with a log info: CONTINUATION.
I also tried HttpURLConnection, but it has no digest authentication implementation, so I gave up on it.
Any hints on what is wrong in my ContentProducer and how to accomplish this? Should I use another Java HTTP client? I can provide Wireshark logs of what is expected and what I get.
I'm calling a remote web service and am occasionally getting the following error:-
Error caught: com.sun.xml.internal.ws.server.UnsupportedMediaException: Unsupported Content-Type: text/plain;charset=ISO-8859-1 Supported ones are: [text/xml]
Does anyone know how to get the actual message that was returned by the server? It sounds like it might be text or a web page but I'm unable to get it.
I can catch the UnsupportedMediaException but I don't know what to do to extract the actual response. Here's the code:-
val selectedDate = exchange.`in`.getHeader("selectedDate").toString()
val accountNumberMinor = exchange.`in`.getHeader("accountNumberMinor").toString()
val accountNumberMajor = exchange.`in`.getHeader("accountNumberMajor").toString()
val accountIdentifier = if (accountNumberMinor.trim() != "") accountNumberMinor else accountNumberMajor
val effectiveDate = SimpleDateFormat("yyyy-MM-dd").parse(selectedDate)
val response = webRequest.getResponse(accountIdentifier, selectedDate)
val result = response.result as FixedIncomeCurrencyForwardAccountV10Result
Thanks,
Adam
An HTML page usually indicates a server error, yes. It's probably a static error page (like a 404 or 5xx). It could even be an error in your request that should be returned as a SOAP Fault, but is not implemented as such by the specific server.
Sometimes the server does communicate a valid SOAP (Fault) message, but the Content-Type header is just wrong. In that case you're better off rewriting the Content-Type of the response with a proxy server. See for reference on this subject:
SOAP unsupported media exception text/plain Supported ones are: [text/xml]
So, what can you do to view the HTML content?
With JAX-WS you can enable all HTTP web service traffic to be logged to System.out with the following vm options:
-Dcom.sun.xml.ws.transport.http.client.HttpTransportPipe.dump=TRUE
-Dcom.sun.xml.internal.ws.transport.http.client.HttpTransportPipe.dump=TRUE
-Dcom.sun.xml.ws.transport.http.HttpAdapter.dump=TRUE
-Dcom.sun.xml.internal.ws.transport.http.HttpAdapter.dump=TRUE
-Dcom.sun.xml.internal.ws.transport.http.HttpAdapter.dumpTreshold=999999
See for references:
https://www.rgagnon.com/javadetails/java-logging-with-jax-ws.html
https://www.javatips.net/api/com.sun.xml.ws.transport.http.client.httptransportpipe
Now, this will dump all http requests and responses, but you might only be interested in the ones where you don't get soap/xml.
So, what else can you do?
You could set these options programmatically and re-send the request when you catch the UnsupportedMediaException, but by the time you do this the error might have disappeared. Note that these properties are cached, so setting them needs to go through com.sun.xml.ws.transport.http.client.HttpTransportPipe#setDump(Boolean)
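For example, a minimal sketch of flipping the dump flag programmatically with the setter mentioned above (the -D properties are only read once, so setting them later has no effect):

// somewhere in your error handling / retry code
com.sun.xml.ws.transport.http.client.HttpTransportPipe.setDump(true);
try {
    // re-send the failing request here so it gets logged to System.out
    retryWebServiceCall(); // hypothetical retry hook; substitute your actual call
} finally {
    com.sun.xml.ws.transport.http.client.HttpTransportPipe.setDump(false);
}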
If you're willing to switch to the external JAX-WS runtime, you could also create your own com.sun.xml.ws.api.pipe.TransportTubeFactory, since jaxws-rt can load custom instances of this factory. I have successfully created my own TransportTubeFactory that uses a custom com.sun.xml.ws.transport.http.client.HttpTransportPipe (by extending it and overriding processRequest) that reads the HTTP response from the com.sun.xml.ws.api.pipe.Codec upon catching the UnsupportedMediaException. By wrapping the Codec you can store the input stream in the decode method call.
This runtime is nearly identical to the internal runtime, and should be fully compatible.
This may also work with the internal classes from the Java runtime, but since those are located in rt.jar it's difficult to depend on them when building your project, so I would advise switching to the external JAX-WS runtime.
What you then do with the input stream (which is the body of the HTTP response at the moment the UnsupportedMediaException was caught) is up to you.
Note that you can also rewrite most* content type headers in code with this codec wrapper.
See for reference how to add your own implementation of this factory via META-INF/services here:
https://www.javadoc.io/doc/com.sun.xml.ws/jaxws-rt/latest/com.sun.xml.ws/com/sun/xml/ws/api/pipe/TransportTubeFactory.html
In short:
Create a file in META-INF/services called com.sun.xml.ws.api.pipe.TransportTubeFactory
The contents of this file should be a single line with the full class name of your custom factory, for example my.soap.transport.MyTransportTubeFactory
Note: if you're using the classes from the Java runtime instead of jaxws-rt, use com.sun.xml.internal.ws as the package for everything in this post that references com.sun.xml.ws.
*Note: newer versions of this runtime (jaxws-rt-2.3.x or JRE 8+ internal) throw a different exception on text/html responses, sadly before Codec.decode is called. So in that case you would have to copy more code into your custom HttpTransportPipe. text/plain currently still works though.
Some snippets of my code:
public class TransportTubeFactoryImpl extends TransportTubeFactory {

    @Override
    public Tube doCreate(ClientTubeAssemblerContext context) {
        String scheme = context.getAddress().getURI().getScheme();
        if (scheme != null) {
            if (scheme.equalsIgnoreCase("http") || scheme.equalsIgnoreCase("https")) {
                CodecWrapper codec = new CodecWrapper(context.getCodec());
                return new HttpTransportPipeImpl(codec, context.getBinding());
            }
        }
        throw new WebServiceException("Unsupported endpoint address: " + context.getAddress());
    }
}
public class CodecWrapper implements Codec {

    private final Codec wrapped;

    public CodecWrapper(Codec wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public void decode(InputStream in, String contentType, Packet response) throws IOException {
        copyInputStream(in); // todo: implement this
        wrapped.decode(in, contentType, response);
    }
}
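The copyInputStream call is left as a todo above; one possible way to fill it in (my sketch, not the author's code, using plain java.io buffering) is to read the whole body into memory, keep the copy around for when the exception is caught, and hand the wrapped codec a fresh stream over the same bytes:

private byte[] lastBody; // copy of the most recent response body, readable after the exception

@Override
public void decode(InputStream in, String contentType, Packet response) throws IOException {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    byte[] chunk = new byte[8192];
    int read;
    while ((read = in.read(chunk)) != -1) {
        buffer.write(chunk, 0, read);
    }
    lastBody = buffer.toByteArray();
    // hand the real codec an equivalent, re-readable stream over the same bytes
    wrapped.decode(new ByteArrayInputStream(lastBody), contentType, response);
}

public byte[] getLastBody() {
    return lastBody;
}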
public class HttpTransportPipeImpl extends HttpTransportPipe {

    private final CodecWrapper codec;

    public HttpTransportPipeImpl(CodecWrapper codec, WSBinding binding) {
        super(codec, binding);
        this.codec = codec;
    }

    @Override
    public NextAction processRequest(Packet request) {
        try {
            return super.processRequest(request);
        } catch (UnsupportedMediaException ex) {
            // todo: here you can access the stored data from the codec wrapper
            throw ex; // rethrow (or translate) after inspecting the body
        }
    }
}
I have also created a complete working demonstration of this principle on my github: https://github.com/s-lindenau/SoapContentTypeDemo
If you still have the option to switch to a completely different client library, you could also check Apache CXF:
How can I make jaxws parse response without checking Content-Type header
I followed the simple example shown at GitHub: LittleProxy and added the following in the clientToProxyRequest(HttpObject httpObject) method.
public HttpResponse clientToProxyRequest(HttpObject httpObject)
{
    if (httpObject instanceof DefaultHttpRequest)
    {
        DefaultHttpRequest httpRequest = (DefaultHttpRequest) httpObject;
        logger.info(httpRequest.getUri());
        logger.info(httpRequest.toString());

        // How to access the POST body data?
        HttpPostRequestDecoder d = new HttpPostRequestDecoder(httpRequest);
        d.getBodyHttpDatas(); // NotEnoughDataDecoderException
    }
    return null;
}
The logger reports this; IMO only these two headers are relevant here. It's a POST request and there is content:
POST http://www.... HTTP/1.1
Content-Length: 522
Looking into the Netty API documentation, the HttpPostRequestDecoder seems promising, but I get a NotEnoughDataDecoderException. The Netty JavaDoc says the following, but I do not know how to offer data:
This getMethod returns a List of all HttpDatas from body.
If chunked, all chunks must have been offered using offer() getMethod. If not, NotEnoughDataDecoderException will be raised.
In fact I'm also unsure if this is the right approach to get the POST data in the proxy.
Try adding this to your HttpFiltersSourceAdapter to avoid the NotEnoughDataDecoderException:
@Override
public int getMaximumRequestBufferSizeInBytes() {
    return 1048576;
}
1048576 here is the maximum length of the aggregated content. See POSTing data to netty with Apache HttpClient.
This enables decompression and aggregation of content; see the source code in org.littleshoot.proxy.impl.ClientToProxyConnection:
// Enable aggregation for filtering if necessary
int numberOfBytesToBuffer = proxyServer.getFiltersSource()
.getMaximumRequestBufferSizeInBytes();
if (numberOfBytesToBuffer > 0) {
aggregateContentForFiltering(pipeline, numberOfBytesToBuffer);
}
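With buffering enabled, the filter receives the aggregated request, so (as a rough sketch, assuming Netty's FullHttpRequest and form-encoded content) the decoder has the whole body available and getBodyHttpDatas() no longer throws:

@Override
public HttpResponse clientToProxyRequest(HttpObject httpObject) {
    if (httpObject instanceof FullHttpRequest) {
        FullHttpRequest request = (FullHttpRequest) httpObject;
        HttpPostRequestDecoder decoder = new HttpPostRequestDecoder(request);
        try {
            for (InterfaceHttpData data : decoder.getBodyHttpDatas()) {
                if (data instanceof Attribute) {
                    // form field name/value pairs from the POST body
                    logger.info(data.getName() + " = " + ((Attribute) data).getValue());
                }
            }
        } catch (IOException e) {
            logger.error("Could not read POST attribute", e);
        } finally {
            decoder.destroy();
        }
    }
    return null;
}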
I have a Spring REST application that runs inside an embedded Jetty container.
On the client I use (or try to use) RestTemplate.
Use case:
Given an InputStream (I don't have a File), I want to send it to the REST service.
The InputStream can be quite large (so no byte[]!).
What I've tried so far:
Added StandardServletMultipartResolver to the dispatcher context;
On servlet registration, executed:
ServletRegistration.Dynamic dispatcher = ...
MultipartConfigElement multipartConfigElement = new MultipartConfigElement("D:/temp");
dispatcher.setMultipartConfig(multipartConfigElement);
On the client:
restTemplate.getMessageConverters().add(new FormHttpMessageConverter());

MultiValueMap<String, Object> parts = new LinkedMultiValueMap<String, Object>();
parts.add("attachmentData", new InputStreamResource(data) {
    // hacks ...
    @Override
    public String getFilename() {
        // avoid null file name
        return "attachment.zip";
    }

    @Override
    public long contentLength() throws IOException {
        // avoid calling getInputStream() twice
        return -1L;
    }
});

ResponseEntity<Att> saved = restTemplate.postForEntity(url, parts, Att.class);
On the server:
@RequestMapping("/attachment")
public ResponseEntity<Att> saveAttachment(@RequestParam("attachmentData") javax.servlet.http.Part part) {
    try {
        InputStream is = part.getInputStream();
        // consume is
        is.close();
        part.delete();
        return new ResponseEntity<Att>(att, HttpStatus.CREATED);
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
What is happening:
The uploaded InputStream is stored successfully in the configured temp folder (MultiPart1970755229517315824), and the Part part parameter is correctly injected into the handler method.
The delete() method does not delete the file (something still has open handles on it).
Anyway, it looks very ugly.
Is there a smoother solution?
You want to use HTTP's Chunked Transfer Coding. You can enable that by setting SimpleClientHttpRequestFactory.setBufferRequestBody(false). See SPR-7909.
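A minimal sketch of that configuration (the factory and setter are standard Spring classes; the rest is plumbing):

SimpleClientHttpRequestFactory requestFactory = new SimpleClientHttpRequestFactory();
// stream the body with chunked transfer encoding instead of buffering it in memory
requestFactory.setBufferRequestBody(false);
RestTemplate restTemplate = new RestTemplate(requestFactory);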
You should rather use byte[] and write a wrapper around the web service to send the "large string" in chunks. Add a parameter to the web service indicating the "contentID" of the content, so that the other side knows which half-filled "bucket" this part belongs to. Another parameter, "chunkID", would help in sequencing the chunks on the other side. Finally, a third parameter, "isFinalChunk", would be set if whatever you are sending is the final piece. This is pretty non-fancy functionality, achievable in less than 100 lines of code.
The only issue with this is that you end up making "n" calls to the web service rather than just one, which adds up the connection delays etc. For realtime stuff, some more network QoS is required, but otherwise you should be fine.
I think this is much simpler, and once you have your own wrapper class to do this simple chopping and gluing, it scales to a great extent if your server can handle multiple web service calls.
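A rough sketch of such a wrapper, assuming a hypothetical uploadChunk(contentId, chunkId, isFinalChunk, data) operation on a MyWebService client (substitute your real endpoint and types):

public void uploadInChunks(InputStream in, String contentId, MyWebService service) throws IOException {
    byte[] buffer = new byte[64 * 1024]; // 64 KB per call; tune to taste
    int chunkId = 0;
    int read = in.read(buffer);
    while (read != -1) {
        byte[] chunk = Arrays.copyOf(buffer, read); // the current chunk, safe to send
        int next = in.read(buffer);                 // look ahead: -1 means this was the last chunk
        service.uploadChunk(contentId, chunkId++, next == -1, chunk);
        read = next;
    }
}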
I need to send an HTTP POST with the body gzipped. The server also accepts non-gzipped data but would prefer it gzipped, so I'm trying to convert some existing working code to use gzip.
The data is currently set with
httpMethod.setEntity(new UrlEncodedFormEntity(nameValuePairs));
I've tried subclassing HttpEntityWrapper:
static class GzipWrapper extends HttpEntityWrapper {

    public GzipWrapper(HttpEntity wrapped) {
        super(wrapped);
    }

    public void writeTo(OutputStream outstream) throws IOException {
        GZIPOutputStream gzip = new GZIPOutputStream(outstream);
        super.writeTo(gzip);
    }
}
and changed to
httpMethod.setEntity(new GzipWrapper(
        new UrlEncodedFormEntity(nameValuePairs)));
and added
if (!httpMethod.containsHeader("Accept-Encoding")) {
    httpMethod.addHeader("Accept-Encoding", "gzip");
}
But now my request just times out. I think there must be something wrong with my GzipWrapper, but I'm not sure what.
On another note, I looked at the http://hc.apache.org/httpcomponents-client-ga/httpclient/examples/org/apache/http/examples/client/ClientGZipContentCompression.java example. Aside from the fact that I don't like interceptors (because they make program flow difficult to follow), it doesn't make sense to me: the request header is set to tell the server to accept gzip data, but nowhere does it actually gzip-encode any data; it only unzips the response.
(1) The GzipWrapper implementation is wrong. It transforms the entity content when writing it out to the output stream, but it still returns the Content-Length of the wrapped entity, causing the server to expect more input than the client actually transmits (see the sketch after these points).
(2) You completely misunderstand the purpose of the Accept-Encoding header: it tells the server which content encodings the client can accept in the response; it says nothing about how the request body is encoded.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
(3) The ClientGZipContentCompression sample is correct. It does not compress the outgoing request entity because it is not meant to do so. See point (2).
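To address point (1), a hedged sketch of what a corrected wrapper could look like (one option, not the only one): report an unknown length so the request is sent chunked, advertise Content-Encoding: gzip, and finish the gzip stream so its trailer gets written.

static class GzipCompressingWrapper extends HttpEntityWrapper {

    public GzipCompressingWrapper(HttpEntity wrapped) {
        super(wrapped);
    }

    @Override
    public Header getContentEncoding() {
        return new BasicHeader("Content-Encoding", "gzip");
    }

    @Override
    public long getContentLength() {
        return -1; // compressed length is unknown up front, so force chunked transfer
    }

    @Override
    public boolean isChunked() {
        return true;
    }

    @Override
    public void writeTo(OutputStream outstream) throws IOException {
        GZIPOutputStream gzip = new GZIPOutputStream(outstream);
        super.writeTo(gzip);
        gzip.finish(); // write the gzip trailer without closing the underlying stream
    }
}

Newer HttpClient versions ship org.apache.http.client.entity.GzipCompressingEntity, which does essentially this for you.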