I have this SOAP handler:
@Slf4j
public class ResponseInterceptor implements SOAPHandler<SOAPMessageContext> {

    @Override
    public boolean handleMessage(SOAPMessageContext context) {
        try {
            SOAPMessage message = context.getMessage();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            message.writeTo(out);
            String strMsg = new String(out.toByteArray());
        } catch (SOAPException | IOException e) {
            e.printStackTrace();
        }
        return false;
    }

    // other handler methods omitted
}
But this handler only handles requests. Is there a similar way to handle responses?
EDIT:
My task is the following: I need to handle all raw responses from a SOAP service, filter them, and send them to Apache Kafka. I do not want any unmarshalling step; I want to send the raw response to Kafka.
EDIT2:
I wrote an interceptor:
@Slf4j
public class ResponseInterceptor extends AbstractPhaseInterceptor<Message> {

    public ResponseInterceptor() {
        super(Phase.PRE_UNMARSHAL);
    }

    @Override
    public void handleMessage(Message message) throws Fault {
        try {
            SOAPMessage soapMessage = message.getContent(SOAPMessage.class);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            soapMessage.writeTo(out);
            String strMsg = new String(out.toByteArray());
            message.getInterceptorChain().abort();
        } catch (SOAPException | IOException e) {
            e.printStackTrace();
        }
    }
}
But if I call message.getInterceptorChain().abort(); I get an exception in the service. I just need to stop this response here and not deliver it to the web service.
CXF interceptors are not, in and of themselves, linked to requests or responses, for at least two reasons:
Many interceptors can work on both sides (e.g. logging the SOAP payload)
There is a symmetry in what is to be done on the request/response side of things, with respect to the client/server nature of the app.
So the way CXF works is that interceptors are bound to "chains", which CXF creates and manages at runtime, and which account for all combinations of the above: IN, OUT, IN_FAULT, OUT_FAULT. You can read all about them here.
If your current interceptor handles "requests", that means one of two things:
If your application is a server, then your interceptor is bound to the "IN" chain
If your application is a client, then your interceptor is bound to the "OUT" chain
If you want to handle responses as well as requests, you need to find how/where your custom interceptors are bound to a chain, which is usually done in the CXF configuration file (see "writing and configuring interceptors" in the above link).
Many people use CXF with Spring configuration and therefore add interceptors at the whole-CXF (bus) level, like so:
<bean id="MyInterceptor" class="demo.interceptor.MyInterceptor"/>
<!-- We are adding the interceptors to the bus as we will have only one endpoint/service/bus. -->
<cxf:bus>
<cxf:inInterceptors>
<ref bean="MyInterceptor"/>
</cxf:inInterceptors>
<cxf:outInterceptors>
<ref bean="MyInterceptor"/>
</cxf:outInterceptors>
</cxf:bus>
But it can also be done at the endpoint level.
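For completeness, here is a rough sketch of the programmatic equivalent, assuming CXF's JAX-WS frontend is used to publish the service (MyServiceImpl and the address are placeholders, and MyInterceptor is the interceptor from the XML above):
import javax.xml.ws.Endpoint;
import org.apache.cxf.jaxws.EndpointImpl;

public class ServerStarter {
    public static void main(String[] args) {
        // with CXF on the classpath, Endpoint.publish returns a CXF EndpointImpl
        EndpointImpl endpoint = (EndpointImpl) Endpoint.publish(
                "http://localhost:8080/services/MyService", new MyServiceImpl());
        endpoint.getInInterceptors().add(new MyInterceptor());   // requests (server side)
        endpoint.getOutInterceptors().add(new MyInterceptor());  // responses (server side)
    }
}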
Further reading : How to catch any exceptions in a SOAP webservice method?
Accessing the contents before (un)marshalling
I could expand a lot, but I suggest you look at org.apache.cxf.interceptor.LoggingInInterceptor (or its counterpart for "Out" messages); they are as good an example as you will find of how to access the raw content without breaking anything.
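Since your stated goal is to grab the raw response, send it to Kafka, and still let normal processing continue, the usual pattern (the same idea the logging interceptors rely on) is to copy the stream and put a fresh one back, rather than aborting the chain. Below is only a rough sketch, not code from your project or from CXF itself: the class name, the choice of the RECEIVE phase and the Kafka comment are my assumptions.
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import org.apache.cxf.helpers.IOUtils;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.io.CachedOutputStream;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;

public class RawPayloadCaptureInterceptor extends AbstractPhaseInterceptor<Message> {

    public RawPayloadCaptureInterceptor() {
        super(Phase.RECEIVE); // early in the IN chain, before unmarshalling
    }

    @Override
    public void handleMessage(Message message) throws Fault {
        InputStream is = message.getContent(InputStream.class);
        if (is == null) {
            return;
        }
        try {
            // drain the original stream into a cache
            CachedOutputStream cache = new CachedOutputStream();
            IOUtils.copy(is, cache);
            is.close();
            cache.flush();

            String rawPayload = new String(cache.getBytes(), StandardCharsets.UTF_8);
            // ... filter rawPayload and publish it to Kafka here (producer wiring omitted) ...

            // put an unread copy of the stream back so the rest of the chain still works
            message.setContent(InputStream.class, cache.getInputStream());
        } catch (Exception e) {
            throw new Fault(e);
        }
    }
}
Registered on the IN chain of a client (or adapted for the OUT chain of a server, where the content is an OutputStream), this lets you tap the raw payload without the abort() call that was breaking your service.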
Related
I am sending JMS messages with XML payloads and a custom header contentType, with values text/json and text/xml, using JMeter.
My Spring integration configuration looks like this:
<jms:message-driven-channel-adapter channel="jmsInChannel" destination-name="queue.demo" connection-factory="jmsConnectionFactory1" />
<int:channel id="jmsInChannel" />
<int:header-value-router input-channel="jmsInChannel" header-name="contentType" default-output-channel="nullChannel">
<int:mapping value="text/json" channel="jsonTransformerChannel" />
<int:mapping value="text/xml" channel="xmlTransformerChannel" />
</int:header-value-router>
Up to this point everything works perfectly; the messages are successfully routed to their respective transformers.
My problem is with the XML payloads. I first used a JAXB Unmarshaller along with the <ixml:unmarshalling-transformer ../> provided by http://www.springframework.org/schema/integration/xml.
I could get the payload, but the message could no longer be processed as a JMS message afterwards; it became a pure POJO. So I lost the headers, and I couldn't use <int: ../> components without serializing the POJO, which is not what I wanted to achieve.
I found a work-around where I defined my own unmarshalling bean in Java, like so:
<int:channel id="xmlTransformerChannel" />
<int:transformer input-channel="xmlTransformerChannel" ref="xmlMsgToCustomerPojoTransformer" output-channel="enrichInChannel" />
method:
@SuppressWarnings("rawtypes")
public Message transform(String message) {
    logger.info("Message Received \r\n" + message);
    try {
        MyModel myModel = (MyModel) unmarshaller.unmarshal(new StreamSource(new StringReader(message)));
        return MessageBuilder.withPayload(myModel).build();
    } catch (XmlMappingException e) {
        return MessageBuilder.withPayload(e).build();
    } catch (Exception e) {
        return MessageBuilder.withPayload(e).build();
    }
}
With this I could successfully process the message as a Spring Integration Message, but I lost the original custom JMS headers.
In contrast, all I had to do to transform the JSON payloads, keep the Message format AND preserve my custom headers, was this XML configuration:
<int:channel id="jsonTransformerChannel" />
<int:json-to-object-transformer input-channel="jsonTransformerChannel" output-channel="enrichInChannel" type="com.alrawas.ig5.MyModel" />
My question is, how to keep the original custom JMS headers after unmarshalling the xml payload?
UPDATE:
I did try to write the XML transformer this way, taking a Message as input instead of only a String. It did not throw an exception in this method, but it did later, in the routing phase:
public Message<?> transform(Message<String> message) {
    logger.info("Message Received \r\n" + message);
    try {
        MyModel myModel = (MyModel) unmarshaller.unmarshal(new StreamSource(new StringReader(message.getPayload())));
        return (Message<MyModel>) MessageBuilder.withPayload(myModel).copyHeaders(message.getHeaders()).build();
    } catch (XmlMappingException e) {
        return MessageBuilder.withPayload(e).build();
    } catch (Exception e) {
        return MessageBuilder.withPayload(e).build();
    }
}
I ran into a problem in a component I use later in this flow:
<int:object-to-json-transformer input-channel="outr" output-channel="outch" />
<int:router method="route" input-channel="outch" default-output-channel="nullChannel">
<bean class="com.alrawas.ig5.MyCustomRouter" />
</int:router>
My route method threw a "Cannot cast String to com.alrawas.ig5.MyModel" exception:
public class MyCustomRouter {
    public String route(Message<MyModel> myModel) {
        Integer tenNumber = myModel.getPayload().getNumber(); // <-- Cast Exception here
        System.out.println(myModel);
        return (tenNumber % 10 == 0) ? "stayLocal" : "goRemote";
    }
}
This Cast Exception only happens after unmarshalling xml, JSON payloads work fine without losing headers or throwing cast exceptions.
UPDATE:
Check my answer below: contentType was not really a custom header
When you develop a custom transformer, keep in mind that returning a Message<?> gives you full control over its content.
When a transformer method returns a Message, the framework does not populate any request headers on it.
So your public Message transform(String message) must accept a Message as input, and you need to copy all the headers from the request message to the reply message; there are appropriate methods on the MessageBuilder for that.
On the other hand, it is entirely unclear why you need to return a Message here at all, since everything in Spring Integration is going to be wrapped into a Message before being sent to the output channel.
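To make that concrete, here is a minimal sketch of the simpler variant: return the plain POJO and let Spring Integration wrap it, which copies the inbound headers (including your custom JMS headers) onto the reply message automatically. The class name and constructor wiring are mine; the unmarshaller is assumed to be the same JAXB one you already configure.
import java.io.IOException;
import java.io.StringReader;
import javax.xml.transform.stream.StreamSource;
import org.springframework.oxm.Unmarshaller;

public class XmlMsgToCustomerPojoTransformer {

    private final Unmarshaller unmarshaller;

    public XmlMsgToCustomerPojoTransformer(Unmarshaller unmarshaller) {
        this.unmarshaller = unmarshaller;
    }

    public MyModel transform(String payload) throws IOException {
        // returning the POJO (not a Message) lets the framework build the reply
        // message and copy the request headers for you
        return (MyModel) unmarshaller.unmarshal(new StreamSource(new StringReader(payload)));
    }
}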
The takeaway:
The ClassCastException that was happening later, in the last router, occurred because I had named my custom header contentType, which is a default JMS header used internally. When I changed its value to text/xml, that last router (String route(Message<MyModel> myModel)) tried to cast the JSON into MyModel and failed, because the header was no longer application/json as it should have been; it was text/xml. That is what caused the ClassCastException.
So I got rid of the custom XML unmarshalling bean, renamed my custom header, and used <ixml:unmarshalling-transformer ../>.
It worked using XML configuration, without additional custom Java beans.
I have a Camel CxfEndpoint service defined. Receiving messages works fine, but the response/acknowledgement message I am producing has a problem: the WS-Security parts/actions from the incoming message are left in, so in the response I have my own WS-Security parts (Signature, Timestamp) plus the WS-Security parts from the caller's original message.
The acknowledgement message is not accepted by the original caller, and I suspect that this is the problem (that I have their Signature with BinarySecuritySessionToken as well as our own).
The Camel route is rather simple for trying to resolve the issue:
from("myEndpoint")
.transacted()
.process(new PreProcessor())
.to("mock:end")
I have defined the Camel CxfEndpoint in the route as:
CxfEndpoint cxfEndpoint = new CxfEndpoint();
cxfEndpoint.setAddress("http://0.0.0.0:8888/services/Service");
cxfEndpoint.setWsdlURL("Service.wsdl");
cxfEndpoint.setCamelContext(camelContext);
....
Example of the problem, two Timestamp elements in the response:
<wsu:Timestamp wsu:Id="TS-6757512FE17DCDC903153191998160526">
<wsu:Created>2018-07-18T13:19:41.605Z</wsu:Created>
<wsu:Expires>2018-07-18T13:24:41.605Z</wsu:Expires>
</wsu:Timestamp>
<u:Timestamp xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" u:Id="uuid-b2a1c0b2-8263-4afc-bc99-f8a46da80ce7-693">
<u:Created>2018-07-18T13:19:42.905Z</u:Created>
<u:Expires>2018-07-18T13:24:42.905Z</u:Expires>
</u:Timestamp>
The general structure of the response message seems to be fine, but I need to strip the WS-Security Action Parts from the message.
Is there a way to strip these parts, or do I need to construct an entirely new message?
Please let me know if you need additional information, thanks.
So I fixed it by adding another Interceptor for removing the Security Header.
I would like to know if this is an acceptable approach or if there is a better solution to this problem.
public class RemoveSecurityHeadersOutInterceptor extends AbstractSoapInterceptor {

    public RemoveSecurityHeadersOutInterceptor() {
        super(Phase.PRE_PROTOCOL);
    }

    @Override
    public void handleMessage(SoapMessage message) throws Fault {
        List<Header> headers = message.getHeaders();
        headers.removeIf(h -> h.getName().getLocalPart().equals("Security"));
    }
}
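And registering it on the endpoint from the route setup shown above; note that getOutInterceptors() on Camel's CxfEndpoint is an assumption about the camel-cxf version in use, so a bus-level registration is shown as a commented fallback:
cxfEndpoint.getOutInterceptors().add(new RemoveSecurityHeadersOutInterceptor());

// fallback: register on the CXF bus so it applies to every outgoing message
// BusFactory.getDefaultBus().getOutInterceptors().add(new RemoveSecurityHeadersOutInterceptor());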
In an incoming SOAP request there is a soap:mustUnderstand="1" attribute in the SOAP header. How can I handle this in my web service? When soap:mustUnderstand="1" it throws an exception; when it is 0 (soap:mustUnderstand="0") it runs as expected.
My partial SOAP request looks like this:
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header xmlns="http://www.xxxxxxx/zzzzz-msg/schema/msg-header-1_0.xsd">
<MessageHeader ResponseRequested="true" version="1.0" Terminate="true" Reverse="true" id="0002P559C1" soap:mustUnderstand="1">
.......
......
I am using Apache CXF for the web service.
Your service should explicitly tell CXF that the given header has been understood and processed.
One way of doing it is registering an implementation of SOAPHandler responsible for the actual processing of your header. In that interface it's important to implement the method Set<QName> getHeaders() and return the set of header QNames that your handler takes care of.
CXF will then treat all those headers as understood.
Example:
in Spring context XML:
<jaxws:endpoint ...>
<jaxws:handlers>
<bean class="example.MySOAPHandler" />
</jaxws:handlers>
</jaxws:endpoint>
in Java code:
public class MySOAPHandler implements SOAPHandler<SOAPMessageContext> {

    public static final String MY_NS_URI = "http://www.xxxxxxx/zzzzz-msg/schema/msg-header-1_0.xsd";
    public static final String MY_HEADER_NAME = "MessageHeader";

    @Override
    public Set<QName> getHeaders() {
        // This will tell CXF that the following headers are UNDERSTOOD
        return Collections.singleton(new QName(MY_NS_URI, MY_HEADER_NAME));
    }

    // other handler methods here
}
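For completeness, the remaining SOAPHandler methods can stay simple; this is only a sketch, the pass-through bodies are my assumption, and your real processing of the MessageHeader block would go into handleMessage:
@Override
public boolean handleMessage(SOAPMessageContext context) {
    // actual processing of the MessageHeader block goes here
    return true; // continue with the rest of the handler chain
}

@Override
public boolean handleFault(SOAPMessageContext context) {
    return true;
}

@Override
public void close(MessageContext context) {
    // nothing to clean up in this sketch
}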
If a header block is annotated with mustUnderstand="1" and the receiver wasn't designed to support the given header, the message shouldn't be processed and a Fault should be returned to the sender (with a soap:MustUnderstand status code). When mustUnderstand="0" or the mustUnderstand attribute isn't present, the receiver can ignore those headers and continue processing. The mustUnderstand attribute plays a central role in the overall SOAP processing model.
For details kindly refer to this link
We have REST services exposed via Spring MVC. We use a HandlerExceptionResolver to log exceptions. We currently log the following:
The exception and its stack trace
The URL
The request headers
It would make debugging easier if we could also log the JSON POST data. Any suggestions on how to get this?
Add this to the class representing the configuration for the application:
import javax.servlet.Filter;
import javax.servlet.http.HttpServletRequest;
import org.springframework.web.filter.AbstractRequestLoggingFilter;
....

@Bean
public Filter loggingFilter() {
    AbstractRequestLoggingFilter f = new AbstractRequestLoggingFilter() {
        @Override
        protected void beforeRequest(HttpServletRequest request, String message) {
            System.out.println("beforeRequest: " + message);
        }

        @Override
        protected void afterRequest(HttpServletRequest request, String message) {
            System.out.println("afterRequest: " + message);
        }
    };
    f.setIncludeClientInfo(true);
    f.setIncludePayload(true);
    f.setIncludeQueryString(true);
    f.setBeforeMessagePrefix("BEFORE REQUEST [");
    f.setAfterMessagePrefix("AFTER REQUEST [");
    f.setAfterMessageSuffix("]\n");
    return f;
}
Note: you may have to comment out f.setIncludePayload(true);
You need a filter that saves the request body as it is being read and provides the saved data to your exception logger later.
Spring contains AbstractRequestLoggingFilter, which does a similar thing. Though it's not directly suitable for your problem, you can use it as a reference to implement your own filter.
There is no easy way to log the payload of the request/response. You can use a Java web filter to intercept all the requests and responses and read the JSON data from the stream. But there is one problem: when you read data from the stream, it is consumed and is no longer available to downstream readers.
Therefore, you have to wrap the actual request and response objects and log only the copied version. We implemented a similar solution as follows, and it satisfied our requirement (a sketch based on Spring's request wrapper follows the links below):
http://www.wetfeetblog.com/servlet-filer-to-log-request-and-response-details-and-payload/431
http://angelborroy.wordpress.com/2009/03/04/dump-request-and-response-using-javaxservletfilter/
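A rough sketch of the wrapper idea using Spring's ContentCachingRequestWrapper instead of a hand-written wrapper (the filter name and the plain System.out logging are placeholders; you would register it like the filter bean shown above):
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.filter.OncePerRequestFilter;
import org.springframework.web.util.ContentCachingRequestWrapper;

public class PayloadLoggingFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain filterChain) throws ServletException, IOException {
        // wrap the request so the body can be read again after the controller consumed it
        ContentCachingRequestWrapper wrapped = new ContentCachingRequestWrapper(request);
        try {
            filterChain.doFilter(wrapped, response);
        } finally {
            // the cached body is only populated once something downstream has read the stream
            String payload = new String(wrapped.getContentAsByteArray(), StandardCharsets.UTF_8);
            System.out.println("Request payload: " + payload);
        }
    }
}
Since the wrapper is what gets passed down the chain, your HandlerExceptionResolver can cast the incoming request back to ContentCachingRequestWrapper and include the same cached bytes in its log entry.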
How do I implement a java socket in tapestry5?
What I want to do is create a socket which I can send an XmlHttpRequest over, through a piece of javascript code.
function sendPost(url, postdata, callback) {
    xmlHttp = GetXmlHttpObject();
    if (xmlHttp == null) {
        alert("Browser does not support HTTP Request");
        return;
    }
    xmlHttp.onreadystatechange = callback;
    xmlHttp.open("POST", url, true);
    xmlHttp.send(postdata);
}
Where the URL is the socket I have just created.
So you want to do an AJAX request from your client code to the server, receive a response and process it in some way? You will not need sockets. Instead, use Tapestry's built-in AJAX functionality.
If you're loading additional content inside your page via Javascript, chances are you will not need to write any code at all. Be sure you have read the AJAX section from the Tapestry docs, and you understand what a Zone is and how it works.
Here's a basic example. Template:
<div id="myZone" t:type="Zone" t:id="myZone">
... [Initial content, if any] ...
</div>
<a t:type="ActionLink" t:id="updateContent" t:zone="myZone">Update</a>
And class:
@Inject
private Zone myZone;

@Inject
private Request request;

@OnEvent(component = "updateContent")
Object updateContent() {
    // ... [your code] ...
    if (this.request.isXHR()) {
        return this.myZone.getBody();
    } else {
        return this;
    }
}
Tapestry will do everything else, like registering the proper event listener on the link and inserting the updated content in the proper place. The if (this.request.isXHR()) makes sure your page will degrade gracefully for clients without JavaScript enabled.
If you'd like to do something else entirely, like returning a JSON object and processing it with your own JavaScript code, you can return any of these JSON classes from your event handler.
Also, if you want to write your own client-side code, be sure to use the built-in, cross-browser AJAX functionality of Prototype, which ships with Tapestry.
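As a rough illustration of that JSON option (the event name "loadData" and the fields below are made up, and JSONObject here is org.apache.tapestry5.json.JSONObject):
@OnEvent("loadData")
JSONObject onLoadData() {
    // build a JSON payload that your own JavaScript callback can consume
    JSONObject result = new JSONObject();
    result.put("status", "ok");
    result.put("count", 42);
    return result;
}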
Edit based on comment:
You won't be able to access a different server (host + port) through AJAX because of the same origin policy. You could, however, proxy the call through your Tapestry app. I've modified my code to illustrate this (assuming the thing listening on port 2112 is an HTTP server, otherwise change as needed):
@OnEvent(component = "updateContent")
Object updateContent() throws IOException {
    final URL url = new URL("http://localhost:2112");
    final HttpURLConnection con = (HttpURLConnection) url.openConnection();
    final String content;
    InputStream input = null;
    try {
        input = con.getInputStream();
        content = IOUtils.toString(input);
    } finally {
        IOUtils.closeQuietly(input);
    }
    return new StreamResponse() {
        @Override
        public String getContentType() {
            return "text/javascript";
        }

        @Override
        public InputStream getStream() throws IOException {
            return new ByteArrayInputStream(content.getBytes("UTF-8"));
        }

        @Override
        public void prepareResponse(Response response) {
            response.setHeader("Expires", "0");
            response.setHeader("Cache-Control",
                    "must-revalidate, post-check=0, pre-check=0");
        }
    };
}
Use of sockets is independent of your webapp view framework; you would do it pretty much the same way regardless of how the view is coded. The only thing that changes, once you've implemented your socket code, is how it's invoked.
I used Tapestry with Spring, so injecting services into the Spring context is the most natural approach.
The services subpackage in Tapestry is mostly for creating implementations that plug into Tapestry, like encoders, property conduits, and binding factories. So whether you use this or not depends upon what you are trying to achieve.
For example, if you are creating a component that reads from a socket, and renders the data read in, then you can create that as a regular component, in the components subpackage.
The XmlHttpRequest will just do a web server request which can be handled perfectly well by whatever you use to run Tapestry in. There is no need to open sockets and stuff.
Just define a route in your web application to accept the XmlHttpRequest and have a handler, servlet, controller, ... collect the necessary data, transform it to XML and send it to the JavaScript component.
I found an example here