Webservices: Request-Response Mapper - java

This is more of a design-pattern question.
My client application (it is implemented to run both as part of a scheduled batch job and as a message-processing application) makes SOAP over HTTP calls to a third-party engine to get some membership data. Since the underlying binding is done through JAX-RPC, my SOAP response is eventually converted / copied into the generated client stubs.
Now, my question: is it better to maintain my own domain objects and copy the data into them from the response objects of the service, or is it OK to use the stub objects directly for further processing?
Any suggestions?

This question is going to be somewhat subjective. I prefer to always translate to my own domain objects in case I ever need to swap out the web service implementation. If they ever change over to RESTful web services, or simply change their WSDL on a version upgrade, you may be out of luck if you are using the stub classes throughout your application.
There are cons to this practice though:
You will need to maintain a similar set of classes
If the service never changes, you won't see any returns on your effort
You can always change this later if it proves useful
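For illustration, here is a minimal sketch of that translation step, assuming hypothetical stub and domain types (MembershipResponse, Membership); the field names are made up:

// Hypothetical mapper: copies data out of the generated JAX-RPC stub
// (MembershipResponse) into an application-owned domain object (Membership).
public final class MembershipMapper {

    private MembershipMapper() {
    }

    public static Membership toDomain(MembershipResponse stub) {
        Membership membership = new Membership();
        membership.setMemberId(stub.getMemberId());
        membership.setStatus(stub.getStatus());
        membership.setEffectiveDate(stub.getEffectiveDate());
        // The rest of the application only ever sees Membership, so a change
        // to the generated stubs stays contained in this one class.
        return membership;
    }
}

If the provider's WSDL changes, only this mapper (plus the regenerated stubs) needs to be touched.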

Related

I need to build a REST client in Java that makes 10k REST API calls per execution of the application, with the best possible performance. Any useful links would be helpful.

Hi, I am using Spring 4's AsyncRestTemplate to make 10k REST API calls to a web service. I have a method that creates the request object and a method that calls the web service. I am using the ListenableFuture classes, and the two methods that create and call are enclosed in another method where the response is handled in the future. Any useful links for such a task would be greatly helpful.
First, set up your testing environment. Then benchmark what you have. Then adjust your code and compare (repeat as necessary).
Whatever you do, there is a cost associated with it. You need to be sure that your costs are measured and understood, every step of the way.
A simple Tomcat application might outperform a Spring application or be equivalent depending on what aspects of Spring's inversion of control are being leveraged. Using a Future might be fast or slow, depending on what it is being compared to. Using non-NIO might be faster or slower, depending on the implementation and the data being processed.
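As a concrete starting point for that benchmark, here is a minimal sketch of the pattern the question describes, using Spring 4's AsyncRestTemplate; the class name, URL and String payload type are placeholders:

import org.springframework.http.ResponseEntity;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;
import org.springframework.web.client.AsyncRestTemplate;

public class BulkRestClient {

    private final AsyncRestTemplate restTemplate = new AsyncRestTemplate();

    // Fires one asynchronous GET; the response is handled later in the callback.
    public void fetch(String id) {
        ListenableFuture<ResponseEntity<String>> future =
                restTemplate.getForEntity("http://example.com/api/items/{id}", String.class, id);

        future.addCallback(new ListenableFutureCallback<ResponseEntity<String>>() {
            @Override
            public void onSuccess(ResponseEntity<String> response) {
                // process the body, e.g. hand it to a collector
            }

            @Override
            public void onFailure(Throwable t) {
                // log, and decide whether to retry
            }
        });
    }
}

Measure this against a plain RestTemplate on a thread pool before assuming the async variant is faster for your workload.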

REST Spring services and jQuery web integration best practices

Regarding the best way to design a system using Spring MVC (REST services) and jQuery, I think the following approaches exist.
One WAR file in which you have the Spring services and the jQuery stuff. With this approach we have all the domain objects available to Spring MVC; we can create the initial JSP pages and then refresh some elements using jQuery calls to our services.
Two WAR files, one containing the Spring services and the other containing the Spring MVC stuff and jQuery. In this case pages can still be created as JSPs and elements refreshed with jQuery calls to our services, but to make that possible we need a common library of domain objects usable from the second WAR, and we also have to use RestTemplate internally in some controllers that need to be created (it sounds like duplicated code).
One WAR file running the REST services and another "package" without any Java or Spring stuff, only jQuery. This means all calls and information retrieval must be done with jQuery; initial JSP page creation is not possible with this option and all content is obtained via the REST services (no need for internal controllers that call the services from Java).
Thinking about it, I realized that the first and second options have the following disadvantages.
Having services and web stuff in the same WAR file sounds like a bad idea from a SOA point of view; moving that WAR around means moving unneeded jQuery and web stuff with it.
Having JSP and jQuery stuff mixed doesn't sound like a good idea either, but I think it is a common practice (I wonder why?). With this approach I think we need to create some controllers in the second WAR to initially build the web pages, use RestTemplate to obtain the initial information, and then update or refresh using jQuery calls. It feels like having a controller just to retrieve data from the services; why not go directly …
I just want to implement the third approach, but the question is: are there any disadvantages that I'm not seeing, or any advice I should know about before using that approach? Also, any suggestions on how to handle this kind of system would be great to hear from Java and jQuery developers.
I agree with you that version 3 gives you the most flexibility and is what you would typically see in the design world.
Treat the REST side and the front end as entirely separate applications. If done correctly, you can have a very robust application capable of proper agility.
Version 1: Load the page in an initial controller call, and use jQuery to make subsequent service calls. All code exists within one package.
The disadvantage is tight coupling. You are now restricted to the language of your API, and no longer providing a service-based approach to your data and services.
I have seen this version applied mostly when the application developer cares more about async front-end calls than about a SOA-based design.
Version 2: Have a WAR containing the Spring services, and a WAR for the JS.
The issues with this method can be overcome by using a jar instead of another server application. Though this approach is commonly used, the drawback is still the reliance on external packaging.
Using a jar that contains all the code to hit databases and create domain objects, separate from the code that the controllers use to serialize and respond to web requests, creates a very clean way to manage your API. However, it introduces complexity and an extra component that can be avoided using version 3. It also gives the same odd behavior you see in version 1.
I have seen this approach taken by teams developing pure API applications. I have not seen it done on teams that also require a front-end component; version one or three has been used in those cases.
Version 3: Create an application that deals with just the front-end responsibility, and create an application that handles the server-side responsibility.
In both version 2 and version 3, separate your service calls from your HTTP calls; keeping them distinct allows modularity.
For instance, we need to respond to HTTP requests:
@Controller
class MyController {

    @Autowired
    private MyService service;

    @GET
    public String getData(String dataId) {
        return service.getData(dataId);
    }
}
and we need to respond to ActiveMQ requests:
Message m = queueReceiver.receive();
if (m instanceof DataRequest) {
    DataRequest request = (DataRequest) m;
    queueSender.send(service.getData(request.getDataId())); // same service call
} else {
    // Handle error
}
Also, it gives you the ability to manage what you need to handle on the HTTP side separately from your service side:
@GET
public String getData(HttpRequest request, String dataId) {
    if (!this.handleAuth(request)) {
        throw new ForbiddenException(); // i.e. respond with HTTP 403
    }
    try {
        return service.getData(dataId);
    } catch (Exception e) {
        throw new WrappedErrorInProperHttpException(e);
    }
}
This allows your service layer to handle tasks meaningful to just those services without needing to deal with all the HTTP crap, and lets you handle all the HTTP crap separately from your service layer.

JNDI in .NET / PHP / iPhone

Let me explain the complete situation I am currently stuck in.
We are developing a very complex application in GWT and Hibernate, and we are trying to host the client and server code on different servers because of the client's requirements. I am able to achieve this using JNDI.
Here comes the tricky part: the client also needs that application on other platforms, say an iPhone or .NET version of our application; the database would be the same and the methods would be the same. We don't want to write the server code again because it is going to be the same for all of them.
I have tried putting a web services wrapper on top of my server code, but because of the complexity of the architecture and the class dependencies I have not been able to do so. For example, consider the code below.
class Document {
    private List<User> users;
    private List<AccessLevels> accessLevels;
}
The Document class has a list of users, a list of access levels, and many more lists of other classes, and those other classes have more lists of their own. Some important server methods take a class (Document or any other) as input and return some other class as output. And we shouldn't use such a complex architecture in web services.
So, I need to stick with JNDI. Now, I don't know how I can make a JNDI call to it from any other application.
Please suggest ways to overcome this situation. I am open to technology changes, meaning JNDI / web services or any other technology that serves me well.
Thank you.
I have never seen JNDI used as a mechanism for request/response inter-process communication. I don't believe that this will be a productive line of attack.
You believe that Web Services are inappropriate when the payloads are complex. I disagree; I have seen many successful projects using quite large payloads, with many nested classes. Trivial example: Customers with Orders with Order Lines with Products with ... and so on.
It is clearly desirable to keep payload sizes small; there are serialization and network costs, and big objects will be more expensive. But it is by far preferable to have one big request than lots of little ones. A "busy" interface will not perform well across a network.
I suspect that the one problem you may have is that certain of the server-side classes are not pure data; they refer to classes that only make sense on the server, and you don't want those classes in your client.
In this case you need to build an "adapter" layer. This is dull work, but no matter what inter-process communication technique you use, you will need to do it. You need what I refer to as Data Transfer Objects (DTOs): these represent payloads that are understood by the client, using only classes reasonable for the client, and which the server can consume and create.
Let's suppose that you use technology XXX (JNDI, Web Service, direct socket call, JMS):
Client --- sends Document DTO ---XXX---> Adapter transforms DTO to server's Document
and similarly in reverse. My claim is that no matter what XXX is chosen you have the same problem: you need the client to work with "cut-down" objects that reveal none of the server's implementation details.
The adapter has responsibility for creating and understanding DTOs.
I find that working with RESTful web services using JAX-RS is very easy; once you have a set of DTOs, it's the work of minutes to create the web services.
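To make the adapter idea concrete, here is a minimal sketch, assuming the Document and User classes from the question expose simple getters and setters (getId(), setId(), getUsers(), getName() are assumed); the DTO and adapter names are illustrative only:

import java.util.ArrayList;
import java.util.List;

// Hypothetical DTO: exposes only the data the client needs, with no server-only types.
class DocumentDto {
    private String id;
    private List<String> userNames = new ArrayList<String>();

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public List<String> getUserNames() { return userNames; }
}

// Adapter: the only place that knows both the server's Document and the DTO.
class DocumentAdapter {

    public DocumentDto toDto(Document document) {
        DocumentDto dto = new DocumentDto();
        dto.setId(document.getId());
        for (User user : document.getUsers()) {
            // flatten server objects into simple values the client understands
            dto.getUserNames().add(user.getName());
        }
        return dto;
    }

    public Document fromDto(DocumentDto dto) {
        Document document = new Document();
        document.setId(dto.getId());
        // look up the full User / AccessLevels objects on the server side as needed
        return document;
    }
}

Whether the transport ends up being JNDI, JMS or a web service, only this adapter changes when the server's internals do.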

How to reduce memory size of Apache CXF client stub objects?

My web service client application uses Apache CXF to generate client stubs for talking to several web services. The generated CXF web service stub objects have quite a large memory footprint (10 - 15 web service objects take more than 64 MB of memory). Is there any way to reduce the CXF object footprint?
We had similar problems with Axis. The problem we had was that we wanted to make many concurrent calls to the web service, and the Axis clients generated from the WSDL caused each client to use a lot of memory. The clients aren't thread-safe, so we had to create one client per request.
We had two choices. First, we could prune the generated code - but that was not nice for maintenance reasons.
Second, we simply pruned the WSDL to remove the parts that were not relevant to us, and regenerated slimmed-down clients. That way, if we called one service method, its client wouldn't contain bulk for unrelated methods that the thread wouldn't be using.
It worked quite well, but it is still a maintenance nightmare, because any time the WSDL gets updated (e.g. our partner releases a new version of their web service), we need to spend time creating cut-down WSDLs. The ideal solution, I guess, would be to get our partner to recognise our problems and take ownership of the cut-down WSDLs.
We took a different approach to the CXF client. I haven't looked into its memory footprint, which isn't an issue in our context, but it's certainly a simpler method of development than creating stubs. It looks something like this:
import javax.xml.ws.BindingProvider;
import org.apache.cxf.aegis.databinding.AegisDatabinding;
import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;
import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;

// Build a dynamic proxy for the service interface instead of generated stubs.
JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
HTTPClientPolicy httpClientPolicy = new HTTPClientPolicy(); // apply to the HTTP conduit if timeouts etc. are needed
factory.setAddress(endpoint);
factory.getServiceFactory().setDataBinding(new AegisDatabinding());
factory.setServiceClass(myInterface.class);
Object client = factory.create();
// Keep the HTTP session between calls, if the service relies on it.
((BindingProvider) client).getRequestContext().put(BindingProvider.SESSION_MAINTAIN_PROPERTY, true);
myInterface stub = (myInterface) client;
We just do that (of course we've built some utility classes to simplify things further) for any WS we want to hook up to at runtime (provided, of course, we have its Java interface). Our goal was to make the whole WS thing as transparent to the programmers as possible. We really have no interest in WSDLs and XSDs per se. We suspect that we're not alone.
If your SOAP needs are very basic, you could look into kSOAP2 which is really memory efficient. It is designed to run fine in a J2ME phone application.

SOAP and Spring

I've just finished reading about SOAP via Spring-WS in "Spring in Action", 2nd edition, by Craig Walls from Manning Publications Co. They write about contract-first development, much like the Spring docs: writing sample message and method XML, transforming that into an XSD and then into a WSDL, while wiring up the marshalling and the service path in Spring.
I must admit, I'm not convinced. Why is this a better path than, let's say, making a service interface and generating my service based on that interface? That's quite close to defining my REST @Controllers in Spring 3. Do I have the option of going down a path like that when making SOAP web services with Spring?
Also: I'd like to duplicate an already existing web service. I have its WSDL, and my service can be deployed in its place. Is this recommended at all? If so, what's the recommended approach?
Cheers
Nik
I think you must have your wires crossed.
Contract first means defining a WSDL, and then creating Java code to support this WSDL.
Contract last means creating your Java code, and generating a WSDL later.
The danger with contract last is if your WSDL is automatically generated from your Java code, and you refactor your Java code, this causes your WSDL to change.
Spring-WS only supports contract-first development:
2.3.1. Fragility
As mentioned earlier, the contract-last development style results in your web service contract (WSDL and your XSD) being generated from your Java contract (usually an interface). If you are using this approach, you will have no guarantee that the contract stays constant over time. Each time you change your Java contract and redeploy it, there might be subsequent changes to the web service contract.
Additionally, not all SOAP stacks generate the same web service contract from a Java contract. This means changing your current SOAP stack for a different one (for whatever reason) might also change your web service contract.
When a web service contract changes, users of the contract will have to be instructed to obtain the new contract and potentially change their code to accommodate for any changes in the contract.
In order for a contract to be useful, it must remain constant for as long as possible. If a contract changes, you will have to contact all of the users of your service, and instruct them to get the new version of the contract.
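For a feel of what the contract-first code ends up looking like, here is a minimal sketch of a Spring-WS endpoint; the namespace and the GetMembershipRequest / GetMembershipResponse classes are assumed to be generated from your own XSD, so the names are illustrative only:

import org.springframework.ws.server.endpoint.annotation.Endpoint;
import org.springframework.ws.server.endpoint.annotation.PayloadRoot;
import org.springframework.ws.server.endpoint.annotation.RequestPayload;
import org.springframework.ws.server.endpoint.annotation.ResponsePayload;

@Endpoint
public class MembershipEndpoint {

    private static final String NAMESPACE = "http://example.com/membership";

    // Invoked when the payload root element matches the schema-defined request.
    @PayloadRoot(namespace = NAMESPACE, localPart = "GetMembershipRequest")
    @ResponsePayload
    public GetMembershipResponse getMembership(@RequestPayload GetMembershipRequest request) {
        GetMembershipResponse response = new GetMembershipResponse();
        // populate the response from your domain layer; the XSD remains the contract
        return response;
    }
}

The Java code here is written to satisfy the schema, not the other way around, which is the point of the contract-first approach.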
Toolkit's point about Java interfaces being more brittle is correct, but I think there's more.
Just like there's an object-relational impedance mismatch, there's also an object-XML mismatch. The Spring web service docs do a fine job of explaining how collections and the rest can make generating an XML document from a Java or .NET class problematic.
If you take the Spring approach and start with a schema you'll be better off. It'll be more stable, and it'll allow "duck typing". Clients can ignore elements that they don't need, so you can change the schema by adding new elements without affecting them.
